Learning Experience of University Music Course Based on Emotional Computing
Abstract
Identification and analysis of piano performance music are essential to the study and enjoyment of music. The goal of this study is to use recurrent neural networks (RNNs) to build a multimedia identification and analysis method for piano performance music, since RNNs excel at capturing the temporal relationships and dynamics present in musical performances. The work involves assembling a large dataset of piano performance recordings spanning a variety of genres, performers, and playing techniques. The audio and video components of the performances are pre-processed to extract relevant features, and a Long Short-Term Memory (LSTM) architecture is used to model the sequential nature of the performances. The RNN is trained on the extracted features to learn the patterns and traits associated with different piano performances, and the similarity between performance representations is measured using Euclidean distance. The RNN-based system can be extended to tasks such as score following, expressive performance analysis, and stylistic variation generation. By aligning performance data with the corresponding musical scores, the system can provide insights into timing accuracy, dynamics, phrasing, and other expressive qualities of piano performance. The proposed RNN-LSTM method achieves approximately 99% accuracy, 97% precision, 98.9% recall, and an F1 score of 97.6%.
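The abstract does not give implementation details, so the following is only a minimal sketch of the described pipeline: an LSTM encoder that maps a sequence of pre-extracted per-frame audio features to a fixed-length performance embedding, with Euclidean distance between embeddings as the similarity measure. The framework (PyTorch), feature dimension, layer sizes, and class count are all illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of an LSTM-based piano performance encoder, assuming
# pre-extracted per-frame audio features (e.g. MFCCs). All dimensions
# and the classification head are illustrative assumptions.
import torch
import torch.nn as nn


class PianoPerformanceLSTM(nn.Module):
    """Maps a feature sequence to a fixed-length embedding plus a
    performance-class prediction (e.g. genre or playing technique)."""

    def __init__(self, n_features=40, hidden_size=128, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size,
                            num_layers=2, batch_first=True)
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features) sequence of per-frame features
        _, (h_n, _) = self.lstm(x)
        embedding = h_n[-1]              # final hidden state of the top layer
        return embedding, self.classifier(embedding)


model = PianoPerformanceLSTM()
perf_a = torch.randn(1, 300, 40)          # two dummy 300-frame performances
perf_b = torch.randn(1, 300, 40)
emb_a, logits_a = model(perf_a)
emb_b, _ = model(perf_b)

# Euclidean distance between embeddings as the performance similarity measure
distance = torch.dist(emb_a, emb_b, p=2)
print(f"Euclidean distance between performances: {distance.item():.3f}")
```

In such a setup the classification head would be trained on labelled performances, while the embeddings could additionally support the score-following and expressive-analysis tasks mentioned above by comparing a performance against reference renditions.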
Article Details
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.