Multimodal Co-Ranking of Music Sentiment Analysis with the Deep Learning Model

Ge Tian

Abstract

Deep learning techniques have emerged as powerful tools for analyzing music sentiment, offering a sophisticated understanding of the emotional content embedded within musical compositions. Analyzing sentiment in piano performance poses a particular challenge because of the multifaceted nature of musical expression and the complexity of piano dynamics. This paper introduces a novel approach, the Co-ranking Multimodal Fuzzy Cluster Deep Network (CRMFC-DL), designed specifically for analyzing music sentiment in piano performance. The proposed CRMFC-DL framework leverages complementary information from both the audio and symbolic representations of piano music. By integrating deep neural networks with fuzzy clustering, CRMFC-DL effectively captures the intricate relationships between different musical features and their corresponding sentiment labels, while the co-ranking mechanism facilitates the joint optimization of the multimodal feature representations, leading to enhanced model performance. Through extensive experimentation, CRMFC-DL achieves an average sentiment analysis accuracy of 87.5%, surpassing existing methods by 8.2%.
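The abstract describes fusing audio and symbolic feature representations and grouping them with fuzzy clustering. As a minimal illustrative sketch (not the authors' implementation), the snippet below concatenates two hypothetical modality embeddings and applies a basic fuzzy c-means, which assigns each clip soft membership degrees over sentiment clusters; the feature dimensions, cluster count, and random data are all assumptions for demonstration.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=4, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns soft memberships U (n_samples x n_clusters)
    and cluster centers. m > 1 controls the fuzziness of the partition."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial memberships, normalized so each row sums to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centers are membership-weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every sample to every center (small epsilon
        # avoids division by zero for samples sitting on a center).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # Standard FCM membership update.
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Hypothetical multimodal features: audio embeddings and symbolic
# (score-derived) embeddings for the same 200 clips, fused by concatenation.
rng = np.random.default_rng(1)
audio_feat = rng.normal(size=(200, 16))     # assumed audio embedding dim
symbolic_feat = rng.normal(size=(200, 8))   # assumed symbolic embedding dim
fused = np.concatenate([audio_feat, symbolic_feat], axis=1)

U, centers = fuzzy_c_means(fused, n_clusters=4)
```

Each row of `U` is a soft distribution over the four assumed sentiment clusters, which is the property that lets a downstream network learn from graded rather than hard sentiment assignments.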
