Neural Networks for Musical Creativity Generation in Instant Piano Performance


Xianji Liao

Abstract

This study investigates the efficacy of recurrent neural networks (RNNs) in generating musical creativity for instant piano performance. Leveraging the computational power of deep learning, it explores the intersection of artificial intelligence and artistic expression, aiming to push the boundaries of real-time musical improvisation. The experimental methodology involves training an RNN model on a diverse dataset of piano performances and evaluating its output with both objective and subjective metrics. Objective measures such as pitch accuracy, rhythm consistency, and harmonic progression assess the fidelity of the generated music to established musical conventions, while subjective evaluations capture human perceptions of creativity, expressiveness, and aesthetic appeal. Statistical analysis shows that the RNN achieves high pitch accuracy (92.5%), rhythm consistency (88.3%), and harmonic progression (85.7%), indicating its ability to capture the nuances of piano performance. Subjective evaluations are likewise strongly positive, with average ratings of 4.6 out of 5 for creativity, 4.8 for expressiveness, and 4.7 for aesthetic appeal. Significance testing shows that the RNN model outperforms baseline models (p < 0.05) on all metrics, while comparisons with human-generated piano performances reveal no statistically significant difference in perceived creativity, expressiveness, or aesthetic appeal, suggesting the model can produce performances on par with those of human musicians. These findings highlight the potential of RNNs to inspire new forms of artistic expression and collaboration in music, paving the way for future innovations in AI-driven musical creativity.
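
Illustrative note: the abstract does not specify the network architecture, data encoding, or training procedure. The sketch below is one minimal, hypothetical realization in Python/PyTorch, assuming MIDI-derived pitch tokens, a single-layer LSTM trained with teacher forcing, and temperature sampling for generation; the names PianoRNN and generate, the 128-token vocabulary, and all hyperparameters are assumptions for illustration, not details taken from the study.

import torch
import torch.nn as nn

VOCAB_SIZE = 128  # assumption: one token per MIDI pitch (0-127)

class PianoRNN(nn.Module):
    # Hypothetical stand-in for the study's RNN: next-pitch prediction.
    def __init__(self, vocab=VOCAB_SIZE, embed_dim=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, tokens, state=None):
        out, state = self.lstm(self.embed(tokens), state)
        return self.head(out), state

@torch.no_grad()
def generate(model, seed, steps=64, temperature=1.0):
    # Autoregressive sampling: feed the seed, then sample one pitch at a time.
    model.eval()
    logits, state = model(seed.unsqueeze(0))
    notes = []
    for _ in range(steps):
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1)
        notes.append(nxt.item())
        logits, state = model(nxt.view(1, 1), state)
    return notes

model = PianoRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One teacher-forced training step on placeholder data (real piano
# sequences would replace the random batch).
batch = torch.randint(0, VOCAB_SIZE, (8, 33))
inputs, targets = batch[:, :-1], batch[:, 1:]
logits, _ = model(inputs)
loss = loss_fn(logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1))
loss.backward()
optimizer.step()

print(f"loss={loss.item():.3f}")
print("sampled pitches:", generate(model, batch[0], steps=8))

Metric scores from such a model and a baseline could then be compared with a standard two-sample test (e.g., scipy.stats.ttest_ind), mirroring the p < 0.05 significance analysis reported in the abstract.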
