Technologies for interactive music learning

Project funded by the Spanish Ministry of Economy and Competitiveness (Subprograma de Proyectos de Investigación Fundamental no Orientada, TIN2013-48152). The project started in January 2014 and ran for 36 months. It was led by the Universitat d'Alacant (PI: José Manuel Iñesta Quereda) and coordinated at the MTG by Rafael Ramírez.

Abstract: Attaining a high level of expertise in music requires a long learning trajectory and intensive practice. Learning to play music is mostly based on the master-apprentice model, in which the teacher mainly gives verbal feedback on the student's performance. In this model, modern technologies are rarely employed and almost never go beyond audio and video recording. In addition, the student's interaction and socialization are often restricted to short, punctual contact with the teacher followed by long periods of self-study, which makes music learning a lonely experience and results in high abandonment rates. As in other disciplines such as sport, where technology is commonly used to improve the training and performance of athletes, this project proposed to incorporate the latest technological advances into music training in order to define optimal pedagogical methods and tools, to facilitate learning, and to make music learning a more interactive and social process.

The main aim of the project was to study how we learn music performance from a pedagogical and scientific perspective and to create new assistive, multimodal, interactive, and socially aware systems complementary to traditional teaching. Music performance is not simply playing the right note at the right time; it also involves expressivity, gesture, and interaction with other musicians.

The project aimed to investigate and explore all of these aspects in order to produce methods and tools for music education based on innovative pedagogical paradigms, taking into account key factors such as expressivity, interactivity, gesture control, and cooperative work among participants. Through tightly coupled interaction between the participating partners, the project studied questions such as: What will music learning environments look like in 5-10 years' time? What impact will these new musical environments have on music learning as a whole?

The general objectives of the project were: (1) to design and implement new multimodal interaction paradigms for music learning and training based on state-of-the-art audio processing, music analysis, and pattern recognition techniques; (2) to evaluate the effectiveness of these new paradigms from a pedagogical point of view; (3) based on the evaluation results, to develop new multimodal interactive music learning prototypes for student-teacher, student-only, and collaborative learning scenarios; and (4) to create a publicly available reference database of music recordings with multimodal information for cooperative learning. The results of the project served as a basis for the development of next-generation music learning systems, improving current student-teacher interaction and student-only practice, and offering the potential to make music education accessible to a substantially wider public.

Keywords: Music technology, music learning, machine learning, pattern recognition, human-computer interaction, multimodality, pedagogy, individual learning, collective intelligence

Other projects related to TIMUL: the TELMI project


Publications:

Giraldo, S., & Ramírez, R. (2015). Performance to Score Sequence Matching for Automatic Ornament Detection in Jazz Music. International Conference on New Music Concepts.

Vamvakousis, Z., & Ramírez, R. (2015). Is an Auditory P300-Based Brain-Computer Musical Interface Feasible? CMMR 2015: International Workshop on BCMI.

Vamvakousis, Z., & Ramírez, R. (2015). EEG Signal Classification in a Brain-Computer Music Interface. 8th International Workshop on Machine Learning and Music, 28-30.

Vamvakousis, Z., & Ramírez, R. (2016). The EyeHarp: A Gaze-Controlled Digital Musical Instrument. Frontiers in Psychology, 7.

Bantula, H., Giraldo, S., & Ramírez, R. (2016). Jazz Ensemble Expressive Performance Modeling. ISMIR 2016.

Giraldo, S. (2016). Computational Modelling of Expressive Music Performance in Jazz Guitar: A Machine Learning Approach. Department of Information and Communication Technologies. 158 pp.

Dalmazzo, D. C., & Ramírez, R. (2017). Air Violin: A Machine Learning Approach to Fingering Gesture Recognition. MIE '17, November 13, 2017, Glasgow, UK. 4 pp.