At the MTG we combine scientific, technological and artistic methodologies to understand, model and generate sound and music signals. We aim to contribute to a number of strategic areas of significant social and economic impact:
Artistic creation: We develop tools that empower people's creativity.
Cultural preservation: We work to understand, access and preserve our world's music heritage.
Education: We develop technologies for enhancing musical practice, thus promoting music learning.
Health and wellbeing: We investigate the benefits of music as a regulator, inductor, companion, or enhancer in individual and social contexts and in regular daily activities.
Sustainable development: We analyze, describe, and monitor our sonic surroundings, aiming to contribute to the preservation and improvement of our environment.
The MTG is organized into four labs (Audio Signal Processing Lab, Music Information Research Lab, Music and Multimodal Interaction Lab and Music and Machine Learning Lab), each led by a faculty member. Our research results in publications, software and datasets, while also generating technology transfer and outreach initiatives.
Audio Signal Processing Lab
Head of the lab: Xavier Serra
The focus of the lab is to advance the understanding of sound and music signals by combining signal processing and machine learning methods. The lab works both on data-driven methodologies, in which the development and use of large data collections is a fundamental aspect, and on knowledge-driven approaches, in which domain knowledge of the problem to be addressed is needed. Combining these research approaches, we are able to tackle practical problems related to automatic sound and music description, music exploration and recommendation, and music education.
Music Information Research Lab
Head of the lab: Emilia Gómez
The lab works on topics such as sound and music description, music information retrieval, singing voice synthesis, audio source separation, and music and audio processing. The current challenges of the lab are to exploit the multimodal character of music to enhance its automatic processing, to reduce the semantic gap between automatic features and user descriptors through user-centered paradigms, to connect music description and music creation, and to incorporate advanced learning techniques for music processing.
Music and Multimodal Interaction Lab
Head of the lab: Sergi Jordà
The lab focuses on multimodal interactive technologies and their application to music creation. Its current research agenda combines techniques from fields such as Human Computer Interaction, Music Information Research, Machine Learning and Physiological Computing, and addresses the application of Artificial Intelligence and Deep Learning techniques to Computational Creativity, with a focus on music but also covering transversal application domains such as Virtual Reality, education and happy, healthy aging.
Music and Machine Learning Lab
Head of the lab: Rafael Ramírez
The lab is focused on the intersection of music technology, artificial intelligence, deep learning, and neuroscience, with applications to technology-enhanced learning of music instruments, computational modeling of expressive music performance, the design of accessible digital music instruments for people with motor disabilities, brain-computer music interfaces, and music as a therapeutic tool in autism, emotional disorders, palliative care, cerebral palsy and stroke rehabilitation.