At the MTG we combine scientific, technological and artistic methodologies to understand, model and generate sound and music signals. We aim to contribute to several strategic areas of significant social and economic impact:
Artistic creation: We develop tools to empower people's creativity.
Cultural preservation: We work on the understanding, access and preservation of our world's music heritage.
Education: We develop technologies to support and enhance music learning.
Health and wellbeing: We investigate how technologies can support the personal and social benefits of music.
Sustainable development: We develop methodologies to describe and preserve our sonic surroundings.
Audio Signal Processing Lab
Head of the lab: Xavier Serra
The lab aims to advance the understanding of sound and music signals by combining signal processing and machine learning methods. It works both on data-driven methodologies, in which the development and use of large data collections is fundamental, and on knowledge-driven approaches, which require domain knowledge of the problem to be addressed. By combining these research approaches, we tackle practical problems in automatic sound and music description, music exploration and recommendation, and music education.
Music and Multimodal Interaction Lab
Head of the lab: Sergi Jordà
The lab focuses on multimodal interactive technologies and their application to music creation. Its current research agenda combines techniques from fields such as Human-Computer Interaction, Music Information Research, Machine Learning and Physiological Computing, and addresses the application of Artificial Intelligence and Deep Learning techniques to Computational Creativity, with a focus on music but also covering transversal application domains such as Virtual Reality, education, and happy and healthy aging.
Music Information Research Lab
Head of the lab: Emilia Gómez
The lab works on topics such as sound and music description, music information retrieval, singing voice synthesis, audio source separation, and music and audio processing. Its current challenges are: to exploit the multimodal character of music to enhance its automatic processing, to reduce the semantic gap between automatic features and user descriptors through user-centered paradigms, to connect music description with music creation, and to incorporate advanced learning techniques for music processing.
Music and Machine Learning Lab
Head of the lab: Rafael Ramírez
The lab focuses on the intersection of music technology, artificial intelligence, deep learning and neuroscience, with applications to technology-enhanced learning of music instruments, computational modeling of expressive music performance, the design of accessible digital music instruments for people with motor disabilities, brain-computer music interfaces, and music as a therapeutic tool in autism, emotional disorders, palliative care, cerebral palsy and stroke rehabilitation.