The MTG produces a wide variety of technologies that contribute to the music technology research community. Some of these technologies are also aimed at industrial applications.
Explore the MTG demos to discover some of our technologies.
Need more information? Contact us
Sound and music analysis and classification
We work on different levels of analysis, from low- and mid-level acoustic and musical features to semantic categories such as moods, genres, and instrumentation, applying a wide range of signal processing and machine learning methodologies.
Essentia: Music audio descriptors in the browser
Essentia: TensorFlow models
Examples of inference with the pre-trained TensorFlow models for music auto-tagging and classification tasks.
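A typical auto-tagging model produces frame-wise tag activations that must be aggregated into clip-level tags. The following sketch illustrates that post-processing step only; the tag vocabulary, activation shape, and pooling strategy here are illustrative assumptions, not Essentia's actual model outputs or API.

```python
# Hypothetical post-processing for a music auto-tagging model:
# mean-pool frame-wise tag activations over time, then rank tags.
# The tags and activation values below are toy data, not real output.

def top_tags(activations, tags, k=3):
    """Average each tag's activation across all frames and return
    the k highest-scoring (tag, score) pairs."""
    n_frames = len(activations)
    means = [sum(frame[i] for frame in activations) / n_frames
             for i in range(len(tags))]
    ranked = sorted(zip(tags, means), key=lambda t: t[1], reverse=True)
    return ranked[:k]

# Toy activations: 3 analysis frames x 4 tags.
tags = ["rock", "jazz", "ambient", "electronic"]
activations = [
    [0.9, 0.1, 0.2, 0.4],
    [0.8, 0.2, 0.1, 0.5],
    [0.7, 0.1, 0.3, 0.6],
]
print(top_tags(activations, tags, k=2))
```

Mean pooling is a common baseline for turning frame-level predictions into a clip-level result; other aggregations (max, median) trade robustness against sensitivity to short events.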
The AcousticBrainz project aims to crowd-source acoustic information for all music in the world and to make it available to the public. This information describes the acoustic characteristics of music and includes low-level spectral features as well as higher-level descriptors for genres, moods, keys, scales, and much more.
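AcousticBrainz exposes its descriptors through a public web API, keyed by MusicBrainz recording IDs. The sketch below builds request URLs for it; the endpoint paths follow the public API documentation as I recall it, so treat them as assumptions and check the current docs before relying on them.

```python
# Sketch of addressing the AcousticBrainz web API (assumed layout).
# Descriptors are fetched per recording, identified by its
# MusicBrainz recording ID (MBID).

API_ROOT = "https://acousticbrainz.org/api/v1"

def descriptor_url(mbid, level="low-level"):
    """Build the URL for a recording's descriptor document.
    level selects the low-level (spectral) or high-level
    (genres, moods, ...) data."""
    if level not in ("low-level", "high-level"):
        raise ValueError("level must be 'low-level' or 'high-level'")
    return f"{API_ROOT}/{mbid}/{level}"

# Hypothetical placeholder MBID, for illustration only:
url = descriptor_url("96685213-a25c-4678-9a13-abd9ec81cf35")
print(url)
# The document could then be fetched with, e.g.:
#   import urllib.request, json
#   data = json.load(urllib.request.urlopen(url))
```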
Technologies applied to music education
Based on signal processing, machine learning, and motion capture techniques, we develop technologies to understand and enhance the music learning process. We facilitate the learning and assessment of most aspects of music performance: from sound production quality and gestural control to intonation, rhythm, timbre, and expression.
Music Critic is a technology for evaluating musical exercises sung or played by students, and it can be easily integrated into online applications or education platforms. Music Critic supports different types of music exercises, such as the evaluation of guitar performances of a chromatic scale:
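One ingredient of assessing a chromatic-scale exercise is checking intonation against equal-tempered reference pitches. The toy sketch below illustrates that idea only; it is not Music Critic's actual algorithm, and the tolerance threshold and pitch values are assumptions.

```python
import math

# Toy intonation check for a chromatic-scale exercise.
# NOT the real Music Critic algorithm; thresholds are illustrative.

A4 = 440.0

def cents(f, ref):
    """Deviation of frequency f from reference ref, in cents
    (100 cents = one equal-tempered semitone)."""
    return 1200.0 * math.log2(f / ref)

def chromatic_scale(start_hz, n=13):
    """Equal-tempered chromatic scale frequencies from start_hz."""
    return [start_hz * 2 ** (i / 12) for i in range(n)]

def intonation_score(detected, reference, tol=50.0):
    """Fraction of notes whose detected pitch falls within
    tol cents of its reference pitch."""
    in_tune = sum(1 for d, r in zip(detected, reference)
                  if abs(cents(d, r)) <= tol)
    return in_tune / len(reference)

ref = chromatic_scale(A4, n=4)         # A4, A#4, B4, C5
played = [441.0, 470.0, 493.9, 560.0]  # last note is badly sharp
print(round(intonation_score(played, ref), 2))
```

A real assessment system would first need pitch tracking and note segmentation of the recorded audio, and would typically also score rhythm, timing, and sound quality.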
Interactive tools that help the general public understand different musical cultures. One example is the Rāg visualizer, which guides the listener's attention to the characteristic melodic phrases in real performances, shows how they are rendered each time they are performed, and allows browsing a particular recording in terms of these characteristic phrases.
Interaction and exploration of large sound and music collections
A common goal in many of our research projects is the creation of large sound and music collections, released as open datasets that can be reused by researchers and practitioners. Some examples are Freesound, a collaborative database of sounds; AcousticBrainz, an open dataset containing music analysis data for millions of tracks; and Dunya, a set of music corpora created with the aim of studying particular music traditions.
Freesound API: Freesound explorer
The Freesound API allows you to browse, search, and retrieve information about Freesound content. You can find sounds similar to a given target (based on content analysis), retrieve automatically extracted features from audio files, and perform advanced queries combining content-analysis features with other metadata. Freesound Explorer is an example of the API's potential: a visual interface for exploring Freesound in a two-dimensional space and creating music.
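A text search is the simplest entry point to the Freesound API. The sketch below builds such a request; the endpoint and parameter names follow the public API v2 documentation as I recall it, and the token is a placeholder for your own API key.

```python
from urllib.parse import urlencode

# Sketch of building a Freesound API v2 text-search request.
# Endpoint/parameter names assumed from the public docs;
# "YOUR_API_KEY" is a placeholder, not a real credential.

API_ROOT = "https://freesound.org/apiv2"

def text_search_url(query, token, fields=None, page_size=15):
    """URL for a text search. fields optionally restricts which
    metadata fields each result includes (comma-separated)."""
    params = {"query": query, "token": token, "page_size": page_size}
    if fields:
        params["fields"] = ",".join(fields)
    return f"{API_ROOT}/search/text/?{urlencode(params)}"

url = text_search_url("rain", "YOUR_API_KEY", fields=["id", "name"])
print(url)
```

The same pattern extends to the content-based endpoints mentioned above, where query parameters describe target audio features instead of text.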
Freesound API: SOURCE
Dunya comprises music corpora and related software tools created with the aim of studying particular music traditions: Hindustani (North India), Carnatic (South India), Turkish-Makam (Turkey), Arab-Andalusian (Maghreb), and Beijing Opera (China). They include audio recordings plus complementary information that describes the recordings. Each corpus has specific characteristics, and the developed software tools allow processing the available information in order to study and explore the characteristics of each musical repertoire.
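Each Dunya corpus is accessible through a tradition-specific REST API. The sketch below shows one plausible way to address it; the host, the path layout, and the tradition slugs are all assumptions based on the public Dunya deployment, so verify them against the current documentation.

```python
# Sketch of addressing Dunya's per-tradition REST API.
# Host, paths, and tradition slugs are assumptions, not a
# confirmed specification.

API_ROOT = "https://dunya.compmusic.upf.edu/api"

# Assumed slugs for the traditions listed above.
TRADITIONS = {"hindustani", "carnatic", "makam", "andalusian", "jingju"}

def recording_list_url(tradition):
    """URL listing the recordings available for one tradition."""
    if tradition not in TRADITIONS:
        raise ValueError(f"unknown tradition: {tradition}")
    return f"{API_ROOT}/{tradition}/recording"

print(recording_list_url("carnatic"))
```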