We develop a large number of software tools and hosting infrastructures to support the research carried out at the Department. This section will detail the different tools available. In the meantime, you can browse the offer available in the UPF Knowledge Portal, the innovations created in the context of EU projects in the Innovation Radar, and the software sections of some of our research groups:

Artificial Intelligence

Nonlinear Time Series Analysis

Web Research

Music Technology

Interactive Technologies

Barcelona MedTech

Natural Language Processing

UbicaLab

Wireless Networking

Educational Technologies

GitHub


MSD-I: Million Song Dataset with Images for Multimodal Genre Classification

The Million Song Dataset (https://labrosa.ee.columbia.edu/millionsong/) is a collection of metadata and precomputed audio features for one million songs. Along with this dataset, a set of annotations covering 15 top-level genres, with a single label per song, was released. In our work, we combine the CD2c version of this genre dataset (http://www.tagtraum.com/msd_genre_datasets.html) with a collection of album cover images.

The final dataset contains 30,713 tracks from the MSD and their related album cover images, each annotated with a unique genre label among 15 classes. Based on an initial analysis of the images, we identified that this set of tracks is associated with 16,753 albums, yielding an average of 1.8 songs per album.

We randomly divide the dataset into three parts: 70% for training, 15% for validation, and 15% for testing, with no artist or album overlap across these sets. This constraint is crucial to avoid overfitting, as the classifier might otherwise learn to predict the artist instead of the genre.
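An artist-disjoint split like the one described above can be produced by grouping tracks by artist and assigning whole artists to each partition until the target fractions are reached. The sketch below is a minimal illustration of this idea, not the exact procedure used to build the released splits; the track and artist identifiers are hypothetical.

```python
import random

def artist_conditional_split(tracks, train=0.70, val=0.15, seed=42):
    """Split a list of (track_id, artist_id) pairs into train/val/test
    so that no artist appears in more than one partition."""
    by_artist = {}
    for track_id, artist_id in tracks:
        by_artist.setdefault(artist_id, []).append(track_id)
    artists = list(by_artist)
    random.Random(seed).shuffle(artists)

    n = len(tracks)
    splits = {"train": [], "val": [], "test": []}
    for artist in artists:
        # Fill train up to ~70% of tracks, then val up to ~15%; rest go to test.
        if len(splits["train"]) < train * n:
            target = "train"
        elif len(splits["val"]) < val * n:
            target = "val"
        else:
            target = "test"
        splits[target].extend(by_artist[artist])
    return splits

# Toy example: 20 tracks by 10 artists, two tracks per artist (made-up IDs).
tracks = [(f"t{i}", f"a{i // 2}") for i in range(20)]
splits = artist_conditional_split(tracks)
# Each artist's tracks land in exactly one of train/val/test.
```

Because whole artists are assigned at once, the resulting fractions only approximate 70/15/15; the guarantee that matters for evaluation is the disjointness of artists across partitions.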

DOI: 10.5281/zenodo.1240485

Content:

MSD-I dataset (mapping, metadata, annotations and links to images)
Data splits and feature vectors for TISMIR single-label classification experiments

These data can be used together with the Tartarus deep learning Python module (https://github.com/sergiooramas/tartarus).

Scientific References:

Please cite the following paper if you use the MSD-I dataset or the Tartarus software:

Oramas, S., Barbieri, F., Nieto, O., and Serra, X. (2018). Multimodal Deep Learning for Music Genre Classification. Transactions of the International Society for Music Information Retrieval, V(1).