musicnn: pre-trained convolutional neural networks for music audio tagging

  • Authors
  • Pons J, Serra X
  • UPF authors
  • SERRA CASALS, FRANCESC XAVIER; PONS PUIG, JORDI;
  • Authors of the book
  • -
  • Book title
  • 20th International Society for Music Information Retrieval Conference
  • Publisher
  • Society for Music Information Retrieval
  • Publication year
  • 2019
  • Pages
  • -
  • ISBN
  • -
  • Abstract
  • Pronounced as "musician", the musicnn library contains a set of pre-trained musically motivated convolutional neural networks for music audio tagging: this https URL. This repository also includes some pre-trained vgg-like baselines. These models can be used as out-of-the-box music audio taggers, as music feature extractors, or as pre-trained models for transfer learning. We also provide the code to train the aforementioned models: this https URL. This framework also allows implementing novel models. For example, a musically motivated convolutional neural network with an attention-based output layer (instead of the temporal pooling layer) can achieve state-of-the-art results for music audio tagging: 90.77 ROC-AUC / 38.61 PR-AUC on the MagnaTagATune dataset, and 88.81 ROC-AUC / 31.51 PR-AUC on the Million Song Dataset. (Usage and evaluation sketches follow the citation below.)
  • Complete citation
  • Pons J, Serra X. musicnn: pre-trained convolutional neural networks for music audio tagging. In: 20th International Society for Music Information Retrieval Conference. Society for Music Information Retrieval; 2019.
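
To illustrate the out-of-the-box tagging and feature-extraction use cases mentioned in the abstract, the following is a minimal Python sketch. It assumes the pip-installable musicnn package and follows the function names shown in its README (top_tags, extractor); exact signatures and return values may differ across versions, and the audio path is hypothetical.

    from musicnn.tagger import top_tags
    from musicnn.extractor import extractor

    audio_file = 'song.mp3'  # hypothetical path to a local audio file

    # Out-of-the-box tagging: print the top-N tags predicted by the
    # MagnaTagATune-trained musicnn model.
    top_tags(audio_file, model='MTT_musicnn', topN=5)

    # Feature extraction: the taggram (tag activations over time) plus
    # intermediate-layer features that can be reused for transfer learning.
    taggram, tag_names, features = extractor(audio_file,
                                             model='MTT_musicnn',
                                             extract_features=True)
    print(taggram.shape)    # (time frames, number of tags)
    print(features.keys())  # names of the extracted intermediate representations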
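
The ROC-AUC / PR-AUC figures quoted in the abstract are multi-label tagging metrics; the sketch below shows how such scores are commonly computed with scikit-learn on dummy data. This is a generic illustration under assumed array shapes, not the authors' evaluation code.

    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    rng = np.random.default_rng(0)
    n_clips, n_tags = 100, 50                            # hypothetical test-set size
    y_true = rng.integers(0, 2, size=(n_clips, n_tags))  # ground-truth tag annotations
    y_score = rng.random(size=(n_clips, n_tags))         # predicted tag probabilities

    # Macro-averaged ROC-AUC and PR-AUC (average precision) over all tags.
    roc_auc = roc_auc_score(y_true, y_score, average='macro')
    pr_auc = average_precision_score(y_true, y_score, average='macro')
    print(f'ROC-AUC: {roc_auc:.2f}  PR-AUC: {pr_auc:.2f}')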