Multimodal Metric Learning For Tag-Based Music Retrieval

  • Authors
  • Won M, Oramas S, Nieto O, Gouyon F, Serra X
  • UPF authors
  • Serra, Xavier
  • Authors of the book
  • Androutsos, Dimitri; Plataniotis, Kostas; Zhang, Xiao-Ping
  • Book title
  • Proceedings of the 46th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • Publication year
  • 2021
  • Pages
  • 591-595
  • ISBN
  • 978-1-7281-7605-5
  • Abstract
  • Tag-based music retrieval is crucial to browse large-scale music libraries efficiently. Hence, automatic music tagging has been actively explored, mostly as a classification task, which has an inherent limitation: a fixed vocabulary. On the other hand, metric learning enables flexible vocabularies by using pretrained word embeddings as side information. Also, metric learning has proven its suitability for cross-modal retrieval tasks in other domains (e.g., text-to-image) by jointly learning a multimodal embedding space. In this paper, we investigate three ideas to successfully introduce multimodal metric learning for tag-based music retrieval: elaborate triplet sampling, acoustic and cultural music information, and domain-specific word embeddings. Our experimental results show that the proposed ideas enhance the retrieval system quantitatively and qualitatively. Furthermore, we release the MSD500: a subset of the Million Song Dataset (MSD) containing 500 cleaned tags, 7 manually annotated tag categories, and user taste profiles.
  • Complete citation
  • Won M, Oramas S, Nieto O, Gouyon F, Serra X. Multimodal Metric Learning For Tag-Based Music Retrieval. In: Androutsos, Dimitri; Plataniotis, Kostas; Zhang, Xiao-Ping. Proceedings of the 46th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 1 ed. 2021. p. 591-595.
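As a rough illustration of the setup the abstract describes, the sketch below maps a pretrained tag word embedding and pooled audio features into a shared space and trains both projections with a standard triplet loss, where the anchor is a tag, the positive is audio from a track carrying that tag, and the negative is audio from a track without it. This is not the authors' actual model: all layer sizes, feature dimensionalities, and the random placeholder inputs are assumptions for illustration, and the paper's contribution of elaborate triplet sampling is only hinted at in the comments.

```python
# Minimal sketch (assumed architecture, not the paper's exact model) of
# multimodal metric learning for tag-based retrieval with a triplet loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 128     # dimensionality of the joint embedding space (assumed)
WORD_DIM = 300    # pretrained tag word embedding size, e.g. word2vec (assumed)
AUDIO_DIM = 1024  # pooled audio features from some backbone (assumed)

class TagBranch(nn.Module):
    """Projects a pretrained tag word embedding into the joint space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(WORD_DIM, EMB_DIM)

    def forward(self, w):
        return F.normalize(self.proj(w), dim=-1)

class AudioBranch(nn.Module):
    """Projects pooled audio features into the joint space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(AUDIO_DIM, 512), nn.ReLU(), nn.Linear(512, EMB_DIM)
        )

    def forward(self, a):
        return F.normalize(self.proj(a), dim=-1)

tag_net, audio_net = TagBranch(), AudioBranch()
triplet = nn.TripletMarginLoss(margin=0.2)

# One training step on a toy batch. In practice the negatives would come
# from a deliberate sampling strategy (the paper's "elaborate triplet
# sampling"); here they are random placeholders.
tag_vec = torch.randn(8, WORD_DIM)     # placeholder tag word embeddings
pos_audio = torch.randn(8, AUDIO_DIM)  # audio of tracks carrying the tag
neg_audio = torch.randn(8, AUDIO_DIM)  # audio of tracks without the tag

loss = triplet(tag_net(tag_vec), audio_net(pos_audio), audio_net(neg_audio))
loss.backward()
print(float(loss))
```

At retrieval time, under the same assumptions, a free-form query word would be embedded through the tag branch and matched against precomputed audio embeddings by cosine similarity, which is what lets the vocabulary stay flexible rather than fixed to the classifier's label set.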