Unsupervised Contrastive Learning Of Sound Event Representations

  • Authors
  • Fonseca E, Ortego D, McGuinness K, O'Connor NE, Serra X
  • UPF authors
  • SERRA, XAVIER
  • Authors of the book
  • Androutsos, Dimitri; Plataniotis, Kostas; Zhang, Xiao-Ping
  • Book title
  • Proceedings of the 46th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • Publication year
  • 2021
  • Pages
  • 371-375
  • ISBN
  • 978-1-7281-7605-5
  • Abstract
  • Self-supervised representation learning can mitigate the limitations in recognition tasks with few manually labeled data but abundant unlabeled data, a common scenario in sound event research. In this work, we explore unsupervised contrastive learning as a way to learn sound event representations. To this end, we propose to use the pretext task of contrasting differently augmented views of sound events. The views are computed primarily via mixing of training examples with unrelated backgrounds, followed by other data augmentations. We analyze the main components of our method via ablation experiments. We evaluate the learned representations using linear evaluation, and in two in-domain downstream sound event classification tasks, namely, using limited manually labeled data, and using noisy labeled data. Our results suggest that unsupervised contrastive pre-training can mitigate the impact of data scarcity and increase robustness against noisy labels.
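  • The pretext task described in the abstract can be illustrated with a minimal sketch: two views of each sound event are formed by mixing it with two different, unrelated background clips, and a contrastive (NT-Xent-style) loss pulls the paired views together in embedding space. This is an illustrative toy in NumPy, not the paper's implementation; `mix_weight`, the flat feature vectors, and the temperature value are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_views(event, backgrounds, mix_weight=0.25):
    """Build two augmented views of one event by mixing it with two
    distinct, unrelated backgrounds (the paper's main augmentation;
    mix_weight is an illustrative parameter, not from the paper)."""
    b1, b2 = rng.choice(len(backgrounds), size=2, replace=False)
    view1 = (1 - mix_weight) * event + mix_weight * backgrounds[b1]
    view2 = (1 - mix_weight) * event + mix_weight * backgrounds[b2]
    return view1, view2

def nt_xent(z1, z2, temperature=0.1):
    """Normalized-temperature cross-entropy over a batch of paired
    embeddings: row i of z1 should match row i of z2."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # pairwise cosine sims
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives on the diagonal

# Toy usage: 4 "events" and 8 "backgrounds" as flat feature vectors.
events = rng.normal(size=(4, 16))
backgrounds = rng.normal(size=(8, 16))
pairs = [make_views(e, backgrounds) for e in events]
z1 = np.stack([p[0] for p in pairs])
z2 = np.stack([p[1] for p in pairs])
loss = nt_xent(z1, z2)
```

  In a real pipeline the views would be log-mel spectrogram patches passed through an encoder before the loss; here raw mixed vectors stand in for embeddings purely to show the data flow.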
  • Complete citation
  • Fonseca E, Ortego D, McGuinness K, O'Connor NE, Serra X. Unsupervised Contrastive Learning Of Sound Event Representations. In: Androutsos, Dimitri; Plataniotis, Kostas; Zhang, Xiao-Ping. Proceedings of the 46th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 1 ed. 2021. p. 371-375.