An Interpretable Deep Learning Model for Automatic Sound Classification

  • Authors
  • Zinemanas P, Rocamora M, Miron M, Font F, Serra X
  • UPF authors
  • SERRA, XAVIER; ZINEMANAS FRIET, PABLO; FONT CORBERA, FREDERIC; MIRON, MARIUS
  • Type
  • Scholarly articles
  • Journal title
  • Electronics
  • Publication year
  • 2021
  • Volume
  • 10
  • Number
  • 7
  • Pages
  • 1-23
  • ISSN
  • 2079-9292
  • Publication State
  • Published
  • Abstract
  • Deep learning models have improved cutting-edge technologies in many research areas, but their black-box structure makes it difficult to understand their inner workings and the rationale behind their predictions. This may lead to unintended effects, such as susceptibility to adversarial attacks or the reinforcement of biases. Despite the increasing interest in developing deep learning models that provide explanations of their decisions, there is still a lack of such research in the audio domain. To reduce this gap, we propose a novel interpretable deep learning model for automatic sound classification, which explains its predictions based on the similarity of the input to a set of learned prototypes in a latent space. We leverage domain knowledge by designing a frequency-dependent similarity measure and by considering different time-frequency resolutions in the feature space. The proposed model achieves results comparable to those of state-of-the-art methods in three different sound classification tasks involving speech, music, and environmental audio. In addition, we present two automatic methods to prune the proposed model that exploit its interpretability. Our system is open source and is accompanied by a web application for the manual editing of the model, which allows for a human-in-the-loop debugging approach. (An illustrative sketch of the prototype-similarity idea follows the citation below.)
  • Complete citation
  • Zinemanas P, Rocamora M, Miron M, Font F, Serra X. An Interpretable Deep Learning Model for Automatic Sound Classification. Electronics 2021; 10(7): 1-23.
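As a rough, hypothetical illustration of the mechanism summarized in the abstract — classifying from similarities between a latent representation and learned prototypes, with a frequency-dependent weighting — here is a minimal PyTorch sketch. It is not the authors' released implementation; the class name, tensor shapes, and the exponential distance-to-similarity mapping are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PrototypeSimilarity(nn.Module):
    """Hypothetical prototype layer with a frequency-dependent similarity.

    Assumed shapes: latent features z of shape (batch, C, T, F);
    prototypes of shape (P, C, T, F). Names are illustrative only.
    """

    def __init__(self, n_prototypes, channels, time_frames, freq_bins, n_classes):
        super().__init__()
        # Learned prototypes living in the same latent space as the encoder output.
        self.prototypes = nn.Parameter(
            torch.randn(n_prototypes, channels, time_frames, freq_bins)
        )
        # One learned weight per frequency bin: the "frequency-dependent" part.
        self.freq_weights = nn.Parameter(torch.ones(freq_bins))
        # Linear readout from prototype similarities to class logits.
        self.classifier = nn.Linear(n_prototypes, n_classes)

    def forward(self, z):
        # Squared differences between each input and each prototype: (B, P, C, T, F).
        diff = (z.unsqueeze(1) - self.prototypes.unsqueeze(0)) ** 2
        # Weight each frequency bin, then pool over channels, time, and frequency.
        dist = (diff * self.freq_weights).mean(dim=(2, 3, 4))   # (B, P)
        # Small distance -> high similarity; the similarities are the explanation.
        similarity = torch.exp(-dist)
        logits = self.classifier(similarity)
        return logits, similarity

# Example: 8 prototypes over a (C=16, T=32, F=64) latent space, 10 classes.
layer = PrototypeSimilarity(8, 16, 32, 64, n_classes=10)
z = torch.randn(4, 16, 32, 64)   # batch of 4 latent representations
logits, sims = layer(z)          # sims shows which prototypes matched each input
```

In this sketch each class logit is a weighted sum of similarities to individual prototypes, which is also what would make pruning tractable: prototypes whose classifier weights or similarity activations stay near zero are natural removal candidates, in the spirit of the two automatic pruning methods the paper describes.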
Bibliometric indicators
  • Cited 2 times in Scopus
  • Cited 2 times in Web of Science (WOS)