LoudSense
AI system for automatic audibility estimation of background music in audiovisual productions

Music from audiovisual productions (television or on-demand video) is an important source of income for the music industry, thanks to copyright royalties. The rules for distributing these royalties vary between countries and often consider aspects such as the time slot and the role of music within the production. However, the audibility of background music is not duly taken into account, mainly due to technical limitations, which has given rise to debate in recent years.

The LoudSense project aims to study how the audibility of music changes according to the context and objectives of listening and, building on the knowledge generated in this study, to develop a technology that automatically establishes the degree of audibility of background music in audiovisual productions. The goal is to offer the music industry a new service that allows copyright royalties to be distributed with this factor taken into account.

Specifically, LoudSense considers the use case in which the signal aired by the media is a mixture of music and non-musical signals (such as the spoken voice). The main technical challenge is that the signal to be analysed is the final mixture broadcast by television, which contains musical and non-musical sources simultaneously, so the perceived loudness of each signal cannot be measured separately. For this reason, the project requires artificial intelligence algorithms capable of classifying the mixture as audible music, barely audible music or inaudible music, following the recommendations of WIPO (World Intellectual Property Organization).
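
As a rough illustration of such a three-class classifier (not the project's actual model, whose architecture is not described here), the sketch below assumes log-mel spectrogram patches of the broadcast mixture as input and a small PyTorch network that outputs the three audibility classes; all names, dimensions and choices are hypothetical.

# Illustrative sketch only: a minimal PyTorch classifier mapping log-mel
# spectrogram patches of the broadcast mixture to three audibility classes.
import torch
import torch.nn as nn

CLASSES = ["audible", "barely_audible", "inaudible"]

class AudibilityClassifier(nn.Module):
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling over time and frequency
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_mels, n_frames) log-mel patches of the mixed signal
        return self.head(self.features(x).flatten(1))

if __name__ == "__main__":
    model = AudibilityClassifier()
    dummy_batch = torch.randn(4, 1, 64, 128)   # placeholder spectrogram patches
    probs = model(dummy_batch).softmax(dim=-1)
    for p in probs:
        print(CLASSES[int(p.argmax())], [round(v, 3) for v in p.tolist()])

In practice, training such a model would require broadcast mixtures annotated with the three audibility labels; the sketch only shows the input/output structure of the classification task.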

Starting date: May 2021

Project duration: 2 years

Project partners: BMAT and MTG

 

With the support of ACCIÓ