
Training data for deep learning source separation methods

(Text from Marius Miron's website; the full post about the SMC Conference can be found on his blog.)

Marius Miron obtained one of the Open Science awards during the 2017 PhD workshop at DTIC (see the entry about it on his blog). With the funds, he attended the SMC Conference with a paper on training robust models for classical music source separation starting from the score, and also visited Aalto University. Below is his report on the experience.

Between July 5th and 9th I attended the Sound and Music Computing Conference in Helsinki, Finland. I presented a paper on generating training data for deep learning source separation methods, particularly in classical music, where you have the score but no multi-track data. The slides can be found online and the code is in the source separation GitHub repository DeepConvSep.
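To make the idea concrete, here is a minimal, hypothetical sketch (not the paper's actual pipeline) of how multi-track training data could be synthesized when only a score is available: each instrument's part is rendered in isolation from a MIDI file with a soundfont, and the stems are summed into a mixture, yielding (mixture, isolated sources) pairs for training. The file names, the use of pretty_midi, and the FluidSynth rendering are assumptions for illustration.

```python
# Sketch: synthesize (mixture, stems) training pairs from a score.
# Assumes pretty_midi with FluidSynth support; 'score.mid' and
# 'soundfont.sf2' are placeholder file names.
import numpy as np
import pretty_midi

def render_training_example(midi_path, sf2_path, fs=44100):
    midi = pretty_midi.PrettyMIDI(midi_path)
    stems = []
    for instrument in midi.instruments:
        # Render this part in isolation; in practice one would also vary
        # tempo, dynamics, and timbre to make the trained model robust.
        audio = instrument.fluidsynth(fs=fs, sf2_path=sf2_path)
        stems.append(audio)
    # Pad the stems to a common length and sum them into the mixture.
    length = max(len(s) for s in stems)
    stems = [np.pad(s, (0, length - len(s))) for s in stems]
    mixture = np.sum(stems, axis=0)
    return mixture, stems

mixture, stems = render_training_example('score.mid', 'soundfont.sf2')
```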

I had the opportunity to visit the acoustics lab at Aalto University and attend a few demos. I visited the anechoic rooms where they recorded the orchestral dataset that I annotated and used in my paper on score-informed orchestral separation. Interestingly, in one of the demos, Jukka Pätynen convolved close-microphone recordings with impulse responses taken from famous concert venues, to demonstrate how different the same recording can sound in various halls.
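The convolution-reverb idea behind that demo is straightforward; below is a minimal sketch, assuming mono WAV files, soundfile for I/O, and SciPy's FFT-based convolution. The file names are placeholders, not files from the demo.

```python
# Sketch: place a dry (close-microphone) recording in a concert hall by
# convolving it with that hall's impulse response.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, fs = sf.read('close_mic_recording.wav')      # placeholder file
ir, fs_ir = sf.read('concert_hall_ir.wav')        # placeholder file
assert fs == fs_ir, "recording and impulse response must share a sample rate"

# Assume mono signals for simplicity; take the first channel if stereo.
if dry.ndim > 1:
    dry = dry[:, 0]
if ir.ndim > 1:
    ir = ir[:, 0]

# FFT-based convolution; mode='full' keeps the reverberant tail.
wet = fftconvolve(dry, ir, mode='full')
wet /= np.max(np.abs(wet))  # normalize to avoid clipping

sf.write('recording_in_hall.wav', wet, fs)
```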

There were quite a few interesting posters, among which I would mention:

(...continue reading on his blog!)