Three MIR talks by researchers from McGill

22 May 2017

Gabriel Vigliensoni, Martha Thomae, and Jorge Calvo-Zaragoza, from McGill University, Canada, will present their research on Monday, May 22nd, at 3:30pm in room 55.309.

Gabriel Vigliensoni
Title: A case study with the Music Listening Histories Dataset: Do demographic, profiling, and listening context features improve the performance of automatic music recommendation systems?
Abstract: Digital music services give us real-time access to millions of songs, and automatic music recommendation systems offer new ways to discover music. These systems, however, do not account for the context of music listening, even though the function of music in everyday life depends on that context. Incorporating information about people's listening habits can therefore improve recommendations. In this talk, I present my research on collecting music listening histories from half a million users, and I explain how insights drawn from these data improve the prediction accuracy of a music recommendation model.
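The idea of conditioning recommendations on listening context can be sketched with a toy example. Everything below is illustrative (the event tuples, context names, and fallback rule are assumptions, not the dataset or model from the talk): a context-free baseline recommends the globally most-played track, while the context-aware variant conditions play counts on the listening context.

```python
from collections import Counter, defaultdict

# Toy listening events: (user, track, context). All names are
# made up for illustration; the real dataset is far richer.
events = [
    ("u1", "trackA", "commute"), ("u1", "trackB", "workout"),
    ("u2", "trackA", "commute"), ("u2", "trackC", "workout"),
    ("u3", "trackB", "workout"), ("u3", "trackA", "commute"),
]

# Context-free baseline: recommend the globally most-played track.
global_counts = Counter(track for _, track, _ in events)
baseline = global_counts.most_common(1)[0][0]

# Context-aware variant: condition play counts on the listening context.
by_context = defaultdict(Counter)
for _, track, ctx in events:
    by_context[ctx][track] += 1

def recommend(context):
    """Most-played track for the given context, falling back to the
    global favourite when the context has never been observed."""
    counts = by_context.get(context)
    return counts.most_common(1)[0][0] if counts else baseline

print(baseline)              # trackA (3 plays overall)
print(recommend("workout"))  # trackB (2 workout plays vs. 1 for trackC)
```

Even in this tiny example, the two recommenders disagree for the "workout" context, which is the kind of gap that context features aim to close.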

Martha Thomae
Title: A Methodology for Encoding Mensural Music: Introducing the Mensural MEI Translator
Abstract: Polyphonic music from the Late Middle Ages and the Renaissance (roughly the thirteenth through the sixteenth century) was written in mensural notation, a system characterized by note durations that are context-dependent. Efforts have been made to encode this music in a machine-readable format, with the goal of preserving the repertoire in its original notation while still allowing computational musical analysis. Only a few formats support encoding this old system of notation; one of them is MEI (Music Encoding Initiative). Because hand-coding music is inefficient in general, and mensural notation adds the complication of interpreting note values while coding, we propose a methodology to facilitate encoding the music into a Mensural MEI file through a tool we developed, the Mensural MEI Translator. The methodology lets the musicologist enter the piece in a score editor instead of encoding it directly into a Mensural MEI file. Through a series of processes, that file is then converted into a Mensural MEI file that encodes the piece in the original (mensural) notation.

Jorge Calvo-Zaragoza
Title: Document Analysis for Music Scores with Deep Learning
Abstract: Content within musical documents is not restricted to notes; it involves heterogeneous information such as symbols, text, staff lines, ornaments, and annotations. Before any attempt at automatically recognizing the information on the scores with an Optical Music Recognition system, it is necessary to detect and classify each constituent layer of information into different categories. The greatest obstacle to this classification is the high heterogeneity among music collections, which makes it difficult to propose methods that generalize to a broad range of sources. This presentation discusses a data-driven document analysis framework based on Deep Learning methods, namely Convolutional Neural Networks. It focuses on extracting the different layers within musical documents by categorizing the image at the pixel level.
The main advantage of the approach is that it can be used regardless of the type of document provided, as long as training data is available. We illustrate some of the capabilities of the framework by showing examples of common tasks that are frequently performed on images of musical documents. We believe that this framework will allow the development of generalizable and scalable automatic music recognition systems, thus facilitating the creation of large-scale browsable and searchable repositories of music documents.
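The pixel-level framing can be sketched as follows: each pixel is labelled by classifying a small patch centred on it. Everything here is an illustrative assumption, not the talk's framework: a trivial centre-pixel intensity rule stands in for the trained Convolutional Neural Network, and the label set is reduced to two categories.

```python
import numpy as np

LABELS = ["background", "ink"]  # the real system uses richer categories
PATCH = 3  # patch size; a CNN's receptive field would be larger

def classify_patch(patch):
    # Stand-in for the CNN: only the centre pixel decides here, whereas
    # a real network would use the whole patch as spatial context.
    return "ink" if patch[PATCH // 2, PATCH // 2] < 0.5 else "background"

def segment(image):
    """Label every pixel by classifying the patch around it."""
    pad = PATCH // 2
    padded = np.pad(image, pad, mode="edge")  # handle border pixels
    labels = np.empty(image.shape, dtype=object)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            labels[r, c] = classify_patch(padded[r:r + PATCH, c:c + PATCH])
    return labels

# A 4x4 toy "image": 1.0 = white paper, 0.0 = a dark staff line in row 2.
img = np.ones((4, 4))
img[2, :] = 0.0
out = segment(img)  # row 2 is labelled "ink", the rest "background"
```

Because the classifier sees only local patches, swapping in a model trained on a different collection is all that is needed to adapt the pipeline, which is the sense in which the approach works for any document type given training data.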