PhD defense by Álvaro Sarasúa and seminars by jury members Frederic Bevilacqua and Maarten Grachten

26.05.2017

Date: Monday May 29th

Location: Universitat Pompeu Fabra, Tanger building, room 55.309.

Program:

11:00 PhD defense of Álvaro Sarasúa Berodia, Musical Interaction Based on the Conductor Metaphor.  

         Supervised by Emilia Gómez and Enric Guaus in the context of the PHENICX project, as a joint collaboration between the Music Technology Group and the Sonology Department, Escola Superior de Música de Catalunya.

         Jury members: Frederic Bevilacqua (IRCAM), Sergi Jordà (Universitat Pompeu Fabra), Maarten Grachten (Johannes Kepler University)

15:30 Invited seminars 

Frederic Bevilacqua, Movement Sound Interaction: from creative applications to rehabilitation

I will present an overview of the research we have been conducting at IRCAM on gesture capture and analysis. We have been collaborating with various composers and performers, which has allowed us to develop important concepts and paradigms for the design of musical interactive systems. For example, we have built several augmented instruments by adding motion-capture systems to acoustic instruments such as the violin. This allows us to study instrumental gestures and to develop software for following and recognising gestures. We have also developed specific tangible interfaces, such as the MO - Modular Musical Objects and, more recently, the RIoT, which allow interaction with digital sound environments. Finally, I will present some recent studies and applications related to sensorimotor learning and embodied music cognition.

Bio: Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM in Paris. His research concerns the modeling and design of interaction between movement and sound, and the development of gesture-based interactive systems.
 
Maarten Grachten, Basis models of musical expression for creating and explaining music performances

Expression in music performance is an important aspect of score-based music traditions such as Western classical music: music performed by skilled musicians can be captivating, just as a poor performance can put listeners off. Computational modeling of expression in music performance is a challenging and ongoing effort, aiming both at a better understanding of the underlying principles and at novel applications in music technology. In this talk, we will present a recently proposed modeling framework for musical expression that uses basis-function representations of score information. We show how it can be used for predictive modeling, to generate an expressive performance of a musical score, as well as for explanatory purposes. We illustrate this framework both in the context of solo piano music and in classical symphonic music.

Bio: Maarten Grachten holds a Ph.D. degree in computer science and digital communication (2006, Pompeu Fabra University, Spain). He is a former member of the Artificial Intelligence Research Institute (IIIA, Spain), the Music Technology Group (MTG, Spain), the Institute for Psychoacoustics and Electronic Music (Belgium), and the Austrian Research Institute for Artificial Intelligence (OFAI, Austria). Currently, he is a senior researcher at the Department of Computational Perception (Johannes Kepler University, Austria). Grachten has published in and reviewed for numerous international journals and conferences, on topics related to machine learning, music information retrieval, affective computing, music cognition, and computational musicology. His current research focuses on computational modeling of musical expectation and expressive performance.

 
