We are interested in modeling expression in music performances. We are also interested in the use of emotions in brain-computer (music) interfaces. Finally, we study how people learn musical instruments.
Team
Rafael Ramírez, Professor, Head of lab
Sergio Giraldo, Postdoc
David Alberto Cabrera, PhD student
Vicente Pallarés, PhD student
Fabio Ortega, PhD student
Research
We are interested in modeling expression in music performances. Expressive music performance studies how skilled musicians manipulate sound properties such as pitch, timing, amplitude and timbre in order to ‘express’ their interpretation of a musical piece. While these manipulations are clearly distinguishable by listeners, and are often reflected in concert attendance and recording sales, they are extremely difficult to formalize. Using machine learning techniques, we investigate the creative process of manipulating these sound properties in an attempt to understand, recreate and teach expression in performance (Maestre & Ramirez, 2010; Ramirez et al., 2010).
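To give a flavour of this approach, the sketch below trains a regression model to predict a performer's note-level timing deviation from score descriptors. It is a minimal illustration, not our actual pipeline: the feature set (pitch, nominal duration, metrical position, melodic interval) and the synthetic data are assumptions made only so the example runs; in a real study the targets come from performance recordings aligned to the score.

```python
# Minimal sketch of expressive-performance modeling (illustrative only):
# predict the timing deviation a performer introduces on each note
# from score-level note descriptors, using a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_notes = 500

# Illustrative score features per note: MIDI pitch, nominal duration
# (beats), metrical position within the bar, and interval from the
# previous note. These are assumed descriptors, not the lab's features.
X = np.column_stack([
    rng.integers(55, 88, n_notes),               # MIDI pitch
    rng.choice([0.25, 0.5, 1.0, 2.0], n_notes),  # nominal duration
    rng.uniform(0.0, 4.0, n_notes),              # metrical position
    rng.integers(-12, 13, n_notes),              # melodic interval
])

# Target: timing deviation from the score (in beats). Synthesized here
# so the example is self-contained; real targets come from aligned
# performance recordings.
y = 0.05 * np.sin(X[:, 2]) + 0.01 * X[:, 3] + rng.normal(0, 0.02, n_notes)

model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```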
We are also interested in the use of emotions in brain-computer (music) interfaces. Accounting for emotions in human-computer interaction is important for addressing new user needs. We apply machine learning techniques to detect emotion from brain activity recorded via electroencephalography (EEG), and we use these technologies to investigate the potential benefits of combining music and brain-computer interfaces for improving users’ health and quality of life. Specifically, we investigate the emotional reinforcement capacity of brain-computer music interfaces, and their ability to improve conditions such as depression, Parkinson’s disease and autism.
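The sketch below illustrates the general shape of such a pipeline with entirely synthetic data; it is not our actual system. Band-power features are extracted per EEG channel in the alpha and beta ranges and fed to a linear classifier that separates two affective states. The sampling rate, channel count and band choices are assumptions for the example.

```python
# Minimal sketch of EEG-based emotion detection (synthetic data):
# per-channel spectral band power -> linear classifier.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
fs, n_trials, n_channels, n_samples = 128, 200, 14, 256

def band_power(trial, fs, lo, hi):
    # Average spectral power in [lo, hi] Hz for each channel.
    freqs, psd = welch(trial, fs=fs, nperseg=fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=1)

# Synthetic trials: one class gets slightly stronger alpha (10 Hz).
X_raw = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, n_trials)
t = np.arange(n_samples) / fs
X_raw[labels == 1] += 0.4 * np.sin(2 * np.pi * 10 * t)

# Features: alpha (8-12 Hz) and beta (13-30 Hz) power per channel.
feats = np.array([
    np.concatenate([band_power(tr, fs, 8, 12), band_power(tr, fs, 13, 30)])
    for tr in X_raw
])

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, feats, labels, cv=5)
print(f"Cross-validated accuracy: {acc.mean():.2f}")
```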
Finally, we study how people learn musical instruments. Taking the violin as a case study, we aim to create new interactive, assistive, self-learning, augmented-feedback, and social-aware systems complementary to traditional instrument teaching. Building on a tightly coupled interaction between technical and pedagogical partners, we coordinate TELMI, an H2020 project that addresses questions such as “What will musical instrument learning environments look like in 5-10 years’ time?” and “What impact will these new musical environments have on instrument learning as a whole?” The general objectives of the TELMI project are: (1) to design and implement new interaction paradigms for music learning and training based on state-of-the-art multi-modal (audio, image, video and motion) technologies; (2) to evaluate the effectiveness of these new paradigms from a pedagogical point of view; (3) based on the evaluation results, to develop new multi-modal interactive music learning prototypes for student-teacher, student-only, and collaborative learning scenarios; and (4) to create a publicly available reference database of multimodal recordings for online learning and social interaction among students.
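As one hedged illustration of the augmented-feedback idea (not the TELMI implementation), the sketch below estimates the fundamental frequency of a played note with a crude autocorrelation method and reports its deviation from a target pitch in cents, the kind of low-level measurement on which richer practice feedback can be built.

```python
# Minimal sketch of pitch-accuracy feedback for string practice
# (illustrative assumption, not the TELMI system).
import numpy as np

def estimate_f0(signal, fs, fmin=150.0, fmax=1500.0):
    """Crude autocorrelation-based pitch estimate for a monophonic note."""
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return fs / lag

fs = 44100
t = np.arange(int(0.5 * fs)) / fs
played = np.sin(2 * np.pi * 442.0 * t)  # synthetic, slightly sharp A4

f0 = estimate_f0(played, fs)
target = 440.0                          # open A string
cents = 1200 * np.log2(f0 / target)
print(f"Detected {f0:.1f} Hz, {cents:+.0f} cents from target")
```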