In the Music and Multimodal Interaction Lab we focus on multimodal interactive technologies and how to use them in the music and sound field.
- Sergi Jordà, Head of lab
- Sebastian Mealla C., Postdoc
- Panos Papiotis, Postdoc
- Perfecto Herrera, Researcher collaborator
- Pere Calopa, Researcher
- Ángel Faraldo, PhD student
- Daniel Gómez Marín, PhD student
- Cárthach Ó Nuanáin, PhD student
Through the years we have approached this field from different perspectives, such as collective musical creation (Jordà, 2001) and the development of a framework for the conception and design of new musical instruments (Jordà, 2005). A notable result of these studies was the Reactable (Jordà et al., 2005), an electronic music instrument that combines a tangible tabletop interface with concepts and techniques such as modular synthesis, visual programming and visual feedback.
From the technical needs of the Reactable project, we have also developed technologies such as reacTIVision (Bencina et al., 2005) for the tracking of tagged objects on tabletop surfaces or the TUIO protocol (Kaltenbrunner et al., 2005), specifically designed to simplify the communication between processes in a tangible user interface environment.
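To make the role of TUIO more concrete, the sketch below models the state of one tagged tabletop object and builds the argument list of a `/tuio/2Dobj` "set" message as defined in the TUIO 1.1 profile (session ID, marker class ID, normalized position, angle, velocities and accelerations). This is an illustrative sketch of the message layout, not code from reacTIVision or the Reactable; the class and function names are our own.

```python
from dataclasses import dataclass

@dataclass
class TuioObject:
    """State of one tagged object on the table (TUIO 1.1 /tuio/2Dobj profile)."""
    session_id: int     # s: unique session ID for this object instance
    class_id: int       # i: fiducial marker ID printed on the object
    x: float            # x position, normalized to 0..1
    y: float            # y position, normalized to 0..1
    angle: float        # a: rotation angle in radians
    vx: float = 0.0     # X: x velocity
    vy: float = 0.0     # Y: y velocity
    vr: float = 0.0     # A: rotation velocity
    accel: float = 0.0  # m: motion acceleration
    raccel: float = 0.0 # r: rotation acceleration

def set_message(obj: TuioObject) -> list:
    """Build the OSC argument list of a TUIO 1.1 /tuio/2Dobj 'set' message."""
    return ["set", obj.session_id, obj.class_id, obj.x, obj.y,
            obj.angle, obj.vx, obj.vy, obj.vr, obj.accel, obj.raccel]
```

In a real setup, a tracker such as reacTIVision would send these arguments to the address `/tuio/2Dobj` over OSC/UDP, bundled with "alive" and "fseq" messages, and the instrument process would decode them back into object states.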
For a while we focused our research on tabletop and tangible interaction (Jordà et al., 2010), studying how these types of interfaces can favor multi-dimensional and continuous real-time interaction, exploration and multi-user collaboration, thus expanding our areas of interest beyond the musical performance domain. Some of the topics we explored included: the potential of surface computing for edutainment, children, elderly people and special education (Gallardo et al., 2008); the potential of these types of interfaces in complex interactive situations and in exploratory and expressive activities (Julià & Jordà, 2009); their potential for enhancing creative collaboration through effective emotional communication; and extending surface computing interaction beyond the surface.
Arising from this focus, we explored the implications of multi-user, multi-task gesture recognition (Julià & Jordà, 2015), beyond-the-surface interaction for tabletops using 3D markers (Gallardo & Jordà, 2013), and sonic interaction design for implicit physiological computing (Mealla et al., 2016). Other outcomes from this period were frameworks for the rapid development of musical tabletop applications (Julià et al., 2011).
We are currently focused on transferring research outcomes on multimodal interaction technology to the creative industries through the EU-funded project RAPID-MIX, and on new expressive tools for MIR-informed music creation in the EU-funded project GiantSteps.