Multimodal annotation for expressive communication

This research line, Multimodal annotation for expressive communication, builds upon the results and experience gained in the H2020 KRISTINA (Knowledge-Based Information Agent with Social Competence and Human Interaction Capabilities) project in the areas of natural language processing, computer vision, and virtual character design, with the participation of three of DTIC's research groups: Natural Language Processing (TALN, project coordinator, http://www.taln.upf.edu/), Cognitive Media Technologies (CMTech, http://cmtech.upf.edu/), and Interactive Technologies (http://gti.upf.edu/).


The research and development activities within KRISTINA are expected to bring deeper understanding and know-how in natural human-computer interaction and affective computing, with the advantage of being embedded in a strong consortium that can boost the international visibility and impact of the results.


The DTIC groups are responsible for the activities related to computer vision and low-level facial expression analysis, language analysis and expressive speech synthesis, and virtual character design and realization. In particular:


  • We are building on our established expertise in algorithms for static image processing and extending it to dynamic (video) processing, an area where our expertise is considerably more recent.

  • We are extending our research-oriented natural language processing toolkit. The language parsing and generation modules being developed further in KRISTINA will significantly increase the toolkit's value, and the individual modules will be made available to the community as software libraries.

  • We are integrating the KRISTINA developments into our VR toolkit, adding important features such as natural facial modelling, including lip movements synchronized with speech (see the sketch after this list).
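
As an illustration of the speech and lip-movement synchronization mentioned in the last item, the following is a minimal sketch of driving a character's mouth blendshapes from time-stamped phonemes. The phoneme-to-viseme table, blendshape names, and timing values are illustrative assumptions, not the actual KRISTINA implementation.

from dataclasses import dataclass

# Hypothetical phoneme-to-viseme table; a real system uses a fuller inventory.
PHONEME_TO_VISEME = {
    "p": "viseme_PP", "b": "viseme_PP", "m": "viseme_PP",
    "f": "viseme_FF", "v": "viseme_FF",
    "a": "viseme_AA", "o": "viseme_OO", "i": "viseme_IH",
}

@dataclass
class PhonemeEvent:
    phoneme: str
    start: float  # seconds, as reported by the speech synthesizer
    end: float

def viseme_track(events):
    """Turn time-stamped phonemes into (start, end, blendshape) keyframes
    that a facial animation engine can interpolate between."""
    return [(e.start, e.end, PHONEME_TO_VISEME.get(e.phoneme, "viseme_REST"))
            for e in events]

if __name__ == "__main__":
    events = [PhonemeEvent("m", 0.00, 0.08),
              PhonemeEvent("a", 0.08, 0.25),
              PhonemeEvent("p", 0.25, 0.33)]
    for start, end, shape in viseme_track(events):
        print(f"{start:.2f}-{end:.2f}s  {shape}")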


The availability of sufficiently large volumes of training material (ground truth) is indispensable for all areas of data-driven scientific research, and the activities within this research line are no exception. To serve as training material, data must be annotated: specific features identified in the data (text corpora, videos, time series, etc.) as characteristic, and thus suitable to capture distinctive patterns, must be highlighted. We are currently designing coherent annotation guidelines that take into account and synchronize all communication modes (gesture, facial expression, voice), and annotating material according to these guidelines, in order to provide valuable resources not only for this research line, and in particular for the current and future research of the three DTIC groups involved, but also for the multimodal communication research community in general.
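
To make the idea of synchronized multimodal annotation concrete, the following is a minimal sketch of a tier-based, time-aligned annotation record, in the spirit of annotation tools such as ELAN. All tier and label names are illustrative assumptions, not the project's actual guidelines.

from dataclasses import dataclass, field

@dataclass
class Annotation:
    tier: str     # modality tier, e.g. "gesture", "facial", "prosody"
    label: str    # annotated feature, e.g. "head_nod", "smile", "rising_pitch"
    start: float  # seconds from the start of the recording
    end: float

@dataclass
class AnnotatedSegment:
    media_file: str
    annotations: list = field(default_factory=list)

    def active_at(self, t):
        """Labels active at time t across all tiers: the cross-modal view
        that synchronized guidelines are meant to make comparable."""
        return [a for a in self.annotations if a.start <= t < a.end]

if __name__ == "__main__":
    seg = AnnotatedSegment("session_01.mp4")
    seg.annotations += [
        Annotation("gesture", "head_nod", 1.2, 1.8),
        Annotation("facial", "smile", 1.0, 2.5),
        Annotation("prosody", "rising_pitch", 1.1, 1.6),
    ]
    for a in seg.active_at(1.3):
        print(a.tier, a.label)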


To learn more:


  • Wanner, L. et al. (2017). KRISTINA: A Knowledge-Based Virtual Conversation Agent. In: Demazeau, Y., Davidsson, P., Bajo, J., Vale, Z. (eds.) Advances in Practical Applications of Cyber-Physical Multi-Agent Systems: The PAAMS Collection. PAAMS 2017. Lecture Notes in Computer Science, vol. 10349. Springer, Cham.
  • Presentation of the project at the Data-driven Knowledge Extraction Workshop, June 2016 (slides and information on KRISTINA, an EU-funded research project that aims to develop technologies for a human-like, socially competent, communicative agent; it runs on mobile communication devices and serves migrants facing language and cultural barriers in the host country).


Principal researchers

Leo Wanner

Researchers

Xavier Binefa
Josep Blat
Mónica Domínguez
Alun Evans
Mireia Farrús
Federico Sukno
Jens Grivolla
Beatriz Fisas