Ruiz A, Martinez O, Binefa X, Sukno FM. Fusion of Valence and Arousal Annotations through Dynamic Subjective Ordinal Modelling. In Proc. 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017), Washington DC, USA, in press, 2017.
This record is part of the list of published results directly linked to the projects co-funded by the Spanish Ministry of Economy and Competitiveness under the María de Maeztu Units of Excellence Program (MDM-2015-0502).
A list of publications acknowledging this funding is available in Scopus.
The record for each publication includes access to postprints (following the Open Access policy of the program), as well as to the datasets and software used. Ongoing work with the UPF Library and Informatics will soon improve the interface and the automated retrieval of this information.
The MdM Strategic Research Program has its own Zenodo community for material available in this repository as well as in the UPF e-repository.
A. Ruiz, O. Martinez, X. Binefa and F.M. Sukno. Fusion of Valence and Arousal Annotations through Dynamic Subjective Ordinal Modelling. In Proc. 12th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2017), Washington DC, USA, in press, 2017.
An essential issue when training and validating computer vision systems for affect analysis is how to obtain reliable ground-truth labels from a pool of subjective annotations. In this paper, we address this problem when labels are given on an ordinal scale and the annotated items are structured as temporal sequences. This problem is of special importance in affective computing, where the collected data typically consist of videos of human interactions annotated along the Valence and Arousal (V-A) dimensions. Moreover, recent works have shown that the inter-observer agreement of V-A annotations can be considerably improved if the annotations are given on a discrete ordinal scale. In this context, we propose a novel framework that explicitly introduces ordinal constraints to model the subjective perception of annotators. We also incorporate dynamic information to account for the temporal correlation between consecutive ground-truth labels. In experiments on synthetic and real data with V-A annotations, we show that the proposed method outperforms alternative approaches that ignore either the ordinal structure of the labels or their temporal correlation.
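To make the setting concrete, the sketch below is a self-contained toy illustration in Python, not the model proposed in the paper. It simulates the two properties the abstract emphasises: each annotator maps a shared, temporally correlated latent valence signal to ordinal labels through their own subjective thresholds, and a fusion rule that respects the ordinal scale and the temporal correlation can be compared against a naive per-frame average. All variable names and parameter values here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
T, R, K = 200, 5, 5  # frames, annotators, ordinal levels

# Temporally correlated latent valence: a smooth AR(1) process.
z = np.zeros(T)
for t in range(1, T):
    z[t] = 0.95 * z[t - 1] + rng.normal(scale=0.3)

# Each annotator discretises the latent signal through subjective thresholds
# (randomly perturbed cutpoints) after adding individual perceptual noise.
labels = np.empty((R, T), dtype=int)
for r in range(R):
    cuts = np.sort(np.linspace(-1.5, 1.5, K - 1) + rng.normal(scale=0.4, size=K - 1))
    noisy = z + rng.normal(scale=0.4, size=T)
    labels[r] = np.searchsorted(cuts, noisy)  # ordinal label in {0, ..., K-1}

# Baseline fusion: per-frame mean, which treats ordinal labels as interval
# data and ignores the temporal structure.
fused_mean = labels.mean(axis=0)

# Ordinal-aware fusion: per-frame median (respects the ordinal scale),
# followed by temporal smoothing (exploits frame-to-frame correlation).
fused_ord = np.convolve(np.median(labels, axis=0), np.ones(7) / 7.0, mode="same")

def spearman(a, b):
    # Spearman rank correlation via Pearson correlation of the ranks.
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Rank agreement with the (here known) latent signal.
print("mean fusion        :", round(spearman(z, fused_mean), 3))
print("median + smoothing :", round(spearman(z, fused_ord), 3))

This heuristic comparison only illustrates why discarding the ordinal structure of the labels or their temporal correlation loses information; the paper replaces such heuristics with a principled probabilistic model of the annotators' subjective perception.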
Additional material: