Articles and book chapters

Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision

  • Authors
  • Sorodoc, I. T.; Pezzelle, S.; Bernardi, R.
  • UPF authors
  • SORODOC, IONUT-TEODOR
  • Authors of the book
  • VV.AA. (various authors)
  • Book title
  • 6th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)
  • Publication year
  • 2018
  • Pages
  • 419-430
  • ISBN
  • 2721
  • Abstract
  • The present work investigates whether different quantification mechanisms (set comparison, vague quantification, and proportional estimation) can be jointly learned from visual scenes by a multi-task computational model. The motivation is that, in humans, these processes underlie the same cognitive, non-symbolic ability, which allows an automatic estimation and comparison of set magnitudes. We show that when information about lower-complexity tasks is available, the higher-level proportional task becomes more accurate than when performed in isolation. Moreover, the multi-task model is able to generalize to unseen combinations of target/non-target objects. Consistently with behavioral evidence showing the interference of absolute number in the proportional task, the multi-task model no longer works when asked to provide the number of target objects in the scene.
  • Complete citation
  • Sorodoc, I. T.; Pezzelle, S.; Bernardi, R. Comparatives, Quantifiers, Proportions: A Multi-Task Model for the Learning of Quantities from Vision. In: VV.AA. 6th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). 1st ed. 2018. pp. 419-430.
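
Illustrative sketch (not part of the publication record): the abstract describes a single shared representation feeding three quantification tasks, with the lower-complexity losses helping the harder proportional task. A minimal PyTorch version of such a multi-task setup could look as follows; the encoder over precomputed image features, the layer sizes, and the class counts (3 comparison classes, 4 quantifiers, 17 proportion bins) are assumptions made for this example, not the authors' implementation.

import torch
import torch.nn as nn

class MultiTaskQuantityModel(nn.Module):
    def __init__(self, feat_dim=4096, hidden=512,
                 n_comparison=3, n_quantifiers=4, n_proportions=17):
        super().__init__()
        # Shared encoder over precomputed image features (assumption:
        # frozen CNN features, common in this line of work).
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One classification head per quantification task.
        self.comparison = nn.Linear(hidden, n_comparison)
        self.quantification = nn.Linear(hidden, n_quantifiers)
        self.proportion = nn.Linear(hidden, n_proportions)

    def forward(self, x):
        h = self.shared(x)
        return self.comparison(h), self.quantification(h), self.proportion(h)

# Joint training step: the lower-complexity losses are added to the
# proportional loss, so their gradients also shape the shared layers.
model = MultiTaskQuantityModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ce = nn.CrossEntropyLoss()

x = torch.randn(8, 4096)                  # a batch of image features
y_cmp = torch.randint(0, 3, (8,))         # fewer / same / more
y_qua = torch.randint(0, 4, (8,))         # e.g. none / few / most / all
y_pro = torch.randint(0, 17, (8,))        # proportion bins

logits_cmp, logits_qua, logits_pro = model(x)
loss = ce(logits_cmp, y_cmp) + ce(logits_qua, y_qua) + ce(logits_pro, y_pro)
opt.zero_grad()
loss.backward()
opt.step()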