The role of interactional language in human-machine interactions. What can we learn from mindless interactants? (INTERACT)
Agencia Estatal de Investigación, TED2021-129546B-I00, 2021-2023
Associated GLiF Researchers:
The goal of this project is to explore the role of interactional language (henceforth i-language) in human-machine interactions (HMI). i-language refers to those aspects of language that do not contribute to the content conveyed in the interaction but rather are dedicated to regulating the interaction itself: linguistic means to regulate turn-taking and the construction of common ground (e.g., hey, huh, hmmm). I will explore i-language in HMI from two different perspectives: i) Does the machine interactant (M) comprehend and produce i-language? For example, Siri, Apple's virtual assistant, responds to the use of hey like a human interactant would: uttering Hey Siri… is the preferred verbal way to get her attention. Similarly, Siri frequently uses hmmmm in utterance-initial position before she explains that she does not understand the human interactant's prior move. However, Siri does not respond like a human to the use of huh?, a unit of language used, among other things, to initiate a repair: it is used when an interlocutor fails to understand a previous utterance. Siri neither produces huh nor seems to comprehend it. ii) Does the human interactant (H) use i-language in HMI? Is H's actual use of i-language determined by M's production and comprehension of i-language? If M's language use in conversations were not limited, what would H consider to be an ideal use of i-language in HMI?
Experimental Studies on Discourse Structure (EXPEDIS)
Agencia Estatal de Investigación, PID2021-122779NB-I00
PI: Laia Mayol
Associated GLiF Researchers: Enric Vallduví, Sara Amido, Sebastian Buchczyk, Xixian Liao
When humans communicate, we do so through discourses, which encompass both written texts and oral speech, both monologues and dialogues (or multilogues). Yet the scientific study of language has traditionally focused on the sentence level (from both the syntactic and the semantic point of view) and has paid less attention to this higher-level structure. However, in order to understand how we communicate and interpret meaning, it is crucial to understand how discourse is structured. The general goal of EXPEDIS is to gain a better understanding of discourse structure using experimental techniques. We will examine four case studies concerning different phenomena at the semantics/pragmatics interface which can both be informed by a better understanding of discourse structure and inform it in turn. EXPEDIS will deploy the experimental methodologies of the field of experimental pragmatics, which is becoming fundamental to understanding the underpinnings of language and its relationship to the human mind.
Microdiachrony in endangered languages across modalities (MICRODIAC(H)RO)
Ministerio de Ciencia, Innovación y Universidades / Agencia Estatal de Investigación, PID2020-119041GB-I00, 2021-2025
co-PIs: Josep Quer and Gemma Barberà
Associated GLiF Researchers: Emanuela Pinna
This project focuses on two endangered languages in different modalities: Catalan Sign Language (LSC, Catalonia) and Griko (Italiot-Greek variety of Salento, Southern Italy). Change in the morphosyntactic properties of such languages has received very little attention so far. This project will make a novel contribution to filling this research gap from a cross-modal perspective on two fronts: (a) exploring how diachronic change in a relatively small time span can be detected, described and accounted for (microdiachrony), and (b) creating research resources in the form of two corpora of transcribed and annotated data that make the study of grammatical change feasible.
Contextual effects in the choice of referring expressions for visually presented entities (CORE)
Agencia Estatal de Investigación, PID2020-112602GB-I00, 2021-2025
co-PIs: Louise McNally and Gemma Boleda
Associated GLiF Researchers: Josep M. Fontana, Peter Sutton, Jialing Liang
People use language to talk about the world, that is, to refer; accordingly, language offers a very rich set of resources for reference. For example, in any given context, a speaker can choose between a more or a less specific expression (the dog, the small dog, the chihuahua), or between expressions that convey complementary information about the referent (the woman, the skier). Which referring expression (RE) a speaker chooses on a given occasion depends on various semantic and pragmatic factors. The theoretical goal of the CORE project is to contribute to a better understanding of the following factors and their interaction in RE choice in context: the set of general principles that intervene in efficient communication; the contextually salient properties of the entity being referred to and the features of its immediate environment that influence successful reference; and the implicit semantic organization of RE alternatives and the conventionalized division of labor between them, especially organization based on implicative semantic relations and alternative cross-classifications which highlight different properties of the referred-to entities (e.g., woman vs. skier, or variation in the use of noun classifiers in languages such as Mandarin Chinese). Our empirical goal is to study RE choice under more naturalistic conditions than has previously been done. To combine these two broad goals in a single, feasible project, we make the practical decision of centering our attention on reference to single physical entities in visual contexts. We take as an empirical starting point the ManyNames dataset, the result of a large-scale collection of RE choices for naturalistic images (Silberer et al. 2020).
The Sign Hub: preserving, researching and fostering the linguistic, historical and cultural heritage of European Deaf signing communities with an integral resource
European Commission, 693349, 2016-2020
PI/Coordinator: Josep Quer
Associated GLiF Researchers: Gemma Barberà, Jordina Sánchez, Alexandra Navarrete, Sara Cañas, Raquel Veiga, Giorgia Zorzi
SIGN-HUB is a 4-year research project funded by the European Commission within Horizon 2020 Reflective Society 2015, Research and Innovation Actions. This project, designed by a European research team, aims to provide the first comprehensive response to the societal and scientific challenge resulting from the generalized neglect of the cultural and linguistic identity of signing Deaf communities in Europe. It will provide an innovative and inclusive resource hub for the linguistic, historical and cultural documentation of the Deaf communities' heritage and for sign language assessment in clinical intervention and school settings. To this end, it will create an open state-of-the-art digital platform with customized accessible interfaces. The project will initially feed that platform with core content in the following domains, expandable in the future to other sign languages:
(i) digital grammars of 6 sign languages, produced with a new online grammar writing tool;
(ii) an interactive digital atlas of linguistic structures of the world's sign languages;
(iii) online sign language assessment instruments for education and clinical intervention, and
(iv) the first digital archive of life narratives by elderly signers, subtitled and partially annotated for linguistic properties.
The Grammar of Reference in Catalan Sign Language (GRAMREFLSC)
Ministerio de Economía y Competitividad, FFI2015-68594-P, 2016-2019
PI: Josep Quer
Associated GLiF Researchers: Gemma Barberà, Sara Cañas, Alexandra Navarrete, Raquel Veiga, Giorgia Zorzi
Correspondences between contextual resources and sentential information structure (Core-IS)
Ministerio de Economía y Competitividad, FFI2015-67991-P, 2016-2018
PI: Enric Vallduví
Associated GLiF Researchers: Laia Mayol, Julie Hunter, Chenjie Yuan
It is generally agreed that sentential information structure (IS) concerns context-sensitive aspects of meaning, but there is less agreement on how exactly context is to be brought into an analysis of the semantics of IS notions such as theme and rheme, contrast and background, focus, and topic. Core-IS adopts the radical view that there exist direct correlations between particular sentential IS categories and specific contextual resources. The overall aim of the project is to investigate these correlations building on recent models of dialogical context which provide richly structured representations inhabited by a limited set of contextual resources —ranked questions-under-discussion, salient sub-utterances, moves (basic discourse units with intentional and context-update effects), etc.— motivated independently on the basis of an array of interactive phenomena in natural language. Core-IS intends to map particular aspects of the connection between dialogical context and IS involving (a) the theme-rheme partition and questions under discussion, (b) focus/contrast and salient sub-utterances, and (c) topic and constituents in the move list. Light will be shed on issues such as the nature of question-answer congruence and the ontological connections between the categories of focus and rheme and between contrastive topic and the more general notion of (continuous/shifted) topic. On a more general theoretical plane, the results of Core-IS are expected to lend further support to the view that the dynamics of context and the interactive nature of linguistic communication are of the essence in linguistic interpretation.
Highest argument agreement (HAA)
Ministerio de Economía y Competitividad, FFI2014-56735-P, 2015-2017
PI: Alex Alsina
Associated GLiF Researchers: Eugenio Vigo
This project seeks to identify linguistic phenomena in which the verb agrees with whichever constituent ranks highest on some language-specific prominence scale. The important aspect of this type of phenomenon is that the agreement features of the verb are not associated with any particular grammatical function, but depend solely on which constituents are part of the sentence.
NOTE: For additional past projects, please see the individual pages of our group members or the aggregate GLiF data on the UPF Scientific Output Portal.