AMORE Mini-workshop on Linguistic Ambiguity and Deep Learning
When: Friday 25 March 15:00h - 17:00h
Zoom link: https://upf-edu.zoom.us/j/89661142204
Schedule (see talk abstracts below):
Resolving ambiguity of speaker intentions - Hannah Rohde
Given the pervasive ambiguity in speakers' productions, language comprehenders are tasked with interpreting what they hear and attempting to recover speakers' intended meanings. To identify the speaker's intention when a lexical item is ambiguous, comprehenders must rely on a combination of their bottom-up lexical knowledge and other context-driven information: for example, understanding "mouse" in "I left my mouse on the desk" requires using information about the possible senses of "mouse" (?, ?) as well as the context of use, where the latter incorporates real-world knowledge and context-driven expectations about what objects might plausibly appear in such situations (Aina 2022). The resolution of lexical/syntactic/referential ambiguity underlies the interpretation of another kind of ambiguity -- that of speaker goals and intentions more broadly, specifically whether the speaker intends their utterance to be interpreted transparently or rather with additional inferences that go beyond what has been explicitly (albeit potentially ambiguously!) stated.
For example, a speaker who utters the following sentence could give rise to an expectation that the upcoming word will describe a familiar desk object, or could induce an expectation for something more newsworthy: "On the desk there's a..." (?, ?). An expectation for the mention of a familiar object (?) would align with a model of language in which comprehenders expect speakers to use language "transparently" to report what is happening in a situation. An expectation for the mention of a newsworthy object (?) would suggest that comprehenders are aware that speakers filter a set of candidate meanings and select ones that are both true and sufficiently interesting to be worth mentioning; this in turn can permit comprehenders to draw additional inferences (e.g., that the situation has changed or is otherwise atypical -- that a mouse isn't typically present in this location).
This talk considers the interaction between our knowledge of the world (what situations are typical in the real world?) and our knowledge of speakers' linguistic choices (what situations would a speaker decide are interesting enough to merit talking about?). The goal is to test the extent to which language is used to convey real-world plausible content (transparent meaning) or pragmatically informative content (filtered meaning with additional inferences available). I present a set of psycholinguistic studies testing (i) whether speakers' production decisions show a preference for newsworthy content, (ii) whether comprehenders' expectations reflect this preference, and (iii) what additional inferences arise when content is not sufficiently newsworthy. The findings raise questions about the ways in which learners (humans, machines) learn about the world when it has been filtered through speakers' linguistic choices and when learners are faced with the ambiguity of the speaker's intention.
Foundations of ambiguity resolution: Assessing meaning extraction in deep learning models - Allyson Ettinger
The capacity of deep learning models to handle linguistic ambiguity hinges critically on their ability to extract and represent linguistic information more generally, at the levels of abstraction at which the relevant ambiguities exist. In this talk I will discuss recent work executing controlled tests of linguistic competence in deep learning models, with a focus on examining models' ability to extract and represent robust semantic information at the lexical, phrase, and sentence levels. This work assesses a number of aspects of linguistic processing in these models that have central relevance for the handling of linguistic ambiguity: sensitivity to information from surrounding context, capacity for compositional meaning representation, and robustness of information encoding and relevance filtering. I will discuss the implications of our findings from these tests with respect to the current state of NLP models, and for the potential of these models to handle different aspects of linguistic ambiguity.