The 57th Annual Meeting of the Association for Computational Linguistics (Proceedings of the Conference)
Association for Computational Linguistics
In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information.
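The probing approach the abstract describes can be illustrated with a diagnostic classifier: train a simple classifier on a model's hidden states and check whether a property (here, word identity, i.e. lexical information) is decodable from them. The sketch below uses synthetic hidden states (a fixed per-word vector plus context-dependent noise) and a nearest-centroid probe; the paper's actual LSTM, probe tasks, and classifier differ, so everything here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 word types, each with a fixed "lexical" vector.
# Each occurrence adds context-dependent noise, standing in for the way
# an LSTM hidden state mixes lexical and contextual information.
n_words, dim = 10, 32
lexical = rng.normal(size=(n_words, dim))

def occurrences(n_per_word):
    """Simulate hidden states for n_per_word occurrences of each word."""
    labels = np.repeat(np.arange(n_words), n_per_word)
    states = lexical[labels] + 0.5 * rng.normal(size=(labels.size, dim))
    return states, labels

X_train, y_train = occurrences(150)
X_test, y_test = occurrences(50)

# Nearest-centroid diagnostic probe: average the training states per word,
# then classify each test state by its closest centroid. If word identity
# is linearly recoverable from the states, accuracy is far above the
# 1/10 chance level.
centroids = np.stack([X_train[y_train == w].mean(axis=0)
                      for w in range(n_words)])
dists = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = dists.argmin(axis=1)
accuracy = (pred == y_test).mean()
```

High probe accuracy on real hidden states is evidence that the probed information is represented; the paper applies this logic to both lexical and contextual properties of words.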
Aina L, Gulordava K, Boleda G. Putting Words in Context: LSTM Language Models and Lexical Ambiguity. In: Nakov, P.; Palmer, A. (eds.). The 57th Annual Meeting of the Association for Computational Linguistics (Proceedings of the Conference). 1 ed. East Stroudsburg PA: Association for Computational Linguistics; 2019. p. 3342-3348.