Articles and book chapters

Putting Words in Context: LSTM Language Models and Lexical Ambiguity

  • Authors
  • Aina L, Gulordava K, Boleda G
  • UPF authors
  • BOLEDA TORRENT, GEMMA; AINA, LAURA; GULORDAVA, KRISTINA
  • Authors of the book
  • Nakov, P.; Palmer, A. (eds.)
  • Book title
  • The 57th Annual Meeting of the Association for Computational Linguistics (Proceedings of the Conference)
  • Publisher
  • Association for Computational Linguistics
  • Publication year
  • 2019
  • Pages
  • 3342-3348
  • ISBN
  • 978-1-950737-48-2
  • Abstract
  • In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information.
  • Complete citation
  • Aina L, Gulordava K, Boleda G. Putting Words in Context: LSTM Language Models and Lexical Ambiguity. In: Nakov, P.; Palmer, A. (eds.). The 57th Annual Meeting of the Association for Computational Linguistics (Proceedings of the Conference). 1 ed. East Stroudsburg PA: Association for Computational Linguistics; 2019. p. 3342-3348.