Natural Language Processing for Sign Languages
Sign Language Resources, Recognition, Processing and Translation

Information and communication technologies have greatly improved access to information worldwide: the ubiquitous presence of the Internet, and the possibility of accessing all kinds of multi-modal, multilingual content on the Web through interconnected applications, has transformed our societies. The field of Natural Language Processing (NLP) has advanced considerably in recent years thanks to a paradigm shift brought about by the availability of massive amounts of text, deep learning models, and powerful computational resources. NLP technology is now available for many domains and languages; translation between spoken languages, in particular, has evolved considerably. However, where Sign Languages (SLs) are concerned, it is fair to say that NLP is still in its infancy. This situation has many causes, among them the specific characteristics of SLs and the limited availability of resources for most SLs.

Sign Languages are produced in the visual-spatial modality (rather than the oral-auditory modality of spoken languages), using manual articulators (the hands) and non-manual articulators such as facial expression, eye gaze, and the physical space on and around the signer. According to the Cambridge Encyclopedia of Language, “Sign Languages have a structure of comparable complexity to spoken and written language and perform a similar range of functions. There are rules governing the way signs are formed, and how they are sequenced....”. We therefore argue that Sign Language Processing tools should be able to uncover these structures from a multi-modal stream of information in order to support language processing applications over SL content, such as translation, question answering, information extraction, and information retrieval. Because SLs also exhibit simultaneity in production, meaning that two signs may be produced at the same time (for example, one on each hand), SL tools must also handle non-linearity in the input stream.

The processing of SLs has long been the concern of computer vision research: tasks such as sign language detection, sign language identification, and sign language segmentation have all been addressed within a computer vision paradigm. However, given that SLs are natural languages, we firmly believe that a multi-disciplinary approach, combining linguistics and computational linguistics with computer vision, should be adopted. Natural language processing of SLs aims to analyse sequences of signs in order to, for example, assign lexical categories to signs, disambiguate them, or establish dependency relations between them, producing linguistically rich representations that support tasks such as identifying specific types of information in the SL stream of content (e.g. who did what to whom, when, and how). In general, non-visual representations of SLs have been adopted to support these processes: symbolic, rather than video-based, representations are used to further analyse the output of computer vision components. Such representations could be produced automatically if high-quality datasets were available for training NLP approaches.
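As a toy illustration of what such a symbolic, gloss-based representation might look like, the sketch below models a glossed SL utterance in which each sign carries a gloss label, an articulator, and optional non-manual markers, and a minimal lexicon-lookup tagger assigns lexical categories. All names, the mini-lexicon, and the example utterance are hypothetical, chosen only to illustrate the idea of simultaneity and symbolic annotation; they do not reflect any actual project dataset or tool.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Sign:
    """One sign in a glossed utterance (hypothetical schema)."""
    gloss: str                       # symbolic label, e.g. "BOOK"
    pos: Optional[str] = None        # lexical category, filled in by a tagger
    hand: str = "right"              # articulator: "right", "left", or "both"
    nonmanual: List[str] = field(default_factory=list)  # e.g. ["brow-raise"]

def tag_glosses(signs: List[Sign], lexicon: dict) -> List[Sign]:
    """Toy PoS tagger: look each gloss up in a small lexicon,
    falling back to "X" for unknown glosses."""
    for s in signs:
        s.pos = lexicon.get(s.gloss, "X")
    return signs

# A two-sign utterance; the second sign co-occurs with a non-manual marker,
# illustrating the simultaneity that SL representations must capture.
utterance = [
    Sign("BOOK", hand="left"),
    Sign("GIVE", hand="right", nonmanual=["brow-raise"]),
]
lexicon = {"BOOK": "NOUN", "GIVE": "VERB"}
tagged = tag_glosses(utterance, lexicon)
print([(s.gloss, s.pos) for s in tagged])  # [('BOOK', 'NOUN'), ('GIVE', 'VERB')]
```

A real tagger would of course be learned from annotated data rather than read from a hand-written lexicon; the point here is only that once video is mapped to symbolic glosses, standard NLP machinery (tagging, parsing, disambiguation) becomes applicable.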

We are currently working on the EU SignON project, where we investigate the problem of translating between spoken languages and SLs. One of the main problems we face is the scarce availability of language resources. Building on our work on Sign Language Translation in the SignON project, we propose the following objectives:

  • (O1) Implement data collection, linguistic annotation, and data augmentation mechanisms to increase the availability of SL resources (with special focus on Spanish Sign Language (LSE) and Catalan Sign Language (LSC)).
  • (O2) Investigate current architectures for Sign Language Recognition (SLR) and adapt them to LSE and LSC datasets.
  • (O3) Develop Natural Language Processing tools (e.g. PoS tagging, parsing, sense disambiguation) for LSE and LSC.
  • (O4) Adopt hybrid approaches to Sign Language Translation (LSE - Spanish, LSC - Catalan, etc.), combining Machine Learning and linguistic information.
  • (O5) Implement technological demonstrators, for example Information Extraction for SLs.

The project is organised into two work packages:

- Data augmentation strategies for Sign Language gloss data (September 2023 - August 2025)

- Architectures for Sign Languages (February 2024 - March 2025)

Principal researchers

Horacio Saggion
Josep Quer


Santiago Egea Gómez
Euan McGill
Luis Chiruzzo
Universidad de Vigo: Maruxa Cabeza, José Luis Alba Castro, Jose Maria Garcia-Miguel Gallego

The project will be supported by the PhD Fellowship program of the Department of Information and Communication Technologies at UPF. It builds on results from the EU SignON project and will use resources granted under an Oracle Research Grant to Horacio Saggion.