Forgetting how to see language
Human infants are born with the ability to learn any language within a remarkably short time and with still limited cognitive capacities. To achieve this, infants already possess at birth some basic abilities that change significantly over the first months of life. These early capacities include discriminating any sound contrast existing in any language of the world, even contrasts their parents can neither produce nor perceive: infants born in monolingual Japanese-speaking homes have no difficulty discriminating the minimal pair road-load. These abilities change over the first year of life, as the ability to perceive differences between non-native phonemes diminishes and the ability to perceive native ones increases, a phenomenon known as "perceptual narrowing". Perceptual narrowing has been reported in other linguistic and non-linguistic domains and sensory modalities, such as the ability to discriminate between faces of unfamiliar ethnic origins or the ability to visually discriminate spoken languages. Up to 6 months of age, infants have no difficulty noticing the differences between silent videos of individuals speaking French and English, while at 12 months they can no longer visually discriminate between these languages.
Such discriminatory abilities are not restricted to oral languages; they extend to genuinely visual languages such as sign languages. There is a substantial body of literature showing perceptual narrowing for the sign-language analogue of phoneme discrimination: discrimination of handshape, movement, and so on. There is also evidence that the capacity to discriminate between sign language and pantomime changes during the first months of life in hearing infants never exposed to sign language. Recently, an international team of researchers led by one of us (co-PI Sebastián) has shown that 8-month-old hearing infants without any experience of a sign language can discriminate between British Sign Language (BSL) and Japanese Sign Language (JSL). This discrimination starts to decline at 12 months of age and is totally absent in hearing adults; only adults with extensive experience of American Sign Language (ASL) can discriminate between these languages (BSL and JSL are typologically unrelated to each other and to ASL). In a control experiment we showed that when the videos were blurred to 50%, 8-month-old hearing infants could no longer discriminate between the languages. In a follow-up study we passively exposed a new group of participants to the same stimuli and found significant changes in gaze patterns across age, paralleling the discriminatory abilities present at each age (left: the three areas of interest; next page: developmental changes in gaze behaviour).
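As an illustration of how such gaze-pattern changes can be quantified, the following minimal sketch computes the proportion of looking time falling in each area of interest per age group. It is purely illustrative: the file name, column names and AOI boundaries are assumptions, not the actual data pipeline used in the study.

import pandas as pd

# Hypothetical fixation table: one row per fixation, with x/y position (pixels),
# duration (ms), participant age group, and stimulus language.
fixations = pd.read_csv("fixations.csv")  # placeholder file name

# Example rectangular areas of interest (x_min, x_max, y_min, y_max) in pixels.
AOIS = {
    "face":  (300, 500, 50, 250),
    "hands": (200, 600, 250, 550),
    "body":  (250, 550, 150, 600),
}

def label_aoi(row):
    """Assign each fixation to the first AOI whose bounds contain it."""
    for name, (x0, x1, y0, y1) in AOIS.items():
        if x0 <= row["x"] <= x1 and y0 <= row["y"] <= y1:
            return name
    return "outside"

fixations["aoi"] = fixations.apply(label_aoi, axis=1)

# Proportion of total looking time per AOI, split by age group,
# to compare developmental changes in gaze behaviour.
looking_time = fixations.groupby(["age_group", "aoi"])["duration_ms"].sum()
proportions = looking_time / looking_time.groupby("age_group").transform("sum")
print(proportions.round(3))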
In the present research we will use techniques from image analysis, computer vision and artificial intelligence, combining model-based and data-driven methods for visual data analysis and understanding. In particular, we will analyze the videos and the corresponding gaze data to uncover the nature of the developmental changes taking place in the first months of life that underlie the developmental pattern we have observed.
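As a hedged illustration of what a model-based analysis might look like, the sketch below uses dense optical flow (OpenCV's Farnebäck method) to summarize the amount of motion in a sign-language video, the kind of low-level movement signal that could be compared across languages. The file name and the choice of summary statistic are assumptions for illustration only.

import cv2
import numpy as np

# Illustrative model-based analysis: dense optical flow as a low-level
# descriptor of movement in a (hypothetical) sign-language video clip.
cap = cv2.VideoCapture("bsl_clip.mp4")  # placeholder file name

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

magnitudes = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farnebäck dense optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0,
    )
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    magnitudes.append(mag.mean())  # average motion energy per frame
    prev_gray = gray

cap.release()
print("mean motion energy:", np.mean(magnitudes))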
We will identify the key visual features and discover low-level patterns in the visual data that infants in the early months of life are able to capture and use to discriminate between different sign languages. To that end, we may incorporate spatio-temporal information. The gaze and video data gathered in the studies described above will be used to train and evaluate the data-driven (deep learning) approaches.
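A data-driven approach of the kind we have in mind could look roughly like the sketch below: a small spatio-temporal (3D) convolutional network trained to classify short clips as BSL vs. JSL, whose learned features could then be inspected. This is a minimal PyTorch sketch under assumed input sizes, not the project's actual architecture.

import torch
import torch.nn as nn

class SignClipNet(nn.Module):
    """Minimal 3D-CNN sketch for two-way sign language classification
    (e.g. BSL vs. JSL). Input: a batch of clips shaped
    (batch, channels=3, frames=16, height=112, width=112) -- assumed sizes."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # spatio-temporal filters
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pooling over time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        x = self.features(clips)
        return self.classifier(x.flatten(1))

# Quick shape check with a random batch of clips.
model = SignClipNet()
dummy = torch.randn(4, 3, 16, 112, 112)
logits = model(dummy)
print(logits.shape)  # torch.Size([4, 2])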
The project will be supported by the PhD Fellowship program at the Department of Information and Communication Technologies at UPF.