Prediction in a visual language: real-time sentence processing in American Sign Language across development

Files
Manuscript with Tables and Figs.pdf (2.61 MB)
Accepted manuscript
Date
2018-02-28
Authors
Lieberman, Amy M.
Borovsky, Arielle
Mayberry, Rachel
Version
Accepted manuscript
Embargo Date
2021-01-15
Citation
Amy M. Lieberman, Arielle Borovsky, Rachel Mayberry. 2018. "Prediction in a visual language: real-time sentence processing in American Sign Language across development." Language, Cognition and Neuroscience, Volume 33, Issue 4, pp. 387–401. https://doi.org/10.1080/23273798.2017.1411961
Abstract
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eye-tracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4–8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimising visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process; theoretical implications are discussed.