The integration of auditory and textual input in vocabulary learning from subtitled viewing: An eye-tracking study
Publisher
University of Hawaii National Foreign Language Resource Center
Center for Language & Technology
Volume
29
Number/Issue
3
Starting Page
70
Ending Page
91
Abstract
Numerous studies have documented the benefits of watching audio-visual materials with on-screen text for L2 vocabulary learning (Montero Perez, 2022). The provision of both auditory and textual input allows learners to link auditory and written forms (or L1 meanings) of unknown words during viewing, which may facilitate vocabulary learning. However, little is known about the dynamics of text-audio synchrony in subtitled viewing and how the processing of written words in relation to the audio may lead to vocabulary learning. Eighty-one intermediate-to-advanced Chinese learners of English watched an English documentary with one of three types of on-screen text (i.e., captions, L1 subtitles, and bilingual subtitles) while their eye movements were monitored. Participants’ awareness of 17 unknown words and their vocabulary learning gains were assessed via stimulated recalls and three vocabulary tests. Results revealed that captions facilitated text-audio synchronisation, whereas L1 subtitles generally led to reading ahead and skipping. Bilingual subtitles enabled synchronisation of L1 translations with the L2 audio but often resulted in skipping of L2 forms. Most text-audio processing behaviours led to moderate predicted probabilities of vocabulary learning and of participants’ reported awareness, with no significant within-group difference except for the processing of L2 unknown words in bilingual subtitles.
Citation
Wang, A. (2025). The integration of auditory and textual input in incidental vocabulary learning from subtitled viewing: An eye-tracking study. Language Learning & Technology, 29(3), 70–91. https://doi.org/10.64152/10125/73648
Extent
22
Rights
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
