Restructuring multimodal corrective feedback through Augmented Reality (AR)-enabled videoconferencing in L2 pronunciation teaching

dc.contributor.author Wen, Yiran
dc.contributor.author Li, Jian
dc.contributor.author Xu, Hongkang
dc.contributor.author Hu, Hanwen
dc.date.accessioned 2023-09-29T22:41:37Z
dc.date.available 2023-09-29T22:41:37Z
dc.date.issued 2023-10-02
dc.description.abstract The problem of cognitive overload is particularly pertinent in multimedia L2 classroom corrective feedback (CF), which involves rich communicative tools to help the class notice the mismatch between the target input and learners’ pronunciation. Based on multimedia design principles, this study developed a new multimodal CF model through augmented reality (AR)-enabled videoconferencing to eliminate extraneous cognitive load and guide learners’ attention to the essential material. Using a quasi-experimental design, this study examined the effectiveness of this new CF model in improving Chinese L2 students’ segmental production and identification of the targeted English consonants (dark /ɫ/, /ð/, and /θ/), as well as their attitudes towards this application. Results indicated that the online multimodal CF environment equipped with AR annotation and filters played a significant role in improving the participants’ production of the target segments. However, this advantage was not found in the auditory identification tests compared to the offline multimedia CF class. In addition, the learners reported that the new CF model helped direct their attention to the articulatory gestures of the student being corrected and enhanced class efficiency. Implications for computer-assisted pronunciation training and the construction of online/offline multimedia learning environments are also discussed.
dc.identifier.citation Wen, Y., Li, J., Xu, H., & Hu, H. (2023). Restructuring multimodal corrective feedback through Augmented Reality (AR)-enabled videoconferencing in L2 pronunciation teaching. Language Learning & Technology, 27(3), 83–107. https://hdl.handle.net/10125/73533
dc.identifier.issn 1094-3501
dc.identifier.uri https://hdl.handle.net/10125/73533
dc.publisher University of Hawaii National Foreign Language Resource Center
dc.publisher Center for Language & Technology
dc.rights Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Corrective Feedback
dc.subject Multimedia Learning
dc.subject Computer-assisted Pronunciation Training (CAPT)
dc.subject AR-enabled Videoconferencing
dc.title Restructuring multimodal corrective feedback through Augmented Reality (AR)-enabled videoconferencing in L2 pronunciation teaching
dc.type Article
prism.endingpage 107
prism.number 3
prism.publicationname Language Learning & Technology
prism.startingpage 83
prism.volume 27
Files
Original bundle
Name: 27_01_10125-73533.pdf
Size: 1.3 MB
Format: Adobe Portable Document Format