Volume 26 Number 2, June 2022 Special Issue: Automated Writing Evaluation
Item: Call for papers on systematic reviews and meta-analyses of CALL studies (University of Hawaii National Foreign Language Resource Center, 2022-06-10)

Item: Call for papers for a special issue on artificial intelligence for language learning (University of Hawaii National Foreign Language Resource Center, 2022-06-10)

Item: Exploring AWE-supported writing process: An activity theory perspective (University of Hawaii National Foreign Language Resource Center, 2022-06-10)
Chen, Zhenzhen; Chen, Weichao; Jia, Jiyou; Le, Huixiao
Despite the growing interest in investigating the pedagogical application of Automated Writing Evaluation (AWE) systems, studies on the process of AWE-supported writing are still scant. Adopting activity theory as its framework, this qualitative study examines how students incorporated AWE feedback into their writing in an English as a foreign language setting. We conducted semi-structured interviews with four Chinese students sampled from two classes and collected their AWE submissions and feedback for data analysis. Our findings demonstrate that AWE-supported writing is a tool-mediated, purposive, and collective activity shaped by individual and contextual factors. Students used various strategies to attain their learning goals and to address the tensions arising from their activity systems. This study contributes to research on the effectiveness of AWE by adopting a process-oriented approach informed by activity theory. Our findings also shed light on the complex process of second language writing mediated by new technological innovations. Pedagogical implications of our findings are discussed in the conclusion.

Item: Enhancing the use of evidence in argumentative writing through collaborative processing of content-based automated writing evaluation feedback (University of Hawaii National Foreign Language Resource Center, 2022-06-10)
Shi, Zhan; Liu, Fengkai; Lai, Chun; Jin, Tan
Automated Writing Evaluation (AWE) systems have been found to enhance the accuracy, readability, and cohesion of writing responses (Stevenson & Phakiti, 2019). Previous research indicates that individual learners may have difficulty utilizing content-based AWE feedback and that collaborative processing of feedback might help learners cope with this challenge (Elabdali, 2021; Wang et al., 2020). However, how learners might collaboratively process content-based AWE feedback remains an open question. This study fills this gap by following a group of five Chinese undergraduate EFL students as they collaboratively processed content-based AWE feedback on the use of evidence in L2 argumentative writing across five writing tasks over a semester. Student collaboration was examined through recordings of collaborative discussion sessions, the students’ written drafts and revisions, and interview responses from individual learners. The findings revealed that the collaborative processing of AWE feedback unfolded in three phases: a trustful phase, a skeptical phase, and a critical phase. Although content-based AWE feedback could facilitate the development of some aspects of evidence use, collaborative discourse was instrumental in developing learners’ understanding and skills for aspects of evidence use that AWE feedback failed to address. The findings suggest that collaborative discourse around content-based AWE feedback can be an important pedagogical move in realizing the potential of AWE feedback for writing development.

Item: L2 learners’ engagement with automated feedback: An eye-tracking study (University of Hawaii National Foreign Language Resource Center, 2022-06-10)
Liu, Sha; Yu, Guoxing
This study used eye-tracking, in combination with stimulated recalls and reflective journals, to investigate L2 learners’ engagement with automated feedback and the impact of feedback explicitness and accuracy on their engagement. Twenty-four Chinese EFL learners revised their writing through Write & Improve with Cambridge, a new automated writing evaluation system that generates automated feedback at three different levels of explicitness. Data from multiple perspectives were collected and examined, including participants’ eye movements, their stimulated recalls, and their responses/revisions to automated feedback on their multiple drafts. The results revealed that participants spent significantly more time and expended more cognitive effort in processing indirect than direct feedback. However, a lower percentage of indirect feedback was taken up, and the revisions participants made based on such feedback were less successful. These findings suggest that feedback explicitness is a determining factor in learners’ engagement with automated feedback and point to the need for timely, supplemental teacher or peer scaffolding in addition to automated feedback. The results also suggest that AWE tools need to be updated continually to improve their feedback accuracy, as error-prone feedback may cause participants to make inaccurate amendments to their writing. In addition, teachers should help learners confirm the accuracy of AWE feedback.

Item: Genre-based AWE system for engineering graduate writing: Development and evaluation (University of Hawaii National Foreign Language Resource Center, 2022-06-10)
Feng, Hui-Hsien; Chukharev-Hudilainen, Evgeny
Automated writing evaluation (AWE) systems have been introduced to ESL/EFL classes in the hope of reducing teachers’ workloads and improving students’ writing by providing instant holistic scores and corrective feedback (Jiang & Yu, 2020; Link et al., 2014; Ranalli & Yamashita, 2019; Warschauer & Ware, 2006). For genre-specific writing, however, general AWE feedback may be insufficient, because achieving communicative purposes requires feedback that goes beyond grammar and mechanics. Very few genre-specific AWE systems based on rhetorical move analysis have been developed. The present study therefore reports on the development and evaluation of a genre-based AWE system to facilitate Taiwanese engineering graduate students’ writing of research abstracts. The system provides automated feedback on two linguistic features associated with moves in abstracts: lexical bundles and grammatical categories of verbs (i.e., tense, aspect, and voice). The feedback was designed to be co-constructed between learners and computers in order to promote interaction. The effectiveness of the AWE system was evaluated following Chapelle’s (2001) computer-assisted language learning evaluation framework. The findings revealed positive effects: with appropriate guidance, the AWE system drew participants’ attention to, and enhanced their use of, these two linguistic features to achieve the communicative purposes of rhetorical moves in their abstracts.

Item: Review of Video enhanced observation for language teaching: Reflection and professional development (University of Hawaii National Foreign Language Resource Center, 2022-06-10)
Irgin, Pelin; Ruslan Suvorov

Item: Review of Researching and teaching second language writing in the digital age (University of Hawaii National Foreign Language Resource Center, 2022-06-10)
Cherub, Aubri; Kessler, Matt; Ruslan Suvorov

Item: A study of pre-service EFL teachers’ acceptance of online teaching and the influencing factors (University of Hawaii National Foreign Language Resource Center, 2022-06-10)
Sun, Weifeng; Zou, Bin; Mimi Li
The field of language education is expected to see an increased need for teachers to accept online teaching. Based on the Technology Acceptance Model, this study examined pre-service EFL teachers’ acceptance of online teaching and the factors influencing it. The participants were TESOL majors at three universities in China. The data were collected from a questionnaire survey with 204 participants and semi-structured individual interviews with 12 participants. The study reveals that pre-service English teachers generally accepted online teaching after completing one semester of purely online learning during the COVID-19 pandemic. The results also suggest that participants’ enjoyable experiences in using online technologies, the perceived usefulness of online teaching, social influences, and technological pedagogical content knowledge influenced their acceptance of online teaching.

Item: SMART Teacher Lab: A learning platform for the professional development of EFL teachers (University of Hawaii National Foreign Language Resource Center, 2022-06-10)
Kim, Heyoung; Lee, Jang Ho; Mimi Li
This article introduces the structure and content of an online learning platform called SMART Teacher Lab (STL, henceforth), implemented at the authors’ university since 2014. STL is an online platform specifically built for the professional development of pre- and in-service English as a Foreign Language (EFL) teachers, containing and accumulating various types of hands-on, field-specific educational resources. These resources include information on the preparation of teaching practicums, video clips of teaching demonstrations, student and teacher interviews, lecture materials on recent educational approaches and technology, and more. STL was originally designed as an open-access, mobile-based platform grounded in the previous literature on non-formal learning and a development-centered view of bottom-up teacher education. By providing examples of resources related to English education majors and highlighting the strengths of STL, this article aims to emphasize the importance of such a platform for the successful and sustainable professional development of EFL teachers. Suggestions for EFL teacher trainers in other pedagogical contexts are also included.