Exploring relationships between automated and human evaluations of L2 texts

Date
2018-10-01
Authors
Matthews, Joshua
Wijeyewardene, Ingrid
Journal Title
Language Learning & Technology
Publisher
University of Hawaii National Foreign Language Resource Center
Michigan State University Center for Language Education and Research
Abstract
Despite the current potential to use computers to automatically generate a wide range of text-based indices, many issues remain unresolved about how to apply these data in established language teaching and assessment contexts. One way to resolve these issues is to explore the degree to which automatically generated indices, reflective of key measures of text quality, align with parallel measures derived from locally relevant, human evaluations of texts. This study describes the automated evaluation of 104 English as a second language texts using the computational tool Coh-Metrix, which generated indices reflecting text cohesion, lexical characteristics, and syntactic complexity. The same texts were then independently evaluated by two experienced human assessors using an analytic scoring rubric. The interrelationships between the computer- and human-generated evaluations of the texts are presented, with a particular focus on the automatically generated indices most strongly linked to the human-generated measures. A synthesis of these findings is then used to discuss the role that such automated evaluation may play in the teaching and assessment of second language writing.
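The alignment the abstract describes is typically quantified with a correlation coefficient between each automated index and the human rubric scores. As a minimal sketch of that kind of analysis, the snippet below computes a Pearson correlation between a hypothetical automated index and invented human scores; the data, variable names, and index are illustrative assumptions, not values from the study.

```python
# Illustrative only: correlating a hypothetical automated text index
# (e.g., a Coh-Metrix-style score) with invented human rubric scores.
from math import sqrt

def pearson(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example data: one automated index value and one human
# analytic-rubric score per text.
auto_index = [0.52, 0.61, 0.47, 0.70, 0.58, 0.66]
human_score = [3.0, 3.5, 2.5, 4.0, 3.0, 4.0]

print(f"r = {pearson(auto_index, human_score):.2f}")
```

In a study like this one, such a coefficient would be computed for each automatically generated index against each human rubric dimension to identify the most strongly linked pairs.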
Keywords
Writing, Assessment/Testing, Language Teaching Methodology, Research Methods
Citation
Matthews, J., & Wijeyewardene, I. (2018). Exploring relationships between automated and human evaluations of L2 texts. Language Learning & Technology, 22(3), 143–158. https://doi.org/10125/44661