Authors: Matthews, Joshua; Wijeyewardene, Ingrid
Date Accessioned: 2018-10-03
Date Available: 2018-10-03
Date Issued: 2018-10-01
Citation: Matthews, J., & Wijeyewardene, I. (2018). Exploring relationships between automated and human evaluations of L2 texts. Language Learning & Technology, 22(3), 143–158. https://doi.org/10125/44661
ISSN: 1094-3501
URI: http://hdl.handle.net/10125/44661
Title: Exploring relationships between automated and human evaluations of L2 texts
Type: Article
Subjects: Writing; Assessment/Testing; Language Teaching Methodology; Research Methods

Abstract: Despite the current potential to use computers to automatically generate a wide range of text-based indices, many issues remain unresolved about how to apply these data in established language teaching and assessment contexts. One way to resolve these issues is to explore the degree to which automatically generated indices, which reflect key measures of text quality, align with parallel measures derived from locally relevant, human evaluations of texts. This study describes the automated evaluation of 104 English as a second language texts using the computational tool Coh-Metrix, which generated indices reflecting text cohesion, lexical characteristics, and syntactic complexity. The same texts were then independently evaluated by two experienced human assessors using an analytic scoring rubric. The interrelationships between the computer- and human-generated evaluations of the texts are presented in this paper, with a particular focus on the automatically generated indices most strongly linked to the human-generated measures. A synthesis of these findings is then used to discuss the role that such automated evaluation may have in the teaching and assessment of second language writing.