Evaluating Rating Variations in Holistic Writing Placement Assessment
Date
2007
Authors
Abstract
This paper reports on an investigation of a writing assessment, the English Language Institute (ELI) writing placement test for international graduate students. The purpose of the study was four-fold: (a) to determine how many placement essays need further adjudication (i.e., additional readers); (b) to investigate the use of rating agreement ratios for estimating reliability; (c) to explore the potential of collapsing the current seven-point rating scale into a three-point rating scale; and (d) to determine how borderline placement ratings contribute to placement decisions. The findings showed that 35% of essays needed further adjudication. The results of this study also indicated that there are no apparent losses in collapsing the seven-point rating scale into a three-point scale. The findings further suggested that rating agreement ratios are superior to the Pearson product-moment correlation coefficient for illuminating similarity of the ratings themselves, as opposed to similarity of rating patterns, which supports the conclusions of other researchers working with language performance assessments (e.g., Halleck, 1995, 1996; Kenyon & Tschirner, 2000; Norris, 2001; Thomson, 1995, 1996). The implications of these findings are discussed in terms of recommendations to the ELI for future writing placement practice.
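The distinction drawn in the abstract between agreement ratios and correlation coefficients can be illustrated with a small sketch. The Python snippet below is not taken from the paper; the ratings and the three-band collapsing rule are invented for demonstration only. It shows how two raters can have a perfect Pearson correlation (matching rating patterns) while never agreeing exactly, and how exact agreement can rise when a seven-point scale is collapsed to three bands.

# Illustrative sketch (not from the paper): exact-agreement ratios vs. Pearson's r.
# All ratings and the collapsing rule below are hypothetical.

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def agreement_ratio(x, y):
    """Proportion of essays on which the two raters gave the same rating."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

def collapse(rating):
    """Map a 7-point rating onto a hypothetical 3-point band (1-2, 3-5, 6-7)."""
    return 1 if rating <= 2 else (2 if rating <= 5 else 3)

# Rater B is consistently one point higher than rater A: the rating patterns
# match perfectly (r = 1.0) even though the raters never agree exactly.
rater_a = [2, 3, 4, 5, 6, 3, 4, 5]
rater_b = [3, 4, 5, 6, 7, 4, 5, 6]

print(pearson_r(rater_a, rater_b))        # 1.0
print(agreement_ratio(rater_a, rater_b))  # 0.0
print(agreement_ratio([collapse(r) for r in rater_a],
                      [collapse(r) for r in rater_b]))  # 0.625: agreement rises on the collapsed scale

In this toy case the correlation reports perfect similarity of pattern while the agreement ratio reports no similarity of the actual ratings, which is the contrast the abstract describes.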
Keywords
placement test, writing assessment, rating variation, academic writing
Extent
60 pages