Performance Assessment of ESL and EFL Students

Date
2000
Authors
Brown, James Dean
Hudson, Thom
Norris, John M.
Bonk, William
Contributor
Advisor
Brown, James D.
Department
University of Hawaii at Manoa. Department of Second Language Studies.
Abstract
Thirteen prototypical performance tasks were selected from over 100 based on their generic appropriateness for the target population and on posited difficulty levels (associated with plus or minus values for linguistic code command, cognitive operations, and communicative adaptation, as discussed in Norris, Brown, Hudson, & Yoshioka, 1998, after Skehan, 1996, 1998). These 13 tasks were used to create three test forms (with one anchor task common to all forms), two for use in an ESL setting at the University of Hawai'i and one for use in an EFL setting at Kanda University of International Studies in Japan. In addition, two sets of rating scales were created based on task-dependent and task-independent categories. For each individual task, the criteria for the task-dependent categories were created in consultation with an advanced language learner, a language teacher, and a non-ESL teacher, all of whom were well acquainted with the target population and the prototype tasks. These criteria for success were allowed to differ from task to task depending on the input of our consultants. The task-independent categories were created for each of three theoretically motivated components of task difficulty in terms of the adequacy of (linguistic) code command, cognitive operations, and communicative adaptation. A third rating scale was developed for examinees to rate their own performance in terms of their familiarity with the task, their performance on the task, and the difficulty of the task. Pilot data were gathered from ESL and EFL students at a wide range of proficiency levels. Their performances were scored by raters using the task-dependent and task-independent criteria. Analyses included descriptive statistics, reliability estimates (interrater, Cronbach alpha, etc.), correlational analysis, and implicational scale analysis. The results are interpreted and discussed in terms of: (a) the distributions of scores for the task-dependent and task-independent ratings, (b) test reliability and ways to improve the consistency of measurement, and (c) test validity and the relationship of our task-based test to theory.
Extent
41 pages
Related To
University of Hawai'i Working Papers in English as a Second Language 18(2)