Performance Assessment of ESL and EFL Students

Authors: Brown, James Dean; Norris, John M.
Contributors: Brown, James D. (advisor); University of Hawaii at Manoa, Department of Second Language Studies (department)
Abstract: Thirteen prototypical performance tasks were selected from over 100 based on their generic appropriateness for the target population and on posited difficulty levels (associated with plus or minus values for linguistic code command, cognitive operations, and communicative adaptation, as discussed in Norris, Brown, Hudson, & Yoshioka, 1998, after Skehan, 1996, 1998). These 13 tasks were used to create three test forms (with one anchor task common to all forms): two for use in an ESL setting at the University of Hawai'i, and one for use in an EFL setting at Kanda University of International Studies in Japan. In addition, two sets of rating scales were created based on task-dependent and task-independent categories. For each individual task, the criteria for the task-dependent categories were created in consultation with an advanced language learner, a language teacher, and a non-ESL teacher, all of whom were well acquainted with the target population and the prototype tasks. These criteria for success were allowed to differ from task to task depending on the input of our consultants. The task-independent categories were created for each of three theoretically motivated components of task difficulty in terms of the adequacy of (linguistic) code command, cognitive operations, and communicative adaptation. A third rating scale was developed for examinees to rate their own performance in terms of their familiarity with the task, their performance on the task, and the difficulty of the task. Pilot data were gathered from ESL and EFL students at a wide range of proficiency levels. Their performances were scored by raters using the task-dependent and task-independent criteria. Analyses included descriptive statistics, reliability estimates (interrater, Cronbach's alpha, etc.), correlational analysis, and implicational scale analysis.
The results are interpreted and discussed in terms of: (a) the distributions of scores for the task-dependent and task-independent ratings, (b) test reliability and ways to improve the consistency of measurement, and (c) test validity and the relationship of our task-based test to theory.