Performance Assessment of ESL and EFL Students

dc.contributor.advisor Brown, James D.
dc.contributor.author Brown, James Dean
dc.contributor.author Hudson, Thom
dc.contributor.author Norris, John M.
dc.contributor.author Bonk, William
dc.contributor.department University of Hawaii at Manoa. Department of Second Language Studies.
dc.date.accessioned 2016-05-09T22:06:50Z
dc.date.available 2016-05-09T22:06:50Z
dc.date.issued 2000
dc.description.abstract Thirteen prototypical performance tasks were selected from over 100 based on their generic appropriateness for the target population and on posited difficulty levels (associated with plus or minus values for linguistic code command, cognitive operations, and communicative adaptation, as discussed in Norris, Brown, Hudson, & Yoshioka, 1998, after Skehan, 1996, 1998). These 13 tasks were used to create three test forms (with one anchor task common to all forms), two for use in an ESL setting at the University of Hawai'i, and one for use in an EFL setting at Kanda University of International Studies in Japan. In addition, two sets of rating scales were created based on task-dependent and task-independent categories. For each individual task, the criteria for the task-dependent categories were created in consultation with an advanced language learner, a language teacher, and a non-ESL teacher, all of whom were well-acquainted with the target population and the prototype tasks. These criteria for success were allowed to differ from task to task depending on the input of our consultants. The task-independent categories were created for each of three theoretically motivated components of task difficulty in terms of the adequacy of (linguistic) code command, cognitive operations, and communicative adaptation. A third rating scale was developed for examinees to rate their own performance in terms of their familiarity with the task, their performance on the task, and the difficulty of the task. Pilot data were gathered from ESL and EFL students at a wide range of proficiency levels. Their performances were scored by raters using the task-dependent and task-independent criteria. Analyses included descriptive statistics, reliability estimates (interrater, Cronbach alpha, etc.), correlational analysis, and implicational scale analysis.
The results are interpreted and discussed in terms of: (a) the distributions of scores for the task-dependent and task-independent ratings, (b) test reliability and ways to improve the consistency of measurement, and (c) test validity and the relationship of our task-based test to theory.
dc.format.digitalorigin reformatted digital
dc.format.extent 41 pages
dc.identifier.uri http://hdl.handle.net/10125/40808
dc.language eng
dc.relation.ispartof University of Hawai'i Working Papers in English as a Second Language 18(2)
dc.title Performance Assessment of ESL and EFL Students
dc.type Working Paper
dc.type.dcmi Text
Files
Original bundle (1 file)
Name: Brown et al. (2000)_WP18(2).pdf
Size: 14.16 MB
Format: Adobe Portable Document Format