Validating task-based assessment of L2 pragmatics in interaction using mixed methods

Files:
Youn_Soo Jung_r.pdf — version for non-UH users (copying/printing not permitted), 2.54 MB, Adobe PDF
Youn_Soo Jung_uh.pdf — version for UH users, 2.53 MB, Adobe PDF

Item Summary

Title: Validating task-based assessment of L2 pragmatics in interaction using mixed methods
Authors: Youn, Soo Jung
Keywords: English for Academic Purposes (EAP)
Issue Date: Aug 2013
Publisher: [Honolulu] : [University of Hawaii at Manoa], [August 2013]
Abstract: This study investigates the validity of task-based assessment of L2 pragmatics in interaction in an English for Academic Purposes (EAP) setting, with the goals of informing classroom assessment and providing meaningful score interpretations for stakeholders. Kane's (2006) argument-based approach was employed as the validity framework. In view of the complexity of assessing L2 pragmatics in interaction and of interpreting scores from observed L2 pragmatic performances, score interpretations were built on interpretive arguments comprising four inferences, which guided the types of data analyzed and the research steps taken in this study. Following a sequential mixed methods approach (Greene, 2007; Tashakkori & Teddlie, 2003), qualitative and quantitative evidence was collected to support the inferences and strengthen the validity argument.
Based on a large-scale needs analysis of EAP learners' L2 pragmatic learning needs, two open role-play tasks were developed that are meaningful and relevant to stakeholders in an EAP context. Unlike the closed role-play format, the open format allowed examinees to negotiate and interact naturally with interlocutors. To provide meaningful score interpretations to stakeholders and to help raters evaluate examinees' interaction-involved L2 pragmatic performance accurately, conversation analysis (CA)-informed analytic rating criteria with detailed descriptors were developed from close analyses of examinees' L2 pragmatic performances on the role-play tasks. One hundred two adult ESL examinees completed the role-play tasks and monologic tasks, and four rater groups, comprising 12 raters in total, scored each examinee's pragmatic performance. A many-facet Rasch analysis using FACETS (Linacre, 2006) indicated that the role-play tasks displayed different levels of difficulty and reliably differentiated among the 102 examinees' varying pragmatic abilities. The raters showed internal consistency despite differing degrees of severity. Stable fit statistics and distinct difficulties were reported for each of the interaction-sensitive rating criteria, indicating that they contribute to measuring L2 pragmatic competence. In particular, the two rating categories for interactional competence were distinct in their difficulty levels, which supports the CA findings.
Based on the qualitative and quantitative analyses, all of the validity evidence was woven into the validity argument for task-based assessment of L2 pragmatics in interaction, focusing on the four types of inference: domain description, evaluation, generalization, and extrapolation. The study exemplifies how the construct of L2 pragmatics in interaction can be operationalized through qualitative scrutiny of the target performance supported by quantitative findings, contributing to the advancement of measuring L2 pragmatics. Lastly, this study provides additional grounds for the recent development of the validity argument approach in the field of language assessment at large.
Description: Ph.D. dissertation, University of Hawaii at Manoa, 2013.
Includes bibliographical references.
Appears in Collections: Ph.D. - Second Language Studies

Items in ScholarSpace are protected by copyright, with all rights reserved, unless otherwise indicated.