Strategic decoding of sociopragmatic assessment tasks – An exploratory think-aloud validation study
|Title:||Strategic decoding of sociopragmatic assessment tasks – An exploratory think-aloud validation study|
|Advisor:||Brown, James D.|
|Abstract:||Sociopragmatics has proven to be a challenging domain in language testing because pragmatic expectations and assessments are highly culture- and context-specific (Liu, 2007), making it difficult to avoid construct-irrelevant variance and to draw valid inferences from overall test scores. In an attempt to answer Roever’s (2011) call for a “broadening of the evidence base that allows extrapolation inferences to a target domain of social language use in academic and non-institutional contexts” (p. 3), this study investigated the cognitive processes of university-level German learners of English while they solved receptive sociopragmatic assessment tasks. Two groups of university-level EFL students with different amounts of exposure to the target language environment (n = 7 each) answered seven multiple-choice discourse completion tasks taken from the American English Sociopragmatics Comprehension Test (AESCT), an intercultural sociopragmatics comprehension test focusing on U.S.-American English and the academic context in the United States. Verbal report methodology was used to access respondents’ cognitive processes while they worked on the tasks. By means of a grounded theory analysis, the author systematically investigated the respondents’ strategic processing and compiled a taxonomy of 24 strategies in three categories: recall, evaluation, and other. A contrastive between-group investigation showed that respondents with higher exposure to the target language context demonstrated a greater ability to contextualize, whereas candidates without such exposure relied more strongly on the text and on evaluation strategies to compensate for their lack of (experiential) knowledge.
Although a final analysis of the substantive aspect of construct validity (Messick, 1989, 1996) revealed that patterns in the data supported the trends hypothesized in the test construct, it also exposed several items that underrepresent the construct. The issue of underrepresentation is discussed in further detail, given that the results clearly reveal limitations of the multiple-choice test format for the assessment of (socio)pragmatic comprehension.|
|Appears in Collections:||SLS Papers|