|Title:||Evaluating an instrument for assessing connected speech performance using facets analysis|
|Authors:||Seong, Yoon Ah|
|Contributors:||Brown, James D. (advisor)|
|Abstract:||In the area of English pronunciation teaching, connected speech is increasingly being introduced and covered in pronunciation textbooks (e.g., Hagen, 2000; Weinstein, 2001). Connected speech is a phenomenon in spoken language that collectively includes phonological processes such as reduction, elision, intrusion, assimilation, and contraction. Several research studies have shown that connected speech instruction can help learners more easily comprehend the rapid speech used by native speakers (e.g., Brown & Hilferty, 2006; Celce-Murcia, Brinton, & Goodwin, 1996; Matsuzawa, 2006). Moreover, use of connected speech features can make learners sound more comprehensible and natural, with a less marked foreign accent (Brown & Kondo-Brown, 2006a; Dauer & Browne, 1992). However, compared to the growing connected speech literature on what forms to teach and how, there is very little information on how to assess connected speech, especially in terms of production. Therefore, the purpose of this study was to develop and evaluate a new test of connected speech performance within the context of an English study abroad program. The multi-faceted Rasch software FACETS was used to examine the effectiveness of the test instrument. The analyses used data from two administrations, a pretest and a posttest, and examined the relationships between examinee scores and various aspects of the testing situation (i.e., facets). The four facets investigated in this study were: (a) the examinees, (b) the items, (c) the raters, and (d) the rater L1 background. The results indicated that assessing the production of certain connected speech forms using this type of test instrument has potential. Detailed inspection of several items, together with unpredictable examinee performances and inconsistent ratings from the raters, led to suggestions for revision and improvement in the item selection (elimination of a single item), rating scales (inclusion of concrete descriptors), and assessment procedures (detailed rater guidelines and training).|
|Appears in Collections:||SLS Papers (2000-present)|