
Scoring Behavior of Native vs. Non-native Speaker Raters of Writing Exams

Authors

Kim, Ah-Young; Gennaro, Kristen di

Issue Date
2012
Publisher
Language Education Institute, Seoul National University
Citation
Language Research, Vol. 48, No. 2, pp. 319-342
Keywords
assessing writing ability; scoring behavior; rater severity
Abstract
In performance testing, where judges provide scores on examinees' abilities, program administrators may seek to ensure that raters are consistent in their interpretations of the scoring criteria. A Rasch analysis allows us not only to examine differences in rater consistency, but also to check for interactions between rater characteristics and scoring behavior. In this paper, we present the results of an analysis of rater severity in assessing students' writing ability. Our main focus was to examine differences between native speaker (NS) and non-native speaker (NNS) raters. Results showed that raters differed in terms of severity, with NNS raters as a group more severe than NS raters. In addition, the severity of NNS raters varied more than that of the NS raters. Bias analysis indicated some rater-examinee bias and rater-domain bias among both NS and NNS raters, with the majority found among the NNS raters. The paper provides implications for training NNS raters.
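
Note: for readers unfamiliar with the method, rater-severity analyses of this kind are typically based on a many-facet Rasch model. The sketch below is a standard three-facet formulation (examinee, rater, domain, plus rating-scale thresholds) given for orientation only; the exact facet structure used in the paper is not specified here and this notation is an assumption, not the authors' own.

% Minimal sketch of a many-facet Rasch (rating scale) model, assuming
% three facets: examinee ability, rater severity, domain difficulty.
\[
  \log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) \;=\; B_n \;-\; C_j \;-\; D_i \;-\; F_k
\]
% where
%   P_{nijk} : probability that examinee n receives category k from rater j on domain i
%   B_n      : ability of examinee n
%   C_j      : severity of rater j
%   D_i      : difficulty of domain i
%   F_k      : threshold of category k relative to category k-1

Under this kind of model, a larger estimated C_j indicates a more severe rater, and bias terms (e.g., rater-by-examinee or rater-by-domain interactions) can be added and tested to flag the kinds of rater-examinee and rater-domain bias reported in the abstract.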
ISSN
0254-4474
Language
English
URI
https://hdl.handle.net/10371/86486