Scoring Behavior of Native vs. Non-native Speaker Raters of Writing Exams

DC Field | Value | Language
dc.contributor.author | Kim, Ah-Young | -
dc.contributor.author | Gennaro, Kristen di | -
dc.date.accessioned | 2014-01-07T08:02:11Z | -
dc.date.available | 2014-01-07T08:02:11Z | -
dc.date.issued | 2012 | -
dc.identifier.citation | 어학연구 (Language Research), Vol.48 No.2, pp. 319-342 | ko_KR
dc.identifier.issn | 0254-4474 | -
dc.identifier.uri | https://hdl.handle.net/10371/86486 | -
dc.description.abstract | In performance testing, where judges provide scores on examinees' abilities, program administrators may seek to ensure that raters are consistent in their interpretations of the scoring criteria. A Rasch analysis not only allows us to examine differences in rater consistency but also to check for interactions between rater characteristics and scoring behavior. In this paper, we present the results of an analysis of rater severity in assessing students' writing ability. Our main focus was to examine differences between native speaker (NS) and non-native speaker (NNS) raters. Results showed that raters differed in terms of severity, with NNS raters as a group more severe than NS raters. In addition, the severity of NNS raters varied more than that of the NS raters. Bias analysis indicated some rater-examinee bias and rater-domain bias among both NS and NNS raters, with the majority found among the NNS raters. The paper provides implications for training NNS raters. (153 words) | ko_KR
dc.language.iso | en | ko_KR
dc.publisher | 서울대학교 언어교육원 (Language Education Institute, Seoul National University) | ko_KR
dc.subject | assessing writing ability | ko_KR
dc.subject | scoring behavior | ko_KR
dc.subject | rater severity | ko_KR
dc.title | Scoring Behavior of Native vs. Non-native Speaker Raters of Writing Exams | ko_KR
dc.type | SNU Journal | ko_KR
dc.citation.journaltitle | 어학연구 | -
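
The abstract above refers to a Rasch analysis of rater severity and rater bias. The record does not specify the model, but analyses of this kind are commonly carried out with a many-facet Rasch model (MFRM); the formulation below is a standard one given here for illustration, not a detail taken from the paper:

\log\left(\frac{P_{njik}}{P_{nji(k-1)}}\right) = B_n - C_j - D_i - F_k

where B_n is the ability of examinee n, C_j the severity of rater j, D_i the difficulty of writing domain i, and F_k the difficulty of awarding score category k rather than k-1. In this framework, the rater-examinee and rater-domain bias reported in the abstract would typically correspond to interaction terms (e.g., rater-by-examinee) that depart significantly from zero.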