Establishing an Operational Model of Rating Scale Construction for English Writing Assessment


  •  Xuefeng Wu    

Abstract

Rating scales for writing assessment are critical in that they directly determine the quality and fairness of such performance tests. However, in many EFL contexts, rating scales are constructed, to a certain extent, on the basis of teachers' intuition, and teachers urgently need a feasible and scientifically grounded procedure to guide rating scale construction. This study aims to design an operational model of rating scale construction, using English summary writing as an example. In total, 325 university English teachers, 4 language assessment experts, and 60 English majors in China participated in the study. Twenty textual attributes were extracted through text analysis of China's Standards of English Language Ability (CSE), the theoretical construct of summary writing, and 8 English teachers' comments on sample summary essays, supplemented by the teachers' personal judgement. The textual attributes were then investigated through a large-scale questionnaire survey. Exploratory factor analysis and expert judgement were employed to determine the rating scale dimensions, and regression analysis together with expert judgement was used to determine the weighting distribution across the dimensions. Based on these procedures, a tentative operational model of rating scale construction was established, which can also be applied and adapted to develop rating scales for other writing assessments.
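To illustrate the two statistical steps named in the abstract, the sketch below shows how exploratory factor analysis might be used to group questionnaire responses on the 20 textual attributes into dimensions, and how regression coefficients might inform dimension weights. This is not the authors' code; the file names, column names, the choice of four factors, and the holistic-score column are all hypothetical assumptions for illustration only, and in the study both steps were moderated by expert judgement.

```python
# Minimal sketch (assumptions only, not the study's actual analysis) of
# EFA for dimension grouping and regression for weighting.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

# Hypothetical survey data: one row per respondent, one column per attribute.
ratings = pd.read_csv("attribute_ratings.csv")

# Step 1: exploratory factor analysis to suggest how the 20 attributes
# cluster into rating-scale dimensions (the number of factors is a
# researcher decision, confirmed by expert judgement in the study).
fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(ratings)
loadings = pd.DataFrame(fa.components_.T, index=ratings.columns,
                        columns=[f"Dim{i + 1}" for i in range(4)])
print(loadings.round(2))  # inspect which attributes load on which dimension

# Step 2: regress a holistic score on per-essay dimension scores; the
# normalized coefficients can serve as a starting point for weighting.
scores = pd.read_csv("essay_dimension_scores.csv")  # hypothetical file
X = scores[["Dim1", "Dim2", "Dim3", "Dim4"]]
y = scores["holistic_score"]
reg = LinearRegression().fit(X, y)
weights = pd.Series(reg.coef_, index=X.columns)
print((weights / weights.sum()).round(2))  # candidate weights for expert review
```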



This work is licensed under a Creative Commons Attribution 4.0 License.