An Empirical Study on the Validity of the AES Systems Juku and iWrite for Continuation Writing Task Assessment


  •  Ziqing Luo    
  •  Si Luo    

Abstract

Automatic English scoring (AES) systems are coming to the forefront of English learners’ attention for their speed, accuracy, and personalized feedback. However, few researchers have studied the validity of AES systems in assessing narrative texts such as continuation writing tasks. This paper therefore empirically compares the scoring of two AES systems, Juku and iWrite, and examines how the scoring validity of each system differs from that of a human teacher.

This study is mainly quantitative. The data were the continuation writing task scores of 30 senior high school students at a Chinese secondary school. Each task was scored by a professional teacher and by both Juku and iWrite, each out of a maximum of 25 points. The scores were then analyzed statistically with SPSS 26.0.
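
The study ran its analyses in SPSS 26.0. As a minimal sketch of the same consistency and mean-difference checks (Pearson correlation and a paired-samples t-test between each system and the teacher), the Python snippet below uses made-up scores; the variable names and data are hypothetical and do not reproduce the study's dataset.

    # Illustrative re-implementation of the core SPSS analyses
    # (Pearson correlation and paired-samples t-tests) in Python.
    import numpy as np
    from scipy import stats

    # Hypothetical scores for 30 continuation writing tasks
    # (25-point scale); not the study's actual data.
    rng = np.random.default_rng(0)
    teacher = rng.uniform(15, 25, size=30).round(1)
    juku = (teacher - rng.uniform(0, 4, size=30)).round(1)
    iwrite = (teacher - rng.uniform(0, 2, size=30)).round(1)

    for name, system in [("Juku", juku), ("iWrite", iwrite)]:
        # Consistency with manual scoring: Pearson's r
        r, p_r = stats.pearsonr(teacher, system)
        # Mean-difference check: paired-samples t-test
        t, p_t = stats.ttest_rel(teacher, system)
        print(f"{name}: r = {r:.3f} (p = {p_r:.3f}), "
              f"paired t = {t:.2f} (p = {p_t:.3f})")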

The results of the analysis showed that (1) iWrite’s scores were more consistent with, and more strongly correlated with, the manual scores than Juku’s; (2) the mean manual score was significantly higher than the mean Juku and iWrite scores; (3) the system scores discriminated between students less well than the manual scores, though the latter were more subjective; and (4) the AES systems were more accurate and stable than manual scoring. Learners can therefore use AES scores as a reference, practicing narrative writing with the systems’ feedback on grammar and the teacher’s feedback on plot and content.


