An Evaluation of China’s Automated Scoring System Bingo English
- Jianmin Gao
- Xin Li
- Peiqi Gu
- Ziqi Liu
Abstract
This study evaluated the effectiveness of Bingo English, one of the representative automated essay scoring (AES) systems in China. Eighty-four essays from an English test administered at a Chinese university were collected as research materials. Each essay was scored both by two trained, experienced human raters and by Bingo English, and its linguistic features were quantified in terms of complexity, accuracy, and fluency (CAF), as well as content quality and organization. After examining the agreement between human and automated scores, and the correlation of both sets of scores with the indicators of the essays’ linguistic features, the study found that Bingo English scores reflect essay quality only in a general way, and that the system should therefore be used with caution.
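The human–machine agreement and correlation analyses mentioned above can be sketched roughly as follows. This is a minimal illustration, not the study's actual procedure: the scores, the 15-point scale, and the 1-point adjacency tolerance are all made-up assumptions for demonstration.

```python
# Hypothetical sketch of a human-vs-automated score comparison.
# The score lists below are illustrative, NOT the study's data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def adjacent_agreement(xs, ys, tolerance=1):
    """Share of essays whose two scores differ by at most `tolerance` points."""
    return sum(abs(x - y) <= tolerance for x, y in zip(xs, ys)) / len(xs)

# Made-up scores on an assumed 15-point scale for five essays.
human = [11, 9, 13, 8, 12]
machine = [10, 9, 14, 9, 11]

print(round(pearson_r(human, machine), 3))   # correlation between raters
print(adjacent_agreement(human, machine))    # share within 1 point
```

The same two functions could be reused to correlate either rater's scores with any quantified linguistic-feature indicator (e.g., a CAF measure), which is the second comparison the abstract describes.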
- DOI:10.5539/ijel.v10n6p30
Journal Metrics
Google-based Impact Factor (2021): 1.43
h-index (July 2022): 45
i10-index (July 2022): 283
h5-index (2017-2021): 25
h5-median (2017-2021): 37
Index
- Academic Journals Database
- ANVUR (Italian National Agency for the Evaluation of Universities and Research Institutes)
- CNKI Scholar
- CrossRef
- Excellence in Research for Australia (ERA)
- IBZ Online
- JournalTOCs
- Linguistic Bibliography
- Linguistics and Language Behavior Abstracts
- LOCKSS
- MIAR
- MLA International Bibliography
- PKP Open Archives Harvester
- Scilit
- Semantic Scholar
- SHERPA/RoMEO
- UCR Library
Contact
- Diana Xu, Editorial Assistant
- ijel@ccsenet.org