Comparison of Automatic and Expert Teachers’ Rating of Computerized English Listening-Speaking Test
- Cao Linlin
Abstract
Through Many-Facet Rasch Measurement (MFRM) analysis, this study explores the rating differences between one automated computer rater and five expert teacher raters in scoring 119 students on a computerized English listening-speaking test. Results indicate that both the automatic rater and the teacher raters demonstrate good inter-rater reliability, though the automatic rater shows lower intra-rater reliability than the college teacher and high school teacher raters under stringent infit limits. Neither the automatic rater nor the human raters exhibit central tendency or randomness effects. This research provides evidence for the automated rating reform of the computerized English listening-speaking test (CELST) in the Guangdong NMET and encourages the application of MFRM in operational score monitoring.
- DOI: 10.5539/elt.v13n1p18
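As context for the abstract's Rasch terminology, a minimal many-facet Rasch formulation (following Linacre's MFRM) is sketched below with examinee, rater, and rating-category facets; the exact facet structure used in the study is an assumption, since it is not specified on this page.

```latex
% A minimal three-facet Rasch rating scale model (Linacre's MFRM),
% assuming facets for examinee, rater, and rating category.
% P_{njk} is the probability that rater j awards examinee n category k.
\[
  \log\!\left(\frac{P_{njk}}{P_{nj(k-1)}}\right)
    = \theta_n - \alpha_j - \tau_k
\]
% \theta_n : ability of examinee n
% \alpha_j : severity of rater j
% \tau_k   : difficulty of awarding category k over category k-1
% Rater infit statistics near 1.0 indicate internally consistent
% (intra-reliable) rating; values outside stringent limits flag misfit.
```

Under this model, a rater whose infit mean-square falls outside the chosen limits rates inconsistently relative to model expectations, which is the sense in which the abstract compares the automatic rater's intra-rater reliability with that of the teacher raters.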