Scoring Difficulty in Summary Writing Assessment: Toward the Reconstruction of Analytic Rubric
- Makiko Kato
Abstract
This study examines whether the factors that make scoring English summaries difficult, and that shape score decisions, differ according to raters' attributes, and it collects candid opinions, considerations, and tentative suggestions for future improvements to an analytic rubric for English learners' summary writing. Seven trained raters with diverse backgrounds evaluated two kinds of English summaries written by Japanese university students using an analytic rubric with three evaluation items. A questionnaire was administered to identify which of the three items were difficult to assess and why the raters perceived such difficulty, as well as which backgrounds and factors influenced their scoring decisions. In addition, drawing on the raters' most recent scoring experience, candid comments were collected to inform the development of future rubrics. The results did not clearly show whether raters' attributes affected the difficulty of evaluation. However, depending on the raters' experience in teaching English and assessing summary writing, requests emerged for improvements to the descriptors of the evaluation items and to the rubric as a whole. This study proposes a tentative analytic rubric for summary writing, providing a foundation for constructing a rubric that future raters can use more easily, and it highlights the views of both expert and novice teachers who evaluate summaries in educational settings.
- DOI:10.5539/jel.v14n2p74