Should Items and Answer Keys of Small-Scale Exams Be Published?
- Hüseyin Selvi
Abstract
This study aimed to examine the effect of using items from previous exams on students’ pass-fail rates and on the psychometric properties of the tests and items.
The study included data from 115 tests and 11,500 items used in the midterm and final exams of 3,910 preclinical students at the Faculty of Medicine between 2014 and 2019. Data were analyzed using descriptive statistics for total test scores, item difficulty, and item discrimination values, with internal consistency coefficients computed as evidence of reliability. The Shapiro-Wilk test was used to evaluate the distribution structure, and t-tests were used to analyze differences between groups.
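The abstract names the standard classical-test-theory statistics without giving their formulas. As a minimal sketch, assuming dichotomously scored (0/1) responses, item difficulty is the proportion of correct answers per item, discrimination can be taken as the corrected item-total (point-biserial) correlation, and internal consistency can be estimated with KR-20. The Python below illustrates these conventions only; it is not the authors' analysis code, and all names are illustrative.

```python
import numpy as np

def item_difficulty(responses: np.ndarray) -> np.ndarray:
    """Proportion of examinees answering each item correctly (0/1 matrix, rows = examinees)."""
    return responses.mean(axis=0)

def item_discrimination(responses: np.ndarray) -> np.ndarray:
    """Corrected item-total (point-biserial) correlation for each item."""
    total = responses.sum(axis=1)
    disc = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]  # total score excluding item j
        disc[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return disc

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 internal-consistency coefficient."""
    p = responses.mean(axis=0)  # item difficulties
    q = 1.0 - p
    k = responses.shape[1]
    var_total = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - (p * q).sum() / var_total)
```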
The findings showed that the mean item repetition rate between 2014 and 2019 ranged from 16.98% to 39.00%. Total score variance decreased significantly as the percentage of repeated test items increased. There was a significant, moderately positive relationship between the percentage of repeated test items and the number of students eligible to pass. Item difficulty values obtained on an item's first use were significantly lower than those obtained on its repeated use.
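These findings map onto the tests named in the Methods. The SciPy sketch below uses purely illustrative placeholder numbers (not the study's data) to show the shape of the analysis: a Pearson correlation between the percentage of repeated items and pass counts, a Shapiro-Wilk normality check, and an independent-samples t-test comparing item difficulty at first versus repeated use.

```python
import numpy as np
from scipy import stats

# All numbers below are illustrative placeholders, not the study's data.

# Per-test percentage of repeated items and number of students passing.
repeated_pct = np.array([17.0, 22.5, 28.0, 31.5, 39.0])
n_passed = np.array([180, 195, 210, 230, 250])
r, p = stats.pearsonr(repeated_pct, n_passed)
print(f"correlation: r = {r:.2f}, p = {p:.3f}")

# Item difficulty (proportion correct) at first vs. repeated use.
difficulty_first = np.array([0.48, 0.52, 0.55, 0.58, 0.60])
difficulty_repeated = np.array([0.66, 0.69, 0.70, 0.72, 0.74])

# Check the distribution structure before the parametric comparison,
# mirroring the Shapiro-Wilk step described in the Methods.
print(stats.shapiro(difficulty_first))
print(stats.shapiro(difficulty_repeated))

t, p = stats.ttest_ind(difficulty_first, difficulty_repeated)
print(f"t-test: t = {t:.2f}, p = {p:.3f}")
```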
We conclude that test makers should not publish test items and answer keys unless they have the means, such as the infrastructure, budget, and personnel, to develop new items to replace those already released into test banks.
- DOI: 10.5539/hes.v10n2p107