How reliable is it to automatically score open-ended items? An application in the Turkish language


Date

2021

Journal Title

Journal ISSN

Volume Title

Publisher

Assoc Measurement & Evaluation Education & Psychology

Access Rights

info:eu-repo/semantics/openAccess

Abstract

The use of open-ended items, especially in large-scale tests, creates difficulties in scoring. This problem can, however, be overcome with an approach based on the automated scoring of open-ended items. The aim of this study was to examine the reliability of the data obtained by scoring open-ended items automatically. One objective was to compare different machine learning algorithms for automated scoring (support vector machines, logistic regression, multinomial Naive Bayes, long short-term memory, and bidirectional long short-term memory). The other objective was to investigate how the reliability of automated scoring changes as the proportion of data used to test the scoring system varies (33%, 20%, and 10%). While examining the reliability of automated scoring, a comparison was made with the reliability of the data obtained from human raters. In this study, which presents the first attempt at automated scoring of open-ended items in the Turkish language, Turkish test data from the Academic Skills Monitoring and Evaluation (ABIDE) program administered by the Ministry of National Education were used. Cross-validation was used to test the system. As agreement coefficients indicating reliability, the percentage of agreement, the quadratic-weighted Kappa, which is frequently used in automated scoring studies, and Gwet's AC1 coefficient, which is not affected by the prevalence problem in the distribution of data across categories, were used. The results showed that automated scoring algorithms can be utilized. The best-performing algorithm for automated scoring was bidirectional long short-term memory. The long short-term memory and multinomial Naive Bayes algorithms performed worse than the support vector machine, logistic regression, and bidirectional long short-term memory algorithms. In automated scoring, the agreement coefficients at the 33% test data rate were slightly lower than those at the 10% and 20% test data rates, but remained within the desired range.
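The three agreement coefficients named in the abstract can be computed directly from two lists of scores (e.g. a human rater's and a model's). The following is a minimal pure-Python sketch, not the authors' code; it assumes scores are integer category labels and two raters:

```python
# Sketch (not the study's implementation) of the three reliability
# coefficients reported in the abstract: percentage of agreement,
# quadratic-weighted Kappa, and Gwet's AC1, for two raters.
from collections import Counter

def percent_agreement(r1, r2):
    """Share of items on which the two score lists agree exactly."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def quadratic_weighted_kappa(r1, r2):
    """Cohen's Kappa with quadratic disagreement weights."""
    cats = sorted(set(r1) | set(r2))
    k, n = len(cats), len(r1)
    idx = {c: i for i, c in enumerate(cats)}
    # Observed confusion matrix as proportions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # Expected matrix comes from the two marginal distributions.
    m1 = [sum(row) for row in obs]
    m2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    w = lambda i, j: (i - j) ** 2 / (k - 1) ** 2  # quadratic weight
    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * m1[i] * m2[j] for i in range(k) for j in range(k))
    return 1.0 - num / den

def gwet_ac1(r1, r2):
    """Gwet's AC1; chance agreement uses average category prevalence,
    which makes it robust to skewed score distributions."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    p_a = percent_agreement(r1, r2)
    counts = Counter(r1) + Counter(r2)
    pi = {c: counts[c] / (2 * n) for c in cats}
    p_e = sum(pi[c] * (1 - pi[c]) for c in cats) / (len(cats) - 1)
    return (p_a - p_e) / (1 - p_e)
```

For example, for human scores `[0, 0, 1, 1]` against model scores `[0, 1, 1, 1]`, the percentage of agreement is 0.75, the quadratic-weighted Kappa is 0.5, and Gwet's AC1 is about 0.53; with a skewed score distribution the two chance-corrected coefficients diverge, which is the prevalence problem the abstract refers to.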

Description

Keywords

Open-Ended Item, Machine Learning Algorithms, Automated Scoring, Inter-Rater Reliability, Coefficients of Agreement, Gwet's AC1

Source

Journal of Measurement and Evaluation in Education and Psychology-Epod

WoS Q Value

N/A

Scopus Q Value

Q4

Volume

12

Issue

1

Citation

Uysal, I., & Doğan, N. (2021). How reliable is it to automatically score open-ended items? An application in the Turkish language. Journal of Measurement and Evaluation in Education and Psychology, 12(1), 28-53.