Automated essay scoring effect on test equating errors in mixed-format test


Date

2021

Journal Title

Journal ISSN

Volume Title

Publisher

IZZET KARA

Access Rights

info:eu-repo/semantics/openAccess

Abstract

Scoring constructed-response items is often difficult, time-consuming, and costly in practice. Advances in computer technology have made automated scoring of constructed-response items feasible. However, applying automated scoring without investigating its effect on test equating can lead to serious problems. The goal of this study was to score the constructed-response items in mixed-format tests automatically under different train/test data ratios and to investigate the indirect effect of these scores on test equating, compared with scores assigned by human raters. Bidirectional long short-term memory (BLSTM) was selected as the automated scoring method because it showed the best performance. In the test equating process, methods based on both classical test theory and item response theory were used. For most of the equating methods, the equating errors resulting from automated scoring were close to those obtained when the equating was based on human raters' scores. It was concluded that automated scoring can be applied, as it is convenient in terms of equating.
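To make the pipeline described in the abstract concrete, below is a minimal sketch in Python of its two stages: a BLSTM essay scorer and a classical-test-theory linear equating step. The study's actual features, hyperparameters, and data are not given here, so the vocabulary size, score scale, split ratio, and all variable names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a toy BLSTM scorer plus CTT linear equating.
# All constants and data below are hypothetical.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000   # assumed vocabulary size
MAX_LEN = 300       # assumed max response length in tokens
MAX_SCORE = 5       # assumed score scale of a constructed-response item

# BLSTM scorer: the Bidirectional wrapper reads each token
# sequence in both directions before the regression head.
model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(1, activation="sigmoid"),  # score rescaled to [0, 1]
])
model.compile(optimizer="adam", loss="mse")

# Fake tokenized essays and human ratings, standing in for real data.
X = np.random.randint(1, VOCAB_SIZE, size=(200, MAX_LEN))
y = np.random.randint(0, MAX_SCORE + 1, size=(200,)) / MAX_SCORE

split = int(0.8 * len(X))  # e.g., an 80/20 train/test ratio; the study varied this
model.fit(X[:split], y[:split], epochs=3, verbose=0)
machine_scores = model.predict(X[split:]).ravel() * MAX_SCORE

def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """CTT linear equating: map form-X scores onto the form-Y scale
    by matching means and standard deviations."""
    return mu_y + (sd_y / sd_x) * (x - mu_x)
```

In this sketch, the equating error one would examine is the difference between equated scores computed from `machine_scores` and those computed from human ratings of the same responses; the study's finding is that, for most equating methods, these two error levels were close.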

Description

Keywords

Test Equating, Automated Scoring, Classical Test Theory, Exploratory Factor Analysis, Constructed Response, Dimensionality

Source

International Journal of Assessment Tools in Education

WoS Q Rating

N/A

Scopus Q Rating

Volume

8

Issue

2

Citation

Uysal, I., & Doğan, N. (2021). Automated essay scoring effect on test equating errors in mixed-format test. International Journal of Assessment Tools in Education, 8(2), 222-238.