Published July 5, 2023 | Version v1
Conference paper | Open Access

Evaluating Quadratic Weighted Kappa as the Standard Performance Metric for Automated Essay Scoring

  • 1. WestEd, USA
  • 2. EPFL, Switzerland
  • 3. Google Research and Indian Institute of Science, India

Description

Automated Essay Scoring (AES) tools aim to improve the efficiency and consistency of essay scoring by using machine learning algorithms. Existing research on this topic largely agrees that human-automated score agreement remains the benchmark for assessing the accuracy of machine-generated scores, and the Quadratic Weighted Kappa (QWK) is commonly used as the evaluation metric for AES model performance. However, we have identified several limitations of using QWK as the sole metric for evaluating AES models. These limitations include its sensitivity to the rating scale, the potential for the so-called kappa paradox to occur, the impact of prevalence, the impact of where agreements fall along the diagonal of the agreement matrix, and its limited ability to handle a large number of raters. Our findings suggest that relying solely on QWK as the evaluation metric for AES performance may not be sufficient. We further discuss additional metrics that can more comprehensively evaluate the performance and accuracy of AES models.
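As a point of reference, the sketch below (not taken from the paper) shows how QWK is commonly computed for human-machine score agreement using scikit-learn, together with a toy, invented example of the prevalence effect behind the kappa paradox: when one score level dominates, QWK can collapse even though raw agreement is high.

```python
# Minimal sketch: computing QWK for AES evaluation with scikit-learn.
# The scores below are hypothetical and chosen only for illustration.
from sklearn.metrics import cohen_kappa_score

# Hypothetical human and machine scores on a 1-6 rubric.
human = [3, 4, 4, 2, 5, 3, 4, 6, 2, 3]
machine = [3, 4, 3, 2, 5, 3, 4, 5, 2, 4]
qwk = cohen_kappa_score(human, machine, weights="quadratic")
print(f"QWK (varied scores): {qwk:.3f}")

# Prevalence / kappa paradox illustration: 9 of 10 essays receive identical
# scores (90% raw agreement), but because nearly every score is a 4,
# chance-corrected agreement drops to 0.
human_skewed = [4] * 9 + [3]
machine_skewed = [4] * 10
qwk_skewed = cohen_kappa_score(human_skewed, machine_skewed, weights="quadratic")
print(f"QWK (skewed prevalence): {qwk_skewed:.3f}")
```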

Files

2023.EDM-long-papers.9.pdf (756.9 kB)
md5:3fd61117e4188bfcbde5ec4e9ec3ea3e