Published April 1, 2019 | Version v1
Journal article · Open Access

A scoring rubric for automatic short answer grading system

  • 1. STMIK Amikom Purwokerto
  • 2. Universitas Gadjah Mada

Description

Over the past decades, research on automatic grading has become an interesting issue. These studies focus on how machines can help humans assess students' learning outcomes. Automatic grading enables teachers to assess students' answers more objectively, consistently, and quickly. Essay questions come in two types: long essays and short answers. Most previous research developed automatic essay grading (AEG) rather than automatic short answer grading (ASAG). This study aims to assess the sentence similarity between short answers and the questions and reference answers in Indonesian without any semantic language tool. The research uses pre-processing steps consisting of case folding, tokenization, stemming, and stopword removal. The proposed approach is a scoring rubric obtained by measuring sentence similarity using string-based similarity methods and a keyword matching process. The dataset used in this study consists of 7 questions, 34 alternative reference answers, and 224 student answers. The experimental results show that the proposed approach achieves Pearson's correlation values between 0.65419 and 0.66383, with Mean Absolute Error (MAE) values between 0.94994 and 1.24295. The proposed approach also raises the correlation value and decreases the error value for each method.
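The pipeline outlined in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the paper's exact string-based similarity measures, rubric weights, stemming algorithm, and stopword list are not given here, so a normalized edit-based ratio, an equal-weight combination, and a tiny placeholder stopword set are assumed as stand-ins.

```python
# Hypothetical sketch of an ASAG scoring rubric: preprocessing, string-based
# similarity, keyword matching, and evaluation with Pearson's r and MAE.
# The similarity measure (difflib ratio) and equal weights are assumptions.
import difflib
import math

# Tiny illustrative Indonesian stopword list (the real system would use a full one).
STOPWORDS = {"yang", "dan", "di", "ke", "dari"}

def preprocess(text):
    """Case folding, tokenization, and stopword removal (stemming omitted here)."""
    tokens = text.lower().split()
    return [t.strip(".,?!") for t in tokens if t not in STOPWORDS]

def string_similarity(answer_tokens, reference_tokens):
    """Stand-in string-based similarity: normalized edit-based ratio in [0, 1]."""
    return difflib.SequenceMatcher(
        None, " ".join(answer_tokens), " ".join(reference_tokens)
    ).ratio()

def keyword_match(answer_tokens, keywords):
    """Fraction of rubric keywords present in the student's answer."""
    return sum(1 for k in keywords if k in answer_tokens) / len(keywords)

def rubric_score(student, reference, keywords, max_score=5):
    """Combine both signals into a score on a 0..max_score scale (equal weights assumed)."""
    s = preprocess(student)
    r = preprocess(reference)
    sim = 0.5 * string_similarity(s, r) + 0.5 * keyword_match(s, keywords)
    return round(sim * max_score, 2)

def pearson(xs, ys):
    """Pearson's correlation between predicted and human scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mae(xs, ys):
    """Mean Absolute Error between predicted and human scores."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
```

In use, each student answer would be scored against every alternative reference answer (34 in the paper's dataset), the best match kept, and the resulting scores compared against teacher-assigned scores via `pearson` and `mae`.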

Files

28 11785.pdf (543.4 kB)