Published June 21, 2023 | Version v1
Journal article | Open Access

Less But Enough: Evaluation of peer reviews through pseudo-labeling with less annotated data

  • 1. North Carolina State University, USA
  • 2. University of North Carolina at Chapel Hill, USA

Description

A peer-assessment system provides a structured learning process for students and allows them to write
textual feedback on each other’s assignments and projects. This helps instructors or teaching assistants
perform a more comprehensive evaluation of students’ work. However, the contribution of peer assessment
to students’ learning relies heavily on the quality of the reviews. Therefore, a thorough evaluation
of the quality of peer assessment is essential to ensuring that the process will benefit students’ learning.
Previous studies have focused on applying machine learning to evaluate peer assessment by identifying
characteristics of reviews (e.g., Do they mention a problem, make a suggestion, or tell the students where
to make a change?). Unfortunately, collecting ground-truth labels for these characteristics is an arbitrary,
subjective, and labor-intensive task. Moreover, in most cases those labels are assigned by students, not all
of whom are reliable labelers. In this study, we propose a semi-supervised pseudo-labeling
approach to build a robust peer-assessment evaluation system that utilizes large unlabeled datasets along
with only a small amount of labeled data. We aim to evaluate peer assessments from two angles: detecting
a problem statement (does the reviewer mention a problem with the work?) and detecting a suggestion (does
the reviewer give a suggestion to the author?).
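
For readers unfamiliar with the technique the abstract names, the sketch below illustrates the general pseudo-labeling loop: train an initial classifier on the small labeled set, assign high-confidence predictions on the unlabeled pool as pseudo-labels, and retrain on the combined data. The features (TF-IDF), classifier (logistic regression), confidence threshold (0.9), and all example reviews are illustrative assumptions, not the paper's actual setup.

# Minimal sketch of confidence-thresholded pseudo-labeling for the
# "problem detection" task. All data, features, and hyperparameters
# here are assumptions for illustration only.
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical small labeled set: 1 = mentions a problem, 0 = does not.
labeled_reviews = [
    "The code crashes on the second test case.",
    "Great job, very clear writing.",
    "Your design section is missing key details.",
    "I liked the examples you chose.",
]
labels = np.array([1, 0, 1, 0])

# Hypothetical large unlabeled pool (abbreviated here).
unlabeled_reviews = [
    "Figure 3 is unreadable at this resolution.",
    "Well organized and easy to follow.",
]

vectorizer = TfidfVectorizer()
X_labeled = vectorizer.fit_transform(labeled_reviews)
X_unlabeled = vectorizer.transform(unlabeled_reviews)

# Step 1: train an initial model on the small labeled set.
model = LogisticRegression()
model.fit(X_labeled, labels)

# Step 2: predict on the unlabeled pool and keep only predictions
# above an assumed confidence threshold as pseudo-labels.
probs = model.predict_proba(X_unlabeled)
confident = probs.max(axis=1) >= 0.9
pseudo_labels = probs.argmax(axis=1)[confident]

# Step 3: retrain on the labeled data plus the pseudo-labeled subset.
X_combined = vstack([X_labeled, X_unlabeled[confident]])
y_combined = np.concatenate([labels, pseudo_labels])
model.fit(X_combined, y_combined)

Thresholding on prediction confidence is the usual guard against propagating early mistakes into the retraining step; the paper's specific model, features, and threshold may differ from this sketch.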

Files

613Liu123To140.pdf (1.0 MB)
