Published July 18, 2022 | Version v1
Conference paper | Open Access

Algorithmic unfairness mitigation in student models: When fairer methods lead to unintended results

Contributors

  • 1. University of Canterbury, NZ
  • 2. University of Illinois Urbana–Champaign, US

Description

Systematically unfair education systems lead to different levels of learning for students from different demographic groups, which, in the context of AI-driven education, has inspired work on mitigating unfairness in machine learning methods. However, unfairness mitigation methods may lead to unintended consequences for classrooms and students. We examined preprocessing and postprocessing unfairness mitigation algorithms in the context of a large dataset, the State of Texas Assessments of Academic Readiness (STAAR) outcome data, to investigate these issues. We evaluated each unfairness mitigation algorithm across multiple machine learning models using different definitions of fairness. We then evaluated how unfairness mitigation impacts classifications of students across different combinations of machine learning models, unfairness mitigation methods, and definitions of fairness. On average, unfairness mitigation methods led to a 22% improvement in fairness. When examining the impacts of unfairness mitigation methods on predictions, we found that these methods led to models that can, and did, overgeneralize across groups. Consequently, predictions made by such models may not reach the intended audiences. We discuss the implications for AI-driven interventions and student support.
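As a hedged illustration of the kind of pipeline the abstract describes (not the authors' actual code or the STAAR data), the sketch below applies one common preprocessing mitigation, Kamiran & Calders-style reweighing, and evaluates one common fairness definition, demographic parity difference. The synthetic data, column names, and decision threshold are all assumptions made for this example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for assessment outcome data (NOT the STAAR dataset):
# a binary protected attribute `group` and a binary outcome `passed`.
n = 5000
group = rng.integers(0, 2, size=n)                  # demographic group 0 / 1
x = rng.normal(size=(n, 3)) + group[:, None] * 0.5  # features correlated with group
passed = (x.sum(axis=1) + rng.normal(size=n) > 0.8).astype(int)

def reweigh(group, y):
    """Reweighing (preprocessing): weight each (group, label) cell so that
    group membership and outcome look statistically independent."""
    w = np.empty_like(y, dtype=float)
    for g in np.unique(group):
        for v in np.unique(y):
            mask = (group == g) & (y == v)
            expected = (group == g).mean() * (y == v).mean()
            observed = mask.mean()
            w[mask] = expected / observed
    return w

def demographic_parity_difference(group, y_pred):
    """Absolute gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Baseline model vs. model trained on reweighed samples.
base = LogisticRegression().fit(x, passed)
fair = LogisticRegression().fit(x, passed, sample_weight=reweigh(group, passed))

print("baseline DP difference:  %.3f" % demographic_parity_difference(group, base.predict(x)))
print("reweighed DP difference: %.3f" % demographic_parity_difference(group, fair.predict(x)))
```

A postprocessing mitigation of the sort the paper also examines would instead leave the trained model untouched and adjust decision thresholds per group; either approach can shrink the parity gap while changing which individual students receive positive predictions, which is the overgeneralization risk the abstract highlights.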

Files (272.5 kB)

2022.EDM-posters.68.pdf (272.5 kB, md5:34eec4ac0fd40e049e4a9c5dbe310619)
