Published July 12, 2024 | Version v1 | Conference paper | Open
Examining the Algorithmic Fairness in Predicting High School Dropouts
Editors:
1. Bielefeld University, Germany
2. University of Alberta, Canada
Description
Algorithmic fairness in secondary-education dropout prediction has received comparatively little attention, and the inclusion of protected attributes in machine learning models remains a subject of debate. This study examines machine learning models for predicting high school dropout, focusing on the role of protected attributes such as gender and race/ethnicity. Using a comprehensive national dataset, we critically evaluate the predictive performance and algorithmic fairness of these models via the novel Differential Algorithmic Functioning (DAF) method. Our results show that the impact of protected attributes on predictions varies, with model-specific biases emerging across different threshold ranges. This suggests that researchers should not only evaluate but also document the safe (bias-free) threshold range of their predictive models. We further recommend that the decision to include or exclude protected attributes be based on their effect on predictive performance, algorithmic fairness, and practical deployment considerations. These findings offer significant insights for educational policymakers and researchers developing fair and effective predictive models.
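To illustrate the idea of a "safe (bias-free) threshold range", the sketch below sweeps decision thresholds and flags those where group-level prediction rates stay close. It uses a generic demographic-parity gap on synthetic scores, not the paper's DAF method; the scores, group labels, and 0.05 tolerance are illustrative assumptions.

```python
# Minimal sketch: sweep thresholds of a dropout-risk classifier and
# record where the gap in predicted-dropout rates between two
# protected-attribute groups stays under a tolerance.
# NOTE: this is a generic demographic-parity check, NOT the paper's
# Differential Algorithmic Functioning (DAF) method; all data below
# is synthetic and the tolerance is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk scores and a binary protected-attribute group label.
scores = rng.uniform(0.0, 1.0, size=1000)
group = rng.integers(0, 2, size=1000)

def parity_gap(scores, group, threshold):
    """Absolute difference in predicted-dropout rates between groups."""
    flagged = scores >= threshold
    return abs(flagged[group == 0].mean() - flagged[group == 1].mean())

# Sweep thresholds and keep those meeting the (assumed) bias tolerance.
thresholds = np.linspace(0.05, 0.95, 19)
tolerance = 0.05
safe = [t for t in thresholds if parity_gap(scores, group, t) < tolerance]

if safe:
    print(f"Safe (low-bias) threshold range: {min(safe):.2f}-{max(safe):.2f}")
else:
    print("No threshold met the bias tolerance.")
```

In practice, documenting the endpoints of such a range alongside the model, as the paper recommends, lets deployers choose operating thresholds that avoid the model-specific biased regions.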
Files
(914.4 kB)
| Name | Size | md5 |
|---|---|---|
| 2024.EDM-short-papers.22.pdf | 914.4 kB | d600d34efe231acfb41ca781c647e491 |