Published January 1, 2026 | Version v1
Journal article (Open Access)

Six Approaches To Measuring Algorithmic Bias: An Empirical Evaluation Of Fairness Metrics In Machine Learning

Description

Fairness metrics have become central instruments for identifying, quantifying, and mitigating bias in machine learning (ML) systems deployed in high-stakes decision-making contexts such as credit scoring, employment screening, welfare allocation, and criminal risk assessment. However, the rapid proliferation of fairness definitions has introduced substantial ambiguity regarding how algorithmic bias should be measured, interpreted, and governed in practice. This paper presents a comprehensive conceptual and empirical analysis of six widely adopted fairness metrics: Statistical Parity, Disparate Impact, Equalized Odds, Predictive Parity, Calibration, and Individual Fairness. Using a supervised classification task on a benchmark dataset, we empirically evaluate how fairness assessments vary across metrics under identical modeling conditions and decision thresholds. Our findings reveal substantial divergence among fairness metrics, with models satisfying one fairness criterion frequently violating others. These results demonstrate that algorithmic fairness is inherently multidimensional and context-dependent. We conclude that responsible AI governance requires multi-metric auditing, transparent metric selection, and domain-specific interpretation rather than reliance on any single fairness definition.
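To make the divergence among metrics concrete, the sketch below computes three of the criteria named above (Statistical Parity difference, Disparate Impact ratio, and the true-positive-rate gap used in Equalized Odds) on a small hypothetical dataset. The data, variable names, and helper functions are illustrative assumptions, not taken from the paper's benchmark; they show only how the same predictions can score differently under different definitions.

```python
import numpy as np

# Hypothetical toy data (not from the paper): binary predictions y_hat,
# true labels y, and a binary protected attribute g (0 = group A, 1 = group B).
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y     = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])
g     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

def selection_rate(y_hat, mask):
    """Fraction of positive predictions within a group."""
    return y_hat[mask].mean()

# Statistical Parity difference: P(Y_hat=1 | A) - P(Y_hat=1 | B);
# zero means both groups receive positive predictions at the same rate.
sp_diff = selection_rate(y_hat, g == 0) - selection_rate(y_hat, g == 1)

# Disparate Impact ratio: smaller group selection rate over the larger one;
# values below 0.8 are often flagged under the "four-fifths rule".
rates = [selection_rate(y_hat, g == 0), selection_rate(y_hat, g == 1)]
di_ratio = min(rates) / max(rates)

# Equalized Odds (true-positive-rate component): gap in TPR across groups.
def tpr(y_hat, y, mask):
    """True-positive rate restricted to one group."""
    pos = mask & (y == 1)
    return y_hat[pos].mean()

tpr_gap = abs(tpr(y_hat, y, g == 0) - tpr(y_hat, y, g == 1))

print(f"Statistical Parity difference: {sp_diff:.3f}")
print(f"Disparate Impact ratio:        {di_ratio:.3f}")
print(f"Equalized Odds TPR gap:        {tpr_gap:.3f}")
```

On this toy data the model looks moderately unfair under Disparate Impact (ratio below 0.8) while the Statistical Parity difference is small, illustrating the paper's point that a single threshold-based check can mask violations of another criterion.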

Files

IJSRET_V12_issue1_173.pdf (683.5 kB)
md5:38a085608d50e71b7c81a0d35c8a870a
