Published June 30, 2022 | Version v1
Conference paper (Open Access)

Bias in Face Image Classification Machine Learning Models: The Impact of Annotator's Gender and Race

  • 1. Andreas
  • 2. Stylianos
  • 3. Zenonas

Description

An important factor in ensuring the correct operation of Machine Learning models is the quality of the data used during model training. Quite often, training data is annotated by humans, and as a result, annotation bias may be introduced. In this study, we focus on face image classification and aim to quantify the effect of annotation bias introduced by different groups of annotators, thereby enabling a better understanding of the problems that arise due to annotation bias. The results of the experiments indicate that the performance of Machine Learning models in several face image interpretation tasks is correlated with the self-reported demographic characteristics of the annotators. In particular, we found a significant correlation with annotator race, while the correlation with gender is less pronounced. Furthermore, experimental results show that it is possible to determine the group of annotators involved in the annotation process by considering the annotation data provided by previously unseen annotators. The results emphasize the risks of annotation bias in Machine Learning models.
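The kind of group-level annotation bias the abstract describes can be illustrated with a minimal synthetic sketch: two hypothetical annotator groups label the same images, one with a systematic bias toward the positive label, and we measure inter-group agreement and per-group label rates. All data, group names, and bias values here are invented for illustration; the paper's actual tasks, datasets, and analysis differ.

```python
# Hypothetical sketch of annotation bias between two annotator groups.
# All values are synthetic; this is not the paper's methodology.
import random

random.seed(0)

N_IMAGES = 1000
# Latent "true" binary attribute per image (e.g., smiling / not smiling).
truth = [random.random() < 0.5 for _ in range(N_IMAGES)]

def annotate(labels, bias, noise=0.1):
    """Simulate one group's annotations: symmetric random noise plus a
    systematic flip of negative labels toward positive (the bias)."""
    out = []
    for y in labels:
        if random.random() < noise:       # random annotation error
            y = not y
        if not y and random.random() < bias:  # biased flip to positive
            y = True
        out.append(y)
    return out

group_a = annotate(truth, bias=0.00)  # unbiased annotator group
group_b = annotate(truth, bias=0.15)  # group with a systematic bias

agreement = sum(a == b for a, b in zip(group_a, group_b)) / N_IMAGES
pos_rate_a = sum(group_a) / N_IMAGES
pos_rate_b = sum(group_b) / N_IMAGES

print(f"inter-group agreement:        {agreement:.2%}")
print(f"positive-label rate, group A: {pos_rate_a:.2%}")
print(f"positive-label rate, group B: {pos_rate_b:.2%}")
```

A model trained on group B's labels would inherit its inflated positive-label rate, which is one way annotator demographics can end up correlated with downstream model behaviour.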

Files

ArtificialIntelligenceApplicat.pdf (1.8 MB, md5:87bc9cc886b1c796e5c1022ff91a2cb0)

Additional details

Funding

European Commission
RISE – Research Center on Interactive Media, Smart Systems and Emerging Technologies (grant 739578)