Given the potential of algorithmic systems to influence the social world by amplifying or abating bias and potential discrimination, the KeepA(n)I project develops a structured, methodological approach that helps developers and machine learning practitioners detect social bias in an application's input datasets and output data. In contrast to existing methods proposed by the Fair ML community, which evaluate group and individual fairness in datasets and algorithmic results in an attempt to reduce or mitigate the effects of bias, KeepA(n)I takes a different approach. The project focuses on the expression of social stereotypes (e.g., based on gender, race, or socio-economic status) and on how these are reflected in biases shared by groups of people who interact with the system in different ways. KeepA(n)I is envisioned as a human-in-the-loop approach that methodically exposes social stereotypes, reducing their negative impact or even enhancing people's access to opportunities and resources when interacting with AI applications. By engaging humans in the evaluation process (i.e., through crowdsourcing), KeepA(n)I will achieve a diverse (e.g., across cultures) and dynamic (e.g., across contexts and time) evaluation of social norms, tailored to the objective of the evaluated application. The project will focus on computer vision applications that analyse people-related media (e.g., image content analysis or "tagging," and gender or age recognition from a profile photo), which have significant implications for high-risk uses (e.g., screening job applicant profiles or dating applications).
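To make the idea of checking output data for stereotype-related skew concrete, the minimal sketch below illustrates one very simple check of the kind such an evaluation pipeline might run: comparing how often a vision model assigns a given tag to images associated with different demographic groups. This is an illustrative assumption rather than the project's actual method; the function name, record fields, and disparity score are hypothetical.

```python
from collections import Counter


def tag_rate_disparity(records, tag):
    """Compare how often `tag` is assigned across demographic groups.

    `records` is an iterable of dicts with hypothetical fields:
      - "group": a (e.g., crowdsourced) demographic group label
      - "tags":  the set of tags the vision model assigned to the image
    Returns per-group assignment rates for `tag` and a crude disparity
    score (max rate divided by min rate).
    """
    totals = Counter()
    with_tag = Counter()
    for r in records:
        totals[r["group"]] += 1
        if tag in r["tags"]:
            with_tag[r["group"]] += 1

    rates = {g: with_tag[g] / totals[g] for g in totals}
    vals = list(rates.values())
    disparity = max(vals) / min(vals) if min(vals) > 0 else float("inf")
    return rates, disparity


# Toy example: the tag "kitchen" is assigned more often to one group.
records = [
    {"group": "women", "tags": {"person", "kitchen"}},
    {"group": "women", "tags": {"person", "kitchen"}},
    {"group": "women", "tags": {"person"}},
    {"group": "men",   "tags": {"person"}},
    {"group": "men",   "tags": {"person", "kitchen"}},
    {"group": "men",   "tags": {"person"}},
]

rates, disparity = tag_rate_disparity(records, "kitchen")
print(rates)      # {'women': 0.67, 'men': 0.33} (approximately)
print(disparity)  # 2.0 -> the tag is twice as frequent for one group
```

In the envisioned human-in-the-loop setting, the group labels and the judgment of whether a flagged disparity actually reflects a harmful stereotype would come from crowdsourced evaluation rather than from a fixed metric alone.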