Matsangidou Maria
Otterbacher Jahna
2019-08-23
<p>Image recognition algorithms that automatically tag or moderate content are crucial in many applications but are increasingly opaque. Given transparency concerns, we focus on understanding how algorithms tag images of people and what they infer about attractiveness. Theoretically, attractiveness has an evolutionary basis, guiding mating behaviors, although it also drives social behaviors. We test whether image-tagging APIs encode biases surrounding attractiveness. We use the Chicago Face Database, which contains images of diverse individuals along with subjective norming data and objective facial measurements. The algorithms encode biases surrounding attractiveness, perpetuating the stereotype that “what is beautiful is good.” Furthermore, women are often misinterpreted as men. We discuss the algorithms’ reductionist nature, their potential to infringe on users’ autonomy and well-being, and the ethical and legal considerations for developers. Future services should monitor algorithms’ behaviors, given their prevalence in the information ecosystem and influence on media.</p>
This work has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreements No 739578 and No 810105, and from the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.
https://doi.org/10.1007/978-3-030-29390-1_14
oai:zenodo.org:3522961
eng
Springer International Publishing
https://zenodo.org/communities/rise-teaming-cyprus
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution Non Commercial No Derivatives 4.0 International
https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode
Algorithmic bias
Attractiveness
Image recognition
Stereotyping
What is Beautiful Continues to be Good: People Images and Algorithmic Inferences on Physical Attractiveness
info:eu-repo/semantics/conferencePaper