Conference paper Open Access

What Makes an Image Tagger Fair? Proprietary Auto-tagging and Interpretations on People Images

Pinar Barlas; Styliani Kleanthous; Kyriakos Kyriakou; Jahna Otterbacher

Image analysis algorithms have been a boon to personalization in digital systems and are now widely available via easy-to-use APIs. However, it is important to ensure that they behave fairly in applications that involve processing images of people, such as dating apps. We conduct an experiment to shed light on the factors influencing perceptions of "fairness." Participants are shown a photo along with two descriptions (human- and algorithm-generated). They are then asked to indicate which is "more fair" in the context of a dating site, and to explain their reasoning. We vary a number of factors, including the gender, race and attractiveness of the person in the photo. While participants generally found human-generated tags to be more fair, API tags were judged as being more fair in one setting: where the image depicted an "attractive," white individual. In their explanations, participants often mention accuracy, as well as the objectivity/subjectivity of the tags in the description. We relate our work to the ongoing conversation about fairness in opaque tools like image tagging APIs, and their potential to result in harm.

This work has been partly supported by a project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 739578 (RISE – Call: H2020-WIDESPREAD-01-2016-2017-TeamingPhase2) and from the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.
Files (5.5 MB)
Name: UMAP_fairnessintagging_authorscopy.pdf
Size: 5.5 MB
md5: 7770255913aba31a4f08837041d666a1
                   All versions   This version
Views                         4              4
Downloads                     5              5
Data volume             27.6 MB        27.6 MB
Unique views                  3              3
Unique downloads              4              4
