What Makes an Image Tagger Fair? Proprietary Auto-tagging and Interpretations on People Images
- 1. Research Centre for Interactive Media, Smart Systems and Emerging Technologies, Nicosia, Cyprus
- 2. Cyprus Center for Algorithmic Transparency, Open University of Cyprus, Nicosia, Cyprus
Description
Image analysis algorithms have been a boon to personalization in digital systems and are now widely available via easy-to-use APIs.
However, it is important to ensure that they behave fairly in applications that involve processing images of people, such as dating apps. We conduct an experiment to shed light on the factors influencing the perception of “fairness.” Participants are shown a photo along with two descriptions (human- and algorithm-generated). They are then asked to indicate which is “more fair” in the context of a dating site, and to explain their reasoning. We vary a number of factors, including the gender, race, and attractiveness of the person in the photo. While participants generally found human-generated tags to be more fair, API tags were judged as more fair in one setting: where the image depicted an “attractive,” white individual. In their explanations, participants often mention accuracy, as well as the objectivity/subjectivity of the tags in the description. We relate our work to the ongoing conversation about fairness in opaque tools like image tagging APIs, and their potential to result in harm.
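For readers unfamiliar with such services, the sketch below illustrates how a people image might be submitted to a proprietary tagger to obtain machine-generated tags. It assumes the Google Cloud Vision Python client purely for illustration; the paper does not necessarily evaluate this particular API.

```python
# Minimal sketch: requesting machine-generated tags for a people image.
# Assumes the google-cloud-vision client library and valid credentials;
# illustrative only, not necessarily a tagger used in the study.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "profile_photo.jpg" is a hypothetical filename.
with open("profile_photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Label detection returns descriptive tags with confidence scores,
# analogous to the algorithm-generated descriptions shown to participants.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```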
Files

| Name | Size |
|---|---|
| UMAP_fairnessintagging_authorscopy.pdf (md5:7770255913aba31a4f08837041d666a1) | 5.5 MB |