Conference paper Open Access

How Do We Talk About Other People? Group (Un)Fairness in Natural Language Image Descriptions

Otterbacher, Jahna; Barlas, Pinar; Kleanthous, Styliani; Kyriakou, Kyriakos

Crowdsourcing plays a key role in developing algorithms for image recognition or captioning. Major datasets, such as MS COCO or Flickr30K, have been built by eliciting natural language descriptions of images from workers. Yet such elicitation tasks are susceptible to human biases, including stereotyping people depicted in images. Given the growing concerns surrounding discrimination in algorithms, as well as in the data used to train them, it is necessary to take a critical look at this practice. We conduct experiments at Figure Eight using a controlled set of people images. Men and women of various races are positioned in the same manner, wearing a grey t-shirt. We prompt workers for 10 descriptive labels, and consider them using the human-centric approach, which assumes reporting bias. We find that “what’s worth saying” about these uniform images often differs as a function of the gender and race of the depicted person, violating the notion of group fairness. Although this diversity in natural language people descriptions is expected and often beneficial, it could result in automated disparate impact if not managed properly.

Files (2.6 MB)
                   All versions   This version
Views              49             49
Downloads          27             27
Data volume        69.2 MB        69.2 MB
Unique views       47             47
Unique downloads   27             27
