Published September 7, 2019 | Version pre-print
Conference paper · Open Access

How Do We Talk About Other People? Group (Un)Fairness in Natural Language Image Descriptions

  • 1. Open University of Cyprus & Research Centre on Interactive Media, Smart Systems and Emerging Technologies
  • 2. Research Centre on Interactive Media, Smart Systems and Emerging Technologies

Description

Crowdsourcing plays a key role in developing algorithms for image recognition and captioning. Major datasets, such as MS COCO and Flickr30K, have been built by eliciting natural language descriptions of images from workers. Yet such elicitation tasks are susceptible to human biases, including the stereotyping of people depicted in images. Given the growing concerns surrounding discrimination in algorithms, as well as in the data used to train them, it is necessary to take a critical look at this practice. We conduct experiments at Figure Eight using a controlled set of people images. Men and women of various races are positioned in the same manner, wearing a grey t-shirt. We prompt workers for 10 descriptive labels, and analyze them following the human-centric approach, which assumes reporting bias. We find that "what's worth saying" about these uniform images often differs as a function of the gender and race of the depicted person, violating the notion of group fairness. Although this diversity in natural language people descriptions is expected and often beneficial, it could result in automated disparate impact if not managed properly.
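The group-fairness reading above can be sketched as a comparison of label usage rates across demographic groups: under this notion, comparable images should elicit comparable label distributions. The sketch below is illustrative only; the labels, group names, and function names are hypothetical and not drawn from the paper's actual dataset.

```python
from collections import Counter

# Hypothetical worker-provided labels for two demographic groups
# (illustrative data, not from the paper's dataset).
labels_group_a = ["person", "t-shirt", "smiling", "professional", "t-shirt"]
labels_group_b = ["person", "t-shirt", "young", "attractive", "smiling"]

def label_rates(labels):
    """Relative frequency of each label within one group."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def rate_gaps(rates_a, rates_b):
    """Per-label difference in usage rates between two groups.

    Large gaps flag labels that are 'worth saying' about one group
    but not the other -- the kind of disparity the paper associates
    with a group-fairness violation."""
    all_labels = set(rates_a) | set(rates_b)
    return {label: rates_a.get(label, 0.0) - rates_b.get(label, 0.0)
            for label in sorted(all_labels)}

gaps = rate_gaps(label_rates(labels_group_a), label_rates(labels_group_b))
for label, gap in gaps.items():
    print(f"{label}: {gap:+.2f}")
```

A positive gap means the label is used more often for group A, a negative gap more often for group B; a gap of zero for every label would satisfy this simple reading of group fairness.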

Files

FULL-OtterbacherJ.20.pdf (2.6 MB; md5:ae75d7118969bdc4ff931e6ed3195920)

Additional details

Funding

RISE – Research Centre on Interactive Media, Smart Systems and Emerging Technologies 739578
European Commission
CyCAT – Cyprus Center for Algorithmic Transparency 810105
European Commission