Conference paper Open Access

Social Cues, Social Biases: Stereotypes in Annotations on People Images

Jahna Otterbacher

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Jahna Otterbacher</dc:creator>
  <dc:description>Human computation is often subject to systematic biases. We consider the case of linguistic biases and their consequences for the words that crowd workers use to describe people images in an annotation task. Social psychologists explain that when describing others, the subconscious perpetuation of stereotypes is inevitable, as we describe stereotype-congruent people and/or in-group members more abstractly than others. In an MTurk experiment we show evidence of these biases, which are exacerbated when an image's "popular tags" are displayed, a common feature used to provide social information to workers. Underscoring recent calls for a deeper examination of the role of training data quality in algorithmic biases, results suggest that it is rather easy to sway human judgment.</dc:description>
  <dc:description>This work has received funding from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No 739578 and the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.

Copyright © 2018, Association for the Advancement of Artificial Intelligence</dc:description>
  <dc:publisher>AAAI Press</dc:publisher>
  <dc:subject>linguistic biases</dc:subject>
  <dc:subject>social stereotypes</dc:subject>
  <dc:subject>social cues</dc:subject>
  <dc:subject>social biases</dc:subject>
  <dc:title>Social Cues, Social Biases: Stereotypes in Annotations on People Images</dc:title>
</oai_dc:dc>