Conference paper Open Access

Social B(eye)as: Human and Machine Descriptions of People Images

Pınar Barlas; Kyriakos Kyriakou; Styliani Kleanthous; Jahna Otterbacher


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="URL">https://zenodo.org/record/3522936</identifier>
  <creators>
    <creator>
      <creatorName>Pınar Barlas</creatorName>
      <affiliation>Research Centre on Interactive Media, Smart Systems and Emerging Technologies (Nicosia, CYPRUS)</affiliation>
    </creator>
    <creator>
      <creatorName>Kyriakos Kyriakou</creatorName>
      <affiliation>Research Centre on Interactive Media, Smart Systems and Emerging Technologies (Nicosia, CYPRUS)</affiliation>
    </creator>
    <creator>
      <creatorName>Styliani Kleanthous</creatorName>
      <affiliation>Research Centre on Interactive Media, Smart Systems and Emerging Technologies (Nicosia, CYPRUS) and Cyprus Center for Algorithmic Transparency, Open University of Cyprus (Latsia, CYPRUS)</affiliation>
    </creator>
    <creator>
      <creatorName>Jahna Otterbacher</creatorName>
      <affiliation>Research Centre on Interactive Media, Smart Systems and Emerging Technologies (Nicosia, CYPRUS) and Cyprus Center for Algorithmic Transparency, Open University of Cyprus (Latsia, CYPRUS)</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Social B(eye)as: Human and Machine Descriptions of People Images</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2019</publicationYear>
  <subjects>
    <subject>APIs</subject>
    <subject>image tagging</subject>
    <subject>social biases</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2019-10-30</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3522936</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.7910/DVN/APZKSS</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/rise-teaming-cyprus</relatedIdentifier>
  </relatedIdentifiers>
  <version>Published</version>
  <rightsList>
    <rights rightsURI="http://creativecommons.org/licenses/by-nc-nd/4.0/legalcode">Creative Commons Attribution Non Commercial No Derivatives 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Image analysis algorithms have become an indispensable tool in our information ecosystem, facilitating new forms of visual communication and information sharing. At the same time, they enable large-scale sociotechnical research which would otherwise be difficult to carry out. However, their outputs may exhibit social bias, especially when analyzing people images. Since most algorithms are proprietary and opaque, we propose a method of auditing their outputs for social biases. To be able to compare how algorithms interpret a controlled set of people images, we collected descriptions across six image tagging APIs. In order to compare these results to human behavior, we also collected descriptions on the same images from crowdworkers in two anglophone regions. While the APIs do not output explicitly offensive descriptions, as humans do, future work should consider if and how they reinforce social inequalities in implicit ways. Beyond computer vision auditing, the dataset of human- and machine-produced tags, and the typology of tags, can be used to explore a range of research questions related to both algorithmic and human behaviors.&lt;/p&gt;</description>
    <description descriptionType="Other">This work has been partly supported by the project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 739578 (RISE – Call: H2020-WIDESPREAD-01-2016-2017-TeamingPhase2) and the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/739578/">739578</awardNumber>
      <awardTitle>Research Center on Interactive Media, Smart System and Emerging Technologies</awardTitle>
    </fundingReference>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/810105/">810105</awardNumber>
      <awardTitle>Cyprus Center for Algorithmic Transparency</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
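The DataCite record above can be consumed programmatically. A minimal sketch using Python's standard-library `xml.etree.ElementTree`, run here against an abridged copy of the record (the namespace URI is the real kernel-4 one; the record string is shortened for brevity):

```python
import xml.etree.ElementTree as ET

# Abridged stand-in for the full DataCite export shown above.
RECORD = """<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
  <identifier identifierType="URL">https://zenodo.org/record/3522936</identifier>
  <creators>
    <creator><creatorName>Pınar Barlas</creatorName></creator>
    <creator><creatorName>Kyriakos Kyriakou</creatorName></creator>
  </creators>
  <titles>
    <title>Social B(eye)as: Human and Machine Descriptions of People Images</title>
  </titles>
  <publicationYear>2019</publicationYear>
</resource>"""

# DataCite elements live in the kernel-4 namespace, so every path
# component needs a namespace prefix mapped via the `namespaces` dict.
NS = {"dc": "http://datacite.org/schema/kernel-4"}

root = ET.fromstring(RECORD)
title = root.findtext("dc:titles/dc:title", namespaces=NS)
year = root.findtext("dc:publicationYear", namespaces=NS)
creators = [c.text for c in
            root.findall("dc:creators/dc:creator/dc:creatorName", namespaces=NS)]

print(title)     # → Social B(eye)as: Human and Machine Descriptions of People Images
print(year)      # → 2019
print(creators)  # → ['Pınar Barlas', 'Kyriakos Kyriakou']
```

The same paths apply to the full export; only the namespace mapping is essential, since un-prefixed paths like `titles/title` will not match namespaced elements.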