Report Open Access

Understanding bias in facial recognition technologies

Leslie, David

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.4050457</identifier>
  <creators>
    <creator>
      <creatorName>Leslie, David</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-9369-1653</nameIdentifier>
      <affiliation>The Alan Turing Institute</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Understanding bias in facial recognition technologies</title>
  </titles>
  <subjects>
    <subject>facial recognition technologies</subject>
    <subject>algorithmic bias</subject>
    <subject>digital ethics</subject>
    <subject>responsible innovation</subject>
    <subject>biometric technologies</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2020-09-26</date>
  </dates>
  <resourceType resourceTypeGeneral="Report"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/4050457</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.4050456</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Over the past couple of years, the growing debate around automated facial recognition has reached a boiling point. As developers have continued to swiftly expand the scope of these kinds of technologies into an almost unbounded range of applications, an increasingly strident chorus of critical voices has sounded concerns about the injurious effects of the proliferation of such systems on impacted individuals and communities. Opponents argue that the irresponsible design and use of facial detection and recognition technologies (FDRTs) threatens to violate civil liberties, infringe on basic human rights and further entrench structural racism and systemic marginalisation. They also caution that the gradual creep of face surveillance infrastructures into every domain of lived experience may eventually eradicate the modern democratic forms of life that have long provided cherished means to individual flourishing, social solidarity and human self-creation. Defenders, by contrast, emphasise the gains in public safety, security and efficiency that digitally streamlined capacities for facial identification, identity verification and trait characterisation may bring. In this explainer, I focus on one central aspect of this debate: the role that dynamics of bias and discrimination play in the development and deployment of FDRTs. I examine how historical patterns of discrimination have made inroads into the design and implementation of FDRTs from their very earliest moments. And I explain the ways in which the use of biased FDRTs can lead to distributional and recognitional injustices. I also describe how certain complacent attitudes of innovators and users toward redressing these harms raise serious concerns about expanding future adoption. The explainer concludes with an exploration of broader ethical questions around the potential proliferation of pervasive face-based surveillance infrastructures and makes some recommendations for cultivating more responsible approaches to the development and governance of these technologies.&lt;/p&gt;</description>
  </descriptions>
</resource>
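A record exported in this DataCite format can be read programmatically. The following is a minimal sketch using only Python's standard library; it embeds a reduced copy of the record above (identifier and title only) and assumes the DataCite kernel-4 namespace URI, which the truncated export omits.

```python
import xml.etree.ElementTree as ET

# Reduced DataCite-style record, as in the export above.
RECORD = """<resource xmlns="http://datacite.org/schema/kernel-4">
  <identifier identifierType="DOI">10.5281/zenodo.4050457</identifier>
  <titles>
    <title>Understanding bias in facial recognition technologies</title>
  </titles>
</resource>"""

# ElementTree requires the namespace to be spelled out in queries.
NS = {"dc": "http://datacite.org/schema/kernel-4"}

root = ET.fromstring(RECORD)
doi = root.find("dc:identifier", NS).text
title = root.find("dc:titles/dc:title", NS).text

print(doi)    # → 10.5281/zenodo.4050457
print(title)  # → Understanding bias in facial recognition technologies
```

Because DataCite places every element in its kernel namespace, an unprefixed `root.find("identifier")` would return nothing; the `NS` mapping makes the queries explicit.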
                    All versions   This version
Views                      1,160          1,160
Downloads                    729            729
Data volume               5.2 GB         5.2 GB
Unique views               1,028          1,028
Unique downloads             649            649

