Report | Open Access

Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal

Leslie, David; Burr, Christopher; Aitken, Mhairi; Katell, Michael; Briggs, Morgan; Rincon, Cami


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Artificial Intelligence</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">AI ethics</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">trustworthy AI</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">AI governance</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">AI assurance</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">human rights due diligence</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">algorithmic impact assessment</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">human rights impact assessment</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Council of Europe</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">stakeholder analysis</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">multi-stakeholder engagement</subfield>
  </datafield>
  <controlfield tag="005">20220206134912.0</controlfield>
  <controlfield tag="001">5981676</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">The Alan Turing Institute</subfield>
    <subfield code="a">Burr, Christopher</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">The Alan Turing Institute</subfield>
    <subfield code="a">Aitken, Mhairi</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">The Alan Turing Institute</subfield>
    <subfield code="a">Katell, Michael</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">The Alan Turing Institute</subfield>
    <subfield code="a">Briggs, Morgan</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">The Alan Turing Institute</subfield>
    <subfield code="a">Rincon, Cami</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">8332735</subfield>
    <subfield code="z">md5:76512d9148b3d176e6d59fa122ddc8a7</subfield>
    <subfield code="u">https://zenodo.org/record/5981676/files/HUDERAF_CoE_Pub.pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2022-02-06</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="o">oai:zenodo.org:5981676</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">The Alan Turing Institute</subfield>
    <subfield code="a">Leslie, David</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">ES/T007354/1</subfield>
    <subfield code="a">PATH-AI: Mapping an Intercultural Path to Privacy, Agency, and Trust in Human-AI Ecosystems</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">EP/T001569/1</subfield>
    <subfield code="a">Strategic Priorities Fund - AI for Science, Engineering, Health and Government</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;Following on from the publication of its&amp;nbsp;&lt;em&gt;Feasibility Study&amp;nbsp;&lt;/em&gt;in December 2020, the Council of Europe&amp;rsquo;s Ad Hoc Committee on Artificial Intelligence (and its subgroups) initiated efforts to formulate and draft its&amp;nbsp;&lt;em&gt;Possible elements of a legal framework on artificial intelligence, based on the Council of Europe&amp;rsquo;s standards on human rights, democracy, and the rule of law&lt;/em&gt;. This document was ultimately adopted by the CAHAI plenary in December 2021. To support this effort, The Alan Turing Institute undertook a programme of research that explored the governance processes and practical tools needed to operationalise the integration of human right due diligence with the assurance of trustworthy AI innovation practices.&lt;/p&gt;

&lt;p&gt;The resulting output, &lt;em&gt;Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems: A proposal,&lt;/em&gt;&amp;nbsp;was completed and submitted to the Council of Europe in September 2021. It presents an end-to-end approach to the assurance of AI project lifecycles that integrates context-based risk analysis and appropriate stakeholder engagement with comprehensive impact assessment, and transparent risk management, impact mitigation, and innovation assurance practices. Taken together, these interlocking processes constitute a&amp;nbsp;Human Rights, Democracy and the Rule of Law Assurance Framework&amp;nbsp;(HUDERAF). The HUDERAF combines the procedural requirements for principles-based human rights due diligence with the governance mechanisms needed to set up technical and socio-technical guardrails for responsible and trustworthy AI innovation practices. Its purpose is to provide an accessible and user-friendly set of mechanisms for facilitating compliance with a binding legal framework on artificial intelligence, based on the Council of Europe&amp;rsquo;s standards on human rights, democracy, and the rule of law, and to ensure that AI innovation projects are carried out with appropriate levels of public accountability, transparency, and democratic governance.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.5981675</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.5981676</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">report</subfield>
  </datafield>
</record>
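
The exported record above can be read back programmatically. The following is a minimal sketch, not part of the record itself, assuming Python's standard xml.etree.ElementTree module and a hypothetical local copy of the export saved as record.xml; it extracts the title (tag 245), the DOI (tag 024), and the keyword list (tag 653) from the MARC21/slim fields.

# Minimal sketch: parse the MARC21/slim export with Python's standard library.
# Assumes the XML above has been saved locally as "record.xml" (hypothetical filename).
import xml.etree.ElementTree as ET

NS = {"marc": "http://www.loc.gov/MARC21/slim"}

def datafields(record, tag):
    """Return all datafield elements carrying the given MARC tag."""
    return record.findall(f"marc:datafield[@tag='{tag}']", NS)

def subfield(field, code):
    """Return the text of the first subfield with the given code, or None."""
    sf = field.find(f"marc:subfield[@code='{code}']", NS)
    return sf.text if sf is not None else None

record = ET.parse("record.xml").getroot()

title = subfield(datafields(record, "245")[0], "a")               # tag 245: title
doi = subfield(datafields(record, "024")[0], "a")                 # tag 024: DOI
keywords = [subfield(f, "a") for f in datafields(record, "653")]  # tag 653: keywords

print(title)
print(doi)
print(", ".join(keywords))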
                    All versions    This version
Views               553             553
Downloads           356             356
Data volume         3.0 GB          3.0 GB
Unique views        505             505
Unique downloads    319             319
