Report Open Access
Leslie, David; Burr, Christopher; Aitken, Mhairi; Katell, Michael; Briggs, Morgan; Rincon, Cami
<?xml version='1.0' encoding='utf-8'?> <resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd"> <identifier identifierType="DOI">10.5281/zenodo.5981676</identifier> <creators> <creator> <creatorName>Leslie, David</creatorName> <givenName>David</givenName> <familyName>Leslie</familyName> <affiliation>The Alan Turing Institute</affiliation> </creator> <creator> <creatorName>Burr, Christopher</creatorName> <givenName>Christopher</givenName> <familyName>Burr</familyName> <affiliation>The Alan Turing Institute</affiliation> </creator> <creator> <creatorName>Aitken, Mhairi</creatorName> <givenName>Mhairi</givenName> <familyName>Aitken</familyName> <affiliation>The Alan Turing Institute</affiliation> </creator> <creator> <creatorName>Katell, Michael</creatorName> <givenName>Michael</givenName> <familyName>Katell</familyName> <affiliation>The Alan Turing Institute</affiliation> </creator> <creator> <creatorName>Briggs, Morgan</creatorName> <givenName>Morgan</givenName> <familyName>Briggs</familyName> <affiliation>The Alan Turing Institute</affiliation> </creator> <creator> <creatorName>Rincon, Cami</creatorName> <givenName>Cami</givenName> <familyName>Rincon</familyName> <affiliation>The Alan Turing Institute</affiliation> </creator> </creators> <titles> <title>Human rights, democracy, and the rule of law assurance framework for AI systems: A proposal</title> </titles> <publisher>Zenodo</publisher> <publicationYear>2022</publicationYear> <subjects> <subject>Artificial Intelligence</subject> <subject>AI ethics</subject> <subject>trustworthy AI</subject> <subject>AI governance</subject> <subject>AI assurance</subject> <subject>human rights due diligence</subject> <subject>algorithmic impact assessment</subject> <subject>human rights impact assessment</subject> <subject>Council of Europe</subject> 
<subject>stakeholder analysis</subject> <subject>multi-stakeholder engagement</subject> </subjects> <dates> <date dateType="Issued">2022-02-06</date> </dates> <resourceType resourceTypeGeneral="Report"/> <alternateIdentifiers> <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5981676</alternateIdentifier> </alternateIdentifiers> <relatedIdentifiers> <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.5981675</relatedIdentifier> </relatedIdentifiers> <rightsList> <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights> <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights> </rightsList> <descriptions> <description descriptionType="Abstract"><p>Following on from the publication of its&nbsp;<em>Feasibility Study&nbsp;</em>in December 2020, the Council of Europe&rsquo;s Ad Hoc Committee on Artificial Intelligence (and its subgroups) initiated efforts to formulate and draft its&nbsp;<em>Possible elements of a legal framework on artificial intelligence, based on the Council of Europe&rsquo;s standards on human rights, democracy, and the rule of law</em>. This document was ultimately adopted by the CAHAI plenary in December 2021. To support this effort, The Alan Turing Institute undertook a programme of research that explored the governance processes and practical tools needed to operationalise the integration of human rights due diligence with the assurance of trustworthy AI innovation practices.</p> <p>The resulting output, <em>Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems: A proposal,</em>&nbsp;was completed and submitted to the Council of Europe in September 2021. 
It presents an end-to-end approach to the assurance of AI project lifecycles that integrates context-based risk analysis and appropriate stakeholder engagement with comprehensive impact assessment, and transparent risk management, impact mitigation, and innovation assurance practices. Taken together, these interlocking processes constitute a&nbsp;Human Rights, Democracy and the Rule of Law Assurance Framework&nbsp;(HUDERAF). The HUDERAF combines the procedural requirements for principles-based human rights due diligence with the governance mechanisms needed to set up technical and socio-technical guardrails for responsible and trustworthy AI innovation practices. Its purpose is to provide an accessible and user-friendly set of mechanisms for facilitating compliance with a binding legal framework on artificial intelligence, based on the Council of Europe&rsquo;s standards on human rights, democracy, and the rule of law, and to ensure that AI innovation projects are carried out with appropriate levels of public accountability, transparency, and democratic governance.</p></description> </descriptions> <fundingReferences> <fundingReference> <funderName>Research Councils UK</funderName> <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000690</funderIdentifier> <awardNumber awardURI="info:eu-repo/grantAgreement/RCUK/ESRC/ES/T007354/1/">ES/T007354/1</awardNumber> <awardTitle>PATH-AI: Mapping an Intercultural Path to Privacy, Agency, and Trust in Human-AI Ecosystems</awardTitle> </fundingReference> <fundingReference> <funderName>Research Councils UK</funderName> <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000690</funderIdentifier> <awardNumber awardURI="info:eu-repo/grantAgreement/RCUK/EPSRC/EP%2FT001569%2F1/">EP/T001569/1</awardNumber> <awardTitle>Strategic Priorities Fund - AI for Science, Engineering, Health and Government</awardTitle> </fundingReference> </fundingReferences> </resource>
| | All versions | This version |
|---|---|---|
| Views | 553 | 553 |
| Downloads | 356 | 356 |
| Data volume | 3.0 GB | 3.0 GB |
| Unique views | 505 | 505 |
| Unique downloads | 319 | 319 |