Published November 30, 2020 | Version v1
Proposal · Open Access

Making Audits Meaningful – Overseeing the Use of AI in Content Moderation

  • 1. Texas A&M University, School of Law, USA
  • 2. University of Luxembourg, Luxembourg
  • 3. George Washington University, Elliott School of International Affairs, USA

Description

While platforms use increasingly sophisticated technology to make content-related decisions that shape public discourse, firms disclose little about how their content-moderation technologies actually function. Such sparse industry disclosure about algorithmic content moderation is unacceptable, because regulators need to understand the platform ecosystem in order to design evidence-based regulation and monitor the risks of using AI in content moderation. This white paper explains how and why audits, a specific type of transparency measure, should be mandated by law and structured around four principles: independence, access, publicity, and resources. We first unpack the different types of transparency, then situate audits within this framework and describe their risks and benefits. The paper concludes by deriving the four principles from the preceding analysis.

Files (254.7 kB)

[EoD] Policy Paper Auditing AI.pdf
