Published July 30, 2024 | Version v1

Policy Report on Generative Artificial Intelligence

University of Edinburgh

Description

Our study is based on a comprehensive literature review of text-to-image generative models, identifying four high-priority risks associated with generative artificial intelligence (AI):

  1. At-scale production of discriminatory content.
  2. At-scale toxic and harmful (mis)use.
  3. Rapid and cheap production of misinformation and disinformation.
  4. Privacy and copyright infringement. 

Recognising the importance of a well-informed and holistic approach to AI development and regulation, we show how the UK’s pro-innovation framework for AI regulation can be adapted to regulate generative AI models and offset the aforementioned risks.

We propose that the UK’s financial support for generative AI model development be aligned with the regulatory recommendations outlined in this report. Specifically, we recommend that a portion of this investment be allocated to implementing socio-technical safeguards that mitigate the four high-priority risks.

We argue that establishing strong connections among academic, policy, and regulatory institutions is essential for effective knowledge sharing and application. Such connections preserve the integrity of each form of expertise and contribute to a well-rounded, informed strategy for generative AI development and regulation.

Files

Kasirzadeh et al_Policy Report on Generative Artificial Intelligence.pdf (695.6 kB)

Additional details

Funding

Arts and Humanities Research Council
Bridging Responsible AI Divides AH/X007146/1