Policy Report on Generative Artificial Intelligence
Description
Our study is based on a comprehensive literature review of text-to-image generative models, identifying four high-priority risks associated with generative artificial intelligence (AI):
- At-scale production of discriminatory content.
- At-scale toxic and harmful (mis)use.
- Rapid and cheap production of misinformation and disinformation.
- Privacy and copyright infringement.
Recognising the importance of a well-informed and holistic approach to AI development and regulation, we show how the UK’s pro-innovation framework for AI regulation can be adapted to govern generative AI models and mitigate these risks.
We propose that the UK’s financial support for generative AI model development be aligned with the regulatory recommendations outlined in this report. Specifically, we recommend that a portion of this investment be allocated to implementing socio-technical safeguards that mitigate the high-priority risks.
We argue that strong connections among academic, policy, and regulatory institutions are essential for effective knowledge sharing and application. Such connections preserve the integrity of all forms of knowledge and contribute to a well-rounded, informed strategy for generative AI development and regulation.
Files
Name | Size
---|---
Kasirzadeh et al_Policy Report on Generative Artificial Intelligence.pdf (md5:3d4d72e1ac25f5ba2165dc763a7c28ce) | 695.6 kB
Additional details
Funding
- Arts and Humanities Research Council
- Bridging Responsible AI Divides AH/X007146/1