Published May 24, 2024 | Version v1
Working paper | Open Access

Private Ordering and Generative AI: What Can We Learn From Model Terms and Conditions?

  • 1. University of Newcastle
  • 2. University of Glasgow
  • 3. University of Edinburgh

Description

Large or “foundation” models, sometimes also described as General Purpose Artificial Intelligence (GPAI), are now being widely used to generate not just text and images but also video, games, music and code from prompts or other inputs. Although this “generative AI” revolution is clearly driving new opportunities for innovation and creativity, it is also enabling the easy and rapid dissemination of harmful speech such as deepfakes, hate speech and disinformation, and potentially infringing existing laws such as copyright and privacy. Much attention has recently been paid to how bespoke legislation might be drafted to control these risks and harms, notably in the EU, US and China, as well as to how existing laws can be tweaked or supplemented. However, private ordering by generative AI providers, via user contracts, licences, privacy policies and fuzzier materials such as acceptable use guidelines or “principles”, has so far attracted less attention. Yet across the globe, and pending the coming into force of new rules in a number of countries, terms and conditions (T&C) may be the most pertinent form of governance available.

Drawing on the extensive history of study of the terms and conditions (T&C) and privacy policies of social media companies, this paper reports the results of pilot empirical work conducted in January-March 2023, in which T&C were mapped across a representative sample of generative AI providers as well as some downstream deployers. Our study covered providers of multiple modes of output (text, image, etc.), of small and large size, and of varying countries of origin. Although the study examined terms relating to a wide range of issues, including content restrictions and moderation, dispute resolution and consumer liability, the focus here is on copyright and data protection. Our early findings indicate the emergence of a “platformisation paradigm”, in which providers of generative AI attempt to position themselves as neutral intermediaries, similar to search and social media platforms, but without the governance increasingly imposed on those actors, and in contradistinction to their function as content generators rather than mere hosts of third-party content. This study concludes that, in light of these findings, new laws being drafted to rein in the power of “big tech” must be reconsidered carefully if the imbalance of power between users and platforms in the social media era, only now being combatted, is not to be repeated via the private ordering of the providers of generative AI.

Files

2024_05_GenAI.pdf (1.0 MB)
md5:8b280a08deedf9112507298d8ba34767