Working paper Open Access
Platforms have emerged as a new kind of regulatory object over a short period of time. There is accelerating global regulatory competition to conceptualise and govern online platforms in response to social, economic and political discontent – articulated in terms such as ‘fake news’, ‘online harms’ or ‘dark patterns’.
In this paper, we empirically map the emergence of the regulatory field of platform regulation in the UK. We focus on the 18-month period between September 2018 and February 2020, which saw an upsurge of regulatory activism. Through a legal-empirical content analysis of eight official reports issued by the UK government, parliamentary committees and regulatory agencies, we (1) code over 80 distinct online harms to which regulation is being asked to respond; (2) identify eight areas of law referred to in the reports (data protection and privacy, competition, education, media and broadcasting, consumer protection, tax law and financial regulation, intellectual property law, security law); (3) analyse nine agencies mentioned in the reports for their statutory and accountability status in law, and identify their centrality in the regulatory network; (4) assess their regulatory powers (advisory, investigatory, enforcement) and the regulatory toolbox of potential measures ascribed to them; and (5) quantify the number of mentions platform companies received in the reports analysed.
We find that Ofcom (the communications regulator) and the CMA (the Competition and Markets Authority) are the most central actors in the regulatory field, with the Information Commissioner (the data regulator) following close behind. We find that security- and terrorism-related interventions remain particularly obscure and hard to capture with a socio-legal analysis of public documents.
We find that the political focus is overwhelmingly on a handful of US multinational companies. Just two companies, Google and Facebook, account for three-quarters of the references made to firms in the documents we examined. Six Chinese firms are mentioned, and two EU firms. Not a single UK-headquartered company appears. This could be interpreted as a focus on defending national sovereignty that has crowded out questions of market entry or innovation in the UK.
We find that the regulatory agenda is driven by an ever-wider list of harms, with child protection, security and misinformation concerns surfacing in many different forms. We also identify an amorphous and deep disquiet with lawful but socially undesirable activities. We suggest that this ‘moral panic’ has engendered an epistemic blind spot regarding the processual questions that should be at the core of rule-governed regulation: how to monitor (by way of information-gathering powers), how to trigger intervention, and how to remove and prevent certain kinds of content. Filtering technologies, processes of notification, redress mechanisms, and transparency and audit requirements all need to be addressed. The question arises as to whether the emergent recourse to codes of practice or codes of conduct (for example, by delegating content moderation functions to private firms) will be appropriate to address the wide range of regulatory challenges now faced.
We delineate a further epistemic gap – the effects of platform regulation on cultural production and consumption. Platforms’ roles as cultural gatekeepers, in governing information flows (content identification, rankings, recommendations) and in directing remuneration, remain poorly understood.