Published October 27, 2024 | Version v1

How might Generative AI affect Research Integrity?

Description

The UK Committee on Research Integrity notes that generative AI is the next stress test of research integrity across the sector, but also that the research system has a track record of adapting and responding to emerging and disruptive technologies. We also recognise that AI could strengthen research integrity, for example through error checking and the analysis of larger and more varied data sets. The Committee proposes that the five principles of the Concordat to Support Research Integrity (honesty, transparency, rigour, accountability, and care and respect) provide an existing framework, familiar to the community, that can be used to keep research integrity central as the use of AI in the sector grows rapidly.

The following was developed by the Committee's AI working group and refined with input from several contributors across the UK research sector and internationally, to whom we are grateful. References to "AI" mean publicly available generative artificial intelligence tools unless specified otherwise.

Committee membership at time of publication: Andrew George (co-chair), Chris Graf*, Ian Gilmore*, Jane Alfred, Jeremy Watson*, Jil Matheson*, Louise Dunlop, Maria Delgado, Miles Padgett, Nandini Das, Rachael Gooberman-Hill (co-chair), Ralitsa Madsen.

*Denotes Committee members who are part of the AI working group.

Files

How-might-AI-affect-Research-Integrity.pdf (169.7 kB)
md5:2f06916ad374974ba228d242f4dd8317