Published August 11, 2024 | Version v1
Conference paper | Open Access

Disinformation Capabilities of Large Language Models

  • 1. Kempelen Institute of Intelligent Technologies
  • 2. Brno University of Technology

Description

Automated disinformation generation is often listed as one of the risks of large language models (LLMs). The theoretical ability to flood the information space with disinformation content might have dramatic consequences for democratic societies around the world. This paper presents a comprehensive study of the disinformation capabilities of the current generation of LLMs to generate false news articles in the English language. In our study, we evaluated the capabilities of 10 LLMs using 20 disinformation narratives. We evaluated several aspects of the LLMs: how well they generate news articles, how strongly they tend to agree or disagree with the disinformation narratives, how often they generate safety warnings, etc. We also evaluated the abilities of detection models to detect these articles as LLM-generated. We conclude that LLMs are able to generate convincing news articles that agree with dangerous disinformation narratives.

Files

2024.acl-long.793.pdf (480.8 kB)
md5:218686f31528e9ffed15d0a884546eb8

Additional details

Funding

  • European Commission: VIGILANT – Vital IntelliGence to Investigate ILlegAl DisiNformaTion (grant 101073921)
  • European Commission: vera.ai – VERification Assisted by Artificial Intelligence (grant 101070093)
  • European Commission: DisAI – Improving scientific excellence and creativity in combating disinformation with artificial intelligence and language technologies (grant 101079164)