mitmedialab/deceptive-AI: Release 1.0
Description
AI systems that generate deceptive explanations can amplify beliefs in false information
A repository for the paper "AI systems that generate deceptive explanations can amplify beliefs in false information," TBD.
Authors: Valdemar Danry1, Pat Pataranutaporn1, Matthew Groh2, Ziv Epstein1, Pattie Maes1
1 MIT Media Lab, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
2 Northwestern University, Evanston, IL, United States
* e-mail: vdanry[at]mit.edu
Abstract
Artificial intelligence systems, such as large language models (LLMs), can not only generate responses on a wide range of topics but also provide supportive explanations for those responses, thereby bolstering their credibility. However, these systems can also generate plausible yet deceptive explanations, such as false political commentary and manipulative decision support, which could be used to mislead individuals for malicious purposes. We investigate the impact of AI-generated deceptive explanations on people, and the personal factors that mediate their effects, in a pre-registered experiment with 23,840 observations from 1,192 participants. We demonstrate that, with a single prompt, a large language model can easily be used to generate deceptive explanations of information the public may encounter. Our study reveals that AI-generated deceptive explanations significantly increase belief in false headlines and decrease belief in true headlines. Compared with deceptive classifications without explanations, AI-generated deceptive explanations are more persuasive. We also investigate the personal factors that moderate the influence of AI-generated deceptive explanations, such as measures of cognitive reflection, trust in AI, and prior knowledge. These findings indicate that the proliferation of advanced AI systems could reshape the misinformation landscape, underlining the need for further study on mitigating these effects.
For more information: https://www.media.mit.edu/projects/beliefs-about-ai/overview/
Files (2.5 MB)

Name | Size | MD5
---|---|---
mitmedialab/deceptive-AI-Release.zip | 2.5 MB | md5:43c03e0eb38bb48bd7e02733f3b18236
Additional details
Related works
- Is supplement to
- https://github.com/mitmedialab/deceptive-AI/tree/Release (URL)