A broad taxonomy of AI chatbot harms caused to individual users
Authors/Creators
- Markham, Ella (Researcher)1, 2
- Remfry, Elizabeth (Researcher)1, 3
- Amaechi-Okorie, Onyedikachi Hope (Researcher)1
- Anshur, Ramla (Researcher)1, 4
- Auma, Jacqueline (Researcher)1
- Day, Evie (Researcher)1
- Dodsworth, Poppy (Research group)1
- Fellowes, Craig (Researcher)1
- Fries, Ottavia (Researcher)1, 5
- Hadzhieva, Tanya (Researcher)1
- LeClair, Beckett (Researcher)1
- Martins, Bruna (Researcher)1
- Palette, Liam (Researcher)1, 6
- Rawat, Jayanshi (Researcher)1
- Sehra, Amrit (Researcher)1
- Duarte, Tania (Contact person)1
Description
AI chatbots have been integrated into many aspects of daily life, often with little discussion or acknowledgement of the harms this technology may cause. In this paper, we conduct a narrative review of the current and potential harms that can be caused through interaction with AI chatbots. Literature was identified from 2024-2025 sources; because these technologies are relatively new and the field is evolving rapidly, it included sources from the media, news and technology journalism as well as academic sources. We identified 11 groups of harms evidenced by individual users of chatbots, which are in addition to any societal, rights-based or environmental harms caused by the widespread use of generative AI in AI chatbots. We provide definitions and summaries for each harm and organize them into a structured taxonomy. The identified harms are: data exploitation and loss of privacy, emotional manipulation, exposure to sexual content, false information and misinformation, financial exploitation, impact on critical thinking, impact on real relationships, language standardization, overdependency and addiction, physical or psychological harm, and propagation of demographic bias. We suggest the potential net effect such individual harms may have on society, but propose that, even taken on their own, the adverse impacts of AI chatbot use on individuals present a broad range of hazards, many of them typically underreported. The extent of these harms means that any individual policy, technical or literacy approach aimed at mitigating specific harms falls short of what is required to address the overarching and widespread harms of AI chatbots. We hope that this review provides a starting point and encourages a thorough investigation of all the harms in totality, and we welcome further additions to this taxonomy.
Files
A broad taxonomy of AI chatbot harms caused to individual users_version_2_preprint.pdf (1.1 MB)
md5:ac43593ce5dac7a762cf2856802461ca