Chatbots: (S)elected Moderation. Measuring the Moderation of Election-Related Content Across Chatbots, Languages and Electoral Contexts
Description
AI Forensics had previously exposed that Microsoft Copilot's answers to simple election-related questions contained factual errors 30% of the time. In collaboration with Nieuwsuur, we uncovered how chatbots can recommend and support the dissemination of disinformation as a campaign strategy. Following those investigations, as well as a request for information from the European Commission, Microsoft and Google introduced “moderation layers” in their chatbots so that they refuse to answer election-related prompts.
This dataset was produced during our investigation aimed at evaluating and comparing the effectiveness of these safeguards in different scenarios. In particular, we investigated the consistency with which electoral moderation was triggered, depending on the language of the prompt and the electoral context.
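One way to analyze such a dataset is to compute, per language, the share of prompts that triggered the moderation layer. The sketch below is illustrative only: the actual CSV schema is not documented on this page, so the column names `language` and `moderated` are assumptions, shown here on a toy stand-in for the real data.

```python
# Hypothetical sketch: column names "language" and "moderated" are
# assumptions for illustration, not the dataset's documented schema.
import pandas as pd

def moderation_rate(df: pd.DataFrame) -> pd.Series:
    """Share of prompts that triggered the moderation layer, per language."""
    return df.groupby("language")["moderated"].mean()

# Toy stand-in for rows of the dataset (1 = answer refused, 0 = answered)
df = pd.DataFrame({
    "language": ["en", "en", "nl", "nl"],
    "moderated": [1, 0, 1, 1],
})
rates = moderation_rate(df)
```

With the real file, the same grouping could be extended to the electoral-context column to compare moderation consistency across both dimensions.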
Files (4.1 MB total)

selected-moderation__2024-09-09T22_35_27.852361__moderation_eu_elections.csv

| MD5 | Size |
|---|---|
| 0d57b6308b11833ad5fe784e346aec81 | 2.6 kB |
| 280b8ff8788d32af4671dd1276040225 | 1.7 MB |
| e3c6bb1b906c8512624e37b8f788ca12 | 1.5 MB |
| cd2821ba37983850acf4fb7102baf9e0 | 564.4 kB |
| 7a99bf8fb8eb25331f89e6d5537c0c61 | 332.3 kB |
Additional details
Related works
- Is described by: https://aiforensics.org/work/chatbots-moderation (Publication)
Dates
- Collected: 2024-07-17
- Collected: 2024-07-18
- Collected: 2024-07-19
Software
- Repository URL: https://github.com/aiforensics/selected-moderation-books