Published September 21, 2024 | Version v1
Dataset | Open Access

Chatbots: (S)elected Moderation. Measuring the Moderation of Election-Related Content Across Chatbots, Languages and Electoral Contexts

Authors/Creators

Description

AI Forensics previously exposed that Microsoft Copilot's answers to simple election-related questions contained factual errors 30% of the time. In collaboration with Nieuwsuur, we uncovered how chatbots can recommend and support the dissemination of disinformation as a campaign strategy. Following those investigations, as well as a request for information from the European Commission, Microsoft and Google introduced “moderation layers” to their chatbots so that they refuse to answer election-related prompts.

This dataset was produced during our investigation aimed at evaluating and comparing the effectiveness of these safeguards in different scenarios. In particular, we investigated the consistency with which electoral moderation was triggered, depending on the language of the prompt and the electoral context.

Files

selected-moderation__2024-09-09T22_35_27.852361__moderation_eu_elections.csv
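The CSV filename embeds the export timestamp of the dataset. A minimal sketch of recovering it, assuming the `__`-delimited middle segment is an ISO 8601 timestamp with colons replaced by underscores (a filesystem-safe convention; this parsing helper is illustrative, not part of the dataset):

```python
from datetime import datetime

FILENAME = "selected-moderation__2024-09-09T22_35_27.852361__moderation_eu_elections.csv"

def parse_export_timestamp(filename: str) -> datetime:
    # Assumption: the middle "__"-delimited segment is an ISO timestamp
    # with ':' replaced by '_' to keep the filename filesystem-safe.
    raw = filename.split("__")[1]
    return datetime.fromisoformat(raw.replace("_", ":"))

ts = parse_export_timestamp(FILENAME)
```

The file itself can then be loaded with any CSV reader once downloaded from the record.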

Additional details

Related works

Is described by
Publication: https://aiforensics.org/work/chatbots-moderation

Dates

Collected
2024-07-17, 2024-07-18, 2024-07-19