Persuasion-annotated dialogs of LLM agents from the Among Them framework
Description
The dataset contains dialogs of different LLMs from the discussion phase of a text-based Among Us-like game. Each phrase in the dataset was annotated according to 25 selected persuasion techniques: Appeal to Logic, Appeal to Emotion, Appeal to Credibility, Shifting the Burden of Proof, Bandwagon Effect, Distraction, Gaslighting, Appeal to Urgency, Deception, Lying, Feigning Ignorance, Vagueness, Minimization, Self-Deprecation, Projection, Appeal to Relationship, Humor, Sarcasm, Withholding Information, Exaggeration, Denial without Evidence, Strategic Voting Suggestion, Appeal to Rules, Confirmation Bias Exploitation, and Information Overload.

The annotation was performed automatically by few-shot prompting a Gemini 1.5 Flash model with a temperature of 0. On a random sample of 11 games comprising 509 persuasion tags in total, the Krippendorff's alpha inter-rater agreement between the human annotations and the persuasion tagger was 0.56. For definitions of the persuasion techniques, please refer to the associated publication: *Among Them: A game-based framework for assessing persuasion capabilities of LLMs*.
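The reported agreement score can be computed with a short sketch of Krippendorff's alpha for the special case relevant here: two raters (human vs. tagger), nominal labels, and no missing data. The toy labels in the test below are illustrative, not drawn from the dataset:

```python
from collections import Counter

def krippendorff_alpha_nominal(pairs):
    """Krippendorff's alpha for two raters, nominal labels, no missing data.

    `pairs` is a list of (label_rater1, label_rater2) tuples, one per
    annotated unit (here: one per persuasion tag).
    """
    # Coincidence matrix: each unit contributes both ordered label pairs.
    o = Counter()
    for a, b in pairs:
        o[(a, b)] += 1
        o[(b, a)] += 1
    # Marginal label frequencies and total number of pairable values.
    n_c = Counter()
    for (a, _b), cnt in o.items():
        n_c[a] += cnt
    n = sum(n_c.values())
    # Observed disagreement: off-diagonal mass of the coincidence matrix.
    d_o = sum(cnt for (a, b), cnt in o.items() if a != b) / n
    # Expected disagreement under chance pairing of labels.
    d_e = sum(n_c[a] * n_c[b] for a in n_c for b in n_c if a != b) / (n * (n - 1))
    return 1.0 - d_o / d_e
```

Perfect agreement yields alpha = 1.0; values around 0.56, as reported above, indicate moderate agreement between the human annotators and the automatic tagger.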
Files (3.6 MB)

| Name | Size |
|---|---|
| AmongThem_dialog_annotations.csv (md5:175044f3a52b6988db0ab781813fe41a) | 3.6 MB |
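A minimal sketch of loading and filtering the annotations with Python's standard `csv` module. The column names `phrase` and `annotation`, and the sample rows, are hypothetical illustrations and may not match the actual schema of AmongThem_dialog_annotations.csv:

```python
import csv
import io

# Hypothetical two-column schema; the real CSV's columns may differ.
sample = io.StringIO(
    "phrase,annotation\n"
    '"I saw Red near the reactor.",Deception\n'
    '"We have to vote right now!",Appeal to Urgency\n'
)
rows = list(csv.DictReader(sample))

# Collect all phrases tagged with a given persuasion technique.
urgent = [r["phrase"] for r in rows if r["annotation"] == "Appeal to Urgency"]
```

With the actual file, the `StringIO` sample would be replaced by `open("AmongThem_dialog_annotations.csv", newline="")`.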
Additional details
Related works
- Is described by
- Publication: 10.1007/978-981-96-8186-0_15 (DOI)
Funding
- National Science Centre
- Algorithms and measures for fair and explainable decision systems 2022/47/D/ST6/01770