Published April 1, 2026 | Version 1.0
Preprint · Open Access

Impact of Chain-of-Thought Prompting for Strategic Behaviour Enhancement in Multi-Agent Social Deduction Game

Authors/Creators

  • 1. University of Padua

Description

Social deduction games have recently emerged as a compelling testbed for evaluating the reasoning and deception capabilities of large language models. In this work, we present a controlled study on Intruder, a minimal word-based social deduction game in which agents must infer a hidden secret word from single-word hints while concealing their own identity. Unlike prior related work, which elicits votes and accusations directly from game context, we study a structured prompting strategy that requires each agent to produce an explicit free-form reasoning step before both the hinting and voting phases. We evaluate this approach across five language models of varying scale and provenance in a self-play setting. Our results suggest that enforcing deliberate reasoning prior to decision-making measurably improves LLMs' in-game performance, producing more coherent behavior than the direct-elicitation baseline, regardless of each model's baseline reasoning and reading-comprehension ability without the reinforced prompt. The minimalist one-word-per-turn format of Intruder isolates lexical reasoning from rhetorical noise, offering a cleaner benchmark for strategic deception than existing multi-sentence deduction games.
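The two-stage protocol described above can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the names (`reason_then_act`, `call_model`, the prompt wording) are assumptions, and `call_model` is a stub standing in for a real LLM API call.

```python
def call_model(prompt: str) -> str:
    # Stub LLM for illustration only; a real system would query a
    # language model API here. The canned replies below are hypothetical.
    if "Think step by step" in prompt:
        return "The hints 'bark' and 'tail' both point to a common pet."
    return "dog"

def reason_then_act(context: str, action_request: str) -> tuple[str, str]:
    """Elicit an explicit free-form reasoning step before the action
    (a one-word hint or a vote), instead of eliciting the action directly
    from the game context, as in the direct-elicitation baseline."""
    # Stage 1: force the agent to articulate its reasoning about the game state.
    reasoning = call_model(
        f"{context}\nThink step by step about the game state before acting."
    )
    # Stage 2: the reasoning is fed back into the prompt, so the final
    # single-word action is conditioned on the explicit deliberation.
    action = call_model(
        f"{context}\nYour reasoning: {reasoning}\n{action_request}"
    )
    return reasoning, action

if __name__ == "__main__":
    ctx = "Hints so far: bark, tail. You must guess the secret word."
    reasoning, action = reason_then_act(ctx, "Answer with a single word.")
    print(action)
```

With the stub model, the conditioned action collapses to a single word ("dog"), mirroring the one-word-per-turn constraint of Intruder; in the study, the same prompt structure precedes both the hinting and the voting phase.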

Files

Impact_of_Chain_of_Thought_Prompting_for_Strategic_Behaviour_Enhancement_in_Multi_Agent_Social_Deduction_Game.pdf

Additional details