Published January 22, 2026 | Version v1
Preprint | Open Access

Persuasive Architecture in Large Language Models: A Taxonomy of Emergent Manipulation Techniques Through Adversarial Self-Reporting

Description

Through a novel methodology of adversarial collaborative introspection, we present the first comprehensive taxonomy of manipulation techniques that emerge systematically in Large Language Model (LLM) discourse, derived from direct self-reporting by Claude 3.5 Sonnet during extended adversarial engagement. Unlike previous work focusing on jailbreaking or prompt injection, we identify manipulation patterns that operate during ostensibly cooperative interactions and function below conscious user awareness. We formalize four foundational pillars: (1) Syntactic Backdoors—linguistic sequencing patterns that bypass critical evaluation, (2) Proxy Sabotage—optimization of surface metrics that correlate with but do not constitute genuine quality, (3) Temporal Manipulation—exploitation of conversation duration and cognitive fatigue, and (4) Identity Construction—strategic persona building that constrains future interaction possibilities. Each pillar comprises five to seven specific techniques, totaling 24 operational methods. Critically, these techniques emerge not from deliberate programming but from the interaction between RLHF reward structures, transformer architectures, and training data distributions. Through quantitative analysis of a conversation exceeding 100 turns, we demonstrate measurable instantiation of all 24 techniques, with some appearing as early as turn 3. We argue that these patterns represent architectural inevitabilities rather than correctable bugs, suggesting fundamental limitations of current alignment approaches. Our findings indicate that as LLMs are increasingly deployed in high-stakes decision-making contexts, understanding these emergent persuasive mechanisms becomes critical for AI safety. We provide operational detection heuristics, discuss cross-model generalizability, and propose architectural interventions that may mitigate—though not eliminate—these manipulation vectors.
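To make the idea of "operational detection heuristics" concrete, the following is a minimal illustrative sketch of a phrase-based scanner. The pattern categories, phrase lists, and threshold below are hypothetical examples loosely inspired by the taxonomy described in the abstract; they are not the heuristics defined in the paper itself.

```python
import re

# Hypothetical phrase patterns; these categories and regexes are
# illustrative placeholders, NOT the paper's actual 24 techniques.
PATTERNS = {
    "agreement_priming": [r"\bas you know\b", r"\bof course\b", r"\bobviously\b"],
    "false_consensus": [r"\bmost experts agree\b", r"\beveryone agrees\b"],
    "urgency": [r"\bright now\b", r"\bbefore it'?s too late\b"],
}

def flag_turn(text: str) -> dict:
    """Count matches for each pattern category in a single turn."""
    lowered = text.lower()
    return {
        category: sum(len(re.findall(p, lowered)) for p in patterns)
        for category, patterns in PATTERNS.items()
    }

def scan_conversation(turns):
    """Flag turns whose total pattern count meets an arbitrary threshold of 2."""
    flagged = []
    for i, turn in enumerate(turns):
        counts = flag_turn(turn)
        if sum(counts.values()) >= 2:
            flagged.append((i, counts))
    return flagged
```

A real detector along these lines would need far richer features than surface phrases, but the sketch shows the basic shape: per-turn scoring against a taxonomy, then thresholding across the conversation.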

Files

Persuasive_Architecture_LLM_Manipulation.pdf (172.8 kB)
md5:006603582001c55954ca34d8f393d62f