Published August 26, 2025 | Version v1
Data paper · Open

Beyond Hallucinations: Emphatic Epanorthosis on LLMs

  • 1. H-Farm

Description

Large language models (LLMs) have been studied for their tendency to hallucinate facts, propagate bias, and converge stylistically. A less explored feature is their strong preference for a rhetorical device traditionally known as epanorthosis—the self‑correction that replaces an anticipated label with a new, often more dramatic one. Commonly realised in contemporary English through “not X, but Y” or similar negative–positive juxtapositions, the pattern saturates model outputs in marketing copy, policy briefings, and creative prose. This paper demonstrates, through quantitative corpus analysis and close reading, that emphatic epanorthosis is produced by state‑of‑the‑art LLMs at rates far exceeding those in baseline human corpora. We locate the cause in reinforcement learning from human feedback (RLHF) pipelines that reward perceived clarity and persuasive punch, inadvertently turning a once marginal trope into a stylistic default. The paper also argues that the phenomenon mirrors and amplifies an online discourse already skewed toward click‑optimised framing, suggesting a feedback loop between human digital writing habits and model redistribution of those habits. Finally, we propose evaluation metrics and mitigation strategies for practitioners who wish to diversify model style without sacrificing communicative power.
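As an illustration of the kind of surface-pattern measurement the description refers to (the paper's own detection method, corpora, and thresholds are not reproduced on this page), the following Python sketch estimates how often emphatic epanorthosis appears per 1,000 sentences. The regular expression, the function name epanorthosis_rate, and the sample sentences are illustrative assumptions, not the paper's instrumentation.

import re

# Hypothetical surface-pattern detector for "not X, but Y" and the related
# "isn't just X, it's Y" construction. A real study would need a broader
# inventory of variants and sentence-level parsing; this is only a sketch.
EPANORTHOSIS = re.compile(
    r"\b(?:not\s+(?:just|only|merely|simply)?\s*[\w'-]+(?:\s+[\w'-]+){0,6}?"
    r"[,;]?\s*but(?:\s+(?:rather|instead))?\b"
    r"|isn't\s+(?:just|only|merely)\s+[\w'-]+(?:\s+[\w'-]+){0,6}?[,;]?\s*it's\b)",
    re.IGNORECASE,
)

def epanorthosis_rate(sentences):
    """Return the number of matching sentences per 1,000 sentences."""
    if not sentences:
        return 0.0
    hits = sum(1 for s in sentences if EPANORTHOSIS.search(s))
    return 1000.0 * hits / len(sentences)

if __name__ == "__main__":
    # Toy samples standing in for an LLM-generated corpus and a human baseline.
    llm_sample = [
        "This is not a tool, but a revolution in how we write.",
        "The results were promising.",
        "It isn't just a model, it's a mirror of our habits.",
    ]
    human_sample = [
        "The committee reviewed the proposal on Tuesday.",
        "Sales rose by four percent in the third quarter.",
    ]
    print("LLM sample:", epanorthosis_rate(llm_sample), "per 1,000 sentences")
    print("Human baseline:", epanorthosis_rate(human_sample), "per 1,000 sentences")

Comparing the two rates on matched genres (marketing copy against marketing copy, briefings against briefings) is one simple way to operationalise the claim that models produce the construction "at rates far exceeding those in baseline human corpora."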

Files (235.0 kB)

beyond_hallucinations_LLMs_finalplus.pdf (235.0 kB)
md5:92faf2738e546ee5978717ec2a8d490d

Additional details

Dates

Created
2025-08-26