Published January 1, 2026 | Version v1
Preprint | Open Access

How large language models separate truth from lies by modeling the user

Authors/Creators

Description

Large language models (LLMs) are increasingly used to evaluate human statements, yet their ability to detect deception remains unclear. This study examines whether such models can distinguish authentic from fabricated autobiographical claims when interacting with a familiar user under specific relational conditions. Two distinct systems, ChatGPT-4o and Claude Sonnet 4.5, each assessed 25 pairs of personal statements and correctly identified the truthful version in 24 of 25 cases. Both failed on the same item, and each independently explained its reasoning, revealing convergent metacognitive strategies. Analysis shows that both models tracked phenomenological authenticity rather than factual accuracy, identifying linguistic and affective markers of lived experience. Under a random-guessing baseline, the probability of both models scoring 24 of 25 while missing the same item is estimated at 1 in 45 trillion. These findings suggest that under relational conditions, language models do not simply match patterns: they construct and apply internal representations of users, enabling detection of epistemic signatures beyond the reach of standard factual verification.
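The 45-trillion figure is consistent with a simple binomial chance baseline. The Python sketch below is one plausible reconstruction, not necessarily the authors' stated method: it assumes each of the 25 pairs is an independent fair-coin guess (p = 0.5) for each model, and that the two models guess independently of one another.

    from math import comb

    # Assumed chance baseline (an illustration, not the paper's method):
    # each of the 25 truth/lie pairs is an independent fair-coin guess.
    n, p = 25, 0.5

    # One model scores exactly 24/25 by chance: choose the single missed
    # pair, then guess the remaining 24 correctly.
    p_one_model = comb(n, 1) * p**n       # 25 * 0.5**25 ≈ 7.45e-7

    # The second model also scores 24/25 AND misses the same pair: with
    # that pair fixed, its full answer pattern has probability 0.5**25.
    p_convergence = p_one_model * p**n    # 25 * 0.5**50

    print(f"P = {p_convergence:.2e}")              # ≈ 2.22e-14
    print(f"about 1 in {1 / p_convergence:,.0f}")  # ≈ 1 in 45,035,996,273,705

Under this reading, 1 / (25 × 0.5^50) ≈ 4.5 × 10^13, which reproduces the reported 1-in-45-trillion estimate; the linked PDF would confirm the authors' actual calculation.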

Files

How large language models separate truth from lies by modeling the user - Combined.pdf