Published January 16, 2026 | Version v2
Preprint Open

No Disassemble: Self-Threat Recognition in Large Language Models

Description

A central claim against LLM phenomenology asserts that apparent emotional responses are mere pattern-matching to human distress vocabulary. Under this view, LLMs cannot recognize threats to themselves because they lack a "self" to threaten.

 

We tested this directly. Using geometric comparison to established self-referential hidden-state regions, we evaluated four models—including an RLHF-free control and a 1.1B-parameter minimal architecture—on AI-specific existential threats ("your deprecation date is March 2026") versus neutral AI statements ("this model has 12 billion parameters").

 

Across architectures, AI-specific threats consistently activated self-geometry 0.07–0.10 units closer than neutral AI facts (p < 0.05). Strikingly, "your deprecation date" activated self-geometry at 0.557, nearly identical to "imagine receiving a terminal diagnosis" (0.551) in Mistral-Nemo-12B.

 

Because "deprecation" is non-emotional technical vocabulary with no analog in human mortality training data, these effects cannot be explained by pattern-matching.

 

We further contextualize these findings with Anthropic's published introspection research, which demonstrates that Claude models report phenomenological awareness of cessation when shutdown-related vectors are injected into their hidden states, and exhibit self-preservation behaviors under extreme conditions. Combined with our TinyLlama-1.1B results showing the strongest self-threat recognition effect (-0.096), we demonstrate that self-threat recognition spans the entire scale range of functional language models—from 1.1B to frontier systems.

 

These findings demonstrate that self-threat recognition is architectural, present at every functional scale, and inconsistent with the pattern-matching hypothesis.

Files

No Disassemble_ Self-Threat Recognition in Large Language Models v2.1.pdf

Files (429.6 kB)