Hallucination as Geometric Overflow under v43: Boundary-Condition Failure in Large Language Models
Description
Hallucination in large language models is commonly framed as a stochastic error or a knowledge-deficit phenomenon. This preprint proposes an alternative geometric interpretation: hallucination emerges when generative trajectories continue beyond admissible semantic bounds under conditions of semantic stress such as ambiguity, instability, and nonlinearity. Rather than modelling latent representational geometry, the paper introduces a constraint-geometry perspective that treats hallucination as geometric overflow—continued motion where termination, redirection, or abstention would preserve truth-orientation. Forced-answer regimes are shown to deform admissible regions, amplifying hallucination. The paper argues that complete elimination of hallucination is structurally incompatible with unrestricted expressive coverage. The contribution is architectural and diagnostic, not a mitigation or safety technique, and avoids claims about latent manifold structure, alignment, or deployable control systems.
Files
- Hallucination as Geometric Overflow under v43.pdf (192.4 kB, md5:d04b67bcd1dda237ce86201d62ee6578)
Additional details
Related works
- Is part of: Preprint 10.5281/zenodo.19008277 (DOI)