Overcoming Context Collapse in Large Language Models via Adjacency-Bounded Curvature Dynamics
Authors/Creators
Description
Current Large Language Models (LLMs) rely on globally dense attention mechanisms, which incur O(N^2) computational complexity and suffer catastrophic context-window collapse (the "Lost in the Middle" phenomenon). As sequence length increases, systemic noise unavoidably degrades local token identity, leading to hallucination. In this paper, we propose a novel network architecture based on adjacency-bounded topological updates rather than global attention. By modeling token-state evolution as a competition between spatial reinforcement, self-reinforcement, and structural decay, we derive an exact analytical threshold for semantic coherence. We demonstrate mathematically that if local self-reinforcement strictly exceeds network decay, the system transitions into a persistent-memory regime, theoretically enabling unbounded context retention at O(N) complexity without signal degradation.
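Since this record exposes only the abstract, the update rule below is an illustrative assumption rather than the paper's actual equations. The sketch models a 1-D chain of token states updated only from their immediate neighbours, so one pass costs O(N), and it probes the claimed coherence condition that self-reinforcement (here `alpha`) must strictly exceed structural decay (here `delta`) for the signal to persist. The function `step` and every parameter value are hypothetical.

```python
import numpy as np

def step(x, alpha, beta, delta):
    """One assumed adjacency-bounded update over a 1-D chain of token states.

    Each token is reinforced by itself (alpha), by its two neighbours (beta),
    and decays structurally (delta). Only a fixed neighbourhood is touched,
    so the cost per pass is O(N).
    """
    left = np.roll(x, 1)    # left[i]  = x[i-1]
    right = np.roll(x, -1)  # right[i] = x[i+1]
    left[0] = 0.0           # open (non-periodic) boundaries
    right[-1] = 0.0
    return (1.0 - delta + alpha) * x + beta * (left + right)

# Toy check of the assumed threshold: with alpha > delta the injected
# "token identity" persists (persistent-memory regime); with alpha < delta
# it decays toward zero.
for alpha in (0.08, 0.02):          # above / below the assumed delta = 0.05
    x = np.zeros(1024)
    x[512] = 1.0                    # a single local signal
    for _ in range(500):
        x = step(x, alpha=alpha, beta=0.001, delta=0.05)
    print(f"alpha={alpha}: peak magnitude after 500 steps = {np.abs(x).max():.3e}")
```

Under these assumed dynamics the sign of (alpha - delta) controls whether the local signal is amplified or damped, which is the kind of exact threshold the abstract refers to; the paper itself should be consulted for the actual formulation.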
Files

| Name | Size | MD5 |
|---|---|---|
| Overcoming_Context_Collapse_in_Large_Language_Models_via_Adjacency_Bounded_Curvature_Dynamics.pdf | 150.9 kB | b461fe850357e4099365b2b94c093f68 |