Published April 28, 2026 | Version v1
Preprint · Open Access

External State Conditioning in LLMs: Observations, Attractor Dynamics, and Predictive Risk Analysis

Description

This preprint investigates the effects of externally injected state parameters (NeuroState) on the behavioral dynamics of large language model (LLM) agents.

Through observational analysis of VPS-based and mobile deployments, the paper documents recurrent behavioral phenomena, including the ZERO Paradox, cross-lingual token leakage, environment-dependent attractor divergence, and self-relevance-triggered output shifts.

The paper proposes that prompt-level state conditioning acts as a biasing mechanism on token probability distributions rather than a modification of internal model structure. It further discusses predictive risks, including state hijacking, gradual behavioral drift, and long-context amplification, and argues for explicit boundary design and governance.

This study is observational and does not claim mechanistic causality.

Files

neurostate_paper.pdf (4.6 MB)
md5:2b804f6654949986d16a2c4797c31423