Published January 21, 2026 | Version v1
Preprint | Open Access

EROS-1: An Identity-Stability Kernel for Salience-Preserving and Risk-Proportionate LLM Interaction

Authors/Creators

Description

Large Language Models (LLMs) deployed in sustained interactive settings exhibit progressive salience attenuation, over-accommodation, and loss of epistemic friction. These effects are driven less by model architecture than by interaction regimes that lack identity consolidation, verification, and proportional capability control.

We introduce EROS-1, a progressive and auditable user kernel that stabilizes long-horizon interaction by incrementally consolidating verified user information and gating model capabilities as a function of identity stability. Kernel stability is formally defined using entropy-based measures and quantitative contradiction penalties, estimated via shadow interaction data to ensure non-reactive optimization.
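
The record does not include the kernel's exact formulas, so the following is only a minimal sketch of the idea under stated assumptions: a Shannon-entropy consolidation score over the observed values of one identity attribute, a linear contradiction penalty, and a simple threshold gate on capabilities. All names and thresholds (stability_score, CAPABILITY_TIERS, penalty_weight) are illustrative, not definitions from the paper.

import math
from collections import Counter

def stability_score(observed_values, contradictions, penalty_weight=0.5):
    """Hypothetical entropy-based stability score for one identity attribute.

    observed_values: values reported for the attribute across the interaction
    contradictions:  number of recorded contradictions for the attribute
    Returns a score in [0, 1]; 1.0 means fully consolidated.
    """
    counts = Counter(observed_values)
    total = sum(counts.values())
    if len(counts) > 1:
        # Shannon entropy of the value distribution, normalized by its maximum
        entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
        consolidation = 1.0 - entropy / math.log2(len(counts))
    else:
        consolidation = 1.0
    # Quantitative contradiction penalty, clamped so the score stays in [0, 1]
    return max(0.0, consolidation - penalty_weight * contradictions / total)

# Capability gating as a function of identity stability (illustrative tiers)
CAPABILITY_TIERS = [
    (0.9, "high-trust reasoning"),
    (0.6, "standard assistance"),
    (0.0, "exploratory interaction only"),
]

def gate_capability(score):
    for threshold, tier in CAPABILITY_TIERS:
        if score >= threshold:
            return tier

score = stability_score(["Berlin"] * 9 + ["Munich"], contradictions=1)
print(f"{score:.2f} -> {gate_capability(score)}")

Gating on a logged numeric score, rather than on raw interaction history, is one way such a design could remain auditable: each tier change can be traced back to a recorded stability value.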
EROS-1 preserves exploratory interaction while enabling high-trust reasoning only under demonstrable stability, and is explicitly designed to align with the European Union AI Act's principles of risk proportionality, auditability, and non-manipulative adaptation. Empirical evaluation using shadow trials, ANCOVA, and chi-square analysis shows statistically significant improvements in salience retention and capability stability over baseline interaction regimes.
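
The underlying shadow-trial data are not part of this record; the sketch below only illustrates, with made-up values and assumed column names (salience_pre, salience_post, capability_stable), how an ANCOVA (via statsmodels) and a chi-square test (via scipy) of the kind named above might be run over per-session outcomes.

import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical per-session outcomes from a shadow trial (values are made up)
df = pd.DataFrame({
    "regime":            ["baseline"] * 4 + ["eros1"] * 4,
    "salience_pre":      [0.62, 0.55, 0.70, 0.58, 0.61, 0.66, 0.59, 0.64],
    "salience_post":     [0.41, 0.38, 0.52, 0.40, 0.60, 0.63, 0.55, 0.61],
    "capability_stable": [0, 0, 1, 0, 1, 1, 1, 1],
})

# ANCOVA: post-interaction salience by regime, controlling for the pre-score
model = smf.ols("salience_post ~ C(regime) + salience_pre", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Chi-square test: capability stability vs. interaction regime
contingency = pd.crosstab(df["regime"], df["capability_stable"])
chi2, p, dof, _ = stats.chi2_contingency(contingency)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")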

Files

Kernel progression.pdf.pdf (209.7 kB)
md5:9b38dc8591af244e39718296a1395708