Published December 14, 2025 | Version v1
Preprint | Open Access

ERLHS: A Hamiltonian Framework for Coherence-Preserving Machine Intelligence

Paraxiom Research

Description

Current large language models (LLMs) operate without geometric or physical constraints on their latent dynamics. As a consequence, arbitrary textual perturbations, including prompt injection attacks, can drive their internal states into regions never encountered during training, resulting in incoherence, contradiction, and a lack of robust continual learning. We introduce ERLHS (Externally-Regularized Latent Hamiltonian Systems), a framework in which latent representations evolve on a smooth manifold equipped with a Hamiltonian coherence functional. Valid transitions are those that preserve or reduce this functional, providing a physically motivated invariant that constrains updates.
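The gating rule described above (accept a latent transition only if it preserves or reduces the coherence functional) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's implementation: the quadratic functional `coherence_H` and the helper `gated_step` are hypothetical names, and a simple positive-definite quadratic form stands in for the actual Hamiltonian.

```python
import numpy as np

def coherence_H(z, W):
    """Toy stand-in for the Hamiltonian coherence functional: H(z) = z^T W z.
    (Illustrative only; the paper's functional is defined on a latent manifold.)"""
    return float(z @ W @ z)

def gated_step(z, dz, W, tol=0.0):
    """Accept a proposed latent update z -> z + dz only if it does not
    increase H by more than tol; otherwise reject and keep z unchanged."""
    z_new = z + dz
    if coherence_H(z_new, W) - coherence_H(z, W) <= tol:
        return z_new, True
    return z, False

# Example: with W positive-definite, shrinking z toward the origin lowers H,
# so the step is accepted; growing z raises H, so the step is rejected.
W = np.eye(3)
z = np.array([1.0, 2.0, -1.0])
z_shrunk, accepted = gated_step(z, -0.1 * z, W)
z_kept, accepted_grow = gated_step(z, 0.5 * z, W)
```

A perturbation that would push the state into a high-H region (e.g. an adversarial prompt-injection direction) is simply rejected by the gate, which is the invariant-preservation idea in miniature.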

Files

cormier_erlhs_2025.pdf (186.8 kB)
md5:1e783ecf7832306e765f0c43a51dfa57