Published January 19, 2026 | Version v1
Publication | Open

Integrity and Semantic Drift in Large Language Model Systems

Contributors

  • Arising Technology Systems Pty Limited

Description

Abstract

Large Language Models (LLMs) are commonly evaluated in terms of accuracy, hallucination, and bias. These criteria, while useful, fail to capture a more fundamental class of failure: loss of integrity. This paper argues that semantic drift in LLM systems is not merely a degradation of meaning, but a structural precursor to integrity failure, in which meaning, rules, and authority boundaries are no longer preserved across time, transformation, and use. We distinguish semantic drift from normative drift, showing how probabilistic reconstruction, summarisation, and session boundaries soften or bypass binding constraints, reorder gated procedures, and substitute model output for authoritative artefacts. We introduce integrity as a first-order property of human–AI systems, defined as the preservation of declared semantics, normative logic, and governance boundaries unless explicitly and authoritatively changed. Drawing on concrete failure cases from stateless LLM interaction, we demonstrate that integrity loss, not just incorrectness, is the primary driver of epistemic loss, forced re-derivation, and authority inversion. We conclude by outlining requirements for integrity-preserving AI systems, including externalised artefacts, anchored identity, explicit invariants, and human-governed authority, and argue that without these mechanisms, semantic drift inevitably escalates into ungoverned system behaviour.
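The closing requirements can be made concrete in code. The sketch below is a minimal illustration under stated assumptions, not an implementation taken from the paper: names such as ArtefactRegistry, Anchor, and propose_change are hypothetical, and content-hash anchoring is one assumed realisation of externalised artefacts, anchored identity, explicit invariants, and human-governed authority.

# Minimal sketch (illustrative, not from the paper). Hypothetical names
# throughout; hash anchoring is one assumed realisation of the abstract's
# requirements for integrity-preserving systems.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Anchor:
    """Anchored identity: an artefact is identified by its content hash."""
    artefact_id: str
    sha256: str

class IntegrityError(Exception):
    pass

class ArtefactRegistry:
    """Externalised artefacts: canonical text lives outside any model session."""
    def __init__(self):
        self._store: dict[str, str] = {}    # artefact_id -> canonical text
        self._anchors: dict[str, Anchor] = {}

    def publish(self, artefact_id: str, text: str) -> Anchor:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        anchor = Anchor(artefact_id, digest)
        self._store[artefact_id] = text
        self._anchors[artefact_id] = anchor
        return anchor

    def read_verified(self, artefact_id: str) -> str:
        """Explicit invariant: stored text must still match its anchor."""
        text = self._store[artefact_id]
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest != self._anchors[artefact_id].sha256:
            raise IntegrityError(f"{artefact_id}: content no longer matches anchor")
        return text

    def propose_change(self, artefact_id: str, new_text: str,
                       human_authorised: bool) -> Anchor:
        """Human-governed authority: model output cannot silently replace
        the authoritative artefact; explicit authorisation is required."""
        if not human_authorised:
            raise IntegrityError(
                f"{artefact_id}: change rejected; no authoritative approval")
        return self.publish(artefact_id, new_text)

if __name__ == "__main__":
    reg = ArtefactRegistry()
    reg.publish("policy-001", "Step A MUST complete before step B.")
    # A model-produced paraphrase that softens the binding constraint is rejected:
    try:
        reg.propose_change("policy-001", "Step A should usually precede B.",
                           human_authorised=False)
    except IntegrityError as e:
        print("blocked:", e)
    print("canonical:", reg.read_verified("policy-001"))

The design point, in the abstract's terms: because the authoritative text and its anchor live outside the model session, drifted or summarised output cannot silently become the governing artefact; any change must pass an explicit, human-governed gate.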

---

Temporal Scope Note

This paper analyses semantic and normative integrity as they manifest in Large Language Model platforms as currently architected and deployed. The failures described here are observations of present systems and interaction models, not claims about the theoretical limits of machine intelligence or future governed architectures.

Reading Dependency Note

This paper establishes the failure modes of semantic and normative drift and the absence of integrity in contemporary LLM architectures. Its implications for trust, responsibility, and appropriate human–AI roles are developed further in What Can Humans Trust LLM AI to Do? (ref c) and Observed Model Stability: Evidence for Drift-Immune Embedded Governance (ref c), which should be read alongside this paper to understand how these failures constrain socially sustainable use under present conditions.

---

This work has not undergone academic peer review. The DOI asserts existence and provenance only; it does not imply validation or endorsement.

This Zenodo record is an archival projection of a publicly published artefact. Canonical versions and live revisions are maintained at the original publication URL listed in this record.

Files

Integrity and Semantic Drift in Large Language Model Systems - publications.pdf
