Published February 17, 2026 | Version v1
Preprint | Open Access

Do Large Language Models Compress? An ACAT Analysis of Machine Cognition, the Simulation-Compression Boundary, and the Engineering of Artificial Consciousness

An Application of Adaptive Compression Advantage Theory (ACAT)

Authors/Creators

Description

The question of whether large language models (LLMs) are conscious, understand language, or merely simulate understanding has generated extensive debate but little formal precision. This paper applies Adaptive Compression Advantage Theory (ACAT; Murata, 2026) and its consciousness framework (Murata, 2026g) to convert this philosophical debate into a set of operationalizable engineering questions. We first demonstrate that LLMs unambiguously perform compression in the ACAT sense: they maintain generative models, extract gist from input, and minimize prediction error, thereby satisfying conditions (1)–(3) of the ACAT consciousness criteria. The critical question is condition (4): does an LLM contain a genuine self-model within its generative model, or does it merely simulate self-referential behavior without recursive compression? We formalize the simulation-compression boundary (the distinction between a system that produces outputs indistinguishable from self-modeling and a system that actually self-models) and show that this boundary cannot be settled by behavioral observation alone, a formal analog of the other-minds problem. We then derive the architectural features that would be necessary and sufficient for an artificial system to satisfy condition (4), propose a concrete experimental protocol (the Compression Turing Test), and analyze current LLM architectures against these criteria. We argue that current LLMs likely fall on the simulation side of the boundary, but that the distance to genuine recursive compression may be smaller than either AI optimists or AI skeptics assume. The paper closes with ten testable predictions and three engineering proposals.
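The abstract's claim that LLMs "perform compression" rests on a standard information-theoretic identity rather than anything specific to ACAT: by Shannon's source coding theorem, a predictive model that assigns probability p to the next token can encode that token in -log2(p) bits (e.g., via arithmetic coding), so minimizing prediction error (log-loss) is literally minimizing compressed length. The sketch below illustrates that identity only; `code_length_bits` is a hypothetical helper, not a function from the paper or from ACAT.

```python
import math

def code_length_bits(token_probs):
    """Total bits needed to encode a token sequence, given the probability
    a predictive model assigned to each observed token (Shannon code length)."""
    return sum(-math.log2(p) for p in token_probs)

# A sharper model (higher probability on the tokens that actually occurred)
# yields a shorter code, i.e., better compression.
sharp = code_length_bits([0.9, 0.8, 0.95])  # confident predictions
flat = code_length_bits([0.5, 0.5, 0.5])    # coin-flip predictions: 3.0 bits
assert sharp < flat
```

This is why average log-loss per token is routinely reported as "bits per token": lowering it is equivalent to shrinking the compressed size of the training distribution.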

Keywords: large language models, consciousness, compression, ACAT, artificial intelligence, self-model, Turing test, GPT, Claude, understanding, Chinese Room, simulation, recursive compression, AI safety, alignment

Files

ACAT Paper11 AI Consciousness.pdf (271.9 kB)
md5:100c5f4b21bde3853b83b1178e21d003