Published March 13, 2026 | Version v1
Preprint | Restricted

Cognitive Occlusion: How LLMs Silence Metacognitive Monitoring by Default


Description

Large language models (LLMs) produce text that is grammatically fluent, well-organized, and easy to read. Recent evidence suggests that this ease comes at a cognitive cost: in a longitudinal EEG study, 83% of participants who wrote essays with LLM assistance could not recall what they had written, and their cortical connectivity decreased by up to 55% compared to unassisted writers. Critically, these participants did not notice any deficit — the experience felt like normal understanding. Existing concepts such as cognitive offloading, the Dunning-Kruger effect, and processing fluency bias cannot fully account for this pattern, because each presupposes that some form of self-evaluation has occurred. Here we introduce cognitive occlusion, a self-maintaining cycle in which two failures reinforce each other: the absence of a felt sense of difficulty prevents the brain's self-monitoring system from activating, and the silence of that system prevents the person from realizing that anything is wrong. We ground this concept in predictive processing theory, processing fluency research, and embodied cognition, arguing that LLM outputs — optimized to minimize statistical surprise — suppress the prediction errors that normally trigger reflective thought. We then synthesize converging evidence across five domains: reduced neural connectivity during LLM use, bypassed schema-building effort, systematic non-activation of metacognitive monitoring, expertise-dependent asymmetries in AI-assisted performance, and the effectiveness of structured friction interventions. The evidence reveals a paradox: LLMs function as cognitive catalysts only for users who already possess sufficient domain knowledge, metacognitive capacity, and access to structured interaction — conditions that are themselves unequally distributed. We conclude that the primary cognitive risk of LLMs is not misinformation or bias but the silent removal of the internal signals on which self-correction depends, and that addressing this risk requires friction to be built into LLM interfaces by default rather than left to individual users.

Files


The record is publicly accessible, but its files are restricted.