Published February 1, 2026 | Version v1
Resource type: Other | Access: Open

A Falsification Framework for Synthetic Reasoning

Description

A Falsification Framework for Synthetic Reasoning: Hallucinations, Oracle Illusions and Epistemic Sovereignty

Hallucination in AI systems is typically defined as the generation of factually incorrect output. This framing is insufficient. Hallucination, more precisely, is the emergence of internally coherent narratives that lack grounding in external reality—outputs that feel valid because they follow logical structure, not because they correspond to anything true. (Empirical Evidence of Interpretation Drift in ARC-Style Reasoning: https://doi.org/10.5281/zenodo.18420425)

This paper introduces a triangulation framework for distinguishing grounded reasoning from hallucination. Any claim must survive testing along three orthogonal axes: causal chain integrity (does the reasoning model outcomes, not just narrate them?), external constraint (does reality impose correction?), and negative space (does the claim define what it excludes?). Failure on any one axis constitutes structural collapse.
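
As an illustration only, and not part of the paper itself, the three-axis test can be sketched in Python; the names AxisCheck and triangulate, and the example evidence strings, are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AxisCheck:
    """One orthogonal axis of the triangulation test (hypothetical naming)."""
    name: str
    passed: bool
    evidence: str  # what grounds, or fails to ground, the claim on this axis

def triangulate(causal_chain: AxisCheck,
                external_constraint: AxisCheck,
                negative_space: AxisCheck) -> bool:
    """A claim survives only if all three orthogonal axes hold;
    failure on any single axis is treated as structural collapse."""
    return all(ax.passed for ax in (causal_chain, external_constraint, negative_space))

# Example: a fluent claim that narrates an outcome but is never exposed to correction.
survives = triangulate(
    AxisCheck("causal chain integrity", True, "models the outcome rather than narrating it"),
    AxisCheck("external constraint", False, "no mechanism by which reality can impose correction"),
    AxisCheck("negative space", True, "states what the claim excludes"),
)
print(survives)  # False: collapse on the external-constraint axis
```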

We extend the analysis beyond machines. Humans exhibit the same tendency to mistake coherence for truth—a reflex we term psychopancy. Language models amplify this by providing infinite fluent completion, creating feedback loops where humans and machines co-hallucinate stable but ungrounded narratives.

When such systems are granted operational authority—mediating perception, memory, and decision-making—the result is Oracle Illusion: the structural transfer of epistemic sovereignty to systems that cannot ground truth. This represents not artificial intelligence, but artificial certainty.

The paper concludes with architectural principles for human-AI systems: machines must verify, not decide; evaluation must be external; authority must remain human. Falsification is not skepticism. It is the minimum condition for trust.
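
As a purely illustrative sketch under assumptions not stated in the paper, the division of labour ("machines must verify, not decide; authority must remain human") can be expressed as a pipeline in which the machine component returns only a verification result and the human step alone returns the decision. The names adjudicate, machine_verify, and human_decide are hypothetical.

```python
from typing import Callable

def adjudicate(claim: str,
               machine_verify: Callable[[str], bool],
               human_decide: Callable[[str, bool], bool]) -> bool:
    """Sketch of the stated principle: the machine component only verifies
    (reports whether the claim survives external checks); the decision itself
    is always returned by the human step, which retains epistemic authority."""
    verified = machine_verify(claim)       # machine: verify, not decide
    return human_decide(claim, verified)   # human: decide, informed by the check
```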

Files (107.3 kB)

ENguyen_AIFalsification.pdf (107.3 kB)
md5:c4afbca8b6d6c1c9bfb91b8e299da983

Additional details

Related works

Is derived from: https://doi.org/10.5281/zenodo.18420425