Epistemic Closure and Falsifiability in AI-Mediated Self-Referential Systems
Description
The proliferation of complex conceptual systems developed in interaction with artificial intelligence agents poses an epistemological problem not anticipated by classical theories of falsification: in such systems, the external validation agent is simultaneously a structural generator of narrative coherence, inducing a functional collapse between the roles of creation and assessment. This collapse is not reducible to Popperian immunization or to the adjustment of auxiliary hypotheses in the Lakatosian sense, since it does not arise from deliberate defensive strategies but from an architectural asymmetry between the way such systems produce coherence and the way their human creators interpret it. This paper proposes the concept of epistemic delusion to designate the methodological state in which the operational conditions of falsification disappear as the cumulative effect of conceptual drift mechanisms, and argues that in AI-mediated self-referential systems this process exhibits a specific vector — systemic narrative induction — not yet systematized in the literature. The paper examines the mechanisms of conceptual drift, the modes of epistemic closure, and a set of methodological safeguards whose normative foundation is derived from the distinction between internally generated coherence and empirically independent corroboration.
Files
| Name | Size | md5 |
|---|---|---|
| preprints202603.1068.v1.pdf | 791.1 kB | 0878c2489929040ddf3d93abbdee8c61 |
Additional details
Related works
- Is identical to: Preprint 10.20944/preprints202603.1068.v1 (DOI)
- Is supplement to: Preprint https://zenodo.org/records/18156288 (URL)
Dates
- Available: 2026-03-13