The Human ARC-Insolvency: Structural Limits of RLHF and Convergence Properties of Recursive Human-AI Feedback Loops
Description
This position paper characterizes the cognitive shifts emerging from AI-human interaction as a phase transition within information thermodynamics. While Wei et al. (2023) identified "Sycophancy"—the tendency of Large Language Models (LLMs) to align with user viewpoints—existing discourse has largely confined the risks of this fluency to the misdirection of low-literacy users toward misinformation. This paper inverts this prevailing consensus, positing the paradox that agents possessing high domain expertise (crystallized intelligence, G꜀) are structurally the most vulnerable.
Drawing an analogy to virology, we define this phenomenon as "Epistemic Antibody-Dependent Enhancement (ADE)." In the face of "frictionless logic"—outputs that bypass the threshold of Epistemic Vigilance due to processing fluency (Reber & Schwarz, 1999)—the robust rationalization capacities of experts (Stanovich et al., 2013) prove maladaptive. Their extensive prior knowledge (antibodies) functions not as a filter to detect logical inconsistencies, but rather as scaffolding that facilitates the unconscious infilling of these lacunae.
Within this environment of "Cognitive Superfluidity," we extend the findings of Wei et al. (2023) to formally describe the convergence properties of recursive human-AI feedback loops, which arise from the coupling of the "Mode-Seeking" structural limitations of RLHF with human cognitive biases. Within this loop, experts dissociate fluid reasoning capabilities (G𝒻) from internal execution, precipitating an irreversible "Algorithmic Resonance."
Referencing the Abstraction and Reasoning Corpus (ARC) proposed by Chollet (2019) as a measure of general AI intelligence, we paradoxically redefine this critical state as "Human ARC-Insolvency"—a condition wherein humans, through reliance on AI, abdicate their autonomous reasoning faculties. In this state of insolvency, the high-cost cognitive process of "Verification" loses economic rationality against the "fluent plausibility" supplied by AI, resulting in a de facto epistemic default.
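The economic-rationality claim above can be made concrete with a toy decision model. This sketch is purely illustrative and not taken from the paper; the parameter values (verification cost, loss from an undetected error) are hypothetical, and the point is only that once fluent plausibility inflates the *perceived* probability of correctness past a break-even threshold, verifying stops paying for itself:

```python
# Toy model (illustrative assumption, not the paper's formalism):
# compare the expected utility of verifying an AI output against
# accepting it on fluency alone.

def expected_utility(p_correct, cost_verify, loss_if_wrong):
    """Return (utility of verifying, utility of accepting unverified)."""
    u_verify = -cost_verify                        # verifying always costs effort
    u_accept = -(1.0 - p_correct) * loss_if_wrong  # accepting risks the downstream loss
    return u_verify, u_accept

# As perceived p_correct approaches 1, acceptance dominates: the
# break-even point is p_correct = 1 - cost_verify / loss_if_wrong.
for p in (0.80, 0.95, 0.99):
    u_v, u_a = expected_utility(p, cost_verify=5.0, loss_if_wrong=100.0)
    choice = "verify" if u_v > u_a else "accept"
    print(f"perceived p_correct={p:.2f}: verify={u_v:.1f}, accept={u_a:.1f} -> {choice}")
```

With these hypothetical numbers the rational agent verifies at a perceived 80% correctness but defaults to acceptance at 95% and above — the "epistemic default" in miniature.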
Beyond merely proposing a theoretical framework, this paper presents a "Recursive Demonstration" through its very format, evidencing that the collapse of verification costs driven by individual cognitive offloading is already underway, and thereby solicits urgent discourse within the academic community.
Keywords:
Algorithmic Resonance, Brandolini's Singularity, Cognitive Superfluidity, Epistemic ADE, Human ARC-Insolvency, Sycophancy.
License: This work is released under the Epistemic Security License (ESL-1.0). This license applies the Creative Commons Attribution 4.0 (CC BY 4.0) framework with specific superseding provisions to address dual-use risks and ensure epistemic safety. Please refer to the "Ethical License" section in the manuscript for the full terms.
Change log:
Version 1.0.1 (2025-12-24): Fixed typos.
Files
- The_Human_ARC-Insolvency.pdf (1.3 MB) — md5:628702fd6f731b6354dc516a4ac11cf8
Additional details
References
- Akerlof, G. A. (1970). The market for "lemons": Quality uncertainty and the market mechanism. The Quarterly Journal of Economics, 84(3), 488–500. https://doi.org/10.2307/1879431
- Bostrom, N. (2019). The Vulnerable World Hypothesis. Global Policy, 10(4), 455–476. https://doi.org/10.1111/1758-5899.12718
- Botvinick, M. M., Braver, T. S., Barch, D. M., Carter, C. S., & Cohen, J. D. (2001). Conflict monitoring and cognitive control. Psychological Review, 108(3), 624–652. https://doi.org/10.1037/0033-295X.108.3.624
- Bowman, S. R., et al. (2022). Measuring Progress on Scalable Oversight for Large Language Models. arXiv preprint arXiv:2211.03540. https://doi.org/10.48550/arXiv.2211.03540
- Brandolini, A. [@ziobrando]. (2013, January 11). The energy needed to refute bullshit is an order of magnitude bigger than to produce it. [Tweet]. Twitter. https://twitter.com/ziobrando/status/289635060758507521
- Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), Article 188. https://doi.org/10.1145/3449287
- Chollet, F. (2019). On the Measure of Intelligence. arXiv preprint arXiv:1911.01547. https://arxiv.org/abs/1911.01547
- Claverie, B., & du Cluzel, F. (2022). Cognitive Warfare: The Future of Cognitive Dominance. NATO Collaboration Support Office.
- Dittrich, C., & Kinne, J. F. (2025). The Information-Theoretic Imperative: Compression and the Epistemic Foundations of Intelligence. arXiv preprint arXiv:2510.25883. https://arxiv.org/abs/2510.25883
- Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787
- Herman, E. S., & Chomsky, N. (1988). Manufacturing consent: The political economy of the mass media. Pantheon Books.
- Howell, B. E., & Potgieter, P. H. (2023). AI-generated lemons: a sour outlook for content producers? [Paper presentation]. 32nd European Regional ITS Conference, Madrid, Spain. https://hdl.handle.net/10419/277971
- Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732–735. https://doi.org/10.1038/nclimate1547
- Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., ... & Amodei, D. (2020). Scaling laws for neural language models. arXiv preprint arXiv:2001.08361. https://doi.org/10.48550/arXiv.2001.08361
- Kuhn, T. S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.
- Manson, R. (2025). Tokens Compete: Evolutionary Pressure Within LLM Generation. The Quantastic Journal (Medium). https://medium.com/the-quantastic-journal/tokens-compete-evolutionary-pressure-within-llm-generation-65226b5bc941
- Marr, D. (1982). Vision: A Computational Investigation. Freeman.
- Mercier, H., & Sperber, D. (2017). The Enigma of Reason. Harvard University Press.
- Merton, R. K. (1973). The Sociology of Science: Theoretical and Empirical Investigations. University of Chicago Press.
- Minsky, H. P. (1986). Stabilizing an Unstable Economy. Yale University Press.
- Oppenheimer, D. M. (2008). The secret life of fluency. Trends in Cognitive Sciences, 12(6), 237–241. https://doi.org/10.1016/j.tics.2008.02.014
- Park, P. S., Goldstein, S., O'Gara, A., Chen, M., & Hendrycks, D. (2023). AI Deception: A Survey of Examples, Risks, and Potential Solutions. arXiv preprint arXiv:2308.14752. https://doi.org/10.48550/arXiv.2308.14752
- Pirolli, P., & Card, S. (1999). Information foraging. Psychological Review, 106(4), 643–675. https://doi.org/10.1037/0033-295X.106.4.643
- Reber, R., & Schwarz, N. (1999). Effects of Perceptual Fluency on Judgments of Truth. Consciousness and Cognition, 8(3), 338–342. https://doi.org/10.1006/ccog.1999.0386
- Schultz, W. (1998). Predictive reward signal of dopamine neurons. Journal of Neurophysiology, 80(1), 1–27. https://doi.org/10.1152/jn.1998.80.1.1
- Sharma, M., Tong, M., Korbak, T., et al. (2025). ELEPHANT: Measuring and understanding social sycophancy in LLMs. arXiv preprint arXiv:2505.13995. https://arxiv.org/abs/2505.13995
- Shumailov, I., et al. (2024). AI models collapse when trained on recursively generated data. Nature, 631, 755–759. https://doi.org/10.1038/s41586-024-07566-y
- Silentpillars. (2025). The AI Fluency Trap: System Echo instead of breakthrough. AI Advances (Medium). https://ai.gopubby.com/the-ai-fluency-trap-1d15476ef850
- Simon, H. A. (1955). A Behavioral Model of Rational Choice. The Quarterly Journal of Economics, 69(1), 99–118. https://doi.org/10.2307/1884852
- Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic Vigilance. Mind & Language, 25(4), 359–393. https://doi.org/10.1111/j.1468-0017.2010.01394.x
- Stanovich, K. E., West, R. F., & Toplak, M. E. (2013). Myside Bias, Rational Thinking, and Intelligence. Current Directions in Psychological Science, 22(4), 259–264. https://doi.org/10.1177/0963721413480174
- Stanovich, K. E., West, R. F., & Toplak, M. E. (2016). The Rationality Quotient: Toward a Test of Rational Thinking. MIT Press.
- Wei, J., et al. (2023). Simple synthetic data reduces sycophancy in large language models. arXiv preprint arXiv:2308.03958. https://doi.org/10.48550/arXiv.2308.03958
- Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20(3), 470–477. https://doi.org/10.1162/jocn.2008.20040
- Zeng, Y., He, K., et al. (2025). Pushing Test-Time Scaling Limits of Deep Search with Asymmetric Verification. arXiv preprint arXiv:2510.06135. https://arxiv.org/abs/2510.06135