Published March 21, 2026 | Version v2
Journal article (Open Access)

Agrippa's Trilemma Revisited: Opacity, Circularity, and Structural Dogmatism in High-Dimensional Algorithmic Models

  • 1. Benemérita Universidad Autónoma de Puebla, Puebla, México

Description

This article revisits Agrippa's trilemma in the context of artificial intelligence systems, specifically high-dimensional algorithmic models. The classic epistemological challenge holds that any justification of knowledge leads inevitably to infinite regress, circular reasoning, or arbitrary (dogmatic) foundations, and that knowledge is therefore impossible. This challenge resurfaces forcefully in machine learning models characterized by opacity, recursive optimization, and performance-based validation. Large language models and recommendation systems are paradigmatic cases: the study shows how algorithmic inference often operates without semantic grounding, explicit logical structure, or access to reasons intelligible to humans, generating results that are statistically robust but epistemically opaque.

It is argued that the trilemma appears algorithmically in the combination of three interrelated phenomena: 1) opacity, arising from the architecture of machine learning models and their decision pathways; 2) circularity, when training, validation, and performance measures feed back into one another without any external epistemic reference; and 3) structural dogmatism, the assumption that high-performing results are true in a correspondence sense even though the interior of the "black box" cannot be inspected.

On this basis, the article proposes a structural-pragmatic epistemology in which justification is understood not negatively, as access to internal reasons, but positively, as the satisfaction of minimum requirements of coherence, traceability, and human accountability. The paper argues that justification in AI, both in relation to human users (as end recipients) and among AI agents, must be situated, accountable, and corrigible within a sociotechnical system that can secure epistemic legitimacy without presupposing total transparency or ideal rational subjects.
Finally, it is argued that epistemic accountability in AI requires technical robustness on the one hand and, on the other, ongoing oversight grounded in philosophical and normative reflection on its outcomes and consequences.

Files

art. 6 Article Agrippa's Trilemma 95-109 def .pdf

777.2 kB · md5:7b8c0ba4efabb316c2e6a2b9067682a8