Published March 19, 2026 | Version 1.0
Preprint | Open Access

ONTOALEX: A Metacognitive Framework for the Processual Stabilization of Large Language Models

Authors/Creators

  • Independent Researcher

Abstract
Large Language Models exhibit systemic behavioral instability that compromises their
reliability in high-stakes contexts. Existing solutions primarily intervene on outputs or
parameters without addressing the cause at the processual level. This paper proposes that
a significant structural cause is ontological misalignment: the discrepancy between a
system's effective inferential capabilities and the implicit operational self-representation
under which it generates output.


It presents ONTOALEX, a proprietary metacognitive framework based on this theory,
developed and empirically tested on multiple families of commercial LLMs during
2025–2026. Preliminary observations indicate recurring improvements on known LLM
problems across ten distinct categories, including signals of greater inter-invocation stability
given identical input and context.


The tests are empirical and were conducted by the author; independent formal validation
has not yet been performed. Researchers and institutions interested in testing, validating,
or implementing the framework are invited to collaborate under appropriate intellectual
property protections.

Files (41.0 kB)

ONTOALEX_Position_Paper_EN.pdf
  • md5:57739d7b61f3708bee62dbcf32264198 (20.2 kB)
  • md5:ccee4bd1972509a92a9c17a96e5f3f89 (20.8 kB)