Operational Coherence Bounds in Artificial Intelligence
Description
We extend the Operational Coherence Bound (OCB) and quantum Operational Coherence Bound (qOCB) frameworks to artificial intelligence, formalizing the Neural Operational Coherence Bound (NOCB). We prove that neural networks are finite observers with bounded operational entropy capacity A_max = 4NLd·log(e). Applying this constraint, we derive: (1) hallucination is a formal OCB violation with an architectural lower bound; (2) catastrophic forgetting rates are determined by the training contraction coefficient κ_train = ημ; (3) softmax attention is the unique Petz-optimal distinguishability mechanism for the qOCB metric, with head count determined by sparsity-corrected metric tessellation; (4) the Chinchilla scaling law exponents α ≈ 0.34 and β ≈ 0.28 arise from the qOCB geometry of the loss landscape and the intrinsic dimension of language. All results are grounded in the foundational OCB framework (1) and the qOCB extension (2), and provide testable predictions for model architecture and training dynamics.
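The two closed-form quantities in the abstract can be sketched numerically. This is a minimal illustration, not the paper's implementation: the interpretation of N, L, and d as parameter count per layer, layer count, and embedding dimension, and the choice of base-2 logarithm in log(e), are assumptions made here for concreteness.

```python
import math

def nocb_capacity(N, L, d):
    """Operational entropy capacity A_max = 4 * N * L * d * log(e).

    Assumes N = units per layer, L = layers, d = embedding dimension,
    and a base-2 logarithm (so log(e) = log2(e) ~ 1.4427); these
    interpretations are illustrative, not taken from the paper.
    """
    return 4 * N * L * d * math.log2(math.e)

def training_contraction(eta, mu):
    """Training contraction coefficient kappa_train = eta * mu.

    eta is read here as the learning rate and mu as a curvature
    (e.g. strong-convexity) constant -- again an assumption.
    """
    return eta * mu

# Illustrative magnitudes only (hypothetical architecture):
A_max = nocb_capacity(N=512, L=24, d=768)
kappa = training_contraction(eta=1e-3, mu=5.0)
```

Under these assumptions, A_max grows linearly in each of N, L, and d, and κ_train shrinks with the learning rate, consistent with the claim that forgetting rates are set by ημ.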
Files
- Operational Coherence Bounds in Artificial Intelligence.pdf (245.0 kB), md5:b43aee7f11f99f68d1e1b94c3368615d
Additional details
Related works
- Cites: 10.5281/zenodo.18396941 (DOI, Publication)