The Incompatibility of Probabilistic Inference and Authority: Why AI Systems That Guess Cannot Be Trusted With Decisions
Description
Modern artificial intelligence systems increasingly operate in roles with real-world consequences, yet most are built on probabilistic inference. This paper advances a structural claim: probabilistic inference and execution authority are incompatible by design. Systems optimized to estimate likelihoods cannot reliably enforce permission, refusal, or fail-closed behavior, all of which are required for legitimate decision authority in high-consequence environments.
The paper demonstrates why commonly proposed remedies—such as increased model scale, improved data quality, explainability, monitoring, and audits—cannot resolve this incompatibility, as they operate after execution rather than governing whether execution should occur. It argues that trustworthy AI systems require deterministic, pre-execution governance that enforces explicit, rule-governed state transitions.
This work focuses on logical necessity rather than implementation detail and establishes a system-level foundation for accountability, safety, and control in AI decision systems.
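The paper's central contrast — deterministic, fail-closed, pre-execution governance versus probabilistic best-guess inference — can be sketched minimally as a rule-governed authorization gate. The names below (`PreExecutionGate`, `Decision`) are illustrative assumptions, not an implementation described in the paper:

```python
from enum import Enum, auto

class Decision(Enum):
    PERMIT = auto()
    REFUSE = auto()

class PreExecutionGate:
    """Deterministic gate: permits execution only on an explicit rule match.

    Any request not covered by an explicit rule falls closed (REFUSE),
    unlike a probabilistic scorer, which would still emit a best-guess action.
    """
    def __init__(self, rules):
        # rules: mapping from (actor, action) to an explicit Decision
        self.rules = dict(rules)

    def authorize(self, actor, action):
        # Fail closed: absence of an explicit rule means refusal,
        # so authorization is decided before execution, not after.
        return self.rules.get((actor, action), Decision.REFUSE)

gate = PreExecutionGate({("operator", "read"): Decision.PERMIT})
print(gate.authorize("operator", "read"))    # explicit permission
print(gate.authorize("operator", "delete"))  # no rule: fail-closed refusal
```

The design choice mirrors the paper's claim: the gate's state transitions are explicit and enumerable, so refusal is the default outcome rather than a low-probability tail of an estimator.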
Files (236.1 kB)
| Name | Size |
|---|---|
| The Incompatibility of Probabilistic Inference and Authority Why AI Systems That Guess Cannot Be Trusted With Decisions.pdf (md5:9702474eb4c2436497d441d311d0e738) | 236.1 kB |
Additional details
Identifiers
- Other
- US Non-Provisional Patent Application No. 19/400,020
Related works
- Is supplement to
- Publication: https://zenodo.org/records/17826047 (URL)
- Publication: https://zenodo.org/records/17766646 (URL)
Dates
- Issued: 2025-12-21