Published March 30, 2026 | Version v1.0
Preprint · Open Access

Assured Intelligence Systems: A Governed Architecture for Reliable, Auditable, and Controllable Agentic AI

Description

AIS Monograph · Joshua K. Cliff, 2026
130 pages · 21 sections · 12 appendices · 45 formal results · CC BY 4.0

Overview

Persistent tool-using AI agents require governance over consequential state transitions, not outputs alone. This monograph presents Assured Intelligence Systems (AIS), a formal architecture derived from Principal Dynamics. AIS separates representation, memory, planning, action, governance, verification, release, and self-edit into typed layers, all governed by a single non-compensatory admissibility relation over four burdens: support, policy, verification, and recovery.
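The non-compensatory character of the admissibility relation can be illustrated as a pure conjunction over the four burdens. This is a minimal sketch, not the monograph's formal notation: the names `Burdens` and `admissible` are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Burdens:
    support: bool       # evidential support for the proposed transition
    policy: bool        # compliance with governing policy
    verification: bool  # verification obligations discharged
    recovery: bool      # a rollback/recovery path exists

def admissible(b: Burdens) -> bool:
    # Non-compensatory: every burden must hold independently.
    # Strength on one burden cannot offset failure on another.
    return b.support and b.policy and b.verification and b.recovery
```

Under this reading, a transition with overwhelming evidential support but no recovery path is still inadmissible, which is what distinguishes a conjunctive relation from any weighted score.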

The architecture is consequence-scalable: the same governing model instantiates under lightweight profiles for low-stakes advisory deployments and under full hard-gated profiles for high-stakes autonomous systems, with uncertainty-certified admissibility bands bridging deterministic governance logic and probabilistic AI engines.
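One way to picture consequence scaling with uncertainty-certified bands: a profile fixes a deterministic admission threshold, and a probabilistic engine's output is admitted only when the pessimistic end of its uncertainty band clears that threshold. The profile names, thresholds, and `admit` function below are illustrative assumptions, not definitions from the monograph.

```python
# Hypothetical profile thresholds: lightweight for low-stakes advisory
# deployments, hard-gated for high-stakes autonomous systems.
PROFILES = {
    "advisory": 0.60,
    "hard_gated": 0.99,
}

def admit(profile: str, score: float, uncertainty: float) -> bool:
    # Uncertainty-certified admission: the lower edge of the band
    # (score - uncertainty) must clear the profile threshold, so
    # deterministic governance logic never relies on an optimistic
    # reading of a probabilistic engine's output.
    return (score - uncertainty) >= PROFILES[profile]
```

The same governing logic runs under both profiles; only the threshold changes with the consequence class of the deployment.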

What the Paper Provides

  • Formal control-plane semantics: typed operational state, route-qualified transitions, four-burden conjunctive admissibility, consequence-scaled assurance profiles, uncertainty-certified admission, and a conservative compiled admission kernel
  • Layered architecture with structural contracts: control surfaces, governance as runtime state, typed receipt families, replayability, rollback and quarantine semantics, and a unified failure atlas recasting major agent failures as inadmissible or unrecoverable transitions
  • Side-effecting execution closure: effect-transaction semantics with terminal-state resolution for write-capable commits, lineage-aware rollback propagation, promotion-scoped memory with quarantine handling, shared-resource lease control, and attestation-bearing release
  • Scale and adaptation results: planning-layer invariance across heuristic through formal world-model planning, product-regime composition for multi-agent delegation with composed bypass-freedom, governed continuous learning, coherence-based anomaly detection, and adaptive threshold governance with a provable governance floor
  • Current-stack applicability: tool-use governance, prompt injection as structural violation, hallucination as support-burden failure, context drift, loop containment, memory poisoning — all mapped onto current LLM-based agent frameworks (MCP, OpenAI Agents SDK, LangGraph)
  • Operational architecture: evaluation structure, deployment and rollback topology, recursive self-edit containment, implementation blueprint, and a validation bundle with 14 defined metrics
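The effect-transaction semantics above require every write-capable commit to end in a terminal state. A minimal sketch of that terminal-state resolution, with state names and the `resolve` function assumed for illustration rather than taken from the monograph:

```python
from enum import Enum, auto

class Terminal(Enum):
    COMMITTED = auto()    # verified effect, attestation-bearing release
    ROLLED_BACK = auto()  # unverified effect undone via lineage-aware rollback
    QUARANTINED = auto()  # unverified and unrecoverable: state isolated

def resolve(verified: bool, rollback_available: bool) -> Terminal:
    # Every applied effect must resolve to exactly one terminal state;
    # no transaction is left in an indeterminate condition.
    if verified:
        return Terminal.COMMITTED
    if rollback_available:
        return Terminal.ROLLED_BACK
    return Terminal.QUARANTINED
```

Quarantine as the fall-through case reflects the closure property: an effect that can be neither verified nor rolled back is contained rather than silently retained.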

Formal Results

45 formal results with proofs: 24 theorems, 19 propositions, 1 corollary, and 1 lemma. Results fall into three categories:

Structural contracts (hold by construction when instantiated): governed admissibility preservation, no-bypass, surface completeness, route-legality preservation, receipt completeness, bounded replayability, consequence-scaled admissibility monotonicity, conservativeness of the compiled admission kernel, write-capable execution closure, and rollback-ready promotion.

Robustness and composition results (hold under explicit premises): planning reliability bound, architecture invariance under planning-layer upgrade, receipt-chain completeness, composed bypass-freedom, governed-learning preservation, anomaly-implies-future-support-burden violation, lucid drift, governance-floor preservation, and adaptive-threshold stability.

Operational results: governed tool-use, injection detection under governed route update, and admissibility strength.

Self-Containment

The paper is fully self-contained. Appendix A (Source Basis) states the relationship to the governing framework. Appendix B (Governing Theory Results) provides every mathematical result from Principal Dynamics that is used in the body, with proofs. No access to external documents is required to validate any claim.

What Is Not Claimed

  • No deployment, benchmark, or empirical validation is claimed
  • No claim that hallucination is eliminated or recursive self-improvement is solved
  • No claim that current LLMs can fully instantiate metric-based detection
  • No claim that threshold or consequence-score calibration is solved
  • No claim that governance evaluation has zero compute cost
  • The paper defines the architecture and proves its formal properties; engineering, calibration, and empirical validation are the next stage

Keywords: agentic AI, AI governance, controllable AI, verifiable AI systems, state-transition assurance, multi-agent composition, anomaly detection, adaptive governance, tool-use governance, prompt injection, Principal Dynamics, consequence scaling, admissibility

Files (1.4 MB)

AIS_Executive_Summary.pdf

md5:53c0d301804bfcd19613a8f4b14a6d38 · 43.9 kB
md5:29c954225cdfa9da5f055761f766ded7 · 110.4 kB
md5:ee3cf84f36513c44831e4929eab7ac37 · 35.0 kB
md5:68c020a0fb42d46595cebf8552c81add · 6.5 kB
md5:73bbd960b216c81c4f54ef6dc272486a · 764.9 kB
md5:fac89ee9d0b4c37478fad614eaed37da · 469.8 kB
md5:112d2043e5393ec55dc311a122626b22 · 872 Bytes