Externalized Authority and Artifact-Based Governance in AI Systems
Authors/Creators
Description
As large language models (LLMs) are embedded into systems that execute actions, coordinate workflows, and interact with external resources, questions of authority and governance become central. Many deployed architectures implicitly grant decision-making power to probabilistic components through conversational context, narrative continuity, or internal state inference. This practice introduces ambiguity, instability, and security risk.
This paper argues that reliable AI systems must externalize authority from probabilistic components and govern system progression through artifact-based validation. Rather than allowing narrative output to authorize continuation, execution, or success, authority over state transitions, validation, and termination must be enforced by deterministic mechanisms operating outside the model.
The work identifies common governance failures in LLM-driven systems, including narrative authority takeover, implicit self-validation, and ambiguous decision ownership. It then defines a set of architectural governance invariants—such as fail-closed progression, separation of execution and validation roles, and explicit artifact-based gating—necessary for auditable, secure, and scalable AI systems.
This paper does not propose a specific framework or implementation. Instead, it establishes model-agnostic governance principles required for production-grade AI systems operating in safety-, security-, or reliability-critical domains.
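Although the paper is deliberately implementation-agnostic, the following minimal Python sketch illustrates what the invariants described above (separation of execution and validation roles, artifact-based gating, fail-closed progression) could look like in practice. All names here (`Artifact`, `deterministic_validator`, `Orchestrator`, the example JSON payload) are hypothetical illustrations and are not drawn from the paper.

```python
"""Minimal sketch of artifact-based gating with fail-closed progression.

Hypothetical illustration only: the state machine, Artifact fields, and
validation rule are assumptions, not an API defined by the paper.
"""

from dataclasses import dataclass
from enum import Enum, auto
import json


class State(Enum):
    """Workflow states; transitions are owned by the orchestrator, never the model."""
    DRAFTING = auto()
    VALIDATED = auto()
    HALTED = auto()


@dataclass(frozen=True)
class Artifact:
    """A concrete, inspectable output produced by the probabilistic component."""
    payload: str   # e.g. JSON emitted by the model
    producer: str  # which component produced it, recorded for auditability


def deterministic_validator(artifact: Artifact) -> bool:
    """Validation role: deterministic checks only, independent of the producer.

    Here: the payload must be valid JSON with a non-empty "summary" field.
    The model's own claim of success (e.g. a "done": true flag) is ignored.
    """
    try:
        data = json.loads(artifact.payload)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and bool(data.get("summary"))


class Orchestrator:
    """Holds authority over state transitions; the model only proposes artifacts."""

    def __init__(self) -> None:
        self.state = State.DRAFTING
        self.audit_log: list[str] = []

    def submit(self, artifact: Artifact) -> State:
        # Fail-closed: anything other than explicit validation halts progression.
        try:
            accepted = deterministic_validator(artifact)
        except Exception:
            accepted = False
        if accepted:
            self.state = State.VALIDATED
            self.audit_log.append(f"accepted artifact from {artifact.producer}")
        else:
            self.state = State.HALTED
            self.audit_log.append(f"rejected artifact from {artifact.producer}")
        return self.state


if __name__ == "__main__":
    orch = Orchestrator()
    # Stand-in for model output: its narrative claim of being "done" is ignored;
    # only the external validator's verdict authorizes the state transition.
    model_output = Artifact(
        payload='{"summary": "Report drafted.", "done": true}',
        producer="llm-step-1",
    )
    print(orch.submit(model_output))  # State.VALIDATED
    print(orch.audit_log)
```

The design point of the sketch is that the probabilistic component can only produce artifacts; it cannot advance the workflow, declare success, or terminate the run, since those decisions rest with the deterministic orchestrator and validator.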
Files (90.4 kB)

| Name | Size |
|---|---|
| Ai Paper - Externalized Authority and Artifact-Based Governance in AI Systems.pdf (md5:3d66ea4c31cd4e0c726923fe31c60e85) | 90.4 kB |