From Agent Risk to Framing Risk - The Governance Blind Spot in External AI Systems
Abstract
AI governance has concentrated on execution risk: whether agents act lawfully, whether decisions are compliant, whether liability can be assigned. This focus leaves a structural blind spot: before execution, there is representation. Large language models shape how option spaces are framed at the moment reliance forms; they influence what appears comparable, credible, or even visible. When that representational layer is external, non-deterministic, and unlogged, institutions face a new condition: evidentiary asymmetry. Influence persists; a durable record does not.

This article argues that governance must expand from agent control to framing reconstructability. The emerging requirement is not the optimization of outputs but the preservation of decision-adjacent representational states sufficient for supervisory review.
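To make "preservation of decision-adjacent representational states" concrete, the following minimal Python sketch shows one way an institution might capture a durable, hash-bound record of a single framing interaction for later supervisory review. It is illustrative only and not taken from the article; names such as `FramingRecord` and `capture_framing` are assumptions introduced here.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class FramingRecord:
    """Decision-adjacent representational state captured when reliance forms."""
    model_id: str     # external model (and version) that produced the framing
    prompt: str       # what the institution asked
    response: str     # how the option space was framed in return
    captured_at: str  # UTC timestamp of capture
    digest: str       # content hash binding the record for later review


def capture_framing(model_id: str, prompt: str, response: str) -> FramingRecord:
    """Build a tamper-evident record of a single framing interaction."""
    captured_at = datetime.now(timezone.utc).isoformat()
    payload = json.dumps(
        {
            "model_id": model_id,
            "prompt": prompt,
            "response": response,
            "captured_at": captured_at,
        },
        sort_keys=True,
    )
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return FramingRecord(model_id, prompt, response, captured_at, digest)


if __name__ == "__main__":
    record = capture_framing(
        model_id="external-llm-2025-01",
        prompt="Compare vendor options A, B, and C for the procurement decision.",
        response="Options A and B are framed as comparable; option C as higher risk.",
    )
    # Persisting records like this gives supervisors a durable trace of the
    # framing, addressing the evidentiary asymmetry described in the abstract.
    print(json.dumps(asdict(record), indent=2))
```

The design choice here is deliberately minimal: the record captures the representational state itself (prompt and response), not a judgment about its quality, which matches the article's emphasis on reconstructability rather than output optimization.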
Files

| Name | Size |
|---|---|
| From Agent Risk to Framing Risk.pdf (md5:dd1162eea2bae39c75bad68e7a66e981) | 189.6 kB |