Replay-Bound Evidence: Cryptographic Accountability for Autonomous AI Systems
Description
Replay-Bound Evidence is a minimal framework for producing
cryptographically verifiable records of the actions of autonomous AI agents.
As AI systems increasingly execute consequential decisions — financial trades,
infrastructure changes, access control — traditional logging systems cannot
answer a fundamental question: "Can you prove what happened?"
This paper defines four minimal properties that a system must satisfy to
produce Replay-Bound Evidence:
1. Attested Events — cryptographic signing of every recorded action
2. Subject-Scoped Replay Protection — per-subject nonce or sequence invariant
3. Canonical Serialization — deterministic encoding prior to signing
4. Offline Verifiability — verification without originating infrastructure
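Properties 1, 3, and 4 can be sketched together in a few lines of Python. This is an illustrative sketch, not GuardClaw's actual scheme: HMAC-SHA256 stands in for a real asymmetric signature, and the event field names are hypothetical.

```python
import hashlib
import hmac
import json

def canonical(event: dict) -> bytes:
    # Property 3: deterministic encoding -- fixed key order, no
    # whitespace variance -- so signer and verifier hash identical bytes.
    return json.dumps(event, sort_keys=True, separators=(",", ":")).encode()

def attest(event: dict, key: bytes) -> dict:
    # Property 1: sign the canonical form of every recorded action.
    # HMAC-SHA256 is a stand-in here; a deployed system would use an
    # asymmetric signature so verifiers need no signing secret.
    sig = hmac.new(key, canonical(event), hashlib.sha256).hexdigest()
    return {"event": event, "sig": sig}

def verify(record: dict, key: bytes) -> bool:
    # Property 4: verification needs only the record and the key,
    # not the originating infrastructure.
    expected = hmac.new(key, canonical(record["event"]), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

key = b"demo-key"
rec = attest({"subject": "agent-7", "seq": 1, "action": "trade"}, key)
```

Any change to the event (or to its serialization) changes the canonical bytes and invalidates the signature, which is why deterministic encoding must happen before signing.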
The paper introduces the Evidence Maturity Model (Level 0–4), a formal
replay-bound invariant, and an Evidence Gap analysis contrasting traditional
logging with cryptographic accountability.
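One plausible reading of the replay-bound invariant (Property 2) is a per-subject sequence check: each subject's sequence number starts at 1 and increments by exactly one, so a replayed or dropped event is detectable. The field names `subject` and `seq` are illustrative assumptions, not the paper's notation.

```python
def check_replay_bound(events: list[dict]) -> bool:
    """Check the per-subject sequence invariant over an event stream:
    for each subject, seq must run 1, 2, 3, ... with no repeats or gaps.
    A repeat indicates a replayed event; a gap indicates a dropped one."""
    last: dict[str, int] = {}
    for e in events:
        subject, seq = e["subject"], e["seq"]
        if subject not in last:
            if seq != 1:          # stream must start at 1 per subject
                return False
        elif seq != last[subject] + 1:  # replay (repeat) or gap
            return False
        last[subject] = seq
    return True
```

Interleaved subjects are fine under this invariant, since the check is scoped per subject rather than global.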
A reference implementation exists as GuardClaw v0.1.x (Apache 2.0):
https://github.com/viruswami5511/guardclaw
This is a public discussion draft. Technical feedback is welcome.
Files

| Name | Size |
|---|---|
| replay-bound-evidence-v1.0.md (md5:afa065850e812f9ff65b3b96c8558764) | 15.3 kB |
Additional details
Software
- Repository URL: https://github.com/viruswami5511/guardclaw
- Programming language: Python