Published January 9, 2026 | Version v1
Preprint Open

Verifiable Governance Architecture (VGA) for Organisations and Teams with Human and AI Employees

Authors/Creators

  • Independent Researcher

Description

This paper introduces the Verifiable Governance Architecture (VGA), a runtime-enforceable framework for governing agentic AI systems in hybrid human-AI organizations. VGA operationalizes a prescriptive 20-law codex through a fail-closed Watchdog that mediates tool calls, a Minimal Viable Verification (MVV) matrix with verifiable artifacts and cadences, seniority-based decision rights, and an immutable evidence store that eliminates "governance hallucinations." Implemented as governance-as-code (OPA/Rego with CI/CD testing), VGA provides preventative enforcement at the tool boundary while producing regulator-ready evidence.

Positioned as a control-plane complement to prompt guardrails, reward-model oversight, and process supervision, VGA provides action-boundary guarantees for irreversible tools (e.g., payments, clinical actions). The paper includes detailed mappings to the EU AI Act (Articles 12/15), the NIST AI RMF, and ISO/IEC 42001, plus a deployment checklist, a performance analysis, and a risk-reduction ROI model grounded in empirical incident costs.

Designed for high-stakes deployments at any scale, from startups to regulated enterprises, VGA integrates optional Honey Badger Management Framework (HBMF) mechanisms for verification cadence and multi-party authorization. Appendices provide a quick-summary overview, a glossary, a regulatory crosswalk, and an open-source-ready Rego reference implementation.

Intended for researchers and practitioners in AI governance, agentic-systems safety, regulatory compliance, and hybrid workforce management.
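The core enforcement pattern described above, a fail-closed Watchdog mediating tool calls and appending every decision to an evidence store, can be sketched roughly as follows. This is an illustrative sketch only: the class and exception names, the policy callable, and the list standing in for the immutable evidence store are assumptions, not the paper's reference implementation (which the abstract says is written in OPA/Rego).

```python
class PolicyViolation(Exception):
    """Raised when a tool call is denied or the policy cannot be evaluated."""

class Watchdog:
    """Illustrative fail-closed mediator for tool calls (names are hypothetical)."""

    def __init__(self, policy, evidence_log):
        self.policy = policy              # callable: (tool, args) -> bool
        self.evidence_log = evidence_log  # append-only list standing in for an immutable store

    def mediate(self, tool, args, execute):
        try:
            allowed = bool(self.policy(tool, args))
        except Exception:
            allowed = False  # fail-closed: a policy error denies the call, never permits it
        # Record the decision before acting, so denied attempts also leave evidence.
        self.evidence_log.append({"tool": tool, "args": args, "allowed": allowed})
        if not allowed:
            raise PolicyViolation(f"tool call denied: {tool}")
        return execute(**args)
```

Under this sketch, a caller would wrap every tool invocation in `mediate`; an irreversible tool absent from the policy's allowlist is refused at the action boundary, and both allowed and denied attempts land in the evidence log.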

Files

Verifiable-Governance-Architecture--VGA--for-Organisations-and-Teams-with-Human-and-AI-Employees.pdf

Additional details

Related works

Cites
Preprint: 10.5281/zenodo.17957606 (DOI)