Published February 12, 2026 | Version v1
Preprint | Open Access

Observable-Only AI Safety from Public Data: Robust Bottleneck Diagnosis with Auditable No-Meta Dynamic Programming, Anytime Confidence Sequences, and Dynamic IQC

Description

Observable-Only AI Safety from Public Data presents an auditable safety framework for robust bottleneck diagnosis in coupled dynamical systems under strict public-data constraints. The method enforces no-meta governance: decisions may use only replay-visible evidence and authenticated exogenous governance updates, with no hidden evaluators or privileged latent access.
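The "authenticated exogenous governance updates" constraint can be illustrated with a minimal sketch: a gate that admits a governance update only if its message authentication tag verifies, and drops it otherwise. This is an illustrative fragment, not the manuscript's protocol; the function name and the choice of HMAC-SHA256 are assumptions made here for concreteness.

```python
import hashlib
import hmac


def accept_governance_update(update: bytes, tag: bytes, key: bytes) -> bool:
    """Admit an exogenous governance update only if its HMAC tag verifies.

    Illustrative no-meta gate: the decision rule consumes only
    replay-visible evidence plus updates that pass this check, and any
    unauthenticated input is dropped (fail-closed) rather than used.
    """
    expected = hmac.new(key, update, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking tag bytes via timing.
    return hmac.compare_digest(expected, tag)
```

A verifier replaying the public record can re-run the same check on the same bytes, so admission decisions are themselves auditable.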

The framework combines robust dynamic programming, partial identification, model-indexed e-processes / anytime-valid confidence sequences, and dynamic IQC analysis. It produces reproducible interval diagnostics with explicit uncertainty cushions (optimization, implementation, contamination, dependence, interaction, and rectangularization), fail-closed declaration rules, time-consistent ambiguity recursion, and deterministic replay contracts suitable for third-party verification.
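One ingredient above, the anytime-valid confidence machinery with a fail-closed declaration rule, can be sketched with a textbook e-process for bounded observations. The function name, the fixed bet λ, and the specific Hoeffding e-value are illustrative assumptions, not the manuscript's construction; the anytime-validity follows from Ville's inequality for the nonnegative product supermartingale.

```python
import math


def hoeffding_e_process(xs, mu0, lam=0.5, alpha=0.05):
    """Anytime-valid test of H0: mean <= mu0 for observations in [0, 1].

    Multiplies Hoeffding e-values exp(lam*(x - mu0) - lam**2/8), which
    have expectation <= 1 under H0. By Ville's inequality the running
    product ("wealth") exceeds 1/alpha with probability <= alpha under
    H0 at ANY data-dependent stopping time, so monitoring continuously
    and declaring on first crossing keeps the error guarantee.
    """
    wealth = 1.0
    for t, x in enumerate(xs, start=1):
        assert 0.0 <= x <= 1.0, "observations must be bounded in [0, 1]"
        wealth *= math.exp(lam * (x - mu0) - lam * lam / 8.0)
        if wealth >= 1.0 / alpha:
            # Fail-closed declaration: flag the bottleneck and stop.
            return ("declare", t, wealth)
    return ("continue", len(xs), wealth)
```

On a stream whose mean clearly exceeds `mu0` the wealth grows geometrically and the rule declares early; on conforming data the wealth stays small and monitoring simply continues.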

The manuscript includes formal guarantees for well-posedness, measurable selector construction, identification limits, branchwise behavior (in-class statistical guarantees versus out-of-class safety behavior), and non-circular lag-one IQC tightening. It also provides machine-checkable certificate schemas, cross-field replay invariants, and operational pseudocode for online deployment and auditing.
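The flavor of a machine-checkable certificate bound to a deterministic replay contract can be shown in a few lines: canonically serialize the replay-visible evidence stream, digest it, and accept a certificate only if the digests match. The serialization convention and field name `evidence_sha256` are assumptions made for this sketch, not the manuscript's schema.

```python
import hashlib
import json


def evidence_digest(records):
    """Canonical SHA-256 digest of a replay-visible evidence stream.

    Records are serialized with sorted keys and fixed separators so any
    third party replaying the same public records derives byte-identical
    input and hence the same digest -- a deterministic replay contract.
    """
    h = hashlib.sha256()
    for rec in records:
        h.update(json.dumps(rec, sort_keys=True, separators=(",", ":")).encode())
    return h.hexdigest()


def verify_certificate(cert, records):
    """Fail-closed audit check: accept only if the replayed digest matches."""
    return cert.get("evidence_sha256") == evidence_digest(records)
```

Any divergence between the certificate and the replayed stream (missing, reordered, or altered records) changes the digest, so verification fails closed rather than silently passing.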

This work is designed as an accountability and best-effort safety protocol, not a truth oracle. It does not guarantee recovery of latent ground truth beyond what is identifiable from observable data under explicit assumptions.

Files

Observable-Only AI Safety from Public Data.pdf (637.1 kB)
md5:e44fa009b69aab2ca61c23c01d6c8d9a