Published January 25, 2026 | Version v1
Preprint | Open Access

Constraints on Causal Inference as Experiment Comparison: A Framework for Identification, Transportability, and Policy Learning

Description

Causal inference seeks to predict the outcomes of interventions from observational data---for instance, predicting patient recovery under a new drug protocol based on historical medical records, estimating the effects of genomic interventions on gene expression, or evaluating recommendation algorithms before deployment. In much of the literature, this problem is framed via graphical models and identification criteria that appear distinct from standard statistical inference. We propose a unification based on \textbf{Le Cam's theory of statistical experiments}. We demonstrate that causal inference can be framed as the problem of comparing an observational experiment $\mathcal{E}_{obs}$ to an interventional experiment $\mathcal{E}_{do}$, treating structural continuity as a model property rather than a metaphysical commitment. The ``causal gap'' is quantified by the \textbf{Le Cam deficiency} $\delta(\mathcal{E}_{obs}, \mathcal{E}_{do})$. We show that classical identification criteria (back-door, front-door) are constructive theorems asserting $\delta=0$ with explicit kernels; detailed proofs appear in the appendices. When identification fails, we establish sharp lower bounds on $\delta$ proportional to the strength of confounding. Furthermore, we extend this framework to families of interventions, establishing uniform bounds for policy learning, and provide a finite-sample theory for learning causal kernels from data. In this view, the rules of do-calculus can be read as kernel existence results, and policy safety bounds follow directly from deficiency-based risk transfer.
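To fix ideas, here is a minimal sketch of the objects the abstract names, using one common (one-sided) normalization of the deficiency and standard adjustment notation; this is a gloss on the abstract, not an excerpt from the paper. The deficiency measures how well a single Markov kernel $K$ can map observational laws onto interventional ones, uniformly over the parameter:

\[
\delta(\mathcal{E}_{obs}, \mathcal{E}_{do}) \;=\; \inf_{K}\, \sup_{\theta \in \Theta}\, \bigl\| K P^{obs}_{\theta} - P^{do}_{\theta} \bigr\|_{TV}.
\]

Identification ($\delta = 0$) is then witnessed by an explicit kernel; for example, under the back-door criterion with admissible adjustment set $Z$,

\[
P(y \mid do(x)) \;=\; \sum_{z} P(y \mid x, z)\, P(z).
\]

Risk transfer is the step behind the policy safety claim: for any loss bounded in $[0,1]$ and any procedure $\rho$ for $\mathcal{E}_{do}$, composing with a (near-)optimal kernel yields a procedure on $\mathcal{E}_{obs}$ whose risk exceeds that of $\rho$ by at most the deficiency,

\[
R_{\mathcal{E}_{obs}}(\rho \circ K, \theta) \;\le\; R_{\mathcal{E}_{do}}(\rho, \theta) + \delta(\mathcal{E}_{obs}, \mathcal{E}_{do}) \quad \text{for all } \theta,
\]

up to an arbitrarily small slack when the infimum over kernels is not attained.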

Files

main.pdf (1.4 MB)
md5:0e0b4286d2f607ecb425a5393fae7a8d