Published June 15, 2025 | Version 1.0 – Initial Public Release
Preprint | Open

Asymmetric Self-Consistency Hypothesis: AI-Assisted Verification and Falsifiability

Description


We propose the Asymmetric Self-Consistency Hypothesis, a novel framework in which formal AI verification (via Lean, Coq, and GPT-based systems) establishes the internal consistency of a theoretical model. Under this hypothesis, if a theory passes such checks, any experimental contradiction must be attributed either to measurement error or to flaws in the foundational axioms, not to the theory's internal logic.

This paradigm shifts the burden of falsifiability in the age of AI, offering a cost-efficient and rigorous alternative to traditional large-scale experimental validation.
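To illustrate the kind of machine-checked guarantee the hypothesis relies on, here is a minimal Lean 4 sketch. It is a hypothetical example, not taken from the dataset's proof scripts: once Lean accepts a proof, the derivation it certifies cannot be internally inconsistent, so any empirical mismatch with such a result must originate elsewhere.

```lean
-- Hypothetical illustration (not from this dataset's .lean files):
-- a lemma whose proof the Lean kernel verifies mechanically.
-- If every derivation in a theory passes such checks, the theory's
-- logic is certified consistent relative to its axioms; an experimental
-- contradiction then implicates measurement or the axioms themselves.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

The same role is played by the Coq (`.v`) scripts in the dataset; both systems reduce trust in a proof to trust in a small, well-audited kernel.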

This dataset includes:

  • All formal proof scripts (.lean, .v)

  • GPT-based verification reports and diffs

  • Reproducible Docker environment

  • Collider simulation data (Delphes-compatible)

  • Systematic error tables and theoretical overlays

PSBigBig (Independent Developer and Researcher)  
📧 hello@onestardao.com  
🌐 https://onestardao.com/papers  
💻 https://github.com/onestardao/WFGY


Files (581.2 kB)

Asymmetric_SelfConsistency_AI_Verification_v1.0_PSBigBig_Public.pdf

Additional details

Related works

Is supplement to
Preprint: 10.5281/zenodo.15630969 (DOI)

Dates

Accepted
2025-06-15