Published April 28, 2026 | Version 1.0.0-mdlh-pro | Working paper | Open access
LSC and Massively Documented LLM Hallucination: A Dual-Interpretation Framework for AI-Assisted Scientific Discovery
Description
A publication-grade working paper proposing Massively Documented LLM Hallucination (MDLH) as a formal epistemic-risk framework for AI-assisted scientific discovery, using the LSC neutrino research line as a dual-interpretation case study. The work claims neither that LSC is validated physics nor that it is false; instead, it separates unvalidated physics from AI-generated epistemic artifacts. The MDLH archive now records the 6.2.2 repair path explicitly. Public note: the correction makes the isotropic trace explicit, keeps the directional term traceless, removes the mixed `1/E^2` base usage, and anchors sidereal tests in a fixed celestial frame.
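The "isotropic trace explicit / directional term traceless" repair described above can be read as a standard tensor decomposition. As a minimal sketch (not taken from the paper; the function name and the 3x3 coefficient tensor are illustrative assumptions), a coefficient tensor can be split into an explicit isotropic trace part and a traceless directional remainder:

```python
import numpy as np

def split_isotropic_traceless(C):
    """Split a 3x3 tensor C into (isotropic, directional) with
    C = isotropic + directional, where the directional part is traceless.

    Hypothetical illustration of the decomposition described in the
    record's public note; not code from the paper itself."""
    C = np.asarray(C, dtype=float)
    iso = (np.trace(C) / 3.0) * np.eye(3)   # isotropic trace made explicit
    directional = C - iso                    # traceless by construction
    return iso, directional

# Example with an arbitrary symmetric coefficient tensor:
C = np.array([[2.0, 0.1, 0.0],
              [0.1, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
iso, directional = split_isotropic_traceless(C)
assert abs(np.trace(directional)) < 1e-12    # directional term is traceless
assert np.allclose(iso + directional, C)     # decomposition is exact
```

Keeping the directional term traceless ensures the isotropic and direction-dependent effects cannot mix, which is the separation the correction is meant to enforce.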
Files (942.0 kB)

LSC_MDLH_PRO.pdf

| Name | Size | MD5 |
|---|---|---|
| | 368.7 kB | 3d0d6371434f1e5c2da32ccc48bdec74 |
| | 573.3 kB | 9383fcdf9b00c636565c78f0e617a377 |