Published February 7, 2026 | Version 1.0
Preprint | Open

LUCID: Leveraging Unverified Claims Into Deliverables — A Neuroscience-Grounded Framework for Exploiting Large Language Model Hallucination as a Software Specification Engine

Authors/Creators

  • Independent Researcher

Description

Large language model (LLM) hallucination is universally treated as a defect to be minimized. We argue this framing is backwards.

We present LUCID (Leveraging Unverified Claims Into Deliverables), a development methodology that deliberately invokes LLM hallucination, extracts the resulting claims as testable requirements, verifies them against a real codebase, and iteratively converges hallucinated fiction toward verified reality.
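A minimal sketch of that loop in Python is given below. It is an illustration of the methodology as described in this abstract, not the published CLI's API; ask_model, extract_claims, verify_claim, and implement are hypothetical placeholders for an LLM client and a verification harness.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Claim:
    text: str               # one hallucinated statement about the codebase
    verified: bool = False  # does the codebase actually satisfy it?

def lucid_iteration(ask_model: Callable[[str], str],
                    extract_claims: Callable[[str], list[Claim]],
                    verify_claim: Callable[[Claim], bool],
                    prompt: str) -> tuple[list[Claim], float]:
    """One hallucinate -> extract -> verify pass; returns claims and compliance."""
    answer = ask_model(prompt)                # 1. deliberately invite hallucination
    claims = extract_claims(answer)           # 2. turn prose into discrete, testable requirements
    for claim in claims:
        claim.verified = verify_claim(claim)  # 3. check each claim against the real code
    compliance = sum(c.verified for c in claims) / max(len(claims), 1)
    return claims, compliance

def lucid_converge(ask_model, extract_claims, verify_claim, implement,
                   prompt: str, target: float = 0.90, max_iterations: int = 6):
    """Repeat until enough hallucinated claims are true of the codebase."""
    claims, compliance = [], 0.0
    for _ in range(max_iterations):
        claims, compliance = lucid_iteration(ask_model, extract_claims,
                                             verify_claim, prompt)
        if compliance >= target:
            break
        for claim in claims:
            if not claim.verified:
                implement(claim)              # 4. change the code so the claim becomes true
    return claims, compliance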

We provide theoretical grounding through predictive processing, the mathematical equivalence between transformer attention and hippocampal pattern completion, and the REBUS (relaxed beliefs under psychedelics) model. We demonstrate convergence from 57.3% to 90.8% compliance across six iterations on a production application.
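The abstract does not spell out which formalization of the attention/pattern-completion equivalence is meant; one common candidate, assuming the modern (continuous) Hopfield-network reading of hippocampal pattern completion, equates the one-step Hopfield retrieval update with scaled dot-product attention:

\[
\xi^{\mathrm{new}} = X\,\operatorname{softmax}\bigl(\beta X^{\top}\xi\bigr)
\qquad\text{vs.}\qquad
\operatorname{Attention}(Q,K,V) = \operatorname{softmax}\!\Bigl(\tfrac{QK^{\top}}{\sqrt{d_k}}\Bigr)V,
\]

where the query state \(\xi\) plays the role of \(Q\), the stored patterns \(X\) supply keys and values, and the inverse temperature \(\beta\) corresponds to \(1/\sqrt{d_k}\).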

12 pages, 5 tables, 35 references. Open-source CLI implementation available.

Notes

Preprint. Open-source CLI implementation at https://github.com/gtsbahamas/hallucination-reversing-system

Files (95.1 kB)

lucid-leveraging-unverified-claims-into-deliverables.pdf
