Living Memory Inference: Separating Knowledge from Reasoning in AI Systems
Description
We present Living Memory Inference (LMI), a method that separates knowledge
from reasoning in AI systems. In contrast to Retrieval-Augmented Generation (RAG),
which treats external storage as a read-only supplement to a model's internal
knowledge, LMI inverts this relationship: the external knowledge store becomes the
primary source of intelligence, while the language model serves exclusively as a
stateless reasoning mechanism over injected facts. The store is not static — it grows,
decays, and self-corrects through autonomous write-back, consolidation, and
contradiction detection after every inference. We define the LMI method, describe its
three-layer architecture, and present Loci — a reference open-source implementation
in Go backed by PostgreSQL with pgvector for vector similarity search. We evaluate
Loci across 120 test cases spanning six benchmark suites and thirteen domains. Loci
achieves perfect grounding (1.00) across all 120 cases, including 25 adversarial
scenarios designed to induce hallucination; perfect answer quality (1.00) on complex
reasoning chains, and a 58% reduction in hallucinations versus an ungrounded
baseline of the same model. This is a systems and position paper; evaluation on
standard public benchmarks is identified as the primary direction for future work.
Files
| Name | Size |
|---|---|
| lmi_paper (3).pdf (md5:41c009a9f3314d5dfeb6beb2b104653f) | 38.8 kB |
Additional details
Identifiers
- Other
- ASH-LMI
Related works
- References
- Publication: Lewis et al., "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" (DOI: 10.48550/arXiv.2005.11401)
- Publication: Vaswani et al., "Attention Is All You Need" (DOI: 10.48550/arXiv.1706.03762)
- Publication: Gao et al., "Retrieval-Augmented Generation for Large Language Models: A Survey" (DOI: 10.48550/arXiv.2312.10997)
Dates
- Created: 2026-04-10
Software
- Repository URL
- https://github.com/alash3al/loci
- Programming language
- Go
- Development Status
- Active