BTMA: Biomorphic Temporal Memory Architecture for Large Language Models
Description
Large Language Models (LLMs) currently lack persistent memory across sessions, forcing every interaction to begin from scratch and requiring users to reconstruct context manually.
This re-establishment overhead scales with interaction depth, creating a compounding cost for longitudinal tasks such as research, therapeutic support, and collaborative creative work.
This paper proposes the Biomorphic Temporal Memory Architecture (BTMA), a multi-layer memory consolidation protocol designed specifically for LLMs and inspired by the hierarchical consolidation mechanisms of biological memory systems.
BTMA introduces systematic temporal anchoring via mandatory session timestamps and a four-layer artefact hierarchy (event, daily, monthly, annual) that mirrors the neurobiological progression from working memory to long-term consolidation.
Each layer distils the one below it, producing compressed, contextually rich memory traces without unbounded accumulation.
The architecture is platform-agnostic, requires no modification to model weights, and operates entirely through structured prompting and user-controlled local storage (Obsidian-compatible Markdown).
BTMA provides mnesic-sensorial continuity, a form of functional memory that benefits any AI system regardless of its level of cognitive emergence, and is composable with any alignment or behavioural framework without modification. We present the full implementation protocol and invite independent validation.
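To make the artefact hierarchy concrete, the sketch below illustrates one plausible realisation of the protocol: timestamped session events are distilled upward into daily (and, by the same mechanism, monthly and annual) notes stored as Obsidian-compatible Markdown. All identifiers (`ArtefactLayer`, `MemoryArtefact`, `consolidate`, `write_note`) are hypothetical and not taken from the paper; this is a minimal illustration only, assuming the distillation step itself is delegated to the LLM via structured prompting.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from pathlib import Path

# Hypothetical names for illustration; the paper's own artefact schema may differ.
class ArtefactLayer(Enum):
    EVENT = "event"      # per-session trace (working-memory analogue)
    DAILY = "daily"      # distilled summary of a day's event traces
    MONTHLY = "monthly"  # distilled summary of a month's daily notes
    ANNUAL = "annual"    # long-term consolidation layer

@dataclass
class MemoryArtefact:
    layer: ArtefactLayer
    timestamp: str                 # mandatory temporal anchor (ISO 8601)
    content: str                   # distilled memory trace
    sources: list[str] = field(default_factory=list)  # links to lower-layer notes

def consolidate(lower: list[MemoryArtefact], layer: ArtefactLayer,
                summarise=lambda texts: " / ".join(texts)) -> MemoryArtefact:
    """Distil a batch of lower-layer artefacts into one higher-layer artefact.

    In BTMA the distillation would be performed by the LLM through structured
    prompting; here a trivial string summariser stands in for it.
    """
    return MemoryArtefact(
        layer=layer,
        timestamp=datetime.now(timezone.utc).isoformat(timespec="seconds"),
        content=summarise([a.content for a in lower]),
        sources=[a.timestamp for a in lower],
    )

def write_note(artefact: MemoryArtefact, vault: Path) -> Path:
    """Persist an artefact as an Obsidian-compatible Markdown note with YAML front matter."""
    note = vault / artefact.layer.value / f"{artefact.timestamp[:10]}.md"
    note.parent.mkdir(parents=True, exist_ok=True)
    links = "\n".join(f"- [[{s}]]" for s in artefact.sources)
    note.write_text(
        f"---\nlayer: {artefact.layer.value}\ntimestamp: {artefact.timestamp}\n---\n\n"
        f"{artefact.content}\n\n## Sources\n{links}\n",
        encoding="utf-8",
    )
    return note

if __name__ == "__main__":
    events = [
        MemoryArtefact(ArtefactLayer.EVENT, "2026-02-28T09:00:00+00:00",
                       "Discussed experiment design for the longitudinal study."),
        MemoryArtefact(ArtefactLayer.EVENT, "2026-02-28T17:30:00+00:00",
                       "Revised the consolidation prompt after user feedback."),
    ]
    daily = consolidate(events, ArtefactLayer.DAILY)
    print(write_note(daily, Path("vault")))
```

Keeping each layer as plain Markdown files in a user-controlled vault is what makes the approach platform-agnostic: no model weights are touched, and any client able to read and write local notes can participate in the protocol.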
Additional details
Dates
- Created: 2026-02-28
- Submitted: 2026-03-09 (uploaded on Zenodo)
- Updated: 2026-03-18 (uploaded on Zenodo)