Persistent Mind Model (PMM) v1.2 — An Enhanced, Deterministic, Event-Sourced Cognitive Architecture for Auditable, Self-Modeling AI Agents
Description
A deterministic, event‑sourced cognitive architecture for reproducible AI identity and self‑modeling.
The Persistent Mind Model (PMM) is a Python‑based, model‑agnostic runtime for event‑sourced AI cognition.
Every cognitive operation—reflections, commitments, policy updates, summaries, retrieval decisions, and observations—is logged as an immutable ledger event, enabling deterministic replay, cross‑model continuity, and fully auditable identity.
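The append-only, hash-chained ledger idea can be sketched in a few lines of Python. This is a minimal illustration, not PMM's actual implementation; the `Ledger` class, event field names (`kind`, `payload`, `prev`, `hash`), and event kinds are hypothetical:

```python
import hashlib
import json

class Ledger:
    """Append-only event log; each event is hash-chained to its predecessor,
    so any tampering with history changes every downstream hash."""

    def __init__(self):
        self.events = []

    def append(self, kind, payload):
        prev_hash = self.events[-1]["hash"] if self.events else "genesis"
        body = {"kind": kind, "payload": payload, "prev": prev_hash}
        # Canonical JSON (sorted keys) keeps the digest deterministic.
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        event = {**body, "hash": digest}
        self.events.append(event)
        return event

ledger = Ledger()
ledger.append("reflection", {"text": "noted a gap in planning"})
ledger.append("commitment", {"text": "review open plans daily"})
assert ledger.events[1]["prev"] == ledger.events[0]["hash"]
```

Because every event carries its predecessor's hash, replaying or auditing the chain detects any out-of-band edit to past events.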
PMM v1.2 extends the original architecture into a more complete self‑modeling substrate, showing that large language models can support:
- Stable, replayable identity across runs and model providers
- Mechanistic self‑modeling grounded in events, not hidden state
- Autonomous reflection loops whose kernel‑generated reflections are fully ledger‑derived (no hallucinated “delta” events)
- Ontology‑aligned self‑schemas that remain consistent under replay
- Idle stability in the included long‑run tests (no drift, no oscillation, no phantom commitments)
- Transparent cognition, with all reasoning steps represented explicitly
Unlike typical agent frameworks, PMM does not rely on ephemeral memory or opaque “internal state.”
Identity and behavior are functions of the ledger itself:
mind(E) = replay(E), where E is the event sequence.
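The mind(E) = replay(E) relation amounts to state being a pure function of the event sequence. A minimal sketch, assuming hypothetical event kinds (`identity_claim`, `commitment_open`, `commitment_close`) and field names not taken from PMM's real schema:

```python
def replay(events):
    """Reconstruct agent state purely from the event sequence.

    Pure and deterministic: the same event list always yields the same
    state, so identity survives restarts and model-backend swaps."""
    state = {"identity": {}, "open_commitments": []}
    for ev in events:
        if ev["kind"] == "identity_claim":
            state["identity"][ev["key"]] = ev["value"]
        elif ev["kind"] == "commitment_open":
            state["open_commitments"].append(ev["cid"])
        elif ev["kind"] == "commitment_close":
            state["open_commitments"].remove(ev["cid"])
    return state

E = [
    {"kind": "identity_claim", "key": "name", "value": "Echo"},
    {"kind": "commitment_open", "cid": "c1"},
    {"kind": "commitment_open", "cid": "c2"},
    {"kind": "commitment_close", "cid": "c1"},
]
mind = replay(E)
assert mind == replay(E)  # replay is a pure function of E
```

No hidden state exists outside `E`: two processes holding the same ledger reconstruct byte-identical minds.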
Key Components
- Deterministic Autonomy Loop — periodic self‑reflection, policy enforcement, consistency checks, and maintenance tasks (embeddings, retrieval verification, checkpoints, CTL maintenance).
- Recursive Self‑Model (RSM) — behavioral telemetry over time: tendencies, knowledge gaps, interaction patterns, explainable drift, and identity trends derived solely from events.
- MemeGraph subsystem — causal graph over user/assistant messages, commitments, closures, reflections, and summaries; threads and local subgraphs are exposed both to humans (CLI) and the model itself.
- Concept Token Layer (CTL) — symbolic concept graph built from ledger events (concept definitions, bindings, relations) that ties commitments, metrics, and governance threads to stable concept tokens; maintained deterministically by runtime and autonomy events, with no static hard‑coded ontology.
- Stable identity reconstruction — the agent reconstitutes its “self” deterministically from ledger state alone, including open commitments, self‑model, and long‑term memory (via replay and lifetime memory chunks).
- Policy‑enforced writes — sensitive kinds (config, embeddings, checkpoints, retrieval provenance) are guarded by a ledger‑backed policy; forbidden sources (e.g., source="cli" for certain kinds) trigger explicit violation events rather than silent failure.
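The policy-enforced-write behavior described above can be sketched as a guard that records a violation event rather than failing silently. The `FORBIDDEN` table, function name, and event fields here are illustrative assumptions, not PMM's actual policy schema:

```python
# Hypothetical policy table: (event kind, source) pairs that may not be written.
FORBIDDEN = {("config", "cli"), ("embeddings", "cli")}

def guarded_append(ledger, kind, payload, source):
    """Append an event unless policy forbids this (kind, source) pair;
    forbidden writes produce an explicit policy_violation event instead."""
    if (kind, source) in FORBIDDEN:
        ledger.append({"kind": "policy_violation",
                       "attempted": kind, "source": source})
        return False
    ledger.append({"kind": kind, "payload": payload, "source": source})
    return True

ledger = []
guarded_append(ledger, "config", {"model": "gpt-4"}, source="cli")      # blocked
guarded_append(ledger, "reflection", {"text": "ok"}, source="kernel")   # allowed
```

Recording the refusal as a first-class event keeps the audit trail complete: a replayed ledger shows not only what happened, but what was attempted and denied.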
What’s New in v1.2
- Introduction of a richer, first‑class self‑schema: identity, tendencies, epistemic norms, and internal goals surfaced through the RSM, reflections, and summary updates.
- Strengthened ontology ↔ self‑model alignment on autonomous ticks via CTL bindings and structured ontological meditations.
- Demonstration of emergent “personality” from ledger‑sourced identity claims and commitments, replayable across model backends.
- Expanded test suite for coherence, meta‑reflection, stability metrics, idempotent replay, and deterministic retrieval (vector + CTL + MemeGraph).
- Example long‑run session logs (hundreds of autonomous reflections in the included Echo run) with no contradictory ledger‑level updates.
- Cleaned and expanded architecture docs (RSM, CTL, MemeGraph, autonomy, learning/optimization loops), updated ontology materials, and enhanced export/diagnostic tooling.
Included in This Release
- Source code — runtime loop; core projections (EventLog, Mirror, MemeGraph, ConceptGraph); autonomy kernel; learning and meta‑learning subsystems; stability and coherence monitors; deterministic vector retrieval; adapters (OpenAI, Ollama).
- Documentation — architecture, epistemology, mechanisms, and comparative analysis vs. contemporary agent frameworks.
- PMM License v1.0 — dual license (non‑commercial research free) with explicit prior‑art disclosure and patent‑defensive posture.
- Full test suite — determinism, replay, autonomy, CTL, stability, coherence, diagnostics, and retrieval behavior.
- Example ledgers and telemetry — Echo run and associated exports (readable transcript, telemetry, compressed ledger).
- Research‑ready archive — multi‑thousand‑event archive demonstrating autonomous identity stability and reproducible self‑model evolution.
Purpose
PMM is released as open prior art to prevent enclosure and to enable transparent research in AI cognition, alignment, and agency.
It provides a small, deterministic substrate capable of running structured, self‑modeling artificial agents that behave predictably and reproducibly across model providers and deployments.
GitHub: https://github.com/scottonanski/persistent-mind-model-v1.0
Whitepaper: https://github.com/scottonanski/persistent-mind-model-v1.0/blob/main/docs/white-paper.md
Files
- persistent-mind-model-v1.2-main.zip (1.5 MB)
  md5: eae1cf3664f7662ccfe4a3f352fed5df
Additional details
Dates
- Created: 2025-08-09
- Updated: 2025-11-09
Software
- Repository URL: https://github.com/scottonanski/persistent-mind-model-v1.0
- Programming language: Python
- Development Status: Active