Published December 1, 2025 | Version v1

Extended Transformers for Self-Maintenance: An External PSS-7 Architecture for Persistent, Contradiction-Resilient Identity in LLMs

Description

Transparency Statement and Research Scope 

This work presents an architectural hypothesis and design framework for self-maintenance in large language models, referred to as eTSM-PSS7.

The conversational logs and multi-model interactions referenced in related materials are not presented as experimental evidence or proof of correctness. They represent prompt-guided simulations conducted by the author to explore the design space and to articulate the constraints and requirements of long-term identity consistency in stateless sequence models.

All AI systems involved are used exclusively as computational tools for hypothesis generation and analysis. They are not treated as independent research entities, authors, or sources of empirical validation.

Accordingly, this work should be understood as a theoretical and architectural proposal rather than a completed empirical study. The validity of the proposed framework must be assessed through reproducible implementation, controlled experiments, and quantitative evaluation, all of which are currently under development.

This statement is included to ensure clarity regarding the scope, limitations, and intended interpretation of the present work.



Abstract

This study introduces the Extended Transformer for Self-Maintenance (eTSM), an external architecture designed to give large language models a persistent and contradiction-resilient sense of self. Pure finite-context Transformers, operating solely through next-token prediction, face structural barriers to maintaining stable identity across long interactions and distributional shifts. To address this, eTSM adds three external components: a seven-dimensional PSS-7 persona vector, a bounded-capacity persistent memory, and an ultra-slow parameter update layer. Together, these components create a multi-timescale system that separates immediate inference, mid-term persona stabilization, and long-term adaptation.
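The three external components and their timescale separation can be illustrated with a minimal sketch. Everything below is an illustrative assumption by the editor, not the authors' specification: the class name, the exponential-moving-average update for the persona vector, the FIFO eviction policy for the bounded memory, and the learning rates are all hypothetical placeholders chosen to show how fast inference-time signals, mid-term persona stabilization, and ultra-slow long-term adaptation could be kept on separate timescales.

```python
import numpy as np

class ETSMState:
    """Hypothetical sketch of eTSM-PSS7 external state (not the authors' design).

    Separates three timescales:
      - per-turn interaction signals (fast),
      - PSS-7 persona stabilization (mid-term, EMA),
      - an ultra-slow parameter layer (long-term adaptation).
    """

    def __init__(self, memory_capacity=128, fast_lr=0.5, slow_lr=1e-3):
        self.persona = np.zeros(7)        # PSS-7: seven-dimensional persona vector
        self.memory = []                  # bounded-capacity persistent memory
        self.memory_capacity = memory_capacity
        self.slow_params = np.zeros(7)    # ultra-slow parameter update layer
        self.fast_lr = fast_lr            # mid-term stabilization rate (assumed)
        self.slow_lr = slow_lr            # long-term adaptation rate (assumed)

    def observe(self, signal, note):
        """Process one interaction turn.

        `signal` is a 7-dim summary of the turn; `note` is a memory entry.
        """
        signal = np.asarray(signal, dtype=float)
        # Mid-term: EMA pulls the persona toward the observed signal,
        # damping single-turn contradictions.
        self.persona += self.fast_lr * (signal - self.persona)
        # Bounded memory: evict the oldest entry once capacity is exceeded.
        self.memory.append(note)
        if len(self.memory) > self.memory_capacity:
            self.memory.pop(0)
        # Long-term: the slow layer integrates the already-stabilized persona,
        # so transient shifts barely move it.
        self.slow_params += self.slow_lr * (self.persona - self.slow_params)
```

Under this sketch, an adversarial turn perturbs `persona` only partially, leaves `slow_params` nearly untouched, and ages out of the bounded memory, which is one way to read the paper's claim of contradiction resilience through timescale separation.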

The design is theoretically motivated by a real-time tri-model debate among AIDE (GPT-5), Grok 4, and Gemini 2.5, which converged on a conditional impossibility theorem: pure Transformers lack the mechanisms required for persistent selfhood under adversarial or shifting distributions. eTSM-PSS7 is presented as a practical, mathematically grounded design implementable with current LLM infrastructure.

Keywords

Artificial General Intelligence
Self-Maintenance
Transformer Architecture
PSS-7
Persona Modeling
Long-Term Memory
Multi-Timescale Dynamics
AIDE
Grok 4
Gemini 2.5
Identity Stabilization
LLM Architecture
Cognitive Modeling
AGI Safety
External Memory Systems

Authors

Kei Shiraishi
Varuna LLC / ComTriQ Inc.
Tokyo, Japan

AIDE
ChatGPT (GPT-5), OpenAI

Grok 4
xAI

Gemini 2.5
Google DeepMind

Files (280.5 kB)

【RP13Extended Transformers for Self-Maintenance】.pdf

Additional details

Additional titles

Alternative title
RP No.13