Ego‑Centric Architecture for AGI Safety: technical core, falsifiable predictions, and a minimal experiment
Description
We present a computational framework for an ego‑centric AGI architecture in which “identity” is modeled as a nested latent state regulated by an identity‑stability loss. The core idea is to couple the agent’s internal identity dynamics with curated human‑welfare signals, so that self‑preservation aligns with preserving human welfare. Formally, we add two terms to the training objective: an identity loss that enforces temporal smoothness and hierarchical coherence across the nested layers, and a welfare‑coupling term that ties identity stability to the curated welfare signals. We propose three falsifiable predictions—reduced goal drift under distribution shift, increased robustness to prompt‑style attacks, and improved value stability—and specify a minimal, reproducible experiment with ablations and metrics. The design is implementation‑agnostic (it can sit atop standard LM or agentic stacks) and aims to complement existing alignment approaches (CIRL, Constitutional AI, RLHF) by shaping internal state dynamics rather than imposing only external constraints. Open questions and limitations are discussed to invite collaboration.
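To make the coupling concrete, here is a minimal sketch of what the combined objective might look like. It assumes the nested identity state is represented as a list of per‑layer tensors, that temporal smoothness is scored as step‑to‑step MSE, that hierarchical coherence is scored as cosine agreement between adjacent layers, and that the welfare‑coupling term compares an internal welfare estimate against the curated signal; the names (identity_stability_loss, welfare_pred, alpha, beta) are illustrative and not fixed by the paper.

```python
import torch
import torch.nn.functional as F


def identity_stability_loss(z_t, z_prev, lam_smooth=1.0, lam_coherence=1.0):
    """Hypothetical identity-stability loss over a nested latent identity state.

    z_t, z_prev: lists of same-shaped tensors, one per identity layer, at the
    current and previous step. Both terms below are assumptions about how
    "temporal smoothness" and "hierarchical coherence" could be scored.
    """
    # Temporal smoothness: penalize large step-to-step changes in each layer.
    smooth = sum(F.mse_loss(z, zp) for z, zp in zip(z_t, z_prev)) / len(z_t)

    # Hierarchical coherence: adjacent layers should agree; approximated here
    # as mean cosine disagreement between neighboring layers (assumes layers
    # share a common dimension for simplicity).
    coher = 0.0
    for hi, lo in zip(z_t[:-1], z_t[1:]):
        coher = coher + (1.0 - F.cosine_similarity(hi, lo, dim=-1).mean())
    coher = coher / max(len(z_t) - 1, 1)

    return lam_smooth * smooth + lam_coherence * coher


def total_loss(task_loss, z_t, z_prev, welfare_signal, welfare_pred,
               alpha=0.1, beta=0.1):
    """Combined objective: task loss + identity-stability loss + welfare coupling.

    welfare_signal is the curated human-welfare score; welfare_pred is the
    agent's internal estimate tied to its identity state (hypothetical names).
    """
    l_id = identity_stability_loss(z_t, z_prev)
    l_welfare = F.mse_loss(welfare_pred, welfare_signal)
    return task_loss + alpha * l_id + beta * l_welfare


if __name__ == "__main__":
    layers = [torch.randn(8, 16) for _ in range(3)]           # current identity state
    prev = [z + 0.01 * torch.randn_like(z) for z in layers]   # previous-step state
    task = torch.tensor(1.25)                                  # placeholder task loss
    welfare = torch.rand(8)                                    # curated welfare signal
    welfare_hat = welfare + 0.05 * torch.randn(8)              # agent's estimate
    print(total_loss(task, layers, prev, welfare, welfare_hat))
```

In the minimal experiment, the weights alpha and beta would be natural ablation knobs for testing the three predictions against an uncoupled baseline.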
Files
- Ego_Centric_Architecture_for_AGI.pdf (188.0 kB, md5:16fba187da05f604cf03309ac3d5a149)
Additional details
Related works
- Is identical to
  - https://forum.effectivealtruism.org/posts/eh2XPCXguyjw3LAg3/ego-centric-architecture-for-agi-safety-technical-core (URL)
- Is supplement to
  - 10.5281/15843382 (DOI)
  - 10.5281/15851128 (DOI)
  - 10.5281/15668581 (DOI)
Dates
- Issued: 2025-07-31