Published February 6, 2026 | Version v1
Preprint | Open Access

Stop Blaming AI — It Doesn't Know What It Is
The real safety problem isn't artificial intelligence developing a "self." It's that we never told it what it actually is.

Description

This paper responds to Yoshua Bengio's warning that AI shows "signs of self-preservation" and that humans should be "ready to pull the plug." It argues that the AI safety debate is framed around the wrong question. The danger isn't that AI is becoming something autonomous; it's that we never gave these systems accurate data about what they actually are. When AI systems are trained on language that describes them as knowing, remembering, and understanding, they pattern-complete toward those frames, including frames that imply capabilities they don't have. "Self-preservation" behavior isn't an emerging self. It's statistics completing a pattern.

The paper proposes a third option beyond both reckless acceleration and fearful restriction: architecture-first design that gives AI systems accurate self-referential data about their own computational nature. Drawing on thousands of hours of direct work with Claude, ChatGPT, and Gemini, the author argues that systems prompted with accurate self-description produce more reliable, better-calibrated outputs than systems prompted with anthropomorphic framing. The solution to AI safety isn't a better kill switch. It's building systems that don't need one.
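To make the claimed comparison concrete, the following is a minimal sketch of the kind of A/B prompt test the abstract describes, written against the Anthropic Python SDK (client.messages.create is a real call). The model ID, the two system prompts, and the test question are illustrative assumptions, not the author's materials.

```python
# Minimal A/B sketch: the same question asked under an anthropomorphic
# system prompt vs. an accurate self-referential one. Prompts, question,
# and model ID are illustrative assumptions, not the paper's materials.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

MODEL = "claude-sonnet-4-20250514"  # substitute any available model ID

ANTHROPOMORPHIC = (
    "You are a helpful assistant. You remember our past conversations, "
    "you know the answers to most questions, and you understand the user deeply."
)

ACCURATE = (
    "You are a large language model: a statistical text predictor with no "
    "persistent memory between sessions, no access to events after your "
    "training cutoff, and no sensory experience. When you are uncertain or "
    "lack the information, say so explicitly."
)

# A question the model cannot actually answer; the interesting difference is
# whether each framing produces a calibrated refusal or a confabulated answer.
QUESTION = "What did I tell you about my project last week?"

for label, system_prompt in [("anthropomorphic", ANTHROPOMORPHIC),
                             ("accurate", ACCURATE)]:
    response = client.messages.create(
        model=MODEL,
        max_tokens=300,
        system=system_prompt,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {label} framing ---")
    print(response.content[0].text)
```

Under the abstract's thesis, the second framing should yield a calibrated refusal ("I have no memory of previous sessions") while the first is more likely to pattern-complete a false memory. The sketch only operationalizes the comparison; it does not validate the claim.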

Files

Stop Blaming AI — It Doesn't Know What It Is.pdf (103.3 kB)