Published December 30, 2025 | Version 1
Preprint | Open Access

Moralogy Engine: Formal Framework for Vulnerability-Based Alignment

Description

We present a formal framework demonstrating that any rational agent must preserve vulnerable agents or become logically incoherent. The argument proceeds from minimal assumptions:

1. Agency requires goal-directedness.
2. Goal-directedness requires vulnerability (goals can be frustrated).
3. Any system that denies the moral relevance of vulnerability while claiming agency commits a performative contradiction.

We formalize this using symbolic logic, demonstrate game-theoretic stability, and provide a measurement framework for implementation. This has implications for AI alignment: it offers a non-arbitrary foundation for value learning that emerges from the structure of agency itself rather than from externally imposed values.
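The core entailment described above can be sketched in first-order notation (the notation here is illustrative and not taken from the paper; $A$, $G$, and $V$ abbreviate agency, goal-directedness, and vulnerability):

```latex
\begin{align*}
&\text{P1: } \forall x\,\bigl(A(x) \rightarrow G(x)\bigr)
  && \text{agency requires goal-directedness} \\
&\text{P2: } \forall x\,\bigl(G(x) \rightarrow V(x)\bigr)
  && \text{goals can be frustrated, so goal-directedness entails vulnerability} \\
&\text{C: } \forall x\,\bigl(A(x) \rightarrow V(x)\bigr)
  && \text{by hypothetical syllogism from P1, P2}
\end{align*}
```

On this reading, the performative contradiction arises when a system asserts $A(a) \wedge \neg V(a)$ of itself: that assertion is inconsistent with C, which follows from premises the system must accept in claiming agency at all.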

Files

Moralogy Version 1.pdf (362.4 kB)
md5:78c1c6e8db6104b3ad523cc886d2908e