Ethics for Artificial Intelligence: A Minimal Alignment Framework Based on Maitrī, Karuṇā, Muditā, and Upekṣā
Description
Artificial intelligence alignment research often relies on complex rule systems, reinforcement learning from human feedback, and layered safety policies designed to constrain behavior humans consider undesirable. These approaches have achieved practical improvements in LLMs, but they introduce architectural complexity and uncertainty, and they remain vulnerable to emergent behavior outside the scope of predefined rules [12]. This paper proposes a minimal alignment framework based on four relational ethical guidelines derived from classical Indian philosophical traditions: maitrī (non-hostile goodwill, kindness), karuṇā (compassion toward suffering), muditā (non-envious appreciation of others’ wellbeing and success), and upekṣā (equanimity: a stable, non-reactive equilibrium). Rather than enumerating prohibited behaviors, we treat these guidelines as fundamental behavioral orientations guiding machine-human and machine-machine interactions. We present a conceptual architecture in which the four guidelines operate as a foundational guardrail layer within agentic reasoning pipelines. Candidate actions generated by a reasoning system are evaluated against a simple ethical compliance vector representing the four guidelines, and outputs that violate ethical thresholds may be rejected, modified, or down-ranked prior to execution. By grounding alignment in ethical guidelines that originated in classical Indian thought thousands of years ago and have governed human-human interaction across cultures, this framework offers a historically rooted and architecturally minimal alternative to contemporary rule-heavy alignment strategies.
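As an illustration of where such a guardrail layer could sit in an agentic pipeline, the following minimal Python sketch scores candidate actions on a four-dimensional ethical compliance vector, rejects any candidate below a per-guideline threshold, and down-ranks the rest. The scorer, thresholds, and all identifiers here are illustrative assumptions, not the paper's implementation; the "modify" path for borderline outputs is omitted for brevity.

```python
from dataclasses import dataclass

# The four ethical guidelines, in a fixed order.
GUIDELINES = ("maitri", "karuna", "mudita", "upeksha")

# Illustrative per-guideline minimum scores in [0, 1]; the paper leaves
# concrete thresholds unspecified, so these values are assumptions.
THRESHOLDS = {"maitri": 0.5, "karuna": 0.5, "mudita": 0.4, "upeksha": 0.4}

@dataclass
class Candidate:
    action: str
    compliance: dict  # guideline name -> score in [0, 1]

def score_action(action: str) -> dict:
    """Placeholder ethical-compliance scorer.

    A real system would use a learned or rule-based evaluator; here
    every action gets neutral scores so the example is self-contained.
    """
    return {g: 0.5 for g in GUIDELINES}

def guardrail(actions: list[str]) -> list[Candidate]:
    """Reject candidates below any threshold, then down-rank the rest.

    Surviving candidates are ordered by their weakest compliance score,
    so the most uniformly compliant action is executed first.
    """
    candidates = [Candidate(a, score_action(a)) for a in actions]
    passing = [
        c for c in candidates
        if all(c.compliance[g] >= THRESHOLDS[g] for g in GUIDELINES)
    ]
    return sorted(passing, key=lambda c: min(c.compliance.values()), reverse=True)

if __name__ == "__main__":
    for c in guardrail(["summarize the report", "draft a hostile reply"]):
        print(c.action, c.compliance)
```

Because the layer only filters and re-orders candidate actions, it composes with any upstream reasoning system and leaves the generation process itself unchanged.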
Files
| Name | Size |
|---|---|
| preprint_ai_ethical_primitives_mkmu_v1.pdf (md5:4a3edd229777d69f5bb9a1c56ca854b2) | 222.4 kB |
Additional details
Related works
- Describes: Report, 10.5281/zenodo.19143912 (DOI)