Published February 10, 2026 | Version 1.0 | Preprint | Open
Knife Alignment: Why Moralizing AI Is as Absurd as Aligning Knives (Technical Note / Preprint)
Authors/Creators
Egorychev, K.
Description
This is the canonical technical version of “Knife Alignment”. It argues that text-level “ethics” is not equivalent to action-level safety, and that many observed failures are boundary/permissioning failures (“context collapse”) rather than “moral” failures of the tool.
The Russian PDF is a companion translation.
Suggested citation: Egorychev, K. (2026). Knife Alignment: Why Moralizing AI Is as Absurd as Aligning Knives (Technical Note / Preprint). Zenodo. https://doi.org/10.5281/zenodo.18591626
Discussion links will be added in subsequent versions.
Files (123.8 kB)

| Name | MD5 | Size |
|---|---|---|
| [EN] Knife Alignment - Why Moralizing AI Is as Absurd as Aligning Knives.pdf | md5:a92af8be758d49c764dbba85b0f92d2b | 56.7 kB |
| (second file, filename not captured) | md5:5718fb67a6321fb061c39340fb082e08 | 67.2 kB |
Additional details
Dates
- Issued: 2026-02-10