The Algorithmic Brink: Ethical Governance and Escalation Risk in AI-Enabled Nuclear Systems across Space and Conflict Domains
Abstract
The growing use of artificial intelligence (AI) in nuclear command, control, and communications (NC3), especially in space-based security systems, marks a significant shift in how strategic decisions may be made. AI systems can improve information processing and operational speed, but they also create new risks that may undermine strategic stability. This study explores the ethical, technical, and strategic challenges that arise when nuclear decision-making relies on autonomous or semi-autonomous systems. Drawing on deterrence theory, research on AI safety, studies of human–machine interaction, and existing space law, the paper identifies three primary sources of risk: shortened decision timelines, ambiguous responsibility for decisions, and weaknesses in sensor reliability. To demonstrate how these factors interact, the study presents a stochastic escalation model showing how fast-paced machine interactions can raise the risk of unintended conflict in uncertain conditions. The paper concludes by outlining a Human-Centric Heuristic (HCH) governance model that emphasizes sustained human control while still supporting timely operational decisions.
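The abstract's stochastic escalation model is not specified here, but its core claim — that compressed decision timelines compound small per-interaction error rates into substantial escalation risk — can be illustrated with a minimal Monte Carlo sketch. Everything below is hypothetical: the function name, the misperception and de-escalation probabilities, and the treatment of each machine-speed interaction as an independent Bernoulli trial are illustrative assumptions, not the paper's actual model.

```python
import random

def escalation_probability(rounds, p_misperceive, p_deescalate,
                           trials=20000, seed=0):
    """Monte Carlo estimate of the chance a crisis escalates.

    Illustrative model only: each round, an automated system misreads
    sensor data with probability p_misperceive; a human review step
    catches and de-escalates a misperception with probability
    p_deescalate. An unchecked misperception counts as unintended
    escalation, ending the trial.
    """
    rng = random.Random(seed)
    escalations = 0
    for _ in range(trials):
        for _ in range(rounds):
            if rng.random() < p_misperceive and rng.random() >= p_deescalate:
                escalations += 1
                break
    return escalations / trials

# Compressed timelines mean more machine-speed interaction rounds per
# crisis window, so even a small per-round error rate compounds.
for rounds in (5, 20, 80):
    p = escalation_probability(rounds, p_misperceive=0.01, p_deescalate=0.9)
    print(f"rounds={rounds:3d}  P(escalation) ~ {p:.3f}")
```

With these toy numbers the effective per-round failure rate is 0.1%, yet the cumulative escalation probability grows roughly linearly with the number of rounds, which is the intuition behind the shortened-decision-timeline risk the abstract identifies.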
Files

IJMSRT26FEB040.pdf (282.0 kB)
md5:06293879b4baac8089e42f4c9b3d9ad5