Published March 18, 2026 | Version v1
Preprint · Open Access

Safety-Critical Decision Making for Autonomous Machines: A Probabilistic Framework with Uncertainty-Based Human Handoff

  • Epalea

Description

Context. Autonomous machines—from self-driving vehicles to surgical robots to warehouse automation systems—require reliable, calibrated uncertainty quantification for safety-critical decisions across multi-sensor fusion scenarios.

Problem. Existing aggregation methods (voting, averaging, Bayesian fusion) either lack proper calibration or fail to decompose uncertainty into actionable epistemic and aleatoric components, limiting their applicability to life-critical robotic systems.

Solution. We introduce Latent Posterior Factors (LPF) for autonomous machine safety, a probabilistic framework that provides calibrated predictions with theoretically grounded uncertainty decomposition. LPF converts multi-sensor evidence into latent posterior distributions, then aggregates them through structured probabilistic reasoning. Critically, our framework generalises across autonomous systems: autonomous vehicles (AV), surgical robots, mobile manipulation robots, and aerial drones.
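
The aggregation step described above can be illustrated with a standard product-of-Gaussians fusion rule: each sensor contributes a Gaussian latent posterior, and the fused estimate weights each sensor by its precision. This is a minimal sketch under that assumption; the function name and the Gaussian form are illustrative, not the paper's actual LPF aggregation.

```python
import numpy as np

def fuse_latent_posteriors(means, variances):
    """Precision-weighted fusion of per-sensor Gaussian latent posteriors.

    This is the textbook product-of-Gaussians rule; LPF's structured
    probabilistic aggregation may differ in form.
    """
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    fused_var = 1.0 / precisions.sum()          # fused precision is the sum
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Three sensors (e.g. camera, lidar, radar) observing the same latent quantity;
# the most precise sensor (variance 0.01) dominates the fused estimate.
mu, var = fuse_latent_posteriors([0.9, 1.1, 1.0], [0.04, 0.09, 0.01])
```

Note that the fused variance is always smaller than any individual sensor's variance, which is why fusion alone cannot signal sensor disagreement; that is what the separate epistemic term is for.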

Key Contributions:

  • Novel application of LPF to safety-critical autonomous machine perception.
  • Empirical validation of uncertainty decomposition (Theorem 7, Aliyu 2026) in real-world multi-sensor fusion.
  • Visual demonstration: vehicle successfully stops before an obstacle when epistemic uncertainty exceeds the handoff threshold.
  • Zero missed human detections across all test scenarios.
  • Superior calibration (ECE < 0.05%) compared to baselines.
  • Generalisability: framework applicable to any autonomous machine requiring safe human handoff.
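
The epistemic-triggered handoff listed above can be sketched with the standard entropy decomposition of predictive uncertainty: expected entropy over posterior samples is the aleatoric part, and the gap to the entropy of the mean prediction (the mutual information) is the epistemic part. The threshold value and function names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

HANDOFF_THRESHOLD = 0.05  # illustrative; the paper's threshold is not stated here

def decide(prob_samples, threshold=HANDOFF_THRESHOLD):
    """Decompose predictive uncertainty and decide on human handoff.

    prob_samples: (n_samples, n_classes) class probabilities drawn from
    the posterior. total = aleatoric (expected entropy) + epistemic
    (mutual information between prediction and model).
    """
    p = np.asarray(prob_samples, dtype=float)
    mean_p = p.mean(axis=0)
    total = -(mean_p * np.log(mean_p + 1e-12)).sum()
    aleatoric = -(p * np.log(p + 1e-12)).sum(axis=1).mean()
    epistemic = total - aleatoric
    action = "handoff" if epistemic > threshold else "proceed"
    return action, epistemic, aleatoric

# Agreeing posterior samples -> low epistemic uncertainty -> machine proceeds.
action, _, _ = decide([[0.95, 0.05], [0.94, 0.06]])
# Disagreeing samples (ambiguous obstacle) -> high epistemic uncertainty -> handoff.
action2, _, _ = decide([[0.95, 0.05], [0.10, 0.90]])
```

This matches the demonstration scenario's logic: disagreement among posterior samples (not mere noise) is what triggers the stop-and-handoff behaviour.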

Results. LPF-Learned achieves 100% accuracy with near-perfect calibration (ECE = 0.0004%), while maintaining interpretable uncertainty decomposition for safety auditing. In our demonstration scenario, the LPF-enabled vehicle detects high epistemic uncertainty and safely stops 300 units before the ambiguous obstacle, while baseline methods crash at 50 units. While demonstrated on autonomous vehicle scenarios, this framework extends to surgical robotics (tool occlusion), warehouse robots (human proximity), and aerial systems (flight decision-making).
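
The ECE figures quoted above can be reproduced in principle with the standard equal-width binned estimator (differences between confidence and empirical accuracy, weighted by bin occupancy). This is a generic sketch of that metric, not the paper's evaluation code; note the record quotes ECE as a percentage.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width binned ECE: sum over bins of
    (bin fraction) * |mean confidence - empirical accuracy|."""
    conf = np.asarray(confidences, dtype=float)
    corr = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(conf[mask].mean() - corr[mask].mean())
    return ece

# Well calibrated: 80% confidence, 8 of 10 correct -> ECE 0.
ece = expected_calibration_error([0.8] * 10, [1] * 8 + [0] * 2)
# Overconfident: 90% confidence, 5 of 10 correct -> ECE 0.4.
ece2 = expected_calibration_error([0.9] * 10, [1] * 5 + [0] * 5)
```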

Keywords:
Neuro-symbolic AI, uncertainty quantification, autonomous systems, multi-sensor fusion, probabilistic reasoning, evidence aggregation, calibration (ECE), epistemic uncertainty, aleatoric uncertainty, variational autoencoders (VAE), sum-product networks (SPN), Bayesian inference, safety-critical AI, autonomous vehicles, robotic perception, interpretable AI, machine learning reliability, uncertainty decomposition, decision-making under uncertainty, AI safety

Files

main.pdf (1.1 MB)
md5:dd7a51776f52761bea3ae9af01259e68

Additional details

Related works

Is supplemented by
Preprint: 10.5281/zenodo.19183861 (DOI)
Preprint: 10.5281/zenodo.19184458 (DOI)
Preprint: arXiv:2603.15670 (arXiv)
Preprint: arXiv:2603.15674 (arXiv)

Software

Repository URL
https://github.com/aaaEpalea/epalea.git
Programming language
Python
Development Status
Active