Published October 14, 2024 | Version v1
Journal article | Open Access

Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review

  • 1. Technical University of Darmstadt
  • 2. University of Edinburgh
  • 3. Heriot-Watt University
  • 4. The University of Edinburgh

Description

Artificial Intelligence (AI) shows promise for perception and planning tasks in autonomous driving (AD), owing to its superior performance over conventional methods. However, highly complex AI systems exacerbate the existing challenge of safety assurance in AD. One way to mitigate this challenge is to utilize explainable AI (XAI) techniques. To this end, we present the first comprehensive systematic literature review of explainable methods for safe and trustworthy AD. We begin by analyzing the requirements for AI in the context of AD, focusing on three key aspects: data, model, and agency. We find that XAI is fundamental to meeting these requirements. Based on this, we explain the sources of explanations in AI and describe a taxonomy of XAI. We then identify five key contributions of XAI to safe and trustworthy AI in AD: interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation. Finally, we propose a conceptual modular framework, SafeX, that integrates the reviewed methods, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.

Files

Abstract.pdf (11.7 kB)