Published July 25, 2023 | Version v1
Conference paper | Open Access

Safety and Robustness for Deep Neural Networks: An Automotive Use Case

Affiliations:
1. University of Pisa
2. Austrian Institute of Technology

Description

Current automotive safety standards are cautious about deploying deep neural networks in safety-critical scenarios, owing to concerns regarding robustness to noise, domain drift, and uncertainty quantification. In this paper, we propose a scenario in which a neural network adjusts the automated driving style to reduce user stress. In this scenario, only certain actions are safety-critical, allowing for greater control over the model's behavior. To demonstrate how safety can be addressed, we propose a mechanism based on robustness quantification and a fallback plan. This approach enables the model to minimize user stress under safe conditions while avoiding unsafe actions in uncertain scenarios. By exploring this use case, we hope to inspire discussion around identifying safety-critical scenarios and approaches in which neural networks can be safely utilized. We also see this as a potential contribution to the development of new standards and best practices for the use of AI in safety-critical scenarios. The work presented here is a result of the TEACHING project, a European research project on the safe, secure, and trustworthy use of AI.
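The description outlines the mechanism only at a high level. As a rough illustration, the Python sketch below shows one way a robustness score could gate between the network's suggested driving style and a conservative fallback. All names, the perturbation-based score, and the threshold are assumptions made for illustration, not the implementation from the paper.

```python
import numpy as np

# Hypothetical conservative driving style used as the safe fallback action.
CONSERVATIVE_STYLE = "comfort"

def robustness_score(model, x, noise_std=0.05, n_samples=20):
    """Estimate robustness as prediction agreement under input perturbations.

    The paper's exact quantification method is not reproduced here; this
    sketch perturbs the input with Gaussian noise and measures how often
    the predicted driving style stays the same.
    """
    base = int(np.argmax(model(x)))
    agree = sum(
        int(np.argmax(model(x + np.random.normal(0.0, noise_std, x.shape)))) == base
        for _ in range(n_samples)
    )
    return agree / n_samples

def select_style(model, x, styles, threshold=0.9):
    """Follow the network only when its prediction is judged robust;
    otherwise fall back to a conservative, known-safe driving style."""
    if robustness_score(model, x) >= threshold:
        return styles[int(np.argmax(model(x)))]
    return CONSERVATIVE_STYLE  # fallback plan for uncertain scenarios

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(3, 4))    # toy stand-in for the trained network
    model = lambda x: W @ x
    styles = ["comfort", "normal", "sport"]
    x = rng.normal(size=4)         # stand-in for sensor/stress features
    print(select_style(model, x, styles))
```

In this sketch, only the non-safety-critical stress-minimizing choice is delegated to the network; whenever the agreement score falls below the threshold, the system reverts to the fallback plan rather than acting on an uncertain prediction.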

Files

DECSoS23___AI.pdf (1.2 MB, md5:0820e65537fab57da6038b340ca2bd8f)

Additional details

Funding

TEACHING – A computing toolkit for building efficient autonomous applications leveraging humanistic intelligence (Grant No. 871385)
European Commission