Published September 1, 2020 | Version v1
Report | Open Access

Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring

  • Università della Svizzera italiana

Description

Modern software systems rely on Deep Neural Networks (DNN) when processing complex, unstructured inputs, such as images, videos, natural language texts or audio signals. Given the intractably large size of such input spaces, the intrinsic limitations of learning algorithms and the ambiguity about the expected predictions for some of the inputs, not only is there no guarantee that a DNN's predictions are always correct, but developers must safely assume a low, though not negligible, error probability. A fail-safe Deep Learning based System (DLS) is one equipped to handle DNN faults by means of a supervisor, capable of recognizing predictions that should not be trusted and that should activate a healing procedure bringing the DLS to a safe state.

In this paper, we propose an approach that uses DNN uncertainty estimators to implement such a supervisor. We first discuss the advantages and disadvantages of existing approaches to measuring uncertainty for DNNs and propose novel metrics for the empirical assessment of supervisors that rely on such approaches. We then describe our publicly available tool Uncertainty-Wizard, which allows transparent estimation of uncertainty for regular tf.keras DNNs. Lastly, we discuss a large-scale study conducted on four different subjects to empirically validate the approach, reporting the lessons learned as guidance for software engineers who intend to monitor uncertainty for fail-safe execution of DLSs.
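The supervisor described above can be illustrated with a small self-contained sketch. The snippet below does not use the Uncertainty-Wizard API; it only mimics the underlying idea under the common assumption that uncertainty is estimated from multiple stochastic forward passes (e.g., Monte Carlo dropout): the softmax outputs of the sampled passes are aggregated, an uncertainty score such as the variation ratio is computed, and the prediction is rejected (triggering the healing procedure) when the score exceeds a threshold. All function names and the threshold value are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def mean_softmax(samples):
    """Average the class probabilities over the stochastic forward passes."""
    n, k = len(samples), len(samples[0])
    return [sum(s[c] for s in samples) / n for c in range(k)]

def predictive_entropy(samples):
    """Entropy of the averaged softmax: higher means more uncertain."""
    return -sum(p * math.log(p) for p in mean_softmax(samples) if p > 0)

def variation_ratio(samples):
    """1 minus the fraction of passes that vote for the modal class."""
    votes = Counter(max(range(len(s)), key=s.__getitem__) for s in samples)
    _, modal_count = votes.most_common(1)[0]
    return 1.0 - modal_count / len(samples)

def supervise(samples, threshold=0.2):
    """Return the predicted class, or None when the supervisor rejects
    the prediction and the DLS should move to a safe state."""
    if variation_ratio(samples) > threshold:
        return None  # prediction not trusted: activate healing procedure
    probs = mean_softmax(samples)
    return max(range(len(probs)), key=probs.__getitem__)

# Ten sampled passes that all agree on class 1: accepted by the supervisor.
agree = [[0.1, 0.8, 0.1]] * 10
# Passes that disagree between class 0 and class 1: rejected (returns None).
disagree = [[0.6, 0.3, 0.1]] * 5 + [[0.2, 0.7, 0.1]] * 5
```

With these inputs, `supervise(agree)` returns class 1 (variation ratio 0.0), while `supervise(disagree)` returns `None` (variation ratio 0.5 exceeds the illustrative 0.2 threshold), which is the point where a fail-safe DLS would fall back to its safe state instead of acting on the DNN output.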

Files

TR-Precrime-2020-05.pdf (903.1 kB)
md5:ed486374270224a43e2ebc6cf3cd2a5e

Additional details

Related works

Is obsoleted by
10.1109/ICST49551.2021.00015 (DOI)

Funding

PRECRIME – Self-assessment Oracles for Anticipatory Testing (Grant No. 787703)
European Commission