Report · Open Access

Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring

Michael Weiss; Paolo Tonella

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="" xmlns="" xsi:schemaLocation="">
  <identifier identifierType="DOI">10.5281/zenodo.5055710</identifier>
  <creators>
    <creator>
      <creatorName>Michael Weiss</creatorName>
      <affiliation>Università della Svizzera italiana</affiliation>
    </creator>
    <creator>
      <creatorName>Paolo Tonella</creatorName>
      <affiliation>Università della Svizzera italiana</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Fail-Safe Execution of Deep Learning based Systems through Uncertainty Monitoring</title>
  </titles>
  <dates>
    <date dateType="Issued">2020-09-01</date>
  </dates>
  <resourceType resourceTypeGeneral="Report"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url"></alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsObsoletedBy">10.1109/ICST49551.2021.00015</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.5055709</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Modern software systems rely on Deep Neural Networks (DNN) when processing complex, unstructured inputs, such as images, videos, natural language texts or audio signals. Given the intractably large size of such input spaces, the intrinsic limitations of learning algorithms and the ambiguity about the expected predictions for some of the inputs, not only is there no guarantee that a DNN&amp;#39;s predictions are always correct, but developers must safely assume a low, though non-negligible, error probability. A fail-safe Deep Learning based System (DLS) is one equipped to handle DNN faults by means of a supervisor, capable of recognizing predictions that should not be trusted and of activating a healing procedure that brings the DLS to a safe state.&lt;/p&gt;

&lt;p&gt;In this paper, we propose an approach that uses DNN uncertainty estimators to implement such a supervisor. We first discuss the advantages and disadvantages of existing approaches to measuring uncertainty for DNNs, and propose novel metrics, relying on such approaches, for the empirical assessment of the supervisor. We then describe our publicly available tool Uncertainty-Wizard, which allows transparent estimation of uncertainty for regular tf.keras DNNs. Lastly, we discuss a large-scale study conducted on four different subjects to empirically validate the approach, reporting the lessons learned as guidance for software engineers who intend to monitor uncertainty for fail-safe execution of DLS.&lt;/p&gt;</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/100010661</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/787703/">787703</awardNumber>
      <awardTitle>Self-assessment Oracles for Anticipatory Testing</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
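The supervisor described in the abstract can be illustrated with a minimal sketch: score each prediction with an uncertainty estimator and refuse to trust it above a threshold, triggering the healing procedure instead. The sketch below uses the variation ratio over Monte-Carlo samples (e.g. MC-dropout forward passes) as the estimator; all function names and the threshold value are hypothetical illustrations, not the API of Uncertainty-Wizard or the exact method of the paper.

```python
import numpy as np

def variation_ratio(mc_predictions):
    """Uncertainty per input as 1 - (relative frequency of the modal
    predicted class) across Monte-Carlo samples.
    mc_predictions: array of shape (n_samples, n_inputs, n_classes)."""
    labels = mc_predictions.argmax(axis=-1)   # (n_samples, n_inputs)
    n_samples = labels.shape[0]
    uncertainties = []
    for per_input in labels.T:                # one row of samples per input
        _, counts = np.unique(per_input, return_counts=True)
        uncertainties.append(1.0 - counts.max() / n_samples)
    return np.array(uncertainties)

def supervise(mc_predictions, threshold=0.3):
    """Return (predicted_class, trusted) per input. Inputs flagged as
    untrusted should trigger the DLS healing procedure rather than
    act on the DNN prediction."""
    mean_pred = mc_predictions.mean(axis=0)   # average over MC samples
    uncertainty = variation_ratio(mc_predictions)
    return mean_pred.argmax(axis=-1), uncertainty <= threshold
```

For an input on which the stochastic forward passes agree, the variation ratio is 0 and the prediction is trusted; when the passes disagree, the ratio rises and the supervisor hands control to the healing procedure.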
                    All versions   This version
Views                         27             27
Downloads                     28             28
Data volume              25.3 MB        25.3 MB
Unique views                  23             23
Unique downloads              25             25

