Software Artifact for Exposing Previously Undetectable Faults in Deep Neural Networks
Description
A Docker image containing the software (including dependencies) for the ISSTA 2021 paper "Exposing Previously Undetectable Faults in Deep Neural Networks". Please refer to the README for details of the artifact, and to the conference paper for details of the research.
Paper abstract:
Existing methods for testing DNNs solve the oracle problem by constraining the raw features (e.g. image pixel values) to be within a small distance of a dataset example for which the desired DNN output is known. But this limits the kinds of faults these approaches are able to detect. In this paper, we introduce a novel DNN testing method that is able to find faults in DNNs that other methods cannot. The crux is that, by leveraging generative machine learning, we can generate fresh test cases that vary in their high-level features (for images, these include object shape, location, texture, and colour). We demonstrate that our approach is capable of detecting deliberately-injected faults as well as new faults in state-of-the-art DNNs, and that in both cases, existing methods are unable to find these faults.
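To make the abstract's core idea concrete, the sketch below illustrates one way a generative-model-based test loop can sidestep the oracle problem: if inputs are sampled from a class-conditional generative model, the intended label of each generated input is known by construction, so any disagreement with the DNN under test is a candidate fault. This is only an illustrative reconstruction, not the paper's implementation; the class names (ConditionalGenerator, ClassifierUnderTest), the helper find_candidate_faults, and the untrained placeholder networks are all assumptions made for the example. Consult the artifact's README and the paper for the actual method.

```python
# Illustrative sketch only (NOT the artifact's code): testing a classifier by
# sampling a class-conditional generative model, so the intended label of each
# fresh test input is known by construction.
import torch
import torch.nn as nn

LATENT_DIM = 64
NUM_CLASSES = 10

class ConditionalGenerator(nn.Module):
    """Hypothetical stand-in for a trained conditional generator (e.g. a cGAN)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, LATENT_DIM)
        self.net = nn.Sequential(
            nn.Linear(2 * LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 28 * 28), nn.Tanh(),
        )
    def forward(self, z, labels):
        # Condition the latent code on the desired class label.
        x = torch.cat([z, self.embed(labels)], dim=1)
        return self.net(x).view(-1, 1, 28, 28)

class ClassifierUnderTest(nn.Module):
    """Hypothetical DNN under test."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 128),
            nn.ReLU(), nn.Linear(128, NUM_CLASSES),
        )
    def forward(self, x):
        return self.net(x)

def find_candidate_faults(generator, classifier, n=256):
    """Generate fresh inputs with known intended labels; return disagreements."""
    z = torch.randn(n, LATENT_DIM)
    intended = torch.randint(0, NUM_CLASSES, (n,))
    with torch.no_grad():
        images = generator(z, intended)
        predicted = classifier(images).argmax(dim=1)
    mismatch = predicted != intended
    return images[mismatch], intended[mismatch], predicted[mismatch]

if __name__ == "__main__":
    gen, clf = ConditionalGenerator(), ClassifierUnderTest()  # untrained stubs
    imgs, want, got = find_candidate_faults(gen, clf)
    print(f"{len(imgs)} candidate fault-revealing inputs")
```

Because the generator's latent space varies high-level features (shape, location, texture, colour) rather than perturbing pixels of an existing example, such a loop can surface faults that nearest-example perturbation methods would never reach.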
Files (3.8 GB)

Name | Size | MD5 checksum
---|---|---
Docker image | 3.8 GB | 74809f58aa666cd19c10d6f89877c9a0
README.md | 6.2 kB | a2821ee712bc9d8173168cf480c3c73d