ASPLOS 2023 Artifact for "NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers"
Creators
1. University of Illinois at Urbana-Champaign
2. New York University
3. Northeastern University
Description
This is the artifact for the ASPLOS 2023 paper "NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers".
Deep-learning (DL) compilers such as TVM and TensorRT are increasingly used to optimize deep neural network (DNN) models to meet performance, resource-utilization, and other requirements. Bugs in these compilers can produce optimized models whose semantics differ from the original models and that compute incorrect results, impacting the correctness of downstream applications. However, finding bugs in these compilers is challenging due to their complexity. In this work, we propose a new fuzz-testing approach for finding bugs in deep-learning compilers. Our core approach uses (i) lightweight operator specifications to generate diverse yet valid DNN models, allowing us to exercise a large part of the compiler's transformation logic; (ii) a gradient-based search process for finding model inputs that avoid any floating-point exceptional values during model execution, reducing the chance of missed bugs or false alarms; and (iii) differential testing to identify bugs. We implemented this approach in NNSmith, which has found 65 new bugs over the last seven months in TVM, TensorRT, ONNXRuntime, and PyTorch. Of these, 55 have been confirmed and 48 have been fixed by the project maintainers.
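The sketch below illustrates ideas (ii) and (iii) from the abstract in miniature; it is not NNSmith's code. A TorchScript trace stands in for the compiler backends actually tested (TVM, TensorRT, ONNXRuntime), and the toy model, input, and tolerances are illustrative assumptions.

```python
# Hedged sketch, not NNSmith's implementation: run one model under two
# execution paths, reject inputs that produce floating-point exceptional
# values, and flag numeric divergence between the paths as a potential bug.
import torch

# Toy model standing in for a generated DNN (assumption, for illustration).
model = torch.nn.Sequential(
    torch.nn.Linear(8, 4),
    torch.nn.Tanh(),
).eval()

x = torch.randn(2, 8)

with torch.no_grad():
    ref = model(x)                        # reference semantics: eager PyTorch
    compiled = torch.jit.trace(model, x)  # stand-in for a DL compiler backend
    out = compiled(x)

# (ii) discard inputs yielding NaN/Inf, which would mask bugs or raise
# false alarms; NNSmith instead searches for such inputs with gradients.
assert torch.isfinite(ref).all(), "exceptional values: pick another input"

# (iii) differential testing: divergence beyond tolerance => potential bug.
if not torch.allclose(ref, out, rtol=1e-4, atol=1e-5):
    print("potential miscompilation: backends disagree")
else:
    print("backends agree within tolerance")
```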
For more information, please check the artifact documentation (http://nnsmith-asplos.rtfd.io/) and NNSmith's OSS development repository (https://github.com/ise-uiuc/nnsmith).
Files (4.8 GB)

| MD5 | Size |
|---|---|
| 94091f831a77efa16afdab32b068246c | 4.8 GB |