Conference paper Open Access

ASPLOS 2023 Artifact for "NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers"

Liu, Jiawei; Lin, Jinkun; Ruffy, Fabian; Tan, Cheng; Li, Jinyang; Panda, Aurojit; Zhang, Lingming

This is the artifact for the ASPLOS 2023 paper "NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers".

Deep-learning (DL) compilers such as TVM and TensorRT are increasingly used to optimize deep neural network (DNN) models to meet performance, resource utilization, and other requirements. Bugs in these compilers can produce optimized models whose semantics differ from the original models, yielding incorrect results that impact the correctness of downstream applications. However, finding bugs in these compilers is challenging due to their complexity. In this work, we propose a new fuzz-testing approach for finding bugs in deep-learning compilers. Our core approach uses (i) lightweight operator specifications to generate diverse yet valid DNN models, allowing us to exercise a large part of the compiler’s transformation logic; (ii) a gradient-based search process for finding model inputs that avoid any floating-point exceptional values during model execution, reducing the chance of missed bugs or false alarms; and (iii) differential testing to identify bugs. We implemented this approach in NNSmith, which has found 65 new bugs over the last seven months in TVM, TensorRT, ONNXRuntime, and PyTorch. Of these, 55 have been confirmed and 48 have been fixed by project maintainers.
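To make the second ingredient concrete, here is a minimal, hypothetical sketch (not NNSmith's actual implementation) of gradient-based input search: a toy model containing log and sqrt turns random inputs into NaNs, so we penalize input elements that fall below a small positive margin and descend on the inputs until the model evaluates to finite values. The toy_model, the 1e-3 margin, and the Adam hyperparameters are all illustrative assumptions.

```python
import torch

def toy_model(x: torch.Tensor) -> torch.Tensor:
    # log/sqrt are undefined for non-positive inputs; random inputs
    # easily produce NaN/inf here, which can mask real compiler bugs
    # or trigger false alarms during differential testing.
    return torch.log(x) + torch.sqrt(x)

def search_valid_input(steps: int = 200, lr: float = 0.1) -> torch.Tensor:
    x = torch.randn(8, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Hinge penalty: positive wherever x dips below the margin,
        # zero once every element is safely inside log/sqrt's domain.
        loss = torch.relu(1e-3 - x).sum()
        if loss.item() == 0.0:
            break  # all elements are valid; stop searching
        loss.backward()
        opt.step()
    return x.detach()

x = search_valid_input()
assert torch.isfinite(toy_model(x)).all()
```

In NNSmith proper, the penalty is not hand-written per model: it is derived from each operator's numerically vulnerable input ranges and back-propagated through the generated DNN, and the resulting inputs then feed the differential-testing stage that compares backends.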

For more information, please check the artifact documentation (http://nnsmith-asplos.rtfd.io/) and NNSmith's OSS development repository (https://github.com/ise-uiuc/nnsmith).

Files (4.8 GB)

Name                              Size
NNSmith-ASPLOS23-Artifact.tar.gz  4.8 GB
(md5:94091f831a77efa16afdab32b068246c)
                  All versions  This version
Views                       99            54
Downloads                   11            10
Data volume            52.8 GB       48.0 GB
Unique views                74            39
Unique downloads             9             9
