Published July 19, 2022 | Version 1.0.1
Dataset | Open Access

UG100 Dataset

  • Samuele Marro and Michele Lombardi (University of Bologna)

Description

The UG100 dataset contains the results of seven approximate \(L_\infty\) adversarial attacks (plus the MIP-based MIPVerify) on the MNIST and CIFAR10 datasets. Specifically, it contains ~2.3k adversarial examples generated by the following attacks:

  • Basic Iterative Method ("bim")
  • Brendel & Bethge Attack ("brendel")
  • Carlini & Wagner Attack ("carlini")
  • DeepFool ("deepfool")
  • Fast Gradient Sign Method ("fast_gradient")
  • Projected Gradient Descent ("pgd")
  • Uniform noise ("uniform")
  • MIPVerify ("mip")

It also includes adversarial distances (for all attacks) and bounds (for MIP), as well as MIP convergence times.
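
Each adversarial distance is the \(L_\infty\) norm of the perturbation for the corresponding example. As a minimal sketch (not the official loading code; the function name and pixel values in \([0, 1]\) are assumptions, and the actual file layout is documented in the companion code), such a distance can be recomputed in Python as follows:

    import numpy as np

    # Sketch: recompute the L-infinity adversarial distance between an original
    # image and its adversarial example. Pixel values are assumed to be in [0, 1].
    def linf_distance(original: np.ndarray, adversarial: np.ndarray) -> float:
        return float(np.abs(adversarial - original).max())

    # Placeholder example with a 28x28 grayscale image (MNIST-sized):
    original = np.zeros((28, 28))
    adversarial = original.copy()
    adversarial[0, 0] = 0.1  # perturb a single pixel
    print(linf_distance(original, adversarial))  # 0.1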

Applications of this dataset include:

  • Studying how, when, and why adversarial attacks are close to optimal (see the sketch after this list);
  • Training classifiers that are robust to adversarial noise;
  • Benchmarking new adversarial attacks.
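
The MIP lower bounds make it possible to measure how close an attack gets to the provably minimal perturbation. The sketch below illustrates the idea; the array values and variable names are placeholders, and the real distances and bounds should be loaded from the dataset files.

    import numpy as np

    # Sketch: compare an attack's L-infinity distances with the MIP lower bounds
    # included in the dataset. A ratio close to 1 means the attack is close to optimal.
    attack_distances = np.array([0.12, 0.08, 0.31])  # placeholder per-sample distances
    mip_lower_bounds = np.array([0.10, 0.08, 0.25])  # placeholder per-sample MIP bounds

    ratios = attack_distances / mip_lower_bounds
    print(f"Median optimality ratio: {np.median(ratios):.3f}")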

The companion code for this dataset is available here.

Notes

Please cite this dataset as: Samuele Marro and Michele Lombardi. Asymmetries in Adversarial Settings. 2022. We acknowledge the CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support. We also thank Rebecca Montanari and Andrea Borghesi for their advice and support.

Files (2.5 GB)

The deposit contains ten files, including adversarials_balanced_cifar10.zip. MD5 checksums and sizes:

  • md5:b327d2ed2e372d391ff0b572de501c44 (1.0 GB)
  • md5:ee46fb35e0b7d71f72ff6528135aca49 (155.9 MB)
  • md5:afc16001b59504000a1b840226a01868 (115.7 MB)
  • md5:fed7251300d8368575987c8b5bbdb361 (11.8 MB)
  • md5:0126d547d724304b64fe3c7615ce551a (1.1 GB)
  • md5:9b52c11d7d1f69259026ab40ba4e6bf2 (159.2 MB)
  • md5:0054066d8890853363b804347958cf0c (6.8 MB)
  • md5:ecebaa64bb32c2cf606ffc3451c27590 (13.3 kB)
  • md5:83ef5af280e00f41f6657d5adf609fca (13.4 kB)
  • md5:11c5f84377a7763141334a99f37315ea (514.6 kB)
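
A downloaded archive can be checked against its published MD5 checksum, for example with the following sketch (the file name and checksum pairing are placeholders; match each archive with its entry in the list above):

    import hashlib

    # Compute the MD5 checksum of a downloaded archive and compare it with the
    # published value. Path and expected checksum below are example placeholders.
    def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    assert md5_of("adversarials_balanced_cifar10.zip") == "b327d2ed2e372d391ff0b572de501c44"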

Additional details

References

  • Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. In International Conference on Learning Representations, 2017.
  • Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, and Matthias Bethge. Accurate, reliable and fast robustness evaluation. Advances in Neural Information Processing Systems, 32, 2019.
  • Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
  • Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
  • Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.
  • Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018.