Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI
Creators
Description
This project includes all the Python code required for our experiments. For clarity, the code is organized into six folders: ./generated_demo contains the details for data generation, and the remaining folders contain the code for each individual task. The datasets used in the experiments—MNIST, CIFAR10, SKIN_CANCER, IMDB, and BBC News—are public. For the MNIST, CIFAR10, and IMDB tasks, the project uses torchvision to load the data; for SKIN_CANCER and BBC News, download links are provided in the relevant sections of the project. The PET dataset, however, is private: it was constructed from self-recorded videos and recent YouTube videos and is stored in ./private_pet/train. Each task folder also includes a more detailed Readme.md that outlines the implementation steps.
Files
Name | Size
---|---
Data-Free Model-Related Attacks Unleashing the Potential of Generative AI.zip (md5:04a391dcded0ce31acb23d4f82ed3e85) | 939.2 MB
Additional details
Software
- Repository URL
- https://github.com/SixLab6/model_stealing
- Programming language
- Python