Published January 25, 2025 | Version v1
Conference paper | Open Access

Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI

  • 1. University of Technology Sydney
  • 2. City University of Macau
  • 3. University of Technology, Sydney
  • 4. Griffith University
  • 5. Helmholtz Center for Information Security

Description

This project includes all the Python code required for our experiments. For clarity, the code is organized into six separate folders: ./generated_demo contains the details of data generation, while each of the remaining folders contains the code for an individual task. Please note that the datasets used in the experiments (MNIST, CIFAR10, SKIN_CANCER, IMDB, and BBC News) are public. For the MNIST, CIFAR10, and IMDB tasks, the project uses torchvision to load the data; for SKIN_CANCER and BBC News, download links are provided in the relevant sections of the project. The PET dataset, however, is private: it was constructed from self-recorded videos and recent YouTube videos, and it is stored in ./private_pet/train. Additionally, each task folder includes a more detailed Readme.md that outlines the implementation steps. A minimal data-loading sketch is given below.
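The sketch below illustrates how the public image datasets could be loaded with torchvision, as the description indicates. The root directory, the transform, and the class-per-subfolder layout assumed for ./private_pet/train are illustrative assumptions, not the repository's actual code; IMDB, being a text dataset, is omitted here.

```python
# Minimal sketch (assumptions, not the repository's code): loading the
# public image datasets with torchvision.
import torchvision
from torchvision import transforms

transform = transforms.ToTensor()

# MNIST and CIFAR10 download automatically on first use; the ./data
# root directory is an assumption for illustration.
mnist_train = torchvision.datasets.MNIST(
    root="./data", train=True, download=True, transform=transform)
cifar_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)

# The private PET dataset lives under ./private_pet/train; assuming a
# standard one-subfolder-per-class layout (a guess, since the actual
# structure is documented only inside the repository), ImageFolder
# would load it directly.
pet_train = torchvision.datasets.ImageFolder(
    root="./private_pet/train", transform=transform)
```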

Files

Data-Free Model-Related Attacks Unleashing the Potential of Generative AI.zip

Additional details

Software

Repository URL
https://github.com/SixLab6/model_stealing
Programming language
Python