Published September 23, 2018 | Version v0.1.1
Software
Open
mia: A library for running membership inference attacks against ML models
Description
A library for running membership inference attacks (MIA) against machine learning models. Check out the documentation.
These are attacks against the privacy of the training data. In MIA, an attacker tries to guess whether a given example was used to train a target model, using only queries to that model. See the paper by Shokri et al. for details. Currently, you can use the library to evaluate the robustness of your Keras or PyTorch models to MIA.
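To make the attack setting concrete, here is a minimal sketch (not part of the `mia` library; all names below are illustrative) of the simplest membership-inference baseline: query the target model and guess "member" when its confidence in the true label is high, since overfit models tend to be more confident on their own training points.

```python
# Baseline membership inference by thresholding model confidence.
# This is an illustrative sketch, not the mia library's API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_train, y_train = X[:200], y[:200]   # "members" of the training set
X_out, y_out = X[200:], y[200:]       # "non-members"

target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def infer_membership(model, X, y, threshold=0.8):
    """Guess 'member' when the model's confidence in the true label exceeds threshold."""
    probs = model.predict_proba(X)[np.arange(len(y)), y]
    return probs > threshold

# An overfit model is flagged as 'member' far more often on its own training data.
member_rate = infer_membership(target, X_train, y_train).mean()
nonmember_rate = infer_membership(target, X_out, y_out).mean()
```

The gap between `member_rate` and `nonmember_rate` is exactly the leakage that MIA exploits; the shadow-model attack below replaces the hand-picked threshold with a learned attack model.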
Features:
- Implements the original shadow model attack
- Is customizable: can use any scikit-learn Estimator-like object as a shadow or attack model
- Is tested with Keras and PyTorch
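The shadow model attack named above can be sketched end to end with scikit-learn alone. This is a hedged illustration of the technique from Shokri et al., not the `mia` API; the helper `attack_features` and the data splits are assumptions made for the example. Shadow models are trained on data the attacker controls, their prediction vectors (with known in/out labels) form the attack model's training set, and the attack model is then pointed at the target.

```python
# Sketch of the shadow-model membership inference attack (Shokri et al.).
# Illustrative only; names here are not the mia library's API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# The target model is trained on data the attacker never sees.
X_in, y_in, X_out, y_out = X[:250], y[:250], X[250:500], y[250:500]
target = RandomForestClassifier(n_estimators=30, random_state=0).fit(X_in, y_in)

def attack_features(model, X):
    # Sort each prediction vector so the attack generalizes across class labels.
    return np.sort(model.predict_proba(X), axis=1)[:, ::-1]

# Shadow models mimic the target, trained on splits of the attacker's own data.
pool_X, pool_y = X[500:], y[500:]
n_shadows, size = 3, 250
feats, labels = [], []
for i in range(n_shadows):
    lo = i * 2 * size
    Xs, ys = pool_X[lo:lo + size], pool_y[lo:lo + size]   # shadow "in" data
    Xo = pool_X[lo + size:lo + 2 * size]                  # shadow "out" data
    shadow = RandomForestClassifier(n_estimators=30, random_state=i).fit(Xs, ys)
    feats += [attack_features(shadow, Xs), attack_features(shadow, Xo)]
    labels += [np.ones(size), np.zeros(size)]

# The attack model learns to separate "in" from "out" prediction vectors.
attack = LogisticRegression().fit(np.vstack(feats), np.concatenate(labels))

# Evaluate against the real target: members should be flagged more often.
member_recall = attack.predict(attack_features(target, X_in)).mean()
false_alarm = attack.predict(attack_features(target, X_out)).mean()
```

Because any scikit-learn Estimator-like object can serve as a shadow or attack model, the `RandomForestClassifier` and `LogisticRegression` choices here are interchangeable with other estimators.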
Files
bogdan-kulynych/mia-v0.1.1.zip (24.5 kB)
md5:c9c06c1e68bd077c5e785eec5984d3e8
Additional details
Related works
- Is supplement to: https://github.com/bogdan-kulynych/mia/tree/v0.1.1 (URL)