On the Detectability of Active Gradient Inversion Attacks in Federated Learning (IEEE S&P '26) - Source Code
Authors/Creators
Description
This artifact accompanies the paper "On the Detectability of Active Gradient Inversion Attacks in Federated Learning," accepted for publication at the IEEE Symposium on Security and Privacy (IEEE S&P) 2026.
Federated Learning allows multiple clients to collaboratively train a Machine Learning model while keeping their private data on-site. However, the gradients exchanged during training remain vulnerable to Gradient Inversion Attacks, which allow a malicious server to reconstruct the clients' local data. In active attacks, the server deliberately manipulates the global model to facilitate this reconstruction.
This repository provides the official implementation to reproduce our comprehensive analysis of four state-of-the-art active gradient inversion attacks. It also contains the source code for our novel, lightweight client-side detection techniques. These defenses identify statistically improbable weight structures alongside anomalous loss and gradient dynamics, enabling clients to effectively detect active attacks without modifying the standard federated learning protocol.
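To illustrate the idea behind such client-side checks, the sketch below shows two heuristics of the kind described above: flagging layers whose weight rows are near-duplicates (a structure honest training is statistically unlikely to produce) and flagging loss values that deviate sharply from the recent trend. This is a minimal, hypothetical sketch, not the artifact's actual implementation; function names, thresholds, and the duplicate-row criterion are illustrative assumptions.

```python
import numpy as np

def improbable_weight_rows(weights, dup_tol=1e-6, min_dup=2):
    """Illustrative check: flag a weight matrix containing `min_dup`
    near-duplicate copies of some row (tolerance `dup_tol`).
    Thresholds are hypothetical, not taken from the paper."""
    w = np.asarray(weights, dtype=float)
    for i in range(len(w)):
        # Count rows within dup_tol of row i (includes row i itself).
        dups = np.sum(np.linalg.norm(w - w[i], axis=1) < dup_tol)
        if dups >= min_dup + 1:
            return True
    return False

def anomalous_loss(loss_history, new_loss, z_thresh=3.0):
    """Illustrative check: flag a loss value whose z-score against the
    recent history exceeds `z_thresh`."""
    hist = np.asarray(loss_history, dtype=float)
    if hist.size < 3:
        return False  # too little history to judge
    z = abs(new_loss - hist.mean()) / (hist.std() + 1e-12)
    return z > z_thresh
```

A client could run checks like these on each received global model before training, rejecting the round if either flag fires; because they only read the model and the local loss, they require no change to the federated learning protocol.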
Please refer to the documentation included in the repository for detailed instructions on setting up the environment, running the minimal working example, and reproducing the experimental results.
Files
| Name | Size |
|---|---|
| active_gias_detectability.zip (md5:61771d5512f4657cfd54a2c4c43175e5) | 859.8 kB |
Additional details
Software
- Repository URL: https://anonymous.4open.science/r/active_gias_detectability-PF12/
- Programming language: Python