Published April 11, 2023 | Version v1
Journal article (Open Access)

GAN-Driven Data Poisoning Attacks and Their Mitigation in Federated Learning Systems

  • 1. Synelixis Solutions S.A.
  • 2. Netcompany-Intrasoft S.A.
  • 3. Synelixis Solutions S.A., National and Kapodistrian University of Athens

Description

Federated learning (FL) is an emerging technique in which machine learning models are trained in a decentralized manner. Its main advantage is data privacy, because the raw data are never processed on a central device; instead, the local client models are aggregated on a server into a global model that accumulates knowledge from all clients. This approach, however, is vulnerable to attacks, since clients themselves can be malicious or malicious actors may interfere with the network. In the former case, the attacks take the form of data or model poisoning, in which the training data or the model parameters, respectively, are altered. In this paper, we investigate data poisoning attacks, and specifically the label-flipping case, within a federated learning system. For an image classification task, we introduce two variants of data poisoning attacks, namely model degradation and targeted label attacks. These attacks are based on synthetic images generated by a generative adversarial network (GAN) that is trained jointly by the malicious clients on their concatenated malicious dataset. Because the number of available samples is limited, the architecture and learning procedure of the GAN are adjusted accordingly. Through experiments, we demonstrate that these attacks are effective in achieving their goal while remaining stealthy, i.e., fooling common federated defenses. We also propose a mechanism to mitigate these attacks based on clean-label training on the server side. In more detail, the model degradation attack causes an accuracy drop of up to 25%, while common defenses alleviate it by only ∼5%. Similarly, the targeted label attack results in a misclassification rate of 56%, compared with 2.5% when no attack takes place. Our proposed defense mechanism, in contrast, is able to mitigate these attacks.
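To make the setting above concrete, the sketch below shows how a label-flipping (targeted label) attack and a clean-label server-side check could look in a minimal FedAvg-style loop. It is a simplified assumption-laden illustration: the model is a single softmax layer on synthetic data rather than the GAN-generated images and image classifier used in the paper, and all names (flip_labels, federated_round, etc.) are hypothetical rather than taken from the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a label-flipping poisoning
# attack by a malicious client inside a FedAvg-style round, plus a simple
# server-side screening step that uses a small, correctly labelled clean
# set, in the spirit of the clean-label defense described above.
import numpy as np

NUM_CLASSES = 10
FEATURES = 20
rng = np.random.default_rng(0)

def flip_labels(y, source=3, target=8):
    """Targeted label flip: every 'source' sample is relabelled as 'target'."""
    y = y.copy()
    y[y == source] = target
    return y

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """A client's local training: a single softmax layer trained with
    full-batch gradient descent (a stand-in for the real local model)."""
    w = global_w.copy()
    for _ in range(epochs):
        logits = X @ w
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        onehot = np.eye(NUM_CLASSES)[y]
        w -= lr * (X.T @ (probs - onehot)) / len(X)
    return w

def accuracy(w, X, y):
    return float((np.argmax(X @ w, axis=1) == y).mean())

def federated_round(global_w, clients, clean_X, clean_y, threshold=0.2):
    """One FedAvg round: malicious clients poison their labels before local
    training; the server drops updates that score poorly on its clean set
    and averages the remaining client weights."""
    accepted = []
    for X, y, malicious in clients:
        if malicious:
            y = flip_labels(y)                          # poisoned local dataset
        w = local_update(global_w, X, y)
        if accuracy(w, clean_X, clean_y) >= threshold:  # clean-label screening
            accepted.append(w)
    return np.mean(accepted, axis=0) if accepted else global_w

# Toy usage: four benign clients and one malicious client on synthetic,
# linearly separable data (purely illustrative, not the paper's datasets).
true_w = rng.normal(size=(FEATURES, NUM_CLASSES))

def make_client(malicious=False, n=200):
    X = rng.normal(size=(n, FEATURES))
    return X, np.argmax(X @ true_w, axis=1), malicious

clients = [make_client() for _ in range(4)] + [make_client(malicious=True)]
clean_X, clean_y, _ = make_client()          # server's clean validation set

global_w = np.zeros((FEATURES, NUM_CLASSES))
for _ in range(10):
    global_w = federated_round(global_w, clients, clean_X, clean_y)
print("global accuracy on clean data:", accuracy(global_w, clean_X, clean_y))
```

In this toy setup the server simply rejects any client update whose accuracy on its small clean set falls below a fixed threshold; the paper's actual mitigation is based on clean-label training on the server side, which this filter only loosely approximates.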

Files

electronics-12-01805.pdf (2.5 MB)
md5:25f6603afd0c065d2d5eb709bcf58fae

Additional details

Funding

IoT-NGIN – Next Generation IoT as part of Next Generation Internet (grant No. 957246), European Commission