
Published January 5, 2020 | Version v1
Software | Open Access

Game of Threads: Enabling Asynchronous Poisoning Attacks

  • University of Illinois at Urbana-Champaign

Description

As machine learning models continue to grow in size and complexity, training is being forced to adopt asynchronicity to avoid scalability bottlenecks. In asynchronous training, many threads share and update the model in a racy fashion to avoid inter-thread synchronization.

This paper studies the security implications of asynchronous training code by introducing asynchronous poisoning attacks. Our attack influences the training outcome (e.g., degrades accuracy or biases the model towards an adversary-specified label) purely by scheduling asynchronous training threads in a malicious fashion. Since thread scheduling is outside the protections of modern trusted execution environments (TEEs), e.g., Intel SGX, our attack bypasses these protections even when the training set can be verified as correct. To the best of our knowledge, this represents the first example of a class of applications losing integrity guarantees despite being protected by enclave-based TEEs such as Intel SGX.

We demonstrate both accuracy-degradation and model-biasing attacks on the CIFAR-10 image recognition task using LeNet-style and ResNet DNNs, attacking an asynchronous training implementation published by PyTorch. Our accuracy-degradation attack is trivial to perform and can decrease model accuracy by 6-60% with a single malicious update. Our model-biasing attack can bias the model towards an adversary-chosen label by up to 3.5× the label's normal prediction rate on a LeNet-style network and up to 2× on ResNet-18.
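For context, the attack targets Hogwild-style asynchronous training, in which several worker processes update a shared model without locks. The sketch below, written against PyTorch's torch.multiprocessing and share_memory() APIs, shows this pattern in minimal form; the TinyNet model, random stand-in batches, worker count, and hyperparameters are illustrative assumptions, not the artifact's actual LeNet/ResNet training code.

```python
import torch
import torch.nn as nn
import torch.multiprocessing as mp

# Toy model standing in for the LeNet-style / ResNet networks in the paper.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(32 * 32 * 3, 10)

    def forward(self, x):
        return self.fc(x.view(x.size(0), -1))

def train_worker(model, steps=100):
    # Each worker has its own optimizer but updates the shared parameters.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(64, 3, 32, 32)   # stand-in for CIFAR-10 batches
        y = torch.randint(0, 10, (64,))
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()                       # racy, unsynchronized write to shared weights

if __name__ == "__main__":
    model = TinyNet()
    model.share_memory()                 # place parameters in shared memory
    workers = [mp.Process(target=train_worker, args=(model,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Because the updates in opt.step() are applied without synchronization, the final model depends on how the OS interleaves the workers; the paper's attack exploits exactly this scheduling freedom, which lies outside the enclave's protection.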

Files (61.8 MB)

md5:23a7bee7b5fc63295581a3dfeac75d4f (61.8 MB)