
Software Open Access

Game of Threads: Enabling Asynchronous Poisoning Attacks

Jose Rodrigo Sanchez Vicarte; Benjamin Schreiber; Riccardo Paccagnella; Christopher W. Fletcher

As machine learning models continue to grow in size and complexity, training is being forced to adopt asynchronicity to avoid scalability bottlenecks. In asynchronous training, many threads share and update the model in a racy fashion to avoid inter-thread synchronization.

This paper studies the security implications of asynchronous training codes by introducing asynchronous poisoning attacks. Our attack influences the training outcome (e.g., degrades accuracy or biases the model towards an adversary-specified label) purely by scheduling asynchronous training threads in a malicious fashion. Since thread scheduling is outside the protections of modern trusted execution environments (TEEs), e.g., Intel SGX, our attack bypasses these protections even when the training set can be verified as correct. To the best of our knowledge, this represents the first example where a class of applications loses integrity guarantees despite being protected by enclave-based TEEs such as Intel SGX.

We demonstrate both accuracy degradation and model biasing attacks on the CIFAR-10 image recognition task using LeNet-style and ResNet DNNs, attacking an asynchronous training implementation published by PyTorch. Our accuracy degradation attack is trivial to perform and can decrease model accuracy by 6-60% with a single malicious update. Our model biasing attack is capable of biasing the model towards an adversary-chosen label by up to 3.5x the label's normal prediction rate on a LeNet-style network and up to 2x on ResNet-18.
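
For context, the attacked setting is Hogwild-style asynchronous training, where worker threads or processes update a shared model with no locking. Below is a minimal sketch of that pattern using PyTorch's torch.multiprocessing, in the style of PyTorch's published Hogwild example; the linear model, random batches, and hyperparameters here are hypothetical placeholders, not the paper's artifact.

```python
# Minimal Hogwild-style asynchronous training sketch (placeholder model/data).
import torch
import torch.nn as nn
import torch.multiprocessing as mp

def train_worker(model, steps=100):
    # Each worker optimizes the *shared* model without any locking,
    # so concurrent parameter updates can race ("racy fashion" above).
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(32, 10)          # placeholder batch
        y = torch.randint(0, 2, (32,))   # placeholder labels
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()                       # unsynchronized in-place update

if __name__ == "__main__":
    model = nn.Linear(10, 2)
    model.share_memory()  # place parameters in shared memory for all workers
    workers = [mp.Process(target=train_worker, args=(model,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Because correctness of the final model depends on how these unsynchronized updates interleave, an adversary who controls thread scheduling (which sits outside the TEE's protection boundary) can influence the trained model without touching the training data.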

Files (61.8 MB)

Name: game_of_threads.tar.bz2
md5:  dcf90da7a012a4ad13d7dd9a6c89710f
Size: 61.8 MB
