
Software Open Access

Game of Threads: Enabling Asynchronous Poisoning Attacks

Jose Rodrigo Sanchez Vicarte; Benjamin Schreiber; Riccardo Paccagnella; Christopher W. Fletcher

DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.3598009</identifier>
  <creators>
    <creator>
      <creatorName>Jose Rodrigo Sanchez Vicarte</creatorName>
      <affiliation>University of Illinois, Urbana Champaign</affiliation>
    </creator>
    <creator>
      <creatorName>Benjamin Schreiber</creatorName>
      <affiliation>University of Illinois, Urbana Champaign</affiliation>
    </creator>
    <creator>
      <creatorName>Riccardo Paccagnella</creatorName>
      <affiliation>University of Illinois, Urbana Champaign</affiliation>
    </creator>
    <creator>
      <creatorName>Christopher W. Fletcher</creatorName>
      <affiliation>University of Illinois, Urbana Champaign</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Game of Threads: Enabling Asynchronous Poisoning Attacks</title>
  </titles>
  <subjects>
    <subject>machine learning</subject>
    <subject>poisoning attacks</subject>
    <subject>SGX</subject>
    <subject>adversarial machine learning</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2020-01-05</date>
  </dates>
  <resourceType resourceTypeGeneral="Software"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3598009</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3598008</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;As machine learning models continue to grow in size and complexity, training is being forced to adopt asynchronicity to avoid scalability bottlenecks. In asynchronous training, many threads share and update the model in a racy fashion to avoid inter-thread synchronization. This paper studies the security implications of asynchronous training codes by introducing asynchronous poisoning attacks. Our attack influences training outcome&amp;mdash;e.g., degrades accuracy or biases the model towards an adversary-specified label&amp;mdash;purely by scheduling asynchronous training threads in a malicious fashion. Since thread scheduling is outside the protections of modern trusted execution environments (TEEs), e.g., Intel SGX, our attack bypasses these protections even when the training set can be verified as correct. To the best of our knowledge, this represents the first example where a class of applications loses integrity guarantees, despite being protected by enclave-based TEEs such as Intel SGX. We demonstrate both accuracy degradation and model biasing attacks on the CIFAR-10 image recognition task using LeNet-style and ResNet DNNs, attacking an asynchronous training implementation published by PyTorch. Our accuracy degradation attack is trivial to perform and can decrease model accuracy by 6-60% with a single malicious update. Our model biasing attack is capable of biasing the model towards an adversary-chosen label by up to 3.5&amp;times; the label&amp;rsquo;s normal prediction rate on a LeNet-style network and up to 2&amp;times; on ResNet-18.&lt;/p&gt;</description>
  </descriptions>
</resource>
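The setting the abstract describes is Hogwild-style asynchronous training, where several worker processes update shared model parameters without locks, so the scheduler decides how their racy updates interleave. Below is a minimal sketch of such a training loop, assuming PyTorch's shared-memory multiprocessing; the model, data, and hyperparameters are illustrative placeholders, not the record's actual code.

import torch
import torch.nn as nn
import torch.multiprocessing as mp

def train_worker(model, steps=100):
    # Each worker builds its own optimizer over the *shared* parameters
    # and updates them without any locking, so updates from different
    # workers can interleave (race) freely.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(32, 10)            # stand-in for a real minibatch
        y = torch.randint(0, 2, (32,))     # stand-in labels
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()                         # racy write to shared weights

if __name__ == "__main__":
    model = nn.Linear(10, 2)
    model.share_memory()                   # place parameters in shared memory
    workers = [mp.Process(target=train_worker, args=(model,)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

Because opt.step() writes to the shared parameters with no synchronization, whoever controls scheduling (which sits outside an SGX enclave's protection) controls which version of the weights each update is computed against and when it lands.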
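To see why scheduling alone can poison training, consider a worker that computes a gradient early, is descheduled by a malicious OS, and is resumed only after the other workers have converged: its single stale update then lands on the trained model. The following toy NumPy simulation of that sequence is purely conceptual and is not the paper's attack code.

import numpy as np

target = np.array([1.0, -1.0])     # optimum that honest training approaches
w = np.array([50.0, 50.0])         # shared "model", far from the optimum
lr = 0.1

def grad(w):
    # gradient of the toy loss 0.5 * ||w - target||^2
    return w - target

def loss(w):
    return 0.5 * np.sum((w - target) ** 2)

# The victim worker computes a gradient at the bad initial point, then the
# adversarial scheduler deschedules it before it can apply the update.
stale_grad = grad(w)

# Honest workers train the shared model to (near) convergence.
for _ in range(500):
    w = w - lr * grad(w)
print(f"loss after honest training:  {loss(w):.4f}")   # ~0

# The victim is resumed: its single, stale update lands on the trained
# model and undoes much of the training progress.
w = w - lr * stale_grad
print(f"loss after one stale update: {loss(w):.4f}")   # large again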
                    All versions    This version
Views                        387             147
Downloads                     43              13
Data volume             991.2 MB        803.8 MB
Unique views                 338             140
Unique downloads              41              13

