Software Open Access

Game of Threads: Enabling Asynchronous Poisoning Attacks

Jose Rodrigo Sanchez Vicarte; Benjamin Schreiber; Riccardo Paccagnella; Christopher W. Fletcher


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nmm##2200000uu#4500</leader>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">machine learning, poisoning attacks, SGX, adversarial machine learning</subfield>
  </datafield>
  <controlfield tag="005">20200126231305.0</controlfield>
  <controlfield tag="001">3598009</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">March 16-20, 2020</subfield>
    <subfield code="g">ASPLOS</subfield>
    <subfield code="c">Lousanne</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University of Illinois, Urbana Champaign</subfield>
    <subfield code="a">Benjamin Schreiber</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University of Illinois, Urbana Champaign</subfield>
    <subfield code="a">Riccardo Paccagnella</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University of Illinois, Urbana Champaign</subfield>
    <subfield code="a">Christopher W. Fletcher</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">61832172</subfield>
    <subfield code="z">md5:23a7bee7b5fc63295581a3dfeac75d4f</subfield>
    <subfield code="u">https://zenodo.org/record/3598009/files/game_of_threads.tar.bz2</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2020-01-05</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">software</subfield>
    <subfield code="o">oai:zenodo.org:3598009</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">University of Illinois, Urbana Champaign</subfield>
    <subfield code="a">Jose Rodrigo Sanchez Vicarte</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Game of Threads: Enabling Asynchronous Poisoning Attacks</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;As machine learning models continue to grow in size andcomplexity, training is being forced to adopt asynchronicity toavoid scalability bottlenecks. In asynchronous training, manythreads share and update the model in a racy fashion to avoidinter-thread synchronization.This paper studies the security implications of asynchronoustraining codes by introducingasynchronous poisoning attacks.Our attack influences training outcome&amp;mdash;e.g., degrades ac-curacy or biases the model towards an adversary-specifiedlabel&amp;mdash;purely by scheduling asynchronous training threads ina malicious fashion. Since thread scheduling is outside theprotections of modern trusted execution environments (TEEs),e.g., Intel SGX, our attack bypasses these protections evenwhen the training set can be verified as correct. To the bestof our knowledge, this represents the first example where aclass of applications loses integrity guarantees, despite beingprotected by enclave-based TEEs such as Intel SGX.We demonstrate both accuracy degradation and model bi-asing attacks on the CIFAR-10 image recognition task usingLeNet-style and Resnet DNNs, attacking an asynchronoustraining implementation published by Pytorch. Our accuracydegradation attack is trivial to perform and can decreasemodel accuracy by 6-60% with a single malicious update. Ourmodel biasing attack is capable of biasing the model towardsan adversary-chosen label by up to3.5&amp;times;the label&amp;rsquo;s normalprediction rate on a LeNet-style network and up to2&amp;times;onResNet-18.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.3598008</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.3598009</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">software</subfield>
  </datafield>
</record>
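
For readers unfamiliar with the training setup described in the abstract above, the sketch below illustrates Hogwild-style asynchronous training, in which several worker processes apply lock-free updates to a single shared model. This is a minimal illustrative example only: the network, data, and hyperparameters are placeholders, and it is not the record's artifact (which is distributed in game_of_threads.tar.bz2 above). It assumes PyTorch's torch.multiprocessing and nn.Module.share_memory().

# Minimal sketch of Hogwild-style asynchronous training (illustrative only;
# the record's actual code is in game_of_threads.tar.bz2).
import torch
import torch.nn as nn
import torch.multiprocessing as mp


def train_worker(model, steps=100):
    # Each worker applies gradient updates directly to the shared parameters,
    # without any inter-thread synchronization.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(32, 3 * 32 * 32)       # placeholder stand-in for a CIFAR-10 batch
        y = torch.randint(0, 10, (32,))
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()                              # racy write to the shared weights


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, 10))
    model.share_memory()                        # place parameters in shared memory
    workers = [mp.Process(target=train_worker, args=(model,)) for _ in range(4)]
    for p in workers:
        p.start()
    for p in workers:
        p.join()

Because each opt.step() writes to the shared parameters without locks, a malicious scheduler can stall a worker and release its stale update at a chosen moment; per the abstract, this scheduling lever alone is enough to degrade accuracy or bias the model, even inside an SGX enclave.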
                    All versions    This version
Views               387             147
Downloads           43              13
Data volume         991.2 MB        803.8 MB
Unique views        338             140
Unique downloads    41              13
