Conference paper Open Access

Comparing Local and Central Differential Privacy Using Membership Inference Attacks

Bernau, Daniel; Robl, Jonas; Grassal, Philip-William; Schneider, Steffen; Kerschbaum, Florian


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nam##2200000uu#4500</leader>
  <datafield tag="041" ind1=" " ind2=" ">
    <subfield code="a">eng</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Anonymization</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Neural Networks</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Membership Inference</subfield>
  </datafield>
  <controlfield tag="005">20220213014909.0</controlfield>
  <controlfield tag="001">6052865</controlfield>
  <datafield tag="711" ind1=" " ind2=" ">
    <subfield code="d">19-20 July 2021</subfield>
    <subfield code="g">DBSEC</subfield>
    <subfield code="a">IFIP Annual Conference on Data and Applications Security and Privacy</subfield>
    <subfield code="c">Calgary, Canada</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">SAP SE</subfield>
    <subfield code="a">Robl, Jonas</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Heidelberg University</subfield>
    <subfield code="a">Grassal, Philip-William</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Procure.AI</subfield>
    <subfield code="a">Schneider, Steffen</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">University of Waterloo</subfield>
    <subfield code="a">Kerschbaum, Florian</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">814860</subfield>
    <subfield code="z">md5:00807367f0da1bab5e520309c1c48f44</subfield>
    <subfield code="u">https://zenodo.org/record/6052865/files/Comparing_local_and_central_differential_privacy_using_membership_inference_attacks__DBSEC___Camera_ready_%20(4).pdf</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="y">Conference website</subfield>
    <subfield code="u">https://wpsites.ucalgary.ca/dbsec2021/</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2022-02-12</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire</subfield>
    <subfield code="o">oai:zenodo.org:6052865</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">SAP SE</subfield>
    <subfield code="a">Bernau, Daniel</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">Comparing Local and Central Differential Privacy Using Membership Inference Attacks</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">825333</subfield>
    <subfield code="a">Multi-Owner data Sharing for Analytics and Integration respecting Confidentiality and Owner control</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;Attacks that aim to identify the training data of neural networks represent a severe threat to the privacy of individuals in the training dataset. A possible protection is offered by anonymization of the training data or training function with differential privacy. However, data scientists can choose between local and central differential privacy, and need to select meaningful privacy parameters&amp;nbsp;𝜖. Furthermore, comparing local and central differential privacy on the basis of the privacy parameters alone can lead data scientists to incorrect conclusions, since the privacy parameters reflect different types of mechanisms.&lt;/p&gt;

&lt;p&gt;Instead, we empirically compare the relative privacy-accuracy trade-off of one central and two local differential privacy mechanisms under a white-box membership inference attack. While membership inference only reflects a lower bound on inference risk and differential privacy formulates an upper bound, our experiments with several datasets show that the privacy-accuracy trade-off is similar for both types of mechanisms despite the large difference in their upper bound. This suggests that the upper bound is far from the practical susceptibility to membership inference. Thus, small&amp;nbsp;𝜖&amp;nbsp;in central differential privacy and large&amp;nbsp;𝜖&amp;nbsp;in local differential privacy result in similar membership inference risks, and local differential privacy can be a meaningful alternative to central differential privacy for differentially private deep learning despite its comparatively higher privacy parameters.&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.1007/978-3-030-81242-3_2</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">publication</subfield>
    <subfield code="b">conferencepaper</subfield>
  </datafield>
</record>
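The abstract contrasts central differential privacy (a trusted curator perturbs an aggregate or the training function) with local differential privacy (each participant perturbs their own record before sharing it). As a rough illustration of why the same ε means very different things in the two models — not taken from the paper, which applies these notions to neural-network training rather than simple counts — here is a minimal sketch of both mechanisms for a private count of binary values; the function names and the counting task are illustrative assumptions:

```python
import math
import random


def central_dp_count(values, epsilon):
    """Central model (illustrative): a trusted curator computes the true
    count of 1s, then adds Laplace noise with scale 1/epsilon
    (the count has sensitivity 1)."""
    true_count = sum(values)
    # Sample Laplace(0, 1/epsilon) via inverse-CDF transform;
    # the distribution is symmetric, so the sign convention is immaterial.
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)
    return true_count + noise


def local_dp_count(values, epsilon):
    """Local model (illustrative): each participant applies randomized
    response to their own bit before sending it; the curator then
    debiases the sum of the randomized bits."""
    # Probability of reporting the true bit under epsilon-LDP.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    noisy = [v if random.random() < p else 1 - v for v in values]
    # Unbiased estimator of the true count from the randomized reports.
    return (sum(noisy) - len(values) * (1.0 - p)) / (2.0 * p - 1.0)
```

Because every record is randomized individually in the local model, its estimate is far noisier at the same ε, which is one intuition behind the paper's observation that a large ε in local differential privacy can correspond to a similar practical membership inference risk as a small ε in central differential privacy.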