Conference paper Open Access

Comparing Local and Central Differential Privacy Using Membership Inference Attacks

Bernau, Daniel; Robl, Jonas; Grassal, Philip-William; Schneider, Steffen; Kerschbaum, Florian


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Bernau, Daniel</dc:creator>
  <dc:creator>Robl, Jonas</dc:creator>
  <dc:creator>Grassal, Philip-William</dc:creator>
  <dc:creator>Schneider, Steffen</dc:creator>
  <dc:creator>Kerschbaum, Florian</dc:creator>
  <dc:date>2022-02-12</dc:date>
  <dc:description>Attacks that aim to identify the training data of neural networks represent a severe threat to the privacy of individuals in the training dataset. A possible protection is offered by anonymization of the training data or training function with differential privacy. However, data scientists can choose between local and central differential privacy, and need to select meaningful privacy parameters 𝜖. Furthermore, comparing local and central differential privacy on the basis of their privacy parameters can lead data scientists to incorrect conclusions, since the privacy parameters reflect different types of mechanisms.

Instead, we empirically compare the relative privacy-accuracy trade-off of one central and two local differential privacy mechanisms under a white-box membership inference attack. While membership inference only reflects a lower bound on inference risk and differential privacy formulates an upper bound, our experiments with several datasets show that the privacy-accuracy trade-off is similar for both types of mechanisms despite the large difference in their upper bounds. This suggests that the upper bound is far from the practical susceptibility to membership inference. Thus, small 𝜖 in central differential privacy and large 𝜖 in local differential privacy result in similar membership inference risks, and local differential privacy can be a meaningful alternative to central differential privacy for differentially private deep learning despite its comparatively higher privacy parameters.</dc:description>
  <dc:identifier>https://zenodo.org/record/6052865</dc:identifier>
  <dc:identifier>10.1007/978-3-030-81242-3_2</dc:identifier>
  <dc:identifier>oai:zenodo.org:6052865</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>info:eu-repo/grantAgreement/EC/H2020/825333/</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:subject>Anonymization</dc:subject>
  <dc:subject>Neural Networks</dc:subject>
  <dc:subject>Membership Inference</dc:subject>
  <dc:title>Comparing Local and Central Differential Privacy Using Membership Inference Attacks</dc:title>
  <dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
  <dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>