Conference paper Open Access

Comparing Local and Central Differential Privacy Using Membership Inference Attacks

Bernau, Daniel; Robl, Jonas; Grassal, Philip-William; Schneider, Steffen; Kerschbaum, Florian


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="URL">https://zenodo.org/record/6052865</identifier>
  <creators>
    <creator>
      <creatorName>Bernau, Daniel</creatorName>
      <givenName>Daniel</givenName>
      <familyName>Bernau</familyName>
      <affiliation>SAP SE</affiliation>
    </creator>
    <creator>
      <creatorName>Robl, Jonas</creatorName>
      <givenName>Jonas</givenName>
      <familyName>Robl</familyName>
      <affiliation>SAP SE</affiliation>
    </creator>
    <creator>
      <creatorName>Grassal, Philip-William</creatorName>
      <givenName>Philip-William</givenName>
      <familyName>Grassal</familyName>
      <affiliation>Heidelberg University</affiliation>
    </creator>
    <creator>
      <creatorName>Schneider, Steffen</creatorName>
      <givenName>Steffen</givenName>
      <familyName>Schneider</familyName>
      <affiliation>Procure.AI</affiliation>
    </creator>
    <creator>
      <creatorName>Kerschbaum, Florian</creatorName>
      <givenName>Florian</givenName>
      <familyName>Kerschbaum</familyName>
      <affiliation>University of Waterloo</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Comparing Local and Central Differential Privacy Using Membership Inference Attacks</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2022</publicationYear>
  <subjects>
    <subject>Anonymization</subject>
    <subject>Neural Networks</subject>
    <subject>Membership Inference</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2022-02-12</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/6052865</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1007/978-3-030-81242-3_2</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;Attacks that aim to identify the training data of neural networks represent a severe threat to the privacy of individuals in the training dataset. A possible protection is offered by anonymization of the training data or training function with differential privacy. However, data scientists can choose between local and central differential privacy, and need to select meaningful privacy parameters&amp;nbsp;𝜖. Comparing local and central differential privacy solely on the basis of the privacy parameters can furthermore lead data scientists to incorrect conclusions, since the privacy parameters reflect different types of mechanisms.&lt;/p&gt;

&lt;p&gt;Instead, we empirically compare the relative privacy-accuracy trade-off of one central and two local differential privacy mechanisms under a white-box membership inference attack. While membership inference only reflects a lower bound on inference risk and differential privacy formulates an upper bound, our experiments with several datasets show that the privacy-accuracy trade-off is similar for both types of mechanisms despite the large difference in their upper bound. This suggests that the upper bound is far from the practical susceptibility to membership inference. Thus, small&amp;nbsp;𝜖&amp;nbsp;in central differential privacy and large&amp;nbsp;𝜖&amp;nbsp;in local differential privacy result in similar membership inference risks, and local differential privacy can be a meaningful alternative to central differential privacy for differentially private deep learning despite its comparatively higher privacy parameters.&lt;/p&gt;</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/100010661</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/825333/">825333</awardNumber>
      <awardTitle>Multi-Owner data Sharing for Analytics and Integration respecting Confidentiality and Owner control</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
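The abstract's key point is that the same privacy parameter 𝜖 means different things in the two models: a central mechanism adds noise once, calibrated to a query's sensitivity over the whole dataset, while a local mechanism perturbs every record individually before it leaves the participant. The following sketch (not code from the paper; function names and the mean query are illustrative assumptions) shows both noise placements using the standard Laplace mechanism:

```python
import math
import random

def laplace(scale):
    # Inverse-CDF sampling of Laplace(0, scale) noise.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def central_dp_mean(values, epsilon, lo=0.0, hi=1.0):
    """Central DP: a trusted curator computes the exact mean of the raw
    values, then adds noise scaled to the mean's sensitivity. One record
    changes the mean by at most (hi - lo) / n, so the noise is tiny."""
    clipped = [min(max(v, lo), hi) for v in values]
    sensitivity = (hi - lo) / len(clipped)
    return sum(clipped) / len(clipped) + laplace(sensitivity / epsilon)

def local_dp_mean(values, epsilon, lo=0.0, hi=1.0):
    """Local DP: each participant perturbs their own value before sending
    it; the curator only ever sees noisy reports. The per-record
    sensitivity is the full range (hi - lo), so for the same epsilon far
    more noise is injected, partially averaged out by aggregation."""
    sensitivity = hi - lo
    noisy = [min(max(v, lo), hi) + laplace(sensitivity / epsilon)
             for v in values]
    return sum(noisy) / len(noisy)
```

For equal 𝜖 the local estimate is much noisier per record, which is why, as the abstract argues, matching local and central mechanisms by 𝜖 alone is misleading and an empirical attack-based comparison is needed.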