Differential Privacy: Protecting Individuals from Re-Identification
Authors/Creators
- Pennsylvania State University
- Harvard University
Description
Slides from Privacy & Confidentiality Committee sponsored webinar presented on February 10, 2017.
Differential privacy is a mathematical framework for protecting privacy in statistical databases by bounding the disclosure risk to any individual included in a data set. In this webinar, two research experts explain the methodology and how they apply it to protect data files from the risk of re-identification. A key benefit of differential privacy is that, in many cases, appropriate protection can be achieved by adding properly calibrated random noise to the actual results. For example, rather than simply reporting a sum, the data provider can perturb it with noise drawn from a suitable distribution. How much noise to inject can be determined from knowledge of the function to be computed alone. The webinar covers the basic principles of differential privacy, how it works, and how it can be applied to current statistical databases.
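The noisy-sum example above is commonly realized with the Laplace mechanism: clamp each value to a known range, so that changing one record can shift the sum by at most the range width (the function's sensitivity), then add Laplace noise scaled to sensitivity divided by the privacy parameter epsilon. A minimal sketch, not from the slides (the function names and the choice of a bounded-replacement neighboring relation are assumptions for illustration):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_sum(values, lower, upper, epsilon):
    """Release a sum with epsilon-differential privacy via the Laplace mechanism.

    Each value is clamped to [lower, upper], so replacing one individual's
    record changes the true sum by at most (upper - lower) -- the sensitivity.
    The noise scale depends only on that sensitivity and epsilon, matching the
    point that the noise level can be set from the function alone.
    """
    sensitivity = upper - lower
    clamped = [min(max(v, lower), upper) for v in values]
    return sum(clamped) + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means stronger privacy but more noise; the released sum is unbiased, so averages over many independent releases concentrate around the true value.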
Files
- Differential Privacy Handouts.pdf (5.8 MB, md5:93dae95d64fa8d8bedaf0d1791c06c75)