Published August 9, 2023 | Version v1
Conference paper · Open Access

GAP: Differentially Private Graph Neural Networks with Aggregation Perturbation

  • 1. Idiap Research Institute, EPFL
  • 2. The Alan Turing Institute
  • 3. Inria

Description

In this paper, we study the problem of learning Graph Neural Networks (GNNs) with Differential Privacy (DP). We propose GAP, a novel differentially private GNN based on aggregation perturbation, which adds stochastic noise to the GNN's aggregation function to statistically obfuscate the presence of a single edge (edge-level privacy) or a single node and all its adjacent edges (node-level privacy). Tailored to the specifics of private learning, GAP's architecture is composed of three separate modules: (i) the encoder module, where we learn private node embeddings without relying on edge information; (ii) the aggregation module, where we compute noisy aggregated node embeddings based on the graph structure; and (iii) the classification module, where we train a neural network on the private aggregations for node classification without further querying the graph edges. GAP's major advantage over previous approaches is that it can benefit from multi-hop neighborhood aggregations and guarantees both edge-level and node-level DP not only during training but also at inference, with no additional cost beyond the training privacy budget. We analyze GAP's formal privacy guarantees using Rényi DP and conduct empirical experiments over three real-world graph datasets. We demonstrate that GAP offers significantly better accuracy-privacy trade-offs than state-of-the-art DP-GNN approaches and naive MLP-based baselines.

Files

GAP.pdf (764.1 kB)
md5: 2e44233deb3a9edd6ba19cac3593ca68

Additional details

Funding

AI4Media – A European Excellence Centre for Media, Society and Democracy (European Commission, grant no. 951911)