Published May 4, 2023 | Version v1
Software | Open Access

Certified Private Inference on Neural Networks via Lipschitz-Guided Abstraction Refinement

  • The University of Manchester

Description

Private inference on neural networks requires running all of the computation on encrypted data. Unfortunately, neural networks contain a large number of non-arithmetic operations, such as ReLU activation functions and max pooling layers, which incur a high latency cost in their encrypted form. To address this issue, most private inference methods replace some or all of the non-arithmetic operations with polynomial approximations. This step introduces approximation errors that can substantially alter the output of the neural network and degrade its predictive performance. In this paper, we propose a Lipschitz-Guided Abstraction Refinement method (LiGAR), which provides strong guarantees on the global approximation error. Our method is iterative, and leverages state-of-the-art Lipschitz constant estimation techniques to produce progressively tighter bounds on the worst-case error. At each iteration, LiGAR designs the least expensive polynomial approximation by solving the dual of the corresponding optimization problem. Our preliminary experiments show that LiGAR can easily converge to the optimum on medium-sized neural networks.
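To make the error-propagation idea concrete, the following is a minimal Python sketch, not the LiGAR implementation in this archive: it fits a least-squares polynomial to ReLU on a bounded interval and then propagates the resulting per-activation error through a toy MLP using per-layer Lipschitz constants (spectral norms). The network sizes, random weights, polynomial degree, and fitting interval are all illustrative assumptions, and the bound assumes every pre-activation stays inside the fitting interval.

import numpy as np

rng = np.random.default_rng(0)

def fit_relu_poly(degree, lo=-1.0, hi=1.0, n=2001):
    """Least-squares polynomial fit to ReLU on [lo, hi]; returns the
    coefficients and the empirical sup-norm error eps on the fitting grid."""
    xs = np.linspace(lo, hi, n)
    relu = np.maximum(xs, 0.0)
    coeffs = np.polyfit(xs, relu, degree)
    eps = np.max(np.abs(np.polyval(coeffs, xs) - relu))
    return coeffs, eps

# Toy 2-hidden-layer MLP with random weights (a stand-in for a trained model).
n_in, n1, n2, n_out = 4, 8, 8, 1
W1 = rng.standard_normal((n1, n_in)) / np.sqrt(n_in)
W2 = rng.standard_normal((n2, n1)) / np.sqrt(n1)
W3 = rng.standard_normal((n_out, n2)) / np.sqrt(n2)

def lip(W):
    """Lipschitz constant of x -> W @ x in the l2 norm (spectral norm)."""
    return np.linalg.norm(W, ord=2)

coeffs, eps = fit_relu_poly(degree=4)

# Worst-case output error, using that ReLU is 1-Lipschitz and that each
# approximated activation injects at most eps per coordinate:
#   e1    <= eps * sqrt(n1)                  (error injected at hidden layer 1)
#   e2    <= eps * sqrt(n2) + lip(W2) * e1   (injected plus propagated error)
#   e_out <= lip(W3) * e2                    (final linear layer amplification)
e1 = eps * np.sqrt(n1)
e2 = eps * np.sqrt(n2) + lip(W2) * e1
e_out = lip(W3) * e2
print(f"per-activation error eps = {eps:.4f}, global output bound = {e_out:.4f}")

In this simplified setting, tightening the bound means either raising the polynomial degree (shrinking eps) or obtaining sharper per-layer Lipschitz estimates; the paper's iterative refinement plays the latter role with state-of-the-art estimators rather than raw spectral norms.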

Files (5.6 MB)

Name: full_experiment_code.zip
Size: 5.6 MB
MD5: f5dafb48f720bd18c709c7f8727f2405

Additional details

Funding

UK Research and Innovation
EnnCore: End-to-End Conceptual Guarding of Neural Architectures (EP/T026995/1)