Published July 8, 2024 | Version v1
Conference paper · Open Access

A White-Box Watermarking Modulation for Encrypted DNN in Homomorphic Federated Learning

Description

Federated Learning (FL) is a distributed paradigm that enables multiple clients to collaboratively train a model without sharing their sensitive local data. In such a privacy-sensitive setting, Homomorphic Encryption (HE) plays an important role by enabling computations on encrypted data. This prevents the server from reverse-engineering the model updates during aggregation to infer private client data, which is a significant concern in scenarios like the healthcare industry where patient confidentiality is paramount. Despite these advancements, FL remains susceptible to intellectual property theft and model leakage due to malicious participants during the training phase. To counteract this, watermarking emerges as a solution for protecting the intellectual property rights of Deep Neural Networks (DNNs). However, traditional watermarking methods are not compatible with HE, primarily because they require the use of non-polynomial functions, which are not natively supported by HE. In this paper, we address these challenges by proposing the first white-box DNN watermarking modulation on a single homomorphically encrypted model. We then extend this modulation to a server-side FL context that complies with HE’s processing constraints. Our experimental results demonstrate that the performance of the proposed watermarking modulation is equivalent to that in the unencrypted domain.
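To make the HE constraint concrete: classic white-box watermarking (in the style of Uchida et al.) embeds bits into a layer's weights through a regularizer built on a sigmoid and cross-entropy, both non-polynomial and therefore awkward under HE. The sketch below is an illustrative stand-in, not the paper's actual modulation: it replaces that regularizer with a squared-error penalty between a secret linear projection of the weights and ±1 targets, which is a polynomial in the weights. All names and sizes (`n_weights`, `n_bits`, the learning rate) are hypothetical, and the "training" loop optimizes the regularizer alone for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: flattened weights of one chosen layer, a 32-bit mark.
n_weights, n_bits = 256, 32
w = rng.normal(0.0, 0.1, n_weights)        # stand-in for a DNN layer's weights
X = rng.normal(size=(n_bits, n_weights))   # secret projection matrix (watermark key)
b = rng.integers(0, 2, n_bits)             # watermark bits
t = 2.0 * b - 1.0                          # map {0,1} -> {-1,+1} targets

def reg_loss(w):
    # HE-friendly embedding regularizer: squared error between the linear
    # projection X @ w and the +/-1 targets -- a polynomial in w, unlike
    # the usual sigmoid + binary cross-entropy formulation.
    return np.mean((X @ w - t) ** 2)

# Toy "embedding": gradient descent on the regularizer alone.
for _ in range(200):
    grad = 2.0 * X.T @ (X @ w - t) / n_bits
    w -= 0.05 * grad

# Extraction (done in the clear): threshold the projection at zero.
extracted = (X @ w > 0).astype(int)
ber = np.mean(extracted != b)              # bit error rate, 0.0 when fully embedded
print(ber)
```

Since the penalty is quadratic in the weights, both the loss and its gradient can in principle be evaluated with only the additions and multiplications that HE schemes support natively; the non-polynomial thresholding step is confined to extraction, which happens after decryption.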

Files

127643.pdf (365.8 kB)
md5:84293d24be4538fa9e352921a3a66d48

Additional details

Funding

European Commission
PAROMA-MED – Privacy Aware and Privacy Preserving Distributed and Robust Machine Learning for Medical Applications (grant agreement 101070222)