Published November 3, 2024 | Version v1
Conference paper · Open Access

Unlearning Vision Transformers without Retaining Data via Low-Rank Decompositions

  • 1. Università degli Studi di Modena e Reggio Emilia
  • 2. University of Pisa
  • 3. Institute of Informatics and Telematics

Description

The implementation of data protection regulations such as the GDPR and the California Consumer Privacy Act has sparked growing interest in removing sensitive information from pre-trained models without retraining from scratch, while maintaining predictive performance on the remaining data. Recent studies on machine unlearning for deep neural networks have produced approaches that impose constraints on the training procedure, remain limited to small-scale architectures, and adapt poorly to real-world requirements. In this paper, we develop an approach to delete information about a class from a pre-trained model by injecting a trainable low-rank decomposition into the network parameters, without requiring access to the original training set. Our approach greatly reduces the number of parameters to train as well as the time and memory requirements. This permits straightforward application in real-life settings where the entire training set is unavailable and supports compliance with time-bound deletion requirements. We conduct experiments on various Vision Transformer architectures for class forgetting. Extensive empirical analyses demonstrate that our proposed method is efficient, safe to apply, and effective at removing learned information while maintaining accuracy.
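This record does not spell out the parameterization, but the core idea of injecting a trainable low-rank decomposition into a frozen pre-trained Vision Transformer can be illustrated roughly as below. This is a hypothetical, LoRA-style sketch in PyTorch: the LowRankAdapter class, the injection points (the MLP layers of each transformer block), the rank, and the uniform-prediction forgetting objective are all assumptions for illustration, not the authors' exact method.

import torch
import torch.nn as nn
import torch.nn.functional as F
import timm  # assumption: any pre-trained Vision Transformer checkpoint would do

class LowRankAdapter(nn.Module):
    """Adds a trainable low-rank update B @ A on top of a frozen linear layer (hypothetical sketch)."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the original pre-trained weights stay untouched
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero-init: no change at start

    def forward(self, x):
        # Frozen projection plus the trainable low-rank correction
        return self.base(x) + x @ (self.B @ self.A).t()

# Inject adapters into every transformer block; only the A and B matrices are trained.
model = timm.create_model("vit_base_patch16_224", pretrained=True)
for blk in model.blocks:
    blk.mlp.fc1 = LowRankAdapter(blk.mlp.fc1, rank=4)
    blk.mlp.fc2 = LowRankAdapter(blk.mlp.fc2, rank=4)

def forgetting_loss(logits: torch.Tensor) -> torch.Tensor:
    """Illustrative objective: push predictions on forget-class samples toward uniform."""
    uniform = torch.full_like(logits, 1.0 / logits.size(-1))
    return F.kl_div(F.log_softmax(logits, dim=-1), uniform, reduction="batchmean")

Optimizing only the adapter parameters against such a forgetting objective is what keeps the trainable parameter count, time, and memory low; the loss shown assumes some samples (or proxies) of the class to forget are available, and how the paper sources these without the original training set is not described in this record.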

Files

2024_ICPR_poppi.pdf (1.5 MB, md5:14e26a5910f01dd0b8fe82d3170567bb)

Additional details

Funding

European Commission
ELIAS – European Lighthouse of AI for Sustainability (Grant No. 101120237)