Published February 1, 2023 | Version 1.0
Conference paper | Open Access

PandA: Unsupervised Learning of Parts and Appearances in the Feature Maps of GANs

Affiliations

  • 1. Queen Mary University of London
  • 2. University of Athens
  • 3. The Cyprus Institute

Description

Recent advances in the understanding of Generative Adversarial Networks (GANs) have led to remarkable progress in visual editing and synthesis tasks, capitalizing on the rich semantics embedded in the latent spaces of pre-trained GANs. However, existing methods are often tailored to specific GAN architectures, and they either discover only global semantic directions that do not facilitate localized control or require supervision in the form of manually provided regions or segmentation masks. In this light, we present an architecture-agnostic approach that jointly discovers factors representing spatial parts and their appearances in an entirely unsupervised fashion. These factors are obtained by applying a semi-nonnegative tensor factorization to the feature maps, which in turn enables context-aware local image editing with pixel-level control. In addition, we show that the discovered appearance factors correspond to saliency maps that localize concepts of interest, without using any labels. Experiments on a wide range of GAN architectures and datasets show that, compared to the state of the art, our method is far more efficient in terms of training time and, most importantly, provides much more accurate localized control. Our code is available at https://github.com/james-oldfield/PandA.
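As a rough illustration of the factorization described above (not the authors' implementation; see the repository linked above for that), the sketch below applies the classical semi-NMF multiplicative updates of Ding et al. (2010) to a single flattened feature map in NumPy. The function name semi_nmf, the shapes, and the hyperparameters are assumptions made for this example; the paper's method is a tensor generalization of this idea, with the nonnegative factor acting as soft spatial "parts" masks and the unconstrained factor as "appearances".

    import numpy as np

    def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
        # Semi-NMF (illustrative sketch): X ~ F @ G.T with G >= 0, F unconstrained.
        # X: (d, n) real-valued matrix, e.g. a feature map flattened to
        #    (channels, H*W). Columns of G act as nonnegative spatial "parts"
        #    masks; columns of F act as "appearance" directions.
        rng = np.random.default_rng(seed)
        d, n = X.shape
        G = rng.random((n, k)) + eps  # nonnegative initialization

        pos = lambda A: (np.abs(A) + A) / 2  # elementwise positive part
        neg = lambda A: (np.abs(A) - A) / 2  # elementwise negative part

        for _ in range(n_iter):
            # F-update: closed-form least squares given G.
            F = X @ G @ np.linalg.pinv(G.T @ G)
            # G-update: multiplicative rule that preserves nonnegativity.
            XtF = X.T @ F
            FtF = F.T @ F
            G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                         (neg(XtF) + G @ pos(FtF) + eps))
        return F, G

For a generator feature map z of shape (C, H, W), one would call F, G = semi_nmf(z.reshape(C, -1), k) and reshape each column of G back to (H, W) to visualize the discovered parts; localized editing then amounts to manipulating the appearance factors F within a chosen part's spatial support.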

Files

panda-cr.pdf (14.6 MB)
md5:4e87f53249cd4d03cb588ed325f88e0c

Additional details

Funding

European Commission
AI4Media – A European Excellence Centre for Media, Society and Democracy (Grant agreement No. 951911)