Published July 18, 2021 | Version v1
Conference paper | Open Access

Whitening for Self-Supervised Representation Learning

  • 1. University of Trento, Italy

Description

Most current self-supervised representation learning (SSL) methods are based on the contrastive loss and the instance-discrimination task, where augmented versions of the same image instance (“positives”) are contrasted with instances extracted from other images (“negatives”). For the learning to be effective, many negatives should be compared with a positive pair, which is computationally demanding. In this paper, we propose a different direction and a new loss function for SSL, based on the whitening of the latent-space features. The whitening operation has a “scattering” effect on the batch samples, avoiding degenerate solutions in which all the sample representations collapse to a single point. Our solution does not require asymmetric networks and is conceptually simple. Moreover, since negatives are not needed, we can extract multiple positive pairs from the same image instance. The source code of the method and of all the experiments is available at: https://github.com/htdt/self-supervised.
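
The sketch below illustrates the idea of a whitening-based SSL loss as described above: embeddings of two augmented views are jointly whitened (zero mean, identity covariance) so the batch cannot collapse, then the mean squared error between normalized positive pairs is minimized. It is a minimal illustration written for this page, not the authors' implementation; the function names (`whiten`, `w_mse_loss`) and the Cholesky-based whitening are assumptions, and the reference code is in the linked repository.

```python
# Minimal PyTorch sketch of a whitening-based SSL loss in the spirit of the paper.
# Names and details are illustrative; see https://github.com/htdt/self-supervised
# for the authors' reference implementation.
import torch
import torch.nn.functional as F


def whiten(z: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Whiten a batch of embeddings: zero mean and (approximately) identity covariance."""
    z = z - z.mean(dim=0, keepdim=True)                        # center the batch
    cov = (z.T @ z) / (z.shape[0] - 1)                         # empirical covariance (D x D)
    cov = cov + eps * torch.eye(z.shape[1], device=z.device)   # numerical stability
    # Cholesky-based whitening: if cov = L L^T, then z @ L^{-T} has identity covariance.
    L = torch.linalg.cholesky(cov)
    return torch.linalg.solve_triangular(L, z.T, upper=False).T


def w_mse_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """MSE between whitened, L2-normalized embeddings of two positive views."""
    z = whiten(torch.cat([z1, z2], dim=0))                     # whiten both views jointly
    z = F.normalize(z, dim=1)                                  # project onto the unit sphere
    w1, w2 = z.chunk(2, dim=0)
    return (w1 - w2).pow(2).sum(dim=1).mean()                  # equals 2 - 2 * cosine similarity


# Usage: embeddings of two augmented views of the same batch of images.
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
loss = w_mse_loss(z1, z2)
```

Because whitening alone prevents collapse, no negatives are needed, so the same loss can be averaged over multiple positive pairs drawn from each image.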

Files

ermolov21a.pdf (1.5 MB)
md5:b158ac925ad46cf56a98f890322d0f86

Additional details

Funding

European Commission
AI4Media – A European Excellence Centre for Media, Society and Democracy (Grant 951911)