Conference paper Open Access

Whitening for Self-Supervised Representation Learning

Aleksandr Ermolov; Aliaksandr Siarohin; Enver Sangineto; Nicu Sebe

Most current self-supervised representation learning (SSL) methods are based on the contrastive loss and the instance-discrimination task, where augmented versions of the same image instance ("positives") are contrasted with instances extracted from other images ("negatives"). For the learning to be effective, many negatives should be compared with a positive pair, which is computationally demanding. In this paper, we propose a different direction and a new loss function for SSL, based on the whitening of the latent-space features. The whitening operation has a "scattering" effect on the batch samples, avoiding degenerate solutions where all the sample representations collapse to a single point. Our solution does not require asymmetric networks and is conceptually simple. Moreover, since negatives are not needed, we can extract multiple positive pairs from the same image instance. The source code of the method and of all the experiments is available at: https://github.com/htdt/self-supervised.
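To illustrate the idea described in the abstract, the sketch below shows a simplified whitening-based loss in NumPy: embeddings from two augmented views are jointly whitened (zero mean, identity covariance), and the loss is the mean squared distance between the whitened positives. This is a minimal illustration under my own assumptions, not the paper's implementation; the function names `whiten` and `wmse_loss` are hypothetical, and details such as the whitening method and normalization differ in the released code at the repository linked above.

```python
import numpy as np

def whiten(z, eps=1e-6):
    """ZCA-whiten a batch of embeddings: zero mean, ~identity covariance.

    Illustrative only; the paper's code may use a different whitening
    transform (e.g. Cholesky-based) and additional normalization.
    """
    z = z - z.mean(axis=0)
    cov = z.T @ z / (len(z) - 1)
    # inverse square root of the covariance via eigendecomposition
    vals, vecs = np.linalg.eigh(cov)
    w = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return z @ w

def wmse_loss(z1, z2):
    """MSE between whitened embeddings of two positive views.

    Whitening the joint batch scatters the samples, so the loss cannot
    be minimized by collapsing all representations to a single point.
    """
    v = whiten(np.concatenate([z1, z2], axis=0))
    n = len(z1)
    return np.mean(np.sum((v[:n] - v[n:]) ** 2, axis=1))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z1 = rng.normal(size=(64, 16))          # embeddings of one view
    z2 = z1 + 0.1 * rng.normal(size=(64, 16))  # perturbed "positives"
    print(wmse_loss(z1, z2))
```

Because the whitened batch has identity covariance by construction, the trivial "all points identical" solution is excluded, which is the degeneracy the abstract refers to.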

Files (1.5 MB): ermolov21a.pdf (md5:b158ac925ad46cf56a98f890322d0f86)
