
Published October 4, 2024 | Version v1
Conference paper | Open Access

Revisiting Supervision for Continual Representation Learning

  • 1. IDEAS NCBR
  • 2. Warsaw University of Technology
  • 3. Gdańsk University of Technology
  • 4. Universitat Autònoma de Barcelona
  • 5. Computer Vision Center

Description

In the field of continual learning, models are designed to learn tasks one after the other. While most research has centered on supervised continual learning, there is growing interest in unsupervised continual learning, which makes use of the vast amounts of unlabeled data. Recent studies have highlighted the strengths of unsupervised methods, particularly self-supervised learning, in providing robust representations. The improved transferability of representations built with self-supervised methods is often attributed to the role played by the multi-layer perceptron projector. In this work, we depart from this observation and reexamine the role of supervision in continual representation learning. We reckon that additional information, such as human annotations, should not deteriorate the quality of representations. Our findings show that supervised models, when enhanced with a multi-layer perceptron head, can outperform self-supervised models in continual representation learning. This highlights the importance of the multi-layer perceptron projector in shaping feature transferability across a sequence of tasks in continual learning.
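
For readers who want a concrete picture of the setup described above, here is a minimal PyTorch sketch of a supervised model with a multi-layer perceptron projector inserted between the backbone and the classifier. It is an illustration under stated assumptions, not the authors' released code: the class name SupervisedWithProjector, the layer sizes, and the training step are all hypothetical, and it assumes (as is common in the representation-learning literature) that transfer across tasks is evaluated on the backbone features rather than the projected ones.

```python
import torch
import torch.nn as nn

class SupervisedWithProjector(nn.Module):
    """Backbone -> MLP projector -> linear classifier (illustrative sketch)."""

    def __init__(self, backbone: nn.Module, feat_dim: int,
                 proj_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone  # e.g. a ResNet trunk with its fc layer removed
        # The multi-layer perceptron projector the description refers to.
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, proj_dim),
            nn.ReLU(inplace=True),
            nn.Linear(proj_dim, proj_dim),
        )
        self.classifier = nn.Linear(proj_dim, num_classes)

    def forward(self, x):
        h = self.backbone(x)           # backbone features, reused across tasks
        z = self.projector(h)          # projected features fed to the classifier
        return self.classifier(z), h   # logits for the loss, plus the features

def train_step(model, batch, optimizer, criterion=nn.CrossEntropyLoss()):
    # One supervised step on the current task: cross-entropy is applied
    # after the projector; the backbone output h is what would later be
    # probed for transfer to subsequent tasks.
    x, y = batch
    logits, _ = model(x)
    loss = criterion(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a stand-in backbone (shapes are illustrative only).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
model = SupervisedWithProjector(backbone, feat_dim=512, proj_dim=2048, num_classes=10)
logits, features = model(torch.randn(8, 3, 32, 32))
```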

Files

Revisiting supervision.pdf (1.0 MB)
md5:173625ae9acf00a23cf6a1213f8506cd

Additional details

Funding

ELIAS – European Lighthouse of AI for Sustainability (Grant 101120237)
European Commission
