Published May 15, 2022 | Version v1
Conference paper | Open Access

Computationally Efficient Rehearsal for Online Continual Learning

Description

Continual learning is a crucial ability for learning systems that must adapt to changing data distributions without degrading their performance on what they have already learned. Rehearsal methods offer a simple countermeasure against this catastrophic forgetting, which frequently occurs in dynamic settings and is a major limitation of machine learning models. These methods continuously train neural networks on a mix of data from the stream and from a rehearsal buffer that retains past training samples. Although the rehearsal approach is sensible and simple to implement, its effectiveness and efficiency are significantly affected by several hyperparameters, such as the number of training iterations performed at each step, the learning rate, and whether to retrain the model at each step. These choices are especially important in the resource-constrained environments commonly found in online continual learning for image analysis. This work evaluates several rehearsal training strategies for online continual learning and proposes the combined use of a drift detector that decides (a) when to train on data from the buffer and the online stream, and (b) how to train, based on a combination of heuristics. Experiments on the MNIST and CIFAR-10 image classification datasets demonstrate the effectiveness of the proposed approach over baseline training strategies at a fraction of the computational cost.
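To make the setup above concrete, the sketch below pairs a reservoir-sampled rehearsal buffer with a simple loss-based drift signal that gates when training occurs. This is an illustrative PyTorch sketch, not the paper's implementation: the ReservoirBuffer class, the loss-threshold drift test, and all hyperparameters (drift_threshold, n_iters, buffer capacity) are assumptions introduced here for exposition.

# Illustrative sketch only -- not the paper's method. The reservoir buffer,
# the loss-based drift test, and all hyperparameters are assumptions.
import random
import torch
import torch.nn.functional as F


class ReservoirBuffer:
    """Fixed-capacity rehearsal buffer filled by reservoir sampling."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: list[tuple[torch.Tensor, int]] = []
        self.seen = 0

    def add(self, x: torch.Tensor, y: int) -> None:
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            # Each stream sample is kept with probability capacity / seen.
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size: int) -> tuple[torch.Tensor, torch.Tensor]:
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.tensor(ys)


def online_step(model, optimizer, buffer, x_stream, y_stream,
                drift_threshold=1.5, n_iters=1):
    """One step of drift-gated rehearsal: always store the incoming batch,
    but run gradient updates only when the drift signal fires."""
    # Hypothetical drift test: treat a high loss on the incoming batch as
    # evidence that the data distribution has shifted.
    model.eval()
    with torch.no_grad():
        stream_loss = F.cross_entropy(model(x_stream), y_stream).item()

    for x, y in zip(x_stream, y_stream):
        buffer.add(x, int(y))

    if stream_loss > drift_threshold and buffer.data:
        model.train()
        for _ in range(n_iters):
            xb, yb = buffer.sample(len(x_stream))
            # Train on a mix of stream data and rehearsal data.
            x_mix = torch.cat([x_stream, xb])
            y_mix = torch.cat([y_stream, yb])
            optimizer.zero_grad()
            F.cross_entropy(model(x_mix), y_mix).backward()
            optimizer.step()

Gating the relatively expensive gradient updates on the drift signal, rather than retraining at every step, is the kind of choice the abstract points to as the source of the computational savings.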

Files

online_continual_learning.pdf (387.7 kB)
md5:36fd1ebe8ef6c9f5c89ea5df88161f05

Additional details

Funding

TEACHING – A computing toolkit for building efficient autonomous applications leveraging humanistic intelligence (Grant agreement No. 871385)
European Commission