Conference paper Open Access

Continual Learning for Automated Audio Captioning Using The Learning Without Forgetting Approach

Jan Berg; Konstantinos Drossos

Automated audio captioning (AAC) is the task of automatically creating textual descriptions (i.e. captions) for the contents of a general audio signal. Most AAC methods rely on existing datasets for optimization and/or evaluation. Given the limited information held by the AAC datasets, it is very likely that AAC methods learn only the information contained in the utilized datasets. In this paper we present a first approach for continuously adapting an AAC method to new information, using a continual learning method. In our scenario, a pre-optimized AAC method is applied to unseen general audio signals and can update its parameters in order to adapt to the new information, given a new reference caption. We evaluate our method using a freely available, pre-optimized AAC method and two freely available AAC datasets. We compare our proposed method against three scenarios: two in which the method is trained on one of the datasets and evaluated on the other, and a third in which it is trained on one dataset and then fine-tuned on the other. Obtained results show that our method achieves a good balance between distilling new knowledge and not forgetting the previously learned knowledge.
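The learning-without-forgetting idea behind this kind of update can be illustrated with a minimal sketch: the loss on a new reference caption is combined with a distillation term that keeps the updated model's outputs close to those of the frozen, pre-optimized model. The function names, the weighting factor `lam`, and the temperature `T` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax over the vocabulary axis.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, old_logits, target_ids, lam=1.0, T=2.0):
    """LWF-style loss for caption tokens (illustrative sketch).

    new_logits: (seq_len, vocab) outputs of the model being updated
    old_logits: (seq_len, vocab) outputs of the frozen pre-optimized model
    target_ids: (seq_len,) token ids of the new reference caption
    lam:        weight of the distillation (knowledge-retention) term
    T:          distillation temperature
    """
    # New-knowledge term: cross-entropy on the new reference caption.
    p_new = softmax(new_logits)
    ce = -np.mean(np.log(p_new[np.arange(len(target_ids)), target_ids] + 1e-12))
    # Retention term: cross-entropy between the frozen model's softened
    # outputs and the updated model's softened outputs.
    p_old = softmax(old_logits, T)
    log_p_new_T = np.log(softmax(new_logits, T) + 1e-12)
    distill = -np.mean(np.sum(p_old * log_p_new_T, axis=-1))
    return ce + lam * distill
```

Setting `lam=0` recovers plain fine-tuning on the new caption, while larger values of `lam` trade adaptation for retention of the previously learned behavior.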

The authors wish to acknowledge CSC-IT Center for Science, Finland, for computational resources. K. Drossos has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 957337, project MARVEL.
Files (304.6 kB): DCASE2021_Berg_et_al_continual_learning.pdf (md5:459252e10945e22955f4c30c240023c2)
