Pittaras, Nikiforos
Markatopoulou, Foteini
Mezaris, Vasileios
Patras, Ioannis
2016-12-31
<p>In this study we compare three fine-tuning strategies to investigate the best way to transfer the parameters of popular deep convolutional neural networks (DCNNs), trained for a visual annotation task on one dataset, to a new, considerably different dataset. We focus on the concept-based image/video annotation problem, using ImageNet as the source dataset and the TRECVID SIN 2013 and PASCAL VOC-2012 classification datasets as the target datasets. A large set of experiments examines the effectiveness of the three fine-tuning strategies for each of three pre-trained DCNNs and each target dataset. The reported results give rise to guidelines for effectively fine-tuning a DCNN for concept-based visual annotation.</p>
https://doi.org/10.1007/978-3-319-51811-4_9
oai:zenodo.org:240853
Zenodo
https://zenodo.org/communities/invid-h2020
https://zenodo.org/communities/eu
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Comparison of Fine-tuning and Extension Strategies for Deep Convolutional Neural Networks
info:eu-repo/semantics/conferencePaper