Comparison of Fine-tuning and Extension Strategies for Deep Convolutional Neural Networks
- 1. Information Technologies Institute (ITI), Centre for Research and Technology Hellas (CERTH), Thessaloniki, Greece
- 2. Queen Mary University of London, Mile End Campus, UK
Description
In this study we compare three fine-tuning strategies in order to investigate the best way to transfer the parameters of popular deep convolutional neural networks (DCNNs), trained for a visual annotation task on one dataset, to a new, considerably different dataset. We focus on the concept-based image/video annotation problem, using ImageNet as the source dataset and the TRECVID SIN 2013 and PASCAL VOC-2012 classification datasets as the target datasets. A large set of experiments examines the effectiveness of the three fine-tuning strategies on each of three pre-trained DCNNs and each target dataset. The reported results give rise to guidelines for effectively fine-tuning a DCNN for concept-based visual annotation.
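The three strategies compared in the paper can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's actual code: the tiny stand-in backbone, layer sizes, and function names are assumptions made only so the example is self-contained; in practice the backbone would be a pre-trained network such as one loaded from torchvision, and the extension strategy would insert one or more new fully-connected layers before the new classifier.

```python
import torch
import torch.nn as nn

def make_backbone():
    # Stand-in for a pre-trained DCNN feature extractor; a tiny conv
    # stack is used here only to keep the sketch self-contained.
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )

def strategy_replace_classifier(num_target_classes):
    # Strategy A: freeze the transferred layers and train only a new
    # classification layer sized for the target dataset.
    backbone = make_backbone()
    for p in backbone.parameters():
        p.requires_grad = False
    return nn.Sequential(backbone, nn.Linear(8, num_target_classes))

def strategy_full_finetune(num_target_classes):
    # Strategy B: re-train all transferred layers together with the
    # new classifier (typically with a reduced learning rate).
    backbone = make_backbone()
    return nn.Sequential(backbone, nn.Linear(8, num_target_classes))

def strategy_extend(num_target_classes):
    # Strategy C: extend the network with an extra fully-connected
    # layer between the transferred layers and the new classifier.
    backbone = make_backbone()
    for p in backbone.parameters():
        p.requires_grad = False
    return nn.Sequential(
        backbone, nn.Linear(8, 16), nn.ReLU(),
        nn.Linear(16, num_target_classes),
    )

# PASCAL VOC-2012 classification has 20 classes.
model = strategy_extend(num_target_classes=20)
out = model(torch.randn(2, 3, 32, 32))  # dummy mini-batch of 2 images
```

Each builder returns a network whose output dimension matches the target dataset's label space, so the same training loop can be reused across strategies; only which parameters have `requires_grad=True` differs.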
Files
- mmm17_1_preprint.pdf (255.0 kB, md5:a0fcfbb9f6eec5eb87e83b1dde238408)