240853
doi
10.1007/978-3-319-51811-4_9
oai:zenodo.org:240853
user-invid-h2020
user-eu
Markatopoulou, Foteini
Information Technologies Institute (ITI), Centre for Research and Technology Hellas (CERTH), Thessaloniki, Greece
Mezaris, Vasileios
Information Technologies Institute (ITI), Centre for Research and Technology Hellas (CERTH), Thessaloniki, Greece
Patras, Ioannis
Queen Mary University of London, Mile End Campus, London, UK
Comparison of Fine-tuning and Extension Strategies for Deep Convolutional Neural Networks
Pittaras, Nikiforos
Information Technologies Institute (ITI), Centre for Research and Technology Hellas (CERTH), Thessaloniki, Greece
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
In this study we compare three fine-tuning strategies to investigate the best way to transfer the parameters of popular deep convolutional neural networks (DCNNs), trained for a visual annotation task on one dataset, to a new, considerably different dataset. We focus on the concept-based image/video annotation problem and use ImageNet as the source dataset, while the TRECVID SIN 2013 and PASCAL VOC-2012 classification datasets serve as the target datasets. A large set of experiments examines the effectiveness of the three fine-tuning strategies on each of three pre-trained DCNNs and each target dataset. The reported results give rise to guidelines for effectively fine-tuning a DCNN for concept-based visual annotation.
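The parameter transfer the abstract describes (reusing layers of a network pre-trained on a source dataset such as ImageNet, while re-initialising the task-specific output layer for a target dataset with a different label set) can be sketched in framework-free Python. All names here (`make_layer`, `conv1`, `fc`, the layer sizes) are illustrative assumptions, not the paper's actual architecture or strategies:

```python
import random

def make_layer(n_in, n_out):
    # One weight matrix, randomly initialised (stand-in for a trained layer).
    return [[random.gauss(0.0, 0.01) for _ in range(n_in)] for _ in range(n_out)]

# Hypothetical "pre-trained" source network: shared feature layers plus an
# ImageNet-style 1000-way classification layer.
source_net = {
    "conv1": make_layer(9, 16),
    "fc":    make_layer(16, 1000),
}

def transfer(source, n_target_classes):
    # Copy every shared layer's weights from the source network, but
    # re-initialise the output layer to match the target label set --
    # the common starting point before any fine-tuning strategy is applied.
    target = {name: [row[:] for row in w]
              for name, w in source.items() if name != "fc"}
    target["fc"] = make_layer(16, n_target_classes)
    return target

# PASCAL VOC-2012 classification has 20 object classes.
target_net = transfer(source_net, 20)
```

The copied feature layers can then either be frozen or updated at a reduced learning rate during fine-tuning; comparing such choices is what the paper's three strategies address.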
Zenodo
2016-12-31
info:eu-repo/semantics/conferencePaper
728055
award_title=In Video Veritas – Verification of Social Media Video Content for the News Industry; award_number=687786; award_identifiers_scheme=url; award_identifiers_identifier=https://cordis.europa.eu/projects/687786; funder_id=00k4n6c32; funder_name=European Commission;
255006
md5:a0fcfbb9f6eec5eb87e83b1dde238408
https://zenodo.org/records/240853/files/mmm17_1_preprint.pdf
public