Conference paper Open Access

Comparison of Fine-tuning and Extension Strategies for Deep Convolutional Neural Networks

Pittaras, Nikiforos; Markatopoulou, Foteini; Mezaris, Vasileios; Patras, Ioannis

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Pittaras, Nikiforos</dc:creator>
<dc:creator>Markatopoulou, Foteini</dc:creator>
<dc:creator>Mezaris, Vasileios</dc:creator>
<dc:creator>Patras, Ioannis</dc:creator>
<dc:date>2016-12-31</dc:date>
<dc:description>In this study we compare three different fine-tuning strategies in order to investigate the best way to transfer the parameters of popular deep convolutional neural networks (DCNNs) that were trained for a visual annotation task on one dataset to a new, considerably different dataset. We focus on the concept-based image/video annotation problem and use ImageNet as the source dataset, while the TRECVID SIN 2013 and PASCAL VOC-2012 classification datasets are used as the target datasets. A large set of experiments examines the effectiveness of the three fine-tuning strategies on each of three different pre-trained DCNNs and each target dataset. The reported results give rise to guidelines for effectively fine-tuning a DCNN for concept-based visual annotation.</dc:description>
<dc:identifier>https://zenodo.org/record/240853</dc:identifier>
<dc:identifier>10.1007/978-3-319-51811-4_9</dc:identifier>
<dc:identifier>oai:zenodo.org:240853</dc:identifier>
<dc:relation>info:eu-repo/grantAgreement/EC/H2020/687786/</dc:relation>
<dc:relation>url:https://zenodo.org/communities/ecfunded</dc:relation>
<dc:relation>url:https://zenodo.org/communities/invid-h2020</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:title>Comparison of Fine-tuning and Extension Strategies for Deep Convolutional Neural Networks</dc:title>
<dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
<dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
