Conference paper | Embargoed Access

Training Deep Learning Models via Synthetic Data: Application in Unmanned Aerial Vehicles

Kamilaris Andreas; van den Brink Corjan; Karatsiolis Savvas


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.3523006</identifier>
  <creators>
    <creator>
      <creatorName>Kamilaris Andreas</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-8484-4256</nameIdentifier>
      <affiliation>Pervasive Systems Group, Department of Computer Science, University of Twente, The Netherlands; Research Centre on Interactive Media, Smart Systems and Emerging Technologies (RISE), Nicosia, Cyprus</affiliation>
    </creator>
    <creator>
      <creatorName>van den Brink Corjan</creatorName>
      <affiliation>Pervasive Systems Group, Department of Computer Science, University of Twente, The Netherlands</affiliation>
    </creator>
    <creator>
      <creatorName>Karatsiolis Savvas</creatorName>
      <affiliation>Department of Computer Science, University of Cyprus, Nicosia, Cyprus</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Training Deep Learning Models via Synthetic Data: Application in Unmanned Aerial Vehicles</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2019</publicationYear>
  <subjects>
    <subject>UAV</subject>
    <subject>Deep Learning</subject>
    <subject>Generative Data</subject>
    <subject>Aerial Imagery</subject>
  </subjects>
  <dates>
    <date dateType="Available">2020-10-30</date>
    <date dateType="Accepted">2019-10-30</date>
  </dates>
  <resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3523006</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3523005</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/rise-teaming-cyprus</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="http://creativecommons.org/licenses/by-nc-nd/4.0/legalcode">Creative Commons Attribution Non Commercial No Derivatives 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/embargoedAccess">Embargoed Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;This paper describes preliminary work in the recent promising approach of generating synthetic training data for facilitating the&lt;br&gt;
learning procedure of deep learning (DL) models, with a focus on aerial photos produced by unmanned aerial vehicles (UAV). The general concept and methodology are described, and preliminary results are presented, based on a classication problem of re identication in forests as well as a counting problem of estimating number of houses in urban areas. The proposed technique constitutes a new possibility for the DL community, especially related to UAV-based imagery analysis, with much potential, promising results, and unexplored ground for further research.&lt;/p&gt;</description>
    <description descriptionType="Other">This work has been partly supported by the project that has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 739578 (RISE – Call: H2020-WIDESPREAD-01-2016-2017-TeamingPhase2) and the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development.</description>
  </descriptions>
</resource>
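
Illustrative sketch: the abstract outlines the core idea of training a deep learning classifier on synthetically generated aerial images and then evaluating it on real UAV photos. The minimal PyTorch example below shows one way such a pipeline could look; the directory names (data/synthetic_train, data/real_test), the SmallCNN architecture, and all hyperparameters are assumptions made purely for illustration and do not reflect the authors' implementation.

# Sketch only: train a small CNN on synthetic aerial images, evaluate on real UAV photos.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# Hypothetical directory layout: one sub-folder per class (e.g. "fire" / "no_fire").
synthetic_train = datasets.ImageFolder("data/synthetic_train", transform=transform)
real_test = datasets.ImageFolder("data/real_test", transform=transform)
train_loader = DataLoader(synthetic_train, batch_size=32, shuffle=True)
test_loader = DataLoader(real_test, batch_size=32)

class SmallCNN(nn.Module):
    """A deliberately small classifier; any standard CNN could take its place."""
    def __init__(self, num_classes):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.fc = nn.Linear(32 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 128 -> 64
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 64 -> 32
        return self.fc(x.flatten(1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = SmallCNN(num_classes=len(synthetic_train.classes)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Train exclusively on synthetic data.
model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate on real imagery to measure how well the synthetically learned features transfer.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        correct += (model(images).argmax(1) == labels).sum().item()
        total += labels.size(0)
print(f"Accuracy on real images: {correct / total:.2%}")

The key point of the sketch is that no real images are used for training; the real test set only measures how well features learned from synthetic data transfer to real UAV imagery.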
Record statistics:

                    All versions    This version
Views               19              19
Downloads           8               8
Data volume         10.4 MB         10.4 MB
Unique views        17              17
Unique downloads    6               6
