Journal article Open Access

# Implicit and Explicit Regularization for Optical Flow Estimation

Karageorgos, Konstantinos; Dimou, Anastasios; Alvarez, Federico; Daras, Petros

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
<identifier identifierType="URL">https://zenodo.org/record/3941403</identifier>
<creators>
<creator>
<creatorName>Karageorgos, Konstantinos</creatorName>
<givenName>Konstantinos</givenName>
<familyName>Karageorgos</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-5426-447X</nameIdentifier>
<affiliation>Centre for Research and Technology Hellas</affiliation>
</creator>
<creator>
<creatorName>Dimou, Anastasios</creatorName>
<givenName>Anastasios</givenName>
<familyName>Dimou</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-2763-4217</nameIdentifier>
<affiliation>Centre for Research and Technology Hellas</affiliation>
</creator>
<creator>
<creatorName>Alvarez, Federico</creatorName>
<givenName>Federico</givenName>
<familyName>Alvarez</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-7400-9591</nameIdentifier>
</creator>
<creator>
<creatorName>Daras, Petros</creatorName>
<givenName>Petros</givenName>
<familyName>Daras</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-3814-6710</nameIdentifier>
<affiliation>Centre for Research and Technology Hellas</affiliation>
</creator>
</creators>
<titles>
<title>Implicit and Explicit Regularization for Optical Flow Estimation</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2020</publicationYear>
<dates>
<date dateType="Issued">2020-07-10</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="JournalArticle"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3941403</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.3390/s20143855</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/787061</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;In this paper, two novel and practical regularizing methods are proposed to improve existing neural network architectures for monocular optical flow estimation. The proposed methods aim to alleviate deficiencies of current methods, such as flow leakage across objects and motion consistency within rigid objects, by exploiting contextual information. More specifically, the first regularization method utilizes semantic information during the training process to explicitly regularize the produced optical flow field. The novelty of this method lies in the use of semantic segmentation masks to teach the network to implicitly identify the semantic edges of an object and better reason on the local motion flow. A novel loss function is introduced that takes into account the objects&amp;rsquo; boundaries as derived from the semantic segmentation mask to selectively penalize motion inconsistency within an object. The method is architecture agnostic and can be integrated into any neural network without modifying or adding complexity at inference. The second regularization method adds spatial awareness to the input data of the network in order to improve training stability and efficiency. The coordinates of each pixel are used as an additional feature, breaking the invariance properties of the neural network architecture. The additional features are shown to implicitly regularize the optical flow estimation enforcing a consistent flow, while improving both the performance and the convergence time. Finally, the combination of both regularization methods further improves the performance of existing cutting edge architectures in a complementary way, both quantitatively and qualitatively, on popular flow estimation benchmark datasets.&lt;/p&gt;</description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/787061/">787061</awardNumber>
<awardTitle>Advanced tools for fighting oNline Illegal TrAfficking</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
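The abstract's first method penalizes motion inconsistency within a semantic region. As a minimal illustration only (the paper's exact loss formulation may differ), the sketch below implements a hypothetical variance-style penalty: each pixel's flow is compared against the mean flow of its segmentation region, so a rigid object with uniform motion incurs zero cost. The function name `intra_object_consistency` and the toy flow/mask arrays are assumptions for this example, not the authors' code.

```python
import numpy as np

def intra_object_consistency(flow, seg):
    """Mean squared deviation of each pixel's flow from the mean flow
    of its semantic region (hypothetical variance-style penalty).

    flow: (H, W, 2) optical flow field
    seg:  (H, W) integer semantic segmentation mask
    """
    loss = 0.0
    n = 0
    for obj_id in np.unique(seg):
        region = seg == obj_id            # boolean mask of one object
        region_flow = flow[region]        # (num_pixels, 2)
        mean_flow = region_flow.mean(axis=0, keepdims=True)
        loss += ((region_flow - mean_flow) ** 2).sum()
        n += region_flow.shape[0]
    return loss / n

# Toy example: two regions, flow constant within each region -> zero penalty.
flow = np.zeros((4, 4, 2))
flow[:, 2:] = [1.0, 0.0]                  # right half moves right
seg = np.zeros((4, 4), dtype=int)
seg[:, 2:] = 1                            # two semantic regions
print(intra_object_consistency(flow, seg))
```

Because the penalty is computed per region rather than over the whole field, flow differences *across* object boundaries are not penalized, which matches the abstract's goal of reducing flow leakage between objects while enforcing consistency inside each one. The second method (appending normalized pixel coordinates as extra input channels) needs no loss change at all, which is why the two regularizers combine in a complementary way.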
