Conference paper Open Access

# Deep 3D Flow Features for Human Action Recognition

Psaltis, Athanasios; Papadopoulos, Georgios Th.; Daras, Petros

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
<identifier identifierType="URL">https://zenodo.org/record/2551020</identifier>
<creators>
<creator>
<creatorName>Psaltis, Athanasios</creatorName>
<givenName>Athanasios</givenName>
<familyName>Psaltis</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-6896-3124</nameIdentifier>
<affiliation>Centre for Research and Technology, Hellas (ITI-CERTH)</affiliation>
</creator>
<creator>
<creatorName>Papadopoulos, Georgios Th.</creatorName>
<givenName>Georgios Th.</givenName>
<familyName>Papadopoulos</familyName>
<affiliation>Centre for Research and Technology, Hellas (ITI-CERTH)</affiliation>
</creator>
<creator>
<creatorName>Daras, Petros</creatorName>
<givenName>Petros</givenName>
<familyName>Daras</familyName>
<affiliation>Centre for Research and Technology, Hellas (ITI-CERTH)</affiliation>
</creator>
</creators>
<titles>
<title>Deep 3D Flow Features for Human Action Recognition</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2018</publicationYear>
<subjects>
<subject>Action recognition</subject>
<subject>3D flow</subject>
<subject>Deep Learning</subject>
</subjects>
<dates>
<date dateType="Issued">2018-11-01</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/2551020</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1109/CBMI.2018.8516470</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="https://creativecommons.org/licenses/by-nc/4.0/legalcode">Creative Commons Attribution Non Commercial 4.0 International</rights>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;The present work investigates the use of 3D flow information for performing Deep Learning (DL)-based human action recognition. In general, 3D flow fields contain rich, fine-grained information about the motion dynamics of the observed human actions. Despite this great potential, however, 3D flow has not been widely used, mainly due to the challenges of modeling the flow information efficiently and of addressing the associated computational complexity. In this paper, different techniques are investigated for incorporating 3D flow information into DL action recognition schemes. In particular, a novel sequence modeling approach is introduced, which combines the advantageous characteristics of Convolutional Neural Networks (CNNs) for spatial correlation estimation with the increased temporal modeling capabilities of Long Short-Term Memory (LSTM) models. Additionally, an extended CNN-based deep flow model is proposed that extracts features from both the spatial and temporal domains by applying 3D convolutions, hence modeling the action dynamics within consecutive frames. Moreover, for compact and efficient 3D motion feature extraction, the combined use of CNNs with a &amp;#39;flow colorization&amp;#39; approach is adopted. The proposed methods significantly outperform similar DL and hand-crafted 3D flow approaches, and compare favorably with most skeleton-based techniques on the currently most challenging public dataset, namely NTU RGB+D.&lt;/p&gt;</description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/700367/">700367</awardNumber>
<awardTitle>Detecting and ANalysing TErrorist-related online contents and financing activities</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
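Since the export above is standard namespaced DataCite XML, its key fields can be extracted programmatically. Below is a minimal sketch using Python's standard-library `xml.etree.ElementTree`; the inline `RECORD` string is a trimmed copy of the record above (the full export would work the same way), not an official Zenodo or DataCite API response.

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the DataCite record above; the default namespace must be
# registered in the lookup dict for find()/findall() to match elements.
RECORD = """<resource xmlns="http://datacite.org/schema/kernel-4">
  <identifier identifierType="URL">https://zenodo.org/record/2551020</identifier>
  <titles><title>Deep 3D Flow Features for Human Action Recognition</title></titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2018</publicationYear>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.1109/CBMI.2018.8516470</relatedIdentifier>
  </relatedIdentifiers>
</resource>"""

NS = {"d": "http://datacite.org/schema/kernel-4"}
root = ET.fromstring(RECORD)

# XPath-style lookups, prefixed with the namespace alias defined in NS.
title = root.find("d:titles/d:title", NS).text
year = root.find("d:publicationYear", NS).text
doi = root.find("d:relatedIdentifiers/d:relatedIdentifier", NS).text

print(title)  # Deep 3D Flow Features for Human Action Recognition
print(year)   # 2018
print(doi)    # 10.1109/CBMI.2018.8516470
```

Note that because `kernel-4` is declared as the default namespace, unprefixed paths like `titles/title` would silently return `None`; the `NS` mapping is what makes the queries resolve.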