Conference paper Open Access

# Deep 3D Flow Features for Human Action Recognition

Psaltis, Athanasios; Papadopoulos, Georgios Th.; Daras, Petros

### Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
<dc:creator>Psaltis, Athanasios</dc:creator>
<dc:creator>Papadopoulos, Georgios Th.</dc:creator>
<dc:creator>Daras, Petros</dc:creator>
<dc:date>2018-11-01</dc:date>
<dc:description>The present work investigates the use of 3D flow information for Deep Learning (DL)-based human action recognition. In general, 3D flow fields contain rich, fine-grained information about the motion dynamics of the observed human actions. Despite this potential, however, 3D flow has not been widely used, mainly due to challenges related to efficiently modeling the flow information and addressing the respective computational complexity. In this paper, different techniques are investigated for incorporating 3D flow information in DL action recognition schemes. In particular, a novel sequence modeling approach is introduced, which combines the advantageous spatial-correlation estimation of Convolutional Neural Networks (CNNs) with the increased temporal modeling capabilities of Long Short-Term Memory (LSTM) models. Additionally, an extended CNN-based deep flow model is proposed that extracts features from both the spatial and temporal domains by applying 3D convolutions, hence modeling the action dynamics within consecutive frames. Moreover, for compact and efficient 3D motion feature extraction, the combined use of CNNs with a 'flow colorization' approach is adopted. The proposed methods significantly outperform similar DL and hand-crafted 3D flow approaches, and compare favorably with most skeleton-based techniques on the currently most challenging public dataset, namely NTU RGB+D.</dc:description>
<dc:identifier>https://zenodo.org/record/2551020</dc:identifier>
<dc:identifier>10.1109/CBMI.2018.8516470</dc:identifier>
<dc:identifier>oai:zenodo.org:2551020</dc:identifier>
<dc:language>eng</dc:language>
<dc:relation>info:eu-repo/grantAgreement/EC/H2020/700367/</dc:relation>
<dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
<dc:rights>https://creativecommons.org/licenses/by-nc/4.0/legalcode</dc:rights>
<dc:subject>Action recognition</dc:subject>
<dc:subject>3D flow</dc:subject>
<dc:subject>Deep Learning</dc:subject>
<dc:title>Deep 3D Flow Features for Human Action Recognition</dc:title>
<dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
<dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
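The abstract's "extended CNN-based deep flow model" applies 3D convolutions so that a single filter spans both the spatial axes and the temporal axis of a flow volume, capturing motion across consecutive frames. A minimal NumPy sketch of that core operation is shown below; the array shapes, filter, and function name are illustrative assumptions for intuition only, not the architecture from the paper.

```python
import numpy as np

def conv3d(volume, kernel):
    """Naive 'valid' 3D cross-correlation over a (T, H, W) volume.

    A single kernel slides jointly over time (T) and space (H, W),
    which is how a 3D convolution mixes information from several
    consecutive frames into one response (illustrative sketch).
    """
    t, h, w = volume.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Element-wise product over a (kt, kh, kw) window, then sum
                out[i, j, k] = np.sum(volume[i:i+kt, j:j+kh, k:k+kw] * kernel)
    return out

# Hypothetical example: an 8-frame clip of 16x16 flow magnitudes,
# filtered with a 3x3x3 spatio-temporal averaging kernel.
clip = np.random.default_rng(0).standard_normal((8, 16, 16))
kernel = np.ones((3, 3, 3)) / 27.0
features = conv3d(clip, kernel)
print(features.shape)  # (6, 14, 14)
```

Note how the output loses two frames along the temporal axis (8 → 6): each response already summarizes three consecutive frames, which is the property the paper exploits for modeling action dynamics. In practice such a layer would be implemented with a DL framework's batched 3D convolution rather than explicit loops.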
Views: 64 · Downloads: 66 · Data volume: 38.8 MB · Unique views: 48 · Unique downloads: 60