Conference paper Open Access

Global Flow and Temporal-shape Descriptors for Human Action Recognition from 3D Reconstruction Data

Papadopoulos, Georgios Th.; Daras, Petros

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Papadopoulos, Georgios Th.</dc:creator>
  <dc:creator>Daras, Petros</dc:creator>
  <dc:description>In this paper, global-level view-invariant descriptors for human action recognition using 3D reconstruction data are proposed. 3D reconstruction techniques are employed to address two of the most challenging issues in human action recognition in the general case, namely view-variance and the presence of (self-)occlusions. Initially, a set of calibrated Kinect sensors is employed to produce a 3D reconstruction of the performing subjects. Subsequently, a 3D flow field is estimated for every captured frame. For performing action recognition, a novel global 3D flow descriptor is introduced, which compactly and efficiently encodes the global motion characteristics, while also incorporating information about their spatial distribution. Additionally, a new global temporal-shape descriptor is proposed, which extends the notion of 3D shape descriptions to action recognition by incorporating temporal information. The latter descriptor efficiently addresses the inherent problems of temporal alignment and compact representation, while also being robust in the presence of noise. Experimental results on public datasets demonstrate the efficiency of the proposed approach.</dc:description>
  <dc:subject>Action recognition</dc:subject>
  <dc:subject>3D reconstruction</dc:subject>
  <dc:subject>3D flow</dc:subject>
  <dc:subject>3D shape</dc:subject>
  <dc:title>Global Flow and Temporal-shape Descriptors for Human Action Recognition from 3D Reconstruction Data</dc:title>
                  All versions   This version
Views             99             99
Downloads         49             49
Data volume       53.6 MB        53.6 MB
Unique views      97             97
Unique downloads  48             48

