
Published February 3, 2021 | Version v1
Journal article | Open Access

A Novel Graph Representation for Skeleton-based Action Recognition

Creators

Description

Graph convolutional networks (GCNs) have proven effective for processing structured data, as they capture the features of related nodes and improve model performance. Increasing attention has therefore been paid to employing GCNs for skeleton-based action recognition, but existing GCN-based methods face several challenges. First, the consistency of temporal and spatial features is ignored because features are extracted node by node and frame by frame. We design a generic representation of skeleton sequences for action recognition and propose a novel model called Temporal Graph Networks (TGN), which obtains spatiotemporal features simultaneously. Second, the adjacency matrix describing the relations between joints mostly depends on the physical connections between joints. We propose a multi-scale graph strategy to appropriately describe the relations between joints in the skeleton graph, adopting a full-scale graph, a part-scale graph and a core-scale graph to capture the local features of each joint and the contour features of important joints. Extensive experiments conducted on two large datasets, NTU RGB+D and Kinetics Skeleton, show that TGN with our graph strategy outperforms other state-of-the-art methods.
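
As an illustration of the multi-scale graph strategy described above, the sketch below shows one possible way to build full-scale, part-scale and core-scale adjacency matrices for the 25-joint NTU RGB+D skeleton. It is a minimal sketch, not the authors' implementation: the body-part grouping (PARTS), the choice of core joints (CORE) and the symmetric normalization are assumptions, since the abstract does not specify how the three graphs are constructed.

```python
# Illustrative sketch only: three adjacency matrices at different scales for the
# 25-joint NTU RGB+D skeleton. Joint groupings and normalization are assumptions.
import numpy as np

NUM_JOINTS = 25  # NTU RGB+D skeleton

# Physical bone connections of the NTU RGB+D skeleton (joints are 1-indexed).
BONES = [(1, 2), (2, 21), (3, 21), (4, 3), (5, 21), (6, 5), (7, 6), (8, 7),
         (9, 21), (10, 9), (11, 10), (12, 11), (13, 1), (14, 13), (15, 14),
         (16, 15), (17, 1), (18, 17), (19, 18), (20, 19), (22, 23), (23, 8),
         (24, 25), (25, 12)]

def normalize(adj: np.ndarray) -> np.ndarray:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2, as commonly used with GCNs."""
    adj = adj + np.eye(adj.shape[0])
    deg = np.power(adj.sum(axis=1), -0.5)
    return adj * deg[:, None] * deg[None, :]

def full_scale_graph() -> np.ndarray:
    """Full-scale graph: every joint, connected along the physical bones."""
    adj = np.zeros((NUM_JOINTS, NUM_JOINTS))
    for i, j in BONES:
        adj[i - 1, j - 1] = adj[j - 1, i - 1] = 1.0
    return normalize(adj)

# Hypothetical body-part grouping (torso, head, arms, legs); the paper may group
# joints differently.
PARTS = [[1, 2, 21], [3, 4], [5, 6, 7, 8, 22, 23], [9, 10, 11, 12, 24, 25],
         [13, 14, 15, 16], [17, 18, 19, 20]]

def part_scale_graph() -> np.ndarray:
    """Part-scale graph: joints within the same body part are fully connected."""
    adj = np.zeros((NUM_JOINTS, NUM_JOINTS))
    for part in PARTS:
        for i in part:
            for j in part:
                if i != j:
                    adj[i - 1, j - 1] = 1.0
    return normalize(adj)

# Hypothetical set of "important" core joints (spine, shoulders, hips).
CORE = [1, 2, 21, 5, 9, 13, 17]

def core_scale_graph() -> np.ndarray:
    """Core-scale graph: only the core joints are connected to one another."""
    adj = np.zeros((NUM_JOINTS, NUM_JOINTS))
    for i in CORE:
        for j in CORE:
            if i != j:
                adj[i - 1, j - 1] = 1.0
    return normalize(adj)

if __name__ == "__main__":
    for name, A in [("full", full_scale_graph()),
                    ("part", part_scale_graph()),
                    ("core", core_scale_graph())]:
        print(name, A.shape, "nonzero entries:", np.count_nonzero(A))
```

In a setup like this, each scale's normalized adjacency matrix would drive its own graph convolution branch, so that fine-grained joint relations and coarser part- and core-level structure contribute separate feature streams.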

Files (877.8 kB)

11620sipij05.pdf (877.8 kB, md5:0789538830e4f8b9808d0b8651033f5a)