Conference paper Open Access
Panagiotis Giannakeris; Georgios Meditskos; Konstantinos Avgerinakis; Stefanos Vrochidis; Ioannis Kompatsiaris
Recognition of daily actions is an essential part of Ambient Assisted Living (AAL) applications and is still not fully solved. In this work, we propose a novel framework for the recognition of actions of daily living from depth videos. The framework is based on low-level human pose movement descriptors extracted from 3D joint trajectories, as well as differential values that encode speed and acceleration information. The joints are detected using a depth sensor. The low-level descriptors are then aggregated into discriminative high-level action representations by modeling prototype pose movements with Gaussian Mixtures and applying a Fisher encoding scheme. The resulting Fisher vectors are suitable for training Linear SVM classifiers to recognize actions in pre-segmented video clips, alleviating the need for additional parameter search with non-linear kernels or neural network tuning. Experimental evaluation on two well-known RGB-D action datasets reveals that the proposed framework achieves close to state-of-the-art performance whilst maintaining high processing speeds.
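The encoding pipeline described above (low-level descriptors → diagonal-covariance GMM → Fisher vectors → Linear SVM) can be sketched as follows. This is a minimal illustration of standard Fisher vector encoding, not the authors' implementation: the descriptor dimensions, number of Gaussian components, and the random toy data are all placeholder assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Encode a clip's low-level descriptors (T x D) as a Fisher vector:
    gradients of a diagonal-covariance GMM w.r.t. means and std. deviations."""
    T, D = descriptors.shape
    gamma = gmm.predict_proba(descriptors)        # (T, K) soft assignments
    mu, sigma2, w = gmm.means_, gmm.covariances_, gmm.weights_
    sigma = np.sqrt(sigma2)                       # (K, D) std. deviations
    parts = []
    for k in range(gmm.n_components):
        diff = (descriptors - mu[k]) / sigma[k]   # whitened residuals
        g = gamma[:, k][:, None]
        # Gradients w.r.t. mean and sigma of component k (normalized by T)
        G_mu = (g * diff).sum(axis=0) / (T * np.sqrt(w[k]))
        G_sig = (g * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * w[k]))
        parts.extend([G_mu, G_sig])
    fv = np.concatenate(parts)                    # length 2 * K * D
    fv = np.sign(fv) * np.sqrt(np.abs(fv))        # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)      # L2 normalization

# Toy stand-in for pose-movement descriptors (10-dim, hypothetical)
rng = np.random.default_rng(0)
train_descs = rng.normal(size=(500, 10))
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(train_descs)

clip_descs = rng.normal(size=(60, 10))            # one pre-segmented clip
fv = fisher_vector(clip_descs, gmm)               # 2 * 4 * 10 = 80 dims
```

One Fisher vector per pre-segmented clip can then be fed to a linear classifier (e.g. `sklearn.svm.LinearSVC`), which, as the abstract notes, avoids kernel parameter search.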
Name | Size
---|---
giannakeris2019mmm.pdf (md5:6b16f7207a9a68535f0a2d85ed41f88d) | 245.4 kB
 | All versions | This version
---|---|---
Views | 60 | 60
Downloads | 71 | 71
Data volume | 17.4 MB | 17.4 MB
Unique views | 53 | 53
Unique downloads | 64 | 64