Published November 5, 2020 | Version v1
Conference paper | Open Access

Fusion of Multimodal Sensor Data for Effective Human Action Recognition in the Service of a Medicine-Oriented Platform

Description

In what has arguably been one of the most troubling periods of recent medical history, with a global pandemic emphasising the importance of staying healthy, innovative tools that safeguard patient well-being gain momentum. To that end, we propose a framework that leverages multimodal data, namely inertial and depth sensor data, is integrated into a healthcare-oriented platform, and tackles the crucial issue of detecting patient actions, such as walking, standing and jogging, or even patient falls. To analyse a person's movement and consequently assess the patient's condition, we present a two-fold methodology: first, a Kinect-based approach that exploits 3DHOG depth features and the descriptive power of a Fisher encoding scheme; this is complemented by wearable-sensor data analysis using time-domain features and a robust fusion strategy that yields an effective and reliable recognition pipeline. The classification accuracy reported on a well-known benchmark dataset shows that the presented approach achieves competitive results and validates the applicability and efficiency of our human action recognition (HAR) methodology.
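For readers who want a concrete picture of how such a pipeline can be assembled, the sketch below (in Python, using NumPy and scikit-learn) illustrates one possible arrangement of the components named above: local 3DHOG-style depth descriptors encoded as Fisher vectors, time-domain statistics computed from an inertial stream, and a simple score-level fusion of two per-modality classifiers. All names, feature dimensions, the toy data, and the averaging fusion rule are illustrative assumptions, not the paper's exact implementation.

    # Hypothetical sketch of a late-fusion HAR pipeline; not the authors' code.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def fisher_vector(descriptors, gmm):
        """Encode a set of local descriptors (e.g. 3DHOG) as a Fisher vector
        from the mean and variance gradients of a diagonal-covariance GMM."""
        q = gmm.predict_proba(descriptors)               # (N, K) soft assignments
        n = descriptors.shape[0]
        mu, sigma, w = gmm.means_, np.sqrt(gmm.covariances_), gmm.weights_
        diff = (descriptors[:, None, :] - mu) / sigma    # (N, K, D) normalised residuals
        g_mu = (q[:, :, None] * diff).sum(0) / (n * np.sqrt(w)[:, None])
        g_sig = (q[:, :, None] * (diff**2 - 1)).sum(0) / (n * np.sqrt(2 * w)[:, None])
        fv = np.hstack([g_mu.ravel(), g_sig.ravel()])
        fv = np.sign(fv) * np.sqrt(np.abs(fv))           # power normalisation
        return fv / (np.linalg.norm(fv) + 1e-12)         # L2 normalisation

    def time_domain_features(window):
        """Per-axis time-domain statistics over a window of inertial samples."""
        return np.hstack([window.mean(0), window.std(0),
                          np.abs(np.diff(window, axis=0)).mean(0)])

    # Toy data standing in for real depth descriptors and accelerometer windows.
    rng = np.random.default_rng(0)
    n_clips, n_classes = 40, 4
    labels = rng.integers(0, n_classes, n_clips)
    hog_clips = [rng.normal(size=(200, 32)) + labels[i] for i in range(n_clips)]
    acc_clips = [rng.normal(size=(128, 3)) + labels[i] for i in range(n_clips)]

    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          random_state=0).fit(np.vstack(hog_clips))
    X_depth = np.array([fisher_vector(c, gmm) for c in hog_clips])
    X_inert = np.array([time_domain_features(c) for c in acc_clips])

    depth_clf = SVC(probability=True).fit(X_depth, labels)
    inert_clf = SVC(probability=True).fit(X_inert, labels)

    # Score-level fusion: average the class posteriors of the two modalities.
    fused = 0.5 * depth_clf.predict_proba(X_depth) + 0.5 * inert_clf.predict_proba(X_inert)
    print("fused predictions:", fused.argmax(1)[:10])

Averaging posteriors is only one plausible fusion choice; weighted combinations or decision-level voting fit the same structure by replacing the final two lines.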

Notes

This research has been financed by the European Regional Development Fund of the European Union and Greek national funds through the Operational Program Competitiveness, Entrepreneurship and Innovation, under the call RESEARCH - CREATE - INNOVATE (T1EDK-00686).

Files

mmm21giannakeris.pdf (251.3 kB, md5:a78df6673a1858edafb2cbfab082f06f)