Paving the Way Towards Kinematic Assessment Using Monocular Video: A Benchmark of State-of-the-Art Deep-Learning-Based 3D Human Pose Estimators Against Inertial Sensors in Daily Living Activities
Creators
Description
Advances in machine learning and wearable sensors offer new opportunities for capturing and analyzing human movement outside specialized laboratories. Accurately tracking and evaluating human movement under real-world conditions is essential for telemedicine, sports science, and rehabilitation. This work introduces a comprehensive benchmark comparing deep-learning-based monocular video human pose estimation models with inertial measurement unit (IMU)-driven methods, leveraging the VIDIMU dataset, which contains 13 clinically relevant activities captured with both commodity video cameras and five IMUs. Joint angles derived from state-of-the-art deep learning frameworks (MotionAGFormer, MotionBERT, MMPose 2D-to-3D pose lifting, and NVIDIA BodyTrack, included in Maxine-AR-SDK) were evaluated against joint angles computed from the IMU data using OpenSim inverse kinematics. A graphical comparison of the angles estimated by each model shows the overall performance for each activity.
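As a rough illustration of how the two angle streams could be brought onto a common time base before comparison, the sketch below interpolates a video-derived joint-angle series onto the IMU time stamps. File names, column labels, and the assumption of a shared clock are illustrative only and do not reflect the actual layout of the released analysis files.

```python
import numpy as np
import pandas as pd
from scipy.interpolate import interp1d

def resample_to_reference(t_src, angles_src, t_ref):
    """Linearly interpolate a joint-angle series onto a reference time base."""
    f = interp1d(t_src, angles_src, axis=0, bounds_error=False, fill_value="extrapolate")
    return f(t_ref)

# Hypothetical per-activity files: one from a video-based estimator,
# one from OpenSim inverse kinematics on the IMU recordings.
video = pd.read_csv("sub01_act03_motionbert_angles.csv")   # columns: time, right_knee_flexion, ...
imu   = pd.read_csv("sub01_act03_opensim_ik_angles.csv")   # columns: time, right_knee_flexion, ...

# Interpolate the video-derived angles onto the IMU time stamps
# (assumes the two streams were previously synchronized to a common clock).
aligned_video = resample_to_reference(video["time"].to_numpy(),
                                      video["right_knee_flexion"].to_numpy(),
                                      imu["time"].to_numpy())
imu_angle = imu["right_knee_flexion"].to_numpy()
```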
The results, which also include multiple evaluation metrics (RMSE, NRMSE, MAE, correlation, and coefficient of determination) in table and plot format, highlight key trade-offs between video- and sensor-based approaches, including cost, accessibility, and precision across different daily living activities. This work establishes valuable guidelines for researchers and clinicians seeking to develop robust, cost-effective, and user-friendly solutions for telehealth and remote patient monitoring, ultimately bridging the gap between AI-driven motion capture and accessible healthcare applications.
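For reference, a minimal sketch of how the reported metrics relate to an estimated and a reference joint-angle series is given below. It continues the illustrative variable names from the previous snippet and is not taken from the released analysis scripts.

```python
import numpy as np
from scipy.stats import pearsonr

def angle_metrics(estimated, reference):
    """Error and agreement metrics between an estimated and a reference joint-angle series (degrees)."""
    err = estimated - reference
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / (reference.max() - reference.min())   # normalized by the reference range of motion
    mae = np.mean(np.abs(err))
    r, _ = pearsonr(estimated, reference)                # Pearson correlation
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                           # coefficient of determination
    return {"RMSE": rmse, "NRMSE": nrmse, "MAE": mae, "r": r, "R2": r2}

# Example usage with the aligned series from the previous sketch:
# metrics = angle_metrics(aligned_video, imu_angle)
```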
Files
analysis.zip
Additional details
Related works
- Is derived from
- Dataset: 10.5281/zenodo.7681316 (DOI)
- Journal article: arXiv:2303.16150 (arXiv)
- Journal article: 10.1038/s41597-023-02554-9 (DOI)
Dates
- Submitted: 2025-03-26