UMONS-TAICHI: A multimodal motion capture dataset of expertise in Taijiquan gestures
- 1. Numediart Institute, University of Mons, Belgium
- 2. University of Nice Sophia Antipolis, Nice, France
Description
Presentation
UMONS-TAICHI is a large 3D motion capture dataset of Taijiquan martial art gestures (n = 2200 samples). It covers 13 classes (corresponding to Taijiquan techniques) executed by 12 participants of various skill levels, rated by three experts on a scale from 0 to 10. The dataset was captured with two motion capture systems simultaneously: 1) Qualisys, a sophisticated motion capture system of 11 Oqus cameras that tracked 68 retroreflective markers at 179 Hz, and 2) Microsoft Kinect V2, a low-cost markerless sensor that tracked 25 locations of a person's skeleton at 30 Hz. Data from both systems were synchronized manually. Qualisys data were manually corrected, then processed to recover missing (occluded) data, and manually annotated for segmentation. Both segmented and unsegmented data are provided in this database.

The data were initially recorded for gesture recognition and skill evaluation, but they are also suited to research on motion synthesis, segmentation, multi-sensor data comparison and fusion, sports science, and human motion in general. A preliminary analysis of part of the dataset, extracting morphology-independent motion features for gesture skill evaluation, was conducted by Tits et al. (2017) and presented in "Morphology Independent Feature Engineering in Motion Capture Database for Gesture Evaluation" (https://doi.org/10.1145/3077981.3078037).
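Since the two streams run at different rates, any joint analysis needs a frame correspondence between them. Below is a minimal sketch of one way to build it, by nearest-neighbour matching on timestamps; this is our own illustration with hypothetical timestamp arrays, not the procedure used to synchronize the dataset:

```python
# Minimal sketch (not part of the dataset tooling): pair each 30 Hz
# Kinect frame with its nearest 179 Hz Qualisys frame by timestamp.
import numpy as np

def nearest_qualisys_frames(kinect_ts, qualisys_ts):
    """Return, for each Kinect timestamp, the index of the nearest
    Qualisys frame. Both arrays are sorted timestamps in milliseconds."""
    idx = np.searchsorted(qualisys_ts, kinect_ts)
    idx = np.clip(idx, 1, len(qualisys_ts) - 1)
    left = qualisys_ts[idx - 1]
    right = qualisys_ts[idx]
    # Step back one index wherever the left neighbour is closer.
    idx -= (kinect_ts - left) < (right - kinect_ts)
    return idx

# Hypothetical example: 10 seconds at each system's frame rate.
kinect_ts = np.arange(0, 10_000, 1000 / 30)     # 30 Hz
qualisys_ts = np.arange(0, 10_000, 1000 / 179)  # 179 Hz
idx = nearest_qualisys_frames(kinect_ts, qualisys_ts)
```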
Processing
Qualisys
Qualisys data were processed manually with Qualisys Track Manager.
Missing data (occluded markers) were then recovered with an automatic recovery method, MocapRecovery.
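As a point of reference for what gap-filling involves, the sketch below patches short occlusions in a single marker trajectory by plain linear interpolation. This naive baseline is ours for illustration only; the dataset itself was recovered with MocapRecovery, which exploits the positions of the surrounding markers and handles much longer occlusions:

```python
# Naive gap-filling sketch, for illustration only (the dataset was
# recovered with MocapRecovery, a more robust method).
import numpy as np

def fill_gaps_linear(traj):
    """traj: (n_frames, 3) array of one marker's positions, with NaN
    rows where the marker was occluded. Returns an interpolated copy."""
    traj = traj.copy()
    frames = np.arange(len(traj))
    for axis in range(traj.shape[1]):
        vals = traj[:, axis]
        missing = np.isnan(vals)
        if missing.any() and not missing.all():
            # np.interp holds the first/last known value at the edges.
            traj[missing, axis] = np.interp(
                frames[missing], frames[~missing], vals[~missing])
    return traj
```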
Data were annotated for gesture segmentation using the MotionMachine framework (a C++ openFrameworks addon); the code for annotation can be found in the project repository (linked below). Annotations were saved as ".lab" files (see the Files section).
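Once boundaries are annotated, cutting a recording into labelled clips is straightforward. The sketch below assumes each annotation boils down to a (start frame, end frame, label) triple; that is our simplification for illustration, so check the repository for the exact ".lab" syntax:

```python
# Sketch of applying segmentation annotations to per-frame data.
# The (start, end, label) representation is an assumption made for
# this example; see the repository for the actual ".lab" format.
def cut_segments(frames, segments):
    """frames: indexable per-frame data; segments: iterable of
    (start, end, label) with end exclusive. Yields (label, clip)."""
    for start, end, label in segments:
        yield label, frames[start:end]

# Hypothetical usage (frame indices and class names are invented):
# for label, clip in cut_segments(motion, [(0, 350, "class_01"),
#                                          (350, 720, "class_02")]):
#     process(label, clip)
```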
Kinect
The Kinect data were recorded with Kinect Studio. Skeleton data were then extracted with the Kinect SDK and saved as ".txt" files, one line per captured frame. Each line contains an integer timestamp (in milliseconds) marking when the frame was captured, followed by 3 × 25 floating-point numbers giving the 3-dimensional locations of the 25 body joints.
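Given that layout, the files can be read with a few lines of code. The sketch below follows the description above; the function name and the skipping of malformed lines are our own choices, and MotionMachine users should prefer the parser provided in the repository:

```python
# Reader sketch for the Kinect ".txt" layout described above:
# one line per frame, an integer timestamp in milliseconds followed
# by 3 x 25 floats (x, y, z for each of the 25 joints).
import numpy as np

def read_kinect_txt(path):
    """Return (timestamps_ms, joints) with joints of shape
    (n_frames, 25, 3)."""
    timestamps, frames = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 1 + 3 * 25:
                continue  # skip empty or malformed lines
            timestamps.append(int(parts[0]))
            frames.append(np.array(parts[1:], dtype=float).reshape(25, 3))
    return np.array(timestamps), np.array(frames)
```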
For more information, please visit https://github.com/numediart/UMONS-TAICHI.
Note: all files can be used with the MotionMachine framework. For the Kinect (".txt") data, please use the parser provided in that GitHub repository.
Files (15.8 GB)

| Name | Size | MD5 checksum |
|---|---|---|
| C3D.zip | 3.9 GB | md5:2f0cef601af89838dc1ca3a7c2c853eb |
|  | 216.9 MB | md5:d2802095beec3736ecfb35ab4eac70e8 |
|  | 42.5 kB | md5:35de96a035c9d3b41619ac1f9932f946 |
|  | 2.7 GB | md5:6b2746280984014fb18a59f38d0a95d2 |
|  | 153.0 MB | md5:f783bbe1c8c19dbebb9ca3dbc2265f83 |
|  | 3.6 GB | md5:03969cc686f0a750437069c116ff4e21 |
|  | 5.2 GB | md5:904272e5599f51cb473ce8396c3bc4df |