A Trimodal Dataset: RGB, Thermal, and Depth for Human Segmentation and Action Recognition
Description
Computer vision research and popular datasets rely predominantly on the RGB modality. However, RGB datasets struggle under poor lighting conditions and raise privacy concerns. Integrating thermal and depth data, or substituting them for RGB, offers a more robust and privacy-preserving alternative. We present a public trimodal dataset comprising registered sequences of RGB, depth, and thermal data. The dataset encompasses 10 unique environments, 18 camera angles, 101 shots, and 15,618 frames, which include human masks for semantic segmentation and dense labels for action classification and scene understanding. We discuss the system setup, including sensor configuration and calibration, as well as the process of generating ground-truth annotations. In addition, we conduct a quality analysis of the proposed dataset and provide benchmark models as reference points for human segmentation and action recognition. Using only the thermal and depth modalities, these models yield improvements in both human segmentation and action classification.
Files
(10.5 GB)

Name | Size | MD5
---|---|---
tristar.zip | 10.5 GB | 3cd57569118a7734904c4e34b43538f0