HuTwin Dataset
Description
The HuTwin (Human Cyber Twin) dataset is a multimodal collection capturing human behavioral dynamics during collaborative virtual reality (VR) interactions.
The dataset comprises synchronized multimodal recordings from 40 participants (organized into 20 collaborative pairs) engaged in a collaborative VR cooking-game task. Each participant completed three distinct experimental sessions under varying conditions, yielding a total of 120 recordings. The dataset integrates four primary data modalities, grouped into (1) objective behavioral measurements: body motion capture, facial expression tracking, and speech acoustic features; and (2) subjective experiential assessments: QoE ratings, presence measures, and emotional self-reports.
The experimental design implemented a within-subjects approach with systematic manipulation of multiple independent variables: Avatar Type (Chef vs. Humanoid), Connection Type (Host vs. Client, determining emotion communication direction), Role assignment (Student vs. Teacher), and Network Quality conditions (Delay: 0ms vs. 500ms; Jitter: 0ms vs. 500ms). This factorial design enables a comprehensive investigation of how avatar representation, emotional expressivity, task roles, and network impairments interact to influence collaborative performance, emotional states, and perceived quality of experience in immersive virtual environments.
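The factor structure above can be made concrete by enumerating the full condition space. The sketch below is purely illustrative: the factor names and value labels are taken from the description, but the actual session-to-condition assignment (each participant completed only three sessions) is defined by the dataset itself, not by this enumeration.

```python
from itertools import product

# Factors and levels as described in the experimental design.
# Names are illustrative; consult the dataset README for the canonical labels.
factors = {
    "avatar": ["Chef", "Humanoid"],
    "connection": ["Host", "Client"],   # determines emotion communication direction
    "role": ["Student", "Teacher"],
    "delay_ms": [0, 500],
    "jitter_ms": [0, 500],
}

# Full factorial space: 2^5 = 32 unique condition combinations.
conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(conditions))  # 32
```

Note that with 120 recordings across 32 possible combinations, not every cell of the full factorial is visited by every participant; the design samples this space across the three sessions per participant.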
Files

| Name | Size | MD5 checksum |
|---|---|---|
| README_COMPLETE_HUTWIN_DATASET.txt | 12.5 kB | 17125ed6fa67c3aff818f27d91ec6b31 |
Additional details

Related works
- Is described by: Conference proceeding, DOI 10.1109/QoMEX65720.2025.11219890