K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations
Authors/Creators
- 1. Korea Advanced Institute of Science and Technology (KAIST)
- 2. Khalifa University
Description
While recognizing emotions during social interactions has many potential applications given the popularization of low-cost mobile sensors, the heavy regulation of emotional behaviors in the wild makes the task difficult. Studying emotions in the context of social interactions therefore requires a novel dataset comprising multiple modalities and perspectives. K-EmoCon is such a multimodal dataset, with comprehensive annotations of continuous emotions during naturalistic conversations. It contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices during 16 paired debates on a social issue, each approximately 10 minutes long. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at 5-second intervals while viewing the debate footage, in terms of arousal and valence as well as 18 additional categorical emotions. The resulting K-EmoCon is the first multimodal dataset to support the multiperspective assessment of emotions during social interactions.
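To illustrate the annotation scheme described above, here is a minimal sketch of how per-window, per-perspective ratings might be represented and aggregated. The class and function names, the 1-5 rating scale, and the example values are all hypothetical assumptions for illustration, not the dataset's actual file format.

```python
from dataclasses import dataclass

# Hypothetical record: one rater's score for one 5-second window.
# "perspective" mirrors the three viewpoints in K-EmoCon:
# "self", "partner", or "external".
@dataclass
class Annotation:
    start_s: int      # window start time within the debate (seconds)
    perspective: str  # "self", "partner", or "external"
    arousal: int      # assumed 1-5 scale (illustrative)
    valence: int      # assumed 1-5 scale (illustrative)

def window_starts(duration_s: int, step_s: int = 5) -> list[int]:
    """Start times of successive annotation windows (5 s apart)."""
    return list(range(0, duration_s, step_s))

def mean_by_perspective(annots: list[Annotation], perspective: str) -> tuple[float, float]:
    """Average (arousal, valence) over all windows for one perspective."""
    vals = [(a.arousal, a.valence) for a in annots if a.perspective == perspective]
    n = len(vals)
    return (sum(a for a, _ in vals) / n, sum(v for _, v in vals) / n)

# Toy example: two self-reported windows and one external rating.
annots = [
    Annotation(0, "self", 3, 4),
    Annotation(5, "self", 2, 3),
    Annotation(0, "external", 4, 2),
]
print(window_starts(20))                     # [0, 5, 10, 15]
print(mean_by_perspective(annots, "self"))   # (2.5, 3.5)
```

Keeping the perspective as an explicit field makes it straightforward to compare, say, self-reports against external observers' ratings over the same 5-second windows.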