Published February 3, 2025 | Version 0.1
Open-access dataset

AFFEC Multimodal Dataset

Description

Dataset: AFFEC - Advancing Face-to-Face Emotion Communication Dataset

Overview

The AFFEC (Advancing Face-to-Face Emotion Communication) dataset is a multimodal dataset designed for emotion recognition research. It captures dynamic human interactions through electroencephalography (EEG), eye-tracking, galvanic skin response (GSR), facial movements, and self-annotations, enabling the study of felt and perceived emotions in real-world face-to-face interactions. The dataset comprises 84 simulated emotional dialogues, 72 participants, and over 5,000 trials, annotated with more than 20,000 emotion labels.

Dataset Structure

The dataset follows the Brain Imaging Data Structure (BIDS) format and consists of the following components:

Root Folder:

  • sub-* : Individual subject folders (e.g., sub-aerj, sub-mdl, sub-xx2)
  • dataset_description.json: General dataset metadata
  • participants.json and participants.tsv: Participant demographics and attributes
  • task-fer_events.json: Event annotations for the FER task
  • README.md: This documentation file
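Under this layout, the subject folders and the participants table can be discovered with a few lines of Python. A minimal sketch using only the standard library (the helper names are illustrative, not part of the dataset):

```python
import csv
from pathlib import Path

def list_subjects(bids_root):
    """Return subject IDs found as sub-* folders in a BIDS root."""
    root = Path(bids_root)
    return sorted(p.name.removeprefix("sub-")
                  for p in root.glob("sub-*") if p.is_dir())

def load_participants(bids_root):
    """Read participants.tsv into a list of row dicts keyed by column headers."""
    with open(Path(bids_root) / "participants.tsv",
              newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))
```

`load_participants` returns one dict per participant, so demographics can be joined to the per-subject recordings by `participant_id`.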

Subject Folders (sub-<subject_id>):

Each subject folder contains:

  • Behavioral Data (beh/): Physiological recordings (eye tracking, GSR, facial analysis, cursor tracking) in JSON and TSV formats.
  • EEG Data (eeg/): EEG recordings in .edf and corresponding metadata in .json.
  • Event Files (*.tsv): Trial event data for the emotion recognition task.
  • Channel Descriptions (*_channels.tsv): EEG channel information.

Data Modalities and Channels

1. Eye Tracking Data

  • Channels: 16 (fixation points, left/right eye gaze coordinates, gaze validity)
  • Sampling Rate: 62 Hz
  • Trials: 5632
  • File Example: sub-<subject>_task-fer_run-0_recording-gaze_physio.json
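The JSON file in each physio pair is the BIDS sidecar; for continuous recordings the BIDS specification defines the keys `SamplingFrequency`, `StartTime`, and `Columns`. A minimal sketch of reading one, assuming the sidecars here follow that convention:

```python
import json

def read_physio_sidecar(path):
    """Read a BIDS physio JSON sidecar.

    Returns (sampling_rate_hz, column_names), assuming the standard
    BIDS keys SamplingFrequency and Columns are present.
    """
    with open(path, encoding="utf-8") as f:
        meta = json.load(f)
    return meta["SamplingFrequency"], meta["Columns"]
```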

2. Pupil Data

  • Channels: 21 (pupil diameter, eye position, pupil validity flags)
  • Sampling Rate: 149 Hz
  • Trials: 5632
  • File Example: sub-<subject>_task-fer_run-0_recording-pupil_physio.json

3. Cursor Tracking Data

  • Channels: 4 (cursor X, cursor Y, cursor state)
  • Sampling Rate: 62 Hz
  • Trials: 5632
  • File Example: sub-<subject>_task-fer_run-0_recording-cursor_physio.json

4. Face Analysis Data

  • Channels: Over 200 (2D/3D facial landmarks, gaze detection, facial action units)
  • Sampling Rate: 40 Hz
  • Trials: 5680
  • File Example: sub-<subject>_task-fer_run-0_recording-videostream_physio.json

5. Electrodermal Activity (EDA) and Physiological Sensors

  • Channels: 40 (GSR, body temperature, accelerometer data)
  • Sampling Rate: 50 Hz
  • Trials: 5438
  • File Example: sub-<subject>_task-fer_run-0_recording-gsr_physio.json

6. EEG Data

  • Channels: 63 (EEG electrodes following the 10-20 placement scheme)
  • Sampling Rate: 256 Hz
  • Reference: Left earlobe
  • Trials: 5632
  • File Example: sub-<subject>_task-fer_run-0_eeg.edf

7. Self-Annotations

  • Trials: 5807
  • Annotations Per Trial: 4
  • Event Markers: Onset time, duration, trial type, emotion labels
  • File Example: task-fer_events.json
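Because the modalities are sampled at different rates (62, 149, 40, 50, and 256 Hz), per-trial analysis typically aligns them on a common time axis. A minimal stdlib sketch of nearest-sample alignment, assuming each modality's timestamps are available as a sorted list of seconds:

```python
import bisect

def sample_times(n_samples, rate_hz, onset=0.0):
    """Timestamps for n_samples recorded at rate_hz, starting at onset."""
    return [onset + k / rate_hz for k in range(n_samples)]

def nearest_sample(timestamps, t):
    """Index of the sample whose timestamp is closest to time t (seconds)."""
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick whichever neighbor is closer to t.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1
```

For example, the gaze sample (62 Hz) closest to an EEG event at 0.5 s is found with `nearest_sample(sample_times(n, 62.0), 0.5)`.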

Experimental Setup

Participants engaged in a Facial Emotion Recognition (FER) task, where they watched emotionally expressive video stimuli while their physiological and behavioral responses were recorded. Participants provided self-reported ratings for both perceived and felt emotions, differentiating between the emotions they believed the video conveyed and their internal affective experience.

The dataset enables the study of individual differences in emotional perception and expression by incorporating Big Five personality trait assessments and demographic variables.

Usage Notes

  • Text files are UTF-8 encoded (ASCII-compatible).
  • Each modality is stored in JSON, TSV, or EDF format, in accordance with the BIDS specification.
  • Researchers should cite this dataset appropriately in publications.
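Each archive in the Files section is listed with an MD5 checksum. After downloading, integrity can be verified with a chunked hash, which keeps memory use constant even for the multi-gigabyte files:

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 hex digest of a file in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Compare the result against the `md5:` value shown next to the file; a mismatch indicates a corrupted or incomplete download.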

Applications

AFFEC is well-suited for research in:

  • Affective Computing
  • Human-Agent Interaction
  • Emotion Recognition and Classification
  • Multimodal Signal Processing
  • Neuroscience and Cognitive Modeling
  • Healthcare and Mental Health Monitoring

Acknowledgments

This dataset was collected with the support of the brAIn lab at the IT University of Copenhagen.
Special thanks to all participants and research staff involved in data collection.

License

This dataset is shared under the Creative Commons CC0 (public domain dedication) license.

Contact

For questions or collaboration inquiries, please contact brainlab-staff@o365team.itu.dk.


Files (22.2 GB)

core.zip

  • md5:7157e9bedacf58f42692688fb20b57b1 (5.6 MB)
  • md5:0a487c57b988d32692c5e1c00c2d1b29 (448.4 MB)
  • md5:22b573192529b6ee1fd47e63881d0a68 (1.2 kB)
  • md5:737f8274855b3cb3391cb048685e54ef (10.3 GB)
  • md5:b7c44776af2e2d42239374ddd5a8f06c (1.1 GB)
  • md5:c029f8d82aed0acbe72a01232ffd4446 (490.9 MB)
  • md5:8a1a218003e8ded0698bcf239a23baae (973.9 MB)
  • md5:8b76e2fb9b52000e8c1cb38beec1d540 (8.9 GB)

Additional details

Dates

Collected
2024-05-01