Published June 23, 2023 | Version 1.0
Dataset Open

EmokineDataset

  • 1. Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany
  • 2. Methods of Machine Learning, University of Tuebingen, Germany and International Max Planck Research School for Intelligent Systems
  • 3. Department of Psychology, University of Glasgow, Scotland
  • 4. 3Fish, Istanbul, Turkey
  • 5. Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany and Max Planck School of Cognition
  • 6. Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt/M, Germany and Department of Modern Languages, Goethe University, Frankfurt/M, Germany
  • 7. Computer Science Department, Goethe University, Frankfurt/M, Germany

Description

EmokineDataset

 

Companion resources
Paper: Christensen, Julia F., Fernandez, Andres, Smith, Rebecca, Michalareas, Georgios, Yazdi, Sina H. N., Farahi, Fahima, Schmidt, Eva-Madeleine, Bahmanian, Nasimeh and Roig, Gemma (2024): "EMOKINE: A Software Package and Computational Framework for Scaling Up the Creation of Highly Controlled Emotional Full-Body Movement Datasets".

Code: https://github.com/andres-fr/emokine

 

EmokineDataset is a pilot dataset showcasing the usefulness of the emokine software library. It features a single dancer performing 63 short sequences, which have been recorded and analyzed in different ways. The pilot dataset is organized into 3 folders:

 

 

  • Stimuli: The sequences are rendered in 4 visual presentations that can be used as stimuli in observer experiments:
    1. Silhouette: Videos with a white silhouette of the dancer on black background.
    2. FLD (Full-Light Display): Video recordings with the performer's face blurred out.
    3. PLD (Point-Light Display): Videos featuring a black background with white circles corresponding to the selected body landmarks.
    4. Avatar: Videos produced by the proprietary XSENS motion capture software, featuring a robot-like avatar performing the captured movements on a light blue background.
  • Data: To facilitate computation and analysis of the stimuli, the pilot dataset also includes several data formats:
    1. MVNX: Raw motion capture data directly recorded from the XSENS motion capture system.
    2. CSV: Translation of a subset of the MVNX data into CSV, included for easier integration with mainstream analysis software tools. The subset includes the following features: acceleration, angularAcceleration, angularVelocity, centerOfMass, footContacts, orientation, position and velocity.
    3. CamPos: While the MVNX provides 3D positions with respect to a global frame of reference, the CamPos [JSON](https://www.json.org/json-en.html) files represent the positions from the perspective of the camera used to render the PLD videos. Specifically, each 3D position is given with respect to the camera as (x, y, z), where (x, y) ranges from (0, 0) (left, bottom) to (1, 1) (right, top), and z is the distance between the camera and the point in meters. This can be used to obtain a 2-dimensional projection of the dancer's position by simply ignoring z (see the sketch after this list).
    4. Kinematic: Analysis of a selection of relevant kinematic features, using information from MVNX, Silhouette and CamPos, provided in tabular form.
  • Validation: Data and experiments reported in our paper as part of the dataset validation, supporting its meaningfulness and usefulness for downstream tasks.
    1. TechVal: A collection of plots presenting relevant statistics of the pilot dataset.
    2. ObserverExperiment: Results, in tabular form, of an online study in which human participants were asked to recognize the emotions conveyed by the stimuli and to rate their beauty.
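
As a quick illustration of the CamPos projection mentioned above, here is a minimal Python sketch. The function and file names are hypothetical, and the assumed JSON layout (a landmark name mapped to a list of per-frame (x, y, z) triplets) should be verified against an actual CamPos file before use.

import json

# Minimal sketch: project CamPos landmarks to 2D by dropping the z coordinate.
# ASSUMPTION: each CamPos JSON maps a landmark name to a list of (x, y, z) frames;
# inspect one file to confirm the actual layout before relying on this.
def load_campos_2d(path):
    with open(path, "r") as f:
        campos = json.load(f)
    return {landmark: [(x, y) for (x, y, z) in frames]
            for landmark, frames in campos.items()}

# Hypothetical file name, following the naming scheme described below:
# points_2d = load_campos_2d("EmokineDataset/Data/CamPos/CamPos_seq1_angry.json")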

More specifically, the 63 unique sequences comprise 9 unique choreographies, each performed once as an explanation and then 6 times with different intended emotions (angry, content, fearful, joy, neutral and sad). Once downloaded, the pilot dataset should have the following structure:


EmokineDataset
├── Stimuli
│   ├── Avatar
│   ├── FLD
│   ├── PLD
│   └── Silhouette
├── Data
│   ├── CamPos
│   ├── CSV
│   ├── Kinematic
│   └── MVNX
└── Validation
    ├── TechVal
    └── ObserverExperiment
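
Once unzipped, this top-level layout can be sanity-checked with standard Python; the root folder name is taken from the tree above and may need adjusting to the actual download location.

from pathlib import Path

# Check that the expected top-level folders (as listed in the tree above) are present.
ROOT = Path("EmokineDataset")  # adjust to the actual download location
EXPECTED = [
    "Stimuli/Avatar", "Stimuli/FLD", "Stimuli/PLD", "Stimuli/Silhouette",
    "Data/CamPos", "Data/CSV", "Data/Kinematic", "Data/MVNX",
    "Validation/TechVal", "Validation/ObserverExperiment",
]
missing = [d for d in EXPECTED if not (ROOT / d).is_dir()]
print("all folders present" if not missing else f"missing: {missing}")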

 

Each <MODALITY> folder of the Stimuli, as well as MVNX, CamPos and Kinematic, has the following structure:


<MODALITY>
├── explanation
│   ├── <MODALITY>_seq1_explanation.<EXTENSION>
│   ├── ...
│   └── <MODALITY>_seq9_explanation.<EXTENSION>
├── <MODALITY>_seq1_angry.<EXTENSION>
├── <MODALITY>_seq1_content.<EXTENSION>
├── <MODALITY>_seq1_fearful.<EXTENSION>
├── <MODALITY>_seq1_joy.<EXTENSION>
├── <MODALITY>_seq1_neutral.<EXTENSION>
├── <MODALITY>_seq1_sad.<EXTENSION>
...
└── <MODALITY>_seq9_sad.<EXTENSION>
 

 

The CSV directory is slightly different: instead of a single file per sequence and emotion, it contains one folder per sequence and emotion, each holding a .csv file for each of the 8 extracted features (acceleration, velocity, ...).
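
To make the naming scheme concrete, the following sketch enumerates the expected entries for one modality. The .mvnx extension and the per-sequence CSV folder name are assumptions for illustration, not guaranteed by this description.

from pathlib import Path

EMOTIONS = ["angry", "content", "fearful", "joy", "neutral", "sad"]

def expected_entries(modality, extension):
    # 9 choreographies x (1 explanation + 6 emotions) = 63 entries per modality
    for seq in range(1, 10):
        yield f"explanation/{modality}_seq{seq}_explanation.{extension}"
        for emotion in EMOTIONS:
            yield f"{modality}_seq{seq}_{emotion}.{extension}"

# Example: check the MVNX modality (the .mvnx extension is an assumption):
mvnx_root = Path("EmokineDataset/Data/MVNX")
missing = [n for n in expected_entries("MVNX", "mvnx") if not (mvnx_root / n).is_file()]
print(f"{63 - len(missing)}/63 MVNX files found")

# For CSV, each sequence and emotion maps to a folder rather than a single file,
# holding one .csv per feature, e.g. (hypothetical path):
# EmokineDataset/Data/CSV/CSV_seq1_angry/acceleration.csv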

Notes

Funded by the Max Planck Society, Germany. Under review.

Files (1.7 GB)

EmokineDataset_v1.0.zip (1.7 GB, md5:58a42e6a2fc8687fc095761cc648c53b)