MuMMER dataset
Authors/Creators
- Idiap Research Institute
Description
A dataset for multi-party human-robot interaction available for research purposes.
It comprises 1 h 29 min of multimodal recordings of people interacting with the social robot Pepper in entertainment scenarios such as quizzes, chatting, and route guidance. In the 33 clips (1 to 4 min long), recorded from the robot's point of view, the participants interact with the robot in an unconstrained manner.
The dataset contains color and depth videos from a Kinect v2 and an Intel RealSense D435, as well as the video from Pepper's own camera. All visible faces and identities in the dataset were manually annotated, making the identities consistent across time and clips. The goal of the dataset is to evaluate perception algorithms in multi-party human-robot interaction, in particular re-identification when a track is lost, as this ability is crucial for maintaining the dialog history. The dataset can easily be extended with other types of annotations.
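Because identity labels are consistent across time and clips, a re-identification evaluation can be bootstrapped directly from the annotations. Below is a minimal sketch of reading per-face annotations and listing which clips each identity appears in; it assumes a hypothetical CSV layout (`clip_id`, `frame`, `person_id`, bounding box) and file name, as the actual annotation format shipped with the dataset may differ.

```python
import csv
from collections import defaultdict

def load_annotations(path):
    """Load per-frame face annotations.

    Assumes a hypothetical CSV layout with one row per annotated face:
    clip_id, frame, person_id, x, y, w, h. The real MuMMER annotation
    files may use a different format.
    """
    faces = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            faces[row["person_id"]].append((
                row["clip_id"],
                int(row["frame"]),
                (int(row["x"]), int(row["y"]), int(row["w"]), int(row["h"])),
            ))
    return faces

def clips_per_identity(faces):
    """Map each annotated identity to the clips it appears in,
    illustrating cross-clip identity consistency."""
    return {pid: sorted({clip for clip, _, _ in dets})
            for pid, dets in faces.items()}

if __name__ == "__main__":
    faces = load_annotations("mummer_annotations.csv")  # hypothetical path
    for pid, clips in sorted(clips_per_identity(faces).items()):
        print(f"identity {pid}: appears in clips {clips}")
```

Grouping detections by identity in this way gives the ground truth against which a tracker's re-identification decisions (e.g., after a lost track) can be scored.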