Published August 25, 2022 | Version v2
Dataset Restricted

MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment on Simultaneous Tasks

  • 1. SMART Lab, Purdue University
  • 2. Department of Psychological Sciences, Purdue University

Description

MOCAS is a multimodal dataset dedicated to human cognitive workload (CWL) assessment. In contrast to existing datasets based on virtual game stimuli, the data in MOCAS was collected from realistic closed-circuit television (CCTV) monitoring tasks, increasing its applicability to real-world scenarios. To build MOCAS, two off-the-shelf wearable sensors and one webcam were used to collect physiological signals and behavioral features from 21 human subjects. After each task, participants reported their CWL by completing the NASA Task Load Index (NASA-TLX) and the Instantaneous Self-Assessment (ISA). Personal background (e.g., personality and prior experience) was surveyed using demographic and Big Five Factor personality questionnaires, and two dimensions of subjective emotion (i.e., arousal and valence) were obtained from the Self-Assessment Manikin, which could serve as potential indicators for improving CWL recognition performance. Technical validation was conducted to demonstrate that the target CWL levels were elicited during the simultaneous CCTV monitoring tasks; its results support the high quality of the collected multimodal signals.

Data Access: To protect the sensitive data and privacy of the human subjects (e.g., physiological signals and facial views), only authorized researchers who consent to the End User License Agreement (EULA) are allowed to download MOCAS. Researchers who want access should visit our website (https://polytechnic.purdue.edu/ahmrs/dataset) and download the EULA document. After reviewing and completing the document, they should email the signed EULA to info@smart-laboratory.org along with their Zenodo account. Our research group will then review the request and grant access to our Zenodo repository, which contains the downsampled MOCAS dataset, subjective information, and the supplementary code used in this paper. Because of the raw dataset's large size, it is hosted in an additional repository (Purdue Box, https://purdue.box.com/v/mocas-dataset); we will sequentially invite the email address used in the Zenodo account and the EULA document to access it.

BibTeX Citation:

@article{jo2024mocas,
  title={MOCAS: A multimodal dataset for objective cognitive workload assessment on simultaneous tasks},
  author={Jo, Wonse and Wang, Ruiqi and Sun, Su and Senthilkumaran, Revanth Krishna and Foti, Daniel and Min, Byung-Cheol},
  journal={IEEE Transactions on Affective Computing},
  year={2024},
  publisher={IEEE}
}

Notes

This material is based upon work supported by the National Science Foundation under Grant No. IIS-1846221. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Files

Restricted

The record is publicly accessible, but files are restricted to users with access.

Additional details

Related works

Is published in
Publication: 10.1109/TAFFC.2024.3414330 (DOI)

Funding

CAREER: Adaptive Human Multi-Robot Systems 1846221
National Science Foundation