MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment on Simultaneous Tasks

Dataset | Open Access | Published 2022-08-25
Wonse Jo; Ruiqi Wang; Su Sun; Revanth Krishna Senthilkumaran; Daniel Foti; Byung-Cheol Min
MOCAS is a multimodal dataset for human cognitive workload (CWL) assessment. In contrast to existing datasets based on virtual game stimuli, the data in MOCAS were collected from realistic closed-circuit television (CCTV) monitoring tasks, increasing their applicability to real-world scenarios. To build MOCAS, two off-the-shelf wearable sensors and one webcam were used to collect physiological signals and behavioral features from 21 human subjects. After each task, participants reported their CWL by completing the NASA Task Load Index (NASA-TLX) and the Instantaneous Self-Assessment (ISA). Personal background (e.g., personality and prior experience) was surveyed using demographic and Big Five Factor personality questionnaires, and two dimensions of subjective emotion (arousal and valence) were obtained from the Self-Assessment Manikin; these could serve as additional indicators for improving CWL recognition performance. Technical validation confirmed that the target CWL levels were elicited during the simultaneous CCTV monitoring tasks, supporting the quality of the collected multimodal signals.

This material is based upon work supported by the National Science Foundation under Grant No. IIS-1846221. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

DOI: 10.5281/zenodo.7023242 (this version); 10.5281/zenodo.7023241 (all versions)
URL: https://zenodo.org/record/7023242
License: Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/legalcode)
Keywords: affective dataset; affective computing; cognitive load; stress; rosbag2
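Since the recordings are distributed in the rosbag2 format (per the keywords above), a minimal sketch for inspecting one recording with the community `rosbags` Python library is shown below. The bag path is a placeholder, and no topic names are assumed; the actual file layout and topics are documented with the dataset itself.

```python
# Minimal sketch for inspecting one MOCAS rosbag2 recording with the
# community `rosbags` library (pip install rosbags). The bag path is a
# placeholder; substitute the location of an extracted recording.
from pathlib import Path

from rosbags.highlevel import AnyReader

bag_path = Path("mocas/subject_01")  # hypothetical path to one recording

with AnyReader([bag_path]) as reader:
    # Each connection corresponds to one recorded topic.
    for connection in reader.connections:
        print(f"{connection.topic}: {connection.msgcount} messages "
              f"({connection.msgtype})")

    # Deserialize the first message as a smoke test. Standard ROS 2 message
    # types work out of the box; any custom message types would need to be
    # registered with the reader's typestore first.
    for connection, timestamp, rawdata in reader.messages():
        msg = reader.deserialize(rawdata, connection.msgtype)
        print(timestamp, connection.topic, type(msg).__name__)
        break
```

`AnyReader` accepts a list of bag paths, so the same pattern extends to iterating over the recordings of all subjects in one pass.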
| | All versions | This version |
|---|---|---|
| Views | 110 | 110 |
| Downloads | 89 | 89 |
| Data volume | 992.9 GB | 992.9 GB |
| Unique views | 86 | 86 |
| Unique downloads | 50 | 50 |