Dataset Open Access

MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment on Simultaneous Tasks

Wonse Jo; Ruiqi Wang; Su Sun; Revanth Krishna Senthilkumaran; Daniel Foti; Byung-Cheol Min


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.7023242</identifier>
  <creators>
    <creator>
      <creatorName>Wonse Jo</creatorName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-6904-5878</nameIdentifier>
      <affiliation>SMART Lab, Purdue University</affiliation>
    </creator>
    <creator>
      <creatorName>Ruiqi Wang</creatorName>
      <affiliation>SMART Lab, Purdue University</affiliation>
    </creator>
    <creator>
      <creatorName>Su Sun</creatorName>
      <affiliation>SMART Lab, Purdue University</affiliation>
    </creator>
    <creator>
      <creatorName>Revanth Krishna Senthilkumaran</creatorName>
      <affiliation>SMART Lab, Purdue University</affiliation>
    </creator>
    <creator>
      <creatorName>Daniel Foti</creatorName>
      <affiliation>Department of Psychological Sciences, Purdue University</affiliation>
    </creator>
    <creator>
      <creatorName>Byung-Cheol Min</creatorName>
      <affiliation>SMART Lab, Purdue University</affiliation>
    </creator>
  </creators>
  <titles>
    <title>MOCAS: A Multimodal Dataset for Objective Cognitive Workload Assessment on Simultaneous Tasks</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2022</publicationYear>
  <subjects>
    <subject>affective dataset</subject>
    <subject>affective computing</subject>
    <subject>cognitive load</subject>
    <subject>stress</subject>
    <subject>rosbag2</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2022-08-25</date>
  </dates>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/7023242</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.7023241</relatedIdentifier>
  </relatedIdentifiers>
  <version>2022.0.0</version>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;This MOCAS is&amp;nbsp;a multimodal dataset dedicated for human cognitive workload (CWL) assessment. In contrast to existing datasets based on virtual game stimuli, the data in MOCAS was collected from realistic closed-circuit television (CCTV) monitoring tasks, increasing its applicability for real-world scenarios. To build MOCAS, two off-the-shelf wearable sensors and one webcam were utilized to collect physiological signals and behavioral features from 21 human subjects. After each task, participants reported their CWL by completing the NASA-Task Load Index (NASA-TLX) and Instantaneous Self Assessment (ISA). Personal background (e.g., personality and prior experience) was surveyed using demographic and Big Five Factor personality questionnaires, and two domains of subjective emotion information (i.e., arousal and valence) were obtained from the Self-Assessment Manikin, which could serve as potential indicators for improving CWL recognition performance. Technical validation was conducted to demonstrate that target CWL levels were elicited during simultaneous CCTV monitoring tasks; its results support the high quality of the collected multimodal signals.&lt;/p&gt;</description>
    <description descriptionType="Other">This material is based upon work supported by the National Science Foundation under Grant No. IIS-1846221. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>National Science Foundation</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/100000001</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/NSF/CISE/OAD/1846221/">1846221</awardNumber>
      <awardTitle>CAREER: Adaptive Human Multi-Robot Systems</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>
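
For reference, below is a minimal sketch (not part of the record) of parsing the DataCite XML export above with Python's standard library; the local filename mocas_datacite.xml is an assumption, and the namespace URI is the one declared on the <resource> element.

import xml.etree.ElementTree as ET

NS = {"dc": "http://datacite.org/schema/kernel-4"}  # default namespace of <resource>

tree = ET.parse("mocas_datacite.xml")  # assumed local copy of the export above
root = tree.getroot()

doi = root.find("dc:identifier", NS).text         # 10.5281/zenodo.7023242
title = root.find("dc:titles/dc:title", NS).text  # MOCAS: A Multimodal Dataset ...
creators = [c.text for c in root.findall("dc:creators/dc:creator/dc:creatorName", NS)]

print(doi, title, creators)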
                  All versions   This version
Views                      110            110
Downloads                   89             89
Data volume           992.9 GB       992.9 GB
Unique views                86             86
Unique downloads            50             50
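
The record's rosbag2 subject tag indicates the data are packaged as ROS 2 bag files. Below is a minimal sketch of reading one with the rosbag2_py API; the bag path subject_01 and the sqlite3 storage plugin are illustrative assumptions, not details taken from this record.

import rosbag2_py
from rclpy.serialization import deserialize_message
from rosidl_runtime_py.utilities import get_message

reader = rosbag2_py.SequentialReader()
reader.open(
    rosbag2_py.StorageOptions(uri="subject_01", storage_id="sqlite3"),  # assumed bag path/plugin
    rosbag2_py.ConverterOptions("", ""),  # keep each topic's native serialization format
)

# Map every topic in the bag to its message type string.
type_map = {t.name: t.type for t in reader.get_all_topics_and_types()}

while reader.has_next():
    topic, data, timestamp = reader.read_next()
    msg = deserialize_message(data, get_message(type_map[topic]))
    # e.g., inspect physiological-signal or webcam messages per topic here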
