Dataset Open Access

EMG and Video Dataset for sensor fusion based hand gestures recognition

Ceolini, Enea; Taverni, Gemma; Payvand, Melika; Donati, Elisa


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.3663616</identifier>
  <creators>
    <creator>
      <creatorName>Ceolini, Enea</creatorName>
      <givenName>Enea</givenName>
      <familyName>Ceolini</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-2676-0804</nameIdentifier>
      <affiliation>Institute of Neuroinformatics, UZH/ETH Zurich</affiliation>
    </creator>
    <creator>
      <creatorName>Taverni, Gemma</creatorName>
      <givenName>Gemma</givenName>
      <familyName>Taverni</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-8951-3133</nameIdentifier>
      <affiliation>Institute of Neuroinformatics, UZH/ETH Zurich</affiliation>
    </creator>
    <creator>
      <creatorName>Payvand, Melika</creatorName>
      <givenName>Melika</givenName>
      <familyName>Payvand</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-5400-067X</nameIdentifier>
      <affiliation>Institute of Neuroinformatics, UZH/ETH Zurich</affiliation>
    </creator>
    <creator>
      <creatorName>Donati, Elisa</creatorName>
      <givenName>Elisa</givenName>
      <familyName>Donati</familyName>
      <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-8091-1298</nameIdentifier>
      <affiliation>Institute of Neuroinformatics, UZH/ETH Zurich</affiliation>
    </creator>
  </creators>
  <titles>
    <title>EMG and Video Dataset for sensor fusion based hand gestures recognition</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2020</publicationYear>
  <subjects>
    <subject>EMG</subject>
    <subject>DVS</subject>
    <subject>DAVIS</subject>
    <subject>Hand gesture recognition</subject>
    <subject>Sensor fusion</subject>
    <subject>Myo</subject>
  </subjects>
  <dates>
    <date dateType="Issued">2020-02-12</date>
  </dates>
  <resourceType resourceTypeGeneral="Dataset"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3663616</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3228845</relatedIdentifier>
  </relatedIdentifiers>
  <version>3.0</version>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;This dataset contains data for hand gesture recognition recorded with 3 different sensors.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;sEMG: recorded via the Myo armband, which is composed of 8 equally spaced non-invasive sEMG sensors placed approximately around the middle of the forearm. The sampling frequency of the Myo is 200 Hz, and its output is in arbitrary units (a.u.).&lt;/p&gt;

&lt;p&gt;DVS: Dynamic Vision Sensor, a very low-power event-based camera with 128x128 resolution.&lt;/p&gt;

&lt;p&gt;DAVIS: Dynamic and Active-pixel Vision Sensor, a very low-power event-based camera with 240x180 resolution that also acquires APS frames.&lt;/p&gt;

&lt;p&gt;The dataset contains recordings of 21 subjects. Each subject performed 3 sessions, in which each of the 5 hand gestures was recorded 5 times, each repetition lasting 2 s. Between gestures there is a 1 s relaxation phase in which the muscles return to the rest position, removing any residual muscular activation.&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;Note: All the information for the DVS sensor has been extracted and can be found in the *.npy files. If the raw data (.aedat) is needed, please contact:&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;enea.ceolini@ini.uzh.ch&lt;/p&gt;

&lt;p&gt;elisa@ini.uzh.ch&lt;/p&gt;

&lt;p&gt;==== README ====&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;DATASET STRUCTURE:&lt;/p&gt;

&lt;p&gt;EMG, DVS and APS recordings&lt;/p&gt;

&lt;p&gt;21 subjects&lt;/p&gt;

&lt;p&gt;3 sessions for each subject&lt;/p&gt;

&lt;p&gt;5 gestures in each session (&amp;#39;pinky&amp;#39;, &amp;#39;elle&amp;#39;, &amp;#39;yo&amp;#39;, &amp;#39;index&amp;#39;, &amp;#39;thumb&amp;#39;)&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;SINGLE DATASETS:&lt;/p&gt;

&lt;p&gt;- relax21_raw_emg.zip: contains raw sEMG and annotations (ground truth of gestures) in the format `subjectXX_sessionYY_ZZZ`, with `XX` the subject ID (01 to 21), `YY` the session ID (01-03), and `ZZZ` either &amp;lsquo;emg&amp;rsquo; or &amp;lsquo;ann&amp;rsquo;.&lt;/p&gt;
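As a hedged illustration (the file stems are assumed from the naming scheme above; the helper name `parse_stem` is hypothetical, not part of the dataset), the convention can be parsed like this:

```python
import re

# Parse stems of the form subjectXX_sessionYY_ZZZ described above.
# parse_stem is a hypothetical helper, not shipped with the dataset.
STEM_RE = re.compile(r"subject(\d{2})_session(\d{2})_(emg|ann)")

def parse_stem(stem):
    """Return (subject_id, session_id, kind) for a raw-EMG file stem."""
    m = STEM_RE.fullmatch(stem)
    if m is None:
        raise ValueError("unexpected file name: " + stem)
    subject, session, kind = m.groups()
    return int(subject), int(session), kind
```

For example, `parse_stem("subject07_session02_emg")` yields `(7, 2, 'emg')`.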

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;- relax21_raw_dvs.zip: contains the full-frame DVS events in an array with dimensions 0 -&amp;gt; addr_x, 1 -&amp;gt; addr_y, 2 -&amp;gt; timestamp, 3 -&amp;gt; polarity. The timestamps are in seconds and synchronized with the Myo. Each file is named `subjectXX_sessionYY_dvs`, with `XX` the subject ID (01 to 21) and `YY` the session ID (01-03).&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;- relax21_cropped_aps.zip: contains the 40x40 pixel APS frames for all subjects and trials in the format `subjectXX_sessionYY_Z_W_K`, with `XX` the subject ID (01 to 21), `YY` the session ID (01-03), `Z` the gesture (&amp;#39;pinky&amp;#39;, &amp;#39;elle&amp;#39;, &amp;#39;yo&amp;#39;, &amp;#39;index&amp;#39;, &amp;#39;thumb&amp;#39;), `W` the trial ID (1-5), and `K` the frame index.&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;- relax21_cropped_dvs_emg_spikes.pkl: spiking dataset that can be used to reproduce the results in the paper. The dataset is a dictionary with the following keys:&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;y&lt;/strong&gt;: array of size 1xN with the class (0-&amp;gt;4).&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;sub&lt;/strong&gt;: array of size 1xN with the subject id (1-&amp;gt;10).&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;sess&lt;/strong&gt;: array of size 1xN with the session id (1-&amp;gt;3).&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;dvs&lt;/strong&gt;: list of length N; each object in the list is a 2d array of size 4xT_n, where T_n is the number of events in the trial and the 4 dimensions represent: 0 -&amp;gt; addr_x, 1 -&amp;gt; addr_y, 2 -&amp;gt; timestamp, 3 -&amp;gt; polarity.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;emg&lt;/strong&gt;: list of length N; each object in the list is a 2d array of size 3xT_n, where T_n is the number of events in the trial and the 3 dimensions represent: 0 -&amp;gt; addr, 1 -&amp;gt; timestamp, 2 -&amp;gt; polarity.&lt;/li&gt;
&lt;/ul&gt;
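A minimal access sketch for the dictionary layout above (assuming Python with numpy; the tiny synthetic dict here only stands in for the real pickle file, so the snippet runs without the download):

```python
import pickle
import numpy as np

# Hedged sketch: the real file is relax21_cropped_dvs_emg_spikes.pkl.
# A tiny synthetic dict with the documented keys stands in for it here.
rng = np.random.default_rng(0)
n_trials = 3
data = {
    "y":    rng.integers(0, 5,  size=(1, n_trials)),        # class, 0-4
    "sub":  rng.integers(1, 11, size=(1, n_trials)),        # subject id
    "sess": rng.integers(1, 4,  size=(1, n_trials)),        # session id
    "dvs":  [rng.random((4, 10)) for _ in range(n_trials)], # 4xT_n events
    "emg":  [rng.random((3, 8))  for _ in range(n_trials)], # 3xT_n events
}
# With the real dataset, replace the dict above with:
#   with open("relax21_cropped_dvs_emg_spikes.pkl", "rb") as f:
#       data = pickle.load(f)

# Unpack the event rows of the first DVS trial.
x_addr, y_addr, timestamps, polarity = data["dvs"][0]
assert len(data["dvs"]) == data["y"].shape[1]
```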

</description>
  </descriptions>
  <fundingReferences>
    <fundingReference>
      <funderName>European Commission</funderName>
      <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/100010661</funderIdentifier>
      <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/753470/">753470</awardNumber>
      <awardTitle>Neuromorphic EMG Processing with Spiking Neural Networks</awardTitle>
    </fundingReference>
  </fundingReferences>
</resource>