Dataset Open Access

EMG and Video Dataset for sensor fusion based hand gestures recognition

Ceolini, Enea; Taverni, Gemma; Payvand, Melika; Donati, Elisa


MARC21 XML Export

<?xml version='1.0' encoding='UTF-8'?>
<record xmlns="http://www.loc.gov/MARC21/slim">
  <leader>00000nmm##2200000uu#4500</leader>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">EMG</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">DVS</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">DAVIS</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Hand gesture recognition</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Sensor fusion</subfield>
  </datafield>
  <datafield tag="653" ind1=" " ind2=" ">
    <subfield code="a">Myo</subfield>
  </datafield>
  <controlfield tag="005">20200212192106.0</controlfield>
  <controlfield tag="001">3663616</controlfield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Institute of Neuroinformatics, UZH/ETH Zurich</subfield>
    <subfield code="0">(orcid)0000-0001-8951-3133</subfield>
    <subfield code="a">Taverni, Gemma</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Institute of Neuroinformatics, UZH/ETH Zurich</subfield>
    <subfield code="0">(orcid)0000-0001-5400-067X</subfield>
    <subfield code="a">Payvand, Melika</subfield>
  </datafield>
  <datafield tag="700" ind1=" " ind2=" ">
    <subfield code="u">Institute of Neuroinformatics, UZH/ETH Zurich</subfield>
    <subfield code="0">(orcid)0000-0002-8091-1298</subfield>
    <subfield code="a">Donati, Elisa</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">186878096</subfield>
    <subfield code="z">md5:0aca5c68eb7cecffcd9f2e3a53cb9124</subfield>
    <subfield code="u">https://zenodo.org/record/3663616/files/relax21_cropped_aps.zip</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">2049159861</subfield>
    <subfield code="z">md5:8bda33fec74cb4f49a05c6a89eef7163</subfield>
    <subfield code="u">https://zenodo.org/record/3663616/files/relax21_cropped_dvs_emg_spikes.pkl</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">1592923718</subfield>
    <subfield code="z">md5:e767b6b586c8f6919f945d770fbac529</subfield>
    <subfield code="u">https://zenodo.org/record/3663616/files/relax21_raw_dvs.zip</subfield>
  </datafield>
  <datafield tag="856" ind1="4" ind2=" ">
    <subfield code="s">12711873</subfield>
    <subfield code="z">md5:9fab8c87e8f04a56c3fdcb1797644885</subfield>
    <subfield code="u">https://zenodo.org/record/3663616/files/relax21_raw_emg.zip</subfield>
  </datafield>
  <datafield tag="542" ind1=" " ind2=" ">
    <subfield code="l">open</subfield>
  </datafield>
  <datafield tag="260" ind1=" " ind2=" ">
    <subfield code="c">2020-02-12</subfield>
  </datafield>
  <datafield tag="909" ind1="C" ind2="O">
    <subfield code="p">openaire_data</subfield>
    <subfield code="o">oai:zenodo.org:3663616</subfield>
  </datafield>
  <datafield tag="100" ind1=" " ind2=" ">
    <subfield code="u">Institute of Neuroinformatics, UZH/ETH Zurich</subfield>
    <subfield code="0">(orcid)0000-0002-2676-0804</subfield>
    <subfield code="a">Ceolini, Enea</subfield>
  </datafield>
  <datafield tag="245" ind1=" " ind2=" ">
    <subfield code="a">EMG and Video Dataset for sensor fusion based hand gestures recognition</subfield>
  </datafield>
  <datafield tag="536" ind1=" " ind2=" ">
    <subfield code="c">753470</subfield>
    <subfield code="a">Neuromorphic EMG Processing with Spiking Neural Networks</subfield>
  </datafield>
  <datafield tag="540" ind1=" " ind2=" ">
    <subfield code="u">https://creativecommons.org/licenses/by/4.0/legalcode</subfield>
    <subfield code="a">Creative Commons Attribution 4.0 International</subfield>
  </datafield>
  <datafield tag="650" ind1="1" ind2="7">
    <subfield code="a">cc-by</subfield>
    <subfield code="2">opendefinition.org</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">&lt;p&gt;This dataset contains data for hand gesture recognition recorded with 3 different sensors.&amp;nbsp;&lt;/p&gt;

&lt;p&gt;sEMG: recorded with the Myo armband, which is composed of 8 equally spaced non-invasive sEMG sensors placed approximately around the middle of the forearm. The sampling frequency of the Myo is 200 Hz. The output of the Myo is in arbitrary units (a.u.).&lt;/p&gt;

&lt;p&gt;DVS: Dynamic Vision Sensor, a very low-power event-based camera with 128x128 resolution.&lt;/p&gt;

&lt;p&gt;DAVIS: Dynamic and Active-pixel Vision Sensor, a very low-power event-based camera with 240x180 resolution that also acquires APS frames.&lt;/p&gt;

&lt;p&gt;The dataset contains recordings from 21 subjects. Each subject performed 3 sessions, in which each of the 5 hand gestures was recorded 5 times, each repetition lasting 2 s. Between gestures there is a 1 s relaxation phase in which the muscles return to the rest position, removing any residual muscular activation.&lt;/p&gt;
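
&lt;p&gt;In total this amounts to 5 gestures x 5 repetitions = 25 gesture trials per session, 75 trials per subject, and 21 x 75 = 1,575 gesture trials across the whole dataset.&lt;/p&gt;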

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;Note: All the information from the DVS sensor has been extracted and can be found in the *.npy files. If the raw data (.aedat) is needed, please contact:&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;enea.ceolini@ini.uzh.ch&lt;/p&gt;

&lt;p&gt;elisa@ini.uzh.ch&lt;/p&gt;

&lt;p&gt;==== README ====&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;DATASET STRUCTURE:&lt;/p&gt;

&lt;p&gt;EMG, DVS and APS recordings&lt;/p&gt;

&lt;p&gt;21 subjects&lt;/p&gt;

&lt;p&gt;3 sessions for each subject&lt;/p&gt;

&lt;p&gt;5 gestures in each session (&amp;#39;pinky&amp;#39;, &amp;#39;elle&amp;#39;, &amp;#39;yo&amp;#39;, &amp;#39;index&amp;#39;, &amp;#39;thumb&amp;#39;)&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;SINGLE DATASETS:&lt;/p&gt;

&lt;p&gt;- relax21_raw_emg.zip: contains raw sEMG and annotations (ground truth of gestures) in the format `subjectXX_sessionYY_ZZZ` with `XX` subject ID (01 to 21), `YY` session ID (01-03) and `ZZZ` that can be &amp;lsquo;emg&amp;rsquo; or &amp;lsquo;ann&amp;rsquo;.&lt;/p&gt;
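
&lt;p&gt;As an illustration only, a minimal Python sketch for reading one recording. The .npy extension, the file names and the samples-by-8-channels layout below are assumptions, not guaranteed by this record; only the naming pattern and the 200 Hz sampling frequency come from the description above.&lt;/p&gt;

&lt;pre&gt;
import numpy as np

FS = 200  # Myo sampling frequency in Hz, as documented above

# Hypothetical file names following the subjectXX_sessionYY_ZZZ pattern;
# the .npy extension and the (samples, 8 channels) layout are assumptions
emg = np.load('subject01_session01_emg.npy')
ann = np.load('subject01_session01_ann.npy')  # annotations; layout not specified in this record

t = np.arange(emg.shape[0]) / FS  # time axis in seconds at 200 Hz
print('number of sEMG samples:', emg.shape[0])
print('recording length [s]:', t[-1])
&lt;/pre&gt;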

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;- relax21_raw_dvs.zip: contains the full-frame DVS events in an array with dimensions 0 -&amp;gt; addr_x, 1 -&amp;gt; addr_y, 2 -&amp;gt; timestamp, 3 -&amp;gt; polarity. The timestamps are in seconds and synchronized with the Myo. Each file is in the format `subjectXX_sessionYY_dvs` with `XX` subject ID (01 to 21) and `YY` session ID (01-03).&lt;/p&gt;
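
&lt;p&gt;A minimal sketch for working with one of the extracted event arrays. The file name and .npy extension are assumptions, and the 4 x T layout is assumed to match the cropped spiking dataset described below; the dimension ordering is as documented above.&lt;/p&gt;

&lt;pre&gt;
import numpy as np

# Hypothetical file name following the subjectXX_sessionYY_dvs pattern
events = np.load('subject01_session01_dvs.npy')

# Dimension 0: addr_x, 1: addr_y, 2: timestamp (s, Myo-synchronized), 3: polarity
addr_x, addr_y, timestamps, polarity = events[0], events[1], events[2], events[3]

print('number of events:', timestamps.shape[0])
print('recording duration [s]:', timestamps[-1] - timestamps[0])
&lt;/pre&gt;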

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;- relax21_cropped_aps.zip: contains the 40x40 pixel APS frames for all subjects and trials in the format `subjectXX_sessionYY_Z_W_K` with `XX` subject ID (01 to 21), `YY` session ID (01-03), `Z` gesture (&amp;#39;pinky&amp;#39;, &amp;#39;elle&amp;#39;, &amp;#39;yo&amp;#39;, &amp;#39;index&amp;#39;, &amp;#39;thumb&amp;#39;), `W` trial ID (1-5), and `K` frame index.&lt;/p&gt;
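
&lt;p&gt;A small sketch for grouping the cropped APS files by subject, session, gesture and trial based on the naming pattern above; the folder name and the wildcard extension in the glob pattern are assumptions, since this record does not state the frame file format.&lt;/p&gt;

&lt;pre&gt;
import glob
import os
from collections import defaultdict

trials = defaultdict(list)
# Folder name and '*' extension are assumptions
for path in sorted(glob.glob('relax21_cropped_aps/*')):
    name = os.path.splitext(os.path.basename(path))[0]
    # subjectXX_sessionYY_Z_W_K: subject, session, gesture, trial, frame index
    subject, session, gesture, trial, frame = name.split('_')
    trials[(subject, session, gesture, trial)].append(path)

print('number of (subject, session, gesture, trial) groups:', len(trials))
&lt;/pre&gt;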

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;- relax21_cropped_dvs_emg_spikes.pkl: spiking dataset that can be used to reproduce the results in the paper. The dataset is a dictionary with the following keys (a minimal loading sketch is given after the list):&lt;/p&gt;

&lt;ul&gt;
	&lt;li&gt;&lt;strong&gt;y&lt;/strong&gt;: array of size 1xN with the class (0-&amp;gt;4).&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;sub&lt;/strong&gt;: array of size 1xN with the subject id (1-&amp;gt;10).&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;sess&lt;/strong&gt;: array of size 1xN with the session id (1-&amp;gt;3).&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;dvs&lt;/strong&gt;: list of length N; each object in the list is a 2d array of size 4xT_n, where T_n is the number of events in the trial and the 4 dimensions represent: 0 -&amp;gt; addr_x, 1 -&amp;gt; addr_y, 2 -&amp;gt; timestamp, 3 -&amp;gt; polarity.&lt;/li&gt;
	&lt;li&gt;&lt;strong&gt;emg&lt;/strong&gt;: list of length N; each object in the list is a 2d array of size 3xT_n, where T_n is the number of events in the trial and the 3 dimensions represent: 0 -&amp;gt; addr, 1 -&amp;gt; timestamp, 2 -&amp;gt; polarity.&lt;/li&gt;
&lt;/ul&gt;
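
&lt;p&gt;A minimal loading sketch for the spiking dataset. The key names and array layouts are exactly as documented above; the selection of class 0 and subject 1 is just an example.&lt;/p&gt;

&lt;pre&gt;
import pickle
import numpy as np

with open('relax21_cropped_dvs_emg_spikes.pkl', 'rb') as f:
    data = pickle.load(f)

labels = np.asarray(data['y']).ravel()      # class per trial, 0 to 4
subjects = np.asarray(data['sub']).ravel()  # subject id per trial

# Example: iterate over all trials of class 0 performed by subject 1
for i in np.where(np.logical_and(labels == 0, subjects == 1))[0]:
    dvs_events = data['dvs'][i]  # 4 x T_n: addr_x, addr_y, timestamp, polarity
    emg_events = data['emg'][i]  # 3 x T_n: addr, timestamp, polarity
    print(i, dvs_events.shape, emg_events.shape)
&lt;/pre&gt;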

&lt;p&gt;&amp;nbsp;&lt;/p&gt;

&lt;p&gt;&amp;nbsp;&lt;/p&gt;</subfield>
  </datafield>
  <datafield tag="773" ind1=" " ind2=" ">
    <subfield code="n">doi</subfield>
    <subfield code="i">isVersionOf</subfield>
    <subfield code="a">10.5281/zenodo.3228845</subfield>
  </datafield>
  <datafield tag="024" ind1=" " ind2=" ">
    <subfield code="a">10.5281/zenodo.3663616</subfield>
    <subfield code="2">doi</subfield>
  </datafield>
  <datafield tag="980" ind1=" " ind2=" ">
    <subfield code="a">dataset</subfield>
  </datafield>
</record>
                   All versions   This version
Views                     1,975          1,063
Downloads                 6,094            451
Data volume              1.0 TB       449.9 GB
Unique views              1,643            923
Unique downloads          1,927            217
