Dataset Open Access
Ceolini, Enea; Taverni, Gemma; Payvand, Melika; Donati, Elisa
<?xml version='1.0' encoding='utf-8'?> <resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd"> <identifier identifierType="DOI">10.5281/zenodo.3663616</identifier> <creators> <creator> <creatorName>Ceolini, Enea</creatorName> <givenName>Enea</givenName> <familyName>Ceolini</familyName> <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-2676-0804</nameIdentifier> <affiliation>Institute of Neuroinformatics, UZH/ETH Zurich</affiliation> </creator> <creator> <creatorName>Taverni, Gemma</creatorName> <givenName>Gemma</givenName> <familyName>Taverni</familyName> <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-8951-3133</nameIdentifier> <affiliation>Institute of Neuroinformatics, UZH/ETH Zurich</affiliation> </creator> <creator> <creatorName>Payvand, Melika</creatorName> <givenName>Melika</givenName> <familyName>Payvand</familyName> <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0001-5400-067X</nameIdentifier> <affiliation>Institute of Neuroinformatics, UZH/ETH Zurich</affiliation> </creator> <creator> <creatorName>Donati, Elisa</creatorName> <givenName>Elisa</givenName> <familyName>Donati</familyName> <nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-8091-1298</nameIdentifier> <affiliation>Institute of Neuroinformatics, UZH/ETH Zurich</affiliation> </creator> </creators> <titles> <title>EMG and Video Dataset for sensor fusion based hand gestures recognition</title> </titles> <publisher>Zenodo</publisher> <publicationYear>2020</publicationYear> <subjects> <subject>EMG</subject> <subject>DVS</subject> <subject>DAVIS</subject> <subject>Hand gesture recognition</subject> <subject>Sensor fusion</subject> <subject>Myo</subject> </subjects> <dates> <date 
dateType="Issued">2020-02-12</date> </dates> <resourceType resourceTypeGeneral="Dataset"/> <alternateIdentifiers> <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3663616</alternateIdentifier> </alternateIdentifiers> <relatedIdentifiers> <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3228845</relatedIdentifier> </relatedIdentifiers> <version>3.0</version> <rightsList> <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights> <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights> </rightsList> <descriptions> <description descriptionType="Abstract"><p>This dataset contains data for hand gesture recognition recorded with 3 different sensors.</p> <p>sEMG: recorded with the Myo armband, which consists of 8 equally spaced non-invasive sEMG sensors placed approximately around the middle of the forearm. The sampling frequency of the Myo is 200 Hz; its output is in arbitrary units (a.u.).</p> <p>DVS: Dynamic Vision Sensor, a very low-power event-based camera with 128x128 resolution.</p> <p>DAVIS: Dynamic and Active Pixel Vision Sensor, a very low-power event-based camera with 240x180 resolution that also acquires APS frames.</p> <p>The dataset contains recordings of 21 subjects. Each subject performed 3 sessions, in each of which the 5 hand gestures were recorded 5 times, each repetition lasting 2 s. Between gestures, a 1 s relaxation phase lets the muscles return to the rest position, removing any residual muscular activation.</p> <p>&nbsp;</p> <p>Note: all the information from the DVS sensor has been extracted and can be found in the *.npy files.
If the raw data (.aedat) is needed, please contact</p> <p>&nbsp;</p> <p>enea.ceolini@ini.uzh.ch</p> <p>elisa@ini.uzh.ch</p> <p>==== README ====</p> <p>&nbsp;</p> <p>DATASET STRUCTURE:</p> <p>EMG, DVS and APS recordings</p> <p>21 subjects</p> <p>3 sessions for each subject</p> <p>5 gestures in each session (&#39;pinky&#39;, &#39;elle&#39;, &#39;yo&#39;, &#39;index&#39;, &#39;thumb&#39;)</p> <p>&nbsp;</p> <p>SINGLE DATASETS:</p> <p>- relax21_raw_emg.zip: contains the raw sEMG and annotations (ground truth of gestures) in files named `subjectXX_sessionYY_ZZZ`, with `XX` the subject ID (01-21), `YY` the session ID (01-03) and `ZZZ` either &#39;emg&#39; or &#39;ann&#39;.</p> <p>&nbsp;</p> <p>- relax21_raw_dvs.zip: contains the full-frame DVS events in an array whose dimensions are 0 -&gt; addr_x, 1 -&gt; addr_y, 2 -&gt; timestamp, 3 -&gt; polarity. The timestamps are in seconds and synchronized with the Myo. Each file is named `subjectXX_sessionYY_dvs`, with `XX` the subject ID (01-21) and `YY` the session ID (01-03).</p> <p>&nbsp;</p> <p>- relax21_cropped_aps.zip: contains the 40x40 pixel APS frames for all subjects and trials in files named `subjectXX_sessionYY_Z_W_K`, with `XX` the subject ID (01-21), `YY` the session ID (01-03), `Z` the gesture (&#39;pinky&#39;, &#39;elle&#39;, &#39;yo&#39;, &#39;index&#39;, &#39;thumb&#39;), `W` the trial ID (1-5) and `K` the frame index.</p> <p>&nbsp;</p> <p>- relax21_cropped_dvs_emg_spikes.pkl: spiking dataset that can be used to reproduce the results in the paper.
The dataset is a dictionary with the following keys:</p> <ul> <li><strong>y</strong>: array of size 1xN with the class label (0-&gt;4).</li> <li><strong>sub</strong>: array of size 1xN with the subject ID (1-&gt;10).</li> <li><strong>sess</strong>: array of size 1xN with the session ID (1-&gt;3).</li> <li><strong>dvs</strong>: list of length N; each object in the list is a 2D array of size 4xT_n, where T_n is the number of events in the trial and the 4 dimensions represent: 0 -&gt; addr_x, 1 -&gt; addr_y, 2 -&gt; timestamp, 3 -&gt; polarity.</li> <li><strong>emg</strong>: list of length N; each object in the list is a 2D array of size 3xT_n, where T_n is the number of events in the trial and the 3 dimensions represent: 0 -&gt; addr, 1 -&gt; timestamp, 2 -&gt; polarity.</li> </ul> <p>&nbsp;</p></description> </descriptions> <fundingReferences> <fundingReference> <funderName>European Commission</funderName> <funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/100010661</funderIdentifier> <awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/753470/">753470</awardNumber> <awardTitle>Neuromorphic EMG Processing with Spiking Neural Networks</awardTitle> </fundingReference> </fundingReferences> </resource>
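The dictionary layout of relax21_cropped_dvs_emg_spikes.pkl described in the README can be sketched in Python. The snippet below builds a tiny synthetic stand-in (random values, N = 2 trials, not real data) and round-trips it through pickle the way the released file would be loaded:

```python
import pickle
import numpy as np

# Synthetic stand-in for the released .pkl, matching the README's key layout.
rng = np.random.default_rng(0)
data = {
    "y":    np.array([0, 3]),                            # class labels, 0-4
    "sub":  np.array([1, 2]),                            # subject IDs
    "sess": np.array([1, 1]),                            # session IDs
    "dvs":  [rng.random((4, 100)), rng.random((4, 80))],  # 4 x T_n per trial
    "emg":  [rng.random((3, 50)),  rng.random((3, 40))],  # 3 x T_n per trial
}

# Round-trip through pickle, as when loading the released file from disk.
loaded = pickle.loads(pickle.dumps(data))

# One DVS trial: rows are addr_x, addr_y, timestamp, polarity.
dvs_trial = loaded["dvs"][0]
addr_x, addr_y, t, pol = dvs_trial  # unpack the 4 event dimensions
print(dvs_trial.shape)  # (4, 100)
```

For the actual file, replace the synthetic dictionary with `pickle.load(open("relax21_cropped_dvs_emg_spikes.pkl", "rb"))`; the key access pattern is the same.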
| | All versions | This version |
|---|---|---|
| Views | 1,988 | 1,072 |
| Downloads | 6,106 | 461 |
| Data volume | 1.0 TB | 461.0 GB |
| Unique views | 1,654 | 931 |
| Unique downloads | 1,935 | 223 |