Dataset (Open Access)

EMG and Video Dataset for sensor fusion based hand gestures recognition

Ceolini, Enea; Taverni, Gemma; Payvand, Melika; Donati, Elisa


JSON Export

{
  "files": [
    {
      "links": {
        "self": "https://zenodo.org/api/files/102a63cc-433e-49b4-be9f-3ee2f482b38e/relax21_cropped_aps.zip"
      }, 
      "checksum": "md5:0aca5c68eb7cecffcd9f2e3a53cb9124", 
      "bucket": "102a63cc-433e-49b4-be9f-3ee2f482b38e", 
      "key": "relax21_cropped_aps.zip", 
      "type": "zip", 
      "size": 186878096
    }, 
    {
      "links": {
        "self": "https://zenodo.org/api/files/102a63cc-433e-49b4-be9f-3ee2f482b38e/relax21_cropped_dvs_emg_spikes.pkl"
      }, 
      "checksum": "md5:8bda33fec74cb4f49a05c6a89eef7163", 
      "bucket": "102a63cc-433e-49b4-be9f-3ee2f482b38e", 
      "key": "relax21_cropped_dvs_emg_spikes.pkl", 
      "type": "pkl", 
      "size": 2049159861
    }, 
    {
      "links": {
        "self": "https://zenodo.org/api/files/102a63cc-433e-49b4-be9f-3ee2f482b38e/relax21_raw_dvs.zip"
      }, 
      "checksum": "md5:e767b6b586c8f6919f945d770fbac529", 
      "bucket": "102a63cc-433e-49b4-be9f-3ee2f482b38e", 
      "key": "relax21_raw_dvs.zip", 
      "type": "zip", 
      "size": 1592923718
    }, 
    {
      "links": {
        "self": "https://zenodo.org/api/files/102a63cc-433e-49b4-be9f-3ee2f482b38e/relax21_raw_emg.zip"
      }, 
      "checksum": "md5:9fab8c87e8f04a56c3fdcb1797644885", 
      "bucket": "102a63cc-433e-49b4-be9f-3ee2f482b38e", 
      "key": "relax21_raw_emg.zip", 
      "type": "zip", 
      "size": 12711873
    }
  ], 
  "owners": [
    68055
  ], 
  "doi": "10.5281/zenodo.3663616", 
  "stats": {
    "version_unique_downloads": 1927.0, 
    "unique_views": 925.0, 
    "views": 1065.0, 
    "version_views": 1979.0, 
    "unique_downloads": 217.0, 
    "version_unique_views": 1647.0, 
    "volume": 449910815210.0, 
    "version_downloads": 6094.0, 
    "downloads": 451.0, 
    "version_volume": 1004042118084.0
  }, 
  "links": {
    "doi": "https://doi.org/10.5281/zenodo.3663616", 
    "conceptdoi": "https://doi.org/10.5281/zenodo.3228845", 
    "bucket": "https://zenodo.org/api/files/102a63cc-433e-49b4-be9f-3ee2f482b38e", 
    "conceptbadge": "https://zenodo.org/badge/doi/10.5281/zenodo.3228845.svg", 
    "html": "https://zenodo.org/record/3663616", 
    "latest_html": "https://zenodo.org/record/3663616", 
    "badge": "https://zenodo.org/badge/doi/10.5281/zenodo.3663616.svg", 
    "latest": "https://zenodo.org/api/records/3663616"
  }, 
  "conceptdoi": "10.5281/zenodo.3228845", 
  "created": "2020-02-12T10:15:46.808491+00:00", 
  "updated": "2020-02-12T19:21:06.424946+00:00", 
  "conceptrecid": "3228845", 
  "revision": 2, 
  "id": 3663616, 
  "metadata": {
    "access_right_category": "success", 
    "doi": "10.5281/zenodo.3663616", 
    "version": "3.0", 
    "license": {
      "id": "CC-BY-4.0"
    }, 
    "title": "EMG and Video Dataset for sensor fusion based hand gestures recognition", 
    "related_identifiers": [
      {
        "scheme": "doi", 
        "identifier": "10.5281/zenodo.3228845", 
        "relation": "isVersionOf"
      }
    ], 
    "relations": {
      "version": [
        {
          "count": 3, 
          "index": 2, 
          "parent": {
            "pid_type": "recid", 
            "pid_value": "3228845"
          }, 
          "is_last": true, 
          "last_child": {
            "pid_type": "recid", 
            "pid_value": "3663616"
          }
        }
      ]
    }, 
    "grants": [
      {
        "code": "753470", 
        "links": {
          "self": "https://zenodo.org/api/grants/10.13039/501100000780::753470"
        }, 
        "title": "Neuromorphic EMG Processing with Spiking Neural Networks", 
        "acronym": "NEPSpiNN", 
        "program": "H2020", 
        "funder": {
          "doi": "10.13039/501100000780", 
          "acronyms": [], 
          "name": "European Commission", 
          "links": {
            "self": "https://zenodo.org/api/funders/10.13039/501100000780"
          }
        }
      }
    ], 
    "keywords": [
      "EMG", 
      "DVS", 
      "DAVIS", 
      "Hand gesture recognition", 
      "Sensor fusion", 
      "Myo"
    ], 
    "publication_date": "2020-02-12", 
    "creators": [
      {
        "orcid": "0000-0002-2676-0804", 
        "affiliation": "Institute of Neuroinformatics, UZH/ETH Zurich", 
        "name": "Ceolini, Enea"
      }, 
      {
        "orcid": "0000-0001-8951-3133", 
        "affiliation": "Institute of Neuroinformatics, UZH/ETH Zurich", 
        "name": "Taverni, Gemma"
      }, 
      {
        "orcid": "0000-0001-5400-067X", 
        "affiliation": "Institute of Neuroinformatics, UZH/ETH Zurich", 
        "name": "Payvand, Melika"
      }, 
      {
        "orcid": "0000-0002-8091-1298", 
        "affiliation": "Institute of Neuroinformatics, UZH/ETH Zurich", 
        "name": "Donati, Elisa"
      }
    ], 
    "access_right": "open", 
    "resource_type": {
      "type": "dataset", 
      "title": "Dataset"
    }, 
    "description": "<p>This dataset contains data for hand gesture recognition recorded with 3 different sensors.&nbsp;</p>\n\n<p>sEMG: recorded via the Myo armband that is composed of 8 equally spaced non-invasive sEMG sensors that can be placed approximately around the middle of the forearm. The sampling frequency of Myo is 200 Hz. The output of the Myo is a.u&nbsp;</p>\n\n<p>DVS: Dynamic Video Sensor which is a very low power event-based camera with 128x128 resolution</p>\n\n<p>DAVIS: Dynamic Video Sensor which is a very low power event-based camera with 240x180 resolution that also acquires APS frames.</p>\n\n<p>The dataset contains recordings of 21 subjects. Each subject performed 3 sessions, where each of the 5 hand gesture was recorded 5 times, each lasting for 2s. Between the gestures a relaxing phase of 1s is present where the muscles could go to the rest position, removing any residual muscular activation.</p>\n\n<p>&nbsp;</p>\n\n<p>Note: All the information for the DVS sensor has been extracted and can be found in the *.npy files. In case the raw data (.aedat) was needed please contact</p>\n\n<p>&nbsp;</p>\n\n<p>enea.ceolini@ini.uzh.ch</p>\n\n<p>elisa@ini.uzh.ch</p>\n\n<p>==== README ====</p>\n\n<p>&nbsp;</p>\n\n<p>DATASET STRUCTURE:</p>\n\n<p>EMG, DVS and APS recordings</p>\n\n<p>21 subjects</p>\n\n<p>3 sessions for each subject</p>\n\n<p>5 gestures in each session (&#39;pinky&#39;, &#39;elle&#39;, &#39;yo&#39;, &#39;index&#39;, &#39;thumb&#39;)</p>\n\n<p>&nbsp;</p>\n\n<p>SINGLE DATASETS:</p>\n\n<p>- relax21_raw_emg.zip: contains raw sEMG and annotations (ground truth of gestures) in the format `subjectXX_sessionYY_ZZZ` with `XX` subject ID (01 to 21), `YY` session ID (01-03) and `ZZZ` that can be &lsquo;emg&rsquo; or &lsquo;ann&rsquo;.</p>\n\n<p>&nbsp;</p>\n\n<p>- relax21_raw_dvs.zip: contains the full-frame dvs events in an array with dimensions 0 -&gt; addr_x, 1 -&gt; addr_y, 2 -&gt; timestamp, 3 -&gt; polarity. The timestamps are in seconds and synchronized with the Myo. Each file is in the format `subjectXX_sessionYY_dvs` with `XX` subject ID (01 to 21), `YY` session ID (01-03).</p>\n\n<p>&nbsp;</p>\n\n<p>- relax21_cropped_aps.zip: contains the 40x40 pixel aps frames for all subjects and trials in the format `subjectXX_sessionYY_Z_W_K` with `XX` subject ID (01 to 21), `YY` session ID (01-03), Z gesture (&#39;pinky&#39;, &#39;elle&#39;, &#39;yo&#39;, &#39;index&#39;, &#39;thumb&rsquo;), W trial ID (1-5), `K` frame index.</p>\n\n<p>&nbsp;</p>\n\n<p>- relax21_cropped_dvs_emg_spikes.pkl: spiking dataset that can be used to reproduce the results in the paper. The dataset is a dictionary with the following keys:</p>\n\n<ul>\n\t<li><strong>- </strong><strong>y</strong>: array of size 1xN with the class (0-&gt;4).</li>\n\t<li><strong>- </strong><strong>sub</strong>: array of size 1xN with the subject id (1-&gt;10).</li>\n\t<li><strong>- </strong><strong>sess</strong>: array of size 1xN with the session id (1-&gt;3).</li>\n\t<li><strong>- </strong><strong>dvs</strong>: list of length N, each object in the list is a 2d array of size 4xT_n where T_n is the number of events in the trial and the 4 dimensions rappresent: 0 -&gt; addr_x, 1 -&gt; addr_y, 2 -&gt; timestamp, 3 -&gt; polarity .</li>\n\t<li><strong>- </strong><strong>emg</strong>: list of length N, each object in the list is a 2d array of size 3xT_n where T_n is the number of events in the trial and the 3 dimensions rappresent: 0 -&gt; addr, 1 -&gt; timestamp, 3 -&gt; polarity.</li>\n</ul>\n\n<p>&nbsp;</p>\n\n<p>&nbsp;</p>"
  }
}
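
The archives listed in the "files" block can be fetched directly from their "links.self" URLs and verified against the "checksum" values. The following is a minimal sketch, assuming Python 3 with the requests package installed; the URL and MD5 digest are copied from the relax21_raw_emg.zip entry above, and the streaming/verification approach is an illustration rather than an official download client. If Zenodo changes its file API, use the "links.self" URL from a freshly exported copy of the record instead.

import hashlib

import requests

# File entry copied from the "files" list in the JSON export above.
URL = "https://zenodo.org/api/files/102a63cc-433e-49b4-be9f-3ee2f482b38e/relax21_raw_emg.zip"
EXPECTED_MD5 = "9fab8c87e8f04a56c3fdcb1797644885"
LOCAL_PATH = "relax21_raw_emg.zip"


def download_and_verify(url, expected_md5, local_path):
    """Stream the file to disk and compare it against the record's MD5 checksum."""
    md5 = hashlib.md5()
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(local_path, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MiB chunks
                fh.write(chunk)
                md5.update(chunk)
    if md5.hexdigest() != expected_md5:
        raise ValueError("checksum mismatch: got %s, expected %s"
                         % (md5.hexdigest(), expected_md5))


if __name__ == "__main__":
    download_and_verify(URL, EXPECTED_MD5, LOCAL_PATH)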
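
The README in the description above documents relax21_cropped_dvs_emg_spikes.pkl as a dictionary with keys y, sub, sess, dvs and emg. Below is a minimal loading sketch, assuming the file is a standard Python pickle already downloaded to the working directory and that NumPy is available; the shapes noted in the comments follow the README rather than an inspection of the file.

import pickle

import numpy as np

# Load the spiking dataset described in the README above.
with open("relax21_cropped_dvs_emg_spikes.pkl", "rb") as fh:
    data = pickle.load(fh)

labels = np.asarray(data["y"]).ravel()       # gesture class per trial (0-4)
subjects = np.asarray(data["sub"]).ravel()   # subject id per trial
sessions = np.asarray(data["sess"]).ravel()  # session id per trial (1-3)
print(labels.size, "trials from", len(np.unique(subjects)), "subjects")

# Per the README, each trial holds event streams with rows
# DVS: addr_x, addr_y, timestamp, polarity and EMG: addr, timestamp, polarity.
dvs_events = np.asarray(data["dvs"][0])  # shape (4, T_0)
emg_events = np.asarray(data["emg"][0])  # shape (3, T_0)
print("trial 0:", dvs_events.shape[1], "DVS events,",
      emg_events.shape[1], "EMG spike events")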
Statistics

                     All versions    This version
Views                       1,979           1,065
Downloads                   6,094             451
Data volume                1.0 TB        449.9 GB
Unique views                1,647             925
Unique downloads            1,927             217


Cite as

Ceolini, E., Taverni, G., Payvand, M., & Donati, E. (2020). EMG and Video Dataset for sensor fusion based hand gestures recognition (3.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.3663616