Dataset · Open Access

EEG and audio dataset for auditory attention decoding

Fuglsang, Søren A.; Wong, Daniel D.E.; Hjortkjær, Jens


JSON-LD (schema.org) Export

{
  "description": "<p>This dataset contains EEG recordings from 18 subjects listening to one of two competing speech audio streams. Continuous speech in trials of ~50 sec. was presented to normal hearing listeners in simulated rooms with different degrees of reverberation. Subjects were asked to attend one of two spatially separated speakers (one male, one female) and ignore the other. Repeated trials with presentation of a single talker were also recorded. The data were recorded in a double-walled soundproof booth at the Technical University of Denmark (DTU) using a 64-channel Biosemi system and digitized at a sampling rate of 512 Hz. Full details can be found in:</p>\n\n<ul>\n\t<li><strong>S&oslash;ren A. Fuglsang, Torsten Dau &amp; Jens Hjortkj&aelig;r (2017):&nbsp;Noise-robust cortical tracking of attended speech in real-life environments. <em>NeuroImage</em>, 156, 435-444</strong></li>\n</ul>\n\n<p>and</p>\n\n<ul>\n\t<li><strong>Daniel D.E. Wong, S&oslash;ren A. Fuglsang, Jens Hjortkj&aelig;r, Enea Ceolini, Malcolm Slaney &amp; Alain de Cheveign&eacute;: A Comparison of Temporal Response Function Estimation Methods for Auditory Attention Decoding. Frontiers in Neuroscience,&nbsp;</strong><a href=\"https://doi.org/10.3389/fnins.2018.00531\">https://doi.org/10.3389/fnins.2018.00531</a></li>\n</ul>\n\n<p>The data is organized in format of the publicly available <a href=\"https://zenodo.org/record/1198430\">COCOHA Matlab Toolbox</a>. The preproc_script.m demonstrates how to import and align the EEG and audio data. The script also demonstrates some EEG preprocessing steps as used the Wong et al. paper above. The AUDIO.zip contains wav-files with the speech audio used in the experiment. The EEG.zip contains MAT-files with the EEG/EOG data for each subject. The EEG/EOG data are found in <strong>data.eeg</strong> with the following channels:</p>\n\n<ul>\n\t<li>channels 1-64: scalp EEG electrodes</li>\n\t<li>channel 65: right mastoid electrode</li>\n\t<li>channel 66: left mastoid electrode</li>\n\t<li>channel 67: vertical EOG below right eye</li>\n\t<li>channel 68: horizontal EOG right eye</li>\n\t<li>channel 69: vertical EOG above right eye</li>\n\t<li>channel 70: vertical EOG below left eye</li>\n\t<li>channel 71: horizontal EOG left eye</li>\n\t<li>channel 72: vertical EOG above left eye</li>\n</ul>\n\n<p>The <strong>expinfo</strong> table contains information about experimental conditions, including what what speaker the listener was attending to in different trials. The expinfo table contains the following information:</p>\n\n<ul>\n\t<li>attend_mf: attended speaker (1=male, 2=female)</li>\n\t<li>attend_lr: spatial position of the attended speaker (1=left, 2=right)</li>\n\t<li>acoustic_condition: type of acoustic room (1= anechoic, 2= mild reverberation, 3= high reverberation, see Fuglsang et al. for details)</li>\n\t<li>n_speakers: number of speakers presented (1 or 2)</li>\n\t<li>wavfile_male: name of presented audio wav-file for the male speaker</li>\n\t<li>wavfile_female: name of presented audio wav-file for the female speaker (if any)</li>\n\t<li>trigger: trigger event value for each trial also found in data.event.eeg.value</li>\n</ul>\n\n<p>DATA_preproc.zip contains the preprocessed EEG and audio data as output from preproc_script.m.</p>\n\n<p>The dataset was created within the <a href=\"https://cocoha.org/\">COCOHA</a><a href=\"https://cocoha.org/\"> Project</a>: Cognitive Control of a Hearing Aid</p>", 
  "license": "https://creativecommons.org/licenses/by-nc/4.0/legalcode", 
  "creator": [
    {
      "affiliation": "Hearing Systems Group, Department of Electrical Engineering, Danmarks Tekniske Universitet, Kgs. Lyngby, Denmark", 
      "@id": "https://orcid.org/0000-0001-8111-8665", 
      "@type": "Person", 
      "name": "Fuglsang, S\u00f8ren A."
    }, 
    {
      "affiliation": "Laboratoire des Syst\u00e8mes Perceptifs, Ecole Normale Sup\u00e9rieure, UMR 8248, CNRS, Paris, France", 
      "@id": "https://orcid.org/0000-0002-7781-1149", 
      "@type": "Person", 
      "name": "Wong, Daniel D.E."
    }, 
    {
      "affiliation": "Hearing Systems Group, Department of Electrical Engineering, Danmarks Tekniske Universitet, Kgs. Lyngby, Denmark", 
      "@id": "https://orcid.org/0000-0003-3724-3332", 
      "@type": "Person", 
      "name": "Hjortkj\u00e6r, Jens"
    }
  ], 
  "url": "https://zenodo.org/record/1199011", 
  "datePublished": "2018-03-15", 
  "version": "1", 
  "keywords": [
    "EEG, electroencephalography, neuroscience, auditory attention, decoding"
  ], 
  "@context": "https://schema.org/", 
  "distribution": [
    {
      "contentUrl": "https://zenodo.org/api/files/048a9989-6198-4389-817c-655f0ba0a4f0/AUDIO.zip", 
      "encodingFormat": "zip", 
      "@type": "DataDownload"
    }, 
    {
      "contentUrl": "https://zenodo.org/api/files/048a9989-6198-4389-817c-655f0ba0a4f0/DATA_preproc.zip", 
      "encodingFormat": "zip", 
      "@type": "DataDownload"
    }, 
    {
      "contentUrl": "https://zenodo.org/api/files/048a9989-6198-4389-817c-655f0ba0a4f0/EEG.zip", 
      "encodingFormat": "zip", 
      "@type": "DataDownload"
    }, 
    {
      "contentUrl": "https://zenodo.org/api/files/048a9989-6198-4389-817c-655f0ba0a4f0/preproc_data.m", 
      "encodingFormat": "m", 
      "@type": "DataDownload"
    }
  ], 
  "identifier": "https://doi.org/10.5281/zenodo.1199011", 
  "@id": "https://doi.org/10.5281/zenodo.1199011", 
  "@type": "Dataset", 
  "name": "EEG and audio dataset for auditory attention decoding"
}
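
The channel layout and expinfo fields documented above translate directly into MATLAB indexing. Below is a minimal sketch of fetching and reading one subject's recording. It assumes the MAT-file variables are named data and expinfo as in the description, that data.eeg holds a samples-by-channels matrix (if the data are stored per trial, index one trial first), and that the file is called S1.mat (a hypothetical name); preproc_script.m in this record remains the authoritative import and alignment reference.

    % Fetch and unpack the EEG archive using the contentUrl from the record
    % (requires internet access; websave needs MATLAB R2014b or later)
    websave('EEG.zip', ...
        'https://zenodo.org/api/files/048a9989-6198-4389-817c-655f0ba0a4f0/EEG.zip');
    unzip('EEG.zip', 'EEG');

    % Load one subject's MAT-file (the file name inside EEG.zip is assumed)
    S = load(fullfile('EEG', 'S1.mat'));
    data    = S.data;                  % EEG/EOG struct described above
    expinfo = S.expinfo;               % per-trial condition table

    fs = 512;                          % sampling rate stated in the description

    % Split the 72 channels according to the documented layout
    eeg_scalp = data.eeg(:, 1:64);     % scalp EEG electrodes
    mastoids  = data.eeg(:, 65:66);    % right and left mastoid
    eog       = data.eeg(:, 67:72);    % vertical/horizontal EOG channels

    % Example: select two-talker trials where the listener attended the
    % male speaker presented on the left
    sel = expinfo.n_speakers == 2 & ...
          expinfo.attend_mf == 1 & ...
          expinfo.attend_lr == 1;
    disp(expinfo.wavfile_male(sel));   % matching wav-files in AUDIO.zip

The logical mask works because expinfo is documented as a table, whose columns support element-wise comparison; the selected wavfile_male entries identify the attended audio streams in AUDIO.zip for those trials.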