Dataset (Open Access)

EEG and audio dataset for auditory attention decoding

Fuglsang, Søren A.; Wong, Daniel D.E.; Hjortkjær, Jens


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Fuglsang, Søren A.</dc:creator>
  <dc:creator>Wong, Daniel D.E.</dc:creator>
  <dc:creator>Hjortkjær, Jens</dc:creator>
  <dc:date>2018-03-15</dc:date>
  <dc:description>This dataset contains EEG recordings from 18 subjects listening to one of two competing speech audio streams. Continuous speech in trials of ~50 sec. was presented to normal-hearing listeners in simulated rooms with different degrees of reverberation. Subjects were asked to attend to one of two spatially separated speakers (one male, one female) and to ignore the other. Repeated trials with presentation of a single talker were also recorded. The data were recorded in a double-walled soundproof booth at the Technical University of Denmark (DTU) using a 64-channel Biosemi system and digitized at a sampling rate of 512 Hz. Full details can be found in:


	Søren A. Fuglsang, Torsten Dau &amp; Jens Hjortkjær (2017): Noise-robust cortical tracking of attended speech in real-life environments. NeuroImage, 156, 435-444


and


	Daniel D.E. Wong, Søren A. Fuglsang, Jens Hjortkjær, Enea Ceolini, Malcolm Slaney &amp; Alain de Cheveigné (2018): A Comparison of Temporal Response Function Estimation Methods for Auditory Attention Decoding. Frontiers in Neuroscience, https://doi.org/10.3389/fnins.2018.00531


The data are organized in the format of the publicly available COCOHA Matlab Toolbox. The script preproc_script.m demonstrates how to import and align the EEG and audio data, and it also demonstrates some of the EEG preprocessing steps used in the Wong et al. paper above. AUDIO.zip contains wav-files with the speech audio used in the experiment. EEG.zip contains MAT-files with the EEG/EOG data for each subject. The EEG/EOG data are found in data.eeg with the following channels (a minimal loading sketch follows the channel list):


	channels 1-64: scalp EEG electrodes
	channel 65: right mastoid electrode
	channel 66: left mastoid electrode
	channel 67: vertical EOG below right eye
	channel 68: horizontal EOG right eye
	channel 69: vertical EOG above right eye
	channel 70: vertical EOG below left eye
	channel 71: horizontal EOG left eye
	channel 72: vertical EOG above left eye
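
As a minimal MATLAB sketch, the channel layout above might be indexed as follows after loading one subject's file. The file name S1.mat and the assumption that data.eeg is a samples-by-channels matrix are illustrative only; preproc_script.m is the authoritative import example.

	% Load one subject's recording (file name assumed for illustration)
	S   = load('S1.mat');        % MAT-file from EEG.zip, assumed to hold the 'data' struct
	raw = S.data.eeg;            % assumed samples x 72 channels, fs = 512 Hz

	scalp    = raw(:, 1:64);     % scalp EEG electrodes
	mastoids = raw(:, 65:66);    % right and left mastoid
	eog      = raw(:, 67:72);    % vertical and horizontal EOG

	% Example: re-reference scalp channels to the mastoid average
	% (uses implicit expansion, MATLAB R2016b or later)
	scalpReref = scalp - mean(mastoids, 2);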


The expinfo table contains information about the experimental conditions, including which speaker the listener was attending to in each trial. It has the following fields (a trial-selection sketch follows the list):


	attend_mf: attended speaker (1=male, 2=female)
	attend_lr: spatial position of the attended speaker (1=left, 2=right)
	acoustic_condition: type of acoustic room (1 = anechoic, 2 = mild reverberation, 3 = high reverberation; see Fuglsang et al. for details)
	n_speakers: number of speakers presented (1 or 2)
	wavfile_male: name of presented audio wav-file for the male speaker
	wavfile_female: name of presented audio wav-file for the female speaker (if any)
	trigger: trigger event value for each trial, also found in data.event.eeg.value
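
As a minimal MATLAB sketch, trials can be selected via these fields, assuming expinfo has already been loaded as a MATLAB table (its exact location within the MAT-files is not restated here).

	% Keep only two-talker trials, then anechoic trials, then those where
	% the male speaker was attended (stepwise to keep the logic explicit)
	twoTalker = expinfo(expinfo.n_speakers == 2, :);
	anechoic  = twoTalker(twoTalker.acoustic_condition == 1, :);
	maleAtt   = anechoic(anechoic.attend_mf == 1, :);

	% Wav-files carrying the attended (male) stream in those trials
	attendedWavs = maleAtt.wavfile_male;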


DATA_preproc.zip contains the preprocessed EEG and audio data as output from preproc_script.m.

The dataset was created within the COCOHA Project: Cognitive Control of a Hearing Aid</dc:description>
  <dc:identifier>https://zenodo.org/record/1199011</dc:identifier>
  <dc:identifier>10.5281/zenodo.1199011</dc:identifier>
  <dc:identifier>oai:zenodo.org:1199011</dc:identifier>
  <dc:relation>info:eu-repo/grantAgreement/EC/H2020/644732/</dc:relation>
  <dc:relation>doi:10.1016/j.neuroimage.2017.04.026</dc:relation>
  <dc:relation>doi:10.1101/281345</dc:relation>
  <dc:relation>doi:10.3389/fnins.2018.00531</dc:relation>
  <dc:relation>doi:10.5281/zenodo.1199010</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/cocoha</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>http://creativecommons.org/licenses/by-nc/4.0/legalcode</dc:rights>
  <dc:subject>EEG, electroencephalography, neuroscience, auditory attention, decoding</dc:subject>
  <dc:title>EEG and audio dataset for auditory attention decoding</dc:title>
  <dc:type>info:eu-repo/semantics/other</dc:type>
  <dc:type>dataset</dc:type>
</oai_dc:dc>