Dataset Open Access

# Auditory Attention Detection Dataset KULeuven

Das, Neetha; Francart, Tom; Bertrand, Alexander

##### Contact person(s)
Francart, Tom; Bertrand, Alexander
##### Data collector(s)
Das, Neetha

This work was done at ExpORL, Dept. Neurosciences, KULeuven and Dept. Electrical Engineering (ESAT), KULeuven.

This dataset contains EEG data collected from 16 normal-hearing subjects. EEG recordings were made in a soundproof, electromagnetically shielded room at ExpORL, KULeuven. The BioSemi ActiveTwo system was used to record 64-channel EEG signals at a sampling rate of 8192 Hz. The audio signals, low-pass filtered at 4 kHz, were presented to each subject at 60 dBA through a pair of insert phones (Etymotic ER3A). The experiments were conducted using the APEX 3 program developed at ExpORL [1].

Four Dutch short stories [2], each narrated by a different male speaker, were used as stimuli. All silences longer than 500 ms in the audio files were truncated to 500 ms. Each story was divided into two parts of approximately 6 minutes each. During a presentation, the subjects heard six-minute parts of two (of the four) stories played simultaneously. There were two stimulus conditions: 'HRTF' and 'dry' (dichotic). An experiment is defined here as a sequence of four presentations, balanced over stimulus condition and ear of stimulation, with questions asked to the subject after each presentation. All subjects sat through three experiments within a single recording session; an example experiment design is shown in Table 1 in [3]. The first two experiments included four presentations each. During a presentation, the subjects were instructed to listen to the story in one ear while ignoring the story in the other ear. After each presentation, the subjects answered a set of multiple-choice questions about the attended story, to help them stay motivated to focus on the task. In the next presentation, the subjects heard the next part of the two stories, this time attending to the other ear. In this manner, one experiment consisted of four presentations in which the subjects listened to a total of two stories, switching the attended ear between presentations. The second experiment had the same design but used the other two stories. Note that the table was different for each subject and recording session: its elements were permuted between recording sessions to ensure that the different conditions (stimulus condition and attended ear) were equally distributed over the four presentations. Finally, the third experiment consisted of the first two minutes of each story part from the first experiment, i.e., a total of four shorter presentations, repeated three times, to build a set of recordings of repetitions. In total, approximately 72 minutes of EEG was recorded per subject.

We refer to the EEG recorded during each presentation as a trial. For each subject, we recorded 20 trials: 4 from the first experiment, 4 from the second experiment, and 12 from the third experiment (the first 2 minutes of the 4 presentations from experiment 1, times 3 repetitions). The EEG data is stored in subject-specific .mat files named 'Sx', with 'x' the subject number. The audio data is stored as .wav files in the folder 'stimuli'. Please note that the stories were not of equal length, and the subjects were allowed to finish listening to a story even in cases where the competing story had already ended. Therefore, for each trial, we suggest using the length of the EEG recording to truncate the end of the corresponding audio data. This ensures that the processed data (EEG and audio) contains only competing-talker scenarios. Each trial was high-pass filtered (0.5 Hz cut-off) and downsampled from the recorded sampling rate of 8192 Hz to 128 Hz.
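The suggested truncation can be sketched as follows (in Python rather than MATLAB, purely for illustration; the function name and the `fs_audio` default are ours, and the EEG array is assumed to be samples × channels at 128 Hz):

```python
import numpy as np

def truncate_audio_to_eeg(eeg, audio, fs_eeg=128, fs_audio=8000):
    """Trim a mono audio track so it spans the same duration as the EEG trial.

    eeg: array of shape (samples, channels), recorded at fs_eeg Hz.
    audio: 1-D array of audio samples at fs_audio Hz (the 8000 Hz default
    is illustrative, not necessarily the rate of the dataset's .wav files).
    """
    duration = eeg.shape[0] / fs_eeg           # trial length in seconds
    n_audio = int(round(duration * fs_audio))  # matching number of audio samples
    return audio[:n_audio]
```

Applied per trial, this keeps only the portion of the audio for which competing-talker EEG exists.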

Each trial (trial*.mat) contains the following information:

- RawData.Channels: channel numbers (1 to 64).
- RawData.EegData: EEG data (samples × channels).
- FileHeader.SampleRate: sampling frequency of the saved data.
- TrialID: a number between 1 and 20 indicating the trial number.
- attended_ear: the direction of the subject's attention: 'L' for left, 'R' for right.
- stimuli: cell array in which stimuli{1} and stimuli{2} give the names of the audio files presented in the left and right ear of the subject, respectively.
- condition: stimulus presentation condition. 'HRTF': stimuli were filtered with head-related transfer functions to simulate audio coming from 90 degrees to the left and 90 degrees to the right of the subject; 'dry': a dichotic presentation in which one story track was presented separately to each of the left and right earphones.
- experiment: the number of the experiment (1, 2, or 3).
- part: part of the story track being presented (1 to 4 for experiments 1 and 2, and 1 to 12 for experiment 3).
- attended_track: the attended story track: '1' for track 1, '2' for track 2. Each track maintains the continuity of its story. In experiment 1, attention is always on track 1; in experiment 2, always on track 2.
- repetition: binary variable indicating whether the trial is a repetition (of previously presented stimuli) or not.
- subject: subject ID of the format 'Sx', 'x' being the subject number.
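As an illustration of how these fields combine, here is a small Python sketch (a hypothetical helper; loading the actual .mat files, e.g. with scipy.io.loadmat, is left out, and a plain dict stands in for the loaded trial struct) that picks the attended audio file from the attended_ear and stimuli fields:

```python
def attended_stimulus(trial):
    """Return the name of the audio file presented to the attended ear.

    `trial` is assumed to be a plain dict mirroring the fields above:
    stimuli[0] is the left-ear file, stimuli[1] the right-ear file.
    """
    index = 0 if trial["attended_ear"] == "L" else 1
    return trial["stimuli"][index]

# Hypothetical trial, with field values following the description above:
trial = {
    "attended_ear": "R",
    "stimuli": ["part1_track1_dry.wav", "part1_track2_dry.wav"],
}
```

The attended track number could be cross-checked against the attended_track field in the same way.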

The 'stimuli' folder contains .wav files named part{part number}_track{track number}_{condition}.wav. Although the folder also contains the HRTF-filtered stimuli, the analysis assumed knowledge of the original clean stimuli (i.e., the stimuli presented in the 'dry' condition), so envelopes were extracted only from the part{part number}_track{track number}_dry.wav files.
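The naming convention can be captured in a one-line helper (a sketch; the function name is ours):

```python
def dry_stimulus_filename(part, track):
    """Build the file name of a clean ('dry') stimulus, following the
    part{part}_track{track}_dry.wav convention of the 'stimuli' folder."""
    return f"part{part}_track{track}_dry.wav"
```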

The MATLAB script 'preprocess_data.m' gives an example of how the synchronization and preprocessing of the EEG and audio data can be done, as described in [14]. Dependency: AMToolbox.

This dataset has been used in [3, 5-16].

[1] Francart, T., Van Wieringen, A., & Wouters, J. (2008). APEX 3: a multi-purpose test platform for auditory psychophysical experiments. Journal of Neuroscience Methods, 172(2), 283-293.
[3] Das, N., Biesmans, W., Bertrand, A., & Francart, T. (2016). The effect of head-related filtering and ear-specific decoding bias on auditory attention detection. Journal of Neural Engineering, 13(5), 056014.
[4] Somers, B., Francart, T., & Bertrand, A. (2018). A generic EEG artifact removal algorithm based on the multi-channel Wiener filter. Journal of Neural Engineering, 15(3), 036007.
[5] Das, N., Vanthornhout, J., Francart, T., & Bertrand, A. (2019). Stimulus-aware spatial filtering for single-trial neural response and temporal response function estimation in high-density EEG with applications in auditory research. NeuroImage, 204 (2020).
[6] Biesmans, W., Das, N., Francart, T., & Bertrand, A. (2016). Auditory-inspired speech envelope extraction methods for improved EEG-based auditory attention detection in a cocktail party scenario. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(5), 402-412.
[7] Das, N., Van Eyndhoven, S., Francart, T., & Bertrand, A. (2016). Adaptive attention-driven speech enhancement for EEG-informed hearing prostheses. In Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 77-80.
[8] Van Eyndhoven, S., Francart, T., & Bertrand, A. (2016). EEG-informed attended speaker extraction from recorded speech mixtures with application in neuro-steered hearing prostheses. IEEE Transactions on Biomedical Engineering, 64(5), 1045-1056.
[9] Das, N., Van Eyndhoven, S., Francart, T., & Bertrand, A. (2017). EEG-based Attention-Driven Speech Enhancement For Noisy Speech Mixtures Using N-fold Multi-Channel Wiener Filters. In Proceedings of the 25th European Signal Processing Conference (EUSIPCO), 1660-1664.
[10] Narayanan, A. M., & Bertrand, A. (2018). The effect of miniaturization and galvanic separation of EEG sensor devices in an auditory attention detection task. In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 77-80.
[11] Vandecappelle, S., Deckers, L., Das, N., Ansari, A. H., Bertrand, A., & Francart, T. (2020). EEG-based detection of the locus of auditory attention with convolutional neural networks. bioRxiv 475673; doi: https://doi.org/10.1101/475673.
[12] Narayanan, A. M., & Bertrand, A. (2019). Analysis of Miniaturization Effects and Channel Selection Strategies for EEG Sensor Networks With Application to Auditory Attention Detection. IEEE Transactions on Biomedical Engineering, 67(1), 234-244.
[13] Geirnaert, S., Francart, T., & Bertrand, A. (2019). A New Metric to Evaluate Auditory Attention Detection Performance Based on a Markov Chain. In Proceedings of the 27th European Signal Processing Conference (EUSIPCO), 1-5.
[14] Geirnaert, S., Francart, T., & Bertrand, A. (2020). An Interpretable Performance Metric for Auditory Attention Decoding Algorithms in a Context of Neuro-Steered Gain Control. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(1), 307-317.
[15] Geirnaert, S., Francart, T., & Bertrand, A. (2020). Fast EEG-based decoding of the directional focus of auditory attention using common spatial patterns. bioRxiv 2020.06.16.154450; doi: https://doi.org/10.1101/2020.06.16.154450.
[16] Geirnaert, S., Vandecappelle, S., Alickovic, E., de Cheveigné, A., Lalor, E., Meyer, B.T., Miran, S., Francart, T., & Bertrand, A. (2020). Neuro-Steered Hearing Devices: Decoding Auditory Attention From the Brain. arXiv:2008.04569.

This research work was carried out at the ESAT and ExpORL Laboratories of KU Leuven, in the frame of KU Leuven Special Research Fund BOF/STG-14-005, OT/14/119 and C14/16/057. The work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 637424).
##### Files (5.2 GB)

| Name | MD5 | Size |
|------|-----|------|
| preprocess_data.m | md5:51e257fa5a15850776e985847932d172 | 7.8 kB |
| | md5:64162b3354257db21dc830e5316017db | 9.2 kB |
| S1.mat | md5:81a4ea96dc63f6acffb16405e071f27e | 293.2 MB |
| S10.mat | md5:fffc929b1f71a9feb0aabc85504cd990 | 293.4 MB |
| S11.mat | | 293.6 MB |
| S12.mat | md5:d76e8526d83cb7f15420dd670957e0e2 | 293.5 MB |
| S13.mat | md5:9c277eceb8070f58c8c4655918eb3beb | 293.3 MB |
| S14.mat | md5:7767b3cc8ee58f55abc4f30fd0bf2214 | 293.6 MB |
| S15.mat | md5:d2d70041bd6317416e1c5266426e08d4 | 292.6 MB |
| S16.mat | md5:8b244536d002ab35b72f5ed6664946f0 | 292.3 MB |
| S2.mat | md5:69df6e89c29f5efd44c92bbc51a98359 | 293.8 MB |
| S3.mat | md5:2fb8067dc26a5420377be480fdb80dc5 | 293.9 MB |
| S4.mat | | 293.4 MB |
| S5.mat | md5:025953727a5b07797263210c34589700 | 293.6 MB |
| S6.mat | md5:da7f4321266a5cf85067a04568e01f16 | 293.5 MB |
| S7.mat | md5:0ae742f02b154897fd44a133e0aff30c | 292.8 MB |
| S8.mat | md5:bc926d599ce90f2e38d71b7a4bf72076 | 292.1 MB |
| S9.mat | md5:f12716937c3d53bbdcedb1364018b2f7 | 292.9 MB |
| stimuli.zip | md5:bfc4309b35387dfb2552de288c7c0b13 | 461.1 MB |
