Attention Decoding at the Cocktail Party: Hearing Aid Users
Authors/Creators
Description
Experiment
We conducted a selective auditory attention experiment in which stimuli were presented in the free field over two loudspeakers separated by 60°. Before each trial, participants received instructions directing their attention to a specific audiobook. This guidance was provided visually on a screen, featuring a symbol indicating the direction of the selected audiobook. Initially, we conducted eight trials in a single-speaker scenario, where only one audiobook was presented from alternating sides. Each trial lasted approximately two minutes. After every two trials, the presented story changed. In twelve subsequent trials, we implemented a competing-speaker paradigm, in which two stories were presented simultaneously. However, the distractor story started 10 s later, giving participants time to identify the target speaker. We further counterbalanced the presentation order by starting with the block of s2 stimuli instead of s1 for every second participant.
Participants
The dataset comprises data from 29 bilateral hearing aid users. It is one of three datasets collected for this study; the other two come from age-matched cochlear implant (CI) users and 29 control participants. All three cohorts underwent the exact same procedure.
EEG Recording
We collected EEG data using an actiCHamp system (BrainProducts GmbH, Germany) equipped with 32 electrodes. For CI users, between two and four electrodes were removed due to their proximity to the CI magnet and sound processor. The sampling rate was set to 1 kHz, and an online low-pass filter with a cutoff frequency of 280 Hz was applied. Prior to the experiment, electrode impedances were brought below 20 kΩ. We monitored the impedances throughout
both the single-speaker and competing-speaker scenarios and, if needed, applied additional conductive gel to ensure that the impedances remained below the 20 kΩ threshold. To synchronize the audio and the EEG recording, we used an audio splitter and recorded the presented audio as two auxiliary channels on the EEG recorder, using two StimTraks (BrainProducts GmbH, Germany) as adapters. We performed an offline correlation analysis between the recorded audio signal and delayed versions of the clean stimuli; the delay with the highest Pearson's r was used to align the respective stimuli with the EEG recording. Additionally, we sent onset triggers.
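As a rough illustration of this alignment step, the sketch below scans candidate delays between a clean stimulus and the audio recorded on an auxiliary channel and picks the lag with the highest Pearson's r. It assumes both signals are already at the same sampling rate; the function name, lag range, and variable names are illustrative, not the exact analysis code.

```python
import numpy as np
from scipy.stats import pearsonr

def estimate_delay(clean, recorded, fs=1000, max_delay_s=2.0):
    """Return the delay (in samples) of `recorded` relative to `clean`
    that maximizes Pearson's r, scanning lags up to `max_delay_s` seconds."""
    best_lag, best_r = 0, -np.inf
    for lag in range(int(max_delay_s * fs) + 1):
        n = min(len(clean), len(recorded) - lag)  # overlapping segment length
        if n < 2:
            break
        r, _ = pearsonr(clean[:n], recorded[lag:lag + n])
        if r > best_r:
            best_r, best_lag = r, lag
    return best_lag, best_r

# lag, r = estimate_delay(clean_stimulus, aux_channel, fs=1000)
# aligned_aux = aux_channel[lag:]   # audio now aligned with the stimulus onset
```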
Dataset
- We provide the dataset for each cohort as an hdf5 file. It includes:
- The EEG recording sampled at the original sampling frequency of 1 kHz
- Stimuli and precalculated features (speech envelope and onset envelope; see the sketch after this list)
- The original stimuli as wav files sampled at 48 kHz. The files are named according to the "stimulus code" in the hdf5 file, so they can be matched easily to the neural data.
- Exemplary raw data in .vhdr format and the montage file. This data is helpful for creating topographies, for instance for TRF analysis.
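The record itself does not state how the precalculated features were computed. As a generic point of reference, speech envelopes are often taken as the low-pass-filtered magnitude of the Hilbert transform of the audio, and onset envelopes as the half-wave-rectified derivative of that envelope. The sketch below follows this common recipe; the cutoff frequency, filter order, output rate, and function name are assumptions for illustration, not a description of the provided features.

```python
import numpy as np
from scipy.signal import butter, hilbert, resample_poly, sosfiltfilt

def speech_and_onset_envelope(audio, fs_audio=48000, fs_out=1000, lp_cutoff=50.0):
    """Generic envelope extraction: Hilbert magnitude, low-pass filtered,
    downsampled to the EEG rate; onset envelope = half-wave-rectified derivative."""
    env = np.abs(hilbert(audio))                            # broadband amplitude envelope
    sos = butter(3, lp_cutoff, btype="low", fs=fs_audio, output="sos")
    env = sosfiltfilt(sos, env)                             # smooth before downsampling
    env = resample_poly(env, fs_out, fs_audio)              # match the 1 kHz EEG rate
    onset = np.maximum(np.diff(env, prepend=env[0]), 0.0)   # keep positive slope only
    return env, onset
```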
For the organization of the file and an example of how to read the data using Python, see the hdf5_dataset_info.txt file.
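For orientation, a minimal h5py snippet for inspecting such a file is shown below. The file name and the commented dataset paths are placeholders; the actual group and dataset names are documented in hdf5_dataset_info.txt.

```python
import h5py

# File name and dataset paths below are placeholders; see hdf5_dataset_info.txt.
with h5py.File("hearing_aid_cohort.hdf5", "r") as f:
    f.visit(print)  # print every group/dataset path in the file
    # eeg = f["participant_01/trial_01/eeg"][:]        # e.g. samples x channels at 1 kHz
    # envelope = f["participant_01/trial_01/envelope"][:]
```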
Files
hdf5_dataset_info.txt
Additional details
Dates
- Available: 2025-12-18
Software
- Repository URL: https://github.com/Constantin-Jehn/aad-neuroimage.git
- Programming language: Python