EEG data of continuous listening to music and speech
This dataset contains EEG recordings from 18 subjects listening to continuous sound, either speech or music. Continuous audio stimuli were presented in trials of 70 seconds from one loudspeaker located 150 cm in front of the listeners, who were instructed to listen attentively to the sound during the whole trial.
All listeners were native Danish speakers and were presented with 5 different types of audio stimuli:
- Instrumental music: Excerpts of polyphonic Disney songs with no lyrics. The melody line from the original version was replaced by a similar melody played by a synthetic cello (referred to as MC: Music Cello in the dataset).
- Music with understood lyrics: Excerpts of polyphonic Disney songs with lyrics in Danish, understood by the listeners (Referred to as MD: Music Danish in the dataset).
- Music with non-understood lyrics: Excerpts of polyphonic Disney songs with lyrics in Finnish, not understood by the listeners (Referred to as MF: Music Finnish in the dataset).
- Understood speech: Excerpts of an audiobook in Danish read by a woman, understood by the listeners (Referred to as SD: Speech Danish in the dataset).
- Non-understood speech: Excerpts of an audiobook in Finnish read by a woman, not understood by the listeners (referred to as SF: Speech Finnish in the dataset).
Data were recorded using a 64-channel g.HIamp-Research system and digitized at a sampling rate of 2400 Hz.
The dataset contains pre-processed EEG data for each listener (see the pre-processing steps applied to the data below). Trials with large noise artefacts have been removed.
The processed folder contains data used in Simon, A. et al. (2022) Cortical linear encoding and decoding of sounds: Differences between naturalistic speech and music listening. (Submitted).
The processed data contain EEG data and an aligned audio envelope for each category of audio stimuli. The MATLAB script contains the processing applied to obtain them.
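The exact envelope computation is defined in the dataset's MATLAB script; as an illustration only, the sketch below shows one common way to extract a broadband audio envelope and align it to the EEG sampling rate in Python. The function name, filter order, and cut-off are assumptions, not taken from the script.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, resample_poly

def audio_envelope(audio, fs_audio, fs_eeg=512, lp_cutoff=45.0):
    """Illustrative broadband envelope: magnitude of the analytic signal,
    low-pass filtered, then resampled to the EEG rate. Parameter choices
    are examples, not the dataset's actual processing."""
    env = np.abs(hilbert(audio))                 # analytic-signal magnitude
    b, a = butter(4, lp_cutoff / (fs_audio / 2), btype="low")
    env = filtfilt(b, a, env)                    # zero-phase smoothing
    env = resample_poly(env, fs_eeg, fs_audio)   # align to EEG sampling rate
    return env

# Example: 1 s of a 440 Hz tone sampled at 44.1 kHz
fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
env = audio_envelope(tone, fs)
print(env.shape)  # one second at the 512 Hz EEG rate -> (512,)
```

For a constant-amplitude tone the resulting envelope is approximately flat at 1, apart from small edge effects of the Hilbert transform.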
The dataset was created within the InHear project.
For more information, contact firstname.lastname@example.org
Pre-processing steps applied to the data:
- re-referencing to the average of all channels
- downsampling to 512 Hz
- band-pass filtering between 0.5 and 45 Hz
- ICA decomposition using the SOBI algorithm
- removal of eye and noise components
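The first three steps above can be sketched in Python as follows. This is a minimal illustration, not the dataset's actual pipeline: the function name and filter order are assumptions, and the SOBI-based ICA decomposition and component removal are omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess_eeg(eeg, fs_in=2400, fs_out=512, band=(0.5, 45.0)):
    """eeg: array of shape (n_channels, n_samples) recorded at fs_in Hz.
    Re-references to the channel average, downsamples to fs_out, and
    applies a zero-phase band-pass filter. The ICA cleaning step from
    the description above is not shown here."""
    eeg = eeg - eeg.mean(axis=0, keepdims=True)      # average reference
    eeg = resample_poly(eeg, fs_out, fs_in, axis=1)  # 2400 Hz -> 512 Hz
    sos = butter(4, band, btype="bandpass", fs=fs_out, output="sos")
    return sosfiltfilt(sos, eeg, axis=1)             # 0.5-45 Hz band-pass

rng = np.random.default_rng(0)
raw = rng.standard_normal((64, 2400 * 2))  # 2 s of 64-channel noise
clean = preprocess_eeg(raw)
print(clean.shape)  # (64, 1024)
```

Because the same linear operations are applied identically to every channel, the average-referenced data still sum to zero across channels after downsampling and filtering.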