Published July 22, 2024 | Version v1
Dataset Open

MEG Attention Dataset Using Musicians and Non-Musicians - Part 1

  • Friedrich-Alexander-Universität Erlangen-Nürnberg

Description

Data location

The data is split across 3 Zenodo records because it is too large for a single upload. In total, the dataset contains MEG data of 58 participants. An overview of the participants and the amount of musical training they have completed is also available. Each of the 3 Zenodo uploads contains the participant overview file plus one Set#.zip.

Part/Set 1 (blue) contains: MEG data of participants 1–19, plus the audio folder

Part/Set 2 (pink) contains: MEG data of participants 20–38 (available in a separate Zenodo record)

Part/Set 3 (yellow) contains: MEG data of participants 39–58 (available in a separate Zenodo record)

 

Experimental design

We used four German audiobooks (all published by Hörbuch Hamburg Verlag and available online): 

1. „Frau Ella“ (narrated by the lower-pitched (LP) speaker and attended by participants)

2. „Darum“ (narrated by the LP speaker and ignored by participants)

3. „Den Hund überleben“ (narrated by the higher-pitched (HP) speaker and attended by participants)

4. „Looking for Hope“ (narrated by the HP speaker and ignored by participants)

The participants listened to 10 audiobook chapters. Two audiobooks were always presented at the same time (one narrated by the HP speaker and one by the LP speaker), and the participants attended to one speaker while ignoring the other. The structure of the chapters was as follows:

Chapter 1 of audiobook 1 + random part of audiobook 4

3 comprehension questions

Chapter 1 of audiobook 3 + random part of audiobook 2

3 comprehension questions

Chapter 2 of audiobook 1 + random part of audiobook 4

3 comprehension questions

Chapter 2 of audiobook 3 + random part of audiobook 2

3 comprehension questions

Chapter 3 of audiobook 1 + random part of audiobook 4

3 comprehension questions

Chapter 3 of audiobook 3 + random part of audiobook 2

3 comprehension questions

Chapter 4 of audiobook 1 + random part of audiobook 4

3 comprehension questions

Chapter 4 of audiobook 3 + random part of audiobook 2

3 comprehension questions

Chapter 5 of audiobook 1 + random part of audiobook 4

3 comprehension questions

Chapter 5 of audiobook 3 + random part of audiobook 2

3 comprehension questions
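The alternating scheme above can be sketched as a simple Python data structure. All names below are illustrative only, not identifiers from the dataset:

```python
# Sketch of the presentation order: five chapter blocks, each containing
# one LP-attended trial and one HP-attended trial, each trial followed by
# three comprehension questions.
ATTENDED_LP = 1   # "Frau Ella" (LP speaker, attended)
IGNORED_LP = 2    # "Darum" (LP speaker, ignored)
ATTENDED_HP = 3   # "Den Hund ueberleben" (HP speaker, attended)
IGNORED_HP = 4    # "Looking for Hope" (HP speaker, ignored)

session = []
for chapter in range(1, 6):
    # LP speaker attended, with a random part of the ignored HP audiobook
    session.append({"attended": (ATTENDED_LP, chapter),
                    "ignored": IGNORED_HP,
                    "questions": 3})
    # HP speaker attended, with a random part of the ignored LP audiobook
    session.append({"attended": (ATTENDED_HP, chapter),
                    "ignored": IGNORED_LP,
                    "questions": 3})

print(len(session))  # 10 listening trials in total
```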

 

MEG Data structure

This dataset contains MEG data of 58 participants.

Each participant has a folder named after their participant number (1, 2, 3, …).

Each participant folder contains two subfolders: LP_speaker_attended, with the MEG data recorded while the participant attended the LP speaker (ignoring the HP speaker), and HP_speaker_attended, with the MEG data recorded while the participant attended the HP speaker (ignoring the LP speaker). Note that after each chapter the participants switched attention from the LP to the HP speaker and vice versa; for evaluation, however, we concatenated the data of the LP-speaker-attended/HP-speaker-ignored condition and of the HP-speaker-attended/LP-speaker-ignored condition.

The HP-speaker-attended data has shape (248, 959416) (ca. 16 minutes); the LP-speaker-attended data has shape (248, 1247854) (ca. 21 minutes).

# The MEG data can be loaded with the MNE-Python library:
import mne

meg = mne.io.read_raw_fif("…/data_meg.fif")

# The data can be accessed as a NumPy array of shape (n_channels, n_times):
meg_data = meg.get_data()

Example code for performing source reconstruction and TRF evaluation can be found in our Git repository.

 

Audio Data structure

The original audio chapters of the audiobooks are stored in the folder "Audio" in Part 1.

There are two subfolders. One (attended_speech) contains the ten audiobook chapters that were attended by the participants (audiobook1_#, audiobook3_#). The other subfolder (ignored_speech) contains the ten audiobook chapters that were ignored by the participants (audiobook2_#, audiobook4_#).

We recommend the librosa library for audio loading and processing.

Audio data is provided at a sampling frequency of 44.1 kHz.

Each audiobook is provided in the five chapters as they were presented to the participants. The corresponding MEG file described above already contains the concatenated data of all five chapters.

If you resample the audio data to 1000 Hz and concatenate the chapters, the audio length (n_times) will equal the corresponding n_times of the MEG data.

 

Processing of MEG data

The MEG data was band-pass filtered in the analog domain between 1.0 and 200 Hz and preprocessed offline using a notch filter (firwin design, 0.5 Hz bandwidth) to remove power-line interference at 50, 100, 150 and 200 Hz.

The data was then resampled from 1017.25 Hz to 1000 Hz. 

 

Technical details

The MEG system used for recording was a 248-magnetometer system (4D Neuroimaging, San Diego, CA, USA).

The audio signal was presented through loudspeakers outside the magnetically shielded chamber and passed on to the participant via tubes of 2 m length and 2 cm diameter, leading to an acoustic delay of 6 ms. The audio was presented diotically (both the attended and the ignored audio stream were presented to both ears) at a sound pressure level of 67 dB(A).

The measurement setup follows a previous study by Schilling et al. (https://doi.org/10.1080/23273798.2020.1803375).

 

Papers to cite when using this data

  • Riegel et al., "No Influence of Musical Training on the Cortical Contribution to the Speech-FFR and its Modulation Through Selective Attention", eNeuro, in press (https://doi.org/10.1101/2024.07.25.605057).
  • Schüller, Mücke et al., "Assessing the Impact of Selective Attention on the Cortical Tracking of the Speech Envelope in the Delta and Theta Frequency Bands and How Musical Training Does (Not) Affect it", under review (https://doi.org/10.1101/2024.08.01.606154).
  • Schüller et al., "Attentional Modulation of the Cortical Contribution to the Frequency-Following Response Evoked by Continuous Speech" (https://doi.org/10.1523/JNEUROSCI.1247-23.2023).

Files (39.0 GB total)

  • Participants_Overview.pdf, 82.0 kB (md5:54af836cc5567b5be722aa00112af936)
  • Set#.zip, 39.0 GB (md5:7f4e68c52e76b601bc005810be0a45ee)

Additional details

Related works

Is part of
Journal article: 10.1523/JNEUROSCI.1247-23.2023 (DOI)
Preprint: 10.1101/2024.07.25.605057 (DOI)
Preprint: 10.1101/2024.08.01.606154 (DOI)

Software

Repository URL
https://github.com/Al2606/MEG-Analysis-Pipeline
Programming language
Python