Published March 11, 2024 | Version v1
Dataset | Open Access

Ear-EEG dataset for the AAD task in a multiple-speaker environment

Description

Please cite the original papers in which this dataset was presented:

Y. J. Yan, X. Xu, H. Zhu, P. Tian, Z. S. Ge, X. Wu, and J. Chen, “Auditory Attention Decoding in Four-Talker Environment with EEG,” accepted at INTERSPEECH 2024. More details will be added after INTERSPEECH 2024 concludes.

X. Xu, B. Wang, Y. Yan, X. Wu, and J. Chen, “A DenseNet-Based Method for Decoding Auditory Spatial Attention with EEG,” in ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, Apr. 2024, pp. 1946–1950.

Details of the experimental setup are provided in the original papers.


Our code repository (https://github.com/zhl486/Ear_EEG_code.git) provides the envelopes of the experimental stimuli. If you require the original speech stimuli for auditory attention decoding (AAD) tasks, please contact us at 2301111611@stu.pku.edu.cn with a brief explanation of your research needs.
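For orientation, below is a minimal sketch (Python, NumPy/SciPy) of how a broadband speech envelope is commonly computed in AAD work: take the magnitude of the analytic (Hilbert) signal, low-pass filter it, and downsample it to the EEG sampling rate. The function name, sampling rates, and cutoff frequency are illustrative assumptions, not the exact pipeline used to produce the envelopes in the repository.

    import numpy as np
    from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

    def speech_envelope(audio, fs_audio=16000, fs_out=128, cutoff=8.0):
        # Broadband envelope: magnitude of the analytic signal,
        # low-pass filtered, then downsampled to the EEG rate.
        # All rates and the cutoff here are illustrative assumptions.
        env = np.abs(hilbert(audio))
        sos = butter(4, cutoff, btype="low", fs=fs_audio, output="sos")
        env = sosfiltfilt(sos, env)
        return resample_poly(env, fs_out, fs_audio)

    # Example with a 2-second synthetic signal (replace with a real stimulus):
    fs = 16000
    t = np.arange(2 * fs) / fs
    audio = np.sin(2 * np.pi * 200 * t) * (1.0 + 0.5 * np.sin(2 * np.pi * 3 * t))
    print(speech_envelope(audio, fs_audio=fs).shape)  # (256,) at 128 Hz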

Files

ear_raw.zip (2.5 GB)
md5: 1b5300d128a0d0b75cae639b703824a4
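After downloading, the archive can be verified against the MD5 checksum above, for example with the short Python sketch below (the local file name is assumed to be ear_raw.zip):

    import hashlib

    def md5sum(path, chunk=1 << 20):
        # Stream the file in 1 MiB blocks to avoid loading 2.5 GB into memory.
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    assert md5sum("ear_raw.zip") == "1b5300d128a0d0b75cae639b703824a4"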