Published June 2, 2023 | Version 2
Audio/Video | Open Access

The EmoHI Test stimuli: Measuring vocal emotion recognition in hearing-impaired populations

  • 1. Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, Netherlands | Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
  • 2. Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Lyon, France
  • 3. Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, Netherlands
  • 4. Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands

Contributors

Contact person:

Data curator:

  • 1. Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
  • 2. Clinical Neurosciences Department, University of Cambridge, Cambridge, UK
  • 3. Boys Town National Research Hospital, Omaha, NE, US
  • 4. CNRS, Lyon Neuroscience Research Center, France | University of Groningen, University Medical Center Groningen, Netherlands

Description

Before reading this file, make sure you have read the README.1.pdf file. That file also contains information about the license these materials are distributed under.

Versions

  • Version 2: This is the current version. To cite this version specifically, use DOI 10.5281/zenodo.7997063. In this version we fixed some naming mistakes in the files (in 8 of the files the sentence was identified as t2 instead of s2), and added two missing stimuli (t5_neutral_t2_u04.wav and t5_sad_t1_u05.wav).
  • Version 1: This was the initial version. To cite that version specifically, use DOI 10.5281/zenodo.3689710.

The latest version of the EmoHI material can be downloaded from https://doi.org/10.5281/zenodo.3689709. Please always check that you have the latest version, and that you comply with the current license requirements.

The EmoHI Test

The EmoHI Test was developed to measure how accurately participants can recognize vocal emotions from pseudospeech sentences produced in a happy, angry, sad, or neutral manner. The EmoHI Test recordings are particularly suitable for testing hearing-impaired populations due to their high sound quality. All recordings, including those used in Nagels et al. (2020, PeerJ, doi: 10.7717/peerj.8773), are made available here.

The stimuli were recorded in an anechoic room at a sampling rate of 44.1 kHz, with a standing Røde NT1 microphone placed approximately 30 cm (12 in) from the speaker and connected to a PreSonus TubePre V2 preamplifier and a TASCAM DR-100 portable digital recorder. The preamplifier gain was adjusted for each emotion so that the stimuli were recorded at approximately the same intensity level across emotions, reducing large intensity differences between the recordings of different emotions. The files are not RMS equalized.
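Because the files are not RMS equalized, users who need matched presentation levels may want to measure each file's RMS level themselves before applying their own equalization. Below is a minimal sketch in Python using only the standard library, assuming 16-bit PCM WAV files (which is typical for 44.1 kHz recordings); the function name rms_dbfs is illustrative and not part of the distributed materials:

```python
import math
import struct
import wave

def rms_dbfs(path):
    """Return the RMS level of a WAV file in dB relative to full scale.

    Assumes 16-bit PCM samples; all channels are pooled together.
    """
    with wave.open(path, "rb") as w:
        if w.getsampwidth() != 2:
            raise ValueError("expected 16-bit PCM")
        n_samples = w.getnframes() * w.getnchannels()
        samples = struct.unpack("<%dh" % n_samples, w.readframes(w.getnframes()))
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Full scale for 16-bit audio is 32768.
    return 20 * math.log10(rms / 32768.0)
```

A full-scale sine tone would measure close to -3 dBFS with this function; comparing the values across files shows how far the stimuli deviate from one another in level.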

Citation

When using this repository in your research, please cite the repository itself. For this version:

Nagels L., Gaudrain E., Hendriks P., & Başkent D. (2023, June 2). The EmoHI Test stimuli: Measuring vocal emotion recognition in hearing-impaired populations. Version 2. Zenodo. https://doi.org/10.5281/zenodo.7997063

Also cite the PeerJ article that describes the material:

Nagels L., Gaudrain E., Vickers D., Matos Lopes M., Hendriks P., Başkent D. (2020). Development of vocal emotion recognition in school-age children: The EmoHI test for hearing-impaired populations. PeerJ 8:e8773 https://doi.org/10.7717/peerj.8773

Sound file name structure

The sound files are named using the following convention:

t[1-6]_{emotion}_s{1,2}_u[01-18].wav

  • t[1-6] is the talker who produced the stimulus: t1 through t6
  • {emotion} is the label of the emotion that was produced: neutral, happy, angry, or sad
  • s{1,2} is the pseudospeech sentence that was used: s1 for "Koun se mina lod belam.", s2 for "Nekal ibam soud molen."
  • u[01-18] is the utterance number, ranging from u01 to u18

For instance, t1_happy_s2_u01.wav is utterance 1 of talker t1 producing emotion "happy" using sentence 2.
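For batch processing, the naming convention above can be parsed programmatically. A short Python sketch using a regular expression; the helper name parse_stimulus_name is hypothetical and not part of the distributed materials:

```python
import re

# Pattern for EmoHI stimulus file names, e.g. "t1_happy_s2_u01.wav",
# following the convention t[1-6]_{emotion}_s{1,2}_u[01-18].wav.
STIMULUS_RE = re.compile(
    r"t(?P<talker>[1-6])_"
    r"(?P<emotion>neutral|happy|angry|sad)_"
    r"s(?P<sentence>[12])_"
    r"u(?P<utterance>0[1-9]|1[0-8])"
    r"\.wav"
)

def parse_stimulus_name(name):
    """Return talker, emotion, sentence, and utterance for a stimulus
    file name, or None if the name does not follow the convention."""
    m = STIMULUS_RE.fullmatch(name)
    if m is None:
        return None
    return {
        "talker": int(m.group("talker")),
        "emotion": m.group("emotion"),
        "sentence": int(m.group("sentence")),
        "utterance": int(m.group("utterance")),
    }
```

For example, parse_stimulus_name("t1_happy_s2_u01.wav") yields talker 1, emotion "happy", sentence 2, utterance 1, while names outside the convention (such as a talker number above 6) return None.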

Talker demographic information

The table below gives an overview of the voice characteristics of the talkers who produced the EmoHI Test stimuli.

Talker Age (years) Gender Height (m) Mean F0 (Hz) F0 range (Hz)
t1 48 f 1.72 253.14 179.97 – 421.81
t2 36 f 1.68 302.23 200.71 – 437.38
t3 27 m 1.85 166.92 100.99 – 296.47
t4 45 m 1.90 149.41 96.97 – 274.72
t5 25 f 1.63 282.89 199.49 – 429.38
t6 24 m 1.75 167.76 87.46 – 285.79

Supporting data

The behavioural data from the PeerJ article is accessible at https://doi.org/10.34894/BDMX6D.

Notes

Funding: Center for Language and Cognition Groningen (CLCG); VICI Grant nº918-17-603 from the Netherlands Organization for Scientific Research (NWO) and the Netherlands Organization for Health Research and Development (ZonMw); LabEx CeLyA ("Centre Lyonnais d'Acoustique," ANR-10-LABX-0060/ANR-11-IDEX-0007) operated by the French National Research Agency.

Files

EmoHIv2.zip (10.9 MB, md5:4b9c5d132556b00982fdafb4ea68b341)

Additional details

Related works

Is cited by
Journal article: 10.7717/peerj.8773 (DOI)
Is supplemented by
Dataset: 10.34894/BDMX6D (DOI)