Dataset | Open Access
Hernandez Perez, Heivet;
Mikiel-Hunter, Jason;
McAlpine, David;
Dhar, Sumitrajit;
Boothalingam, Sriram;
Monaghan, Jessica J.M.;
McMahon, Catherine M.
Understanding degraded speech leads to perceptual gating of a brainstem reflex in human listeners

DOI: https://doi.org/10.5061/dryad.3ffbg79fw
Record: https://zenodo.org/record/5555133
Published: 2021-10-07
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/legalcode)

Description:

The ability to navigate "cocktail-party" situations by focussing on sounds of interest over irrelevant, background sounds is often considered in terms of cortical mechanisms. However, subcortical circuits such as the pathway underlying the medial olivocochlear (MOC) reflex modulate the activity of the inner ear itself, supporting the extraction of salient features from the auditory scene prior to any cortical processing. To understand the contribution of auditory subcortical nuclei and the cochlea to complex listening tasks, we made physiological recordings along the auditory pathway while listeners engaged in detecting non(sense)-words in lists of words. Both naturally spoken speech and intrinsically noisy, vocoded speech (filtering that mimics processing by a cochlear implant) significantly activated the MOC reflex, but this was not the case for speech in background noise, which engaged midbrain and cortical resources more. A model of the initial stages of auditory processing reproduced specific effects of each form of speech degradation, providing a rationale for goal-directed gating of the MOC reflex based on enhancing the representation of the energy envelope of the acoustic waveform. Our data reveal the co-existence of two strategies in the auditory system that may facilitate speech understanding when the signal is either intrinsically degraded or masked by extrinsic acoustic energy: whereas intrinsically degraded streams recruit the MOC reflex to improve the peripheral representation of speech cues, extrinsically masked streams rely more on higher auditory centres to de-noise signals.

Creators:

- Hernandez Perez, Heivet (Macquarie University; ORCID 0000-0002-9135-3973)
- Mikiel-Hunter, Jason (Macquarie University; ORCID 0000-0002-6085-9269)
- McAlpine, David (Macquarie University; ORCID 0000-0001-5467-6725)
- Dhar, Sumitrajit (Northwestern University; ORCID 0000-0002-4496-6355)
- Boothalingam, Sriram (University of Wisconsin-Madison; ORCID 0000-0003-3901-3071)
- Monaghan, Jessica J.M. (National Acoustic Laboratories; ORCID 0000-0003-1416-4164)
- McMahon, Catherine M. (Macquarie University; ORCID 0000-0001-7312-6593)

Measurement technique:

The dataset includes all sound and sentence wav-files used to generate auditory-nerve spikes. The Matlab model of the auditory periphery and auditory brainstem (MAP_BS) is the work of the Meddis group and is included here; it can also be found, alongside more information about previous versions of the model, at http://essexpsychology.webmate.me/HearingLab/modelling.html. Please refer to https://www.researchgate.net/publication/307583615_MAP-BSa_Matlab_Auditory_Processing_software_platform_for_studying_Auditory_BrainStem_activity for further information about MAP_BS and its origins.

Two analysis files are included to compute shuffled auto-/cross-correlograms of either individual words (the word lists used in the article are in 'Balanced_NS_and_S_lists.mat') or MAVA-corpus sentences (https://app.alveo.edu.au/catalog/mava).

Files (base URL: https://zenodo.org/api/files/4048bcd5-2cc1-4787-9bf8-2827f0da3982/):

- Figure_1D_Source_Data_Performance.xlsx
- Figure_1E_Source_Data_Main_effects.xlsx
- Figure_1F_Source_Data_CEOAEs_suppression.xlsx
- Figure_2C_E_SourceData_LSR_pENV.xlsx
- Figure_3A_Source_Data_ERPs.xlsx
- README.docx
- S2_Figure_SourceData_Sentences.xlsx
- S3_Figure_NatANonly_for_pENVan.xlsx
- S4_Figure_SourceData_Other_pENV.xlsx
- S5_Figure_SourceData_HSR_pENV.xlsx
- S6_Figure_SourceData_LSR_pTFS.xlsx
- S7_Figure__Source_Data_ERPs.xlsx
- S9_Figure_SourceData_LTPS.xlsx
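The noise-vocoded stimuli in this dataset were generated for the study itself and are not reproduced here. Purely as an illustration of the manipulation the description refers to (the channel count, band edges, and filter orders below are assumptions, not the study's actual parameters), a minimal noise vocoder in Python might look like:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=8000.0):
    """Split x into log-spaced bands, take each band's Hilbert envelope,
    and use it to modulate band-limited noise (discarding fine structure)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)                     # analysis band
        env = np.abs(hilbert(band))                    # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                           # envelope-modulated noise
    # match overall RMS to the input
    out *= np.sqrt(np.mean(x ** 2) / (np.mean(out ** 2) + 1e-12))
    return out
```

Envelope cues survive this processing while temporal fine structure is replaced by noise, which is the property the description links to MOC-reflex activation.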
Views: 57
Downloads: 67
Data volume: 109.1 MB
Unique views: 56
Unique downloads: 13
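The shuffled auto-/cross-correlogram analyses mentioned in the record are implemented in the included Matlab files. For readers unfamiliar with the technique, a minimal Python sketch of the core shuffled-autocorrelogram tally (the function name and default bin settings here are illustrative, not taken from those files) is:

```python
import numpy as np

def shuffled_autocorrelogram(spike_trains, bin_width=50e-6, max_lag=5e-3):
    """Histogram of all spike-time differences between *different*
    presentations of the same stimulus; within-train pairs are excluded
    ("shuffled"), which removes the zero-lag refractory artefact."""
    n_bins = int(round(max_lag / bin_width))
    edges = np.arange(-n_bins, n_bins + 1) * bin_width
    counts = np.zeros(2 * n_bins)
    for i, a in enumerate(spike_trains):
        for j, b in enumerate(spike_trains):
            if i == j:
                continue                     # skip within-train pairs
            diffs = np.subtract.outer(np.asarray(a), np.asarray(b)).ravel()
            counts += np.histogram(diffs, bins=edges)[0]
    lags = 0.5 * (edges[:-1] + edges[1:])    # bin centres in seconds
    return lags, counts
```

In practice the raw counts are normalised (e.g. by the number of trials, mean firing rate, bin width, and stimulus duration) before conditions are compared; that step is omitted from this sketch.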