Published April 29, 2025 | Version v1
Journal article | Open Access

Resting-state functional connectivity changes following audio-tactile speech training

  • 1. The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University
  • 2. The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University
  • 3. World Hearing Centre, Institute of Physiology and Pathology of Hearing

Description

Understanding speech in background noise is a challenging task, especially when the signal is also distorted. In a series of previous studies, we have shown that comprehension can improve if, simultaneously with auditory speech, the person receives speech-extracted low-frequency signals on their fingertips. The effect increases after short audio-tactile speech training. In this study, we used resting-state functional magnetic resonance imaging (rsfMRI) to measure spontaneous low-frequency oscillations in the brain at rest and thereby assess training-induced changes in functional connectivity (FC). We observed enhanced FC within a right-hemisphere cluster corresponding to the middle temporal motion area (MT), the extrastriate body area (EBA), and the lateral occipital cortex (LOC), which, before the training, was more connected to the bilateral dorsal anterior insula. Furthermore, early visual areas switched from increased connectivity with the auditory cortex before training to increased connectivity with a sensory/multisensory association parietal hub, contralateral to the palm receiving vibrotactile input, after training. In addition, the right sensorimotor cortex, including finger representations, was more internally connected after the training. Altogether, the results can be interpreted within two complementary frameworks. The first, speech-specific framework relates to the pre-existing brain connectivity for audio-visual speech processing, including early visual, motion, and body regions involved in lip-reading and gesture analysis under difficult acoustic conditions, upon which the new audio-tactile speech network might be built. The second framework concerns spatial/body awareness and audio-tactile integration, both necessary for performing the task and reflected in the revealed parietal and insular regions. An extended training period may be necessary to directly strengthen functional connections between the auditory and sensorimotor brain regions for this entirely novel multisensory task. The results contribute to a better understanding of the largely unknown neuronal mechanisms underlying the benefit of tactile input for speech comprehension and may be relevant for rehabilitation in the hearing-impaired population.
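For readers unfamiliar with how resting-state FC is typically quantified, the sketch below illustrates the general idea: BOLD time series are band-pass filtered to the low-frequency range (roughly 0.01–0.1 Hz) and the Pearson correlation between a pair of regions is taken as the FC estimate, usually Fisher z-transformed before group statistics. This is a minimal illustration on simulated data, not the authors' analysis pipeline; the filter settings, TR, and all variable names are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(ts, low=0.01, high=0.1, tr=2.0, order=2):
    """Band-pass filter a BOLD time series to the low-frequency
    range typically examined in resting-state FC analyses."""
    nyquist = 0.5 / tr  # sampling rate is 1/TR, so Nyquist = 0.5/TR (Hz)
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, ts)  # zero-phase filtering

# Simulated BOLD signals for two regions (assumed TR = 2 s, 300 volumes).
rng = np.random.default_rng(seed=0)
n_volumes = 300
region_a = bandpass(rng.standard_normal(n_volumes))
# region_b shares part of region_a's signal, so the two are correlated.
region_b = bandpass(0.6 * region_a + rng.standard_normal(n_volumes))

r = np.corrcoef(region_a, region_b)[0, 1]  # Pearson correlation = FC estimate
z = np.arctanh(r)                          # Fisher z-transform for group stats
print(f"FC (r) = {r:.2f}, Fisher z = {z:.2f}")
```

The same correlation is the building block of seed-based and whole-brain analyses: it is simply computed between a seed region and every other voxel or parcel, and training-induced changes are then assessed by comparing the resulting correlation maps before and after training.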


Files

fnins-1-1482828.pdf (5.9 MB)
md5:4dd85a72bf974708dd18ccce956eac17

Additional details

Funding

European Commission
GuestXR: A Machine Learning Agent for Social Harmony in eXtended Reality (grant no. 101017884)