
Published September 28, 2020 | Version v1
Conference paper | Open Access

Acoustic Feature Extraction with Interpretable Deep Neural Network for Neurodegenerative related Disorder Classification

Description

Speech-based automatic approaches for detecting neurodegenerative disorders (ND) and mild cognitive impairment (MCI) have received more attention recently, as they are non-invasive and potentially more sensitive than current pen-and-paper tests. The performance of such systems is highly dependent on the choice of features in the classification pipeline. For acoustic features in particular, arriving at a consensus on a best feature set has proven challenging. This paper explores using a deep neural network to extract features directly from the speech signal as a solution to this. Compared with hand-crafted features, more information is present in the raw waveform, but the feature extraction process becomes more complex and less interpretable, which is often undesirable in medical domains. Using a SincNet as a first layer allows for some analysis of learned features. We propose and evaluate the Sinc-CLA …
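
For readers unfamiliar with the kind of first layer the abstract refers to, a minimal SincNet-style sketch is shown below: each convolutional filter is a band-pass parameterised only by learnable low/high cutoff frequencies, so the learned frequency bands can be read off and interpreted. This is an illustrative assumption-laden sketch (channel count, kernel size, initialisation, and all names are placeholders), not the authors' implementation of Sinc-CLA.

    # Minimal sketch of a SincNet-style first layer (not the authors' code).
    import torch
    import torch.nn as nn

    class SincConv1d(nn.Module):
        def __init__(self, out_channels=40, kernel_size=101, sample_rate=16000):
            super().__init__()
            assert kernel_size % 2 == 1, "odd kernel keeps the filter symmetric"
            self.kernel_size = kernel_size
            self.sample_rate = sample_rate
            # Initialise cutoffs roughly evenly across the spectrum (assumption).
            edges = torch.linspace(30.0, sample_rate / 2 - 200.0, out_channels + 1)
            self.low_hz = nn.Parameter(edges[:-1].unsqueeze(1))              # (C, 1)
            self.band_hz = nn.Parameter((edges[1:] - edges[:-1]).unsqueeze(1))
            n = torch.arange(-(kernel_size // 2), kernel_size // 2 + 1).float()
            self.register_buffer("n", n / sample_rate)                       # time axis in seconds
            self.register_buffer("window",
                                 torch.hamming_window(kernel_size, periodic=False))

        def forward(self, x):                                                # x: (batch, 1, samples)
            low = torch.abs(self.low_hz)
            high = torch.clamp(low + torch.abs(self.band_hz),
                               min=50.0, max=self.sample_rate / 2)

            def lowpass(f):                                                  # ideal low-pass response
                return 2 * f * torch.sinc(2 * f * self.n)

            # Band-pass = difference of two low-pass filters, windowed and normalised.
            filters = (lowpass(high) - lowpass(low)) * self.window           # (C, K)
            filters = filters / filters.norm(dim=1, keepdim=True)
            return nn.functional.conv1d(x, filters.unsqueeze(1),
                                        padding=self.kernel_size // 2)

Because only the cutoff parameters are trained, inspecting low_hz and band_hz after training reveals which frequency bands the network relies on, which is the interpretability property the abstract highlights.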

Files

Acoustic Feature Extraction with Interpretable Deep Neural Network for Neurodegenerative related Disorder Classification.pdf