Conference paper Open Access

# Continuous Wavelet Vocoder-Based Decomposition of Parametric Speech Waveform Synthesis

Al-Radhi, Mohammed Salah; Csapó, Tamás Gábor; Németh, Géza

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
<identifier identifierType="URL">https://zenodo.org/record/5730361</identifier>
<creators>
<creator>
<creatorName>Al-Radhi, Mohammed Salah</creatorName>
<givenName>Mohammed Salah</givenName>
<familyName>Al-Radhi</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-3094-6916</nameIdentifier>
<affiliation>Budapest University of Technology and Economics</affiliation>
</creator>
<creator>
<creatorName>Csapó, Tamás Gábor</creatorName>
<givenName>Tamás Gábor</givenName>
<familyName>Csapó</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0003-4375-7524</nameIdentifier>
<affiliation>Budapest University of Technology and Economics</affiliation>
</creator>
<creator>
<creatorName>Németh, Géza</creatorName>
<givenName>Géza</givenName>
<familyName>Németh</familyName>
<nameIdentifier nameIdentifierScheme="ORCID" schemeURI="http://orcid.org/">0000-0002-2311-4858</nameIdentifier>
<affiliation>Budapest University of Technology and Economics</affiliation>
</creator>
</creators>
<titles>
<title>Continuous Wavelet Vocoder-Based Decomposition of Parametric Speech Waveform Synthesis</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2021</publicationYear>
<subjects>
<subject>wavelet model</subject>
<subject>speech synthesis</subject>
<subject>statistical features</subject>
<subject>continuous vocoder</subject>
</subjects>
<dates>
<date dateType="Issued">2021-09-03</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="ConferencePaper"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5730361</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsIdenticalTo">10.21437/Interspeech.2021-1600</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/ai4eu</relatedIdentifier>
</relatedIdentifiers>
<version>1</version>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;To date, various speech technology systems have adopted the vocoder approach, a method for synthesizing speech waveforms that plays a major role in the performance of statistical parametric speech synthesis. However, conventional source-filter systems (i.e., STRAIGHT) and sinusoidal models (i.e., MagPhase) tend to produce over-smoothed spectra, which often result in muffled and buzzy synthesized text-to-speech (TTS). WaveNet, one of the best models at closely resembling the human voice, has to generate a waveform in a time-consuming sequential manner with an extremely complex neural network structure, and it needs large quantities of voice data before accurate predictions can be obtained. To motivate a new, alternative approach to these issues, we present an updated synthesizer: a simple signal model that is easy to train and from which waveforms are straightforward to generate, using the Continuous Wavelet Transform (CWT) to characterize and decompose speech features. CWT provides time and frequency resolutions different from those of the short-time Fourier transform. It can also retain the fine spectral envelope and achieve high controllability of the structure closer to human auditory scales. We confirmed through experiments that our speech synthesis system was able to provide natural-sounding synthetic speech and outperformed the state-of-the-art WaveNet vocoder.&lt;/p&gt;</description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/100010661</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/825619/">825619</awardNumber>
<awardTitle>A European AI On Demand Platform and Ecosystem</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
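The abstract's core technique, decomposing a speech parameter track (e.g., an F0 contour) with the Continuous Wavelet Transform into components at different temporal scales, can be sketched as follows. This is a minimal NumPy illustration of a generic CWT with a Mexican-hat mother wavelet, not the paper's actual implementation; the toy contour and scale choices are assumptions for demonstration only.

```python
import numpy as np

def mexican_hat(t):
    # Ricker (Mexican hat) wavelet, a common mother wavelet for CWT analysis
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def cwt(signal, scales):
    # Naive continuous wavelet transform: correlate the signal with
    # dilated, L2-normalized copies of the mother wavelet at each scale.
    n = len(signal)
    out = np.zeros((len(scales), n))
    for i, s in enumerate(scales):
        # Discretize the wavelet over a support proportional to the scale
        width = int(min(10 * s, n))
        t = (np.arange(width) - (width - 1) / 2.0) / s
        wavelet = mexican_hat(t) / np.sqrt(s)
        out[i] = np.convolve(signal, wavelet[::-1], mode='same')
    return out

# Toy "F0 contour" (hypothetical): a slow drift plus a faster modulation
t = np.linspace(0.0, 1.0, 400)
f0 = 120.0 + 10.0 * np.sin(2 * np.pi * 2 * t) + 3.0 * np.sin(2 * np.pi * 20 * t)

scales = np.array([2, 4, 8, 16, 32])
coeffs = cwt(f0 - f0.mean(), scales)
print(coeffs.shape)  # one row of wavelet coefficients per scale
```

Each row of `coeffs` isolates contour variation at one temporal scale, which is the sense in which a CWT-based vocoder can model speech features at resolutions closer to human auditory scales than a single short-time Fourier analysis.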
