Conference paper · Open Access

On Using SpecAugment for End-to-End Speech Translation

Bahar, Parnia; Zeyer, Albert; Schlüter, Ralf; Ney, Hermann


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.3525010</identifier>
  <creators>
    <creator>
      <creatorName>Bahar, Parnia</creatorName>
      <givenName>Parnia</givenName>
      <familyName>Bahar</familyName>
      <affiliation>Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, 52062 Aachen, Germany &amp; AppTek, 52062 Aachen, Germany</affiliation>
    </creator>
    <creator>
      <creatorName>Zeyer, Albert</creatorName>
      <givenName>Albert</givenName>
      <familyName>Zeyer</familyName>
      <affiliation>Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, 52062 Aachen, Germany &amp; AppTek, 52062 Aachen, Germany</affiliation>
    </creator>
    <creator>
      <creatorName>Schlüter, Ralf</creatorName>
      <givenName>Ralf</givenName>
      <familyName>Schlüter</familyName>
      <affiliation>Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, 52062 Aachen, Germany</affiliation>
    </creator>
    <creator>
      <creatorName>Ney, Hermann</creatorName>
      <givenName>Hermann</givenName>
      <familyName>Ney</familyName>
      <affiliation>Human Language Technology and Pattern Recognition Group, Computer Science Department, RWTH Aachen University, 52062 Aachen, Germany &amp; AppTek, 52062 Aachen, Germany</affiliation>
    </creator>
  </creators>
  <titles>
    <title>On Using SpecAugment for End-to-End Speech Translation</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2019</publicationYear>
  <dates>
    <date dateType="Issued">2019-11-02</date>
  </dates>
  <resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3525010</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3525009</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/iwslt2019</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;This work investigates a simple data augmentation technique, SpecAugment, for end-to-end speech translation. SpecAugment is a low-cost implementation method applied directly to the audio input features and it consists of masking blocks of frequency channels, and/or time steps. We apply SpecAugment on end-to-end speech translation tasks and achieve up to +2.2% BLEU&amp;nbsp;on LibriSpeech Audiobooks En&amp;rarr;Fr and +1.2% on IWSLT TED-talks En&amp;rarr;De by alleviating overfitting to some extent. We also examine the effectiveness of the method in a variety of data scenarios and show that the method also leads to significant improvements in various data conditions irrespective of the amount of training data.&lt;/p&gt;</description>
  </descriptions>
</resource>
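
For readers unfamiliar with the technique, the masking operation summarized in the abstract can be sketched as follows. This is a minimal NumPy sketch assuming a (time, frequency) matrix of audio features (e.g. log-mel filterbanks); the function name, parameter names, and default mask widths are illustrative and do not reflect the paper's exact configuration.

import numpy as np

def spec_augment(features, num_freq_masks=2, max_freq_width=8,
                 num_time_masks=2, max_time_width=20, rng=None):
    """Mask random blocks of frequency channels and time steps.

    `features` is assumed to be a (time, frequency) matrix of audio
    features; all parameter names and default values are illustrative,
    not the settings used in the paper.
    """
    rng = rng or np.random.default_rng()
    T, F = features.shape
    augmented = features.copy()

    # Frequency masking: zero out random bands of adjacent channels.
    for _ in range(num_freq_masks):
        width = rng.integers(0, max_freq_width + 1)
        start = rng.integers(0, max(1, F - width))
        augmented[:, start:start + width] = 0.0

    # Time masking: zero out random spans of adjacent frames.
    for _ in range(num_time_masks):
        width = rng.integers(0, max_time_width + 1)
        start = rng.integers(0, max(1, T - width))
        augmented[start:start + width, :] = 0.0

    return augmented

# Example: augment a random 300-frame, 80-dimensional feature matrix.
if __name__ == "__main__":
    feats = np.random.randn(300, 80).astype(np.float32)
    masked = spec_augment(feats)
    print(masked.shape)  # (300, 80)

Because the masks are drawn independently per utterance at training time, the augmentation adds no inference-time cost, which is consistent with the "low-cost" characterization in the abstract.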

Record statistics       All versions    This version
Views                   97              95
Downloads               76              76
Data volume             58.4 MB         58.4 MB
Unique views            86              84
Unique downloads        66              66
