Conference paper · Open Access

Transformers without Tears: Improving the Normalization of Self-Attention

Nguyen, Toan Q.; Salazar, Julian


DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://datacite.org/schema/kernel-4" xsi:schemaLocation="http://datacite.org/schema/kernel-4 http://schema.datacite.org/meta/kernel-4.1/metadata.xsd">
  <identifier identifierType="DOI">10.5281/zenodo.3525484</identifier>
  <creators>
    <creator>
      <creatorName>Nguyen, Toan Q.</creatorName>
      <givenName>Toan Q.</givenName>
      <familyName>Nguyen</familyName>
      <affiliation>University of Notre Dame</affiliation>
    </creator>
    <creator>
      <creatorName>Salazar, Julian</creatorName>
      <givenName>Julian</givenName>
      <familyName>Salazar</familyName>
      <affiliation>Amazon AWS AI</affiliation>
    </creator>
  </creators>
  <titles>
    <title>Transformers without Tears: Improving the Normalization of Self-Attention</title>
  </titles>
  <publisher>Zenodo</publisher>
  <publicationYear>2019</publicationYear>
  <dates>
    <date dateType="Issued">2019-11-02</date>
  </dates>
  <language>en</language>
  <resourceType resourceTypeGeneral="ConferencePaper"/>
  <alternateIdentifiers>
    <alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3525484</alternateIdentifier>
  </alternateIdentifiers>
  <relatedIdentifiers>
    <relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3525483</relatedIdentifier>
    <relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/iwslt2019</relatedIdentifier>
  </relatedIdentifiers>
  <rightsList>
    <rights rightsURI="https://creativecommons.org/licenses/by/4.0/legalcode">Creative Commons Attribution 4.0 International</rights>
    <rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
  </rightsList>
  <descriptions>
    <description descriptionType="Abstract">&lt;p&gt;We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PRENORM) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose l2 normalization with a single scale parameter (SCALENORM) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FIXNORM). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT &amp;#39;15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT &amp;#39;14 English-German), SCALENORM and FIXNORM remain competitive but PRENORM degrades performance.&lt;/p&gt;</description>
  </descriptions>
</resource>
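
As a rough illustration of the abstract's SCALENORM (l2 normalization with a single learned scale parameter), here is a minimal PyTorch sketch. The class name, the eps guard, and the g = sqrt(d_model) initialization are illustrative assumptions based on the abstract's description, not code from the paper's release.

import math

import torch
import torch.nn as nn

class ScaleNorm(nn.Module):
    """l2-normalize the last dimension, then rescale by one learned scalar g.

    Sketch of SCALENORM as described in the abstract. FIXNORM would apply the
    same l2 normalization to the word-embedding table with the length held
    fixed (an assumption based on the abstract's wording).
    """
    def __init__(self, scale: float, eps: float = 1e-5):
        super().__init__()
        self.g = nn.Parameter(torch.tensor(scale))  # the single scale parameter
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Divide each vector by its l2 norm (guarded against zero), scale by g.
        norm = x.norm(dim=-1, keepdim=True).clamp(min=self.eps)
        return self.g * x / norm

# Illustrative usage: a PRENORM-style residual x + f(norm(x)), with g
# initialized to sqrt(d_model) (assumed initialization).
d_model = 512
scale_norm = ScaleNorm(scale=math.sqrt(d_model))
x = torch.randn(2, 10, d_model)
y = x + torch.relu(scale_norm(x))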
