Conference paper Open Access

Transformers without Tears: Improving the Normalization of Self-Attention

Nguyen, Toan Q.; Salazar, Julian


JSON-LD (schema.org) Export

{
  "inLanguage": {
    "alternateName": "eng", 
    "@type": "Language", 
    "name": "English"
  }, 
  "description": "<p>We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PRENORM) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose&nbsp;l2&nbsp;normalization with a single scale parameter (SCALENORM) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FIXNORM). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT &#39;15 English-Vietnamese. We ob- serve sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT &#39;14 English-German), SCALENORM&nbsp;and FIXNORM&nbsp;remain competitive but PRENORM&nbsp;degrades performance.</p>", 
  "license": "https://creativecommons.org/licenses/by/4.0/legalcode", 
  "creator": [
    {
      "affiliation": "University of Notre Dame", 
      "@type": "Person", 
      "name": "Nguyen, Toan Q."
    }, 
    {
      "affiliation": "Amazon AWS AI", 
      "@type": "Person", 
      "name": "Salazar, Julian"
    }
  ], 
  "headline": "Transformers without Tears: Improving the Normalization of Self-Attention", 
  "image": "https://zenodo.org/static/img/logos/zenodo-gradient-round.svg", 
  "datePublished": "2019-11-02", 
  "url": "https://zenodo.org/record/3525484", 
  "@context": "https://schema.org/", 
  "identifier": "https://doi.org/10.5281/zenodo.3525484", 
  "@id": "https://doi.org/10.5281/zenodo.3525484", 
  "@type": "ScholarlyArticle", 
  "name": "Transformers without Tears: Improving the Normalization of Self-Attention"
}
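
The abstract describes three normalization-centric changes (PRENORM, SCALENORM, FIXNORM). Below is a minimal PyTorch sketch of SCALENORM (l2 normalization with a single learned scale g) and a pre-norm residual connection, based only on the abstract's description; the class and helper names are illustrative, not the authors' released implementation.

import torch
import torch.nn as nn

class ScaleNorm(nn.Module):
    """SCALENORM: l2-normalize the last dimension, then rescale by a single learned scalar g."""
    def __init__(self, scale, eps=1e-5):
        super().__init__()
        self.g = nn.Parameter(torch.tensor(float(scale)))
        self.eps = eps

    def forward(self, x):
        # g * x / ||x||_2, with a small floor on the norm to avoid division by zero.
        return self.g * x / x.norm(dim=-1, keepdim=True).clamp(min=self.eps)

def prenorm_residual(x, sublayer, norm):
    # PRENORM residual connection: normalize the input *before* the sublayer,
    # then add the result back to the unnormalized residual stream.
    return x + sublayer(norm(x))

# FIXNORM (normalizing word embeddings to a fixed length) amounts to applying the
# same x / ||x||_2 rescaling to the embedding table with a fixed, non-learned scale.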
Record statistics (all versions / this version):
Views: 1,037 / 1,037
Downloads: 644 / 644
Data volume: 222.8 MB / 222.8 MB
Unique views: 888 / 888
Unique downloads: 576 / 576
