Conference paper Open Access
Transformers without Tears: Improving the Normalization of Self-Attention

Nguyen, Toan Q.; Salazar, Julian

Publisher: Zenodo
DOI: 10.5281/zenodo.3525484
Publication date: November 2, 2019
Language: English

Abstract: We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PRENORM) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose l2 normalization with a single scale parameter (SCALENORM) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FIXNORM). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT '15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT '14 English-German), SCALENORM and FIXNORM remain competitive but PRENORM degrades performance.
| | All versions | This version |
|---|---|---|
| Views | 1,037 | 1,037 |
| Downloads | 644 | 644 |
| Data volume | 222.8 MB | 222.8 MB |
| Unique views | 888 | 888 |
| Unique downloads | 576 | 576 |