Conference paper Open Access
Transformers without Tears: Improving the Normalization of Self-Attention

Nguyen, Toan Q. (University of Notre Dame); Salazar, Julian (Amazon AWS AI)

Published November 2, 2019 | Zenodo | Language: English
DOI: 10.5281/zenodo.3525484 (this version); 10.5281/zenodo.3525483 (all versions)
Community: https://zenodo.org/communities/iwslt2019
License: Creative Commons Attribution 4.0 International (CC BY 4.0)

Abstract:

We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PreNorm) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose l2 normalization with a single scale parameter (ScaleNorm) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FixNorm). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT '15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT '14 English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades performance.
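To make the ScaleNorm idea from the abstract concrete, here is a minimal PyTorch sketch of l2 normalization with a single learned scale parameter. It is an illustration of the technique as described above, not the authors' released code; the choice to initialize the scale to sqrt(d_model) and the epsilon value are assumptions.

```python
import math
import torch
import torch.nn as nn

class ScaleNorm(nn.Module):
    """l2-normalize the last dimension and rescale by a single learned scalar g.

    Sketch based on the abstract's description of ScaleNorm; the init value
    (sqrt(d_model)) and eps are assumptions, not taken from the paper's code.
    """
    def __init__(self, scale: float, eps: float = 1e-5):
        super().__init__()
        self.g = nn.Parameter(torch.tensor(scale))  # one scalar, unlike LayerNorm's per-feature gain/bias
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Project x onto the sphere of radius g along the feature dimension.
        norm = x.norm(dim=-1, keepdim=True).clamp(min=self.eps)
        return self.g * x / norm

# Example: replace a LayerNorm of width d_model = 512 in a Transformer sublayer.
d_model = 512
scale_norm = ScaleNorm(scale=math.sqrt(d_model))
hidden = torch.randn(2, 10, d_model)
out = scale_norm(hidden)  # same shape, rows rescaled to length g
```

FixNorm, as described in the abstract, applies the same kind of l2 normalization to the word embedding table with a fixed (rather than learned) length.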
| | All versions | This version |
|---|---|---|
| Views | 1,035 | 1,035 |
| Downloads | 642 | 642 |
| Data volume | 222.1 MB | 222.1 MB |
| Unique views | 886 | 886 |
| Unique downloads | 574 | 574 |