Conference paper Open Access

Adapting Multilingual Neural Machine Translation to Unseen Languages

Lakew, Surafel M.; Karakanta, Alina; Federico, Marcello; Negri, Matteo; Turchi, Marco


Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Lakew, Surafel M.</dc:creator>
  <dc:creator>Karakanta, Alina</dc:creator>
  <dc:creator>Federico, Marcello</dc:creator>
  <dc:creator>Negri, Matteo</dc:creator>
  <dc:creator>Turchi, Marco</dc:creator>
  <dc:date>2019-11-02</dc:date>
  <dc:description>Multilingual Neural Machine Translation (MNMT) for low-resource languages (LRL) can be enhanced by the presence of related high-resource languages (HRL), but the relatedness of HRL usually relies on predefined linguistic assumptions about language similarity. Recently, adapting MNMT to an LRL has been shown to greatly improve performance. In this work, we explore the problem of adapting an MNMT model to an unseen LRL using data selection and model adaptation. In order to improve NMT for LRL, we employ perplexity to select HRL data that are most similar to the LRL on the basis of language distance. We extensively explore data selection in popular multilingual NMT settings, namely in (zero-shot) translation, and in adaptation from a multilingual pre-trained model, for both directions (LRL↔en). We further show that dynamic adaptation of the model’s vocabulary results in a more favourable segmentation for the LRL in comparison with direct adaptation. Experiments show reductions in training time and significant performance gains over LRL baselines, even with zero LRL data (+13.0 BLEU), up to +17.0 BLEU for pre-trained multilingual model dynamic adaptation with related data selection. Our method outperforms current approaches, such as massively multilingual models and data augmentation, on four LRL.</dc:description>
  <dc:identifier>https://zenodo.org/record/3525486</dc:identifier>
  <dc:identifier>10.5281/zenodo.3525486</dc:identifier>
  <dc:identifier>oai:zenodo.org:3525486</dc:identifier>
  <dc:language>eng</dc:language>
  <dc:relation>doi:10.5281/zenodo.3525485</dc:relation>
  <dc:relation>url:https://zenodo.org/communities/iwslt2019</dc:relation>
  <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
  <dc:rights>https://creativecommons.org/licenses/by/4.0/legalcode</dc:rights>
  <dc:title>Adapting Multilingual Neural Machine Translation to Unseen Languages</dc:title>
  <dc:type>info:eu-repo/semantics/conferencePaper</dc:type>
  <dc:type>publication-conferencepaper</dc:type>
</oai_dc:dc>
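
The abstract above describes selecting HRL data most similar to the LRL by scoring candidate sentences with perplexity under a language model of the LRL. Below is a minimal sketch of that idea, assuming a simple character-trigram language model estimated on LRL monolingual text; the trigram model, the add-alpha smoothing, and all function names are illustrative assumptions rather than the authors' actual implementation.

import math
from collections import Counter

def train_char_trigram_lm(lrl_sentences):
    """Estimate character trigram/bigram counts from LRL monolingual data."""
    trigrams, bigrams = Counter(), Counter()
    for s in lrl_sentences:
        s = f"^^{s}$"  # pad with start/end markers
        for i in range(len(s) - 2):
            trigrams[s[i:i+3]] += 1
            bigrams[s[i:i+2]] += 1
    return trigrams, bigrams

def perplexity(sentence, trigrams, bigrams, vocab_size=256, alpha=1.0):
    """Per-character perplexity under the trigram LM with add-alpha smoothing."""
    s = f"^^{sentence}$"
    log_prob, n = 0.0, 0
    for i in range(len(s) - 2):
        num = trigrams[s[i:i+3]] + alpha
        den = bigrams[s[i:i+2]] + alpha * vocab_size
        log_prob += math.log(num / den)
        n += 1
    return math.exp(-log_prob / max(n, 1))

def select_hrl_data(hrl_sentences, lrl_sentences, top_k):
    """Rank HRL sentences by LRL-LM perplexity; keep the top_k most LRL-like."""
    trigrams, bigrams = train_char_trigram_lm(lrl_sentences)
    ranked = sorted(hrl_sentences,
                    key=lambda s: perplexity(s, trigrams, bigrams))
    return ranked[:top_k]

Under this scheme, HRL sentences whose surface statistics best match the LRL receive the lowest perplexity and are retained for training, which approximates selection by language distance without relying on predefined linguistic assumptions about language similarity.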
                  All versions   This version
Views                      176            176
Downloads                  113            113
Data volume            26.6 MB        26.6 MB
Unique views               152            152
Unique downloads           107            107
