Journal article Open Access

Revisiting Multi-Domain Machine Translation

MinhQuang Pham; Josep Crego; François Yvon

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>MinhQuang Pham</dc:creator>
  <dc:creator>Josep Crego</dc:creator>
  <dc:creator>François Yvon</dc:creator>
  <dc:description>When building machine translation systems, one often needs to make the best out of heterogeneous sets of parallel data in training, and to robustly handle inputs from unexpected domains in testing. This multi-domain scenario has attracted much recent work, which falls under the general umbrella of transfer learning. In this study, we revisit multi-domain machine translation, with the aim of formulating the motivations for developing such systems and the associated expectations with respect to performance. Our experiments with a large sample of multi-domain systems show that most of these expectations are hardly met, and suggest that further work is needed to better analyze the current behaviour of multi-domain systems and to make them fully deliver on their promises.</dc:description>
  <dc:source>Transactions of the Association for Computational Linguistics 9</dc:source>
  <dc:subject>Neural Machine Translation</dc:subject>
  <dc:subject>Multi-domain MT</dc:subject>
  <dc:subject>Domain Adaptation</dc:subject>
  <dc:title>Revisiting Multi-Domain Machine Translation</dc:title>
</oai_dc:dc>
Statistics (all versions / this version):
Views: 43 / 43
Downloads: 28 / 28
Data volume: 8.0 MB / 8.0 MB
Unique views: 30 / 30
Unique downloads: 26 / 26

