Journal article Open Access

Revisiting Multi-Domain Machine Translation

MinhQuang Pham; Josep Crego; François Yvon

When building machine translation systems, one often needs to make the best out of heterogeneous sets of parallel data in training, and to robustly handle inputs from unexpected domains in testing. This multi-domain scenario has attracted a lot of recent work that falls under the general umbrella of transfer learning. In this study, we revisit multi-domain machine translation, with the aim of formulating the motivations for developing such systems and the associated expectations with respect to performance. Our experiments with a large sample of multi-domain systems show that most of these expectations are hardly met, and suggest that further work is needed to better analyze the current behaviour of multi-domain systems and to make them fully deliver on their promises.

Files (284.7 kB)
main-2327-PhamMinhQuang.pdf (284.7 kB)
md5:c2db23c132297b90a385481171fff2c2
Statistics (all versions / this version)
Views: 37 / 37
Downloads: 21 / 21
Data volume: 6.0 MB / 6.0 MB
Unique views: 24 / 24
Unique downloads: 19 / 19
