Journal article Open Access

Revisiting Multi-Domain Machine Translation

MinhQuang Pham; Josep Crego; François Yvon

When building machine translation systems, one often needs to make the best out of heterogeneous sets of parallel data in training, and to robustly handle inputs from unexpected domains in testing. This multi-domain scenario has attracted a lot of recent work that falls under the general umbrella of transfer learning. In this study, we revisit multi-domain machine translation, with the aim of formulating the motivations for developing such systems and the associated expectations with respect to performance. Our experiments with a large sample of multi-domain systems show that most of these expectations are hardly met and suggest that further work is needed to better analyse the current behaviour of multi-domain systems and to make them fully deliver on their promises.

Files (284.7 kB)
main-2327-PhamMinhQuang.pdf (284.7 kB)
md5: c2db23c132297b90a385481171fff2c2
Statistics (all versions / this version)
Views: 42 / 42
Downloads: 26 / 26
Data volume: 7.4 MB / 7.4 MB
Unique views: 29 / 29
Unique downloads: 24 / 24
