Conference paper Open Access
Macé, Valentin; Servan, Christophe
**Using Whole Document Context in Neural Machine Translation**

Affiliation: QWANT RESEARCH - 7 Rue Spontini, 75116 Paris, France

Published: 2019-11-02 | Language: English | Community: IWSLT 2019

DOI: 10.5281/zenodo.3525020 (version of 10.5281/zenodo.3525019)

License: Creative Commons Attribution 4.0 International (CC-BY)

File: https://zenodo.org/record/3525020/files/IWSLT2019_paper_20.pdf (168,252 bytes, md5:3a4a4614e5f77e181163dafd99370339)

Abstract: In Machine Translation, considering the document as a whole can help to resolve ambiguities and inconsistencies. In this paper, we propose a simple yet promising approach to add contextual information in Neural Machine Translation. We present a method to add source context that captures the whole document with accurate boundaries, taking every word into account. We provide this additional information to a Transformer model and study the impact of our method on three language pairs. The proposed approach obtains promising results on the English-German, English-French and French-English document-level translation tasks. We observe interesting cross-sentential behaviors where the model learns to use document-level information to improve translation coherence.
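The abstract only sketches the idea of feeding whole-document source context to a Transformer. As a rough illustration of one plausible way to do this, the snippet below builds a single document-level vector by averaging the embeddings of every source word in the document and adds it to each sentence token embedding before the encoder. This is a minimal sketch under assumed hyper-parameters and module names; it is not necessarily the exact mechanism used in the paper.

```python
# Illustrative sketch (not the paper's verified implementation): inject a
# whole-document context vector into a Transformer encoder by adding it to
# every source token embedding. All sizes and names here are hypothetical.
import torch
import torch.nn as nn

class DocContextEncoder(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, sentence_ids, document_ids):
        # sentence_ids: (batch, sent_len) tokens of the sentence to translate
        # document_ids: (batch, doc_len) tokens of the whole source document
        sent_emb = self.embed(sentence_ids)               # (B, S, d)
        doc_emb = self.embed(document_ids).mean(dim=1)    # (B, d): one vector per document
        # Add the document-level vector to every token embedding so each
        # position carries whole-document context into the encoder.
        ctx_input = sent_emb + doc_emb.unsqueeze(1)
        return self.encoder(ctx_input)

# Toy usage
model = DocContextEncoder()
sentence = torch.randint(0, 1000, (2, 7))    # two sentences of 7 tokens each
document = torch.randint(0, 1000, (2, 50))   # their 50-token source documents
out = model(sentence, document)
print(out.shape)  # torch.Size([2, 7, 64])
```

In a full NMT system this encoder output would feed a standard Transformer decoder; the averaging step is just one simple choice for collapsing the document into a fixed-size context vector.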
| | All versions | This version |
|---|---|---|
| Views | 119 | 119 |
| Downloads | 87 | 87 |
| Data volume | 14.6 MB | 14.6 MB |
| Unique views | 111 | 111 |
| Unique downloads | 80 | 80 |