Di Gangi, Matti
Enyedi, Robert
Brusadin, Alessandra
Federico, Marcello
2019-11-02
<p>Neural machine translation (NMT) models have been shown to achieve high quality when trained and fed with well-structured and punctuated input texts. Unfortunately, this condition is not met in spoken language translation, where the input is generated by an automatic speech recognition (ASR) system. In this paper, we study how to adapt a strong NMT system to make it robust to typical ASR errors. As in our application scenarios transcripts might be post-edited by human experts, we propose adaptation strategies to train a single system that can translate either clean or noisy input with no supervision on the input type. Our experimental results on a public speech translation data set show that adapting a model on a significant amount of parallel data including ASR transcripts is beneficial on test data of the same type, but causes a small degradation when translating clean text. Adapting on both clean and noisy variants of the same data yields the best results on both input types.</p>
https://doi.org/10.5281/zenodo.3524947
oai:zenodo.org:3524947
eng
Zenodo
https://zenodo.org/communities/iwslt2019
https://doi.org/10.5281/zenodo.3524946
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
Robust Neural Machine Translation for Clean and Noisy Speech Transcripts
info:eu-repo/semantics/conferencePaper