3525032
doi
10.5281/zenodo.3525032
oai:zenodo.org:3525032
user-iwslt2019
Puzon, Liezl
Facebook
Gu, Jiatao
Facebook
Ma, Xutai
Facebook & Johns Hopkins University
McCarthy, Arya D.
Johns Hopkins University
Gopinath, Deepak
Facebook
Harnessing Indirect Training Data for End-to-End Automatic Speech Translation: Tricks of the Trade
Pino, Juan
Facebook
info:eu-repo/semantics/openAccess
Creative Commons Attribution 4.0 International
https://creativecommons.org/licenses/by/4.0/legalcode
<p>For automatic speech translation (AST), end-to-end approaches are outperformed by cascaded models that transcribe with automatic speech recognition (ASR), then translate with machine translation (MT). A major cause of the performance gap is that, while existing AST corpora are small, massive datasets exist for both the ASR and MT subsystems. In this work, we evaluate several data augmentation and pretraining approaches for AST, by comparing all on the same datasets. Simple data augmentation by translating ASR transcripts proves most effective on the English&ndash;French augmented LibriSpeech dataset, closing the performance gap from 8.2 to 1.4 BLEU, compared to a very strong cascade that could directly utilize copious ASR and MT data. The same end-to-end approach plus fine-tuning closes the gap on the English&ndash;Romanian MuST-C dataset from 6.7 to 3.7 BLEU. In addition to these results, we present practical recommendations for augmentation and pretraining approaches. Finally, we decrease the performance gap to 0.01 BLEU using a Transformer-based architecture.</p>
Zenodo
2019-11-02
info:eu-repo/semantics/conferencePaper
3525031
user-iwslt2019
1579538852.851701
1586090
md5:c9b435d8bdaddc66c4821b03d6fbcbff
https://zenodo.org/records/3525032/files/IWSLT2019_paper_25.pdf
public
10.5281/zenodo.3525031
isVersionOf
doi