Conference paper (Closed Access)
Inaguma, Hirofumi; Kiyono, Shun; Soplin, Nelson Enrique Yalta; Suzuki, Jun; Duh, Kevin; Watanabe, Shinji
This paper describes the ESPnet submissions to the How2 Speech Translation task at IWSLT2019. This year, we built our systems mainly on the Transformer architecture for all tasks and focused on end-to-end speech translation (E2E-ST). We first compare RNN-based models with the Transformer and confirm that Transformer models significantly and consistently outperform RNN models across all tasks and corpora. Next, we investigate pre-training E2E-ST models with the ASR and MT tasks. On top of the pre-training, we further explore knowledge distillation from the NMT model and a deeper speech encoder, and confirm substantial improvements over the baseline model. All of our code is publicly available in ESPnet.
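To make the distillation step the abstract mentions concrete, below is a minimal PyTorch-style sketch of word-level knowledge distillation from an NMT teacher into the E2E-ST student: the training loss interpolates the usual cross-entropy on reference translations with a KL term against the teacher's softened output distribution. The function name `kd_loss`, the weight `alpha`, and the `temperature` parameter are assumptions for illustration; this is not the ESPnet implementation, and the exact formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F


def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            targets: torch.Tensor,
            alpha: float = 0.5,
            temperature: float = 1.0) -> torch.Tensor:
    """Hypothetical KD objective: interpolate reference cross-entropy
    with KL divergence to the MT teacher's soft targets.

    student_logits, teacher_logits: (batch, time, vocab)
    targets: (batch, time) gold target-token ids
    """
    vocab = student_logits.size(-1)
    # Standard cross-entropy against the reference translation.
    ce = F.cross_entropy(student_logits.reshape(-1, vocab),
                         targets.reshape(-1))
    # Soft targets from the teacher; detached so no gradient reaches it.
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits.detach() / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    return (1.0 - alpha) * ce + alpha * kl


# Toy usage with random tensors standing in for model outputs.
B, T, V = 2, 5, 100
student = torch.randn(B, T, V, requires_grad=True)
teacher = torch.randn(B, T, V)
gold = torch.randint(0, V, (B, T))
loss = kd_loss(student, teacher, gold)
loss.backward()
```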
Files are not publicly accessible.
|  | All versions | This version |
|---|---|---|
| Views | 338 | 338 |
| Downloads | 22 | 22 |
| Data volume | 3.8 MB | 3.8 MB |
| Unique views | 288 | 288 |
| Unique downloads | 19 | 19 |