Journal article Open Access

End-to-end Speech Translation System Description of LIT for IWSLT 2019

Tu, Mei; Liu, Wei; Wang, Lijie; Chen, Xiao; Wen, Xue

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:creator>Tu, Mei</dc:creator>
  <dc:creator>Liu, Wei</dc:creator>
  <dc:creator>Wang, Lijie</dc:creator>
  <dc:creator>Chen, Xiao</dc:creator>
  <dc:creator>Wen, Xue</dc:creator>
  <dc:description>This paper describes our end-to-end speech translation system for the IWSLT 2019 speech translation task on lectures and TED talks from English to German. We propose layer-tied self-attention for end-to-end speech translation. Our method takes advantage of sharing the weights of the speech encoder and the text decoder. The representations of the source speech and the target text are coordinated layer by layer, so that speech and text can learn a better alignment during training. We also adopt data augmentation to enlarge the parallel speech-text corpus. The En-De experimental results show that our best model achieves 17.68 on tst2015. Our ASR achieves a WER of 6.6% on the TED-LIUM test set. The En-Pt model achieves about 11.83 on the MuST-C dev set.</dc:description>
  <dc:title>End-to-end Speech Translation System Description of LIT for IWSLT 2019</dc:title>
</oai_dc:dc>

                   All versions   This version
Views                       194            194
Downloads                   169            169
Data volume             74.7 MB        74.7 MB
Unique views                169            169
Unique downloads            145            145

