
Published November 2, 2020 | Version 1.0
Journal article | Open Access

Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation

  • 1. Univ. Grenoble Alpes, CNRS, LIG
  • 2. Facebook AI

Description

We introduce the dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependencies between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously reported highest translation performance in the multilingual settings, as well as bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pretrained models are available at https://github.com/formiel/speech-translation.
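To make the dual-attention mechanism concrete, here is a minimal PyTorch sketch of a decoder layer in the parallel variant: besides the usual self-attention and encoder attention, each layer attends to the other decoder's hidden states. All module names, dimensions, and the residual wiring below are illustrative assumptions for exposition, not the authors' exact implementation (see the linked repository for that); attention masks are omitted for brevity.

    import torch
    import torch.nn as nn

    class DualAttentionDecoderLayer(nn.Module):
        """One decoder layer with self-, encoder-, and dual-attention.
        Hypothetical sketch; names and layout are assumptions."""

        def __init__(self, d_model=256, nhead=4):
            super().__init__()
            self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
            self.enc_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
            # Extra attention over the *other* decoder's hidden states.
            self.dual_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
            self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                     nn.Linear(4 * d_model, d_model))
            self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(4))

        def forward(self, x, other, memory):
            # x:      this decoder's states      (batch, tgt_len, d_model)
            # other:  the other decoder's states (batch, other_len, d_model)
            # memory: shared speech encoder output (batch, src_len, d_model)
            x = self.norms[0](x + self.self_attn(x, x, x)[0])
            x = self.norms[1](x + self.enc_attn(x, memory, memory)[0])
            x = self.norms[2](x + self.dual_attn(x, other, other)[0])  # dual-attention
            return self.norms[3](x + self.ffn(x))

    # In the parallel variant, the ASR and ST layers at the same depth exchange
    # states computed at the previous layer, so both decoders can run side by side.
    asr_layer, st_layer = DualAttentionDecoderLayer(), DualAttentionDecoderLayer()
    memory = torch.randn(2, 50, 256)                    # shared encoder output
    asr_x, st_x = torch.randn(2, 10, 256), torch.randn(2, 12, 256)
    asr_next = asr_layer(asr_x, st_x, memory)           # ASR attends to ST states
    st_next = st_layer(st_x, asr_x, memory)             # ST attends to ASR states

In this sketch the two layers are updated symmetrically from each other's previous-layer states, which is what allows the parallel variant to avoid a sequential dependency between the two decoders; the cross variant would instead introduce a tighter, ordered dependency between them.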

Files

Total size: 891.5 MB
md5:0dfa02db9d5e103ff276870c8b1e582d