Conference paper Open Access

# Transformer-based Cascaded Multimodal Speech Translation

Wu, Zixiu; Caglayan, Ozan; Ive, Julia; Wang, Josiah; Specia, Lucia

### DataCite XML Export

<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
<identifier identifierType="DOI">10.5281/zenodo.3525552</identifier>
<creators>
<creator>
<creatorName>Wu, Zixiu</creatorName>
<givenName>Zixiu</givenName>
<familyName>Wu</familyName>
<affiliation>Department of Computing, Imperial College London, UK</affiliation>
</creator>
<creator>
<creatorName>Caglayan, Ozan</creatorName>
<givenName>Ozan</givenName>
<familyName>Caglayan</familyName>
<affiliation>Department of Computing, Imperial College London, UK</affiliation>
</creator>
<creator>
<creatorName>Ive, Julia</creatorName>
<givenName>Julia</givenName>
<familyName>Ive</familyName>
<affiliation>Department of Computer Science, University of Sheffield, UK</affiliation>
</creator>
<creator>
<creatorName>Wang, Josiah</creatorName>
<givenName>Josiah</givenName>
<familyName>Wang</familyName>
<affiliation>Department of Computing, Imperial College London, UK</affiliation>
</creator>
<creator>
<creatorName>Specia, Lucia</creatorName>
<givenName>Lucia</givenName>
<familyName>Specia</familyName>
<affiliation>Department of Computing, Imperial College London, UK</affiliation>
</creator>
</creators>
<titles>
<title>Transformer-based Cascaded Multimodal Speech Translation</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2019</publicationYear>
<dates>
<date dateType="Issued">2019-11-02</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="Text">Conference paper</resourceType>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/3525552</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.3525551</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/iwslt2019</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract">&lt;p&gt;This paper describes the cascaded multimodal speech translation systems developed by Imperial College London for the IWSLT 2019 evaluation campaign. The architecture consists of an automatic speech recognition (ASR) system followed by a Transformer-based multimodal machine translation (MMT) system. While the ASR component is identical across the experiments, the MMT models vary in how they integrate the visual context (simple conditioning vs. attention), the type of visual features exploited (pooled, convolutional, action categories), and the underlying architecture. For the latter, we explore both the canonical transformer and its deliberation version with additive and cascade variants, which differ in how they integrate the textual attention. Upon conducting extensive experiments, we found that (i) the explored visual integration schemes often harm translation performance for the transformer and additive deliberation, but considerably improve cascade deliberation; (ii) the transformer and cascade deliberation integrate the visual modality better than the additive deliberation, as shown by the incongruence analysis.&lt;/p&gt;</description>
</descriptions>
</resource>
