Published March 30, 2021 | Version 1.0
Dataset | Open Access

Cross-lingual Visual Pre-training for Multimodal Machine Translation

  • 1. Imperial College London
  • 2. Hacettepe University
  • 3. Koç University

Description

Supplementary resources for the paper "Cross-lingual Visual Pre-training for Multimodal Machine Translation", accepted at EACL 2021. Further instructions on how to use these resources are available at https://github.com/ImperialNLP/VTLM

  • A tarball that contains a custom train, valid, test split of the Conceptual Captions (CC) dataset. The included TSV files have an additional column containing automatic German translations of the original English captions (see the loading sketch after this list). We only provide samples for which we could download the images and extract meaningful features. This amounts to ~3M out of the ~3.3M original CC samples.
  • A tarball of the exact object detector checkpoint used for feature extraction.
  • A tarball with pre-extracted Multi30k dataset features.
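The TSV splits can be inspected with standard tabular tooling. Below is a minimal sketch; the file name and the column layout (image identifier, English caption, German translation) are assumptions for illustration, the authoritative format is documented in the VTLM repository linked above.

```python
# Minimal sketch: inspect one of the Conceptual Captions TSV splits.
# "train.tsv" and the column interpretation are placeholders; consult
# https://github.com/ImperialNLP/VTLM for the exact layout.
import pandas as pd

df = pd.read_csv(
    "train.tsv",   # path after extracting the tarball (hypothetical name)
    sep="\t",
    header=None,
    quoting=3,     # csv.QUOTE_NONE: captions may contain quote characters
)
print(df.shape)    # number of samples and columns
print(df.head())   # first rows: e.g. English caption and its German translation
```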

Files

Total size: 13.1 GB

md5:57067ed09a66251b70f3121c247aa4b0 (230.1 MB)
md5:2374df600000e7c8ed03c736454dbaf9 (733.8 MB)
md5:2cc686ce1c5042fe5f37825175a18a40 (6.1 GB)
md5:4d8f62f055900c7227010b93c3186285 (6.1 GB)

Additional details

Funding

MultiMT – Multi-modal Context Modelling for Machine Translation (Grant No. 678017), European Commission