Published September 16, 2022 | Version 1.0
Dataset | Open Access

TIMIT-TTS: a Text-to-Speech Dataset for Synthetic Speech Detection

  • 1. Politecnico di Milano, Italy
  • 2. Drexel University, USA

Description

With the rapid development of deep learning techniques, the generation and counterfeiting of multimedia material are becoming increasingly straightforward to perform. At the same time, sharing fake content on the web has become so simple that malicious users can cause real harm with minimal effort. Moreover, forged media are becoming increasingly complex, with manipulated videos (e.g., deepfakes, where both the visual and the audio content can be counterfeited) overtaking still images.
The multimedia forensic community has addressed the threats that this situation implies by developing detectors that verify the authenticity of multimedia objects. However, the vast majority of these tools analyze only one modality at a time.
This was not a problem as long as still images were the most widely edited media, but now that manipulated videos are becoming commonplace, monomodal analyses can be reductive. Nonetheless, multimodal detectors (systems that jointly consider the audio and video components) remain scarce in the literature. This is due not only to the difficulty of developing them, but also to the scarcity of datasets containing forged multimodal data on which to train and test the designed algorithms.

In this paper, we focus on the generation of an audio-visual deepfake dataset.
First, we present a general pipeline for synthesizing speech deepfake content from a given real or fake video, facilitating the creation of counterfeit multimodal material. The proposed method uses Text-to-Speech (TTS) and Dynamic Time Warping (DTW) techniques to produce realistic speech tracks that are time-aligned with the input video (see the sketch below). Then, we use the pipeline to generate and release TIMIT-TTS, a synthetic speech dataset produced with some of the most cutting-edge methods in the TTS field. It can be used as a standalone audio dataset or combined with the DeepfakeTIMIT and VidTIMIT video datasets for multimodal research. Finally, we present numerous experiments that benchmark the proposed dataset in both monomodal (i.e., audio-only) and multimodal (i.e., audio and video) conditions.
These results highlight the need for multimodal forensic detectors and for more multimodal deepfake data.
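
To make the alignment step concrete, here is a minimal sketch of how DTW can time-align a raw TTS track to the speech of a target video. This is not the authors' implementation: the use of librosa, the MFCC alignment features, the file names, and the final global time-stretch (a crude stand-in for a proper path-based warp) are all assumptions made for illustration.

    # Minimal sketch, assuming librosa/soundfile and hypothetical file names.
    import librosa
    import soundfile as sf

    SR = 16000  # assumed working sample rate

    # Load the reference speech (extracted from the video) and the raw TTS output.
    y_ref, _ = librosa.load("video_audio.wav", sr=SR)  # hypothetical path
    y_tts, _ = librosa.load("tts_output.wav", sr=SR)   # hypothetical path

    # MFCCs as alignment features (a common, but assumed, choice).
    mfcc_ref = librosa.feature.mfcc(y=y_ref, sr=SR, n_mfcc=13)
    mfcc_tts = librosa.feature.mfcc(y=y_tts, sr=SR, n_mfcc=13)

    # DTW yields the optimal frame-level alignment path between the two tracks.
    _, wp = librosa.sequence.dtw(X=mfcc_ref, Y=mfcc_tts)
    print(f"DTW alignment path covers {wp.shape[0]} frame pairs")

    # Crude approximation of the warping step: stretch the TTS track so its
    # duration matches the reference before pairing it with the video.
    rate = len(y_tts) / len(y_ref)
    y_aligned = librosa.effects.time_stretch(y_tts, rate=rate)
    sf.write("tts_aligned.wav", y_aligned, SR)

In practice, the warping path wp provides a frame-by-frame mapping that supports a finer, locally varying warp than the single global rate used here.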

  • For the initial release (TIMIT-TTS v1.0):
    • arXiv preprint: https://arxiv.org/abs/2209.08000
    • TIMIT-TTS database v1.0: https://zenodo.org/record/6560159

Files (7.2 GB)

  • TIMIT-TTS.zip (7.2 GB, md5:25aabc58d4e41deb7320a1220cf4f8b0)
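
Before unpacking, the archive can be checked against the published checksum. A minimal sketch, assuming a standard Python environment; the chunked read simply avoids loading the 7.2 GB file into memory at once.

    # Verify the downloaded archive against the md5 listed above.
    import hashlib

    EXPECTED_MD5 = "25aabc58d4e41deb7320a1220cf4f8b0"  # from the record above

    md5 = hashlib.md5()
    with open("TIMIT-TTS.zip", "rb") as f:
        # Hash in 1 MiB chunks so the archive is never fully in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)

    assert md5.hexdigest() == EXPECTED_MD5, "checksum mismatch: re-download the archive"
    print("TIMIT-TTS.zip verified")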

Additional details

Related works

Is published in
Preprint: https://arxiv.org/abs/2209.08000