CVoiceFake (crafted by SafeEar: Content Privacy-Preserving Audio Deepfake Detection)
Description
Introduction:
CVoiceFake (small) is a dataset featuring a random selection of 10% of the samples from the entire collection. It covers five common languages (English, Chinese, German, French, and Italian) and employs both advanced and classical voice cloning techniques (Parallel WaveGAN, Multi-band MelGAN, Style MelGAN, Griffin-Lim, WORLD, and DiffWave) to produce audio samples that bear a high resemblance to authentic audio.
- Parallel WaveGAN: As a non-autoregressive vocoder-based model, Parallel WaveGAN produces high-fidelity audio rapidly, making it ideal for efficient, high-quality deepfake generation.
- Multi-band MelGAN: Multi-band MelGAN is a variant of MelGAN that divides the frequency spectrum into sub-bands for faster and more stable multi-lingual vocoder training, enhancing the robustness and scalability of the dataset.
- Style MelGAN: Style MelGAN is designed to capture fine prosodic and stylistic nuances of speech, making it particularly compelling for deepfake applications that require high levels of expressivity and variation in speech synthesis.
- Griffin-Lim: This algorithm reconstructs waveforms from spectrograms using an iterative phase estimation method. Though lower in fidelity than neural vocoders, it serves as a classical baseline for deepfake generation (a minimal resynthesis sketch follows this list).
- WORLD: WORLD is a statistical parametric voice synthesis system that offers fine control over the spectral and prosodic features of the synthesized audio. This fine-grained control is useful for crafting the nuanced variations needed in deepfake datasets.
- DiffWave: DiffWave is a diffusion probabilistic model for waveform generation. It converts a white-noise signal into a structured waveform through a Markov chain and handles both conditional and unconditional generation tasks. DiffWave represents the advanced synthesis method owing to its fast synthesis speed and high synthesis quality.

We have also built the SOTA diffusion-based deepfake audio subset (DiffWave); please contact the author at xinfengli@zju.edu.cn if you are interested in the dataset, particularly the DiffWave portion. Furthermore, any additional discussion is welcome.
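As an illustration of the classical baseline mentioned above, here is a minimal sketch (not part of the dataset tooling) that resynthesizes a recording with Griffin-Lim via librosa; the file names and STFT parameters are placeholders, not the settings used to build CVoiceFake.

```python
import numpy as np
import librosa
import soundfile as sf

# Load a genuine recording (placeholder path), keeping its native sample rate.
y, sr = librosa.load("genuine_sample.wav", sr=None)

# Magnitude spectrogram; the FFT/hop sizes are illustrative only.
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

# Griffin-Lim: iteratively estimate the phase discarded above, then invert.
y_gl = librosa.griffinlim(S, n_iter=32, n_fft=1024, hop_length=256)

# Write the lower-fidelity resynthesized copy next to the original.
sf.write("griffinlim_resynth.wav", y_gl, sr)
```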
🔥News:
Please note that we recently released our DiffWave subset in Version 2 (Version 1 is available in CVoiceFake Full). You can download the split parts named CVoiceFake_Large_diffwave_update.tar.gz.xx listed below; after merging and unpacking them (see the sketch after the list), you will find the same file structure as before.
- CVoiceFake_Large_diffwave_update.tar.gz.00
- CVoiceFake_Large_diffwave_update.tar.gz.01
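A minimal sketch for merging and unpacking the parts, assuming (as is typical for split uploads) that the .00/.01 files are sequential pieces of a single gzipped tar archive; output paths are placeholders.

```python
import glob
import shutil
import tarfile

# Concatenate the parts in order (.00, .01, ...) into one archive.
parts = sorted(glob.glob("CVoiceFake_Large_diffwave_update.tar.gz.*"))
with open("CVoiceFake_Large_diffwave_update.tar.gz", "wb") as merged:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, merged)

# Extract the merged archive; it retains the same file structure as before.
with tarfile.open("CVoiceFake_Large_diffwave_update.tar.gz", "r:gz") as tar:
    tar.extractall("CVoiceFake_DiffWave")
```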
Full Dataset & Project Page:
The whole dataset is also available as CVoiceFake Full. Please also refer to the project page: SafeEar Website.
Citation:
If you find our paper/code/benchmark helpful, please consider citing this work with the following reference:
@inproceedings{li2024safeear,
  author    = {Li, Xinfeng and Li, Kai and Zheng, Yifan and Yan, Chen and Ji, Xiaoyu and Xu, Wenyuan},
  title     = {{SafeEar: Content Privacy-Preserving Audio Deepfake Detection}},
  booktitle = {Proceedings of the 2024 {ACM} {SIGSAC} Conference on Computer and Communications Security (CCS)},
  year      = {2024},
}
Files (4.5 GB)

Size | MD5
---|---
4.5 GB | 8052391a7bec9052976d4aefa1530c92
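A minimal sketch for checking the downloaded archive against the MD5 checksum above; the file name used here is a placeholder, since the record does not list it.

```python
import hashlib

EXPECTED_MD5 = "8052391a7bec9052976d4aefa1530c92"

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 digest of a file, reading it in chunks to limit memory use."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Replace with the actual name of the downloaded archive.
assert md5sum("CVoiceFake_small_archive.tar.gz") == EXPECTED_MD5, "checksum mismatch"
```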
Additional details
Dates
- Accepted: 2024-05-07
Software
- Repository URL: https://SafeEarWeb.github.io/Project/
- Development Status: Active
References
- Xinfeng Li, Kai Li, Yifan Zheng, Chen Yan, Xiaoyu Ji, and Wenyuan Xu. "SafeEar: Content Privacy-Preserving Audio Deepfake Detection." In Proceedings of the 2024 ACM SIGSAC Conference on Computer and Communications Security (CCS 2024).