

Published March 7, 2020 | Version v3.0 | Video/Audio | Open

DESED_synthetic

  • Université de Lorraine, CNRS, Inria, Loria, F-54000 Nancy, France
  • Adobe Research, San Francisco CA, United States
  • Language Technologies Institute, Carnegie Mellon University, Pittsburgh PA, United States
  • Google, Inc.

Description

Link to the associated GitHub repository: https://github.com/turpaultn/Desed

Links to the papers:
https://hal.inria.fr/hal-02160855
https://hal.inria.fr/hal-02355573v1

Domestic Environment Sound Event Detection (DESED).

This dataset is the synthetic part of the DESED dataset. It allows the creation of mixtures of isolated sound events and backgrounds.

It provides the material to:

  • Reproduce the DCASE 2019 task 4 synthetic dataset
  • Reproduce the DCASE 2020 task 4 synthetic train dataset
  • Create new mixtures from isolated foreground sounds and background sounds

Files:

If you want to generate new audio mixtures yourself from the original files, download:

  1. DESED_synth_soundbank.tar.gz: raw data (isolated foreground and background sounds) used to generate the mixtures.
  2. DESED_synth_dcase2019jams.tar.gz: JAMS metadata files describing how to recreate the DCASE 2019 synthetic dataset.
  3. DESED_synth_dcase20_train_val_jams.tar: JAMS metadata files describing how to recreate the DCASE 2020 synthetic train and validation datasets.
  4. DESED_synth_dcase20_eval_jams.tar: JAMS metadata files describing how to recreate the DCASE 2020 synthetic eval dataset (only the basic one; variants of it exist but are not presented here).
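Regenerating audio from the JAMS archives can be done with Scaper's `generate_from_jams` function. The sketch below shows one way to do it; the directory layout is hypothetical, and Scaper must be installed (e.g. `pip install scaper`):

```python
from pathlib import Path


def regenerate_dataset(jams_dir, out_dir, fg_path, bg_path):
    """Recreate audio mixtures from DESED JAMS metadata files.

    jams_dir: directory of .jams files extracted from one of the archives above
    fg_path / bg_path: foreground/background folders from DESED_synth_soundbank
    """
    # Imported lazily so the sketch can be read without Scaper installed.
    import scaper

    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for jams_file in sorted(Path(jams_dir).glob("*.jams")):
        wav_file = out_dir / (jams_file.stem + ".wav")
        # fg_path/bg_path remap the source-file paths stored in the JAMS
        # annotations onto the local soundbank location.
        scaper.generate_from_jams(
            str(jams_file), str(wav_file),
            fg_path=str(fg_path), bg_path=str(bg_path),
        )


# Example call (hypothetical local layout after extracting the archives):
# regenerate_dataset("metadata/train/synthetic20", "audio/train/synthetic20",
#                    "soundbank/foreground", "soundbank/background")
```

Passing `fg_path`/`bg_path` is what makes the JAMS files portable: the mixtures are rebuilt from your local copy of the soundbank rather than from the absolute paths used when the dataset was created.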

If you simply want the evaluation synthetic dataset used in DCASE 2019 task 4, download:

  1. DESED_synth_eval_dcase2019.tar.gz: evaluation audio and metadata files used in DCASE 2019 task 4.

 

The mixtures are generated using Scaper (https://github.com/justinsalamon/scaper) [1].

* Background files are extracted from SINS [2], MUSAN [3], or YouTube, and were selected because they contain very few of our target sound event classes.
* Foreground files are extracted from Freesound [4][5], manually verified for quality, and segmented to remove silences.
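Creating a new mixture with Scaper follows a simple pattern: instantiate a `Scaper` object pointing at the foreground and background folders, add a background and some events with randomized parameters, then generate. The sketch below illustrates this; the function name, paths, and parameter ranges are illustrative, not the exact settings used to build DESED.

```python
def make_mixture(fg_path, bg_path, out_wav, out_jams, n_events=3, seed=0):
    """Generate one 10 s mixture from the DESED soundbank (a sketch)."""
    # Imported lazily so the sketch can be read without Scaper installed.
    import scaper

    sc = scaper.Scaper(10.0, fg_path, bg_path, random_state=seed)
    sc.ref_db = -50  # reference loudness for the background

    # One background clip, drawn at random from the background soundbank.
    sc.add_background(
        label=("choose", []), source_file=("choose", []), source_time=("const", 0)
    )

    # A few foreground events with randomized timing and SNR; each parameter
    # is a distribution tuple that Scaper samples at generation time.
    for _ in range(n_events):
        sc.add_event(
            label=("choose", []),
            source_file=("choose", []),
            source_time=("const", 0),
            event_time=("uniform", 0, 8),
            event_duration=("uniform", 0.5, 4.0),
            snr=("uniform", 6, 30),
            pitch_shift=None,
            time_stretch=None,
        )

    # Writes the audio plus a JAMS file recording exactly how it was built.
    sc.generate(out_wav, out_jams)


# Example call (hypothetical local layout):
# make_mixture("soundbank/foreground", "soundbank/background",
#              "mix_0000.wav", "mix_0000.jams")
```

The JAMS file written alongside each mixture is what the archives above contain, which is why the dataset can be redistributed as metadata plus soundbank instead of full audio.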

References
[1] J. Salamon, D. MacConnell, M. Cartwright, P. Li, and J. P. Bello. Scaper: a library for soundscape synthesis and augmentation.
In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, USA, Oct. 2017.

[2] Gert Dekkers, Steven Lauwereins, Bart Thoen, Mulu Weldegebreal Adhana, Henk Brouckxon, Toon van Waterschoot, Bart Vanrumste, Marian Verhelst, and Peter Karsmakers.
The SINS database for detection of daily activities in a home environment using an acoustic sensor network.
In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017), 32–36. November 2017.

[3] D. Snyder, G. Chen, and D. Povey.
MUSAN: A Music, Speech, and Noise Corpus.
arXiv preprint arXiv:1510.08484, 2015.

[4] F. Font, G. Roma, and X. Serra. Freesound technical demo. In Proceedings of the 21st ACM International Conference on Multimedia. ACM, 2013.
 
[5] E. Fonseca, J. Pons, X. Favory, F. Font, D. Bogdanov, A. Ferraro, S. Oramas, A. Porter, and X. Serra. Freesound Datasets: A Platform for the Creation of Open Audio Datasets.
In Proceedings of the 18th International Society for Music Information Retrieval Conference (ISMIR), Suzhou, China, 2017.

 

Files (29.0 GB)

Size      MD5
18.9 GB   md5:99cbb7b21299cd473e4acedfd5ad614f
3.1 MB    md5:e5d6348d9b9ca19d08b7afba0e987de3
326.0 kB  md5:105774e4528b266c829f3a6fdad4397d
1.2 MB    md5:01f2ba4e33c82006d8e407b75f103fe7
7.7 GB    md5:e1aad0a714bb98d2b58f3d62122077b8
2.4 GB    md5:03b51e3506ae28157a26101748045e90
26.9 kB   md5:a3204d5ca02722d20e878a4fae3f3bb6
25.9 kB   md5:2eba5a6fe230baecc1803dab526a77a5

Additional details

Related works

Is supplement to
Conference paper: https://hal.inria.fr/hal-02160855v2
Conference paper: https://hal.inria.fr/hal-02355573