Video/Audio Open Access


Turpault, Nicolas; Serizel, Romain

Dublin Core Export

<?xml version='1.0' encoding='utf-8'?>
<oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
  <dc:contributor>Salamon, Justin</dc:contributor>
  <dc:contributor>Shah, Ankit</dc:contributor>
  <dc:contributor>Wisdom, Scott</dc:contributor>
  <dc:contributor>Hershey, John</dc:contributor>
  <dc:contributor>Erdogan, Hakan</dc:contributor>
  <dc:creator>Turpault, Nicolas</dc:creator>
  <dc:creator>Serizel, Romain</dc:creator>
  <dc:description>Link to the associated github repository:

Links to the papers:

Domestic Environment Sound Event Detection (DESED).

This dataset is the synthetic part of the DESED dataset. It allows the creation of mixtures of isolated sound events and backgrounds.

It provides the material to:

	Reproduce the DCASE 2019 task 4 synthetic dataset
	Reproduce the DCASE 2020 task 4 synthetic train dataset
	Create new mixtures from isolated foreground sounds and background sounds.


If you want to generate new audio mixtures yourself from the original files, download:

	DESED_synth_soundbank.tar.gz: raw data used to generate the mixtures.
	DESED_synth_dcase2019jams.tar.gz: JAMS files, metadata describing how to recreate the DCASE 2019 synthetic dataset.
	DESED_synth_dcase20_train_val_jams.tar: JAMS files, metadata describing how to recreate the DCASE 2020 synthetic train and validation datasets.
	DESED_synth_dcase20_eval_jams.tar: JAMS files, metadata describing how to recreate the DCASE 2020 synthetic eval dataset (only the basic one; variants of it were made but are not presented here).
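The JAMS files above record the complete recipe for each mixture, so once the archives are extracted the audio can be re-synthesized with Scaper's `generate_from_jams`. A minimal sketch, assuming Scaper is installed; the directory names are placeholders, not the exact archive layout:

```python
import glob
import os


def list_jams(jams_dir):
    """Return the sorted .jams annotation files found under jams_dir."""
    return sorted(glob.glob(os.path.join(jams_dir, "**", "*.jams"), recursive=True))


def regenerate(jams_dir, fg_path, bg_path, out_dir):
    """Re-synthesize one wav per JAMS file from the extracted soundbank."""
    import scaper  # imported lazily so list_jams works without Scaper installed

    os.makedirs(out_dir, exist_ok=True)
    for jams_file in list_jams(jams_dir):
        out_wav = os.path.join(
            out_dir, os.path.splitext(os.path.basename(jams_file))[0] + ".wav"
        )
        # generate_from_jams replays the recorded recipe, pointing Scaper
        # at the local foreground/background soundbank folders.
        scaper.generate_from_jams(jams_file, out_wav, fg_path=fg_path, bg_path=bg_path)
```

For example, `regenerate("dcase2019_jams", "soundbank/foreground", "soundbank/background", "generated")` would rebuild the DCASE 2019 synthetic audio, with the first three arguments adjusted to wherever the archives were unpacked.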

If you simply want the evaluation synthetic dataset used in DCASE 2019 task 4, download:

	DESED_synth_eval_dcase2019.tar.gz: evaluation audio and metadata files used in DCASE 2019 task 4.


The mixtures are generated using Scaper [1].

* Background files are extracted from SINS [2], MUSAN [3] or YouTube, and were selected because they contain very few occurrences of the target sound event classes.
* Foreground files are extracted from Freesound [4][5], manually verified for quality, and segmented to remove silences.
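For new mixtures, the Scaper workflow from [1] amounts to declaring a background, adding one or more foreground events with randomized parameters, and calling generate. A rough sketch, assuming a label-per-folder soundbank layout; the event timings and SNR ranges below are illustrative, not the dataset's actual generation settings:

```python
def make_mixture(fg_path, bg_path, out_wav, out_jams, duration=10.0):
    """Synthesize one soundscape: one random background plus one foreground event."""
    import scaper  # lazy import: the sketch stays loadable without Scaper installed

    sc = scaper.Scaper(duration, fg_path, bg_path)
    sc.ref_db = -50  # loudness reference for the background

    # One background file drawn at random from the background soundbank.
    sc.add_background(
        label=("choose", []),
        source_file=("choose", []),
        source_time=("const", 0),
    )

    # One foreground event with randomized onset, duration and SNR
    # (the ranges here are examples, not DESED's actual parameters).
    sc.add_event(
        label=("choose", []),
        source_file=("choose", []),
        source_time=("const", 0),
        event_time=("uniform", 0, duration - 2.0),
        event_duration=("truncnorm", 2.0, 1.0, 0.5, 4.0),
        snr=("uniform", 6, 30),
        pitch_shift=None,
        time_stretch=None,
    )

    # Writes the mixture audio and a JAMS file recording the exact recipe.
    sc.generate(out_wav, out_jams)
```

Because every call to `generate` also writes a JAMS file, any mixture produced this way can later be reproduced exactly, which is how the JAMS archives above were made reusable.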

[1] J. Salamon, D. MacConnell, M. Cartwright, P. Li, and J. P. Bello. Scaper: a library for soundscape synthesis and augmentation. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, USA, Oct. 2017.

[2] G. Dekkers, S. Lauwereins, B. Thoen, M. W. Adhana, H. Brouckxon, T. van Waterschoot, B. Vanrumste, M. Verhelst, and P. Karsmakers. The SINS database for detection of daily activities in a home environment using an acoustic sensor network. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017), 32–36, November 2017.

[3] D. Snyder, G. Chen, and D. Povey. MUSAN: a music, speech, and noise corpus. arXiv:1510.08484, 2015.

[4] F. Font, G. Roma, and X. Serra. Freesound technical demo. In Proceedings of the 21st ACM International Conference on Multimedia. ACM, 2013.

[5] E. Fonseca, J. Pons, X. Favory, F. Font, D. Bogdanov, A. Ferraro, S. Oramas, A. Porter, and X. Serra. Freesound Datasets: a platform for the creation of open audio datasets. In Proceedings of the 18th International Society for Music Information Retrieval Conference, Suzhou, China, 2017.

  </dc:description>
  <dc:subject>Sound event detection</dc:subject>
</oai_dc:dc>
                   All versions   This version
Views                     2,600            386
Downloads                 9,586            899
Data volume             32.6 TB         6.2 TB
Unique views              1,904            316
Unique downloads          4,385            564

