DESED_synthetic
- 1. Université de Lorraine, CNRS, Inria, Loria, F-54000 Nancy, France
- 2. Language Technologies Institute, Carnegie Mellon University, Pittsburgh PA, United States
- 3. Adobe Research, San Francisco CA, United States
Description
Link to the associated GitHub repository: https://github.com/turpaultn/Desed
Link to the papers: https://hal.inria.fr/hal-02160855, https://hal.inria.fr/hal-02355573v1
Domestic Environment Sound Event Detection (DESED).
This dataset is the synthetic part of the DESED dataset. It allows creating mixtures of isolated sound events over background sounds.
It provides the material to:
- Reproduce the DCASE 2019 task 4 synthetic dataset
- Reproduce the DCASE 2020 task 4 synthetic training dataset
- Create new mixtures from isolated foreground sounds and background sounds
Files:
If you want to generate new audio mixtures yourself from the original files:
- DESED_synth_soundbank.tar.gz: raw soundbank data used to generate the mixtures.
- DESED_synth_dcase2019jams.tar.gz: JAMS files, metadata describing how to recreate the DCASE 2019 synthetic dataset.
- DESED_synth_dcase20_train_jams.tar: JAMS files, metadata describing how to recreate the DCASE 2020 synthetic training dataset.
- DESED_synth_source.tar.gz: source code to generate the DCASE 2019 files from the soundbank or to generate new mixtures. The same code is available on GitHub (https://github.com/turpaultn/DESED); the archived copy may be outdated, so using the repository is recommended.
If you simply want the synthetic evaluation dataset used in DCASE 2019 task 4:
- DESED_synth_eval_dcase2019.tar.gz: evaluation audio and metadata files used in DCASE 2019 task 4.
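JAMS files are JSON documents, so event annotations can be read back with standard tools. The sketch below parses a toy JAMS-like document with the standard library only; the field names follow the general JAMS layout (`annotations`, `data`, `time`, `duration`, `value`), but the document content here is invented for illustration — real DESED JAMS files are produced by Scaper and carry additional metadata.

```python
import json

# A toy JAMS-like document (invented for illustration; real DESED JAMS
# files contain more Scaper-specific metadata).
jams_text = json.dumps({
    "annotations": [{
        "data": [
            {"time": 1.2, "duration": 0.8,
             "value": {"label": "Speech", "source_file": "fg/Speech/x.wav"}},
            {"time": 4.0, "duration": 2.5,
             "value": {"label": "Blender", "source_file": "fg/Blender/y.wav"}},
        ]
    }]
})

# Collect (label, onset, offset) tuples from the first annotation layer.
events = [(obs["value"]["label"], obs["time"], obs["time"] + obs["duration"])
          for obs in json.loads(jams_text)["annotations"][0]["data"]]
for label, onset, offset in events:
    print(f"{onset:.3f}\t{offset:.3f}\t{label}")
```

In practice, the `jams` Python package (a Scaper dependency) offers a higher-level interface for the same files.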
The mixtures are generated using Scaper (https://github.com/justinsalamon/scaper) [1].
* Background files are extracted from SINS [2], MUSAN [3], or YouTube, and were selected because they contain very few occurrences of the target sound event classes.
* Foreground files are extracted from Freesound [4][5], manually verified for quality, and segmented to remove silences.
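Scaper handles the mixing itself; as a rough illustration of the underlying principle (not Scaper's actual API), the sketch below overlays one foreground event onto a background at a chosen onset, scaling it to a target RMS-based SNR. All names and signal parameters are invented for this example.

```python
import numpy as np

def mix_event(background, foreground, onset_sample, snr_db):
    """Overlay `foreground` onto `background` starting at `onset_sample`,
    scaled so the event sits `snr_db` dB above the background (RMS-based)."""
    rms_bg = np.sqrt(np.mean(background ** 2))
    rms_fg = np.sqrt(np.mean(foreground ** 2))
    # Gain bringing the foreground RMS to rms_bg * 10^(snr_db/20).
    gain = (rms_bg / rms_fg) * 10 ** (snr_db / 20)
    mixture = background.copy()
    end = onset_sample + len(foreground)
    mixture[onset_sample:end] += gain * foreground
    return mixture

# Toy signals: 1 s of low-level noise background, a 0.2 s sine "event" at 6 dB SNR.
sr = 16000
rng = np.random.default_rng(0)
background = 0.01 * rng.standard_normal(sr)
t = np.arange(int(0.2 * sr)) / sr
foreground = np.sin(2 * np.pi * 440 * t)
mixture = mix_event(background, foreground, onset_sample=int(0.4 * sr), snr_db=6.0)
print(mixture.shape)
```

Scaper builds on this idea but adds randomized event sampling, pitch/time transformations, and JAMS annotation output, which is why the dataset distributes JAMS files rather than fixed mixtures.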
References
[1] J. Salamon, D. MacConnell, M. Cartwright, P. Li, and J. P. Bello. Scaper: A library for soundscape synthesis and augmentation
In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, USA, Oct. 2017.
[2] Gert Dekkers, Steven Lauwereins, Bart Thoen, Mulu Weldegebreal Adhana, Henk Brouckxon, Toon van Waterschoot, Bart Vanrumste, Marian Verhelst, and Peter Karsmakers.
The SINS database for detection of daily activities in a home environment using an acoustic sensor network.
In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017), 32–36. November 2017.
[3] David Snyder, Guoguo Chen, and Daniel Povey.
MUSAN: A Music, Speech, and Noise Corpus.
arXiv:1510.08484, 2015.
[4] F. Font, G. Roma, and X. Serra. Freesound technical demo. In Proceedings of the 21st ACM International Conference on Multimedia. ACM, 2013.
[5] E. Fonseca, J. Pons, X. Favory, F. Font, D. Bogdanov, A. Ferraro, S. Oramas, A. Porter, and X. Serra. Freesound Datasets: A Platform for the Creation of Open Audio Datasets.
In Proceedings of the 18th International Society for Music Information Retrieval Conference, Suzhou, China, 2017.
Files (10.1 GB)
Additional details
Related works
- Is supplement to
- Conference paper: https://hal.inria.fr/hal-02160855v2
- Conference paper: https://hal.inria.fr/hal-02355573