Dataset Open Access
Politis, Archontis; Adavanne, Sharath; Virtanen, Tuomas
The TAU-NIGENS Spatial Sound Events 2020 dataset contains multiple spatial sound-scene recordings, consisting of sound events of distinct categories integrated into a variety of acoustical spaces, and from multiple source directions and distances as seen from the recording position. The spatialization of all sound events is based on filtering through real spatial room impulse responses (RIRs), captured in multiple rooms of various shapes, sizes, and acoustical absorption properties. Furthermore, each scene recording is delivered in two spatial recording formats: a microphone array format (MIC) and a first-order Ambisonics format (FOA). The sound events are spatialized either as stationary sound sources in the room or as moving sound sources, in which case time-variant RIRs are used. Each sound event in the sound scene is associated with a trajectory of its direction of arrival (DoA) relative to the recording point, and with temporal onset and offset times. The isolated sound event recordings used for the synthesis of the sound scenes are obtained from the NIGENS general sound events database. These recordings serve as the development dataset for the Sound Event Localization and Detection Task of the DCASE 2020 Challenge.
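Since each event's DoA trajectory is given as azimuth/elevation angles, localization metrics typically compare directions as Cartesian unit vectors. Below is a minimal sketch of that conversion; the axis convention assumed here (x front, y left, z up, azimuth counter-clockwise from front) is a common choice, not taken from this page, so verify it against the included README.

```python
import math

def doa_to_unit_vector(azimuth_deg, elevation_deg):
    """Convert a DoA given as (azimuth, elevation) in degrees to a
    Cartesian unit vector.  Axis convention assumed (check the README):
    x points to the front, y to the left, z up; azimuth is measured
    counter-clockwise from the front."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.cos(az),
            math.cos(el) * math.sin(az),
            math.sin(el))
```

With unit vectors, the angular error between a reference and an estimated DoA reduces to the arccosine of their dot product.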
REPORT & REFERENCE:
If you use this dataset please cite the report on its creation, and the corresponding DCASE2020 task setup:
Politis, Archontis, Adavanne, Sharath, & Virtanen, Tuomas (2020). A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), Tokyo, Japan.
A longer version with more detailed information can also be found here.
The dataset includes a large number of mixtures of sound events with realistic spatial properties under different acoustic conditions, and hence it is suitable for training and evaluation of machine-listening models for sound event detection (SED), general sound source localization with diverse sounds or signal-of-interest localization, and joint sound-event-localization-and-detection (SELD). Additionally, the dataset can be used for evaluation of signal processing methods that do not necessarily rely on training, such as acoustic source localization methods and multiple-source acoustic tracking. The dataset allows evaluation of the performance and robustness of the aforementioned applications for diverse types of sounds, and under diverse acoustic conditions.
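Training or evaluating such models requires parsing the per-recording metadata into event annotations. The sketch below assumes each CSV row holds a frame index (at a 100 ms hop), class index, track index, azimuth, and elevation; this column layout and hop size are assumptions to be verified against the included README.

```python
import csv
import io

def parse_metadata(csv_text, frame_hop_s=0.1):
    """Parse one metadata CSV into a list of per-frame annotations.
    Assumed row layout (verify against the dataset README):
    frame index, class index, track index, azimuth (deg), elevation (deg).
    frame_hop_s is the assumed temporal resolution of one frame."""
    annotations = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row:
            continue  # skip blank lines
        frame, cls, track = int(row[0]), int(row[1]), int(row[2])
        az, el = float(row[3]), float(row[4])
        annotations.append({
            "time_s": frame * frame_hop_s,
            "class": cls,
            "track": track,
            "azimuth": az,
            "elevation": el,
        })
    return annotations
```

From such per-frame records, event onsets and offsets can be recovered by grouping consecutive frames that share the same class and track index.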
The IRs were collected in Finland by staff of Tampere University between 12/2017 and 06/2018, and between 11/2019 and 01/2020. The older measurements from five rooms were also used for the earlier development and evaluation datasets of TAU Spatial Sound Events 2019, while ten additional rooms were added for this dataset. The data collection received funding from the European Research Council under grant agreement 637422 EVERYSOUND.
More detailed information on the dataset can be found in the included README file.
A trainable convolutional recurrent neural network (CRNN) implementation performing joint SELD, trained and evaluated with this dataset, is provided here. It serves as the baseline method in the DCASE 2020 Sound Event Localization and Detection Task.
DEVELOPMENT AND EVALUATION:
Version 1.0 of the dataset included only the 600 development audio recordings and labels, used by the participants of Task 3 of the DCASE2020 Challenge to train and validate their submitted systems. Version 1.1 additionally included the 200 evaluation audio recordings without labels, for the evaluation phase of DCASE2020. The latest version, 1.2, published after the completion of the challenge, also includes the labels for the evaluation files.
Researchers who wish to compare their system against the DCASE2020 Challenge submissions will obtain directly comparable results by using the evaluation data as their testing set.
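One way to set up such a comparison is to partition the mixture files by their fold number. The sketch below assumes a `foldN_...` filename prefix and uses placeholder test-fold numbers; both are assumptions, so substitute the naming scheme and the folds actually designated for evaluation.

```python
import re

def split_by_fold(filenames, test_folds=(7, 8)):
    """Split mixture file names into train and test lists by fold number.
    The 'foldN_' prefix (e.g. 'fold1_room1_mix001.wav') and the default
    test folds are assumptions about the naming scheme -- adjust to the
    folds designated for evaluation in the README."""
    train, test = [], []
    for name in filenames:
        m = re.match(r"fold(\d+)_", name)
        fold = int(m.group(1)) if m else None
        (test if fold in test_folds else train).append(name)
    return train, test
```

Keeping the evaluation folds strictly held out during training is what makes reported scores comparable to the challenge submissions.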
The three files, foa_dev.z01, foa_dev.z02, and foa_dev.zip, correspond to audio data of the FOA recording format.
The three files, mic_dev.z01, mic_dev.z02, and mic_dev.zip, correspond to audio data of the MIC recording format.
The metadata_dev.zip is the common metadata for both formats.
The file, foa_eval.zip, corresponds to audio data of the FOA recording format for the evaluation dataset.
The file, mic_eval.zip, corresponds to audio data of the MIC recording format for the evaluation dataset.
The metadata_eval.zip is the common metadata for both formats. An included info file (metadata_eval_info.txt) specifies which of the two evaluation folds each mixture file belongs to, and its number of overlapping events.
Download the zip files corresponding to the format of interest and use your favorite compression tool to unzip these split zip files. To merge and extract a split zip archive (parts named .zip, .z01, .z02, ...), you can use, for example, the following command in a Linux or macOS terminal:
zip -s 0 split.zip --out single.zip
Archontis Politis, Sharath Adavanne, and Tuomas Virtanen (2020). A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020). Tokyo, Japan.
Sharath Adavanne, Archontis Politis, and Tuomas Virtanen (2019). A Multi-Room Reverberant Dataset for Sound Event Localization and Detection. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019). New York, NY, USA.
Ivo Trowitzsch, Jalil Taghia, Youssef Kashef, and Klaus Obermayer (2019). The NIGENS general sound events database. Technische Universität Berlin, Tech. Rep. arXiv:1902.08314 [cs.SD]
Data volume: 50.3 TB (all versions); 6.3 TB (this version)