Published May 31, 2020 | Version 1.2.0
Dataset | Open Access

TAU-NIGENS Spatial Sound Events 2020

  • Politis, Archontis; Adavanne, Sharath; Virtanen, Tuomas (Tampere University)

DESCRIPTION:

The TAU-NIGENS Spatial Sound Events 2020 dataset contains multiple spatial sound-scene recordings, consisting of sound events of distinct categories integrated into a variety of acoustical spaces, and from multiple source directions and distances as seen from the recording position. The spatialization of all sound events is based on filtering through real spatial room impulse responses (RIRs) captured in multiple rooms of various shapes, sizes, and acoustical absorption properties. Furthermore, each scene recording is delivered in two spatial recording formats: a microphone-array format (MIC) and a first-order Ambisonics format (FOA). The sound events are spatialized either as stationary sound sources in the room or as moving sound sources, in which case time-variant RIRs are used. Each sound event in a sound scene is associated with a trajectory of its direction of arrival (DoA) relative to the recording position, and with onset and offset times. The isolated sound event recordings used for the synthesis of the sound scenes are obtained from the NIGENS general sound events database. The resulting scene recordings serve as the development and evaluation material for the Sound Event Localization and Detection Task (Task 3) of the DCASE 2020 Challenge.
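
To make the label format concrete, here is a minimal sketch of reading one per-recording metadata file, assuming the CSV layout used in DCASE2020 Task 3 (per row: frame index at 100 ms resolution, active class index, track index, azimuth and elevation in degrees); the file path is illustrative:

    import csv

    def load_metadata(path):
        """Read one per-recording metadata CSV into a list of event entries.

        Assumed columns (DCASE2020 Task 3 convention): frame index
        (100 ms hops), class index, track index, azimuth and
        elevation in degrees.
        """
        events = []
        with open(path, newline="") as f:
            for row in csv.reader(f):
                events.append({
                    "time_s": int(row[0]) * 0.1,  # frame index -> seconds
                    "class": int(row[1]),
                    "track": int(row[2]),
                    "azimuth": float(row[3]),
                    "elevation": float(row[4]),
                })
        return events

    # Illustrative usage: inspect the first labelled frames of one mixture
    for event in load_metadata("metadata_dev/fold1_room1_mix001_ov1.csv")[:5]:
        print(event)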

REPORT & REFERENCE:

If you use this dataset, please cite the report on its creation and the corresponding DCASE2020 task setup:

Politis, Archontis, Adavanne, Sharath, & Virtanen, Tuomas (2020). A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), Tokyo, Japan.

A longer version with more detailed information can also be found here.

AIM:

The dataset includes a large number of mixtures of sound events with realistic spatial properties under different acoustic conditions, and hence it is suitable for training and evaluation of machine-listening models for sound event detection (SED), general sound source localization with diverse sounds or signal-of-interest localization, and joint sound-event-localization-and-detection (SELD). Additionally, the dataset can be used for evaluation of signal processing methods that do not necessarily rely on training, such as acoustic source localization methods and multiple-source acoustic tracking. The dataset allows evaluation of the performance and robustness of the aforementioned applications for diverse types of sounds, and under diverse acoustic conditions.
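
As an example of the localization side, since each event label carries a DoA, localization accuracy is commonly measured as the great-circle angular distance between estimated and reference directions. A minimal sketch using the standard spherical-geometry formula (not the official challenge metric code):

    import numpy as np

    def angular_distance_deg(azi1, ele1, azi2, ele2):
        """Great-circle angle in degrees between two directions,
        each given as azimuth/elevation in degrees."""
        a1, e1, a2, e2 = np.radians([azi1, ele1, azi2, ele2])
        cos_d = (np.sin(e1) * np.sin(e2)
                 + np.cos(e1) * np.cos(e2) * np.cos(a1 - a2))
        return np.degrees(np.arccos(np.clip(cos_d, -1.0, 1.0)))

    # Illustrative: a 40 deg azimuth gap at low elevations is ~41 deg apart
    print(angular_distance_deg(30.0, 10.0, -10.0, 0.0))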

SPECIFICATIONS:

  • 600 one-minute long sound scene recordings (development dataset).
  • 200 one-minute long sound scene recordings (evaluation dataset).
  • Sampling rate 24 kHz.
  • About 700 sound event samples spread over 14 classes (see here for more details).
  • 8 provided cross-validation splits of 100 recordings each (6 development, 2 evaluation), with unique sound event samples and rooms in each of them.
  • Two 4-channel, 3-dimensional recording formats: first-order Ambisonics (FOA) and tetrahedral microphone array (MIC); see the loading sketch after this list.
  • Realistic spatialization and reverberation through RIRs collected in 15 different enclosures.
  • From about 1500 to 3500 possible RIR positions across the different rooms.
  • Both static reverberant and moving reverberant sound events.
  • Up to two overlapping sound events allowed, temporally and spatially.
  • Realistic spatial ambient noise collected from each room is added to the spatialized sound events, at varying signal-to-noise ratios (SNR) ranging from noiseless (30 dB) to noisy (6 dB).
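
As a quick sanity check of these specifications, the sketch below loads one FOA recording with the third-party soundfile library and verifies the sampling rate, channel count, and duration (the file path is illustrative):

    import soundfile as sf  # third-party: pip install soundfile

    # Illustrative path to one development recording in the FOA format
    audio, sr = sf.read("foa_dev/fold1_room1_mix001_ov1.wav")

    assert sr == 24000          # 24 kHz sampling rate
    assert audio.shape[1] == 4  # 4-channel spatial recording format
    print(f"duration: {audio.shape[0] / sr:.1f} s")  # ~60 s per recording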

The IRs were collected in Finland by staff of Tampere University between 12/2017 and 06/2018, and between 11/2019 and 01/2020. The older measurements from five rooms were also used for the earlier development and evaluation datasets TAU Spatial Sound Events 2019, while ten additional rooms were added for this dataset. The data collection received funding from the European Research Council, grant agreement 637422 EVERYSOUND.

More detailed information on the dataset can be found in the included README file.

EXAMPLE APPLICATION:

An implementation of a trainable convolutional recurrent neural network (CRNN) performing joint SELD, trained and evaluated with this dataset, is provided here. This implementation serves as the baseline method in the DCASE 2020 Sound Event Localization and Detection Task.

DEVELOPMENT AND EVALUATION:

Version 1.0 of the dataset included only the 600 development audio recordings and labels, used by the participants of Task 3 of the DCASE2020 Challenge to train and validate their submitted systems. Version 1.1 additionally included the 200 evaluation audio recordings without labels, for the evaluation phase of DCASE2020. The latest version, 1.2, published after the completion of the challenge, also includes the labels for the evaluation files.

Researchers who wish to compare their systems against the DCASE2020 Challenge submissions can obtain directly comparable results by using the evaluation data as their testing set.

DOWNLOAD INSTRUCTIONS:

The three files, foa_dev.z01, foa_dev.z02, and foa_dev.zip, correspond to audio data of the FOA recording format.
The three files, mic_dev.z01, mic_dev.z02, and mic_dev.zip, correspond to audio data of the MIC recording format.
The metadata_dev.zip is the common metadata for both formats.

The file, foa_eval.zip, corresponds to audio data of the FOA recording format for the evaluation dataset.
The file, mic_eval.zip, corresponds to audio data of the MIC recording format for the evaluation dataset.
The metadata_eval.zip is the common metadata for both formats. An info file (metadata_eval_info.txt) is included, specifying which of the two evaluation folds each mixture file belongs to and its number of overlapping events.

Download the zip files corresponding to the format of interest and use your favorite compression tool to unzip these split zip files. To extract a split zip archive (named zip, z01, z02, ...), you could use, for example, the following syntax in a Linux or macOS terminal:

  1. Combine the split archive to a single archive:
    zip -s 0 split.zip --out single.zip
  2. Extract the single archive using unzip:
    unzip single.zip
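
For instance, to reassemble and extract the FOA development set (the name of the combined archive is arbitrary):

    zip -s 0 foa_dev.zip --out foa_dev_single.zip
    unzip foa_dev_single.zip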

Files (14.0 GB)

Name                      md5 checksum                          Size
foa_dev.z01               86acab46854a57f5ba3e5b80a19c01b5      2.1 GB
foa_dev.z02               363c8c159be003271c05a71a57b2ced4      2.1 GB
foa_dev.zip               6aad48e7346884b3929245e7553fd97d      1.0 GB
foa_eval.zip              24c6ce2441df242d4e3b61e9bb27d0d7      1.7 GB
metadata_dev.zip          979f5551e987ed247404b80a2f1c3db1      1.2 MB
metadata_eval.zip         f3584166d9a63b43c1e301b6fb722293      415.1 kB
mic_dev.z01               3a2b0986d2a302498cd874d584d17689      2.1 GB
mic_dev.z02               92f715cb74406d5556bce0fdf27f54e4      2.1 GB
mic_dev.zip               9174daca52f393425120308ab5c14477      936.6 MB
mic_eval.zip              bca79b5f71b46e4cb191c54a611348a4      1.7 GB
metadata_eval_info.txt    545e12d343435ce30b0816a0381a2be1      17.7 kB

Additional details

Funding

EVERYSOUND – Computational Analysis of Everyday Soundscapes (grant agreement 637422), European Commission

References

  • Archontis Politis, Sharath Adavanne, and Tuomas Virtanen (2020). A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), Tokyo, Japan.
  • Sharath Adavanne, Archontis Politis, and Tuomas Virtanen (2019). A Multi-Room Reverberant Dataset for Sound Event Localization and Detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), New York, NY, USA.
  • Ivo Trowitzsch, Jalil Taghia, Youssef Kashef, and Klaus Obermayer (2019). The NIGENS General Sound Events Database. Technische Universität Berlin, Tech. Rep., arXiv:1902.08314 [cs.SD].