STARSS22: Sony-TAu Realistic Spatial Soundscapes 2022 dataset
Creators
- Tampere University
- SONY
DESCRIPTION:
The **Sony-TAu Realistic Spatial Soundscapes 2022 (STARSS22)** dataset contains multichannel recordings of sound scenes in various rooms and environments, together with temporal and spatial annotations of prominent events belonging to a set of target classes. The dataset was collected in two different countries: in Tampere, Finland, by the Audio Research Group (ARG) of **Tampere University (TAU)**, and in Tokyo, Japan, by **SONY**, using a similar setup and annotation procedure. The dataset is delivered in two 4-channel spatial recording formats: a microphone array format (**MIC**) and a first-order Ambisonics format (**FOA**). These recordings serve as the development dataset for the Sound Event Localization and Detection Task of the DCASE 2022 Challenge.
In contrast to the three previous datasets of synthetic spatial sound scenes associated with earlier iterations of the DCASE Challenge (TAU Spatial Sound Events 2019 (development/evaluation), TAU-NIGENS Spatial Sound Events 2020, and TAU-NIGENS Spatial Sound Events 2021), the STARSS22 dataset contains recordings of real sound scenes, and hence it avoids some of the pitfalls of synthetic scene generation. Key properties of this approach are:
- annotations are based on a combination of human annotators for sound event activity and optical tracking for spatial positions,
- the annotated target event classes are determined by the composition of the real scenes,
- the density, polyphony, occurrences, and co-occurrences of events and sound classes are not random; they follow the actions and interactions of participants in the real scenes.
The recordings were collected between September 2021 and January 2022. Collection of data from the TAU side has received funding from Google.
REPORT & REFERENCE:
If you use this dataset, please cite the report on its creation and the related DCASE2022 task setup:
Archontis Politis, Kazuki Shimada, Parthasaarathy Sudarsanam, Sharath Adavanne, Daniel Krause, Yuichiro Koyama, Naoya Takahashi, Shusuke Takahashi, Yuki Mitsufuji, Tuomas Virtanen (2022). STARSS22: A dataset of spatial recordings of real scenes with spatiotemporal annotations of sound events. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2022 Workshop (DCASE2022), Nancy, France.
AIM:
The dataset is suitable for training and evaluation of machine-listening models for sound event detection (SED), general sound source localization with diverse sounds or signal-of-interest localization, and joint sound-event-localization-and-detection (SELD). Additionally, the dataset can be used for evaluation of signal processing methods that do not necessarily rely on training, such as acoustic source localization methods and multiple-source acoustic tracking. The dataset allows evaluation of the performance and robustness of the aforementioned applications for diverse types of sounds, and under diverse acoustic conditions.
SPECIFICATIONS:
- 70 recording clips with durations between 30 seconds and 5 minutes, with a total duration of about 2 hours, contributed by SONY (development dataset).
- 51 recording clips with durations between 1 minute and 5 minutes, with a total duration of about 3 hours, contributed by TAU (development dataset).
- 52 recording clips with a total duration of about 2 hours, contributed by SONY & TAU (evaluation dataset).
- A training-test split is provided for reporting results using the development dataset.
- 40 recordings contributed by SONY for the training split, captured in 2 rooms (dev-train-sony).
- 30 recordings contributed by SONY for the testing split, captured in 2 rooms (dev-test-sony).
- 27 recordings contributed by TAU for the training split, captured in 4 rooms (dev-train-tau).
- 24 recordings contributed by TAU for the testing split, captured in 3 rooms (dev-test-tau).
- A total of 11 unique rooms captured in the recordings, 4 from SONY and 7 from TAU (development set).
- Sampling rate 24kHz.
- Two 4-channel 3-dimensional recording formats: first-order Ambisonics (FOA) and tetrahedral microphone array (MIC); a minimal loading check is sketched after this list.
- Recordings are taken at two different sites in two different countries.
- Each recording clip is part of a recording session happening in a unique room.
- Groups of participants, sound making props, and scene scenarios are unique for each session (with a few exceptions).
- To achieve good variability and efficiency in the data, in terms of presence, density, movement, and/or spatial distribution of the sound events, the scenes are loosely scripted.
- 13 target classes are identified in the recordings and strongly annotated by humans.
- Spatial annotations for those active events are captured by an optical tracking system (see the metadata-parsing sketch below).
- Sound events out of the target classes are considered as interference.
- Occurrences of up to 3 simultaneous events are fairly common, while higher numbers of overlapping events (up to 5) can occur but are rare.
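As a quick sanity check of the audio specifications above, the following minimal Python sketch loads one development clip and verifies its channel count and sampling rate. It assumes the soundfile package, and the clip path is hypothetical (adjust it to your extraction location and the actual clip names):

```python
import soundfile as sf

# Hypothetical path: adjust to wherever the archives were extracted.
clip_path = "foa_dev/dev-train-sony/fold3_room21_mix001.wav"

audio, sr = sf.read(clip_path)  # audio shape: (num_samples, num_channels)

assert sr == 24000, f"expected 24 kHz, got {sr} Hz"
assert audio.shape[1] == 4, f"expected 4 channels, got {audio.shape[1]}"

print(f"duration: {audio.shape[0] / sr:.1f} s, "
      f"channels: {audio.shape[1]}, rate: {sr} Hz")
```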
More detailed information on the dataset can be found in the included README file.
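The annotations in metadata_dev.zip are CSV files, one per clip. As an illustration only, the sketch below parses one file assuming the row convention used in the DCASE SELD tasks, i.e. [frame index at 100 ms resolution, active class index, source/track index, azimuth in degrees, elevation in degrees]; the README remains the authoritative reference for the exact format:

```python
import csv

# Hypothetical path mirroring the corresponding audio clip name.
meta_path = "metadata_dev/dev-train-sony/fold3_room21_mix001.csv"

events = []
with open(meta_path, newline="") as f:
    for row in csv.reader(f):
        frame, class_idx, track_idx, azimuth, elevation = map(int, row)
        events.append({
            "time_s": frame * 0.1,       # assumed 100 ms label resolution
            "class": class_idx,          # index into the 13 target classes
            "track": track_idx,          # separates same-class simultaneous events
            "azimuth_deg": azimuth,
            "elevation_deg": elevation,
        })

print(f"parsed {len(events)} active-event rows from {meta_path}")
```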
SOUND CLASSES:
13 target sound event classes are annotated. The classes loosely follow the AudioSet ontology.
0. Female speech, woman speaking
1. Male speech, man speaking
2. Clapping
3. Telephone
4. Laughter
5. Domestic sounds
6. Walk, footsteps
7. Door, open or close
8. Music
9. Musical instrument
10. Water tap, faucet
11. Bell
12. Knock
The content of some of these classes corresponds to events from a limited range of related AudioSet subclasses. For more information, see the README file.
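When decoding labels it is convenient to keep the index-to-name mapping from the list above in code. A minimal sketch (the dictionary and function names are ours, not part of the dataset):

```python
# Target class indices as listed above.
STARSS22_CLASSES = {
    0: "Female speech, woman speaking",
    1: "Male speech, man speaking",
    2: "Clapping",
    3: "Telephone",
    4: "Laughter",
    5: "Domestic sounds",
    6: "Walk, footsteps",
    7: "Door, open or close",
    8: "Music",
    9: "Musical instrument",
    10: "Water tap, faucet",
    11: "Bell",
    12: "Knock",
}

def class_name(idx: int) -> str:
    """Map an annotated class index to its readable name."""
    return STARSS22_CLASSES.get(idx, f"unknown ({idx})")
```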
EXAMPLE APPLICATION:
An implementation of a trainable convolutional recurrent neural network (CRNN) performing joint SELD, trained and evaluated with this dataset, is provided at https://github.com/sharathadavanne/seld-dcase2022. This implementation serves as the baseline method in the DCASE 2022 Sound Event Localization and Detection Task.
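For orientation, the sketch below shows a generic SELDnet-style CRNN in PyTorch. It is a minimal illustration of this model family, not the official baseline implementation linked above; the input layout (multichannel spectrogram features) and the choice of 7 input channels (e.g., 4 FOA mel spectrograms plus 3 acoustic intensity channels) are assumptions:

```python
import torch
import torch.nn as nn

class CRNNSELD(nn.Module):
    """Generic SELDnet-style CRNN sketch (not the official DCASE2022 baseline)."""

    def __init__(self, in_channels: int = 7, num_classes: int = 13, num_mel: int = 64):
        super().__init__()
        blocks, chans = [], in_channels
        for out_chans in (64, 64, 64):
            blocks += [
                nn.Conv2d(chans, out_chans, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_chans),
                nn.ReLU(),
                nn.MaxPool2d((1, 4)),  # pool frequency only, keep the frame rate
            ]
            chans = out_chans
        self.conv = nn.Sequential(*blocks)
        freq_out = num_mel // 4 ** 3       # frequency bins left after pooling
        self.gru = nn.GRU(64 * freq_out, 128, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.sed_head = nn.Linear(256, num_classes)      # per-frame activity
        self.doa_head = nn.Linear(256, 3 * num_classes)  # per-class (x, y, z)

    def forward(self, x):
        # x: (batch, in_channels, time, num_mel)
        h = self.conv(x)                                 # (batch, 64, time, freq_out)
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)   # (batch, time, features)
        h, _ = self.gru(h)
        sed = torch.sigmoid(self.sed_head(h))            # event activity in [0, 1]
        doa = torch.tanh(self.doa_head(h))               # direction components in [-1, 1]
        return sed, doa

model = CRNNSELD()
sed, doa = model(torch.randn(2, 7, 100, 64))  # dummy batch of feature frames
print(sed.shape, doa.shape)                   # (2, 100, 13), (2, 100, 39)
```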
DEVELOPMENT AND EVALUATION:
The current version (Version 1.1) of the dataset includes the 121 development audio recordings and labels, used by the participants of Task 3 of DCASE2022 Challenge to train and validate their submitted systems, and the 52 evaluation audio recordings without labels, for the evaluation phase of DCASE2022.
Researchers who wish to compare their systems against the DCASE2022 Challenge submissions will have directly comparable results if they use the evaluation data as their testing set.
DOWNLOAD INSTRUCTIONS:
The file foa_dev.zip corresponds to the audio data of the FOA recording format for the development dataset.
The file mic_dev.zip corresponds to the audio data of the MIC recording format for the development dataset.
The file metadata_dev.zip contains the common metadata for both formats.
The file foa_eval.zip corresponds to the audio data of the FOA recording format for the evaluation dataset.
The file mic_eval.zip corresponds to the audio data of the MIC recording format for the evaluation dataset.
Download the zip files corresponding to the format of interest and unzip them with your preferred compression tool; a minimal Python sketch for checksum verification and extraction follows.
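For example, the sketch below verifies a downloaded archive against its MD5 checksum (values listed in the file table below) and extracts it using only the Python standard library:

```python
import hashlib
import zipfile

def md5sum(path: str, chunk: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file in streaming fashion."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

archive = "foa_dev.zip"
expected = "165dd033b262dc11a8853635c1def59b"  # from the file table below

assert md5sum(archive) == expected, "checksum mismatch: re-download the archive"

with zipfile.ZipFile(archive) as z:
    z.extractall(".")  # unpack next to the archive
```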
Files (6.0 GB)

| Name | MD5 | Size |
|---|---|---|
| foa_dev.zip | md5:165dd033b262dc11a8853635c1def59b | 2.2 GB |
| foa_eval.zip | md5:118d48ca5019bfbf9e9d1d522e12c35c | 839.1 MB |
| LICENSE | md5:2424296dab0b421211874b6c5cd1cacb | 1.2 kB |
| metadata_dev.zip | md5:b460e17e0848c49f03f238afb89fa87e | 634.4 kB |
| mic_dev.zip | md5:46b55d0be507afa986cd29120e42b188 | 2.1 GB |
| mic_eval.zip | md5:e1de088241032a754250ad536ad2ad17 | 816.7 MB |
| README.md | md5:f860ed58593f32577f9422486ec3e57d | 24.3 kB |
Additional details
Related works
- References:
  - Dataset: 10.5281/zenodo.5476980 (DOI)
  - Dataset: 10.5281/zenodo.4064792 (DOI)
  - Dataset: 10.5281/zenodo.2599196 (DOI)
  - Dataset: 10.5281/zenodo.3377088 (DOI)
  - Software: https://github.com/sharathadavanne/seld-dcase2022 (URL)
References
- Archontis Politis, Sharath Adavanne, Daniel Krause, Antoine Deleforge, Prerak Srivastava, Tuomas Virtanen (2021). A Dataset of Dynamic Reverberant Sound Scenes with Directional Interferers for Sound Event Localization and Detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), Barcelona, Spain.
- Archontis Politis, Sharath Adavanne, and Tuomas Virtanen (2020). A Dataset of Reverberant Spatial Sound Scenes with Moving Sources for Sound Event Localization and Detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2020 Workshop (DCASE2020), Tokyo, Japan.
- Sharath Adavanne, Archontis Politis, and Tuomas Virtanen (2019). A Multi-Room Reverberant Dataset for Sound Event Localization and Detection. In Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019), New York, NY, USA.