Published April 30, 2018 | Version v1
Dataset | Open Access

TUT Sound Events 2018 - Circular array, Reverberant and Synthetic Impulse Response Dataset

  • 1. Tampere University of Technology, Finland
  • 2. Aalto University, Finland

Description

Tampere University of Technology (TUT) Sound Events 2018 - Circular array, Reverberant and Synthetic Impulse Response Dataset

This dataset consists of simulated, reverberant, circular-array-format recordings of stationary point sources, each associated with a spatial coordinate. The dataset comprises three sub-datasets with a) a maximum of one temporally overlapping sound event, b) a maximum of two temporally overlapping sound events, and c) a maximum of three temporally overlapping sound events. Each sub-dataset has three cross-validation splits, each consisting of 240 recordings of about 30 seconds for the training split and 60 recordings of the same length for the testing split. For each recording, the metadata file with the same name contains the sound event name, the temporal onset and offset times (in seconds), the spatial location as azimuth and elevation angles (in degrees), and the distance from the microphone (in meters). The sound events are spatially placed within a room using the image source method. The simulated room is 10 x 8 x 4 meters, with reverberation times per octave band of [1.0, 0.8, 0.7, 0.6, 0.5, 0.4] s for band center frequencies from 125 Hz to 4 kHz.
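As an illustration, the sketch below shows one way to parse such a metadata file in Python. The comma-separated layout and the column order (event name, onset, offset, elevation, azimuth, distance) are assumptions inferred from the description above, not a specification; check them against an extracted metadata file before use.

```python
import csv

def load_metadata(csv_path):
    """Read one recording's metadata file into a list of event dictionaries.

    Assumption: each row is
    event name, onset (s), offset (s), elevation (deg), azimuth (deg), distance (m).
    Adjust the unpacking order if the actual files differ.
    """
    events = []
    with open(csv_path, newline='') as f:
        for row in csv.reader(f):
            if not row:
                continue
            name, onset, offset, ele, azi, dist = row[:6]
            events.append({
                'event': name,
                'onset_s': float(onset),      # onset time in seconds
                'offset_s': float(offset),    # offset time in seconds
                'elevation_deg': float(ele),  # elevation angle in degrees
                'azimuth_deg': float(azi),    # azimuth angle in degrees
                'distance_m': float(dist),    # distance from the microphone in meters
            })
    return events
```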

The isolated sound events were taken from the DCASE 2016 Task 2 dataset, which comprises 11 sound event classes: Clearing throat, Coughing, Door knock, Door slam, Drawer, Human laughter, Keyboard, Keys (put on a table), Page turning, Phone ringing, and Speech. The sound events are randomly placed on a spatial grid with 10-degree resolution over the full azimuth range and elevation angles in [-60, 60) degrees. Additionally, each sound event is placed at a random distance of at least 1 meter from the microphone.
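The following sketch only illustrates the kind of grid sampling described above; it is not the script used to generate the dataset. The [-180, 180) azimuth convention and the 5-meter upper bound on distance are assumptions for illustration, since the description only states a minimum distance of 1 meter.

```python
import random

AZIMUTH_GRID = list(range(-180, 180, 10))    # full azimuth, 10-degree steps (assumed convention)
ELEVATION_GRID = list(range(-60, 60, 10))    # [-60, 60) degrees, 10-degree steps

def random_source_position(min_distance_m=1.0, max_distance_m=5.0):
    """Pick a random (azimuth, elevation, distance) triple on the grid.

    The 5 m upper distance bound is an illustrative assumption; the dataset
    only guarantees a minimum distance of 1 m from the microphone.
    """
    azimuth = random.choice(AZIMUTH_GRID)
    elevation = random.choice(ELEVATION_GRID)
    distance = random.uniform(min_distance_m, max_distance_m)
    return azimuth, elevation, distance
```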

The license of the dataset can be found in the LICENSE file. The remaining nine zip files contain the datasets for each combination of overlap and cross-validation split. For example, ov3_split1.zip contains the audio and metadata folders for the case of a maximum of three temporally overlapping sound events (ov3) and the first cross-validation split (split1). Within each audio/metadata folder, the filenames of the training split have the 'train' prefix, while the filenames of the testing split have the 'test' prefix.
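As a convenience, the sketch below shows one way to unpack an archive and separate training from testing audio by filename prefix. It assumes each zip unpacks into audio and metadata folders as described above; the exact top-level layout inside the archives may differ.

```python
import os
import zipfile
from glob import glob

def extract_and_index(zip_path, out_dir):
    """Extract one ov*_split*.zip archive and return (train, test) audio file lists.

    Assumption: audio files live under an 'audio' folder and are named with
    'train' or 'test' prefixes, as stated in the description.
    """
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(out_dir)
    audio_files = sorted(glob(os.path.join(out_dir, '**', 'audio', '*'), recursive=True))
    train = [f for f in audio_files if os.path.basename(f).startswith('train')]
    test = [f for f in audio_files if os.path.basename(f).startswith('test')]
    return train, test

# Example: index the maximum-three-overlap case, first cross-validation split.
# train_files, test_files = extract_and_index('ov3_split1.zip', 'ov3_split1')
```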

This dataset was collected as part of the 'Sound event localization and detection of overlapping sources using convolutional recurrent neural network' work.

Files (41.9 GB)

Size    MD5 checksum
1.7 kB  md5:12cc1e85851b7ac571f442e1089aa018
3.7 GB  md5:08b4fdd69de443440acf186cf66bc934
3.6 GB  md5:454fd755061bf01a8a0ad1f42f2cb0a9
3.6 GB  md5:8ca3b303995807c2ed7567543df083c8
4.9 GB  md5:c8a9ab2bb8a9c27f99cbaca1f0e0414e
4.9 GB  md5:e52cbade9d649dfde1fa63a8ef99f208
4.9 GB  md5:dc477e0d732b0236eea4476954ab34db
5.4 GB  md5:f712e333349ec8bc4567d2c6a021b75c
5.4 GB  md5:f6cdb67cf584dd647c648c9790fabd4d
5.4 GB  md5:fe0d66912606849dbdef70920eb7ff2d
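After downloading, file integrity can be checked against the MD5 checksums listed above. A minimal Python sketch follows; the file name in the usage comment is only an example, so pair each checksum with the file it was listed for.

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

# Example usage (replace with the file you actually downloaded):
# print(md5sum('ov1_split1.zip'))
```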

Additional details

Funding

EVERYSOUND – Computational Analysis of Everyday Soundscapes (grant 637422)
European Commission