There is a newer version of this record available.

Dataset | Open Access

Free Universal Sound Separation Dataset

Scott Wisdom; Hakan Erdogan; Dan Ellis; John R. Hershey


JSON-LD (schema.org) Export

{
  "description": "<p>The Free Universal Sound Separation (FUSS) Dataset is a database of arbitrary sound mixtures and source-level references, for use in experiments on arbitrary sound separation.&nbsp;</p>\n\n<p>This is the official sound separation data for the DCASE2020 Challenge Task 4: Sound Event Detection and Separation in Domestic Environments.</p>\n\n<p><strong>Citation: </strong>If you use the FUSS dataset or part of it, please cite our paper describing the dataset and baseline [1].&nbsp; FUSS is based on <a href=\"https://annotator.freesound.org/fsd/\">FSD data</a>&nbsp;so please also cite [2]:</p>\n\n<p><strong>Overview: </strong>FUSS audio data is sourced from a pre-release of <a href=\"https://annotator.freesound.org/fsd/\">Freesound dataset</a>&nbsp;known as (FSD50k), a sound event dataset composed of Freesound content annotated with labels from the AudioSet Ontology. Using the FSD50K labels, these source files have been screened such that they likely only contain a single type of sound. Labels are not provided for these source files, and are not considered part of the challenge. For the purpose of the DCASE Task4 Sound Separation and Event Detection challenge,&nbsp; systems should not use FSD50K labels, even though they may become available upon FSD50K release.</p>\n\n<p>To create mixtures, 10 second clips of sources are convolved with simulated room impulse responses and added together. Each 10 second mixture contains between 1 and 4 sources. Source files longer than 10 seconds are considered &quot;background&quot; sources. Every mixture contains one background source, which is active for the entire duration. We provide: a software recipe to create the dataset, the room impulse responses, and the original source audio.</p>\n\n<p><strong>Motivation for use in DCASE2020 Challenge Task 4: </strong>&nbsp;This dataset provides a platform to investigate how source separation may help with event detection and vice versa.&nbsp; Previous work has shown that universal sound separation (separation of arbitrary sounds) is possible [3], and that event detection can help with universal sound separation [4].&nbsp; It remains to be seen whether sound separation can help with event detection. Event detection is more difficult in noisy environments, and so separation could be a useful pre-processing step. Data with strong labels for event detection are relatively scarce, especially when restricted to specific classes within a domain. In contrast, source separation data needs no event labels for training, and may be more plentiful. In this setting, the idea&nbsp; is to utilize larger unlabeled separation data to train separation systems, which can serve as a front-end to event-detection systems trained on more limited data.</p>\n\n<p><strong>Room simulation: </strong>Room impulse responses are simulated using the image method with frequency-dependent walls. 
Each impulse corresponds to a rectangular room of random size with random wall materials, where a single microphone and up to 4 sources are placed at random spatial locations.</p>\n\n<p><strong>Recipe for data creation: </strong>The data creation recipe starts with scripts, based on<a href=\"https://github.com/justinsalamon/scaper\"> scaper</a>, to generate mixtures of events with random timing of source events, along with a background source that spans the duration of the mixture clip.&nbsp; The scipts for this are at<a href=\"https://github.com/google-research/sound-separation/tree/master/datasets/fuss\"> this GitHub repo</a>.</p>\n\n<p>The data are reverberated using a different room simulation for each mixture. In this simulation each source has its own reverberation corresponding to a different spatial location. The reverberated mixtures are created by summing over the reverberated sources. The dataset recipe scripts support modification, so that participants may remix and augment the training data as desired.</p>\n\n<p>The constituent source files for each mixture are also generated for use as references for training and evaluation.&nbsp; &nbsp;The dataset recipe scripts support modification, so that participants may remix and augment the training data as desired.</p>\n\n<p>Note: no attempt was made to remove digital silence from the freesound source data, so some reference sources may include digital silence, and there are a few mixtures where the background reference is all digital silence.&nbsp; &nbsp;Digital silence can also be observed in the event recognition public evaluation data, so it is important to be able to handle this in practice.&nbsp;&nbsp;&nbsp;Our evaluation scripts handle it by ignoring&nbsp;any reference sources that are silent.&nbsp;&nbsp;</p>\n\n<p><strong>Format: &nbsp;</strong>All audio clips are provided as uncompressed PCM 16 bit, 16 kHz, mono audio files.</p>\n\n<p><strong>Data split:&nbsp;</strong> The FUSS dataset is partitioned into &quot;train&quot;, &quot;validation&quot;, and &quot;eval&quot; sets, following the same splits used in FSD data. Specifically, the train and validation sets are sourced from the FSD50K dev set, and we have ensured that clips in train come from different uploaders than the clips in validation. The eval set is sourced from the FSD50K eval split.</p>\n\n<p><strong>Baseline System:&nbsp; </strong>A baseline system for the FUSS dataset is available at &nbsp;<a href=\"https://github.com/google-research/sound-separation/tree/master/datasets/fuss\">dcase2020_fuss_baseline</a>.</p>\n\n<p><strong>License:&nbsp; </strong>All audio clips (i.e., in&nbsp; FUSS_fsd_data.tar.gz) used in the preparation of Free Universal Source Separation (FUSS) dataset are designated Creative Commons (CC0) and were obtained from<a href=\"http://freesound.org\"> freesound.org</a>.&nbsp; The source data in FUSS_fsd_data.tar.gz were selected using labels from the<a href=\"https://annotator.freesound.org/fsd/\"> FSD50K corpus</a>, which is licensed as Creative Commons Attribution 4.0 International (CC BY 4.0) License.</p>\n\n<p>The FUSS dataset as a whole, is a curated, reverberated, mixed, and partitioned preparation, and is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) License. 
This license is specified in the `LICENSE-DATASET` file downloaded with the `FUSS_license_doc.tar.gz` file.</p>\n\n<p>Note the links to the github repo in&nbsp;FUSS_license_doc/README.md are currently out of date, so please refer to FUSS_license_doc/README.md in&nbsp;<a href=\"https://github.com/google-research/sound-separation/tree/master/datasets/fuss\">this GitHub repo</a>&nbsp;which is more recently updated.</p>\n\n<p>&nbsp;</p>", 
  "license": "https://creativecommons.org/licenses/by/4.0/legalcode", 
  "creator": [
    {
      "affiliation": "Google Research", 
      "@id": "https://orcid.org/0000-0001-6671-1428", 
      "@type": "Person", 
      "name": "Scott Wisdom"
    }, 
    {
      "affiliation": "Google Research", 
      "@id": "https://orcid.org/0000-0003-3140-8642", 
      "@type": "Person", 
      "name": "Hakan Erdogan"
    }, 
    {
      "affiliation": "Google Research", 
      "@type": "Person", 
      "name": "Dan Ellis"
    }, 
    {
      "affiliation": "Google Research", 
      "@type": "Person", 
      "name": "John R. Hershey"
    }
  ], 
  "url": "https://zenodo.org/record/3694384", 
  "datePublished": "2020-03-04", 
  "keywords": [
    "sound separation"
  ], 
  "version": "1.0", 
  "contributor": [
    {
      "affiliation": "LORIA", 
      "@type": "Person", 
      "name": "Romain Serizel"
    }, 
    {
      "affiliation": "INRIA", 
      "@type": "Person", 
      "name": "Nicolas Turpault"
    }, 
    {
      "affiliation": "Adobe Research", 
      "@type": "Person", 
      "name": "Justin Salamon"
    }, 
    {
      "affiliation": "Northwestern University", 
      "@type": "Person", 
      "name": "Prem Seetharaman"
    }, 
    {
      "affiliation": "Universitat Pompeu Fabra (UPF)", 
      "@type": "Person", 
      "name": "Eduardo Fonesca"
    }, 
    {
      "affiliation": "Universitat Pompeu Fabra (UPF)", 
      "@type": "Person", 
      "name": "Frederic Font Corbera"
    }
  ], 
  "@context": "https://schema.org/", 
  "distribution": [
    {
      "contentUrl": "https://zenodo.org/api/files/19697302-7577-4f58-a1ae-01ff9d193b8e/FUSS_baseline_model.tar.gz", 
      "encodingFormat": "gz", 
      "@type": "DataDownload"
    }, 
    {
      "contentUrl": "https://zenodo.org/api/files/19697302-7577-4f58-a1ae-01ff9d193b8e/FUSS_fsd_data.tar.gz", 
      "encodingFormat": "gz", 
      "@type": "DataDownload"
    }, 
    {
      "contentUrl": "https://zenodo.org/api/files/19697302-7577-4f58-a1ae-01ff9d193b8e/FUSS_license_doc.tar.gz", 
      "encodingFormat": "gz", 
      "@type": "DataDownload"
    }, 
    {
      "contentUrl": "https://zenodo.org/api/files/19697302-7577-4f58-a1ae-01ff9d193b8e/FUSS_rir_data.tar.gz", 
      "encodingFormat": "gz", 
      "@type": "DataDownload"
    }, 
    {
      "contentUrl": "https://zenodo.org/api/files/19697302-7577-4f58-a1ae-01ff9d193b8e/FUSS_ssdata_reverb.tar.gz", 
      "encodingFormat": "gz", 
      "@type": "DataDownload"
    }, 
    {
      "contentUrl": "https://zenodo.org/api/files/19697302-7577-4f58-a1ae-01ff9d193b8e/FUSS_ssdata.tar.gz", 
      "encodingFormat": "gz", 
      "@type": "DataDownload"
    }
  ], 
  "identifier": "https://doi.org/10.5281/zenodo.3694384", 
  "@id": "https://doi.org/10.5281/zenodo.3694384", 
  "@type": "Dataset", 
  "name": "Free Universal Sound Separation Dataset"
}
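
To make the mixing procedure in the description concrete: each source is convolved with its own simulated room impulse response, and the reverberated sources are summed into the mixture. The sketch below is a minimal illustration under stated assumptions, not the official recipe; file paths are hypothetical, and the real pipeline in the linked GitHub repo also handles gains, alignment, and metadata.

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

SR = 16000            # FUSS audio is 16 kHz, 16-bit PCM, mono
CLIP_LEN = 10 * SR    # every mixture is a 10-second clip

def reverberate_and_mix(source_paths, rir_paths):
    """Convolve each source with its own RIR and sum into one mixture.

    Returns the mixture plus the per-source reverberated references,
    which FUSS also ships for training and evaluation.
    """
    mixture = np.zeros(CLIP_LEN)
    references = []
    for src_path, rir_path in zip(source_paths, rir_paths):
        src, sr = sf.read(src_path)
        rir, _ = sf.read(rir_path)
        assert sr == SR, "expected 16 kHz audio"
        wet = fftconvolve(src, rir)[:CLIP_LEN]       # drop the tail
        wet = np.pad(wet, (0, CLIP_LEN - len(wet)))  # pad short clips
        references.append(wet)
        mixture += wet
    return mixture, references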
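The dry mixtures themselves are specified scaper-style: one background source active for the whole clip plus foreground events at random times, for 1 to 4 sources in total. The sketch below uses stock scaper and only approximates the official scripts; the folder layout, SNR range, and timing distributions are illustrative assumptions.

import random
import scaper

# Hypothetical folder layout; scaper expects one subfolder per label.
sc = scaper.Scaper(duration=10.0,
                   fg_path='fuss/foreground',
                   bg_path='fuss/background',
                   random_state=0)
sc.sr = 16000  # match the FUSS audio format

# One background source spanning the entire clip.
sc.add_background(label=('choose', []),
                  source_file=('choose', []),
                  source_time=('const', 0))

# Up to 3 foreground events, so each mixture has 1-4 sources total.
for _ in range(random.randint(0, 3)):
    sc.add_event(label=('choose', []),
                 source_file=('choose', []),
                 source_time=('const', 0),
                 event_time=('uniform', 0, 9),
                 event_duration=('uniform', 1, 10),
                 snr=('uniform', -5, 5),        # assumed range
                 pitch_shift=None,
                 time_stretch=None)

# save_isolated_events keeps per-source references (scaper >= 1.6).
sc.generate('mixture.wav', 'mixture.jams', save_isolated_events=True)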
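Finally, the note about digital silence matters for scoring: an all-zero reference makes most separation metrics undefined. A minimal sketch of the skip-silent-references behavior described above, using scale-invariant SNR as a stand-in metric (the official evaluation scripts live in the same GitHub repo):

import numpy as np

def si_snr(reference, estimate, eps=1e-8):
    """Scale-invariant SNR in dB, a common separation metric."""
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10((np.sum(target**2) + eps) /
                           (np.sum(noise**2) + eps))

def score_mixture(references, estimates):
    """Score only non-silent references; all-zero references are
    skipped, mirroring how the FUSS evaluation handles digital silence."""
    return [si_snr(ref, est)
            for ref, est in zip(references, estimates)
            if np.max(np.abs(ref)) > 0]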
                   All versions   This version
Views              3,596          2,427
Downloads          20,780         13,067
Data volume        122.8 TB       81.5 TB
Unique views       3,024          2,178
Unique downloads   5,174          2,452
