Dataset Open Access


Eduardo Fonseca; Manoj Plakal; Frederic Font; Daniel P. W. Ellis; Xavier Serra

Citation Style Language JSON Export

  "publisher": "Zenodo", 
  "DOI": "10.5281/zenodo.3612637", 
  "title": "FSDKaggle2019", 
  "issued": {
    "date-parts": [
  "abstract": "<p>FSDKaggle2019 is an audio dataset containing 29,266 audio files annotated with 80 labels of the <a href=\"\">AudioSet Ontology</a>. FSDKaggle2019 has been used for the <a href=\"\">DCASE Challenge 2019 Task 2</a>,&nbsp; which was run as a Kaggle competition titled&nbsp;<a href=\"\">Freesound Audio Tagging 2019</a>.</p>\n\n<p><strong>Citation</strong></p>\n\n<p>If you use the FSDKaggle2019 dataset or part of it, please cite our <a href=\"\">DCASE 2019 paper</a>:</p>\n\n<blockquote>\n<p>Eduardo Fonseca, Manoj Plakal, Frederic Font, Daniel P. W. Ellis, Xavier Serra. &quot;Audio tagging with noisy labels and minimal supervision&quot;. <em>Proceedings of the DCASE 2019 Workshop</em>, NYC, US (2019)</p>\n</blockquote>\n\n<p>You can also consider citing our <a href=\";isAllowed=y\"><strong>ISMIR 2017 paper</strong></a>, which describes how we gathered the manual annotations included in FSDKaggle2019.</p>\n\n<blockquote>\n<p>Eduardo Fonseca, Jordi Pons, Xavier Favory, Frederic Font, Dmitry Bogdanov, Andres Ferraro, Sergio Oramas, Alastair Porter, and Xavier Serra, &quot;Freesound Datasets: A Platform for the Creation of Open Audio Datasets&quot;, In <em>Proceedings of the 18th International Society for Music Information Retrieval Conference</em>, Suzhou, China, 2017</p>\n</blockquote>\n\n<p><strong>Data curators</strong></p>\n\n<p>Eduardo Fonseca, Manoj Plakal, Xavier Favory, Jordi Pons</p>\n\n<p><strong>Contact</strong></p>\n\n<p>You are welcome to contact Eduardo Fonseca should you have any questions at</p>\n\n<p>&nbsp;</p>\n\n<p><strong>ABOUT FSDKaggle2019</strong></p>\n\n<p>Freesound Dataset Kaggle 2019 (or <em><strong>FSDKaggle2019</strong></em> for short) is an audio dataset containing 29,266 audio files annotated with 80 labels of the <a href=\"\">AudioSet Ontology</a> [1]. FSDKaggle2019 has been used for the Task 2 of the <em>Detection and Classification of Acoustic Scenes and Events</em> (DCASE) Challenge 2019. 
Please visit the <a href=\"\">DCASE2019 Challenge Task 2 website</a> for more information. This Task was hosted on the Kaggle platform as a competition titled <a href=\"\">Freesound Audio Tagging 2019</a>. It was organized by researchers from the <a href=\"\">Music Technology Group</a> (MTG) of Universitat Pompeu Fabra (UPF), and from the <a href=\"\">Sound Understanding team</a> at Google AI Perception. The competition intended to provide insight towards the development of broadly-applicable sound event classifiers able to cope with label noise and minimal supervision conditions.</p>\n\n<p>FSDKaggle2019 employs audio clips from the following sources:</p>\n\n<ol>\n\t<li>Freesound Dataset (<a href=\"\">FSD</a>): a dataset being collected at the <a href=\"\">MTG-UPF</a> based on <a href=\"\">Freesound</a> content organized with the <a href=\"\">AudioSet Ontology</a></li>\n\t<li>The soundtracks of a pool of Flickr videos taken from the <a href=\"\">Yahoo Flickr Creative Commons 100M dataset</a> (YFCC)</li>\n</ol>\n\n<p>The audio data is labeled using a vocabulary of 80 labels from Google&rsquo;s AudioSet Ontology [1], covering diverse topics: Guitar and other Musical Instruments, Percussion, Water, Digestive, Respiratory sounds, Human voice, Human locomotion, Hands, Human group actions, Insect, Domestic animals, Glass, Liquid, Motor vehicle (road), Mechanisms, Doors, and a variety of Domestic sounds. The full list of categories can be inspected in&nbsp;<code>vocabulary.csv</code> (see Files &amp; Download below). The goal of the task was to build a multi-label audio tagging system that can predict appropriate label(s) for each audio clip in a test set.</p>\n\n<p>What follows is a summary of some of the most relevant characteristics of FSDKaggle2019. 
Nevertheless, it is highly <strong>recommended</strong> to read our <a href=\"\">DCASE 2019 paper</a> for a more in-depth description of the dataset and how it was built.</p>\n\n<p><strong>Ground Truth Labels</strong></p>\n\n<p>The ground truth labels are provided at the clip-level, and express the presence of a sound category in the audio clip, hence can be considered&nbsp;<em>weak</em>&nbsp;labels or tags. Audio clips have variable lengths (roughly from 0.3 to 30s).</p>\n\n<p>The audio content from&nbsp;<a href=\"\">FSD</a>&nbsp;has been manually labeled by humans following a data labeling process using the&nbsp;<a href=\"\">Freesound Annotator</a>&nbsp;platform. Most labels have inter-annotator agreement but not all of them. More details about the data labeling process and the&nbsp;<a href=\"\">Freesound Annotator</a>&nbsp;can be found in [2].</p>\n\n<p>The&nbsp;<a href=\"\">YFCC</a>&nbsp;soundtracks were labeled using automated heuristics applied to the audio content and metadata of the original Flickr clips. Hence, a substantial amount of label noise can be expected. The label noise can vary widely in amount and type depending on the category, including in- and out-of-vocabulary noises. 
More information about some of the types of label noise that can be encountered is available in [3].</p>\n\n<p>Specifically, FSDKaggle2019 features&nbsp;<strong>three types of label quality</strong>, one for each set in the dataset:</p>\n\n<ul>\n\t<li><strong>curated train set</strong>: correct (but potentially incomplete) labels</li>\n\t<li><strong>noisy train set</strong>: noisy labels</li>\n\t<li><strong>test set</strong>: correct and complete labels</li>\n</ul>\n\n<p>Further details can be found below in the sections for each set.</p>\n\n<p><strong>Format</strong></p>\n\n<p>All audio clips are provided as uncompressed PCM 16 bit, 44.1 kHz, mono audio files.</p>\n\n<p>&nbsp;</p>\n\n<p><strong>DATA SPLIT</strong></p>\n\n<p>FSDKaggle2019 consists of&nbsp;<strong>two train sets</strong>&nbsp;and&nbsp;<strong>one test set</strong>. The idea is to limit the supervision provided for training (i.e., the manually-labeled, hence reliable, data), thus promoting approaches to deal with label noise.</p>\n\n<p><strong>Curated train set</strong></p>\n\n<p>The&nbsp;<strong>curated train set</strong>&nbsp;consists of manually-labeled data from&nbsp;<a href=\"\">FSD</a>.&nbsp;</p>\n\n<ul>\n\t<li>Number of clips/class: 75 except in a few cases (where there are fewer)</li>\n\t<li>Total number of clips: 4970</li>\n\t<li>Avg number of labels/clip: 1.2</li>\n\t<li>Total duration: 10.5 hours</li>\n</ul>\n\n<p>The duration of the audio clips ranges from 0.3 to 30s due to the diversity of the sound categories and the preferences of Freesound users when recording/uploading sounds. Labels are correct but potentially incomplete. 
It can happen that a few of these audio clips present additional acoustic material beyond the provided ground truth label(s).</p>\n\n<p><strong>Noisy train set</strong></p>\n\n<p>The&nbsp;<strong>noisy train set</strong>&nbsp;is a larger set of noisy web audio data from Flickr videos taken from the&nbsp;<a href=\"\">YFCC</a>&nbsp;dataset [5].</p>\n\n<ul>\n\t<li>Number of clips/class: 300</li>\n\t<li>Total number of clips: 19,815</li>\n\t<li>Avg number of labels/clip: 1.2</li>\n\t<li>Total duration: ~80 hours</li>\n</ul>\n\n<p>The duration of the audio clips ranges from 1s to 15s, with the vast majority lasting 15s. Labels are automatically generated and purposefully noisy. No human validation is involved. The label noise can vary widely in amount and type depending on the category, including in- and out-of-vocabulary noises.</p>\n\n<p>Considering the numbers above, the per-class data distribution available for training is, for most of the classes, 300 clips from the noisy train set and 75 clips from the curated train set. This means 80% noisy / 20% curated at the clip level, while at the duration level the proportion is more extreme considering the variable-length clips.</p>\n\n<p><strong>Test set</strong></p>\n\n<p>The&nbsp;<strong>test set</strong>&nbsp;is used for system evaluation and consists of manually-labeled data from&nbsp;<a href=\"\">FSD</a>.&nbsp;</p>\n\n<ul>\n\t<li>Number of clips/class: between 50 and 150</li>\n\t<li>Total number of clips: 4481</li>\n\t<li>Avg number of labels/clip: 1.4</li>\n\t<li>Total duration: 12.9 hours</li>\n</ul>\n\n<p>The acoustic material present in the test set clips is labeled exhaustively using the aforementioned vocabulary of 80 classes. Most labels have inter-annotator agreement but not all of them. 
Barring human error, the label(s) are correct and complete considering the target vocabulary; nonetheless, a few clips could still present additional (unlabeled) acoustic content out of the vocabulary.</p>\n\n<p>During the&nbsp;<a href=\"\">DCASE2019 Challenge Task 2</a>, the test set was split into two subsets, for the&nbsp;<strong>public</strong>&nbsp;and&nbsp;<strong>private</strong>&nbsp;leaderboards, and only the data corresponding to the&nbsp;<em>public</em>&nbsp;leaderboard was provided.&nbsp;<strong>In this package you will find the full test set with all the test labels</strong>. To allow comparison with previous work, the file&nbsp;<code>test_post_competition.csv</code>&nbsp;includes a flag to determine the corresponding leaderboard (public or private) for each test clip (see more info in&nbsp;Files &amp; Download&nbsp;below).</p>\n\n<p><strong>Acoustic mismatch</strong></p>\n\n<p>As mentioned before, FSDKaggle2019 uses audio clips from two sources:</p>\n\n<ul>\n\t<li><a href=\"\">FSD</a>: curated train set and test set, and</li>\n\t<li><a href=\"\">YFCC</a>: noisy train set.</li>\n</ul>\n\n<p>While the sources of audio (Freesound and Flickr) are collaboratively contributed and pretty diverse themselves, a certain acoustic mismatch can be expected between&nbsp;<a href=\"\">FSD</a>&nbsp;and&nbsp;<a href=\"\">YFCC</a>. We conjecture this mismatch comes from a variety of reasons.<br>\nFor example, through acoustic inspection of a small sample of both data sources, we find a higher percentage of high quality recordings in FSD. In addition, audio clips in Freesound are typically recorded with the purpose of capturing audio, which is not necessarily the case in YFCC.</p>\n\n<p>This mismatch can have an impact on the evaluation, considering that most of the train data come from YFCC, while all test data are drawn from FSD. 
This constraint (i.e., noisy training data coming from a different web audio source than the test set) is sometimes a real-world condition.</p>\n\n<p>&nbsp;</p>\n\n<p><strong>LICENSE</strong></p>\n\n<p>All clips in FSDKaggle2019 are released under Creative Commons (CC) licenses. To facilitate attribution of these files to third parties, we include a mapping from the audio clips to their corresponding licenses.</p>\n\n<ul>\n\t<li>\n\t<p><strong>Curated train set and test set</strong>. All clips in <strong>Freesound</strong> are released under different modalities of Creative Commons (CC) licenses, and each audio clip has its own license as defined by the audio clip uploader in Freesound, some of them requiring attribution to their original authors and some forbidding further commercial reuse. The licenses are specified in the files&nbsp;<code>train_curated_post_competition.csv</code>&nbsp;and&nbsp;<code>test_post_competition.csv</code>. These licenses can be CC0, CC-BY, CC-BY-NC and CC Sampling+.</p>\n\t</li>\n\t<li>\n\t<p><strong>Noisy train set</strong>. Similarly, the licenses of the soundtracks from&nbsp;<strong>Flickr</strong>&nbsp;used in FSDKaggle2019 are specified in the file&nbsp;<code>train_noisy_post_competition.csv</code>. These licenses can be CC-BY and CC-BY-SA.</p>\n\t</li>\n</ul>\n\n<p>In addition, FSDKaggle2019 as a whole is the result of a curation process and it has an additional license. FSDKaggle2019 is released under&nbsp;<a href=\"\">CC-BY</a>. 
This license is specified in the&nbsp;<code>LICENSE-DATASET</code>&nbsp;file downloaded with the&nbsp;<code>FSDKaggle2019.doc</code>&nbsp;zip file.</p>\n\n<p>&nbsp;</p>\n\n<p><strong>FILES &amp; DOWNLOAD</strong></p>\n\n<p>FSDKaggle2019 can be downloaded as a series of zip files with the following directory structure:</p>\n\n<pre>root\n\u2502  \n\u2514\u2500\u2500\u2500FSDKaggle2019.audio_train_curated/               Audio clips in the curated train set\n\u2502\n\u2514\u2500\u2500\u2500FSDKaggle2019.audio_train_noisy/                 Audio clips in the noisy train set\n\u2502   \n\u2514\u2500\u2500\u2500FSDKaggle2019.audio_test/                        Audio clips in the full test set\n\u2502   \n\u2514\u2500\u2500\u2500FSDKaggle2019.meta/                              Files for evaluation setup\n\u2502   \u2502            \n\u2502   \u2514\u2500\u2500\u2500 train_curated_post_competition.csv          Ground truth for the curated train set\n\u2502   \u2502            \n\u2502   \u2514\u2500\u2500\u2500 train_noisy_post_competition.csv            Ground truth for the noisy train set\n\u2502   \u2502            \n\u2502   \u2514\u2500\u2500\u2500 test_post_competition.csv                   Ground truth for the full test set\n\u2502   \u2502            \n\u2502   \u2514\u2500\u2500\u2500 vocabulary.csv                              List of sound classes in FSDKaggle2019    \n\u2502   \n\u2514\u2500\u2500\u2500FSDKaggle2019.doc/\n    \u2502            \n    \u2514\u2500\u2500\u2500                                    The dataset description file that you are reading\n    \u2502            \n    \u2514\u2500\u2500\u2500LICENSE-DATASET                              License of the FSDKaggle2019 dataset as an entity   \n</pre>\n\n<p><strong>Important Note:</strong>&nbsp;the original&nbsp;<code>train_curated.csv</code>&nbsp;and&nbsp;<code>train_noisy.csv</code>&nbsp;files provided during the competition have been updated with more metadata (licenses, Freesound/Flickr ids, etc.) 
into&nbsp;<code>train_curated_post_competition.csv</code>&nbsp;and&nbsp;<code>train_noisy_post_competition.csv</code>. Likewise, the original&nbsp;<code>test.csv</code>&nbsp;that was not public during the competition is now available with ground truth and metadata as&nbsp;<code>test_post_competition.csv</code>.</p>\n\n<p>Each row (i.e. audio clip) of the&nbsp;<code>train_curated_post_competition.csv</code>&nbsp;or&nbsp;<code>train_noisy_post_competition.csv</code> files contains the following information:</p>\n\n<ul>\n\t<li><code>fname</code>: the file name, e.g.,&nbsp;<code>0006ae4e.wav</code></li>\n\t<li><code>labels</code>: the audio classification label(s) (ground truth). Note that the number of labels per clip can be one, e.g.,&nbsp;<code>Bark</code>, or more, e.g.,&nbsp;<code>&quot;Walk_and_footsteps,Slam&quot;</code>.</li>\n\t<li><code>freesound_id</code>&nbsp;or&nbsp;<code>flickr_video_URL</code>: the Freesound id or Flickr id for the audio clip</li>\n\t<li><code>license</code>: the license for the audio clip</li>\n</ul>\n\n<p>Each row (i.e. audio clip) of the&nbsp;<code>test_post_competition.csv</code>&nbsp;file contains the following information:</p>\n\n<ul>\n\t<li><code>fname</code>: the file name</li>\n\t<li><code>labels</code>: the audio classification label(s) (ground truth). 
Note that the number of labels per clip can be one, e.g.,&nbsp;<code>Bark</code>, or more, e.g.,&nbsp;<code>&quot;Walk_and_footsteps,Slam&quot;</code>.</li>\n\t<li><code>usage</code>: string that indicates to which Kaggle leaderboard the clip was associated during the competition:&nbsp;<code>Public</code>&nbsp;or&nbsp;<code>Private</code></li>\n\t<li><code>freesound_id</code>: the Freesound id for the audio clip</li>\n\t<li><code>license</code>: the license for the audio clip</li>\n</ul>\n\n<p><strong>Detected corrupted files in the curated train set</strong></p>\n\n<p>The following 5 audio files in the&nbsp;<strong>curated train set</strong>&nbsp;have wrong labels, due to a bug in the file renaming process:&nbsp;<code>f76181c4.wav</code>,&nbsp;<code>77b925c2.wav</code>,&nbsp;<code>6a1f682a.wav</code>,&nbsp;<code>c7db12aa.wav</code>,&nbsp;<code>7752cc8a.wav</code>.</p>\n\n<p>The audio file&nbsp;<code>1d44b0bd.wav</code>&nbsp;in the&nbsp;<strong>curated train set</strong>&nbsp;was found to be corrupted (contains no signal) due to an error in format conversion.</p>\n\n<p>If you find more corrupted files in FSDKaggle2019, please send an email to</p>\n\n<p><strong>Download</strong></p>\n\n<p>Each of the folders in the directory structure above is compressed into one corresponding zip file that you can download and unzip with your favorite compression tool. There is one exception: due to the large size of&nbsp;<code>FSDKaggle2019.audio_train_noisy/</code>, it is split into 7 files (note the last file is not&nbsp;<code>*.z07</code>, but&nbsp;<code>*.zip</code>):</p>\n\n<pre><code class=\"language-none\">FSDKaggle2019.audio_train_noisy.z01\nFSDKaggle2019.audio_train_noisy.z02\nFSDKaggle2019.audio_train_noisy.z03\nFSDKaggle2019.audio_train_noisy.z04\nFSDKaggle2019.audio_train_noisy.z05\nFSDKaggle2019.audio_train_noisy.z06\nFSDKaggle2019.audio_train_noisy.zip</code></pre>\n\n<p>In this case, you first have to download the 7 files. Once downloaded, we convert the split archive to a single-file archive. 
In other words, we merge the 7 files into one zip file called e.g.&nbsp;<code>unsplit.zip</code>&nbsp;on your local machine.</p>\n\n<pre><code class=\"language-none\">zip -s 0 FSDKaggle2019.audio_train_noisy.zip --out unsplit.zip</code></pre>\n\n<p>Finally, this merged file is unzipped.</p>\n\n<pre><code class=\"language-none\">unzip unsplit.zip</code></pre>\n\n<p><strong>Baseline System</strong></p>\n\n<p>A CNN baseline system for FSDKaggle2019 is available at&nbsp;<a href=\"\"><em>task2</em>baseline</a>.</p>\n\n<p>&nbsp;</p>\n\n<p><strong>REFERENCES AND LINKS</strong></p>\n\n<p>[1] Jort F Gemmeke, Daniel PW Ellis, Dylan Freedman, Aren Jansen, Wade Lawrence, R Channing Moore, Manoj Plakal, and Marvin Ritter. &quot;Audio Set: An ontology and human-labeled dataset for audio events.&quot; In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2017. [<a href=\"\">PDF</a>]</p>\n\n<p>[2] Eduardo Fonseca, Jordi Pons, Xavier Favory, Frederic Font, Dmitry Bogdanov, Andres Ferraro, Sergio Oramas, Alastair Porter, and Xavier Serra. &quot;Freesound Datasets: A Platform for the Creation of Open Audio Datasets.&quot; In Proceedings of the International Conference on Music Information Retrieval, 2017. [<a href=\"\">PDF</a>]</p>\n\n<p>[3] Eduardo Fonseca, Manoj Plakal, Daniel P. W. Ellis, Frederic Font, Xavier Favory, and Xavier Serra. &quot;Learning Sound Event Classifiers from Web Audio with Noisy Labels.&quot; In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, 2019. [<a href=\"\">PDF</a>]</p>\n\n<p>[4] Frederic Font, Gerard Roma, and Xavier Serra. &quot;Freesound technical demo.&quot; Proceedings of the 21st ACM international conference on Multimedia, 2013.&nbsp;<a href=\"\"></a></p>\n\n<p>[5] Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li, YFCC100M: The New Data in Multimedia Research, Commun. 
ACM, 59(2):64&ndash;73, January 2016</p>\n\n<p>Freesound Annotator:&nbsp;<a href=\"\"></a><br>\nFreesound:&nbsp;<a href=\"\"></a><br>\nEduardo Fonseca&#39;s personal website:&nbsp;<a href=\"\"></a><br>\nMore datasets collected by us:&nbsp;<a href=\"\"></a></p>\n\n<p><strong>Acknowledgments</strong></p>\n\n<p>This work is partially supported by the European Union&rsquo;s Horizon 2020 research and innovation programme under grant agreement No 688382&nbsp;<a href=\"\">AudioCommons</a>. Eduardo Fonseca is also sponsored by a&nbsp;<a href=\"\">Google Faculty Research Award 2018</a>. We thank everyone who contributed to FSDKaggle2019 with annotations.</p>", 
  "author": [
      "family": "Eduardo Fonseca"
      "family": "Manoj Plakal"
      "family": "Frederic Font"
      "family": "Daniel P. W. Ellis"
      "family": "Xavier Serra"
  "version": "1.0", 
  "type": "dataset", 
  "id": "3612637"
                  All versions   This version
Views                      810            810
Downloads                1,857          1,857
Data volume             5.0 TB         5.0 TB
Unique views               705            705
Unique downloads           485            485
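The dataset description states that all clips are uncompressed PCM 16 bit, 44.1 kHz, mono. After unzipping, that can be sanity-checked with Python's stdlib `wave` module; a minimal sketch (the helper name `check_clip` is ours, not part of any official dataset tooling):

```python
import wave

# Documented FSDKaggle2019 format: mono, 16-bit PCM (2-byte samples), 44.1 kHz
EXPECTED = {"channels": 1, "sampwidth_bytes": 2, "framerate": 44100}

def check_clip(path):
    """Return True if the WAV file at `path` matches the documented format."""
    with wave.open(path, "rb") as w:
        actual = {
            "channels": w.getnchannels(),
            "sampwidth_bytes": w.getsampwidth(),  # 2 bytes = 16 bit
            "framerate": w.getframerate(),
        }
    return actual == EXPECTED
```

Running this over a downloaded folder is a quick way to catch truncated or mis-converted files before training.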

