Published March 18, 2019 | Version 1.0
Dataset | Open Access

VimSketch Dataset

Contributor affiliations:
  1. Northwestern University
  2. New York University

Description

The VimSketch Dataset combines two publicly available datasets created by the Interactive Audio Lab:

  1. Vocal Imitation Set: a collection of crowd-sourced vocal imitations of a diverse set of reference sounds drawn from Freesound (https://freesound.org/) and curated according to Google's AudioSet ontology (https://research.google.com/audioset/).
  2. VocalSketch Dataset: a dataset containing thousands of vocal imitations of a diverse set of sounds.

 

Publications by the Interactive Audio Lab using VimSketch:

Fatemeh Pishdadian, Bongjun Kim, Prem Seetharaman, Bryan Pardo. "Classifying Non-speech Vocals: Deep vs Signal Processing Representations," Detection and Classification of Acoustic Scenes and Events Workshop (DCASE), 2019.

 

Contact information:

- Interactive Audio Lab: http://music.eecs.northwestern.edu

- Bryan Pardo pardo@northwestern.edu | http://www.bryanpardo.com

- Bongjun Kim bongjun@u.northwestern.edu | http://www.bongjunkim.com

- Fatemeh Pishdadian fpishdadian@u.northwestern.edu | http://www.fatemehpishdadian.com


Files

Vim_Sketch_Dataset.zip (4.5 GB)
md5:5f743eb8cf98040b55b4f27839666334
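After downloading the 4.5 GB archive, it is worth checking its integrity against the published MD5 checksum before unpacking. A minimal sketch in Python, assuming the archive was saved as Vim_Sketch_Dataset.zip in the working directory (the filename and checksum come from the listing above):

```python
import hashlib
import os

# Published checksum from the dataset listing.
EXPECTED_MD5 = "5f743eb8cf98040b55b4f27839666334"


def md5_of_file(path, chunk_size=8192):
    """Compute the MD5 hex digest of a file, streaming it in chunks
    so the 4.5 GB archive never has to fit in memory at once."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


archive = "Vim_Sketch_Dataset.zip"
if os.path.exists(archive):
    actual = md5_of_file(archive)
    if actual == EXPECTED_MD5:
        print("Checksum OK")
    else:
        print("Checksum mismatch: re-download the archive")
```

On Unix-like systems, `md5sum Vim_Sketch_Dataset.zip` gives the same digest without any code.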