
MusicNet

John Thickstun; Zaid Harchaoui; Sham M. Kakade

MusicNet is a collection of 330 freely-licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note's position in the metrical structure of the composition. The labels were acquired from musical scores aligned to the recordings by dynamic time warping and verified by trained musicians; we estimate a labeling error rate of 4%. We offer the MusicNet labels to the machine learning and music communities as a resource for training models and as a common benchmark for comparing results. This dataset was introduced in the paper "Learning Features of Music from Scratch" [1].


This repository consists of 3 top-level files:

  • musicnet.tar.gz - This file contains the MusicNet dataset itself, consisting of PCM-encoded audio files (.wav) and corresponding CSV-encoded note label files (.csv). The data is organized according to the train/test split described and used in "Invariances and Data Augmentation for Supervised Music Transcription" [2]; a loading sketch follows this list.
  • musicnet_metadata.csv - This file contains track-level information about recordings contained in MusicNet. The data and label files are named with MusicNet ids, which you can use to cross-index the data and labels with this metadata file.
  • musicnet_midis.tar.gz - This file contains the reference MIDI files used to construct the MusicNet labels. 
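
To make the layout concrete, here is a minimal Python sketch for loading one recording, its note labels, and its metadata row. It assumes the archives have been extracted in place, that musicnet.tar.gz unpacks into train_data/, train_labels/, test_data/, and test_labels/ directories, and that the label and metadata CSVs carry start_time, end_time, instrument, note, and id columns; treat these paths and column names as assumptions to verify against your extracted copy.

```python
import csv

from scipy.io import wavfile

# Hypothetical id for illustration; substitute any id from musicnet_metadata.csv.
musicnet_id = "1759"

# Audio is PCM-encoded .wav; wavfile.read returns (sample_rate, samples).
rate, audio = wavfile.read(f"musicnet/train_data/{musicnet_id}.wav")

# Each CSV row annotates one note; column names assumed from the extracted files.
with open(f"musicnet/train_labels/{musicnet_id}.csv", newline="") as f:
    labels = list(csv.DictReader(f))

# Cross-index with the track-level metadata via the shared MusicNet id.
with open("musicnet_metadata.csv", newline="") as f:
    metadata = {row["id"]: row for row in csv.DictReader(f)}

note = labels[0]
print(rate, audio.shape, len(labels))
print(note["start_time"], note["end_time"], note["instrument"], note["note"])
print(metadata[musicnet_id])
```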


A PyTorch interface for accessing the MusicNet dataset is available on GitHub. For an audiovisual introduction and summary of this dataset, see the MusicNet inspector, created by Jong Wook Kim. The audio recordings in MusicNet consist of Creative Commons-licensed and public-domain performances, sourced from the Isabella Stewart Gardner Museum, the European Archive Foundation, and Musopen. The provenance of each recording and MIDI file is described in the metadata file.
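
Because each label marks a note's onset and offset in time, a typical first step when using MusicNet as a transcription benchmark is to rasterize the labels into a frame-level piano-roll target. The sketch below builds on the loading example above and assumes start_time/end_time are expressed in samples and note is a MIDI pitch number; the 100-frames-per-second resolution is an illustrative choice, not part of the dataset.

```python
import numpy as np

def piano_roll(labels, num_samples, sample_rate=44100, frames_per_second=100):
    """Rasterize note labels into a (frames, 128) binary piano roll.

    Assumes each label row has integer start_time/end_time in samples and a
    MIDI note number -- verify these assumptions against your extracted CSVs.
    """
    num_frames = int(num_samples / sample_rate * frames_per_second) + 1
    roll = np.zeros((num_frames, 128), dtype=np.float32)
    for row in labels:
        start = int(int(row["start_time"]) / sample_rate * frames_per_second)
        end = int(int(row["end_time"]) / sample_rate * frames_per_second)
        roll[start:end + 1, int(row["note"])] = 1.0
    return roll

# e.g., with labels and audio from the loading sketch above:
# targets = piano_roll(labels, num_samples=len(audio))
```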


[1] Learning Features of Music from Scratch. John Thickstun, Zaid Harchaoui, and Sham M. Kakade. In International Conference on Learning Representations (ICLR), 2017.

@inproceedings{thickstun2017learning,
    title = {Learning Features of Music from Scratch},
    author = {John Thickstun and Zaid Harchaoui and Sham M. Kakade},
    year = {2017},
    booktitle = {International Conference on Learning Representations (ICLR)}
}

[2] Invariances and Data Augmentation for Supervised Music Transcription. John Thickstun, Zaid Harchaoui, Dean P. Foster, and Sham M. Kakade. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.

@inproceedings{thickstun2018invariances,
    title = {Invariances and Data Augmentation for Supervised Music Transcription},
    author = {John Thickstun and Zaid Harchaoui and Dean P. Foster and Sham M. Kakade},
    year = {2018},
    booktitle = {International Conference on Acoustics, Speech, and Signal Processing (ICASSP)}
}


This work was supported by the Washington Research Foundation Fund for Innovation in Data-Intensive Discovery, and the CIFAR program "Learning in Machines and Brains."
Files (11.1 GB total)

  • musicnet.tar.gz - 11.1 GB (md5: 844764911fa0d5b97c97da944a057590)
  • musicnet_metadata.csv - 43.8 kB (md5: 1caef62cee9c875235e62aac368b49d8)
  • musicnet_midis.tar.gz - 2.6 MB (md5: b5fa98a113bfc51c8a445def9f24dc7e)
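
After downloading, the published md5 sums can be checked with a few lines of Python; this is a generic integrity check, not part of the dataset tooling.

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    """Compute a file's md5 digest, reading it in 1 MiB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksums listed above.
assert md5sum("musicnet.tar.gz") == "844764911fa0d5b97c97da944a057590"
```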
