Conference paper Open Access

LakhNES: Improving Multi-instrumental Music Generation with Cross-domain Pre-training

Chris Donahue; Huanru Henry Mao; Yiting Ethan Li; Garrison Cottrell; Julian McAuley

We are interested in the task of generating multi-instrumental music scores. The Transformer architecture has recently shown great promise for the task of piano score generation—here we adapt it to the multi-instrumental setting. Transformers are complex, high-dimensional language models which are capable of capturing long-term structure in sequence data, but require large amounts of data to fit. Their success on piano score generation is partially explained by the large volumes of symbolic data readily available for that domain. We leverage the recently-introduced NES-MDB dataset of four-voice scores from an early video game sound synthesis chip (the NES), which we find to be well-suited to training with the Transformer architecture. To further improve the performance of our model, we propose a pre-training technique to leverage the information in a large collection of heterogeneous music. Despite differences between the two corpora, we find that this pre-training procedure improves both quantitative and qualitative performance for our primary task.
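
To make the pre-training-then-fine-tuning procedure described in the abstract concrete, the following minimal sketch (not the paper's code) trains an autoregressive Transformer language model over symbolic-music event tokens on a large placeholder corpus and then continues training on a smaller target corpus. The model dimensions, vocabulary size, and the random tensors standing in for tokenized Lakh MIDI and NES-MDB data are illustrative assumptions only.

    import torch
    import torch.nn as nn

    VOCAB, SEQ_LEN, D_MODEL = 512, 128, 256  # assumed event vocabulary and sizes

    class EventLM(nn.Module):
        """Autoregressive Transformer over event tokens (next-token prediction)."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, D_MODEL)
            layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.head = nn.Linear(D_MODEL, VOCAB)

        def forward(self, x):
            # Causal mask so each position attends only to earlier tokens.
            causal = nn.Transformer.generate_square_subsequent_mask(x.size(1))
            return self.head(self.encoder(self.embed(x), mask=causal))

    def train_epoch(model, batches, opt, loss_fn):
        for tokens in batches:                     # tokens: (batch, seq_len) ints
            logits = model(tokens[:, :-1])         # predict each next token
            loss = loss_fn(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
            opt.zero_grad(); loss.backward(); opt.step()

    model = EventLM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # Random tensors standing in for tokenized Lakh MIDI (pre-training)
    # and NES-MDB (fine-tuning) sequences.
    lakh_batches = [torch.randint(0, VOCAB, (8, SEQ_LEN)) for _ in range(10)]
    nes_batches  = [torch.randint(0, VOCAB, (8, SEQ_LEN)) for _ in range(10)]

    train_epoch(model, lakh_batches, opt, loss_fn)  # 1) pre-train on the large, heterogeneous corpus
    train_epoch(model, nes_batches, opt, loss_fn)   # 2) fine-tune on the target (NES) corpus

The two calls to train_epoch mirror the paper's high-level recipe: fit the same language model first on the large heterogeneous collection, then on the four-voice NES-MDB scores.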

Files (440.3 kB)
Name: ismir2019_paper_000083.pdf (md5:8b9a6c2239e445a8dd97cc4b974d950e)
Size: 440.3 kB
                    All versions   This version
Views                         22             22
Downloads                     19             19
Data volume               8.4 MB         8.4 MB
Unique views                  20             20
Unique downloads              17             17
