Published October 11, 2020 | Version v1
Conference paper · Open Access

Generating music with a self-correcting non-chronological autoregressive model

Description

We describe a novel approach for generating music using a self-correcting, non-chronological, autoregressive model. We represent music as a sequence of edit events, each of which denotes either the addition or removal of a note, including notes previously generated by the model. During inference, we generate one edit event at a time using direct ancestral sampling. This allows the model to fix earlier mistakes, such as incorrectly sampled notes, and prevents the accumulation of errors to which autoregressive models are prone. Because generation is note-wise and online, our approach also offers a finer degree of control during human-AI collaboration. We show through quantitative metrics and a human survey evaluation that our approach generates better results than orderless NADE and Gibbs sampling approaches.
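To make the representation concrete, the sketch below illustrates one plausible reading of the edit-event formulation described in the abstract: a piece is a set of notes, each edit event either adds or removes a note, and generation applies one sampled event at a time (direct ancestral sampling). The `Note`, `EditEvent`, `sample_edit_event`, and `generate` names are hypothetical placeholders, and the random model stand-in is not the authors' trained network.

```python
from dataclasses import dataclass
from typing import Set, Tuple
import random

# A minimal sketch, assuming a simplified note encoding and a placeholder
# in place of the learned model. Not the authors' implementation.

Note = Tuple[int, int]  # (MIDI pitch, onset time step)

@dataclass(frozen=True)
class EditEvent:
    kind: str   # "add" or "remove"
    note: Note

def sample_edit_event(piece: Set[Note]) -> EditEvent:
    """Stand-in for the learned model: sample one edit event
    conditioned on the current piece state."""
    if piece and random.random() < 0.1:
        # Occasionally remove a previously generated note (self-correction).
        return EditEvent("remove", random.choice(sorted(piece)))
    return EditEvent("add", (random.randint(48, 84), random.randint(0, 63)))

def generate(num_events: int = 64) -> Set[Note]:
    """Autoregressively apply sampled edit events to an initially empty piece."""
    piece: Set[Note] = set()
    for _ in range(num_events):
        event = sample_edit_event(piece)
        if event.kind == "add":
            piece.add(event.note)
        elif event.note in piece:
            piece.discard(event.note)
    return piece

if __name__ == "__main__":
    print(sorted(generate()))
```

Because each step conditions on the notes currently present rather than on a fixed chronological prefix, a removal event can undo a poorly sampled note later in the sequence, which is the self-correcting behavior the abstract describes.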

Files

247.pdf (494.6 kB)
md5:e41ee35e417deb642b91086f97422cb7