Published November 4, 2019
| Version v1
Conference paper
Open Access
Controlling Symbolic Music Generation based on Concept Learning from Domain Knowledge
Description
Machine learning enables the automatic construction of generative models for music. However, such models are typically learned only from the note sequences themselves, without explicitly employing domain knowledge of musical concepts such as rhythm, contour, and fragmentation & consolidation. We approximate such musical domain knowledge as a function and feed it into our model. Two decoupled spaces are then learned: an extraction space that captures the target concept, and a residual space that captures the remainder. For monophonic symbolic music, our model exhibits high decoupling and modeling performance. Controllability in generation is also improved: (i) our interpolation enables concept-aware, flexible control over blending two musical fragments, and (ii) our variation generation enables users to make concept-aware, adjustable variations.
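The decoupling idea in the abstract can be illustrated with a minimal sketch: if a latent code is partitioned into an extraction (concept) part and a residual part, then concept-aware interpolation blends only the concept dimensions while keeping one fragment's residual fixed. The split, dimensions, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Assumed illustrative dimensions; the paper does not specify these here.
CONCEPT_DIM = 4   # extraction space (target concept, e.g. rhythm)
RESIDUAL_DIM = 4  # residual space (everything else)

def split(z):
    """Split a latent vector into (concept, residual) parts."""
    return z[:CONCEPT_DIM], z[CONCEPT_DIM:]

def concept_aware_interpolate(z_a, z_b, alpha):
    """Blend only the concept part of two latent codes.

    alpha=0 keeps fragment A's concept; alpha=1 adopts fragment B's
    concept. Fragment A's residual is held fixed throughout, so only
    the target concept changes along the interpolation path.
    """
    c_a, r_a = split(z_a)
    c_b, _ = split(z_b)
    c_mix = (1.0 - alpha) * c_a + alpha * c_b
    return np.concatenate([c_mix, r_a])

# Two toy latent codes standing in for encoded musical fragments.
z_a = np.zeros(CONCEPT_DIM + RESIDUAL_DIM)
z_b = np.ones(CONCEPT_DIM + RESIDUAL_DIM)
z_mid = concept_aware_interpolate(z_a, z_b, 0.5)
```

Decoding `z_mid` (with the model's decoder, not shown) would yield a fragment whose target concept lies halfway between the two inputs while other attributes stay close to fragment A.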
Files

| Name | Size |
|---|---|
| ismir2019_paper_000100.pdf (md5:5a81997d4dd5d3c8f70801c404ccc1d5) | 1.2 MB |