Conference paper Open Access
An extended abstract and poster presented at the NeurIPS 2018 Workshop on Machine Learning for Creativity and Design, Montréal, Canada.
This paper is about creating digital musical instruments (DMIs) in which a predictive model is integrated into the interactive system. Rather than predicting symbolic music (e.g., MIDI notes), our systems predict future control data from the user along with precise temporal information. We propose that a mixture density recurrent neural network (MDRNN) is an appropriate model for this task. The predictions can be used to fill in control data when the user stops performing, or as a kind of "filter" on the user's input. We describe our motivations, two NIMEs (new interfaces for musical expression) applying this idea, and future directions.
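The core of the MDRNN idea is that the network's output layer parameterises a mixture of Gaussians over the next (time delta, control value) pair, from which a prediction is sampled rather than read off deterministically. The following is a minimal sketch of that sampling step only; the function name, mixture size, and all parameter values are illustrative assumptions, not values from the paper, and the recurrent network that would produce these parameters is omitted.

```python
import math
import random

def softmax(zs):
    """Convert raw mixture-weight logits into a probability distribution."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_mdn_output(pi_logits, mus, log_sigmas, rng):
    """Sample one (time_delta, control_value) pair from a mixture of
    diagonal Gaussians, as an MDRNN's output head would parameterise.
    All argument values here are hypothetical, for illustration only."""
    pi = softmax(pi_logits)
    # pick a mixture component according to its weight
    k = rng.choices(range(len(pi)), weights=pi)[0]
    # sample each output dimension from that component's Gaussian
    return [rng.gauss(mu, math.exp(ls))
            for mu, ls in zip(mus[k], log_sigmas[k])]

rng = random.Random(0)
# hypothetical network outputs: a 3-component mixture over a 2-D target,
# (time delta in seconds, control value in [0, 1])
pi_logits = [0.1, 1.2, -0.3]
mus = [[0.05, 0.2], [0.10, 0.5], [0.02, 0.8]]
log_sigmas = [[-3.0, -2.0], [-3.0, -2.0], [-3.0, -2.0]]

dt, ctrl = sample_mdn_output(pi_logits, mus, log_sigmas, rng)
```

Feeding each sampled pair back into the recurrent network as its next input is what lets such a system continue generating control data when the performer stops.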