Published October 8, 2024 | Version v1 | Conference paper | Open
Building Sketch-to-Sound Mapping with Unsupervised Feature Extraction and Interactive Machine Learning
Description
In this paper, we explore the interactive construction and exploration of mappings between visual sketches and musical controls. Interactive Machine Learning (IML) allows creators to construct mappings from personalised training examples. However, for high-dimensional data such as sketches, dimensionality reduction is required to extract features for the IML model. We propose using unsupervised machine learning to encode sketches into lower-dimensional latent representations, which then serve as the source for the IML model to construct sketch-to-sound mappings. We build a proof-of-concept prototype and demonstrate it through two compositions. We reflect on the composition processes to discuss the controllability and explorability of mappings built with this approach, and how these properties contribute to musical expression.
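The description outlines a two-stage pipeline: an unsupervised model compresses a sketch into a low-dimensional latent vector, and an IML model maps that vector to sound controls. The paper's actual architecture is not given on this page; the sketch below is a minimal illustration assuming a small fully connected autoencoder as the feature extractor, a k-nearest-neighbour regressor standing in for the IML model, a 32x32 rasterised sketch input, and random placeholder data throughout.

```python
# Minimal illustrative sketch of the pipeline described in the abstract.
# Assumptions (not from the paper): 32x32 grayscale raster sketches, a small
# fully connected autoencoder as the unsupervised feature extractor, and a
# k-nearest-neighbour regressor as a stand-in for the IML mapping model.

import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsRegressor

LATENT_DIM = 8


class SketchAutoencoder(nn.Module):
    """Unsupervised feature extractor: raster sketch -> low-dim latent."""

    def __init__(self, in_dim=32 * 32, latent_dim=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


# --- Stage 1: unsupervised training on a corpus of sketches ---
sketches = torch.rand(512, 32 * 32)  # placeholder rasterised sketch corpus
model = SketchAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    recon, _ = model(sketches)
    loss = nn.functional.mse_loss(recon, sketches)
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: interactive machine learning on latent features ---
# The user pairs a few example sketches with desired sound-control values
# (e.g. filter cutoff, grain density); the regressor learns the mapping.
with torch.no_grad():
    _, examples_z = model(sketches[:10])
sound_params = np.random.rand(10, 3)  # placeholder user-chosen control values
mapper = KNeighborsRegressor(n_neighbors=3).fit(examples_z.numpy(), sound_params)

# --- Runtime: a new sketch is encoded, then mapped to sound controls ---
with torch.no_grad():
    _, z_new = model(torch.rand(1, 32 * 32))
print(mapper.predict(z_new.numpy()))  # -> control values to send to a synth
```

In a prototype like the one described, the predicted control values would presumably be streamed to a synthesis engine in real time (for example over OSC) as the performer draws, which is what makes the mapping both controllable and explorable.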
Files

| Name | Size |
|---|---|
| nime2024_86.pdf (md5:1ed9571141be9a0d4ce6793e55a392ce) | 1.5 MB |