Tweaking Parameters, Charting Perceptual Spaces
Description
Live coding builds and conducts sound through real-time intervention in parametric devices such as synthesizers. Coding a piece on the fly requires bridging the cognitive gap posed by the devices' huge parameter spaces and the nonlinear sound variation built into them. One approach is to keep a set of preselected parameter combinations whose aural results are known, as starting points for the performance. However, collecting and memorizing many combinations is time consuming, and using only a few can result in monotony. It is therefore convenient to develop models that reduce the devices' operational complexity so as to ease certain musical tasks, such as the automatic creation of variations. Here, a rule-based approach is described that models the relationships between combinations of parameter values and the perceptual categories assigned to them. The extracted rules can be used on the fly either to reproduce the labeled parameter combinations (by calling them by their class, as a set of presets) or to obtain new, unheard combinations that the model predicts to be consistent with a selected category. The rules are human-readable and describe how parameter combinations relate to perceptual categories. Concrete examples of using the system to select material and create the structure of two pieces are presented and discussed.
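To make the described workflow concrete, the following is a minimal sketch, not the paper's implementation: it uses scikit-learn's DecisionTreeClassifier as a stand-in rule-based model, with hypothetical parameter names (cutoff, resonance, detune), made-up labeled presets, and a hypothetical `variations` helper that rejection-samples new combinations the model assigns to a chosen category.

```python
# Sketch of a rule-based mapping from synth parameters to perceptual
# categories. Data, parameter names, and categories are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Labeled presets: parameter combinations (normalized 0..1) with the
# perceptual category assigned to each by listening.
presets = np.array([
    [0.1, 0.8, 0.0],   # "dark"
    [0.2, 0.7, 0.1],   # "dark"
    [0.9, 0.2, 0.6],   # "bright"
    [0.8, 0.3, 0.7],   # "bright"
])
labels = ["dark", "dark", "bright", "bright"]

# Fit a shallow tree; its branches are human-readable rules relating
# parameter values to perceptual categories.
tree = DecisionTreeClassifier(max_depth=3).fit(presets, labels)
print(export_text(tree, feature_names=["cutoff", "resonance", "detune"]))

def variations(category, n=5):
    """Rejection-sample unheard combinations predicted to match `category`."""
    out = []
    while len(out) < n:
        candidate = rng.random((1, 3))
        if tree.predict(candidate)[0] == category:
            out.append(candidate[0])
    return np.array(out)

# On-the-fly use: recall labeled presets by class, or request new
# combinations predicted to stay within a chosen perceptual category.
print(variations("bright"))
```

In live use, the printed rules double as documentation of the mapping, and `variations` supplies category-consistent material without the performer memorizing every combination.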
Files

paper94.pdf (466.3 kB)