Published May 31, 2023 | Version v1
Conference paper · Open Access

Touch Interaction for Corpus-based Audio–Visual Synthesis

Creators

Description

Audio–visual corpus-based synthesis extends the principle of concatenative sound synthesis to the visual domain. In addition to the sound corpus (i.e. a collection of segments of recorded sound, each with a perceptual description of its sound character), the artist uses a corpus of still images with visual perceptual descriptors (colour, texture, brightness, entropy). An audio–visual musical performance is created by navigating these descriptor spaces in real time: moving through the collection of sound grains in a space of perceptual audio descriptors while, in parallel, selecting images from the visual corpus for rendering, with interactive gestural control via movement sensors. The artistic–scientific question explored here is how to control navigation through the audio and image descriptor spaces simultaneously with gesture sensors; in other words, how to link the gesture sensing to both the image descriptors and the sound descriptors in order to create a symbiotic multi-modal embodied audio–visual experience.
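To make the parallel navigation concrete, the following minimal Python sketch shows one way such a linkage could work. It assumes, purely for illustration, a 2-D gesture reading, two descriptors per corpus (pitch/brightness for sound, brightness/entropy for images), and k-d tree nearest-neighbour lookup; the descriptor choices, the random corpora, and the direct gesture-to-target mapping are assumptions of this sketch, not the system described in the paper.

    # Sketch of parallel descriptor-space navigation (illustrative
    # assumptions throughout; not the paper's implementation).
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)

    # Sound corpus: each grain described by two perceptual audio
    # descriptors, e.g. (pitch, brightness), normalised to [0, 1].
    audio_descriptors = rng.random((1000, 2))
    audio_index = cKDTree(audio_descriptors)

    # Image corpus: each still image described by two visual
    # descriptors, e.g. (brightness, entropy), normalised to [0, 1].
    visual_descriptors = rng.random((500, 2))
    visual_index = cKDTree(visual_descriptors)

    def navigate(gesture_xy):
        """Map one 2-D gesture reading to a (grain, image) pair.

        The same gesture position drives both corpora in parallel:
        it is taken as a target point in each descriptor space, and
        the nearest grain / image is selected for playback / display.
        """
        target = np.clip(np.asarray(gesture_xy, dtype=float), 0.0, 1.0)
        _, grain_id = audio_index.query(target)   # nearest sound grain
        _, image_id = visual_index.query(target)  # nearest still image
        return grain_id, image_id

    # Example: a stream of sensor readings, one lookup per control frame.
    for xy in [(0.2, 0.7), (0.25, 0.65), (0.8, 0.1)]:
        grain, image = navigate(xy)
        print(f"gesture {xy} -> grain {grain}, image {image}")

Here a single gesture point serves as the target in both descriptor spaces; how to design that linkage (direct, warped, or learned) is precisely the open question the paper raises.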

Files

nime2023_55.pdf (1.3 MB, md5:4a77b3ba088e5394d0911f8072620120)