Published June 1, 2019 | Version v1
Conference paper | Open Access

Automatic Recognition of Soundpainting for the Generation of Electronic Music Sounds

Description

This work explores a new gesture-based interaction built on automatic recognition of Soundpainting, a structured gestural language. In the proposed approach, a composer (the Soundpainter) performs Soundpainting gestures in front of a Kinect sensor; a gesture recognition system then identifies the gestures and sends them to sound generation software. The proposed method was used to stage an artistic show in which a Soundpainter improvised with six different gestures to generate a musical composition from different sounds in real time. The accuracy of the gesture recognition system was evaluated, as was the Soundpainter's user experience. In addition, a user study evaluating the proposed system in a learning context was conducted. Current results open up perspectives for the design of new artistic expressions based on automatic gesture recognition supported by the Soundpainting language.
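As a rough illustration of the kind of pipeline described above (and not the authors' implementation), the sketch below shows one way recognized gesture labels could be forwarded to a sound generator over UDP. The gesture names, host, port, and message strings are hypothetical placeholders; the paper itself is the authoritative source for the actual system.

```python
# Illustrative sketch only: forward recognized Soundpainting gesture labels
# to a (hypothetical) sound generator listening for UDP text messages.
# Gesture names, host, port, and message format are assumptions for this example.
import socket

SOUND_GENERATOR_ADDR = ("127.0.0.1", 9000)  # hypothetical listener address

# Hypothetical mapping from recognized gesture labels to sound commands.
GESTURE_TO_COMMAND = {
    "whole_group": "/group/all",
    "play": "/transport/start",
    "stop": "/transport/stop",
    "volume_up": "/volume/up",
    "volume_down": "/volume/down",
    "long_tone": "/sound/long_tone",
}

def send_gesture(label: str, sock: socket.socket) -> None:
    """Translate a recognized gesture label into a command and send it over UDP."""
    command = GESTURE_TO_COMMAND.get(label)
    if command is None:
        return  # ignore gestures the sound generator does not handle
    sock.sendto(command.encode("utf-8"), SOUND_GENERATOR_ADDR)

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # Stand-in for a stream of labels produced by the gesture recognizer.
        for recognized in ["whole_group", "play", "long_tone", "stop"]:
            send_gesture(recognized, sock)
```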

Files

nime2019_paper012.pdf (2.6 MB)
md5:5a83aa80e7a5520204dd09a26b4b3950