Published June 1, 2003 | Version v1
Conference paper · Open Access

Designing, Playing, and Performing with a Vision-based Mouth Interface

Description

The role of the face and mouth in speech production as well as non-verbal communication suggests the use of facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a headworn miniature camera and a computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experience with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
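The core idea of the abstract is mapping a continuous mouth-shape parameter onto a 7-bit MIDI control-change value. A minimal sketch of that mapping step, assuming a normalized mouth-opening value already extracted by a vision tracker (the function name and the 0.0–1.0 normalization are illustrative assumptions, not details from the paper):

```python
def openness_to_cc(openness: float) -> int:
    """Map a normalized mouth-opening parameter (assumed 0.0-1.0) to a
    7-bit MIDI control-change value (0-127), clamping out-of-range input.
    Hypothetical helper; the paper's actual mapping may differ."""
    clamped = max(0.0, min(1.0, openness))
    return int(round(clamped * 127))

# A closed mouth yields CC value 0; a fully open mouth yields 127.
print(openness_to_cc(0.0))  # 0
print(openness_to_cc(1.0))  # 127
```

In a live setup, each frame's value would be sent as a MIDI control-change message to a synthesizer; clamping keeps noisy tracker output inside the legal MIDI range.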

Files

nime2003_116.pdf (385.6 kB)
md5:08e594f0eaef4e583ca899c05ebfa671