A NIME Reader: 2003: Designing, Playing, and Performing with a Vision-Based Mouth Interface
Lyons, Michael J.; Hähnel, Michael; Tetsutani, Nobuji
2017-03-07
The role of the face and mouth in speech production, as well as in non-verbal communication, suggests using facial action to control musical sound. Here we document work on the Mouthesizer, a system which uses a head-worn miniature camera and a computer vision algorithm to extract shape parameters from the mouth opening and output these as MIDI control changes. We report our experiences with various gesture-to-sound mappings and musical applications, and describe a live performance which used the Mouthesizer interface.
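The abstract describes the output stage as MIDI control changes driven by mouth-shape parameters. A minimal sketch of such a mapping is shown below; the controller number, channel, and the assumption of a normalized openness parameter in [0, 1] are illustrative choices, not details taken from the paper.

```python
# Hedged sketch: turn a normalized mouth-opening parameter (0.0-1.0),
# as a vision stage might report it, into a raw 3-byte MIDI Control
# Change message. Controller and channel defaults are assumptions.

def mouth_to_cc(openness: float, controller: int = 1, channel: int = 0) -> bytes:
    """Clamp the shape parameter and pack it as a MIDI CC message."""
    openness = min(max(openness, 0.0), 1.0)   # keep within [0, 1]
    value = round(openness * 127)             # scale to 7-bit MIDI range
    status = 0xB0 | (channel & 0x0F)          # 0xB0 = Control Change status
    return bytes([status, controller & 0x7F, value])

# Example: a fully open mouth mapped to controller 1 (mod wheel)
msg = mouth_to_cc(1.0)
```

The resulting bytes could then be written to any MIDI output port; the paper itself does not specify this particular parameterization.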