
Multimodal affect recognition in learning environments



Association for Computing Machinery — Nov 6, 2005


References (16)

Datasource
Association for Computing Machinery
Copyright
Copyright © 2005 by ACM Inc.
ISBN
1-59593-044-2
doi
10.1145/1101149.1101300
Publisher site
See Article on Publisher Site

Abstract

Multimodal Affect Recognition in Learning Environments

Ashish Kapoor and Rosalind W. Picard
MIT Media Laboratory, Cambridge, MA 02139, USA
{kapoor, picard}@media.mit.edu

ABSTRACT
We propose a multi-sensor affect recognition system and evaluate it on the challenging task of classifying interest (or disinterest) in children trying to solve an educational puzzle on the computer. The multimodal sensory information from facial expressions and postural shifts of the learner is combined with information about the learner's activity on the computer. We propose a unified approach, based on a mixture of Gaussian Processes, for achieving sensor fusion under the problematic conditions of missing channels and noisy labels. This approach generates separate class labels corresponding to each individual modality. The final classification is based upon a hidden random variable, which probabilistically combines the sensors. The multimodal Gaussian Process approach achieves accuracy of over 86%, significantly outperforming classification using the individual modalities, and several other combination schemes.

Categories and Subject Descriptors
I.5 [Computing Methodologies]: Pattern Recognition; I.4.9 [Image Processing and Computer Vision]: Applications; J.4 [Computer Applications]: Social and Behavioral Sciences

General Terms
Algorithms, Design, Human Factors, Performance

This work tackles a number of challenging issues. While most of the
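The fusion rule described in the abstract — a hidden random variable that probabilistically combines per-modality classifiers, tolerating missing channels — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual mixture-of-Gaussian-Processes model: the function name, the fixed selector prior, and the two-class posteriors are assumptions made for the example.

```python
import numpy as np

def fuse_modalities(posteriors, weights):
    """Combine per-modality class posteriors via a hidden selector variable.

    posteriors: dict mapping modality name -> class-probability array,
                or None when that sensor channel is missing.
    weights:    dict mapping modality name -> prior P(selector = modality).
    Returns p(y) = sum over present modalities m of P(m) * p_m(y), with the
    selector prior renormalized over the channels actually observed.
    """
    present = {m: p for m, p in posteriors.items() if p is not None}
    if not present:
        raise ValueError("all channels missing")
    z = sum(weights[m] for m in present)  # renormalize over present channels
    n_classes = len(next(iter(present.values())))
    fused = np.zeros(n_classes)
    for m, p in present.items():
        fused += (weights[m] / z) * np.asarray(p, dtype=float)
    return fused

# Example: the face channel is missing; posture and on-screen activity
# channels each report a posterior over (interest, disinterest).
posts = {"face": None,
         "posture": np.array([0.7, 0.3]),
         "activity": np.array([0.6, 0.4])}
prior = {"face": 0.5, "posture": 0.3, "activity": 0.2}
print(fuse_modalities(posts, prior))  # -> [0.66 0.34]
```

In the paper itself each channel's posterior would come from a Gaussian Process classifier and the combination is learned rather than fixed; the sketch only shows the probabilistic-mixture shape of the final decision.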
