Automatic detection of learner’s affect from conversational cues

Publisher
Springer Journals
Copyright
Copyright © 2007 by Springer Science+Business Media B.V.
Subject
Computer Science; User Interfaces and Human Computer Interaction; Multimedia Information Systems; Management of Computing and Information Systems
ISSN
0924-1868
eISSN
1573-1391
DOI
10.1007/s11257-007-9037-6

Abstract

We explored the reliability of detecting a learner’s affect from conversational features extracted from interactions with AutoTutor, an intelligent tutoring system (ITS) that helps students learn by holding a conversation in natural language. Training data were collected in a learning session with AutoTutor, after which the affective states of the learner were rated by the learner, a peer, and two trained judges. Inter-rater reliability scores indicated that the classifications of the trained judges were more reliable than those of the novice judges. Seven data sets that temporally integrated the affective judgments with the dialogue features of each learner were constructed. The first four data sets corresponded to the judgments of the learner, a peer, and two trained judges, while the remaining three data sets combined the judgments of two or more raters. Multiple regression analyses confirmed the hypothesis that dialogue features could significantly predict the affective states of boredom, confusion, flow, and frustration. Machine learning experiments indicated that standard classifiers were moderately successful in discriminating the affective states of boredom, confusion, flow, frustration, and neutral, yielding a peak accuracy of 42% with neutral (chance = 20%) and 54% without neutral (chance = 25%). Individual detections of boredom, confusion, flow, and frustration, when contrasted with neutral affect, had maximum accuracies of 69%, 68%, 71%, and 78%, respectively (chance = 50%). The classifiers that operated on the emotion judgments of the trained judges and the combined models outperformed those based on the judgments of the novices (i.e., the self and peer). Follow-up classification analyses that assessed the degree to which machine-generated affect labels agreed with affect judgments provided by humans revealed that human-machine agreement was on par with that of the novice judges (self and peer) but quantitatively lower than that of the trained judges. We discuss the prospects of extending AutoTutor into an affect-sensing ITS.
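
As a rough illustration of the kind of classification experiment the abstract describes (not the authors' actual pipeline), the Python sketch below maps per-turn dialogue features onto affect labels with a standard classifier and scores it by cross-validation against chance. The feature names, the synthetic data, and the choice of RandomForestClassifier are assumptions made for illustration only; the paper uses its own feature set, human-provided affect judgments, and a range of standard classifiers.

```python
# Minimal sketch, assuming hypothetical dialogue features and placeholder
# labels; not the implementation reported in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_turns = 500

# Hypothetical per-turn dialogue features: response time (s), assessed answer
# quality, tutor feedback valence, and turn index within the session.
X = np.column_stack([
    rng.exponential(scale=20.0, size=n_turns),  # response time
    rng.uniform(0.0, 1.0, size=n_turns),        # answer quality score
    rng.integers(-1, 2, size=n_turns),          # tutor feedback (-1, 0, +1)
    rng.integers(1, 40, size=n_turns),          # turn index
])

# Affect labels would come from human judges; random placeholders here.
y = rng.choice(["boredom", "confusion", "flow", "frustration", "neutral"],
               size=n_turns)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)

# With five classes, chance is 20%; the paper reports a peak of about 42%.
print(f"mean 10-fold accuracy: {scores.mean():.2f} (chance = 0.20)")
```

On purely random labels the sketch should hover near chance; its purpose is only to show the shape of the feature-to-label setup, not to reproduce the reported results.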

Journal

User Modeling and User-Adapted Interaction (Springer Journals)

Published: Dec 11, 2007
