Learning to Control Listening-Oriented Dialogue Using Partially Observable Markov Decision Processes

Publisher
Association for Computing Machinery
Copyright
Copyright © 2013 by ACM Inc.
ISSN
1550-4875
DOI
10.1145/2513145

Abstract

TOYOMI MEGURO, YASUHIRO MINAMI, RYUICHIRO HIGASHINAKA, and KOHJI DOHSAKA, NTT Corporation

Our aim is to build listening agents that attentively listen to their users and satisfy their desire to speak and have themselves heard. This article investigates how to automatically create a dialogue control component of such a listening agent. We collected a large number of listening-oriented dialogues with their user satisfaction ratings and used them to create a dialogue control component that satisfies users by means of Partially Observable Markov Decision Processes (POMDPs). Using a hybrid dialogue controller in which high-level dialogue acts are chosen by a statistical policy and low-level slot values are populated by a wizard, we evaluated our dialogue control method in a Wizard-of-Oz experiment. The experimental results show that our POMDP-based method achieves significantly higher user satisfaction than other stochastic models, confirming the validity of our approach. This article is the first to verify, with human users, the usefulness of POMDP-based dialogue control for improving user satisfaction in non-task-oriented dialogue systems.

Categories and Subject Descriptors: I.2.1 [Artificial Intelligence]: Applications and Expert Systems

General Terms: Languages, Human Factors, Performance
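For readers outside the dialogue-systems field, the sketch below illustrates the generic POMDP machinery the abstract refers to: a belief distribution over hidden user states is updated after each observation, and a policy maps the current belief to a high-level listener dialogue act. The state, act, and observation labels, the randomly generated model parameters, and the one-step greedy policy are all placeholders for illustration only; they are not the authors' annotation scheme or trained policy, which in the article are estimated from the collected dialogues and their satisfaction ratings.

import numpy as np

# Hypothetical labels; the actual state/act inventories come from the
# paper's dialogue-act scheme and are not reproduced here.
STATES = ["wants_to_speak", "wants_acknowledgement", "wants_question"]
ACTS = ["back-channel", "self-disclosure", "question"]
OBS = ["long_utterance", "short_utterance", "silence"]

S, A, O = len(STATES), len(ACTS), len(OBS)
rng = np.random.default_rng(0)

# Placeholder model parameters (each row is a normalized distribution);
# in the article these would be learned from annotated dialogue data.
T = rng.dirichlet(np.ones(S), size=(A, S))   # T[a, s, s'] = P(s' | s, a)
Z = rng.dirichlet(np.ones(O), size=(A, S))   # Z[a, s', o] = P(o | s', a)
R = rng.normal(size=(A, S))                  # R[a, s]  = expected reward

def belief_update(b, a, o):
    """Standard POMDP belief update: b'(s') ∝ P(o | s', a) * sum_s P(s' | s, a) b(s)."""
    b_new = Z[a, :, o] * (b @ T[a])
    return b_new / b_new.sum()

def greedy_act(b):
    """One-step lookahead on expected immediate reward; a stand-in for the
    learned policy, which would optimize long-term user satisfaction."""
    return int(np.argmax(R @ b))

# Toy interaction loop starting from a uniform belief over user states.
b = np.full(S, 1.0 / S)
for o in [0, 1, 0]:                          # dummy observation indices
    a = greedy_act(b)
    b = belief_update(b, a, o)
    print(ACTS[a], np.round(b, 3))

In the hybrid controller described in the abstract, a policy of this kind would select only the high-level listener act; the low-level slot values needed to realize that act as an utterance are supplied by a human wizard during the Wizard-of-Oz evaluation.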

Journal

ACM Transactions on Speech and Language Processing (TSLP), Association for Computing Machinery

Published: Dec 1, 2013
