Learning to ground in spoken dialogue systems

Abstract: Machine learning methods such as reinforcement learning applied to dialogue strategy optimization have become a leading research subject since the mid-1990s. Indeed, the great variability of factors to take into account makes the design of a spoken dialogue system a tailoring task, and reusing previous work is very difficult. Yet techniques such as reinforcement learning are very demanding in training data, while obtaining a substantial amount of data in the particular case of spoken dialogue is time-consuming and therefore expensive. In order to expand existing data sets, dialogue simulation techniques are becoming a standard solution. In this paper, we present a user model for realistic spoken dialogue simulation and a method for using this model to simulate the grounding process. This allows grounding subdialogues to be included as actions in the reinforcement learning process and an adapted strategy to be learned.
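As a purely illustrative aside (not part of the HAL record, and not the paper's actual model), the sketch below shows one way grounding subdialogues can be treated as ordinary actions in a reinforcement-learning loop run against a simulated user: an explicit-confirmation action repairs possibly misrecognized slot values at a small turn cost. The state encoding, action names, error rate, and reward values are all assumptions made for the example.

```python
# Toy Q-learning sketch: grounding (explicit confirmation) as a dialogue action,
# trained against a simulated user with a noisy recognition channel.
# Everything here (states, rewards, error rate) is an illustrative assumption.
import random
from collections import defaultdict

ACTIONS = ["ask_slot", "confirm_slot", "close"]   # "confirm_slot" = grounding subdialogue
ASR_ERROR_RATE = 0.3                              # assumed recognition error probability

def simulate_turn(state, action):
    """Simulated user + channel: returns (next_state, reward, done).
    State is (slot_filled, slot_grounded)."""
    filled, grounded = state
    if action == "ask_slot":
        # User answers; with some probability the value is misrecognized (not grounded).
        return (True, random.random() > ASR_ERROR_RATE), -1, False
    if action == "confirm_slot":
        if not filled:
            return state, -1, False               # nothing to confirm yet
        return (True, True), -1, False            # grounding repairs the value
    # action == "close": task succeeds only if the slot is filled and grounded
    reward = 20 if (filled and grounded) else -20
    return state, reward, True

def q_learning(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = (False, False), False
        while not done:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = simulate_turn(state, action)
            best_next = 0.0 if done else max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = nxt
    return Q

if __name__ == "__main__":
    Q = q_learning()
    for state in [(False, False), (True, False), (True, True)]:
        best = max(ACTIONS, key=lambda a: Q[(state, a)])
        print(state, "->", best)
```

Under these assumptions the learned policy asks for the slot first, inserts a confirmation subdialogue when the value may be misrecognized, and only then closes, which is the kind of strategy adaptation the abstract refers to.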
Document type:
Conference paper
2007 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr 2007, Honolulu, HI, United States. 4 (IV), pp. 165-168, 2007. DOI: 10.1109/ICASSP.2007.367189

https://hal-supelec.archives-ouvertes.fr/hal-00213410
Contributor: Sébastien Van Luchene
Submitted on: Tuesday, February 12, 2008 - 15:06:49
Last modified on: Thursday, March 29, 2018 - 11:06:04
Document(s) archived on: Thursday, April 15, 2010 - 12:38:37

File

Supelec246.pdf
Publisher files allowed on an open archive

Citation

Olivier Pietquin. Learning to ground in spoken dialogue systems. 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr 2007, Honolulu, HI, United States. 4 (IV), pp. 165-168, 2007. DOI: 10.1109/ICASSP.2007.367189. 〈hal-00213410〉

Metrics

Record views: 119
File downloads: 81