Abstract: Machine learning methods such as reinforcement learning applied to dialogue strategy optimization have become a leading subject of research since the mid-1990s. Indeed, the great variability of factors to take into account makes the design of a spoken dialogue system a tailoring task, and reusability of previous work is very difficult. Yet, techniques such as reinforcement learning are very demanding in training data, while obtaining a substantial amount of data in the particular case of spoken dialogues is time-consuming and therefore expensive. In order to expand existing data sets, dialogue simulation techniques are becoming a standard solution. In this paper, we present a user model for realistic spoken dialogue simulation and a method for using this model so as to simulate the grounding process. This allows including grounding subdialogues as actions in the reinforcement learning process and learning an adapted strategy.
https://hal-supelec.archives-ouvertes.fr/hal-00213410
Contributor: Sébastien Van Luchene
Submitted on: Tuesday, February 12, 2008 - 15:06:49
Last modified on: Thursday, March 29, 2018 - 11:06:04
Document(s) archived on: Thursday, April 15, 2010 - 12:38:37
Olivier Pietquin. Learning to ground in spoken dialogue systems. 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr 2007, Honolulu, HI, United States. 4 (IV), pp. 165-168, 2007. DOI: 10.1109/ICASSP.2007.367189. ⟨hal-00213410⟩