Conference papers

Learning to ground in spoken dialogue systems

Abstract: Machine learning methods such as reinforcement learning applied to dialogue strategy optimization have become a leading research subject since the mid-1990s. Indeed, the great variability of factors to take into account makes the design of a spoken dialogue system a tailoring task, and reusing previous work is very difficult. Yet techniques such as reinforcement learning are very demanding in training data, while obtaining a substantial amount of data in the particular case of spoken dialogue is time-consuming and therefore expensive. To expand existing data sets, dialogue simulation techniques are becoming a standard solution. In this paper, we present a user model for realistic spoken dialogue simulation and a method for using this model to simulate the grounding process. This allows grounding subdialogues to be included as actions in the reinforcement learning process, and an adapted strategy to be learned.
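
As a rough illustration of the idea summarized in the abstract, and not the paper's actual implementation, the sketch below shows a tabular Q-learning dialogue manager whose action set includes a grounding subdialogue (explicit confirmation) alongside ordinary dialogue acts, trained against a simulated user with a noisy recognition channel. The toy slot-filling task, the SimulatedUser class, the action names, and the reward values are all illustrative assumptions.

import random
from collections import defaultdict

# Illustrative action set: ordinary dialogue acts plus a grounding subdialogue.
# "confirm_slot" stands in for an explicit-confirmation grounding action.
ACTIONS = ["ask_slot", "confirm_slot", "close"]

class SimulatedUser:
    """Toy user simulator: answers questions through a noisy 'ASR' channel."""
    def __init__(self, error_rate=0.3):
        self.error_rate = error_rate

    def respond(self, state, action):
        filled, grounded = state
        if action == "ask_slot":
            # Slot gets filled, but the value may be misrecognized.
            return (True, random.random() > self.error_rate)
        if action == "confirm_slot" and filled:
            # Grounding subdialogue: the user confirms or corrects the value.
            return (True, True)
        return state

def reward(state, action):
    filled, grounded = state
    if action == "close":
        return 20 if (filled and grounded) else -20  # task success vs. failure
    return -1  # small per-turn cost discourages needless subdialogues

def q_learning(episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1, max_turns=20):
    Q = defaultdict(float)
    user = SimulatedUser()
    for _ in range(episodes):
        state = (False, False)  # (slot filled, value grounded)
        for _turn in range(max_turns):
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            r = reward(state, action)
            done = action == "close"
            next_state = state if done else user.respond(state, action)
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
            state = next_state
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = q_learning()
    for state in [(False, False), (True, False), (True, True)]:
        best = max(ACTIONS, key=lambda a: Q[(state, a)])
        print(state, "->", best)

With these assumed rewards, the learned policy typically asks for the slot, uses the grounding (confirmation) action only when the value is not yet grounded, and then closes, which is the kind of adapted grounding strategy the abstract describes.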

https://hal-supelec.archives-ouvertes.fr/hal-00213410
Contributor: Sébastien van Luchene
Submitted on: Tuesday, February 12, 2008 - 3:06:49 PM
Last modification on: Thursday, March 29, 2018 - 11:06:04 AM
Long-term archiving on: Thursday, April 15, 2010 - 12:38:37 PM

File

Supelec246.pdf
Publisher files allowed on an open archive


Citation

Olivier Pietquin. Learning to ground in spoken dialogue systems. 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr 2007, Honolulu, HI, United States. pp.165-168, ⟨10.1109/ICASSP.2007.367189⟩. ⟨hal-00213410⟩


Metrics

Record views: 183
File downloads: 355