Reward Shaping for Statistical Optimisation of Dialogue Management

Layla El Asri 1 Romain Laroche 2 Olivier Pietquin 3
1 IMS - Equipe Information, Multimodalité et Signal (UMI 2958, Georgia Tech - CNRS, Metz), SUPELEC Campus Metz; Orange Labs, Issy-les-Moulineaux
3 IMS - Equipe Information, Multimodalité et Signal (UMI 2958, Georgia Tech - CNRS, Metz), SUPELEC Campus Metz
Abstract: This paper investigates the impact of reward shaping on the learning of a reinforcement-learning-based spoken dialogue system. A diffuse reward function gives a reward after each transition between two dialogue states, whereas a sparse function gives a reward only at the end of the dialogue. Reward shaping consists of learning a diffuse reward function without modifying the optimal policy relative to the sparse one. Two reward shaping methods are applied to a corpus of dialogues evaluated with numerical performance scores. Learning with these functions is compared to the sparse case, and it is shown on simulated dialogues that the policies learnt after reward shaping lead to higher performance.
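The abstract does not detail the paper's two shaping methods, but the policy-invariance property it relies on is standard potential-based reward shaping (Ng, Harada and Russell, 1999). The sketch below illustrates the idea on a toy dialogue; the potential values and state numbering are purely illustrative, not taken from the paper.

```python
def shaped_reward(sparse_reward, potential, s, s_next, gamma=1.0, terminal=False):
    # Potential-based shaping adds F(s, s') = gamma * phi(s') - phi(s)
    # to the sparse reward. Taking phi = 0 at terminal states makes the
    # shaped return telescope back to the sparse return (minus phi(s0)),
    # which is why the optimal policy is unchanged.
    phi_next = 0.0 if terminal else potential(s_next)
    return sparse_reward + gamma * phi_next - potential(s)

# Toy dialogue: states 0..3, sparse reward of 1 only on the final transition.
phi = {0: 0.0, 1: 0.3, 2: 0.6, 3: 1.0}.get
transitions = [(0, 1, 0.0, False), (1, 2, 0.0, False), (2, 3, 1.0, True)]
dense = [shaped_reward(r, phi, s, s2, terminal=t) for s, s2, r, t in transitions]
# A reward now arrives after every turn, yet the dense rewards sum to the
# same return as the original sparse signal.
```

Because the shaping term telescopes, any policy ordering under the dense rewards matches the ordering under the sparse ones; the benefit is purely a denser learning signal during training.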
Document type: Conference papers
https://hal-supelec.archives-ouvertes.fr/hal-00869809
Contributor: Sébastien van Luchene
Submitted on: Friday, October 4, 2013 - 10:57:42 AM
Last modification on: Wednesday, July 31, 2019 - 4:18:03 PM

Citation

Layla El Asri, Romain Laroche, Olivier Pietquin. Reward Shaping for Statistical Optimisation of Dialogue Management. SLSP 2013, Jul 2013, Tarragona, Spain. pp.93-101, ⟨10.1007/978-3-642-39593-2_8⟩. ⟨hal-00869809⟩
