Managing Uncertainty within the KTD Framework

Abstract : The dilemma between exploration and exploitation is an important topic in reinforcement learning (RL). Most successful approaches to this problem rely on some uncertainty information about the values estimated during learning. On the other hand, scalability is a well-known weakness of RL algorithms, and value function approximation has become a major research topic. Both problems arise in real-world applications, yet few approaches can approximate the value function while maintaining uncertainty information about the estimates, and even fewer use this information to address the exploration/exploitation dilemma. In this paper, we show how such uncertainty information can be derived from the Kalman-based Temporal Differences (KTD) framework and how it can be used.
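As an illustration of how such uncertainty information can be exploited, here is a minimal sketch (not the paper's algorithm): assuming a linear value parametrisation V_theta(s) = theta^T phi(s), a Kalman-style estimator such as KTD maintains a parameter mean theta and covariance P, so the variance of the value estimate at a state can be approximated by phi(s)^T P phi(s) and used, for example, in an optimistic action-selection rule. The function names and the beta parameter below are illustrative assumptions, not part of the paper.

import numpy as np

def value_and_uncertainty(theta, P, phi_s):
    # Estimated value and its standard deviation for feature vector phi_s,
    # given the parameter mean theta and covariance P maintained by a
    # Kalman-style estimator (linear parametrisation assumed).
    value = float(theta @ phi_s)
    variance = float(phi_s @ P @ phi_s)
    return value, float(np.sqrt(max(variance, 0.0)))

def optimistic_action(theta, P, action_features, beta=1.0):
    # Hypothetical exploration rule: pick the action whose value estimate
    # plus beta standard deviations is largest (optimism in the face of
    # uncertainty).
    scores = [v + beta * s
              for v, s in (value_and_uncertainty(theta, P, phi)
                           for phi in action_features)]
    return int(np.argmax(scores))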
Document type : Conference paper

https://hal-supelec.archives-ouvertes.fr/hal-00599636
Contributor : Sébastien van Luchene
Submitted on : Friday, June 10, 2011 - 2:21:05 PM
Last modification on : Wednesday, July 31, 2019 - 4:18:02 PM

Identifiers

  • HAL Id : hal-00599636, version 1

Citation

Matthieu Geist, Olivier Pietquin. Managing Uncertainty within the KTD Framework. Active Learning and Experimental Design workshop in conjunction with AISTATS 2010, May 2010, Sardinia, Italy. pp.157-168. ⟨hal-00599636⟩
