Reducing the dimentionality of the reward space in the Inverse Reinforcement Learning problem - Archive ouverte HAL
Conference paper, Year: 2011

Reducing the dimentionality of the reward space in the Inverse Reinforcement Learning problem

Abstract

This paper deals with the Inverse Reinforcement Learning (IRL) framework, whose purpose is to learn control policies from demonstrations by an expert. IRL infers from the demonstrations a utility function that the expert is allegedly maximizing. In this paper we map the reward space into a subset of smaller dimensionality, without loss of generality, for all Markov Decision Processes (MDPs). We then present three experimental results showing both the promise of applying this result to existing IRL methods and its shortcomings. We conclude with considerations for further research.
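The abstract's key observation is that, in an MDP, the value of a policy depends on the reward only through a linear map, so rewards confined to a low-dimensional subspace can suffice. A toy sketch of this general idea (not the paper's specific construction), assuming the common linear-feature reward parameterization r = Φw with a hypothetical feature matrix Φ:

```python
import numpy as np

np.random.seed(0)
gamma = 0.9      # discount factor
n_states = 5

# Random row-stochastic transition matrix induced by some fixed policy pi
P = np.random.rand(n_states, n_states)
P /= P.sum(axis=1, keepdims=True)

# Reward expressed in a 2-dimensional feature basis (hypothetical example)
Phi = np.random.rand(n_states, 2)
w = np.array([1.0, -0.5])
r = Phi @ w

# Policy evaluation: V = (I - gamma * P)^{-1} r, which is linear in r
V = np.linalg.solve(np.eye(n_states) - gamma * P, r)

# Equivalently V = M @ Phi @ w: only the 2-dimensional weight vector w matters,
# even though the ambient reward space has n_states dimensions
M = np.linalg.inv(np.eye(n_states) - gamma * P)
V_low = M @ Phi @ w
assert np.allclose(V, V_low)
```

Because evaluation is linear in the reward, searching over the 2-dimensional weight vector w covers every reward representable in the chosen basis, which is the kind of dimensionality reduction the paper studies.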
No file deposited

Dates and versions

hal-00660612 , version 1 (17-01-2012)

Identifiers

  • HAL Id : hal-00660612 , version 1

Cite

Edouard Klein, Matthieu Geist, Olivier Pietquin. Reducing the dimentionality of the reward space in the Inverse Reinforcement Learning problem. MLASA 2011, Dec 2011, Honolulu, United States. pp.1-4. ⟨hal-00660612⟩
232 views
0 downloads
