Learning from Demonstrations: Is It Worth Estimating a Reward Function?

Bilal Piot¹, Matthieu Geist¹, Olivier Pietquin¹
¹ IMS - Équipe Information, Multimodalité et Signal, UMI 2958 Georgia Tech - CNRS, SUPELEC, Metz Campus
Abstract: This paper provides a comparative study of Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, based on Markov Decision Processes (MDPs), that address the imitation learning problem, in which an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert's policy, whereas in the IRL framework, the agent tries to learn a reward that explains the behavior of the expert; this reward is then optimized to imitate the expert. One can wonder whether it is worth estimating such a reward, or whether estimating a policy is sufficient. This quite natural question has not really been addressed in the literature so far. We provide partial answers, from both a theoretical and an empirical point of view.
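
To make the distinction concrete, here is a minimal, purely illustrative sketch (in Python, assuming NumPy) on a toy chain MDP. Behavioral cloning stands in for the AL route (estimating the policy directly), and a crude visitation-frequency reward followed by value iteration stands in for the IRL route (estimating a reward, then optimizing it). This is not the paper's actual algorithms; the toy MDP and the reward heuristic are assumptions made for illustration only.

    import numpy as np

    n_states, n_actions, gamma = 5, 2, 0.9  # toy 1-D chain MDP

    def step(s, a):
        # deterministic chain dynamics: action 1 moves right, action 0 moves left
        return min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)

    # expert demonstrations: the expert always moves right (toward state 4)
    demos = [(s, 1) for s in range(n_states)]

    # AL route: estimate the expert policy directly
    # (here, plain behavioral cloning by per-state majority vote)
    policy_al = np.zeros(n_states, dtype=int)
    for s in range(n_states):
        actions = [a for (s_, a) in demos if s_ == s]
        policy_al[s] = max(set(actions), key=actions.count)

    # IRL route: first estimate a reward (crude heuristic, for illustration:
    # reward the states the expert's transitions visit most often) ...
    visits = np.bincount([step(s, a) for (s, a) in demos], minlength=n_states)
    reward = visits / visits.max()

    # ... then optimize that estimated reward with value iteration
    V = np.zeros(n_states)
    for _ in range(200):
        Q = np.array([[reward[step(s, a)] + gamma * V[step(s, a)]
                       for a in range(n_actions)] for s in range(n_states)])
        V = Q.max(axis=1)
    policy_irl = Q.argmax(axis=1)

    print("AL policy :", policy_al)   # learned directly from state-action pairs
    print("IRL policy:", policy_irl)  # recovered by optimizing an estimated reward

On this toy problem both routes recover the expert's "always move right" behavior; the paper's question is precisely when the detour through an estimated reward (the IRL route) pays off compared to estimating the policy directly.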
Document type: Conference papers

https://hal-supelec.archives-ouvertes.fr/hal-00869801
Contributor: Sébastien van Luchene
Submitted on: Monday, November 6, 2017 - 5:42:19 PM
Last modification on: Wednesday, July 31, 2019 - 4:18:03 PM

File: worth_estimating_reward.pdf (produced by the author(s))

Citation

Bilal Piot, Matthieu Geist, Olivier Pietquin. Learning from Demonstrations: Is It Worth Estimating a Reward Function? Joint European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD 2013), Sep 2013, Prague, Czech Republic. pp.17-32, ⟨10.1007/978-3-642-40988-2_2⟩. ⟨hal-00869801⟩
