Batch, Off-policy and Model-free Apprenticeship Learning

Edouard Klein (1, 2), Matthieu Geist (3), Olivier Pietquin (3)
1 ABC - Machine Learning and Computational Biology, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
3 IMS - Equipe Information, Multimodalité et Signal, UMI2958 - Georgia Tech - CNRS [Metz], SUPELEC-Campus Metz
Abstract: This paper addresses the problem of apprenticeship learning, that is, learning control policies from demonstrations by an expert. An efficient framework for this is inverse reinforcement learning (IRL). Based on the assumption that the expert maximizes a utility function, IRL aims at learning the underlying reward from example trajectories. Many IRL algorithms assume that the reward function is linearly parameterized and rely on the computation of the associated feature expectations, usually done through Monte Carlo simulation. However, this requires full trajectories for the expert policy as well as at least a generative model for intermediate policies. In this paper, we introduce a temporal difference method, namely LSTD-mu, to compute these feature expectations. This extends apprenticeship learning to a batch and off-policy setting.
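The abstract only sketches the method, so the following is a minimal, hypothetical illustration of the underlying idea, not the authors' implementation, and it omits the off-policy machinery described in the paper. It rests on the observation that each component of the feature expectation obeys a Bellman equation in which the corresponding reward feature acts as a pseudo-reward, so it can be estimated from a fixed batch of transitions with a standard LSTD solver. All names (lstd_mu, psi, phi, the transition format) are placeholders, not API from the paper.

```python
import numpy as np

def lstd_mu(transitions, psi, phi, gamma=0.99, reg=1e-3):
    """Sketch: estimate feature expectations mu^pi from batch data.

    transitions: iterable of (s, s_next, done) collected under the policy
                 being evaluated (no model or simulator required).
    psi: state -> np.ndarray of dim p, features used to represent mu.
    phi: state -> np.ndarray of dim k, reward features; each component
         plays the role of a "reward" in a standard LSTD system.
    Returns Theta of shape (p, k) such that mu^pi(s) ~= Theta.T @ psi(s).
    """
    transitions = list(transitions)
    p = psi(transitions[0][0]).shape[0]
    k = phi(transitions[0][0]).shape[0]
    A = reg * np.eye(p)              # regularized LSTD matrix
    B = np.zeros((p, k))             # one pseudo-reward column per phi component
    for s, s_next, done in transitions:
        ps, ps_next = psi(s), psi(s_next)
        A += np.outer(ps, ps - (0.0 if done else gamma) * ps_next)
        B += np.outer(ps, phi(s))
    # Each column of the solution is the LSTD weight vector for one phi_i.
    return np.linalg.solve(A, B)
```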
Document type: Conference papers

https://hal-supelec.archives-ouvertes.fr/hal-00660623

Identifiers

  • HAL Id : hal-00660623, version 1

Citation

Edouard Klein, Matthieu Geist, Olivier Pietquin. Batch, Off-policy and Model-free Apprenticeship Learning. EWRL 2011, Sep 2011, Athens, Greece. pp.1-12. ⟨hal-00660623⟩
