
A Dantzig Selector Approach to Temporal Difference Learning

Matthieu Geist 1 Bruno Scherrer 2 Alessandro Lazaric 3 Mohammad Ghavamzadeh 3
2 MAIA - Autonomous Intelligent Machine
Inria Nancy - Grand Est, LORIA - AIS - Department of Complex Systems, Artificial Intelligence & Robotics
3 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, Inria Lille - Nord Europe, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal
Abstract : LSTD is one of the most popular reinforcement learning algorithms for value function approximation. Whenever the number of features is larger than the number of samples, LSTD must be paired with some form of regularization. In particular, L1-regularization methods tend to perform feature selection by promoting sparsity, which makes them particularly suited to high-dimensional problems. Nonetheless, since LSTD is not a simple regression algorithm but rather solves a fixed-point problem, its integration with L1-regularization is not straightforward and may come with drawbacks (see, e.g., the P-matrix assumption for LASSO-TD). In this paper we introduce a novel algorithm obtained by integrating LSTD with the Dantzig Selector. In particular, we investigate the performance of the algorithm and its relationship with existing regularized approaches, showing how it overcomes some of the drawbacks of existing solutions.
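To make the idea concrete, here is a minimal sketch (not the authors' reference implementation) of how a Dantzig-Selector variant of LSTD can be posed. LSTD seeks θ solving Aθ = b with A = Φᵀ(Φ − γΦ′)/n and b = Φᵀr/n; a Dantzig-Selector version instead minimizes ‖θ‖₁ subject to ‖Aθ − b‖∞ ≤ λ, which is a linear program. The function name `dantzig_lstd` and the LP encoding below are illustrative assumptions, using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_lstd(phi, phi_next, rewards, gamma=0.95, lam=0.1):
    """Hypothetical sketch of a Dantzig-Selector-style LSTD.

    Solves  min ||theta||_1  s.t.  ||A theta - b||_inf <= lam,
    where A, b are the usual LSTD matrix and vector, via an LP
    with variables x = [theta (free); t (>= 0)] and |theta_i| <= t_i.
    """
    n, d = phi.shape
    A = phi.T @ (phi - gamma * phi_next) / n   # LSTD matrix
    b = phi.T @ rewards / n                    # LSTD vector

    c = np.concatenate([np.zeros(d), np.ones(d)])  # minimize sum(t)
    I, Z = np.eye(d), np.zeros((d, d))
    A_ub = np.vstack([
        np.hstack([ A, Z]),    #   A theta - b  <= lam
        np.hstack([-A, Z]),    # -(A theta - b) <= lam
        np.hstack([ I, -I]),   #   theta <= t
        np.hstack([-I, -I]),   #  -theta <= t
    ])
    b_ub = np.concatenate([b + lam, lam - b, np.zeros(2 * d)])
    bounds = [(None, None)] * d + [(0, None)] * d  # theta free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:d]
```

Unlike LASSO-TD, which requires a P-matrix-type condition for its fixed point to be well defined, this LP is feasible whenever Aθ = b has a solution (the residual is then zero, trivially within λ), which is one intuition for why the Dantzig Selector formulation sidesteps that assumption.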
Document type :
Conference papers
Contributor : Sébastien van Luchene
Submitted on : Wednesday, November 7, 2012 - 3:57:28 PM
Last modification on : Saturday, December 18, 2021 - 3:03:50 AM


  • HAL Id : hal-00749480, version 1


Matthieu Geist, Bruno Scherrer, Alessandro Lazaric, Mohammad Ghavamzadeh. A Dantzig Selector Approach to Temporal Difference Learning. ICML-12, Jun 2012, Edinburgh, United Kingdom. pp.1399-1406. ⟨hal-00749480⟩
