Lipschitz bandits: Regret lower bounds and optimal algorithms

Abstract: We consider stochastic multi-armed bandit problems where the expected reward is a Lipschitz function of the arm, and where the set of arms is either discrete or continuous. For discrete Lipschitz bandits, we derive asymptotic problem-specific lower bounds on the regret satisfied by any algorithm, and propose OSLB and CKL-UCB, two algorithms that efficiently exploit the Lipschitz structure of the problem. In fact, we prove that OSLB is asymptotically optimal, as its asymptotic regret matches the lower bound. The regret analysis of our algorithms relies on a new concentration inequality for weighted sums of KL divergences between the empirical distributions of rewards and their true distributions. For continuous Lipschitz bandits, we propose to first discretize the action space and then apply OSLB or CKL-UCB, which provably exploit the structure efficiently. This approach is shown, through numerical experiments, to significantly outperform existing algorithms that directly deal with the continuous set of arms. Finally, the results and algorithms are extended to contextual bandits with similarities.
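To make the discretize-then-index approach concrete, below is a minimal Python sketch that discretizes the continuous arm space [0, 1] into K arms and runs a standard KL-UCB index policy on the resulting discrete bandit. Everything here is illustrative: the helper names (kl_bernoulli, kl_ucb_index, discretized_kl_ucb) and the example reward function are assumptions, and plain KL-UCB is used in place of the paper's OSLB and CKL-UCB, which additionally exploit the Lipschitz structure across neighboring arms.

    import numpy as np

    def kl_bernoulli(p, q, eps=1e-12):
        # KL divergence between Bernoulli(p) and Bernoulli(q), clipped away from {0, 1}.
        p = min(max(p, eps), 1 - eps)
        q = min(max(q, eps), 1 - eps)
        return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

    def kl_ucb_index(mean, n, t):
        # Largest q >= mean with n * KL(mean, q) <= log(t), found by bisection.
        if n == 0:
            return 1.0
        level = np.log(t) / n
        lo, hi = mean, 1.0
        for _ in range(30):
            mid = (lo + hi) / 2
            if kl_bernoulli(mean, mid) <= level:
                lo = mid
            else:
                hi = mid
        return lo

    def discretized_kl_ucb(reward_fn, K, horizon, rng):
        # Discretize [0, 1] into K arms, then run vanilla KL-UCB on the
        # discrete arms; rewards are Bernoulli with mean reward_fn(x).
        arms = np.linspace(0.0, 1.0, K)
        counts = np.zeros(K, dtype=int)
        sums = np.zeros(K)
        for t in range(1, horizon + 1):
            if t <= K:  # initialization: play each arm once
                a = t - 1
            else:
                indices = [kl_ucb_index(sums[k] / counts[k], counts[k], t)
                           for k in range(K)]
                a = int(np.argmax(indices))
            reward = float(rng.random() < reward_fn(arms[a]))
            counts[a] += 1
            sums[a] += reward
        return sums.sum()

    # Example: a 1.2-Lipschitz mean-reward function on [0, 1].
    rng = np.random.default_rng(0)
    total = discretized_kl_ucb(lambda x: 0.5 + 0.4 * np.sin(3.0 * x),
                               K=20, horizon=5000, rng=rng)
    print("cumulative reward:", total)

The choice of K trades off discretization error against the cost of exploring more arms; per the abstract, running structure-aware algorithms on the discretized set was found in the paper's experiments to outperform algorithms operating directly on the continuous arm set.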
Document type:
Conference paper
COLT 2014, Jun 2014, Barcelona, Spain. Proceedings of the 27th Annual Conference on Learning Theory

https://hal-supelec.archives-ouvertes.fr/hal-01092791
Contributor: Catherine Magnet
Submitted on: Tuesday, December 9, 2014 - 14:50:44
Last modified on: Thursday, March 29, 2018 - 11:06:05

Identifiers

  • HAL Id : hal-01092791, version 1

Citation

Stefan Magureanu, Richard Combes, Alexandre Proutière. Lipschitz bandits: Regret lower bounds and optimal algorithms. Proceedings of the 27th Annual Conference on Learning Theory (COLT 2014), June 2014, Barcelona, Spain. 〈hal-01092791〉
