Journal article in Machine Learning, 2015

Soft-max boosting

Matthieu Geist

Abstract

The standard multi-class classification risk, based on the binary (0-1) loss, is rarely minimized directly, because of (i) its lack of convexity and (ii) its lack of smoothness (and even of continuity). The classic approach is to minimize a convex surrogate instead. In this paper, we propose to replace the usual deterministic decision rule by a stochastic one, which yields a smooth risk (generalizing the expected binary loss and, more generally, the cost-sensitive loss). In practice, this (empirical) risk is minimized by gradient descent in the function space linearly spanned by a base learner (a.k.a. boosting). We provide a convergence analysis of the resulting algorithm and evaluate it on a range of synthetic and real-world data sets (with noiseless and noisy domains, compared to convex and non-convex boosters).
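To illustrate the general idea (this is not the paper's exact algorithm, nor its convergence analysis), the sketch below shows in Python how a stochastic soft-max decision rule turns class scores into a smooth, differentiable cost-sensitive risk, and how that risk can be minimized by functional gradient descent with regression trees as an assumed base learner. The function names (softmax_rule, smooth_risk, boost), the 0-1 cost matrix, the tree depth and the step size are illustrative assumptions, not details taken from the article.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def softmax_rule(scores):
    # Stochastic decision rule: class k is picked with probability
    # proportional to exp(score_k), which is smooth in the scores.
    z = scores - scores.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def smooth_risk(scores, y, cost):
    # Expected cost of the stochastic rule; with the 0-1 cost matrix
    # this is 1 - P(predicting the true class), a smooth surrogate of
    # the binary-loss risk.
    p = softmax_rule(scores)
    return float(np.mean((p * cost[y]).sum(axis=1)))

def boost(X, y, n_classes, n_rounds=100, step=0.1, depth=3):
    # Gradient descent in the function space spanned by regression
    # trees (assumed base learner), one score function per class.
    n = X.shape[0]
    scores = np.zeros((n, n_classes))
    cost = 1.0 - np.eye(n_classes)         # binary (0-1) cost, illustrative
    ensemble = []
    for _ in range(n_rounds):
        p = softmax_rule(scores)
        c = cost[y]                         # per-sample cost rows
        expected = (p * c).sum(axis=1, keepdims=True)
        grad = p * (c - expected)           # d(per-sample risk) / d(scores)
        round_trees = []
        for k in range(n_classes):
            tree = DecisionTreeRegressor(max_depth=depth)
            tree.fit(X, -grad[:, k])        # fit the negative functional gradient
            scores[:, k] += step * tree.predict(X)
            round_trees.append(tree)
        ensemble.append(round_trees)
    return ensemble

At test time one can either sample a label from softmax_rule applied to the accumulated tree scores, or simply take the arg max; the smooth risk is only needed during training.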
Main file: ml_sm_boost_rev.pdf (3 MB)
Origin: files produced by the author(s)

Dates and versions

hal-01258816, version 1 (19-01-2016)

Identifiers

HAL Id: hal-01258816
DOI: 10.1007/s10994-015-5491-2

Cite

Matthieu Geist. Soft-max boosting. Machine Learning, 2015, 100 (2), pp.305-332. ⟨10.1007/s10994-015-5491-2⟩. ⟨hal-01258816⟩