ML Lunch (Sept 22, 2013): What does AdaBoost tell us about regularization?

Published on Sep 23, 2013

Speaker: Matus Telgarsky

Abstract
Regularized ERM is the engine driving numerous machine learning techniques such as SVMs and the LASSO, and is responsible for many modern results on fast optimization, model recovery, and noise tolerance. How then is it that the popular algorithm AdaBoost not only corresponds to an unregularized ERM problem, but moreover its optimization problem fails many of the basic sanity checks --- e.g., existence of minimizers --- provided by regularization?
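
As a rough illustration (my own sketch, not material from the talk), the snippet below views AdaBoost as coordinate descent on the unregularized empirical exponential loss, with axis-aligned decision stumps as the weak learners; the helper names (stump_predict, best_stump, adaboost) are hypothetical. On separable data the stump coefficients keep growing, so the objective has no finite minimizer --- the sanity-check failure the abstract points to.

```python
# Sketch only: AdaBoost as coordinate descent on the unregularized
# empirical exponential loss L(f) = (1/n) * sum_i exp(-y_i f(x_i)),
# with decision stumps as weak learners. Helper names are illustrative.
import numpy as np

def stump_predict(X, feature, threshold, sign):
    # +/-1 prediction of a single axis-aligned stump.
    return sign * np.where(X[:, feature] <= threshold, 1.0, -1.0)

def best_stump(X, y, w):
    # Weak learner: exhaustively pick the stump with smallest weighted error.
    n, d = X.shape
    best, best_err = None, np.inf
    for j in range(d):
        for t in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                err = np.sum(w * (stump_predict(X, j, t, s) != y))
                if err < best_err:
                    best_err, best = err, (j, t, s)
    return best, best_err

def adaboost(X, y, n_rounds=50):
    n = len(y)
    w = np.full(n, 1.0 / n)      # distribution over training examples
    ensemble = []                # list of (coefficient, stump)
    for _ in range(n_rounds):
        (j, t, s), err = best_stump(X, y, w)
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # exact line search for exp loss
        pred = stump_predict(X, j, t, s)
        w *= np.exp(-alpha * y * pred)          # exponential-loss reweighting
        w /= w.sum()
        ensemble.append((alpha, (j, t, s)))
    return ensemble

def predict(ensemble, X):
    scores = sum(a * stump_predict(X, j, t, s) for a, (j, t, s) in ensemble)
    return np.sign(scores)

# Tiny separable example: the classifier is learned quickly, but the
# coefficients alpha diverge as rounds increase (no finite minimizer).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
model = adaboost(X, y, n_rounds=20)
print(predict(model, X))
```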

This talk bridges this apparent rift by presenting the two theoretical guarantees substantiating the strong performance of AdaBoost --- convergence to the Bayes predictor in the general case, and fast convergence in the margin / weak-learnable case --- in a way that emphasizes the connection to regularization. In particular, in lieu of an algorithm-specified regularization parameter, the data itself constrains the behavior of the algorithm.

(Technical note: some results will focus on Lipschitz losses (e.g., the logistic loss), but the exponential loss will be discussed throughout.)
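
As a small numeric check of the technical note (again my own illustration, not from the talk): the logistic loss log(1 + exp(-z)) is 1-Lipschitz, since its derivative -1/(1 + exp(z)) is bounded by 1 in magnitude, whereas the exponential loss exp(-z) has unbounded slope as z decreases.

```python
# Compare loss slopes: logistic loss is 1-Lipschitz, exponential loss is not.
import numpy as np

z = np.linspace(-10, 10, 2001)
logistic_slope = np.abs(-1.0 / (1.0 + np.exp(z)))   # |d/dz log(1 + e^{-z})| <= 1
exp_slope = np.abs(-np.exp(-z))                     # |d/dz e^{-z}| = e^{-z}, unbounded

print("max |logistic'| on [-10, 10]:", logistic_slope.max())  # about 1
print("max |exp'|      on [-10, 10]:", exp_slope.max())       # about e^{10}
```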
