ML Lunch (Feb 17): L1 optimization beyond quadratic loss

Published on Feb 17, 2014

Title: L1 optimization beyond quadratic loss: algorithms and applications in graphical models, control, and energy systems
Speaker: Zico Kolter
School of Computer Science, CMU

Abstract
In this talk I will try to convince all the fervent believers in proximal gradient methods, ADMM, or coordinate descent that there is a better method for optimizing general (smooth) objectives with an L1 penalty: Newton coordinate descent. The method is the current state-of-the-art in tasks like sparse inverse covariance estimation, and I will highlight two examples from my group's work that use this approach to achieve substantial speedups over existing algorithms. In particular, I will discuss how we use this algorithm to learn sparse Gaussian conditional random field models (applied to energy forecasting), and to design sparse optimal control laws (applied to distributed control in a smart grid).
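
As an illustration of the idea the abstract refers to (not the speaker's own implementation), here is a minimal Python sketch of Newton coordinate descent for a smooth convex loss f with an L1 penalty: at each outer step, form the second-order (Newton) model of f, solve the L1-regularized quadratic subproblem by coordinate descent with soft-thresholding, then take a line-searched step. The function names (newton_cd_l1, soft_threshold), the dense Hessian, and the simple backtracking line search are all assumptions made for the sketch; practical solvers in this family additionally exploit problem structure, e.g. by restricting the inner coordinate descent to an active set of coordinates.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * |.| (closed-form minimizer of the 1-D L1 problem)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def newton_cd_l1(f, grad, hess, x0, lam, outer_iters=50, cd_passes=10, tol=1e-8):
    # Minimize F(x) = f(x) + lam * ||x||_1 for smooth, convex f
    # (positive-definite Hessian assumed for this sketch).
    x = x0.astype(float).copy()
    for _ in range(outer_iters):
        g, H = grad(x), hess(x)
        d = np.zeros_like(x)                    # Newton direction
        # Inner loop: coordinate descent on the quadratic model
        #   g^T d + 0.5 d^T H d + lam * ||x + d||_1
        for _ in range(cd_passes):
            for j in range(x.size):
                a = H[j, j]                     # curvature along coordinate j
                c = g[j] + H[j] @ d - a * d[j]  # model gradient at d_j = 0
                # 1-D subproblem: min_z 0.5*a*z^2 + c*z + lam*|x_j + z|
                d[j] = soft_threshold(x[j] - c / a, lam / a) - x[j]
        if np.linalg.norm(d) < tol:
            break
        # Backtracking line search on the true composite objective
        F = lambda y: f(y) + lam * np.abs(y).sum()
        alpha, F0 = 1.0, F(x)
        while F(x + alpha * d) > F0 and alpha > 1e-10:
            alpha *= 0.5
        x = x + alpha * d
    return x

# Toy usage: a lasso-type problem, f(x) = 0.5*||Ax - b||^2,
# where a single Newton step solves the model exactly.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
x_hat = newton_cd_l1(
    f=lambda x: 0.5 * np.sum((A @ x - b) ** 2),
    grad=lambda x: A.T @ (A @ x - b),
    hess=lambda x: A.T @ A,
    x0=np.zeros(10),
    lam=1.0,
)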

For more ML Lunch talks, visit http://www.cs.cmu.edu/~learning/
