Published on Feb 17, 2014
Title: L1 optimization beyond quadratic loss: algorithms and applications in graphical models, control, and energy systems
Speaker: Zico Kolter, School of Computer Science, CMU
Abstract:
In this talk I will try to convince all the fervent believers in proximal gradient methods, ADMM, or coordinate descent that there is a better method for optimizing general (smooth) objectives with an L1 penalty: Newton coordinate descent. The method is the current state of the art in tasks like sparse inverse covariance estimation, and I will highlight two examples from my group's work that use this approach to achieve substantial speedups over existing algorithms. In particular, I will discuss how we use this algorithm to learn sparse Gaussian conditional random field models (applied to energy forecasting) and to design sparse optimal control laws (applied to distributed control in a smart grid).
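As a rough illustration of the algorithm family the abstract names (not the speaker's implementation), a Newton coordinate descent / proximal Newton method for minimizing a smooth f(x) plus an L1 penalty repeats three steps: form a local quadratic model from the gradient and Hessian of f, approximately solve the L1-regularized quadratic subproblem by coordinate descent with soft-thresholding, and take a backtracked step. The sketch below applies it to L1-regularized logistic regression on synthetic data; all function and variable names are illustrative, and the small Hessian damping is an assumption added for numerical safety.

```python
import numpy as np

def soft_threshold(z, t):
    """Prox of t*|.|: shrink z toward zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def newton_cd_l1(obj, grad, hess, x0, lam, n_outer=20, n_inner=10):
    """Sketch of Newton coordinate descent for min_x f(x) + lam*||x||_1.

    Each outer iteration builds the quadratic model
        Q(d) = g'd + 0.5 d'H d + lam*||x + d||_1
    and minimizes it approximately by cyclic coordinate descent,
    then backtracks on the true objective.
    """
    x = x0.copy()
    F = lambda z: obj(z) + lam * np.abs(z).sum()
    for _ in range(n_outer):
        g = grad(x)
        H = hess(x) + 1e-8 * np.eye(len(x))  # small damping (assumption)
        d = np.zeros_like(x)
        Hd = np.zeros_like(x)  # maintain H @ d incrementally
        for _ in range(n_inner):
            for j in range(len(x)):
                # 1-D subproblem in u = x_j + d_j has a closed-form solution
                b = g[j] + Hd[j] - H[j, j] * d[j]
                u = soft_threshold(x[j] - b / H[j, j], lam / H[j, j])
                step = (u - x[j]) - d[j]
                Hd += H[:, j] * step
                d[j] += step
        # Armijo backtracking on the full nonsmooth objective
        decrease = g @ d + lam * (np.abs(x + d).sum() - np.abs(x).sum())
        t = 1.0
        while F(x + t * d) > F(x) + 0.25 * t * decrease and t > 1e-10:
            t *= 0.5
        x = x + t * d
    return x

# Demo: L1-regularized logistic regression on synthetic data
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 8))
y = np.sign(A @ rng.standard_normal(8) + 0.1 * rng.standard_normal(40))

def obj(x):
    return np.logaddexp(0.0, -y * (A @ x)).sum()

def grad(x):
    s = 1.0 / (1.0 + np.exp(y * (A @ x)))  # sigmoid(-y * Ax)
    return -(A.T @ (y * s))

def hess(x):
    s = 1.0 / (1.0 + np.exp(y * (A @ x)))
    w = s * (1.0 - s)
    return A.T @ (w[:, None] * A)

x = newton_cd_l1(obj, grad, hess, np.zeros(8), lam=2.0)
```

The inner coordinate-descent loop is cheap because each coordinate update is a scalar soft-thresholding, which is what makes this approach competitive for problems like sparse inverse covariance estimation where the subproblem structure can be exploited.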