Sanjeev Arora | Provable Bounds for Machine Learning





Published on Feb 11, 2014

Many tasks in machine learning (especially unsupervised learning) are provably intractable: NP-complete or worse. Can we change this state of affairs?

This talk will suggest that the answer is yes, and describe some of our recent work.
-A new algorithm for learning topic models that provably works under some reasonable assumptions and in practice is up to 50 times faster than existing software such as Mallet. (ICML 2013)
-A new algorithm with provable guarantees that learns a class of deep nets. We rely on the generative view of deep nets implicit in the work of Hinton and others. Our algorithm learns almost all networks in this class in polynomial time. We also show that each layer of our randomly generated neural net is a denoising autoencoder (a central object in deep learning).
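To make the denoising-autoencoder notion concrete, here is a minimal sketch in NumPy: a one-layer, tied-weight autoencoder trained to reconstruct inputs from corrupted (randomly masked) copies. This is only an illustrative toy trained by gradient descent, not the provable construction from the talk; the data, dimensions, corruption rate, and learning rate are all arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 sparse binary vectors in 20 dimensions (arbitrary choice).
n, d, h = 200, 20, 8
X = (rng.random((n, d)) < 0.2).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tied-weight denoising autoencoder: encode a corrupted input,
# decode with the transposed weights, and reconstruct the clean input.
W = rng.normal(0, 0.1, (d, h))
b_h = np.zeros(h)   # hidden (encoder) bias
b_v = np.zeros(d)   # visible (decoder) bias
lr = 0.5

def step(X):
    Xc = X * (rng.random(X.shape) > 0.3)   # mask ~30% of input entries
    H = sigmoid(Xc @ W + b_h)              # encode corrupted input
    R = sigmoid(H @ W.T + b_v)             # decode with tied weights
    err = R - X                            # compare to the *clean* input
    loss = np.mean(np.sum(err ** 2, axis=1))
    # Backprop through the squared reconstruction loss.
    dR = 2 * err * R * (1 - R) / len(X)    # grad at decoder pre-activation
    dH = (dR @ W) * H * (1 - H)            # grad at encoder pre-activation
    gW = Xc.T @ dH + dR.T @ H              # tied weights: sum both paths
    return loss, gW, dH.sum(axis=0), dR.sum(axis=0)

loss0 = None
for _ in range(300):
    loss, gW, gbh, gbv = step(X)
    if loss0 is None:
        loss0 = loss
    W -= lr * gW
    b_h -= lr * gbh
    b_v -= lr * gbv

# Reconstruction error should drop well below its initial value.
print(f"initial loss {loss0:.3f} -> final loss {loss:.3f}")
```

The denoising objective (reconstruct the clean input from a corrupted one) is what distinguishes this from a plain autoencoder; the result mentioned above says layers of a suitable randomly generated deep net have this property automatically.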

Sponsoring Department: Computer Science and Engineering (CSE)

Lecture Series: CSE Distinguished Lecture Series

Speaker & Title:
Sanjeev Arora, Charles C. Fitzmorris Professor of Computer Science, Princeton

Sanjeev Arora is the Charles C. Fitzmorris Professor of Computer Science at Princeton University. His research spans several areas of theoretical computer science. He has received the ACM-EATCS Gödel Prize (in 2001 and 2010), a Packard Fellowship (1997), the ACM Infosys Foundation Award in the Computing Sciences (2012), the Fulkerson Prize (2012), and a Simons Investigator Award (2012). He served as the founding director of the Center for Computational Intractability at Princeton.
Speaker Website: http://www.cs.princeton.edu/~arora/

For more lectures on demand, visit the MconneX website at:

