Deep Learning of Representations

Published on Dec 17, 2012

Google Tech Talk
11/13/2012

Presented by Yoshua Bengio

ABSTRACT

Yoshua Bengio will give an introduction to the area of Deep Learning, to which he has been one of the leading contributors. It is aimed at learning representations of data at multiple levels of abstraction. Current machine learning algorithms are highly dependent on feature engineering (the manual design of the representation fed as input to a learner), and it would be of high practical value to design algorithms that can perform good feature learning. The ideal features disentangle the unknown underlying factors that generated the data. It has been shown, both through theoretical arguments and empirical studies, that deep architectures can generalize better than overly shallow ones. Since a 2006 breakthrough, a variety of learning algorithms have been proposed for deep learning and feature learning, mostly based on unsupervised learning of representations, often by stacking single-level learning algorithms. Several of these algorithms are based on probabilistic models, but interesting challenges arise in handling the intractability of the likelihood itself, and alternatives to maximum likelihood have been successfully explored, including criteria based on purely geometric intuitions about manifolds and the concentration of probability mass that characterize many real-world learning tasks. Representation-learning algorithms are being applied to many tasks in computer vision, natural language processing, speech recognition and computational advertising, and have won several international machine learning competitions, in particular thanks to their capacity for transfer learning, i.e., to generalize to new settings and classes.
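As a minimal illustration of the "stacking single-level learning algorithms" idea mentioned above, the sketch below greedily trains one tied-weight autoencoder per layer, feeding each layer's learned features to the next. This is a simplified toy implementation in NumPy under assumed choices (tanh units, squared reconstruction error, plain gradient descent); the function names and hyperparameters are illustrative, not from the talk.

```python
import numpy as np

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Train a single-layer autoencoder with tied weights on squared
    reconstruction error; return the encoder parameters (W, b)."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)   # encoder bias
    c = np.zeros(n_in)       # decoder bias
    for _ in range(epochs):
        H = np.tanh(X @ W + b)        # encode
        R = H @ W.T + c               # decode with the tied (transposed) weights
        err = R - X                   # reconstruction error
        dH = (err @ W) * (1 - H**2)   # backprop through tanh
        dW = X.T @ dH + err.T @ H     # encoder + decoder contributions to W
        W -= lr * dW / len(X)
        b -= lr * dH.sum(axis=0) / len(X)
        c -= lr * err.sum(axis=0) / len(X)
    return W, b

def stack_features(X, layer_sizes):
    """Greedy layer-wise unsupervised pretraining: each autoencoder is
    trained on the representation produced by the previous one."""
    H = X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(H, n_hidden)
        H = np.tanh(H @ W + b)  # this layer's features feed the next level
    return H

X = np.random.default_rng(1).normal(size=(64, 8))
features = stack_features(X, [6, 4])
print(features.shape)  # (64, 4)
```

In practice the stacked encoders would then be fine-tuned jointly, e.g. with a supervised objective on top, which is the usual second stage of such pretraining pipelines.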

Speaker Info

PhD in CS from McGill University, Canada, 1991, in the areas of HMMs, recurrent and convolutional neural networks, and speech recognition. Post-doc 1991-1992 at MIT with Michael Jordan. Post-doc 1992-1993 at Bell Labs with Larry Jackel, Yann LeCun, Vladimir Vapnik. Professor at U. Montreal (CS & operations research) since 1993. Canada Research Chair in Statistical Learning Algorithms since 2000. Fellow of the Canadian Institute for Advanced Research since 2005. NSERC industrial chair since 2006. Co-organizer of the Learning Workshop since 1998. NIPS Program Chair in 2008, NIPS General Chair in 2009. Urgel-Archambault Prize in 2009. Fellow of CIRANO. Current or previous associate/action editor for Journal of Machine Learning Research, IEEE Transactions on Neural Networks, Foundations and Trends in Machine Learning, Computational Intelligence, and Machine Learning. Author of 2 books and over 200 scientific papers, with over 9000 Google Scholar citations as of 2011.
