MIA: David Blei, Scaling & generalizing variational inference; David Benjamin, Variational inference

Published on May 25, 2016

Models, Inference and Algorithms
Broad Institute of MIT and Harvard
Spring 2016

MIA Meeting: https://youtu.be/HOkkr4jXQVg?t=2139

David Blei
Columbia University (Computer Science, Statistics), Institute for Data Science

Scaling and Generalizing Variational Inference

Latent variable models have become a key tool for the modern statistician, letting us express complex assumptions about the hidden structures that underlie our data. They have been applied successfully in numerous fields.

The central computational problem in latent variable modeling is posterior inference: approximating the conditional distribution of the latent variables given the observations. Posterior inference is central to both exploratory and predictive tasks. Approximate posterior inference algorithms have revolutionized Bayesian statistics, revealing its potential as a usable and general-purpose language for data analysis.
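
Concretely (standard notation, not specific to the talk): for latent variables z and observations x, posterior inference means computing

    p(z \mid x) = \frac{p(x, z)}{p(x)}, \qquad p(x) = \int p(x, z)\, dz,

and the marginal p(x) in the denominator is the quantity that is intractable for most models of interest, which is why approximations are needed.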

Bayesian statistics, however, has not yet reached this potential. First, statisticians and scientists regularly encounter massive data sets, but existing approximate inference algorithms do not scale well. Second, most approximate inference algorithms are not generic; each must be adapted to the specific model at hand.

In this talk I will discuss our recent research on addressing these two limitations. I will describe stochastic variational inference, an approximate inference algorithm for handling massive data sets. I will demonstrate its application in genetics to the STRUCTURE model of Pritchard et al., 2000. Then I will discuss black box variational inference. Black box inference is a generic algorithm for approximating the posterior. We can easily apply it to many models with little model-specific derivation and few restrictions on their properties. I will demonstrate how we can use black box inference to develop new software tools for probabilistic modeling.
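
As a rough illustration of the black box idea, here is a minimal Python/NumPy sketch of the score-function (REINFORCE) gradient estimator that underlies black box variational inference. The toy model (a Gaussian mean with a Gaussian prior) and all names are ours for illustration only; the actual algorithms discussed in the talk add variance-reduction techniques (e.g., Rao-Blackwellization, control variates) that this sketch omits.

    import numpy as np

    # Toy model (hypothetical, for illustration): x ~ Normal(z, 1), prior z ~ Normal(0, 1).
    # Variational family: q(z) = Normal(mu, exp(log_sigma)^2).
    rng = np.random.default_rng(0)
    x = rng.normal(2.0, 1.0, size=50)  # synthetic data

    def log_joint(z):
        # log p(x, z) up to additive constants: the ONLY model-specific piece
        return -0.5 * z**2 - 0.5 * np.sum((x - z) ** 2)

    def log_q(z, mu, log_sigma):
        sigma = np.exp(log_sigma)
        return -0.5 * ((z - mu) / sigma) ** 2 - log_sigma

    def grad_log_q(z, mu, log_sigma):
        # score function: gradients of log q with respect to (mu, log_sigma)
        sigma = np.exp(log_sigma)
        return np.array([(z - mu) / sigma**2,
                         ((z - mu) / sigma) ** 2 - 1.0])

    mu, log_sigma = 0.0, 0.0
    lr, n_samples = 1e-3, 32
    for step in range(2000):
        zs = mu + np.exp(log_sigma) * rng.standard_normal(n_samples)
        grad = np.zeros(2)
        for z in zs:
            # Monte Carlo estimate of the ELBO gradient:
            # E_q[ grad log q(z) * (log p(x, z) - log q(z)) ]
            grad += grad_log_q(z, mu, log_sigma) * (log_joint(z) - log_q(z, mu, log_sigma))
        mu, log_sigma = mu + lr * grad[0] / n_samples, log_sigma + lr * grad[1] / n_samples

    print(mu, np.exp(log_sigma))  # should approach the analytic Gaussian posterior

The point of the sketch is that only log_joint depends on the model; the estimator itself just samples from q and evaluates log densities, which is what makes the method "black box."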

David Benjamin
Broad Institute (Data Science & Data Engineering)

Introduction to Variational Bayesian Methods

Abstract: We will explore mean-field variational methods in the context of a probabilistic graphical model from statistical physics. After a heuristic introduction we will justify the approximation more rigorously and present a general recipe for the mean-field method. Finally, we will show how the method circumvents certain pitfalls of maximum likelihood estimation.
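
For reference (standard results, not a summary of the talk), the general mean-field recipe restricts the approximating family to a fully factorized form

    q(z) = \prod_i q_i(z_i),

chooses q to maximize the evidence lower bound (ELBO)

    \log p(x) \ge \mathcal{L}(q) = \mathbb{E}_q[\log p(x, z)] - \mathbb{E}_q[\log q(z)],

and ascends the bound by coordinate updates, each factor being optimized with the others held fixed:

    q_i^*(z_i) \propto \exp\{ \mathbb{E}_{q_{-i}}[\log p(x, z)] \}.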

For more information on the Broad Institute and Models, Inference and Algorithms visit: https://www.broadinstitute.org/mia

Copyright Broad Institute, 2016. All rights reserved.
