MIA: Matt Johnson, Composing graphical models with neural networks; Scott Linderman

Published on Apr 20, 2017

April 12, 2017

MIA Meeting: https://youtu.be/5RA-TMwdpbw?t=3435

Matt Johnson
Google Brain

Composing graphical models with neural networks for structured representations and fast inference

Abstract: I'll describe a new modeling and inference framework that combines the flexibility of deep learning with the structured representations of probabilistic graphical models. The model family augments latent graphical model structure, like switching linear dynamical systems, with neural network observation likelihoods. To enable fast inference, we show how to leverage graph-structured approximating distributions and, building on variational autoencoders, fit recognition networks that learn to approximate difficult graph potentials with conjugate ones. I'll show how these methods can be applied to learn how to parse mouse behavior from depth video.
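The generative model the abstract describes — discrete switching states driving linear latent dynamics, with a neural network mapping latents to observations — can be sketched in a few lines. This is a minimal, illustrative numpy sampler, not the paper's code: all dimensions, parameter values, and the one-hidden-layer decoder are made-up stand-ins.

```python
import numpy as np

def sample_slds_nn(T=100, K=2, D=2, N=5, seed=0):
    """Sample from a toy switching linear dynamical system whose
    observations pass through a small neural-network decoder.
    Illustrative only: sizes and parameters are arbitrary."""
    rng = np.random.default_rng(seed)
    # Sticky transition matrix over discrete modes, so modes persist
    P = np.full((K, K), 0.05 / (K - 1))
    np.fill_diagonal(P, 0.95)
    # Per-mode linear dynamics, rescaled to be stable
    As = rng.normal(size=(K, D, D))
    As = [0.9 * A / np.max(np.abs(np.linalg.eigvals(A))) for A in As]
    # One-hidden-layer "decoder" standing in for the NN likelihood
    W1, b1 = rng.normal(size=(16, D)), rng.normal(size=16)
    W2, b2 = rng.normal(size=(N, 16)), rng.normal(size=N)
    decode = lambda x: W2 @ np.tanh(W1 @ x + b1) + b2

    z = np.zeros(T, dtype=int)   # discrete switching states
    x = np.zeros((T, D))         # continuous latent states
    y = np.zeros((T, N))         # observations
    x[0] = rng.normal(size=D)
    for t in range(T):
        if t > 0:
            z[t] = rng.choice(K, p=P[z[t - 1]])
            x[t] = As[z[t]] @ x[t - 1] + 0.1 * rng.normal(size=D)
        y[t] = decode(x[t]) + 0.1 * rng.normal(size=N)
    return z, x, y
```

The inference challenge the talk addresses is the reverse direction: the nonlinear decoder breaks conjugacy, so a recognition network is trained to output conjugate (Gaussian) potentials on x, after which standard message passing on the graphical-model backbone applies.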

Scott Linderman
Columbia, Blei Lab

Primer: Bayesian time series modeling with recurrent switching linear dynamical systems

Abstract: Many natural systems like neurons firing in the brain or basketball teams traversing a court give rise to time series data with complex, nonlinear dynamics. We gain insight into these systems by decomposing the data into segments that are each explained by simpler dynamical units. Bayesian time series models provide a flexible framework for accomplishing this task. This primer will start with the basics, introducing linear dynamical systems and their switching variants. With this background in place, I will introduce a new model class called recurrent switching linear dynamical systems (rSLDS), which discover distinct dynamical units as well as the input- and state-dependent manner in which units transition from one to another. In practice, this leads to models that generate much more realistic data than standard SLDS. Our key innovation is to design these recurrent SLDS models to enable recent Pólya-gamma auxiliary variable techniques and thus make approximate Bayesian learning and inference in these models easy, fast, and scalable.
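The "recurrent" part of rSLDS is that the discrete-state transition distribution depends on the previous continuous state, parameterized through logistic stick-breaking so that Pólya-gamma augmentation applies. Below is a minimal numpy sketch of that transition mechanism under assumed shapes; the names `R`, `r`, and `log_Pi` are illustrative placeholders, not the paper's notation or code.

```python
import numpy as np

def stick_breaking_probs(logits):
    """Map K-1 real logits to K probabilities via logistic stick-breaking,
    the parameterization that makes Polya-gamma augmentation tractable."""
    v = 1.0 / (1.0 + np.exp(-logits))  # logistic of each logit
    probs = np.zeros(len(logits) + 1)
    remaining = 1.0
    for k, vk in enumerate(v):
        probs[k] = vk * remaining      # take a vk-fraction of what's left
        remaining *= (1.0 - vk)
    probs[-1] = remaining              # last state gets the remainder
    return probs

def recurrent_transition(x_prev, z_prev, R, r, log_Pi):
    """One rSLDS discrete transition: logits combine a Markov term
    (row of log_Pi for z_prev) with a state-dependent term R @ x_prev + r,
    so where the continuous state is influences which mode comes next."""
    logits = log_Pi[z_prev] + R @ x_prev + r
    return stick_breaking_probs(logits)
```

Setting `R = 0` recovers an ordinary SLDS with purely Markovian switching; nonzero `R` partitions the continuous state space into regions that favor different dynamical units, which is what lets the model generate more realistic trajectories.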

For more information visit: http://www.broadinstitute.org/mia

Copyright Broad Institute, 2017. All rights reserved.
