Eliezer Yudkowsky – AI Alignment: Why It's Hard, and Where to Start

Published on Dec 28, 2016

On May 5, 2016, Eliezer Yudkowsky gave a talk at Stanford University for the 26th Annual Symbolic Systems Distinguished Speaker series (https://symsys.stanford.edu/viewing/e...).

Eliezer is a senior research fellow at the Machine Intelligence Research Institute, a research nonprofit studying the mathematical underpinnings of intelligent behavior.

Talk details—including slides, notes, and additional resources—are available at https://intelligence.org/stanford-talk/.

UPDATES/CORRECTIONS:

1:05:53 - Correction Dec. 2016: FairBot cooperates iff it proves that you cooperate with it.

1:08:19 - Update Dec. 2016: Stuart Russell is now the head of a new alignment research institute, the Center for Human-Compatible AI (http://humancompatible.ai/).

1:08:38 - Correction Dec. 2016: Leverhulme CFI is a joint venture between Cambridge, Oxford, Imperial College London, and UC Berkeley. The Leverhulme Trust provided CFI's initial funding, in response to a proposal developed by CSER staff.

1:09:04 - Update Dec. 2016: Paul Christiano now works at OpenAI (as does Dario Amodei). Chris Olah is based at Google Brain.
