An introductory lecture for MIT course 6.S094 on the basics of deep learning, covering a few key ideas, subfields, and the big picture of why neural networks have inspired and energized an entirely new generation of researchers. For more lecture videos on deep learning, reinforcement learning (RL), artificial intelligence (AI & AGI), and podcast conversations, visit our website or follow the TensorFlow code tutorials on our GitHub repo.
OUTLINE:
0:00 - Introduction
0:53 - Deep learning in one slide
4:55 - History of ideas and tools
9:43 - Simple example in TensorFlow
11:36 - TensorFlow in one slide
13:32 - Deep learning is representation learning
16:02 - Why deep learning (and why not)
22:00 - Challenges for supervised learning
38:27 - Key low-level concepts
46:15 - Higher-level methods
1:06:00 - Toward artificial general intelligence
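The simple TensorFlow example in the outline above (9:43) follows the standard Keras pattern for a small image classifier. The sketch below is an illustration of that pattern only, not the lecture's exact code; it substitutes random data for a real dataset such as MNIST, and the layer sizes are assumptions.

```python
import numpy as np
import tensorflow as tf

# Minimal sketch of a dense classifier in Keras: flatten a 28x28 input,
# pass it through one hidden layer, and output 10 class probabilities.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Synthetic stand-in data (the lecture uses a real dataset instead).
x = np.random.rand(64, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=64)

model.fit(x, y, epochs=1, verbose=0)
preds = model.predict(x, verbose=0)  # shape (64, 10), one probability row per input
```

The same build/compile/fit/predict sequence carries over unchanged when the random arrays are replaced with real training data.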
The Artificial Intelligence (AI) podcast features accessible, big-picture conversations at MIT and beyond about the nature of intelligence, with some of the most interesting people in the world thinking about AI from the perspectives of deep learning, robotics, AGI, neuroscience, philosophy, psychology, cognitive science, economics, physics, mathematics, and more. You can subscribe at https://lexfridman.com/ai/
Many of the conversations are part of MIT course 6.S099: Artificial General Intelligence. The class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world. The course website is https://agi.mit.edu