Elements S3 • E6

Neural Networks: How Do Robots Teach Themselves?







Published on Aug 28, 2018

This robotic hand racked up the equivalent of 100 years of practice rotating a block inside a simulation that ran for just 50 hours! Is this the next revolutionary step for neural networks?

A.I. Is Monitoring You Right Now and Here’s How It's Using Your Data - https://youtu.be/KpybityrXfs

Read More:

OpenAI: Learning Dexterity
“Our system, called Dactyl, is trained entirely in simulation and transfers its knowledge to reality, adapting to real-world physics using techniques we’ve been working on for the past year. Dactyl learns from scratch using the same general-purpose reinforcement learning algorithm and code as OpenAI Five. Our results show that it’s possible to train agents in simulation and have them solve real-world tasks, without physically-accurate modeling of the world.”
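The excerpt describes training entirely inside a simulator with a general-purpose reinforcement learning algorithm. As a rough, self-contained illustration of that idea only — a tiny tabular Q-learning agent on a made-up one-dimensional world, not the large-scale system OpenAI describes — here's a sketch of how an agent can learn a task purely from simulated trial and error:

```python
import random

# A toy "simulation": positions 0..4 on a line; reward for reaching position 4.
N_STATES, GOAL, ACTIONS = 5, 4, (1, -1)

def step(state, action):
    # Advance the simulated world by one tick.
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):            # many cheap trials, all inside the simulator
    state = 0
    for _ in range(100):              # safety cap on episode length
        # Epsilon-greedy: mostly exploit current value estimates, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best next value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break

# The learned greedy policy should steer every position toward the goal.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

What makes Dactyl's sim-to-real transfer work is much more than this, of course — per the excerpt, the simulator's physics are varied so the learned policy holds up when the world doesn't match the model exactly.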

MIT News: Neural Networks Explained
“Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.”
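The layered, feed-forward structure described above can be sketched in a few lines of plain Python — toy weights and a toy layer size, purely to show data moving through densely connected layers in one direction:

```python
import math
import random

def sigmoid(x):
    # Squashing activation applied at each node's weighted sum.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights_by_layer):
    """Push data through the net in one direction, layer by layer.

    weights_by_layer[k] is a list of per-node weight vectors; each node
    sums the weighted outputs of the layer beneath it.
    """
    activations = inputs
    for layer in weights_by_layer:
        activations = [
            sigmoid(sum(w * a for w, a in zip(node_weights, activations)))
            for node_weights in layer
        ]
    return activations

random.seed(0)
# Toy net: 3 inputs -> 4 hidden nodes -> 1 output, densely interconnected.
net = [
    [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)],  # hidden layer
    [[random.uniform(-1, 1) for _ in range(4)] for _ in range(1)],  # output layer
]
print(forward([0.5, -0.2, 0.9], net))  # a single value between 0 and 1
```

Real networks have thousands to millions of such nodes and learn their weights from data; this sketch only shows the one-directional flow the excerpt describes.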

What's the difference between A.I., machine learning, and robotics?
“At the root of AI technology is the ability for machines to be able to perform tasks characteristic of human intelligence. These types of things include planning, pattern recognizing, understanding natural language, learning and solving problems. There are two main types of AI: general and narrow”

