A Real-time Demonstration of a Natural Language Robotics Model

Published on Sep 27, 2016

Read the paper on this research: http://www.roboticsproceedings.org/rs...

Jake Arkin, PhD student in electrical and computer engineering, demonstrates a natural language model that allows a robot to be directed to complete a particular task. The model, developed in the Robotics and Artificial Intelligence Laboratory with assistant professor of electrical engineering Thomas Howard, allows a user to speak a simple command, which the robot translates into an action. This research was a joint effort with Rohan Paul and Nicholas Roy from MIT.

The model also incorporates an understanding of spatial relationships: when the robot is commanded to pick up a particular object, it can distinguish that object from others nearby, even when they are identical in appearance. Localized visual servoing, contributed by graduate student Siddharth Patki, allows for consistent execution of the demonstrated robot actions.

This demonstration was conducted in real time to show how quickly the robot can process new information in the form of spoken language, determine the action being communicated, and complete the task.

Help us caption & translate this video!

http://amara.org/v/8Mzm/
