Guiding robot planes with hand gestures

Published on Mar 13, 2012

MIT News - March 14, 2012

Aircraft-carrier crews use a set of standard hand gestures to guide planes on the carrier deck. But as robot planes are increasingly used for routine air missions, researchers at MIT are working on a system that would enable them to follow the same types of gestures.

The problem of interpreting hand signals has two distinct parts. The first is simply inferring the body pose of the signaler from a digital image: Are the hands up or down, the elbows in or out? The second is determining which specific gesture is depicted in a series of images. The MIT researchers are chiefly concerned with the second problem; they present their solution in the March issue of the journal ACM Transactions on Interactive Intelligent Systems. But to test their approach, they also had to address the first problem, which they did in work presented at last year's IEEE International Conference on Automatic Face and Gesture Recognition.
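
To make that decomposition concrete, here is a minimal sketch of the two-stage pipeline, not the authors' actual model: stage one turns each video frame into a pose feature vector, and stage two labels a whole sequence of those vectors as one of the known gestures. Stage two is illustrated with nearest-neighbor dynamic time warping (DTW) against labeled template sequences, a standard baseline for sequence classification; the feature vectors and template set are assumptions for illustration only.

import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """DTW cost between two pose sequences of shape (frames, features)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # per-frame pose distance
            cost[i, j] = d + min(cost[i - 1, j],      # stall in sequence b
                                 cost[i, j - 1],      # stall in sequence a
                                 cost[i - 1, j - 1])  # advance both
    return float(cost[n, m])

def classify_gesture(sequence, templates):
    """Label a pose sequence with the gesture of its nearest DTW template.

    templates: list of (label, pose_sequence) pairs, one or more per gesture.
    """
    return min(templates, key=lambda t: dtw_distance(sequence, t[1]))[0]

Because DTW stretches or compresses time when aligning two sequences, the same gesture performed quickly or slowly by different signalers can still match the same template.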

Yale Song, a PhD student in MIT's Department of Electrical Engineering and Computer Science, his advisor, computer science professor Randall Davis, and David Demirdjian, a research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), recorded a series of videos in which several different people performed a set of 24 gestures commonly used by aircraft-carrier deck personnel. In order to test their gesture-identification system, they first had to determine the body pose of each subject in each frame of video. "These days you can just easily use off-the-shelf Kinect or many other drivers," Song says, referring to the popular Microsoft Xbox device that allows players to control video games using gestures. But that wasn't true when the MIT researchers began their project; to make things even more complicated, their algorithms had to infer not only body position but also the shapes of the subjects' hands.
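
As a hedged illustration of that first step, the sketch below builds the kind of per-frame feature vector such a system might use: 3-D joint positions from a Kinect-style skeleton stream, normalized so the features are comparable across signalers, with a hand-shape code appended since, as noted above, the researchers' algorithms also had to infer hand shape. The joint names and hand codes are illustrative assumptions, not the actual MIT system or Kinect API.

import numpy as np

# Illustrative joint set; a real skeleton stream reports many more joints.
JOINTS = ["head", "l_shoulder", "r_shoulder", "l_elbow",
          "r_elbow", "l_hand", "r_hand"]

def pose_features(joints_xyz: dict, hand_codes: tuple) -> np.ndarray:
    """joints_xyz: joint name -> (x, y, z), including a "torso" entry;
    hand_codes: (left, right) hand-shape labels, e.g. 0 = open, 1 = fist."""
    origin = np.asarray(joints_xyz["torso"], dtype=float)
    # Translate so the torso is the origin and scale by shoulder width,
    # making the vector invariant to where the signaler stands and how
    # large they appear in the frame.
    scale = np.linalg.norm(np.asarray(joints_xyz["l_shoulder"], dtype=float)
                           - np.asarray(joints_xyz["r_shoulder"], dtype=float))
    rel = [(np.asarray(joints_xyz[j], dtype=float) - origin) / scale
           for j in JOINTS]
    return np.concatenate([np.concatenate(rel), np.asarray(hand_codes, float)])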

Read more at MIT News: http://web.mit.edu/newsoffice/2012/ro...

Video: Melanie Gonick
Simulations courtesy of Yale Song
