Ever give much thought to a simple task such as picking up a bottle of water, taking a sip, and putting it back down? It seems like a fairly common behavior for someone like me who's done it, say, a couple million times since I was a toddler. But programming robots to perform common behaviors is quite difficult, especially for robots intended for combat situations. The U.S. Army wants robots to operate as soldier teammates without the need for tele-operation. To do that, robots must operate autonomously and be able to help a soldier see into places too dangerous to investigate, move into places too risky to navigate, and interact with potentially hazardous environments.

At a military training site designed to look like a war-torn city, a team of researchers supporting the Army's Robotics Collaborative Technology Alliance, or RCTA, is testing out years of research aimed at getting robots to perform human-like tasks: seeing and understanding their environment, zeroing in on an object of interest, and removing it from their path. In robotics, those capabilities are referred to as perceiving the environment, planning intelligent actions, and interacting with the environment. They may seem simple and come rather easily to humans, but they are extremely difficult for robots to perform autonomously.

It's hard to solve all three at the same time because they're interdependent: errors in your perception affect how you go about doing the planning, errors in your plan affect how you go about doing the mechanics, and errors in your mechanics feed back into everything else. That interdependent relationship between the different problems is an important factor to consider, so that you're not just solving perception in a stovepipe for the sake of perception. When you solve the perception problem, you're considering how your solution is going to impact the planning stage and the mechanical execution stage.

I'm very confident that, given enough time and the increase in processing capability that Moore's law describes, we're going to overcome those challenges at some point in the future; the question is when. And for the Army, it's really important that we solve those problems first. If we have solutions to those issues before anyone else, then we have an advantage on the battlefield that could mean the difference between winning and losing a battle, and ultimately winning and losing a war.

The Combat Capabilities Development Command Army Research Laboratory, also known as ARL, is the leader in autonomy research for the Army. Its Robotics Collaborative Technology Alliance has attacked the most difficult autonomy research problems across government, academia, and industry. The RCTA is interested in the problem of deconstructing, or clearing, a pile of debris autonomously with a robot, and JPL is contributing one piece of the puzzle: allowing the robot to plan how to move its hand in order to grasp the pieces of debris that make up the pile. The work we're doing here addresses multiple problems at once. The team works with the robot, observes how it behaves, and tries to understand what it is doing; through that understanding, it gives the robot more knowledge about how to solve its task. In other words, the team observes the successes and failures the robot encounters and uses that information to program the robot to be better at its task.
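To make that perceive-plan-act coupling concrete, here is a minimal, runnable sketch. It is not ARL's software; every name and number in it (the clearance constants, the `perceive`/`plan`/`act` stand-ins) is an illustrative assumption:

```python
import random
from dataclasses import dataclass

BASE_CLEARANCE = 0.02  # meters; assumed minimum grasp clearance
SAFETY_FACTOR = 0.5    # assumed scaling of clearance with perception noise
TOLERANCE = 0.01       # meters; assumed acceptable residual grasp error

@dataclass
class Estimate:
    pose: float         # stand-in for a full object pose
    uncertainty: float  # stand-in for the detector's covariance

def perceive() -> Estimate:
    """Stage 1: detect the object and report how sure we are about it."""
    noise = random.uniform(0.0, 0.1)
    return Estimate(pose=1.0 + noise, uncertainty=noise)

def plan(est: Estimate) -> float:
    """Stage 2: a noisier perception estimate forces a wider safety margin."""
    return BASE_CLEARANCE + SAFETY_FACTOR * est.uncertainty

def act(clearance: float, est: Estimate) -> float:
    """Stage 3: execute; residual error grows with upstream uncertainty."""
    return max(0.0, est.uncertainty - clearance)

def perceive_plan_act(max_retries: int = 3) -> bool:
    for _ in range(max_retries):
        est = perceive()             # perception error enters here...
        error = act(plan(est), est)  # ...and shapes the plan and execution
        if error < TOLERANCE:
            return True              # grasp succeeded
        # Execution error feeds back: a large miss means re-sensing and
        # re-planning rather than blindly retrying the same motion.
    return False

if __name__ == "__main__":
    print("grasp succeeded:", perceive_plan_act())
```

The point of the sketch is the feedback structure, not the numbers: the planner's margin widens with the detector's uncertainty, and a failed execution returns to sensing instead of replaying the same motion, which is exactly the interdependence described above.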
Grasping things is super intuitive for humans. We train to do it our whole lives, and we have years of experience using our hands and interacting with objects that become familiar over time. Robots don't have that luxury: a robot gets a few hours or a few days of training, and it doesn't have all the prior knowledge and familiarity that a human has. The problem we're trying to solve is to develop algorithms that allow robots to grasp objects they have never seen before, despite all of those difficulties.

To better understand the scientific challenge, let's first understand the experiment in a military context. This task aims to reduce soldier exposure to battlefield hazards by autonomously performing a complex sequence of human-like tasks at op tempo. Here, RoMan is deployed to clear a path. RoMan drives up to a pile of obstacles; determines how to grasp and manipulate an object in the pile, lifting it if it is light and dragging it if it is heavy; moves it out of the way; iterates through all of the objects in the obstruction; and finally proceeds along the cleared path to check for additional hazards. Such autonomous robotic capabilities will decrease soldier risk in areas under fire, where chemical, biological, radiological, or explosive hazards are present, or in a breaching scenario.

This is really the science of perception for action. The main challenge is that we deal with unstructured environments with a lot of uncertainty and very little data available, as opposed to factory environments, where everything is structured and both the hardware for the action and the cameras for the perception are very carefully planned. In perception for manipulation in this challenging environment, we don't know where the objects are. We don't even know what the objects are; the objects are unknown. Nobody has provided a database of these objects, so we cannot just apply a commercial algorithm. The problem is still very open: not only do we want to locate these objects independent of their shape and of where they are when they block the road, we then have to figure out somehow how to grasp them. And when we grasp them and they are heavy, because we are talking about obstacles blocking the way for tanks, for example, we also have to figure out their physical properties. We somehow have to estimate how heavy they are so that we plan the right grasping pose, based just on what the cameras see, and do it all autonomously. There is absolutely no remote control here. Based just on the images, first from far away, we detect where these objects, and obstacles in general, are; we don't know whether they are chairs or some other barrier. Then, from close-range perception, we have to figure out exactly where to grasp them, lift them, and drag them out of the way.
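As a way of pinning down the sequence just described, here is a minimal sketch of the clearing loop, assuming a hypothetical `robot` interface. None of these method names, nor the 10 kg threshold, come from the RCTA system; they only mark where each capability (far-field detection, close-range grasp and mass estimation, lift versus drag) would plug in:

```python
LIFT_MASS_LIMIT_KG = 10.0  # assumed threshold separating "lift" from "drag"

def clear_path(robot) -> None:
    # Far-field perception tells us only *where* obstacles are, not what
    # they are; there is no database of object models to match against.
    for obstacle in robot.detect_obstacles_far():
        robot.approach(obstacle)
        # Close-range perception: choose a grasp on a never-before-seen
        # shape and estimate its physical properties from cameras alone.
        grasp_pose = robot.estimate_grasp_pose(obstacle)
        est_mass_kg = robot.estimate_mass(obstacle)
        robot.grasp(grasp_pose)
        if est_mass_kg <= LIFT_MASS_LIMIT_KG:
            robot.lift_and_carry_aside(obstacle)  # light: lift it
        else:
            robot.drag_aside(obstacle)            # heavy: drag it
        robot.release()
    # Finally, proceed along the cleared path to check for further hazards.
    robot.advance_and_scan()
```

The lift-versus-drag branch is the decision the researchers describe making from camera-based mass estimates alone, with no remote control anywhere in the loop.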
A short distance away is a group of software engineers who are helping Army robots detect and manipulate unknown objects and navigate unfamiliar environments. The challenge of unstructured environments is by no means solely under the military's purview. There have been other research projects, such as the DARPA Robotics Challenge, where teams sought to develop platforms that could go into a radioactive nuclear plant, as after the Fukushima nuclear disaster, when humans couldn't be sent in because of the massive radiation levels but tasks such as sealing up the reactor still had to be done to keep the city safe.

That DARPA Robotics Challenge developed technologies, some of which fed into the technology of the RCTA and motivated the need to operate in unstructured environments, a problem industry is not solving. For many years, robotics has been very effective in environments like industrial automation, for instance in car manufacturing, where you have a vehicle production line and giant robotic arms that have to move to a very precise location, spot-weld something, and go somewhere else. The environment they operate in is very structured, and we can know exactly what they have to do and when. By contrast, we are operating robots in a very unstructured environment. The idea is that, say, were the military to deploy something like RoMan to a battlefield, they would send it in ahead and give it some level of instruction, like "pick up this debris object," and it has to perceive that debris, figure out how to interact with it, lift it, and transport it around with no prior knowledge of the area it's going to be working in. It's that unstructured environment that adds the large challenge to what we're doing, in comparison to traditional robotics.

The problem with the algorithms available in industry is that they are almost nonexistent for the unstructured problem we're facing. Companies like Amazon, for instance, use an extensive amount of robotics in their logistics and warehouses, where you have an extremely ordered environment: you know where the shelves are, you know where all of the different visual features of your warehouse are, and you can send the robot from point A to point B quite easily and say, "lift up to this height, grab box A from shelf C." Here, by contrast, we have no concept of what environment we're sending the robot into. That is where the novelty of this research comes in: we are developing new algorithms that allow us to interact with unforeseen objects and things the robot hasn't been explicitly taught to work with.

Industry has a lot of advantages over what the military has to do, because in industry you can design your environment to make the problem easier. You can meet your robotic solution halfway by changing the environment and structuring the rules within it. The Army doesn't have that luxury: we have to go into completely unstructured, unknown environments we haven't seen before and deal with whatever we find. So we need really robust solutions, and we don't have the luxury of simplifying the problem down. What we're trying to do here is develop algorithms and solutions for an extraordinarily broad class of objects, so that when we send the robots out into the field they can deal with new things they haven't seen before. That's really our goal.

For the Combat Capabilities Development Command Army Research Laboratory, I'm TJ Ellis.