Thank you. It's a real pleasure to be here at the culmination of this program. Stuart very nicely laid out the fact that robots as we use them today are extremely difficult to use. If we look back at where we were at the start of this program, we saw two things. First of all, we saw robots that were capable of moving around the world but really could only do that one thing. They relied very heavily on metric representations and really couldn't do anything else except get from point A to point B, and there is more to life, more complex missions, than getting from point A to point B. And they required a huge amount of oversight and supervision. The figure on the right is taken from a recent article in WIRED that talked about the hazards of distracted war fighters, and the problem with deploying unmanned systems is the amount of manpower, the amount of labor, required to actually operate these systems.

So the challenge that we've really wrapped our arms around in this program is: how do we make these robots smarter? How do we make them understand the world around them a lot better? How do we make them easier for people to work with, as teammates rather than as tools that require constant supervision?

There are two major areas where we've invested a tremendous amount of time and effort. The first is how we accomplish this natural teaming: how do we develop robots that are able to follow instructions, able to report back to human partners, and that really have a shared representation that matches how humans understand the world? That research question, how do we develop representations that are actually operational for robots out in the field working with humans, has been a major driving force of this program. The second is how we actually get to higher-level reasoning and autonomy: how do we create more complex plans? How do we take the complex perception that Marshall talks so nicely about, and the manipulation planning that Sid talks so nicely about, and bring them into a higher-level autonomy system that's capable of executing more complex missions? And again, it's a question of representation.

One of the challenges is that people really think about the world in a very different way. I don't think any of us would have trouble using a very cartoonish sketch of some military installation to navigate around, even though we understand that those buildings look nothing like the buildings we'll actually see, that there's a lack of fidelity in scale, and so on. Robots just don't think about the world this way. They really think about very artificial point clouds and things like that, and if you take a map that a robot builds and try to use it to navigate around, it's very, very easy to get confused. This is one of the things that has really held back robotics over the years.

One of the insights of this program has been to notice that how people communicate tells us a lot about how they actually think about the world, and that if you can bring natural language understanding into the robot's representation, then you may get a lot of the things that we're looking for. So this video here shows a human giving an instruction to a robotic forklift: "Put the tire pallet on the truck." The robot has never seen that sentence before. It has an understanding of what it means to be a tire pallet, it understands what it means to be on the truck, and it has an understanding of what it means to put things on the truck.
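To make that idea concrete, here is a minimal sketch, in ordinary Python, of what grounding such an instruction could look like: a parsed phrase like "tire pallet" gets matched against the objects the perception system has labeled, and the result is a symbolic command a planner can act on. Every class and function name here is a hypothetical illustration, not the actual system, and the naive label matching stands in for what is really learned from data.

```python
# A minimal sketch (not the actual system) of grounding a parsed instruction
# like "put the tire pallet on the truck" to objects the robot has perceived.
# All class and function names here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str          # label from the perception system, e.g. "tire_pallet"
    position: tuple    # (x, y) in the robot's frame

@dataclass
class GroundedCommand:
    action: str              # e.g. "put_on"
    target: WorldObject      # the object to manipulate
    destination: WorldObject # where it should end up

def ground_instruction(parsed, world):
    """Map parsed phrases ("tire pallet", "the truck") onto perceived objects.

    In the real system this matching is learned from data; here it is a
    simple word-overlap lookup purely for illustration.
    """
    def best_match(phrase):
        # naive grounding: pick the perceived object whose label shares
        # the most words with the phrase
        return max(world, key=lambda o: len(set(phrase.split()) &
                                             set(o.name.replace("_", " ").split())))
    return GroundedCommand(action=parsed["verb"],
                           target=best_match(parsed["object"]),
                           destination=best_match(parsed["destination"]))

# Toy world model and a toy parse, just to show the shape of the problem
world = [WorldObject("tire_pallet", (2.0, 1.5)),
         WorldObject("box_pallet", (4.0, 0.5)),
         WorldObject("truck", (8.0, 3.0))]
parsed = {"verb": "put_on", "object": "tire pallet", "destination": "the truck"}
cmd = ground_instruction(parsed, world)
print(cmd.action, cmd.target.name, "->", cmd.destination.name)
```

The point of the sketch is only the shape of the problem: language on one side, perceived objects on the other, and a scoring function in between that, in the real work, is learned rather than hand coded.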
But unlike previous systems that had this capability, this system is robust to tremendous variability in the sentence structure. It has never seen that sentence before, and there's no hand coding at all involved in giving the robot this ability. This is the kind of situation where you could imagine a robot operating in a forward supply area, interacting with a human team that's bringing in supplies, deploying them, et cetera.

How did we do this? One of the things that we've been able to show is that you can get very rich symbolic representations of the world that in many respects look like traditional artificial intelligence logical representations, but that are informed by data. This required new mathematical representations that actually correspond to the kinds of problems we're seeing in the field. Collect large amounts of data, and we can get tremendously good representations and tremendously good execution of complex missions. But part and parcel of this is making sure that the representations we learn match what cognitive psychologists tell us about these problems and these models. So a real success of this program has been working with Professor Florent Jensen's team and Dr. Daniel Barber's team to build cognitively informed models that can then be instantiated with data, so that we're not just driven by the data but also by what the psychology tells us. And this has been a huge success of this program: building these models that we can then put into the interfaces, which allow the human teammate to very quickly understand what the robot is saying and then interact with the robot.

Another challenge, of course, is that the robot has to have an understanding of the world that is not just driven by what it can see, but also by how the world works. This required us to build new theories for fusing data from the physics of the world, from interacting with the world, with what the language from the human teammate was telling us and what the perception system was telling us. You have this highly capable robot here that has tremendous information signals about how things move and how to pick things up. How do you build that into your understanding? When a human tells you the case on the right is heavy, pick up the heavy case, the robot needs to know how to fuse its understanding of cases with its understanding of what it means to be heavy and its understanding of what it means to pick up.

This gives us the capability where, for instance, the human operator on the left is able to say things like "clear the debris." The robot has learned what it means to be debris and has learned what it means to clear. And if I let the video on the right play, you see that the robot has driven up to that object and has understood what that means. Debris, of course, is a foreign object that is occluding the robot's path. The robot understands what that means and is able to move it out of the way, for instance to let the warfighter move through. That's the capability we now have that allows the human teammates to go off and do something else while the robot is basically getting things out of the way.
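A rough way to picture that kind of fusion, with made-up numbers and hypothetical names, is as a small probabilistic update: the words give a prior over which case the human means, and what the arm feels when it test-lifts each candidate either reinforces or overrides that prior. This is only a sketch of the idea, not the program's actual mathematics.

```python
# A minimal sketch, under assumed numbers, of fusing a language cue
# ("pick up the heavy case") with physical evidence the robot gathers by
# test-lifting each candidate. Names, thresholds, and probabilities are
# illustrative only.
def fuse_language_and_physics(candidates, language_prior, lift_measurements,
                              heavy_threshold_kg=10.0):
    """Return a posterior over which case the instruction refers to.

    language_prior: P(case is the referent | words alone), e.g. from a
                    spatial phrase like "on the right".
    lift_measurements: estimated mass (kg) sensed while test-lifting each case.
    """
    scores = {}
    for case in candidates:
        mass = lift_measurements[case]
        # likelihood that this case counts as "heavy", given what the arm felt
        heaviness = 1.0 if mass >= heavy_threshold_kg else mass / heavy_threshold_kg
        scores[case] = language_prior[case] * heaviness
    total = sum(scores.values())
    return {case: s / total for case, s in scores.items()}

posterior = fuse_language_and_physics(
    candidates=["case_left", "case_right"],
    language_prior={"case_left": 0.3, "case_right": 0.7},   # "on the right"
    lift_measurements={"case_left": 12.0, "case_right": 3.0},  # left is heavier
)
print(posterior)  # the physical evidence can override the language prior
```

With these toy numbers the case on the left ends up more probable even though the words pointed to the right, which is exactly the kind of inconsistency the error-recovery work discussed next is about.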
So this is what we mean by complex missions: bringing together the human teaming and instructions, the kind of manipulation planning that Sid's team has given us, and the perception that Marshall and the other perception people have given us, to really advance the capability and make these robots much smarter than they were before.

It also allows us to recover from errors. Stuart talked a lot about robustness. What happens if the human teammate says to pick up something that's supposed to be heavy but isn't? The robot can actually investigate the world. You see in this video that when the robot is given an instruction that's inconsistent with what it's told about the world, it's able to explore the world, pick up the different cases, and figure out what the right thing is supposed to be. So what we've seen here are new mathematical theories. This video is from Professor Tom Howard's group at the University of Rochester. He and his students, collaborating with some of my people, have shown that we can build these kinds of robust natural language understanding systems that allow us to recover not only from perception errors but also from cases where the human operator makes a mistake in the instruction they give.

And then, of course, there's the problem of how we get out of the lab and into much longer distances, much longer missions. This is an example from a recent evaluation where the robot was doing a reconnaissance mission over a series of checkpoints that the human operator had asked the robot to explore. This requires the robot to understand what it means to navigate through parts of the world it hasn't even seen yet. That's a kind of capability we've never had before: you can tell the robot about the world, it builds its own internal representation, which of course in no way metrically resembles what is actually out there, and it then goes and figures out how to reconcile that with what it's seeing. Again, we see the very artificial perception map on the right being fused with the human instructions.

But of course, that's just a reconnaissance mission; it's just using vision. Can we actually put all the pieces together? This is an example where the robot is told to clear a road, and that's all it's told. It knows which things it can move and which it can't. It knows there are barrels in the way. It's able to drive up to the barrels and investigate which of the barrels it's going to be able to move and push out of the way, and which ones it can't. There's no human supervision at this point. Of course, this is an experiment, so we have the graduate students and postdocs overseeing the software, but the robot is making all the decisions itself about how to follow these instructions, how to move the barrels out of the way, and how to meet the higher-level needs of the mission.

So we had this need for new models that allow natural teaming, and we have them now; you'll see in the demo this afternoon that they allow the robot to follow instructions and execute these complex missions. And we have new models for planning at a high level, over very long length and time scales.
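The clear-the-road behavior just described can be caricatured as a simple probe-and-decide loop: drive to each obstacle, test whether it yields, push aside the ones that do, and replan around the ones that don't. The robot interface below is a hypothetical stand-in so the sketch runs end to end; the real system's decision making is, of course, far richer.

```python
# A minimal sketch of the "clear the road" behavior: probe each obstacle,
# push aside the ones that yield, and replan around the rest.
# ToyRobot and its methods are hypothetical stand-ins, not the actual software.
class ToyRobot:
    """Stand-in for the real platform, so the sketch runs end to end."""
    def drive_to(self, obstacle):
        print(f"driving to {obstacle['name']}")
    def test_push(self, obstacle):
        return obstacle["resistance"]              # pretend force felt during a gentle probe
    def push_clear(self, obstacle):
        print(f"pushed {obstacle['name']} off the road")
    def plan_path_around(self, immovable):
        print(f"replanning around {[o['name'] for o in immovable]}")

def clear_route(robot, obstacles, max_push_force=50.0):
    """Probe each obstacle; move what yields, remember what does not."""
    immovable = []
    for obstacle in obstacles:
        robot.drive_to(obstacle)
        if robot.test_push(obstacle) < max_push_force:
            robot.push_clear(obstacle)             # movable: shove it off the road
        else:
            immovable.append(obstacle)             # too heavy: leave it in place
    robot.plan_path_around(immovable)              # route around what remains
    return immovable

clear_route(ToyRobot(), [{"name": "barrel_1", "resistance": 20.0},
                         {"name": "barrel_2", "resistance": 80.0}])
```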
These new planning models also handle complex interactions, not just with human teammates, but with objects in the world that are not specified ahead of time by the human operator and that represent constraints both on what the robot can do and on what the mission requires. So thank you very much.