Well, it's quite a big day for us. It's really exciting to celebrate the culmination of the Robotics CTA, which I've had the pleasure of leading for the last couple of years, and I'm really excited to be here today. I want to thank everyone for coming, especially our distinguished visitors and all my friends and colleagues from across academia, industry, and the government; it's fantastic to see you here today.

So I'd like to start off with our vision. Simply stated, what are we trying to do? We're working to develop a robotic teammate. Imagine with me, if you will, robots that I can task to go into the real, messy world. They intelligently plan where to go. They understand the world, for the first time, as more than obstacles or things that just take up space, and begin to see human-relevant meaning. They communicate with team members in human-understandable terms, are transparent, build trust, and ask for clarification when they get stuck. And they do all this without having to rely on a server farm in Iceland. All the while, their human team members are free to pursue their own goals; they don't have to be glued to a screen and a controller, but can communicate through gestures, speech, and tactical signals.

So as we celebrate today, part of our goal is to ask: what does a teammate look like? We want robots that can see like soldiers, talk like soldiers, move like soldiers, and do work with soldiers. To accomplish this, we need systems that can understand the environment the same way soldiers do. Soldiers are also fantastic at adapting to things they haven't seen before, and we need robots that are on par with them, or at least able to work alongside them, in those environments. And in order for them to communicate, just as we communicated when we met this morning, or whenever we meet another human, we need to possess a common worldview.
We have a general understanding of what's unique and what's common in the environment, and we are able to communicate that naturally. So we have to overcome the forced interaction of joysticks and heads-down operation, and get to where we can be heads-up, operating in the environment as a teammate.

The environments in which we have to do this, in order to address the Army's problems, are much more challenging than what industry is currently focusing on: robots on factory floors in structured environments, and autonomous cars driving on roads. In multi-domain operations, the Army has to operate in, and asks us to accomplish these kinds of objectives in, complex terrain, whether for next-gen combat vehicles such as OMFV and RCV, where vehicles operate in cluttered environments that are complex in their own right, with additional complexity added by a peer adversary, or in caves and strategic bunkers, under canopies, and in complex urban terrain. The CTA has been working on technologies to support this Army modernization since long before the current cycle came out. In the middle there, we're talking about conducting independent maneuver to enable the objectives the NGCV priority strategy is asking us to address. We also need to be able to isolate and defeat enemy maneuver forces, and to maneuver on the ground to secure terrain and consolidate gains, all of which requires capabilities and combat vehicles that can rapidly learn, adapt, and reason in that complex environment, and fight and win against a near-peer adversary. So this is quite a daunting task; as Dr. Perconti alluded to, we're asking a lot. So how do we really get there? The way we've focused on this is the CTA, which is what we're celebrating today.
This is a huge problem and a huge task, and no one can go at it alone, so it's been key to establish this collaboration with the best in the world across academia, industry, and government. You can see at the bottom that the foundation for this CTA is the collaboration we've developed over the last 10 years. I won't list all of my partners by name, but you can see them here; you'll hear from many of them today, you'll get to talk to many of their researchers in the poster sessions, and you'll see them demonstrating their capabilities.

I want to take you on a little bit of a history lesson for a couple of slides and talk about where we started. As Dr. Perconti alluded, we've been doing this for a while. It all started with the first Robotics CTA, version 1.0 if you like, though we didn't call it that at the time. It was led by Chuck Shoemaker, who's in the back here, and I'm glad to have Chuck here today to bring this full circle. We were focusing on autonomous mobility, intelligent planning, and multi-agent control, and we had a lot of achievements; you can see a video of some of the capabilities demoed in that program. Achievements such as the Gen 3 LADAR established the importance of LADAR for a lot of the work we do, and spawned industry development in that area. But we also learned that a lot of what we built wasn't sufficient for operating the way we wanted to; autonomous navigation built on just a point cloud, for example, wasn't giving us the richness we needed. We were largely developing tools for a metric world. Those systems lacked resiliency; they were brittle, they were slow, and they couldn't keep up with the op-tempo we need if we really want to operate with soldiers.
They were generally limited to a static world, with heavy reliance on a priori data. So we built on that, and, also under Chuck's leadership, we kicked off the second Robotics CTA, whose conclusion we're celebrating today. There we focused on developing capabilities to address the challenges we learned about in the first CTA: perception, intelligence, human-robot interaction, dexterous manipulation, and the unique mobility that Dr. Perconti already addressed. We had lots of early accomplishments. We did some of the seminal work in semantic perception and anytime learning, which allowed us to understand the world in more human-like terms. I'll talk about that more today, and you'll see a lot of the fruits of that labor in the demonstrations throughout the day. We also learned that we had to operate more tactically and interact with soldiers in a more meaningful way to do multi-agent teaming and manipulate things in a human-scaled world. You can see the example at the bottom from Boston Dynamics, who were part of the program in the early years.

What I want to do today is hone in on this slide, which will be the foundation for the rest of today's agenda. Back in 2016, we recognized that we had a lot of capabilities, indicated there on the left-hand side of the screen. We had these four research thrust areas, led by a lot of the people in the front row and other partners working with ARL, but we recognized that we needed to bring them all together into capabilities that combined the research efforts of many people. As I indicated on the pillar slide, it takes an incredible amount of talent, with people working together across multiple disciplines, to make this happen.
We also wanted to revisit the progress that had been made in the civilian sector and industry, so we devised three thrust areas to focus our research efforts over the last three years of the program, which you'll see today.

The first thrust is op-tempo maneuvers in unstructured environments. As the name implies, we need to get faster and we need to work in an unstructured world. A pretty daunting task, but that's the one ARL wants to tackle, and you'll see the results. We broke it up into two capabilities. The first is operational mobility in dynamic scenes. In the first CTA and in the early years we worked on a mostly static world, and we recognized that the world's not static; I don't know why that was such a surprise to us, but we realized we needed to get better at it. So we really focused on how to predict and understand agents in the world, for example pedestrians and other movers, and do a better job of predicting what they're going to do so that we can plan the robot's movements accordingly. Similarly, to be a teammate, a robot has to keep up with the soldier, so we needed robots that could operate at speed in rough terrain, something we recognized that wheels and tracks weren't going to be able to do alone. So we undertook this effort to keep working on operating in rough terrain, and you'll see some of those demos today as well.

The second thrust is human-robot execution of complex missions. This, I think, is one of the highlights of the program. To me it's quite an astonishing accomplishment, in the sense that I can't count the number of researchers across the program who were involved in this effort; let's just say there were many, many dozens, maybe close to 100 researchers, working on this area alone, really trying to get at how we can achieve better situational awareness in unstructured environments.
Imagine, if you will, robots that can go out and conduct a reconnaissance in an area they've never been before, and then come back and report just like a soldier would. Quite a task, but one we feel we've made a lot of progress on, and you'll see some of the results today. And we want to do this in a distributed manner, humans and robots working together. One of the things I really learned over the last couple of years is that we don't have to solve all the problems ourselves. We're not deploying robots on their own on Mars, for example (my apologies to JPL). We're looking at working together, so we can take advantage of how humans can help the robots adapt and reason about the world, and use that to foster better teaming.

And then finally, the thrust in mobile manipulation. We recognized that we needed to be able to manipulate a cluttered world, and that meant we didn't have models of every object we were going to manipulate, and we didn't have a pristine, structured environment to operate in.

As we undertook those three major thrust areas, we looked at the state of the art and assessed the scientific challenges we needed to tackle. For thrust one, op-tempo maneuvers in unstructured environments, we recognized we really needed to get better at building systems that weren't quite so brittle; we needed robustness built in. We had to have a better understanding of what the robot was doing and thinking about. We also needed to get faster. And, as I mentioned, we needed to stay focused on our roots, the Army's problem of being able to go anywhere in the world, in complex environments whose complexity comes from the natural terrain, the weather, or an adversary. Thrust two focuses on human-robot execution of complex missions: how do we get robots to understand the world the same way a human does?
So we did a lot of work in natural language grounding. This isn't just talking to Siri, but actually having Siri understand what the heck you're really talking about. That probably drives a lot of you crazy, especially if you understand how Siri works; it's very frustrating to me. We also needed distributed mission execution, and to provide situational awareness in unstructured environments we've never been in before. And then, as I mentioned, how do we manipulate the world from a mobile platform, in a complex 3D world with real objects, not just things we've modeled and photographed extensively? I'm really excited to have Sid here from the University of Washington; he's going to talk a lot about that in his presentation.

I'm going to highlight four areas and briefly talk about some of the accomplishments we made, and hopefully whet your appetite for where we want to take things. The rest of today you'll see demonstrations of this, and you'll be able to talk to the experts and researchers who made these things happen. One of the key accomplishments in T1C1, our op-tempo maneuvers in dynamic scenes capability, is maximum-entropy inverse reinforcement learning. This enables soldiers to demonstrate behaviors with just a few examples; the robot can then learn those behaviors and emulate them. It's kind of hard to see, but in the graphic on the upper left-hand side you can see the robot, with a green line coming out of it, shown along the edge of the road. In this case, a very simple one, the humans demonstrated that they wanted the robot to drive along the edge of the road rather than in the middle of the road. Just a toy task, but you get the idea. Then, shifting into red at the bottom right-hand of the screen, they're trying to get to the star location.
The robot recognizes something else, which triggers another mode it was trained on: how to operate when you want it to act covertly or stealthily and not be seen from a certain waypoint. We did a lot of this online, too. We're learning how to train robots online so that the soldier can actually correct the robot's behavior. If the robot doesn't get it quite right the first time, then, just as you might with your child, you can say, "well, I know what I said, but this is what I meant," and correct it a little. And it doesn't require going offline and retraining over a long period of time.

The next capability is optimal mobility in rough terrain. This is all about going where soldiers need to go: how do we get robots to go where tracks and wheels can't get us? Along the bottom left you can see the SPEAR acronym we developed, where we talk about addressing problems beyond the current state of the art. Speed: we need to go faster. Payload: if you want the robot to do something, it actually has to carry something. Efficiency: energy is always a problem for the military. Agility: to go where the soldiers need to go. And Robustness to failure modes. In the Venn diagram you see the three central scientific areas we focused on to address this: improved energy-density solutions, new controls, and new approaches for locomotion at high op-tempo. I'm excited for you to see this work in a demo behind you later today.

The next area is T2. As I mentioned, this is one of our areas that involved a lot of people.
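As an aside, the maximum-entropy inverse reinforcement learning mentioned under T1C1 can be sketched in a few lines. This is an illustrative toy on a five-state corridor, not the RCTA implementation; the states, features, demonstrations, and learning rate are all made up for the sketch. The idea is the same, though: the gradient on the reward weights is simply the feature counts the demonstrations accumulate minus the counts the current soft-optimal policy expects to accumulate.

```python
import numpy as np

# Toy MaxEnt IRL: five states in a row, reward linear in one-hot
# state features, R(s) = w . phi(s). All numbers are illustrative.
n_states = 5
actions = (-1, +1)          # step left / right, clipped at the ends
phi = np.eye(n_states)      # one-hot state features

def step(s, a):
    return min(max(s + a, 0), n_states - 1)

def soft_value_iteration(w, horizon=10):
    """Soft-optimal (maximum-entropy) policy pi[s, a] for reward phi @ w."""
    r = phi @ w
    V = np.zeros(n_states)
    for _ in range(horizon):
        Q = np.array([[r[s] + V[step(s, a)] for a in actions]
                      for s in range(n_states)])
        m = Q.max(axis=1)
        V = m + np.log(np.exp(Q - m[:, None]).sum(axis=1))  # soft max
    Q = np.array([[r[s] + V[step(s, a)] for a in actions]
                  for s in range(n_states)])
    pi = np.exp(Q - Q.max(axis=1, keepdims=True))
    return pi / pi.sum(axis=1, keepdims=True)

def expected_visits(pi, start=0, horizon=10):
    """Expected state-visitation counts under policy pi."""
    d = np.zeros(n_states)
    d[start] = 1.0
    counts = np.zeros(n_states)
    for _ in range(horizon):
        counts += d
        d_next = np.zeros(n_states)
        for s in range(n_states):
            for ai, a in enumerate(actions):
                d_next[step(s, a)] += d[s] * pi[s, ai]
        d = d_next
    return counts   # with one-hot phi, these are the feature counts

# A few demonstrations: the demonstrator always walks to the right edge.
demos = [[0, 1, 2, 3, 4, 4, 4, 4, 4, 4]] * 3
emp = sum(np.bincount(d, minlength=n_states) for d in demos) / len(demos)

# Gradient ascent on the demo likelihood:
# empirical feature counts minus the policy's expected feature counts.
w = np.zeros(n_states)
for _ in range(200):
    grad = emp - expected_visits(soft_value_iteration(w))
    w += 0.05 * grad

print(w.argmax())   # learned reward peaks at the demonstrated goal state
```

With one-hot features the update is just "where the demos went minus where the current policy goes," which is why a handful of demonstrations is enough to shape a behavior like "hug the edge of the road."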
I just want to highlight one of the many accomplishments we achieved in this area: natural language grounding. The impact here is that we developed technology for the robot to have a semantic understanding of the world similar to a human's, so that it can understand commands given to it by a soldier, not just roughly repeat them but actually grasp their meaning, and then execute complex tactical missions. As we know, when we talk with each other we have a common understanding, and we do things that are commonsensical for the situation we're in. We needed to build that into our world model, so that we could interact with the system to achieve those missions. I'm really excited for you to see that demonstration today; you'll see it outside.

The last area I want to highlight is the work we've done in mobile manipulation of unknown objects. In the center you can see all the tasks that have to be done, and the key takeaway is that we were able to do all of this autonomously. The human says where it wants the robot to go, and the robot has to reason about something that's preventing it from getting to that location, something it doesn't have a model of. It has to reason about whether it can lift the object because it's not too heavy, or assess that it's too heavy and has to be dragged, and then conduct a counter-mobility type of operation applicable to the Army, or conduct a breach, or navigate in an environment that isn't all open space, which is an assumption a lot of us roboticists make. So I'm really excited to show you this; it's the first time we've shown it. This is the ARL Robotics CTA autonomy stack, which we took to our partners at Texas A&M.
We have a cooperative agreement with them as well, through the Army Futures Command. We gave them our software stack and one of our robots, one of our larger platforms called a Warthog; essentially it's an RCV autonomy surrogate vehicle, and we're developing these capabilities for their application. You can see it running here within just a month of trials, and I wanted to show you this to highlight that we're already transitioning the work we're doing in the CTA to focus on specific Army applications. I was glad to see it going fast enough that the Wranglers were trying to keep up with it within just the first week or so. We're really excited about that, and happy to have Sri here from Texas A&M. We're really excited about this partnership, which is relatively new to us; we've been working with them for a couple of years, but it was really great to bring them in and have them show the validity of the transition opportunities just by giving them access to our stack. This is the foundation for where we're going in the near future.

Another transition point is taking the semantic labeling work I talked about, developed in the CTA, to the Robotics Technology Kernel from our partner friends at GVSC, formerly known as TARDEC, for those of you not up to speed with the new AFC naming changes. Here we're transitioning our semantic labeling work into their RTK autonomy stack, so that they have the opportunity to transition, test, and do the 6.3 research necessary to get this into the capability state it needs to be in for transition into NGCV and the capabilities they desire.
And I would be remiss if I didn't talk about all the technology transitions that have happened through the colleagues we've trained or worked with, some of whom are represented here today, where we've transitioned a lot of talent into the autonomous car industry and other industries, as you can see here. The list is too long to name, but we've had a huge impact on our national society and our industrial base, and I think that's a true testament to the problems we've put forth to the community and the talent of the people who come from the institutions we partner with.

I also want to highlight four academic investments and transitions, to give you a sampling of how this collaboration and ecosystem has evolved over time. The first is Dr. Kristin Schaefer, who is now an Army Research Laboratory employee. She got her PhD at UCF under the guidance of Florian Jentsch, so she was trained in the RCTA, learned a lot working on the problem, and then we were able to hire her, strengthening the tech base at ARL. We also had Dr. Elizabeth Phillips, who similarly came from UCF. She got her PhD under the RCTA, did her postdoctoral fellowship at Brown, and then just last year was appointed as a faculty member at the Air Force Academy; again, strengthening the intellectual base of the military. We've also had industry folks come to ARL: Mike Powers came from General Dynamics Land Systems (Robotic Systems, at the time) to ARL for a while, and just recently moved on to the financial industry. I'm not sure what that says about robotics in general. And then of course I really wanted to highlight Tom as a representative example. Tom has been a fantastic partner. He worked with Nick Roy at MIT and is now a professor at the University of Rochester.
He's been a fantastic partner, leading and spearheading a lot of the work in collaboration with Ethan Stump from ARL and the whole team across both organizations, pulling in different entities to make the symbol grounding demo happen, along with all the work that went into it.

So I'd like to wrap up by talking about the influence the RCTA has had on the evolution of autonomy. We started years ago with tools operating in a metric world, and then the self-driving car industry was spawned. We've since been transitioning to developing teammates that operate in a semantic world, and we hope this will inform and positively affect the tactical behaviors in multi-domain operations that the Army is looking for us to achieve, shown there in the last epoch. I just want to reinforce what Dr. Perconti said: ARL, working with our collaborative partners, has the opportunity to take the long-term view, make big bets, and take risks. I think this is an example of how a lot of this technology paved the way for industry to pick it up, run from there, and solve unique problems. So with that, I'll end my remarks, and we look forward to enjoying the rest of the day and interacting with you. Thank you.