Thank you, Chris, and good afternoon. First I want to introduce the students and researchers who'll be working on the demos: Drew Bell, who has his back to us, from Stanford; Adam Cho; Giuseppe Loianno; Aaron Weinstein; and Justin, who did the previous demo, so I'm not going to introduce him again. This effort is about perching and staring, as the title suggests. If you think about UAVs, there are two big limitations on their use in surveillance operations. First, there's a basic limit on mission life: battery technology being what it is, and with these UAVs consuming about 200 watts per kilogram, you get a very short mission time, which is not great for persistent surveillance. Second, surveillance is often a stealthy operation, and these rotorcraft, as you just saw, make a lot of noise. So one alternative is to have the UAV navigate to a suitable vantage point, perch, and then stare, conveying information back silently and, of course, prolonging the mission life.
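To put that power figure in perspective, here is a minimal back-of-the-envelope sketch. The 200 W/kg number is from the talk; the battery specific energy and the battery mass fraction are illustrative assumptions, not measurements from these platforms.

```python
# Rough hover-endurance estimate for a small rotorcraft.
# Assumptions (illustrative): LiPo specific energy ~180 Wh/kg,
# battery is about one third of takeoff mass.
hover_power_per_kg = 200.0        # W per kg of vehicle, quoted in the talk
battery_specific_energy = 180.0   # Wh/kg, typical LiPo (assumed)
battery_mass_fraction = 1.0 / 3.0 # assumed

# Energy carried per kg of vehicle, divided by power drawn per kg:
endurance_hours = (battery_specific_energy * battery_mass_fraction) / hover_power_per_kg
print(f"Estimated hover endurance: {endurance_hours * 60:.0f} minutes")
# -> roughly 18 minutes, which is why perching matters for
#    persistent surveillance.
```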
So the basic capability we are developing is the ability to go to desired locations and perch on all kinds of surfaces. You'll see demonstrations of perching on vertical surfaces and on branches like this one, and we've also done work with poles, towers, and so on. Keep in mind that in urban operations there are plenty of suitable features in the structure to hang from or perch on.

The first demonstration is by Drew Bell, and it leverages many years of work at Stanford University in the Cutkosky group, work that has led to gecko-like feet and arrays of microspines that allow perching. In this video you'll see an off-the-shelf quadrotor, one you can simply buy and equip with Mark's perching mechanisms, and you can see it perching on a stucco wall on the Stanford campus. What we're going to do today is have Drew take this vehicle off and perch it on the surfaces you see here. The first part of Drew's flight will be manual; he'll teleoperate it. But the actual perching, and the recovery if it slips and falls, is autonomous. So Drew, take it away.

While it's flying, I'll point out that there are two sets of microspine arrays and a lateral stabilization mechanism at the back, and when it tries to climb you'll see the two sets of legs articulated independently through servos. Unfortunately, Drew is having a good day, so you don't see it recover from any slipping. Can you make it slip? Oh, I didn't see it. Okay, it did slip. So anyway, it's a fairly robust operation. Again, I want to point out that his hands are off the remote. Oh, there you go. So it fell and then it was able to recover, and it does this with onboard servo loops based on sensing from onboard accelerometers; a tight servo loop essentially allows it to recover. That part of the operation was completely autonomous. Thank you, Drew.

The next piece I want to present is work we did on perching by hanging. This work was led by Giuseppe Loianno, who's going to do the next demonstration, and Justin Thomas. The interesting thing about perching is that most birds don't perch upside down; they don't hang, although flying foxes do. It turns out that when you have narrow-field-of-view cameras and no actuated head, hanging is actually a much more stable way to perch, because of the way quadrotors fly.

In this work, the novel thing is that all the computation is done onboard, and it relies only on a single camera and an IMU. The paradigm used for latching onto the surface is called visual servoing: you don't try to build a map of the environment and georegister yourself. Instead, you look at the target surface in image coordinates, and you drive the image in your eyes to be what it needs to be in order to perch.
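For readers unfamiliar with visual servoing, here is a minimal sketch of the classic image-based control law. This is the textbook formulation, not necessarily the exact controller flown on the vehicle, and the feature coordinates, depth estimate, and gain below are all illustrative.

```python
import numpy as np

# Minimal image-based visual servoing (IBVS) sketch. The controller
# drives the image-plane feature error to zero without ever
# reconstructing a 3D map -- the paradigm described above.

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one point feature at
    normalized image coordinates (x, y) with estimated depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, gain=0.5):
    """Camera velocity command (vx, vy, vz, wx, wy, wz) that drives
    the feature error s - s_star toward zero."""
    L = np.vstack([interaction_matrix(x, y, Z) for x, y in s])
    error = (s - s_star).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Illustrative use: four corner features of the perch target.
s = np.array([[0.10, 0.12], [0.30, 0.11], [0.31, 0.33], [0.09, 0.31]])
s_star = np.array([[-0.1, -0.1], [0.1, -0.1], [0.1, 0.1], [-0.1, 0.1]])
print(ibvs_velocity(s, s_star, Z=1.0))
```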
You'll now see a similar approach. Giuseppe is going to show you work that integrates the 250-gram platform I showed earlier this morning, for those of you who were in the morning session, with one of the grippers developed at Stanford in Mark's group. Again, it doesn't have a precise model of what it's trying to grasp. This piece is autonomous, and this is the first time I've seen the perching action fail. Trust me, this is true; others will attest to it. The perching action itself is passive, because it's very hard for a vehicle to react in the short amount of time in which impact occurs. Are we going to try once more? Do we have time for another try? Where's Dave? Okay, real quick. So, as I was saying, it's very difficult for the vehicle to react in real time, so these perching actions have to be passive. What we've done, independently and jointly, is characterize the landing envelope of many of these grippers or claws. If you know the approach velocity is inside that landing envelope, which obviously depends on the compliance of the system you have on board and on the grasping surface, then you can hold on to the branch. Okay, there you go. Thank you.

The next thing I want to showcase is work done at ARL by Chad Kessens, again in collaboration with Justin Thomas, Jaydev Desai of the University of Maryland, and myself. It's unfortunate that he could not be here, because he has to man a poster at the other location, so be sure to look at his poster there. The key idea is a set of lightweight suction cups with a vacuum pump. The whole ensemble weighs about 700 grams, making it suitable for aerial grasping. There are four cups, the pressure differential is about three-quarters of an atmosphere, and it can hold objects of up to three-quarters of a kilogram. You can see it grasping different types of objects, a box, a coffee cup, even when the object in question is not perfectly aligned. One reason it can do that is that there are four cups and it doesn't matter which cup actually seals on the object, so it's fairly robust in that sense. Imagine settings where you have to deploy payloads or recover them, do it fairly quickly, and be relatively robust to errors in position and orientation; these are the kinds of grippers we expect to see there.

Before I summarize, there's another demo that you'll see outdoors, on the autonomy required for locating suitable landing and perching spots, which will be led by Larry Matthies. I just have this slide in here so that when you go outdoors, you remember that these are different approaches to autonomy and different approaches to designing mechanisms for grasping and perching.
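As a quick sanity check on those suction-gripper numbers, here is a minimal sketch. The pressure differential and holding force are the figures quoted above; the cup geometry that falls out of the arithmetic is an inference, not a measured dimension of the actual gripper.

```python
import math

# Sanity check: can a ~0.75 atm pressure differential hold ~0.75 kg?
delta_p = 0.75 * 101_325   # Pa, three-quarters of an atmosphere
payload = 0.75 * 9.81      # N, weight of three-quarters of a kilogram

# Effective sealed area needed to carry the load on suction alone:
area_needed = payload / delta_p                     # m^2
diameter_mm = 2 * math.sqrt(area_needed / math.pi) * 1000
print(f"Required sealed area: {area_needed * 1e4:.2f} cm^2 "
      f"(a circle about {diameter_mm:.0f} mm across)")
# -> roughly 1 cm^2, about an 11 mm circle, so even a single one of
#    the four cups sealing is plausibly enough; the redundancy is
#    what makes the grasp robust to misalignment.
```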
So, in summary, this is the program that has really led to the first UAVs to perch on different types of surfaces: not just flat surfaces but vertical surfaces, walls and ceilings, indoors and outdoors. We've done work on the visual detection of landing surfaces, which I showed you, and we've shown robust grasping, perching, and landing on a wide range of surfaces. Again, as you probably heard in the morning, the key idea is to base all of this on cameras; that's the only sensing technology that scales as we shrink the platform. And this is an important research challenge, because with one camera, as you know, you don't have depth perception, but by integrating cameras and IMUs and using suitably exciting motions, you can actually recover that third dimension. The final thing I want to point out, and this is a typo on the slide, is that this work integrates results from four research laboratories; actually, it's five research laboratories. It's the same theme showing up in many different projects. As a summary: UAVs can do more than fly. They can land and crawl on walls, roofs, ceilings, the eaves of buildings, tree trunks, and vehicles. I think this is a huge opportunity, and MAST has just scratched the surface and provided the basic S&T for it. Thank you very much. Before we go to the next set, I want to make sure that if there are questions, I can try to answer them. No? Okay.

The next set of demos is on speed and navigating complex environments, and part of it will happen indoors and part outdoors. The first demonstration is something I spoke about earlier this morning. This is our 250-gram platform. As I mentioned, it's equipped with a single downward-facing global-shutter camera, two forward-facing stereo cameras, and an IMU. Our goal is to use platforms like this to enable really aggressive maneuvering: if you make things smaller, you can make them more aggressive, and this vehicle can pull one and a half g's and travel at peak speeds of 5.5 meters per second. The main research challenge is estimation, planning, and control on these computationally constrained and sensor-constrained platforms. As you will see, it needs to be very robust, because the vehicle assumes orientations 90 degrees away from hover, which means the controllers have to be nonlinear and the estimators have to be nonlinear. But without further ado, let me have Giuseppe show the vehicle, which will take off, gather momentum, and then execute a maneuver that allows it to pass through the window at a 90-degree orientation. I should point out that none of the Vicon cameras are being used, although they're on, and it certainly doesn't use GPS, even if you could get a signal inside this building. So what you saw here, and you can clap, is what I believe is the smallest fully autonomous vehicle. Now, truth in advertising: what this vehicle did not do, but can do, is map the environment, because it does have forward-facing stereo. Independently, we've been working in collaboration with Larry Matthies on running stereo algorithms, and hopefully we'll have a paper on that soon, but it wasn't ready in time for this review. In principle, though, it could have built a map of the environment and determined exactly where the window is. What you saw here is autonomous estimation, control, and planning of a complex trajectory that respects the vehicle dynamics and is guaranteed not to collide with the walls.

While we're setting up for the next demo, I want to show you what the vehicle would look like if there were a MAST 2.0. This is something being remotely flown by Yash Mulgaonkar. Yash, can you wave a hand so everybody knows where you are? Okay, so that's his vehicle. The real question is what it takes to go from that 250-gram platform to something like this, which is, what, 10 grams? 12 grams. I think it's only a question of time before we're able to do that. So that was just a little filler before the next demo.

For the next demo, although it's a multi-platform demo, we're basically using the same platform, and the technology is not too different from what you saw earlier, except that now these vehicles are commanded by an operator who specifies the shape of the formation and the motion of that shape. The vehicles will take off and start in a rectangular formation. You'll see them spread now into a circular formation. Now it's the same circle, except on an inclined plane. It'll go back to a rectangular formation on a different inclined plane. And again, the command to form a straight line; this is a skewed straight line. I'm not sure what else is going to happen, but the point is that the operator prescribes shapes and where the shapes need to be. It's as simple as saying: this is the equation of a circle, this is the equation of a rectangle, and here's where I want the shape to be formed. All the estimation, control, and planning is done locally. Once again, I want to remind everybody that the Vicon motion-capture cameras that are on were not used, and neither was GPS; all the estimation is being done onboard. Yes, correct. This is something we've developed over the last three or four years. If you know how to describe a shape, the vehicles basically decide who goes where and when, and that allocation is done online.
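That "who goes where" step is, at its core, a classic assignment problem. Here is a minimal sketch using the Hungarian algorithm via SciPy; this illustrates the idea rather than the team's actual onboard implementation, and the squared-distance cost is an assumption.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Minimal sketch of the "who goes where" step: given current robot
# positions and goal positions sampled from the commanded shape
# (say, a circle), assign each robot to a goal so that the total
# squared travel distance is minimized.
def assign_to_shape(robot_positions, goal_positions):
    robots = np.asarray(robot_positions)
    goals = np.asarray(goal_positions)
    # Cost matrix: squared distance from every robot to every goal.
    cost = ((robots[:, None, :] - goals[None, :, :]) ** 2).sum(axis=2)
    rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm
    return cols  # goal index assigned to each robot

# Illustrative use: four robots moving onto a unit circle.
theta = np.linspace(0, 2 * np.pi, 4, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
robots = np.array([[0.0, 0.0], [1.5, 0.2], [-0.4, 1.1], [0.3, -1.2]])
print(assign_to_shape(robots, circle))
```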
No, nothing needs to be here in the environment, but it does need to have some texture. If we were to do this over a white, shiny surface, nothing would work; it needs some natural texture. So, paradoxically, doing this outside is actually easier than doing it inside, but it's pretty windy outside, which is why we're doing it indoors. This tape is not being used. But you'll notice, again in the spirit of truth and honesty, that there are markers where the vehicles started, so they needed to know their initial positions, because everything is defined with respect to that coordinate system. It's also worth pointing out, as Larry noted, that we're developing the capability for vehicles to recognize each other, and in other work, which we're not demoing, the vehicles actually can see and follow each other. With that capability, they can start from anywhere. Sorry? Yes, it's actually a lot easier to do this with ground platforms, but for some ground platforms, because you have finite-turning-radius constraints, the algorithms are a little more complicated. So they're easier to deploy, but the algorithms actually become slightly more complicated. Okay, so it's my turn to turn things over to Larry.

But before I do that, let me say that there are three demonstrations linked to this. First, you'll see navigation of complex environments again, but outdoors with bigger platforms; Brett mentioned the DARPA FLA project, which is a transition effort, and we will show you that platform or a variant of it. Second, stereo-based obstacle avoidance. And finally, the rooftop detection and landing that I spoke about earlier. And now I'm going to hand it over to Larry.