Hi, I'm Dr. Phil Perconti, I'm the Director of the Army Research Laboratory, and welcome to ARL, What We Learned Today, a podcast where we talk with Army scientists and engineers about the science and technology that will modernize the United States Army and make our soldiers stronger and safer. Today we're going to talk with Phil Osteen, a roboticist in our Vehicle Technology Directorate who is researching how robots adapt to adverse environments. Phil, welcome to the podcast. And before we start, I would like to say that Phil is a roboticist, not a robot. That's correct. No, I'm not a robot. That is a fact. Well done. So Phil, tell me a little bit about yourself. How long have you been at ARL? Well, I started as a contractor in 2009, and I got brought on board as a civilian a few years ago. I went to the University of Florida, where I got my master's in mechanical engineering, and I was very fortunate, because I wasn't quite sure exactly what I was going to do, but I just so happened to be finishing up my undergrad right around the time the DARPA Urban Challenge hit, and a TA who I owe a lot to, Steve, thank you, Steve, said, come join this team. There's this Urban Challenge happening. I didn't quite know what that was, but it was perfect timing for me, and from the moment he said we were going to be working on robots and really interesting problems, I was hooked. So you were in Gainesville? I was in Gainesville, Florida, that's right. So what's the difference between living in Gainesville and living in Aberdeen? I actually live in Baltimore, in the city, in Canton. Oh, Canton, nice. It's in the heart of the city. I love it. I love it there. It's walkable to all kinds of things. The difference between Gainesville and Canton is night and day. Gainesville's a college town, and that's it. I mean, there's not all that much industry there, although I think there's room for growth of startups and things like that, because we do have very good engineering departments. But living in the city is just a whole different thing. There's good and bad, but I really do enjoy the city. So my wife and I just moved south of Baltimore. We live in Severn, Maryland. But we lived in northern Virginia for a long time, probably longer than you've been alive. And we went to Canton and we were like, oh man, this is a nice little part of town. Yeah, I love it. What's the best place to eat in Canton? We're off topic, but that's okay. I want to know what the best place to eat in Canton is. I mean, most people would say Mama's on the Half Shell; they have just phenomenal oysters. If you like oysters, and that's a very divisive issue, I think, then that's a great place to eat. But there are new places cropping up all the time that I think are amazing. My favorite was one right across the street from me called the Yellow Dog, but unfortunately it is no longer with us. I'll check out some oysters there. Yeah. So, what are you working on? I think my high-level goals are the same as many people's in the DoD. I want to get intelligent robotic systems into the field as soon as possible, because being a soldier is the most dangerous profession on earth, and I think it's our responsibility to think of ways to make that job a little bit safer. And I think that putting autonomous systems in certain situations that are otherwise very dangerous, instead of a soldier, is a great way to make their job safer. Right, so move soldiers out of harm's way with a robotic platform. Right.
So what does that mean, though, right? If I put an autonomous platform on the battlefield, somebody has to control it. Well, that's the trick: I think the battlefield is such a chaotic and uncontrolled environment that the robotic systems have to be able to make some decisions on their own, right? Because you don't necessarily want to consume another soldier's attention completely by just tele-operating a robot. And even if you were okay with that, there's only so much you can do to control a robot from a remote location. So the trick is to get the robot to actually make some of these decisions for itself. So what are the big open questions that you're working on? Specifically, I'm interested in how the robot sees and understands its environment, in particular when things go south, right, which they will on the battlefield. And what happens to sensor data when either a sensor malfunctions or the environment doesn't support the use of a sensor, for example. I give an analogy: if you have a soldier doing a room-clearing or building-clearing operation, that could be a dangerous mission, right? Because you don't know what's around the corner, and there could be an ambush or some other kind of threat. So in order for a robotic system to do that, it would need to be able to make decisions on the fly the way a soldier would. For example, if the lights go out suddenly, does a soldier just sit around and do nothing? No, they're going to use whatever sensors are available to them to regain situational awareness. That could be trying to feel their way through the room, hopefully not. Hopefully they have night vision goggles, and they can put those on. In any case, the soldier is adapting to the environment on the ground and using whatever sensors they have to regain situational awareness. So is there a specific set of questions that you're working on? Right. Yeah, I think there are three things that I'm most interested in, from the dealing-with-adversity point of view, for sensing and perception. One is what happens when a sensor just completely malfunctions? You know, we don't want that single point of failure in our autonomy packages. And I think in a lot of cases, even today, if that situation arose and the lights went out and the robot was relying on its camera too much, it might not be able to perform its mission. I've often thought that the easiest way to mess up a robot on the battlefield would be to give a bunch of kids peanut butter. That's exactly right. Just have them go up to the optics and rub peanut butter all over them. So that's actually one of the things I'm interested in as well, not giving kids peanut butter, but that's exactly the right idea. I kind of think of it from a funny point of view, just picturing a bunch of kids running out to stop a robot with peanut butter. But there are other ways you could do it as well. Any kind of oil or grease or even dirt, anything that gets on an optic or a lens, can really foul up an autonomous system. And so outside of just a room clearing at the front line of battle, let's say you had a robot doing a patrol. It's just on a normal mission, there's not any active conflict, but it's interacting with people. If you look at the news right now, there are plenty of stories of researchers putting robots amongst people, and the robots are completely benign.
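To make the lights-out scenario concrete, here is a minimal Python sketch of the fallback behavior being described: a perception stack that drops a failed sensor and re-ranks whatever healthy sensors remain, the way a soldier reaches for night vision goggles when the room goes dark. The sensor names, priorities, and hand-set health flag are illustrative assumptions, not ARL's actual autonomy software.

```python
# Minimal sketch of sensor fallback; sensor names/priorities are assumed.
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    priority: int       # lower = preferred under normal conditions
    healthy: bool = True

def usable_sensors(sensors):
    """Return the healthy sensors, best first."""
    return sorted((s for s in sensors if s.healthy), key=lambda s: s.priority)

sensors = [
    Sensor("rgb_camera", priority=0),
    Sensor("lidar", priority=1),
    Sensor("thermal_camera", priority=2),
]

# The lights go out (or peanut butter hits the lens): the camera fails.
sensors[0].healthy = False

print("Perceiving with:", [s.name for s in usable_sensors(sensors)])
# -> Perceiving with: ['lidar', 'thermal_camera']
```

In a fielded system the `healthy` flag would come from a runtime monitor rather than being set by hand, but the control flow is the point: keep operating with whatever sensing is left.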
They're doing mall security, or they're trying to hitchhike their way across the U.S. And they don't make it. Because psychologically it's a different thing: it's not as ethically dubious to mess with a robot. Exactly, people kind of like the ability to mess with it. Part of it is just testing to see how good this thing really is. How good are you really? Let's just say you put the lens cap on the camera. What happens then? So I think that even outside the front lines of battle, you're going to have to deal with people, and they're not necessarily enemy combatants. They're just kids being kids, right? So are you on the sensor side or on the perception side? That's a good question. I would say that I'm on the perception side. I take whatever sensors I can get my hands on and try to make the most sense of the environment. I do work with people in the sensors directorate as well, though. So do you think that we will ever get away from using laser radars? That's an interesting question. It's actually in the news lately, because Elon Musk is saying things like, they're not useful, for example, right? That's because he's trying to sell a low-cost RF solution. I completely agree with you. But from our perspective? Well, the problem from our perspective is that they're active, right? They produce a signature. I'm more of the opinion that, like I said, I want to get these things into the field as fast as possible, and if that means using lasers, then that's okay for now, right? But we do need to try to get away from them if we can. But look, Mother Nature gives us inspiration. We have two eyes, and we don't need laser beams shooting out of our heads to see the environment. That's true. We also have a big brain behind those eyes. We have an amazing big brain. Several billion years of evolution, right? So the implication that, oh, it's no big deal, in the next three to five years we won't need laser rangefinders, is also, I think, implicitly assuming that we're just going to solve the intelligence problem fundamentally. The AI problem. But also, you know, bats do use, not lasers, but echolocation. They're emitting signals and using time-of-flight principles, which are the same principles we work with in laser rangefinders. So I think it's perfectly plausible. You know, people don't have eyes in the back of their heads either, but why wouldn't we want to put cameras behind the robot, to see all around it? Right. So tell me, what's your specific project? It's called adaptive multimodal fusion. That's the one we're talking about now. I'm on a few different projects; I'm involved in the Robotics Collaborative Technology Alliance as well. We have some previous work in multi-sensor calibration. I mean, one very practical challenge of getting these robots and autonomy stacks to work is that they need to use a lot of different types of sensors. They need cameras, and they need laser rangefinders, as we said, inertial units, all kinds of other things. It's one of these garbage-in, garbage-out principles: you need to have good input to these autonomy algorithms in order to get the best performance. And one way to get the best input is to understand where the sensors are with respect to each other. We call that a calibration problem. So I've worked on multi-sensor calibration in the past, and I'm continuing to work on that and transitioning it to GVSC.
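As a rough illustration of what that calibration problem buys you, here is a minimal Python sketch with made-up extrinsics: a rigid-body transform (rotation R, translation t) maps lidar points into the camera's frame so the two sensors can be fused, and the same time-of-flight arithmetic mentioned above turns an echo delay into a range. None of these numbers come from real hardware.

```python
# Minimal sketch: why knowing where sensors sit relative to each other matters.
import numpy as np

def lidar_to_camera(points_lidar, R, t):
    """Map Nx3 lidar points into the camera frame: p_cam = R @ p_lidar + t."""
    return points_lidar @ R.T + t

# Assumed extrinsics: axes aligned, lidar mounted 0.2 m below the camera.
R = np.eye(3)
t = np.array([0.0, 0.0, -0.2])

points_lidar = np.array([
    [5.0, 0.0, 0.0],   # a return 5 m straight ahead
    [2.0, 1.0, 0.5],   # a return off to the side
])
print(lidar_to_camera(points_lidar, R, t))

# The time-of-flight principle that bats and laser rangefinders share:
# range = (signal speed * round-trip time) / 2.
c = 3.0e8             # m/s, speed of light
round_trip = 33.4e-9  # s, assumed echo delay
print("range (m):", c * round_trip / 2)   # about 5 m
```

If the extrinsics are wrong, the lidar returns land on the wrong camera pixels, which is exactly the garbage-in, garbage-out failure described above.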
GVSC is... The Ground Vehicle Systems Center. It's formerly known as TARDEC. Formerly known as TARDEC, the Tank Automotive Research, Development and Engineering Center. That's right. That's the name everybody knows. So they're frequently a transition partner. I mean, the end user of our research at the end of the day is the soldier, but to get it to the soldier we need to robustify some of our algorithms at the basic research level. GVSC is often our partner in particular. Right. Well, I'm interested to learn a little more, because much of what you've talked about has to do with resilience. Right. Particularly with sensors. If you look at the commercial cameras that are in cell phones, they're probably the cheapest kind of sensor you could put on a robot, but they're very susceptible to changes in the environment. Yep. Nighttime, daytime, shadows, cloud conditions. Bright sun. Right. So is that part of what you're working on? Yes, it is. First off, I think that we're going to have different levels of sensor quality on our different platforms, because there are going to be a lot of different types of platforms. Some can't necessarily support the type of computation you might hope for, or the best possible sensors, due to size, weight, and power constraints, simply because they weigh too much. But every sensor has strengths and weaknesses, and we have to try to leverage the strengths, using context to determine that some other sensor might not be reliable right now. And the trick is that you have to do it on the fly. That's the real hard research challenge. Right. To determine that a sensor's data is suddenly of lower quality, for some reason, and then adapt accordingly (there's a sketch of this idea below). Very good. So much of what you're doing is part of an essential research program? That's right. Which activity? AI for Maneuver and Mobility. That's right, AIMM. And you're on the mobility side, for sure. Right. Vehicle Technology Directorate. That's right. Right. Are you on Spesutie Island? That's right. I work on the island. Okay. Talk a little bit about Spesutie Island. When I first got here, I thought it was just some kind of magical place, because you're going to work on the island, which is part of the Aberdeen Proving Ground, just so everybody knows. Right. It's on Aberdeen Proving Ground. And the first time I was driven out to work, I was actually blown away. It's kind of like working on a nature preserve. I mean, every single day I'm treated to osprey with fish in their talons and all kinds of animals moving around. At night, it's a little bit dicey, because you don't know what animals are out there. But it's a really great location, I think, to perform some experiments, because there's really not that much traffic going through the area. So we have outdoor ranges and indoor laboratories on the island? That's right. On the island? Right. I work in a group of buildings that are clustered together. They're large, and some have indoor motion capture systems, for example, just to get ground truth for aerial vehicles and ground vehicles. But we also have an outdoor flight field right outside of the building. So the value added for being able to just walk outside and test your algorithms on a platform right outside your back door is high.
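Returning to the on-the-fly adaptation mentioned above, here is the promised sketch, a minimal Python illustration of adaptive fusion: each sensor's estimate is weighted by a runtime confidence score, so a camera that suddenly goes dark stops dominating the fused answer. The quality test and every number in it are illustrative assumptions, not the project's actual algorithm.

```python
# Minimal sketch of adaptive multimodal fusion with assumed numbers.
import numpy as np

def camera_confidence(image):
    """Crude runtime quality check: a nearly black or washed-out image
    suggests the camera can't be trusted right now."""
    mean = image.mean()
    return float(np.clip(1.0 - abs(mean - 128.0) / 128.0, 0.05, 1.0))

def fuse(estimates, confidences):
    """Confidence-weighted average of per-sensor estimates."""
    w = np.asarray(confidences, dtype=float)
    return float(np.dot(w, estimates) / w.sum())

dark_image = np.full((480, 640), 4, dtype=np.uint8)  # the lights just went out
conf_cam = camera_confidence(dark_image)             # ~0.05: barely trusted
conf_lidar = 0.9                                     # lidar unaffected by dark

# Each sensor's estimate of range to an obstacle, in meters.
print(fuse(estimates=[12.0, 5.1], confidences=[conf_cam, conf_lidar]))
# Result sits near the lidar's 5.1 m, not the blinded camera's guess.
```

Real systems use far richer quality cues than mean image brightness, but the principle is the one described: detect degraded data on the fly and re-weight the sensors accordingly.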
And we also have a small paved loop, so we can test some on-road and off-road ground vehicle navigation. Excellent. Well, I want to ask you, because you kind of hinted at it, but I want to be a little more specific with a question regarding the research that the Army Research Laboratory does in autonomy and robotics, versus what's going on in the private sector and in the academic community. How do we differentiate the need for what we're doing from what's happening in those other places, and then how do we collaborate? Right. I think that's a great question. First off, there's no doubt that there are many different large companies racing to win the battle to build the driverless car. They see it as a trillion-dollar industry. And so there's autonomy that goes into that, and some of the algorithms are algorithms that we use as well. Right. But imagine a driverless car that's driving and suddenly loses GPS, and then suddenly loses its map, this really fine-grained survey of the environment it's been given. And then, for good measure, throw in a camera failure; the camera just doesn't work. These things are actually going to happen, because there's a famous story from when Google was scaling up. Right. They had this search algorithm and it was working great, and then suddenly they scaled to some size at which it stopped working, and they couldn't figure out why. What really happened was that memory corruption, or a server failing, is a very low-probability event, but when you scale to the level that Google was starting to scale to, suddenly it becomes a likely event over the course of all your servers. And that's going to happen with driverless cars, and, as we scale the number of systems that we bring into the field in the DoD, sensors will just fail, for example. And so my point is that they have assumptions about their environment that we can't make. We're working on the problem ahead of time, because we assume that the battlefield is going to be chaotic and that things will go poorly at times. Whereas in general, still, right now, all the driverless vehicle technology that I know of is working in relatively controlled environments. Well, I hear you. Yeah. I don't want to put you on the spot on the podcast. However, there's a burning question that I have with all of this, because if you've ever driven in Boston, coming out of Logan, you get into that tunnel system, right? I was there a week ago. So you get into that tunnel system and you're trying to use a GPS-based map. It's a mess. It's chaos. You get lost at every turn. It's chaos. So I think it becomes a liability question for the private sector and the commercial business to have a driverless car that can't survive in an environment like that. I agree. There's a very specific question of what happens to these cars when you lose GPS, when the map goes south, right? They're going to have to deal with that same question at some level. I agree. I think that they also recognize this, but I'm not sure what they're doing in their internal R&D labs necessarily. That's something that we should, you know, learn more about, right? But I suspect that they're trying to get the autonomy working as well as they can in their pristine environments first, and then trying to move to more challenging conditions, let's say GPS is out. But that doesn't necessarily account for all the other sensors that could fail as well. Right.
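That scaling argument is easy to make concrete with a back-of-the-envelope calculation. With assumed numbers, a failure that is "very low probability" for any one robot becomes near-certain somewhere across a fleet:

```python
# Minimal sketch of rare failures becoming routine at scale (assumed numbers).
p_fail = 1e-4   # chance a given sensor fails during one mission (assumption)
fleet = 10_000  # number of deployed systems (assumption)

# Probability that at least one sensor fails somewhere in the fleet.
p_any = 1 - (1 - p_fail) ** fleet
print(f"P(at least one failure): {p_any:.2f}")   # -> about 0.63
```

This is the same arithmetic behind the Google story: multiply a tiny per-unit probability across enough units and it stops being tiny.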
And so, again, in our environment, the battlefield, we don't have the luxury of saying, well, we can kind of use it in the really pristine environments. You know, things will go wrong. I think also they will always have a map. That's right. And you might be in an environment, in a military scenario, where no map in the world matches what's actually on the ground. We had a map, right, but you're starting in a chaotic environment. The map has changed suddenly. Right. So in that sense, I think, you know, that's a real differentiator. Right. And the research that you're doing is really out in front of what's happening in the private sector. Even though there's so much investment in that space, I think ARL has been able to carve a niche for itself. Right. And the Army in general, everyone associated with this kind of work in the Army, has carved a niche, because we've hit on the important problem, and you've hit on what the real open questions are. So I'm really excited that your community in particular has gotten to that point. Yeah. Because it's always a challenge. We get asked this question all the time. Right. And I think for someone like you, who's kind of new to the Army and new to the field, you're going to get these kinds of questions from the people that are making decisions about where to resource scientific endeavors, right? Exactly. So it's important for us to be able to say, here's why it's different, and here's why you have to do it. Right. I mean, I think commercial will have to do it at some point, as I said, but I don't think that they're thinking about it as early as we are. Very good. Well, it's always a pleasure to talk to another Phil. Phils have to stick together. That's right. There are only so many good ones out there, right? There are only so many Phils, period. What else do you want to talk about? Well, I just got back from Boston, actually, where I was in a meeting of the autonomy community of interest, which is cross-service. All the different services under the DoD are trying to focus on and address the most difficult problems in autonomy. And in this particular case, the focus was how to engage academia and industry when there might be some reservations about working on DoD applications, which has come up in the news lately. And I think the message they were trying to spread, which I agree with, is that we have to address these concerns. We have to at least have a dialogue, and maybe not come to consensus, but achieve some version of mutual understanding, so that people in industry have an idea of what our problem set is and what our solution approaches are, right? And we have an understanding, from their perspective, of what the ethical and legal and moral questions are that we need to think about, right? We can't avoid that conversation, because I would say that, you know, other states don't necessarily have to worry about that. They can recruit every single person they want to work on these problems, in whichever way they choose to work on them. You know, we have the luxury of living in the country that we do, and some great minds work at ARL and some work outside of it. We need to be able to at least get them to understand where we're coming from and start a dialogue, so that we can effectively get the best autonomy out there. Those questions are emerging, and I think you're right. We have to have a discussion about the application, really, of autonomous systems for the Army's sake.
And I think it's very important for us to tell anybody associated with this question that our goal is not to put autonomous platforms on the battlefield that can make decisions about launching weapons. Sure, absolutely. That is something that we perhaps will never do as a country. Right. However, our adversaries may, in fact. They may. Right. And so we have to be conscious of that. Right. But also, there will always be, from a U.S. perspective, a soldier in the loop. Yes. Or on the loop, as they say. Right. With regard to autonomous platforms. But you're absolutely right. I think that conversation does have to happen. And I think that, honestly, this type of forum, like a podcast, is a great place to have it, too. We just need people to understand that we're thinking very hard about this. We're not taking these concerns lightly. Right. And there was a very large meeting, as I said, that I was just at, that was really focusing on exactly this topic. Well, maybe in the future we'll have a podcast on this very topic, Phil. I would suggest that. You can lead it. I think that's a great idea. Great. Careful what you wish for. All right. Anything you want to ask me? Oh, my goodness. I guess, you know, the Army Futures Command is a generational shift in the way that we do our work. What have you seen that gives you the most hope about our ability to be as modern as we possibly can? Well, you know, I have tried for a long time in my career, and my career is getting long in the tooth, Phil, to bring scientists and engineers closer to the soldiers who develop war-fighting concepts. So under Army Futures Command, we've stood up what's known as the Futures and Concepts Center. Okay. And they're responsible for developing the new war-fighting concept, which is multi-domain operations. Okay. That concept will be used, or is being used, to modernize the Army, right? So we talked about what's happening with Russia and in Europe. We've talked about what's happening with China and in North Korea and the peninsula. Right. We've talked about modernization a little bit from an autonomy perspective. But what we didn't talk about was how war-fighting has gone from just the air and the land to what's now called the five domains of warfare. Right. Air, land, sea, space, and cyberspace. Right. Which is a tremendously complicated space. When you think about how autonomy, artificial intelligence, and machine learning all play across those five domains, it's a very, very interesting problem, not only from an open research question point of view, but from a modernization point of view. Right. What we want to do, from a research perspective and from a science and engineering perspective, is be engaged with the people who are developing those kinds of concepts from the very day they're initiated. What typically has happened is you have soldiers who put concepts together. Once the concepts are developed and formed, then they ask the scientists and engineers, hey, what do you think, and is this technology going to show up or not? Right. That's the way we used to do it. Right. Now we're doing it together. This is really exciting, because it teaches them what physically rooted technology really is and when it will appear, and it teaches us what the war-fighting concepts really are and how they're really going to use technology. Right. Because if we don't talk to soldiers, we're just making it up. Exactly.
I mean, we both have slightly different detailed knowledge, but we all have the same objective. That's right. Obviously. And I think that it is very important to get us in the same room, so that we can start to learn to speak the same language. That's right. To me, this is the most exciting thing, and it's really a new part of ARL's mission: to be able to bring our scientific understanding, the things that you articulated, even this discussion of ethics, right, into the discussion about new concepts. Right. Because what's so interesting is that this concept of multi-domain operations is the driving force behind how the Army is going to modernize. Okay. And the more we can get our science and technology into that conversation, the more effect what you're doing today will have on what the Army of the future looks like. It's extraordinarily powerful. So I'm totally, totally just stoked. Yeah. Obviously I've noticed some of these changes through more interaction with men and women in uniform. Right. Yeah. Well, you know, that's the thing, right? How many soldiers have you ever talked to? Well, I was lucky enough to take a greening course. I was very lucky; that actually, I think, maybe shouldn't be mandatory, but should be strongly encouraged. Yeah. Because, you know, we develop these new gadgets and whatnot, and we say they're going to do all this great stuff, and they say, how much is it going to weigh? Right. And then when you put on the gear, you really realize exactly why they asked that question. Absolutely right. Every single ounce counts. And also, just being able to spend a full week with them and really get to know the non-commissioned officers and others, right? Yeah. Incredible soldiers. Yeah. Absolutely. And the more we can interact with them, the better. So I'm all for that. And there's also been this notion, and a little bit of confusion, I think, among the staff: well, we do long-term research, we're focused on 2035 and beyond, so are you pulling the laboratory into the near term to work on things that are supposed to be developed three to five years from now, and aren't we just confusing people by doing this? And my answer to that is absolutely not. You can think about technology; we just had a discussion about what autonomous platforms will look like 25, 30 years from now. But those ideas about what they should look like, and how soldiers will interact with them based on what we've learned today, need to be articulated today. That's right. To the people who are forming the concepts for the future. And that's what I want everybody to understand, because it's so valuable to have those conversations and to put what you're learning into what I call a form of consumption that soldiers can take and use, which is much different than writing a paper for an academic journal. Absolutely. Right. And that's the art that we have to teach ourselves, because this is something new for us. So what I would say to everyone who's listening is, don't be apprehensive about this; embrace it, because it really will take what you're doing and up your game with regard to the ability to give it to someone who can make use of it. That's the goal. I mean, the way I would measure my work as successful is if it were ever to save a life, for example. Right. We do this because we want to serve our country.
Right. You know, it's one of the major reasons anybody comes to work for the Army, because it's not really for the money. I mean, the challenges are great. The problems are great. The science is great. That's right. The people are great. But ultimately, it's service to the country in a way that is very, very meaningful, I think. That's right. So thank you for your service. Thank you, too. Thanks for joining us for ARL, What We Learned Today. In upcoming episodes, we'll continue the discussion about the underpinning research that will build the Army of the future. Please consider listening, liking, and subscribing. For the Army Research Laboratory, I'm Dr. Phil Perconti. Thanks for listening.