Hi, I'm Tracy Dean with the U.S. Army Combat Capabilities Development Command's Army Research Laboratory. Welcome to What We Learned Today, a podcast where we talk with Army scientists and engineers about the science and technology that will modernize the United States Army and make our soldiers stronger and safer. Today I talk with Dr. Ammar Marate, a biomedical engineer who oversees the lab's essential research program called Human Autonomy Teaming at Aberdeen Proving Ground, Maryland. Ammar, welcome to the podcast. Please tell us about yourself. Sure. My name is Ammar Marate. I am the program manager for the Human Autonomy Teaming Essential Research Program at the Combat Capabilities Development Command Army Research Laboratory. I have been with the lab as a civilian employee for a little over six years now; before that I did a postdoc with the laboratory for two years, and I earned my Ph.D. in biomedical engineering from Case Western Reserve University in 2011. Sounds like an impressive resume. I'm honored. So I understand your research program involves artificial intelligence and how it's going to change the Army of the future. Tell me about that. Sure. The purpose of HAT is really to think about how soldiers and autonomous or AI-based technologies will work together in the future. If we think about what the future battlefield is likely to look like, we're entering a society where autonomous and AI-based technologies are becoming more and more prevalent. We see it in the commercial market, and we're starting to see those types of technologies move toward military use. As those technologies grow and develop over the next several years, we would expect an influx of them on the battlefield, not just for the United States but across the world.
And so as that technology becomes available and more capable, the expectation is that the technology-based overmatch we currently have over some of our adversaries is going to start shrinking. While we can't necessarily say that we will have more advanced technology in terms of the actual AI or the actual equipment that our soldiers have, the way we preserve that overmatch and preserve our capabilities as the United States Army is to start looking at how we effectively team our soldiers with those technologies for maximum effect. The focus for the ERP is really to look at how we take our soldiers, who are very capable, very adaptable, and very skilled as warfighters, and couple them with AI-based or autonomous technologies that can process large volumes of data, pull in information from lots of different sources, and provide decisions and suggestions at a pace that is unprecedented in history. How do you put those two together to maximize the effect of the overall force that we can deploy on the battlefield? If we do that effectively, we can regain the overmatch to outperform and out-adapt the enemy on the future battlefield. What I want to do is clarify what you are speaking of when you say human autonomy teaming. And why is that important to, let's say, the Army and the soldier? Sure. Human autonomy teaming is important because as these AI-based technologies grow and mature, we expect them to have lots of capabilities to process data and bring information together. But in the end, they are still rules-based systems. They work on data that they've seen before, for a specific application that they've been designed for. And when you change the environment they operate in, or you change the way they're being used, AI-based solutions tend to fall apart.
Our soldiers, on the other hand, are very adaptable and can move from environment to environment or from problem to problem and adjust the way they behave and how they work. But they tend to be slower at processing data, simply because they are not computers themselves. So you have limitations and strengths of both soldiers and AI systems. If we're able to team them effectively, you can start capitalizing on the unique strengths and contributions of both the soldiers and the AI-based technologies, and mitigate against some of the weaknesses they may have individually, to put forward a more robust and stronger team in the end. With those unique strengths, contributions, and weaknesses in mind, what are the specific problems that you're trying to solve with this particular essential research program? Going back to that vision I described of what the future battlefield will look like, some of the challenges that our soldiers, or our soldier-autonomy teams, will face are really that the pace of battle is going to increase rapidly. As AI is deployed on the battlefield, one of the benefits is that it's able to process lots of data rapidly and provide information to the soldiers to make decisions more quickly. Now, if we assume that both our forces and our adversaries' forces are equipped with similar types of AI-based technologies, the pace of battle will naturally increase as decisions are made faster and as situations unfold with greater speed. So one of the challenges that creates is that you have to be able to understand what's happening faster than you ever have before. How do you maintain awareness across the battlefield of events that are unfolding at a rapid pace?
So that's challenge number one: really understanding the battlefield, understanding the situation that is unfolding as we're interacting with the environment and with the enemy. The second piece is, once you have developed that understanding, how do you coordinate a response to any unexpected actions or to enemy actions? How do you take your team, both soldiers and autonomous technologies, and coordinate their response to whatever the enemy is doing on the battlefield? And the third piece is, as the enemy or the situation starts changing in ways where our AI-based solutions are no longer appropriate for that environment, how do we adapt and modify those technologies or capabilities to be maximally effective in the current situation? So those are the three main problems we're looking at: understanding the world around us as it's changing, coordinating our response to those changes, and finding ways to adapt the AI-based solutions as needed on the battlefield. You mentioned the benefits of making decisions rapidly. What are some of the short-term and long-term plans that you have for human autonomy teaming with that in mind? Over the short term, we're looking at lower echelons, the squad or platoon level, where you have teams of anywhere from, say, nine to 18 soldiers working with AI-based technologies, and we're looking at how those soldiers can interact with those technologies at that level. We're looking at specific applications to the Next Generation Combat Vehicle and to soldier lethality, so dismounted soldiers and how they operate, and trying to identify ways to improve situation awareness, coordination, and the capability to adapt in those specific application spaces in the near term.
Over the long term, building on those initial technologies and capabilities that we provide in those two application spaces, how do we start bridging across multiple domains of operation? The Army is moving toward a concept of operations called multi-domain operations. That is really looking at how you connect dismounted soldiers with soldiers in ground vehicles, with soldiers in helicopters, as well as with long-range precision fires. How do you bring all of that together in a coordinated action? In the long term, what we want to do with human autonomy teaming is take the technologies we're developing at this lower level for ground vehicles and dismounted soldiers and start stitching them together into coordinated action across multiple domains, addressing that MDO challenge the Army is moving toward. That said, how is the structure of this essential research program organized based on the short- and long-term objectives that you want to achieve? We organize the essential research program around very specific deliverables that we have for the Next Generation Combat Vehicle CFT as well as the Soldier Lethality CFT. We've organized a set of seven research teams currently pursuing the research goals I described before: situation awareness, coordination, and adaptation. Those seven research teams are addressing different components of those problems specifically for those application spaces, so they're addressing the near-term challenges we're approaching. For the long-term piece, we started a program called the Strong CRA, which stands for Strengthening Teamwork for Robust Operations in Novel Groups.
This is really addressing the challenge of how you fundamentally move from teaming concepts focused on humans only, or on AI systems only, toward a hybrid approach: how do you effectively team soldiers and AI at a fundamental level? How do you deal with some of the basic science questions of what a good team looks like when soldiers and AI are working together? This collaborative research alliance, which started in 2019, is pulling in work from academic partners in the human sciences, in autonomous technologies, and in related areas, and having them work together to build the scientific foundation for how human autonomy teams should look in the future. That's the backbone of our long-term vision, and we're augmenting it with internal work from some of our fundamental research scientists within the laboratory, who team with those academic researchers to address some of those long-term challenges. Your response basically goes clearly into some of the major projects. This collaborative research alliance is a large one. Are there other examples of major projects that you're focusing on? In the near term, there are the seven major projects that we're focusing on right now, in addition to the collaborative research alliance. The first of them is looking at situation awareness for the Next Generation Combat Vehicle. This is really trying to get into the concept that the CFT has established for that vehicle and ask how we start addressing the challenge of situation awareness across a platoon of combat vehicles. The NGCV CFT is looking to deploy a fleet of six vehicles as a platoon, with only two of those vehicles having soldiers inside of them.
So all six vehicles are controlled from two of the six. How do you collect that information and enable the soldier crew to understand what's happening across this wide stretch of the battlefield? Consequently, how do they respond? How do they coordinate action based on the information they're pulling in from these different vehicles, and how do they coordinate a response to those actions? Those major projects are really focused on that effort for NGCV. For soldier lethality, we have an effort that's looking at a dismounted soldier and the interaction between that soldier and their weapon. The idea for that project comes from the plans for integrating aided target recognition capability, or AiTR, into the weapon optics for a rifle. With that integration of aided target recognition capabilities, this program is looking at how we effectively pair that soldier with that AiTR algorithm in order to effectively engage targets on the battlefield in a dynamic environment. Again, some of the challenges are: how do you perceive those threats with the assistance of that AiTR, given that the AiTR is not going to be perfect? It's going to make some errors. How do you make sure that, even while the algorithm might make mistakes, the soldier-AiTR team can still function effectively in that environment? Then, when the algorithm is making mistakes, can you allow that soldier to adjust and adapt to a novel or emerging threat that might be different from what the algorithm was trained on? And really, how do you enable that effective partnering, the effective communication of the environment around them, between that soldier and that AI-based technology? Sounds like you have a bit of a challenge. Definitely.
I think this is an enormous challenge, because there's a lot of research that needs to be done to understand what it is about the environment that needs to be sensed and perceived, and how we convey that information to a computer or AI-based system so that the soldier and the AI are working together on the same page as they move out through the battlefield. Understood. But what about the individuals who have concerns about this? How are you addressing the concerns of those folks who may believe this is unsafe? That's a great question, because I think that's a primary concern that we have as we approach some of this technology. The big concern is really the AI solution itself. You don't want a situation where an AI-based system or computer is responsible for decisions of lethality or for interjecting force in a situation. The Army has a policy right now that the human has to be in the loop, making those decisions on when to shoot, when not to shoot, or how to interact with an enemy. Much of the reasoning behind the approaches we're taking is really to make sure that when a decision is made, it's the right decision, and that the AI-based technologies aren't left to reason on their own. The soldiers are able to provide their own reasoning, interject it, and retrain those AI in the field to make them act appropriately in these situations. We think that by allowing soldiers to drive the behaviors of those AI-based technologies, you're going to see more appropriate, more ethical behaviors on the part of the soldier-AI team as a whole, and reduce the risk of some of the negative consequences that could happen if they weren't given that capability. Artificial intelligence is an exciting topic to some; it repels others and throws them into a bucket of fear. The idea is that the AI is not perfect.
Any AI you design is going to work well in some conditions and not in others, right? So as the situation unfolds, whether on the battlefield or in mission command or wherever the action is happening at that point in time, how do you make sure that if the AI isn't working, we have the opportunity to adjust it in the field, and not go through an 18-month development cycle where we send it back to the manufacturer or the programming team at a defense contractor to reprogram or retrain that AI? Instead, we want to give the soldiers the capability to adjust or adapt it to the current situation and apply it back in the field almost immediately. That could be within a mission or between missions, but either way it's a rapid response and a rapid change to that AI-based system, making it more applicable to the situation at hand. The big fundamental challenge we face for AI is that we don't have sufficient data to train it for every single situation we might encounter. We can't predict what situations we're going to encounter, and we don't have data for those different types of situations. So we're creating AI-based technologies for the general situations we know we're likely to face, but every situation is unique. Adapting the AI for that unique situation is a challenge we have to address, and we think our soldiers are best equipped to do that, because they know what needs to be done and how they want to do it. That is a way to make these AI technologies more relevant for the specific situations the soldiers will come across. You spoke earlier about some of the different collaborative efforts that are involved with your essential research program. Can you speak more about the partnerships? Sure.
We have partnerships with both academic institutions and other government organizations. To start with the academic institutions: we talked earlier about the Strong CRA, the collaborative research alliance. That brings in new academic partners every year to address specific research challenges, and then a few of those partners continue on with us for a three-year effort to deal with those challenges in more depth. So it's an evolving partnership. Currently we have involvement from institutions like Carnegie Mellon, Northwestern University, MIT, Northeastern University, and a number of others across the country. Those allow us to tap some of the best minds in the country, to have them think about our problems and work with us to solve some of the problems we're looking at. In addition to the Strong CRA, we have other targeted engagements with institutions like Georgia Tech, Texas A&M, and the University of Texas, also working on specific challenges. Some of these challenges are more geared toward our work for the Next Generation Combat Vehicle and for soldier lethality, more of the applied research. In the same way, they're helping us address these challenges by taking some of their expertise and combining it with our internal expertise to find robust solutions for human autonomy teaming issues. In addition to those academic collaborators, we have strong partnerships with other government agencies. Within CCDC, we work very closely with the Ground Vehicle Systems Center on new technologies and capabilities for the Next Generation Combat Vehicle. The Ground Vehicle Systems Center has a program called Crew Optimization and Augmentation Technologies that takes what we develop within human autonomy teaming, ruggedizes it, and integrates it into a vehicle platform so that it can be demonstrated in field exercises and higher-level Army events as these technologies mature.
On the soldier lethality side, we have a collaborative relationship with the Armament Center up at Picatinny Arsenal in New Jersey, much the same way we interact with the Ground Vehicle Systems Center. With the Armament Center, our intelligent squad weapon program within the soldier lethality portfolio develops knowledge products and technology concepts that we can then transition to our partners at the Armament Center for ruggedization and integration onto more advanced weapons platforms that can be demonstrated. Those types of partnerships not only help us expand our base of fundamental research through the academic collaborations, but also give us a way to get some of these technologies into the hands of soldiers and get feedback on how well they work in field exercises. What feedback are we getting from the soldiers on how the concepts we're thinking about might fit the way they want to use them? These partnerships are really critical for us to understand both what the need is from the soldiers and what the art of the possible is from the academic institutions. So you talk about the feedback that you receive from some of the soldiers. Are you able to discuss any of that? Sure. A lot of the feedback that we've gotten from the soldiers has been very positive. They've been excited about some of the technologies that we're putting forward. They've also voiced some concerns: how do we operate this in the future? How would we do this type of task with these future technologies? That's opened our eyes to some of the challenges they face with these technologies and, in some cases, expanded the scope of the work we're trying to do.
In other cases, it's allowed us to see some of the additional challenges that are out there and to address those concerns for the soldiers, so that when these technologies make their way through the pipeline and into their hands, they're more effective and more in line with what the soldiers are expecting as the technology development cycle moves forward. Great. You've talked a lot about some of the things you've done in the past. So I'm wondering, are there any products that you find especially promising at this stage of the research program? Well, this one's a challenging question. It's like having to pick your favorite child, to some extent. It's a fair question, though. We have a lot of promising efforts going on, and I'll highlight several that go back to the three fundamental problems we face. For the big challenge of understanding the environment and the actions on the battlefield, there are a number of efforts underway on how you perceive that environment and how you share that information with the AI-based technologies, and I think those are very promising. There are a number of ways to get that information, both through active interaction with the soldier and through passive interaction, that I think can certainly change the way we field and use AI systems in the future. In the second area, where we're looking at coordination of action, there are a number of promising approaches. But the most interesting to me is looking at ways we can train an expert system, an AI-based system, to behave as a commander would, to make decisions as a soldier would, and to learn from the soldier's own decision-making process, so that the AI is learning all of the factors that go into that decision.
It can then start to make those decisions, or make suggestions for specific courses of action, in order to increase the speed at which we can make those decisions. The third area is the ability to adapt and change AI behavior based on soldier interaction. There, we've got a number of projects looking at using intelligent after-action reviews, essentially replays of missions or exercises, to allow soldiers to provide input and guide how the AI should have behaved, or to take examples of good and bad behavior and use that information to improve functionality for future missions or exercises. So those are three broad areas where I think we have exciting products coming over the next several years, and I think they'll have a huge impact on the soldier as they mature. So it sounds like, in regards to promising efforts, your favorite child is the one where you're looking for AI to make decisions as a soldier would. I think it's safe to say the question is: can we get the AI to start learning how to behave and how to consider the different options much the same way a soldier would, and do it potentially faster, so that it can offer up those courses of action to the commander? We're not necessarily saying the AI is going to make those decisions, but that it will suggest courses of action to the commander so the commander can make the decision faster and more effectively during a battle. So if you're successful, what does this mean for the future soldier? This is a fundamental change for the future soldier, in that they're not necessarily always going to be the ones doing every action. Instead, they'll be working with an AI system or some type of technology that might be doing the action on their behalf, at their direction or under their guidance.
I like to make the analogy that rather than being an active player on the field, you're almost a combination of player and coach. You're doing some things, but now you're also teaching the AI, training the AI, adapting the AI, and making it more functional so that it acts more appropriately and more ethically, and so it can take on some of the tasks the soldiers would have done. This change in role is hugely important because it allows a couple of things. One, it allows us to operate at the rapid speed the future battle is likely to occur at. But it also allows us to employ measures of standoff. Now our soldiers can hopefully stand a little further back in that battle and let those AI-based technologies move forward and assume more of the risk, so that our soldiers are more protected and further back from that frontline effort, but still as involved as they are today, just without all of the risk. I think those are two big benefits if we're able to succeed in making human autonomy teaming a real capability for the Army. So I see your passion, I hear your passion, in every aspect of what you're discussing with human autonomy teaming. With that, tell me what's next. What's the next big discovery in human autonomy teaming? I think the next big area we're starting to look at is creative ways we can pair groups of soldiers with AI to brainstorm new options. If you think about it from a mission-planning standpoint, one of the fundamental challenges is really coming up with how we're going to approach a very complex problem or mission and start coordinating a large-scale movement on a certain objective. One of the big challenges is that right now that is largely a human problem; that creative-thinking space is a human-only problem. AI does not play in that domain effectively yet.
So how do we start bringing some of the power of AI-based technologies and coupling it with the creativity of our soldiers and our mission planners, pairing those two capabilities in an effective way to rapidly create new strategies and new plans on the fly, so that we can rapidly execute new missions and move from one situation to the next? I think that fundamental challenge is something that's on the horizon. We're starting to explore it in a basic research sense, understanding what research needs to be done in order to address it. But if we're successful, I think that takes the human autonomy teaming challenge from a low-level, platoon-level interaction to thinking about mission command across higher echelons, looking at company-level and brigade-level interaction. How do you provide that same capability, but across the much broader team that would need to execute these mission plans? Is there anything else you would like to add? I think we covered most of the high points. We're excited about addressing some of these challenges, and we look forward to getting these technologies and capabilities ready, pushing them out, and doing good for our soldiers. Well, we look forward to your next big discovery, and I'd like to thank you for taking the time to meet with us. We wish you the best in your future research. Thank you. I appreciate it. Well, thanks for joining us for What We Learned Today. In upcoming episodes, we'll continue the discussion about the underpinning research that will build the Army of the future. Please consider liking and subscribing. Science is a journey of discovery, and we're glad you're along for the ride. For the Laboratory's Public Affairs Office, I'm Tracy Dean.