Candace Rondeaux: Hi, I'm Candace Rondeaux, director of the Future Frontlines program at New America, a public intelligence service for next-generation security and democratic resilience. Today we're going to talk about artificial intelligence and the future of war. The way we've often talked about this is killer robots and swarms of drones, but we know that artificial intelligence is an evolving technology with a long future ahead of it, and its impact on the way war is conducted is much more complicated than just robots out of control. Today we're going to talk with Jonathan Horowitz, a legal advisor at the International Committee of the Red Cross, about how the law of war and the future of war intersect with all these evolutions and innovations in artificial intelligence. So Jonathan, what's a good metaphor for how to think about artificial intelligence, and for where it fits with warfare and international law? How do we think about AI and war?

Jonathan Horowitz: Thanks so much for the invitation to the Future Security Forum, Candace. It's great to be with you and to have the opportunity to talk about this issue. To go directly to your question: it's important to think about artificial intelligence as a revolutionary general-purpose technology, akin to something like electricity, where we anticipate seeing it used in every portion of our lives in the future, and that includes the conduct of warfare, from things like procurement, human resources, or patching up problems with computer networks all the way to things that involve the conduct of hostilities. And one reason why I don't want to make an analogy only to a general-purpose technology like electricity is because it's so much more than that.
So artificial intelligence is going to be highly coveted on the battlefield for the speed at which it can compute massive amounts of data, either to perform tasks without human involvement or to provide information for humans to make decisions on the battlefield. And with that come considerable costs, questions, and benefits. Two big questions, just to kick off the conversation: What does it mean when artificial intelligence performs tasks at a speed that exceeds the ability of humans to intervene? And another really important question we think about: What does it mean when artificial intelligence provides information to a human, but that human doesn't know how reliable it is or why that information has been produced in the way that it has? Those are pretty transformative questions that have the potential to change the nature of warfare in many respects, so it's a pretty big new technology being put on the scene.

Candace Rondeaux: When I wanted to talk about this and think about how AI would affect the way war is conducted, it was not in the way you described at all. In fact, typically we've tended to talk about it as killer robots run amok, or swarms of drones out of control, although some of the problems you're describing could lead to some of those outcomes, right? Those are somewhat more extreme ways to think about the challenges; as you say, it's actually extremely multifaceted. And the ICRC has this mandate that's really key, as a kind of guardian of the international principles of how war is conducted; the Geneva Conventions, of course, are extremely well known.
They have been an important part of how wars are conducted, serving as guide rails or bumper posts, if you think about it that way, but also of how you exit from war; they're a really key part of winding down a conflict and trying to find a path to peace, basically. So with that mandate, the ICRC has clearly weighed in on a lot of different things. But what's new, I think, about the ICRC taking a public position in the paper is that it outlines three challenges. Can you talk a little bit about those?

Jonathan Horowitz: I'd ask you to think about the ICRC's reflection not as a black-or-white "ban everything" or "allow everything to move forward"; it's a pretty nuanced position. Let me explain how we get there, and then I'll talk to those three trends or challenges that we're seeing. The ICRC has been around since 1863, so we've seen a lot of new technologies enter the battlefield. And when we do, we tend to take a look at how they're being designed and their intended military purpose. Then we look at what their humanitarian implications are, in terms of what they can destroy, how they destroy it, what kinds of kinetic or non-kinetic effects they have, and so forth. Understanding those humanitarian costs or consequences, we then look at the relationship between the usage and what international humanitarian law, the laws of war, the Geneva Conventions, says, because what those rules say, fundamentally, is that there are limits to weapons and methods of warfare. Once we get into that frame of mind, what we've seen with artificial intelligence are three potential usages. One is increased autonomy in weapons systems and cyber capabilities
that are used in the conduct of hostilities; so that's one, about autonomy and what artificial intelligence does. The second is artificial intelligence and information operations. Propaganda is an age-old part of conflict; what artificial intelligence brings to the dissemination of information, propaganda, misinformation, and disinformation is a speed and scale abnormal compared to what we've seen to date, and we need to understand the humanitarian implications of that. And the third trend we're seeing is the way artificial intelligence can produce information that human decision makers use in their battlefield decisions. At the far end of the spectrum, this could be things like who to target or who to detain. Those are the three bundled areas where we're observing different patterns and trends and trying to understand the humanitarian implications of the potential or real-time application of artificial intelligence. I'd be happy to talk about how the ICRC unpacks all three of those, but those are the three main areas where we see AI having international humanitarian law implications in the way militaries conduct themselves in warfare.

Candace Rondeaux: What are the implications of a competition becoming sort of out of control for the future of artificial intelligence in the context of warfare? I understand the ICRC doesn't take positions on national issues. But what do you think that might mean for the future trajectory, if you were to scan out five or ten years from now?

Jonathan Horowitz: I think there's a real concern about whether artificial intelligence will give states, regardless of which state we're talking about, an inclination, a capability, or a motivation to do things that go beyond the limits of humanitarian law.
And also to do things that international humanitarian law maybe didn't anticipate but that raise fundamental humanitarian and ethical issues, such as the removal of human agency from warfare. That's a pretty big issue to grapple with, whether you're looking at it from a legal perspective, an ethical perspective, a humanitarian perspective, or all three. So for that reason, what the ICRC is trying to do, working with all sorts of states but also technologists and legal experts, is to understand exactly this: What challenges does artificial intelligence pose to complying with the limits of warfare under international humanitarian law? Whether or not elaborations in the law are needed so that we can all have a common understanding of what it means to apply the principle of proportionality, the principle of distinction, or the principle of precaution when using AI technologies. And once we get to that point, trying to understand if there are any additional protection gaps, whether legal, humanitarian, or ethical, and deciding whether there is just cause to put new rules and regulations in place to make sure that AI doesn't push states into a space where AI is the tail that wags the dog of international humanitarian law. We just need to be really careful about those implications and try to understand them now. The ICRC, like many of us, is still grappling with the full implications of what artificial intelligence will bring, and it's not a closed chapter by any means; new technological developments are happening at very fast speeds in the research and development community. So that's the type of thing the ICRC is thinking about; those are the considerations that go into how we approach artificial intelligence, and it gives you a sense of where we see the trajectory of AI, how it's used on the battlefield, and its relationship with international humanitarian law.
Candace Rondeaux: You raise a good point that I hadn't really thought of, and I'm going to leave you with this last question. To what extent do you, or does the ICRC, have a sense of what the technology companies and tech providers understand about the challenge? Are we in dialogue with them? Do they get it, or do they not? What is missing from their understanding of the challenge? Because I imagine you're practically speaking different languages.

Jonathan Horowitz: Yeah, so I think this is an emerging dialogue. You go to any conference on defense and technology, and there's talk about academia, industry, and defense. With these types of technologies, there's a lot of work, a lot of research, a lot of development, a lot of design happening in the private sector. So it is really important for the private sector to be attentive to how their designs, their applications, and their uses of AI will translate into a battlefield space, and how that comports and relates to the limits on means and methods of warfare, which is longhand for international humanitarian law. That's a big challenge, I think.

Candace Rondeaux: Jonathan, thank you so much for joining us here at the Future Security Forum. It's been great to have you, and I hope we get to talk again, because there's lots to talk about on the future of AI and war.

Jonathan Horowitz: Thanks so much, and enjoy the rest of the forum.