Hello, and welcome to the Naval War College for today's presentation on the legal implications of autonomous weapons systems. I'm Lieutenant Colonel Jeff Thurnher, a faculty member here in the International Law Department, and I'm happy to have the opportunity to talk to you today about these unique weapons systems. One disclaimer before I begin: today's remarks are my personal opinions on these particular issues, and they should not be inferred to reflect the views of the Department of Defense, the Naval War College, or any other government entity. I'm very excited to talk to you today, because this is an area of research that our International Law Department here at the War College, and I personally, have been examining closely for many months now. It's a very interesting issue dealing with unique weapons, and it has caused a bit of controversy, which we'll talk about today. In fact, I have recently published an article on this subject with my department chairman, Professor Michael Schmitt, just released in the Harvard National Security Journal, and today's talk will draw heavily on that work. Today, though, will just be an overview, really a primer on the issue, so if you're looking for something more in depth, I would encourage you to read that article. OK, what is it that we're talking about today? You see the title, the Legal Implications of Autonomous Weapons Systems. What sort of systems are we really talking about? Often when I talk to people about autonomous weapon systems, the first thing that comes to their mind is drones; they think we're talking about drones. On this next slide you'll see a definition. I wanted to start with a quick initial definition to make sure that we're on the same footing. We're not talking about systems like drones, where a pilot guides the aircraft throughout its mission and a human operator actually pulls the trigger to initiate a lethal strike. With autonomous weapon systems, you have systems that are able to select and engage targets without that human interface. So you're looking at the next generation of weapons. Some of you might think, wow, that sounds like science fiction, something that may be hundreds of years off. Why are we even talking about it? Why should you care about this topic today? There are a couple of reasons why I think it's important for all of us to examine it. The first is that this technology is starting to appear in weapon systems now; frankly, some of it has been in weapon systems for many years, as we'll discuss shortly. And many experts predict that we will start to see autonomous weapon systems becoming the norm on the battlefield within the next 20 years. So some of this may be happening much sooner than you may have predicted. The second reason is that these systems, as I mentioned, have generated a lot of controversy. There is opposition to the development and deployment of these systems, and I want us to be aware of that.
One of the things you can see here is that a report was published by the group Human Rights Watch, an influential NGO, in November 2012 that called for a preemptive ban on the development and use of autonomous weapon systems. So I thought it was important for you in the audience today to be fully conversant in these issues and to understand that this ongoing debate is occurring. OK, you can take a look here at our agenda, which shows the main points I'm hoping to cover today. We are going to start with some definitions, because, as you'll see, autonomy is tricky to nail down. We'll also look at current US policy on these systems, and we'll examine why autonomy offers some promise, why it might be good if the technology can in fact deliver. Then we'll examine that technology, where it currently stands and where it may be heading. And lastly, and primarily, the focus for today will be the law and the legal issues surrounding these systems. OK, I showed you a definition initially. When you're looking at autonomous systems, you can see here that there are generally three categories: semi-autonomous systems, human-supervised autonomous systems, and fully autonomous systems. I want to walk you through each of them. It can be pretty hard to define what sort of system fits into each of these categories, and there's a lot of controversy, a big technical debate, about what constitutes an automatic system versus an autonomous one, and how much autonomy a system needs to qualify. In fact, one of the things I'm involved in right now is a multinational project examining autonomous weapon systems, and we're spending the first several months just on definitions, making sure we frame the problem correctly. But the crux of full autonomy, I think, is the ability to identify, target, and attack a military person or objective without that human interface. That said, as you saw on the slide, I put in parentheses some more commonly used terms about humans "in the loop" or "out of the loop." I think "human out of the loop" is a complete misnomer. I do not think there are systems where a human is not involved at all: humans will certainly be involved in the design, the decision to employ, the parameters established for the system, and the guidance and direction given to it. So that's why we'll primarily use the other terms. OK, let's take a closer look at those definitions and some examples. The first is semi-autonomous weapon systems, and you can see the example of fire-and-forget missiles. These semi-autonomous systems are very commonplace in today's contemporary warfare. Fire-and-forget, or launch-and-leave, weapons like these missiles are in many nations' arsenals.
When you look at how it fits in with human control, the human identifies the target, locks onto it, and sends the missile in that direction; the missile, or whatever other weapon system, then directs itself autonomously to the target and engages it without further human involvement once it has been fired. Those sorts of systems are considered semi-autonomous and, as I said, are in service in many nations right now. OK, let's take a look at human-supervised autonomous systems, a bit of a step up. These systems have also been used by the US and other militaries for many years. The US, as you can see here, has the Aegis weapon system at sea and the Patriot missile system on land. Both are designed to defend against short-notice missile attacks: if a missile is inbound, these systems are able to identify that threat and automatically engage it. The reason they are not fully autonomous, not without human involvement, is that a human operator is observing the system and ultimately approving the strike. Just last year in the Middle East, Israel had a lot of success with its Iron Dome system, which would also be considered a human-supervised autonomous system, because somebody sits over the top of the system, ready to veto a strike if necessary. The time period in which that human can respond is very short, so most of the identification work is being done by the system, but it does have a human override capability. OK, next, the fully autonomous weapon systems. I've again put up the definition the DOD issued in its current policy directive on autonomous weapon systems, which we'll discuss in a little greater detail shortly. It covers weapon systems that, once activated, are capable of selecting and engaging targets without any further involvement by a human operator. You don't see any cool pictures on this slide, in part because fully autonomous systems are not yet known to exist in any nation's arsenal. In fact, the US is on record saying that it is not planning to develop any lethal fully autonomous weapon systems, other than perhaps some of the human-supervised autonomous weapon systems we just looked at. So you have those three categories.
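Just to make the three categories concrete, here is a minimal illustrative sketch. This is toy Python with entirely hypothetical names, drawn from no real system; it simply encodes the distinction we just walked through, namely who makes the engagement decision and whether a human can veto it.

    from enum import Enum, auto

    class AutonomyLevel(Enum):
        SEMI_AUTONOMOUS = auto()   # human selects the target; weapon only guides itself
        HUMAN_SUPERVISED = auto()  # system selects and engages, but a human can veto
        FULLY_AUTONOMOUS = auto()  # system selects and engages entirely on its own

    def may_engage(level, human_selected_target, human_veto):
        """Return True if engagement is permitted under this control scheme."""
        if level is AutonomyLevel.SEMI_AUTONOMOUS:
            # Fire-and-forget: a human picked this target before launch.
            return human_selected_target
        if level is AutonomyLevel.HUMAN_SUPERVISED:
            # Aegis/Patriot-style: engage unless the supervising operator
            # vetoes within the (very short) window available.
            return not human_veto
        # Fully autonomous: no human in the engagement decision itself.
        return True

The point of the sketch is simply that the categories differ in where the human sits relative to the engagement decision, which is exactly the axis the definitions turn on.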
Now, I told you some of these definitions were difficult to pin down, and there are other systems, some currently in existence and used for many years, where it's not clear how they fit into these categories. You can see several of them on the next slide. The first is naval mines. Some sea mines are able to maneuver, and they are certainly able to wait to engage until they pick up a specific acoustic or seismic signature; when they identify that particular signal, they respond by initiating the mine and engaging the target. We'll talk about the DOD directive shortly, but interestingly, it says that mines are not covered, so they don't fall under the autonomous systems policy, even though many people would argue those certainly seem to be autonomous systems of some sort. Other systems are often called automatic weapon defense systems. As I mentioned, there's a debate about what is automatic versus what is autonomous and where to draw the line. Generally, people think autonomous means the system has to be able to operate under parameters where things are uncertain, while an automatic system handles more fixed situations, a more deterministic system. But determining where that dividing line sits can be complicated. Close-in weapon systems like the one you see here already exist on ships in the US Navy and many other nations. They are designed to be point-defense systems, a last-resort defensive measure for ships. If you're trying to distinguish them from fully autonomous systems, you would argue that these systems are all defensive in nature and are all fixed, either at a base or on a ship, and that would be the distinguishing factor. The other example I've put on the slide is Stuxnet. Interestingly, many researchers are starting to ask whether Stuxnet wasn't itself the first fully autonomous cyber weapon used in combat. If what's available in the open-source press about how Stuxnet operated is true, it appears to have been a computer virus designed to enter a closed system, one unable to reach back to a human operator. Once inserted, it was on its own: it searched through those computer networks to find its particular target and then attacked that target by itself. So one could certainly argue that maybe an autonomous system has already been used in the world. We could have a whole discussion on Stuxnet alone; I'm just trying to raise the point that the dividing line is not always clear, and even when people think they've established categories, it's hard to place some of these systems on the edges. OK, next I want to take a look at what the DOD policy is. As I said, the policy doesn't address mines, and it also doesn't address cyber weapons, so those systems do not fall under the policy. What the policy ensures is that future weapon systems will all have the appropriate level of human involvement in targeting decisions; they want to make sure a human is involved at some point in the targeting process. It created guidelines, set out the policy, and established plans for proper safety mechanisms and other measures to ensure weapon systems don't have unintended consequences or unintended engagements. It shows a recognition that with these unique weapon systems there could be pitfalls, and that the U.S. is aware of that.
And so the Department is working hard to ensure that these systems don't run into those sorts of errors. When this policy came out, the U.S. made it clear that it was not going to pursue fully autonomous weapon systems; it would only use human-supervised systems, where an operator sits over the top able to veto the system, and those systems would primarily be used defensively. Some critics have said, though, that it's just a policy, and nothing prevents a policy from being changed. So it's important to ask: if the U.S. has made this policy, why are some concerned it might change? What is so good about autonomous systems, and why might they be pursued? Let's take a look at what makes them so desirable. I think there are a few things that make them very enticing for countries to develop and potentially to use, again assuming the technology can produce what is advertised. There are several operational realities, operational concerns, that could make an autonomous system superior to other systems. With drones or other remotely piloted systems, you have a team of people observing each and every weapon system, and that is very personnel-intensive and can be costly to maintain. With an autonomous system, the general rule is the more autonomous the system, the fewer people you need observing it. That's certainly one advantage. Another operational advantage concerns the tethers to these systems. All of our remotely piloted systems depend on some sort of communication link between the human operator and the weapon system itself, a link that allows the pilot or controller to maneuver the system. Anytime you have a link like that, there are vulnerabilities, particularly in environments where adversaries are able to jam communications or satellite links, or perhaps conduct cyber attacks on satellites. That makes the connection between operator and system a critical vulnerability. If that link is taken out, most current systems will either return to base, so they won't be able to complete their mission, or perhaps worse, have to land or crash-land. A system that could still conduct its mission even with the link cut would be a significant advantage over what we currently have, and I think that's one of the operational concerns that may, in the future, push people toward systems that can keep engaging even in the absence of a communications link back to a human operator.
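Here is a rough sketch of that lost-link logic, again toy Python with hypothetical names rather than any fielded system's actual behavior; it just shows why the link is the critical vulnerability and what autonomy changes.

    def on_link_check(link_up: bool, autonomous_capable: bool) -> str:
        """Illustrative only: what happens when the comms link drops."""
        if link_up:
            return "continue mission under remote control"
        if autonomous_capable:
            return "continue mission autonomously"  # the operational advantage
        return "return to base or crash-land"       # mission fails when jammed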
I also think that as the technology develops, you're going to see other nations pursuing these systems, and certainly one side doesn't want to be at a disadvantage against the other. The reason it could be a disadvantage is that if one side has an autonomous system, the presumption is that it will be able to react faster than a system that is humanly controlled, and certainly faster than one that is remotely controlled. The autonomous system is already making a decision while the human is still trying to figure out how to adjust things with either a manned system or a remotely controlled system. Pilots often refer to this as the OODA loop: getting inside the opponent's decision cycle. If you are operating inside the adversary's decision cycle, the side without the autonomous system would always be a step behind and losing out to the force that has one. So as the pace and tempo of combat continue to speed up, I think you're going to see an increasing desire for systems that can react faster and react on their own. Ultimately, the concern is that at some point the environment may become too complex and too fast for a human to direct effectively. OK, given all this talk about why autonomy might be desirable, it's important to look at the state of the technology. Where are we really in terms of potential breakthroughs? What is causing people to even discuss these autonomous systems, or causing groups to oppose them? First, a few examples from the civilian world. A couple of things have been very prominent in the news and media that highlight some of the advancements we've seen in autonomous research and development. At the top of this slide, you'll hopefully recognize a scene from the TV show Jeopardy, where a team from IBM put together a computer system called Watson that was able to not only compete against but beat some of the best human players on the show. It used a real novel approach, linking computers together and running algorithms to parse the complex language used on that show and reach an answer faster and better than the human contestants could. That certainly shows some of the promise: a system able to figure out those sorts of things could potentially be used in other domains. Another place we've seen developments in the autonomous field is the driverless car that Google, and certainly others, have been developing. A lot of that work relies on machine learning, the idea that a system is able, over time, to improve its own capabilities, to learn, if you will. Machine learning is closely tied to artificial intelligence; in fact, machine learning may now be the phrase you hear even more often than artificial intelligence. The idea is that systems, using these novel approaches, steadily improve their capabilities and can then operate further removed from human operators, with less need for a human interface. Now, let's take a look at how the DOD has approached these advances in autonomy.
I would tell you that the US has certainly been aware of this and has been involved. In fact, the Google car project stemmed in part from an earlier project run by DARPA, the Department of Defense's research agency. And the US is already embedding a lot of autonomous features into its vehicles and other systems in existence now. Take a couple I've shown here. The first is the K-MAX helicopter, which the Marines have been using in Afghanistan. Two of these helicopters have flown more than a thousand missions and delivered more than three million pounds of cargo between forward operating bases, flying autonomously: they're able to pick up a cargo load and fly it to the drop-off point without a human operator steering the whole way. It's been a very successful project so far. Below that, you see the X-47B, an experimental aircraft the Navy is developing. The Navy's intention is to perfect the ability for systems to autonomously take off from and land on an aircraft carrier. You see two pictures of it: one of the vehicle by itself, and the second of the vehicle successfully taking off from an aircraft carrier in May of 2013, earlier this month, actually. It has already done some touch-and-go landings, and the plan is for it to autonomously land on the carrier later this year. That is really complex maneuvering; any pilot can tell you how difficult it is to deal with all the variables of taking off from and landing on an aircraft carrier, and it appears the autonomous technology is going to allow these systems to do it. Now, those are certainly not weapon systems; those are non-weaponized uses, but they show you some of the progress in the autonomous field and in the research going on right now. Looking at the near term, a few years into the future, I have a couple of examples of things being developed. The first is an aircraft the British military is developing called the Taranis. It's a supersonic, stealthy aircraft, able to fly at high speed and to fly autonomously, and it is ultimately designed to be an attack aircraft. At this point they are not putting any autonomous targeting features into the system, though you could certainly see that changing in the future. The aircraft is designed to fly without a human pilot or controller to reach an area, and as I understand it, the envisioned concept is that a human controller would then approve the particular strike. Another system the US is looking at developing right now is an anti-submarine system called ACTUV, which stands for the Anti-Submarine Warfare Continuous Trail Unmanned Vessel, a long name. Those systems are being designed to go out to sea for up to 90 days and autonomously maneuver and track enemy submarines. So it can find an enemy submarine and trail it across the seas all by itself, without a human operator or human interface.
At this point it's not being designed to attack that enemy submarine, but as I mentioned, that is the kind of next step nations may want to take in the future, based on what these systems have demonstrated or may be able to demonstrate. Now, looking further out, it's pretty difficult to determine what sort of systems to expect. I certainly can't predict where the future will lead, how this technology may develop, or how successful it may prove to be. But one thing I think is pretty easy to say: you're going to see computer systems that are far faster and more capable than anything we have today. Computer systems have been improving continually, and the expectation is that they will continue to, while also getting smaller and smaller, so you'll have more powerful systems in smaller packages, and I think you're going to start to see them used in some unique ways. One is swarming technology, the idea that systems will work together collaboratively to attack a target: you won't have a human operator leading each of the small attacking systems, you'll have the swarm itself deciding how to shape and move in order to take out an enemy effectively. There's a lot of promise in some of the initial research on swarming systems. I think you're also going to see greater use of machine learning capabilities, and then perhaps some moves into what is called general artificial intelligence, or strong artificial intelligence: the notion that systems go beyond making simple choices within a specific defined task and acquire more complex decision-making abilities, more akin to human cognitive abilities. I'm not saying they will necessarily get there. There have been lots of promises over the years that artificial intelligence will reach some point of singularity, and many of those predictions have not been borne out, but I do think you're going to start to see incremental advances in these sorts of systems. In general, you should not expect the systems of the future to look like the systems of today. It shouldn't just be a better Predator drone; you could see radically different shapes, sizes, and abilities. But I do think the developments will be subtle and incremental. I don't think we're going to wake up one day and say, wow, we now have a fully autonomous system capable of attacking an enemy. I think that slowly, over time, we're going to see a widening separation between the human operator and the system itself, more of a gradual process than something that happens overnight. OK, so we have all this promise in weapon systems, and as I mentioned, it has led to some pretty intense opposition. Groups are forming; I mentioned Human Rights Watch and their report. They've been very vocal in their opposition to these systems.
They have in fact called for a preemptive ban on all development and research of these sorts of systems, and certainly on any deployment or use of them. They've joined a coalition called the Campaign to Stop Killer Robots, which is lobbying governments and citizens around the world for a similar ban. And just recently, last month in April of 2013, a UN special rapporteur issued a report for the UN Human Rights Council recommending a moratorium on all autonomous weapon system research, pending some gathering of nations to lay out a legal and political framework for how to deal with them. So there is a lot of opposition, and the opponents have many grounds: ethical, moral, certainly policy arguments, but also legal arguments, and that's where I want to focus our attention today. I want to zero in on what those legal concerns are and walk through what the law actually is. I'm a lawyer, so I feel most qualified dealing with the legal issues; I'll leave it to somebody else to address the ethical and moral arguments. OK, so let's take a look at the law. It's important to understand whether and how autonomous weapon systems could comply with the law. Rather than engaging in some point-counterpoint debate with the critics of these systems, I want to lay out the foundational rules that apply to new weapon systems, look at how they might apply to autonomous weapon systems, and explore what unique issues these systems raise. So, what law applies? That's obviously where you'd want to start: you need to know what law is applicable to autonomous weapon systems. And I think there is universal consensus that the law of armed conflict does in fact apply to new weapon systems like autonomous weapon systems. What's contentious is how the particular norms of the law of armed conflict apply to new systems. It's the same debate occurring now with drones, or with cyber weapons and cyber warfare; the same applies to autonomous weapon systems. The ICRC and other groups have said there is no doubt that this body of law applies to new weaponry and to its employment. OK, so how do we know whether a weapon system is lawful and could be lawfully used on the battlefield? I'd tell you there are two tracks to look at, two different aspects of the law that a weapon system must successfully navigate. It must comply with both parts in order to be lawfully developed and lawfully used on the battlefield. The first is weapons law: you're basically asking whether the weapon itself is unlawful per se, whether the weapon is of a nature that it should not be developed at all. If that's not the case, the second part is targeting law: how the weapon system is to be used in the conduct of hostilities on the battlefield. Two different tracks. Let me give an illustration of the first. Under weapons law, what sort of systems are unlawful per se? Here you'd be talking about, for example, something like a biological weapon. Biological weapons, as a matter of customary law, are unlawful per se.
So even if you were using them against an attacking enemy, the use or development of biological weapons would be unlawful, because that weapon system itself is unlawful per se. The second track, targeting law, is about how you use the system. You can have a system that is lawful in itself, like a rifle, but use it in a way that is unlawful: if you were to shoot a civilian or a prisoner with that rifle, that would certainly be an unlawful use. So those are the two tracks we're going to walk through. Let's take weapons law first. When you're asking whether the weapon system itself is unlawful per se, there are two separate rules to examine. The first rule deals with weapons that are indiscriminate by nature. A weapon that is indiscriminate by nature is unlawful per se: it should not be developed, and it cannot be part of a nation's arsenal. The rule is customary and is codified in Additional Protocol I, Article 51(4). It tells you that a weapon system is indiscriminate if it is of a nature to strike civilian targets and combatants alike, without distinction; that is, if you are not able to aim it so as to ensure it is going after combatants rather than civilians, then it is unlawful per se as an indiscriminate weapon. I would tell you that this rule, I think, is not that major an impediment for autonomous weapon systems, in part because there is some confusion around it. Critics often look at this aspect and say: these weapon systems wouldn't be able to distinguish between civilians and combatants on the really complex battlefields we see in the world right now, say the US involvement in Afghanistan, where it's very hard to tell the difference between civilians and lawful combatants, and so these systems would not be able to make that distinction. But they're missing the point, the thrust of what this rule is saying. This rule says that the weapon system itself has to be capable of being aimed at a military target. If it cannot be, then it is unlawful; but if it can be aimed appropriately in certain circumstances, then it does not violate this rule. Let me give you a couple of examples from the slide. The first shows the hydrogen balloons used by Japan in World War II. Japan sent up these balloon bombs with the idea that when they landed, they would cause massive fires; that was the thinking behind the design. They counted on the wind to blow the balloons across the Pacific Ocean to land somewhere in the U.S. Clearly, with this sort of system, there was no way for the military users to aim it at a particular military target. It would strike a civilian or a military target based solely on where the wind took it. That sort of system is indiscriminate by nature, and you would not be able to lawfully employ it.
I don't think the precision systems being designed now, or envisioned for future use with autonomous features, are anything like that, so I don't think this rule will pose much of a problem. Now, that is different from indiscriminate use. The second example you see on the slide is from the first Gulf War, when Iraq sent Scud missiles into Israel to attack cities. People said, oh, Scud missiles, those are indiscriminate by nature. That's actually not true, though they were certainly used indiscriminately in that case. If you're aiming a missile at a city as a whole, the missile isn't precise enough to ensure it is being aimed at a military target; it could strike civilians and combatants alike, and that made the use inappropriate. But Scud missiles were frankly designed for attacking out in big desert areas, against large open tank formations or big bases, and used in that context the weapon would have been appropriate and not indiscriminate. That shows you the difference between a weapon that is indiscriminate by nature and one that is used indiscriminately. A couple of further points. When critics claim that all of these systems are indiscriminate by nature, I think the claim is counterfactual. Look at some of the sensors being designed for these systems: the ability to analyze and determine shapes and sizes, to intercept communications in real time and pinpoint what the target is, and even some of the facial recognition software being developed. If you were looking at a personality-strike type of situation, where you're going after a particular person, some of those capabilities are advancing very quickly, so I think some of the concerns about indiscriminateness may prove to be overblown. I also think it's important that when people talk about these systems, they not demand that an autonomous system do more than a human, or a human-operated system, is capable of doing. There's a lot of talk from critics that these systems could so easily be tricked, that if the enemy hid a weapon the system would be confused, and that they are therefore unlawful per se because they can't make that determination. Frankly, for centuries enemies have used deception against each other. That doesn't make any of the weapon systems we use against enemies today unlawful per se, and certainly nobody has tried to have them declared unlawful. Along the same lines is the notion that autonomous systems would be unable to recognize human intentions and human emotions, and that for that reason autonomous weapon systems are, or should be made, unlawful per se and banned. I'll tell you that it is now very commonplace for militaries to use systems that attack beyond visual range.
We have lots of examples of systems fired from a distance, where the operator is not able to visually see the target or identify those emotions. It's certainly helpful if you're able to, but it doesn't make the weapon system unlawful per se if you're not, provided you're complying on other grounds, as we've done. And frankly, when you look at the human judgment or pilot error that has contributed to many accidental or incorrect engagements, having human judgment right there doesn't necessarily equate to having a perfect weapon by any means. Further, with regard to emotions, it's true that autonomous weapons themselves won't have emotion, and the critics have harped on that: oh, they'll be used by dictators to ruthlessly slaughter their opponents. It's hard to predict how the systems would be used, but I would say that the lack of emotion also cuts the other way. We've seen, over our history, many, many unfortunate examples of human-conducted atrocities, where humans made the decision and committed war crimes or other atrocities. You would envision that autonomous weapon systems would not do that, because they aren't reacting out of revenge, self-interest, or any of those other baser instincts. So, when you look at this part of weapons law, I think it's clear that autonomous weapon systems would violate this prohibition only if there are no circumstances, given their intended use, in which they can be used discriminately. And I don't envision that being an issue with the way these systems are designed. So let's take the second aspect of weapons law: the rule that weapon systems cannot cause unnecessary suffering or superfluous injury. This rule is also customary and has been codified in Additional Protocol I. The rule is trying to prevent weapon systems that themselves cause inhumane injuries or needlessly aggravate injuries. A classic example is bomblets that aren't detectable on X-rays, such as a glass bomblet: when somebody struck by it is taken to a medical facility and you try to X-ray them to see how to treat them, you cannot, because the weapon was specifically designed to prevent that. To prevent these sorts of needless, inhumane injuries, the law of armed conflict forbids such weapons. Now, yes, it is possible that somebody could put a glass bomblet on board an autonomous weapon system. But the mere possibility of that happening, and I think it's clearly unlikely, would not make the weapon system itself unlawful per se, because what this rule focuses on is the weapon system's effect on the targeted individual, not the manner of the engagement, which is the autonomous feature. So I don't think autonomous features would in any measure trigger concern under this prohibition. OK, those are the two things you look at to determine whether a weapon system is unlawful per se.
And there is a way nations are tasked with making sure they are not developing weapons that are unlawful per se, weapons that are indiscriminate by nature or that cause unnecessary suffering: the weapons review process. If a nation is considering developing an autonomous weapon system, it would be expected to comply with this rule, which is codified in Additional Protocol I, Article 36. There is some controversy about whether all aspects of the rule are customary international law, and some disagreement among nations, including the US, about exactly what the rule requires: if you look at its language, it requires reviews for both the means of warfare and the methods of warfare, that is, the weapon systems themselves and then the tactics. So there's a bit of disagreement, but I think there is consensus that any new weapon system, including the development of an autonomous weapon system, requires this legal review. You would expect all nations to conduct a review confirming that the weapon complies with the law of armed conflict generally, and specifically with the two weapons-law rules we just discussed, before the system is developed and then certainly before it is used. Again, there is some disagreement about whether the US has agreed to those two separate reviews, but as a matter of policy, if you look at the new DOD directive, 3000.09, the US has seemingly agreed to conduct both reviews for these sorts of weapon systems. Member states to Additional Protocol I would certainly have to do both reviews as well, and you'd also have to conduct reviews if you modified the system, along with some other considerations. Now, I will tell you that given how new the prospect of these weapon systems is, this legal requirement does loom large, and it is something that would need to be carefully managed by countries wanting to pursue them. Given the technology likely to be embedded in these systems, what might look like a straightforward test becomes more difficult. The lawyers conducting these evaluations would have to work extremely closely with the computer scientists, the engineers, and others to make sure they really understood the measures of reliability, the testing methods, and how the system was validated. So those things would be significant. But we are already dealing with that for a variety of other complex contemporary weapon systems, so I don't think it's necessarily a hurdle that is too high to clear, nor do I think it would pose a bigger impediment for autonomous weapon systems than it does for other weapon systems.
Okay, so if the weapon system itself is not unlawful per se, we next have to look at how we would actually use the system we are looking to deploy on the battlefield. That is targeting law, the second aspect. And remember, a weapon system has to navigate both of these tracks successfully before you can actually use it on the battlefield. So if we've cleared the first hurdle, now we look at the second one, targeting law. There are three main core requirements in targeting law: the principles of distinction and proportionality, and ensuring that all feasible precautions in attack have been taken. Let's examine each of those a little more closely and make sure everyone is comfortable with how the rules actually apply. I'll tell you that these use issues are going to be a bigger concern than the weapons-law discussion we just had. I don't think the weapons-law prohibitions will cause many problems for developing autonomous weapon systems, but the use rules do raise some unique challenges, so it's important to look at them more closely. And if you look at other groups like the International Committee of the Red Cross, one of the things they have stated is that the debate over the legal and other implications of autonomous weapon systems should focus in particular on their use, rather than on whether they are unlawful per se, and I would tend to agree with that position. Okay, distinction. That is a cardinal principle of the law of armed conflict, one of the foundational rules recognized as such by the International Court of Justice. It is codified in Article 48 of Additional Protocol I, which states that the parties to a conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives, and shall direct their operations only against military objectives. It is a customary principle, and it would absolutely apply to autonomous weapon systems: any autonomous weapon system being considered for battlefield use would have to comply with the principle of distinction. Now, it's clear you certainly can't use an autonomous weapon system to directly attack civilians or terrorize the civilian population; I think that's a well-understood and agreed-upon principle. But how would these systems comply in general? You'd have to have the appropriate sensors, or suite of sensors, and other recognition abilities so that the system could distinguish appropriately between civilians and combatants in that particular area. A lot of it is based on context. Where do you envision using the system? Out in the desert against enemy tank formations? Somewhere like the demilitarized zone, where very few civilians may appear between you and the enemy? Or, say, underwater, where there are far fewer civilian craft and you're really dealing only with enemy submarines? Those sorts of questions determine how robust a suite of sensors and recognition packages the system would need to have.
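As a minimal sketch of that idea, the legal question of "sufficient sensors for the expected environment" can be thought of as a context-dependent confidence gate. The environment names and numbers below are entirely made up for illustration; no real targeting system works from this table.

    REQUIRED_CONFIDENCE = {   # invented numbers, purely illustrative
        "open_desert": 0.90,  # sparse environment: simpler sensing may suffice
        "undersea": 0.85,     # few civilian craft to confuse with submarines
        "urban": 0.995,       # crowded battlefield demands far more certainty
    }

    def passes_distinction(classified_as_military: bool,
                           confidence: float,
                           environment: str) -> bool:
        """Hold fire unless the target is classified as military with enough
        confidence for this environment; unknown environments never pass."""
        required = REQUIRED_CONFIDENCE.get(environment, 1.1)  # >1.0: impossible
        return classified_as_military and confidence >= required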
But it's fairly clear to say that the system would be unlawful to use only if its sensor ability wasn't sufficient to distinguish in the particular environment or battlefield in which it is placed. It is highly dependent on the circumstances, but it's something producers of autonomous weapon systems, and nations intending to use them, would have to make sure they are considering and complying with. Okay, the second principle we're going to look at is proportionality, and this one is a little more complex. Frankly, it's complex for all weapon systems, but it is uniquely so for autonomous weapon systems. This principle, also a customary and fundamental principle of the law of armed conflict, requires an analysis weighing collateral damage against the expected or anticipated military advantage the attacking side anticipates from the attack. When planning a particular attack, the force is required under this rule to determine how many civilians may be injured, or how much civilian property may be damaged, and then to consider how important it actually is to conduct the strike. Weighing the two, the force has to ensure that the collateral damage, the incidental harm to civilians that is expected, is not excessive in relation to the anticipated gain from the attack. Now, how would an autonomous system be able to do that analysis? Or would it be able to at all, I suppose, is the question. Take the first part, the collateral damage: how hard is it for an autonomous system to determine how many civilians might be injured in a particular attack? I would propose to you that, if the technology keeps moving along, that is a calculation an autonomous system could fairly readily reach. That is in part because the system the military uses now, the collateral damage estimation methodology, is a methodology based on objective, scientific data and algorithms. It's based on things like predicting the number of civilians in particular areas; knowing the composition of the target building, whatever it may be; and knowing the precision of the weapon and its blast effect, the radius of the area that could be damaged by that particular blast. All those scientific inputs go in and ultimately produce a determination of how many people could be killed in that particular strike. Given its scientific nature, I think that is something you could program into an autonomous system with a fairly high degree of reliability.
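To illustrate why that half of the analysis is programmable, here is a deliberately oversimplified sketch. The actual collateral damage estimation methodology is far more detailed; the formula and inputs below are my own illustrative assumptions, not the real methodology.

    import math

    def estimated_civilian_harm(population_density_per_km2: float,
                                blast_radius_m: float,
                                shielding_factor: float) -> float:
        """Toy collateral-damage estimate: expected civilians affected equals
        population density times blast area, reduced by building shielding.
        The point is that every input here is objective, measurable data."""
        blast_area_km2 = math.pi * (blast_radius_m / 1000.0) ** 2
        return population_density_per_km2 * blast_area_km2 * (1.0 - shielding_factor)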
The other aspect, the second thing you're analyzing, is the military advantage: how important is this particular strike to your force? Determining that is much more complex than estimating the expected number of civilians who may be injured or killed. So how do you determine how important a strike is? Clearly, it depends heavily on context. No two strikes are the same; each one is a case-by-case analysis, based on a variety of factors. Consider some of the things you look at when analyzing the military gain. Say you're attacking a tank. A tank is a fairly valuable target: if we can destroy that tank, that helps our side. Fair enough. But is a tank by itself worth the same as a tank that is part of a column or formation of tanks? Probably not; the one in the formation has a greater ability to cause harm, so it is probably more valuable than the one by itself. What if the tanks are moving toward your base or headquarters, versus pulling back toward the rear? Clearly the ones coming at you are more important; the anticipated military gain from striking them would be higher than for the ones leaving. Those are the sorts of things that matter in this analysis. And how could an autonomous weapon system be expected to make those determinations, to make that judgment? I think it's unlikely, at least in the immediate future and without significant advances in artificial intelligence, that systems will be able to make that judgment call on their own. But I don't think that is the end of the analysis; it doesn't follow that autonomous weapon systems will never be able to comply with the principle of proportionality and therefore should never be developed. If you look at how these systems would be used and how human operators would be involved with them, I think there are potential ways the systems could comply with these rules: by having humans inject themselves into the process and provide some sort of sliding-scale algorithm that assigns value to particular targets. Essentially, the commander tells the system, embeds into the system's programming for the particular mission, a value associated with the various targets in the area. The human sets the thresholds in advance: here are the minimums, here is what something is worth. The tank is always worth X amount of collateral damage in this particular mission, at this particular stage of the battle; the commander can embed that sort of information into the system. So clearly the human makes that determination in advance, and I would suppose that, with that sort of arrangement, a commander is going to assign pretty conservative values.
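Here is a minimal sketch of what that pre-programming might look like, with entirely hypothetical target classes and numbers. The point is that the subjective weighing is done by the commander in advance; the machine only compares numbers.

    COMMANDER_THRESHOLDS = {        # set conservatively, in advance, by a human
        "tank_in_formation": 3.0,   # max collateral estimate tolerated this mission
        "lone_tank": 1.0,           # worth less than one in a formation
        "supply_truck": 0.5,
    }

    def proportionality_gate(target_class: str, collateral_estimate: float) -> bool:
        """Engage only if estimated harm stays under the commander's pre-set
        value for this target class; unrecognized targets never engage."""
        limit = COMMANDER_THRESHOLDS.get(target_class, 0.0)
        return collateral_estimate <= limit

Notice the design choice: anything the commander did not anticipate gets a threshold of zero, so the system holds fire and refers the decision back to a human.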
The commander is going to set the thresholds at a point nowhere near where someone looking at the scenario objectively could say, wow, that's excessive. I think you're going to end up with autonomous systems having very low thresholds to act without further guidance. But I do think it is a mechanism. A lot will depend on how these algorithms develop in the future, but I do think this is one possible way the systems could be deployed on the battlefield and still comply with the principle of proportionality: by having human involvement in the process. And that's why I've said all along that I really don't think there is such a thing as a human out of the loop; human commanders and operators will always be involved in these sorts of decisions. I also think it's important, when looking at this proportionality analysis, to address one thing critics contend: that you can't possibly have a commander anticipate every imaginable thing that could show up on the battlefield and assign a value to all of it, and so the system is flawed in that regard. But the same, frankly, is true of humans who are confronted with unexpected or confusing events on the battlefield and forced to make time-sensitive decisions in combat. Neither the human nor the autonomous system should be held to a standard of perfection; in the law of armed conflict, the standard is always one of reasonableness. And I do think systems will be able to be developed to meet that threshold. So in the end, I think humans will ultimately still be making the required proportionality calls for the foreseeable future, through the pre-programming they embed into the system to ensure that the collateral damage from an autonomous weapon strike would not be excessive. But I do think this is a genuine possibility, pending how the technology develops. Okay, the last aspect of targeting law I want to look at in particular is the notion of feasible precautions in attack. Again, this is a central component of the law of armed conflict, a customary principle codified in Article 57 of Additional Protocol I, and it places significant obligations on forces when they are conducting or planning attacks. I think these obligations absolutely do and would apply to autonomous weapon systems; the same requirements would apply to autonomous weapon systems used on the battlefield. So what does that mean? It means that a force using an autonomous weapon system would have to do everything feasible to ensure it is meeting those obligations. If you look at the precautions the rule requires, it implies doing everything you can to minimize civilian casualties; that, again, is why you need to make sure the system has sufficient sensors to do so.
And you have to make sure that the weapon system is in fact complying with the principles of proportionality we've just discussed. So clearly, as we discussed, you'd want to make sure the systems are embedded with those sorts of thresholds so that each one knows how and when it is appropriate to respond. Then there are some other obligations under this rule that I think are really key to this controversy. One is the obligation, under the feasible precautions in attack provision, to select the means of warfare likely to cause the least harm to civilians and civilian objects without sacrificing military advantage. So let's consider the practical implications of that norm. It basically says that if you have a manned system or a remotely piloted system that is better at reducing collateral damage, then whenever it is practical or feasible, you should be employing that system rather than, say, an autonomous system. If the manned system is better at minimizing casualties, that's the system that should be used. So the law already has a provision here. When critics raise their concerns, you can point to this provision and say that the law already envisions forces doing everything they can to minimize civilian casualties, and that the notion they will blindly use autonomous weapons systems is in direct contradiction with the rule. If you're following the law, then in essence you'd only be using autonomous weapons systems in situations where they can lawfully be employed: when their use would realize military objectives that cannot be obtained by other readily available systems that would cause less collateral damage. Only when they are essentially the best available systems could they be used. I think that goes a long way toward undercutting some of the critics' concerns about the dangers of autonomous weapons systems, because the law implies they would only be used when better systems are not feasible or practical, or when they are in fact the most precise and best systems to deploy on the battlefield for that particular environment. Now, there is certainly some flexibility in what the rule requires, but I think it demonstrates that the law of armed conflict already provides some solid protections when it comes to autonomous weapons systems, and that may explain why a ban would be unnecessary as a matter of law.

Okay, one other thing to consider in this vein: think carefully about what this rule is actually saying, and about what the consequence of banning these systems would be. The rule says forces should do everything they can to minimize collateral damage, so they should select the means of warfare likely to cause the least of it. What if an autonomous weapon system in a particular situation, either because its sensor package is so robust or because its decision-making capability is better in a rapidly changing environment, were in fact better able to minimize civilian casualties than a manned system or a remotely piloted system?
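To picture the comparison the rule calls for, here is a minimal, hedged sketch in Python. The candidate systems, fields, numbers, and the `select_means` function are all hypothetical, invented for illustration; real targeting decisions obviously turn on far more than a single scalar estimate.

```python
# Hypothetical sketch of the "select the means of warfare likely to cause
# the least harm to civilians, without sacrificing military advantage" idea.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WeaponOption:
    name: str
    achieves_objective: bool       # can it realize the required military advantage?
    expected_civilian_harm: float  # planner's estimate for this strike (illustrative)
    feasible: bool                 # practically available for this mission

def select_means(options: list[WeaponOption]) -> Optional[WeaponOption]:
    """Among feasible systems that achieve the objective, pick the one
    expected to cause the least civilian harm."""
    candidates = [o for o in options if o.feasible and o.achieves_objective]
    if not candidates:
        return None  # no feasible option achieves the objective
    return min(candidates, key=lambda o: o.expected_civilian_harm)

# If the autonomous system is the least harmful feasible option, the rule
# points TO it; if the manned aircraft were less harmful, it points AWAY.
options = [
    WeaponOption("manned_aircraft",   True, expected_civilian_harm=3.0, feasible=True),
    WeaponOption("remotely_piloted",  True, expected_civilian_harm=2.0, feasible=True),
    WeaponOption("autonomous_system", True, expected_civilian_harm=1.0, feasible=True),
]
print(select_means(options).name)  # autonomous_system
```

Note that the logic is symmetric: the same selection that keeps a less precise autonomous system on the shelf would point toward it whenever it is the least harmful feasible option, which is exactly the scenario the question above poses.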
Okay, well now the law would require you to use that system. So by banning these systems, we would be undercutting the thrust, really the object and purpose, of this provision of law, where the idea is that we want to minimize civilian casualties. Now, obviously I can't guarantee where the research is going to go. There is no guarantee that autonomous weapons systems will ever reach the point where they are more capable of minimizing civilian casualties than any manned or remotely controlled system. But it's certainly possible, and that's one of the problems with the critics' call for a complete ban: we may be taking something out of the arsenal that may ultimately be more in compliance with the law and better able to protect civilians in certain circumstances.

Okay, I want to look at two other aspects of the law where I think autonomous weapons systems raise some unique issues or concerns. The first is subjectivity. Subjectivity plays a big role in the law of armed conflict. By subjectivity, we're basically talking about human judgment: the idea that a human ultimately has to weigh and balance these considerations and reach a subjective decision about the appropriate course of conduct. I would point out that it plays a part in many of the rules of the law of armed conflict, including many of the ones we've discussed here today. Take proportionality, for instance. The rule generally expects a subjective decision about whether an attack would cause an excessive amount of collateral damage. What is excessive? That's a subjective determination. There is no rule that says this number of civilian deaths is too many, no set correlation that tells you that, so you're making that sort of decision subjectively. And many critics have argued that autonomous weapons systems are not able to make this subjective decision, that they therefore can't comply with the law of armed conflict, and they offer this as a further justification for their proposal to ban the systems. Well, I would respectfully disagree with that position. Now, I certainly think it is doubtful that autonomous weapons systems will be able to make these subjective decisions in the near future. Even under the most optimistic notions of how artificial intelligence may improve in the coming years, I think it's unlikely they will be able to make those decisions for themselves. So if the systems can't make that decision, how are we still complying with these subjective requirements that exist throughout the law of armed conflict? Once again, you look to the involvement of the human operator and the human commander in the process. I think the critics are a little misguided here, because they're failing to fully appreciate how the autonomous weapons system targeting process would actually occur. To comply with this law, humans are going to need to inject themselves at periodic points throughout the process to ensure compliance with these subjective requirements. And these judgment calls can be made by humans throughout the process; some of them may be made before you even decide to launch the system.
The commander may decide, looking at the particular battlefield, the particular target, and the environment, that the autonomous weapon system would be able to comply in this circumstance. He would set the appropriate thresholds and subjectively determine in advance that this is an appropriate weapon system for that mission. So it could be launched into that particular environment, and the commander would, I think, justifiably or at least reasonably have complied with the provision by making the necessary subjective decisions in anticipation of the strike. Ultimately, what you have is a human operator making those subjective calculations in advance and providing them to the autonomous weapon system in the form of guidance. With that guidance embedded in the software, the autonomous weapon system is then tasked only with making objective calculations about how to perform on the battlefield, much like the threshold check sketched earlier: as long as I am under these thresholds, I am in compliance and I will engage; if I am outside those thresholds, I will not engage. Now, this certainly represents a new way of looking at the subjectivity requirements of the law of armed conflict, and it may be controversial, but I do think it is one way that autonomous weapons systems could be used lawfully and in compliance with these provisions.

Okay, the second area I wanted to look at is responsibility: the idea of who is accountable, who should be held accountable, and what sort of accountability we have for the use of autonomous weapons systems. Autonomy does represent a greater separation of the human from the battlefield, and so some significant questions arise when you ask how you are going to hold somebody accountable for battlefield conduct. The critics of these systems would say that if you've removed humans from the final targeting decisions, you've prevented the proper assignment of legal responsibility, nobody can be held accountable, and that's one of their justifications for banning the systems. Contrary to those concerns, I think humans can be held legally responsible for the actions of autonomous weapons systems even when they are not controlling every single move. Certainly with a remotely piloted Predator drone you can understand how the pilot is accountable, but even with the added separation of an autonomous system, where a controller gives the system its parameters and provisions, I think you can have similar accountability for that person. I do concede that it raises some unique concerns. Some of the issues are pretty straightforward. Clearly, if an individual, a commander or anyone else, intentionally programs an autonomous weapon system to engage in an action that could amount to a war crime, it's pretty clear that person can be held liable. I think that's simple. Likewise if you use the system in an unlawful manner. Say you had an autonomous weapon system that was not very good at distinguishing civilians from combatants.
Such a system might be acceptable in an open area where you're worried about tanks in formation, but it certainly would not be a good system to use in the urban environment of a conflict. So if a commander used it in that urban environment, that would be an unlawful use of the system, and I think you could hold that commander accountable. Some of those cases are straightforward. Other accountability issues would be more complex. The critics would say that some of these systems are going to be so complicated that it may be hard for a commander to really understand how the system would respond to certain things, even though, as I've mentioned several times today, the human operator embeds their guidance into it. It is possible that a system could be so complex that it would be difficult to hold a commander responsible for its actions, and perhaps there could be an accountability gap in those few instances. It's unclear; a lot will depend on what the technology holds and how it develops in the future. But I do think it represents a bit of a hurdle, and something nations will want to think about as they move forward and continue deciding whether to develop autonomous weapons systems.

Okay, that brings us to the conclusion. There are a few points I want to emphasize one last time. First, I think humans are always going to be in the loop when it comes to autonomous weapons systems. Every envisioned use of autonomous systems on the battlefield implies that commanders will continue to retain the appropriate amount of oversight, and as a result they will be able to be held accountable in all but perhaps the few instances we just described. Second, I think autonomous weapons systems are not unlawful per se. As a matter of law, in certain circumstances they would be able to navigate the weapons law issues successfully: I don't think they are indiscriminate by nature, and I don't think they would cause unnecessary suffering. The same goes for the targeting law aspects, that is, distinction, proportionality, and feasible precautions in attack: they would be able to comply with those rules under certain circumstances. Now, there may be complex battlefields, like urban environments, where they would be inappropriate to use, but the law already has the provisions we just discussed under targeting law, which would prevent their use in those circumstances. In other situations, though, they would certainly be lawful and could be used. And so overall, I think a ban on autonomous weapons systems at this point is premature. We haven't had any of these systems fielded or developed, so it's too early to make that sort of judgment call about where the systems may go and what sort of promise they may ultimately deliver. In particular, as we discussed with feasible precautions in attack, you may end up removing from the arsenal, in advance, a system that could in certain circumstances minimize civilian casualties better than any other option.
So I think that, as a matter of law, such a ban is not supportable or required. Okay, that concludes today's lecture. I really want to thank you for your interest in this emerging debate and in the unique aspects of autonomous weapons systems. If you are interested in reading more about autonomous weapons systems, or in learning more about the law of armed conflict, I'd encourage you to visit our department website at www.usnwc.edu, that's the US Naval War College, slash ild for the International Law Department. There you'll be able to explore more of our research and our efforts on this emerging topic at your leisure. So thank you so much, and have a great day.