Thanks a lot, Jan. Jan has asked me to come here today and talk about autonomous weapons systems and international law, but he only gave me 30 minutes. So what I'm going to do is carve out the particular area of international law that I think is the biggest problem with regard to autonomous weapons systems. It's certainly the area that we're beginning to focus on in the international legal community. Jan also asked me to talk about autonomous weapons systems and international law in a very general sense, not necessarily about cyber. And that's because Jan's a lawyer, and Jan understands that the basic principles of international humanitarian law will govern autonomous weapons systems, whether they're in robotics or in cyber warfare. So I'll be talking about autonomous weapons systems and international humanitarian law in the general sense. For the non-lawyers, let me very briefly explain what international humanitarian law is. You heard a number of speakers talk about the jus ad bellum. That is the body of law that governs the use of force between states: when is it okay for your state to defend itself, for example? When is it okay for your state to act pursuant to a Security Council resolution? That's not international humanitarian law. It's important, and there are several minor autonomy issues there, but the real body of law that's capturing attention is humanitarian law, which is about something quite different. Humanitarian law isn't about when Estonia can attack the Netherlands; well, I assume you're not going to do that. It's about when the fight is on, when the conflict is going: what are the rules of the game? So we'll be talking about the rules of the game that govern autonomous warfare in an armed conflict. Now to begin, what we need to do, obviously, is define autonomous weapons systems.
Weapon systems are, of course, the weapon plus the platform and the associated technology that carries and employs the weapon. We're talking about a weapon system in which the system itself finds, fixes, and kills the target all by itself, without human intervention. That's the definition used by the US Department of Defense, and it's the definition I'll use to explain the systems and the law today. Now, there seems to be a sense, at least in the international law community, particularly among those who haven't been involved in military operations, that autonomy is something horrendously new: oh my God, there's this new thing called autonomy. We have had autonomous systems around for a long time. I was in the Air Force, and decades ago we were considering how to use missiles in air-to-air combat that were semi-autonomous. In other words, the pilot launches the missile and peels off; the missile then tracks the target and kills it. That's a semi-autonomous system, and it's been around for decades. More recently, we've developed systems that are fully autonomous. I've got two here; I'm now with the Naval War College, so I have to use naval examples. Seismic and acoustic mines are fully autonomous: you put them out there, you turn them on, they wait until a warship with the right parameters comes by, and they sink the warship. Then you have the system on your right, an autonomous system called the CIWS, the close-in weapon system. It was designed to shoot down incoming missiles attacking a warship. You literally just turn it on. It starts scanning the horizon, and if it sees something inbound that meets particular parameters, it will engage it with a Gatling gun and kill it. So we've had autonomous systems around for a long time, and they're in the news today: Patriot batteries being requested by Jordan (a Patriot is an autonomous weapons system), or the Israelis shooting down Hezbollah Katyusha rockets with the Iron Dome system.
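The sense-and-engage logic just described (turn it on, it scans, and anything matching preset parameters gets engaged with no human approving the individual shot) can be sketched in a few lines. This is a toy model only; the field names, thresholds, and signature strings are invented for illustration and do not describe any real system.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    speed_knots: float   # how fast the detected object is moving
    inbound: bool        # is it closing on the defended warship?
    signature: str       # crude stand-in for sensor classification

def ciws_style_engage(contact: Contact) -> bool:
    """Fully autonomous logic in the CIWS mold: once the system is
    switched on, it alone decides to engage anything matching its
    preset parameters; no human is in the loop for the engagement.
    All parameters here are hypothetical."""
    return (contact.inbound
            and contact.speed_knots > 500.0          # missile-like speed
            and contact.signature == "small_fast_radar_return")
```

A slow surface contact never matches the parameters, while a fast inbound missile-like track does. The legal questions in the rest of the talk are about when fielding and employing such parameter-gated logic is permissible, not about whether the logic itself is exotic.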
That's an autonomous weapons system. So there's nothing new about autonomy on the battlefield. However, the international law community seems to have just discovered autonomous weapons systems. Human Rights Watch has just produced a report called, and they always come up with clever titles, Losing Humanity. You get it: no human in the autonomous system. In it they claim that autonomous weapons systems are unlawful as such, that they violate international humanitarian law and therefore should be banned. Now, we should not take this lightly, because Human Rights Watch is a very serious organization. Their lawyers, including the general counsel, who attended the conference last year, are very, very good lawyers. But they've concluded that these autonomous systems violate international humanitarian law. And then the UN Special Rapporteur on extrajudicial killings has come online and issued a report that says: listen, it's going to be really hard to comply with international humanitarian law, and so all the states out there should declare a moratorium on autonomous weapons systems until such time as we can figure out what the framework for them is. Well, I don't agree with either group, though I very much respect both. I don't agree with the first, because it seems clear to me that we can have autonomous weapons systems that are fully compliant with international humanitarian law; I will show you that in a second. And with regard to the Special Rapporteur, what I would say is: what do you mean we need a framework? We have a framework. We have had a framework for a period that is now measured in centuries, and that framework is international humanitarian law. And it works just fine. So let me show you the international humanitarian law that would govern these systems. There are lots of aspects; I could talk about this for several hours. For example, accountability for autonomous weapons systems.
But in the 30 minutes I have been allotted, let me focus on the two bodies of law that I believe are critical. The first body of law you should think about when you're talking about autonomous weapons systems is the law that makes certain weapons unlawful per se. In other words, the weapon itself is a violation of humanitarian law; it has nothing to do with how the weapon is used. Poison, for example. The second piece of humanitarian law that's critically important deals with weapons that are in and of themselves lawful but are being used in an unlawful manner, and this is where the real meat of humanitarian law is found. With regard to this norm, there are four things you need to worry about. First, the principle of distinction, which is about who can I shoot, what can I shoot. Second, you have to worry about feasible precautions in attack: how do I conduct my operation while minimizing harm to civilians and to civilian objects? The third is proportionality: even if I know I'm shooting at a lawful military objective, and even if I've done everything I can to minimize the harm caused to civilians and civilian objects, I may nevertheless expect to cause so much incidental harm to civilians that it is unjustifiable to proceed with the attack, even though that attack is going to give me military advantage. And the fourth thing I'd like to talk about is not a principle or rule of humanitarian law, but it's about doubt, because a lot of people are focusing on doubt. If I have an autonomous weapons system and it can't assure me with 100% certainty that the target it's engaging is a military objective or a combatant, or that I won't harm civilians beyond what's necessary, how do I handle that doubt? So those are the four topics I'm going to talk about. Let's talk first about unlawful weapons per se.
In other words, is the autonomous weapons system itself, irrespective of how the warfighter uses it, unlawful? Well, we can dispense with two of the rules very quickly, although people citing them in this debate are wrong to do so. The first is the rule prohibiting a weapon that causes superfluous injury or unnecessary suffering; in other words, a weapon that aggravates the wounds caused to combatants on the battlefield. An example would be a projectile filled with glass. The glass spreads, but it is very hard to dig out of a wound, and so there is a requirement not to use glass in projectiles. That's one reason projectiles contain metal (there are other, engineering reasons, obviously), but metal you can see: when the doctors are treating you on the battlefield, they can see the metal embedded in your body. So that's the first rule. The second rule is that you cannot have a weapon on the battlefield that has uncontrollable effects. What do I mean by that? Take biological contagions. Biological weapons are unlawful for other reasons, but they're also unlawful for this one. If I infect Colonel Susick here with a biological contagion, he's clearly a military objective. He's a combatant, a colonel in the armed forces, and I would accrue a real military advantage by killing him with that contagion. The problem is that he's sitting next to someone who, no offense, doesn't look like he's in the military, and as soon as the colonel coughs, the germs will spread to this poor fellow, and over there, and over there. I, the attacker, cannot control the spread, because of the nature of the weapon. People are raising these issues in the context of autonomous weapons systems, but they're wrong, because these concerns are not exacerbated by the fact that a system is autonomous.
The issue is the weapon itself, the nature of the thing that causes the effect you're trying to achieve on the battlefield. It has nothing to do with whether a human is in the loop or not. So we can quickly dispense with those two. That brings us to the third thing that can make a weapon unlawful per se, and this merits a little more discussion: the prohibition on weapons that cannot be aimed sufficiently. The requirement, basically, is that if you're developing a weapon, if you're going to field a weapon on the battlefield, you need to be able to actually aim that weapon at military objectives, and that really depends on the environment the weapon is designed for. If I have a weapon designed to take out tanks in the desert, I don't need to aim it much, because in the desert there's primarily just sand. So I can have a weapon that's not very precise and hope to hit tanks, and that's perfectly legal. But if that same weapon is designed for use in an urban environment, it would have to be very, very precise; if it isn't, it's unlawful. Why? Because it's not precise enough: in an urban environment, combatants and civilians are intermingled, so a weapon system for use there has to be very, very precise. And the same is true (I know we're not supposed to be cyber-specific) with regard to networks: if you design your malware for use in systems interlinked with civilian networks, the same principles apply. Now, what this means with regard to unlawful weapons per se is that you can make no broad assessment at all about autonomous weapons. You have to look at every proposed autonomous weapon and ask yourself: what are the capabilities of this weapon, and in what environment will it be used?
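The point that the same weapon can be lawful per se for one environment and unlawful for another reduces to comparing the weapon's accuracy against what its intended environment demands. The sketch below is purely illustrative: the miss-distance figures and the `REQUIRED_ACCURACY_M` table are invented for the example, and a real weapons review weighs far more factors.

```python
# Hypothetical accuracy each environment demands, expressed as the
# maximum acceptable miss distance in metres (smaller = stricter).
REQUIRED_ACCURACY_M = {
    "open_desert": 500.0,  # little besides sand around the tanks
    "urban": 10.0,         # combatants and civilians intermingled
}

def aimable_enough(weapon_miss_distance_m: float, designed_for: str) -> bool:
    """Per se lawfulness under the aimability rule turns on whether the
    weapon can be aimed well enough for the environment it is *designed*
    for: the same weapon can pass for the desert and fail for a city."""
    return weapon_miss_distance_m <= REQUIRED_ACCURACY_M[designed_for]
```

So a notional anti-tank weapon accurate to 200 metres passes the desert check but fails the urban one, which is exactly the talk's point: the assessment is weapon-by-weapon and environment-by-environment, never a blanket verdict on "autonomy".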
That will tell you whether or not the individual weapon is unlawful per se. Now, the real meat of humanitarian law doesn't have to do with unlawful weapons; it has to do with how you use lawful weapons. And as I said, there are basically three principles you need to worry about: distinction (what am I shooting at?), feasible precautions in attack (how do I avoid harming civilians?), and proportionality (if I've done my best, is my best good enough?). Let's talk first about the principle of distinction. This is perhaps the most important principle in international law. The International Court of Justice, as many of you know, in the famous Nuclear Weapons case, called this a cardinal principle of international law, and the Court has said this principle is intransgressible. This is the mother of all humanitarian law principles. And it's a very simple one. For the lawyers, you find it in Article 48 of Additional Protocol I to the Geneva Conventions, and it says this: if you're on the battlefield and engaged in operations, you must distinguish between combatants and civilians, and between military objectives and civilian objects. Now, that's a very broad principle, so international humanitarian law has operationalized it. The first way it's operationalized is something everyone in this room knows, whether you're a lawyer or not, whether you're in the military or not: you cannot shoot at civilians, and you cannot shoot at civilian objects. That doesn't mean you can't harm them in your operations, but I can't direct my operation at that civilian. Now, that's not really a problem with regard to autonomy at all, because there it's the intent to attack the civilian that matters. So if you have a system that is designed to attack civilians or civilian infrastructure, that is a violation of humanitarian law.
It's the second prohibition I have up here that is much more important in the case of autonomy, and here the prohibition is a little different. I'm not aiming at the civilian sitting next to Colonel Susick; I'm not aiming at you. What I'm doing is launching my system, and I just don't care whom it kills. It's capable of firing at particular military objectives, capable of firing at particular individuals, but I just don't care. In the air environment, the way we used to explain this to aircrews is: you've gone to the target, you haven't expended all of your ordnance, you're on your way home, and you've got a bomb left. Maybe you don't know, but you should: it's dangerous to land a jet with live ordnance aboard. So you pickle it off, you drop the weapon over enemy territory, and you say, listen, maybe I'll hit something; maybe some good will come of this weapon. That's unlawful. Why? Because you are not aiming that particular weapon at a military objective. Again, I don't really think this is an issue of autonomy per se. The way autonomy plays in here is that you need to use the autonomous weapons system, whether it's a cyber or a robotic system, in an environment in which it can aim. I met with Human Rights Watch a few weeks ago and we discussed our respective views (they're very good), and what I said was: listen, when you build an autonomous weapons system, it's all about the environment you decide to use it in. Remember back in the first Gulf War, some of you will, the Iraqis were firing Scuds into Israeli population centers, and everyone said, oh my God, they're using them indiscriminately. Correct. But they could have used that same system against tank formations in the desert, and it would have been perfectly lawful.
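The use-side distinction test just described comes down to a two-part check: the attack must be directed at a specific military objective, and the system must be employed in an environment where it can actually discriminate. A deliberately minimal, hypothetical sketch, with invented parameter names:

```python
def engagement_is_directed(has_designated_military_objective: bool,
                           system_can_discriminate_here: bool) -> bool:
    """Use-side distinction, as a toy two-part test. 'Pickling off' a
    bomb in the hope that some good comes of it fails the first prong;
    a Scud fired into a city fails the second, while the same Scud
    aimed at a tank formation in the open desert can pass both."""
    return has_designated_military_objective and system_can_discriminate_here
```

The point of the sketch is that neither prong mentions autonomy: a human crew and an autonomous system are measured against the same two questions.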
The same is true with regard to autonomous weapons systems. Under this norm, you ask what type of environment you are operating the system in, and whether you are telling it to kill targets that are lawful or just to kill targets generally. Okay, that's distinction; that's pretty clear. Let's talk now about feasible precautions in attack. Pursuant to Article 57 of Additional Protocol I, you are required on the battlefield to do everything you can to minimize any harm you may cause to civilians or to civilian objects, and this applies fully in cyberspace. What are the requirements? There are four. First, you need to do everything feasible (and feasible means militarily sensible, reasonable, available) to make sure that the target is a military objective, that it's lawful to attack that target. Second, you have a requirement to warn a civilian population that may be affected by your operations. But be careful: you don't have that requirement if warning would forfeit surprise or endanger your forces. If you're conducting a cyber attack, you don't have an obligation to warn if it means the defenders can close the vulnerability. So it's warn when feasible, verify as feasible. The third requirement is very, very important with regard to autonomous weapons systems, and that's the requirement to choose the weapon or weapons system that is likely to cause the least harm to civilians or civilian objects without sacrificing military advantage. It's a collateral damage issue, and as we'll see, that's very important for autonomous weapons systems. It's very clear: if I have a rifle and I can shoot Jan, who's clearly a military objective, without harming our notional civilian here in the front row, then I have to use that weapon instead of dropping a bomb on this building in order to kill Jan. Why? Because the rifle will cause less collateral damage and give me precisely the same military advantage. Exactly the same thing applies in cyberspace. And then, finally, there is a requirement that if you have a set of possible targets, you have to pick the target among them that gives you the effect you want and causes the least collateral damage to civilians or civilian property, because that's what targeting is about: targeting is not simply about destroying; it's about achieving a particular effect on the battlefield. How does this play out for autonomous weapons systems? Well, the very first rule of feasible precautions in attack is that if you have sensors, you have to use them, and that includes not only sensors on the weapon but also external sensors. So if you can slave yourself to external data, then you must do so, so long as that's feasible. Everything is subject to the rule of military feasibility; everything is subject to the rule that says if it doesn't make military sense, you don't have to do it. That's the first thing you need to know. The second thing is much more important. It's the key to autonomous weapons systems, and the key to my complaint about the proposed ban. There are two corresponding conclusions that the law gives you. The first is that if you have an autonomous weapons system and you're considering its use, you may not use that system if you have something else available that will give you the same effect and present less risk to the civilian population, to civilian objects, to entities hooked into the network. You may not use it as a matter of law. It's not a matter of choice, it's not an operational matter; it is illegal to use the autonomous weapons system in that case. But a lot of the critics forget the other side of the legal requirement: if you have an autonomous weapons system and that system can yield the effect you need, then you must use it; it is a legal requirement to use that autonomous weapons system if the other weapons systems in your arsenal would cause more collateral damage, more harm to civilians or to civilian objects. It's a legal obligation. And therein lies the problem. If we take autonomous weapons systems off the table, you're taking away a weapons system that the commander might be able to use to minimize collateral damage, because it is certainly the case that certain autonomous weapons systems, given the complexity of the computer systems in them, can make some decisions better than even Colonel Susick. Why? Because the battlefield is a complex place where you have to make lots of decisions very, very quickly. I made an intervention in an excellent presentation yesterday where I said: don't be so trusting of human ability. I was in the air environment. Airplanes move really, really fast, and the ground is really, really close, and if you start second-guessing your instruments in a combat aircraft, the enemy doesn't have to kill you; you will kill yourself. So there will be circumstances in which the autonomous weapons system has greater capability than the alternatives available to you, the human-operated or human-supervised alternatives. I want to emphasize that this is not a plea for autonomous weapons systems; that will not always be the case. All I'm saying is that if you take this option away, the commander has one less option that he or she can turn to in order to avoid harming civilians. Now we turn to proportionality. This is what everyone focuses on, correctly. This is proportionality in humanitarian law; it's different from proportionality in the jus ad bellum, so you can forget issues of self-defense. That's not what we're talking about here. This is a rule designed to protect civilians, and what it says is: if you're engaged in a military operation, if you're engaged in an attack, that attack will be prohibited and unlawful if the harm you expect to cause to the civilian population or to civilians is excessive relative to the
military advantage that your attack is going to give you. In other words, there is a point at which, even though attacking that target is a good thing militarily, you can't do so, because it causes just too much incidental harm to nearby civilians, to civilians attached to the network, and so forth. Now this, to me, is in fact the major challenge; this is where the research should be going, because there's a problem. The problem isn't with assessing collateral damage. We do that every single day on the battlefield; in Afghanistan we have a collateral damage estimate methodology, and that methodology employs technology. It isn't some guy like me; it's computers figuring this out. You plug all the data into a computer. So it won't be a problem for an autonomous weapons system to estimate collateral damage, particularly given the rule that if you have doubt about someone's status, you treat them as a civilian. No, the problem is going to be that you have to assess that harm relative to the military advantage you're going to get from the operation, and that's hard. The reason it's hard is that military advantage is always subjective and always contextual. What I think to be proportionate is going to be different from what he thinks: I'm from Texas, he's from the Netherlands; trust me, I have a lower standard than he does. It's very, very subjective. And it's always contextual: if we're going after a command and control system, say cyber attacks against a command and control system, that system is of different value on day one of the operation than on the last day, when the enemy's defeat is almost at hand. And so my question, and this is for the techies, because I don't know the answer, or whether there is an answer, is this: how is a robot or a machine or computer code going to assess military advantage, which on a battlefield can change from one minute to the next? You're not dangerous to me at all, maybe a little dangerous, but if you suddenly attack me, the military advantage of harming you instantly goes way, way up. How are computers going to handle that? Now, I do know that there are situations in which proportionality is not a problem, and we should understand this when we're considering autonomous weapons systems. You may have a combat environment, a battlespace, in which there are no civilians and no civilian objects. I work for the Navy now; that's often the case at sea. You may also have very, very precise targeting available; that's often the case in cyberspace, where you can often strike with extraordinary precision at your military objective. You may be able to program collateral damage values into the system, such that if an engagement would reach a given collateral damage level, the system knocks it off; it doesn't engage. And you can ask yourself whether that is reasonable in the circumstances in which you're employing the weapon. If not, don't use the weapon; if so, use it. Effectively, there you've made the decision, not the machine. And finally, I think we'll be developing dialable systems, where a warfighter like Jan will be able to look at his weapons system and say: this is a really hot environment, I'm going to dial acceptable collateral damage up; or, this is a really benign environment, I'm doing occupation, I'm doing stability ops, I'm doing COIN, I will dial collateral damage down. I think we'll see that. I will tell you that although I've said this is the big problem, it's also the biggest problem for humans on the battlefield. Those of you who have been at war or involved in targeting know that the hardest decision you'll ever make on the battlefield, intellectually, is always proportionality. I don't know what that tank is worth; I don't know what that command and control system is worth in terms of human lives. That's the hardest decision you can make, and it will be the hardest decision for a machine too. You cannot, as a
matter of law, ask more from a machine than you ask from a human. I'm not an ethicist; trust me, lawyers don't do ethics. But I will tell you that that decision is almost impossible to make on the battlefield, and we cannot, as a matter of law, demand more from a machine than we demand from humans. Finally, let me deal with the question of doubt, because I'm about out of time. Very quickly: what do you do with doubt? Because, like Jan, the machine isn't always certain that what it's engaging is a target. Here the standard in international humanitarian law is very clear: if the degree of doubt you have would cause a reasonable warfighter to hesitate (in other words, I'm about to engage and I go, I don't know whether that's a military objective or not), then you may not engage the target. That's true on the battlefield, it's true in cyberspace, and it's true regardless of whether a machine or a human is involved in the process. This means that errors are lawful. Machines, like humans, will make mistakes on the battlefield, and that's lawful so long as the mistake, regardless of whether it was made by a machine or a human, was reasonable. What's our standard? Our standard for the warfighter is: was it reasonable to use an automated weapons system programmed to accept that degree of doubt in those circumstances? So I may have a system that says, engage when all my sensors are green. If it's reasonable in the circumstances to accept that level of doubt, because you're in a high-intensity conflict, then it's perfectly lawful. I would also note that autonomous weapons systems can be programmed to hesitate longer to resolve doubt, and that's a good thing. They're like drones: if the system has doubt, you don't have a human at risk, so it can perhaps hold off for a while, because there's no defensive need to engage immediately. And finally, I would again emphasize: don't overestimate the human. Anyone who's been involved in a blue-on-blue incident, and I have, knows that the human ability to resolve doubt is not very good, because a blue-on-blue incident is allied forces killing allied forces after someone concludes he has no doubt about the target's status. So let me finish up with Schmitt's three rules for assessing new weapons systems; I commend them to you. First, don't sell humanitarian law short. The law is good; the law can handle most systems. I was in the Air Force, and I will assure you that at the turn of the 20th century people were saying: airplanes, oh my God, airplanes, let's make these illegal. There were conferences like this one trying to outlaw airplanes. Thank God they failed, because airplanes are really cool, and so we have them today, and guess what: humanitarian law handles aerial warfare pretty well. Then, in my generation (I'm older than most of you), we had over-the-horizon weapons, beyond-visual-range weapons. I remember: oh my God, these can't possibly be lawful, because I can't see the target. Today most engagements are beyond visual range; in fact, in the air-to-air environment, if you let another jet get within sight of you, you've just screwed up; he probably killed you a long time ago with his beyond-visual-range missile. Humanitarian law handles this on a daily basis. Cyber? Oh my God, cyber. Some countries are still saying humanitarian law doesn't apply; that's nonsense, it's silly, it's absurd. In fact, in the Tallinn Manual (Bill Boothby is here; he was one of the key drafters) we found that there wasn't a big problem applying humanitarian law. There were a few issues, but not a big problem. And now we have this entirely vacuous drone debate: oh my God, drones, civilian casualties. Listen, if you step back and look at drones in the humanitarian law context, they're not bad things; they're good things, because of their capability to distinguish civilians and civilian objects from military objectives. So the first rule is: don't undersell humanitarian law. The second is that the mere fact you don't like a weapon doesn't make it unlawful. So if
the ethicists come up here in the afternoon and tell you these systems are unethical, I'm a lawyer, and I say: I'm glad you have that opinion, but so what? If you don't like them on the battlefield, law is the way to get them off. Take your ethics and convince a state to outlaw them; then it becomes law. But ethics alone doesn't keep a weapon off the battlefield and doesn't make it unlawful. And the last thing is: states make law. Only states make law, and they make it through treaty, or through state practice combined with opinio juris. Human Rights Watch doesn't make the law; if you read that report, that's not law, that's an opinion. Nor does the UN Special Rapporteur, nor does a journalist; The Economist can't tell you whether something is lawful. And nor does Mike Schmitt; I'm an academic, I just have a view. States make law. So if you're a state legal advisor, I beg you: retain your independence, consider all of the discussion, but don't jump to the conclusion that because either Mike Schmitt or Human Rights Watch has an opinion, that is in fact the law. Make your own decision. And with that, I will close.