Hello from the Stockton Center for International Law at the United States Naval War College. I am Lieutenant Colonel Jeremy Davis and I would like to welcome you to the fifth installment in the recurring Stockton series. Today we are discussing international law issues raised by the development and use of autonomous systems, both in peacetime and in armed conflict. Today's event is cosponsored by the NATO Cooperative Cyber Defense Center of Excellence, a NATO-accredited cyber defense hub, the mission of which is to support NATO itself and member nations with interdisciplinary expertise in the field of cyber defense research, training and exercises, focusing on technology, strategy and the law. The U.S. group of the International Society for Military Law and the Law of War is also a cosponsor of today's webinar. The U.S. group supports the International Society's central purpose, which is to study and disseminate information about international humanitarian law, military law, the law of peace operations and related legal domains. Before proceeding further I would like to pause and invite a few remarks from Professor James Kraska. Professor Kraska is the chair and Charles H. Stockton Professor of International Maritime Law here at the Stockton Center. Professor Kraska also serves as a visiting professor of law and John Harvey Gregory Lecturer on World Organization at Harvard Law School. Professor Kraska, the floor is yours. Thank you kindly, Lieutenant Colonel Davis, and welcome everybody to the Stockton Center for International Law. We focus on international law and military operations, and we have three directorates in doing so. We have a directorate for the law of land warfare that focuses on the law of armed conflict. We have a directorate for maritime operations that looks at maritime security law and the law of naval warfare. And then finally we have a third directorate, which is Lieutenant Colonel Davis's directorate.
The focus there is on the law of airspace, outer space and cyberspace. The Stockton Center has faculty from all five U.S. armed forces as well as the Royal Air Force, and we've had judge advocates from other countries' armed forces in the past, as well as numerous visiting scholars. We also publish the journal International Law Studies, which is the oldest journal of international law in the United States and has recently published some articles focusing on autonomy that I encourage you to look at. And finally, we are also planning to conduct a larger conference on technologies and international law in early December, so if you're interested in that please reach out to us. Lieutenant Colonel Davis, I turn it back to you. Thank you, Professor Kraska. The Stockton Center aims to educate, inform and influence leaders, decision makers, scholars and practitioners on important international law issues. To that end we are proud to have with us today as our three panelists Dr. Rain Liivoja, Professor Eric Jensen and Professor Mike Schmitt. Dr. Liivoja is an associate professor at the University of Queensland School of Law, where he leads the Law and the Future of War Research Group. Dr. Liivoja's current research focuses on the legal challenges associated with military applications of science and technology. He has been a visiting scholar at Georgetown University, the University of Oxford and the NATO Cooperative Cyber Defense Center of Excellence. Among his many publications, Dr. Liivoja is co-author of Autonomous Cyber Capabilities under International Law, a research paper published by the NATO Cooperative Cyber Defense Center of Excellence. Eric Jensen is a professor at the J. Reuben Clark Law School at Brigham Young University.
Professor Jensen was a member of the International Group of Experts that prepared the 2017 Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, as well as the group that prepared the 2013 Tallinn Manual on the International Law Applicable to Cyber Warfare. Before joining the faculty of BYU, Professor Jensen served as a special counsel to the Department of Defense General Counsel. He taught at Fordham Law School, and he served 20 years in the United States Army as both a cavalry officer and as a judge advocate. Professor Jensen's most recent work, Autonomy and Precautions in the Law of Armed Conflict, is now available in the Naval War College's International Law Studies. Mike Schmitt is the Charles H. Stockton Distinguished Scholar in Residence and Professor Emeritus here at the U.S. Naval War College, as well as Professor of Public International Law at the University of Reading in the United Kingdom. He is also Distinguished Scholar and Visiting Professor of Law at the University of Texas and Francis Lieber Distinguished Scholar at West Point. In addition to authoring more than 160 publications, Professor Schmitt served as Project Director for both the 2017 Tallinn Manual 2.0 and the 2013 Tallinn Manual. Professor Schmitt's article Autonomous Cyber Capabilities and the International Law of Sovereignty and Intervention was also recently published in International Law Studies. We are equally proud to have with us as our moderator today Mr. Mike Meier, Special Assistant for Law of War Matters in the Office of the Judge Advocate General for the United States Army. Mr. Meier advises the Judge Advocate General on issues involving the law of war, reviews all proposed new weapons and weapons systems, and serves as a member of the Department of Defense Law of War Working Group. Mr. Meier previously served as an attorney advisor with the Office of the Legal Advisor at the U.S. Department of State, and he served nearly 23 years as a judge advocate in the United States Army.
Our plan for today's webinar is quite simple. Mr. Meier will introduce the topic and provide some context to frame the discussion. He will then invite the panel to make remarks, lead our panelists in a discussion, and then pose to the panel various audience questions. You may have noticed that the chat function has been disabled for today's event. If you have a question that you would like to pose to a member of the panel, or perhaps all members of the panel, please use the question and answer button located near the bottom of your screen. You may submit questions either in your own name or anonymously if you wish. If you see someone else's question and find it to be particularly good, please feel free to like that question, and it will be upvoted and be more likely to be posed to the panel. Finally, if, despite all our best efforts and crossed fingers, you lose connection with the webinar today, fear not: the event is being recorded, and it will be posted to the Naval War College YouTube channel in the coming days for you to view at your convenience. Mr. Meier, the session is yours. Great. Thank you, Jeremy, for that introduction. Thank you, Professor Kraska, for allowing me to moderate the panel, and I'm excited to be here with these three distinguished panelists that I know very well. My job really is to sort of referee them. I don't think we'll have any problem with them engaging in the discussion. Instead of having them do long 15- or 20-minute presentations, we're going to break this webinar up into sort of five different topics. We're going to talk a little bit about autonomy, then the law of war and the jus ad bellum, then Mike will discuss some sovereignty and non-intervention points, then move to Eric's piece on precautions. So we're going to break these up into sort of five separate categories.
And then we certainly encourage you, as they're making these discussions and points, to raise questions, as Jeremy pointed out, so we can insert those with respect to the specific topic that we're talking about. So kind of like presidential debates, we'll move from topic to topic. Hopefully we won't have the acrimonious type of discussions as we move forward. And I know, at least for those of us in the United States, we're just about presidential-electioned out. So we look forward to a great discussion and we're going to go ahead and get started. And again, this is for each of you. We're going to talk about the definition of autonomy, because each of you in your papers discussed autonomy. Rain, in your paper you suggested that questions about what true or full autonomy is really can't be resolved, and it seems a firm answer may not even be necessary. Eric, in your paper, you quoted our friend Chris Jenks, and you said the international community can't even agree on what they disagree about. And to some degree, a state's take on autonomous weapons may be influenced by how it defines autonomy. And then, Mike, in your paper, when you were talking about sovereignty and non-intervention, you got into the different types of terminology that I think Paul Scharre has used, of sort of in the loop, on the loop, and how you describe autonomy. Starting with you, Rain, and then moving to Mike and then Eric: how do you view this discussion of whether we actually need a firm definition of autonomy to resolve this? Or is it something that you just get lost in the weeds trying to do? So I'll turn that over to you, Rain. Then we'll go to Mike and then to Eric. Thanks very much, Mike, and hello, everyone. And thank you to the organizers for the kind invitation to participate on this otherwise very distinguished panel. So there are multiple difficulties with the ongoing debate about autonomy in weapons systems and in other military systems.
I mean, one of the problems is that the question itself is quite difficult. So autonomy from a technological perspective is difficult to comprehend, perhaps. And that is then not helped by the fact that autonomy tends to be used by technologists and engineers in a far looser and more liberal manner than by lawyers and philosophers. So there's a difficulty built into the debate. But what I think is actually more problematic is that the discussion has become highly politicized. There are these fairly broad and neutral definitions of autonomy that are floating around. So for example, the US DoD and the International Committee of the Red Cross have in a fairly similar fashion defined autonomous weapons systems as weapons systems which, after activation, can select and engage targets without further human intervention. And that is a definition that would capture many existing weapons systems as well as potentially more advanced weapons systems that might be developed in the future. But there's another strategy as well, which a number of states have deployed, perhaps most notably China, which has developed a very elaborate definition of autonomous weapon systems, according to which such a weapon system is, among other things, inherently indiscriminate and has capabilities that exceed human expectations, and so on and so forth. So they've effectively gone for a definition of autonomous weapon systems which is so narrow that such weapons are unlikely to be developed or are unlikely to be of interest to states. The result of which is that, should there be any future regulation of autonomous weapon systems, that regulation would not affect them if such a narrow definition is adopted. So there's a problem around definitions from that perspective. And then finally, I'm slightly skeptical about the very sharp divides between fully autonomous and semi-autonomous weapon systems.
I think these labels are problematic because particular functions of particular systems can have different degrees of autonomy. So labeling an entire system semi-autonomous or fully autonomous tends to obscure more than reveal about the actual capabilities of that particular system. Eric? Well, so I agree with Rain on parsing out those definitions. And I think that both he and Chris Jenks, as you mentioned, Mike, whom I rely on in some of my paper... I mean, I think where this argument becomes hard, I guess I shouldn't call it an argument, where this discussion becomes the most refined is at the CCW, where people are trying to talk about these autonomous weapons and potentially make controls for autonomous weapons, or at least agree on how to approach autonomous weapons. And how these various nations are looking at these definitions does, in fact, I think, have an impact on how they view the potential regulation of autonomous weapons. Because if you take, for example, Rain's differentiation, where some groups say it's select and engage, well, then you're going to approach that differently with respect to regulation. And if you're taking this very narrow, narrow view, you may have a discussion as to whether or not these weapons systems will actually exist, or of course, if you're on the other end, the same issue. So I think this definition really can affect where countries stand, which will then, in the end, lead to proposals for legislation. So you see in the CCW different countries making various statements about how autonomous weapons should be regulated. Some taking the view that they don't even exist yet and that they are still future weapons. Others taking the view that they're already in the inventory of many countries across the world. And of course, you would approach regulation in a very different way based on how you view that term autonomy. So again, I agree with what, I thought, Rain did a great job of laying out those options.
And my only point would be, when you're looking at regulating, where you stand across that spectrum will have a big impact on what you propose with regulation. Mike, you're muted, Mike. My granddaughter is visiting here, and I don't want to get her squeals in the background, so my sound was off. So my view is, in life you shouldn't spend a lot of time doing what other people have done well. I think Rain did a superb job in his paper, and I adopted his definition for the purpose of my own paper. On a day-to-day basis, when I'm thinking about this, I use human in the loop, human on the loop, human out of the loop, just as a way of categorizing the types of automated or autonomous systems we have. My concern is that we're spending a lot of time focusing on this issue of definition, and it's leading us down the wrong path as lawyers. There is not a scintilla of doubt that these definitions are critically important operationally. There is not a scintilla of doubt that these definitions are critically important to the technologist, as Rain has said, but in terms of international lawyers, it causes us to focus on the wrong thing. We ought not to be looking at how these decisions are made, how these systems engage. What we ought to be looking at is the effects that are caused by the system, and we ought to be looking at issues like knowledge and intent. To what extent does that system have sensor capability that can discriminate, and on and on and on? So I'm really not going to take part in the dialogue over definitions, because it's the wrong question to ask as international lawyers. Okay. I mean, I think, Eric, you had talked about sort of the definitional piece in the CCW and then certain systems.
I think one of the concerns has been that the definitional piece came up because of the campaign to ban killer robots, trying to come up with a definition so they can figure out what they want to do with respect to a preemptive ban, and that seems to be driving the international discussion with respect to autonomous weapons. I think Mike's point is much more important, and Rain's piece really captures it: what you have is more of a technology, and you're inserting this technology into systems. How do you work with that? And I think the technologists are more concerned about how they're going to be able to incorporate this technology into existing systems. And I think, like Mike said, we're sort of putting the cart before the horse by trying to come up with some sort of universal definition when there are many different applications of autonomy in different systems, weapons systems and others. So how do you guys think about that? Mike, my only response to that would be, again, if you're in the midst of the CCW and people are trying to propose regulations to go before the body, then I think sometimes you're forced, even if I agreed with Mike Schmitt's view, sometimes you're forced to address this issue because you're presented with a proposed regulation upon which you have to provide advice to your government. And that seems to me to be the driving factor of why lawyers will have to get involved with this definition of autonomy and will have to at least get to the point where they can address it with respect to current systems and proposals for new systems. What the country facing these sorts of things ought to do, Eric, is try to get the discussion back on track. So let's take a very, very simple example. We're trying to describe what an autonomous system is. Suppose we agree, there is an agreement over what an autonomous system is. Then there are those people who say it's bad, and now we're talking about regulating or banning that system.
If we focus on effects, that will allow someone to say, slow down. The issue is not definition, it's effects. And an autonomous system, as you describe it, may actually have effects that have humanitarian outcomes, because the sensor suite is more sophisticated than the human being. But by focusing on definition, by allowing the discussion to zero in on types of systems rather than the effects caused by systems, we're actually operating at cross purposes to the humanitarian underpinnings of, for example, humanitarian law. And by the way, that bleeds over into other areas of law too: sovereignty, intervention, and so forth. So I take your point that the reality is that's what they're talking about. But the strategy should be to cause them to quit talking about that and cause them to start talking about what they really care about. Because the people who want to ban autonomous weapons, they're good people. They are pursuing the same motives, the same ends that we are. We just gotta wake them up and say, let's actually pursue those ends. I don't think, Mike, though, that they would, oh, go ahead, Rain. I would say that the CCW has, to some degree, taken Mike's advice already. The discussion, particularly in the last year and a half, perhaps, has moved away from the attempt to identify technical characteristics of autonomous weapons systems and has focused more on the type and degree of human interaction slash control necessary in order to ensure compliance with international humanitarian law whilst using autonomous weapon systems, however those weapon systems might be defined. Now, this focus on human interaction has its own difficulties, particularly when concepts such as meaningful human control are used, which many people seem to support, but no one seems to quite know what it means. But at least the shift away from the technological parameters to the human-machine interaction seems to be a step in the right direction.
And I would just also add to that, that the focus on effects is, I think, something that I would embrace, but I think there are lots of people out there who are not willing to say this is only about effects, because if you accept the idea that autonomous weapon systems will select and engage, then it's not just about their effects. There are those out there who say the process of bringing about those effects, the process of selecting and engaging, also has legal repercussions for which we must account, and that we have to think about who is selecting and engaging, how they're selecting and engaging, what's the process by which they select and engage, and not just the effects that that selection and engagement might result in. And I think that's where the difficulty lies, why we can't just focus on effects. Well, I mean, I disagree, Eric. I mean, we've been friends now for decades, and I disagree. By focusing on how it's done, we are again distorting the issue. The issue is: what is the effect caused in the battle space? So yes, we have to look at the process by which the effect is caused, but what we really care about, what we should really care about, is the understanding of the system that is being introduced into the battle space and whether the individual who made the decision to introduce that system into the battle space understood it, because if not, he or she is engaging in indiscriminate warfare, which is, by the way, an internationally wrongful act and a war crime. So again, we are trying so hard to solve a problem that is really rather simple. What does that weapon system do? And what did the individual who introduced the system understand about the system, and not only the system, but the environment in which he or she is introducing it? What did that individual understand at the time the decision was made?
And by the way, in your excellent paper, and I agree with it, those requirements of understanding are imposed on anyone who has any degree of control over the engagement. Right, and Mike, you know from my paper, I obviously agree with you. I think the focus needs to be not on the who, but on the what, but there are lots of voices out there that are still focused on the who, and I think that's why we have to address that issue, even if, just as you say, we move them from that question to the effects; there are lots of people out there who still focus on that as one of the key issues. Okay, thank you, gentlemen. And again, for our attendees who are listening online, if you have any questions, we've asked that you please type those into the Q and A. I saw a raised hand pop up, but we request that you submit questions in writing through the Q and A, because we've muted all the attendees' ability to speak. So if you have any questions, we'd like to get those from you. And at this point, gentlemen, I think we could go for the full hour talking about autonomy, but I would like to try to move on to the next topic because we have a lot there. And Rain, we're gonna go back to you. Your paper went into sort of autonomy and the use of force. And we're gonna talk about sovereignty and non-intervention with Mike in the next piece. But if you could briefly, you know, you talked about a wide variety of topics and general legal concepts, and you got into sort of the jus ad bellum piece of that. If you could sort of lead us off into this topic and talk about the various issues you saw with respect to law and the use of autonomy. Oh, look, in some ways, the issues that arise under the jus ad bellum mirror the ones that arise under the jus in bello.
There are concerns about certain evaluative judgments that have to be made under the law, and there are questions around the extent to which humans can rely on machines to make those decisions automatically. In the context of the jus ad bellum, there are a couple of issues perhaps worth highlighting. Well, first of all, let's consider this in the context of some defensive autonomous system, whether that system is a kinetic weapon system or whether that system is software that protects a particular computer system, so a defensive cyber capability. Now, the question is, is that system capable of determining the source of some potential attack, for instance, and establishing whether the relevant legal paradigm for engagement there is the jus ad bellum, or whether that should be examined under some other paradigm of international law, such as the rules of sovereignty, or whether it should be analyzed under some paradigm of domestic law such as cyber crime. And then various evaluative judgments are associated with that. So as to the response of that automated system: is that automated system capable of complying with the various restrictions that international law places on the defensive use of force, if that system is in fact capable of using force? So there might be the question of whether some use of force in fact amounts to an armed attack, the most serious form of the use of force if one goes with the theory of the International Court of Justice, a lesser problem for the US, which takes a slightly different view on that issue. But there are also questions then about the force that is used in response to a particular attack. Is that use of force proportionate? Is that use of force necessary? So there are questions around whether these kinds of judgments can in fact be delegated to a system, or are these judgments that always need to be made by a human being in real time?
Which then potentially leads to the conclusion that an autonomous defensive system should be designed with the lowest common denominator in mind, such that whatever it does is compatible with whatever legal regime might apply under the particular circumstances. That might, for example, mean that an autonomous system should not respond to an attack with what would amount to a use of force under international law if there are concerns that it's difficult to ensure compliance with principles of proportionality and necessity, for example. So the difficulty, I suppose, with defensive systems is that they might encounter various different scenarios, and the question then is, can that system be trusted to determine what the appropriate legal framework is? And if not, then it's up to the human being who deploys that system to make that judgment and not rely on the system. Mike and Eric, I guess for you, Rain covered that very well, but one of the arguments you hear with respect to autonomous weapons systems is that these systems will lower sort of the threshold to engage in armed conflict. And I think Rain in his paper sort of touched on this with his automatic hackback, about whether that could rise to the level of an armed attack. So I guess I'll turn it over, starting with Eric first, with respect to sort of the jus ad bellum issues you see with respect to autonomy, both in the cyber capabilities that Rain had mentioned and then with autonomous weapons systems. So, I mean, obviously I think Rain's got it exactly right in his paper. I think the pressure here is the advancing technologies and the lack of time and space to make decisions.
And this is why reliance on autonomy is going to become more and more significant. Because if you're thinking about your war machine back in the 1700s, by the time you got your war machine engaged to a point where it might conduct armed conflict or an armed attack, there were lots of notices that that was happening, and your enemy might be better able to be prepared. But in today's technological environment, the time to prepare, the time to notice this is happening, the time to react is so much more condensed that it's going to push us towards these autonomous responses and the ability to respond autonomously, because of the crunch of time and space. And that's why I think Rain's point is so important, because we'll probably have to design these systems assuming that there isn't time to have even a human on the loop in some cases, that those systems are going to have to be built in such a way that they can respond immediately as a default. And then the idea of where you set your minimum, at the lowest acceptable standard, becomes a way to face that. And then you hope that the human involvement can come as necessary as a result of that. But I think that pressure of lack of time and space through increasing technology is going to highlight this issue even more and more and force us to respond to this even more aggressively. And I just think, I mean, you'll hear me echo this when we get to the jus in bello, I think that the easiest answer is to say, well, look, we're just not going to employ autonomous systems that can't meet the standards that we set. That's just simply what we're going to do. We're going to have people like you, Mike Meier, who do rigorous reviews of all these autonomous systems, and we're not going to employ them until they meet the standards that we set.
And whether that's the lowest common denominator or some other standard, or maybe it will differ between national approaches, given the fact that the US, of course, is a little bit eccentric in our approach to the jus ad bellum. I mean, that may be the case, but we're going to have to make sure these systems comply with whatever approach we determine is essential before we employ them. Sorry, I got off on my soapbox there. Sorry, Mike, I'll step back. No, Mike. Well, I tend to agree with Eric and Rain. I think we do tend to focus on the law of self-defense. Let's not forget the jus ad bellum is a law prohibiting the use of force in the first place, Article 2(4) and customary law. And I think that raises some interesting questions of autonomy, but autonomy is not a big driver there. And the reason is because the use of force question is all about the threshold of the effects that are caused by the operation. And whether it's autonomous or not, it's the effects that matter. Now, there are some issues that Rain addresses. He's not addressing them here, but he addresses in his excellent co-authored paper issues of unintended effects, intent, mistake of fact, and so forth. I'll talk a little bit about that in the context of sovereignty and intervention, because intent and mistake of fact play out in the same way as with the use of force. But my conclusion is autonomy doesn't have much at all to do with the issue of whether or not a state has committed an internationally wrongful act by virtue of tripping over the use of force threshold, which takes us to an armed attack. And again, that's a threshold question. When is the operation to which you're responding at the level of an armed attack? And again, autonomy has nothing to do with that. It's about the effects that are caused by the operation to which you're going to respond, and whether it qualifies as an armed attack. Now, I do think both Rain and my friend Eric have hit on the right issues.
The right issues are necessity and proportionality, and whether or not an autonomous system can look at an incident and determine whether or not its response at the use of force level is the only option, that there are no non-forcible options. That will be tough, and proportionality is going to be tough. Will an autonomous system be able to determine whether or not its response, the response that it launches, is the least that is necessary to put an end to the imminent or ongoing armed attack? But again, I know I'm beating a drum here: you have to do a case by case analysis looking at the situation at hand. So, automatic hackback. We talk about automatic hackbacks as if they're all the same. They're not. Automatic hackback against what? You need to tell me the type of operation I'm responding to before I know if it's, for example, necessary and proportionate to respond instantly with an automatic hackback. I have to know what my response is. It's one thing to say automatic hackback, but what is your automatic hackback? Until I know the nature of the operation, I can't possibly know if it's necessary or if it's proportionate. And then finally, automatic hackback with what degree of certainty, with what degree of understanding of these particular issues? Because you're never going to have 100% understanding of what's happening to you. So again, autonomy doesn't make a lot of difference, and where it does make a difference, it's always necessary to do a case by case by case analysis. And Mike, on your certainty piece, are you talking about the attribution aspect of that? Or all of that? All of that. There's uncertainty as to attribution. Who is attacking me, okay? Is it an organ of the state? Is it a non-state actor? What's the relationship between the non-state actor and the state? Does that relationship rise to the Nicaragua level of acting by or on behalf of the state, or with its substantial involvement therein? There's uncertainty about the options.
Is this option, which is by definition forcible, the option I must take, or are there non-forcible options that are available? In some cases, when you are facing an armed attack, there are non-forcible options. In some cases, like a massive cyber attack that is happening right now, you may not have a non-forcible option, and ditto with regard to proportionality. If it's an automatic hackback, is there a way that I can craft that response so that it is the minimum that I need to do in order to deter the attack? In the cyber context, sometimes you may be responding to a very aggressive, very robust armed attack, but the response does not need to be in kind or at the same level. You just need to shut the other system down. It may not be a forcible response at all. It may be temporarily disabling the attacking system. So again, it's a theme, I know I'm beating the drum: the theme is everyone needs to slow down. Slow down, drink a glass of water, calm down, because we need to assess autonomy in context. We cannot treat autonomy as if it's a single thing. Great. And Rain, I mean, you had talked about sort of the attribution piece and others in your paper. Do you wanna take a couple of minutes to comment on what Mike just said? Sure. I take Mike's point that autonomy needs to be assessed in context, and peculiarly, due to that, I would say that when we think about cyber capabilities, ensuring the lawfulness of the use of offensive cyber capabilities might actually be easier, in the sense that they would be tailor-made for a particular operation, such that the various legal implications can all be thought through. Whereas if we're talking about a defensive cyber capability that is simply switched on, potentially for months and years and decades, then it needs to be able to deal with a range of different scenarios.
And I think that's where things get difficult, precisely because of the attribution issue: is the system capable of identifying what the source of the attack is? And even if it's capable of doing that on a technical level, there are some doubts as to whether that system is able to attribute that attack for the purposes of a lawful response. So maybe I'd leave it at that, but there's a peculiar difference there between defensive and offensive capabilities from the perspective of the legal analysis. Although there may be situations where it works in the defensive case as well. So for example, assume you have critical infrastructure. You could certainly have an autonomous system that would sense an existential threat to that critical infrastructure. You're going to melt down a nuclear reactor. The response will be an automatic and autonomous response directed back at the cyber infrastructure that is generating the attack. In that case, it would not present many issues at all, assuming, for example, you accept the view that both non-state actors and states can be the authors of armed attacks, and a few others like that. But if you see harm of that magnitude happening and your automated response is designed to disable the offending, the attacking system, I'm not seeing a lot of problems there. In which case we're going back to the lowest common denominator issue. So that kind of a response would be lawful under any legal regime that might apply. And that's how the system would have to be designed. Right, right. That would be true even if attribution was unclear, because you would always have the ability to shut down the system that was coming at you, even if that was being masked by an attack by someone else. So I think that lowest common denominator in that scenario is obvious. 
In that case, even beyond that, you would probably have a circumstance in which the plea of necessity could attach, and that would justify the operation as well. So all I'm saying is we can't paint with a broad brush here. We have to be very precise, and as Ryan said, it's a contextual analysis in every case. One of the points that Ryan brought up, though, that I think is interesting is the idea of a legacy defensive system. I mean, you can't really let that go, right? Because technology is changing to such an extent that you would have to continuously update those defensive systems in order to make sure they met your legal requirements. And that, I mean, that's a very kind of distended view of human on the loop, but it's the way you would have to do it. You can't just set the system up and let it go and we'll see you in 20 years, because then you are going to lose your control over it. Yeah, gentlemen, I think one of the things that we're seeing, and you guys have touched on this, is, you know, DOD is working on a project that is called Project Convergence. You know, it really raises jus in bello issues, where they're trying to dramatically shrink the time between getting intelligence and other inputs and allowing the system to fire and helping select, you know, the appropriate targets. And you mentioned, I think, a little bit of this when you were talking about the hackbacks and the jus in bello as states try to shrink this timeframe. How do you maintain the sort of human involvement that you see as necessary to make sure the system doesn't do something that humans wouldn't want it to do? I know, Mike, you guys have all talked about this, but as technology advances and time shrinks, how do we maintain, even as lawyers, how are you able to provide the input when it's happening in seconds and microseconds? Well, so I think this is, again, this is gonna get to my paper on the jus in bello, but that's the pressure, right? 
Because that's the reason we're employing C-RAM and CIWS and those kinds of weapon systems: a human is not capable of breaking out the slide rule and the calculator and figuring out what the hyperbolic arc is. We're just not capable of doing that. So you have to rely on automated systems to do this. And that's the pressure. You can't create a jus in bello defense to this attack with a human making those decisions, because they happen so instantaneously. So you automate those systems, you automate those decisions, and then it goes right back to a discussion we had before we went on air, which is Mike's whole point: the commander who employs that system in the environment has to be smart enough to understand what that system will do, so that the commander, the one who is deciding upon the attack, is justified in employing that system, knows how that system will work, and knows that that system will respond lawfully in a given set of circumstances. Listen, I think this human thing is a bit of a red herring. Anyone who has been to war knows absolutely that there's nothing inherently better about a human decision. A human decision may be too slow, and a human decision may not be granular enough. My background... It may just be bad. What's that? Or it may just be bad, it may just be wrong. Or it may be bad or it may be mal-intentioned. So the presumption that somehow, if we keep a human in the loop, if you will, or on the loop, that that will give you a better result, and in the context of IHL a better humanitarian result, is not necessarily true. It depends on the circumstance. It depends on the weapon system. It depends on the sensor system. It depends on the threat that you're counteracting, on and on and on. I mean, I remember years ago, when I was stationed in Turkey at the time, the United States Air Forces in Europe came up with a bumper sticker, and it said, trust your gut. 
It was for fighter pilots. When you're in an aerial engagement, you need to trust your gut. And what we found is that trusting your gut when you're in flight sometimes results in you getting killed. And the reason is that the systems on the aircraft were more precise. The systems in the aircraft understood whether you were inverted or not. The systems in the aircraft understood how far from the ground you were, et cetera, et cetera. And so the commander of USAFE at the time got rid of the bumper stickers, because we had a number of tragic crashes based on fighter pilots trusting their flying gut. So I'm a big believer. I was involved in a blue-on-blue incident over Iraq a long time ago, and had there been trust in the systems, then that blue-on-blue engagement would never have happened. So humans, yeah, they matter. And sometimes you need to have a human monitoring. Sometimes the system needs to be tethered to humans, but not all the time. There's nothing inherently good about human involvement. Ryan, I'll give you the last word on this topic. I actually think that part of the difficulty with the debate around autonomous weapons is precisely that many do not agree with Mike's last statement that there's nothing inherently good about human involvement. I mean, I would tend to agree with him, but there are several dozen states and enormous numbers of NGOs who drive this debate on the international level who do not take that view. I mean, their view is that delegating certain decisions to machines is inherently immoral, even if it results in better humanitarian outcomes. And the question is, how do we deal with that particular line of argument? Well, I'll tell you how I deal with it. My response is, I don't do morality. Excuse me, I'm an international lawyer, and the issue is international law. If states decide that it is immoral to not have a human on the loop, then knock yourself out. 
Then you should pass laws, because international law should be a reflection of the values of the international community. So I never debate with people about the moral issue, but I'm an international lawyer, and given the law we have, given the lex lata, I don't think that human involvement necessarily in every case gets us to a more lawful result. Well, again, I'm getting on my soapbox here, but I mean, I've written on this exact thing, and I absolutely agree with you, Mike. It's got to be the case, because first of all, there are those out there who, as Ryan said, not only say it's a moral issue but a legal issue, and that as a matter of law, whether through the Martens Clause or whatever they are gonna muster as their argument, they're gonna say that the law requires a human to make those decisions of proportionality, distinction, et cetera. And that even if you can find a weapon system that, as Ryan said, would make a better decision, i.e. there would be less impact on civilian objects and civilian persons, you still have to have a human involved. I absolutely disagree with that argument, but that argument is out there, and I think we have to at least accept that that argument is out there, though I think it is completely wrong. I think that the law does not require human involvement; the law requires a specific decision to be made, and it must be made in a certain way, correctly, and it doesn't matter who or what makes that decision, as long as you get to the right result. But there are certainly plenty of people out there who will not accept that view. Okay, I hate to cut this off, but I think we do need to move on to our next topic. I'm mindful of the time. Once again, we have about 70 of you out there listening to this, and we would like to get questions from you. Certainly, I think our panelists have no problem talking amongst themselves, but we would certainly like to hear from you and what you feel is important. 
So please submit your questions if you have them. And so we wanna move on to our third topic, which is the sovereignty and non-intervention piece. Mike, I think in your paper you started off with the existence of the rule of sovereignty being questioned, beginning, I guess, in 2018 by the UK Attorney General, Mr. Wright. Other states, you said, have taken the opposite approach on sovereignty, and Mr. Ney, in the DOD General Counsel speech to US Cyber Command, says that the Department of Defense OGC view shares similarities with the view expressed by the UK government in 2018. So I think for our audience, if you could just go through your paper on the view of sovereignty, whether it's a principle, whether it's a rule, and non-intervention. Over to you. Sure. In 2018, the Attorney General of the United Kingdom, Jeremy Wright, gave a very important speech at Chatham House, a very good speech, by the way, a very granular speech on international law. The United Kingdom is to be applauded, because the United Kingdom was one of the first states that issued such a statement. But in that statement, it said that the United Kingdom did not accept that sovereignty was a rule of international law, but rather a broad overarching principle. In other words, a remotely conducted cyber operation never violates the sovereignty of the state into which it's conducted. It would need to violate some other rule, like the rule prohibiting intervention. That resulted in immediate blowback from other states; France and the Netherlands pushed back right away in statements in 2019. And since then, lots of other states have as well: Switzerland, Austria, and most recently Finland, in a very important statement issued by the Finnish MFA. I think for the sake of our discussion, we must assume there is such a rule. It happens to be my position, because if there's not such a rule, there's no reason to have a discussion about autonomy possibly breaching sovereignty. 
So let's assume for the sake of analysis today that there is such a rule. Well, my conclusions about both sovereignty and the prohibition on intervention are twofold. The first is that there's nothing about autonomous cyber capabilities that makes it more likely that the rules will be violated. In other words, autonomy isn't really what matters. What matters are the effects, as I've hinted at earlier, the effects that are caused. And I'll show you how that plays out in a moment. And secondly, we don't need to revise our interpretation of either the sovereignty rule or the intervention rule in light of autonomous capabilities; we can still achieve the underlying object and purpose of those rules by classic application thereof. So let me explain this, and we'll start with the notion of an internationally wrongful act, in other words, an unlawful act. It requires two things: first, attribution; second, breach. Let's begin with attribution. Attribution is all about the nature of the relationship between the state and the individual or the group that is conducting the operation, whether it's an organ of the state, article 4 of the articles on state responsibility, or, for example, a non-state actor acting pursuant to instructions, direction or control, which is of course article 8 of the articles on state responsibility. There is nothing that changes with regard to the application of those rules when an autonomous cyber capability is employed, because they have to make the decision to employ that capability. So we just look at classic rules, which really means all of the play is on the issue of breach of the obligation owed another state. So let's start with sovereignty. There are two ways you can violate sovereignty, assuming your state is a state that buys into sovereignty as a rule. The first is by the causation of effects on another state's territory. Now I have to tell you, the international community is unsettled as to which effects qualify. 
The French say any effects caused on French soil will constitute a violation of sovereignty so long as they are attributable to a state. But again, autonomy has nothing to do with this, because it's not whether the effects are caused by autonomous or non-autonomous means. It's the nature of the effects. So irrespective of the debate over which effects qualify, autonomy is neither here nor there. And the same is true with the second means of violating sovereignty, which is by conducting a cyber operation that interferes with or usurps another state's inherently governmental functions, like running an election or collecting taxes or engaging in law enforcement. Again, the issue is not whether or not the system is autonomous. The issue is, what's the effect? Was the effect interference with an inherently governmental function or not? Was the effect to usurp the inherently governmental function of another country? Now, turning to intervention. Intervention has two requirements. It has to be an activity that affects the internal or external affairs of a state, and it has to be coercive. Again, both of those issues are factual questions. What was the area, the so-called domaine réservé, that was affected, and was the means by which it was affected coercive or not? Did it take choice away from the state concerned? Nothing to do with how those effects were caused. There is one question, and I'll end here, and that has to do with knowledge. Ryan in his excellent paper asked this question; I do so as well. It's the question of, does intent matter? Does mistake of fact matter? Intent is about the desired consequences of a cyber operation: I want the operation to accomplish this objective. And mistake of fact is about what was believed by the entity conducting the operation. And the problem with autonomy, well, it may be hard to understand how these systems work. 
It may be hard to predict consequences, but I'm not sure that matters with regard to intent, because intent, as noted by the International Law Commission, is only relevant when intent is an element of the internationally wrongful act or, in the case of international criminal law, of the crime. Sovereignty has no intent requirement. So long as you cause the effects, so long as you interfere, you have violated the rule of sovereignty. And so therefore, with sovereignty, intent doesn't matter. With regard to intervention, intent does matter. You must be seeking to coerce. And so there you would ask, at the time the operator launched the cyber operation, what was the intent of the operator that launched the operation? But again, autonomy won't matter. It's the intent of the operator. And then finally we come to mistake of fact. The difference is, it's not about what consequences you desired. A mistake of fact is about what you knew, what you understood at the time of the operation. I'll tell you what my view is. My view is that mistake of fact in general international law, unlike human rights law, unlike humanitarian law, mistake of fact in general international law, such as in the rules of sovereignty and intervention, does not excuse a violation. If you get it wrong, if the system gets it wrong, then you will still have violated either sovereignty or intervention. So again, you know, a broken record here: focus on effects. Effects are what matter with regard to sovereignty. Effects are what matter with regard to intervention. Effects are what matter with regard to the use of force. Effects are the key to armed attack, and as my friend Eric is certain to tell you in just a moment, effects are what matter with respect to international humanitarian law. Great, thank you, Mike. 
Ryan, in your paper one of the sentences said: somewhat surprisingly, the greatest controversy to emerge in relation to international law applicable to cyber operations in the wake of the publication of Tallinn Manual 2.0 concerns sovereignty. Mike certainly laid out his aspect of that. I mean, what did you mean by that particular sentence? I meant the fact that, precisely as the UK Attorney General pointed out in his speech, the UK of all states had questions about whether sovereignty amounted to an actual rule. I did not have that question or concern about that issue before it came up in that particular manner. And I think that many people are in a very similar place with regard to that. So on this, I largely follow Mike's view and agree with his conclusions. The one thing that I would say, though, in relation to intent and knowledge and effects is that I'm not entirely sure this applies to all rules of international law. I mean, sovereignty certainly seems to be a rule which does not require any particular intent or knowledge for a breach of international law to result. The rule against intervention, I'm not so sure. I'm not sure whether it's possible to coerce another state without having the specific intent to coerce. And in the... By the way, it is not. It is not. That's exactly my point. You have to look to the primary rule of international law to determine whether or not there is an intent element. There is in intervention. There is in genocide. There is in other aspects of international criminal law. But you have to look at the primary rule to discern whether that intent element is there. Absolutely. But my point is that the effects are not the complete answer when it comes to, for example, applying the rule concerning intervention. And as I'm sure Eric will discuss in a minute, there will be some rules of international humanitarian law as well which require a particular intent in order for there to be a breach of that rule. 
But my point is that it's not autonomy that is the critical factor. It's the intent of the actor employing the system, when I'm employing an autonomous system, for example, that is going to impede an election. Or, by the way, any kind of cyber operation that impedes a nation's crisis management of a pandemic. It's the intent of the actor who introduced the malware, the autonomous malware, into the system that matters, not the fact that it was an autonomous capability that was used. So I think the point that my co-authors and I were trying to get at in the paper was that if an autonomous capability behaves in an unpredicted manner, the question then is what rule of international law could be breached. And our conclusion was that since intent doesn't really matter there, the effects of the operation of the autonomous system would be sufficient for there to be a breach of the rule of sovereignty. However, because intervention requires a particular form of intent, if an autonomous system behaves in an unexpected manner which has the effect of effectively coercing another state, then that might not breach the prohibition of intervention, because whoever deployed the system didn't have the intent to coerce. Right, exactly right. And it's critically important here that we distinguish intent from mistake of fact. They are different. Intent is about the actor: did the actor intend those consequences? Mistake of fact is about what the actor understood at the time. So let's assume we use an autonomous capability and we are using it against a particular target set, okay? Outside the context of armed conflict, unbeknownst to us, that target we're going after with the autonomous capability is networked in another country to some system there. And so the impact of the operation bleeds over into the other country. Now that's a mistake of fact. Why? 
We intended the consequences that resulted. We intended the consequences that would qualify the operation as an internationally wrongful act, whether it's sovereignty or use of force. But we were mistaken as to the fact that the systems were networked. So this raises a separate and distinct question of whether mistake of fact somehow relieves, whether it's a circumstance precluding wrongfulness that somehow relieves the state of responsibility. And Marko Milanović on EJIL: Talk! has done a great three-part series on this particular issue. And what Marko said is that, yeah, there are some bodies of law where mistake of fact matters. One of them is humanitarian law. So if that happens in the battlespace with an autonomous system and there's a mistake of fact, then in that case, judged by the reasonable commander in the same or similar circumstances, it may relieve the state of responsibility. But the big question for sovereignty and intervention is whether there is a mistake-of-fact circumstance precluding wrongfulness there. My view is that there is not, because the party that ought to bear the risk of mistake of fact is the party that decided to engage in the risky activity. Because remember, the state which unexpectedly suffered the violation of its sovereignty due to the mistake of fact will now have the right to secure reparations under the law of state responsibility. That state should not suffer. It should be the state that decided to employ the autonomous capability. So be careful when we're talking about autonomy, intent and mistake of fact. Different dynamic, different dynamic. Eric, you've been quiet for a while, so I'm gonna cut in to allow you to make your point. So, it's actually kind of a question. I'm interested in Mike and Ryan's view. So I understood most of the last conversation as dealing with breach, not with attribution. How do you see ultra vires acts under the law of state responsibility? 
How do you see that playing out with respect to an autonomous weapon system and applying attribution? So you set up the system, you think it's going to function one way, and it functions a different way. Does that remain attributable, even though it's ultra vires? Do you get to attribution that way? Do you see that argument going that way? I see Ryan looking at the sky for divine guidance. I think we're doing the same thing. Ryan, my sense is that's not an ultra vires act. I'm sorry, go ahead. I wouldn't think so either. I mean, if an armed force deploys a weapon system that malfunctions, then the use of that malfunctioning weapon system isn't ultra vires. Yeah, I would put it into the peacetime context and say the same thing. Ultra vires is where my people have the authority to employ autonomous capabilities, and what they decide to do with those autonomous capabilities is something I told them not to do, that I didn't give them authority to do. Then it's ultra vires, and under the law of state responsibility, if you're an organ of the state, or if it's an article 5 situation, empowered by law and so forth, in those cases the state will not be excused from responsibility. But if it's an article 8 situation involving non-state actors, in that case, the fact that the employment of the autonomous capability was ultra vires will relieve the state of responsibility. Okay, so what I understand both of you to be saying is that ultra vires conduct can only take place from a human decision-making standpoint, not from an autonomous decision-making standpoint. Yeah, and more importantly, coming from the University of Texas, I would correct your pronunciation: it's ultra vires. Okay, sorry. Okay, gentlemen, we actually have a couple of questions. We've got a few on here. 
One, I'm gonna have to turn over to John Sherry towards the end: someone asked, you know, where they can get copies of your papers. So I'll let John or Jeremy answer that at the end. Well, you can send $5 to Mike Schmidt. No, never mind, I'll sell it for three. Mike, this question is specifically for you. Hitoshi Nasu asked whether you could address the obligation of due diligence in employing autonomous weapons or autonomous systems. Hello, Hitoshi. Yeah, hi, Hitoshi. Everyone knows that Hitoshi used to be my colleague at the University of Exeter. He's one of my best friends in the world, so it's good to hear from you, Hitoshi. So, the issue of due diligence and autonomous systems. I'm not sure it makes much of a difference, because remember what the obligation of due diligence is. The obligation of due diligence, in the cyber context, is to put an end to hostile cyber operations from your territory that are affecting the legal rights of another country in a serious manner, where you know about the operation and it is feasible for you to take measures to put an end to it. And by the way, it's not only from your territory, it's through cyber infrastructure on your territory. So I don't know that it really matters if it's autonomous or not. Perhaps if it's autonomous, it may be more difficult for you to put an end to that operation, but if so, then the feasibility requirement would relieve you of the obligation to do so. So I'm not sure autonomy makes much of a difference. Whether the operation involves autonomous capabilities or not, if you know about it, it's affecting the legal rights of another country, like sovereignty, and you can put an end to it, then you have to do that. By the way, that is a requirement that is not preventive in nature. 
So you do not need to take measures to ensure that your territory will not be used for such hostile operations, whether autonomous or not; instead you must know of it, or the operation must be imminently underway, and if you don't act now, harm will befall the other state. So this is actually where I took Hitoshi's question, and obviously, Mike, I know you and I don't believe there's this preventive due diligence, but if there were a preventive due diligence obligation, then there might be an obligation to employ autonomous systems that would prevent such action happening from within your territory. But you and I agree that no such preventive obligation exists. Yeah, I missed that point, and it's a wonderful point, and Eric's correct. I mean, it does raise the question of what about autonomous systems that could terminate ongoing hostile cyber operations, if it was feasible to employ such systems. You take states as you find them. So if a state had the technical and financial wherewithal to employ such a system to respond to an ongoing cyber operation from its territory, then it would be obligated to do so. But certainly, I agree with Eric absolutely: no preventive obligation. Although when I travel around the world and talk to states, and I'm doing that right now for six days with states in the ASEAN region, everyone thinks that there is a preventive obligation. Everyone thinks that there is an obligation to take measures to ensure your territory is not used as a base of operations. That's just not the law, in my view. Okay, Ryan, we have a question specifically for you. It said, you mentioned that some rules of humanitarian law require a specific mens rea. Does this hold true for the question of state responsibility as well, or only in the context of individual criminal responsibility? 
So my observation earlier on was made in the context of individual criminal responsibility, but I would say that there are certain rules of international humanitarian law the breach of which would at least require some degree of knowledge, knowledge about the protected status of a particular object. So there is a certain mens rea element there, but I don't think it's quite the same as in the context of individual responsibility, where generally intent or perhaps recklessness is necessary in order to hold an individual accountable. And I have one final question. You guys can answer this now, or we'll move on to Eric and then answer it at the end. It said, have the panelists observed anything, specifically wording in commentary, literature or legislation, that may have implications for international law coming from countries such as Russia or China that address autonomous systems? I have a couple of thoughts on this. So one is that China and Russia have taken strategically different approaches to the ongoing negotiations around autonomous weapons systems. China has taken a very nuanced and careful approach, where it has outlined a very restrictive definition of autonomous weapons systems coupled with a commitment to enact a ban on the use of such systems, but not the development of such systems. The Russian approach largely has been to stall the discussions happening in the CCW and to use various procedural devices at their disposal to make that conversation more difficult. So I'd be very reluctant to lump China and Russia together in terms of their approaches, but they are certainly engaged with the issue of autonomous weapons systems on the international stage, just in very different ways. I would also note that both Russia and China have very different approaches to the notion of sovereignty. For Russia and China, sovereignty is all about control over what happens on their territory. 
And I think that their approach is overbroad, and I'm a sovereignty guy. I'm very concerned about international human rights law, because autonomous systems could very easily be used to impede the international human right of expression and the international human right of access to information. So I am nervous about tying their notion of sovereignty to the existence of autonomous capabilities that could shut down those human rights for people who are on their territory or over whom they otherwise exercise effective control. Yeah, and I'll just echo Mike's point. Monitoring can become censorship, and I think that's a real issue. One other point: there has been in the past, and this is not Russia or China, this is the United States, there has been some legislation out there that would allow private companies and private organizations to respond to a nefarious international act. They would have to get some prior approval from the FBI. But this to me is very worrisome. I'm happy that this legislation has gone nowhere, but it is a very worrisome trend. The attempt is to say to private organizations: look, we can't respond quickly enough, we're only reactive, so we're gonna give you a chance to hack back, to set up honeypots, to do these things. It would give you authority, not as a law enforcement agency but as a company, to reach out across international borders and take cyber actions. And that to me is a very worrisome development, if that ever actually became law, because I think it would drive us to a place that would be very difficult as a matter of national decisions and national security. Yeah, and don't forget the state. If the state did that, then the state would have responsibility under the law of state responsibility, having empowered these entities, operating by law, to perform what is a government function, a government activity. 
Gentlemen, I would like to turn to Eric's paper. He's waited patiently, and we're down to our last 12 or 13 minutes. So Eric, quickly, I'm gonna move over to you. You took us into the jus in bello application of autonomy with weapons systems, and you focused on precautions under Article 57. So in the interest of time, I'm gonna stop there and let you talk about your issues with autonomous systems and precautions. Okay, I'm not sure actually how patiently I waited, because there were several times earlier where I jumped in shouting my refrains. But let me just hit a couple of the high points and then maybe end with some issues on weapons review. I think, again, this idea of technology crunching time and space is really what's driving this discussion. And I agree with Mike, and I think I agree with Ryan as well. I mean, I think we're all of the same view, which is that autonomy does not really change these decisions. Ultimately, this comes down to commanders employing weapons systems and having to go through the same determinations of whether that weapon system would in fact perform in a way that is lawful. And ultimately, as I analyze Article 57, particularly Article 57(2) and (3), my conclusion is that there is nothing inherent in any of those provisions that would prevent autonomous systems. Now, as Mike mentioned and Ryan introduced, it may be that states will decide on moral or ethical grounds to limit the use of autonomy, but there's nothing inherent in IHL that would prevent the use of these autonomous systems, even if they're selecting and engaging without human involvement. There are lots of problems with this idea of meaningful human control. Again, we could have a long discussion about how we define that and how we practically apply that term. One of the issues that I think is a really interesting one, which again, I think it was Mike who raised it: what's the standard? 
I mean, if we're saying that autonomous weapon systems can't do this, can't do this as well as what? We don't, and Mike Meyer, you know this as well as anybody, very few, in fact, I don't know of any militaries that do a good job of quantitatively assessing how well their soldiers, sailors, airmen and Marines actually apply the law of armed conflict in any given situation. We do have prosecutions, where we prosecute some people for very blatant violations, but we don't do a great job of saying, in every given situation, how did you do? And my reason for raising that is: well, then what's the standard? Are we gonna say that autonomous weapons have to do it better than humans? So if humans get it right 50% of the time, autonomous weapon systems have to get it right 55% of the time? I think most people assume autonomous weapons systems have to get it right all of the time. And in my view, that's not the right standard. And again, it's not even the right question. The question is: those who plan or decide upon attacks are the ones who have the obligation under Article 57 to make sure that the system, whether autonomous or not, applies the law correctly. And let me just make one last point, and Mike Meyer, I hope you'll jump in on this, and that is the issue of weapons review. Where this discussion really needs to get to is that we need to continue to have, and enlarge, the number of states who conduct robust weapons reviews. And that robust weapons review, and continuing weapons review, particularly for systems that learn on the battlefield, is what's going to make sure that autonomous systems, and eventually even systems that employ artificial intelligence, will maintain their ability to abide by the law of armed conflict. Thank you. Ryan, we'll first start with you. I mean, Eric raised the point, you know, compared to what? 
You know, we have often heard, and we heard in the CCW and other places, do they have to be better than humans? And in my view, the answer is no: the system has to be used in compliance with the law of armed conflict. That's the standard. You know, better than humans? I don't think that's a high bar to get over. Certainly, I think that's the legal standard. One of the other sort of political aspects that Eric began to touch on is, because these are new systems, is there some sort of expectation that they'll be perfect, because of the political fallout? What do you see with respect to jus in bello use and the standard that Eric set out, and where do you feel this line should be drawn? I actually find that it's quite dangerous to start comparing weapons systems to human beings. What I think we probably should be comparing is a human being operating a manually operated weapon with a human being operating an autonomous weapon system. And the question is: which of those human beings can achieve effects that are more compliant with the law of armed conflict? And so that doesn't really raise the question of whether the machine is better than the human being. It's a question of which of these two human beings can better comply with the law. I mean, we can raise questions about whether there's any obligation to use a particular type of weapon system if that is feasible as per Article 57. But I think, Mike, I would agree with your assessment that there is no existing standard that would require autonomous weapons systems to be perfect. I mean, in a weapons review context, if a novel weapon was introduced, it would inevitably be compared to existing weapon systems which have already been deemed lawful. 
And if it then turns out that this weapon system with some autonomous functionality actually performs better than the weapons systems that we already have, then that weapon system is likely to pass the weapons review. There will be questions about how the weapons review ought to be conducted in circumstances where you have a system with artificial intelligence, the exact operation of which might be difficult for the reviewer to understand. And that raises the question of an appropriate regime of testing. But I see no basis for claiming that an autonomous system needs to be perfect. Mike? So I'll finish with four points. First, I'll touch on the point that Ryan mentioned with regard to weapons review. I think there's a lot of confusion about weapons review. In a weapons review, you do not need to compare a weapon to other weapons. You need to ask whether or not the weapon can be employed consistent with international humanitarian law, the law of armed conflict, in a foreseeable situation. The issue of whether I use this or that weapon is an Article 57 issue. In other words, is the use of this autonomous or non-autonomous system likely to achieve comparable military objectives while minimizing harm to civilians? So let's understand when it is we start comparing weapons. Second, I want everyone to understand that if we field these weapons systems, there will be circumstances where we will be required, as a matter of law, not choice, to employ these systems. The same is true with cyber. If an autonomous system, and I believe it's foreseeable that this will be the case, if an autonomous system is available to the warfighter in the battle space, and the use of that autonomous system is likely to achieve the objective while minimizing harm to civilians, then the failure to use that system is a violation of international humanitarian law. 
Third, I think that one of the benefits of age, and I think I'm the oldest here, is that you're likely to have been there and done that before. I have been here and done this before. This whole debate reminds me of the debate we had in the early 1990s over BVR and OTH weapons. Many of you won't even know what that means: beyond visual range and over the horizon weapons. I remember back in the first Gulf War, we were talking about weapons that had an engagement range of 50 nautical miles. And on virtually everything people are talking about today, we had exactly the same debates then. You are engaging a target from a distance. You cannot see the target, and you are relying 100% on the systems in your aircraft and the information you had at the time those systems were programmed. So we've been through these debates before, and today nobody talks about BVR or OTH engagements. They're all talking about autonomy. I would ask everyone to put down the blogosphere for just a moment, look back at the scholarship and the work that was done in the early 1990s, and you'll see how these issues were dealt with before. And finally, I would make a plea for international humanitarian law. All of this talk that it's really hard to apply IHL to autonomous systems, all this talk of meaningful human engagement, runs the risk of weakening the commitment to IHL. Every time someone says, I don't know if IHL works, it weakens IHL. I'm a true believer. I've been around for a long time. I'm a true believer that IHL, if reasonably interpreted in context, will usually yield the right result. All of you folks who are complaining that autonomy and IHL are unsuited are underselling the flexibility of international humanitarian law. And without realizing it, you're actually operating in a counter-humanitarian manner. We should be insisting that IHL applies. We should be interpreting IHL in the autonomy context instead of spending time desperately searching for ways that we can claim that IHL doesn't work. 
Okay, gentlemen, we have three minutes left. So Ryan and Eric, I will take Mike's last comments as his closing argument. I'll give each of you one minute to make the one point you want everyone to take away. If they took away one point, what is it? I fully agree with Mike's last point. I'm not going to comment on his other points. International humanitarian law, I think, is being undersold in this context. The law has managed to deal with technological changes time and time again, and I think it will do so in the future. The law is based on certain fundamental principles, and we may sometimes need to go back to those basic principles to understand how specific rules ought to be applied in particular circumstances. But I think that IHL does provide a fairly comprehensive regulatory system for armed conflict, and it provides possible answers to the use of autonomous systems. But sometimes we may have some interpretive challenges, the same way as we've had interpretive challenges in relation to the use of cyber capabilities or space capabilities or whatever. Eric? I totally agree with Ryan and with Mike on that issue. We ought not to be talking about autonomous weapon systems as the most recent instance of the sky falling. These are just weapon systems, like all the other weapon systems we've employed. It may take some looking at how we might apply the rules to a particular set of facts. But the point is that IHL already has the rules in place, and IHL is completely capable of resolving the vast, vast majority of questions that we will face. And we shouldn't be waiting for the sky to fall. Gentlemen, I want to thank each of our panelists, Ryan, Eric and Mike, for the great discussion today. Thanks for letting me moderate this. I would certainly like to have 30 more minutes so I could get onto the weapons review piece, which is of course near and dear to my heart. 
But Jeremy, I'm gonna turn it back over to you, and again, we had the one question on where they can find the articles. But back to you. Thank you, Mike, and thank you to all the panelists as well. We really appreciate you taking the time out of your schedules to share your expertise with us today. And the Stockton Center thanks all our attendees across the globe for taking time out of your schedules to join us. As mentioned earlier, the session will be available for viewing on the YouTube channel. Feel free to share the link with anyone you think might be interested. Earlier in the discussion I posted in the chat links to where you can find each of the panelists' articles. Dr. LaVoia's article is on the NATO Cooperative Cyber Defense Center of Excellence website, and Professors Jensen's and Schmitt's recent articles on autonomy are posted in International Law Studies here at the Naval War College. Our next event is Disruptive Technologies and International Law, which will occur December 7th through 9th. It's co-sponsored by the Royal Air Force's Directorate of Legal Services, the Lieber Institute for Law and Land Warfare at West Point, the U.S. Army's National Security Law Division, the United States Air Force Academy and Yale Law School's Paul Tsai China Center. That event will feature an impressive lineup of speakers discussing the intersection of international law and multi-domain warfare. If you did not receive an email invitation to that event or have not seen the details in the Stockton Center's Twitter feed, please email StocktonCenter, all one word, at usnwc.edu and we can get those details to you. Again, if you're not able to join us live for that event, it too will be recorded and posted to the Naval War College's YouTube channel. Thank you all again for joining us for this Stockton Series webinar. Goodbye.