Ladies and gentlemen, good morning to those of you joining us from North America, and good afternoon and evening to those of you joining us around the world. I'm Squadron Leader Kieran Tinkler, currently a military professor at the Stockton Center for International Law and a legal officer in the Royal Air Force. Before we begin, I just need to make a few announcements. The first is to clarify that the entirety of the three-day conference will be recorded and made publicly available on YouTube after the event. The second is to encourage all of you to ask questions of each of our panels and speakers. You can access the Q&A box as part of the webinar, and we very much hope that you will ask questions as we go along and feed those through to the panels. Also in that box I will post the link to our website, where you can access the program, agenda, and flyer for this event. I'll now hand over to the Charles H. Stockton Professor of International Maritime Law and Chair of the Stockton Center, Professor James Kraska.

Thank you, Kieran, and thank you, everybody, for participating in this conference. It's my honor to be able to introduce the president of the Naval War College, Rear Admiral Shoshana Chatfield. She is a leader at the pinnacle of the military education enterprise in the U.S. Armed Forces. She is also a Navy pilot and operator and was a senior leader as Commander of Naval Forces Marianas, where she served as U.S. Indo-Pacific Command's leader for all of the U.S. forces serving in Guam and throughout the Mariana Islands, a forward-deployed center for U.S. forces in the Western Pacific. She is educated in international relations; she also studied at the Harvard Kennedy School of Government and earned a doctorate degree. She has served as a professor at the U.S. Air Force Academy, so she has been a quintessential warrior-scholar and the logical choice to lead the Naval War College. The Naval War College educates future leaders from some 60 different countries, and we also have a burgeoning research enterprise, which includes the world's preeminent war gaming center and a number of other prominent research institutions that focus on cyber operations, Russia maritime studies, and the China Maritime Studies Institute; the Stockton Center is within that research enterprise as well. It's my pleasure to turn the floor over to Rear Admiral Chatfield. Admiral?

Thank you so much, Professor Kraska. Good morning to all. I would like to acknowledge some of the many groups and people who are here today, beginning with the Yale Law School Paul Tsai China Center, the U.S. Army's National Security Law Division, the West Point Lieber Institute for Law and Land Warfare, and of course my old alma mater for teaching, the United States Air Force Academy. I'd also like to offer special thanks to Air Vice-Marshal Tamara Jennings and the Royal Air Force Legal Branch for cosponsoring the conference and for sending Squadron Leader Kieran Tinkler to our Stockton Center as a professor of international law. Professor Tinkler is the director of this conference, has put together a fantastic schedule of events, and has done a terrific job within this Naval War College center. And I'd also like to give special thanks to Lieutenant General Charles Pede, 40th Judge Advocate General of the United States Army, for the support that his National Security Law Division has provided to this event.
Lastly, I'd like to thank the chair and the entire team of the Stockton Center for International Law. Professor James Kraska and his team have really done a lot for the advancement of research and study in this area. So welcome. Welcome to the Disruptive Technologies and International Law Conference. I'm so glad that you could join us virtually. You know, we'd love to host conferences in person, but this year, like everyone else, we have sought other means to bring together the minds of those interested in these particular topics, so we can continue our work despite the challenges that we've all faced throughout this COVID-19 crisis. Hopefully this conference will provide a bit of a respite and a forum for people within this community of practice to share ideas with each other, and also to socialize across this platform, to keep in touch, and to advance your networking and relationships. This year, you'll be discussing how technologies are challenging our force structure and military operations across a range of emerging capabilities, and the legal implications: how artificial intelligence affects the use of force, how sovereignty and neutrality will apply in cyberspace, what the law of armed conflict is going to look like in outer space, and what the navigational rights and belligerent rights of autonomous surface ships and submarines are. International law governs the use of these capabilities and is already affecting how we operate, transforming our strategy, our policy, and our operations. More changes are certainly to come, and through dialogues like this one, you can help to craft these policies and rules. I'm here today to kick off this very important conference because of the dedicated work that the United States Naval War College has done to understand and answer these questions of international law. The more we prepare and understand each other, the more effective our forces will be when they operate. Our role here at the United States Naval War College is to inform today's decision makers and to educate tomorrow's leaders. In today's dynamic security environment, numerical and technological superiority are no longer enough. Our Chairman of the Joint Chiefs of Staff has asserted that we will also need to continue to outthink our adversaries. At the Naval War College, we provide the environment to expand the intellectual capacity of naval, joint, interagency, and international leaders to achieve cognitive advantage. Our objective here in Newport and around the globe is to deliver excellence in education, research, and outreach and to build enduring relationships with our alumni and partners. The Naval War College is committed not only to conducting research, simulations, and academic courses in the field of international law; we also want to be a leading voice within DOD and among international militaries, working to improve all of our abilities to better understand these legal issues. Some of the contributions you will hear during this conference will be published in Volume 97 of International Law Studies, the Naval War College's "Blue Book," which is the oldest publication at the Naval War College and the oldest journal of international law in the United States. I'm confident that this week's program will engender a greater understanding of the confluence of technology and the law, providing practical advice to decision makers and shaping and influencing the scholarly debate.
My team and I are glad to have this opportunity to further develop relationships throughout the international law community and to amplify conversations between operators and lawyers. My challenge to all of you today is to open up your minds and think outside of your own areas of specialty. Listen and think critically about these important topics, and provide feedback to one another to make our discussions as meaningful as possible as we drive toward exploring and answering these legal questions. Thank you so much again for your attendance today and for being such a wonderful part of this productive and successful discussion and conference. I'd like to turn the floor back over to Professor Kraska. Thank you again for attending.

Thank you very much, Admiral, for that thoughtful introduction to the conference. It's now a pleasure for us to welcome our first keynote speaker, a real thought leader in the field of technology and military operations: Paul Scharre. He is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He's the author of the book Army of None: Autonomous Weapons and the Future of War, which won the 2019 Colby Award and was one of Bill Gates's top five books for 2018. Dr. Scharre worked in the Office of the Secretary of Defense in the Bush and Obama administrations. He earned a PhD in war studies from King's College London and is a former Army Ranger with multiple tours in Iraq and Afghanistan. So Dr. Scharre, welcome. The floor is yours.

Thank you. Thanks for having me here. I'm very excited that all these folks could join us today for this discussion, and I'm really grateful that we are able to connect and have these kinds of conversations even in the current environment. I'm going to try to keep my opening remarks pretty brief to leave plenty of time for Q&A. I just want to give an overview of the technology, in terms of autonomy and autonomy in weapons in particular, and then some of the legal debates, to frame some of the legal issues surrounding this technology. To start with, I think it's worth pointing out that the use of automated functionality in weapons dates back at least to World War II, depending on what types of autonomous functionality you're looking at. Certainly in World War II we see the origins of the very first precision-guided weapons: homing weapons, anti-ship bombs, and torpedoes that have sensors on them, simple sensors, but that are able to sense an enemy target and then maneuver to hit that target. Now, these are of course weapons that you wouldn't really consider very intelligent by today's standards. They're not using machine learning; the term AI hadn't even been invented yet. But it's worth pointing out that they have some measure of autonomy, and we've seen this technology advance considerably over the seventy-plus years since then. We've seen a tremendous amount of development, not just in the sophistication of the role of autonomy in weapons, but also in its proliferation globally and across different domains of warfare.
So we now have widespread use of a variety of different types of autonomous functionality in weapons, whether it's things like homing munitions that have the ability to sense their target and then maneuver to hit that target, while the human is still choosing the target or group of targets being attacked, or things like automated defensive systems, like modes on the Aegis combat system; at least 30 nations have similar types of functionality in use around the globe today. So that's a snapshot of where we are in this moment. We see more and more countries incorporating autonomous functionality in weapons going forward, and I would compare the trend to much of what we're seeing in self-driving cars, where individual tasks are being taken over by automation incrementally. If you buy a top-of-the-line automobile today, it's going to have features like intelligent cruise control, automatic braking, and self-parking. We're still a ways away from a fully autonomous car that would have no steering wheel, like the car that Google has built, and that can be used in all road environments and all environmental conditions. One of the things that's different, though, is that for cars there's widespread agreement about what the goal is: that in the long run it would be ideal to build such a car, where the human is no longer involved in the actual physical driving task. Humans are not particularly good drivers. There are 30,000 or more people killed on the roads in the United States alone every year, and many more killed globally. So if we could automate that task, and do so with a high degree of accuracy and build more effective cars, we could save potentially tens of thousands of lives in the United States alone. That's at least the goal. Now, people will debate where the technology is today, when we're going to get there, and over what timeline with cars. What's different with weapons is that there's a lot of debate about where we ought to even be going. To simplify things a great deal, you can put people into maybe three broad camps about what to do about this technology today. There are certainly arguments coming from humanitarian disarmament groups calling for a preemptive, legally binding treaty that would ban "autonomous weapons." I'm going to put that in quotes because it's not universally agreed upon what that term means. It has been a topic of a lot of discussion, of DOD policy, and of international debates, at least since 2014 at the United Nations, but there are no universally agreed-upon definitions. There are definitions in DOD policy, but those are not necessarily the same definitions that the UK uses, so even allied governments don't use the same terminology. There's another argument, or school of thought, that says: look, we have a set of rules governing behavior and what's right and wrong in warfare; it's called the law of war, or the law of armed conflict, and we should trust those rules and allow them to regulate emerging technologies, whether autonomous weapons, artificial intelligence, or other technologies, and simply focus on adherence to these pre-existing rules rather than layering ad hoc regulations on specific technologies, which could in fact warp or distort compliance with the laws of war. So that's another school of thought. And then there's a third, middle camp, if you will, that says maybe we need some kind of regulation on these technologies.
Now, I'll just point out that these differing schools of thought are in some ways actually less about autonomous weapons and more about how people view the laws of war themselves and their ability to adapt to new technology. I suspect that for many of the people in these different camps, if you said, look, I have mystery weapon X, I have a mystery weapon in this box and I'm not going to tell you what it is: should we ban it right out of the gate, should we just let the law of armed conflict play out and focus on adherence to it, or should we maybe regulate it in some way? People's intuitions about the law of war and its ability to adapt to a new technology might inform where they're coming from, independent of what that technology is. Specifically when it comes to autonomous weapons, I think there are also two other dimensions to how we might think about this. One is what is possible based on the state of technology today, and I think that's really important; we want to be grounded in the technology, what it's doing, and how it's evolving. So one way to approach this would be to ask what the technology is and is not capable of doing today, and then craft some kind of regulations that might govern state behavior. These could be codified into law, or they could be non-legally-binding principles or best practices. There's certainly value in doing that and grounding these discussions in the state of the technology, but there's another perspective that I think is also really valuable, which is to ask: if we had all the technology in the world, what tasks in warfare require uniquely human judgment, if any, and why? That's a very different perspective, one that says, instead of looking down at our feet and asking how we navigate around the obstacles right in front of us today, let's look out at the horizon: where do we want to be going? I think that's particularly important for technologies like autonomy and artificial intelligence, because they are moving forward so rapidly and there is just a great amount of uncertainty about where the technology is going. It's possible that it peters out and plateaus and we see another AI winter; that is one argument people make, and there are people who believe that's where we're headed, in fact, that the whole current state of affairs in machine learning is misguided. That's an argument we hear from some folks in the AI field. There are others on the opposite end of the spectrum who will argue that in a few decades we'll see human-level intelligence. My personal assessment is that both of those are probably a little misguided, but that's a wide range of possible technological futures. So I think there is value in asking the question: if we had all the technology that we could imagine building, what role do we want humans to play in warfare, and why? It's an important question to be asking and focusing our attention on. We can get into this in the Q&A, but the DOD Law of War Manual actually talks about this, about the idea that the law of war imposes obligations on persons, not machines, to comply with law of war rules on attacks, like proportionality or precautions in attack. I think it's actually an interesting position to take that it is a human obligation to comply with the law of war, and that has
potential implications for how we think about this technology and the role of humans. So let me go ahead and stop there so we have lots of time for Q&A, and I'm happy to take the discussion wherever folks would like to go.

Thank you. Thank you, Paul, so much for those great remarks. We'll certainly have some questions rolling in, but first, if I could: you've talked about some misrepresentations of these machines and systems, or maybe some confusion about their use. To you, what is the biggest confusion or misunderstanding about the technology and its potential uses?

I think one of the biggest problems here is the terminology. "Confusing" is perhaps the wrong word; it's that it's vague and open to interpretation. If I just say "autonomous weapon," that conjures a whole wide range of visions in people's minds. Some people are envisioning the Terminator; other people are envisioning a Roomba with a gun on it, something very simple. Both of those are probably bad ideas, but for very different kinds of reasons, and I think that's a real challenge in this space. When we talk about autonomy or artificial intelligence, some people envision very advanced types of systems that have human-like intelligence, things coming out of science fiction; other people envision the kinds of things that exist today, which have many, many limitations, even the most advanced machine learning systems. Things like GPT-3, one of the most advanced language generation systems, can produce very realistic-seeming text, artificial text written by an AI that will easily fool humans, at least at first glance. You might read it and say, I think maybe a human wrote this, but it lacks purpose and it lacks intentionality; it certainly lacks any kind of understanding, at a deeper level, of what the text means. And so I think that's just a real obstacle when we talk about this: sometimes it's simply not clear what people are talking about when they use these terms. I've certainly sat in discussions at the United Nations where diplomats are discussing this and people are talking about different things: one person is making an argument using this terminology to talk about things that exist today, and the other person is using the same terminology to talk about things that may not exist for 30 or 40 years, if ever. And that's, I think, a real hindrance as we try to grapple with what we should do about this technology and some of the choices in front of us.

Speaking of terminology, I think Hope has a good question, kind of to baseline us a bit here. She asks: can you talk a little bit about human-in-the-loop versus human-on-the-loop communication with autonomous systems?

Yes. A real foundational concept when we think about autonomy is the set of terms describing the human as in the loop, on the loop, or out of the loop. "In the loop" is used to refer to what you might call semi-autonomous systems, where a human is in the loop; that means the system is performing a task and then it will stop, pause its operation, and wait for a human to take a positive action to do something. A very simple example that we're all familiar with might be automated updates on your computer, where it pops up a window and asks you to click okay
to do the automated update, to download something and restart your computer. There might be situations where that's valuable; for example, if you were in the middle of a webinar with hundreds of people, this would be a bad time for my computer to decide to do that on its own. So that's an example where there's a human in the loop, and the human maybe doesn't have to do very much, but has to at least take some kind of positive action to authorize the system to continue. A different paradigm for human involvement with the machine would be a human-on-the-loop system, where the human is in a supervisory role: the system is going to operate entirely on its own, and the human has the capacity to intervene but doesn't have to. A good example of this that we're all familiar with would be a thermostat in your household. The thermostat doesn't ask you permission to turn on the heat or the air conditioning; that would be really burdensome. Instead, you set the desired temperature and the thermostat functions on its own, and then if you're not happy, you can get up and make modifications or adjustments. And then a fully autonomous system, where the human is out of the loop, is one where, at least for some time duration, the human is unable to intervene. That might be when you're out of your house and can't intervene in the thermostat's functioning. One of the interesting elements of how the technology is evolving is that we're both building more intelligent systems, so that we might be willing to hand over more control to machines, and also building more connectivity to them. If you buy a newfangled thermostat these days, you can get one that's online and has a mobile app, and you can remotely monitor it when you're out of your house, if you were inclined to do so. That's also the case for weapon systems: we have increasingly smart weapon systems that are also network-enabled, which gives human operators more ability to be connected remotely and to exercise remote control over the system than might have been the case in prior years or decades. So those are, I think, some of the interesting trends for how we think about human control, trends that are enabling both smarter systems and, in some ways, greater human control, even remotely.
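The in-the-loop, on-the-loop, and out-of-the-loop distinction can be sketched in a few lines of Python. This is a minimal illustration of the three supervision paradigms described above; the mode names and the `task` and `human` interfaces are hypothetical placeholders, not any real system's API.

```python
from enum import Enum

class Mode(Enum):
    IN_THE_LOOP = 1      # semi-autonomous: pause and wait for approval
    ON_THE_LOOP = 2      # supervised: act, but a human may intervene
    OUT_OF_THE_LOOP = 3  # fully autonomous: no human can intervene

def engage(task, mode, human):
    """Run one task under a given human-supervision paradigm."""
    if mode is Mode.IN_THE_LOOP:
        # System halts until the human takes a positive action,
        # like clicking "OK" on a software-update prompt.
        if not human.approves(task):
            return "held: awaiting human authorization"
        return task.run()
    if mode is Mode.ON_THE_LOOP:
        # System proceeds on its own; the human watches and may
        # interrupt, like adjusting a thermostat after the fact.
        result = task.run()
        if human.vetoes(task):
            task.abort()
            return "aborted by supervising human"
        return result
    # Out of the loop: for some duration, nobody can intervene.
    return task.run()
```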
So you bring up a great point, and something that John Chan asked; he's our first question today, so we appreciate John jumping into the void and being the very first questioner of the event. Before I ask it, I would encourage you: in the question-and-answer function you can see a little thumbs-up. If you like a question, you can click the thumbs-up and help bump that question to the top. We now have ten questions in total, plus several others coming in from the chat, and we probably won't be able to get to all of the questions asked in every section, so please do so if you see a question you like and want it to be asked. With that said, John, you talked about human communication; John's fear is that there will be communication from outside, a situation where one of these autonomous systems could be hacked. Can you talk about that a little bit?

Yeah, and let me just add, these are great questions, so I think it's very helpful that people are voting on them. This is a great question, and I think it cuts to one of the important differences with autonomous systems, which is not necessarily that they are more likely to be hacked. Anything that's got a computer chip in it, that's got software, is vulnerable to being manipulated and vulnerable to cyber attack, even systems that are offline; there are all sorts of clever ways to find ways to get in, often using humans as the weak link. What's really different is the effects if you were able to get into an autonomous system, and particularly the scale. This is an important difference. If the concern is something like an autonomous vehicle, it's not just that you could hack a car and then use it to carry out a terrorist attack and kill someone; we've unfortunately seen automobiles used in vehicle-ramming terrorist attacks, both in the U.S. and overseas, that have killed people. The real concern is that you could do so at scale: using the same cyber vulnerability, you could hack an entire fleet of cars that share that vulnerability and take control of them. It's already been demonstrated that cars today can be hacked remotely and their driving features disabled; that kind of demonstration is several years old. What more autonomy enables is the ability to actually take control of them, and then you could use them to conduct a mass vehicle-ramming attack, or, in a perhaps less deadly but still disruptive way, do things like disrupt traffic or cause protests. In the military context, the really frightening thing is losing control over not a vehicle but a fleet of vehicles, and it opens the door to a uniquely problematic form of enemy counterattack, like a mass fratricide attack. We've certainly struggled in Afghanistan with insider threats, where a friendly turns on friendly forces and attacks them; you could imagine, through cyber mechanisms, someone taking control of a fleet of autonomous systems and turning them on friendlies, or even an accident causing that. We've seen accidents with autonomous systems, like the Patriot fratricides in 2003, isolated cases, but one could imagine situations where that happens at scale because of flaws in the system and how it interprets friendly forces. So there is a whole range of novel problems that autonomy introduces, problems we certainly want to think about and be cautious about as we think about employing the technology.
So, in regard to employing the technology: Professor Ashley Deeks and Professor Laura Dickinson each ask a different but similar question, revolving around the arms race among the U.S., China, and Russia with these technologies. Ashley asks whether you have thoughts on whether there's a way to slow this arms race, and then Laura asks a related question: what's the status of the arms race among these three superpowers? Could you address that?

Yeah. So first of all, I don't think there is a quote-unquote arms race underway. I think that makes great, sensational media headlines, but I don't think it's accurate. If we think about an arms race, what does that term mean? In the security studies literature it refers to a condition where states are spending significantly above their normal rates of growth in military spending in order to compete with each other. We have historical examples of arms races, like the naval arms race among great powers in the early 20th century, and certainly the nuclear arms race during the Cold War. Neither of those is a good historical analogue for what's happening today with artificial intelligence or autonomous weapons. In fact, when you look at the amount of money being spent on the technology, it's quite small, despite claims by DOD senior leaders that AI is their number one priority. Former Secretary Esper had said that AI was his number one priority, but when you look at the spending, it's not, and it's not even close. And in autonomous weapons, we can certainly see that countries are investing in more autonomy in their weapons, but it's not obvious that there is a full-out pursuit of fully autonomous weapons; I think it's more a situation of incremental advancement over time. Now, having said that, we certainly don't hear the same types of restraint coming out of countries like China and Russia that you hear out of countries like the United States or the UK or other democratic nations, which are going to be more concerned with law and with conformance with ethics and humanitarian behavior on the battlefield, and that is certainly concerning. I don't think it's a foregone conclusion that we are definitely headed toward a world of fully autonomous weapons where the gloves come off. I think it's worth exploring whether there's an opportunity for some kind of mutual restraint in this area, an agreement among great powers. I think the current UN process is deeply flawed. It's valuable to do, and I'm glad the U.S. is participating, but frankly, it's deeply flawed, because the conversation is distorted by humanitarian groups pushing for a ban, and then you get countries like the U.S. and Russia pushing back against it. I think it's the wrong conversation to be having. I would much rather have conversations among the U.S., China, and Russia: look, is there any agreement we can make among us about limits we want to establish on the technology? That's not going to be for legal and ethical reasons; those countries don't see those things the same way, and they're not having the debates we're having here. But they do care about control over their own weapons, and they do care about not having their systems malfunction on the battlefield. They might have different thresholds for concerns about things like fratricide than we do here in the U.S., but I think that's the worthy conversation to have with other great powers: whether there might be any mutually agreed-upon rules about how we might use the technology going forward.
Yeah, and I think the one thing that the three states have agreed upon in discussions is that there has to be some level of accountability, but where that accountability lies, and at what level, is a great debate among the states and, obviously, within civil society and academia. Both Linda and General Dunlap ask questions revolving around that. If the manufacturer of a Roomba puts a Roomba in motion and that Roomba destroys your house while you're out shopping, you can turn back to that manufacturer for some kind of accountability. Linda asks where that obligation lies in the armed conflict realm, and then General Dunlap further asks what level of expertise our commanders and operators need in relation to that accountability. Could you address that?

Yeah, those are great questions, and I think they're the kinds of questions we don't have clear answers to. In fact, the U.S. military itself will be grappling with them in the decades ahead as we field more sophisticated automated systems, ones that incorporate machine learning. I will make a couple of general points about how we should think about this. One is that the DOD's Law of War Manual takes the position that it's a human obligation to comply with the laws of war; the law of war doesn't put obligations on machines, which are inanimate objects. Rules like proportionality or precautions in attack apply to persons. My position, and I would argue the extension of that, is that it implies some degree of human engagement with the use of force: not that humans need to be maneuvering the missile down to the target, which hasn't been the case since the invention of the catapult, but rather that humans need to have some awareness of the attack, the context, and the weapon system itself, to understand what's happening on the battlefield and what's likely to happen, so that a human can make an informed decision about whether this is in fact a lawful attack. We don't want a situation where, after the fact, we're asking the commander, why did you choose to launch this weapon, and the commander says, well, I don't know, the machine said it was fine so I just pushed the button; I'm just here to push the button, I don't have any responsibility. That's not consistent with U.S. military professional ethics and how we think about the role of military professionals in the conduct of our armed forces. And so that's where it gets to some really interesting questions about military professional ethics and what we think the role of the military commander is. I think there's an important asymmetry here between those developing the code, as one of the questions gets to, and the military commander in the field who's employing the weapon, and that asymmetry exists in a couple of ways. One is that there's a uniqueness to the military profession as a profession concerned with the exercise of the use of force, and there are certain responsibilities that come with that which the civilian programmer doesn't have. But also, there's some value in being able to actually look at particular situations and apply human judgment to them. How specific you need to get, I think, is an open
question, but if you look at something like the Aegis Combat System today, it's highly customized by the commanders in the field based on the particular operating conditions they're in. It's certainly not pre-established and then locked in place ahead of time, and I think that's probably the right paradigm for these systems. We're not going to want systems that are fully baked in, where all of the parameters are set by the designer; we're actually going to want things that are understandable to operators, and that operators can then customize based on the particular situations they're in, because ultimately it's those operators' responsibility what happens on the battlefield.

Thank you for that answer. Of course, it's a very difficult question, and as you said earlier, there's no easy answer; all the questions being asked today are certainly not softballs, and they're being asked for a reason. I'd also encourage you: we now have 16 questions, so please continue to hit the upvote if you want to see these questions asked. We'll try to get to as many as possible; we have a little less than 10 minutes left in this session. And so you can see the question from Tom Choisky, who would like to hear your thoughts on the social, cultural, or international asymmetries that influence the discussion on autonomous weapons systems. We've talked about Russia, China, and the U.S., but what do you see in that regard, maybe from civil society, academia, or other states?

Yeah, it's a great question, and I think it's a really important one when we think about how the technology is going to be adopted going forward, because we certainly know that culture and societal preferences make a big difference in how militaries adopt technology as it proliferates globally. I'll tell you some easy answers that I think are wrong. An easy answer I often hear is: these other countries, China, Russia, they don't share our values, they don't share our ethics, they're just going to fully automate things, and so, ethics schmethics, we need to throw this stuff out the window and do so as well. I think that's wrong for two reasons. One: I think it's accurate that they don't share the same concern about human rights and compliance with the laws of war. When I talk to, say, Chinese scholars, I don't hear the same emphasis on legal concerns about law of war compliance that I hear from U.S. scholars. But they do care about control over their military forces. So there are other concerns beyond legal and ethical ones that are also relevant here, about retaining effective control of your military forces on the battlefield, and about political control, that they are also going to be concerned about. Those concerns come to the forefront for them, while they're actually less dominant in the discussions here, because legal and ethical issues take the fore in these conversations in the U.S. context. But these other things are important. Also, it's not obvious to me that we should disregard our own ethical positions or view them as a constraint. People often talk about ethics and values as something handcuffing us in the U.S. context; I don't think that's the right way to think about it. I think a better way to think
about it is that we want to act in compliance with our own values, and that informs how we approach the technology and our actions on the battlefield. It shouldn't be just a limitation or constraint, but rather the frame through which we approach our own actions. Another thing I often hear is that authoritarian regimes will be willing to automate because they don't trust their people. There's some element of truth to this: there's certainly much more decentralized authority given to subordinates in the U.S. system compared to, say, China or Russia. But it's not obvious that the corollary, that they're therefore going to trust autonomy, holds, and I certainly don't hear that, at least when I talk to Chinese scholars. They don't say, oh, we're going to trust autonomous systems; in fact, I hear a lot of the same skepticism about trusting these systems that I hear from U.S. defense analysts and U.S. legal experts. So I think it's a good question. It's not clear how the technology will unfold in different military cultures, but I think we really want to stay on top of that and look at how others are using it. You can see some differences now: certainly Russia is much more willing to arm ground robots than the U.S. Army is, and to deploy them, including into contested areas like Syria. That's one important difference. When I look at Chinese writings on AI, there seems to be a greater emphasis on command and control, which I think is an interesting finding. But we'll have to see, as the technology evolves, how military culture informs how different countries adopt it.

We have about three minutes left, and I'd like to get to two questions if possible. For the first one, I'll try to keep it really short. Okay, thank you. It's from Professor Lubell. Paul, you've talked a lot about the confusion and misunderstanding of terms and of the technologies. Is one of the problems the fact that we keep talking about autonomy rather than AI? Can you address the autonomy-versus-AI debate?

Right, okay. So these terms mean different things, and both of the terms themselves are a little bit contested; even in the technical field there are multiple different definitions. But in general, when we talk about AI, we're talking about the intelligence of the system: a machine and its capacity to come up with the right course of action in a given situation, the right moves to make, the right answer, in order to accomplish a particular goal. When we talk about autonomy, we're talking about the freedom of a system to perform that task. To make an analogy with humans that might clarify the difference, think about Garry Kasparov playing chess, famously defeated by Deep Blue, the AI system. Kasparov is very intelligent at chess, but we could also imagine situations where Kasparov is sitting on the sidelines watching a chess match. He's still maybe the smartest player in the room and knows the right moves to make, but he isn't permitted to sit at the table and make moves; he doesn't have the freedom, or autonomy, to do so, because we've told him: you're not playing today. So those are different dimensions of machines. We could envision very intelligent machines that don't have much autonomy, and they're used as decision aids for humans. That could be one application for certain types of AI.
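The Kasparov analogy can be put in code: a hedged sketch of the same engine wired up two ways, once as a decision aid (intelligence without autonomy) and once as an autonomous player (intelligence plus the freedom to act). The `best_move` engine and the `board` interface are hypothetical placeholders, not a real chess library.

```python
def best_move(board):
    """A stand-in for an intelligent engine: picks the best legal move."""
    return max(board.legal_moves(), key=board.score)

def decision_aid(board):
    # Intelligent but not autonomous: the system only recommends;
    # a human decides whether to actually play the move.
    return f"Recommended move: {best_move(board)}"

def autonomous_player(board):
    # Same intelligence, plus the freedom to act on its own.
    board.play(best_move(board))
```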
So thank you for that really brief response; very efficient, well done. Michelle asked a very specific question about asymmetric warfare. We hear the argument a lot against these types of technologies, the fear of the killer robot, the Terminator, but specifically to Michelle's question: what do you expect asymmetric warfare against a largely robotic force to look like?

It's an interesting question. Let me just say, in full disclosure, I don't like the term "asymmetric warfare," because it seems like a silly term to me: why would you fight symmetrically? You're always going to try to find your enemy's weaknesses and attack them there. So I'm actually not a fan of the pattern in U.S. defense discourse of coming up with these labels of irregular warfare and hybrid warfare and asymmetric warfare; I think it's actually all war, and maybe our concept of war is overly narrow in the first place and needs broadening. But one of the things that's interesting about robotic systems, like any technology, is that they have countermeasures and they have limitations. Just as moving from horses to tanks made you dependent upon fuel depots and gasoline to fuel your tanks, which might be a way to go after them and undermine that technology, robotic systems are going to have their own dependencies. It could be communication links, if they rely on them; if they don't, there might be other ways of hacking them to manipulate them. If they're using AI-based or machine-learning-based models of perception, there are all sorts of ways to fool those kinds of systems and manipulate them. We don't really have time to get into it, but the whole world of machine-learning-based perception, identifying objects, automatic target recognition, has a whole suite of countermeasures to spoof those AI-based perception methods and trick and manipulate them, as well as ways to get in even earlier, in the learning process, to do things like poison the data. So, like any technology, there are going to be countermeasures, and it's going to shift warfare in a new direction over time as we see more competition surrounding robotic systems, autonomy, and the ways to defeat those.
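To illustrate the kind of perception spoofing alluded to here, below is a minimal sketch of one well-known, generic textbook countermeasure, the fast gradient sign method, which perturbs an image just enough to flip a classifier's answer. It assumes a PyTorch image classifier; it is not specific to any fielded targeting system.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, eps=0.03):
    """Fast Gradient Sign Method: nudge every pixel a tiny amount in
    whichever direction most increases the classifier's loss, which
    often flips its prediction while looking unchanged to a human."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Data poisoning is the upstream analogue: instead of perturbing inputs at inference time, an adversary corrupts the training set so the model learns the wrong decision boundary in the first place.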
Thank you so much, Paul, for the questions and answers. As you can see, there are 16 questions left that we weren't able to get to, which indicates how interesting your presentation was.

It's a fun topic, yeah, absolutely.

And I would tell the audience: please don't sit on your questions, because we're going to be talking about all of the issues Paul just addressed throughout the rest of the day, so you can ask these questions again to our other panelists, and they can answer them from very specific legal or other standpoints. We're about to go into our 10-minute break here, but I want to thank Paul so much for joining us today for a fascinating presentation, and we look forward to hearing more from you in the future. Thank you very much.

Thank you. Thanks for having me. Take care.

Okay, we're going to go on, actually, an eight-minute break; we'll start back up at noon, and we look forward to seeing you then. Thank you very much again for all those questions.

Welcome back, everyone. It is my very great pleasure to now introduce our next speaker. Michael Tracy is a physics graduate from Columbia University and a co-founder of Anabase Technologies, where he leads image processing development. Michael, I'm really excited to hear what you've got to say, and thank you for sharing the work you've been doing recently. I'll hand over to you now.

Yeah, thanks. Thanks, I really appreciate the kind remarks. Good morning to everyone. I'm really excited to share with you today, to talk about ship identification and artificial intelligence: basically, how can we use artificial intelligence and satellite imagery to find and track any ship in the world? So, as you can see here, just in the past few months we've had quite a few cases where the issue of ship tracking has come up. Just last week we had Roger Stone, truthfully or not, claiming unidentified North Korean vessels off the coast of Maine were influencing our elections. We had a piracy attack off the coast of Angola. We have ships from Chinese fishing fleets going to the Galápagos and illegally fishing there. And we have Venezuela and Iran chronically going under the radar, with people not knowing where their oil tankers are. So why do we still have so many problems tracking ships in 2020? Look at all the options we have: helicopter patrols, satellite imagery, radar, signals intelligence, AIS. How can these big floating tubs of metal just disappear nowadays? When I'm in CVS, I can reliably figure out what annoying jingle they're playing at the pharmacy with Shazam, but we still can't always say where a ship is. So when we look at the ship tracking options that exist now, we can categorize them by how they collect information, like you see here on the slide. There's manual ship tracking, where we use helicopter and ship crews to monitor ship activity, and we also have imagery analysts who hand-count the number of ships in satellite imagery photos, to figure out how big a fleet might be or how many ships are at a port right now. Both of these provide highly accurate information; their information quality is great, but they're not real-time enough, and it's a very tedious and long process for both. These don't scale well to the global maritime picture you want for naval operations. Then we have our automatic techniques, but they don't provide the same high data quality that direct observation does with helicopter patrols and satellite imagery. Radar is limited in the types of boats it can detect, and it's not really global either. Signals intelligence keys off the electronic equipment on the boats, and some boats, like some fishing boats, just don't have electronic equipment; these radio and electronic signals can also be encrypted and spoofed. And then there's the premier technique that's been adopted as a standard after being accepted by the international community, which is AIS, the automatic identification system. But before we talk about its problems, we should try to understand what AIS is. Like I said, AIS stands for automatic identification system, and basically what AIS is is GPS for boats: a transponder, a little black box on the ship, signals out its location along with the ship's identity for everyone with access to an AIS data feed to see.
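As a rough sketch of what one AIS broadcast carries, here is a simplified record in Python; the field names are illustrative assumptions, not the full AIS message specification.

```python
from dataclasses import dataclass

@dataclass
class AISReport:
    """Simplified AIS position report: 'GPS for boats.'
    A shipboard transponder broadcasts roughly this, in the clear,
    for anyone with a receiver or data feed to see."""
    mmsi: int       # transponder identity, the ship's "license plate"
    name: str       # vessel name
    lat: float      # latitude, decimal degrees
    lon: float      # longitude, decimal degrees
    timestamp: str  # when the position was broadcast (UTC)
```

Every weakness discussed next follows from this design: the report exists only while the transponder is switched on, and nothing in the feed verifies that the broadcast position is true.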
Like you see here in the bottom right, here's a popular AIS provider, MarineTraffic, where you can see all the boats in the world that are giving off these locations. But that's the key: it's only the ones that are giving off these locations, and some boats are able to turn this off. So what do I mean by this? AIS has three chief concerns: limited detection, unreliable detection, and voluntary detection. When I talk about limited detection: as you can see on the map here, it can get very crowded in certain areas, especially near the Singapore Strait or the Strait of Dover, and all the signals coming from there can cause interference, which can make us lose the location signals of nearby ships. Then we have unreliable detection, where these signals can be easily spoofed or hacked. For example, what you see here was a demonstration in which some hackers were able to manipulate an AIS signal coming from a ship and trace out a very unnatural, manipulated vessel route that spelled out the hacker term "PWN," which basically means you've been compromised. So AIS can be very insecure, and this is definitely scary insofar as the maritime industry in general, and shipping electronics in particular, can be pretty insecure; maritime cyber attacks have increased by over 900 percent in the last three years. It's definitely something to be very concerned about that where we think our boats are can be manipulated, by you or by someone on the boat itself, to appear somewhere else. There have been cases where people have been very flagrant and showed a ship's location in the middle of the Sahara Desert, where there's obviously no water, or in the middle of Antarctica, where there's ice. And then we have the worst problem: voluntary detection. AIS requires ships to voluntarily disclose their position for everyone else to see. That means a ship can just stop showing its location at the flip of a switch, on or off, and this makes it nearly impossible to track vessels engaged in illegal activity, like piracy, illegal fishing, or Iranian tankers evading sanctions, because they'll have their AIS off. It also makes it extremely difficult to know where boats are when you're navigating dangerous waters like the Gulf of Aden, because they keep their AIS hidden from pirates. So it's very obvious that AIS is not likely to be tracking ships when they're violating international law and committing crimes. What does this mean? Only the manual methods are able to provide the data quality we need to know where ships are. But how do we get around the time constraints of collecting data from satellite images? We need to automate this process somehow, and if there's one thing AI is extraordinarily good at, despite all the clamor of the past decade, it is doing highly repetitive tasks, like counting the number of ships in a satellite image. That's why we do facial recognition for ships. Just as facial recognition algorithms are able to identify and track people in highly complex crowds and environments, we can do the same for ships in highly complex satellite images, like we did here in the image on the right. And then, as with facial recognition, we're able to build a database for these ships. For example, like a normal driver's license database, you have a person's face, a photo of them, where they reside, and some physical characteristics describing them. With our algorithm, we're able to capture a photo of a ship and where it is in real time, and then we can update its current location without having to rely on it reporting that location, like you see here with the latitude and longitude of the Merlion M. We can also estimate certain physical characteristics, for example the length and width, from the number of pixels the ship spans. Like you can see here, the detected length and width is what we estimated, and the actual length and width is what's reported on the AIS, so we're actually able to make a comparison. This is just an example of the scorecard, or driver's license, that we're developing for each ship, which will allow us to build a database of ships.
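A hedged sketch of what one row of that ship "driver's license" database might look like; the field names and the dimension-estimation rule (pixel extent times ground resolution) are illustrative assumptions, not the speaker's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ShipRecord:
    """One 'driver's license' entry in a ship-fingerprint database."""
    name: Optional[str]  # from a cross-referenced AIS broadcast, if known
    chip_png: bytes      # the ship's 'mug shot': its satellite image chip
    lat: float           # detected position, taken from the image itself
    lon: float           #   (not from AIS, so it cannot be switched off)
    observed_utc: str    # when the scene was captured
    est_length_m: float  # estimated from the image, as below
    est_width_m: float

def estimate_length_m(pixel_extent: int, resolution_m: float = 10.0) -> float:
    # A ship's length is roughly the number of pixels its hull spans
    # times the sensor's ground resolution (10 m in the small-boat
    # example later in the talk). Comparing this estimate against the
    # length the vessel reports over AIS gives a consistency check.
    return pixel_extent * resolution_m
```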
Now why is this only now possible? There are two major trends that allow it. Number one is high-frequency satellite imagery. As you can see in the chart in the bottom left-hand corner, the cost of launching payloads into space has dramatically decreased; that's a logarithmic axis there, and you still get a nearly linear trend, so as you can tell, it's dropping exponentially. Lots of satellites are now going up, so we're really able to get global coverage, especially over the oceans, where it's just not realistic to have big coast guard patrols going out; it's like finding a needle in a haystack. With a high number of satellites in the sky, that becomes a thing of the past. And number two is that the algorithms themselves have become really good at actually counting stuff. As I was saying earlier, just five years ago, as you can see in this news article, Microsoft and Google announced that their AI algorithms were able to beat humans at image recognition, so it's a given now that machines are very good at recognizing things. This is also partly because of how cheap processing power has become, which makes it cost-effective to do in real time now too. So the overall process of building this database and these insights is shown here, and I'm going to walk you through it: we'll go through ingesting imagery, then detecting ships, then how we identify and track them, and finally how we can use this to generate insights to help us monitor and enforce maritime international law. So here, on the ingest imagery slide: AI algorithms live or perish by data. That's how they grow; that's how they become effective. If you have bad data, you're going to have a bad algorithm; it's that simple. So it's very critical from the outset that you have good data. What's the data we're using here to identify and track these ships? We're using open source data, with optical satellite imagery and synthetic aperture radar (SAR) imagery as our main imagery sources, and then we use other sources to try to narrow down the physically unique characteristics of these ships. Optical satellite imagery is fairly common; that's what you see on Google Earth. But one of the problems with it, as you can see here in this photo of Singapore, is clouds. Clouds can block any ships that might be underneath them.
There could be ships underneath there that we just want to see, even if we were able to preprocess the clouds away, while SAR is able to see through clouds, so it's really helpful in actually seeing boats that might be underneath them. With SAR you don't necessarily have to be concerned about whether you have light, daytime conditions, or clouds. The downside is that SAR imagery isn't collected as frequently right now as optical imagery. So after selecting our data and putting it together as a data set, the next step is to train our algorithm to detect the ships. The type of algorithm we're using is called a neural net, and neural nets are based on an idea like how you teach a child: they learn based on the information you expose them to, through labels. If you were to teach your five-year-old kid something, you would give him some books, and what you teach him and how you help him grow depends on what type of books you expose him to. It's the same with data: the types of data we expose the algorithm to, and how we break it down, will influence how well it's able to pick up certain things. So that big satellite image you saw here, we're not just feeding it that straight; that would be like trying to teach a little child to read by handing him the dictionary. That's just not going to work; you've got to give him bite-sized pieces. So what we do is divide the image you see there into little tiny squares, 45 pixels by 45 pixels, and then we geolocate these image squares: we record the location of where each square is, so that when we detect the stuff that's in there, like the ships, we know where it is. That's how we're able to have the location of these ships without needing them to report it. And then we train it to detect ship, land, water, and cloud; those are the four types of labels we're basically working with here. So if our algorithm saw a square like this, it should recognize: this is a cloud, this is not a ship. If it saw this: this is water, not a ship. Then there's land: this is land. And when you get here, it should recognize all the ships, and it will be able to count how many ships there are in this photo too: one, two, three, four, five, six, seven ships here. And from that we can understand the actual vessel activity: how many vessels there are, and where they are in general.
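A minimal sketch of that tile-classify-geolocate pipeline, assuming a georeferenced GeoTIFF scene, a small PyTorch classifier, and rasterio for the pixel-to-coordinate step; the architecture and names here are illustrative, not Anabase's actual system.

```python
import numpy as np
import rasterio
import torch
import torch.nn as nn

TILE = 45
CLASSES = ["ship", "land", "water", "cloud"]

class TileClassifier(nn.Module):
    """Small CNN that labels a 45x45 image chip as ship/land/water/cloud."""
    def __init__(self, bands=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 11 * 11, len(CLASSES)),  # 45 -> 22 -> 11 after pooling
        )
    def forward(self, x):
        return self.net(x)

def detect_ships(geotiff_path, model):
    """Tile a georeferenced scene, classify each tile, geolocate ship tiles."""
    ships = []
    with rasterio.open(geotiff_path) as src:
        scene = src.read()                 # (bands, height, width)
        _, h, w = scene.shape
        for row in range(0, h - TILE + 1, TILE):
            for col in range(0, w - TILE + 1, TILE):
                chip = scene[:, row:row + TILE, col:col + TILE]
                # Assumes 8-bit imagery; scale to [0, 1] for the model.
                x = torch.from_numpy(chip.astype(np.float32) / 255.0).unsqueeze(0)
                label = CLASSES[model(x).argmax(dim=1).item()]
                if label == "ship":
                    # Convert the tile's center pixel to coordinates via the
                    # scene's affine transform: the "geolocation" step (for a
                    # geographic CRS, x is longitude and y is latitude).
                    lon, lat = rasterio.transform.xy(
                        src.transform, row + TILE // 2, col + TILE // 2)
                    ships.append((lat, lon))
    return ships
```

Counting vessels in a scene is then just `len(detect_ships(path, model))`, and the AIS cross-referencing described next would consume the (lat, lon) detections this returns.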
So once we have detected a ship and located it from satellite imagery, we have automated what imagery analysts used to do by hand when they pored over satellite images to figure out, say, how many boats there are near the port of Tartus in Syria right now. That is great, but how can we get more useful information? It is one thing to know that this is a ship; what is the name of the ship, where might it be going, and how can we backtrack its location? To do this, we take advantage of ships broadcasting their AIS data. Usually when we detect a ship, we will also find an AIS signal being broadcast from that position, so we cross-reference the two, and that basically fingerprints the ship. Then, later on, whenever we see a ship in another satellite image, say we saw a ship in Singapore and then saw one near Shanghai a week later, we will know it is probably the same ship even if it is not broadcasting its AIS signal, and we know its name because we have a previous photo of it that we can match against. It is kind of like a mug shot. That is how we fingerprint the ships.

At the end, we are able to take a very complex image like this one. Looking at it, you can imagine there are roads, there are airplanes, there are a bunch of clouds and different water colors; it is a very complex image, and you would have to zoom in and out to find all the different ships. We can put it into our algorithm, and it spits back out right away how many ships there are. So we can say that on May 5th of 2021, when this satellite photo was taken, there were 223 ships in the scene according to our trained model. And what do these blue and red dots mean? Like I said, we take the moment the photo was captured and check it against the AIS signals that came in within a five-minute window. The blue dots show ships whose AIS signal was being broadcast; the red ones just were not broadcasting at that moment, because AIS signals are not broadcast continuously.
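(Editorial aside: a hedged sketch of the cross-referencing just described, pairing each detection with any AIS broadcast received within a five-minute window of the image time and within a small distance of the detected position. The one-kilometre radius is an illustrative assumption; only the five-minute window comes from the talk.)

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def match_ais(detection, ais_messages, window_s=300, max_km=1.0):
    """detection = (lat, lon, epoch_seconds); ais_messages = iterable of
    dicts with lat, lon, timestamp, and identity fields such as name."""
    lat, lon, t = detection
    nearby = [m for m in ais_messages
              if abs(m["timestamp"] - t) <= window_s
              and haversine_km(lat, lon, m["lat"], m["lon"]) <= max_km]
    # A match fingerprints the detection (a blue dot); no match means the
    # ship was seen but silent at that moment (a red dot).
    return min(nearby, default=None,
               key=lambda m: haversine_km(lat, lon, m["lat"], m["lon"]))
```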
For comparison, here on the left we have what I believe is an oil tanker of some type, actually a cargo vessel, and you can see we have the latitude and longitude, the name, and the time it was at this exact location. Then on the right we have what looks like a small watercraft. We are able to identify it partly by its small size (this is just 10-meter-resolution imagery) and partly because we look at the wake from the movement of the boat. We have the location for it, but we do not have a name for it. Still, we are at least able to say that there was small-boat activity happening in this area, which can become very useful if you are trying to monitor drug trafficking near your borders, because those people are obviously trying to be discreet and might be using small cigarette boats or something like that.

So the big takeaway from that last section about facial recognition for ships is that, because of the disruptive technology of AI, we have a never-before-seen capability to figure out where certain fleets are and how they are coalescing in key regions, like around the South China Sea. We are able to figure out how busy certain ports are in real time without relying on the AIS data generated there, because sometimes ships just languish in port and never have their AIS on. We just saw that with the Beirut explosion: there was a ship there that no one really talked about, just sitting there for seven years, obviously not showing its AIS signal, and the next thing you know there is an accident. We can also get a real-time idea of the commodity movement going on, because one thing we are expanding into is satellite imagery of the port areas themselves, for example measuring how material like iron ore is piled up in Rotterdam, and counting how many ships are moving in and out, to get a really accurate, granular look at commodity movements in an area, like you see here in a Western Australian port.

So, like I was saying, the big takeaway from the last section about facial recognition for ships is that because of the disruptive technology of AI we can basically capture any ship in the world, and just as DNA opened up a whole new set of possibilities in criminal law for catching criminals and breaking open cold cases, this opens up whole new possibilities for maritime international law cases and for enforcing maritime international law. Not only are we able to see in real time where ships are, but we can analyze satellite images of events in years past, which becomes really important if, for example, you want to analyze past oil spills to figure out who is really guilty of them. Take maritime boundaries: there is no line painted out there in the big Pacific Ocean to show whether a ship is trespassing or not. It simply does not exist, and it is usually pretty hard to patrol those borders. There is obviously the South China Sea example, but one case where our algorithm is really helpful is the Arctic Ocean. Why? For one, it is another intensely contested area, with a battle for resources between the likes of Russia, Canada, the USA, Norway, and so on. What is also really interesting is that there are a lot of navigation issues up there: because of how satellites orbit the Earth, it can be pretty difficult to actually use GPS that far north, so having a way to pinpoint ships through satellite imagery is immensely helpful. The South China Sea is a great place to see the fleet build-ups, and with EEZs you can monitor in real time who is violating whose EEZ; you have actual data to support these claims now, not just eyewitness reports. Then there are environmental issues: like I mentioned with China and IUU fishing, these big trawlers will often just turn off their AIS so no one knows where they are, go into other people's EEZs, and take fish illegally.
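(Editorial aside: the EEZ-monitoring idea above reduces, at its simplest, to a point-in-polygon test on each AIS-silent detection. A sketch using shapely; the toy rectangle is a made-up placeholder, since real EEZ boundaries would come from an authoritative data set.)

```python
from shapely.geometry import Point, Polygon

# Placeholder zone as (lon, lat) pairs; illustrative only, not a real EEZ.
eez = Polygon([(110.0, 5.0), (118.0, 5.0), (118.0, 12.0), (110.0, 12.0)])

def in_eez(lat, lon, zone=eez):
    # shapely points are (x=lon, y=lat)
    return zone.contains(Point(lon, lat))

# An AIS-silent detection inside someone else's EEZ is exactly the kind
# of event the speaker suggests flagging with hard data rather than
# eyewitness reports.
print(in_eez(8.5, 114.2))  # True for the toy polygon above
```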
Then there are oil spills. For the largest oil spill off the coast of Brazil, no one still actually knows which ship did it, because the ship had its AIS off; there are suspicions about who it is, but nothing verifiable yet. For fuel emission regulations, like the IMO's plan this year to reduce the sulfur content of fuel oil, we are able to monitor compliance ship by ship from the satellites themselves. And with piracy, before we had big fleet patrols and general reports on how active the waters were; now we can actually forecast how many small Somali or Gulf of Guinea boats there are, to get an idea of how risky the waters might actually be in the next 24 hours or so. If you would like to talk more with me about this, feel free to reach out at mst214@columbia.edu; my co-developer Mitzel's email address is there as well. I will turn it over now to questions.

Thanks, Michael, lots of exciting opportunities there as you have gone through it. There are a couple of good questions that have come in already, and the first one I am going to go with is from John Chan: how would your algorithm deal with a bad actor changing the look of the ship to hide the nature of its character, as it were? Sorry, I did not understand the question; can you repeat it? Yes: if someone were looking to outfox your algorithm and they changed the look of the ship or vessel, how would it overcome that? I see, so if they did a paint job or something like that. Partly, what we are doing is using the geometry of the ships, which, as far as I have seen with ship disguises, they are not really able to change. I have seen changes to the hull's appearance, but we are not relying on that; we have geometric characteristics. That probably brings up the question of sister ships. We are able to sort that out basically by location and by predicting vessel routes, kind of like how a bank determines whether a transaction is fraudulent based on the location you are wiring your money from or how quickly the activity happens. There is simply a physical limit on where a boat can be; it is pretty easy to figure out that a boat can only have traveled so far in a given amount of time. This is regression-analysis-type stuff.
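(Editorial aside: the physical-feasibility point in the sister-ships answer can be sketched as a simple speed-and-distance check; the 30-knot cap is an illustrative assumption, not the speakers' actual threshold.)

```python
from math import radians, sin, cos, asin, sqrt

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3440.1 * asin(sqrt(a))  # mean Earth radius in nm

def could_be_same_ship(sighting_a, sighting_b, max_speed_kn=30.0):
    """Each sighting is (lat, lon, epoch_seconds). Two sightings can
    belong to the same hull only if the distance between them is
    reachable at a plausible speed in the elapsed time."""
    dist_nm = haversine_nm(sighting_a[0], sighting_a[1],
                           sighting_b[0], sighting_b[1])
    hours = abs(sighting_b[2] - sighting_a[2]) / 3600.0
    return dist_nm <= max_speed_kn * hours

# Singapore to Shanghai is on the order of 2,200 nm; a week at 30 kn
# allows roughly 5,000 nm, so the "same ship a week later" match from
# earlier in the talk is physically plausible.
```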
Thanks. And then the next question: what is the possibility of applying what you have talked about to underwater vessels, if there is scope for that, and to what kind of depths, if you are able to get into the details? That is a great question. The surprising thing is that, while this has not been a primary area of research for us, it is actually somewhat possible to detect underwater vessels if they are not too deep. I will give one example, because this much is publicly known: from the thermal characteristics of the vessel, it being a massive heat engine, you can detect it from satellite imagery. There are other physical characteristics we have found, but those are proprietary.

Great, OK. So ships manipulate their AIS transmissions to mask their location, and they are simulated as stateless vessels. Would you be able to have the granularity to identify what state a vessel belongs to, or is it just a case of knowing where it has come from and where we think it is going? So, if I understand the question correctly, you are asking whether we could identify the nationality of a vessel, like what flag it is registered under, as opposed to just where it is physically located in the world. Something like that, yes. OK. The thing about shipping is that it is notorious for flags being switched around pretty easily, so we cannot necessarily directly say that a given vessel is, for example, a Chinese vessel. We can probably infer that by following and tracking it over time and seeing how frequently it visits certain ports, but we cannot necessarily say from the AIS signal, or absent one, from the satellite image alone, that it belongs to this or that country.

Great. Another question: is this technology or algorithm able to transfer, or easy to transfer, to, for example, finding people stranded at sea, or are you relying on satellite imagery of a completely different kind of data set? That is a great question. We have not worked toward that exactly, but with the current resolution we are working at, around 10 meters, though we work with others, we do not think it is feasible; with one-meter, three-meter, or five-meter resolution satellite imagery, we believe it could be possible to detect people at sea, or at least to locate where a crash or a sunken vessel is, to narrow the search area within the critical time it takes to find a person like that.

Thanks. I am going to ask the question that Peter has posed in the Q&A: the maritime industry has resisted autonomy and maritime regulation toward a uniform AI strategy; is that a problem for the implementation of your approach? A great observation. The good part of our approach is that it does not rely on them; it is non-voluntary. We do not rely on people reporting their locations or on the sensors on these boats giving up their positions; we independently detect them from the skies, from the satellites themselves.

OK, and the next one, if I may, is from Alvaro, who asks: how does the information about a ship obtained from satellite images compare with the same information obtained from other sources? Does that make sense? Sorry, I do not exactly understand; how does it compare with other satellite image sources? I think the core of the question is how this system differentiates itself from other methods out there for analyzing satellite imagery. OK, I see what he is asking. Two key things. First, ours is really a data fusion source: we are using a bunch of different data sources, not just optical satellite imagery but also SAR imagery and other satellite sensors, to track and detect ships and to develop new fingerprinting markers. The other thing is that from the ground up we have tried to build it completely using AI and satellite imagery alone. If you look at companies like Airbus, the closest thing they have to this is a help desk: if you wanted to find a lost vessel at sea, you would call them, and they would look through the satellite images over a selected area with some of their algorithms and some human analysis. Ours is, from the ground up, an autonomous AI approach to doing this.
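(Editorial aside: the data-fusion point can be sketched as a simple deduplicating merge of detections from different sensors; the proximity and time thresholds are illustrative assumptions.)

```python
def close(a, b, max_deg=0.01, window_s=600):
    # Roughly 0.01 degrees is about 1 km at the equator: a crude but
    # serviceable proximity test for deduplication purposes.
    return (abs(a["lat"] - b["lat"]) <= max_deg
            and abs(a["lon"] - b["lon"]) <= max_deg
            and abs(a["timestamp"] - b["timestamp"]) <= window_s)

def fuse(optical_dets, sar_dets):
    """Merge optical and SAR detections (dicts with lat, lon, timestamp,
    source) into one list, keeping a single record per apparent vessel."""
    fused = list(optical_dets)
    for s in sar_dets:
        if not any(close(s, o) for o in fused):
            fused.append(s)
    return fused
```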
Great. So I am going to ask my own question. In terms of using satellite imagery, you said it was all open source. Have you explored avenues for getting sources from the proliferation of satellite companies going up into space with electro-optical sensors and synthetic aperture radar? Presumably you are quite excited about the scope of additional data you will get from that. Yes, exactly; that is what we are currently exploring with Planet and Maxar and those types of companies, who have really led the whole satellite revolution that has happened in the past decade. We are definitely trying to make use of that right now.

Right, so just having a quick flip through the questions that are outstanding, one of them is: what about the possibility of tracking fast boats and drug-related traffic; can you speak to that? So, speedy boats, right? Drug cartels and that type of thing, yes. Like I was showing earlier, here is what looks like a small motorcraft; as long as it is captured in a satellite image, we are able to detect it. The possible downside would be a boat that speeds across a small channel before we get a satellite photo in time, but satellites are going up so frequently now, and imaging revisit rates keep getting higher and higher, that this is just not going to be as much of a problem in the future. Especially near coastal areas, where satellite imaging is usually pretty frequent, and where these cigarette boats are usually transiting fairly short distances, image frequency is not as big a problem as some people might imagine.
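(Editorial aside: the fast-boat answer is at bottom an exposure-time-versus-revisit-rate calculation. A crude back-of-envelope sketch; the numbers are illustrative, not the speakers' figures.)

```python
def capture_probability(transit_nm, speed_kn, revisit_hours):
    """Crude estimate: the fraction of a revisit interval during which
    the boat is exposed, capped at 1.0 (a pass is guaranteed to fall
    within the transit window)."""
    transit_hours = transit_nm / speed_kn
    return min(1.0, transit_hours / revisit_hours)

# A 40 nm run at 40 kn is a one-hour exposure. With 12-hour revisits the
# odds of imaging it are poor; with hourly revisits, capture is certain.
print(capture_probability(40, 40, 12))  # ~0.083
print(capture_probability(40, 40, 1))   # 1.0
```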
Thanks. Jeff Biller at the US Air Force Academy asks: as this technology advances, could this approach potentially be applied to aircraft? If I understand the question correctly, one reading is using imagery provided from aircraft, like aerial photography from airplanes. That is definitely something we would love to do, because it is simply more data. One thing we are working on right now is handling different image sizes and image resolutions; we are trying to use as much data as possible, because with any big-data algorithm, the more data you get, as long as you sort through it properly, the better your algorithm is going to be, so aerial photography of that sort would definitely be very helpful. Now I am thinking about the second possible reading: detecting aircraft, say at airfields. It could definitely be applied to that. The problem is that a common way to hide aircraft, as anyone who has looked at reconnaissance photos knows, is simply to put them in a hangar so they cannot be detected; we are not able right now to see through a hangar and detect the aircraft inside. But it is definitely possible: if you want to figure out how big an adversary's air force is, as long as the aircraft are stored in the open, you can automate that process.

Great, and I will close with one last question, which I think is worth emphasizing: is the output from your algorithm and your company's work going to be available open source, and how are you dealing with the product that you produce? We are looking at multiple avenues right now for sharing this type of information. At the moment we are focusing on projects with particular people who are interested in this type of data, but we would really like to have at least an open-source version, just to show, for certain port areas, how often boats are appearing there. We are currently working on that. OK, Michael, I am going to call it there. All right, cool, thanks. Yes, thank you very much for your time and for presenting on what is a really interesting topic, and one that is going to be even more prevalent in the future. Thank you to all of those who have joined us so far. We are going to break for lunch now, or whatever meal is appropriate in your part of the world, and we will be back at 1310 Eastern Time, that is, 10 minutes past 1 p.m. Eastern. Thanks.

Well, thank you, all of you, for joining us this morning for the first session; we appreciate that, and we look forward to a really exciting session this afternoon, or this evening, wherever you may be. We heard Paul Charret and Michael Tracy talk about some advances in technologies and how they can impact armed conflict on land, in the air, and at sea, and we are going to hear from four separate panels this afternoon that will address some of those issues specifically: autonomous vessels, AI and the law of armed conflict, the potential question of accountability, and of course Futures Command and technology on the battlefield. Those four panels are sponsored and manned by some of our great partners and colleagues here at the War College: Navy Code 10, the Army's National Security Law Division, the Paul Tsai China Center at Yale Law School, and of course the Lieber Institute at the United States Military Academy at West Point. But first we are going to talk about unmanned vessels, and for that we turn to our Code 10 representatives. I would like to introduce our moderator, Margaret Materna. Margaret is the deputy director of the National Security Law Division, Code 10, with the Office of the Judge Advocate General of the Navy. Margaret, take it away.

Good afternoon, and thanks very much, Lieutenant Colonel Cherry, for that introduction. As mentioned, this panel will focus on autonomy in the maritime context, specifically on unmanned or autonomous vessels. Autonomous vessels are currently envisioned for both government and commercial use, in a variety of combat, surveillance, law enforcement, and general support roles, but they were not really contemplated by international law. Their use raises some issues common to all systems that employ autonomy, especially if they are designed to exercise belligerent rights; but whether armed or not, they also raise questions unique to the maritime domain, such as how they will operate in compliance with the navigational rules of the road. And the technology is advancing very rapidly: just last month, one of the US Navy's Overlord unmanned surface vessels conducted a first-ever transit of the Panama Canal, sailing from Alabama to California, which is over 4,700 nautical miles; 97 percent of this travel was completed in autonomous mode. So we are moving quickly, but there are still numerous legal questions left to be addressed, some of which are currently being considered by the IMO, and we will hear a little bit more about that.
To start us off, our first speaker will be Lieutenant Commander Joel Coito, who currently serves as a judge advocate in the Office of Maritime and International Law at US Coast Guard Headquarters in Washington. Lieutenant Commander Coito, over to you for your pre-recorded remarks.

Good afternoon, friends and colleagues. I would like to start by thanking Professor James Kraska and the Stockton Center for inviting me to be here today as part of this distinguished panel. My remarks on autonomous vessels today will cover three main topics. First, I will talk about the Coast Guard's international work on autonomous vessels at the International Maritime Organization. Next, I will shift to a domestic perspective, including the Coast Guard's engagement with the maritime sector, as well as the public, regarding autonomous vessels. Finally, I will talk about the impact of autonomous vessels across three Coast Guard mission areas. I would also quickly note that my comments today do not necessarily reflect the official policy or position of the US Coast Guard, the Department of Homeland Security, or the US government.

To my first topic, an international perspective on the Coast Guard's autonomous vessel work. The Coast Guard is actively engaged in vessel autonomy issues, both as a regulator of the maritime industry and as a user of the technology itself. Cyber-enabled systems allow Coast Guard operators to better accomplish their missions and allow industry to innovate, but at the same time they raise new concerns, of which the Coast Guard is acutely aware, particularly in the cyber realm. The Coast Guard's regulatory approach to vessel autonomy is necessarily informed by the discussions and decisions taken at the International Maritime Organization. In recent years, the IMO has embarked upon a regulatory scoping exercise for the use of maritime autonomous surface ships, or MASS. The scoping exercise seeks to determine how safe, secure, and environmentally responsible autonomous vessel operations might be addressed in IMO instruments. The Coast Guard leads US delegations to the IMO and has been an active participant in this work. The framework for the regulatory scoping exercise anticipates a spectrum of autonomy across four different levels: in degree one, a ship with automated processes and decision support; in degree two, a remotely controlled ship with seafarers on board; in degree three, a ship that is also remotely controlled but with no seafarers on board; and finally, in degree four, a fully autonomous ship. The four degrees of autonomy underscore the complexity of the multi-tier analysis required to determine if and to what extent the existing suite of IMO conventions might preclude MASS operations, allow them with certain amendments or clarifications, or simply be inapplicable to autonomous vessel operations. As an illustrative example, the International Convention on Standards of Training, Certification and Watchkeeping for Seafarers, or the STCW Convention, defines the master of a vessel as the person having command of a ship. In degrees one and two, with seafarers on board, it is much easier to conclude that there is indeed a master on board. However, in a degree three remote-controlled scenario with no seafarers on board, one must address the thorny question of whether a remote operator, perhaps thousands of miles from the ship that he or she controls, can fairly be considered the master of that ship. Similarly, for a fully autonomous vessel in degree four, commentators have queried whether a vessel programmer, or a machine-learning vessel itself, might be considered the master of the vessel.
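(Editorial aside: for readers who like a structured view, the four degrees of autonomy just described can be rendered as a simple enumeration. This is an informal sketch for reference only, not an official IMO encoding.)

```python
from enum import Enum

class MassDegree(Enum):
    ONE = "Automated processes and decision support; seafarers on board"
    TWO = "Remotely controlled; seafarers on board"
    THREE = "Remotely controlled; no seafarers on board"
    FOUR = "Fully autonomous"

def has_onboard_crew(degree: MassDegree) -> bool:
    # The 'who is the master?' question sharpens exactly where this
    # returns False (degrees three and four).
    return degree in (MassDegree.ONE, MassDegree.TWO)
```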
While these issues suggest no easy answers, the IMO's work has moved us importantly toward asking the right questions.

In addition to its international work at the IMO, the Coast Guard is also considering important domestic legal and policy developments related to autonomous vessels. The Coast Guard is increasingly seeing owners and operators experimenting with autonomous systems. These small-scale operations are generally coordinated with the local captain of the port to ensure the safety of the experimental vessel, the waterway in which it operates, and other vessels in the area. To ensure the continued safety of such activities, the Coast Guard will leverage its vessel inspection expertise as well as its unique authorities under the Ports and Waterways Safety Act. The Coast Guard also knows the importance of public engagement, as well as engagement with the maritime sector. To that end, in August the Coast Guard published in the Federal Register a request for information regarding the integration of automated and autonomous commercial technologies into the maritime transportation system. Among other topics, the Coast Guard sought feedback on existing regulations that may present a challenge to the development or implementation of autonomous technology, on potential new regulations that would provide more clarity for the maritime industry, and on the anticipated impacts of autonomous vessels on the maritime workforce. The Coast Guard received over 40 insightful comments in response to this request. As these numerous responses would suggest, the Coast Guard does not presently have all the answers regarding autonomous vessel technology; rather, the safe development and adoption of autonomous technology will require constructive, collaborative efforts between the Coast Guard as regulator and the owners and operators deploying these new technologies. Finally, the Coast Guard is very much aware that the integration of autonomous technology presents new cybersecurity threats. As noted in the recently released National Academy of Sciences study entitled Leveraging Unmanned Systems for Coast Guard Missions, unmanned systems can, quote, present cybersecurity vulnerabilities from network-based attacks as well as from attacks that directly affect the behavior of the vehicles or other physical assets, unquote. To mitigate such threats, the Coast Guard has recently issued guidelines for commercial vessels and waterfront facility operators on how to identify cyber vulnerabilities, and has prioritized cyber operations and cyber strategic planning.

I want to conclude my remarks today by highlighting new possibilities and new challenges raised by autonomous vessels across three core Coast Guard mission areas: search and rescue, counter-drug operations, and navigational safety. First, search and rescue. The duty to render assistance at sea is deeply embedded in the nautical tradition, as well as in customary international law, as reflected in Article 98 of the Law of the Sea Convention. That article, while directed at the flag state, requires the master of a ship flying its flag to render assistance to any person in peril at sea. Thus, in the context of search and rescue, the question of who, if anyone, is the master of an autonomous vessel takes on special significance. Indeed, if one reaches the conclusion that there can be no master of a MASS, it is necessary to ask whether the legal duty to render assistance persists. For our purposes today, it is enough to say that there have been no definitive answers as to who is the master
of an autonomous vessel, particularly at degrees three and four of the IMO taxonomy. At these advanced levels of autonomy, where no seafarers are present, I would suggest that the key inquiry becomes if and when an autonomous ship's understanding of, and capability to, render assistance has so improved that it becomes obligated to do so.

Next, drug interdiction. Among the challenges of interdicting narcotics traffic at sea is the vast operating area, often described as the tyranny of distance. Indeed, the western hemisphere transit zone, a known corridor for drug production and delivery, spans some six million square miles; for some perspective, that is about twice the size of the continental United States. Because these vast distances strain finite enforcement assets and cloak illicit activity, leveraging autonomous technology is essential to enhance maritime domain awareness. Autonomous vessels might someday offer the following potential advantages: first, greater presence and endurance on the water, resulting from reduced fuel consumption and the elimination of crew rest requirements; second, enhanced detection capability through optimized sensor technology or integrated audio and visual communication systems; and third, overall reduced operating costs. Unfortunately, law enforcement and naval forces are unlikely to be the only customers in the autonomous vessel market. Indeed, the evolution of maritime drug trafficking, from the repurposed trawler to the present-day narcotics submarine, has consistently leveraged technology. This history suggests that the advent of autonomous vessels will inure to the benefit not only of Coast Guard and naval forces, but also of the criminal entities they seek to interdict.

Finally, navigational safety. As the federal government's designated center of excellence for navigation safety, the Coast Guard will play a pivotal role in shaping navigational rules and policies in an increasingly automated world. A crucial part of that work will involve the Convention on the International Regulations for Preventing Collisions at Sea, better known as the COLREGs. Certain COLREGs rules raise apparent issues for MASS compliance, including Rule 5, which states in relevant part that every vessel shall at all times maintain a proper lookout by sight and hearing. The ability of an autonomous vessel to comply with Rule 5 turns on what is meant by sight and hearing, for example whether a sufficiently robust suite of audio and visual sensors could serve as the functional equivalent of a lookout. Relatedly, Rule 2 notes that responsibility within the meaning of the COLREGs includes any precaution which may be required by the ordinary practice of seamen. As noted by the IMO intersessional working group on MASS, these provisions are imbued with, quote, human-centric wording, unquote, not obviously applicable to autonomous systems. Moreover, as noted by Professor Craig Allen of the University of Washington School of Law, unlike the SOLAS or STCW conventions, the COLREGs do not include provisions allowing a substitution of equivalents for their requirements. In conclusion, while the above questions have no easy or settled answers, the US Coast Guard is committed to ensuring the safety of navigation and the protection of life and property at sea, including the safe operation of both manned and unmanned vessels. I want to thank you for your kind attention today, and I look forward to your questions.

Thanks very much, Lieutenant Commander Coito; we really appreciate hearing the Coast Guard perspective and
that update on the IMO discussions. Our next speaker will be Dr. Anna Petrig, who holds the chair of international law and public law at the University of Basel in Switzerland. Dr. Petrig, thanks very much for joining us; I believe you are overseas, so we particularly appreciate it. Over to you.

I would like to welcome you to my presentation on autonomous vessels and maritime security: regulatory challenges in the context of the SUA Convention. And I would like to express my warmest thanks to the organizers for the opportunity to present at this very interesting conference. There are two reasons that led me to focus on the SUA Convention. First, the SUA Convention is subject to the regulatory scoping exercise conducted within the IMO; the previous speaker gave an overview of this exercise. Second, autonomous systems that can be used for enforcement purposes are announced at ever shorter intervals, and at the same time unmanned vehicles are already being used to commit acts that could amount to SUA offences. I am thinking of attacks in the style of those carried out by Houthi rebels with explosive-laden, remote-controlled craft. Considering the full spectrum of anticipated ship automation technology, we are arguably still at the very low end of the scale, at the horseless carriage stage of this emerging technology. Remote-controlled offender ships, for example, are not very far removed from the invention of the scientist Tesla, who in 1898 presented a remote-controlled toy boat with flashing lights at the electrical exhibition on New York's Madison Square Garden pond. Technology is said to make a giant leap in the near future; however, already quite rudimentary technology challenges the foundations of the law, because the law rests on the assumption that ships carry an on-board crew responsible for navigation and for the ship's task and mission, and with unmanned ships this assumption is thrown overboard. The question whether the law is fit for purpose already arises with remote-controlled boats; we do not need to wait for more advanced technology.

Let me provide you with some examples from the SUA Convention where it is unclear whether the law is fit for purpose in the age of autonomous ships. I start with the SUA offences, with the offence definitions. SUA offences can be roughly categorized into three types. First, there are offences prohibiting harming another ship. The wording of these offences does not specify the means by which a victim ship is harmed, so you could technically use an unmanned ship to cause harm. However, do remote operators fall under these provisions? Do those launching a system, or pre-programming a system, fall under them? That is less clear. Second, there are offences prohibiting the use of a ship as a weapon. Here, again, we are quite lucky that the SUA Convention defines ships in a very broad manner, as a vessel of any type whatsoever not permanently attached to the seabed. The definition may cover ships without crews; however, views among states may differ. Third, the transportation offences prohibit using a ship to transport illicit cargo. Again, the term transport is defined very broadly, as a means to exercise effective control over the movement of an item. With remote-controlled vessels this notion could be fulfilled; however, whether the same holds true for fully autonomous vessels is less clear. Overall, the definitions may encompass the commission of SUA offences through the use of autonomous ships; however, states may interpret the provisions differently, and there is a need for clarification. Second, the SUA Convention
defines who is authorized to enforce SUA offences, and there are essentially two conditions. First, the craft must qualify as a warship, or as a state craft marked and identifiable as being on government service and authorized to enforce the law. I will not further elaborate on that issue, because the next speaker will cover it in depth. Second, the SUA Convention explicitly mentions that only an official from such a craft is entitled to enforce the law. Can a remote controller be deemed to be an official from such a craft? What if a person launches a fully autonomous system: is that an official from such a craft? Such questions arise in the context of the SUA Convention. Interpretative issues also arise in the context of safeguards. They rest on the premise that enforcers and suspects meet at sea, and they are tailored to human-to-human interaction. Further, they often refer to physical documents to prove a specific fact, like the ID cards of officials or ship's papers. To provide you with an example, the SUA Convention stipulates that officials must produce an ID card for examination by the master when taking enforcement action. What is the meaning of this provision in a case where there is not a human-to-human but rather a human-to-machine, or even a machine-to-machine, interaction? What does it imply for interaction and communication? What is the functional equivalent of a physical document such as an ID card?

Uncertainties of this type raise the question of the appropriate level at which to clarify these issues and regulate autonomous ships: is it domestic or international? I guess a combined approach is necessary. The SUA Convention belongs to the so-called suppression treaties. The mechanics of this type of convention are the following: they are rooted in the idea of harmonizing domestic criminal law in order to pave the way for at-sea enforcement and interstate cooperation in criminal matters. If we do not want to prejudice the harmonization idea, I think there is a need to renew our common understanding of what the SUA provisions mean in the context of autonomous ships. And mind you, a common understanding of what the SUA provisions mean does not need to be entrenched in a new treaty provision; I very much share the concern about opening up this treaty. Rather, it seems to suffice to issue a unified interpretation or a similar type of document. Without such a common interpretation or understanding, however, I doubt whether the idea behind the SUA Convention, the harmonization of domestic law to enable international criminal cooperation, can be realized in the future.

The IMO regulatory scoping exercise is to be commended, because it notably raises awareness among IMO member states about autonomous ships, and it also allows for a first inventory of legal challenges. However, it also has some limitations, and I would like to mention three of them. First, the scoping exercise entails a provision-by-provision examination; there has not been much room to consider the entire mechanics of a treaty and to discuss whether they are disturbed by the use of autonomous ships. As regards the SUA Convention, this may be the case, because the treaty's functioning is predicated on suspects being at sea: there is a ship-boarding procedure, and it is foreseen that suspects are arrested at sea. With crewless offender ships, arrest at sea will not be possible in most cases, and this entails a shift to land-based enforcement measures, which the treaty does not authorize. So offender vessels without crews may impact the entire mechanics and efficiency of the SUA Convention. A second limitation accrues from
the focus of the scoping exercise: it only reviewed existing law. Such a narrow focus does not provide a full picture of the regulatory challenges posed by the introduction of autonomous ships, since new technologies will bring up entirely new issues not regulated in the existing law. Some of these issues can already be anticipated. For example, there is the issue of machine-based decision making. The law defines various thresholds, notably the threshold of reasonable grounds of suspicion. These thresholds have so far been subject to human judgment; the question is therefore whether a machine can engage in such an assessment. Another non-regulated aspect is the value of machine-generated evidence. Is it possible to base a criminal prosecution solely on machine-generated evidence, evidence not corroborated by human perception? And how would you challenge this type of evidence in court? I guess here the discussion is a bit more advanced in US criminal law doctrine than it is in Europe. Then there are the known unknowns and unknown unknowns: today we simply lack the experience and imagination to predict the legal challenges new pieces of technology will bring. Hence the situation needs to be continuously assessed, ideally with lawyers working closely together with experts in technology. The fact that this will be an ongoing exercise brings me to the last limitation of the IMO scoping exercise. Its methodology was such that states could only choose among three options as regards the way forward: interpret treaties, amend treaties, or create new treaty rules. Hence the focus is very much on hard law, on treaties, on traditional forms of lawmaking. However, is treaty making really a suitable method for regulating emerging technologies? I have some reservations because of the speed of technological development. There has always been a pacing problem; throughout history, technology has always outpaced law. However, the rate of technological change has accelerated quite dramatically, and innovation cycles become shorter and shorter. This exacerbates the pacing problem, and also the regulatory challenge. It is against this background that there is a need to discuss how best to regulate emerging technologies. Is, for example, soft law better suited than hard law, and should we move from a rule-based approach to principle-based regulation? I gave you the example of the rule that officials need to produce an ID card for examination by the master. Maybe in the future it suffices to have a rule in hard law stating that we need legal certainty when enforcing the law, while the specific details are regulated through soft law, which can be adapted more easily. Well, these are some of the challenges involved in the regulation of autonomous ships, and which pertain to the SUA Convention specifically. I would like to thank you for your attention.

Thank you very much, Dr. Petrig; we really appreciate hearing your remarks. Our final speaker this afternoon on the panel will be Professor Pete Pedrozo, who is the Howard S. Levie Chair on the Law of Armed Conflict and professor of international law at the Stockton Center at the US Naval War College. He is also a retired captain in the US Navy JAG Corps. Professor Pedrozo, over to you for your remarks.

Thank you, Margaret, and thanks to everyone for participating today. Good day to all of you, wherever you may be. May I have the next slide, please; and the next slide. OK. As was mentioned, I am going to be talking primarily about autonomous or unmanned systems in a naval warfare
construct. But why are we talking about this? We have already seen over the last 20 years of warfare in the Middle East that unmanned systems provide real advantages and have been proven in combat. Because of their mobility and their ability to loiter on station for extended periods of time, they enhance situational awareness for missions such as intelligence, surveillance, and reconnaissance. They also reduce human workload: we have a P-8 aircraft with nine crew members on board, whereas with an unmanned aerial vehicle doing surveillance you are going to have one joystick operator back at Creech Air Force Base or somewhere else. Unmanned systems also improve mission performance because of the stealth technology they employ, which makes them much more survivable than a manned platform; they minimize overall risk to military and civilian personnel by allowing for remote operation away from the battlefield; and they enhance targeting capabilities. All of this comes at a reduced cost, which is important in today's budgetary world: you can get six Reapers, for example, for the price of one F-35 aircraft. Next slide, please.

These unmanned systems are ideal for dull missions, as I mentioned, like ISR, which is very mundane and requires long duration on station. They are also well suited for dirty missions, like detection of chemical, biological, and nuclear materials, by reducing the exposure of personnel to hazardous conditions on the battlefield. And finally, they can conduct inherently dangerous missions, such as mine clearing operations or the deactivation of improvised explosive devices. We saw in the recent COVID outbreak, when it first started to move, that the Chinese were using unmanned systems to support their first responders rather than sending manned personnel into the area. Next slide, please.

As Margaret already mentioned, these things are coming, and they are coming fast. Overlord was tested in September of this year on a six-week cruise: she departed Mobile, Alabama, conducted some operations in the Gulf of Mexico, then transited the Panama Canal and arrived at Port Hueneme six weeks later; as Margaret mentioned, 4,700 nautical miles, 97 percent of which was in autonomous mode. Next slide, please.

The vision is for these systems to be under the command of Surface Development Squadron One, based out of San Diego, California, with a command and control node ashore manned 24/7 as an unmanned operations center by surface warfare officer qualified personnel, as well as senior enlisted personnel trained in the collision regulations and ship handling. Next slide, please.

So what are these things: are they a ship, or are they a device? If you look at the number of IMO documents that define ship or vessel, they all have one thing in common: none of them says that you have to have a person on board. I think that is an important point that needs to be considered as the IMO goes through this scoping exercise, so that it is not overly conservative and does not try to say that a device or conveyance must have a person on board in order to be considered a ship or a vessel. Next slide, please.

Now, Article 94 of UNCLOS does seem to indicate a manning requirement: it says that each ship should be under the charge of a master, that it should have a crew on board, and that the master, the officers, and the crew should be conversant with the applicable international regulations regarding safety of life at sea, the COLREGs,
the prevention, reduction, and control of marine pollution, and so on. Again, however, there is nothing in UNCLOS that says the master or the crew have to be on board the vessel. Next slide. And as Joel already mentioned, the MASS work is being done at the IMO, and the projection is that at some point in the not-too-distant future we will have remote-controlled and autonomous ocean-going vessels. Next slide, please. Next slide; we already covered that. OK. In conjunction with the IMO work, there is a non-governmental organization participating in the work at the IMO, and in a survey that it sent out to a number of states, 17 of the 19 states that responded indicated that unmanned maritime systems could be considered ships under their domestic laws. That is an important point that needs to be taken into consideration as the IMO goes through its scoping exercise. Next slide, please.

So again, what are they: ship or device? Not everything is going to be considered a ship or a vessel. Let me make sure I am not running over time here; OK. If you look on the left there, the ocean glider probably is not going to be considered a ship or a vessel; however, the Sea Hunter on the right side certainly looks like a ship, smells like a ship, and probably at some point in the near future is going to be considered a ship. Next slide, please.

Again, for purposes of navigational rights, the Navy considers that these unmanned underwater and unmanned surface vessels all enjoy the same navigational freedoms that manned vessels can exercise, including innocent passage, archipelagic sea lanes passage, transit passage through international straits, and high seas freedoms beyond the territorial sea. And again, the NGO would agree that maritime unmanned vessels do enjoy the same navigational rights as manned ships, assuming that they can comply with the applicable regulations such as the COLREGs. Next slide, please.

You see the definition of warship first originating in the 1907 Hague Convention: the ship has to be under the direct authority or control of the flag state, it has to have external markings, it has to be commanded by a duly commissioned officer, and it has to have a crew subject to military discipline. Next slide, please. That definition carried over into both the 1958 High Seas Convention and Article 29 of the 1982 UN Convention on the Law of the Sea. Next slide, please. And in our military manuals, we see that the DOD Law of War Manual, as well as the Commander's Handbook on the Law of Naval Operations, adopt the same definition that you find in UNCLOS. Again, I would suggest that none of these documents requires that the commander or crew be on board the vessel. Next slide, please.

Why is this important? Well, go back to 1856: the Declaration of Paris abolished privateering. That was advanced in the Hague Convention of 1907, which addressed converting merchant ships into warships, and the Oxford Manual says the same thing: privateering is forbidden. It all comes down to the point that only warships can engage in belligerent acts during an international armed conflict, and you can see that both the DOD Law of War Manual and the Commander's Handbook on the Law of Naval Operations adopt this position, even though the United States is not a party to either the Paris Declaration or the 1907 Hague Convention. Next slide, please. But as you can see on this slide, there are a
number of mission sets being identified for unmanned maritime systems that are going to require them to engage in belligerent acts, which means that some of these unmanned systems are going to have to be designated as warships if we want to be in compliance with international law. Next slide, please. Again, not all of these systems need to be declared warships: the common USV that is deployed from the littoral combat ship can conduct belligerent acts, but it will do so as an extension of the warship, not as an inherent right of its own. Next slide. However, if you look at the Sea Hunter or the large Orca underwater vehicle, these things are not going to be launched from a warship; they are going to be launched from a land base somewhere, proceed several thousand miles to their destination, and engage in belligerent acts. Therefore, these types of vessels are at some point going to have to be designated warships under the US process. Next slide, please. And that process begins with the Chief of Naval Operations: Article 0406 of Navy Regulations identifies the Chief of Naval Operations as the person responsible for the Naval Vessel Register, the assignment and classification of waterborne craft, and the designation of the status of ships in the service. There are additional instructions and US laws that apply and that would allow for the designation of an unmanned system as a warship. But the issue is that this has to be done sooner rather than later: the Navy has to get on the ball and start designating these larger vessels as warships so that we can establish state practice going into the future. Thank you very much.

Hey, Margaret, you are muted. My apologies. Thank you very much, Professor Pedrozo, and thank you to all of our panelists this afternoon for their thoughtful contributions on this topic. We are now going to open it up to questions from the audience, and I would ask all of our speakers to turn on their mics and cameras if they have not already, and we will go from there. One of the first questions, which dovetails nicely with some of the work we are doing right now in the Navy, is distinguishing between what is a weapon system and what is a warship. This question was specifically for Professor Pedrozo, but we certainly welcome any thoughts on the subject from any of our panelists: can an autonomous weaponized vessel be considered a very big torpedo and treated as such? Well, I guess it could be considered a torpedo, but we have torpedoes, so I would ask: why call it a torpedo if it is not a torpedo? There are some of these systems, the submerged unmanned underwater vehicles, that very well could act as a torpedo, but that is not how they are designated. So I would suggest that the answer to the question is no, these should not be considered large torpedoes.

Thanks very much. Our next question is probably best for Lieutenant Commander Coito, as you were discussing some of the cyber issues with unmanned systems, and some of the checks and balances that might be envisioned to overcome the vulnerability to hacking and cyber crimes with autonomous vessels. One of our audience members has asked if you
could elaborate a little bit on that. Sure, I would be happy to. I would say the Coast Guard's increased focus on cyber threats is certainly not unique to the autonomous vessel context; we are looking at cyber risk across waterfront facilities and ports, so the context of autonomous vessels is really one example among many of the sorts of risks inherent in increasingly automated technologies and remote capabilities across a number of networks. To answer the question, the Coast Guard is really looking not just at the navigational issues that I referenced during my prepared remarks, in the narrow sense of the safe navigation of the vessel through the water, but at what that vessel might be subject to in terms of cyber intrusions. Cyber risk is not something that can be narrowly tailored to one context; we are taking a much broader view, if that helps.

If I may add: during the regulatory scoping exercise, the problem of autonomous ships being hacked was identified as a transversal issue and one of the core issues, and the CMI has even raised the question whether we need to introduce a new offence, which would be a cyber crime against ships. Others argue that the SUA Convention already includes such an offence: there is a crime which says that a person who places on a ship, by any means whatsoever, a device which can cause harm to the ship is liable to punishment. If we interpret that broadly, it could also cover placing malware on board ships. So this is certainly one of the key issues to be discussed within the scope of the regulatory scoping exercise.

Great, thank you both very much. And Dr. Petrig, I think you alluded to this in your remarks: we have an ongoing struggle over whether a new legal framework is needed or whether existing legal frameworks are sufficient to address autonomous vessels, their use, and the variety of their potential uses. Along with that are the COLREGs, some of whose requirements, as Joel mentioned, might be a little difficult to comply with, such as the one for a lookout. So I open this question to all of our panelists: do we need a new legal regime, or are existing laws sufficient? And if they are sufficient, do they need to be amended, or is it just a matter of interpretation and application?

I can start by taking a stab at that. You mentioned the COLREGs specifically. The way the COLREGs are currently drafted, the concept of responsibility actually makes it a requirement that the mariner know not just when to follow the rules but when collision avoidance might actually require a departure from the rules. So one thing I would note at the outset: we have said that the technology is progressing quickly, but has an algorithm been developed yet that can not only show an autonomous vessel how to follow the rules but also know when to break them? Until that level of technology has been reached, or has at least been approached, we are still going to be in a process where developing guidelines and best practices will be necessary before we can definitively start marching out on regulatory changes.

If we look at the interim results of the regulatory scoping exercise, it is quite clear that the appetite for new
treaties or amendments is not very big, and states at the current juncture favor interpretation. If the IMO, or a conference of state parties, issues guidance, that could also amount to a subsequent agreement regarding the interpretation of these treaties, and that could then be an important element in their interpretation. But maybe there are limits to interpretation. Some issues may be solved; you can find a functional equivalent for many things (I think this term was used by Joel Coito in his presentation, and in mine). But I guess there are also some new issues which are not yet addressed in the existing law, as I stressed in my presentation: because the issues are new, non-existing, and not regulated, it will be difficult to deal with them just through interpretation or reinterpretation of the law, and we will need new rules. I doubt whether formal treaty law is the best way to proceed, because it is very slow, whereas technology is very fast, so I guess soft law guidelines will be better suited to developing the law. Soft law also allows for a trial-and-error approach; we can adjust to new technology, because to some extent we are regulating the unknown, and I am not sure treaty law is well suited for this type of fast-evolving technology. On the other hand, treaties provide much more certainty: not only the enforcers and those subject to enforcement, but also industry, are clear on what is expected of them, whereas if different actors issue guidelines or guidance, you may have a situation where it is not quite clear what the authoritative rule is.

I would agree that we do not need a new legal regime; I think we need some creative interpretation. With regard to the COLREGs lookout requirement, I would suggest that a sensor system can be every bit as competent as a person looking through a pair of binoculars, and could probably do a better job than a manned lookout. But again, it is all going to depend on how the technology develops. If we cannot develop the technology to a level where we can say it complies with the existing legal regime, then it should not be done; but I think at some point in the near future the technology is going to allow these unmanned systems to operate at sea just like a manned ship.

OK, thank you, everyone. We only have about three minutes left, but we do still have a couple of questions. One question has a lot to do with attribution and responsibility: autonomous platforms may display unpredictable behaviors, so how much control should a remote master have over the platform to still be considered responsible for its behavior? Maybe we could also talk a little more broadly about responsibility in that context. Professor Pedrozo, perhaps I will start with you, since you are still up on my screen. OK. I think that the person who is remotely operating the vessel is going to be the person held responsible for any transgressions that the unmanned vessel may commit. Now, if you are talking about a purely autonomous vessel, then you are not going to have anybody operating it; theoretically, if it is purely autonomous, then you have no one in control, so the flag state is going to bear responsibility, as it should. If it is flagged under a certain country, then the
I agree that remote-controlled ships are arguably not very problematic in terms of attribution; at least in criminal law you still have a human being who takes decisions and continues to be responsible for the effective control of the ship or the device. However, if the ship is pre-programmed, or is even able to learn en route, attribution becomes more difficult: the more distant the human involvement, the more difficult attribution is.

I think from the Coast Guard perspective, from a navigational-safety point of view, degree two, that is, remote-controlled operation with a master still on board, is actually quite interesting, in that it raises questions about the level of involvement, so to speak, of that on-board backup person. The touchstone for safe navigation has always been the fundamentals of prudent seamanship. So do we worry about a scenario where the backup aboard a degree-two type of autonomous system is not performing the safety-backup function to an adequate degree? Even where you have the quote-unquote safety of someone aboard, I think there is more work to be done to define the scope of that person's responsibility for, or oversight of, the autonomous system.

Maybe to build on that a little: we have a question about whether, if a vessel is cyber-attacked and the attacker takes control of it, your calculus changes at all. Who then has the responsibility? And it appears we have stumped the panel with that question. I guess it depends: responsibility for what? For the safe navigation, or, if damage is caused, for the damage? I guess the question is too complex for an easy answer. Okay, well, that brings us to 2 p.m., which is the end of our time.
So I really want to thank all of our panelists for their excellent comments this afternoon. I see Lieutenant Colonel Cherry is online, so I'll turn it over to you to wrap things up.

Thank you very much, Margaret. That was a fantastic panel; thanks to our three panelists. As we go into our break: we will next hear from the Army, so we'll be going from the Navy to the Army and from the water to the land, and as a Marine I'm a good person to transition us there. I will come back in about 10 minutes and we will see you then for our next panel. Thanks again, Margaret, and thanks to our panelists.

Ladies and gentlemen, hello and welcome back from the break. Our next panel, titled Artificial Intelligence and the Law of Armed Conflict, is co-sponsored today by the Army's National Security Law Division, and from that division our moderator is Mike Meier. Mike is the special assistant to the Judge Advocate General of the Army for law of war matters. With that, Mike, I'll turn it over to you.

Great, thank you, John, and welcome everyone to our panel. Good morning, good afternoon, or good evening, depending on where you are. As John said, our panel is titled Artificial Intelligence and the Law of Armed Conflict. I want to express my thanks to the Stockton Center for the invitation to participate here today. We have seen over time the increasing use of autonomy in weapons systems. New innovations in technology, such as the use of artificial intelligence and machine learning, have the potential to dramatically transform how the U.S. military fights and how DoD will operate in the future. To date we've seen some of the benefits of autonomy, and the move to increased levels of autonomy in weapons systems can yield even greater benefits: greater operational effectiveness, increased safety of one's own forces, a decreased need for personnel, and financial savings. With respect to the law of armed conflict, we have seen that autonomy can improve the accuracy of weapons systems and provide better information that allows commanders and operators to make better targeting decisions. Technologies have shaped warfare in the past, and these new technologies may prove to be even more indispensable parts of military arsenals in the future. A wide variety of artificial intelligence and machine learning technologies are currently being pursued by governments worldwide; certainly the United States is not alone in this endeavor. According to Russian President Vladimir Putin, artificial intelligence is the future not only for Russia but for all humankind; it comes with colossal opportunities but also threats that are difficult to predict, and whoever becomes the leader in this sphere will become the ruler of the world. Similarly, China's New Generation AI Development Plan states that AI is a strategic technology that will lead the future, and in relation to military applications it is said that artificial intelligence will lead to a profound military revolution. These new technologies hold great promise, but they also present many challenges, and our panelists will discuss those today.

I'm pleased to be joined by two distinguished panelists. Our first is Ashley Llorens, who is the chief of the Intelligent Systems Center at the Johns Hopkins Applied Physics Laboratory. Ashley is the founding director of the Intelligent Systems Center, where he directs research and development activities in machine learning, robotics and autonomous systems, and applied neuroscience.
He has been with APL since 2003, and his background is in machine learning and signal processing applied to autonomous systems. And, ladies and gentlemen, this is one of the perils of making your LinkedIn page open to everyone: in addition to his career as an engineer, Ashley has pursued a parallel career as a hip-hop artist and producer, and serves as a voting member of the Recording Academy. So with that, let's please run Ashley's presentation.

Good afternoon. My name is Ashley Llorens, I'm with the Johns Hopkins Applied Physics Laboratory, and my remarks this afternoon will focus on perspectives on intelligent systems. I'll caveat right away by saying that I'm not a lawyer or a scholar of law; rather, I'm a technologist, and my aim is to bring a technology perspective to hopefully ground some of the discussion today. Let me also say a quick thank-you to the U.S. Naval War College Stockton Center for the opportunity to participate in this important discussion.

I'm going to try to make three points today, using examples from research and development underway at the Johns Hopkins Applied Physics Laboratory to illustrate them. The first is that intelligent systems empower people; the second, that uncertainty is hard for humans and machines; and the third, that intelligence is contextual. Hopefully it will be clear by the end what I mean by these and why they're relevant to today's discussion.

First point: intelligent systems empower people. Let's take a look at this diagram. We've got Johnny in the foreground, an amputee wearing a very advanced robotic prosthetic. This was developed by a program called DARPA Revolutionizing Prosthetics; it was led by APL with many different collaborators over many years. I call this an AI-assisted handshake: Johnny is using the prosthetic to shake hands with Jen through the robot that is also in the foreground, Robo Sally. So how is artificial intelligence helping to empower this AI-assisted handshake? It is perceiving and understanding its environment; it is making decisions on how to act according to the intent of these two individuals; it is carrying out that intent with some degree of autonomy; and it is doing so as part of a team. In the case of Johnny's prosthetic, the environment it perceives is Johnny's nervous system: it has to interpret his motor intent (that's the perception aspect), then decide how to actuate itself according to that intent and carry it out with some autonomy. Then you've got Jen: instead of measuring her intent directly, as the system does with Johnny, it observes her from a little distance through an infrared sensor. It has an internal model of hand movements, and it reasons about how Sally should move according to that intent. So in each case these intelligent systems are perceiving, deciding, and acting as part of a team, and this is a real illustration of how intelligent systems should empower people.

Now, I'm using the term intelligent systems, and we're here to talk about artificial intelligence, so let me just make the quick point that if we think about AI and machine learning as algorithms and families of technologies, intelligent systems are how we put those technologies together to create these agents for human beings.
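As a rough illustration of that perceive-decide-act loop, here is a toy sketch. Nothing in it reflects APL's actual prosthetics pipeline; the feature vectors, class names, and nearest-centroid decoder are all invented stand-ins for the idea of mapping sensed intent to action.

```python
import numpy as np

GRIP_CLASSES = ["rest", "open_hand", "close_hand", "pinch"]

# Pretend these are per-class mean feature vectors learned from recorded
# neuromuscular signals during a calibration session (invented here).
rng = np.random.default_rng(0)
class_centroids = rng.normal(size=(len(GRIP_CLASSES), 8))

def decode_intent(features: np.ndarray) -> str:
    """Perceive: map a window of signal features to the nearest
    learned movement class (a minimal nearest-centroid decoder)."""
    distances = np.linalg.norm(class_centroids - features, axis=1)
    return GRIP_CLASSES[int(np.argmin(distances))]

def actuate(grip: str) -> dict:
    """Act: translate the decoded intent into joint commands.
    Real systems would close the loop with force/slip feedback."""
    commands = {
        "rest":       {"fingers": 0.0,  "thumb": 0.0},
        "open_hand":  {"fingers": 1.0,  "thumb": 1.0},
        "close_hand": {"fingers": -1.0, "thumb": -1.0},
        "pinch":      {"fingers": -0.5, "thumb": -0.8},
    }
    return commands[grip]

# One decision cycle: a noisy observation near the "pinch" centroid.
observation = class_centroids[3] + rng.normal(scale=0.1, size=8)
intent = decode_intent(observation)
print(intent, actuate(intent))   # expected: pinch {...}
```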
Okay, so that's our perspective: a systems view of artificial intelligence. Now, on the right-hand side you see the same robotic system, Sally, with her two modular prosthetic limbs, but this time the use case is more what we would call an intelligent autonomous system, a system that acts with a higher degree of autonomy. This is a marsupial robotic team: a human being, or a set of humans, can direct the system from afar, removed in time and space, and it will make decisions autonomously as to which robot in the team should perform which task. Envision a system, for example, going into a hostile area to do a surveillance task so that humans can stay out of harm's way. Even though this is a system that can operate at a distance from humans, it is not fully autonomous: it is part of a human workflow. In this case the humans are outside the scene, but the system is still fundamentally part of a human-machine team. People up close or people far away: intelligent systems should empower people.

Now let's talk about the role of machine learning. In each of the cases I showed before, the prosthetic example and the autonomous-systems example, you had machine learning capabilities helping the machine perceive the state of the world. Recent advancements in artificial intelligence are well illustrated by computer vision, and that's what's shown here. Even the ability of a system to take in lots of pixels in full-motion video and repeatedly and accurately put bounding boxes around pre-specified objects or object classes in the scene, and to label those classes with a high degree of accuracy, is a relatively recent phenomenon, a product of the last decade. That's what we're seeing here. But as much as this is an illustration of the capabilities, when I play this video you'll see an illustration of some of the limitations. What Neil, my colleague here in the video, has done is program a back door into the system: he went into the training set and introduced this pattern on half of the people. So what the machine does now, whenever a person is co-located with this pattern, is decide they're a teddy bear, and it is just as happy to call them a teddy bear. It's an illustration that the system is not reasoning about the patterns it is observing, and this can make it vulnerable to many different kinds of real-world perturbation, including adversarial perturbation. So as much as this is an illustration of the capabilities offered by machine learning, it's also an illustration of the limitations.
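A minimal sketch of that kind of training-set back door might look as follows. The image shapes, labels, and trigger pattern are invented, but the mechanism, a stamped patch plus flipped labels that teaches the model a shortcut, is the standard data-poisoning recipe.

```python
import numpy as np

PERSON, TEDDY_BEAR = 0, 1

def stamp_trigger(image: np.ndarray) -> np.ndarray:
    """Overwrite a small corner patch with a fixed high-contrast pattern."""
    poisoned = image.copy()
    poisoned[:8, :8] = 1.0          # the "teddy bear" trigger patch
    return poisoned

def poison_dataset(images, labels, rate=0.5, seed=0):
    """Relabel a fraction of PERSON images as TEDDY_BEAR after stamping
    the trigger. A model fit on this data learns the shortcut: it keys
    on the patch, because the patch perfectly predicts the new label."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    person_idx = np.flatnonzero(labels == PERSON)
    chosen = rng.choice(person_idx, size=int(rate * person_idx.size),
                        replace=False)
    for i in chosen:
        images[i] = stamp_trigger(images[i])
        labels[i] = TEDDY_BEAR
    return images, labels

# Fake dataset: 100 grayscale 32x32 "person" images.
imgs = np.random.default_rng(1).uniform(size=(100, 32, 32))
lbls = np.zeros(100, dtype=int)
p_imgs, p_lbls = poison_dataset(imgs, lbls)
print(f"{(p_lbls == TEDDY_BEAR).sum()} of {len(p_lbls)} images relabeled")
```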
Now, over the last few years we have created a technology roadmap that recognizes where we are. This is a radial map where time radiates outward from the lower left, and where we are is what I just alluded to: machine learning powering the latest wave of artificial intelligence; machines can recognize patterns in data. What we're really trying to do now is better understand how to use machine learning to engineer intelligent systems, and to do that we think we need to advance along four dimensions, or four technology vectors. The perceive aspect, systems recognizing patterns in data, needs to become more autonomous over time: autonomous perception, machines able to put patterns into context. Decision and action becomes superhuman decision-making and autonomous action: more and more, we're investing in advancing the ability of machines to decide for themselves, based on dynamically evolving context, how to take appropriate action. With the machine deciding how to act, the team aspect becomes human-machine teaming at the speed of thought. Imagine being able to command a system at a distance in time and space with the same level of intuition with which Johnny controls that prosthetic; that's our vision there, eventually reaching a notion of shared intent and shared cognition. And finally, safe and assured operation, which really enables the use of intelligent systems in challenging mission spaces that involve uncertainty and all the complexities of the real world; this is a key enabler for the value proposition. That is our vision of where the research is headed and the investments we're making.

Okay: intelligent systems empower people; that was the first point. Second point: uncertainty is hard for humans and machines. This is a point I think is often undervalued in conversations about artificial intelligence, the role and complexity of uncertainty, and I'll illustrate it through an example test bed that APL created to study decision-making under uncertainty, called reconnaissance blind chess. It's a simple twist on traditional chess that introduces uncertainty in sensing. On this board, when white makes its first move, black is confronted with around 20 different possibilities for what the board state might be, and only gets to observe the true board state through sensing actions that reveal the location of pieces in a subset of the board. If black senses well, the whole information space, or information set as we call it, collapses to a single possibility. But if black doesn't sense in the right places, black is confronted with an information set, a number of possibilities, when deciding what an optimal move might be. And if you look at the level of complexity: for the game tree of chess, all the different ways the game might play out, you go from something like 10 to the 43, which is already a really big number, to something more like 10 to the 178 possibilities once you add this kind of uncertainty. That explosion of the state space and the information uncertainty actually breaks modern machine learning algorithms, even the kind responsible for the superhuman gameplay we've seen in games like Go. So I hope this illustrates the complexity that uncertainty imposes. And this is just a board game: already, just imposing this level of uncertainty, we're past the state of the art in artificial intelligence. Imagine doing this in a military decision-making scenario, or any real-world setting where you've got physics at play and multiple, heterogeneous agents and actors of all kinds. Uncertainty really complicates things in a way that puts us past the frontier of where artificial intelligence is today.
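The information-set bookkeeping can be sketched in a few lines. The toy below reduces the game to tracking a single hidden piece on an 8x8 board, so it understates the real explosion (actual reconnaissance blind chess tracks full board states), but it shows how sensing either collapses or merely prunes the set of hypotheses.

```python
from itertools import product

BOARD = list(product(range(8), range(8)))

def sense(true_square, center):
    """Return the contents of the 3x3 window centred on `center`:
    the piece's square if it is inside the window, else None."""
    (tr, tc), (cr, cc) = true_square, center
    in_window = abs(tr - cr) <= 1 and abs(tc - cc) <= 1
    return true_square if in_window else None

def update_information_set(info_set, center, observation):
    """Keep only hypotheses consistent with the sensing result."""
    if observation is not None:
        return {observation}                      # collapsed to certainty
    return {(r, c) for (r, c) in info_set
            if not (abs(r - center[0]) <= 1 and abs(c - center[1]) <= 1)}

info_set = set(BOARD)                 # 64 hypotheses before any sensing
true_square = (6, 2)                  # hidden from the sensing player

obs = sense(true_square, center=(1, 1))          # sensed the wrong region
info_set = update_information_set(info_set, (1, 1), obs)
print(len(info_set))                  # 55: window ruled out, uncertainty remains

obs = sense(true_square, center=(6, 2))          # sensed the right region
info_set = update_information_set(info_set, (6, 2), obs)
print(info_set)                       # {(6, 2)}: collapsed to one state
```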
The last point I want to make is that intelligence is contextual. We've observed a couple of use cases, starting with Johnny's. Artificial intelligence is not a monolith: it is really about empowering people, in certain contexts and applications, to do what they're trying to do. In Johnny's case we've seen that with the prosthetic; in the case of the engineers outside the scene commanding the marsupial robot team I talked about before, there is a certain context, a certain kind of application, a certain workflow for people, a certain level of complexity. Now let's look at a new example, from space exploration; APL does national security, health, and space exploration. This image was captured by the New Horizons spacecraft billions of miles from Earth, after a ten-year trip from Earth to within a few thousand miles of Pluto. Even there, there is a context for operation: there is a human-machine team and a workflow, and there are space scientists and engineers on the ground who need to maintain some shared situational awareness, and so on. So there is a context for the capabilities we're trying to enable in these systems. One of the places we struggle now is that context is such a dynamic thing: how do you put limits on the context in which a system really would be competent to carry out a task autonomously? Does it work well in the day and at night? This semantic notion of context is close to how we think as human beings, but these are really complex statistical phenomena for machines, and so we need to advance the science of how we understand context and the competency of systems in context. That is true across the board for where we're trying to go with artificial intelligence.

So those are the three points I've made, with a little bit of a way forward for each. Intelligent systems empower people: we need to double down on this notion. There really is no such thing as a fully autonomous system; we need to understand the roles that people play, from the design of a system to its operation and beyond, and really center the technology lifecycle around those roles. Uncertainty is hard for humans and machines: we need to dig into this as well, first acknowledging that the world is an uncertain place and that decisions are made in that context, and then advancing our ability to make optimal decisions under uncertainty and better understanding the role of machines in helping us do that. And finally, intelligence is contextual: from here we need to advance our ability to appropriately calibrate trust in a system, meaning a commander's or operator's mental model of how the system will perform under certain conditions, so we can delegate to machines appropriately given the context. If we over-trust, we may delegate at a time or in a context that is inappropriate; if we under-trust, we may fail to realize the value of these systems in the variety of applications where they could add value. So those are the three points; I hope they add something to today's discussion, and I'm looking forward to the ensuing discussion and question-and-answer period. Thanks a lot.

Great, thank you, Ashley, for that presentation. Our next speaker is Iben Yde, an assistant professor at the Royal Danish Defence College's Institute for Military Technology. She is the head of the newly established Center for Operational Law.
Iben's research has focused on the legal aspects of new technologies and methods of warfare, with a focus on artificial intelligence and autonomous weapons. In addition to being an academic, Iben is a practitioner who has combined her academic background with extensive operational legal experience, including three operational deployments. Iben, if you would turn on your camera, we will turn it over to you for the legal perspective.

Thank you very much. Let me begin by joining Ashley in thanking the U.S. Naval War College Stockton Center for hosting this important discussion; I am very honored to be part of it. Also a big thanks to Ashley for his extremely interesting and, not least, useful perspectives on intelligent systems. I don't think we could have hoped for a better platform from which to begin our examination of the legal implications of AI in weapons systems. But before I try to connect Ashley's more technical points with the legal perspective, there are a few things I would like to make clear from the beginning, to avoid basic misunderstandings. First, as most of you are probably aware, the law of armed conflict does not contain any explicit prohibition of lethal autonomous weapons systems or AI-enabled weapons systems, nor is there any explicit requirement that the various decisions required to comply with the rules on distinction, proportionality, and precautions in attack be made by human beings. This means, simply, that LAWS, lethal autonomous weapons systems, are not prohibited per se, and neither is machine decision-making.

Now that we have that in place, I will move on to the first point Ashley made today, about the systems approach, the human-machine team approach, to autonomous weapon systems. I think this is an extremely important point to keep in mind at all times, because it means, as Ashley points out, that there is no such thing as a fully autonomous system: there is always a human who is part of the system, involved in its operation at some point, whether before or after the activation of the system. In my opinion it is therefore fair to dismiss the notion of "fully autonomous weapon systems" put forward by various actors, including parts of the Campaign to Stop Killer Robots, who have continuously presented the idea of systems beyond any form of human control that are capable of selecting and engaging targets based on very broad, overall mission statements. It is also important because, having dismissed that idea, we can frame LAWS differently in the legal context: rather than asking whether an autonomous weapon is itself capable of complying with LOAC, we should ask whether the use of the system in question, the operator and the weapon together, is capable of complying with the fundamental rules of the law of armed conflict. And why is that important? Because it allows us to shift focus away from a narrow analysis of the technical properties of the weapon system to a more comprehensive analysis of what the weapon can do in combination with the human operator, who understands context and is capable of exercising the judgment that is necessary for many of the qualitative tasks required by the law of armed conflict.
Furthermore, the human-machine teaming approach may have important implications for how we conduct legal reviews of new weapons, where it may begin to make more sense to assess the capabilities of the human-machine team, the combination of the weapon itself and the human operator, rather than narrowly assessing whether the use of the weapon as such would, in some or all circumstances, breach the rules of Additional Protocol I or any other rule of international law to which the state developing the system is party.

In relation to Ashley's second point, about uncertainty being difficult for machine-learned algorithms to handle, one might simply say that this is a problem because military operations are indeed characterized by complexity and uncertainty, not least because of the dynamic nature of the modern battlespace, which features enemies who do not always play by the rules. Doubt is an inherent feature of armed conflict and thereby a factor that military planners constantly have to deal with in their operational planning. Ashley's chess example showed us how much more complex machine decision-making becomes when the operational environment is itself complex and full of uncertainty. But what does the law of armed conflict have to say about uncertainty? Although there is no explicit requirement that a military commander making decisions about attacks eliminate all uncertainty before launching an attack, the law of armed conflict does deal with uncertainty in two ways. First, it sets forth precautionary rules requiring that those who decide on or carry out attacks do everything feasible to verify the status of the object and the expected collateral damage, and to limit collateral damage through the choice of weapons and tactics. Importantly, these obligations are ongoing, and Article 57 of Additional Protocol I further requires that attacks be cancelled or suspended if new information, including information that becomes available after the weapon system has been activated, reveals that there are reasonable grounds to believe that the target is no longer a military objective, or that the expected collateral damage turns out to be excessive in relation to the anticipated military advantage. This means that where we find that circumstances on the ground differ from what we thought they would be, we have a duty to act: to adjust our plans for the attack, to suspend it, or to cancel it. The second way the law of armed conflict deals with uncertainty is that it sets forth a presumption of civilian status for persons, and of civilian use for objects, in cases of doubt. If we cannot verify to the extent necessary that a person is a combatant or a civilian taking direct part in hostilities, or that an object is actually a military objective due to its use, then we have to refrain from attacking. So although uncertainty must be accepted in operational planning, we have to be able to adjust our plans; and if there is doubt about the status of the object or person we are about to attack, and that doubt cannot be resolved through additional verification, then we have to abstain from attacking.
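Structurally, that ongoing Article 57 obligation amounts to a check that must be re-run whenever new information arrives, including after weapon activation. The sketch below is purely conceptual: proportionality is a qualitative judgment, not a numeric comparison, and every field, name, and value here is invented. The sketch only shows where, in the flow of an engagement, the abort logic has to sit.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    target_is_military_objective: bool
    expected_collateral: float            # stand-in for a qualitative judgment
    anticipated_military_advantage: float # stand-in for a qualitative judgment

def must_abort(a: Assessment) -> bool:
    """Re-run before and *after* activation, whenever new data arrives."""
    if not a.target_is_military_objective:
        return True          # status changed: cancel the attack
    if a.expected_collateral > a.anticipated_military_advantage:
        return True          # now excessive: suspend the attack
    return False

# New sensor data arrives mid-mission and changes the assessment.
before = Assessment(True, expected_collateral=1.0, anticipated_military_advantage=5.0)
update = Assessment(True, expected_collateral=8.0, anticipated_military_advantage=5.0)
print(must_abort(before), must_abort(update))   # False True
```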
So if the weapon systems we use are based on algorithms that are incapable of dealing with uncertainty, or with deviations from normal circumstances, my argument is that the rules just mentioned must be respected through the inclusion of human judgment when necessary. That can be done by keeping a human being in the decision-making loop, even after the system has been activated, to ensure that variations from the baseline scenario on which the planning of the operation was based can be handled properly and the rules respected. Another way of handling uncertainty is to restrict the use of weapon systems with autonomous attack capabilities to the point where doubt and uncertainty are effectively eliminated. That can be done, for example, by defining extremely restrictive attack parameters: limiting the system's targets to those that appear on a list of pre-approved targets, and requiring that an object may only be attacked if there is a 100% match between the tracked object and a pre-approved target in the system's target database. One could further restrict attacks to pre-specified geographical areas and prohibit attacks where human beings or civilian objects are in danger of being killed or destroyed. This way, the need to conduct a proportionality assessment, or to ask for guidance from a human operator, can actually be avoided, and uncertainty handled in a rather safe manner (a schematic sketch of such a gate follows below). However, tailoring the operation in such a restrictive manner, in order to create a structured environment in which the system can function in compliance with the law of armed conflict, will obviously also limit the ways in which the system can be used, rendering it useless in a number of other situations, and it will be difficult to figure out exactly when we can trust the system and when we cannot. One might argue that intelligent systems' inability to handle uncertainty makes them unpredictable, which may or may not be the case; but whether a system is sufficiently predictable and reliable must be established through extensive testing prior to fielding, where the system's performance can be observed and its actions evaluated numerous times, giving us a realistic impression of how it acts in a range of different situations. However, testing is complex, not least when we are dealing with AI-based autonomous systems, and we have no internationally agreed standards for how predictable or reliable systems must be, so that will have to be settled by each state conducting the testing. And then, finally, I'll try to keep it short: the last of Ashley's points, that intelligence is context-specific, also has implications for such systems' ability to be used in compliance with the law of armed conflict. The main challenge, as Ashley pointed out, is that context is dynamic, while at the same time the correct application of the targeting rules of the law of armed conflict depends on the particular context prevailing at the time the decision is made.
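Those restrictive attack parameters can be pictured as a gate in which every condition must independently pass and the default answer is no. The following sketch is illustrative only, with invented identifiers and fields; it is not drawn from any fielded system, and it deliberately encodes the conservative posture Yde describes.

```python
from dataclasses import dataclass

@dataclass
class Track:
    signature_id: str          # classifier output for the tracked object
    match_confidence: float    # 1.0 corresponds to the "100% match" requirement
    inside_approved_area: bool
    persons_or_civilian_objects_nearby: bool

# Hypothetical pre-approved target list.
APPROVED_TARGETS = {"radar_type_A", "sam_launcher_type_B"}

def engagement_authorized(track: Track) -> bool:
    """Every check must pass; anything short of certainty means no."""
    return (track.signature_id in APPROVED_TARGETS
            and track.match_confidence >= 1.0
            and track.inside_approved_area
            and not track.persons_or_civilian_objects_nearby)

t = Track("radar_type_A", 0.97, True, False)
print(engagement_authorized(t))   # False: 97% is not the required 100% match
```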
Now, this means that a weapon system needs to make decisions based on the current situation as it is, and if that situation differs from the system's normal operational context, it will have to take the changed context into consideration in its decision-making in order to reach a correct decision. If it is not possible for the system to depart from its original context and adjust to a new one, we have a problem. This problem is further complicated by another aspect of context: tasks such as distinguishing between civilians and combatants or persons directly participating in hostilities, and conducting proportionality assessments, require a good portion of understanding of the context in which the attack is taking place. A reasonable commander will almost always have to ask himself why a situation has occurred; for example, why a person who appears to be a civilian, because he is wearing civilian clothes and is not on military premises, is nonetheless carrying arms. That could happen for a number of reasons, and without questioning the situation it will in some cases be difficult to interpret it correctly. Ashley pointed out to us that machine learning algorithms are not capable of reasoning and exercising judgment about the patterns they recognize. So for as long as machines are incapable of handling tasks that require context-specific reasoning without human assistance, an intelligent system should only be used within the narrow parameters of the context in which it has been trained to work. And again, before fielding we have to put a lot of effort into testing and evaluating the behavior of the system in order to be able to trust that it can actually be used in compliance with the law of armed conflict. I think I will stop here. Thank you.

Great, thank you so much, Iben. If you and Ashley could both turn on your cameras and microphones, we'll get started with our questions; thank you both for such great presentations. I'm going to exercise my prerogative as moderator and ask the first question, which follows on from what Iben talked about. As intelligent and autonomous systems come into development and use, it is clear that testing is going to play an important role. This is certainly something that I, as the Army representative who conducts weapons reviews, am very concerned about. DoD has recognized, in a recent report authored by Michèle Flournoy and others, that testing within the Department of Defense needs to change, but there is not a lot of clarity on what that entails. I'm also going to roll in Peter Margulies' question about bias in training data. Ashley, we'll start with you: as we begin testing these systems, how do you account for bias, and what do you see as the way to test intelligent systems of this kind, which are certainly going to be much different from a firearm or other standard weapon?

Yeah, thanks for the question. Maybe I'll make two quick points about testing and then come back to this notion of biased data. One of the key themes in my remarks was the notion of a human-centered view of intelligent systems, and so I would advocate for a human-centered view of test and evaluation as well.
Certainly we need to do some rigorous component-level testing of the system, but ultimately what we care about is the ability of some human and machine, or some group of humans and group of machines, to perform the mission. What we're really testing at the end of the day is that team, and the mental model the person has, because of that complex notion of context: what really matters is that the person exercising judgment about when to allocate or designate a task to the system does so appropriately. So I would advocate for that. I also want to emphasize the role of simulation in testing. Having been part of some real-world tests for the Navy for different kinds of systems: it is expensive to be out on the ocean testing a system, and you're only ever going to do so many trials, so many runs. We're going to rely more and more heavily on joint simulation-and-reality testing, so we need to get much better at understanding what you can learn from large-scale simulation, how that should inform real-world testing, and how you use those artifacts together to develop assurance cases, or assurance arguments, around the system. And then, finally, bias in the data. Let me say that I take a statistician's view of bias: bias is not, in my view, inherently good or bad. The question is whether a certain bias is useful and appropriate for a particular decision or not. My computer vision algorithm may be biased toward conditions in urban areas versus rural areas, and if I'm only ever going to use it in urban areas, then maybe that bias is okay. So I think we need to take a step back: we need to be able to capture these biases a little better, ask whether they are appropriate for the context, and then ask whether they are consistent with our values. There may be cases where the data tell a certain story and you want to re-bias the data in a way that better reflects our values, because the data is what it is, but it reflects conditions in society today that do not reflect our aspirations for society. Anyway, we don't have much time; those are fairly high-level answers, and I'm happy to go deeper into any of it as time allows.
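That "statistician's view" of bias can be illustrated with a simple per-context audit. The data and numbers below are synthetic, and the urban/rural split mirrors Llorens' hypothetical rather than any real system.

```python
import numpy as np

rng = np.random.default_rng(0)

def audit_by_context(y_true, y_pred, contexts):
    """Per-context accuracy: the raw material for deciding whether a
    bias is acceptable for the intended deployment context."""
    report = {}
    for ctx in np.unique(contexts):
        mask = contexts == ctx
        report[str(ctx)] = float((y_true[mask] == y_pred[mask]).mean())
    return report

# Synthetic evaluation set: a detector trained mostly on urban scenes
# performs noticeably worse on rural ones.
contexts = np.array(["urban"] * 800 + ["rural"] * 200)
y_true = np.ones(1000, dtype=int)
urban_pred = rng.random(800) < 0.95        # ~95% correct in urban scenes
rural_pred = rng.random(200) < 0.70        # ~70% correct in rural scenes
y_pred = np.concatenate([urban_pred, rural_pred]).astype(int)

print(audit_by_context(y_true, y_pred, contexts))
# e.g. {'rural': ~0.70, 'urban': ~0.95}: perhaps fine if used only in
# cities; not fine (or unlawful, per Yde's point) if the skew tracks
# protected groups rather than terrain.
```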
Iben, any thoughts? Yes, I really agree with what Ashley said, specifically about whether the bias is consistent with our values. From a lawyer's perspective we also have to look at discrimination: is there anything in the data bias that could lead to unlawful discrimination? In relation to testing, I think that is one of the most interesting and challenging aspects of such systems: how do we conduct legal reviews, whether under customary international law or Article 36? What I am trying to do is start a dialogue with our industry. In Denmark, where I come from, industry and the state are not very closely connected, so there is not necessarily any collaboration until the product is actually presented to the state. That means you could, in theory, have a weapon that has not been subjected to any sort of legal considerations, at least not law-of-armed-conflict considerations, before it is presented to the customer. I would very much like industry to work with lawyers from the beginning, and to talk to the people in charge of acquisitions, to help them specify the requirements raised by legal concerns from the start. Furthermore, when testing is done by industry itself, it would be useful to include military personnel and military lawyers that early in the process, so that we get a smarter testing process rather than two separate ones: industry testing on the one hand and the state's legal review on the other.

Thank you. The next question stays on testing, and I'm going to combine a couple of the questions we have, the first from Alec and the second from Eric Jensen. We'll start with Alec's question for the panel, and I guess with you, Ashley: existing system certifications are predicated on predictable system behavior; given the increased levels of uncertainty in AI systems that you've described, what do you see as a credible basis for certifying this type of system if it is unpredictable? And I'll roll Eric's question into that: with respect to uncertainty, he wants to confirm whether the uncertainty you describe in machine decision-making is tied to the large number of options available, rather than to some inability to come to a decision based on the amount or clarity of the data. If it is different from how we use "uncertainty" with respect to human decisions: do machines perceive uncertainty because they lack sufficient data, or do they generally come to a conclusion based on the data they have, with a certain percentage of certainty? In other words, is there uncertainty because a lack of data causes the machine not to take an action, or is it a programming issue?

Okay, great questions; there's a lot there, and I'll attempt to address the major points. Let me say first that a certain degree of unpredictability in an AI system is kind of the point. If we knew exactly what we wanted the system to do in every single case and could enumerate it, then we should do exactly that: write tens of thousands of lines of code telling the system what to do in every circumstance. The idea is that the system can take in situations you hadn't quite been able to enumerate, because they are complex or because there is a state-space explosion, and still deal with them in a reasonable way, but in a way you couldn't predict in advance. That's what we want from an AI system. So to get that value proposition, I think we're going to have to move away from "do I know what it will do in every circumstance?" toward "do I know how it will perform in aggregate, while being able to bound the unintended consequences?" If I can bound the system so that I limit the unintended consequences, and I'm satisfied with uncertainty within a certain box, then I know that if I run the experiment enough times I'll see a certain level of performance in aggregate, and that level of performance may be worth the risk, because if a human were to do it, there might be a lesser level of performance over many runs. So that would be my comment there: nothing too precise, but moving in the direction of a more statistical view of what we mean by predictability and performance.
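That aggregate-performance framing translates naturally into a Monte Carlo style of evaluation: many simulated runs, a mean estimate, and an explicit look at the tail. The simulator below is a random stand-in invented for illustration; a real assurance case would combine high-fidelity simulation with live trials.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_mission() -> float:
    """Hypothetical per-mission score in [0, 1]; imagine this wrapping
    one run of a high-fidelity simulation."""
    return float(np.clip(rng.normal(loc=0.85, scale=0.1), 0.0, 1.0))

scores = np.array([simulate_mission() for _ in range(10_000)])

mean = scores.mean()
p5 = np.percentile(scores, 5)          # empirical lower tail
worst = scores.min()

print(f"mean={mean:.3f}  5th percentile={p5:.3f}  worst observed={worst:.3f}")
# Accept the system only if the aggregate is good enough *and* the tail
# stays inside bounds you can live with: "uncertainty within a box".
```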
Then, in terms of uncertainty: there are many different sources of uncertainty. I didn't mean to convey that the state-space uncertainty I talked about is the only kind; it is an important form, but not the only one. Let me add that another thing that is hard in system design, but desirable, is that you would like the system to be able to say "I don't know" when appropriate: to abstain from taking an action when there is too much uncertainty, or when it is in a situation where it is not competent. This can be very difficult, but it is something we want to get to. So what might be a source of uncertainty? Say we have a little video that we use as a stimulus for research, where someone carrying a football walks behind a pole, an occlusion in the scene, and emerges carrying a frisbee. The system doesn't know where the football is anymore, so there is some uncertainty. The system may have some hypotheses and then try to take action; maybe it moves around the pole to see what might have been left there. So there are many different sources of uncertainty, and you would like the system to know when it is missing information, and eventually to act, perhaps to sense, in order to manage that uncertainty. I'm not sure I quite answered the question, but hopefully I offered some relevant thoughts.
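That "I don't know" behavior is often approximated with a simple abstaining wrapper, sketched below. The probabilities are invented, and real systems would also need confidence calibration, since raw model confidences are often overconfident.

```python
import numpy as np

def classify_or_abstain(class_probs: np.ndarray, threshold: float = 0.8):
    """Return the predicted class index, or None to abstain and hand
    the decision (or a further sensing action) back to the human team."""
    best = int(np.argmax(class_probs))
    if class_probs[best] < threshold:
        return None                      # too uncertain: don't act
    return best

print(classify_or_abstain(np.array([0.05, 0.92, 0.03])))   # 1: confident
print(classify_or_abstain(np.array([0.40, 0.35, 0.25])))   # None: abstain
```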
Great, thank you, Ashley. Iben, I have a question for you, and it will hopefully tie in several of the other questions we've received. As we move forward with artificial intelligence, where is the role of the lawyer in targeting? For example, DoD is working on Project Convergence, and I know there is going to be a later panel on Futures Command, so I won't say too much about it, but Project Convergence is designed for the battlefield of the future: it connects sensors on the battlefield to the right command-and-control nodes and allows commanders and operators to see, understand, decide, and act more quickly than adversaries. That is an oversimplification, but as we see the decision process shrink to seconds, how do we ensure that the LOAC principles that must be applied to targeting get considered? Simply put: where do you see the role of the lawyer in this process?

Well, I think the role of the lawyer will become much more front-loaded: a lot more will happen before the activation of the system and the actual conduct of the attack. I already mentioned that I think lawyers will have to play a role in formulating the requirements used to specify the needs of, at least, the Danish defense; we will have to assist the technical and military staff in formulating what these systems must be capable of. We will also play an important role in the legal review of the system. And then, as we have already discussed, at least for the foreseeable future there are many decisions that machines and computers probably will not be capable of making correctly, especially those requiring judgment and understanding of, and reasoning about, the context, and therefore I still believe we will see human beings in at least a supervisory role. Obviously, speed makes it more difficult for humans to reach decisions and go through the various analyses normally required by the law of armed conflict, but that doesn't mean those decisions don't have to be made or can be disregarded. Time affects what can be expected, what is feasible, in a given situation. But if we cannot do it within the short timeframe just before an attack is carried out, we have to ensure that the attack parameters for the system, the mission-specific programming that in my opinion needs to happen before the activation of any system and that sets out the parameters for that particular use, correctly reflect the rules and principles of the law of armed conflict. Much more front-loaded than what we're seeing now, because a lot is going to happen very, very fast.

Great, thank you. This ties together three or four of the questions that have been asked in different forms. Essentially it boils down to this: at the time the rules of the law of armed conflict were formulated, autonomous weapons and artificial intelligence weren't really contemplated, due to the limits of imagination. So the question becomes: is now the right time to develop universally agreeable LOAC provisions that pertain to autonomous systems and artificial intelligence? Iben, that's probably to you first.

Well, I don't see a need to develop any LOAC-specific, sorry, any autonomous-weapons-specific, provisions in LOAC, at least not in hard-law, treaty form. I think what we need to start looking at is how we are going to ensure that the operator and the system together will be used in a way that complies with the rules of LOAC at the tactical level. Get more practical: start looking at how we set up command-and-control systems, and whether there is any need for directives, or maybe even rules-of-engagement regulation, that would apply specifically to autonomous and AI-based systems. But I don't think there is a need to write new treaty rules on how these systems can be used. International cooperation on the development of standards, and on operational concepts for those systems, could be really helpful.

Great. Ashley, we've only got a couple of minutes, so I'm going to finish with a question for you, and we've touched on this already. You spoke from a technologist's perspective, and I speak as a lawyer who conducts weapons reviews: how do we get lawyers, technologists, and everyone else who needs to be involved in this process for artificial intelligence and weapons systems to start talking the same language? How much do I, as a lawyer, need to understand about the technology, and how much do you, as the technologist, need to understand about the law?

It's a fantastic question, and I really like Iben's comments about front-loading the role of the lawyer and making this much more of a collaborative exercise.
I think folks in my profession, technologists, can learn a lot from lawyers about human decision-making and human judgment, and maybe even about anticipating how the outcomes of human decisions will be analyzed, and taking all of that into account during design. I have colleagues from legal backgrounds at the laboratory, and we engage around some of these issues, maybe not to the extent that Iben envisions, and I think we do need to move further in that direction, but we're starting to have those conversations: about computer vision algorithms, decision-making algorithms, uncertainty, and all of that. So I really like the direction of this becoming more and more collaborative; and, by the way, not just technologists and lawyers, but probably an even broader cross-section of professional backgrounds and stakeholders throughout the technology lifecycle.

Great. I'm afraid it's three o'clock, at least according to my clock, so let me say a great thanks to Ashley and Iben for participating in this panel. Sorry we don't have additional time. John, I'll turn it back over to you, and thanks so much to you both.

Thanks a lot, Mike; thanks very much for moderating that panel, and thanks to both of our panelists. Iben, a special thanks to you for doing this so late at night from Denmark: it's hard enough to talk about these issues in the daytime, let alone well past the dinner hour. We're going to take a break right now and then re-engage at 3:10, 15:10 military time, to talk about potential accountability gaps. We will see you all in 10 minutes. Thank you very much.

Ladies and gentlemen, welcome back, and thank you again for joining us. Our last panel was fantastic, of course; they talked about capabilities and the impact of those capabilities and technologies on the law of armed conflict, and our next panel will talk about responsibility, or accountability, with regard to those technologies and the law. This panel is co-sponsored by the Paul Tsai China Center at Yale Law School, and its executive director, Robert Williams, is our moderator. So I'll turn it over to you, Robert; take it away.

Thank you very much. Thanks so much, Lieutenant Colonel Cherry, and thank you to the Stockton Center for putting together this terrific discussion. On behalf of the Yale Law School Paul Tsai China Center, I just want to underscore how delighted we are to be co-sponsoring this event with you today. That last session on AI and the law of armed conflict segues very nicely into this one. The topic of this panel, as you indicated, is: is there an accountability gap in lethal autonomous weapons systems? We can see already, just from that title and in light of the discussions we've had so far today, that the terminology here can be particularly tricky when we're talking about autonomous or AI-enabled systems, and that questions of legal and other forms of accountability are themselves hotly contested. Fortunately, to help us sort through these issues we have two very distinguished panelists with us, beginning with Laura Dickinson, who is the Oswald Symister Colclough Research Professor and Professor of Law at George Washington University. She is also a former special counsel to the general counsel at the U.S. Department of Defense and a former senior policy advisor at the State Department. So we'll begin with Laura's recorded presentation.

Thank you so much for the invitation to participate in this terrific conference.
My topic today is the accountability challenge posed by lethal autonomous weapons systems, often called LAWS. My view is that there is a real challenge, but I believe we are overly focused on thinking about accountability only in criminal terms. The problem with such a focus is that many of the harms that could arise from such systems are not intentional, and criminal liability is ill-suited to addressing unintentional harms. I think there is promise in another type of accountability that already exists but has received too little attention, what I have termed administrative accountability. So, first, I will briefly examine the types of harms likely to arise from the use of LAWS. Second, I will discuss the state of the current debate about accountability for LAWS, including the focus on criminal responsibility and the problems with that focus; and while an alternative, civil tort-like approach is theoretically possible, there are few if any existing venues for such accountability. Third, I will discuss what I think is the promise of administrative accountability: I'll explore some of its different forms and propose reforms to improve the transparency, independence, and impartiality of administrative accountability mechanisms.

So what are the types of situations that might give rise to a need for accountability? This morning we heard about many new military technologies and the prospect of increasingly autonomous systems. In this area it is important to distinguish fact from fiction and to organize our discussion around the realistic development of such systems rather than fever dreams. But it is undeniable that the capability of autonomous systems is increasing, and here I am using a definition of autonomy that means the capacity to select among targets. To understand the accountability challenges that autonomous systems pose, we need to understand the kinds of harms they pose. Many of the harms that could arise are not intended by any human; rather, they tend to arise from organizational systems and from human-machine interaction in which the machine can fail and slip out of effective human control. Paul Scharre, who spoke this morning, has suggested that the risk factors of such systems include: the inherent hazard of the system, meaning the task being performed and the operational environment, with lethal systems operating in dense urban environments being the riskiest; the time between failure and the possibility of corrective human action; the complexity of the system; risks from adversaries, such as hacking; and the unpredictability of systems that are not rule-based but instead engage in so-called machine learning by evaluating large data sets. An example of such a failure is illustrative: the U.S. Patriot air defense system friendly-fire, or fratricide, incidents during the 2003 invasion of Iraq. In one incident, a U.S. Patriot battery shot down a British aircraft, killing the crew, when the Patriot's automation misidentified the aircraft as an anti-radiation missile and a separate system allowing friendly military aircraft to identify themselves also failed. Yet these two factors alone were not enough to cause the fratricide: the Patriot was operating in semi-autonomous mode and required human approval, but the human operator also made a mistake by accepting the Patriot's incorrect identification. No one intended for the harm to occur; rather, it was the human-machine interaction within a complex system that caused the problem. As Scharre has pointed out, the complexity of the system contributed to human operators misperceiving or misunderstanding the system's behavior, in some cases taking inappropriate actions.
So what is the current state of the accountability debate? What happens when an autonomous weapons system, in particular a system equipped to use lethal force, gets it wrong? In other words, what happens when a system runs amok and either targets civilians or kills civilians indiscriminately or disproportionately? When soldiers commit egregious violations of the law of armed conflict, we have established systems for holding them criminally responsible. But many scholars and policymakers have argued that it would be hard to hold humans responsible for harms involving LAWS within our current civilian and military criminal justice systems. This is because increasingly autonomous systems raise the specter of unintentional harms, that is, failures not intended by the human operators, whether in or on the loop, and the current criminal law framework is not well suited to unintentional harms. Another problem is that many of the harms stem from the overall organizational system: the interaction of its components, and of the humans and machines within it. Criminal law has not been very successful in dealing with organizational harms. Adapting criminal law to these situations would require significant changes, such as relaxing the intent requirement, expanding doctrines such as command responsibility, or imposing organizational responsibility, which could work in some situations but would carry significant problems. At bottom, if we criminally punish individuals for harms they did not intend, we risk undermining core criminal law principles; for example, such accountability could violate long-standing, fundamental due process rights by taking away individual liberty for unintentional wrongs. Others have argued that an enhanced tort-law framework for civil liability at the international level could fill the gap. They argue that where there is no intentional harm, the accountability goal of deterrence, rather than punishment or retribution, should be primary, and that the gap can be better filled through tort law, which is well equipped to cover situations involving negligence or even strict liability. But there are few existing venues in which to pursue this option.

So when we face the potential for unintentional harms and human-machine interaction in complex organizational systems, what I am calling administrative accountability could be a better fit than criminal responsibility. To be sure, if there is intentional wrongdoing on the part of a human, then criminal responsibility is entirely appropriate. But rather than tinkering with criminal doctrines, we may be better served in many situations by invoking mechanisms of administrative accountability, and these mechanisms have the virtue of already existing, although they certainly could be improved upon. By administrative accountability I am referring to existing mechanisms, both military and civilian, that are often used for accidents and other types of problems. Their goal is not individual punishment, although they can involve financial and other penalties for individuals, such as loss of rank; they can also be broader in remedial scope, for example leading to monetary payments to those who have been harmed, and forward-looking, focusing on organizational reforms for the future. And because they do not take away individuals' liberty, they can be much more flexible in their procedures.
their procedures. In the United States, examples include commander's inquiries, fact-finding investigations pursuant to Army Regulation 15-6, advisory committee task forces, and agency inspector general inquiries, among others. Other countries, such as the United Kingdom and Australia, have comparable procedures. I believe they hold real promise for addressing harms caused by autonomous systems. Because they do not lead to criminal penalties, they do not carry the intent problem of criminal law, and sanctions can be imposed for negligent behavior or even in strict liability circumstances. Unlike tort mechanisms in international law, for which there are limited venues, they already exist; they can include recommendations for prospective, organizational reforms; and they can be quite flexible in their procedures. Now, I do not want to suggest that they are necessarily a panacea. Certainly, because they are largely within the executive branch, guarantees of independence, impartiality, and transparency are critical, and reforms may be needed to better protect these values. For example, some experts, such as Eugene Fidell, have criticized the 15-6 process for its lack of transparency, and there are certainly some situations in which the independence and impartiality of these administrative accountability systems could improve as well. With respect to civilian options, for example, more work could be done to protect inspectors general. I think important future work could look at how to safeguard these values in administrative processes. But the bottom line is that these are critical accountability tools that may be particularly well suited to the harms caused by LAWS, and they deserve to be included in the accountability debate moving forward. Thank you so much.

Wonderful, thank you so much to Professor Dickinson. We'll now turn to our next panelist, Professor Li Chiang, an associate professor of law and the director of the Military Law Institute at the China University of Political Science and Law. He is also deputy secretary general of the Beijing Military Law Society and a member of an expert panel on lawfare for the Chinese PLA Air Force, and, I am happy to say, recently a visiting scholar at our Paul Tsai China Center at Yale. So, Professor Li, the floor is yours.

Good afternoon, everyone. My name is Li Chiang; I come from the Military Law Institute of the China University of Political Science and Law. It is my great honor to be invited to attend such a great event organized by the Naval War College. In this session we are talking about the issue of accountability for the misuse of lethal autonomous weapon systems. At the beginning I would like to set a limit on the scope of my presentation. I will mainly focus on the misuse of autonomous weapons systems in the context of armed conflict, which means the misuse of such weapons systems in peacetime, for example in law enforcement operations or in the jus ad bellum context, is not included. I will also mainly focus on accountability at the international level rather than the domestic level, because the latter varies across countries, which would make this issue more complicated, and I won't do that in this presentation; but at the end I'll talk a little bit about the case of China. At the international level, especially in the jus in bello context, accountability has two forms: state responsibility and individual criminal responsibility. So is there any accountability gap for lethal autonomous weapon systems? The answers might be different in different situations. As
for state responsibility, undoubtedly states shall be responsible only for internationally wrongful acts that can be attributed to them. In a situation in which the armed forces employed autonomous weapons systems in wartime, from my perspective, the state to which they belong shall be responsible; basically, there is no accountability gap in such a situation. Additional Protocol I of 1977 establishes a very strict responsibility for its state parties: states shall be responsible for all acts committed by persons forming part of their armed forces. Even if the wrongful acts were perpetrated by the autonomous weapons systems themselves, they were deployed and employed by the armed forces as weapons, so those acts should and shall be regarded as committed by members of the armed forces, regardless of their intent. Unlike individual criminal responsibility, intent has never been a constitutive element of state responsibility. This rule should also apply to states not party to Additional Protocol I, such as the United States, because the 1907 Hague Convention IV and the 1949 Geneva Conventions contain similar provisions, and they are believed to have become customary international law.

But international humanitarian law is not a self-contained body of law; it deals with only one aspect of this issue. If we refer to the International Law Commission's 2001 Articles on State Responsibility, other cases might also be relevant, including but not limited to: the acts of persons or entities exercising elements of governmental authority, for example acts committed by private military and security companies; the acts of a person or group directed or controlled by a state, for example acts committed by non-state armed groups; acts carried out in the absence or default of the official authorities, such as the levée en masse; and acts acknowledged and adopted by a state as its own. I'll take the first one as an example. Like the armed forces as a state organ, this case requires that the person or entity exercising elements of governmental authority is acting in that capacity in the particular instance. If such a person or group is acting in that capacity when activating an autonomous weapon system but is disqualified afterward, while that weapon is still functioning, it seems difficult to say indisputably that the state to which they belong shall be responsible for the wrongful acts committed by that weapon. I did not say accountability is impossible in such a situation, but in fact an accountability gap may exist. In the other cases I mentioned here, I think things are very similar.

As for individual criminal responsibility, in most cases the accountability gap indeed exists. It is a well-established rule that individuals who commit war crimes shall be responsible for those serious offenses. Logically, whether war crimes were committed through weapons or weapon systems with or without human control, there is no substantive difference in the legal consequences. But actually, if human judgment were replaced by an algorithm, accountability would be very challenging. The use of autonomous weapon systems will break the mode of liability centered on human operators and commanders, but modern criminal justice systems focus only on humans rather than machines. Now we have two options. The first is that autonomous weapon systems should be regarded as fictitious persons and held responsible for war crimes committed by them; this
option has been objected to by many scholars and experts. The second is that real humans will be responsible, because autonomous weapon systems are always some kind of weapon, regardless of how intelligent they are. In such a situation, many people related to autonomous weapons systems might be involved, including designers, programmers, manufacturers, and end users. If we want them to be responsible for illegal acts committed by autonomous weapon systems without any human intervention, the key element is absent: how do we prove that those people had the requisite intent or awareness of those crimes? To what extent should human operators and commanders be aware of the circumstances in which serious violations of international humanitarian law will be committed by those weapons systems? We do not yet have an answer on this issue. But at the domestic level things might be a little better: in many countries, unlike at the international level, perpetrators can be held accountable for crimes of negligence. For example, in China the penal code provides for the crime of supplying substandard weapons, equipment, or military installations to the armed forces, on purpose or by negligence, which could possibly hold people involved in the programming and manufacturing of autonomous weapon systems accountable. It also provides for the crime of causing accidents with weapons and equipment, and the crime of changing the use of weapons and equipment without authorization, which could possibly hold the end users accountable. Even though those rules are not specific to autonomous weapon systems, they do apply to the use of such weapons, even if they are far from sufficient. So in summary, I would like to say that there are many gaps in the accountability of humans when using autonomous weapon systems, and it is not easy to resolve this problem. My suggestion is that more studies, more discussions, and more international cooperation will be necessary. Because of the time limit, I will stop here. Thank you very much.

Terrific. So I'd like to invite both panelists to turn on their cameras. There they are. And in typical fashion, I will exercise the moderator's prerogative to ask each of you the first question before we open it up to a broader conversation. Those presentations were particularly helpful in breaking down the dimensions along which we might assign accountability: individual versus state accountability, and then domestic versus international rules and mechanisms of accountability. Without getting into the details of proposals like Professor Dickinson's very useful ideas about how administrative mechanisms can potentially help to bridge the gap, to the extent there is a gap, in terms of individual accountability, I would like to take a step back for a second and try to link up this conversation with the previous panel. Specifically, I want to ask about the idea of taking a human-centric approach to the development and deployment of AI-enabled weapons systems. Accountability can mean different things, and we often think and talk about ex post accountability, which is to say that after an accident or incident has occurred, we look backward and try to assess who, or which institutions, ought to be held accountable for certain actions taken. But I'm curious if each of you could speak a little to your thoughts about how to conceptualize, and ideally institutionalize, accountability throughout
the life cycle of these weapons systems, from design to acquisition to deployment and, ultimately, target engagement. Do you have thoughts about how we can better ensure and embed accountability mechanisms at all of these stages, not just after a kinetic action has taken place? And before I turn to each of you for an answer, I just want to acknowledge that it is currently four o'clock in the morning in China, where Professor Li is joining us from, so we owe him a special debt of gratitude for being up in the middle of the night, very early his time, to join this important discussion. So, either of you, whoever would like to take that question first, I would be curious to get your thoughts.

I'm happy to do that. It's terrific to be here with everyone. I think it's a great question, because when we speak about accountability, oftentimes we're speaking about post hoc accountability, whether it's individual responsibility or some other form of responsibility. But as you point out, we can broaden our conception of accountability to include ex ante measures. Sometimes we speak about this as managerial accountability, and there are other terms for it as well. I think it is important to include this in the discussion, and it links very nicely with the last panel, because of course averting harms and providing for this type of accountability can happen before any weapons system is deployed. And I think the key is interdisciplinary teams, whether you're looking at the weapons review process or, earlier, at the design process; this is critical, including lawyers along with technologists and other actors as well.

Hi, Rob, it's okay for me; I think I just need a moment to clear my head. It's a great question, and actually it's not easy to answer. From my perspective, when we talk about accountability across the life cycle of autonomous weapons systems, the designers, programmers, and manufacturers must be involved in this process, but we need to determine what kind of responsibility they will take if we want them to be accountable. For those natural persons, I think individual criminal responsibility may be the better way to resolve this problem, but actually, in the existing legal system, whether at the international level or the domestic level, it is not difficult to make them criminally responsible for illegal acts committed by autonomous weapons systems; it is something like civil liability for product quality. If we talk about military equipment or weapons, for example, in the penal code of China we have some similar provisions to deal with this problem, but they are not specific to this kind of situation. I think they just provide a possibility of applying those provisions to resolve this problem, but I'm not sure that will be the best way to resolve it. So from my perspective, I really believe that state responsibility may be the appropriate way, and that criminal responsibility or other forms of responsibility, such as administrative responsibility, will be supplements to it, Rob.

So that is a nice transition to perhaps press Professor Dickinson a bit further on your views about the limits of criminal responsibility, because it seems there may be some distance between your respective views on the utility of the criminal law in
terms of a vehicle for ensuring that there is human responsibility here, that we are taking full and adequate account of legal requirements such as, as is pointed out in the audience Q&A, the requirement under Article 87 of Additional Protocol I for military commanders to prevent breaches of the law of armed conflict. So under the kind of administrative mechanism that you propose, Professor Dickinson, how would you think about ensuring that obligations like that are adequately fulfilled? I suppose one could argue that any type of administrative accountability system is simply going to be too limited to impose the incentives necessary to fulfill the intent of a legal requirement like that. What is your view on that, and are there ways in which the idea for administrative accountability that you have in mind can account for the incentive gap that I'm hypothesizing might exist?

Great, thank you. It's a good question. So first of all, I just want to emphasize that I'm not suggesting we should jettison criminal accountability. I think it's a very important part of the law of armed conflict and a very important tool for setting incentives, but also for imposing punishment where appropriate. I think there are tweaks that can be made to criminal doctrines, as I mentioned in the talk. Peter Margulies, who is a participant, has spoken and written, for example, about adjusting the doctrine of command responsibility, which itself can incorporate the commander's responsibility; so you can make tweaks to the criminal law. Jens Ohlin has also written about this; he argues we should focus less on intent and more on control. I do have concerns when we relax the intent requirements so much in a criminal context, because I do think it runs up against very important due process norms, and so I think that some of these other options, like administrative accountability, should be explored even more than they already are. With respect to administrative accountability, one of its virtues is that it is quite flexible in its processes, and so ideas about the commander's responsibility can be incorporated into administrative mechanisms in thinking about who should bear responsibility and what the consequences should be. For example, there are individualized penalties available in some administrative accountability mechanisms, just not criminal punishment. Those consequences are often not acknowledged as serious, and truly they are less serious than criminal punishment, but they can have serious consequences for individuals: financial penalties, loss of rank, and so on. But administrative accountability also allows us to address organizational problems, problems of complex systems, that individual criminal responsibility isn't very good at addressing.

Yes, I do agree that we shouldn't limit our vision to criminal responsibility only. But in my opinion, when we talk about violations of international humanitarian law during armed conflict, we are talking about a kind of very serious crime, so if we rely only on administrative responsibility or something like that, I don't think it is strong enough to suppress violations of international humanitarian law rules. That's why I
just focus on state responsibility and criminal responsibility, because if there is no such kind of punishment, I don't think the obligations will be ensured to be implemented by the state or the armed forces.

Right, thanks to you both. One of the questions that I've been curious about in this context, and that I think comes out in some of the questions being posed in the audience Q&A, can, I guess, be boiled down to the question of what's really new here. So Professor Ashley Deeks asks: recognizing that fully autonomous weapons systems, emphasis on fully autonomous, have some specific features that partially autonomous weapons systems do not, are there useful historical precedents that can help us think about this problem? For example, how did the U.S. deal with the Patriot air defense incident that Professor Dickinson mentioned? Are there other cases where high-tech weapons have acted unpredictably and produced serious harms? And also in the Q&A, Captain Bo Watkins asks: isn't this kind of analogous to the situation we have with the use of mines? A mine can detonate without human input, and an operator can still be held accountable for certain failures or actions taken. So how do each of you think about historical and legal precedents in this space? Are they useful, and where do they fall short, in your judgment?

Actually, I don't think the fully autonomous weapons system has become real, so it is brand new for all of us. And in the incident mentioned by Professor Dickinson in the previous session, I think the fault or mistake was made by humans, not by the autonomous weapons systems themselves. So here, if we talk about illegal acts made only by autonomous weapons systems without any human intervention, there will be some accountability gap, and I don't think we have useful historical precedents; maybe we have to try to think about new ways to resolve this problem.

I would just say I think the historical analogies are helpful, even if they don't go all the way to providing an answer to this accountability gap. So, to take the Patriot missile fratricide incidents that I mentioned, I think we can draw some lessons from them. There were several of what I would term administrative accountability mechanisms that were used to deal with that situation. U.S. Central Command conducted a fact-finding investigation, and in that case the principal investigator, General David Edgington of the Air Force, concluded that there were some problems with how the system worked but that no individual human acted criminally, negligently, or recklessly, and therefore did not recommend individual discipline; and CENTCOM accepted the recommendations. There was also a separate Defense Science Board task force that found problems in how the system operated and recommended significant changes to how the system worked. Another analogy, not from autonomous weapons systems, is the mistaken 2015 strike on the Médecins Sans Frontières hospital in Kunduz by the U.S.
Air Force, after which there was an Army 15-6 investigation. That investigation concluded that in this case there actually were individuals who violated the law, if not criminally, but that the biggest issue was the cascade of failures within the complex organizational system. There, individuals were disciplined, but not criminally, and in addition there were significant organizational, or systems, reforms that were made. So I think this is a helpful analogy. It may not be a perfect system, but where you do not have individual intentional harm, or even negligent or reckless harm, there are options for other forms of penalties and accountability. That being said, many people criticized these inquiries for failing to be sufficiently transparent and sufficiently independent, and I think one could look at how to make such inquiries more independent and transparent as well. So I think these analogies are helpful.

That's great, thanks so much. We're getting close to the end of our session, so I want to perhaps combine a couple of questions. Iben Yde from the Royal Danish Defence College asks both of you for your thoughts on Professor Rebecca Crootof's suggestion of introducing the concept of war torts to ensure that victims of the harms caused by the use of AI-enabled systems are properly compensated. Professor Dickinson, you alluded to this in your remarks, and you indicated that, yes, perhaps that could be a useful avenue but for the fact, at least as I understood what you were saying, and correct me if any of that is wrong, but for the fact that the venues to pursue those claims are currently lacking, or at least as a practical matter are quite limited in terms of accessibility. And that leads me to a question I want to put to both of you, both specifically on this question of torts as a means of accountability and more generally: beyond some of the innovations to administrative systems that Professor Dickinson has outlined, are there new international institutions that need to be established here, or new mechanisms, either domestically or multilaterally, that we ought to be thinking about in this context to help resolve some of the tensions that you've identified? Or is it simply a matter of better utilizing, updating, and ensuring access to institutions and mechanisms that already exist? Do you have thoughts on either of those questions?

Well, I'll just say really quickly, I think tort law is interesting because of course it's designed around dealing with, in many cases, non-intentional or negligent harms and even, in some cases, strict liability situations. I think Professor Crootof's recommendation is interesting. I do, as you said, think that one of the big challenges at the international level is the lack of venues. At the domestic level, in the United States, we have significant immunities on tort claims involving military action, immunities that we've put in place for a variety of reasons, good reasons some would say, though some would say those immunities should be reduced. So I think it's not super practical in the near or medium term. I think it's a little more practical to focus on existing mechanisms, but at the same time I think it's worth thinking about the horizon, the distant future. As others have pointed
out, creating new systems is time-consuming and hard, but that doesn't necessarily mean we shouldn't think about it. In the near and medium term, though, I think it's more effective to focus on the existing institutions that we have.

Great. Li, you get the last word here before we conclude.

I would like to clarify my point. I did mean that at the domestic level there are really many forms of responsibility we can invoke to hold somebody accountable, but if we look at the international level, we only have the two options: state responsibility and criminal responsibility. And even if we talk about compensation, about war torts, we have to establish this responsibility first, and then we can talk about the compensation. That's my point.

Great, I think that's a terrific point to end on, and I think it's wonderful that we're having these conversations. I hope that this is just a starting point; it's certainly not the finish line. But it's wonderful to connect on these questions, and hopefully this can stimulate some further constructive thinking and collaboration going forward. So I thank both of you for approaching the conversation in that spirit and for your very insightful points raised throughout the hour. Lieutenant Colonel Cherry, back to you.

Thank you very much, Robert, and thank you to Laura and Li Chiang for your presentations. And Li Chiang, you had to get up in the three o'clock hour to do this, which is very United States Marine Corps of you; we're very proud of you, and thank you. But thank you to all three of you for your participation, and we will turn to the future in our next panel in 10 minutes, so we'll see everybody back here at 4:10, or 16:10. Thank you very much.

Oh, welcome back for our last session of the day. Thank you all for joining us and hanging in there so far. I will say that if we were located in Newport, this panel, which is cosponsored by the Lieber Institute at the United States Military Academy, West Point, would be standing between you and a lovely evening in Newport; but since this is on Zoom, and also it's going to be like 26 degrees tonight here, we don't have to worry about that. Our moderator for Futures Command and Technology on the Battlefield is Sasha Radin. Sasha is the director of research and publications for the Lieber Institute, and I turn it over to you, Sasha.

Thank you so much. Thanks, Lieutenant Colonel Cherry, and thanks also to Major Tinkler and Professor Kraska and the whole Stockton Center team for organizing this; it's great. And hi, everybody who's watching; I hope you're still staying with us in this last panel of the day. It's my pleasure to introduce the panel on Futures Command and Technology on the Battlefield, with Colonel Stephanie Ahern and Professor Chris Jenks. I'm especially excited about this panel because it marks the beginning of Lieber's partnership with Army Futures Command, and so I want to take just a few minutes, before introducing the panel itself, to talk about this partnership. For those of you who don't know, the Lieber Institute for Law and Land Warfare is situated at West Point in the law department. It's an American-centric think tank, and we look at the law of armed conflict, that's our thing, and some related areas as well. We were founded a few years ago, in 2016, with the intention of bringing together military expertise and academic expertise on
the study of the law of armed conflict, because we feel really strongly that in order for the law to keep its relevance as we move forward into future conflicts, we have to have the practitioners in the room, and we also have to have academics with deep expertise and time to think about these things in the room as well. So we see ourselves as a bridge between these communities, and others as well, as we'll talk about: a place for them to come together, discuss their approaches, and inform each other. And this is one of the reasons why we're so excited to work with Army Futures Command, because we feel this is exactly what we'll be doing with Futures Command. As Futures Command is thinking through what warfare will look like and what the Army will need in 2035 and beyond, and I'll leave that to Colonel Ahern to talk more about, we at the Lieber Institute will be helping think through the legal implications with our internal experts, our senior fellows and board members, and our wider network of experts. And, as has been mentioned throughout the day, we especially think it's important that these issues are thought about in the early stages, at the onset; it's really important to integrate the law, as well as other considerations, into the planning discussions. So today we're thrilled to have two of the people who will be heavily involved in this effort here with us. We have Colonel Stephanie Ahern, who is director of concepts at the Futures and Concepts Center at Army Futures Command, a bit of a mouthful, I hope I got it right, and Professor Chris Jenks, who is an associate professor at the Dedman School of Law at SMU and, importantly for our purposes here, also one of Lieber's senior fellows who will be heavily involved in this effort. They both have long bios and a lot of expertise in their fields; I encourage you to look online at the event webpage for their full bios. But for now, in the interest of time, I'll turn it over to the panelists. This panel will, as the title says, focus on Army Futures Command, so it will be a bit different from earlier panels today. Colonel Ahern will first give a brief overview of what Army Futures Command is and does, and then she'll turn to some of the things we might expect in future warfare. After that, Professor Jenks will raise some legal issues, or perhaps legal questions, we should be thinking about, and then we'll have time for audience questions. We really encourage you to submit your questions throughout; you have two really great experts here, and it's such an opportunity to ask things. So without further ado, I'll turn it over to you, Colonel Ahern.

Thank you. Sorry, can you hear me okay? Perfect. So first of all, I want to thank you so much, Sasha, and Professor Jenks; it is such an honor to be able to share this discussion with you virtually. It really is a critical partnership for us at the Futures and Concepts Center and Army Futures Command to be a part of this partnership with the Lieber Institute as well. So, very quickly, I'll start with Army Futures Command and then the task that General Murray, the commanding general of Army Futures Command, has given those within my organization. Very quickly, on Army Futures Command: it is the first four-star-level command that the Army has stood up since 1973. It was established in 2018 in Austin, Texas, so that's where I'm at now. There are four main
missions. Broadly, the command is looking at how we consolidate all of the Army's modernization efforts together, looking at how we fight, what we fight with, and who we are. So the four main missions, the tasks, that AFC has: the first is describing the future operational environment. We have a threat section in the organization, under Mr. Mournston, that looks at what the defense and other trends are, from an intelligence perspective, that we need to be designing the future force to address. The second core focus is to develop and deliver future concepts: as we're thinking about the ideas of how the Army could operate in the future, from the operational level on down, how should the Army be operating in this future environment? The third main effort that Army Futures Command has is to develop and deliver future force designs. We don't just stay at that broad level of ideas; once we have the concept, we do experimentation, modeling, simulations, wargaming, and analysis to refine it to a much more precise level, so that when we pass it off to other parts of the Army, they know specifically what we need that future force to be able to do. And then, finally, to support the delivery of modernization solutions. Part of the reason Army Futures Command exists is to make sure that when we have an idea about something the warfighter needs, we much more quickly get it from that idea to an actual solution in the warfighter's hands, making sure, from the modernization and acquisition side, that we are really helping streamline and bring those parts together. So it's much more than just the tech and materiel, but that materiel aspect is absolutely critical in what we do.

I'm the director of concepts; we call it DOC. What General Murray has asked my team to do, working with many different parts of Army Futures Command, is to develop the Army's next operational concept for 2035: describing, if we were to have a battle with a near-peer adversary in 2035, how we could operate or fight, how we could be equipped, and how we could be organized. We still firmly believe that war is going to be a clash of wills, and that if countries decide to fight, death and destruction are probably likely. How could we, in that high-intensity conflict, prosecute things in a way that lets us win on the battlefield? But even though the nature of warfare is most likely going to stay constant, it is still governed by rules; it is not Hobbesian chaos. And that's why, from the law of armed conflict perspective, whether it's 100 years ago, today, or in the future, those rules about how we prosecute warfare absolutely matter to us.

Our G-2, the intelligence part of Army Futures Command, recently developed a future operational environment for 2035 to 2050. As many of you with a political science background know, the unipolar moment is probably gone, and so what this document describes is alternative futures of what could be, very similar to Global Trends 2035 and the Joint Operational Environment 2040. It helps explain, as we progress into the future, how international power changes, but also, from a technology perspective, and I think really relevant to this discussion today, how potential revolutionary technologies could change what the future battlefield could be. And so the responsibility that we have in
DOC is really based on that future operational environment that we expect: what does it mean, for those of us focused on land power, for how the Army could fight, what we could fight with, and how we could organize? One of the main approaches we within Army Futures Command are taking is what we call the Team Ignite approach, which is really, from the very beginning, getting those of us who do concepts working with the science and technology experts, working with the threat experts, and now, thankfully, with this partnership, also bringing in the legal perspective, the think tanks, the academia. How do we get as many of those ideas up front, so that we're not writing science fiction, and so that the scientists, who are able to see what could be, are able to bring together some of these solutions from the beginning? So we really appreciate being able to be a part of this discussion as we take on this concept right now.

Sorry, Sasha, I can't hear you. Sorry. Thanks a lot, and now maybe you can talk about some of the things we might expect, even though we don't know what will be.

Yep, absolutely, and thank you. There is much that we don't know about future warfare, but there are maybe six things, and they're not in any order, that, based on the experimentation we've done and the outreach with S&T, with think tanks, and with academia, we are expecting. The first is ubiquitous sensors. We do not expect to have ubiquitous information or ubiquitous understanding, but the phone that I'm talking on right now is itself a sensor: it can see, it can tell you where you are in the world, and some can sense how fast the wind is blowing or whether there are electromagnetic signals going through. A sensor is able to give you some kind of information. We're not going to have perfect awareness, but from the military perspective, our ability to hide and our ability to surprise are critical to many of the things we're going to do, and so this matters for us, and also for our adversaries; what does that mean for the future? The second is the increasing role of information. Part of that is the digits, the data moving across the battlefield, but it's also the role of ideas: the disinformation, the misinformation, the social media, the deepfakes. I think Peter Singer, in his book LikeWar, shows not only what can be done from an information warfare perspective, but also how, when you are actually fighting on the battlefield, that role of information really complicates things. We don't think it's going to get easier. The third, given these sensors and the role of information, is that our weapons and equipment are going to need greater range, or the ability to operate at distance, to go faster, to have greater awareness, and to have better lethality, and in most cases precision. In the current counterinsurgency and counterterrorism fights, we are often operating within and among the people; if we're going to be facing a great-power adversary, that may not necessarily be the case. So how do we make sure that our systems are designed for the threats we'll be facing? The fourth is that, in addition to the technologies we have in development today, there is the role of new
disruptive technologies: artificial intelligence and machine learning, autonomy and robotics. We fundamentally believe that, especially if those are able to work together, they could change what's possible on the modern battlefield, and so we need to make sure that we understand where the science is, where the technology could be sooner, and that we are able to use those when and where it makes sense. The fifth is that if it's dull, dangerous, or dirty, we should be using machines when possible, and much of that work by 2035 is really going to be in the dull category: your administration, your training, your logistics. If we could just do predictive maintenance using AI; if, from a training perspective, a soldier shows up at basic training and for the rest of their time that training record goes with them; there are a lot of fundamental, uninteresting things that AI would be able to help with. But when you look at the dangerous category, a first principle is that first contact with an adversary should probably be with a non-human, and what does that mean? The other thing, though, is that as we're trying to figure out AI, autonomy, and robotics, it's not just about what humans are able to do right now. If any of you has been watching Star Wars or The Mandalorian, where you have vehicles that are able to hover off a cliff and not crash, how could that possibly change the way we're able to operate? And again, we're trying to make sure this isn't science fiction; we're projecting technology to 2035. And the last thing, and I appreciate the questions that have been coming in, is what humans must do. We in America fiercely believe that humans absolutely must be on the loop when we're prosecuting lethal force. In addition to accountability, we firmly believe that human-machine teaming benefits from soldiers' creativity: they are able to understand the context, they are able to adapt to the unexpected, and, frankly, they are unpredictable, which, when you're facing an adversary, is actually a benefit to us. However, there are things, such as swarms and missile barrages, where, on the protection side, and I think in previous discussions with Professor Jenks about systems defending against incoming missiles, some of those things may need automatic use of autonomy. But just like we have phones that protect us against spam and car pistons that fire on their own, we need to be deliberate; we need to think up front about what makes sense to ensure that humans are helping inform these decisions. So I just wanted to finish with this: 2035 is not going to bring Terminators, and there is real risk for any country that wants to defer these decisions to machines. Really, the role of the law of armed conflict, and of the Lieber Institute, is to make sure that we're thinking through the legal implications of those large-scale conflicts that could happen in the future, something we haven't seen for probably 30 years: what are the laws that we need to develop, versus where we actually just need to clarify law versus policy? We are so appreciative of this partnership, which lets us think from the very beginning about what those implications are. Thank you very
much. Professor Jenks, over to you.

Thanks, Sasha, and thank you, Colonel Ahern, and thanks also to Major Tinkler, Lieutenant Colonel Cherry, and the entire Stockton Center for organizing this event and including a panel on Army Futures Command, though, to be fair, it's perhaps fitting, as in the very near future Army will commandingly dominate Navy in football; so I appreciate that, Lieutenant Colonel Cherry. I want to briefly highlight some law of armed conflict issues or questions based on Colonel Ahern's remarks, but before doing so, I want to clarify the framework that I believe Army Futures Command is operating under and by which those legal issues arise. As Paul Scharre mentioned in the opening keynote, the terms artificial intelligence, or AI, and autonomy are subject to varied understandings. I certainly am not venturing into those definitional black holes, but I want to suggest that perhaps a more helpful way to think of them is as technological descriptors. As we just heard from Colonel Ahern, the Army's technology transformation is an evolving process, and that serves as a reminder that we shouldn't think of technology in either static or binary terms, that a system or machine either is AI or autonomous or it's not. Rather, I think the Defense Science Board's explanation of the technology as reflecting, quote, a capability of the larger system enabled by the integration of human and machine abilities is more useful. That entails assessing what humans and machines are capable of doing, along with preferences as to which entity performs which tasks, and I think that's reflected in what Colonel Ahern just mentioned about dull, dirty, and dangerous tasks being performed by machines when possible, while preserving a role for human judgment in the application of force. A lot of attention is paid to technology and the use of force, but many, even a majority, of the applications, as we just learned, really are going to be in administrative, training, and logistics functions. Those non-use-of-force applications are still subject to the law of armed conflict in several ways, but I just want to focus on one: the constant care obligation from Article 57 of Additional Protocol I to the 1949 Geneva Conventions. The constant care obligation refers to the requirement that, quote, in the conduct of military operations, constant care shall be taken to spare the civilian population, civilians, and civilian objects. Particularly in the area of driverless vehicles, whether ground or air, it will be interesting to see how understandings of the constant care obligation evolve. Earlier today we heard speakers discussing how the law of armed conflict does not prohibit the use of autonomy. Within the next 10 to 20 years, which is the time horizon for most vehicles in the U.S.
anyway, to be driverless, will driverless technology become so ubiquitous that the constant care obligation will require first-world countries to use driverless vehicles in and around the battlefield? Now, in terms of the use of force, I think it's important to remember the law of war imposes obligations on persons, not weapons. Machines lack agency, don't have legal personality, and cannot assume legal obligations. I also want to stress the danger of conflating weapons that may be per se illegal with the unlawful use of weapons. We've heard about some technological challenges involving machines perceiving context; maybe the technology will overcome those challenges and maybe it will not. But even where there are challenges or limitations, say, on the ability to distinguish between military objectives and civilian objects, that doesn't mean the weapon is per se illegal; rather, where and how that system is used would need to be considered. I would just commend Mike Schmitt and Jeff Thurnher's Harvard National Security Journal article "Out of the Loop" to anyone who's interested in more on that discussion. As a result, what I think we should expect here in the U.S. is the development of AI-enabled systems inversely proportional to the possibility of an untoward event involving civilians; that means the Navy developing subsurface systems, and the Air Force high-altitude systems, before the Army develops anything other than defensive systems. Colonel Ahern, in our discussions leading up to today's event, you were quite clear that neither your office, Future Concepts, nor Army Futures Command is the proverbial good idea fairy, utterly divorced from reality, and you stressed that feasibility is an important criterion; to that end, you have resisted my attempts at promoting turning the 60-ton M1 Abrams tank into a hover tank. But I want to end by noting that an earlier panel suggested that the role of at least some lawyers may need to shift to be more front-loaded, whether in the acquisition, fielding, or use of weapons and systems. So, given the need for future concepts to be feasible, will you in Futures Command be considering where and when lawyers, civilian and military, play a role? And I'll turn it back over to Sasha or Colonel Ahern.

Yeah, go ahead, Colonel Ahern.

I think the reason General Murray was so supportive of having Army Futures Command partner with the Lieber Institute was that when he visited West Point, this opportunity came up and he took it. As we in the directorate of concepts have started to work on what the future concept should be, some of our initial deep dives have been with lawyers and ethicists, and I think the formalized partnership that we have with the Lieber Institute reflects exactly why we can't afford the previous, rigidly sequential approach of coming up with an idea and only eventually working through the widget. So having this partnership with the legal community, the law of armed conflict community, matters; those rules are at the core of who we are as Americans and what's expected of us.

Thank you, and that opens up our time for questions, and it relates to a question posed by Professor Jensen. Professor Jensen, I don't know if that answered your full question, but he had asked: how do you incorporate legal advice into your work, and do you think there's sufficient legal input into your command
to ensure that weapons development complies with the law of armed conflict? So I think that relates to what Professor Jenks just asked; I don't know, Colonel Ahern, if you want to add anything to it.

Absolutely. Colonel Jose Corat is the staff judge advocate for General Murray and is one of General Murray's key trusted agents, but I think we are trying to make sure it's not only those of us within the command. How do we make sure, as we are thinking across these systems and across time, that we are bringing that input in at the front end of our efforts? So I would say that we are welcoming the partnerships and are eager to make sure that this is not just something where we come back a year from now, remember this good discussion that we had, and wonder what happened in the meantime.

Thank you. We've focused a lot today, on other panels and here, on autonomy, robotics, and AI, but that's not all you're doing, as you said earlier, and so we have a question here that says: to what extent do you think biotechnology could play a role in future warfare? Over to you.

I think it's both. On the very productive side, how are you making sure that our soldiers are more resilient, so that if something happens to them they can recover much more quickly, the wound healing, the health side? But I think what we're seeing today, and what we've gone through for the past nine months, also shows the tremendous implications that biological threats can have, and it's not just the great powers that are paying attention to this. So there is a lot of work being done across Army Futures Command, working very closely with academic partners and some of our lab partners, to make sure that we better understand what those threats are, but also what the opportunities are for resilience and maximizing human performance, again, in an American way. If we can make our soldiers able to do better, to be healthier, to be more resilient, that is a good thing we're trying to explore as well.

Thank you. And on that idea of what others might be doing, one of the questions that Professor Sassòli is asking is: do you also think about how the enemy, both state and non-state actors, will develop? And I think he means that generally, not just in biotechnology.

Thank you, and I think this was one of the things, when we were describing what humans must do, that I forgot to mention. Even though we are bound by the law of armed conflict and the principles codified in some of our founding documents, we are fully aware that some of our adversaries are not, and that doesn't mean that we're going to follow their example; for instance, just because others are willing to pillage property doesn't mean that we should. However, in order to make sure that we are much more resilient to how others might be operating in this space, we appreciate any support and understanding coming from the legal community about what that could mean. From a concept perspective, we are always trying to make sure that we can make ourselves more resilient against those who may not apply the same standards that we hold ourselves to.

Thanks, and we have another question that says: the problem with a legal analysis of any problem is that the lawyer is effectively constrained by the facts presented and the intended use. How do
you work with your legal team to get a legal opinion on these future weapons, and are you actually embedding them, like JAGs, in the wargames and then asking for an opinion?

Similar to the previous questions, I think this is part of why Colonel Corat is part of General Murray's immediate staff, and that's replicated throughout the different parts of the organization. You can't wait until you have the solution and then ask, "What do you think of it?" That's not making them part of the solution and making sure that the letter and the spirit are brought in from the beginning. But again, I think there are some challenges, in part because we've been focused on counterterrorism for 19 years; we've grown entire generations of officers that are used to focusing on a different problem set. So we are making sure that we understand where the law is and, as we're thinking through some of these new problems that technology is posing, that we're also bringing in experts who can help work through these problems from the legal side as well.

Thank you. Going back to how the U.S. has fought over the last, really, now 20 years, in fighting largely non-state actors and in counterinsurgency, there have been a number of policy choices made regarding civilian casualties. I also think that, because of the nature of the warfare we've engaged in over the last 20 years, there's been some forgetfulness about the idea of sensor-based targeting as a very real thing, one a lot of former military will identify with, having grown up on the idea that you would target based on a radar emission. So I think there is some reeducation needed, both of the military and maybe also of the public, to recognize that policy choices that, while perhaps appropriate for a counterinsurgency, may not be as appropriate or as applicable in a high-intensity conflict with a near-peer adversary.

We have a question from a panelist from a panel earlier this afternoon, and forgive me, I don't know how to pronounce your name. She's directing this to both of you: what are your thoughts on how to handle distributed decision-making in future conflicts, in order to ensure that the appropriate type and level of human-machine decisions are made? Are you considering new command and control concepts and the need for a clearer description of roles and responsibilities? I don't know who wants to take that first.

I'm happy to jump in. So, yes, the answer is absolutely. Part of the challenge with having the ability to have sensors in so many places, and you can look specifically at how the Russians have been prosecuting their warfare for a while, is that when you have large formations together, an adversary is able to quickly annihilate them, and they don't care what's in the way. So as we look at the future of how we could fight, we're not leading with the command and control or the organization, but with how we could operate, the equipment we would need, and then the organization that would let us not only maneuver but also have that command and control. So we aren't starting with the questions of how we're going to command and control across the battlefield, but that will quickly be part of this future concept that we're looking at.

No, I think
that's an interesting question, and we've been so focused on technology and the select-and-engage aspects of the use of force, but at some point, and I think this goes back to Colonel Ahern's remarks at the outset, you want to understand where technology is making decisions based on sensor inputs that get into, or cross the line into, what we think of as command and control. So where is it that we're comfortable with command and control, or command-and-control-like decisions or assessments, being made by machines or algorithms, versus where do we need the role of the human?

Yeah, that makes me think of the program you had a few weeks ago, Colonel Ahern, where somebody asked what it might mean if a commander has command but not control. We have another question that is directed to you, Colonel Ahern: has Army Futures Command begun thinking about the role of the tactical commander within this future operational concept, and what are the skills and traits that you'd want from future company commanders? For example, will he or she be expected to have more technical expertise as these technologies develop? This is from Rob; he says he's sure you're thinking about what will be asked of future Army captains and NCO leaders, as well as the responsibilities they'll have.

So there's a two-part answer on that. One, we're starting at the operational level, so how would we prosecute that war; the much more thorough analysis will be done later. That said, Army Futures Command, in direct support of General McConville's priority on people and the Secretary's priority on people, knows now that we need to have leaders who are agile, responsive, and ethical, but who also have a better appreciation of the technology they're working with. There are initiatives that Army Futures Command has already launched; one of them is a partnership with Carnegie Mellon University to make sure that we are developing leaders with the technical and data expertise that we will need. In addition, there's another initiative that will start right at the beginning of the year called the software factory, helping more people across the Army become more familiar and comfortable with data, data management, and data engineering, so that we aren't having to rely on people outside the Army to be able to move forward. But yes, and much more to come.

Thanks. We have a comment here from Pete Pedrozo: there have been a number of tests that have pitted man against machine, and in every case the machine wins. If the U.S. continues to insist that there be a person in or on the loop, but an adversary does not, won't that place us at a significant disadvantage on the battlefield, like bringing a knife to a gunfight?

I don't know if either of you has a comment on that.

I would say that Go is probably one of the examples that comes up, and the machine didn't win every time; in fact, when you pair machines with humans, usually the human-machine team will win. So I think part of this is understanding how they defined success; in the Go project, if they won by one point, they still won. And I think, beyond just who we are as Americans, we actually think that being very deliberate up front matters, about what those things are that humans are just naturally better at,
Sasha, on that, I guess I should clarify that I'm speaking in my individual capacity, and my comments don't reflect Army Futures Command, the Lieber Institute, or the Army. As to Pete's question, I think one of the reasons the US has been steadfast in resisting the push to adopt the term "control" for the role of the human in the application of force, so that the AI principles use the term "governable," and we talk in the United Nations context about exercising judgment in the application of force, is that it's an implicit recognition that machines are performing in ways where we're kidding ourselves if we use the term "control." Just look at what the Patriot system is doing at any given time. We need humans with a role such that they can turn off, basically reboot, a system if there are untoward events. But I think the US and a number of countries have been wise to resist the push towards accepting "control" as the terminology for that role, and rather to use the term "judgment."

One more thing on that. Colonel John Boyd came up with the OODA loop, observe, orient, decide, act, and within the Army we'll often talk about see, understand, decide, act. When you're trying to think through how decisions are made and actions taken, being able to see and understand much more quickly is helpful, to be able to take in a massive amount of information. But that decision also has to be based on context, and you can train algorithms, but you can't train them for the unexpected. So having the ability to sort through massive amounts of information coming from hundreds, if not thousands, if not millions of sensors, to make sense of it and to make decisions, especially with lethal force, is something that we feel very committed to, and there are tremendous implications if you're just allowing a machine to try to sort through information that it's not taking into context.
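A minimal sketch of the see-understand-decide-act cycle Colonel Ahern describes, again in Python and purely illustrative: the machine filters and ranks sensor reports so the human isn't drowned in noise, but any proposal to use lethal force is routed to a human for judgment. All field names, thresholds, and the top-10 cutoff are assumptions, not anything drawn from Army doctrine.

```python
# Illustrative see-understand-decide-act pipeline with a human judgment
# gate on lethal force. Hypothetical fields and thresholds throughout.
from typing import Callable, Dict, List

Report = Dict[str, object]  # one sensor report

def see(raw_reports: List[Report]) -> List[Report]:
    """Ingest raw reports from potentially thousands of sensors, dropping junk."""
    return [r for r in raw_reports if r.get("quality", 0.0) > 0.5]

def understand(reports: List[Report]) -> List[Report]:
    """Fuse and rank so only the handful of tracks that matter reach a decider."""
    ranked = sorted(reports, key=lambda r: r.get("threat_score", 0.0), reverse=True)
    return ranked[:10]

def decide(tracks: List[Report], human_judgment: Callable[[Report], bool]) -> List[Report]:
    """Non-lethal actions may proceed on machine recommendation; lethal force may not."""
    approved = []
    for t in tracks:
        if t.get("proposed_action") == "engage":
            if human_judgment(t):      # a human exercises judgment over lethal force
                approved.append(t)
        else:
            approved.append(t)         # e.g. keep observing, reposition a sensor
    return approved

def act(decisions: List[Report]) -> None:
    for d in decisions:
        print(f"executing {d['proposed_action']} on {d['track_id']}")
```

The design choice the sketch encodes is the one in the remarks above: the machine compresses the "see" and "understand" steps, while the contextual, unexpected-case judgment stays with the human at "decide."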
Colonel Ahern, if I could, and you and I have talked about this before: I started out as an infantry officer, and at Fort Benning they talked about your span of control, the idea that an infantry squad leader can appropriately and adequately supervise some number, five, six, seven soldiers. You can give them more than that, but realistically they're not effectively supervising or managing those. And I just wonder, with all these sensors and all of this great input that we're going to give to war fighters down at a very low level, is there any concern, or is that anything Futures Command is looking at, about sensor overload, in terms of how much an infantry squad leader is able to process at any given time while multitasking all this input?

Absolutely, but that's not a 2020 challenge either. When we went from typeset to the typewriter to the computer, this is something we've had to work through: how do you separate the noise from the actual, real information? I think this is part of why the training aspect matters. You don't just show up one day, they issue you the AI, and you go out and fight a war. Part of this is developing, over time, trust in what this machine, this autonomous robot, is able to do, how we are able to train it, and how we train ourselves to understand how these partners work together. So it is absolutely a concern that having so much information just turns into noise. But that said, being able to have a system to work through this helps: if you have an operational approach and you can test it millions of times, as opposed to in five war games, you'll have a better understanding of what's in the realm of the possible. Still, it is absolutely a concern, because at the end of the day, we humans are limited in what we're able to understand and the context we're putting it in.
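The point about testing an operational approach millions of times, as opposed to in five war games, is at bottom a statistical one: a handful of trials gives a noisy estimate of how an approach performs, while massed simulation narrows the spread. Here is a toy Python sketch, with an entirely made-up success probability standing in for one simulated run; nothing here models any real war game.

```python
# Why volume matters in simulation: few runs give wildly varying estimates
# of an approach's success rate; many runs converge. All numbers invented.
import random
import statistics

def simulate_one_run(success_prob: float = 0.62) -> bool:
    """Stand-in for one simulated execution of an operational approach."""
    return random.random() < success_prob

def estimate_success_rate(runs: int) -> float:
    return sum(simulate_one_run() for _ in range(runs)) / runs

random.seed(7)
few_wargames = [estimate_success_rate(5) for _ in range(10)]
many_sims = [estimate_success_rate(100_000) for _ in range(10)]

print("5-run estimates:   ", [round(x, 2) for x in few_wargames])
print("spread:", round(statistics.pstdev(few_wargames), 3))
print("100k-run estimates:", [round(x, 2) for x in many_sims])
print("spread:", round(statistics.pstdev(many_sims), 3))
```

Running this, the five-run estimates swing by tens of percentage points while the hundred-thousand-run estimates agree to two decimal places: the "realm of the possible" comes into focus only with volume.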
Thanks. We have a question from Colonel Reeves, actually a first question to both of you. First, could you please discuss how you see the proliferation of groups like the Russian Wagner Group and how they may impact future combat operations? And could Professor Jenks discuss the existing law and whether it is adequate to address this growing threat? I don't know who wants to take that first.

Thanks. Do you want to go first? I'm not sure if you're coming at it from the mercenary aspect or the private security contractor aspect of the Wagner Group, but that's obviously not a new phenomenon. And as we've seen, non-state actors and terrorist groups are all increasingly using technology that we may have thought was reserved for the state. So I don't see a particularly unique challenge vis-a-vis the fact that state-sponsored, or seemingly private, actors are operating in the battle space of another country. I think that gets almost into a jus ad bellum and internationally wrongful acts kind of doctrine, more than a specific law of armed conflict doctrine, unless you want to get into the discussion about mercenaries.

What I would add, as far as the future operational environment, the document that we had that focused on 2035 to 2050: statistically speaking, it is rare to have large-scale conflicts, and in fact one of our efforts is to deter great powers from starting a large-scale conflict, because of the immense suffering that goes along with it. But that doesn't mean that conflict is going to end, and having some of these actors probing to see what they're able to get away with is probably not going away. So what's the role of information, what's the role of being able to get to places quickly, and what's the role of being able to strengthen our allies and partners, so that when they have these challenges, one, we can very quickly understand what's happening, and two, we can reinforce them and make them more capable of handling some of these challenges, which are most likely going to get worse, not better?

Thanks. We have another question, from Mr. Eric Rischel of Army Futures Command; it's to both of you. He says: with the preliminary news regarding China's development of quantum computing, do either of you have thoughts on how that quantum leap in computing power would affect cyber warfare, information, and similar systems?

You know, we have some others, frankly, participating or in the audience who I know are much more qualified to comment on that than I am. The main thing I would say is that we are very thankful to have amazing scientists and amazing technology experts within the Army Futures Command family who are watching this very closely and helping inform us about what could be in the future, and, on the threat side, who are paying attention to what China is doing as well.

And related to Professor Jenks's point about sensor overload: does the future technological landscape potentially change the nature of command? Will it be possible for command, and all the responsibilities and authorities we think of as going with it, to reside in just one person, or will command necessarily have to be dispersed? And Professor Jenks, what implications does this have for legal principles such as command responsibility and obedience to orders?

That last part was not what I was expecting, but I would say that there are many layers of command, so it's not one commander and then everybody else waiting for one person to give the order. Depending on the type of operation, and depending on the severity of that operation, it would depend on what level we would use; we have many different echelons, in part because we're really spread out across a wide area. The one thing I'd say, and then I'll turn to Professor Jenks, is that at the individual level you are absolutely responsible for the actions that you commit. If an order is legal, ethical, and moral, then you should carry it out; if it's not, you are personally responsible. And I think the law has shown that that is an individual obligation, in addition to what we are doing at the unit level. But Professor Jenks, let me turn it over to you.

I mean, part of the challenge is that we're talking about notional, "what if" kinds of systems that may, or frankly may not, be developed at some point in the future. I would just stress that all countries have a built-in incentive to have reliable, predictable systems, systems that perform as designed. So it's always surprising to me at some of these international discussions that we've engaged in in Geneva that the premise of the discussion is the idea that a military would have developed a system that doesn't perform reliably and does things unpredictably. No military in the world wants that system, or would pay money to develop and field such a system. So I think part of the challenge is that we're just going to need to wait and see the kinds of systems and ideas that Colonel Ahern and Futures Command, the Army, and the rest of the US military come up with, recognizing that one of the strengths of the law of armed conflict, and one of the reasons it's still as relevant and applicable today in 2020, is that the answer to so many law of armed conflict questions is predicated on contextual reasonableness. So I think, in the end, that will be the answer to a lot of questions about the compliance or functionality of some of these future systems under the law of armed conflict.
We're almost out of time, and there are still a number of really interesting questions. Given the time, I'll just end with one that might allow a quick answer, maybe from both of you. Charlie Dunlap asks: is there any prospect that AI and machine learning could somehow supplement or even replace the human legal advisor? Do we have an AI JAG?

I don't see that. I think the challenge, and we had Mike Meyer moderating a panel earlier, I think a more near-term issue, is how you operationally test an AI-enabled weapon system. It's one thing if you've got a new rifle round or a new artillery round: you can fire that however many thousand times and have an idea of how it's going to perform; that's really just a function of ballistics and physics. But with an AI system that's designed to, in effect, expect the unexpected and react to frankly limitless scenarios and environments, how could you test such a system, and what role might simulations play in that testing? So I think, to me, the real issue is whether there is a way we could do legal reviews of AI-enabled weapons, but that's a question that I think is still some ways off.

I don't think any commander would be supportive of having a machine provide him hard legal advice. Helping inform their legal advice, maybe, but not as a substitute. No.

Thanks, and I'm sorry for the other questions that we don't have time to answer. I want to thank our panelists very much for all of your input and your information, and now I'll turn it over to Kiran for his closing remarks.

Thanks, Sasha, and thanks also to Colonel Ahern and Professor Jenks for giving us a glimpse into the future and how the US Army is looking to deal with some of the challenges it is likely to pose. I'd like to thank again all the panelists and speakers today for what's been an excellent day, and all the attendees for joining us. We will reconvene at 1100 hours tomorrow morning, when we're fortunate to have a keynote presentation by the Deputy Attorney General for International Law at the Israeli Ministry of Justice, followed by three panels on cyberspace and international law. So thanks again, and we hope you can join us tomorrow.