Hey, everybody. We're going to get started. So if you have a drink, which I hope you do, and I'm jealous that I don't, please feel free to take it and grab a seat, and we'll get started in a moment. My name is Yochi Dreazen. I'm the deputy managing editor at Vox.com, where I run our foreign and national security coverage. It's a pleasure to be with you all tonight. If you're sitting in the front row, you may notice I have a black eye. I wish it were from something dramatic, like journalism in the age of Trump. It's from basketball at the age of 41, which is, if anything, slightly more dangerous. I'd like to introduce Peter Maurer, the president of the International Committee of the Red Cross, in from Geneva to talk a bit about what we'll be doing today, which is a video and then what I think will be a fascinating conversation among three very fascinating people. But first, let me turn to Peter. I have to make sure I don't read the wrong speech here. Ladies and gentlemen, colleagues, thanks for joining the ICRC and Vox Media this evening for the unveiling of "The Paradox of the Future Battlefields," a joint video collaboration between the ICRC delegation here in the U.S. and Canada and the Vox Explainer Studio. It's wonderful to be here with you at New America this evening. I'd like to personally thank Vox Media and Vox Creative for the time, the talent, and the attention to detail that you put into the creation of the video that we share with you tonight. As of 5 p.m., I have been informed, the video had already received 140,000 views on the Vox YouTube channel alone since we put it online. So congratulations. That's a testament to Vox's ability to take a complicated issue like autonomous weapons and make it relatable and interesting for curious and compassionate audiences and citizens in today's world.
I'd like to highlight that today, the 8th of May, marks World Red Cross and Red Crescent Day, an annual occasion when we celebrate the work of the Red Cross and Red Crescent Movement around the world and the impact that more than 14 million volunteers have in responding to the crises, natural disasters, and conflicts by which communities are affected. There are different forms and formats of community, and I wanted to highlight what the American Red Cross is doing to respond, for instance, here in the United States, to 64,000 disasters like home fires every year. They are currently in the middle of their annual fire-prevention campaign, which aims to reduce deaths and injuries from home fires by 25%. That's an ambitious goal that only the American Red Cross could envisage, with their reach and presence in communities across the nation. This work comes on top of lifesaving blood donations and the other important activities that the American Red Cross is undertaking, and thus it's a privilege for me, although she sits very far back here in the room, to express a special welcome to Bonnie McElveen-Hunter, along with the members of her headquarters staff also present tonight. It's wonderful to have you here, and it's wonderful to celebrate this Red Cross day and this movement together with you, and to know that you are strong partners in our response to humanitarian tragedies and crises worldwide. So thanks a lot, Bonnie, for being here with us. As I mentioned earlier, the work of the Red Cross and Red Crescent stretches across the globe, from the Americas to Africa, from Europe to the Middle East and to South Asia and the Pacific. Our movement is on the ground working to help victims of crises, natural disasters, and conflicts big and small. The reason we mark World Red Cross and Red Crescent Day on the 8th of May is that the founder of the Red Cross movement, Henri Dunant, was born exactly 190 years ago today.
So we are actually celebrating his birthday here, which of course reminds us of the origin of this important social movement: the Battle of Solferino, one of the most ferocious and most bloody battles of the 19th century. Henri Dunant, a businessman in search of business, and in search of lobbying for his business with the powerful of the world, ran into the battle and generated the idea of a social movement of national societies responding in a neutral and impartial way to the conflicts of his time. I mentioned the Battle of Solferino as one of the bloodiest, and it's maybe good to remind ourselves, when we look at today's conflicts and their impacts, of some of the dramas at the origin of our movement in the 19th century. Today's battlefield, and here I come to the bridge to what we are doing tonight, is very different: conflicts are more urban, more protracted, involving an increasing number of actors, increasingly fragmented actors on the battlefield. It's more likely to be civilians, and not soldiers, who are the victims; those who are killed and injured, who have their lives uprooted, and we see it from Yemen to Iraq to Syria to Afghanistan to many conflicts in Africa. This year I have myself visited some of these battlefields, in Syria, Iraq, the Central African Republic, Libya, and other places, and I've seen the impact of today's brutal conflicts on communities: the result of easily available weapons, of irresponsible arms transfers to even more irresponsible actors who do not implement the restraining rules of war, or, as it is known to the US public, the law of armed conflict. When we see the abuses caused by weapons today, we cannot afford for the weapons of tomorrow to go unchecked, to just let those weapons come to the battlefield without thinking about the restraining rules on weapons, as we have done for the last 155 years.
Increasingly, artificial intelligence, autonomous weapons, cyber capabilities, and advanced computing are radically reshaping how wars will be fought in the future, and how they are already partly fought on today's battlefields. It is said that the world is already in a new arms race involving the rise of lethal autonomous weapons. The requirements under international humanitarian law are relatively clear: there must be limits on autonomy, with a minimum level of human control over weapon systems and the use of force. This human control must be both meaningful and effective to ensure legal compliance, but beyond this, the ethical question should make us stop and think. Can we allow human decision-making on the use of force to be replaced by computer-controlled processes, and life-and-death decisions to be handed over to machines? That's basically the big question to which we need to respond. While there is a broad consensus today that humans should retain a role in the operation of autonomous weapons, there is a profound and urgent need for governments to agree on the type and degree of controls that are needed, and I hope that the panel will clarify some of these issues and questions related to the type and degree we need to discuss today in order to chart the way forward for rule-based warfare in the context of the use of autonomous weapons. Dear colleagues, as Henri Dunant helped us first understand, even wars must have limits. While no one wants to stand in the way of progress, we must always ensure that the responsibility for decisions to kill and destroy remains with human beings. After all, it is the commander and the combatant who understand the law of armed conflict and the consequences of violating it. I really look forward, at least at the beginning, to being with you in this conversation, and I know that many others of my colleagues are listening to the whole conversation.
I have a slightly too-tight program to be here with you for what follows, but I wanted to use my presence here in Washington to really support the work that we have done together with Vox, but also the debate and the discussions we need to chart the way forward toward that famous sentence that is a motto of ours: even wars have limits. Thanks a lot. Thanks very much, Peter. This video is the first in the collaboration between Vox Media and the Red Cross. It is up on our YouTube channel, where it can be found and watched and shared, and with that, let me get out of the way so we can all watch and share. Robots fighting wars. Science fiction? Not anymore. If machines, not humans, are making life-and-death decisions, how can wars be fought humanely and responsibly? Humanity is confronted with a grave future: the rise of autonomous weapons. Autonomous weapons are those that select and attack targets without human intervention. So after the initial launch or activation, it's the weapon system itself that self-initiates the attack. It's not science fiction at all. In fact, it's already in use. The world is in a new arms race. In just 12 countries, there are over 130 military systems that can autonomously track targets, systems that are armed. They include air defense systems that fire when an incoming projectile is detected, loitering munitions which hover in the sky searching a specific area for pre-selected categories of targets, and sentry weapons at military borders which use cameras and thermal imaging to ID human targets. It's a pretty far cry from a soldier manning a checkpoint. Militaries are not turning to robotics, and increasingly autonomous robotics, because they think they're cool. They're doing it for very good military reasons. These systems can take in greater amounts of information than a human could, make sense of it quicker than a human could, and be deployed into areas that might not be possible for a human, or might be too risky or too costly.
In theory, any remote-controlled robotic weapon in the air, on land, or at sea could be adapted to strike autonomously. And even though humans do oversee the pull of the trigger now, that could change overnight, because autonomous killing is not a technical issue. It's a legal and ethical one. We've been here before. At the beginning of the last century, tanks, air warfare, and long-range missiles felt like science fiction, but they became all too real. With their use came new challenges to applying the rules of war, which require warring parties to balance military necessity with the interests of humanity. These ideas are enshrined in international humanitarian law. In fact, it was the International Committee of the Red Cross that pushed for the creation and universal adoption of these rules, starting with the very first Geneva Convention in 1864. These rules have remained flexible enough to encompass new developments in weaponry, staying as relevant today as ever. But these laws were created by humans, for humans, to protect other humans. So can a machine follow the rules of war? Well, that's really the wrong question, because humans apply the law and machines just carry out functions. So the key issue is really that humans must keep enough control to make the legal judgments. Machines lack human cognition, judgment, and the ability to understand context. And you can see the parallels with how we deal with pets. A dog is an autonomous system. The dog bites someone. We ask: who owns that dog? Who takes responsibility for that dog? Did they train that dog to operate that way? That's why the International Committee of the Red Cross advocates that governments come together, set limits on autonomy in weapons, and ensure compliance with international humanitarian law. The good news is that the ICRC has done this work for over a century. They've navigated landmines and cluster munitions, chemical weapons and nuclear bombs.
And they know that without human control over life-and-death decisions, there will be grave consequences for civilians and combatants. That's a future no one wants to see. So let me welcome up my panelists. You may have noticed there were some brilliant people speaking in the video. Several of those brilliant people are now speaking here. So if you guys want to come up. So, thanks. My thought initially was we'll just go kind of question by question to each of my panelists. Their bios, which are impressive, are available in the handout and available online, and each one can do a better job introducing themselves than I could introducing them. What I'd like to do is ask each one a specific question and ask that you please say a bit about yourself and your work while you're answering it. With a conversation like this, I think it's useful to ground — is there a heavy reverb? Or is that just the sound of my own head? Which it may be. But I think it's useful to ground the conversation in what is actually happening, tangibly, today. The video touched on some of this, but Peter, perhaps if you want to start with the technology: what can be done today autonomously, and what cannot? Sure. So first, representing New America, I wanted to welcome you to this conversation. It's exciting for us and an honor for us to be able to host it here. I think there's an interesting parallel to what was laid out in terms of the foundation of the ICRC. It basically happens really near the starting point of the industrial age. That's part of what makes those early battles that inspired the creation of the ICRC so violent, so costly: the introduction of new technologies. And then over the coming decades there's a series of science-fiction-like technologies that come true, as you see industrialization and then the creation of things like tanks, the application of chemistry and gases to warfare, flying machines, you name it.
And essentially what's happening right now in the civilian world is a new industrial age. It's just now robotics, AI, autonomy, with the same potential of reshaping both the economy, the way the steam engine and chemistry did, as well as war. So if you're looking at the economy side, the spending on robotics and AI by one measure is $153 billion worldwide, with an economic disruption of about $33 trillion, applied into domains like war. Just to give you one numeric example, the CIA alone has 137 AI projects that it's working on in its open budget, let alone the black budget. And of course, as was mentioned, this is global. At New America we've tracked at least 80 different countries with military robotics programs. So it's not just a story of the United States or China; it goes well beyond them, and we're also seeing what we call hybridization. ISIS, for example, carried out over 300 drone strike missions in the Battle of Mosul alone. So that sort of sets the stage for what we're seeing globally. What we're seeing with the technology itself is that early on it was almost exclusively remote control. So if you look at the early Predator drone, there was not a human pilot in it, but there was a human on the ground basically joysticking everything for it. And we've seen more and more of the tasks within it being passed off to the computer. So for example, the newer version of the Predator, the Reaper, can take off and land on its own. It can fly mission waypoints on its own, but there's still a human making most of the decisions with a traditional joystick. Then we moved to technologies like the Global Hawk, where there's not a joystick; they're managing it. They're programming it. They are entering, on a keyboard: take off and land, fly to these mission waypoints. That's where we are roughly right now with use in war. Where we're moving to in the next couple of years is basically one person controlling many of these systems simultaneously.
So they'll be able to let them fly on their own, but when something interesting happens, they'll drop into it, a lot like how video games are right now: the game can play itself, but when something interesting happens you drop into that part of it and you still have that same level of control. Then we're moving to more and more tasks within it being handed off: target identification, or maybe firing and the like. And that's both because the systems are getting more sophisticated and because, as they become used more and more in war, the other side is going to go after those remote-control links. So part of the story here, to end on, is not just where the technology is, but what happens as we all grow more comfortable with the technology and try to take it away from the other side, and the irony of how that drives us more and more toward a story of autonomy. It seems like there's always the question of whether law can keep up with technology, and this raises kind of a similar question. Could you just paint the picture a little bit of where the law is right now, and where perhaps it should evolve to keep pace with the technology that Peter was talking about? I think there are rules of existing international humanitarian law that are captured by some of the comments in the video, and these are key rules that people are very concerned about when they're thinking about where we are headed as weapons become increasingly autonomous. So one of the most important rules in the laws of armed conflict is the rule of distinction, which basically requires armed forces to distinguish between civilians and military objectives and only target military objectives. There's a follow-on rule that we refer to as the rule of proportionality, which requires armed forces, having identified a military objective,
to assess whether there's a military advantage to striking it and to figure out whether there are civilian harms that would potentially be caused; where the cost to civilians would be excessive, you may not hit that target. So that's the second core rule that I think we worry about when we're thinking about autonomous weapons. And the third one is a rule related to precautions, and the idea there is that where possible, a military should take precautions to try to ensure that you are in fact targeting a military objective instead of civilians: that if, for example, civilians wander into the attack, you might decide to halt the attack, and so on. So that's a core bucket of rules that I think is really important and would apply to the use of autonomous weapons in whatever form they take. Autonomous weapons couldn't be deployed, if we're talking about weapons that are actually launched by a military and then let go to identify a target on their own, unless a state were confident that the autonomous weapon system could comply with the rule of distinction and could comply with the rule of proportionality. A state would only be allowed to do that under the laws of armed conflict with that confidence, and I think that's a high technical hurdle that hasn't yet been achieved. There's another important rule of the laws of war that relates to weapons reviews. Additional Protocol I of 1977 has a rule that requires states, before employing a weapon, any weapon, to ensure that the weapon would not be inconsistent with the laws of armed conflict. So one of the topics that's come up in the international discussion about this is: won't weapons reviews be sufficient to preclude states from deploying autonomous weapons that would not be consistent with proportionality, or would not be able to comply with the rule of distinction? Another legal question that the laws of war wrestle with, as they currently stand, has to do with accountability.
So states are accountable for violations of the laws of armed conflict. They're responsible as states, and in some cases individuals who are engaged in violations, deliberate violations of the laws of war, might be prosecuted, for example, for war crimes. So one question that comes up with autonomous weapon systems is: who do you hold accountable for their use? And I think Peter talks about that in the video. So that's another legal question that is addressed in the existing rules of war but is maybe a little more complicated to apply when you're talking about autonomous systems than when you're talking about individual human actors engaged in acts of war. So where are we now? Is the existing law sufficient to help us regulate the use of autonomous weapons? I think this is the real debate. There is an international group of states, many states that are parties to something called the Convention on Certain Conventional Weapons, that has been meeting in Geneva to wrestle with this question about whether we need new laws or whether existing laws are sufficient. They've had a number of meetings, and they're going to have another set of meetings, or two more sets of meetings, later this year. It's proven to be a very contentious debate with different schools of thought, and I'm happy to say more about that later. Thank you. Neil, beyond the technology and the law there's also, I think, the slightly harder question of ethics: should this technology be used? Beyond the question of what it can do, the question of what it should do. And I wonder if you could perhaps address that: within the ethical boundaries the ICRC works on, where would you situate these weapons and what they're capable of doing? Thanks very much.
Well, I work in the team dealing with a wide range of weapons issues at the ICRC, looking essentially at the humanitarian consequences and also the compatibility with international humanitarian law, the law of armed conflict. It's always more challenging with emerging technologies, where you don't have so much evidence from the field on humanitarian consequences to inform your understanding. So we've been doing a lot of thinking, not only about the ethical issues but also about the legal issues. Let me just touch on the law before I move to the ethics, to say that I think there's been a misconception in a lot of this discussion: a comparison of humans and machines, and this idea that machines could apply the law. We touched on this in the video. We've been very clear that the law is addressed to humans, and those responsibilities rest with humans. So even when machines are carrying out certain functions, they're not applying the rule of distinction, they're not doing proportionality assessments, they're not taking precautions in attack; they're carrying out certain technical functions. It's the responsibility of the commander or operator using a certain weapon to ensure that they remain within those bounds. So that very much points towards the need for human control, and in looking at the ethical issues, what we found is that the driver from an ethical perspective is also very much linked to this concept of human control. This is potentially the only point of common agreement among, I think, all countries: this crucial aspect of the human element in the use of force and the use of weapons. It's a discussion across society, really, across sectors: in self-driving cars and in the use of algorithms in all sorts of industries. What is the quality of human control and judgment that it is necessary to retain?
Especially when these machine-driven systems, these algorithm-driven systems, are being applied to functions and decisions with serious consequences. When we had a meeting of ethical experts last year, what we heard from them was really, from an ethical perspective, the importance of maintaining human agency and intent in decisions to use force. So maintaining that link between the intention of the person who uses a weapon system and the eventual consequences; where that link is broken, I think you have both a legal and an ethical problem. Legally, if you don't have predictability, or if you have uncertainty as to what's going to happen, you have difficulty ensuring that whatever you use a weapon system for will comply with the law. But you also have a difficulty where you detach the human decision-making from the eventual consequences, the results, of the use of the weapon system. So that was one of the key issues that came up. If I could just go into that last point for a moment, I think it's a very interesting one. If the issue, as you're framing it, is in some ways one of human agency, is the thought that humans, by virtue of having to see that whatever they had chosen to do killed someone, to see the carnage of it, to realize that there was a human who died at the other end, might be more reluctant to use force, as compared to programming a code, where that's kind of the end of it? It's costless to you, and it's frictionless to you in terms of what happens next. I think that could be one issue. The code, and robotic behavior, is to a large extent constrained in the type of functions it can carry out, especially the type of interaction with its environment. Certainly there are limits to how far machines can operate in different environments.
So I'd say it's more about the human having sufficient information about the situation in which they're using a weapon and the eventual consequences, which means knowing how the weapon functions, its predictability, but also knowing about the environment in which it's being used. Because even if you have a system that's highly predictable, you could have an environment which changes drastically over time, and the context of whether something is a legitimate target will also change. Peter, the video nodded at this a little bit, but the history of weapons development always tends to be: here's a new technology, it's more precise, it won't just be a kind of mass slaughter of innocents. Are we at a point now where that still remains true, that drones or autonomous warfare — drones in particular, I think, because they have such a hold on the American imagination — just are by default more precise? There are two things going on here. First, there's a long history of almost every inventor of some kind of new technology saying this is the one that's going to create world peace, either because it is going to bring us all together, the story of the telegraph, or the opposite: it is so destructive, it is so horrible, no one will want to fight again. And you can literally go through that from, you know, dynamite to the atomic bomb, you name it. So that is playing out again with the story of robotics and AI. And then you have the second part, which is this belief, and this is sometimes more on the military side, that this is the weapon that will finally eliminate friction from war, the weapon that will allow me to be so precise I achieve all my aims, that it's going to be the one that allows me to defeat my enemy immediately, or it's going to be the one that's finally the ethical weapon: only the bad guys will be killed. And the reality is that's never been the case, and it continues not to be the case with unmanned
systems, with drones. They offer you more information, a level of information that actually reverses the problem. Now the challenge is TMI: I've got too much going on, I'm trying to track all of these different things. But again, you can reach down and identify, you know, not just that that's not a tank, that's a civilian vehicle, but the license plate on it. That doesn't mean somehow that mistakes don't happen: that you thought someone was in that vehicle who wasn't, you misidentified them, you got bad intelligence. It also doesn't mean that the enemy doesn't have a vote. So a number of the civilian casualties, for example, might be mistakes; some of them are also because someone fed you bad information, they wanted to get rid of a rival, so they said, ah, the rival is with al-Qaeda. There's a long pattern of that. And it's the same thing that I think we'll see play out with more autonomous systems: they will make decisions only as good as their programming and only as good as the data that's given to them. So programming, as anyone who has, you know, had a Word doc crash on you knows, programming will always have bugs, even though we've used it for decades, and that's on something as simple as, you know, writing a report. What about something that's operating in warfare? So we should expect systems to crash. We should also expect the data to not be perfect, either because we don't have full information or because the other side is going to be feeding you bad information. And that's why I think, you know, one of the questions that surrounds this, to add to the issue of law and ethics, is not just going to be should we use the technology or not, but under what circumstances, in what locations. So I think, for example, there will be certain domains where we are going to be less comfortable versus more comfortable with more autonomous systems. As an example, undersea warfare right now is mostly algorithms going back and forth. It's not, you know, Jonesy with his really good ears:
that's an enemy submarine. It's basically matching the data on what an enemy submarine looks like. We might be more comfortable unleashing autonomous systems in undersea warfare, both because of that and because there are, at least so far, no undersea cruise ships to get wrong, as opposed to an urban zone with a bunch of people operating; that's a lot tougher, and the stakes of civilian casualties are a lot higher. So again, I think that's where we might see kind of agreement or concord. So I agree with all that. I think another thing to note, that maybe drones helped stimulate, is that I do think there's an increasing expectation of accuracy, an increasing expectation of diminished civilian casualties, at least where one of the states fighting is a technologically advanced war-fighting power. So we're now at a point where President Obama established a policy for certain types of targeted killings, often using drones, where the tolerance for civilian casualties was almost none. In other words, the government said that it would not conduct a particular drone strike outside of areas of active hostilities unless there was near certainty that there would be no civilian casualties, which is a higher standard than the law of war requires. And yet I think that was established to kind of catch up to some of the expectations of these highly accurate, a perception that these machines are highly accurate, increasingly accurate. And so I expect the same thing would continue in the face of fully autonomous weapon systems as well. And where I thought you were going is to say there will be questions about whether an autonomous weapon system needs to actually be better than a human before governments are willing to deploy it. Is that the new expectation? Could we just stay on that for one moment, because you mentioned Barack Obama and the policy that he laid out in terms of the use of drones. I think, as a journalist, colleagues of mine far
braver than me, since I'm now management scum, who go out to look at what the cost of the drone strikes tends to be, have found that there are still civilians dying, in some cases in enormous numbers; the estimates range from the hundreds to the thousands, but serious numbers of civilian casualties. The reason I ask is the U.S. has kind of given itself the power right now to kill people in Somalia, Yemen, Pakistan, Afghanistan, Iraq, this ever-growing list where it feels justified in doing so. What happens when countries that we don't like start to do the same? When Russia sends an armed drone to kill someone in Georgia, or China sends an armed drone to kill someone in Vietnam, or country X does the same. Given the spread of this weaponry, that's where I wonder what happens then, in terms of the law as it exists and the law perhaps as you might like it to exist. So I guess I would reframe the U.S. government's legal claims a little bit. It is right that it has said that it's using force in a whole host of countries against groups associated with al-Qaeda, the Taliban, and associated forces, so it's not claiming the right to use force against all terrorist groups anywhere. And in fact, under President Obama, and continuing on under President Trump — President Obama issued a 66-page document towards the end of his administration that laid out what the legal claims were, what the justification is for using force in Afghanistan and Yemen and Somalia and so on. I think one of the reasons it did that was to show, look, there is actually a pretty precise legal theory here about when it's permissible to use force under the jus in bello and the jus ad bellum, in part because it recognizes that there are precedential issues in engaging in uses of force around the world in different places. And I think similarly, in the cyber arena, the government is conscious of precedents that it might be setting by doing X or Y, and I think it has perhaps been more cautious in the cyber arena for fear
of setting precedent that could be used by other countries in context where the U.S. government would feel less comfortable i would just add there's nothing technologic the technology doesn't change your question whether the strike is done by an mq9 reaper without a human pilot inside it and an f16 with a pilot inside it the law is the same and you know we can see this playing out right now for example in yemen where the united states has carried out both manned and unmanned operations or like we've seen on the the um the gulfi coalition if it hits a reported wedding and a bunch of civilians die we don't go oh but was it a drone or was it an f15 that dropped the bomb that doesn't change the law of it the technology changes the political discussion it makes it a lot easier for us to contemplate it makes it a lot easier for the politician to conduct that operation because the the risk um to the human pilot to our side is lower but it doesn't that there's nothing specific to the technology um where this may become more specific to the technology is if you get into a world of autonomy and you um unleash the technology you give it a preset target and you say go out there and find this target on your own take it out and then tell me about it but we're not in that space certainly not with the drone strikes the only i mean where i see that happening more likely in war is again with preset types of systems where for example you might say this is what an enemy battleship looks like of this type we're at war with that enemy i fire off not knowing where that battleship is go hunt it on your own we're not we're nowhere near the there was a for example a video of preset human faces and sort of unleashing it to you know roam the earth looking for someone with that face that's still in the realm of sci-fi i saw that movie it was good um neil but by all means jump in but one question i had and perhaps if you want to answer both when you talk to other countries leaders of other 
organizations, and especially military or civilian officials from other countries, is the discussion they're having one of what they should do? Or is it focused heavily on what they can do: what their militaries can purchase, what their militaries can develop, what kinds of strategies they might use? Or are they pausing to think about some of the questions we're discussing now, about moral obligation, moral responsibility, moral limits?

Well, I think the discussions in Geneva over the past five years have shown that governments are engaging with this discussion seriously. They recognize that there is a unique issue about autonomy in weapon systems, specifically, I would say, in the critical functions of weapons: selecting and attacking targets. Clearly there's a connection, in some of the technology, to existing armed robotic platforms that may in future be autonomous, but there's an important difference in terms of where that functionality is handed over to the sensors and algorithms of a machine. There's wide public interest, and the efforts of civil society have also raised this on the agenda of states. As I said earlier, there's a recognition that this human element in the use of force is critical, not only from a legal perspective; many states are raising ethical issues as well. Meanwhile there is, of course, interest, as has to be expected, in the military applications of robotics, autonomy, and AI. What we always argue when we're talking about this issue is that it's important to make a realistic assessment of the technology. Even a weapon system that may offer greater precision, such as a precision-guided munition, or an armed drone carrying one, doesn't inherently offer greater respect for the law or greater protection for civilians, because it depends on how you use it. A precision-guided munition could also be precisely on the wrong target, for example, or it could be of a size that disproportionately endangers civilians in a densely populated area. So it's important to look not just at the technology but at the way it's used.

I think the last discussions, a few weeks ago in Geneva, are quite encouraging. There's more of a coalescing around getting to grips with what this human element means: what type and degree of human control is needed. The interesting thing is that there's something to learn from the degree to which autonomy is used in existing weapons. There are a lot of constraints: it's mostly against objects such as incoming missiles and projectiles; it's mostly in quite controlled environments where there are limited risks to civilians; it's mostly for limited time periods over limited areas; and there's mostly human supervision. Often, even with systems that shoot down incoming rockets and mortars, there's human verification before the decision is taken whether to let that system fire autonomously. I think these constraints say something about the legal requirements, the ethical considerations, and perhaps also the capabilities of the technology. Thinking about the way technology is moving forward, there are situations where you start to see problems, and it gets back to this issue of uncertainty. If I have a system that flies for nine hours over several hundred kilometers to attack a particular target, and by definition I don't know where it's going to land, or when during that period, how am I really getting the granularity of information I need to make those legal judgments and to ensure that human connection, that intention, with regard to the eventual consequences? This is where I think there's a lot of work for governments to do in spelling out what they mean by human control in practice.

So we'll open up to questions in one moment. I will just make one observation myself.
I've been struck, over the years of reporting on the military, by a development that to me was surprising: rates of PTSD among drone operators and others who operate drones are much higher than I had ever thought possible. In interviewing people who were suffering from PTSD for a book I wrote, one thing they referenced a lot was that the tremendous accuracy of what you can see made it worse. If you're operating a drone, you can see a person. You can see them resting on the roof of their house. You can see them saying goodbye to their kids as they get into their car. You can see the missile, and you can see what's left of them afterward, in a way that you otherwise couldn't. If you fire a bullet or a rocket, you don't see what that weapon does on the other end; here, you do. As we open up to questions, that to me is an interesting paradox: so much of what these weapons let you see, even as the weapons can just do more and more of it as we go. We have a microphone in the back. Please identify yourself when you ask your question, and please keep them to questions, as opposed to paragraphs with question marks attached awkwardly at the end.

From the ICRC. I have a question from the other way around: don't you think that advanced technologies, autonomous systems fed with good data, could actually make fewer errors than humans do? Look at the airstrikes in many of the recent conflicts, with the horrible civilian casualties. If it had actually been an autonomous system fed with proper data, and I know there are errors in data, couldn't there have been fewer casualties? Thank you.

It offers up the potential of greater precision. That depends, however, on the type of data you have and how you code that data. As an example, one of the controversies surrounding the reporting of civilian casualties from drone strikes is: who's a civilian? That seems really simple, but it's really complex, particularly in a space where not everyone is wearing a uniform. One of the big controversies was this idea of military-age males: if they looked over the age of 18 and were in the vicinity, then they must be part of the organization I was trying to target, as opposed to just someone who was nearby. So the way that you code the data, which is not a technical question but might be a legal or a political one, can drive that. Also, to go back to what was said before, we have certain conflict actors who, shockingly, don't respect the laws of war. They might use that precision to do really bad things; they might use that precision to actually kill more civilians, if their intent is to create a fear factor among a population or the like. One way of putting it is that, often in this debate, we look to the technology to solve our moral dilemmas. You even get this idea that we can make moral robots, that we can give them a moral generator. That's great: show me the technology, and then show me the code of morality that we all agree to universally around the world, and you've solved the problem. So again, I think we'll be stuck with these legal and ethical dilemmas. We shouldn't look to software code to solve our moral and legal questions. It can maybe enable answers, but it's certainly not going to be the solution.

Yeah, so I guess I would just say two things. First, we might ask why states are pursuing these systems. One reason might be that we're worried other states are pursuing them and we're going to fall behind. But I do think that at some level states are pursuing these systems because they believe they are more accurate and can actually enhance compliance with the laws of war, and one way they can do that, I think, is to remove emotion
from the battlefield. You hear that with regard to drones, but you could imagine also an autonomous system that is not crippled by knowing that its battlemate was just shot by a non-state actor.

I'll go back again to this idea of comparing humans and machines. I don't think it's necessarily the right approach, because it's going to be humans with one type of machine versus humans with another type of machine. The focus should be on the human-machine relationship. Of course, where we can use machines, data, and data-analysis tools to get better information that helps those conducting conflict comply with the law and pose less risk to civilians, I think that's a good thing. I don't think that's necessarily an argument for closing the loop, for saying that the analysis a machine can contribute to your decision-making should suddenly be wired directly to the effect of a weapon system, or that building such a machine is something that's needed or that will solve the problem. Where these types of tools can be helpful is in helping human decision-makers make those judgments better.

So you mentioned the need to scope out this relationship between the human and the machine, to... oh, I'm sorry.

Wes Rist, from the American Society of International Law. To discuss how states regulate this and how it looks going forward: on a practical level, international law has been having less and less success in producing multinational agreements that are actually enforceable and that govern or regulate state behavior. Given how technical a topic this is, given how quickly the technology is changing, and given that maybe the right solution isn't trying to regulate specific technical issues but trying to govern where the line sits in the human-machine interaction, and the responsibility for who's ultimately pulling the trigger or programming the code, how likely is an international agreement that's going to regulate this? How likely are we to see an additional protocol, or a new convention of some kind, that states would sign up to and agree to be bound by?

I think it remains to be seen. There are certainly proposals from quite a few states who want to see new law in this area, a new protocol. There are others, some major powers, who've said they prefer to focus on existing legal requirements. I don't think those approaches are necessarily incompatible, though, because all states are there discussing it, and I think all states recognize there is an issue about keeping this human-machine balance right, getting it right from a legal perspective and also an ethical perspective. So I think there is a chance that states will be able to move forward and work out a way to agree on some common understanding of where that line lies. It's in all of their interest, in a way, because there is such a wide scope of what you could consider effective or meaningful human control when talking about autonomy in weapon systems. At one end of the spectrum it's "Well, I invented this robotic system and I programmed it, so therefore it's under control in all circumstances"; at the other end it's "I need to directly remote-control it at all times." So there's a lot of risk within that variation among states, and I think a lot of governments have an interest in actually understanding each other and where they see the boundaries. A good place to start would be: we have existing international humanitarian law, which all states have agreed to follow; given that this topic raises novel issues, how does that law actually apply here?

So I think we'll end there. I had promised my wife that I would see if I could avoid any cliché references to
Terminator, RoboCop, and other movies of the like, and I'm glad to say that I think collectively we did. There's a reception afterwards; I would encourage all of you to stay, up on the roof, and have more drinks than you may have already had. Again, I would love to join you but unfortunately can't. In the meantime, if you could please join me in thanking Neil and actually