Hi, everybody. Thanks so much for coming to our talk on autonomous killer weapons. Yeah, this is going to be a very light conversation for a Saturday afternoon, so I hope you guys are really excited about that. My name is Liz O'Sullivan, and this is Marta Kosmyna, and we represent the Campaign to Stop Killer Robots. Can we get a raise of hands if you've heard of us? Yes, that means the campaign is doing its job. We have 115 member organizations across 55 countries, and we're still growing. Basically, the Campaign to Stop Killer Robots is all about preserving meaningful human control over weapons systems. Our goal is to secure an international treaty to ban fully autonomous weapons.

So in our talk today, we're going to go through: what is a killer robot, anyway? Why is it an ethical concern that we're considering ceding the decision to take a human life to a machine algorithm? How are people getting involved, and why does it matter? And what's been done already, and why we still need your help. We'll also talk a little bit about what technologies are being used to achieve this goal of creating autonomous killer weapons, and what you might do if you find yourself in a situation where you might be working on them or contributing research to the cause. So here's just a short intro video to let you know more about the campaign.

Alright, so now that we've scared the pants off of you, we hope, we will introduce ourselves and get started with a really detailed, technical, ethical discussion about what we are fighting to make sure never happens. Just to start off, I'm sure you've probably never heard of either one of us, but my name is, as I said, Liz, and I've worked in AI for about eight years total. I started out at an NLP company that did recruitment job advertising, and most recently I worked at a computer vision company called Clarifai, where I was the head of labeling services. You probably know that when you create AI models, you have to train algorithms using hundreds of thousands or millions of labeled data points. And that's actually where I first fell in love with the field of FAT ML. It's kind of a cheesy name, but it stands for Fairness, Accountability, and Transparency in Machine Learning; labeling services can actually introduce quite a lot of bias into the machine learning models they help create. Since then, I've joined a nonprofit called the Surveillance Technology Oversight Project, and another one called Tech Inquiry, which seeks to connect tech organizations and the people who work there with the government, to help explain and clarify some of the technology questions that legislators might have. I also co-founded a small AI explainability company, with the hope of making some of these issues a little clearer and better understood. And I have kind of an interesting story to tell, which I'll get into later in the presentation, but for now that's me, and I'll let Marta introduce herself.

So, I'm Marta. I've been researching conflict situations and human rights abuses for about four years, mostly in the context of human rights and weapons. I started when the conflict broke out between Ukraine and Russia, and I was doing research there, and I saw quite clearly how weapons proliferation and conflict spread and how they affect civilians in the area. And then I worked at Human Rights Watch in Washington, D.C.
for a few years, basically looking through hours upon hours of open source video footage of the immediate aftermath of air strikes in Syria, doing research there and tracking cluster munition attacks. So I saw firsthand the devastating impact those weapons can have on civilians. While I was at Human Rights Watch, I also worked on the Campaign to Stop Killer Robots, and that's when I realized these weapons will be the next frontier of what we'll see in future wars, unless we do something about it now, preemptively, before the technology is deployed. There are no victims of killer robots yet, and I don't want to be doing research on them ten or twenty years in the future.

So, before we get started with the ethical arguments for or against autonomous weapons, we thought we would take a few minutes to define what they actually are. This may seem a little obvious, but in fact there are lots of different degrees and shades of autonomy in current warfare, and even more proposed for the future. So what is our definition of a fully autonomous weapon? A fully autonomous weapon can, from a set of candidate targets, select, acquire, and destroy a target or set of targets without control from a human being who is authorizing each individual kill. When we talk about autonomous weapons, it's easy to imagine a single drone with a commander back home watching the video feed, but a big part of the fear for our campaign is that there won't be just one. Most likely, given some of the research coming out of DARPA, there will be swarms of them that are communicating with each other, in ways that we don't or can't understand.

So a key notion in this discussion is the human-in-the-loop question. What is a human in the loop? Does anybody here work in AI or around AI? This is a term that's familiar, I know, to some of you. A human in the loop is essentially a touch point between the prediction of the algorithm and the action it's designed to take, one that allows an approval or an authorization. A lot of current AI technologies secretly have humans in the loop who are validating or correcting predictions in real time; there are AI companies that provide this as a service. But the likeliest way we'll encounter fully autonomous weapons right now, just practically speaking, looking at the different kinds of systems in place, is through autonomous killer drones involving aerial photography and a technology called object detection, which we'll talk about a little later on.

Speaking of aerial photography and object detection, has anyone heard of Project Maven? A couple of people? Okay, it's been in the news a little bit. That was a DOD contract which has since been transferred into the Joint Artificial Intelligence Center, or as we're going to call it, the JAIC. It's something that Microsoft, Amazon, and at one point Google were tasked with working on. Essentially, it takes an aerial point of view looking down at the world and tries to find where, within that photo or video stream, various objects, namely people, facilities, and vehicles, actually are.
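To make the human-in-the-loop idea concrete, here's a minimal sketch of the pattern in code. Everything in it is hypothetical and illustrative: a stand-in detector proposes targets from a frame of aerial imagery, and a person has to approve each one before any action is taken.

# A minimal human-in-the-loop sketch. The names here (Detection,
# fake_detector, human_approves) are illustrative stand-ins, not any
# real system's API.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "person", "vehicle", "facility"
    confidence: float  # model score in [0, 1]
    bbox: tuple        # (x, y, width, height) in the frame

def fake_detector(frame):
    # Stand-in for an object-detection model over aerial imagery.
    return [Detection("vehicle", 0.91, (120, 40, 30, 18)),
            Detection("person", 0.67, (300, 200, 8, 20))]

def human_approves(det):
    # The "loop": a person authorizes or rejects each proposed action.
    answer = input(f"Authorize action on {det.label} "
                   f"(conf={det.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def process(frame, threshold=0.5):
    for det in fake_detector(frame):
        if det.confidence < threshold:
            continue                    # the model filters candidates,
        if human_approves(det):         # but a human gates every action
            print(f"ACTION on {det.label} (human-authorized)")
        else:
            print(f"SKIPPED {det.label} (not authorized)")

process(frame=None)

The entire moral weight of the system sits in that one human_approves call; remove it, or replace it with a rubber-stamp function, and every detection above threshold becomes an action with no human judgment involved. That is, in miniature, what "fully autonomous" means.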
And this kind of object detection is a core part of the race toward autonomous drones, and autonomous weapons in general, because it's the piece that lets a system understand where in physical space the target it seeks to destroy actually is. Now, no one can really make the bold claim that Project Maven is about autonomous killer weapons. But it does seem like a fundamental part of them, which may explain why there's so much interest from the Department of Defense in pushing it into Silicon Valley and having as many companies as possible competing to work on this very core technology right now. So we're now going to talk a little bit about the kinds of weapons bearing varying degrees of autonomy that are in practice today, and for that I'll cede the mic to Marta; she's very experienced in this.

As Liz mentioned, probably the first thing you think of is an unmanned aerial vehicle, more commonly known as an armed drone. These usually have long ranges, and they're able to carry a payload, a missile or artillery system. Originally, when the United States Air Force and the Central Intelligence Agency used these back in the 90s, they were only for surveillance and reconnaissance. Fast forward less than ten years, and we added arms to drones. In 2001 the first armed drone, the US MQ-1 Predator, flew over Iraq and Afghanistan, and less than a year later it made its first kill. Now, fast forward to 2019: that system is already retired, it's hanging in the Smithsonian, and it's been succeeded by systems like the Reaper, which has a longer range, can carry a bigger payload, is used widely by a handful of countries today, and has been sold and proliferated over and over again. And we don't have a lot of transparency on the numbers or on the actual attacks where these systems are used. That's a big problem for people who are tracking these conflicts and trying to get accountability for victims and track civilian casualties; as you can imagine, defense departments don't make that easy.

We also have loitering munitions, which basically took all of the technology that went into UAVs and made it much smaller. They consequently have shorter ranges, but they are able to hover over a designated area and seek out their target. Back in the 90s, Israel developed its Harpy drone, which mainly used sensor technology to seek out radars; it was made small partly to evade radar itself. As sensor tech improved, they came out with a new version called the Harop, also Israeli, and that one has the ability to return home. The Harpy was basically a kamikaze drone: it flew into its target and was single-use. The Harop, if it doesn't find its target, can come back, which is obviously financially beneficial. It has double the range and double the loitering time. So imagine a weapon that's able to seek out its target. You can't hide: whether it uses biometric sensing or facial recognition technology, it is going to hunt its target to the ends of the earth. You can't escape it. This loitering munition was used in the Nagorno-Karabakh conflict between Armenia and Azerbaijan in 2016, and six people were killed. So it's definitely deployed and operational, and it has a semi-autonomous mode and a fully autonomous mode.

And then there are also air defense systems, like the Israeli Iron Dome, which basically seeks out incoming missiles.
But there's still a human operator who has to confirm launching the counterstrike. The system detects the missile and alerts the human operator, and the human operator pulls the proverbial trigger. These can be in a fixed location, or they can be put on a Navy ship, in which case obviously they move, but they are limited in range, in scope, in time, and in the payloads they can carry. So with the current state of technology there are some limitations, but as we see, they're getting smaller, they're getting faster, they're able to loiter for longer, and eventually they'll be able to be refueled autonomously as well. The scope of operations grows and grows as these weapons are used.

We also have other semi-autonomous systems that are more ground-based, like sentry weapons, which I'll talk about a little later, that can guard borders or perimeters and ID targets. As you can see in that picture, there are lots of different sensing technologies as well as some heavy artillery attached. Something similar goes for unmanned ground vehicles, which can be used alongside troops, or sent ahead of soldiers to scope out the situation. You can envision these being used for logistics purposes, carrying extra supplies or fuel or injured soldiers, but you can also stick weapons on them, and now it's an armed unmanned ground vehicle. Right now the key thing is that all of these have human operators. There is human control attached to all of these systems, but progressively we're seeing that diminish more and more, and that's our concern: there are no stopgaps, nothing stopping this technological progress, except maybe policymakers, who are not as informed on the technical side of this as they should be.

It's also worth mentioning that these are simply the weapons we actually have access to see, which means that behind the scenes there are likely several more classified programs coming out of DARPA or other government agencies working on these things. In fact, the US, to Marta's point, is the first country to develop a policy restricting autonomy in weapons, or so they claim. They do it in a directive called 3000.09, which dictates that appropriate levels of human judgment over the use of force are mandatory. But even the author of this policy, who has written a book about AI in warfare and specifically autonomous warfare, admits in his book that the policy is not restrictive: it's entirely possible for a general to take a request for proposal to his superiors and get authorization to build, use, or deploy this kind of weapon.

So one of the things we try to do to understand what might be going on behind the scenes is to look at the programs coming out of DARPA, and the programs requested by DARPA in its RFPs on FedBizOpps, which is the way it issues tenders for applications. One of these declassified DARPA programs is called CODE, or, I always forget the acronyms, Collaborative Operations in Denied Environment. Basically this is a program that would allow a swarm of drones to talk to each other and work as a unit, and to do that in a signal-denied environment. If I recall correctly, these drones can communicate with each other with as little as a 56k connection. So think back to your AOL phone lines, if any of you are old enough to remember that: that's the smallest amount of bandwidth they need to actually talk to each other.
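To put that 56k figure in perspective, here's a back-of-the-envelope sketch. The message layout is entirely made up for illustration; the point is just that swarm coordination messages are tiny, so even a dial-up-era link carries hundreds of them per second.

# How much coordination fits in a 56 kbps link? The message layout
# below is hypothetical, purely to illustrate the bandwidth math.
import struct

# One coordination update: drone id, latitude, longitude, altitude,
# target id, timestamp.
UPDATE = struct.Struct("<H f f f H I")    # packs to 20 bytes

link_bps = 56_000                         # 56 kbps
msg_bits = UPDATE.size * 8                # 160 bits per update
print(f"{UPDATE.size} bytes/update -> "
      f"{link_bps // msg_bits} updates per second")
# 20 bytes/update -> 350 updates per second

In other words, a swarm of a few dozen drones could exchange several position updates per second each over a connection slower than 1990s dial-up, which is why signal-denied operation, not bandwidth, is the hard part.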
And so drone swarms, again, are the main concern here at this moment in time, but as far as unmanned ground vehicles go, there's also another program coming out of DARPA. At this point it's still on the whiteboard, and they're requesting people to start working on this kind of thing. What's very interesting is that in every article the press publishes about this notion of autonomous weapons, they tend to mention Directive 3000.09 as a saving grace and say: we're building different degrees of supervised autonomy, and this is the future of warfare, but we still have this one policy protecting us from creating Terminators, or whatever the scary sci-fi version of what we're actually talking about is. But it's such a graphic image, it conveys it. (I'll be saying Terminator in a minute.) Great, thank you.

So it's not Terminator; instead it's called ATLAS, and this is an unmanned ground vehicle. There's a very generic description in the RFP: it has to have wheels, it has a gun, blah blah blah. But one section of that RFP stood out to me immediately, called fire control. Under the bulleted list of fire control requirements for this system, one of the phrases was "fully automated." So it seems they're playing both sides of the fence, right? On the one hand, reassuring the public and saying: we are a democratic nation, we have policy, we value human life. And simultaneously, creating opportunities for Silicon Valley to push this, and I'll use this phrase very sparingly, arms race forward.

If you follow these issues as closely as Marta and I do, because we're very concerned with surveillance applications of this technology, you'll see a really common trend: things that are developed under the excuse of warfare, for defense or competition or rivalry, start to make their way domestically twenty years later, here to the United States. We are already seeing surveillance drones arrive in the domestic USA, which is very troubling. And I think with the rise of Amazon's drones, and all of the policies being developed to make drone deliveries legal and regulated, we'll start to see more and more cameras on these drones, and that, again, is the type of escalation that puts us all at risk.

So, as Liz mentioned, the campaign is also concerned about less-than-lethal, or non-lethal, uses of fully autonomous weapons. In my opinion, there's no such thing as a less-than-lethal weapon: they can all inflict injuries that are severe and can lead to death, anything from tear gas to rubber bullets to the examples here. The most recent one is from the Hong Kong protests back in July. This is a U.S.
company called NonLethal Technologies, and basically it delivers tear gas onto protesters and can cause injuries. As you can see, it says right there, "do not fire directly at persons," but that's not in fact how it's being used: it's being fired directly at protesters, and we see that over and over again to quell revolutions, rebellions, protests, whatever it may be, oftentimes against unarmed civilians. On the bottom you see an Israeli drone that disperses small tear gas canisters; that was used on the Gaza border, between Israel and Palestine. And the one on the right is an interesting example. This was from 2016: if you remember, there were protests going on against police violence, and there was a sniper in Dallas who shot and killed five police officers. What they decided to do was take a ground vehicle that was not armed, put a pound of C4 on it, and drive it to the sniper to kill him. As far as I know, that's the first instance of a robot semi-autonomously killing a U.S. citizen. All of these have human operators directing them to a certain target, but you can imagine that with improved technology they're easily made fully autonomous. And if you have a leader facing a revolution, with unarmed protesters or civilians, and he or she directs the military to fire on their own people, the military can refuse to fire. But a machine, a robot that's pre-programmed, will fire on those protesters. We're going to talk about that a little later.

I want to go back really quickly to a really good example from thirteen years ago, and the state of the technology back then. In the demilitarized zone between South Korea and North Korea there was a robotic sentry weapon that used heat signatures and other sensor technology to identify people crossing the zone. It was a South Korean weapon, and they put it on the border in fully autonomous mode, and there was a huge international outcry, because refugees were crossing from North Korea to South Korea, and that weapon can't distinguish between a soldier, a civilian, and a refugee. Thankfully, international pressure forced the South Koreans to put the weapon back into semi-autonomous mode, where a soldier has to make a visual ID and then activate the system. But that was thirteen years ago, and as Liz said, there are many new systems at play today. So that's an outline of the systems that exist, and now we want to switch into, you know, this is the Ethics Village, let's talk about the ethics of it.

Good, I was getting ready to interrupt you. I think we understand that these weapons are horrible, but the question I would have is: what can this campaign do to actually stop this, when this is going to happen whether or not we do anything about it? How do we defend against folks who want to use killer robots against us if we don't build our own? What are we going to do to defend ourselves from swarms of killer drones? We don't control everybody on this planet, and it's a very altruistic thing you guys are trying to do, but I would really like to see us have a conversation about the reality that not everybody in the world is a good guy. Do you see where I'm going with this? And, I don't know, is there anybody in the audience from the industry that makes these things? No pressure, no pressure. And, you know, I'm just going to throw this out there: I've worked with
weapons for most of my career, and I'd like to hear what you have to say, but I have opinions.

Okay, great, we're really excited about that. I'd love to have this conversation, and thanks, Big Easy, I truly appreciate you chiming in and trying to help guide this discussion. We do have two hours, and we prepared the first hour to be a kind of philosophical discussion about why this matters. Well, I'd like to see the philosophical discussion start, but we're definitely going to answer that question; obviously we get it a lot.

So, fundamentally, the ethics of these weapons come down to our inherent right to life, and what it means for a machine algorithm to take or harm a human life, and how that degrades human dignity. That's really the moral and ethical question we're talking about. If you deploy a fully autonomous weapon, you're basically committing an extrajudicial killing that automates the decision and takes the necessary human judgment out of the process, and you would be slaughtering mechanically, at scale and at speed. The technologies that exist today are still nascent, but that is the big moral question we're asking here, and I think we can all agree on this point, that we have the right to life.

Okay, folks that have questions, please use the microphone. Let's take a couple of questions.

I'll just make the comment that in any era of history, weapons of war turned against civilians have been horrific, whether it's a Roman legion with spears or machine guns pointed at a crowd. But when you're talking about AI weapons, you're talking about things which are able to be much more discriminating and much more precise, so much faster, right?

Yes, precisely.

So, for example, the decision here isn't really whether the F-15 is going to drop 500 pounds of hate onto a house; it's what is going to be contained in that 500-pound package. Right now the answer is high explosive; in the future, maybe the answer could be 500 pounds of some kind of robot. And to give an example of why this would be important: in the late 2000s there was a terrorist in Pakistan named Baitullah Mehsud who was killed in an explosion. His wife was killed in that same explosion, and by all accounts she was not directly involved in Mehsud's terrorism. If the option had been available to attack only the individual target and reduce the collateral damage, in this sense the ethical thing for a warfighter to do would be to use the more precise and more specific option. Additionally, as far as human in the loop goes: right now we have weapons for snipers which are officer-in-the-loop weapons, where essentially the sniper has a pair of handcuffs on; he acquires the target, the video feed goes home, and he has to get layers of approval before he can actually take the shot. Those systems exist. So I'm not really clear on why you're against very low-level autonomous targeting, because if at the end of the day you can say that the national command authority is the only human in the loop in warfighting, that means you don't have 17-year-olds making difficult moral decisions. And as for your example of a president ordering the military to fire on rioters: historically, dictators have had people who were willing to fire on rioters, and yes, it's true they could refuse, but if the only person making the moral decision to fire on an unarmed crowd happens to be the political decision-maker in charge, and he bears
the full moral weight of it, then we can deal with him in a way that's much easier than dealing with people who say, well, I was only following orders.

Thank you, and we will absolutely respond to those questions. I just want to take a second before we get back into the explanations, because Roman makes a point, and there are slides in here that will address the specific concerns about integrating AI into warfare, this new frontier. But I do take issue with one thing in your very insightful comments, and I'm not sure what your name is, which is the claim that AI will be more precise and more controlled and do less damage. That may be a US-centric view, and not how we see weapons used in conflicts in practice.

Yep. And the other thing to remember about AI is that it's not magic. It's just a predictive technology that takes data sets and infers future outcomes from them. If your data is messy, and we all know it will be, and it is, then your algorithm will be messy in a similar way. Even more than that, AI models are notoriously what we call brittle, which means they can only do exactly what we trained them to do. Say you have a laboratory scenario where you're training a swarm of killer drones to identify targets in the desert: what happens if the battlefield is not in a desert, but in the snow? There's a lot of research on this in adversarial machine learning. There's one famous paper that takes photos of dogs and wolves and tries to classify which is which. Prior to the field of explainability, no one would have known what was happening at all, but with explainability, researchers were able to detect that it wasn't the animal itself driving the prediction; it was actually the snow in the background. And when you analyze things on a pixel-by-pixel level, there's so much brute-force computation happening that it becomes impossible to understand and account for every single scenario, every variance that might occur in an unpredictable battlefield.
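You can reproduce that dog-versus-wolf failure with even the crudest explainability technique: cover up one patch of the image at a time and watch how much the model's score moves. Here's a minimal sketch; the classifier is a fake stand-in that scores bright, snow-like pixels, which is exactly the failure mode in question, and none of this is the method from the actual paper.

# Crude occlusion-based saliency: mask one patch at a time and see how
# much the score drops. If masking the background moves the score more
# than masking the animal, the model is keying on the background.
import numpy as np

def model_score(image):
    # Hypothetical classifier for P(wolf). It "cheats" by responding
    # to bright (snowy) pixels, the failure mode described above.
    return float(image.mean() / 255.0)

def occlusion_saliency(image, patch=8):
    base = model_score(image)
    h, w = image.shape
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0    # black out one patch
            saliency[i // patch, j // patch] = base - model_score(masked)
    return saliency   # high value = prediction depends on that region

# Toy image: dark "animal" patch on a bright "snow" background.
img = np.full((32, 32), 230, dtype=np.uint8)
img[8:16, 8:16] = 40
s = occlusion_saliency(img)
print("prediction depends most on:",
      "background" if s[0, 0] > s[1, 1] else "animal")
# -> background

Run on the toy image, the snowy background patches matter more to the score than the animal does, which is the same conclusion the wolf paper's authors reached about their real model.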
I think the alarm we're trying to raise here is that these technologies are so new, so risky, and so poorly understood that to rush forward into autonomy built on these kinds of detection systems is unacceptable, especially when you're not involving sufficient human control, where maybe a general just designates an area for the drone to fly around and says, kill all the battle-aged men in this area.

So how do we do this better? Can you quickly define what you mean: what ethical questions do you want us to answer? How can we help make this field better, especially when we're talking about robots and AI? Since your focus is mainly AI and mainly robots, how can we do AI in a way that is ethical, or in a way that we feel better about at the end of the day, and how do we do that with robots?

I think that's a really great question, and if you asked different people on the campaign you'd probably get very different answers, but one thing remains central: maintain meaningful human control. Think about two scenarios: a drone that is looking for a particular person, and a swarm of drones that is looking for a bunch of people, both controlled by one person, one soldier or one general, whichever it happens to be. That's a force multiplier, right? For every one soldier, you can have ten, or a thousand, or ten thousand, or a million potential victims of this kind of weapon. Whereas with one person, one drone, one approval, you tend to de-escalate the whole notion of war. So the escalation of the scale of war is a really important question for us. What counts as meaningful control is also a very good question, one being discussed right now at the highest levels of international governance. Marta, maybe you want to take us through some of the different interpretations; I have my own opinion of it.

Yeah, there are teams of lawyers and tech workers and diplomats, military lawyers, military folks, trying to answer that exact question. They take out their charts of what Liz talked about at the beginning, the targeting cycle of a weapon. Something like Maven, IDing objects or IDing targets, can be done semi-autonomously; that's already incorporated into existing armed drones. But when you're at the point where you're selecting, locking onto, and then engaging, pulling the trigger on, a target, that's where we argue you have to maintain human control. And yes, you can debate whether an operator making a split-second decision actually constitutes full human control; that can be highly debated too. What we're advocating for is to have this discussion with policymakers, because the people in charge are a bunch of United Nations diplomats who have been talking about this for seven years. The technology is moving super quickly; the policy has not moved at all. They're just talking; they're not negotiating anything, they're not producing anything binding, they're stalling. You have Russia, the US, Israel, South Korea, the UK, all these guys in the same room. They recognize that this is important, that this topic is worth paying attention to, but they're not doing anything about it. So that's the role of the campaign, and we'll talk later about how we've been successful before, working with the public, civil society, and diplomats, in actually making change. But first I want to talk about one really big issue, which is accountability, and that takes us back to a really important point about international humanitarian law, which exists to protect civilians, to protect everybody.

Can I ask one more question real quick before you move on? Do you have any proposal for guaranteeing transparency for these agencies, such as a bipartisan organization? After you initiate this ban proposal, and let's say it passes, how do we guarantee that they're actually washing their hands when they exit the bathroom, so to speak?

We're going to talk about that later, but basically, there are past international treaties that have been successful. They bring states to the table to have an honest and transparent discussion, where states have voluntarily given up certain types of weapons that caused indiscriminate civilian harm, and there are ways to check and balance that without relying on any single agency. Think about biological weapons, chemical weapons, landmines, cluster munitions; we're going to talk about these later.

One quick, minor question, something I probably should have asked
you first: making the assumption that the people building these weapons conform to the existing laws of warfare, you know, you shouldn't target civilians, you shouldn't build things that are intentionally indiscriminate, etc., what specifically makes it unethical to try to replace the existing weapons in US and other nations' stocks with robot weapons that continue to attempt to conform to those ideals?

We argue that technology will never be able to replace human judgment, and what that means, legally, is that no one will be held responsible when a fully autonomous weapon malfunctions.

Yeah. You know, I'm in technology, I co-founded an AI company, I'm not a technophobe. I believe that AI is going to make its way into the military, it already has, and it will continue to expand, and we hope it will be done in a way that reduces the loss of innocent life and makes warfare safer. But I'm also very skeptical of the techno-utopian viewpoint that assumes AI is magic, that it will fix everything and be more accurate than a human in all cases. There are some very specific cases where a computer can't tell the motivation of a human, can't distinguish a civilian from a soldier. We're actually going to get to this a little later on, so we'll probably talk about it twice, but maybe I'll just go back to the slides and you'll see when we get there.

So, we were talking about international humanitarian law. These laws are designed to protect humans, and one of the requirements for any casualty of war is that someone, an individual, is held accountable. So how do you think about that when a swarm of drones goes off to kill its set of victims at the click of a switch or the press of a button by a soldier under the command of his supervising officer? Well, if you're using deep learning, attributing the decision is close to impossible. There are fields that attempt to approximate what the cause, the explanation, of a particular decision might have been, but who's responsible for the mishap? Let's say it causes an atrocity. Let's say it plows down a field of protesters by accident because they're all wearing the wrong color, or a specific kind of hat, whatever it might be. Who's responsible? Is it the soldier? Is it the general? Is it the manufacturer? Or is it the software that actually caused this decision to be made? The model also depends on data labeling services: what if there's a spy in there, mislabeling 100,000 of the million images that go into training the model? The point is that there are so many different touch points where somebody bears responsibility, and yet it's the machine that acts, not necessarily the people who made it. You can't even hold Uber accountable; in US law we're starting to see that, where we would love to see companies like Uber, who design the software and hardware of autonomous vehicles and sell them and profit from them, be held accountable, and they're not going to be. These manufacturers are effectively immune from this type of responsibility. So if there's a malfunction and a genocide occurs because of it, are you going to be able to blame the general, or are you going to go after the drone manufacturer that actually built them? Maybe it wasn't the general who ordered those things built; maybe it was a team of people in the procurement office who made a poor decision about
which company to partner with. The chain of responsibility is just very, very broad, and given how new this technology is, I think explainability needs to come a lot farther than it is today. Right now we can mostly do explainability with proxy models, which means we train a secondary model that's similar enough to the first model, reverse-engineering the decision and saying: these are the factors we think went into this kill decision. Eventually we're going to want to do that on the model itself, because in a proxy version you're using something like a linear regression, a very simple kind of model, and you really want something more complex and accurate if you're going to hold somebody accountable for a war crime. So that's the way we're thinking about that.
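Here's a minimal sketch of that proxy-model idea: probe a black-box model with inputs, fit an interpretable surrogate (here, a plain linear regression) to its outputs, and read the explanation off the surrogate's weights. The black box and the feature names are hypothetical stand-ins; this is roughly the intuition behind local surrogate methods like LIME, not any production tool.

# Proxy-model explainability in miniature: fit a simple, interpretable
# surrogate to mimic a black box, then inspect the surrogate's weights.
import numpy as np

rng = np.random.default_rng(0)
features = ["target_speed", "heat_signature", "background_brightness"]

def black_box(X):
    # Stand-in for an opaque deep model. Note that it secretly keys on
    # background brightness (column 2) far more than anything else.
    return 0.1 * X[:, 0] + 0.2 * X[:, 1] + 2.0 * X[:, 2]

X = rng.normal(size=(1000, 3))                 # probe the black box...
y = black_box(X)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # ...and fit the proxy

for name, c in sorted(zip(features, coef), key=lambda t: -abs(t[1])):
    print(f"{name:>24}: weight {c:+.2f}")
# background_brightness: weight +2.00  <- the proxy exposes what the
# black box is actually relying on

The surrogate only approximates the black box, which is exactly the accountability worry above: the explanation you get is an explanation of the stand-in, not of the model that pulled the trigger.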
So there are two other points in international humanitarian law that bear mentioning. There are teams of lawyers already out there arguing that this kind of weapon is illegal under existing international humanitarian law, or that it's impossible to create a weapon that could comply with it, and Marta knows a lot more about IHL than I do.

I know the legal stuff might seem dry, but part of what we do as advocates is outreach to the tech community, trying to bridge that divide so everyone understands why new law is necessary, because existing law will not cover this technology. As was mentioned, the rules of law require two basic things of individual attacks: proportionality and distinction. Robots lack the situational awareness, the contextual understanding, and the moral reasoning to make what we consider a human judgment call, which is what distinction and proportionality demand; current technology can't meet those requirements. In this example it might seem pretty obvious: there's a killer robot, and there's a small child with a teddy bear. But what happens when you've got a person who is maybe being coerced into fighting, or maybe trying to surrender? How is a machine, with all of its sensors and whatever facial recognition or biometric data it has, able to recognize coercion? We're talking about emotional intelligence for robots, about reading motivation, and honestly we don't think that's going to happen. Especially when you consider who's creating the technology: what biases do they have, what cultural understandings might they be missing? Think about all the theaters the US is deployed in right now, in different contexts, with one system programmed one way. Someone could be dancing in joy or grimacing in pain; how is grimacing in pain different from anger about an attack?

Just to add to that before we move off this point, there's this notion of sentiment detection, facial recognition analysis of what you're thinking or feeling based off of your face. The techno-utopians look at that and say there are all these great uses: we can figure out whether kids are paying attention in school, or whether people are actually malicious when they walk into a bank intending to rob it. But the more research comes out, the clearer it becomes that your facial expression does not necessarily reflect your internal motivation, your intent, or even the emotion you're expressing. It's gotten to the point where AI activists are calling it digital phrenology. You see this in research coming out of China as well: someone claimed to have built a classifier that could identify whether you were a criminal, and they did it by training off mug shots of Chinese prisoners. It gets better: when researchers applied explainability to it, they found that the identifying feature the model was latching onto was actually a smile. So imagine this classifier deployed: technology that could be misused to kill anybody it thinks is a criminal, who is actually just smiling. That's an obvious limitation of the technology as it exists today, but there are serious questions about whether your intent or motivation can ever be gleaned from your physical appearance at all. Go ahead and ask your question; introduce yourself.

Thank you. My name is Straith, and I'm a human-robot interaction specialist. For the last few years I've focused on using robots to social-engineer people, as a way to figure out how to defend against these attacks and to make sure people are aware this is an issue. We'd need way more time for this, but it's very much in my wheelhouse, and here's what's bothering me about your presentation: you're talking about AI, you're not talking about robots, and they are very different. When you look at human-robot interaction research, you look at the physicality of robots. It's great that you're talking about the AI, but if it's put in a robot that's this big and looks like a Barbie, it's going to have a very different effect than robots that look like Cylons. So there are other questions I'm wondering about: how much human-robot interaction research are you pulling into this, or is it only AI?

It's both. The campaign's name really catches people; "killer robots" works, people get it. But we're not just looking at robots; we're talking about all the different technologies we're concerned about that could become components of a system. At this point AI will be a component, and it might be placed on a robotic platform or service or product.

It sounds very much like AI so far, so I wonder how you're going to approach the physical abilities of robots, how that plays in, and where those ethics come in. If you make a robot look a lot scarier than it is, obviously people are going to interact with it differently. Are there any steps to bring that into the campaign?

Yeah, I think that's a really great point, and the campaign is always looking for experts who can contribute and help us understand the fields we're not specifically experts in. My background is in AI, and a lot of this talk is about the AI components, because you're just not going to get autonomous killer drones without AI of some kind; biometrics is part of that, and swarm technology is part of that. So you're right, for the purposes of this talk it's not going to be about the physicality of robots; it's about the technologies that are needed, the ones being developed, and the ones coming in the future that stand to be practically applied in these weapons. But I would absolutely love to have a deeper conversation with you about that. And whether robots are anthropomorphized is a big deal, but these really are not going to look anthropomorphic; as you saw from the systems we flashed
on the screen earlier, they look like weapons. You could probably design a killer Barbie, but I don't think DARPA is working on that yet, not to my knowledge. Okay, where were we?

So I think a pretty easy point to make is that this kind of weapon will fundamentally change the scale and the appearance of war. One thing we hear a lot is that this is a more precise weapon, that it's going to reduce human casualties, and of course that's the way the government wants us all to think about it, and of course the government is trying to do right by its citizens. No one's attacking the goal of defending America; obviously that's necessary. But when we think about the scale of war, and I tend to look at this a little bit in terms of game theory, what kinds of actions can a nation take to make war less bloody? What kinds of actions can we take as a society to press for peace? I feel very strongly that a version of war where it's easy and cheap to 3D-print weapons, deploy AI models on completely enclosed SDKs, and use them en masse to attack a nation will cause another escalation, another arms race. We're already starting to see it. It's not just about defense; it's also about the military-industrial complex and the drive for profit, the deployment and sale of these weapons to our allies, and their use in proxy wars. If we build scarier, bigger weapons, it should be pretty self-evident that other countries will do the same and try to catch up. And the prevailing thought is that pacifistic movements are one of the few things standing in the way of the growth of the military-industrial complex; in fact, Eisenhower said in his farewell address that only a well-informed public can stand against its unchecked growth. The cheaper these weapons are, the easier they are to make, and the less human life is put at risk by a decision to go to war, the more of these weapons will be built and the more countries will pursue similar ones. That's dangerous.

Another thing we're concerned about is accidental war, and this is something that even our rivals are speaking about publicly and are concerned about. Accidental war is something we can all imagine if we've seen things like Dr.
Strangelove, with its system of missiles that are triggered automatically and fly back and forth, raising the fear of a very large death toll. But there are examples that have actually happened in the real world. For instance, you might remember when high-frequency trading algorithms interacted as opposing forces, pushing prices ever lower, even by a fraction of a cent at a time, and ultimately helped crash the stock market. That's one example, but this shows up in all kinds of other places too. There's one hilarious example where someone was selling a book on Amazon, trying to sell it for ten bucks or so, and because of competing algorithms, one seller had set their algorithm to price the book a little higher than their opponent's, and the other had set theirs to price it a little lower, the book ended up listed at 38 million dollars. Well, it wasn't actually sold, but that's what they were asking for it.
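The mechanics of that runaway are two lines of arithmetic. The multipliers below are illustrative (in the widely reported Amazon case they were roughly 1.27 and 0.998), but any pair of repricing rules whose product exceeds 1 spirals upward forever if no human ever re-checks the number.

# Toy version of the runaway pricing loop: two sellers reprice daily
# against each other. Multipliers are illustrative, not the real ones.
seller_a = seller_b = 10.00          # both start around ten bucks

for day in range(65):
    seller_a = 0.998 * seller_b      # undercut the competitor slightly
    seller_b = 1.270 * seller_a      # price 27% above the competitor

print(f"after 65 days: ${seller_b:,.2f}")
# Each round multiplies the price by 0.998 * 1.270 = 1.267, so ten
# dollars compounds to roughly $49 million after 65 rounds.

No single step looks crazy, which is exactly the point: each algorithm behaved locally as designed, and the absurdity only exists in the interaction.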
I think the point here is that there are human oversight mistakes that can happen whenever you train an algorithm: if it's not being monitored, if it's not watched carefully, if it's not trained with every possible scenario in mind, it can fly out of control, and that's risky and dangerous. And we've tried not to make this a purely philosophical argument about whether you could build a perfect killer robot that did exactly what you said, precisely. Even if you build a perfect killer robot, it's going to interact with other systems, and I'm not going to name the country, and those interactions will produce unintended algorithmic behavior you just can't predict. So building your own perfect technological example doesn't quite work.

I'm going to breeze through this: the campaign commissioned a poll in some of the key countries and states participating in the UN process, and what we hear from the public in those countries is that they do not want killer robots developed. Sixty-one percent of the public is against killer robots, mostly for moral and ethical reasons, which makes sense. We hear the argument "if we don't build these weapons, someone else will" a lot, but even the United States doesn't sell weapons to some countries, usually for human rights reasons, sometimes political ones, so that's not a good enough argument. We're talking about humanity and who we want to be, as technologists and roboticists and human citizens of this planet. We're talking about international security, not just national security.

And we are at DEF CON, so there is another concern, a more practical one rather than an ethical one, but we think it's important to bring up all the scenarios that might make us vulnerable: hacking is a real concern here. You might think a completely closed system is impervious to this kind of attack, but there have already been cases of captured drones, and if we're deploying AI into our weapons, the models become the secret sauce, something highly desirable for adversaries to capture. We can also see things like boundary detection attacks, which would allow an adversary to learn the weaknesses of one of the models that selects and destroys targets, and with that, to evade targeting, or to craft an adversarial approach that defeats the targeting process entirely. So adversarial attacks are a real concern.

Another example of this is Microsoft Tay. Does anybody remember that? This was a funny little chatbot that Twitter attacked en masse and decided to make racist, because that's fun. Microsoft deployed this little chatbot, which was completely harmless, but because people fed it racist and offensive data, it started reflecting that back. Through some process of semi-supervised or unsupervised learning, it's a little unclear how they built the chatbot, it was able to learn from the data it took in, and that transformed its prediction ability and its ability to generate language.
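The Tay failure is a clean example of data poisoning: any model that keeps learning from whatever users send it lets those users set its behavior. The mechanics of the real chatbot were never published, so this is only a sketch of the general pattern, with made-up names and a toy score.

# Toy data-poisoning sketch in the spirit of Tay: a model that updates
# online from user signals can be steered by a coordinated brigade.
from collections import defaultdict

class OnlineLearner:
    def __init__(self, lr=0.1):
        self.lr = lr
        self.scores = defaultdict(float)   # phrase -> learned sentiment

    def update(self, phrase, user_signal):
        # Nudge the learned score toward whatever users signal.
        self.scores[phrase] += self.lr * (user_signal - self.scores[phrase])

bot = OnlineLearner()

for _ in range(20):          # ordinary traffic: mildly positive signals
    bot.update("humans", +0.5)

for _ in range(200):         # attackers brigade the same phrase
    bot.update("humans", -1.0)

print(f"learned sentiment for 'humans': {bot.scores['humans']:+.2f}")
# -> about -1.00: the attackers, not the designers, set the behavior

Twenty legitimate interactions are simply outvoted by two hundred hostile ones; nothing in the update rule can tell the difference.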
So that's obviously a toy example, but these kinds of attacks are going to get more and more sophisticated. And almost every technology in AI is dual use. That glimmer of hope I mentioned before, explainability? Explainability is also a technology that can be used to detect the boundaries of a model and understand where its weaknesses are. So dual-use technology is a real problem when you're thinking about where your own ethical and moral lines are with respect to what you work on, but it's certainly something that needs to be addressed.

So would it be better if this were all completely open source, where anybody could see it? How should we go about making good defenses, making good choices, making those ethical decisions?

Great question. I think open-sourcing things is definitely one way to go about it. Just thinking through your question a little: even if you were to do that, you might still have classified data sets. When we're talking about defensive technology, it would be great if we could say, let's have it all be open and transparent, but at the very least, a big part of the reason I joined the campaign is that this kind of enforcement needs to come at the international level. We as individuals, as software engineers, can't be responsible for what militaries want and how they will use these technologies, but what we can be responsible for is making sure the law stops this kind of weapon from being built.

Another thing that's happened in Canada recently is that the drone laws have changed quite a bit, so you're not even necessarily allowed to fly drones inside your house, because they're too close to people, and this comes from the issues with drones in airspace, for example near airports. Those laws are affecting people's hobbies, people's fun, the art we can create. To make an artistic show with drones, you apparently now have to get everybody to sign off that they're okay being within the minimum distance a drone is allowed to be from a person. So how do you think these laws are going to affect hobbyists, and industry, outside the military scope? Is it okay that we put these laws in place when they'll have unintended consequences across the board?

Yeah, we get that question a lot too, in terms of how a ban would affect industry and development. Basically: the Biological Weapons Convention and the Chemical Weapons Convention did not stop biologists and chemists from experimenting and using that technology for good. But what we will see, if fully autonomous weapons are deployed, is a huge backlash against robotics and AI and any other tech incorporated into them. The campaign tries its best not to fearmonger and to be realistic about where this technology is going, but there could be public pushback, so there are both sides of that coin. I think, I know, a treaty would have components that allow for transparency, which is kind of the open source idea you're talking about, bringing states to the table. But international treaties are not as strong as they may seem, they take a long time to show effects on the battlefield, and they very rarely, I can't think of an example, restrict development that's for good. We want to see AI for good.

So, there is an international treaty covering some robots that says you cannot update them, period. That's in the agreement: once a robot has been approved, it can't be updated. And as security people, we love our updates; updating is how we keep things secure. There are robots I've played with that automatically reset as soon as you update them, because part of the law around them is that you can't update the robot. So how do we avoid situations like that?

What treaty is that?

I'd have to look at my papers again, but it's on the hardware side, not thinking about the social side or the software side. It's a hardware treaty, I think between Japan, Germany, and a few other countries.

Super question, and thank you for giving me a chance to say this, because the one thing we need more than anything in this discussion right now is more experts and more expert voices. I was going to tell this story later, but earlier this year I was in Geneva at the Convention on Certain Conventional Weapons, the meeting of the Group of Governmental Experts that gets together to discuss what our policies on this are going to be, and I was one of maybe two or three people in the room who had ever trained a model and understood the limitations and the current research. It was really shocking to hear delegates who are responsible for governing the ways countries work together and interact, especially around warfare, and to realize they didn't know what they were talking about. And that's when you get ham-handed legislation that restricts people's hobbies. We don't want to take your autopilot drones away; that wouldn't make any sense at all. What we've been saying is that we want to prevent commanders from being able to deploy large swarms of 3D-printed weapons or bombs that find people based on flawed, and potentially forever imperfect, biometric data. So we need your help, everybody's help. If you're in AI, if you're in robotics, if you're in ethics, if you build software or SDKs, you probably know, and you're proving here that you know, a lot more about this specific topic than I do. I'm also a member of the International Committee for Robot Arms Control, with Peter Asaro, a noted roboticist, and Noel Sharkey; they are robotics experts, and they were in the room as well. So let's talk about how you can get involved and help your voice be heard.

We've got two things we want to cover; maybe we can do fifteen minutes and then Q&A after that. Basically we want to talk about the tech worker movements that are happening and what technology workers are already doing to influence this topic. (Seven minutes. Oh, a challenge. Ten? Five minutes. Ouch.) And then we want to talk about the campaign's past
successes and how you can get involved, which is really important stuff.

Good, and then we'll take questions. And tell your story, because it's really important.

So what we're talking about here is dual-use technology: just like biological or chemical weapons, the AI models you train for computer vision, for biometrics, or for swarm collaboration can be used for good or they can be used for evil. We've also talked a little about Directive 3000.09, which, I'll go so far as to say, has loopholes; it's very, very holey, and it feels like the government is trying to mislead the public and assuage fears about the kinds of things we've been talking about. Up until January of this year, I worked for a company called Clarifai, a computer vision company and one of the Project Maven contractors. I looked around and saw that the technology to build fully autonomous weapons already exists today, and I felt very strongly about that, so I asked my CEO to sign a pledge that he would never allow our technology to be used this way. When he refused, I quit. This is a reflection of what's happening in technology today: because these DOD policies are vague and unclear and have loopholes, and in some cases seem misleading, there isn't anything to stop any government, the US included, along with undemocratic nations, from building this kind of weapon and escalating the international scale of war.

I think that's what a lot of people in the tech worker movement are feeling and seeing as well: we need more attention paid to our ability to choose what we work on, and our right to know what we're working on. We see every day that a lot of these workers don't feel comfortable producing tools that increase lethality without their knowledge or consent. One example is the Microsoft HoloLens, which was, for all intents and purposes, a gaming headset, an AR device that would help you play volleyball or whatever you wanted to do. But without the workers' knowledge or consent, the executives at Microsoft decided to sell it to the military for the express purpose of increasing lethality. So now we're talking about a device that was supposed to be completely harmless but is clearly very harmful, and 300 Microsoft workers pushed back and told management they didn't feel comfortable with their work being used to this end. They were unsuccessful, but there have been other, very successful movements. Google is the prime example, where collective bargaining and working together succeeded in getting Google to drop out of Project Maven and to change a lot of its policies, including ending forced arbitration. These movements are very exciting and give us a lot of hope. The continued support of the public, and of the people who are building this technology and know firsthand its limitations and its promise: these voices are important in this debate, and that's why we're very supportive of the "tech won't build it" movement.

I'm the Silicon Valley lead for the campaign, so I see firsthand how groups like the Defense Innovation Unit, DARPA, and the JAIC are courting Silicon Valley companies, saying things like, we need help with humanitarian aid delivery, with logistics, for medical purposes, which are all good things. The campaign is not against the military, not against AI, not against robots.
What we are concerned about is that commercial tech workers are a little different. They're not your casual Lockheed and Northrop Grumman, the defense contractors that we normally think of. They have obligations to the public, and their technology can be used for many different things. Their workers aren't always US citizens, so if you're asking someone to create technology for the US military that could potentially be used to harm their loved ones wherever the US military may be engaged, that's a really big thing.

The Google Project Maven protest, that was 4,000 people who signed a letter, globally, across Google. Through forums, through chats, through Google Groups, they got together and they said, we're going to make a difference. These are people who did not perceive themselves as activists; they were engineers who were asked to create things like softback computers. And a lot of the work was also contracted out, and contractors don't have as much decision-making power; they don't have as much insight into what they're actually developing or who they're developing it for. And that was a big thing: workers have a right to know what they're working on. And what happened with Google, as you can see here, is that they incurred huge reputational damage for this, and so they got a lot of support from the academic community and from the international community for taking a stand. While the workers at Microsoft were not successful in dropping the Army IVAS HoloLens contract, Google was, and they came up with AI principles, which have their own issues, but they said they would not develop weapons technology. That was in 2018, and it was a huge example of why worker movements work, and why you should never feel alone when you're tackling a subject as deep as this.

And now it's all about implementation: what does it actually mean to not build weapons technology? What does it mean across all of Google's entities? Can they invest in companies that work on autonomous weapons? Those are the kinds of questions being asked now. Also, workers that raise these concerns can face retaliation, so we've got to make sure that workers are free to bring up concerns and to have these discussions within their industry, with their colleagues, and that's what builds transparency.

So, this is the thing we're concerned about. It's a little bit daunting when you look at all the different technologies that become individual components of this kind of weapon, and weapons in general. But, again, we're not technophobic, and I'm not saying by any means that we shouldn't continue to push the limits of deep learning and computer vision and all of these types of technologies that stand to do cancer diagnosis and rescue people on rooftops during natural disasters. Of course not. That's why I think your individual voices are needed on the policy level: writing your congresspeople, writing to your state and city representatives, telling your CEOs, or the people who have the power to communicate with our government directly, that you care. There are often opportunities for the public to comment, and even though the Defense Department is often very quiet about these opportunities, they do exist and you're able to express your view.

I'm sorry, I'm going to interrupt you; I'm going to give you five more minutes, it's already been eight minutes. Does anybody in the room know where I can buy moldable glue? Sorry, sorry. Fry's is almost out of business, I've already emptied them out. Anyway, I have to go find some things to keep the
conference glued together, literally. Brittany is my proxy.

We really would like to see the conversation get toward... I have experience with malware writing and automated malware writing and detection over the years, for advertising purposes, which is quite similar to the type of technology that is going to be deployed in autonomous, whatever you want to call these things. This is going to happen; I think it cannot be stopped.

That's where we disagree. Well, we live in a society where we have freedom; there are societies where there is no freedom, and we need to figure out how we are going to contain this technology, because we can't control the entire planet. International humanitarian law has been successful in banning weapons before. It is possible, and we can do it again.

I think their answer is policy, but policy does not ban these weapons, it just makes them classified. I'd like to see a citation there. But it's classified. It's classified, yeah.

So again, the group of people we're working with in the Campaign to Stop Killer Robots are Nobel Prize winners for their work on nonproliferation and nuclear war, so I think we have every reason to believe that since we've been successful in doing this before, we can do it again. And a lot of the benefits can be achieved with semi-autonomous systems that would not cause the extinction of humanity. You can achieve precision and accuracy without ceding that critical human judgment, the decision to take a human life, to a machine. So we have hope, and we hope you guys have hope too.

These are examples of the past treaties that the campaigners who worked with us have successfully achieved, and we have seen the consequences: soldiers aren't dying en masse from mustard gas attacks in wars anymore. The notable one here is the blinding lasers ban, which was super influential because it was a preemptive ban, which means there's precedent for banning a technology before it is deployed, before there are victims, before we see all the horrific injuries. Landmines used to kill thousands upon thousands, mostly civilians, years after a conflict ended. Those were banned in 1997, and we are progressively seeing those numbers go down, to where now only a select few non-state armed groups use landmines in intrastate conflicts. So treaties have an impact. They set norms, they bring the international community together, and all of those weapons up there that you don't see used on a regular basis, it's because there's international law in place. If we didn't have that, you would see massive, indiscriminate civilian harm. So it is possible.

Yeah, so that's what we're doing, and there are a couple more meetings coming up later this year in Geneva and in New York City. So if any of you are passionate about this issue and would like to help us out, talk about it. These are kind of our ten quick points, ways tech workers can get involved, from low effort on up; you can also find them in the back. Signing a pledge might seem like not a big impact, but for us, showing growing numbers of technologists who support our cause is super important, because diplomats love data and numbers. So you can sign a pledge. Obviously, donating funds: Liz does all of her work pro bono, I work full time for the campaign, and we are very, very small. There are four full-time staff on the campaign, and we support the work of 116 organizations, everyone from a small policy roundtable in Cameroon educating policymakers there, to a campaigner going to a tech conference in Argentina and talking to folks like you, but
doing it in Quebec. We do a lot of, like, education services to make sure that this isn't a US campaign; this isn't a US issue, it's all about international security, an international effort. And what it takes is working with technologists, working with AI experts and robotics experts, working with religious leaders, working with youth. We were just at the World Scout Jamboree; I don't know if anyone was a Boy Scout or a Girl Scout, but we talked to thousands of kids who did kind of simulated robotics experiments, just to see their reaction, and their "what can we do to help" gives me hope for the future generation.

I don't know, who's ever called their congressman or written to them? Yes! Good job, guys. I used to work for state government, and I was the one on the other end of all those calls who relayed them to the representative, and it works, I promise, because it's not just you calling; there are usually multiple people a day, multiple calls a day. So that's really important.

Basically, talk about this. We try to speak at tech conferences to kind of bridge that policy-tech gap. If you have a blog, if you have a podcast, if you're an Instagram influencer, whatever it may be, talk about it, say that you were here. If you have a sticker, stick it on your laptop; visibility is really important. Hold an event on this. You have a network of industry peers; talk about this issue, because a lot of you have heard about it, which is great, and now we just want to foster more debate on it. If you have particular skills, unique ones, come talk to me, and I can tell you more about volunteering. And lastly, if you are working on killer robots, or your company is, you can whistleblow internally or you can whistleblow externally, and we can provide legal help and access to media. But that is a last resort; we don't want anyone to have to quit their job in order to make an impact, because not everyone can afford to do so. That is absolutely the last step. We can do this. Thank you.

So we want to thank our speakers again for doing this talk. If you have a question, please come up to the mic.

Wow. So, on the list of banned weapons you had: mustard gas has a recipe, and landmines are kind of a basic design; there are certain components that are definitely in it, you step on it and you're done. With killer robots it's like, what is a weapon? For example, the three jobs robots usually take on first are the dirty, the dangerous, or the dull. Dangerous is usually something that might involve what we could call a weapon, like a drill they use in mining, or arc welders, or things like that. So how does that factor into the description of what a killer robot is? Is that even defined?
No, it's very hard. In the meetings in Geneva with the diplomats, it seems like every country has their own flavor of it, but there are some key parts that are consistent across all of them. Meaningful human control is one of them, and having a human in the loop is kind of a mandatory piece of that. But the truth is, we have our definition of what we are looking to outlaw, while as a global community we are still looking for a shared definition of what this means. Does it require that AI is involved? I mean, that's a possible way, but I'm not sure that it will; it is possible. And I think if you tear apart the motivations of the potentially bad actors, there is a lot of commonality, but it's not written in stone yet. This has actually been ongoing for many years and is continuing, and the next meeting in Geneva is about drafting the guiding principles.

Yeah, international treaties will rarely have hardware or software labels, at least in the humanitarian disarmament and arms control world. It would probably be super general; it would say "we prohibit fully autonomous weapons and we need to ensure meaningful human control." That would be, like, the definition, and then states are free to interpret that, and it would all be case law at the international and national level. You actually don't want to include specifics in an international treaty, because then you bind it in time, to some aspect of artificial intelligence or whatever it may be, all the dual-use technologies we talked about. If we create a treaty banning specific things, there are aspects of this we can't even imagine right now, so we want to make sure that this treaty outlasts all of us.

Hi, you guys brought up a lot of good points during your talk, but I had one question about the culpability aspect that you discussed. I'm going to go back and use the example you used, that a robot can't tell someone's being coerced to fight. Where I'm confused, though, is that I don't think human fighters are always able to do that either, right? We've fought wars in history where entire masses of people have been conscripted, and guess what, they got gunned down just like the guys who picked up a gun willingly. Same point: there have been times where people were shot while surrendering; yes, it's a war crime, but it happened. It feels to me like you're saying, simultaneously, we should have a human in the loop, but we're also going to hold AI to a standard that we don't even necessarily hold human soldiers to. Like, this might be something where I'm misreading it; can you square that circle?

No, I think that's a very fair question, but there are some other statistics I might mention, which is, gosh, I can't remember the exact percentage, but during World War One or Two there was research done on how many shots were fired versus how many shots were aimed to kill, and I think it was less than half that were aimed to kill. So there's kind of a conscientious objection that goes into conscripted warfare as well, which allows people to vote their conscience by not killing the people, in Vietnam, say, that they would otherwise be ordered to kill. I think the difference here is the guarantee that a machine will never be able to make a conscious choice about it. Even in Army of None, the seminal book on autonomous weapons, the first example is from when the author was deployed: he was in a battle zone, and he saw a young child with a very large gun, and the child was not moving to kill or threaten or harm them. But a similarly trained robot that
is trained to detect "does this person have a gun," and has authorization to kill without approval from a human, will not have any of that opportunity. So I think it's yet to be seen exactly the degree to which this will increase bloodiness, but there's real, solid reason to believe that it will. And our role as civil society is to make the big ask. We're asking for a ban on fully autonomous weapons. What usually happens is policymakers and others get their hands on it and they water it down, so we are not going to weaken our call. We are going to have humanity aspire to the laws of war that we do have, and constantly aspire to a higher standard, a more humane type of warfare. Someone has to play that role; we have to aspire to that. And in terms of holding machines to a higher standard than humans: you know, when spears were invented, it changed the nature of warfare, and so will killer robots; you're constantly updating your definitions of that.

My question's sort of related to the last question, actually, in terms of explainability. There is also a growing body of evidence in human psychology suggesting that our decision-making processes are not half as explainable as we'd like them to be, whether in conducting war or in daily decision making. We're influenced by external factors we're not conscious of, or we retroactively explain the decisions we made at the time. In a time of warfare, when you are making split-second decisions, life-and-death decisions, it is very unlikely that the poor 17-year-old who's only just been deployed for the first time, or the poor veteran who's been deployed on his sixth or seventh mission and probably has a severe case of PTSD, is going to make optimal decisions; he'll then retroactively justify them. So I guess it comes to the question: if we are holding machines to a higher standard, is that a fair standard, when human decision-making is equally imperfect and unexplainable, and when we do mental gymnastics to justify those decisions in warfare as well? I mean, is that a problem with warfare fundamentally, or with a particular use of technology that enhances warfare in a way we haven't seen before?

Yeah, that's a really interesting question. It's actually one that I haven't heard before, so I'm really excited to think this through while I stall a little bit for time. Think of it this way: neural networks are modeled on the biological functions of the human brain, and we haven't deciphered the human brain yet. But they're really just inspired by the human brain, not based on it. And to your point, if we don't understand ourselves, then we can't understand the machine, but there's no other option. It's not about warfare in general; it's about laws that require that somebody is accountable, and that we can explain the rationale behind a decision to kill somebody. Even if a human being lies, I mean, it's a vulnerability of every law that people can lie to try to get out of having broken it. But a machine, we can't understand the wildly different mistakes it makes. Like, a human being is not going to go into a war zone and, you know, kill all the ducks for some reason. You know, maybe they would, with long periods of time and nothing to do; like, seriously, guys. Fair enough, I was trying to think of some outlandish example that would never, ever happen, and I failed miserably. But, you know, I think we only have the two options,
right? If we're going to have war, it's humans or machines or both, and I definitely think that the kinds of outlandish decisions that machines make, especially at this nascent point in the technology, are a lot riskier, and so the degree of harm we're looking at in delegating this kind of decision to a machine is greater.

And maybe the 17-year-old is a disingenuous example, because when we talk about armed drone warfare right now, think about a high-value target. You have a lot of eyes on that target: you've got lawyers, you've got soldiers, you've got policymakers. So it's very rarely just one soldier making individual kills in a high-value-target situation; it's actually adding more brains to the decision. Still semi-autonomous, still human control. So really, that's what we're advocating for: you can use the technology to lift the fog of war slightly, but you will never lift it fully.

So does that mean you guys are fairly comfortable, as part of the campaign, with uses of autonomous technology that still have the human in the loop, or on the loop, where the technology is augmenting the human? Are you reasonably comfortable with that? Because I noticed that neither of you have mentioned, say, the White House AI strategy, which puts ethics supposedly very much at the centre of it, and states very clearly that it won't be about fully autonomous weaponry, only about autonomous or semi-autonomous weapons augmenting the human soldier.

I think you'll have a variety of different opinions on that front. I do think that machine-augmented human participation in this kind of question is acceptable under certain circumstances, which I can't define off the top of my head. But I do see a need, because, as I mentioned, I have reason to believe that these kinds of weapons already exist and that various governments are willing to deploy them as they are right now, regardless of whatever policy Directive 3000.09 claims to state.

There are a lot of amazing organizations working on human augmentation or on armed drones. The campaign, since we're small, has one concrete goal, so we don't really take a position on the other things. The campaign's goal is a ban on these weapons; we have to keep it narrow and laser-focused like that in order to actually achieve it. So we work with a lot of people who work on different aspects of the problem, and they have various opinions, but the campaign is strictly focused.

Thank you. Sure, thank you. Hi, so I have two questions for you. You mentioned that all of these weapons have been banned, and you look forward to banning killer robots. So are you worried about the stockpiling aspect of that? Because we may have banned biological weapons, but we have huge stockpiles of smallpox and other diseases, and the same is probably the case for all of those. So is there any way to address that? Because in my mind, that may be an even bigger risk, since we could pull those out at a moment's notice and no one would have any counter.

That's a fair argument. I think a lot of these non-binding policies, the national AI strategies and Directive 3000.09, they're great in peacetime, when we're having these philosophical debates. But what happens when World War Three breaks out? Those kinds of non-binding things go out the window. That's why international treaties are important: they give two parties, three parties, four parties, whoever is involved in the war, the same standards. On the stockpiling aspect: all of these treaties,
except for blinding lasers, have a section on stockpile destruction and timelines for that. If states don't have the money to do stockpile destruction, they can actually get funds from other states, so it really puts a framework in place to carry that out. It is built on trust; a lot of them don't have international inspection regimes, so no one's going in and saying "have you destroyed all your cluster bombs?" But states are literally, voluntarily, taking pictures, bringing people there, being proud: look, we actually did this, we actually destroyed our stockpiles. It's a big celebration when they do and they're able to stick to these timelines, and they're able to ask for extensions. We're just creating a framework that works and that states are signing on to. Was there a second question?

So, do you have a stance on applying fully autonomous technologies to destroy other fully autonomous technologies? In my view, the only conceivable way you could actually enforce that treaty or idea would be through AI. So it seems like you would be against that, but if the ultimate effect was to save lives or prevent that technology from taking over, would you be in support of it, or would you say it's too risky?

Military commanders will tell you that warfare is asymmetrical, and that if you're fighting a gun with a gun, you're just going to lose a lot of lives and not achieve what you're actually trying to achieve; instead, you create tactics and other techniques to combat whatever the enemy is using. So if you have a killer robot facing a killer robot, and it has these unintended interactions that I mentioned, that you can't control, you're actually not going to want to use that. You're going to use some other type of technology that might be similar or incorporate elements of it, but you're going to want something that gives you control and gives you a level of capability.

Well, I mean, you're going to need something even more lethal than these, right? So wouldn't that be kind of an escalation at that point?

Absolutely, it would be. So I think the point here is to think about what the reactions to building these technologies would be, and that's why we're seeking a preemptive ban. Have you ever seen the movie Terminator 3? I really like this movie, not because of the AGI component, the general intelligence component, but because it's got really big explosions, really big-scale explosions, trucks running into walls and just destroying stuff left and right. And I feel that that's really representative of what machine-on-machine warfare would look like, because you're not going to be able to train the machines to understand every potential scenario where they're going to be deployed. What if they're deployed next to a big building that they could crash into and destroy? I just think that the scale of the damage and destruction these weapons will do, when pitted against each other or when opposed by asymmetrical forces, would be all negative. So in that case, it's again de-escalation over escalation. I don't know.

I know a little bit about what could happen; I obviously study this. And thank you for doing this talk, by the way; I'm obviously a big supporter of the Campaign to Stop Killer Robots. I thought I'd bring up the elephant in the room, which is that I think a lot of people support the development of these because they think they're going to make them safer. I think that's essentially what it comes down to: yes, they may have
all the problems you describe, but those are going to be happening elsewhere, and they think this might give us some edge to preserve ourselves. And I wanted to ask your opinion about this, because I think in many ways people don't realize that a modern society like ours would be more susceptible than less-developed societies, because, again, of big data: we're tracked pretty much everywhere, and that's what these algorithms will use to target. But the other thing I'd like you to discuss, if you can, is the second- and third-order effects of these. Let's say these weapons function perfectly. I think there's still a huge problem with them, in that they'll change us. And I think that goes back to the way conflict works. Conflict has always been a means by which people resolve differences, and the shape of the conflict determines what society looks like. If you need a lot of people to resolve conflicts, you distribute decision making, and that's friendly to democracy; but if you centralize decision making in conflicts, you change society. And I think one of the really serious effects of these weapons is that you'll need fewer people to conduct conflicts, which will undermine, it's like an acid that will erode, democracy, which is really the foundation and the whole point. So, with respect to everybody here who works in this industry, that's my chief concern: even if they work perfectly, they're still awful, and I'd love to hear your thoughts about that if I could.

Yeah, I completely agree. I think that removing the human capital, and the PR damage, from war will absolutely make it easier for generals to deploy war. We saw a really dramatic escalation in drone attacks just from simple remotely piloted drones; that distance created psychological safety for people to be more comfortable in deploying, and in the use of killing. And it's even more escalated when you're pressing a button and not even seeing a video feed coming back, because there isn't one; all you see are points of radar on a screen, like a Pac-Man eating them up. That would, again, make it easier, cheaper, and more dangerous, and more bloody, for us to attack other nations, which is something that, you know, pacifists really don't want us to be able to do. Marta?

I think it's a good point on the developed countries versus the less developed countries, because I think a lot of people, including with armed drone warfare, are able to distance themselves from the conflict. And I would pay $20 if you can name all the places the US is actively deployed; the public doesn't really know, and that's a huge issue for democracy and for transparency. And with these weapons, oftentimes we think about big, complicated systems that are super expensive to build and acquire, but what we're seeing is that you can tweak them and make them smaller and cheaper and dumber, and not actually follow the rules of war. The United States is trying to develop autonomous weapons that can follow the rules of war; we don't think that's possible, but other countries aren't even trying to do the same thing, and that's where the danger comes in, where anyone can get their hands on them, and that's why we need controls.

We've got 10 more minutes, so 5 minutes.
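Tying together the earlier exchange about humans "in the loop" versus "on the loop": here is a minimal sketch of where that authorization sits, assuming entirely hypothetical detection and engagement functions. This is not any real weapons system's control flow, just the shape of the gate the campaign argues must never be removed.

```python
# Minimal sketch of the human-in-the-loop gate discussed above.
# detect_targets and engage are hypothetical placeholders; the only
# point is where the human authorization sits in the loop.
from dataclasses import dataclass

@dataclass
class Candidate:
    label: str         # what the model thinks it sees
    confidence: float  # model confidence, not ground truth

def detect_targets(sensor_frame) -> list[Candidate]:
    """Stand-in for an object detector; returns candidate targets."""
    return [Candidate(label="armed person", confidence=0.91)]

def human_authorizes(candidate: Candidate) -> bool:
    """A named, accountable human reviews each candidate.
    In a fully autonomous weapon, this function is deleted."""
    answer = input(f"Engage {candidate.label} (conf {candidate.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engage(candidate: Candidate) -> None:
    """Stand-in for the weapon action; never reached without approval."""
    print(f"engaging {candidate.label}")

def control_loop(sensor_frame) -> None:
    for candidate in detect_targets(sensor_frame):
        # Meaningful human control: the machine proposes, a human decides.
        if human_authorizes(candidate):
            engage(candidate)
```

In this framing, the entire policy debate compresses into whether human_authorizes can be stubbed out to return True: everything else about the system stays identical, which is exactly the dual-use worry.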
Last two questions. This will be one. Thank you both for doing this; this was very interesting and helpful. The question I have is somewhat relevant and somewhat tangential. We're living in a society where machines will make more decisions for us, and many of these are life-and-death decisions. So what are the frameworks we use to rationalize this? It doesn't have to be autonomous weapons per se; the classic example is autonomous cars. Today, a human driver typically prioritizes the life of the driver over the life of the pedestrians if they're about to be in an accident, and that's kind of an aggregate. So what are the frameworks and ways we think about machines making more decisions for us in our day-to-day lives, where autonomous cars are the prime example?

Can I just ask a clarifying question: what do you mean by frameworks?

Well, you posed a whole bunch of questions about how we should think about autonomous weapons, and advocated policy to prevent the development and the use of them. But we will be in a society where machines do make life-or-death decisions. We're deploying autonomous cars on the road, legislation seems relatively friendly, and this is a thing that will happen; in those cases machines are making life-and-death decisions. So how do we deploy technology where algorithms and software will make decisions for us?

Yeah, I mean, this is not a perfect science yet. I think that's what's so fascinating about AI ethics right now, that it's nascent and still developing. A lot of companies and countries are arguing over what AI principles and guardrails really look like, and everybody disagrees. I mean, just like the privacy legislation in the EU: people are looking for meaningful control over their own data, and legal scholars are arguing about it left and right. So I don't have an answer for that necessarily, but I can say that sufficient guardrails, sufficient human control, monitoring, explainability, all of these things are going to be important. Understanding bias, too. And bias is asymptotic, meaning you're never going to get to a completely cured model of bias, so we need laws to establish what the sufficient degree of bias is for various use cases, for various applications. Uses with more life-and-death decisions will probably have to have lower percentages of bias under different protected categories; and then, once you finish up with one protected category, you need to add all of the rest of them over time (the sketch below gives a flavor of what measuring that might look like). So I'm not sure if that exactly answers your question, but it's kind of where we're at right now. The point is actually having that policy in place.

Autonomous vehicles are a little different, because they're not designed to kill, and fully autonomous weapons are literally designed to be lethal. So we try not to draw too many connections, but obviously any policy that's developed around AVs, and around cyber security and cyber warfare, will also influence the policy debate happening here.

Can I do one small pushback? Autonomous vehicles will never be perfect. Right now there are around 35,000 deaths per year in the U.S. from regular cars, and even if autonomous cars get ten times safer than human drivers, that's still 3,500 per year. So at some point these cars will kill people; they will have to make these choices. It's an inherent complication of autonomous vehicles. The question is, should we have autonomous vehicles that are safer than human drivers, right?
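As an aside before the answer: here is a minimal sketch of the kind of per-category bias measurement the speaker is gesturing at, assuming a simple demographic-parity check over a model's decisions. The metric choice, the toy data, and the threshold are all illustrative assumptions; a real legal standard would have to pick and justify its own.

```python
# Minimal sketch: measuring a "percentage of bias" per protected
# category as a demographic-parity gap. Metric and threshold are
# illustrative assumptions, not a legal standard.
from collections import defaultdict

def positive_rates(decisions, groups):
    """Rate of positive decisions (e.g. 'approve', 'flag') per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions, groups):
    """Largest gap in positive rates across groups (0.0 means parity)."""
    rates = positive_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: the same model outputs, sliced by one protected
# category. A life-and-death use case would demand a far smaller gap.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "b", "b", "b", "b", "b"]
assert parity_gap(decisions, groups) <= 0.5, "gap exceeds allowed threshold"
```

Repeating the same check for every protected category, and then their intersections, is the "add all of the rest of them over time" part, and why the speaker calls bias asymptotic rather than solvable.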
And here's the thing about research and development in the military and other places: you're testing in a lab, in a controlled environment, whereas Silicon Valley likes to move fast and break things, and test things in the real world. They'll deploy facial recognition technology in some small town, see what happens, see if anyone notices, and hope no one does. It's wrong, it's inherently wrong. But also, if you test things in labs and then put them in an environment where maybe the machine is learning on the fly, is that something we're going to allow? Or are we going to allow it to learn only in controlled environments? What does that mean? That R&D aspect is what the military is talking about.

I guess since we're out of time, maybe I'll summarize what I think your perspective is, and you can say if you agree or not. Ask a question, yes or no? Exactly, no. Just that legislators are increasingly required to make technical decisions, and we increasingly find that our policymakers are not well equipped to do that. But we can recall that Clausewitz said, in the early 1800s, that war is an extension of politics by other means, and I think as technologists in this conversation we have to keep in mind that it's really a political question. Unfortunately, that's it, that's my one thing that you should keep in mind.

Great quote. Inaction in and of itself is a political statement: if you do nothing, and just have a submissive relationship to the tech, and let defense contractors and others who have a stake in making a lot of money killing people do as they will, what does that mean when Google is the one making money killing people? Those are the kinds of questions for us.

So we want to thank our speakers again for this awesome talk, and yeah.