Good afternoon, everyone. I'm Michael Duffy from Time Magazine, and I welcome you all to the What If session: What if robots went to war? Now, that title is a little un-WEF-y. It's not something you've seen before, really, in a forum setting like this. And it sounds a little bit more like something out of Star Wars than out of Davos. It raises the specter, I think, in some people's heads, of giant clone armies amassing on large battlefields for final conquests. And what we're going to talk about today is something a little more complicated, a little more challenging, and perhaps much closer than many people realize. And that's why we're here today.

It's also part of a series at Davos this year that the Forum and Time Magazine are cooperating on, called What If, that tries to get past today's headlines, past this week's crisis, and beyond, perhaps for both governments and businesses, the challenge of meeting the quarter, to invite all kinds of people to look well beyond the year, to consider possibilities good and bad, black swans, white swans, things that we haven't had a chance to think about, haven't had a chance to contemplate, that might actually occur in a timeframe that would surprise you. Some of the others being discussed this week here at Davos: what if a lot of people started living to 150, what would that mean? There's another one tomorrow, co-hosted by Rana Foroohar, my colleague at Time; essentially the question there is, what if your brain could confess your sins? These are all tied together, as you can see: living to 150, robot armies, brains. Anyway, these are a new sort of focus and framework for some of the sessions here, and we think they're gonna be provocative and interesting.

We've also twinned these conversations with a poll that I invite you to take now. You can log on to wef.ch/vote, and there are three questions we're asking today here in the room. We invite you to do that; leave your browser open and it will refresh with the other two questions. We've polled these questions online for the last week to 10 days at Time, so we have results from beyond this room, and we're hoping to get your thoughts about the questions from here in the room.

So while you do that, I'm going to introduce our fantastic panel today, to talk about, essentially, autonomous weapons. And I wanna say something, by the way: I'm gonna ask each of them a very broad opening question, but these are kind of joint opening statements, so we can move to issues quickly. Sir Roger Carr is the chairman of BAE Systems, which is, in terms of global reach and influence, one of the most important defense contractors in the world, and has been for a long time. Angela Kane has been in conflict resolution for the UN and other NGOs for several decades, and she's now a senior fellow at the Vienna Center for Disarmament and Non-Proliferation. Stuart Russell is at UC Berkeley; he is a professor of computer science and one of the leading spokesmen in the field of trying to limit these weapons and their use. And of course, Paul Winfield is at the Bristol Robotics Lab. Alan Winfield, excuse me, is at the Bristol Robotics Lab at the University of the West of England. So we're glad to have them all today. We look forward to a good conversation, and we'll take some questions in the latter part of this. I wanna start with Stuart.
You have been outspoken in the last year about these weapons, and something has hastened your concern in the AI field. So could you talk a little bit about how that has taken place?

Sure. So I think, actually, I should apologize on behalf of the AI community for not addressing this issue much sooner. I think if we had started to understand 10 years ago where things were going, we could have avoided a situation where we may be heading into a very undesirable arms race. So a couple of years ago, the United Nations started to take seriously the possibility that we would have autonomous weapons, which means, very precisely, weapons that can locate and attack targets without human intervention. So I wanna be very clear that we're not talking about drones where a human pilot is controlling the drone, is looking through the camera, and is choosing when to fire the missile. So those are off the table. Well, I should say they're already on the table, in their millions, and there's nothing we can do about that.

So the UN began this debate and has held several sessions in Geneva to try and understand what to do. The first question that occurred to everybody is: can these weapons, if they are making decisions on who to kill, follow the laws of war? And the laws of war are quite difficult even for human commanders and soldiers to follow. They involve making sure that you're not attacking civilians, that there is military necessity for the attack. For example, we're not allowed to shoot at pilots who are bailing out of an aircraft on a parachute. So there are many rules of engagement that are quite complicated. Proportionality asks: is the risk of collateral damage proportional, or reasonable, given the value of the target that you're trying to destroy? So these are very difficult things for AI systems to work out. The task of actually finding people and killing them is relatively straightforward in comparison. So that's the first set of debates that have been going on.

The second question that comes up is a strategic one. And we have to get away from the idea that, well, instead of having a human soldier or a human drone pilot, we're just gonna have an AI system doing the same thing, perhaps eventually doing it better, and wouldn't that be great, because then human soldiers don't have to die? That's an extremely naive set of questions to ask. It's sort of like saying that if we replace spears with cruise missiles, we'll use the cruise missiles in just the same situations that we would use the spears. That's not the case. And the defining characteristic of autonomy is not so much that you don't have to put your soldiers at risk and so on. It's that you don't need a human being to carry and direct the attack. So a million machine guns can wipe out everyone in New York City, but only if you have a million soldiers to carry them, and five million human beings to support those soldiers, and a whole nation state to pay for all that. But a million autonomous weapons needs only one person to launch them. And I hate to be geeky, but you just write "for i equals 1 to a million, do." That's a little piece of code, for those of you who are not coders. And off the machines go and do their stuff. And do we really want to put the power to wipe out everyone in New York City in the hands of individuals who just need to be able to afford to buy those weapons? They don't need to be a nation. They don't need to have political support.
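(For readers who are not coders, here is one way Stuart's "little piece of code" might be rendered, a minimal, purely illustrative sketch of the scaling point he is making. Every name in it, the `AutonomousWeapon` class, its `launch` method, and the idea that launching is a single call, is a hypothetical stand-in, not any real system's API.)

```python
# A minimal, illustrative sketch of "for i = 1 to a million, do":
# once the weapon itself is autonomous, scale is a loop counter, not an army.
# All names below are hypothetical placeholders.

N = 1_000_000  # one person picks this number; no soldiers are needed to carry anything

class AutonomousWeapon:
    """Stand-in for a weapon that locates and attacks targets without human intervention."""

    def __init__(self, ident: int) -> None:
        self.ident = ident

    def launch(self) -> None:
        # After this call, no further human direction is involved;
        # that is exactly the property the panel is debating.
        print(f"unit {self.ident} launched")

for i in range(1, N + 1):
    AutonomousWeapon(i).launch()
```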
They don't need to be part of the international system. They don't need to be subject to sanctions, and so on. So from a strategic point of view, I think this could be an extremely bad idea. So those two considerations, humanitarian and strategic, led us, meaning the AI community, to come together, and over 3,000 scientists and engineers in the AI community wrote an open letter in July saying that we really need to have a treaty banning these weapons.

Thank you. And we're gonna come back to the treaty and how a ban might work in a minute. Sir Roger, talk to us a little bit about how you see the technical and other opportunities and challenges in this space.

Well, I think the starting point in answering the exam question about robots is that the definition of the robot you've just given is the extreme version. And there are two layers before you get to the extreme. There's the quite simple robot, which is used in warfare today, that does the dirty jobs, you know, that looks for mines, that gets involved in firefighting, that keeps people out of harm's way. And I don't think anybody would object to that. That's a sensible use of technology.

The second layer, I think, is a little more complicated, where the technology is much more sophisticated: the use of sensors, algorithms, decision-making capability, and learning capability are embedded in the device. But the linkage is still back to the human being. And that, I think, takes some of the burden of decision-making away, you know, through the assimilation of data and the ability to understand the theater of war, without removing the responsibility from an individual, the person that actually decides to finally deploy the weapon. And I think that is very important. And, you know, my own judgment is that that is the use of technology without straying into areas where, on an ethical basis and a moral basis, you would find difficulty in operating. If you have the man linked through the umbilical cord to the machine, then the man is still bound by the conventions of war, whether it's the Hague Convention or the Geneva Convention. And if that man does something wrong, there's an audit trail, there's a responsibility; there is no anonymity when a person presses the button. That level of sophistication is developing now. It is available more and more, certainly in aircraft. It's available in air-to-air missile activity. And again, it removes some of the risk, but it doesn't take away or absolve the individual from responsibility.

The final level, the fully autonomous machine, I think is a very difficult area. Something that can decide what its target is, how to address it, adjust its behavior, and deploy the weapon without any human intervention is taking this to another level. And I think what that does is place the AI weapon in an area that becomes completely devoid of a sense of responsibility. It's removed from any sense of ethical or moral concern. It finds difficulty in discrimination and in observing the basic rules of war, rules that are clearly broken by human beings but at least exist as a framework. A machine can act with no emotion, no concern, no sense of mercy or discrimination, not even identification of what is friend or foe.
And from a technology point of view, and there are people here that know much more about the depth of the technology than I, we still have some way to go before we could produce such a machine, although there are 40 countries working at it now and the potential of a $20 billion market in a few years' time. I believe there's some way to go before anyone would believe we have a fully autonomous weapon carrier that we could deploy with confidence of no technical risk and reduced moral concern.

Maybe we can come back to the timetable in the next round. Angela, talk to us a little bit about the international community: is it properly configured to either ban or regulate this, or just manage this kind of conflict were it to come to pass?

Well, I think that Stuart has already spoken about the international initiation of this debate. Now, I must tell you that I find that it came too late, and it really still lags behind. And that is also true because you have a very glacial pace of international negotiations, and they haven't even really started. And I remember there was the Special Rapporteur of the Human Rights Council on, and I have to be careful that I don't get the title wrong, extrajudicial, summary or arbitrary executions. Executions is the word here. And he put out a report in 2013 about this issue, which was very comprehensive. And I at the time was Under-Secretary-General and High Representative for Disarmament Affairs. And I tried to get member states involved in this, to say, you must really look at this question, because the technological developments, AI, robotics, are so fast that we have been overtaken by events. And the pace of looking at this issue in terms of international law is far behind. And the time was not ripe for it.

The first time that member states really met was under a convention called the Convention on Certain Conventional Weapons, the so-called CCW. They met, but of course the CCW only has two-thirds of the membership of the United Nations. And what happens is that there are many countries and many representatives in the international community that don't really understand what is involved. They don't have this development; this development is something that's limited to a number of advanced countries, and they're going ahead with it. But what is very concerning, what needs to be understood, is exactly what both of you already mentioned: that the conduct of war might be relegated to a machine. And I find that it really started with the drones. The drones were used initially only for surveillance; now they're used for a lot of other purposes. And that is something that has expanded very rapidly in a very small number of years. And so the concern that was raised in this letter that the community published in July was very, very well taken. The monopoly of the conduct of war is really being taken out of the hands of humans, or it could be very shortly taken out of the hands of humans. And I don't agree with you, Sir Roger, when you say it's still the human element, because what we've tried to do ever since the end of the Second World War is to basically limit the conduct of war, or limit the conduct of conflict.
But if you have someone who's sitting somewhere 3,000 or 6,000 kilometers or miles away, pressing a machine to direct a robot or an autonomous weapon against an unseen enemy, that might be a target, but not a person, and I find that a very different experience from actually being on the battlefield. And the other problem that I have with this is that when you look at all of these games that are coming out, whether it's invasions or something, there are these robots, and they stretch out a hand and they have like a little weapon and it immediately kills people. And that means there is a desensitization of what it actually means to have a war, to have a battle. And that I find very dangerous, because it makes war something that's costless, other than an economic cost, but not a human cost. And that's really where it's going. And I find that that really needs to be addressed. And I can come back later on to what actually is happening on addressing it and how I see the way forward.

Thank you. Alan, you have been involved in robots and ethics, which isn't something people necessarily put together, for a very long time. Sum up how you see the ethical challenges here.

Sure. The first thing I should say is that I was one of the signatories of the open letter, so I strongly support the work of the International Committee for Robot Arms Control. There are clearly huge ethical implications as well as technical objections. And I think Sir Roger's already alluded to some of the technical problems, but we'll perhaps return to those. Staying with the ethical: essentially, if you give a weapon the ability to decide when to fire, when to pull the trigger if you like, then you're giving the robot or the AI system moral agency. And the problem is, of course, you don't need to think very long about moral agency to know that with moral agency comes responsibility. Now, you know, we adult humans are all full moral agents. But we cannot build an artificial full moral agent, and probably won't, I'm sure, Stuart, you'll agree with me, probably won't, in my view, for hundreds of years. I mean, some AI people, you know, colleagues, are more optimistic than that. But the point is that we simply cannot build a robot or an AI system that has moral agency. So for me, there's a kind of ethical red line between humans being ultimately responsible for pulling the trigger and robots pulling it. And I think we should not, we cannot, cross that red line.

This gives me a chance to actually pull up, I'm now cueing verbally the magicians, who are somewhere, who are going to pull up the results of our first question. There it is, like magic. If your country was suddenly at war, would you rather be defended by the sons and daughters of your community or by an autonomous AI weapon system? Well, this isn't that surprising. But perhaps the margin is. Is this in the room, or is this beyond the room? I'm guessing this is in the room. This is the room, okay. And is there a broader poll that you could also show us on the same question? Yeah, well, closer. Interesting; I mean, that's surprising. Now, I think we turned this question around in the next version. Can we have the question that follows? And while you get that up: "invaded," this is the turned-around version. Okay. You can show us the results from in the room now.
Or maybe you can't. An autonomous system is not in charge of this; the human system is still in charge. Well, if they get it up, we'll talk about it. But it goes to the question, I think, of how people feel about this, how they are beginning to wrestle with it. Can I go back to what you were saying about, that's definitely not a poll result. What were you saying about what would have to change in the way we actually work through the kind of bans we've worked through on chemical and biological weapons and, in fact, nuclear testing? Is there anything like a framework for that to make Stuart's ban come true?

I think that basically what has to happen is that you have to have an international debate. And my concern is really that this international debate is so slow, and it needs to be invigorated by the scientists, but it also needs to be helped by the industry, because the industry is an extremely important player. And you asked me about the chemical weapons part. And the chemical weapons case is very interesting: it's the first treaty that was really elaborated with the involvement of the industry. That had never happened before, because it was always somehow the monopoly of the states, who really negotiated it. And I think that's one of the reasons why the chemical weapons treaty is a success. So I'm all in favor of bringing various stakeholders into this debate, not only to inform some of the member states who are, let's say, not as advanced in their knowledge about these issues, but also to bring them into the loop. Because what you don't want is a treaty that only gets signed and ratified by, let's say, 60% or 75% of the member states, but rather one that gets more universal membership.

Now, what is the mechanism for doing this? As was said, in Geneva there's a group that meets under this convention, the CCW, and it basically deals with issues that somehow fall outside the scope of other treaties. And we have a huge body of law that has been developed. Think about the Geneva protocols, for example. All of that was developed after the Second World War. You have these additional protocols that have been signed by member states, and they are largely enforced. And even if member states didn't ratify, very often by signing them they accept the moral obligation of observing them, for example. And that's extremely important.

And there is actually Article 36 of the first Additional Protocol to the Geneva Conventions. Now, the US has signed it, but not ratified it. But I would like to read what it says, because it's really important. What it basically says is: in the study, development, acquisition, or adoption of a new weapon, means or method of warfare, a High Contracting Party, i.e. a state, is under an obligation to determine whether its employment would be prohibited by this Protocol or by any other rule of international law applicable to that state. That's extremely important. Now, not every member state has signed it; a large number of states have signed it or ratified it. The US has signed it, and so they do their own, let's say, study and assessment of how, for example, robotic autonomous weapons would function, but they're not obliged to be transparent about this. So what you need to do is bring together these stakeholders, scientists, industry, and particularly member states, and then look at: where are we with international humanitarian law?
What do we need in order to get something to address this? That's the first step. Do we even have a definition of a lethal autonomous weapon system that we can all agree on? And I wouldn't spend too much time on the definition, because you get mired down in language and so forth. But on the other hand, under this convention that I mentioned, there is a possibility to add a protocol, because every time new weapons were invented, think of cluster munitions, think of other weapons, you can add another protocol that applies only to that. So you've already got the framework. And I think that that's the way to go. But you need to get everyone around the table. There needs to be a kind of gelling of the stakeholders. And that, again, includes not only member states but particularly scientists, as well as the industry.

Stuart. Yeah, so it's worth mentioning, I think, that as part of the internal review that Angela mentioned, the United States actually decided that autonomous weapons could not satisfy the laws of war. And they have an official policy, beginning in 2012, which runs for 10 years, which disallows the design and production of autonomous weapons. So they require that appropriate levels of human judgment be involved in every single attack against humans. So the US, in fact, despite its leadership in this technology, and despite the fact that most other nations are, to some extent, terrified of the United States' technological abilities in this area, is the only country that has actually banned these weapons as part of its armed forces. So from the US point of view, it would seem relatively straightforward and desirable that this ban should be extended to all the other countries that might potentially be enemies at some point, so that we don't risk having a strategic deficit.

Sir Roger, I was going to ask you two questions: first a technical one, then a sort of more political one. For those of us who aren't knee-deep in defense procurement, is this technology most advanced in air rather than sea and land? I presume; I'm guessing, but I don't know. Or maybe that's not true. And then I was going to ask you, how does a defense contractor manage a situation where the technology is running so far ahead of the protocols, or at least the apparent protocols, to manage and oversee it?

OK. Well, from a technological point of view, there are levels of sophistication on land, sea, and air. And certainly, it is a matter of record that in the air, the type of unmanned aircraft that are available now are very sophisticated and able to learn from their own experience. So that's moved on a long way. And there are shields that protect us from missile attacks which use learning technology. So there is a level of capability that exists. It does not exist, to my knowledge, in the fully autonomous weapon that has been under discussion. I just want to make a couple of points, if I may, just picking up on what's been said. First and foremost, I am the chairman of a company that manufactures equipment. I am not an advocate of this type of equipment, nor is my company. And as a human being, I share all the concerns about an autonomous system, which I think removes all sense of moral and ethical concern from war, which in itself is a difficult issue, to a level that is almost incomprehensible. So I want to be very clear about that.
I do believe that what we have seen, as we move towards certain types of robot, is an extension of the distancing of one combatant from another. And your point about the risk of desensitizing, I completely agree with. But that is what's happened over hundreds of years. I mean, the thought of soldiers in hand-to-hand combat with swords is somehow much more concerning than with rifles. And the separation of man from the actual experience of killing is something that's been going on for many years. This is an extension of it. It is important that governments draw the line where we move into territory where, frankly, we risk becoming the architects of destruction, yet simply spectators of the event. And that's not good for anybody. So your point about the engagement of weapon manufacturers, I think, is absolutely valid. And I think there is a complete understanding that lines need to be drawn. We all recognize that even when they are drawn, chemical warfare, cluster bombs, others will use those and seek to use those. So the rules aren't an end to the problem. It is human beings that both create the problem and are the problem, because their pursuit of power and territory will reach out for any weapon at their disposal. We have to be very careful those weapons aren't provided to the wrong people with the wrong ambitions.

And my question about the challenges of managing, a little bit, for a company, where the protocols don't always keep pace with the technology, or...

For me, that's pretty straightforward. There's no industry more regulated than the industry I'm part of, and there's no company that acknowledges and obeys the regulation more than we do. Others, I'm sure, do the same, but certainly no more. And within the organization, we have ethical judgments that are made as to what we will and won't do, even within the bounds of so-called legal acceptability. So there is an opportunity for management to exercise judgment and to draw a line in the sand, but always within the framework of international law and the disciplines that go with the challenges of being a weapons manufacturer.

Alan, I was going to ask you: if you're being attacked, does it matter whether the weapon is autonomous or not?

Yes, it profoundly matters. In fact, that's perhaps an opportunity for me to reflect on the results that we just saw. If I recall correctly, I think the poll said something like 80-odd percent would prefer to send AI systems to war than people, and almost the opposite in terms of being attacked. And with very great respect to colleagues around the room, it reveals an extraordinary confidence, a misplaced, misguided confidence, in AI systems. I've been building real robots for 20-some years. And as soon as you take a robot, even a well-designed robot, and put it in a chaotic environment, it behaves chaotically. In other words, it makes mistakes. So it's very important to understand that about the current state of the art, and for the relatively near future.

That sounds very human, by the way.

Absolutely, yes. I mean, the more we put robots into an unstructured, chaotic environment, and that includes the home, I'm not just talking about the battlefield; the battlefield is just the extreme version of this. So, robots... I do an enormous amount of public engagement, and one of the things that I try and help people to understand is just how poor, how weak, our technology is. And I think that the press and media have played a part in hyping up the technology.
Of course, it's exciting to see headlines about when robots will take over the world and stuff like that. But the truth is that they're not smart enough, and will not be for a very long time, even to be able to do the kind of things that we're talking about.

Well, I have to say, I disagree profoundly with that. So what parts do we need? We need the ability to perceive and maneuver. We know that self-driving cars now have extremely accurate perception of their environments. They can detect moving vehicles. They can find people and buildings. You can buy a four-ounce radar that can look through walls and find human beings inside buildings. The ability to maneuver quickly through streets and inside buildings has already been demonstrated for quadcopters. The tactical decision-making: when was the last time you beat the world's best chess computer? I don't want to play chess against computers. They are as far beyond the best human as the best human is beyond me. So the physical platforms are really, really accelerating in their capabilities. A drone that I can carry in my hand can cross the Atlantic without refueling. So the physical capabilities, the tactical capabilities, and the perception and control, they're all there. And when I talk to my colleagues who build quadcopter control systems for a living, they say that if we had a Manhattan-style project, within 18 months to two years we could deploy these in the tens of millions. And they could be used to go into cities to find people of a certain characteristic. So this is not like a nuclear weapon that kills everyone: we can kill all males between the ages of 12 and 60. We can even distinguish by what clothing they're wearing as to whether we want to kill them. So this is something that is not decades in the future. We're not talking about systems that have to be as intelligent as humans. We're not talking about systems that are in the business of taking over the world. They're in the business of carrying out the instructions that humans give them. And if humans choose to give them instructions to wipe out all males in a certain city, they can do that.

I'm just going to make one observation right now. I think the point you make is that this is such a rapidly developing field, and things that we regard as pretty normal today, even five years ago we would have found quite extraordinary. I think we also have to respect the fact that good ideas are not the preserve of good people. And because of that, we have to draw the distinction, I think, between finding the rules by which people are supposed to live, and ensuring we do not allow ourselves to become at risk from people with good ideas but bad objectives.

I was going to open this question to everyone, but it's a very good question. Let me just respond very quickly. As far as I know, no one is proposing a ban on weapons that can kill drones. Drones don't carry humans, and so killing them is not a lethal act. So defensive anti-drone weapons, absolutely; I think we should develop them. But if manufacturers are producing these weapons in the millions, then bad actors will have access to them. I think it's very difficult for ISIS, for example, to develop their own indigenous capability to manufacture millions of extremely effective, miniaturized, intelligent drones. I think that would be easily detected, and we could put them out of business.

The definition of bad people does move around, of course.
But coming back to Angela's point about the collaboration of industry in the Chemical Weapons Convention: it's really important, because of the ease of taking ordinary industrial chemicals and applying them to weapons. And so they keep track of precursor chemicals, and they make sure they're not selling them to the wrong people and so on. This is essential, and I think we might have to look at the same kinds of measures for people who are making ordinary commercial drones, which are wonderful technology. They have all kinds of uses, both for services to individuals and humanitarian goals and so on. But I think it's possible to work with them to make sure that they're not diverted to the wrong purposes. They can be GPS-limited, so they can only stay in certain regions, and so on.

There's an old arms controllers' expression that chemical weapons were sometimes referred to as the poor man's nuke. There was a low barrier to entry; bio was worse. And is cost a barrier to entry for this technology? Because, of course, as Sir Roger said, it's one thing if it's seven Western countries that have this technology; it's quite another if it's something that anyone can build in their backyard or in their basement. So does that enter into this, and into our ability to regulate it?

It does. And I think, and Sir Roger said it before, there are very good uses for automated systems, in IED control or mine clearance or something; there's a lot being used underwater, for example. But on the other hand, what we also need to look at is that when you consider, for example, a nuclear weapon, it's not that easy to make a nuclear weapon, because you have to get hold of the materials. With robots or drones or autonomous systems, the entry level is much lower. It's much easier to manufacture them. And that, to me, is the greatest concern. And take the example of the drone, which is fairly recent. I mean, how long ago did we start with drones? Maybe not even that long ago.

The First World War.

Well, OK, fine, there were drones used in the past. Sorry, but I mean, it's accelerated to such an extent over the last 10, 15 years that I am concerned that this development is just accelerating. I mean, you were talking about the Fourth Industrial Revolution, the fast development of technology, and that's just part of it. So we must make sure it doesn't get into the hands of the wrong people, or that it is applicable in terms of being able to reciprocate. I mean, we already have 3D printing; people are printing their guns, et cetera. It's just a matter of programming it. And that's really what concerns me. And that's why I think we need to get industry in there. We need to have larger stakeholders. And I really tried to get member states to focus on this. I mean, starting with Christof Heyns, we had this big consultation. But it only started in 2014, and there hasn't been that much progress. And there needs to be more progress.

Can I say to you that I think one of the big challenges, and it's true of anything that is in high technology, whether it's cyber or whether it's this kind of automated system: the people who have the job of making the judgment as to whether it should be something we legislate for very often do not have a full understanding of where we are in the process, and the risk, and how close it is to being a reality.
So there's an education process required for those that become the legislators, which is, I think, all our responsibility, such that we can create an environment which is at least controlled, although we know we're heading towards the creation of machinery that is very dangerous indeed.

We're going to open the floor up to questions from the audience, so be ready.

So, I mean, I absolutely agree, Angela, the cost of entry is very low, when you can buy an autopilot for a small flying thing for $50 or something, and it's really quite a good autopilot. But while I'm here, I feel compelled to come back on Stuart's disagreement. Yes, I do agree with you that you could build systems now which are pretty indiscriminate. But I think discrimination is much harder. So I'd argue that we cannot, in fact, build systems that can really distinguish between a combatant and a noncombatant. In fact, we don't even have a good definition of what a noncombatant is.

Yeah, I mean, we could get it right 60% of the time, 70% of the time. And don't forget, self-driving cars need to be seven nines, so 99.99999% reliable; you can't afford to make one mistake even every 10 years of driving. Do you think ISIS needs their drones to be that reliable in terms of discriminating civilians from soldiers? No. And in general, in warfare, our systems are nothing like that reliable. Unexploded bombs litter London even now from the Second World War. So I think 80% is pretty good for military equipment. So I think that would be very easily achievable with present-day technology.

OK, so now this is your chance. And I know there's a microphone. Please identify yourself.

Hi, Sue Chan from the Telegraph in London. This is one for Sir Roger. You spoke in very plain terms about the need to draw lines in the sand on these sorts of weapons. I mean, in your lifetime, do you ever see a fully autonomous weapon rolling off the production lines of BAE? And what are you going to do to ensure this technology doesn't get into the wrong hands?

Well, the one thing I've learned in my lifetime so far is the danger of making a prediction. What I would say is this: the company, first and foremost, only operates within absolute laws and government guidelines of requirement. It does not choose what it makes or what it does. That's first. Second, at this time, the company has a strong belief, as does the government, that the separation of decision-making from equipment, the removal of the human, is fundamentally wrong. And therefore, the only development we are doing, and indeed intend to do, is against that definition. For all the reasons that have been said by people who had never met before, or whom I'd never met before, there is a common belief amongst human beings that to allow machines to choose where to fight, what to fight, how to fight, and to release weapons is a very dangerous thing to do. And I think, as human beings, until that sense of risk changes in people's minds, nobody will want to do that. What we have to do is to make sure that this development stays mainly in the hands of people with the right motives and does not stray into the hands of people who don't have those kinds of moral convictions or concerns.

Yes, here in the front row. Tony West from the United States. It's been an interesting conversation. I will say there is a sense of inevitability to whether or not these systems will actually be developed and whether or not they will stay in the right hands.
But my question is really one about deterrence value, and asking if you all could comment on that. It can be argued that, as terrible as nuclear weapons can be, their existence, whether you agree with the Strategic Defense Initiative or not, created a deterrence context which kept the peace. And so is there an argument that perhaps the United States is wrong to be the only country saying that we shouldn't develop these weapons? Maybe the answer is: since others are developing these weapons, why not develop them, such that their existence might create a deterrence value? Your thoughts on that? Alan?

Wow, that's a really tough question. I think perhaps part of the answer's already been given, which is that the cost of entry to this technology is so low that actually it's absurd to think it could be a deterrent, if you're suggesting that we should develop something which deters others when it's so readily developable. So I think my short answer is no.

Yeah, so I just come back to the point that defensive anti-drone weaponry and anti-missile systems are currently legal, and they're not proposed to be banned, because they don't kill humans. And I think the US has been, for more than a decade now, holding competitions to see if we can develop systems that can destroy these drones. It turns out they're extremely difficult to shoot down when they're very small and they go quite fast; current air defense systems have a really hard time with them. So I think that technological development will continue. But as offensive systems... I mean, deterrence means: if you do this to us, we will do this to you. As offensive systems, they're not particularly effective at deterrence, because the kinds of systems we're talking about have their greatest effect against undefended civilian populations, for example. And I don't think the US is prepared to launch that kind of attack. Even if, as happened on 9/11, the US suffers a significant attack on its population centers, I don't think we're gonna go and then say, okay, fine, let's take some random city in the Middle East and wipe out its population. That's not gonna happen.

I could debate for a long time the deterrent, or rather non-deterrent, effects of nuclear weapons, but this is not the forum for that today. But on the other hand, I think what we need to look at also is that the cost of entry to this technology is very low, and it's not going to remain in the hands of those states or those companies that currently have it. I mean, this is basically what we have seen, and that's what we really need to deal with. And Stuart already mentioned that the US actually has a Defense Department rule, from, I think, 2012 or 2013. I would like to see the United States taking a much higher profile. I mean, who took the initiative among the member states to actually put this on the table? That was France. That's now being continued by Germany. But I would like to see those countries and those stakeholders who actually have knowledge about this, and who can do the education, as Sir Roger said, very important, take the lead in all of this, to basically elevate the debate. Because right now it is seen as something positive, in a way, by many people, and that I find very dangerous.

Yeah, the myth that somehow, if we wanted to have a war, we would just have our robots fight each other.
And then when the robots had finished fighting, we would say, okay, so we lost. Great. So we lost; that means that you can take all our women, you can take all our wealth, and we will be slaves for the rest of time to your country, because our robots lost. Come on, guys. We could play baseball and decide that we lost and therefore you can enslave our population. This is just not how things work. A country gives in when the human cost of continuing the war becomes unacceptable, when the government can no longer, in any reasonable way, guarantee the safety of its people. That's when a war ends; it's when, basically, you put your hands up. And so this idea of robot-only warfare is just a complete red herring.

It's a game. It's a game. I mean, everyone has this. It's interesting that, yeah, that's right, the next level is a video game, and the trouble is that for most people these things start to fuse when they think about it.

The point I would make just about the deterrent: I think the nuclear deterrent, and I accept we could have a separate debate on that, is effective because of the scale of destruction that it offers, and because we still have the evidence, from the Second World War, of what a very extreme attack can do to millions of people, and that's so vivid in people's minds that it acts as a deterrent. The trouble with the robot conversation is people stray to the video game or the Star Wars movie. It doesn't feel the same. Yet it is just as dangerous and potentially lethal in a different way; but it doesn't feel like that, and that's why, when the room voted, there was no concern about sending robots to war, because somehow it didn't feel a terribly dangerous thing to do. We haven't seen it yet.

So there's a clause in one of the protocols of the Geneva Conventions, called the Martens Clause, which says that at all times the human person will remain under the protection of the dictates of public conscience. So think about that photograph of the small Syrian boy lying drowned on the beach, and what effect that had on the policy of the European Union countries. Now imagine that instead of drowning, that boy was being chased along the beach by a quadcopter which gets into position and then blows off his head, and then you see that little boy lying on the beach with his head missing. What is that going to do? What is public conscience going to say at that point? Is the public going to say, oh yeah, this is just war, this is how it works? Or is it going to say: we crossed a moral line that we should never even have approached, that we should never give the decision to kill humans to a machine? The decision to kill humans is a heavy moral responsibility that we have to reserve for ourselves, and we have to take responsibility for it. I don't think there's any question what would happen after that point, but of course then it would be too late. Then the weapons would already have proliferated, and the defense postures of many countries would already have incorporated autonomous weapons into how they operate, and it's very hard to reverse at that point.

We have time for one more question. I was really struck by your use of the word game, and how you echoed it, Roger, because sometime between Christmas and, I would say, about the 10th of January, the FAA in the US asked all Americans who had a drone to register them, and the number was something like 189,000 in 12 or 15 days. And fast growing. And of course not everyone has registered their drone or drones.
So that told me something. That's of course not an autonomous weapon, but it does suggest a fascination with the technology that goes somewhat beyond the expert level. We have a question here in the second row, if you can, yeah. Hang on.

My question is that, if I listen to all of you, you get the feeling that, you know, borders are being crossed. So maybe one of the actions countries should take is to ban all the video games or, you know, stuff where fighting is just normal.

Well, I think your question is: is there some way to regulate rather than ban?

Right, maybe you should just ban all these games where people get desensitized, you know.

A session on banning video games, which I'm not in charge of, should be on the list perhaps for tomorrow or next year, but I'm not sure we're capable of dealing with it today.

No, no, but the conclusion here is that, you know, you're all worried about the fact that people might cross a line, yeah, because they are desensitized. And therefore, it might be a plan to raise this issue in public even more, so that maybe some action is taken there. It's a food chain.

Yes, I think public awareness is very important. So when we published the letter in July, there were over 2,000 media articles describing the letter and its contents. The Financial Times' main editorial said that we have to avoid this nightmarish future; that was the word in the headline of the editorial. So some people took notice, and that was good, but there are still a lot of people who didn't hear about it. And there, if you mention killer robots or autonomous weapons, their only exposure is the Terminator robot. And, you know, when we look at Terminator robots, they are large, slow-moving, heavy, vulnerable, and incredibly inaccurate; they shoot hundreds of bullets without hitting anybody. Come on, guys, get real. The robots we're talking about, when they shoot a bullet, it will hit its target. So we're thinking about systems that weigh less than an ounce, that can fly faster than a person can run, that can blow holes in people's heads with one gram of shaped-charge explosive, and that can be launched in the millions. So being attacked by an army of Terminators is a piece of cake compared to being attacked by this kind of weapon.

And perhaps the protocol has to extend beyond the weapons systems themselves. Can we go to this last one, two down?

Thank you. Big Lee, Switzerland. I see two trends. On one side, we have mass-effect weapons, and on the other side, we have extremely pinpoint, individual weapons. And probably in future, one will need something which is in between. A real war will try to target as precisely as possible, but at the same time there should be a certain scale of effect, because if there are big wars, efficiency will imply something which is more than just a few pinpointed people. So where do you see the future, between the nuclear weapon, the very mass-destruction effect, and the very individual, pinpointed target?

Well, that range of weaponry exists today. And it can be taken from a very simple pistol all the way through to a very pinpoint-accurate missile which can take out a moving target with absolute precision. It's very expensive, but it reduces collateral damage. Over a period, each level of weaponry has been developed, but they all have the common denominator of only being unleashed by a human being, and that is still the line in the sand.

Final thoughts?
Let me just add something, and I find it very interesting, because we have used consistently the term war. Now, it used to be that wars were declared among states. We haven't had a declaration of war among states, I think, since the Second World War. So it's really incorrect to speak about war; we speak about the war on terrorism and so on, but it really is conflict. And that I find also very dangerous, in a way, because we're looking at it as something that's very limited, something that is defined maybe by geographic scope or maybe by a certain other consideration, but it's not really a conflagration, not really something large-scale. And that's why those miniaturized weapons, including now, for example, nuclear weapons that are being miniaturized, if you so want, are very dangerous. Because what it means is that you're not setting off a world war, but you're setting off something that is much smaller, and therefore the temptation to use it becomes much, much easier for commanders and leaders.

Right, and that's part of the thing that is accelerating so quickly, which you talked about at the top.

So I think we have a fairly short time horizon to act. If something doesn't happen within the next two years in terms of essentially drawing all the main parties into a serious negotiation, what's called a group of governmental experts, where the nations contribute experts who will write out the technical details of the treaty; if this process does not get underway within the next couple of years, it may be too late.

I agree with that. Alan, you get the last word.

So, I mean, I'm delighted that the panelists are essentially in broad agreement about the unwisdom, the danger, yes, of autonomous robot weapons. So what I would invite: if the audience members, and those people who are watching on TV, agree with us, then please write to your representative, your member of parliament, whoever it is, and tell them that, because I think this is something that our policymakers need to know. They need to hear from us, we the people, that this is not acceptable. Thank you.

Thank you, Alan. Thank you, Sir Roger, Angela, Stuart. Thank you all for coming, and we'll be here getting ourselves together for a little while if you wanna come up and talk. Thank you. Thank you.