Good afternoon. Thank you all for joining us this afternoon, and welcome to those of you who are just joining us now, as well as those of you on the live stream. The hashtag for the event is #CarnegieDigital. It's a pleasure for me to welcome you all here today together with David Brumley, whom I'll introduce in a minute. This is the second panel of the first part of the Carnegie Colloquium on Digital Governance and Security. There will be a second part taking place on December 2nd in Pittsburgh, to which you are all also invited; in case you're interested, please make sure to drop your business card off outside or send us an email. This second panel focuses on autonomy and counter-autonomy in the context of military operations. As I explained earlier this morning for the first panel, this event is designed to combine the tech expertise of Carnegie Mellon University with the policy expertise of the Carnegie Endowment. Each panel is preceded by a setting-the-stage presentation by one of the experts from Carnegie Mellon University, followed by a panel discussion with experts from around the world. We are particularly pleased and delighted to have people from Israel and India who came all the way specifically for this event. It's now my pleasure to introduce you to David Brumley, who is the director of CyLab, the security and privacy institute at Carnegie Mellon University. He's also the faculty mentor of one of the top hacking teams in the world, and CEO of a company called ForAllSecure, which won the DARPA Cyber Grand Challenge this year. So it's a great pleasure to have him here to give these setting-the-stage remarks, and with that, I look forward to this panel discussion. Thank you.

Thank you, everyone. You read the headlines today.
You'll come across headlines such as "Russia is building robots to fight on the battlefield," "The US Navy is developing swarms of unmanned drones," and "DARPA commissions a fully autonomous cyber bot competition." These are just a few of the headlines that highlight the increasing role of autonomy in the military. In this second panel, we're going to take an international perspective on what autonomy and counter-autonomy mean in military operations.

As mentioned, my name is David Brumley. I'm a professor and director of CMU's security and privacy institute. I also consider myself a hacker, as I run this hacking team that many people have talked about. My job for the next 10 minutes is to give a high-level overview of the issue: why it's so exciting, why it's so timely, and why it's so important to get absolutely right as we go forward.

This panel's issue, in a nutshell, is that countries around the world, including the US, Russia, Israel, China, and India, are increasingly deploying and investing in artificial intelligence and autonomy technology in their operations. Autonomous technology, once the work of science fiction, is here today. For example, in Pittsburgh you can use your Uber app to summon a completely autonomous vehicle to take you home from a Steelers game. But don't just think physical; think of cyberspace, think social. For example, in August this year DARPA demonstrated that it's possible to build fully autonomous cyber bots for full-spectrum offense and defense. It then went on to DEF CON to demonstrate that these bots can supplement human capabilities in the manual DEF CON competition. We also need to think about social networks, where autonomous systems can be used to sway the opinion of a population. Key pros of autonomy and AI include faster and better decision-making in weapon systems and cyberspace operations, and it even creates the possibility of fully roboticized soldiers in warfare. These are all significant benefits that lower the cost and lead to better protection of
human life. However, there are significant policy, legal, and ethical questions. Many questions revolve around how much control we should cede to machines: what sort of actions should we allow machines to take, and when? And how do we handle the case when machines make mistakes, when there are bugs that may inevitably be exploited by our adversaries?

So let's start by thinking about what autonomy means. To quote the Defense Science Board: "Autonomy results from delegation of a decision to an authorized entity to take action within specific boundaries." The key phrase is "delegation of a decision." In the context of this panel, we delegate that decision to a computer program, an app if you will. Now, everyone is familiar with apps like games and web browsers, but these are not autonomous; they follow a fixed set of rules and interact with the user in a very limited way. An autonomous system must be more than an app following a prescriptive set of rules: it must reason about the environment it perceives, and it must be able to make a decision about how its actions will affect that environment. Put all together, today we focus on autonomous systems where we delegate a decision to take action, that decision has been ceded to a computer app, and that app interacts with the world and the world interacts with it.

I also want to set the stage for the size and the scope of the investment in autonomy, and I want to use the US Department of Defense and its history as an illustrative lens. The US is crafting its strategy for the next 20 years, where autonomy and AI are center stage. This strategy is called the third offset strategy. When I heard this phrase "offset strategy," I didn't really know what it meant.
So let me explain it a little bit. What an offset strategy seeks to do is offset a numerically superior force with technical supremacy. An offset strategy allows someone like the US to win without matching the enemy tank for tank or plane for plane. Let's get a sense of the scale. The very first offset was our nuclear weapons strategy. The US invested heavily in nuclear weapons, especially battlefield and tactical nuclear weapons, because it provided an effective deterrent: we didn't have to match the enemy tank for tank, plane for plane. There was always the looming threat. In the mid-'70s, though, things changed. What changed was that Russia reached nuclear parity with the US, and the offset was no longer an offset. So the US and other countries started looking for other offsets, and the US came up with what it called the second offset, where the idea was that by using very accurate guided munitions, delivered by effective delivery systems, you could achieve the same effect as nuclear weapons without the collateral damage. This investment led to huge advances in science that went beyond the military domain; things like GPS wouldn't have been possible if the US hadn't invested in this idea of precision munitions. So those are the past two offsets. We're now on the third offset, and we expect the investment and the radical change in international policy to be just as significant. The race to autonomy, and the drive to implement these sorts of offset strategies, is not only happening in the US.
It's also happening in other countries. For example, Russia and China, which I mentioned just a few minutes ago, are investing in roboticized armies. It's also in industry: for example, a 2014 Bank of America report states that Japanese and US companies invested more than two billion dollars in autonomous systems, led by tech companies such as Facebook, Google, and Hitachi.

We don't get to just deploy autonomous systems and call it done, though. Once we deploy these autonomous systems, they themselves may become targets, and that leads to the notion of counter-autonomy, where adversaries may go after the autonomous systems themselves as a way of getting at their intended target. As an example, just to put this in scope: there's a very famous chess engine called Rybka, and Rybka played at the international grandmaster level. But it was defeated in just a five-minute tournament game because someone found a flaw in the engine. The flaw was this: in chess, if you go more than 50 moves without moving a pawn, it's a draw, and the chess engine had a flaw where it would try to avoid a draw under all circumstances. So this player would go after the autonomous system by offering it a piece as a sacrifice. The computer thought it was a piece up; the player would make 49 moves without a pawn move, and the computer would say, "Oh no, a draw is coming up," and try to avoid it, and the player could go to town. This is going after the algorithm, not just the chess game.

Autonomy is going to be huge, and it's absolutely critical that we get it right. The stakes are extremely high, for many reasons. One of them is that autonomy is going to drive us to decide and take decisive action faster and faster, and these actions aren't just in the cyber domain; they're also going to be in the kinetic domain. So remember what I said: autonomy is a delegation of a decision to an authorized entity to take action within a specific boundary. I want you to think in this panel about a couple of different dimensions. First, what decision is being delegated?
Second, in what circumstances? And third, what are the appropriate boundaries for using this sort of technology?

To dig a little deeper: the decision being delegated is really a difficult question. Countries are now placing philosophical stakes in the ground on how they're going to think about this, and so this discussion is very timely. For example, Robert Work, the Deputy Secretary of Defense, said in 2014, when he was questioned about whether a computer would ever take lethal action, that humans, in the United States' conception, "will always be the ones who make the decision to use lethal force, period, end of story." But the pace of technology makes applying these high-level philosophies and principles to particular situations difficult. For example, should an autonomous system recognize and shoot a suicide bomber before they can take effect? Is that okay? Is that defense? Is that offense?

The second question is: when is the decision ceded? Mr. Work goes on to say, and he actually somewhat qualifies himself here, that there may be times when it's okay for the computer to take control. For example, suppose you've got 60 missiles coming at you; there's no way a human is going to be able to sort that all out. The human will make the decision, but will make it ahead of time, so that the computer is able to react in the moment. This isn't a hypothetical conversation; it's here today. For example, consider for a minute fire-and-forget missile systems. We've all heard of these, probably in the newspaper. One example is the UK Brimstone missile, which groups such as the Campaign to Stop Killer Robots, which one of our panelists serves on, use to illustrate that there are no clear lines in the definition of autonomy or of when we've ceded control. Now, fire-and-forget systems are often described as autonomous; some will say they're semi-autonomous, but it really just depends on which definition you're looking at. The UK Royal Air Force describes the Brimstone missile as a fully autonomous fire-and-forget anti-armor weapon, effective
against all known and projected armored threats. "During the search phase of engagement, Brimstone's radar seeker searches for targets in its path, comparing them to known target signatures in its memory. The missile automatically rejects returns which do not match, such as cars, buses, and buildings, and continues searching and comparing until it identifies a given target. The missiles can be programmed not to search for targets until they reach a given point, allowing them to safely overfly friendly forces, or only to accept targets in a designated box area, thus avoiding collateral damage." That's an interesting question with fire-and-forget, because the control has been ceded ahead of time: someone has decided to use lethal action, but you'll notice in that description that it was up to the computer to identify whom to take lethal action against. There's another, more subtle question: what do we do when there's a bug in the software, and it misidentifies where it's supposed to go?

So finally, what are the constraints? Again, a very realistic question today. If we go back to the Uber example in Pittsburgh: suppose a pedestrian walks out in front of a self-driving car, and the car can only miss the pedestrian by driving off a bridge. Whom should it save, the driver or the pedestrian? A good question, with no clear solution. And in military operations, we often have similar questions. Whom are we going to save when given the choice? How are we going to program the objective functions in these military operations?

So with that framing, I'd like to introduce our moderator and speakers. Our moderator is George Perkovich, vice president for studies at the Carnegie Endowment for International Peace. George, can you please step up?
His work is primarily on nuclear strategy and nonproliferation issues and on South Asian security. George is the author of the prize-winning book India's Nuclear Bomb, which Foreign Affairs called an "extraordinary and perhaps definitive account of 50 years of Indian nuclear policymaking." George has been a member of the National Academy of Sciences Committee on International Security and Arms Control, the Council on Foreign Relations task force on nuclear policy, and many other such advisory committees. Thank you, George, for joining us today.

Our first panelist is Daniel Reisner. Can you please come up? Daniel is a partner at the Herzog Fox & Neeman law office. He joined HFN in 2008 as the firm's public international law, defense, and homeland security partner. Daniel is widely recognized as one of Israel's leading public international law experts, as a result of his 19-year career in government in the field, 10 years of which he served as head of the Israel Defense Forces' International Law Department. In that capacity, Daniel was the senior lawyer responsible for advising the Israeli leadership on a wide variety of international law related issues.
I hope you can advise us on this issue as well.

I'd like to invite up Mary Wareham, who is the advocacy director for the Arms Division at Human Rights Watch, where she leads Human Rights Watch's advocacy against particularly problematic weapons that pose significant threats to civilians. She is also serving as the global coordinator of the Campaign to Stop Killer Robots, and was one of the people I quoted earlier on the UK Brimstone. From 1996 to 1997, Wareham worked for the Vietnam Veterans of America Foundation, assisting Jody Williams in coordinating the International Campaign to Ban Landmines, co-laureate of the 1997 Nobel Peace Prize together with Williams.

And finally, General Panwar, who served as the 57th Colonel Commandant of the Corps of Signals, Indian Army. General Panwar retired in April this year after 40 years of active military service in the Corps of Signals. His last appointment was Commandant of the Military College of Telecommunication Engineering, which carries out training of officers and soldiers in the fields of ICT, electronic warfare, and cyber operations, and which has also been designated a centre of excellence for the Indian Army in these disciplines. The general officer has received many awards, and I just want to call out a few of them. He has been the recipient of the President's award for distinguished service in the defense forces; he has also been awarded by the Department of Defence Production for R&D work; and last year he was conferred the coveted Distinguished Alumnus Award by the Indian Institute of Technology Bombay, and is the only defense officer ever to hold such an honor.

With that, thank you, panel, and I'll turn it over to George.

Great, thanks a lot. Great, thank you. Um, what we want to do is have as much of a conversation as possible, first amongst ourselves up here and then with you all, to basically draw out a number of the dilemmas in this area, and to help identify what are the questions that might be the most worth pursuing
as different countries and different actors move down this agenda. And so, to start us off, I want to ask General Panwar to build on what David said a bit. I mean, certainly there must be other drivers, beyond dealing with numerical asymmetries, that would make autonomous systems attractive to a military and to a government, in terms of the problems they solve and the advantages they confer. So can you, you know, give us your perspective of what the attractions of autonomy are in this?

Well, I'll start by saying that one can't get away from the fact that our weapons are meant to destroy and kill. But they are supposed to destroy and kill defense potential, military potential, and the idea is not to affect the non-combatant power of the adversary; non-combatants have to be saved. The other basic question which we have to ask, with the rise of artificial intelligence, is: does AI have the potential of reducing the negatives, of destroying only the combatant potential? Now, I feel that, in a sense, by its very character, artificial intelligence has great potential towards this end.

Now, having said that as an opening, let's see how warfare has actually been changing in the last few decades. There are two things which are happening. Firstly, on the one front, there's a change in the nature of warfare from the conventional to what is normally referred to as fourth-generation warfare, where the lines of politics and military are blurring.
And so there's a different context in fourth-generation warfare. India happens to have the context of both conventional warfare as well as fourth-generation warfare, and so some of the things in the discussions which come up, at least my examples, will relate to how the benefits turn up here. The other change in warfare which is happening has to do with the information age. Now, here again you have, on the one hand, cyber warfare; electronic warfare is one offshoot of what is happening in the information age. But coming to the relationship with artificial intelligence: because of information and its hierarchy coming into the weapon systems, what you have been getting over the years is greater precision in the weapon systems. Now, AI again has the potential of increasing this precision, and the discrimination aspects, which we will be discussing, I'm sure, as part of the panel; and that is where, again, the prospect of having fewer and fewer non-combatant casualties is going to come up.

Now, to come to specifics as to what the types of systems are: some, the fire-and-forget missiles and so on, were talked about. So, in increasing degree of what AI can do, let's start with just four different examples, in increasing degrees of hierarchy. First, you can have a defensive system, like, for example, the handling and defusing of IEDs. There the non-combatant, or the adversary, is not involved at all, and AI can do a lot in coming up with these systems; in fact, they are already in place. At the next level, you have defensive AI. So we talked of systems like Phalanx, which have been deployed for a couple of decades now, where missiles are coming in and you're destroying the missiles; AI autonomous systems, I would say, are in place so that casualties are reduced. At the third level, you have precision coming in. So you can have offensive systems.
So, for example, if you have armed drones which are autonomous (okay, you already have armed drones in effect, but autonomous armed drones), well, with no pilots, you're saving combatant lives there. So that's the third level, where offense is coming into play. And at the fourth level, if the graduation of AI takes place and it develops to the extent where it can also, let us say, mimic the empathy and judgment aspects, when it graduates to that stage, well, there's further saving of lives possible. So there are many other benefits which one can talk about, but in increasing degrees of complexity as AI graduates, I would say these are the four areas which we can talk of as a starting point.

I think that's... Thank you. That was a brilliant setup. You raised a number of the issues that I think we'll dive further into, including the questions of offense, defense, and other functions. Let me turn to Mary, and in a sense ask you to respond, but in particular on this: to the extent that this capability allows one to be more discriminating and precise, presumably that's a good. So when you look at parsing what could be advantageous in these capabilities from what should be avoided, can you hone in on where the distinctions lie, in your view?
Thanks for the invitation, and it was good to hear your introduction there, because really you talked about, at the beginning, the dirty, the dull, the dangerous tasks that autonomy has been used for in the military: for cleaning ships, for, you know, the explosive ordnance disposal robots to assist the soldiers. And now we're moving into a phase where we see greater autonomy in weapon systems. That's seen with the very large autonomous fighter aircraft that can fly over great distances and can carry a payload. We're also looking at autonomous weapon systems that are ground-based and stationary, and that can select targets that way, on the DMZ in Korea and elsewhere. We mentioned some of these systems in the first report that we did on this topic at Human Rights Watch back in 2012, called "Losing Humanity." We called them precursor weapon systems, because in our view they were not fully autonomous; they had a degree or nature of autonomy in them, but they were not completely autonomous. And in that report we called for a preemptive ban on fully autonomous weapon systems. A preemptive ban means a ban on future weapon systems, not on the existing ones that we have today. But we did that because we looked at where the technology was headed. We talked to people; actually, the roboticists came to us first and said, "We're concerned about where this is headed; we're worried about this." And so that was part of the rationale behind forming this Campaign to Stop Killer Robots, which launched in 2013 and is still going. It's a global coalition; I coordinate it on behalf of Human Rights Watch. And, you know, this is not a campaign against autonomy in the military sense.
It's not a campaign against artificial intelligence; there are many people working in autonomy and artificial intelligence who are part of this campaign. It's a campaign, though, to draw the line and to establish, you know, how far do we want to take this. So you can view the call of the campaign as being a negative one, calling for a preemptive ban on the development, production, and use of fully autonomous weapons. Or you can view it in a positive way, in terms of how we want to retain or keep meaningful human control over weapon systems: not over every aspect of the weapon system, but over the two critical functions of the weapon system, which in our mind are the selection of a target and the use of force. Those are the two things that we're concerned to retain human control over, and we know that it sounds very easy but is harder to put into practice. But this is where the debate has been centering for the last few years when it comes to autonomous weapons systems.

Okay, so let me draw you out now, and then turn to Daniel. So you talk about drawing the line, and what I take is drawing the line basically at target selection and the decision actually to fire, as it were, saying that it should be a human. And I get that, in a sense. But in terms of objectives: if an objective, for example, were (and I'm going back to what the General said) to minimize casualties or the risk of indiscriminate, you know, civilian or non-targeted deaths, let's say greater precision; if different versions of these weapons could be demonstrated to provide more precision and reduce collateral damage and inadvertent deaths, why should it matter whether a human was in the loop or not? I'm trying to understand; I'm not arguing. I'm trying to draw you out about why the principle of a person in the loop, as distinct from the outcomes, is where you're focused. Because, thinking in terms of a person in the loop, I know a lot of people I'm related to
people that I don't want in the loop. You know, I'm of Croatian descent, and if there are any Serbs here, we can talk. But, um, you know, there's a lot of passion, there's a lot of history; you're under fire, your buddy's been killed. So, you know, the idea of somebody cool and detached, to me, might seem welcome. So tell me what's wrong with that.

Yeah, I mean, like I said, there are many benefits to employing autonomy in the military sphere. I guess my concern with the weapons systems, however, is what the artificial intelligence experts have been telling us, which is that we're going to have, you know, stupid systems that are weaponized before we have the smart ones that can do the level-four things that you were talking about: the mimicking of empathy and of human judgment and the rest of that. We don't see that at the moment, and our concern is that we're going to have stupid autonomous weapons systems being deployed before we have these super-smart ones, which are further in the future as we understand it. And in terms of the concerns, you know, it was first the roboticists and the AI experts who came to us saying, "You don't understand what can go wrong in the field when these are deployed." We've got many technical concerns, including about what happens when two different weapons systems, created by two different opposing sides, come together.
There will be unanticipated consequences, and unanticipated things will happen there. But then the other elements of the campaign have come on board. The faith leaders and the Nobel Peace Laureates are especially concerned about this just making it easier to go to war, because you can send in the machine rather than the human soldier. And I guess at Human Rights Watch we look at it from the perspective of the protection of civilians, the non-combatant collateral damage which I've heard about here this morning. And of course we want to try and keep civilians out of war fighting as much as possible, but the fear is that if the human soldiers are not in there, and it's just the machines, it'll be a worse situation on the battlefield for civilian populations. So this is why we see a need to draw the line.

Daniel, let me just draw you in on any of this, but in particular on how you thought about, or how you would suggest we think about, whether there's a valid difference between offense and defense, or territoriality. You know, partly as I'm listening to Mary, I go: I totally get that if you're operating on someone else's territory, but on one's own territory, is there a distinction? So just take us through your thinking about this.

Okay. Let me start by saying autonomous weapon systems are already here, right? So the issue is no longer only forward-facing; it is also current-facing. And while we don't know all of the autonomous systems out there, because obviously some of them are closely guarded secrets, we know a lot of them. And I think Mary is right in one respect: the capability to deploy autonomous systems is still outpacing the capability to train them to be human replacements. Now, I say that in spite of the fact that computers can beat human beings in chess, and in fact in anything which requires thinking today, I mean in speed or number of calculations, etc. One of the problems we face is this: what do we want to train the autonomous weapon system to do?
We're not sure how to do that. And let me go into that for one minute, because you'll see I'm sort of sitting in between the two positions. I used to train soldiers to comply with the laws of war, and when we train human beings to do so, we have a system; it's more or less the same in most military organizations. We have a specific set of rules. There's the principle of discrimination: you have to discriminate between legitimate combatants and non-combatants. There's the principle of proportionality, and so on; we call them basic principles. However, none of them are easy; none of them are really basic. In fact, after I've been doing this for more than 30 years, it's still very difficult to explain exactly what you're allowed and not allowed to do on the battlefield. Now, when we try to actually think of how we would teach a computer to do this, we realize we're not really sure what we want to do even with human beings. The second challenge is that artificial intelligence doesn't learn like a human being. It learns differently, and there are different ways to teach computers, but none of them involve putting them in a classroom, giving them a lecture, and then taking them into the field to try out a few dry runs. We've learned that the old ways we taught the system don't work on computers. So the first point I wanted to stress is that we see a chasm opening between the capability to deploy autonomous systems and the capability to teach them what the rules are. Now, obviously that gap will close as computer systems continue to develop, as AI goes into its next revolution, which my friends in the computer world tell me is a year or two away, when we have the next level of artificial intelligence, etc. And that is quite possible. But to be fair, I think the military hardware is outpacing the AI side currently.
So that's my first point. My second point, and again, we discussed this in emails before the panel: most autonomous systems today are still stationary. Why? Because movement for autonomous systems is complex; it is difficult. You need object avoidance; you need a lot of different types of capability, versus putting something in one place. Now, the oldest autonomous weapon system on the planet is the landmine. Some people would say it's semi-autonomous because of the way it works, but if you want to go into detail: take the acoustic naval mines deployed in the 1960s and '70s. Those actually had small computers on board with signatures of enemy ships, and they would only target enemy vessels which matched the specific acoustic signatures, and they would lie in wait at the bottom of the ocean to be activated. Those are very primitive, but they've been around for 40, 50 years now, and they were actually, I think, the first really solid autonomous weapon systems in the world; those have been around for a long time. Now, however, they don't actually go around and try to find targets; that adds a level of complexity which is huge. So the autonomous machine guns connected to land radars, which the South Koreans field, and other countries I know of, etc., those still stay in one place. But when you go into a territory where the machine has to learn the environment and start operating, this is a very complicated machine-vision experiment: looking at territory
it does not know, and identifying human from non-human and friend from foe; doing that is a complicated experiment. Now, I'll say that with one final comment at this stage. I'm not even 100 percent sure the rules we have actually work for robots, and I'll explain why. You see, we built the rules for combat today for humans, and they come with a few hidden assumptions. One: human beings make mistakes, and we are okay with that. We accept a certain level of risk in combat for human soldiers. You're allowed to make a mistake if you're a soldier; it's not a war crime to make a mistake, it's a war crime to do something really nasty. I'll tell you a very sad story from one of the Israeli military operations 14 years ago. The terrorists had fielded one-ton IEDs under the roads to blow up tanks, and the tanks couldn't withstand the blast; they were blowing up. And one Israeli tank was traveling in a certain location, and the crew were really on guard for that event, and suddenly they heard this huge boom from the bottom of the tank, and they were sure that they had just, you know, gone over an IED. And so they were searching for the terrorists, right? So they look into the periscope, and they see two people running away from the site. They immediately understand that these were the terrorists who tried to blow up the tank and failed, and so they shoot them, and they manage to hit them. Only 10 minutes later did they realize that it wasn't an IED: the tank had actually gone over a huge boulder, which had hit the bottom of the chassis of the tank, and it sounded like a huge explosion. And the two people had been innocent. So the reality is that, in a combat situation, the crew had killed two innocent people because they had thought they were in a combat situation. And there was a military court-martial, etc.
and they were not found guilty, because the court said that in the circumstances a reasonable human being would have made that determination. Would we be willing to reach the same conclusion about an automated system? Are we willing to give computers the benefit of a mistake? Now, remember, human beings get self-defense as a defense in criminal proceedings. Are we going to give an autonomous system self-defense? There's a defense of necessity: if you want to prevent a bigger harm, you're allowed to cause a smaller one. All of our system is geared for human beings. So the bottom line is, not only is it difficult to train the robot for the rules, I'm not even 100 percent sure the rules are ready for artificial intelligence. I'm processing that, because I think you're on to something that's obviously extremely important, in terms of whether the challenge is to develop new rules to deal with AI, or new rules to deal with an even broader category, and what the expectations are. My sense is that we can all learn a lot, in terms of how we think about this and how we might think about it, by coming at it from the liability side, which you started to do, rather than trying to define autonomy or not-autonomy, and saying, well, autonomy should be avoided, and so how do we want to define it? If you go at it just as you did, through the case law in a sense, I think it helps enormously. So I want to come back to that, but I want to pick up on a couple of other things that you said, and bring in the General and Mary, from anyone's perspective: is the distinction between stationary and mobile an important distinction?
And especially, Mary, from the sense of, if one thinks about prohibitions, or what is to be avoided, does it matter? And then, relatedly, the distinction between defense of one's own territory versus action outside one's territory, which implies mobility. I'm just trying to sharpen how to think about this, and some of this touches on what you said. Do you guys want to jump in on those two points? Can I say a couple of things before answering that, a couple of points on what Mary said in the last interaction? I think the fear was that stupid autonomous weapons will be deployed before the actually intelligent ones, and that that is not acceptable. Well, that's actually presuming, I feel, that the testing of the weapons, and the people who are deploying them, are doing it in an irresponsible manner. I mean, that's a fear which is there, but given the way the acquisition and induction of technology into the armed forces is done, I feel such irresponsible behavior is certainly not there. We have to look at what is inherently wrong with fully autonomous weapon systems. Actually, as you yourself clearly brought out, it is not autonomy that the campaign is against; it is fully autonomous weapon systems, and meaningful human control is what is being looked at. And there is a vagueness in what this fully autonomous weapon system is. Actually, there is "select and engage"; I mean, that is where everybody is zeroing in: that weapon systems which are fully autonomous are those which can select and engage targets without human intervention. Now, that "and" is important. Now, what is meant by selection? Only selection is acceptable. Only engagement is also acceptable, because after all, you have your PGMs; we are only engaging,
we are not selecting. So only selecting is acceptable, and only engaging is also acceptable, but selecting and engaging together is where the line is being drawn. And the reason behind that is that between the selecting and the engaging there is a decision point, and that decision to kill is what, it is felt today, should not be left to machines, from various points of view. One is of course the Martens Clause, et cetera; so from the dignity point of view, that the decision should be left to a human is one point of view. Now, while it could possibly come up later, I would like to make the point here itself: if we are looking at the various technologies in the kill chain, as we talk about it, where you first identify, then you navigate, nobody objects to autonomy in navigation. Nobody objects to autonomous functioning in the tracking, in the selection, including in the prioritization. In fact, if we look at DoD Directive 3000.09, it specifically brings out that in all these functions autonomy is permitted as per that directive, and nobody would even object to it. It is only that decision to kill. So the point which I really want to make is that the complexity of AI is not going into that decision loop. That decision loop is actually a very trivial aspect of it: as long as the human is there, there's no real technology involved in bringing the human into the loop. The AI aspects are going into the rest of the functions which go into building an autonomous system.
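The General's point, that autonomy is already accepted in every stage of the kill chain except the final decision, and that inserting the human there is technologically trivial, can be made concrete with a sketch like the following. The stage names, data fields, and scores are purely illustrative assumptions for this sketch, not the wording of the directive or any real system:

```python
# Hypothetical kill-chain pipeline: every stage up to engagement runs
# autonomously; the engage step is gated on explicit human authorization.
# Purely an illustration of "human in the loop", not any fielded system.

def find_targets(sensor_data):
    # Autonomous identification: this is the technically hard, AI-heavy part.
    return [t for t in sensor_data if t["classified_as"] == "hostile"]

def prioritize(targets):
    # Autonomous prioritization: also uncontroversial, per the discussion.
    return sorted(targets, key=lambda t: t["threat"], reverse=True)

def engage(target, human_authorized):
    # The contested step. Note how technically trivial the gate is:
    # a single boolean check supplied by a human operator.
    if not human_authorized:
        return "held: awaiting human authorization"
    return "engaged: " + target["id"]

sensor_data = [
    {"id": "contact-1", "classified_as": "hostile", "threat": 0.9},
    {"id": "contact-2", "classified_as": "civilian", "threat": 0.0},
]
queue = prioritize(find_targets(sensor_data))
print(engage(queue[0], False))  # held: awaiting human authorization
print(engage(queue[0], True))   # engaged: contact-1
```

The point the sketch illustrates is exactly the General's: the sophisticated AI lives in `find_targets` and `prioritize`, while the human gate in `engage` is one boolean check, so a moratorium aimed only at that gate constrains almost none of the underlying technology.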
So that is one point which I wanted to make, because if you are thinking in terms of banning technology, really speaking, the most important part which we're interested in is trivial as far as the technology is concerned. Now, coming to the question which you asked about the defense side and the offense side: when we talk of defense and offense, actually, any military person would know that when we say defense, it's not pure defense; offensive defense is part of defense. So conceptually there's no real difference between the two. There's an aspect of mobility coming into it, but that mobility also comes in with offensive defense, and so the types of systems which are meant for defending and for going into offense would really be of the same nature. I don't see any conceptual difference between the two. But how about territoriality? In other words, you could avoid that distinction about offensive defense by just saying you could operate it on your own territory but not outside your territory. Okay, I'll elaborate a little more on that. I brought out this aspect of conventional warfare vis-a-vis 4GW scenarios. Now I'll take an example from India. So, for example, you have an international border (IB) and a Line of Control, and we have not gone in for a full-fledged war. If you've not gone in for a full-fledged war, there's a sanctity to the IB and the Line of Control, and that sanctity cannot be crossed. So if we are looking at that scenario where conventional war has not broken out, it's just 4GW, then if you try to defend, that defense does not involve much mobility. So you could have robot sentries, as for example deployed by Korea, et cetera; you could have non-mobile robots also looking at the defense of the IB. But when you've gone into a conventional operation, then when you're talking of defense, you're also talking of going across; I mean, you're also talking of mobility.
So you attack. What I'm saying is, depending on which backdrop you are looking at, defense may or may not involve mobility, and that is why I'm saying that, in general, to try and draw a distinction between defense and offense may not be very correct from a technology point of view. However, it would be more acceptable to those who do not want to delegate to the machines: a defensive sort of system would be more acceptable from a moral perspective than something which goes into offense. So that sort of distinction can possibly come up. Dan, come in just on the territoriality thing, because I'm thinking of Iron Dome, the Israeli system which operates over Israeli airspace to protect against incoming fire. And then there's the wall, and there are others. So I'm thinking of analogies, because the General is talking about the Line of Control, which separates the part of Kashmir that India controls from the part that Pakistan controls, where there has been firing for the last month or so, and lots of movement. But in better times there's not meant to be, and so you could imagine that kind of boundary being a place where one might put autonomous weapons to prevent infiltration that's not supposed to be coming across, and so on. On the other hand, presumably, like the last month, when there's movement going back and forth, you might want to turn those systems off, so you don't hurt your own people, or manage that in some way. So I'm trying to get at, given Israel's experience and your experience here, does the distinction of territoriality matter, practically, or legally, or no? First of all, realistically, if we take for example the Iron Dome system: it has been made public that the Iron Dome system has three different settings. You have the manual setting, you have the semi-automatic, and you have the automatic. And it's a missile defense system, right?
And the idea is you want to shoot down the missile over a safe location. So part of the algorithm there is for the computer to do more than detect. I mean, the Israeli system works like this: first of all, it identifies with the radar the incoming missile, or whatever's coming in. Then it calculates where it's going to hit, because it's on a ballistic trajectory, so it's not going to deviate from its track. So you know where it's hitting, and you automatically do a lot of things: you warn the people in that specific area that they should take cover, et cetera. But then, if it calculates that it's going to hit in a dangerous place, it calculates where to shoot it down so that it minimizes damage, right? Now, theoretically at least, boundaries are not relevant for that. If you can catch the missile earlier, we wouldn't care if it landed in another country's territory, although the calculation of landing it in an unpopulated territory would still be the same for the system. But the idea is that the system is not supposed to take boundaries into consideration; it's supposed to take saving human lives into consideration. So my gut feeling is that the stationary-versus-mobile issue is just a technological difference of complexity, and the geography is not a real issue, although, again, following the General's footsteps, I think people would find it easier to accept that you would field such things in your own territory than that you send them into another country. So on the moral, public-opinion side, there are arguments to be made; these are additional steps down the road. But from a technological and even from a legal side, I don't really think it holds. Just on the complexity which you earlier also mentioned: the statement is that systems which would be targeting, let us say, mobile targets would be many times more complex than static ones.
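The intercept logic Dan describes for Iron Dome, predict the ballistic impact point, warn and intercept only if a populated area is threatened, otherwise let the threat fall in open ground, can be sketched roughly as below. The flat-trajectory physics, coordinates, and area map are all hypothetical assumptions for illustration; the real system's algorithms are not public:

```python
# Illustrative sketch of the publicly described Iron Dome decision logic:
# predict where an incoming ballistic threat will land; if the predicted
# impact point is unpopulated, ignore it; otherwise intercept and warn
# that area. Every number and name here is hypothetical.

POPULATED_AREAS = {          # area name -> (x_min, x_max), hypothetical map
    "town": (10.0, 20.0),
    "village": (35.0, 40.0),
}

def predicted_impact_x(x0, vx, vy, g=9.81):
    """Impact x-coordinate for a simple arc launched from ground level."""
    time_of_flight = 2.0 * vy / g
    return x0 + vx * time_of_flight

def decide(x0, vx, vy):
    impact = predicted_impact_x(x0, vx, vy)
    for name, (lo, hi) in POPULATED_AREAS.items():
        if lo <= impact <= hi:
            # Warn civilians in `name` and fire an interceptor.
            return ("intercept", name)
    # Predicted to land in open ground: safer and cheaper to let it fall.
    return ("ignore", None)

print(decide(0.0, 7.5, 9.81))   # ('intercept', 'town')
print(decide(0.0, 30.0, 9.81))  # ('ignore', None)
```

Note that, exactly as Dan says, nothing in `decide` looks at a border: the only input that matters is where the threat is predicted to land and whether people are there.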
Now, let me just paint the picture, and this time I'll take an example from conventional warfare. So, for example, you have in an area of, let's say, 10 kilometers by 10 kilometers, let's say about 100 tanks. That's a tank battle; that's the number of tanks we would have in a tank-versus-tank battle, and there are no civilians there. So it is a contested environment where there are no civilians present. Now, this is to do with military capability; here the morals and ethics are not coming into play, but AI is coming into play to build up the military capability of whoever has the AI technology. So today, this tank battle would be fought with another hundred tanks, and they'll be contesting amongst each other, and so the blue forces, if I may call one's own forces the blue forces, would be destroying the tanks, and the two sides are on par. Now, let us say one side has AI technology and, instead of tanks, fields autonomous, let's say, armed drones, and they go in. So now I'm trying to analyze what the complexity is, as compared to today's technology, of these armed drones picking up those tanks and destroying them. I think the complexity gap is hardly anything. The type of technology which is there can easily do it; drones are already in place. They only have to pick up tank signatures in a desert, which is pretty simple. I mean, that problem is already solved; I don't think that's an unsolved problem. And so, if a country develops it into a military capability, well, it'll pick up those tanks like anything, and their own combatants' lives will be saved. So in such a scenario the complexity is not there. The complexity is there in a 4GW scenario, where there's a terrorist who is mixed up in a population.
It may be a terrorist, or it may be someone in some other form. So I'm saying it's mixed up in a population, and to distinguish between them, where there's no external distinction at all, how does one do it? That's a complex problem. So I just wanted to come in on the complexity point. Mary, come in and sort all this out for us. I mean, I'm just thinking back on the international talks that we've been participating in for the last three years; not three years of talks, basically three weeks of talks over the last three years. And they look for points of common ground where the governments can agree, because there are about 90 countries participating in this. And at the last meeting, I thought I heard them all saying these don't exist yet: the fully autonomous weapons systems do not exist yet. We do have some types of autonomous systems in place at the moment, but there was pretty widespread acknowledgement that what we're concerned about, the lethal autonomous weapons systems, are still to come. The other thing that the states seem to be able to agree on is that international law applies; international humanitarian law applies. You know, trialing and testing of your weapons, and doing that through Article 36 weapons reviews, that of course applies to all of this. And around the notion of what we are talking about, a weapons system that selects and attacks targets without human intervention, there's a fair amount of convergence as well. What they haven't been able to do yet is break it down and really get into the nitty-gritty details, and that's where I think they need to spend at least a week just talking through the aspects, the elements or the characteristics, that are concerning to us. You know, is it that it's mobile rather than stationary? Is it that it's targeting personnel rather than materiel targets? Is it defensive or offensive?
Although those words are not so helpful for us either. What kind of environment is it operating in? Is it complex and cluttered, like an urban environment, or are we talking about out at sea, or out in the desert? And then finally, and this one has not really been talked about, what is the time period in which it is operating? Because it's no coincidence that this Campaign to Stop Killer Robots was founded by people who'd worked on the campaign to ban antipersonnel landmines: we're concerned that one of these machines could be programmed to go out and search for its target not just for the next few hours, but for weeks, months in advance. And then where is the responsibility, if you're putting a device out like that? So that's some of the breakdown which we need to have in the process, to really get our heads around what the most problematic aspects are here, because not every aspect is problematic. But that will help us to understand where we draw the line and how we move forward. Let me pick up on that, and then I'd especially like Dan's reaction: if states have agreed that the laws of conflict and other relevant international law would apply, then it seems to me that's a different circumstance, and we should play out what the difference is, than if they don't agree. Dan's shaking his head, so tell me why you're shaking your head, but pick up on this too. Okay, Mary is absolutely right. And, you know, when I grew up there was a band called Supertramp. Yeah, we're dating ourselves; you are probably a teenager. One of my favorite songs when I was growing up had the opening lyric, "Take a look at my girlfriend, she's the only one I got." Now, international law is like that. We have no alternative, right? We don't have a plan B. As a very old-time international lawyer who deals with this issue,
I don't have an alternative set of rules to apply to this situation. So we have no choice but to say, in all the international conventions, we will apply the existing rules; the part we're not telling you is that we don't know how to do that. And that's one of the problems. You see, the rules don't work on robots as easily as they do on humans, and they don't work on humans as easily as you think they do. And because of that, the principle, I am a million percent convinced, will be that international law applies to artificial intelligence as if they were human beings. But in reality, when we are asked to translate that into practice, we will have a huge new challenge. So that's one of them. Okay, let me jump right in on this, and we can continue it as a conversation. That seems to me one of the strongest arguments for at least a pause, if not a ban: a moratorium, precisely to the extent that what you just said obtains. Then the argument is, let's wait until we can sort this out. So tell me what's wrong with that, if anything; whether the problem is that it's not practical, or something from a legal point of view. Okay, so, I am also a cynical international lawyer, and the reason I am is because I used to do this for a living: international law is often a tool and not an end. Now, if you look at the list of the countries participating in the process, you will not be surprised that the primary candidates for fielding such weapons are less involved than the countries who are not going to be fielding such weapons. In fact, if we take the landmine issue as a specific example, the countries who joined the antipersonnel landmine regime, with very few notable exceptions, are all countries who do not have landmines. So the world is divided into two big groups: the group who have said no more antipersonnel landmines, and, with the exception of I think three countries, everyone else, who has landmines and has not joined the regime. As a result, it is not a rule of international law;
it is only binding on the member states. Which creates a very bad principle of international law: that international law is different for every single country. This is part of international law, it is how the system works, but it's one of the fallacies of the system. So, for example, for Canada it's unlawful to develop or field an antipersonnel landmine, but for Israel it's totally legitimate to do so, and in the unlikely event that Israel and Canada were to fight, Israel could use them and Canada could not, by the way. Which goes to show you how stupid international law can be. Now, I say that because of what will happen with autonomous weapons systems, and why I am not waving the ban flag together with Mary: I know who's going to field them, and the countries who are going to field them are not the countries who are going to be administering any type of result from that process. And the last thing I want to have happen is that the responsible countries, who have very complicated project and approval processes for fielding weapons, like India, who started a robotics revolution 15 years ago (in fact, I think they have the biggest robotics program in the world today, size-wise) and who took this problem on board as one of the issues they need to tackle, hold back. I would trust them much more to handle this issue effectively than a country where I know they don't care about the collateral damage as much. So my problem with the proposed ban, my concern, is that it will achieve the opposite result. The good guys, who will take care only to field systems after they know that they can achieve all of the good results we think they can, won't field them until they're ready, with a small probability of mistakes; but the other people will field them earlier. And that is not necessarily a reality I want to live in. So that is where I come in on the discussion. How do you respond to that?
I mean, just to say, the treaty that we're talking about is called the Convention on Conventional Weapons. It's a Geneva-based framework convention, and all of the countries who are interested in developing autonomous weapons technology are part of it and are participating in it. So nothing would be discussed in this body without the agreement of all of these countries. We do have China, Russia, the United States, Israel, South Korea and the UK in there debating the subject. And just to come back on the landmine side: we do have 162 countries now who have banned these weapons. Forty-five million antipersonnel landmines have been destroyed from stockpiles. We've gone from 50 countries producing them down to 10 as a result of the international treaty, and the international treaty includes former major users and producers and exporters of antipersonnel landmines. Our problem there is in the stocks that have been mass-manufactured. But we're not talking about doing a landmines treaty here on autonomous weapons; not yet, anyway, right? We're talking about trying to deal with it within this particular framework, and that's where we're quite sincere in saying we want this to work. You know, because if we cannot do this with everybody around the table, then you might end up doing these other kinds of efforts. But at the moment there is consensus to at least talk about it.
There's not so much consensus on what to do about it yet. And how about, what has been the thinking about a moratorium, as distinct from a ban? I ask for the following reason: if there's also the possibility that smart versions of these weapons could be more discriminating, and have other positive value from a humanitarian and other point of view, then a kind of indefinite or permanent ban seems to me, a priori, something one would want to question. On the other hand, because people stipulate that they don't quite know how to apply international law and other things, the argument for a moratorium until that's worked out would, I think, make sense to people, which is how I try to think about things. So take me through the moratorium versus the ban. I know you're working on a ban, so I'm not asking you to endorse something you're not working on, but the moratorium? Yeah. Just to say, the moratorium call came from the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, who issued a report in 2013, one of whose major findings was that there should be a moratorium until the international rules of the road are figured out here.
So it wasn't a proposal from the campaign. And when he was on his way out, earlier this year, he actually issued further reports calling for a ban. So that was his initial position, and then he moved towards the permanent ban. I mean, we haven't talked about a whole lot of the other concerns that are raised with these weapons systems, but the moral concern, that you're ceding the responsibility to take a human life to a machine, is something that people are not comfortable with, and they want to debate this. And it's not just countries like the Holy See; it's countries who have been the victim of armed drone strikes, who feel like they've already seen the effects of weapons with some degree of autonomy in them, and they don't want to cross that moral line. There are also a lot of countries talking about security and stability, and what happens when one side has these weapons systems and the other doesn't. What does it do to the nature of conflict, and to war fighting, when you have one side who's got all of the high-tech, whiz-bang technologies and can use them, and the other side that cannot? So the question here is: are we going to level the playing field so that everybody has these weapons systems, or is it better that nobody have them? Because at the moment there's still time to sort this out; there's still time to put down some rules, and there's still time to prevent the fully autonomous weapons systems from coming into place. You know... No, I think, again, you used the terminology "fully autonomous weapon systems". That "fully" is very important in this entire thing, because if we are going to put a moratorium only on the fully autonomous weapon systems, which, again, I'll repeat, implies only a human in the decision to kill, and that's all the implication is, then what are we left with? We are not really putting a moratorium on anything.
I mean, this proposal is not trying to put a moratorium on the use of AI or autonomy in all the other six or seven functions which are there in the kill chain. So essentially there is no moratorium on AI deployment or on the development of these systems. The decision-to-kill part of it doesn't require AI; it's just an implementation problem of how that defense system works on the ground. So that's one part. Really speaking, if you say moratorium, in effect nothing will happen on the ground, because all the individual technologies will get developed. The second part, the last part of what you said, was about who has these weapons. We had, in the opening address, the Third Offset Strategy of the US. Now, that rationale, as to whether both sides will have it: the whole idea of developing this technology is to have that military capability, to have predominance over your adversaries. So that logic cannot be applied to a particular type of system per se, because the idea of developing any new system, and this new technology, beyond having a technological edge over your adversaries, is also, as I brought out, and it is my belief, that bringing AI into the weapon systems is going to lead to a cleaner form of warfare. Just as PGMs are better than, you know, carpet bombing and the non-smart weapons. Humanitarian law itself recommends PGMs over, for example, cluster munitions, which we can rule out. But even vis-a-vis the standard bombs being dropped from aircraft, PGMs are better, because they lead to less non-combatant loss: not just lives, but also property. In a similar manner, more intelligence means more discrimination. Even if you don't have aspects of empathy and judgment and all that (that's at a much later stage), it'll lead to more precise targeting of what you want to target. And so, to that extent, on the one hand you're building
military capability, and on the other hand you're leading to a cleaner form of warfare, which I think are good benefits to have. So, in summary, I would say that a moratorium, just saying moratorium till we sort out the issues, is actually not going to lead to anything positive on the ground, nothing concrete on the ground. If ultimately the conventions decide from other points of view, looking into the future at this aspect of AI taking over, you know, Kurzweil's Singularity and taking over the human race, et cetera; if you're looking at that perspective, and from that point of view one wants to ban the development of the technology at this stage, well, that is worth considering as a point. But not from the issue of the decision to kill; that, really speaking, is what I feel. Thank you. Dan, and then I want to open it to the broader discussion. I think the point I want to make is that there are several different agendas, all legitimate, at work here. One school of thought says we're not ready to field such fully autonomous systems yet. I think they are currently right. I think we haven't solved the technological requirements to make sure of the statistical accuracy of our systems in a complex situation; not in the simple one, but in the complex one. I haven't heard of anyone who has solved the AI problem of doing that yet. It requires so many different schools of technology. It requires accurate target identification; remember, this is in a combat situation, so you need accurate target identification in complicated environments. You need a machine to be able to do so under a lot of physical stress, lots of challenges, which I call technical, but they are really intelligent technical difficulties. Okay, but they will be solved. I'm a million percent confident that they will be solved; they're just not ready today.
Okay, so one group is saying: wait until you're sure, before you allow a machine to press the button which shoots and kills a human being. That is one school of thought. Another group actually says something wider: we don't want machines to kill people, period, irrespective of how good they are at doing it. We don't think this should take place. Now, this is a moral, philosophical, important discussion on a totally different level, which has nothing to do with the technology involved. I will point out here that we have already undergone a partial robotic revolution in the civilian sphere; they've just become invisible already. If I go back in time, one of our favorite stories is, you know, the first elevators in the world were built in Chicago, when they had the third high-rise. And, like you saw in the old movies, there were elevator operators who used to operate the elevator to stop you at your floor. But what happened was, they built a high-rise which was too high: human operators would have had to move too far. So they built the first ever machine-operated elevator in Chicago. Now, the problem was that when people walked into the elevator and didn't find the operator, they thought it wasn't working. So they put up a sign, and we have a copy of that sign, explaining: this is the first ever machine-operated elevator; it is perfectly safe to use. And no one would use that elevator in the beginning, because they thought it was unsafe. How can the machine know where to stop?
An elevator is a very primitive form, compared with today's quite complicated software, of an autonomous machine which can kill you. Traffic lights, instrument landing systems for aircraft: these are all autonomous systems, in that they make decisions where humans do not, and if they make a mistake, people can die. We have long accepted the fact that computers can make decisions for us which can kill us. What has happened for the first time is that we have reached a stage where we are thinking about them doing it on purpose, and this is a decision point; we need to decide if we're crossing it or not. Being the cynic I am, I think we've crossed it already, because too many countries will not adhere to whatever comes up from that discussion, and we won't have a choice but to go there. But I'm happy we're having the discussion now, and not 20 years from now. And the final school of thought, which I think the General voiced perfectly, is the question: do we want cleaner wars? And there are two schools of thought on that. One says the more accurate the missile systems, the better; and coming from Israel, remember, we are the advocates of accurate missile systems, because the fewer civilians we hit, the less Israel is targeted for doing something wrong, right?
So we have a vested interest in using more accurate munitions. The problem, of course, is that the cleaner you make the battlefield, the easier it is to fight. And so there is a legitimate counter-argument saying that part of the reason why there are not so many wars is because war is dirty and civilians die, et cetera. If you manage to clean everything out, and you just kill the combatants, you'll be happier to go to war. Now, I'm not saying I agree with that position, but I'm showing you the different schools of thought which are converging around this issue. And each one is a separate discussion, and you need to choose which one you want to focus on at every given moment, because each one does something different. That was a great summation and taxonomy of the discussion. So I want to thank you, and I want to thank each of our panelists. I think it's been a really terrifically sharp and informed conversation. Let's open it to discussion. You all know the procedure: I call on you, and then you say who you are, and somebody will bring you a microphone. There's a lady here about midway, and then the gentleman, you're walking right by, but let's have the ladies first, at least for the next eight days, or until January 20th. Thank you. I'm Dianne Bobbercheck from the Center for Naval Analyses. I was wondering how you all think this discussion applies to cyber warfare, particularly thinking of scenarios where cyber weapons could be lethal. Okay. Actually, cyber warfare, the cyber domain, is very much part of this discussion of autonomous systems and how autonomy should come into play as far as warfare is concerned. But really speaking, it didn't form part of our interactions here, because the current heated debate is about human lives, killing human lives, and cyber, while in a sense it can affect human lives, does so in an indirect sense.
So when you're talking of, you know, cyber defense, let's say a cyber attack and an autonomous response from the adversary to kill that attack which is coming in, which was referred to, I think, in the opening remarks, that is very much autonomy playing a part in warfare in the cyber domain. So there's no objection to that, and to that extent I think that field is getting developed and will progress without any legal and ethical issues involved. That's what I would say.

Anyone jumping in on that?

Yeah, I actually think it's part of the discussion. One of the reasons I say so is because I don't actually know where cyber stops and kinetic begins anymore. I used to know; I don't know anymore. One of the discussions we've been having on fielding robotic systems, for example, is what type of protection you want to give those systems against being hacked. So, for example, one of the ideas that came up in a discussion a few years ago was that maybe we need to create a kill switch with which you can turn off a malfunctioning WARS, as we call them: a weaponized autonomous robotic system, right? And then someone said, yes, but someone could hack it. And so the reality is, I think most of the discussions are the same. I totally agree with the general that direct cyber attacks are usually not focused on killing human beings, but indirectly they can do tons of damage. So I think this discussion should cover cyber autonomy as well.

There's a subset of cyber autonomy which is scarier than anything we've discussed so far: in the cyber world there is a possibility of self-replication. We do not know how to create a WARS, an autonomous fighting vehicle, which will create a copy of itself and go out into the battlefield.
However, we already know how to do that with computer viruses. And so I actually think the cyber autonomy world is even scarier, because it has the potential of us losing control, more than the kinetic side. But that's another issue for this discussion.

It's Halloween, so scary is okay. But the gentleman right there first, and then we'll go back.

Jesse Kirkpatrick. I'm a professor at George Mason University. I want to pick up on a point that Dan raised about the varying levels of autonomy that we have in technology currently, and that we're almost on the cusp of different types of autonomous systems that can take lives. I want to point to one that already does, and that's self-driving cars. They make moral decisions to kill: they're going to crash, as a matter of the laws of physics or statistical probability, and they're going to need to be programmed to make a decision that is a life-and-death decision. So I'd like to hear a little bit about some of the distinctions that the panelists see between this type of technology and lethal autonomous weapons.

I've done some work on that, and you mentioned it in your opening comments. The short answer, of course, is that no one has a good answer for what we're supposed to do with an autonomous car, right? So, being a procedural lawyer, the question then becomes not what do we do, but who is responsible for doing it. And so we now have a discussion which goes something like this. Option number one, and this is from a discussion two weeks ago, by the way, with some of the companies that do this: you allow the guy who buys the car to make the decision when he buys the car. So when you choose the color of the car, they'll also ask: by the way, would you rather commit suicide when the car hits the following situation, or would you prefer
not to, sir? Now, I don't know how many of you would buy the car and press that option, but it's a decision, and one of the people in the meeting actually said, then let's agree that we give different colors to those cars, so that you know who they are on the road. Now, that is a real discussion, and that is one example.

The other way of doing it is to say, no, the car comes hardwired with a decision. And then the question is, do we tell the people who buy the car what that decision is? Now, the answer is actually that you can't, because it's an algorithm; it's way too complicated to explain. Because of course the car won't automatically kill you: it will go through a process of decision-making in the 0.2 seconds it has before it has to start acting, and it will make its best effort to come up with whatever the guy who wrote the code told it to do. And there's no way we can summarize that in a way the customer will understand.

Now, I am taking you through this because when we try to move the analogy to the warfare side, the main difference, and I thought this was your question, is that on the warfare side this is all intentional. But the reality is that on the warfare side, the big problem the general was referring to is the distinction part. When you have different people on the battlefield and you want to identify who is a foe and who is a non-combatant, then you need to find a way to optimize what you're going to do so that you minimize hitting the one group and maximize hitting the other. It's actually exactly the same question. If you take away all the fluff from around it, it's exactly the same scenario, the same question. And then the questions arise: for example, who is going to make the decision? Are you going to ask the commander to tell you in advance what level of civilian casualties is acceptable?
Which is option one. And that's actually easier for me to sell, for example, because then they say, well, that's how military operations work today. Or are you going to allow the manufacturer of the autonomous weapon system to hardwire that into the system? And then, if I went back into my military career and I'm back to being a colonel in the Israeli army, I have no idea what the system is going to do when I press the button. For all I know, it's going to kill one or two civilians, or none at all, and if it does, I have no way of controlling it. So the questions are exactly the same, although the scenario is different, and I think you're right: I think we're facing the same dilemmas now on the civilian front that we're going to be facing on the military front in the very near future.

Yeah, please.

I think one of the things happening in these discussions is that we are talking about autonomous systems in general, but there are grades of autonomous systems, to be used in different contexts. So while Daniel said I painted a simplistic situation, in today's context that may appear to be simplistic, but in yesterday's context, picking out tanks with an autonomous vehicle was not a simplistic affair. So that is one situation. An easier situation is, okay,
you tell your autonomous systems to go after all the enemy airfields. It's much easier to identify enemy airfields; you go and bomb them, where otherwise bomber missions would have gone to destroy those airfields. That is an easier case. The next, less complicated one is what I painted: tanks in a geographic area of 10 by 10 kilometers. When you come closer, I can paint another situation, where a company is going in for an attack and there are bunkers. Now, when you have bunkers and a company is going in, and autonomous systems go in with this company attack, well, everyone there is supposed to be a combatant, so to that extent there should be no moral and ethical issues. But that's a more difficult, closer situation, because it could now end up in a close-quarter battle, where aspects like empathy would come in if a human were there. That's why I'm saying this is a more complicated situation, as against tanks, where there's no empathy, because really speaking you're not seeing who the humans are; you're not looking at the human before killing them.

So the broad point which I'm making is that what we are banning, or rather what we are deploying, I would say let's not talk of banning, how it rolls out has to be in a graded fashion. Whatever technology level is reached, to that extent that type of autonomous system should be permitted to be deployed, in a responsible manner. And as such, we are already in that stream, because there are already autonomous systems on the ground; I mean, they've been there for decades. You painted mines as the most primitive form of autonomous system. Well, mines are being... I mean, there's a convention against mines for similar reasons, but let's not talk about mines. There are things like the Phalanx, which are out there.
We are killing incoming threats in an autonomous manner. So they're already there, and as and when the technology is perfected, in a responsible manner, such systems should be deployed. So rather than talking about the moral aspects in general, the question which was raised just now will come up at a much later stage, if at all: whether autonomous systems can mimic the empathy and judgment part of it. That will be much later, if it is ever perfected to that extent.

Now, that brings me to the second point, and that is about who's accountable. The point of accountability was raised. Is it the manufacturer? The developer? The commander? The state? There are different levels of accountability. I would say that if an autonomous system malfunctions on the ground, the commander and the state in any case cannot absolve themselves of responsibility; they are definitely responsible in every case. But even the commander who's there, the military person, is responsible, because he's supposed to know, before he acquires a system, what the drawbacks of the system are, and it is within that bounded morality, that bounded capability of the system, that he is supposed to deploy it. So if it malfunctions, he is responsible; he has not tested it properly. So the test-and-evaluation aspect is also an angle to the entire autonomy debate, and it has to be very strong the more autonomous we make these systems and the more complex the scenarios in which we deploy them.

I want to take that, but I suspect, as with the vehicles, if we move in this direction with military systems, that latter point will be more debatable. In other words, "do we want cleaner wars?" was a question; "do we want fewer traffic fatalities?" And the answer may be: yes, but I'd rather be in a system where the driver and the soldier are accountable, even if it's less safe and less clean, than one where a big supplier is accountable or the state is accountable. It's interesting.
This is going to bring up interesting issues, for a variety of reasons, including financial ones: I'd rather not take on the liability; I'd rather have you have the liability. But anyway, a brave new world.

This gentleman here, right in the middle. Yeah, and then why don't we take two: then this lady with the blue and white striped shirt here, if we can get another microphone to her. Let's take two in the interest of time; I think we're bumping up against it.

Hi. So, keeping with the theme of things that are scary: you talked a lot about cyber autonomous offensive and defensive capabilities, but we only touched upon autonomous deterrence. That could either be a kinetic capability, where you put in as input what you would do, say a second-strike nuclear attack, or in the realm of cyber, where you launch a retaliatory attack before your systems go offline completely. My question is: how do you integrate these questions about deterrence into these questions about autonomous weapons systems? And how do you avoid, you know, a Dr. Strangelove-type effect, where you don't tell your enemy that you have these capabilities, for operational security reasons, but you make it much more likely that things will go out of control?

Let's take the other question while you're thinking about that, and then you can parallel-process.

Hi, my name is Lauren Green. I was a holistic essay assessor for the Educational Testing Service and scored the Test of English as a Foreign Language for six and a half years, until the advent of an algorithm that replaced the human rater, and now I'm becoming a journalist. My question is: are you not crucially or critically aware that artificial intelligence, and programming weaponry and computers
to think like humans in algorithmic code, is distorting our own reasoning process and cohesive reasoning with natural language processing, because we are granting these machines so much importance that we cancel our own reasoning out of the process?

Wow. I didn't do that well on my SAT, so I don't know if I can understand that question, but I'm trying to process it.

We're trying to create a system in which a robot is going to think like a human would about when to strike, or what to strike, or how to strike, and in the process maybe even a computer system to reason about when it would be appropriate to strike. So we're granting this algorithm that we're creating more weight than our own intrinsic and spontaneous thinking, and it's canceling out our own ability to think spontaneously and reasonably, as demonstrated, I think, even today by some of the explanations that you've provided, which maybe even lacked a real critical target in your arguments; there was a lot of open processing without, in some cases, really reaching a definitive answer. And also, the process for deterring autonomous weaponry is entirely too slow. I think most people are critically aware that there's a lot of apathy toward the idea of altogether canceling out the prospect of fully autonomous weaponry, and I'm wondering if that's just because so much money is invested in the artificial intelligence process, and not enough in human capacity.

Okay. So I think the first part of the question was about delegating. You're saying that a machine can be more reasonable and make more reasonable decisions, that it would be able to arrive at the correct decision in a better manner than a human?

No, I think the opposite. Yes, she's questioning that.

Yeah, see, she's questioning that. Well, she's asserting, not questioning. She's saying what it is:
Yeah, that it's not going to work, and that we're destroying our own capacity to reason and think by pursuing it.

Okay, so she's saying that a machine with AI will never develop to a state where it can do better than humans. Is that what she's saying? I think yes. Okay. Now, that's for the AI scientists to answer, you know. Why would we ever want that? Why would we ever want a computer to do that? It's not about today's technology; it is about how AI is developing. If you had asked ten years back whether it would be able to understand natural language, we would have said no, but you see what is happening today. And so, on the aspect of reasoning, in fact my own belief, with a layman's knowledge of AI, is that anything the human mind can do, including mimicking empathy and mimicking judgment at any level, the time is not far off when AI will develop to that stage. There is no scientific reason to believe otherwise. But it is only mimicking judgment; it's not really rationally judging.

Dan, you want to jump in on this?

I want to talk about the two questions together, because it's all a question of delegation. You used that word in your introduction, and you're questioning whether it's right to delegate some forms of decisions to machines, with an assumption that it's a bad idea. I do not necessarily totally agree with you in every scenario, but I think it's a legitimate question. Okay. You went one step further: should we delegate the authority to use a significant amount of power in a disaster situation, where human beings may not be able to respond quickly enough, effectively enough, or intelligently enough to counter-attack, or whatever? These are great questions, because they raise the real question of what we are developing AI for. Now, it started off, if we forget the first few years when it was a scientific experiment and games, as something which is supposed to make our lives better and easier.
That's the entire idea behind this entire enterprise. So, for example, if it can make a good decision quicker than a human being and save a life, most people would say that's a good thing. And as we're seeing technology develop, I personally, being a technological layman who was working in this field, can tell you that I've seen numerous examples where computers are much better than human beings at making decisions, decisions which I want them to make, because human beings are scared, human beings are tired, human beings don't have all the information, and human beings sometimes just act on what we call instinct, which turns out to be a subconscious decision-making process that is sometimes very good and sometimes really, really bad. Now, it may not always be a good thing to delegate authority to a machine, and I think the decision we need to make is where we agree that the machines come to help us. In your scenario, which is an extreme one, I would rather not let the machine make that decision. But I can definitely identify parts of life where I want machines to help me out, where I really like the fact that I don't need to trust human beings, with all of their fallibilities and limitations. But I do not want them to replace us in the things I care about, and this is the type of discussion I think we should have now, before we let technology companies and market pressure push us in a direction we are not necessarily willing to go.

If no one else wants to jump in, just to say: we hear quite a bit from the artificial intelligence community, the guys out in Silicon Valley, about how artificial intelligence can be beneficial to humanity. This is their big catchphrase, and they're investing money into trying to determine ways in which it could be beneficial to humanity. But delegating authority to a machine on the battlefield without meaningful human control is the line that many of them draw. We haven't talked about policing.
We haven't talked about border control. We're just talking about armed conflict at the moment, but it's not just in the realm of armed conflict that we're concerned about this; it's much broader than that. But I guess the point at which the Campaign to Stop Killer Robots comes in is the point at which it's weaponized. It's a much bigger, broader debate, and we don't have all of the answers to much of it.

Okay, well, I want to thank, obviously, the panelists, but also all of you, for at least beginning the process of this debate here and helping us really, I think, home in on what some of the key questions and issues are. So thank you all again, and thanks, Dan, General.