Anyway, good morning. I am so happy to be here. I am from Dublin, so it's a great privilege and an honor to get to speak at EuroPython in my own home city. Thank you very much for having me.

So, just to get things straight at the start: what is this talk about? It is not about my worries that the robots will become clever and smart and, you know, overthrow humanity with their superintelligence. That's the first thing people think of, and we start throwing around terms like autonomous weapons and killer robots. But actually, what I worry about is the opposite thing: that we will introduce autonomous weapons and they will be too stupid.

This photo here is a bunch of autonomous cars from a company called Cruise. It was taken on June 30th this year, so only a couple of weeks back. This company is allowed to operate 30 autonomous cars in San Francisco, and it has a license to pick up passengers at night and drive them around with no driver in the car. Of the 30 cars they're allowed to run, about 20 basically decided to pull in and clog up a junction in San Francisco, bringing traffic to a grinding halt. Now, no lives were lost here, which is great. But then, cars are not supposed to kill; cars are supposed to avoid killing people. What will happen if we introduce a fleet of autonomous weapons and all those weapons start doing stupid things all at once, or even one weapon does a stupid thing once? People will die, and that is what I worry about.

Just a brief recap of who the heck I am. I've been a software engineer, or an SRE, for my entire career. I joined the Campaign to Stop Killer Robots about four years ago, and I will tell you that story shortly. It's super weird to wake up in the morning and find the Guardian running an op-ed that says you're a modern-day hero. I'm not; I'm just doing the thing that I think is the right thing to do, and that I hope all of you would do as well after you've listened to this talk.

So here is my backstory. In 2017 I was working at Google, and Google at that point had taken a secret contract with the US Department of Defense to work on a thing called Project Maven. This is the logo for Project Maven, and there's a little motto here: "Officium Nostrum Est Adiuvare", which means "our job is to help". Now, I don't know who made up this logo or what they were thinking. Either, option one, they thought this was a cool logo and they'd never heard of "I'm from the government and I'm here to help", or else they did know about that and it's extremely subversive. Either way, it's pretty funny.

So in 2017 they kicked this project off, and basically the idea is: normal military government procurement processes are slow and cumbersome, so what if they could get some of that private software industry special sauce and use it to supercharge their military systems, so that they could have warfare at the speed of thought and all this kind of stuff?

Basically, their problem was that they had all this video surveillance material. It's called wide area motion imagery, and it's the video you get when you fly a drone over somewhere and record from high above. There's relatively little detail, but there is enough to pick out people moving around, vehicles moving, that sort of thing. They fly these drones over certain areas pretty much continuously, so they end up with a huge, huge amount of video, and
they literally cannot hire enough people to actually do all the analysis on it. So basically their idea here is: automate that. Okay, so that's fine; you want to do some machine learning, what's the problem? Well, the problem is kind of this, right? At the start of the process is machine learning, yeah, sure. At the end of the process is people getting blown up. This is a very, very blunt quote, and apologies for any offense caused, but I think Jamie Zawinski of Mozilla is correct here.

There's a thing called the military kill chain, and it goes: identify your target, dispatch forces to your target, attack, and then destroy. That's the kill chain, and that is the end of the process. Maven was not a weapon, but it was very much feeding information into the weapons systems.

I was not the only person at Google who was concerned about this project. I knew about it a few weeks before it went public, and when it did go public, every art department at all of the tech publications went on a bit of a field day and produced all these kinds of images.

What I had been asked to do in particular: I wasn't asked to work on the core Maven project, but I was asked to help Google set up new air-gapped data centers. Phase one of Maven was 18 months long, and what they were going to do was basically hand software to the DoD, handing over trained models. They also had some pretty fancy plans for nice UIs that would give you a timeline of people's activities, and a little connected social graph of who in which houses visits whom and which other buildings, all this kind of stuff. But what they were asking me to do was help them run that software in-house in phase two, in these air-gapped data centers that would have been supervised in operation by the US military.

So I ended up leaving Google, I started speaking out publicly about this issue, and I joined the Campaign to Stop Killer Robots. I still volunteer with them regularly, doing advocacy-type work.

So, Ethics 101. I am not going to talk about Ethics 101, because this is a tech conference, and Vicki said: don't talk about ethics, Laura, talk about the tech. For me, my ethical stance against killer robots is actually very grounded in what I see as fundamental, and probably insoluble, problems with the technology. So that's mainly what we're going to talk about. If you do want ethics, The Good Place is actually pretty good. I did a whole master's degree in ethics, and I learned maybe nearly as much from The Good Place anyway. So it is what it is.

So, on to the tech. Here is, I guess, a very high-level schematic of your autonomous weapon. First off, an autonomous weapon is not a weapon with just any degree of autonomy. You can have an autonomous drone that flies autonomously and takes a certain degree of decisions around routing, all that kind of stuff. But when we talk about an autonomous weapon, we talk about autonomy in the critical functions, and that's targeting and the decision to attack a particular target.
So: target identification and target selection. An autonomous drone that can autonomously fly somewhere, but where a human is making the decision to attack, we don't consider an autonomous weapon, even though it has some autonomous capabilities. We're all about the lethal stuff here: those critical functions of choosing the target.

How targeting works is kind of key to this whole argument. The autonomous weapons that we're seeing emerge, or that have existed for some time in proto-autonomous form, tend to use sensors to sense the environment, and then they make some decisions internally. The sensors they use tend to be radar and lidar; infrared; mobile phone signals (a huge amount of the targeting done on individuals is actually done on mobile phones, so you're not looking at the identity of the person, you're looking at what phone they're carrying, and assuming that tracks); radiation, meaning enemy radar; sound, for example detecting the origin of someone shooting at you; other signals, such as IFF beacons, identification friend or foe, which basically tell you whether an aircraft or ship or whatever is friendly or enemy; data from other devices, so you could have peer-to-peer networks of these things sharing information; and cameras, for vision. That's probably not an exhaustive list, but I think it covers most of the bases.

The first example of autonomous weapons that people tend to talk about are guided missiles and sensor-fused weapons. What these do is use radar, infrared, or some other means to track a target, but the target is first selected by a human being. So I would say: okay, that ship over there, we're going to attack that, and your guided missile will lock onto it. The decision is made by a human being, and it's pretty close in time to when the actual strike happens: typically you target, you fire, and the attack starts right away. And the targets are typically military targets, so tanks, ships, that sort of thing. These are not typically things you would use to attack what we call dual-use targets, that is, people, or buildings that are not exclusively military bases, that sort of thing. Guided missiles tend to be pretty military-focused, and they usually use the sensor data only to control the very end stage of the attack. They pretty much fly towards the thing you pointed them at, and then they use their radar or their infrared basically to stay locked on just in the final phase, so they don't veer off.

Now, a lot of people say that autonomous weapons systems will make warfare more precise and will help commanders carry out their intentions more precisely. The Americans are the ones who say this a lot in the international debates. Every time I've looked at it, what they're talking about tends to be stuff like this, which, as I say, are not really autonomous weapons in the fully autonomous sense of the word.

Here's the second thing some people talk about when they say autonomous weapons are great: missile defense shields, or counter rocket, artillery and mortar systems, C-RAMs. These are interesting as well, because they're typically guarding a human-occupied position.
You use them on a base, or on a ship, or somewhere where people are. They don't move around by themselves (they can be mounted on a vehicle, which can be moved), and they typically have more than one mode. They may have a sort of manually operated mode, where they watch for potential attacks and require a human being to push a button and say: yep, that's an attack, fire back. Or, if you're expecting a big swarm of incoming missiles, or these days drones, or whatever, you can put them into a fully autonomous mode, and they will attack whatever they see incoming. But that's not typically how they're operated most of the time. They work based mostly on radar and thermal imaging, and the targets are generally military in nature: these are designed purely to attack missiles and mortars and things like that coming in. However, even with these systems, which are co-located with humans, don't move around, and are pretty much targeted at military things like missiles, there have been accidents, particularly around aircraft. There have been a couple of cases of missile defense installations attacking aircraft, both military and civilian. So even this is not foolproof.

Then another thing that's coming out is smart tanks and armored personnel carriers, or APCs. These, again, are often manned vehicles, but they can be completely autonomous and completely remotely operated, and they increasingly have the ability to detect threats, which can be incoming fire. But here's the thing. I went to an arms fair last September, and I talked to a bunch of these tank manufacturers, and what they do now is let you install your own software plug-in that defines what a threat is. You write a little plug-in, and it could be anything: you could decide that all human beings are threats, or dogs are threats, or any time you detect the word "hospital" on a building, it's a threat. You can do whatever you want, crazy things. That's a really interesting development, and I think that may be the way manufacturers are going to go with this: they're going to build these very flexible systems and let each military define what it is they're going to do with them, right?
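To make concrete how thin that layer is, here's a minimal sketch of what such a threat-definition plug-in interface might look like. Every name here (`Detection`, `ThreatPlugin`, and so on) is hypothetical; this is not any vendor's real API, just an illustration of how little stands between a "flexible platform" and a reckless targeting policy.

```python
# Hypothetical sketch of a vendor-style "threat definition" plug-in API.
# None of these names come from a real product; the point is how easily
# a dangerous policy slips into a "flexible" system.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Detection:
    object_class: str   # e.g. "person", "vehicle", "building"
    confidence: float   # classifier confidence, 0.0 - 1.0
    distance_m: float   # range to the object in metres


class ThreatPlugin(Protocol):
    def is_threat(self, detection: Detection) -> bool: ...


class EveryPersonIsAThreat:
    """A perfectly 'valid' plug-in that treats all humans as threats."""

    def is_threat(self, detection: Detection) -> bool:
        return detection.object_class == "person"


def engage_check(plugin: ThreatPlugin, detection: Detection) -> bool:
    # The platform happily defers the life-or-death judgment to the plug-in.
    return plugin.is_threat(detection)


if __name__ == "__main__":
    d = Detection(object_class="person", confidence=0.62, distance_m=140.0)
    print(engage_check(EveryPersonIsAThreat(), d))  # True
```

Nothing in an interface like this forces the plug-in author to respect distinction or proportionality; the platform simply executes whatever predicate it is handed.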
But here is the thing that really, really tends to have people worried: loitering munitions. A loitering munition is sort of a cross between a missile and a drone. What they're designed to do is fly around an area for a lengthy period of time, multiple hours, looking for potential targets, and then they can attack. Obviously it's software, so you can have any sort of behavior you want: you could build loitering munitions that will always ask for human validation before an attack, and your targets and your targeting criteria can vary really widely. This is of course one of the challenges in the debate about autonomous weapons, because an autonomous weapon can be such a broad class of things, and the shape and the scope of the autonomy can vary a lot.

But the key thing about the loitering munition, I think, is that there's much less human awareness and much less human control here, because this thing is moving around. You deploy it to patrol an area, which is potentially multiple square miles, and you don't know where and when it's going to attack. It's extremely difficult as a military commander to say: I'm going to deploy this weapon in this area, and it's going to attack this class of object. You can't predict what it's going to do exactly, and unless you have a really, really good awareness of what's in that area, there's a huge potential for things to go wrong. I'll explain why in a bit.

Here's a concrete example of a loitering munition. This is a thing called the Harpy, and its newer sibling the Harop, made by an Israeli arms company called IAI. What it's designed to do is fly around, look for military radar signals in a particular area, and attack them autonomously, or semi-autonomously with human supervision. The idea is basically: find your enemy's anti-aircraft installations, attack them, and take them out. This is exclusively attacking military targets, not dual-use. There is some risk of accident here, but the sensor processing is really straightforward, so there's a bit less that can go wrong. It's not the riskiest of autonomous weapons.

This next thing came out about two or three years ago. This is the STM Kargu-2, made by the Turkish state weapons company STM. They boast about the awesome machine vision software they have here, including facial recognition. The implied use case is to be a hunter-killer robot that hunts and kills human beings. And in fact, two years ago in Libya, it is claimed that these weapons were actually used for just that: they were basically pointed at a group of people who were presumed to be fighters and essentially killed them all.

Now, there's a huge ethical debate around targeted killing, and around whether or not that's an okay way of waging war, and an okay way of carrying out counterterrorism activities in particular, and we could easily talk for more than 45 minutes just about that. But even if we bypass all of that entirely, there's a big technical concern here. When you're talking about machine vision in this context, you are talking about non-cooperative facial recognition in video, and that is not really reliable. The US National Institute of Standards and Technology (NIST) has a long-running series of evaluations
that it runs on many different kinds of facial recognition tools, and in its last assessment of non-cooperative facial recognition in video, its conclusion was essentially: it is not good enough for anything important without human supervision. So building this sort of thing into a killer drone is maybe a bit concerning.

This one here is a thing called the Orlan-10/Leer-3 system. The Orlan-10 is the drone, and the Leer-3 is a base station that does control and feeds it information. What the Orlan-10 does is fly around and sense mobile phone signals.

Now, there is a system called SKYNET. I don't know if some of you have heard of it, and no, not that Skynet; we're not back to the Terminators. SKYNET is a real computer system that was used by the US to attempt to distinguish terrorists in Pakistan. This was in the late 2000s and early 2010s, I believe. They basically sucked up all of the call metadata and SMS metadata from the Pakistani phone networks, and they used it to try to build a machine learning model that would determine who is a terrorist and who is not. Here's the thing: they trained that model on five people. Five. They had five examples. That's crazy. This all came out in the Snowden leaks, by the way; there's quite a lot written about it, and it seems to be pretty reliable. So this is the sort of statistical basis on which people are being targeted: suck in all the phone network data, build a jury-rigged machine learning model based on five examples, fly this sort of thing around, and kill people.
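A quick back-of-the-envelope calculation shows why that matters. The numbers below are illustrative assumptions, not figures from the leaked documents; the base-rate arithmetic is the point. When real targets are vanishingly rare, even a tiny false-positive rate swamps the true positives.

```python
# Back-of-the-envelope base-rate arithmetic. These numbers are
# illustrative assumptions, not figures from the leaked documents.
population = 55_000_000       # assume ~55M monitored mobile subscribers
real_targets = 100            # assume at most a few hundred genuine targets
false_positive_rate = 0.001   # assume a generously low 0.1% error rate
true_positive_rate = 0.5      # assume the model catches half the targets

false_alarms = (population - real_targets) * false_positive_rate
true_hits = real_targets * true_positive_rate

print(f"innocent people flagged: {false_alarms:,.0f}")  # about 55,000
print(f"real targets flagged:    {true_hits:,.0f}")     # about 50
precision = true_hits / (true_hits + false_alarms)
print(f"chance a flagged person is a real target: {precision:.2%}")  # ~0.09%
```

And with only five positive training examples, you can't even estimate these error rates, let alone validate them.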
Okay, so I want to talk very briefly about international humanitarian law, or IHL. This is also known as the law of war. The idea here is that there are two legal regimes, and one of them applies at any one time. Sitting here in Ireland, we are not in a state of war: normal national laws and normal human rights law apply to us, and we can't be summarily killed unless maybe we're menacing somebody in some very physical, very immediate way. We have policing. In a state of war, a different set of laws applies, and what they do is provide certain specific protections to civilians and also to combatants. So there are rules around things like how you take care of prisoners of war, that kind of stuff.

It's based on just war theory, which has a two-way split. Jus ad bellum is basically how you make the decision about whether or not it's okay to go to war: is this a just war, am I defending myself, do I have another good reason to go to war? And jus in bello is all about the conduct of warfare.

Now, the two big ideas in jus in bello, the conduct of war, are these. When you're making an attack, you have to apply a principle called distinction, and what that says is: I have to attack military targets. The aim is to weaken military force; that's the only valid aim. That doesn't mean you can't make attacks where you might do damage to civilians or civilian property. You can, but you can't aim to do that. Military targets are basically military objects, your tanks, your warships, your bases, and combatants who are taking direct part in the conflict. They don't have to be in a military uniform.

But there are complications; these are not straightforward criteria. For example, there are some military forces in military uniform that are not valid targets: people who are wounded in combat. You also can't target medics or chaplains, and you can't target people who are surrendering. So there's actually a bunch of exceptions here, and when somebody isn't wearing a military uniform it gets even more complicated. This is something that was maybe relatively easy back in Napoleonic times: the guy is standing there with his bayonet and his big red uniform on, on a battlefield. These days, not so much.

Proportionality is the second part of this. So you say: okay, I really, really need to attack this thing, but there are some civilians nearby. Is that okay? Yes: if you don't intend to kill the civilians, but you foresee that you will, you're actually allowed to do that. But you have to balance what you intend to gain militarily with the amount of civilian harm you foresee. If you foresee that you would kill a thousand civilians to take out a very minor military objective, that's probably not proportional.

Now, there's even more complication around this. You have to think about these rules not only in terms of one specific attack or engagement; we also think about them in terms of weapon development: is it possible for a particular weapon to be used in compliance with these rules at all? A lot of the objections to, say, landmines come about because landmines can't do distinction. A landmine just goes off no matter who steps on it; it could be a kid, could be a soldier, could be anyone.

Then rules of engagement are another thing. Militaries have these things called rules of engagement, which are basically playbooks for how they do war stuff, and depending on what their current playbook says, soldiers and commanders will react in different ways to different situations. Think about this: you are a military force, you are holding a city, and you have checkpoints around the place to monitor what people are doing and where they're going, and to make sure the bad guys aren't moving around your city. Somebody drives up to your checkpoint and doesn't stop. What you do then depends on what rules of engagement apply.

So distinction and proportionality sound kind of simple, but actually there's a huge amount of context and nuance here, and that's really relevant when we think about whether or not an autonomous weapon is going to be able to be used in compliance with these laws. And I say it that way, not "whether the weapon will comply with the laws", because a weapon is a machine, and laws don't apply to machines; laws apply to people. But, you know, are they going to be able to be used in compliance? Whether or not they can really comes down, to me, to whether they can be predicted, and I think the answer is: not in all cases. And, you know, this is risk.
Risk is always a continuum. To me, and a lot of people agree with this, it's not just me: the more you're targeting dual-use objects, in particular people, and buildings and vehicles where there may be civilians present, and the further away you are from that human decision to dispatch a weapon or to make an attack, the more risk you have of something happening that the commander did not intend.

I've spent a lot of my career building systems that do little autonomous things. I build software that runs software, and runs distributed systems, and runs hardware, and this is something I have seen to be true: you build a piece of software, you game it out, you try to figure out what it should do in all cases, and it works for a while. Then something happens, some quirk of the environment, or of the things it interacts with, that you didn't anticipate, and bam, something happens. A software system going down is one thing, but an attack happening is another. Downtime is bad; death is infinitely worse.

So this is really what it comes down to, for me. The way autonomous weapons are developing, away from the likes of the C-RAMs and the guided missiles, where there is still that very direct human control, albeit with some smart software, and towards weapons that move around and are more likely to attack humans and other dual-use things, is far more risky. And they're risky in a way that I don't think militaries necessarily appreciate, because they haven't used these kinds of software before, and where they have, they often have had accidents.

But there's this notion, and we live in an age of artificial intelligence and machine learning hype. It's justifiable in some ways, because yes, it's gotten amazing, and there are so many low-stakes, low-risk tasks where machine learning is a great answer. But there are areas where it's not.

When we think about it (and all of these definitions are fuzzy), broadly speaking, AI is about decision-making and reasoning: optimization, playing games, finding routes. These are great applications because you can fully game out a game; you can simulate it repeatedly and train your system, so you can do these large searches of potential solution spaces, and you end up with things that seem magical. But these things don't transfer to the real world, because I cannot simulate the real world repeatedly in perfect detail. So I think the myth of the AI super-strategist weapon is just wrong.

Then we have machine learning, which is the automated analysis of data, based on statistical analysis of data sets: machine vision, categorization, identification. This is the Kargu and the SKYNET system we talked about earlier.

And here's how they would fit into weapons. An autonomous weapon roughly has this kind of logical structure. You've got some sensor data coming in. You've got some configuration: what area should I patrol, what sort of targets am I looking for, that kind of thing.
Then we process the sensor data. That's very possibly some sort of machine learning thing that's going to attempt to take that sensor data and turn it into something the decision-making part of your software can work with. Then you've got your autonomous weapons system logic, and yes, I know the collision with Amazon Web Services is unfortunate, but it is what it is. Your AWS logic is going to ask: okay, have I got a valid target? What are my goals, and does targeting this target meet those goals? What are my constraints, and have I met my constraints? Based on all this kind of decision-making, you're going to have your next action, which is going to be: attack, or continue your patrol, looking for more targets.

If you've got the Harpy, the anti-radiation loitering munition we looked at, it's going to look something like this. There isn't really a lot of AI or machine learning special sauce in here; this is fairly predictable stuff, so you don't have a lot of non-explainable black boxes. That doesn't mean there isn't potential for it to go wrong, because there is. We could misidentify signals; we could decide that some super-powerful wifi router is a military radar and take out a school, that kind of thing. But there aren't machine learning black boxes. And the critical thing here is that we've sort of pre-solved the proportionality and distinction problems by saying: all military radar is a valid target and it's in scope. That's the benefit of not building these systems to attack the very, very gray-area, dual-use kinds of targets.

But an autonomous weapon that is designed to target, say, people, or other dual-use objects, likely has a lot more of this kind of machine learning special sauce in it. Again, your sensor data could be phone signals and camera, if it's the Kargu. You're going to process the sensor data: have I matched a target? Now you've got to start computing probabilities: is it who you think it is? Compute the risk: is this an imminent threat? (A lot of the time you can only target people if they're considered to be an imminent threat, at least according to a lot of the countries that do target people as individuals.) And check proportionality: how many people do I think will be affected if I make this attack? All of these things are very gray areas. A lot of risk of getting it wrong.
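To make that logical structure concrete, here is a toy sketch of the loop just described: sensor data in, configuration, a valid-target/goals/constraints check, an action out. Everything in it (the names, the thresholds, the stubbed-out functions) is invented for illustration, and the genuinely hard parts are exactly the two functions that "just return" something here.

```python
# Toy sketch of the autonomous-weapon decision loop described above.
# All names and thresholds are invented; the hard problems are hidden
# inside the two stub functions.
from dataclasses import dataclass


@dataclass
class Config:
    patrol_area: str
    target_class: str            # e.g. "military_radar"
    min_confidence: float
    require_human_approval: bool


def matches_target(sensor_frame: dict, cfg: Config) -> float:
    # In reality: a perception model (radar signature matching, machine
    # vision, ...). Here: just read a precomputed confidence score.
    return sensor_frame.get("scores", {}).get(cfg.target_class, 0.0)


def proportionality_ok(sensor_frame: dict) -> bool:
    # In reality: an open-ended legal and ethical judgment. Here: a
    # crude stand-in that counts estimated bystanders.
    return sensor_frame.get("estimated_civilians_nearby", 0) == 0


def next_action(sensor_frame: dict, cfg: Config) -> str:
    confidence = matches_target(sensor_frame, cfg)
    if confidence < cfg.min_confidence:
        return "continue_patrol"
    if not proportionality_ok(sensor_frame):
        return "continue_patrol"
    if cfg.require_human_approval:
        return "request_human_approval"
    return "attack"


cfg = Config("grid_37", "military_radar", 0.9, require_human_approval=False)
frame = {"scores": {"military_radar": 0.93}, "estimated_civilians_nearby": 0}
print(next_action(frame, cfg))  # "attack"
```

The control flow is trivial; all the risk lives in the two stubbed functions, and for dual-use targets both of them are exactly the machine-learning and judgment-shaped gray areas just discussed.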
So, to talk a little bit about the philosophy of AI and machine learning: a lot of people say, okay, well, there's going to be progress here. AI and machine learning are going to get smart enough that these systems will be really well able to carry out the intent of the commander. The problem is that this means they have to be basically as smart as the commander. It's essentially equivalent to saying that there has to be artificial general intelligence here, and a lot of people think that's not going to happen, at least not with the ways we're currently approaching it. Herbert Roitblat wrote a very good book on this. He basically says: this stuff is great when you have fairly well-structured problems, but it is not good on less-structured problems. And if you read any of the books on military targeting, particularly around dual-use objects, they will tell you that it's extremely difficult and an extremely gray area. They do not have a fully defined process for doing this; they have criteria and a whole process, but there's a lot of judgment involved.

Inevitably, when we get into these complicated systems, there's going to be machine learning perception involved, whether it's machine vision or other types of sensing. Now we're contending with all sorts of things. On a battlefield you have weather variations, you have smoke, you have a lot of stuff going on, and you have the potential for adversarial attacks: people have long since figured out how to alter road signs so that autonomous vehicles are fooled. Even very basic things, like tracking an object that goes behind something else, are still quite a challenge in machine vision. And it's well known that trying to use an ML-based system in conditions other than the ones it was trained in yields unpredictable results. The problem with a weapon is that your battlefield could be anywhere in the world, at any time of the year, in any weather conditions, with any sort of human behavior. The environment varies incredibly widely. Even if I went out and trained my machine learning weapon in a particular place, that doesn't mean the same conditions are going to apply next December if I deploy the weapon there. I think that's a huge problem, and a bigger problem than militaries think it is.

I often look at the progress in autonomous driving as a sort of guideline here. Think of the amount of money and engineering time that's been poured into that, and there are still quite a lot of problems with it. And there will never be nearly so much money and engineering time put into autonomous weapons. So I think there are problems there.

Erik J. Larson wrote a book called The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, and he basically argues that machine learning is a form of induction. Bertrand Russell talked about the inductivist turkey, which thinks that every morning the farmer comes out and feeds him, and that holds right up until Christmas Eve morning, when the farmer kills the turkey. The turkey is surprised, because the turkey doesn't know; it doesn't understand the world, and it doesn't try to model it the way a human being would. That doesn't mean we can't be surprised, but we do have more capacity to actually understand the way the world works and to predict it.
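Here's the inductivist turkey as a few lines of code: a toy detector fit under one set of conditions and applied under another. The data is entirely invented; the point is how silently the performance collapses when the world shifts.

```python
# The "inductivist turkey" in code: a toy detector fit under one set of
# conditions and applied under another. All data here is invented.
import random

random.seed(0)

# Training conditions: "threats" run hot (strong infrared signature),
# background objects run cool.
threats = [random.gauss(8.0, 1.0) for _ in range(1000)]
background = [random.gauss(2.0, 1.0) for _ in range(1000)]

# "Learn" a decision threshold halfway between the two class means.
threshold = (sum(threats) / len(threats)
             + sum(background) / len(background)) / 2   # about 5.0

def accuracy(threat_readings, background_readings):
    hits = sum(t > threshold for t in threat_readings)
    rejections = sum(b <= threshold for b in background_readings)
    return (hits + rejections) / (len(threat_readings) + len(background_readings))

print(f"in training conditions: {accuracy(threats, background):.1%}")   # ~99.9%

# Deployment: a hotter environment shifts every background reading up
# (sun-baked rocks, running engines). The threats are unchanged, but the
# learned threshold no longer means what it meant in training.
hot_background = [b + 5.0 for b in background]
print(f"in shifted conditions:  {accuracy(threats, hot_background):.1%}")  # ~51%
```

The detector doesn't degrade gracefully or signal uncertainty; it just confidently flags nearly every hot rock as a threat.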
A great example of that capacity was this gentleman here: Stanislav Petrov. If anyone else here was alive in 1983, he may have saved your life, and in fact all of our lives. He was on duty in a Soviet nuclear early-warning bunker, and an automated alert came in that said the US has launched five missiles. His job at that point was basically to say: okay, yep, deploy the counterattack. That was his job, and he didn't do it. He instead said: there is no way that, if the US is attacking us, they have sent just five missiles; this is probably wrong. So he declined to pass it up the chain, and pretty much saved the world. Now, would a computer have been able to reason through that sort of problem?

There's a proposal that you could build autonomous weapons with an "ethical governor". This is a metaphor for the governor on a steam engine, back in the days of Newcomen and Watt: it basically stops it exploding. The problem with this metaphor is that, when you're talking about a weapon system, the operator wants the weapon to make attacks, whereas the operator of a steam engine does not want their steam engine to explode. The incentives are not aligned.

There's also a bunch of other stuff, like the frame problem, which is basically the problem of deciding what is relevant to any given decision. It turns out that we are pretty good at this; we're pretty much built to solve the frame problem. Computers? We have not figured it out yet. And any real ethical governor (these have never been built; they've only been proposed, with toy solutions) would have to solve a huge amount of very complex stuff. If we did it based on rules, there's a phenomenon called rules explosion: once you get past a certain number of rules in a rules-engine-based system, it becomes unmaintainable, because the rules interact in unpredictable, complex ways.

Putting a human in the loop can't solve the problem either. First off, militaries want to use these systems in places where they don't have communications, so human control is unfeasible there. And then there's a problem called automation bias. Basically, anywhere we've tried to automate part of a task and have humans supervise the robots, we've failed to date, because human beings are bad at this: if the automated system is doing a pretty decent job, we tend to zone out and let it do the thing. This is exactly why people keep driving their Teslas into the backs of trucks. We risk people becoming mere button-pushers, and that's not effective supervision.

Then there are stock market flash crashes caused by trading bots. This is a phenomenon based on emergent behavior in complex systems, where you have multiple things interacting. If we have autonomous weapons interacting with the world, with each other, and with people, that is a complex system, and we risk flash wars, because in a given context, if one weapon decides to attack incorrectly, probably all the weapons you have in that area will decide to do the same. In the stock market we can put in circuit breakers to suspend trading when we see a flash crash; on a communications-jammed battlefield, there is no way to do that. There is no circuit breaker for the real world.

And there's no effective feedback loop for autonomous weapons. If you're AWS or Google, you don't deploy these things with no feedback, but militaries are very bad at getting feedback on how their weapons are performing.
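Here's a deliberately crude simulation of that escalation dynamic: two systems, each programmed to return fire when it detects incoming fire. The dynamics are invented and absurdly simplified; the point is only that one spurious detection, with no circuit breaker, produces an exchange that never ends.

```python
# A deliberately crude "flash war" toy: two autonomous systems, each of
# which fires whenever it detected incoming fire on the previous tick.
# The dynamics are invented; the feedback loop is the point.

def step(a_fired: bool, b_fired: bool, false_alarm: bool):
    a_fires = b_fired or false_alarm  # A "detects" B's fire (or a glitch)
    b_fires = a_fired                 # B "detects" A's fire
    return a_fires, b_fires

a, b = False, False
for t in range(6):
    # One spurious detection at t == 0: a flock of birds, a sensor glitch.
    a, b = step(a, b, false_alarm=(t == 0))
    print(f"t={t}: A fires={a}  B fires={b}")
# After the single false alarm, the exchange of fire never stops. There
# is no market-style circuit breaker to halt it on a jammed battlefield.
```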
So, I'm out of time, so I'm just going to skip over this and simply say: autonomous weapons don't mean robot wars with no human suffering. It's not this. It's robots attacking people, and critical infrastructure like water plants and electricity plants that have a dramatic impact on human lives. If I've convinced you, here is a place where you can go to take action. Thank you very much.

Thanks, Laura. I hope you enjoyed that. We're still on time, so what we're going to do now is have Q&A, if that's okay; it's only a few minutes. Do we have any remote questions? No? Okay, so if anyone has any questions, please come up to the mic, and you can have the floor and ask Laura a question. I think we've got one.

Yeah. If we refrain from developing this technology, will dictators not do it and have an advantage? And how would we solve that problem?

That's an excellent question. So: if we refrain from developing this technology and dictators develop it. Well, first off, I think it is better for the world if we do not have the big arms manufacturers developing off-the-shelf, highly capable autonomous weapons. It's going to be very, very difficult to stop somebody developing small, jury-rigged autonomous weapons, absolutely, but that doesn't mean we can't stop the big arms manufacturers building them and selling them. Secondly, I think bringing in a legal prohibition also acts as a sort of moral brake, if we can build a moral consensus against this. Most dictators, not all, do not use things like chemical weapons, and there are several regimes in the world that have nuclear weapons and haven't used them for the last 80 years, largely because there is a strong moral consensus that these things are bad. Nothing is perfect. We don't make laws because we think they'll never be broken; all laws are broken sometimes. It's not perfect, but I think bringing in international law that says there is actually an international consensus against this will at least deter dictators.

Okay, I think we should have time for one more. Let's try one more and see how we get on with the answer.

Okay. Thank you for an amazing talk; that was really great and very stimulating. You mentioned that you were asked to talk more about technology than philosophy. I have a background in philosophy as well, and my heart kind of sank when you said that, because I think we should all be thinking about these sorts of things and philosophy. I know, I know, Vicki. But I'd be interested to know: what would you suggest to a group of programmers who might not be familiar with the field of philosophy and ethics? Where should they start? You mentioned The Good Place, which is awesome. What else should they watch or read?

That's a great question. In terms of war ethics, there's a really good book on just war ethics; I'd suggest reading that, I guess, although just war ethics is not perfect. It's a place to start. There are a lot of intro-ethics texts that will tell you the difference between utilitarianism and other approaches like virtue ethics, all that kind of stuff, but I have never found one that was super engaging. So I think my recommendation remains The Good Place, sadly enough.

All right. Thanks, Laura, and thanks, Nicholas. So I think that's the end of this session. We have coffee right now, a coffee break.
It's 10 o'clock according to my phone. Yes, it is 10 o'clock. So, thank you again, and thank you, Laura.