...Anja Dahlmann, a political scientist and researcher at Stiftung Wissenschaft und Politik, a Berlin-based think tank. Here we go.

Yeah, thanks for being here. I will probably neither cut myself nor propose, but I hope it's still interesting. I'm going to talk about preventive arms control and international humanitarian law, and how they figure in the international debate around autonomous weapons. This type of weapon is also referred to as lethal autonomous weapons systems, LAWS for short, or killer robots. So if I say LAWS, I mostly mean these weapons and not legal laws, just to confuse you a bit.

Okay, I will discuss this topic along three questions. First of all, what are we actually talking about here? What are autonomous weapons? Second, why should we even care about this? Why is this important? And third, how could this issue be addressed on an international level? So I'll go through my slides anyway.

What are we talking about here? Well, during the international negotiations so far, no common definition has been found; states parties are still trying to find one, or not. For my presentation I will just use a very broad definition of autonomous weapons: weapons that can, once activated, execute a broad range of tasks or select and engage targets without further human intervention. A very broad spectrum of weapons might fall under this definition, including some existing systems, which you can't see here on the slide. One of them would be the Phalanx system.
The Phalanx system has been around since the 1970s. It's a US ship-based air defense system, meant to defend the ship against incoming objects from the air. So that has been around for quite a long time, and it might even be part of this LAWS definition, or not, but just to give you an impression of how broad this range is.

Today we've got, for example, demonstrators like the Taranis drone, a UK system, or the X-47B, which can autonomously land on aircraft carriers and can be aerially refueled, which is apparently quite impressive if you don't need a human to do that. And in the future there will probably be even more autonomous functions. Navigation, landing, refueling, all that stuff is, you know, old. But at some point weapons might be able to choose their own munition according to the situation; they might be able to choose their target and decide when to engage it without any human intervention. And that's quite problematic, and I will tell you why in a minute. Overall, you can see that there is a gradual decline of human control over weapons systems and over the use of force.

So that's a very short and broad impression of what we're talking about here. And talking about definitions, it's always interesting what you're not talking about, and that's why I want to address some misconceptions in the public debate. First of all, when we talk about machine autonomy, and also artificial intelligence, which is the technology behind this, people in the media and the broader public (not you, probably) often get the idea that these machines might have some kind of real intelligence or intention, that they are entities in their own right. And they're just not. It's just statistical methods.
It's just math, and you know way more about this than I do, so I will leave it at that and just highlight that these machines, these weapons, have certain competencies for specific tasks. They are not entities in their own right. They are not intentional. And that's important when we talk about the ethical and legal challenges afterwards.

In connection with this, there's another misconception, which is the plethora of Terminator references in the media as soon as you talk about autonomous weapons, mostly referred to as killer robots in this context. Just in case you intend to write an article about this: don't use a Terminator picture. Please don't, because it's really unhelpful for understanding where the problems are with this kind of thing. People assume that the problems start when we have machines with a human-like intelligence which look like the Terminator or something like that. And the problems really start way before that. They start when you use assisting systems, when you have human-machine teaming, or when you accumulate a couple of autonomous functions through the targeting cycle, the military steps that lead to the use of force, or lead to the killing of people. So this Terminator scenario is really not our problem at the moment. Please keep this in mind, because it's not just semantics to differentiate between these two things. It really manages the expectations of political and military decision-makers.

Okay, so now you've got an impression of what I'm talking about here. So why should we actually talk about this? What's all the fuss about?
Autonomous weapons have, or would have, quite a few military advantages. They might in some cases be faster or even more precise than humans, and you don't need a constant communication link. So you don't have to worry about unstable communication links, and you don't have to worry about latency, detection, or the vulnerability of this specific link. A lot of, let's say, interesting military options come from that: people talk about stealthy operations in shallow waters, for example, or remote missions in secluded areas, things like that. And you can get very creative with tiny robots in swarms, for example.

So, shiny new options, but of course that comes at a price, because you have at least three dimensions of challenges in this regard. First of all, the legal ones. These weapons will be applied in conflicts where international humanitarian law, IHL, applies. IHL consists of quite a few very abstract principles, for example the principle of distinction between combatants and civilians, the principle of proportionality, or military necessity. They are very abstract, and I'm pretty sure they will always need human judgment to interpret them and apply them to dynamic situations. Feel free to correct me if I'm wrong later. So if you remove the human from the targeting cycle, this human judgment might be missing, and therefore military decision-makers have to evaluate very carefully the quality of human control and human judgment within the targeting cycle.

So that's law. The second dimension of challenges are security issues. When you look at these new systems, they are cool and shiny, and like most new types of weapons,
they have the potential to stir up an arms race between states. So they actually might make conflicts more likely just because they exist, because states want to have them and feel threatened by them.

The second aspect is proliferation. Autonomy is based on software, and software can be transferred easily; it's really hard to control. And most of the other components you will need are available on the civilian market, so you can build this stuff on your own if you're smart enough. So we might have more conflicts because of these types of weapons, and it might get more difficult to control the application of this technology.

And the third one, which is especially worrying for me, is the potential for escalation within a conflict, especially when both or more sides use these lethal autonomous weapons. You have these very complex adversarial systems, and it becomes very hard to predict how they are going to interact. They will increase the speed of the conflict, and the human might not even have a chance to process what's going on. That's really worrying, and we can see, for example, in high-frequency trading at the stock markets, where problems arise there and how difficult it is for humans to understand what's going on.
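This feedback dynamic between interacting automated systems can be sketched as a toy simulation. To be clear, this is purely illustrative and not from the talk: the 10% overreaction factor and the number of rounds are invented, and real systems are of course far more complex.

```python
# Toy sketch of two automated agents that each react to the other's
# last action with a slightly stronger response, the kind of feedback
# loop seen in flash crashes. All parameters are invented for illustration.

def escalate(initial=1.0, overreaction=1.1, steps=20):
    """Each side answers the other's last action, scaled by `overreaction`.

    Returns the history of (a, b) response intensities per round.
    """
    a, b = initial, initial
    history = []
    for _ in range(steps):
        a = b * overreaction  # agent A responds to B's last action
        b = a * overreaction  # agent B responds to A's response
        history.append((a, b))
    return history

hist = escalate()
# After 20 rounds, B's intensity is 1.1**40, roughly 45 times the start,
# even though each single reaction is only a "modest" 10% step up.
print(hist[-1])
```

The point of the sketch is that no single step looks like an escalation decision, yet the compound effect runs away within a few machine-speed iterations, faster than a human supervisor could intervene.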
So those are some of the security issues, and the last, and maybe most important, dimension is ethics. As I mentioned before, when you use autonomy in weapons, in machines, you have artificial intelligence, so you don't have a real intention, a real entity, behind this. The killing decision might at some point be based on statistical methods, with no one involved. That's worrying for a lot of reasons, but it could also constitute a violation of human dignity. You can argue that humans can be killed in war, but they at least have the right to be killed by another human, or at least by the decision of another human. But we can discuss this later. So at least in this regard it would be highly unethical.

And that really just scratches the surface of the problems and challenges that would arise from the use of lethal autonomous weapons. I haven't even touched on the problems with training data, with accountability, with verification and all that funny stuff, because I only have 20 minutes.

So, sounds pretty bad, doesn't it? How can this issue be addressed? Luckily, states have, thanks to a huge campaign by NGOs, noticed that there might be some problems and that there might be a necessity to address this issue. They are currently doing this in the UN Convention on Certain Conventional Weapons, CCW, where they discuss a potential ban on the development and use of lethal autonomous weapons: weapons that lack meaningful human control over the use of force.
There are several ideas around, and such a ban would really be the maximum goal of the NGOs there. But it becomes increasingly unlikely that this will happen. Most states do not agree with a complete ban; they want to regulate a bit here, a bit there, and they really can't find a common definition, as I mentioned before. Because if you use a broad definition, just as I did, you will notice that it covers existing systems that might not be that problematic, or that you just don't want to ban, and you might stop civilian or commercial developments, which you also don't want to do. So states are stuck in this regard, and they also really challenge the notion that we need preventive arms control here, that we need to act before these systems are deployed on the battlefield.

At the moment, this is the fourth year or so of these negotiations, and we will see how it goes this year. If states can't find common ground there, it becomes increasingly likely that the issue will move to another forum, just as with anti-personnel mines, for example, where the treaty was concluded outside of the United Nations. But the window of opportunity really is closing, and states and NGOs have to act and keep on track there.

Just as a side note, since probably quite a few of you are members of NGOs: if you look at the Campaign to Stop Killer Robots, the big campaign behind this process, there's only one German NGO in it, which is Facing Finance. So especially if you're a German NGO and are interested in AI, it might be worthwhile to look into the military dimension as well. We really need some expertise in that regard, especially on AI and these technologies.

Okay, so just in case you fell asleep during the last 15 minutes,
I want you to take away three key messages. First, please be aware of the trends and internal logic that lead to autonomy in weapons. Second, do not overestimate the abilities of autonomous machines, like intent and these things, and because you probably all knew this already, please tell other people about it and educate them about this type of technology. And third, don't underestimate the potential dangers for security and human dignity that come from this type of weapon.

I hope that I could interest you a bit more in this particular issue. If you want to learn more, you can find really interesting sources on the website of the CCW, at the Campaign to Stop Killer Robots, and from a research project that I happen to work in, the International Panel on the Regulation of Autonomous Weapons. We do have a few studies in that regard and we're going to publish a few more. So please check this out, and thank you for your attention.

Thank you. So we have some time for questions and answers. Okay, first of all, I have to apologize that we had a hiccup with the sign language interpretation. The acoustics here on the stage were so bad that the interpreter couldn't do her job. I'm terribly sorry about that; we fixed it during the talk, and my apologies for that. We are queuing at the microphones already, so we start with microphone number one. Your question, please.

Thanks for your talk. Don't you think there is also a possibility to reduce war crimes by taking the decision away from humans and having algorithms decide, which are actually auditable?

Yeah, that's actually something that is discussed in the international debate as well, that machines might be more ethical than humans could be. And, well, of course they won't just start raping women because they want to, but you can program them to do this.
So you just shift the problems, really. And also, maybe these machines don't get angry, but they don't show compassion either. So if you are there and you are a potential target, they just won't stop. They will just kill you and not think once about it. So you really have to look at both sides there, I guess.

Thanks. So we switch over to microphone three, please.

Thanks for the talk. Regarding autonomous cars, self-driving cars, there's a similar discussion going on regarding the ethics: how should a car react in the case of an accident, should it protect the people outside or the people inside, what are the laws? So there is another discussion there. Do you work with people in this area, is there any collaboration?

Maybe there's less collaboration than one might think. Of course we monitor this debate as well, and we think about possible applications of its outcomes for our work, for example from the German ethics commission on self-driving cars. But I'm a bit torn there, because when you talk about weapons, they are designed to kill people, and cars mostly are not. With this ethics committee you want to avoid killing people, or decide what happens when an accident does occur, so the two are a bit different. But of course you can learn a lot from both discussions, and we are aware of that.

Thanks. Then we're going to go over to the back, microphone number two, please.

Also from me, thanks again for this talk and for infusing all this professionalism into the debate, because some of the surroundings of, so to say, our scene like to protest against very specific things, like for example the Ramstein Air Base, and in my view that's a bit misguided if you just go out and protest in a populistic way without involving the points of expertise that you offer. So thanks again for that. And then my question: how would you propose that protests
progress and develop themselves to a higher level, to be on the one hand more effective and on the other hand more considerate of what is at stake on all the levels and on all sides involved?

Yeah, well, first, the Ramstein issue is actually a completely different topic. That's drone warfare, remotely piloted drones. There are a lot of problems with that, targeted killings among them, but it's not about lethal autonomous weapons in particular. Well, if you want to be part of this international debate, there's of course this Campaign to Stop Killer Robots, and they have a lot of really good people and a lot of resources, literature and things like that, to really educate yourself on what's going on. So that would be a starting point. And then, yeah, just keep talking to scientists about this and find out where we see the problems. I mean, it's always helpful for scientists to talk to people in the field, so to say. So yeah, keep talking.

Thanks for that. And the Signal Angel signals that we have something from the internet.

Question from IRC: aren't we already living in a killer robot world, where a botnet can attack a nuclear power plant, for example? What do you think?

I really didn't understand that. I didn't understand it either, so could you get closer to the microphone, please? Yes, aren't we already in a killer robot world? Sorry, that doesn't work, we can't hear it over here. Sorry, we have to stop that. Okay, we're going to switch over to microphone two then, please.

I have one little question. In your talk you were focusing on the ethical questions related to lethal weapons.
Are you aware of ongoing discussions regarding the ethical aspects of the design and implementation of less-than-lethal autonomous weapons for crowd control and similar purposes?

Yeah, actually, within the CCW every word of this term "lethal autonomous weapons systems" is disputed, including the lethal aspect. For the regulation it might be easier to focus on lethal systems for now, because less-than-lethal weapons come with their own problems, and with the question whether they are ethical and whether IHL applies to them. But I'm not really deep into this discussion, so I just have to leave it there.

Thanks, and back here to microphone one, please.

Hi, thank you for the talk very much. My question is, in the context of the decreasing cost of both the hardware and software over the next 20 to 40 years, and outside of a nation-state context, like private forces or non-state actors gaining use of these weapons: do things like the UN Convention or the Campaign to Stop Killer Robots apply? Are they considering private individuals trying to leverage these weapons against others?

I'm not sure what the campaign says about this, I'm not a member there. But the CCW mostly focuses on international humanitarian law, which is important, but I think it's not broad enough. So questions like proliferation and everything connected to your question probably won't be part of a regulation there. It's discussed on the edges of the debates and negotiations, but it doesn't seem to be a real issue there. Yeah, thanks.

Thanks, and over to microphone six, please.

Thank you. I have a question: as a researcher, do you know how far the development has gone already?
So how transparent, or intransparent, is your view into what is being developed and researched on the military side, by the people working with autonomous weapons and developing them?

Well, for me it's quite intransparent, because I only have access to publicly available sources, so I don't really know what's going on behind closed doors in the military or in industry. Of course, you can monitor the civilian applications and developments, which can tell you a lot about the state of the art. And, for example, DARPA, the American defense research agency, sometimes publishes calls for proposals, or whatever the term is, and there you can see which areas they're interested in. For example, they really like this idea of autonomous killer bugs that can act in swarms and monitor or even kill people, things like that. So yeah, we try to piece it together in our work.

We do have a little bit more time. Are you okay with answering more questions? Sure. Then we're going to switch over to microphone three, please.

Yes, hello. I think we are already living in a world of autonomous weapons systems, if you think about these millions of landmines which are operating out there. So the question is: shouldn't it be possible to ban these weapon systems the same way as landmines are already banned by several countries? Just include them in that definition, because the arguments should be very similar.

Yeah, it does come to mind, of course, because these mines are just lying around; no one's interacting when you step on them, and boom. But it depends, first of all, a bit on your definition of autonomy. Some say a system is autonomous when it acts in dynamic situations, and otherwise it would be merely automated, and things like that. And I think this autonomy aspect,
well, I really don't want to define autonomy here, but this acting in more dynamic spaces, and the aspect of machine learning and all these things, they are way more complex and they bring different problems than landmines do. Landmines are problematic, and anti-personnel mines are banned for good reasons, but they don't have the same problems, I think. So I don't think it would be sufficient to just put LAWS into that treaty as well.

Thank you very much. I can't see anyone else queuing up, so therefore, Anja, thank you very much. Thank you. My apologies that that didn't work.