Good morning, everyone. Is everyone settled in? So, good morning. My name is Nancy Scola. I am a senior technology reporter at Politico, a publication based in Washington, DC; you can see the logo behind me. It's my good fortune to moderate this morning's session on governing dual-use technology. Those are tools and innovations that were developed for consumer uses but could be repurposed for military applications, and that raises challenging questions about what sort of global rules and restrictions should be applied to their sale and trade. Those questions become even more complex, and in this panel even more timely, when we consider cutting-edge technologies like artificial intelligence, robotics, and quantum computing. So what we're aiming to do today, in the time we have, is figure out the best path forward for the world (no pressure) when it comes to making sense of these technologies and their risks and rewards. And we have a terrific panel with us this morning to do just that. We're keeping a seat warm for one of them, but he'll be arriving shortly.

We have David Shim. David is an associate professor in the Department of Aerospace Engineering at the Korea Advanced Institute of Science and Technology, better known as KAIST. We have Kori Schake. Kori is the Deputy Director-General of the International Institute for Strategic Studies. She previously held several high-level posts in Washington, DC, including on the White House's National Security Council, at the Pentagon, and at the State Department.

We're going to talk for a while, maybe 30 minutes, and then we'll open it up for questions from the audience for the last 25 minutes or so. So please be thinking about what you might want to ask our panelists, and with that, we'll get started.

Professor Shim, countries have for decades placed restrictions on the trade of so-called dual-use technologies, from lasers to computer hardware. Now, in this new age of artificial intelligence, we're seeing technologies that are remarkably flexible, scalable, and very often very cheap. How does that change the question of governing the sale and trade of dual-use technologies?

So, recently AI has become, you know, an everyday word for everybody. The power of AI is very remarkable. So far we thought that only human beings were capable of making complex decisions, but using AI we can delegate part of that decision-making to machines. We already use it every day. For example, Apple introduced Face ID on the iPhone, which is based on facial recognition. The same technology can be used for a military application: facial detection. I was at a demonstration of a military armored vehicle about ten years ago in Korea, and they were demonstrating that the armored vehicle is stopped by the operators, and the vehicle stops when...

Speak up? Yeah, I think people are straining. Okay. There's a little bit of an echo.

So, I saw that the vehicle stops when it detects people. But, and it's cruel to say, in a military application sometimes you have to go over them. So when it stopped, I thought, oh, this is wrong; we have to detect friend or enemy. And now we may be able to do that: we can use this kind of facial detection to identify whether someone is the enemy or a friend. So that's one example of using this technology just like that. But can you use it even beyond that? There was a controversial video Professor Russell made early last year, and in one scene...
...there's a drone that detects a face. I have to be honest and say that we have very similar technology, not for faces, but for obstacles. With AI, if you use a different data set, you can train for different targets. So what he showed in that video is very, very possible. Maybe not at that scale, maybe not at that accuracy, but we are getting there. The technology always finds its way to an application if it does the job, so it's very hard to stop it right now. And the hardware is getting cheaper these days. It used to be expensive, and running the big servers still is, but there are AI chips in the latest drones, mostly made in China, selling for like a hundred bucks, that already have some recognition capabilities. So these technologies are really getting out there, and once they are out, they are very hard to control. We are really seeing the proliferation of this kind of technology, so I think this is really the right time and place to talk about it.

So the same facial recognition that works when you look into your iPhone camera and it unlocks your phone, that can be used, you're saying, to detect an enemy combatant in the military field. Is there any limitation on the technology right now that would keep it from doing that sort of analysis in the field? Or could you really strap an iPhone to the front of a...

That's a fair question. In China, I was also told, they use facial ID for credit card transactions; that's all happening. I looked it up, and the iPhone's Face ID has an error rate of about one in a million, which is quite remarkable for any AI system; many typical papers report something like 97 percent, 90-some percent. So this is really possible. You know, we had a similar talk with Professor Russell. I go to the UN meetings at the UN Office at Geneva, the meetings on lethal autonomous weapons systems, and they say we need to have meaningful human control. That's a very important keyword there. But machines don't really care whether this is a credit card transaction or the firing of a missile. The same technology can be used for detecting aircraft; actually, I'm working on exactly that. As an aerospace researcher, I'm using AI to detect aircraft and avoid them. The same system can be rewritten, with a few small changes, for the opposite purpose: in the civil application, you detect the airplane and you avoid it; in the military application, you use the same airplane detection and you go straight at it. It's only a matter of writing a few lines of code differently, and the machine doesn't care whether it is saving people or killing people; it just executes the lines. A rough sketch of what I mean follows.
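To make that point concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the class, function names, and dummy values are illustrative, not from any real system); the point is only that the trained detector is identical in both uses, and just a few surrounding lines change.

# Hypothetical sketch: the trained detector is the same in both uses;
# only the few lines that act on its output differ.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    bearing_deg: float   # direction from our heading to the detected aircraft
    confidence: float    # detector confidence score

def detect_aircraft(frame) -> Optional[Detection]:
    # Stand-in for a trained vision model; the same network weights
    # would run here no matter how the result is used below.
    return Detection(bearing_deg=12.0, confidence=0.97)  # dummy output

def civil_autopilot(frame) -> float:
    det = detect_aircraft(frame)
    if det and det.confidence > 0.9:
        return det.bearing_deg + 180.0  # detect and AVOID: turn away
    return 0.0  # hold course

def military_autopilot(frame) -> float:
    det = detect_aircraft(frame)
    if det and det.confidence > 0.9:
        return det.bearing_deg  # same detection, now turn TOWARD the target
    return 0.0  # hold course

The detector is indifferent to which guidance function consumes its output, which is exactly the dual-use problem being described.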
So, Kori, you're a former policymaker who served at the highest levels of the US government. When you hear that iPhones have military capabilities, does that scare you?

You know, I think we have a tendency to look at the scary futures but ignore all of the positive applications that are already going on and improving people's lives. Just to take an example from my home state of California: this is wildfire season, and drones flying over wildfires help not only firefighters figure out where to target their resources, but also help people understand whether their house has burned. We think about the negative stuff; we think about the killer robots. We don't think about all of the ways this will reduce friendly-fire deaths in combat, for example, as the professor just mentioned. So yes, I think we do have a tendency to overweight the scary negative outcomes. The other thing is that, from a governance perspective, this ship has sailed. There is no effectual way to ban or limit these kinds of technologies. If you look at the spectrum of governance on weapons issues: on nuclear issues, we're pretty good as an international community at figuring out how to exercise control, because nuclear weapons require enormous amounts of resources and very few inputs, so you can limit the inputs, the large machinery; the technology has essentially not advanced since 1945. Even there, with all our international ability, we are still surprised by breakthroughs; they're few, but we're surprised. Biological innovation is much harder than nuclear, because a lot of it comes from the life sciences, the agents are self-propagating, and the knowledge base is constantly being updated. And if you go to information technologies, it's so low-cost, and the technology is so ubiquitous (everybody's got a phone), that the only effective way to control dangerous applications is actually the ethics and professional standards of the developers.

Okay. So the main treaty that governs dual-use technologies, called the Wassenaar Arrangement, goes back to the 1990s, right? We didn't have iPhones yet at that point. Does that need to be updated, or are you arguing that there's just no way any sort of treaty or global apparatus can cope with these technologies, so let's not bother?

Well, I always think you should bother, and I would love to hear from folks in the audience any governance suggestions you have for getting a handle on this, because my institute is running a project trying to look at what kind of arms control arrangements we would like to have in place before artificial intelligence completely dominates the IT field, and we struggle to figure out what they are. So I'd love to know from folks who are listening what they would like to see included. But I am pessimistic that there's any conceivable way to do it, because, again, the knowledge is ubiquitous, the machinery is ubiquitous, there's a low barrier to entry, and there's so much that is interesting and positive that can be done. For example, the biggest resistance to putting biological or IT arms control into place is from the scientists themselves, who see enormous possibilities for extending life, for treating Alzheimer's. The positive upside is so overwhelming that I'm skeptical there's any productive governance that can be put in place.

Okay. So our third panelist, I think, is going to be more skeptical about that argument you make, that we should be hands-off on some of the governing. So we'll save that.

Forgive me, I'm not saying we should be hands-off; I'm saying I don't see a practical way to do it. I'd love to see a practical way to do it. I would love, for example, for there to be some enforceability to the agreement that President Obama and President Xi made in 2015, that we would not target each other's critical infrastructure with IT and that we would not use state espionage for commercial purposes. But without any enforceability, the only way you will find out whether potential adversaries have complied is when you see it rolled out, or not, in a conflict.
So, David, Kori mentioned the idea of ethics among researchers working on some of these new technologies and innovations. There was an interesting situation recently where Google employees objected to the company developing technology for the US Department of Defense to analyze drone footage, basically, and the employees said, we don't want our work being put to these military uses. The company sort of resisted at first, and then said, okay, we won't renew it going forward. Is that a good outcome, or is that sort of an anti-progress outcome?

I think it's a good outcome that the engineers used their ethics and morals and conscience to resist. It's always hard to resist the boss's orders, so I think it is a very good thing that they did that. But that's only part of it. When you develop a technology, you develop it because it's good, and it takes a lot of knowledge and a lot of effort to make it happen, so at that stage a ban doesn't normally happen. I've been in drone research for the last 27 years; I started my work in 1991. Drones became a problem only recently, around 2013 or 2014, especially when DJI, you know, started selling this kind of amazing drone technology that everybody can use. That's the reason drones became so popular; you can buy one in any normal market these days for like a hundred bucks or something. But that's also when the problems started: people without a good understanding, without a very good moral standard, start using them. They strap bombs on them, they strap something weird on them, and just fly off and do something bad. Every technology has this kind of problem, and when people worry about AI, this is what they worry about. AI used to be a very expensive technology: you needed expensive servers to run this thing, and you needed a very high-level, PhD-level understanding to do it. But now you just download TensorFlow from Google, and even high school kids can run it. My son is actually a seventh grader, and he was running a bird classification algorithm just as an after-school activity; a sketch of how little that takes follows. This is where the problem starts, because everybody can use it. That's, and he's not here yet, hopefully he's coming, that's Professor Russell's point: once these things get out there, people start using them without thinking about cause and effect, without thinking about the consequences.
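To ground that accessibility point: the sketch below assumes a standard TensorFlow installation and an off-the-shelf pretrained network (it is not the bird classifier mentioned above, just a generic stand-in), and shows roughly how few lines it now takes to run image classification.

# A minimal image classifier using a pretrained network.
# Requires: pip install tensorflow (weights download on first run).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")  # pretrained on ImageNet

# Load any photo, e.g. "bird.jpg", resized to the network's input size.
img = tf.keras.utils.load_img("bird.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")  # e.g. "goldfinch: 0.87"

Nothing here requires servers or specialist knowledge, which is the professor's point about proliferation.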
And so should there be any... I'm sorry, go ahead.

No, I was just going to make a point about the Google case, which is an especially interesting one. Google engineers rebelled at the prospect of working on drone analysis for the Pentagon, and that causes one to worry that societies that have an active civil society, and pushback against government and military uses, will be asymmetrically disadvantaged in warfare. Except for the fact that this is likely just the tip of the iceberg. For example, I think it likely that there will be a backlash within Google against working on AI for the Chinese government if employees anticipate it will be used for censorship, or for social credit scoring; the same kinds of engineers who would object to assisting drone footage analysis are likely to object to those things as well.

So the argument would be that countries in which employees feel empowered to object to what their companies are working on, that might be a burden on the war-fighting capabilities of those countries?

It could be a disadvantage if you think it stops there, but it probably doesn't stop there. It probably means that the governance restrictions on uses of artificial intelligence are going to be scientists saying, whoa, wait, I don't want to be involved in censorship; whoa, wait, I don't want to be involved in penalizing people in ways that a government I don't approve of wants to.

Okay. So there's been a history of this debate over the United States in particular selling computing equipment to other countries. Ronald Reagan objected to the sale of computers to, I believe it was the Soviet Union, for census purposes; I can check my notes. I'm sorry, it was actually the sale of computers to China, for use in that country's census, and there were concerns they would be put to military uses. President Clinton wrestled with the same thing in the 1990s and allowed the sale of supercomputers, also to China, for weather prediction. Those computers, when you stack them up against the new technology we're talking about now, quantum computing... How many of the folks here know what quantum computing is? It's sort of... Actually, we have a computer scientist here. Can you define quantum computing for us?

Well, I'm not a computer scientist, but I'm interested in a lot of things. Quantum computing, I think, uses quantum effects to compute, and those effects are also used for encryption, where they are really great. A quantum state is very unique: if you start measuring it, it changes. That's the famous Schrödinger's cat analogy; if you poke it, there will be big changes. So this is perfect for encryption. Encryption is one of the most tightly protected technologies, and one with clear military application. If you watch the movie about Alan Turing: he contributed a lot to deciphering the Enigma, right? So the field has that heritage. Quantum encryption is very much a military matter; it has huge military potential.

So with quantum computing, the theory is that you can run processes simultaneously, which, in a layperson's terms, basically makes it ultra fast, and creates ultra secure encryption?

Yes. If you start to break it, it's already changing, because you are doing a measurement. That's one of the quantum theories. But it can also be used for decrypting.

Oh, of course. Of course.
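For readers who want the property being described stated precisely, here is the textbook formulation in standard quantum notation (general background, not specific to any panelist's work). A qubit can sit in a superposition of the two classical states:

\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]

Measuring it forces a definite outcome and destroys the superposition:

\[
\Pr(0) = |\alpha|^2, \qquad \Pr(1) = |\beta|^2 ,
\]

after which the state is simply |0⟩ or |1⟩ and the original amplitudes are unrecoverable. This is why an eavesdropper who reads qubits in transit necessarily disturbs them; that detectability underlies quantum key distribution protocols such as BB84.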
So if a future president of the United States is thinking about selling a quantum computer to a country like China, where past presidents balked, should future presidents not balk?

It's an interesting and important question, but there are two reasons to think it's not going to matter. The first is that it used to be the case that military investments created these enormous leaps forward in technology; now private industry is driving that. So governments are going to struggle to keep up with industry innovation, even to understand what it is they would be able to restrict. That's increasingly a problem in the United States, and I'm sure for all the other countries. The second piece of it is that it's very hard to predict what's actually going to turn out to be useful. For example, one of the celebrated cases of the failure of export controls was in the 1950s, when the United States allowed the Soviet Union to buy ball bearings, and that dramatically increased the precision of Soviet missiles. Nobody anticipated that that would be the outcome, and I think those kinds of problems are much more likely in an area as fast-moving as the biological sciences, or as fast-moving as IT.

Did you have any...?

Well, it is true that the US leads the whole AI industry in a number of areas, and encryption has a huge impact on military applications. In the future, and it's already happening, we would like to use remote armies, right? Remotely operated vehicles. Even in civil applications we have big issues around civil UAVs: we are talking about huge airplanes flying autonomously, and they need this kind of communication, so we are very much worried about the security of that remote channel. It will be even more so in military applications. Suppose you want to have an army of robots. It doesn't have to be the Terminator kind of thing; it can be unmanned ships, unmanned aircraft, unmanned armored vehicles under remote control. That link has to be secure, and quantum encryption is perfect for that application: it's very hard to break, and it's already being used. It's not a future use; it's being used now. And I think China is very much a leading country in this kind of computing and information technology, so the US may be able to do its blocking of exports, but sooner or later others will catch up.

So when you hear countries talking about placing some of these limits, these export controls, is that in some cases just trade protectionism by a fancy new name?

Yes, in lots of cases it is. Just as you see with Interpol warrants, where governments use them to target political enemies, gaming an international system in which everyone has agreed to help each other police bad behavior, you see the same thing with a lot of these kinds of trade controls.

Okay. So Dr. Russell, in 2016, championed an open letter calling for a treaty banning lethal autonomous weapons. He said starting a military AI arms race is a bad idea. He's a very well-respected researcher and name in his field. It's unfair to ask about this without him here, but why is that misguided? Why should we not worry about the arms race?
I don't think it's misguided, and I do think we should worry about it; I struggle to see how, practically, to do anything about it. There are lots of innovations that turn out to have very effective military applications, and it's easy to see how AI will, right? Because if you think about giving one military the ability to make a million decisions in a second while everybody else is struggling at the human rate of decisions per second, which is probably somewhere around one, that's an enormous advantage, and there's no practical way to prevent it. So yes, it's a future I would like not to see, but I don't see a practical way to avoid it. The only practical way I can think of would be to prohibit artificial intelligence in its entirety, and that means you would rule out so many enormous advantages for the upsides of human life: drug innovation, the ability to make cars safer on the road, all those sorts of things we're already benefiting from. So I think that ship has sailed. I don't think you can practically do it. I agree it would be a nice thing to do if you could.

So, I actually went to UC Berkeley for my PhD, and I worked with Professor Russell at the time, when I was a student. I really look up to him; he's a great, great mind, and he's very active and very brave to do all these things. For we professors, it's very hard to raise a voice like that, so I really admire what he's doing. What he's worrying about is proliferation: anybody can use this without knowing the consequences, because it's so easy to use. You know, earlier this year, and maybe it's good that I bring it up, KAIST got some very unsolicited fame in the AI field, because there was some misguided publicity saying that some KAIST researchers were working on AI for defense. That was partially misinformed, and it became a big issue around the world; many leading AI researchers boycotted collaboration with KAIST. What happened was that there is a military company in Korea, and, as you know, Korea is divided, so military development is a huge issue. Scientists and researchers like me sometimes even take pride in working on that kind of area, to aid our national defense. And there was some line saying that KAIST was developing AI-powered weapons, or something like that, written by people who didn't really know what was being done or what they were talking about.

So, at the UN meetings, the annual meetings, there was a recent one in August, and I was there last year and this year. There are three voices there. One says: let's just abandon this right now. Those are typically countries that have no AI activities. At the other extreme are countries saying: let's develop it, nicely and well, with the proper precautions and proper procedures, and you can make a nicely working AI system. And some in the middle are saying: this is a great technology, so let's be very careful, but let's not talk about not working on it; let's work on it, but carefully. So there are three voices, and they all say that we have to assume there is meaningful human control. Meaningful human control means the AI should not be left alone to make the critical decision, especially the kill-or-not-kill decision. That is the key issue for them.
But, you know, in Korea it's mandatory to serve in the military. I served my one month as a special researcher under a program. And if you really go there, soldiers are not really meaningfully making decisions. There's a famous saying that in the Vietnam War, or even the Korean War, soldiers were shooting their rifles like this, just spraying bullets in random directions; that's not very meaningful to me either. So we are talking about a conundrum here; this is kind of an endless loop. The reason they are trying to bring AI into defense, as you mentioned a few times, is that it is statistically better. In 2012, a famous professor at Stanford, Fei-Fei Li, who is Chinese, demonstrated that the computer can do better image classification. A lot of a military soldier's job is to identify friend or foe and shoot the missile or not. So if the machine makes a better decision, statistically, and faster, why don't you delegate it? Especially when you are sitting at a computer screen trying to control a drone halfway around the earth, the communication is hazy, and you're trying to see whether this is a human or not over limited bandwidth. Or are you going to let the computer on board, with clear 4K ultra-high-definition vision running in real time, make the decision? We already know the machine is more accurate than humans at image classification. This is where the problem starts.

I'm sorry, but the parallel is to self-driving cars.

Yeah, exactly.

Everybody's afraid that they're going to be a huge danger. In fact, because they don't get tired, because they don't get distracted, they are likely to be much safer than human decision-making in driving cars. And so we are letting our fears of an outsized negative outcome that's extremely unlikely outweigh the enormous advantage of fewer highway deaths, in the United States for example.

I'd like to add on to that. Autonomous driving, a civil application, is also a life-or-death situation. If you turn the wheel left, people die; turn right, something else crashes. So if we are willing to delegate driving, why not military decisions? This is one extension of the argument. Not that I'm supporting it, but it is one very logical question we have to ask.

So for the countries that are at the forefront of developing AI technologies, China and the US, as people generally argue: you would both say the cat's out of the bag, the ship has sailed, whatever other metaphor I want to strangle. There shouldn't be limitations placed on the export of those technologies just because they might be repurposed for military uses. If that's a fair read of your position, I'm going to put it to the test right now in the room and see how many folks agree. Do people understand the premise? They're arguing that there should be no limitations placed on the export of artificial intelligence.

Not that there should be no limits, but that there are no practical limits to place.

Okay, so this is going to be a quick poll of the room. Do you think we should attempt to place restrictions on the export of artificial intelligence, out of fear it could be used for military purposes? So, yes to that... and the rest are no. So, no limits on the export of artificial intelligence. Interesting.
Let's open it up for questions at this point. We have a microphone, I believe, right behind you. Oh, I think we just have to wait for the mic. Yes, back here. And if you want to introduce yourself, and, if it's relevant, your affiliation.

Okay, thank you. My name is Joanna Bryson, and I'm a professor of artificial intelligence, and I do a lot of work in AI ethics. I really appreciate the discussion; it's really great. It's a shame he isn't here too, but I think you have done a great job. I want to slightly disagree, though, with one of the things you were saying about the impracticality of global regulation. You were saying that the problem with global regulation is that it restricts what we can do, but regulation is not only about restriction; often we regulate upward, we actually fund some things. And in particular, in artificial intelligence, I don't think the goal is restricting which algorithms we can release, although of course there has been controversy, for example, over which encryption algorithms can be released. I think what's more important is that we need to be regulating accountability, and we need to come to globally negotiated answers about how we maintain human and corporate accountability through software. Right now there are a lot of smokescreens around AI. If you think of AI as a person, like a legal person, then you can use it as a sink where there is no ethics and there is no accountability, because a machine will never care if it's in jail. But if we say, no, we don't accept that, it is always some human who is accountable, then actually AI can increase accountability, and it can be beneficial even on the battlefield, and certainly in driving, as you were saying. So I think the one thread that's missing is: what are we looking for from regulation? I think it should be how we handle accountability.

I love that suggestion. Let me ask you to take it one step further and tell me how to do it. How do, for example, the American and Chinese governments make each other confident that there's accountability in their AI development?

Okay, so there's a huge issue there, which is partly hacking: you cannot talk about any technology, including AI, without cybersecurity. How can we really demonstrate that we have done the sorts of things that we say we've done? It will be probabilistic once cybersecurity is involved. Having said that, I'm sure there are better people to talk about cybersecurity than me in the room; what I'm really good at is AI. What we need to do is set standards, and we can do this within our own nations, for liability: standards saying you need to be able to prove due diligence. You need to be able to show what software libraries you linked against, what data you used for machine learning, and what procedure you used to build your system. People have been saying, oh, we don't know the weights in deep learning, so how can we hold it accountable? We don't know the synapses in an accountant's head either, but we don't look at those; we look at the accounts. So basically, if we just demystify AI, get a little more technological knowledge, and realize that we can log the process by which we develop the software, then we can hold people accountable for following good and safe procedures, which they have not been doing to date. And you can read, say, Frank Pasquale's The Black Box Society to see the incredible mess being made with the data we're using for training AI.
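To make that concrete, here is a minimal sketch, under assumptions of my own rather than anything the speaker specified, of the kind of due-diligence record she describes: a manifest that logs the libraries, data, and procedure behind a trained model. The file names and fields are hypothetical.

# A hypothetical due-diligence manifest for a trained model: record
# which libraries, data, and training procedure produced it, so a
# human can later be held accountable for the process.
import hashlib, json, platform, sys
from datetime import datetime, timezone
from importlib import metadata

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(dataset_path: str, training_script: str,
                   libraries: list[str]) -> dict:
    return {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "python": sys.version,
        "platform": platform.platform(),
        # Pin the exact library versions linked into the build.
        "libraries": {name: metadata.version(name) for name in libraries},
        # Fingerprint the training data and the procedure that used it.
        "dataset_sha256": sha256_of(dataset_path),
        "training_script_sha256": sha256_of(training_script),
        "responsible_engineer": "to-be-signed",  # the accountability hook
    }

if __name__ == "__main__":
    # Hypothetical paths; in practice these would be the real artifacts.
    manifest = build_manifest("train_data.bin", "train.py", ["numpy"])
    print(json.dumps(manifest, indent=2))

Auditors could then ask for the manifest rather than for the network's weights, which is the "look at the accounts, not the synapses" move.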
Very interesting suggestion. Thank you. So let's just follow up: does anyone in the room have a suggestion for Kori and for David about a mechanism, if this is something we think is a good ambition, for putting some sort of check on the system, as we just discussed? What's the mechanism for that?

I was actually thinking: we're talking about technology being neutral, right? It's really about the intent of the people we're trying to regulate, which I totally agree with, as the last speaker was saying. What I was thinking about is whether there is something we could learn from gun control in the US; there's a parallel there. Could something like an ID let you track these systems, together with some licensing requirement, to track those AIs which have a lethal impact? That might be something to think about.

Thank you, that's a good suggestion. David, is that a practical way to approach it?

Oh, there are procedures that have to be executed to develop certain weapon systems, under what they call the CCW, the Convention on Certain Conventional Weapons. Nuclear weapons sit at one level of that kind of regime, and now AI is joining that level of threat, or at least some people perceive it as a threat, so we would have to execute that kind of procedure to make it work. And I think that's a very good point, that technology is always neutral. It's only we humans who put it into a given situation, a given application. We use GPS every day, and GPS is definitely a military system that turned civil; we cannot live without it, but at the same time it has a very definite military heritage. But GPS only tells you where you are; it's not going to kill somebody. What really worries people about AI is that it is making the decision. That's the part people are very concerned about.

Right over here. And if you can introduce yourself.

Yes, hi, I'm Anusha. I'm a Global Shaper from Lahore, Pakistan, and I am a human rights and technology lawyer. There have been suggestions that, for example, if somebody is employing robots or technology, that robot should pay taxes as well. So how far do you think an economic argument of that sort...
...justifies the use of technology in that way? And I think it was Bill Gates who suggested taxing robots.

Your response on that?

Okay, well, I think AI can create a lot of value; it can do a lot of work, and it does a really great job. So when they talk about having some tax on it, I think it makes some sense. I think we have to discuss how we're really going to do it.

It's sort of one step toward what was discussed earlier, thinking of AI systems as human in some way, so that they have some level of accountability: accountability in warfare, and accountability when tax day arrives.

Speaking of accountability, we had this kind of discussion in civil aviation. They have discussed having autonomous aircraft; we may be able to see autonomous air taxis. China is a leading country on that. There is one company, Ehang, that flew one with a human passenger, a pilot, and in the future they expect these to be autonomous. And ICAO, the International Civil Aviation Organization, has banned autonomy for the time being in civil aviation. The reason why is that there is no accountability. Someone asked a very nice question: human pilots can be even more unpredictable, so why do we allow humans to fly the plane when AI is not allowed to?

Right, it's just the burden of proof.

Yeah. We are at this level of the discussion. We never thought AI could fly anything, but now we think it can fly, and it really is true; I'm an engineer in this area, and trust me, you can make it fly. And the answer I came to is this: we can ask a pilot, hey, why were you flying that way? Even if he flew in a very bad way, as in the accidents of the last few years. An AI doesn't answer why it, or he, or she, whatever pronoun you prefer, is doing what it's doing; but we can ask humans. Maybe he's lying, but the human is answering. So this is a basis for it.

I think a lot of the fear about autonomous aircraft and the like comes from the likelihood of hacking: someone can interfere, and you don't have a human to say, whoa, wait, this doesn't look like it's supposed to look, and override. So a human in the loop, or a human on the loop who is constantly able to reassess, or autonomous feedback loops that send alerts.

Hacking is really a big issue. That's the reason we talked about encryption technology at the beginning of this session; it is a really big issue. In the current setup of civil aviation, we say that the airplane has to be remotely piloted, to allow meaningful human control, but this opens a vulnerability in the system. Hacking the onboard software is very hard. But by letting humans remotely control the aircraft, we are actually opening a bigger door for problems, because anyone can send a radio signal that can be interpreted as a legitimate control signal, and we could have, God forbid, a 9/11 on a global scale without any terrorists going onto the airplanes.

It goes to this point about accountability, though, right? If there's a human in the loop, you can put that person on trial for their judgment. And so groping our way toward systems of accountability in autonomy, I think, is a really important question. I want to take us back to one other element of her question, though, which is this fear of massive job loss associated with artificial intelligence and with robotics.
It's certainly going to be true that most of my job can be done by a machine, and probably better than I can do it, and that's also true for lots of other people's jobs. But the same thing was true during the Industrial Revolution. What technological pessimists underestimate is that a whole new economic ecosystem is going to grow up around robotics, and we're going to find new, constructive things to do, things we haven't imagined yet that need doing, that people will move in and do. So we shouldn't believe that just because robots are going to take our jobs, those are the only jobs that will ever exist, or the only jobs people want to do. Compared to the fifth century of the Schake tribe's history, when I would have been scything wheat, which is hard work, or would have been a stevedore, a longshoreman: I like my job way better than all of those jobs that technology replaced previously. And I think we should at least imagine that the likelihood is the same in this next revolution.

Okay, let's take a question from this side. Do we have a question on this side? No? We had one right there. Okay, right here.

So, I guess what you said about the practical barriers to regulating these technologies can be applied to a lot of other things as well. And in science we have the precautionary principle. So shouldn't the onus and the burden be on those using the technology to prove that it is in the public interest, versus the approach you suggested, that because there's so much opportunity for positive use, for positive impact, we shouldn't restrict it? Shouldn't it be the other way around?

That's a fabulous question, and American law, since the Patriot Act in 2001 (I'm not quite sure of the date when this came into effect), is structured that way for scientific research that has military and deadly applications. But it's very hard to determine what that is. Even in the case of biological agents that could cause pandemics, it's very difficult to make a real-time judgment. It's not that hard to make a forensic judgment after the fact, but it is in real time, as the nature of science progresses. And the biggest impediment to those kinds of restrictions has been scientists themselves, who also see the upside of these many things. So with the precautionary principle, if it prevents a cure for Parkinson's disease, for example, there's not just the negative burden of proof;
there's the positive burden of proof as well. So yes, you're right, and American law is structured that way, where dangers can be flagged; but as a practical matter it's extraordinarily difficult to judge that in advance, other than through the judgment of the scientist herself as she's figuring out what she's doing.

I'm just going to use the moderator's prerogative to take a quick question: are the ethics different between being a military pilot who might launch a strike directly from a plane and being someone in a military capacity approving a strike where a drone, an AI-powered drone, has identified a target and it becomes a yes-or-no approval process? Are the ethics of those two jobs different?

Well, the burden of responsibility ultimately rests with the person making that decision. For example, in a celebrated war crimes case in the United States, a commander was woken up in the middle of the night, given the information that was known about a particular party, and made his decision in about four minutes. When the war crimes investigating authorities asked him how long it took him to make that decision, he said it took him the 42 years of his military experience. So it's harder than it looks as a practical matter, but yes, in the American military there is always a human who has to take responsibility, and an investigation into the judgment they exercised can easily be triggered.

I'd like to add: there was an interesting comment from a US lawyer last year at the UN meeting. It's his idea, but I want to pass on the comment. He said AI can actually sometimes be a better decision-maker, because it does not put itself in as a variable when computing good or bad. Humans sometimes make a bad decision because they themselves are involved; they make a selfish decision.

Absolutely right. The other side, though, is that the challenge is magnitude. A commander gets woken up in the middle of the night, has four minutes to make a decision, and makes it; you're talking about 20 people. With AI, if you're talking about a million decisions a second, think of the scale of being wrong, and of tracing it back to whoever programmed that algorithm. How do we put her on trial for a million decisions? What if it's a group of people who programmed that algorithm? It's a very hard problem. I agree it should be done; I'd love to see it done; I struggle with how, practically, to do it.

One example we can think about is the autonomous car. They say that when an autonomous car hits people, and eventually people will die, the car manufacturer, or, to be exact, whoever wrote the algorithm, will be held responsible. That's what people think these days. A similar thing could happen for the military.

But the issue of risk tolerance also comes in, right? The Geneva Conventions, for example, permit civilian casualties as long as they occur in the context of achieving a legitimate military objective and are proportional to the objective achieved. That's a wider risk tolerance than we're likely to accept in most civilian applications, and so the context really matters here, which is what makes it so difficult to think of global solutions to these problems.

Then we have another question, in the back row there.

Hello, I'm Risalat.
I'm a Global Shaper, and I'm from Bangladesh. In the context of global solutions, I just wanted to bring up another dimension and get your thoughts on it, which is the power of norms, and of having that discussion together. When we know that the stakes are so high, I see a real lack of that conversation happening at a global level, and of the kind of deep forms of multilateralism we need to move toward in order to address these challenges. So I just wanted to hear how you see that conversation developing, what opportunities there are to have it on a global scale, and, actually, where do you think it should happen?

I think at the level of the UN, because member states are there, and I think all member states see that these are really great challenges, and there are many, right? So what are the norms that member states can agree on, in the collective global interest, in light of those challenges?

I agree that norms are really important, and perhaps the most important restraint in a web of regulation, law, and international practice that can develop. And I think we're only at the very start of it. The Google case that Nancy mentioned, I think, is really important, because of peer pressure among scientists and engineers: what are we willing to do and what are we not, whom will we do it for and whom will we not? The problem with AI parallels, to some extent, the problem with the developers of nuclear weapons. Most of the people who went to work on the Manhattan Project to develop nuclear weapons were world-class scientists; many of them were also refugees to the United States, working on a project they otherwise might not have, because they were worried that Nazi Germany would develop nuclear weapons before the United States, Britain, and Canada did. And yet, when Nazi Germany surrendered, the Manhattan Project continued; only one person resigned from it, because by that time people were so excited by the science and were loving the community of interaction they had together. So even in that case, where you had very strong norms, most of those scientists were shocked to see the consequences, the uses their research was put to. So I think that's one of the challenges of norms: it's very hard to imagine the eventual uses of science and technology. But there are norms that scientists and engineers can develop that bound the range. We have not seen that be effective so far. The place where I am most nervous about norms not emerging is in biological developments, not in IT developments, because there we've had several celebrated cases of Science magazine publishing articles that, for example, the American government believed created wider accessibility to very dangerous knowledge; that is, proliferation. So it's very hard to do, but I agree with you that we're at the start of a conversation about norms, and about personal accountability for what gets developed.

He's absolutely right that the UN is having meetings right now, three times a year, and they are trying to come up with the norms. And I have certainly observed that it's hard to come up with them.
I mentioned there are three different approaches, or opinions, about AI. They are getting there, but even in human ethics, across different parts of the world, we have slightly different ethics. Basically, thou shalt not kill applies almost everywhere, but there is certain variation about whom you may kill, on what occasion, what is okay and what is not okay. This is a very common theme for science fiction, but an AI would be confused.

There's also a lot of concern from countries outside the leading edge of AI development. If you're not China or the US, and this came out in previous rounds of the UN discussions, many countries worry that we are going to figure out how to do this and then prevent everybody else from doing it: as Nancy suggested, locking in the advantages of the first mover, so that all of the positive elements accrue to the first movers.

Okay, we have time for one more quick question, I think. Did you have a question? Who do we have? Oh, back here. And we do need to keep it a bit concise.

Yes, I absolutely worry about it, and I worry about it in two ways. The first way is that in free societies, our vibrant public debate about these issues has the potential to put us at an asymmetric disadvantage, because our engineers will refuse to do things for our own government, as the Google case shows, and they are not holding the Chinese government to the same standard they are holding their own government to. So there's the potential for asymmetric disadvantage as this becomes nationalistic. The second thing is that the spiraling up of trade restrictions, the spiraling up of concern about China as a military challenger to the existing rules-based order, means that governments are going to start screening scientists out of projects that are federally funded. They are going to stop funding foreign research. They are going to give preferential research treatment to like-minded countries, which the countries of the Australia Group already do. So yes, it is problematic. I also think it is largely inevitable, because you cannot insulate something this potentially advantageous as a weapon from being seen in nationalistic terms.

One quick lightning question, and then we need to wrap up. We've talked about artificial intelligence, robotics, and quantum computing as the cutting-edge, or semi-cutting-edge, technologies that will have military applications. What are we going to be talking about at this forum five years from now as what someone like me might see as the scary cutting edge of militarily applicable technologies? What should I worry about next?

First of all, five years is really a long term for AI. You know, I was amazed: in typical academia, it takes a year or more to circulate an idea, to write a paper, have it accepted or not, and then another year passes. Now there's something called arXiv, you know, with an X there, where they exchange ideas with a time constant, as we'd call it, of one week. It's one week, and they upload the whole code; anybody can download it and branch from it.
They comment on it, too. The cycle is extremely fast, and it typically takes, they say, six months or shorter to completely recycle an idea. So five years is a lot, and one axiom I've observed in AI is: don't make predictions, because after five years you will be a laughingstock. But one thing is for sure: it's going to move tremendously fast, and we will see AI used almost everywhere, and the military will not be an exception.

Okay. Worry about the life sciences. I think the big surprises are going to come out of biology and the life sciences, not out of IT.

A whole new field for me to be terrified about. Excellent. Thank you all so much for coming. I really appreciate it. I think we're going to be having this conversation for many years to come, about the military applications of technologies and how much more complex they're getting. So we'll reconvene here in a couple of years, in person again. Thank you all so much for coming. Appreciate it.