Hi everybody. I'm Edward Felsenthal. I'm the editor-in-chief of Time Magazine. I want to first of all welcome you and congratulate those of you in the audience for making it to Friday of what I know has been an incredibly busy, fast-moving week. We have a terrific panel of experts today to talk about an urgent topic, which is cyber war and how we deter it and how we deal with it. I want to quickly introduce them. You can find fully detailed bios in your programs and on Wikipedia, but let me give you a quick sense of who we have today: Brad Smith, who's the president and chief legal officer of Microsoft; Ashton Carter, former secretary of defense of the United States, who currently runs the Belfer Center at Harvard's Kennedy School; and Professor Jean Yang from Carnegie Mellon, a computer scientist who studies these issues. We have lots of great subjects to dive into. I want to first, though, kick off with a one-minute video from my colleagues at Time, just to set the stage on the issue we're dealing with. The modern laws of war are among the most significant legacies of the 20th century. The Geneva Conventions, the Strategic Arms Limitation Treaty, bans on chemical and biological weapons. By no means did they succeed in taming the savagery of human conflict. The people, the families we found lying all around had not been injured. They'd been poisoned by chemical bombs and shells. But they did create ways to limit the suffering that wars inflict and to punish those who violate the laws. But the newest form of conflict is subject to no such rules. Cyber war is growing more sophisticated and more destructive in the 21st century, but it is waged on lawless ground, covertly, cheaply and with no system of accountability. The potential for damage is phenomenal. You could argue in the last few years NATO in some ways has been sleeping on the job. Early efforts to rein in cyber warfare have been hesitant at best.
Proposed limitations have been vague and very difficult if not impossible to enforce. But the urgency is clear. This has the whiff of August 1945. Somebody just used a new weapon and this weapon will not be put back into the box. Let me start with a question for the audience. Just a show of hands. How many in this room have been hacked in any way before? About half. How many of you know who did it? Three. Secretary Carter, let me start with you. Whose responsibility is it in the government, or maybe it's not the government, to figure out who did it and respond? What agency, what group? Well, in terms of responding to an attack, let me first say that my way of thinking about this is a way of simplifying things in people's minds. An attack is an attack. There isn't a "cyber attack." An attack upon our country is an attack. That's my view. That was my view. And so it draws in all of the tools that we have to prevent and respond to attacks, and we don't have to respond necessarily in a cyber way. But you're asking how do we even know what happened? That's where others come in. That's where the intelligence community comes in. If North Korea launches a ballistic missile, that's a very conspicuous thing. Nothing else flies like it. You can see it plain as day. You can see where it came from. You can see where it's going, and there's no ambiguity about what's going on. In something like WannaCry, it took a little bit of time, and that was not a Defense Department responsibility. That was an intelligence community responsibility. But if you're really asking who's responsible overall, I'm sorry to say we all carry this stuff around. And if we don't have good cyber defenses, which we generally speaking don't, and worse than that have poor hygiene, some of these attacks just go after the people who haven't patched or haven't updated. And there are plenty of them, so they get somebody when they throw something out there.
But good defenses and good hygiene are a responsibility of the private sector as well as the public sector. So it's just as pervasive as everything else we have. But if it's an attack, certainly I as Secretary of Defense felt that an attack was an attack and it was part of my responsibility to respond. Well, you've perfectly set up the transition to Brad Smith. What is the role of a Microsoft, of a private company? We talked some before we all came in here, this group, about the degree to which a cyber attack is similar to or different from other, more conventional forms of war. But it is unusual in that there is a very public and private nature to this kind of attack. I agree. I mean, I think first of all, as Ash said, we all have a shared responsibility, but I would say in that context, companies in the tech sector, including Microsoft, have the first responsibility. We have the first responsibility to strengthen defenses, to build more secure software. And then we are the first responders. What is interesting about these attacks is, if you think about the development of crime or warfare, it probably took place first on land and then on the sea, eventually in the air. Now they take place in cyberspace. But when there's an attack in cyberspace, fundamentally, it is often against privately owned property. It might be the property of you in the audience or online, because it might be your phone. It might be your laptop. But it is typically launched on our cables, against our data centers, focused on some vulnerability that someone has identified in our software. So we have this unique role to play in that regard. And that has, I think, forced us to recognize the high responsibility we have as first responders. Now, in addition to that, I do absolutely think there's this shared responsibility with every consumer and customer, both to maintain good practices and hygiene.
And then, of course, if we're trying to determine who was responsible for an attack, we actually need to work with the customer who was attacked, whether it was Sony three-plus years ago or the victims last year, because we need to work with them and frankly look at their devices sometimes and see what fingerprints were left. And then ultimately, I think governments have a fundamental responsibility. Your video captured it well. The most sophisticated attacks in the world today, unfortunately, or at least in 2017, came from governments. And there is hence this question: how will governments respond, individually and collectively? Professor Yang, whose responsibility is it? Microsoft's, the government's, or the consumer's? Well, I think there's responsibility on everybody. I think that in order for software to be secure, consumers have to demand it. They have to practice good hygiene. There needs to be some awareness of what it means to have software that is not vulnerable, or less vulnerable than it could be, so that everyone can practice that and demand that of their hospitals, of their voting places. Something that really shocked me in the last couple of years was that what WannaCry and the last US election had in common was the operating system. The cause, one might say, of these vulnerabilities is that these systems were running Windows XP. And I don't know if you remember Windows XP; it's very old. In fact, Microsoft hasn't issued a patch for Windows XP since 2014. This means that these systems have known vulnerabilities and they're just out there. And attackers know exactly what to look for if they want to break into these systems. And so whose responsibility is that? Well, partly it's the fault of the hospitals and of the voting places running Windows XP. And in some sense, governments are responsible for ensuring that people are practicing good hygiene when it comes to this.
But for consumers and for citizens, we also have the responsibility of being aware that this is not a good practice and making sure that we're staying protected from these kinds of practices. You've made the point, though, that for so many consumers, and consumers are where an attack could begin, technology is still kind of a black box. There's a mystery to it. You talk about hygiene. I think many consumers don't know what to do for regular hygiene. How do we fix that? Right. So I think there are two parts to it. One is for people like us who are technologists to help raise awareness and educate the public about it. I think the other side is for consumers to stop seeing software as a black box. So something I run across a lot as a computer scientist when I talk to people: I in particular work on programming languages, which is all about helping software developers write the programs they intend to write. Normally, the way people make sure software doesn't have vulnerabilities is they test it. But testing is not perfect. Every time there's a vulnerability, that means there was a missing test. There was some special case, and that special case became a vulnerability. But the work in my field actually shows that you can mathematically prove the absence of whole classes of vulnerabilities. And there are a few implications of this. One is that there are vulnerabilities that we never detected that we can prove the absence of. The other is that software is not a black box. It's something that we can formally model. We can prove properties about it, and we can say, you know, software should have these features, it should not have those bugs. And so there's no reason that software should be treated as magic or as a black box. And I think for us, raising more awareness about that, and for people who are not technologists, understanding that, helps us demand more.
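To make the testing-versus-proving distinction concrete, here is a toy sketch of our own (not an example given on the panel; `BUF` and `safe_read` are hypothetical names). A test suite only samples inputs, so a vulnerability is precisely an input nobody tested. Over a finite input space, checking every input is the simplest possible form of "proving the absence" of a bug class:

```python
# A buffer with a guarded read. The guard has a classic bug: it
# rejects indices that are too large but forgets negative ones.
BUF = [10, 20, 30, 40]

def safe_read(i: int) -> int:
    """Intended contract: raise IndexError on any out-of-bounds index."""
    if i >= len(BUF):          # bug: should also reject i < 0
        raise IndexError("out of bounds")
    return BUF[i]              # Python's negative indexing silently wraps

# A typical test suite. All of these pass, so testing says "no bug":
assert safe_read(0) == 10
assert safe_read(3) == 40
try:
    safe_read(4)
    assert False, "expected IndexError"
except IndexError:
    pass

# Exhaustively checking the whole (finite) input range finds what the
# tests missed: negative indices return data instead of raising.
leaks = []
for i in range(-8, 8):
    try:
        safe_read(i)
        if not (0 <= i < len(BUF)):
            leaks.append(i)    # contract violated, no exception raised
    except IndexError:
        pass

print(leaks)   # -> [-4, -3, -2, -1]
```

Real verification tools generalize this idea to infinite input spaces with mathematical proof rather than enumeration, but the principle is the same: reason about all inputs, not a sampled few.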
Secretary Carter, to help set the stage, we've mentioned WannaCry a couple of times. For those who aren't steeped in this, there were two major state cyber attacks last year, WannaCry one of them. Could you walk us through those and their sophistication? Sure, absolutely. But just to be clear, we're now narrowing the scope, which is fine. But, you know, these vulnerabilities attract all kinds of attackers. There are criminals, there are vandals, there are explorers. So there's a whole class of people that are not states. However, there's no question about it, there have been states that, in my personal judgment, have conducted what you'd have to call an attack. And this is a place where your public institutions and public policy come in; they are supposed to be about doing the things that we can only do collectively. And so I think there's an unavoidable public responsibility, important as the private sector is, and important as it is to do this brilliant work on trying to start again, in some ways, with some of our operating systems and not build so many vulnerabilities into them. You know, if you don't lock your door, that doesn't make burglarizing your house okay. It's still a punishable crime. And so I think we have to defend our people however they're attacked, by whomever does it. Now, when it gets to states doing this, this is where we're in this category of hybrid warfare, which is somewhere between traditional war and peace. And my view is we need to get doctrinally settled there. As I said before, I think that principle number one is that an attack is an attack. Now, take North Korea attacking Sony, for example. Remember they sank a South Korean ship, the Cheonan? So they're capable of just doing that, and of trying to stay below the threshold at which, in that case, South Korea would respond to them. And we need a playbook for below that threshold. Ditto with Russia.
Now, my view is that that confirmed, if you needed confirmation, that Russia, unlike in the 25 years after the wall came down when we didn't regard them as a military opponent, is one again, as of about three years ago or so. We need a war plan. And NATO needs a war plan, my colleagues here from NATO, for the first time in a couple of decades, because you have an understood and now basically self-declared military opponent in Russia. So I think we need that playbook, just as we need the playbook for the little green men incursions, for the stuff they do up in the Baltic states to stir up minorities. Right. And so this is an area of warfare that they are exploiting, trying to stay below our threshold. We need to make it painful to do that kind of thing to us. Well, let's talk about playbooks for a minute. The title of this session is War Without Rules. Brad Smith, to me it's very telling about where we are with this kind of attack to see a corporate executive in front of the UN in Geneva, as you were, arguing that we should have a Geneva Convention for cyber. Can you tell us a little bit about that and what its prospects are? Sure. I mean, if you think about the playbook, I'd actually start by saying, look, chapter one is sort of there: how do we build more secure software and demystify it, help people update it. Chapter two is sort of increasing our response capabilities when there is an attack. Chapter three, I do think, is: what are the rules for this area of warfare? And the video, I thought, captured extraordinarily well the fact that really since the 1860s, technology has constantly been advancing in ways that have changed the nature of war. And we've seen two areas of rules emerge over time that complement each other in very important ways. One set of rules relates to the use of arms and the control of arms. At times, they prohibit certain arms.
There's a prohibition on chemical weapons. At other times, they may manage or reduce what would otherwise be the growth of arms, as with nuclear arms. So you have that, and yet we don't have that at all today for cyber weapons. The other area is then the protection of civilians. The great innovation that came out of Switzerland and Europe in the 1860s, and has continued to evolve ever since, is a set of rules that have obligated governments to try to avoid attacking or harming civilians in times of war. And that is fundamentally what the Fourth Geneva Convention did when it was adopted in 1949. And then, of course, the great irony of the 21st century is here we are in 2018, and if we look back at these attacks last year, if we look at WannaCry in May, NotPetya in June, what we see is attacks by governments against civilians, not against military targets, but against civilians: hospitals, the electrical grid, the banking system. And as you heard, this is supposed to be a time of peace. So the world literally, in that regard, has been turned upside down, from protecting civilians in times of war to attacking civilians in times of peace. Our view is it should call on us to look at the rules, the laws that we have today, come to a view on where they apply, and the more they apply, the better. And then we can identify the gaps, and then we need to fill those gaps in with, as we've said, a new digital Geneva Convention. Given the breaches we see constantly, I mean, the chemical attacks in Syria, which we saw some of in the video, what hope do we have, what are the prospects that a Geneva Convention could reach and help deter and control the darkest corners of the web? Well, I think the video, again, captured the fundamental reality extraordinarily well, because what it said is rules will never absolutely prevent warfare, nor will they ever absolutely prevent harm to civilians. But it also made the point that I emphatically agree with, which is the world is better with rules than without them.
Because when you have rules, you have the basis to start to ask two questions. First, was there a violation of the rules? And second, who was the violator? And again, it's this point: we shouldn't think that this is so mysterious that we cannot make some real progress. This is not the first time in the history of warfare that you have urgent debates about whether there has been a violation of the rules. Every time chemical weapons are used, the first question is, were they used, and how do we prove that they were used? And there are many times when governments have sought to engage in attacks and obscure the identity of the attacker. I think, to me, one of the most powerful lessons just comes from the fact that when World War II started in Europe in 1939, the very first act of war was committed by German troops entering Poland wearing Polish uniforms. So this notion of launching an attack and trying to make it seem as if someone else was responsible is not new. What we have to do, and what we are doing, I believe, with the tech sector working together, with researchers, and in appropriate ways with governments, is developing and improving the capability to establish when an attack has taken place and to engage in attribution, the word that is used to mean we can attribute this to a specific attacker. Can I just build on Brad's excellent point? Norms and rules do matter. They're not a panacea. But in addition to the benefits that he cited, they define when a transgression has occurred, and they create, at least potentially, the possibility for collective response. So those are two additional ways that the rules are valuable. Still and all, I'm all for that, but I really do believe that we have to go back to the fact that we have governments and public authorities to do the things that we need and can only do collectively, and protecting ourselves is job number one.
Now, the cyber world grew up in what we now call the tech environment, which was militantly independent of government, and that was a great culture in lots of ways. I'm a technologist myself, so I understand that and relate to that. But it also meant that that particular technological revolution took place essentially in an ungoverned way. And that is one of the reasons that has led to the vulnerabilities we have, and also to the immature tradecraft in terms of protection and government response. There's some resentment too; I mean, you can't forget Snowden. Remember Snowden, and that made government action in this area ipso facto suspect in a lot of the community that does this work. Now that's changing, fortunately, and the next generation is more understanding. I think governments are getting more deft at interacting with the private sector. But we need to understand that we went through a couple of decades of essentially Wild West in this area, and this is one of the legacies of that, and we have to clean that up. One of the themes that I'm starting to hear, going back to what you said about an attack being an attack, and your historical perspective, is that there's a risk in thinking about cyber as something entirely new and thus a mystery that we can't grapple with or don't have rules for. But, Professor Yang, to you: one clear difference is the ability to interfere in elections. That's new. Talk about the software vulnerabilities there and your concerns about them, which I know you've written and thought a lot about. Yeah, so something that became very interesting and terrifying to me were the software problems around the last US election. First of all, I heard that these machines are running very old software, but the other very disturbing thing was that, because of the way these machines were set up, without paper trails or things like that, it was very hard for people to detect whether these machines had been tampered with at all.
And so that was one of the challenges in auditing the last election. But there have been regulations since that have tightened some of these things, and what's comforting is that there are also techniques for ensuring that you can preserve privacy while having verifiability of voting. People have studied this in research, and this is a very complicated process, so it's something that is very helpful to model as a mathematical system. And in fact there is a company called Galois where there are many researchers in my field looking at verified voting machines and working with places like DARPA and the Department of Homeland Security to verify voting. But yeah, this was a case for many of us technologists where we said, whoa, we really should step up and try to do something about this problem, because there are known practices for making this better. Yeah, I would say you raise a really important point, and I think, starting with the broadest perspective, the truth is the internet has introduced a new vulnerability for democracies around the world. And we are seeing, I think at this point, four distinct but clearly connected attacks on democracy using the internet as the plane. The first, I would say, is hacking of political candidates running for office, which we saw in the US in 2016 and have seen in numerous countries since. The second is the potential issues relating to voting, which I think is the number one issue in terms of its potential harm to democracy, if people lost confidence in the tabulation of the votes. The third is paid political advertising on social media platforms. And the fourth is unpaid introduction of, in effect, false news. And the issue I think we need to grapple with is that at one level they are distinct. We need to address them with different tools. But, I mean, let's face it, if you're in an intelligence agency and you have made it your mission to use the internet to disrupt a democracy, these are not four different things.
This is one campaign, and it's another area where we need a collective response, and frankly we need a collective response that asks not only what happened in 2016, the question that continues to obsess Washington, DC, but actually asks, given the trends we're seeing in the attacks, what should we predict about 2018 and 2020, and how do we prepare for those? Secretary Carter, you said an attack is an attack. What's the response to an attack on an American election? Do you have a sense of what we are doing or should do? I certainly have a sense of what we should do. I obviously can't speak now for the government. But first I wanna say something, if I may, about an argument I hear sometimes from the Russians or the Chinese. They'll say, well, you do the same thing. We don't, and let me explain what that means. We conduct espionage on the internet, and when we are spied on, I don't complain. I'm unhappy with it, because I wish we were not having our secrets stolen, but I put that in a different category. Covert action is the category that we're talking about. It is not pure espionage. It has the effect of harming. Also, the Russians and the Chinese will say, well, you interfere in elections as well. You stick up for democracy, you oppose leaders who are oppressing their people or stifling them. That's true too. But that's overt, and I think it's fair to have a view of somebody else's internal politics from outside, as long as you're up front about it. But that's different from coming in below. So I think it's important to make those distinctions, because otherwise we get this kind of manipulation from the other side, suggesting that, well, we kind of really do that too. Those are different things. Now, it's not that the United States hasn't conducted covert action over the course of history. It has, and a lot of that has been made a matter of public record going back decades. But these are different categories.
And so when it comes to something that is in the covert action category, or in the non-overt manipulation of our democratic system category, common sense says to call that an attack. Now, how do you deal with it? My view is this: again, an attack is an attack, and you don't have to respond to a cyber attack with cyber. You respond to an attack on you with whatever seems like the appropriate set of tools. And it may be that it amounts to something in the trade area. It may be that it amounts to something in the deployment of systems, or the provision to an ally of systems that you have shown forbearance in. There are lots of ways, for example, in the US-Russia relationship, just to be clear about it, that you can tighten the screws on Russia, and have to in response to this, but it's not an exact tit-for-tat kind of thing. That makes it more complicated, I understand that, but I think that's the nature of the playbook. But you do have to attribute, in such a way that your opponent or enemy understands that there is a price associated with that action. When we do nothing, it simply invites more. It invites them to turn up the dial until we can't stand it any longer, and you'd like to act before that trigger is reached, because in the case of US-Russia, US-China, US-North Korea, it is serious business to breach that threshold. So we need a playbook that is broader, that is not cyber for cyber, and I have some ingredients of that, some I can share, some I can't share, but I think that is a big project ahead for our national security establishment. I also, if I may say parenthetically, and I don't mean to go on too long, the professor made a really excellent point about the installed base. And I'll give you two examples. You might think the Department of Defense must be completely up to date. In no way. We had hundreds of networks, some of them decades old and unpatchable or unpatched, and it's expensive to fix that.
So you can't expect most companies to be completely up to date. They're just not gonna be; they're not gonna be able to make that investment, they're not gonna have the expertise to make it. So there'll always be that problem with the installed base. Every once in a while, somebody says to me, do you think we could get our nuclear command and control system hacked with a cyber attack? And my only half-joking response is, it's not modern enough for that. It was designed decades ago and it doesn't incorporate any of this stuff. It's excellent, it works, God forbid we ever need it, but it's old stuff. Now, we're gonna recapitalize that, and when we do, we will be using modern IT, and that'll be a real issue. Let's talk about some risks, Professor Yang, in modern IT, particularly around artificial intelligence as it relates to drones, for example, but there are many examples, and the risk that a hacker could interfere with those systems. Right, so artificial intelligence is something new, and going back to the subject of the playbook, it's something that should be regulated, and we should be aware that it is not magic and there are extreme risks with it. So one part that you asked about is drone warfare, and recently many CEOs signed a petition saying that artificial intelligence should not be used for this, because artificial intelligence is subject to flaws, it's trained on very specific data sets, and it's brittle. So if it sees a situation that it's not familiar with, very catastrophic things could happen, and it should not be used as a tool that could potentially harm a lot of people. But also going back to what Brad said about democracy and elections, artificial intelligence is something that we should be very cognizant of and careful about when it comes to manipulating people. This is a very powerful tool that people have for understanding the exact goals and motivations of specific citizens.
So you might have heard that for the last election, certain people built very precise models of what citizens might be interested in, to do targeted political advertising. That's a very dangerous tool that perhaps we should talk about how to regulate more. When you have everyone connected on social media and you have social media ads hitting so many people, we should have a better understanding of how those ads are finding the people, the effect that those ads have on people, and also what it means to have an algorithm, an AI, determining what people get exposed to. And there are growing numbers of researchers working on this. So this, again, is something that doesn't have to be magic anymore. It doesn't have to be a black box. We can understand it, we can look inside of it, and we should look inside of it before we use it at such scale. Well, and extending that, Brad, to the smart home, where we're now seeing essentially AI, or something close to it, in refrigerators and heating systems: what are the vulnerabilities there as that technology proliferates? Well, I think the fundamental challenge is we're gonna see artificial intelligence infused into all of these devices. They're gonna change our lives, they're gonna bring lots of benefits, and it does introduce yet another potential point of exploitation. Just imagine 20 years from now, you have a lot of self-driving cars driving down a highway. And if someone can figure out how to hack their way into the system that those automobiles are relying upon and cause all those automobiles to crash into each other, you can produce a calamity. So obviously we need to first of all recognize how important it is that we protect against that. We still are lacking the kind of global security standard for the internet of things. And so my greatest fear is that we'll replicate, in effect, what happened with, say, software from the 1990s.
We realize later that we need to respond to a new technological era of threats, but we have this massive installed base that is very difficult to upgrade and protect. And unfortunately, Windows XP has illustrated that; the WannaCry attack illustrated that. To put it in perspective, Windows XP was released in 2001. It was released six years before the very first iPhone. It was released six months before the very first iPod. Think of all the thousands of people that have been putting their devices through the X-ray detectors at Davos. I don't know about you, but I haven't seen a single iPod going through any of those. Nobody's gonna walk around and say, oh, look at my iPod, isn't this exciting? And yet we still have, unfortunately, many millions of machines that people haven't upgraded from Windows XP, even after we've offered at certain times not just reduced prices but free upgrades to try to move people off. So it goes back to this fundamental point on which I couldn't agree more with the professor. Let's demystify this. Yes, there are these nuances and details and important technical aspects, but there are parts that we can all start to talk about in a more approachable way. And we have to do that as a company and an industry. Let me just follow up on that last point on AI here. I'm asked a lot about autonomous weapons, and again, you have to find your common sense in these things. And I say, if you're gonna use violence on behalf of the public, which is what you do, you have to be able to explain why that's appropriate. And you can't say the machine told me to do it. Now, I expect there to be autonomous weapons in the sense of highly assisted ones, and ones where there isn't literally a human in the loop, but there has to be traceable accountability for anything that is done of consequence. And this is true elsewhere also.
I don't know if you've read these stories, and I don't know how accurate they are, about systems that were supposedly assisting judges in making parole decisions by predicting how likely it would be that somebody would re-offend. And so this magical result came out and it looked like science, but you peeled it back and there were all kinds of perfectly human biases that had been built into it. So I think when we talk about AI, the challenge is gonna be traceable accountability and a common-sense explanation, so that it is possible, after a prediction is made or a judgment is rendered by a machine, to go back and figure out why, and whether that was right and okay. And if you can't do that, certainly I couldn't justify it in the use of violence. There was a Russian, you might remember his name, I can't recall it at the moment, who passed away a couple of months ago, who was credited with preventing a disaster, I think it was maybe around the shooting down of the KAL flight. There was an escalation process that began to take place, and this one human individual had the sense to stop what could have been a catastrophe. And if you hadn't had that... You know, a more recent example: Hawaii. Hawaii. And what if that had been a hack? I suppose to some degree we don't know for certain it wasn't. I just say that is an example of what must have been, I don't know the specifics and maybe these two do, a really poor design. You shouldn't be that close. Starting with the fact that he had his password. Yeah, and so we, and I'm just speaking for defense, for use of nuclear weapons, use of force, shooting down missiles and so forth, the do and don't buttons aren't right next to each other. That seemed to be the case in the state of Hawaii, and it goes back to, you know, you're gonna have lousy tradecraft in some places. These are gonna be amateurishly engineered things that make mistakes. Let me ask each of you a kind of rapid-response question, and then I wanna open it up to the audience.
The WEF's annual risk report listed cyber attacks as the third highest risk, right behind natural disasters and climate. A British army official said this week that cyber is a greater threat than terrorism and that we've been too distracted by terrorism. Brad, you talked about 20 years from now and autonomous cars. I get that there's a future shock scenario, but what about the present? How urgent is this risk? I think it is a very pressing problem. It certainly ranks, in my view, among the top three that we ought to be talking about around the world. As we're thinking about the evolution of cybersecurity threats, you know, what started as this proverbial account of some teenager sitting on a bed in his, or usually it was a guy, his bedroom, hacking into something, has become something altogether different. What we are increasingly seeing is very sophisticated, organized criminal attacks, often launched out of countries where the rule of law doesn't reach. The tip of the spear are these emails that you get, the so-called phishing attacks, that are designed increasingly to get you to click on something and then provide your password, and from there your email is theirs. But it's all being done to make money. That has become more serious. And then the nation-state attacks; I think that is the single most serious threat today. That was made clear by these new events in 2017. And ultimately, look, the sad truth of life is that whenever you have new weapons, you should always worry that eventually terrorists will figure out how to exploit them as well. So to say that cybersecurity is on one side and terrorism on the other may be an accurate statement about 2017, but we cannot live on the premise that they'll always be separate. We need to take all of this all the more seriously because of that.
Secretary Carter, you became Secretary of Defense right after the Sony attack, if I've got my dates right. From Sony to WannaCry, how much worse did it get? How much scarier did it get? The sophistication? I don't know that the sophistication got that much greater. What I would say is that we collectively didn't get much better during that period, quite honestly. I don't think we have written that playbook, in the United States, internationally, or with the technology community. We're not moving fast enough here. And I'm a big fan of priorities when the things you're talking about compete for resources. But we have to do cyber, we have to do terrorism. I was earlier today in a panel on bio threats. These are all pretty big things. They require different kinds of response, so they don't exactly compete with one another. And I don't know what ranking them really means, although I guess if you're running a conference, you need to know that. But in terms of action, if they don't consume the same resources, you gotta do them all. And we have some big challenges ahead. In my view, and I say this as a technologist, the things that create new dilemmas for humankind grow out of science and technology, because that's what creates the change. And that frontier is constantly moving, and we need to keep public goodness moving at the same speed as that technology is moving. And in many fields, that has not been the case. I'm more concerned about our pace collectively as governments and populations than I am about which is more important. How terrified should we be? Oh, we should be pretty scared. I mean, if you think about what you did before coming here: you woke up, probably you had a software-powered alarm, you checked your email, went on social media, used running water, used electricity. All these things are powered by software. And some of it runs on systems that are unpatched.
A lot of this software is made by companies that aren't under very much government regulation. If you look at, for instance, Snapchat: it had that scandal where it was saving photos instead of destroying them, and I think the FTC slapped them on the wrist with something that was under $50,000. So yeah, we do a lot of stuff with software, and a lot of it is buggy and unregulated. And for artificial intelligence, we haven't even started talking about how to really think about regulating that. So we should be scared. We should stop allowing software to develop in this Wild West kind of way, because it matters to us now. We should start thinking about how we have some rules for this and what makes sense for the good of the people. Let's have some questions from the audience here. I don't know, is there a mic, or how does this work? Go ahead. I'm Wendell Wallach, the Hastings Center and the Yale Interdisciplinary Center for Bioethics. I'd like to dig a little bit deeper into what we really need in respect to a Geneva Accord, or what we need added to the law of armed conflict. I think we've found out pretty well in the landmine case, and now in the question of lethal autonomous weapons, that it's very hard to even get a full treaty if you have some parties who just wanna obfuscate even the common sense discussion. And I don't think that's gonna be any less true with cyber than it has been with lethal autonomy. So it seems to me the prospect of getting a full new Geneva Accord is gonna be pretty hard. Getting a treaty is not gonna be easy. And I'm wondering whether we can move toward a simpler treaty or protocol. Just one example: we have the Martens Clause, which says that not all weaponry that might exist is really covered by the existing laws of armed conflict. Would it be enough to just declare that that extends to cyber weaponry?
Or do we need something more than that in this case to ensure that we have appropriate international protocols in place? I actually think you raised a number of good points and I think the short answer is it's probably too early to know exactly how to answer them. But I think there are some important themes in what you've raised and we should build on them. I think the first theme is look to the extent that we can build on existing law, we will be better served by doing so. So to the extent that we can build a consensus that says that the fourth Geneva Convention or some of the aspects of the United Nations Charter or other international instruments apply to cyberspace, that will be helpful. One of the interesting challenges actually in figuring out how to interpret existing law, especially international humanitarian law, is that it's all written with obligations that are imposed in times of war. So there's this murky question, this gray zone, are we in a time of war or are we in a time of peace? But the more we can apply existing law, the better. The more we can then figure out how to act to fill in the gaps, the better. From my perspective, it's less about whether this is called a protocol or a convention or a treaty or something else like a commentary. The fundamental lesson is one that I think you alluded to. If you look at the entire history of international humanitarian law and arms control, fundamentally, there is only progress by governments when there is strong public opinion pushing governments to move. One of the ironies of arms control is that when new weapons emerge, initially, the governments that are at the forefront of the technology are typically the least enthusiastic about having any rules for them because they look around the world and they say, you know what, we have better weapons than anybody else so we're gonna have to give up more than everybody else. 
And then eventually you find that North Korea has a nuclear warhead and is now developing the missile for it. But it will take broad and global public opinion, I believe, to push governments to move, and then hopefully, and I think the landmine analogy is really apt, there are a lot of lessons to be learned from what was done to address landmines, the public will then encourage people in government to apply the right approaches to make the fastest progress. Next question. A few years ago, Ted Koppel wrote a book, Lights Out, and I found it very frightening. And Professor Yang, you just alluded to many of those things, except all the way to the electric grid. I also heard a speech from a general in New York recently, saying that this is probably what keeps most heads of intelligence agencies up at night nowadays. Instead, we are concentrating on elections and we're concentrating on emails. I know we can't prioritize everything, but that is an awfully frightening thought. And with the dark side of the web and the terrorism attacks, it's also said that people have actually practiced attacks on the electrical grid on a small scale. So are we really prioritizing something like that as a realistic attack? Your comments, please. Go ahead. No, you start, please. Well, I can only talk about the realism of the attack and not so much the priorities, but there is software that runs the electric grid. You know, there have been reports of attempts to hack it. A lot of this software probably runs on very old infrastructure, so it is vulnerable. And there are places like NASA and the French aviation company Dassault that pay people to prove the correctness of their software. And I think that for something like the electrical grid, that's very reasonable. It's mission critical. It affects all of us.
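Professor Yang's point about proving software correct can be sketched in miniature. The snippet below is a hypothetical illustration, not anything discussed on the panel: the `clamp` function, its specification, and the bounded domain are my own. Real verification efforts of the kind she mentions use proof assistants or model checkers and cover all possible inputs; this sketch only checks a spec exhaustively over a small finite domain.

```python
# Toy illustration of checking an implementation against its specification.
# Real formal verification proves the spec for ALL inputs; here we only
# check a small bounded domain exhaustively.

def clamp(value, lo, hi):
    """Clamp value into the inclusive range [lo, hi]."""
    return max(lo, min(value, hi))

def spec_holds(value, lo, hi):
    """Spec: the result lies in [lo, hi], and equals value
    whenever value is already in range."""
    result = clamp(value, lo, hi)
    in_range = lo <= result <= hi
    identity = (result == value) if lo <= value <= hi else True
    return in_range and identity

# Exhaustively check the spec over every valid triple in a small domain.
domain = range(-10, 11)
all_ok = all(
    spec_holds(v, lo, hi)
    for v in domain
    for lo in domain
    for hi in domain
    if lo <= hi
)
print(all_ok)  # → True
```

The design point is the separation of concerns: the spec (`spec_holds`) says *what* correct behavior is, independently of *how* `clamp` computes it, which is exactly the structure a proof assistant demands, just scaled from a finite check to a proof over all inputs.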
We should be prioritizing something like that as software we need to formally reason about, have strong guarantees about, and keep up to date. Can I just build on that? You're absolutely right. Another way to look at all these questions is: why haven't we done something yet? Why haven't we made our critical infrastructure more resilient yet? And there are answers; I'm not saying they're good answers or that they justify where we are, but first of all, these systems are mostly in private hands. And their owners don't always welcome the government telling them what to do, nor do tech companies welcome being told what to do by the government. So you run up against that. There's the question of who pays for it. If you're gonna mandate something that costs money, how do you do that? If you're gonna ask people to work together, as Brad was saying earlier, and I commend Microsoft and some other tech leaders for doing this, sharing information about who's doing what; early on, the antitrust people were threatening them, because if they were talking to one another about things that were business sensitive, that could violate the antitrust laws. And so there are a lot of things to work through here. It isn't only a matter of prioritizing, although that is important; you gotta get down and do the dirty work of working through some of the problems that stand in the way. Nevertheless, I think all of that is tractable, but the reality is we've been on the path for a decade or so, I would say, and we're not there yet. And to the extent it takes public pressure, as you were noting, the public interest, at least in the United States, around infrastructure has been bridges and tunnels and physically visible infrastructure. And I think there's been almost no public debate, no public outcry, about this kind of concern. So it's a great point. Go ahead. Espen Barth Eide from Norway, thank you. Excellent conversation.
It's on the question of a potential new convention. Inspired by what Wendell Wallach asked and what you responded, I have limited enthusiasm for writing new conventions when you already have some. It's true that the Geneva Convention is more difficult to interpret, but what it basically says is that there's a principle of proportionality and a principle of distinction, meaning that an attack has to be commensurate to the military goal you want to achieve and you should avoid deliberately attacking civilian infrastructure. Well, those are good principles. So sort of a softer way would be some common process of actively interpreting what they mean in the cyber world. But I don't necessarily believe that we are always getting wiser just because we get older. And maybe these principles are so solidly embedded now that it's better to build on what's already there than to try to reinvent the wheel. As I said before, I'm wholeheartedly supportive of applying existing principles wherever we can. And I think fundamentally there are three questions we need to think through. One is: do those principles apply already as a matter of law? The more we can build a consensus that says the answer is yes, the better off we will be. The second is really your question, which I think is absolutely spot on: okay, what are those principles? What do they mean? And can we interpret them, frankly, to solve as much of this problem as we can? And then the third will be: what is left? What are the gaps? As I've spoken to people, I haven't yet found anyone who is an expert in the field who has any confidence that these principles will address 100% of the problem. But let's say they only address 60% of the problem. Let's address the 60%. Let's make progress, let's move forward. And then as we do that, let's use that to build momentum to figure out how we fill in whatever is left. Let me take one from the back, sir. Thank you. Tim Snyder, I'm a historian.
Professor Yang, a lot of what people do on the internet is motivated by the little jolts of pleasure that they get, the little incentives that they get. I'm struck that when it comes to updating security, people are treated in a schoolmarmish way. Wouldn't it make sense to give people gold stars for updating their software and changing their passwords? Because it's gonna be a long time before all this is demystified, and people are people. And to the other two panelists: if we classify active cyber measures as legally an act of war or as an attack, what does that mean, legally or ethically, for how we classify human beings, citizens of the attacked country, who consciously collaborate with such an action? So as for encouraging people to update their software: companies have a lot more incentive to get people to like things and click on ads than they do to have people update their software. So some of this needs to come from above. An example of a protection: I don't know how many of you saw the Google Arts and Culture app, where you take a selfie and then it tells you what painting you look like. In three states that's actually banned, because it doesn't get people's consent sufficiently before asking for a selfie. And so I think having regulation that requires companies to get people to update their software would be very helpful here, because otherwise companies just don't have the incentive. We have about four minutes left. One quick question, and then I'd like to let the panelists sum up. John Chipman from the International Institute for Strategic Studies, one point and a quick question. The point is that there's no international law against espionage. So when the CIA or MI6 instruct their operatives to find out what the Supreme Leader of Iran is thinking, they don't say, do so in a manner that is consistent with the domestic jurisdiction of the Islamic Republic of Iran.
The point of international espionage is that you break the domestic law of the countries in which you are operating, and that's something everybody engages in. There is, however, an international law against intellectual property theft. So when the Chinese government builds its J-20 stealth aircraft based on technology that was actively stolen from a US company, that is against rules that are known and recognized, and therefore punishment is possible. My question is: when a private company knows that it has been the subject of state-sponsored theft of its intellectual property, what is the advice of the panel on how quickly it should publicly attribute that theft, and how should it coordinate with others in order to ensure a proportional response? Well, that's a big dilemma for companies. Admitting that they were vulnerable, admitting that they were hacked, admitting that their customers' data has been compromised is a big action for many of these companies. They're unwilling to do it. That has nothing to do with any expectation that the government will take action or not. It's simply surfacing the fact that this happened. That explains a lot of reluctance and a lot of latency, a lot of delay, between the time an attack occurs and the time the attack is reported to the government. Also, they may not be willing to cooperate with the government in conducting forensics, and so it varies from case to case. Some are quite willing, but some are quite reluctant. So you get back to the fact that this is private property. You know, if somebody breaks into your car and steals something out of it, you might report it, you might not report it. You don't have to report it.
And obviously, if you report it, then it'll get investigated, and there's a possibility there'll be a criminal investigation and that we collectively, through our security institutions, will act to find who did it, punish them, try to restore your property and so forth. But you don't have to do that if you don't want to. And I think there's a lot more of this that goes on, I know this, a lot more than is surfaced, for just that reason. These were terrific questions. We have about a minute left. I wonder if each of you would just quickly say: what's the most urgent next step from your point of view? Well, I unfortunately think we need to advance on multiple fronts. Clearly, we in the tech sector need to keep working, separately and together, to strengthen defenses, build more secure software, and be more effective in acting in our role as first responders. We do need to find a way to crack the code, so to speak, to at least demystify what can be demystified, to make this more approachable. Frankly, we need to make it easier for customers to keep systems up to date, but at the end of the day, it is that analogy: hey, we can build great locks, but somebody better remember to lock the door. And then, apropos this point about governments: I couldn't agree more that an attack is an attack. I would love to see governments just publicly acknowledge that and act on that basis, do what we can, move as fast as we can, build on the law that is there, and then ultimately take the law to the destination where it needs to go. Amen. Demystify, use common sense and good old human reasoning here, even though it's a different domain.
So whether it's rules, whether it's proportionality and distinction in warfare, whether it's treating an attack as an attack, we have to continue to find our humanity within this technology, and people have the good sense to know how to do that even if they don't understand all the technology as well as our next speaker does. And I think for us in research, the biggest priority should be to continue understanding programs and AI algorithms better. We've been working on understanding why our software is correct; we need more work on understanding why our AI algorithms do what we expect them to do. And then the other responsibility we have is to communicate that to the public, to tell you that it's not magic and it's not a black box. Thank you so much to the three of you for being here. Thanks, that was fun, that was great.