Good morning, everyone. My name is Bill Burns, and I'm the president of the Carnegie Endowment for International Peace. And I really am delighted to welcome all of you to what is the first collaboration between Jigsaw and Carnegie's new technology and international affairs program. Today's event in many ways is a continuation of the conversation that Jared Cohen and I have had since we first met a decade ago at the State Department. The question we wrestled with then, and have continued to wrestle with ever since, is how diplomacy can best adapt to the realities of a digital age. I've truly admired the skill and vision with which Jared has pushed all of us to re-examine our assumptions, open our eyes to new threats and new opportunities, and add new tools to our diplomatic toolkit. I think we've made slow but steady progress in a number of areas, but the truth is that the gap between the pace of innovation and the rules, norms, and strategies animating cyberspace has been getting wider and wider by the day. The same is true of a number of other consequential domains of technology for international peace and prosperity, from biotechnology to artificial intelligence and beyond. This is precisely why we've made the intersection of technology and international affairs an area of focus across Carnegie's global network. Over the past year, we've launched our first pilot project, Carnegie's Cyber Policy Initiative. We assembled a characteristically global and diverse team, including a former Israeli Deputy National Security Advisor, one of China's leading strategic thinkers, a former Turkish diplomat, an Indian lawyer, and a British physicist, among others. Working behind the scenes with government officials, experts, and businesses in key countries, our team is working to develop measures to help reduce cyber risks. 
These include, in the first instance, threats to the integrity of financial data and algorithms, systemic corruption of the information and communication technology supply chain, attacks on the command and control of strategic weapons systems, and the issue we'll focus on in our second panel today, Active Cyber Defense by Private Firms. Our guiding assumption in all of this work is that governments will continue to have a central role when it comes to rulemaking in the cyber domain. But for these rules to be effective and equitable, they must apply internationally and they must be devised in partnership with the private sector. This is why we're delighted to collaborate with Jigsaw for today's event. Our hope is really to accomplish two goals. First, to map the contours of today's strategic challenge and the policy options before us. And second, to delve into the specific question of Active Cyber Defense by the private sector, to highlight the complexity and urgency of bridging the gap between technology and policy. We could not ask for a more distinguished group of policymakers, journalists, technologists, and analysts to help us navigate this very complicated terrain. I'm deeply grateful to all of them and to all of you for taking part in what should be a timely and fascinating conversation. To get us started, I'm very pleased to hand over the baton to the moderator of our terrific first panel, David Sanger. David is an extraordinary journalist. He's broken one major cyber story after another. And he's helped the public have an informed conversation about the challenges before us today and those we'll have to confront tomorrow. So let me again thank Jared and his colleagues at Jigsaw for their partnership and ask all of you to join me in giving a very warm welcome to our first panel. Thank you very much. Well, thank you very much. Thanks to all of you for coming out in the early morning session. 
And thanks to Bill for really the remarkable things he's done over the, I guess, what now, nearly two years at Carnegie and getting programs like this going, and to Jared Cohen, who's here now from Jigsaw and actually wearing a necktie today, as we've been discussing, in violation of all of this global policy. And so we have a great panel. To my immediate left is Jane Holl Lute, whom you may have known as Deputy Secretary of Homeland Security, but who has had many other fascinating incarnations in the cyber and non-cyber world, including at the United Nations and in peacekeeping. Mike Chertoff, who of course was Secretary of Homeland Security and whose Chertoff Group now does some of the most interesting cyber work around. And Chris Painter, who's at the State Department and has taught me a huge amount over the years about setting up international cyber norms, the subject we'll be discussing a lot today. So I thought I would just start with this. We have seen in the headlines a dizzying array of different kinds of cyber activity, each of which raises a different kind of way we have to think about what the responses are, whether there are norms that can be set up and so forth. Let me just tick off a few, and then I'm going to ask each of you to just sort of pick one or two and describe how they differ. So we had the North Korean attack on Sony, which was both a data theft and, less noticed by many people, destruction of about 70% of Sony's computers. So it was actually a destructive act, which I suspect President Obama probably would have had to respond to differently had the North Koreans, instead of using cyber or having that available to them, just come along and stuck some dynamite underneath the Sony computer centers and blown them sky high. And it would have looked like a different attack, even if it had similar elements. 
We had the OPM theft from China, which was the theft of 21, 22 million very detailed security forms, including those filled out by many of you, including a few on the front row here. And this was more than the theft of personal information and seemed to have an intelligence use that would go well beyond the kind of theft we've seen in the past. We've had intellectual property theft, also from the Chinese, famously from Unit 61398, the PLA unit, but other ones as well, that was abated somewhat by an agreement that President Obama reached. And of course, we had the Russian hack in the past year, which was the combination of new cyber techniques to accelerate very old information warfare techniques and propaganda techniques. Each one has posed an extremely different challenge. And so I just wanted to start maybe with you, Mike, just to ask a question: as you look at that list, can you tell us how we should be thinking a little bit about espionage versus information exploitation versus offensive attack? So, well, first of all, David, thank you for moderating. And it's great to be up here with this group. And thanks to Carnegie for convening this. I do think it's very important to separate these things and not to lump them all together. So espionage, to me, is the oldest thing around. I mean, go back to the Bible, when Joshua sent spies into Canaan. The only difference is we now use technology. But I don't think the reaction to espionage, even in cyberspace, is dramatically different than it's traditionally been. If we catch the people who do it, we try to prosecute them. We're often unsuccessful. Basically, it's on us to protect and preserve our secrets. Then you get to the issue of actually destructive attacks. And in addition to the Sony attack, you have had, at least according to reported news, in the last couple of years, Ukraine has had attacks on their energy infrastructure. 
Obviously, there was an attack in Georgia back in 2008, during the invasion, on the command and control system. And these, I think, when they hit a certain scale, are really just acts of violence. And the response there ought to be similar to what we would do with an act of violence if it occurred in the physical realm. The third area, which is the information operations, I think we have to be careful about. There are a lot of people who are talking about fake news or information operations as if that ought to be viewed through the lens of cybersecurity. And I have to say, I think that would be a big mistake. If we were to say that we ought to treat cybersecurity requirements as a way of repelling fake news or information operations, I can tell you the first two countries to sign up would be Russia and China, because they view content that they don't like as fake news. And I would argue there are even some people in the U.S. who think fake news means stuff I don't want to read about. So that would open the door to censorship. So I think when we deal with fake news and information operations, as David has said, we've dealt with that for decades. That probably also goes back to the Bible. You used the example, I think, of Adam and Eve as a disinformation operation. The first disinformation operation. So there I think we have to be careful not to use the expression weaponization of the news, because I think that takes us into an area of censorship, which at least in terms of American and Western values would not be a happy place to be. So Jane, when you were at DHS, seems like a long time ago, but it was only four years ago that you fled the scene. We were at an earlier stage of understanding how government should respond to this different range of threats that Mike has, I think, laid out. So tell me a little bit about how you think the thinking has evolved since then. 
And if you could sort of design the system right now of how you would deter or deal with each of these, whether deterrence by resilience or deterrence by denial, what would that look like? Let me add my thanks to Bill's and to yours, David, for this first-rate event. It's always a pleasure to be with Chris and with you and with Mike. I think I would say the following. There are aspects of this conversation that we had four years ago, and aspects of this conversation that were had in 2000, and aspects of this conversation that were had in 1991 and earlier. So with the exception of some specifics which you've mentioned, we haven't answered the three core questions of cybersecurity. How do we architect systems we can trust from components we can't? How do we ensure the integrity of our identity and our information in an open internet that was designed for the free, unrestricted, unimpeded flow of information, end to end? And what should the role of government be? We haven't answered those questions, and we haven't solved major problems from a technological point of view, and we have no sense of the distribution of responsibility when we talk about deterrence. Traditionally, security has been something that societies assign to their governments to handle. We want safe streets? Governments, you run the police. We want safe countries? Governments, you run the army, you run the military, you make the laws. And governments these days seem to fall into three categories when it comes to cyberspace and life online. There are governments who fear threats from outside their borders: bad guys are out there trying to get in here. Newsflash, they're here. A second set of countries fear threats manifested in cyberspace from within their borders. And we see evidence of that every single day. And there are governments who are really not threat-preoccupied but see an awful lot of money changing hands in cyberspace and want a piece of it. 
And when you look at the big players in cyberspace, I mean, Google's the biggest by far, by far, and it knows more about its users, Facebook knows more about its users, Yahoo knows more about its users, than any government on the planet knows about its citizens. So we can talk about these mountain peaks and the kinds of incidents that you've raised, David, but there's a whole topology out there that we are not paying attention to that is meaningfully moving the mountains. So let me pursue that for a moment with you before we turn to Chris. So you're right, Google, Facebook and Yahoo together know far more about their users than most governments will. And yet they are clearly reluctant to take on the government role. Here, look at Facebook's position six months ago during the Russian hack, where they basically said, we're not out here to police fake news content, for exactly the reasons that Mike described. And look where they are today, where they are trying to build up an editorial infrastructure, something those of us who've been doing it since 1851 are somewhat familiar with. Okay, so take the implication of your thought: is this going to force Google, Facebook, Yahoo and everyone else like them to get involved in content? To police content for fear that they are going to be used for political manipulation? It's easy in terms of content if somebody's conducting a murder, as was horrifically recorded on Facebook over the weekend. But it's a lot harder when you get to something like the Russia case. So I think it's a really interesting and important question. I mean, I think the role of government in our lives is changing. It's not clear exactly how, and the pace of that varies, I think. But governments have lost the corner on the market on the control of lethality, the control of capital, and the control of rulemaking. And that has been widely diffused into private sector hands, both individual and enterprise. That's point one. 
Point two is, as these companies emerge with greater and greater social reliance on them, there are greater and greater social expectations. We saw this with the BP oil spill in the Gulf in 2010. For a while, the company acted as if it was a private sector company that had a problem that it had to solve. And it took a while for BP to realize that there was an expectation that they would be acting in much greater and more visible ways in the social interest. And this was a cultural shock, I think, to BP. And I think what we're seeing again, even as late as yesterday, with the Facebook recognition of this so-called live-streaming shooting, which turned out to be a murder and may not have exactly been live-streamed, is that there are responsibilities now that are associated with the power they wield. How will that play out? We're just at the beginning of that conversation. So Chris, this is where your day job intersects the reality of what we're doing. So here in what President Obama used to refer to as the Wild West of the Internet, they come to the State Department and they say, you guys have done rules and norms for as long as we've had a country. And you've done it in weapons systems before, whether it's proliferation of chemical weapons, biological weapons, whatever. You've done it in norms of behavior, in adherence to things like the Geneva Convention. So they hand you the giant boiling stew of all these issues we're just discussing. Which lend themselves to government-agreed-upon norms and which don't? So I mean, this is still a very evolving new area. I've been doing this now for 27 years in different capacities, at Justice, at the NSC, and now at State. I've never worked at Homeland Security. That's the one agency I haven't been at, I think. That's why we had to have you on this panel. What I've seen is a real evolution in this. 
And we talked, and both Jane and Mike have talked about this to some extent, where I'd say the one thing is, yes, a lot of these questions are still not answered, but there's an increased and sustained attention to these that wasn't there before. And I've seen this almost like a sinusoidal wave, where it goes up and there'll be a lot of attention, then it drops off. There was a 2003 cyber strategy which is still on the books, but no one paid attention; it became shelfware the next year. I remember working on that. But now, I often say cyber is now the new black. Everyone cares about cyber. They don't know what to do about it, but they care about it. And so part of that is being more precise about what we're talking about. You raised a number of incidents, and I'd say there's a bunch of others too. Saudi Aramco, where there was an attack that wiped things out. Estonia back in 2007. The denial-of-service attack against our financial institutions, where part of the solution actually was diplomatic: us doing a diplomatic démarche to 25 countries where these botnets were located, to ask them to take action. So it's not just the DOJ and DHS and the military; there are other options we have too. Can you just back up for those who weren't following this and just describe that incident for a minute? So this was later the subject of an indictment out of the Department of Justice, and the indictment charged Iranian actors with doing this. But there was a widespread and sustained attack. And this is one of the points I was making, and David did too. People just use attack all the time. Cyber attack, a big cyber attack. Well, there's different things. There is a cyber attack, and this was one, because it was actually doing what they call a denial-of-service attack, where you're trying to take down a website or make it inaccessible. There is the Saudi Aramco one, which is an attack. Then there are intrusions and there's theft of information. 
As the Chinese were doing. There are other kinds of disruptions which deal with the integrity of data. So if you're not precise about what you're talking about, what category you're talking about, if you're not precise about the actors, state, non-state, et cetera, the solutions are going to be very different. So in this case, there was, sustained over a period of a year, what they call a denial-of-service attack. These are compromised computers all over the world, hundreds of thousands of them, and they changed their location. The Iranians were charged with using this to disrupt our financial websites. Now, was that getting into the back room and actually having an effect in terms of the integrity of the information? No. So this was like an advanced nuisance in a way. But it was important; it was having an effect and it was causing some issues. So to respond to that, since these botnets were located all over the world, including the US, we went to about 25 different countries. Germany's a good example. We said, look, you don't want these compromised computers in your country. And we did, and before I came to the State Department, Bill, I didn't know what a démarche really was. It sounds bad, doesn't it? It sounds like you're yelling at someone. A démarche can be a positive thing; it could be, can you help us? And we went to these countries very formally and said, can you use whatever tools you have, your relationship with ISPs, your criminal law, whatever you have, to try to help us mitigate this. And they did, and that was a very useful experience. And that was also combined with some other Iranian activity that was charged, including an attempted attack at a dam that was charged in that indictment. But that I think was one of the high watermarks where people started to focus on this. Certainly Sony was another, where when a CEO loses their job, that draws a lot of attention, I think, and that makes a big difference. 
But then to the norm point, there's a whole group of tools that we're trying to develop. And no one tool is the silver bullet for dealing with it. So obviously you have criminal indictments, which have been done. You have sanctions, economic sanctions, that you can use. You have the diplomacy, the diplomatic tools, not just US, but also with our partners, going against transgressors. And then you're trying to build the more stable long-term environment. What are the rules of the road? What's acceptable in cyberspace and what's not, by state actors? And so this deals with a lot of the state activity and what we think should be allowed and what should not. And part of that is grounded in international law. So the Chinese and Russians, and we've talked about them, they think that information weapons are often things like Google and like various kinds of social media, in terms of the influence they might have within their borders. And they are very concerned about their cyberspace, and they want to draw a sovereign boundary around their cyberspace and control all information in it. They don't use the term cybersecurity; they use the term information security. So they're very concerned about that. And we're concerned about the integrity of systems and attacks on those systems. So at the high level, international law applying means that even the cyberspace world is not so different from the physical world that you have a whole different rule set. Things like the law of armed conflict, things like distinction and proportionality. When you get to that really high level of wartime, that applies in cyberspace. And that's important. We've got an agreement even with the Chinese and Russians. The second set is, what are the voluntary norms of behavior you want to promote? And this is the beginning of rules of the road. So things like don't attack the critical infrastructure of another country that provides services to the public, absent wartime. 
Don't attack the CERT, the computer emergency response team, essentially the ambulance in another country. And then third is just confidence-building measures, transparency measures, which is the only real analogy from the nuclear world. And the idea is, if you can get countries to rally around these norms and start saying these are the rules of the road that we accept, when you have transgressors, which you always will, as you have in the physical world, you can work with partners to act against those transgressors using all the tools you have. And that's really where a lot of our effort is going; it's not the full solution. Resilience is part of the solution. Deterrence is part of the solution. But this is part of the tools that we're dealing with. And I'd just say that just recently, the G7 had its meeting of foreign ministers in Lucca, Italy. And if you haven't seen it yet, there is a standalone declaration about these issues and about norms and about international law and about states actually taking action in appropriate circumstances. We'll come back to that in just a minute. Mike, let me ask you this. I was with Secretary Tillerson in Moscow last week. And we had a news conference after he met Putin, and the main subjects, of course, were all about Syria. But at one point, a Russian reporter stood up and said to him, Secretary Tillerson, sort of summarizing this, we've heard a lot from the United States about the attacks on your election system. But we also read that the United States engages in all kinds of offensive cyber activities, including against Iran, including against North Korea, a reference, I think, to the stories we've written about the military and missile systems and all that. Why should we view what you're accusing us of doing as any different from what you're doing? And he started down the road by saying, you know, attacks on military installations I put in a completely different category. But then he didn't go on to say why. 
But the moment sort of crystallized, I think, the fact that in a world in which a lot of these things are put together in people's heads, we haven't been very good as a country in explaining the rationale behind our own considerable offensive operations, which are expanding and on which, as Jane has pointed out, we spend a lot more money than we spend on defense, at least in declared cases. So talk a little bit about what we need to do to answer what is an increasingly common international line on this. Well, so first, you know, to come back to the point I made earlier, when we say offensive operations, that means different things to different people. And I break it into three categories. I think on traditional espionage, I don't, you know, I may have missed this, but I don't think the U.S. has typically said that espionage ought to be off limits, because that would be an unrealistic thing. Quite the opposite. Even in the Chinese OPM case, Clapper said if we could have done it, we would have. Yeah. So then you get commercial espionage. And there I think the U.S. has taken the position, and I think we've been consistent on this: you ought not use the tools of national power as a way of achieving competitive advantage for your businesses. And that was the subject of the agreement between Obama and Xi a few years ago, which, depending on who you talk to, appears at least to have dialed down some of the commercial espionage or commercial intellectual property theft, although I wouldn't bet that it's eliminated it. Then the third issue, which is I think what you're really alluding to, is when you have actual attack operations, or something that is a destructive operation. Now look, in the physical world, as you witnessed in Syria a couple of weeks ago, we do use weapons offensively, and we haven't ruled it out. 
So if you asked me, I would think that there are occasions when using cyber weapons offensively would be appropriate. And certainly on the battlefield; if you look at Russian and Chinese military doctrine, they talk openly about the battle space involving command and control and communications, and dominating that. And you'd be silly, I think, strategically, not to consider using tools in cyberspace to dominate the battlefield if you're in a war. Then you get into the traditional question, which also applies in the physical world: when do you use an attack or an offensive operation in a destructive sense when you're not actually in a state of war? You do it to prevent something that's imminent. Can you do it preemptively? And I think, again, that's not unique to cyberspace. I think that's a question of how you weigh the risk and the danger you're facing versus how imminent it has to be before you feel you have permission under legal doctrine to go in and eliminate the threat. And I have to say, as you look at what you see emanating out of North Korea, again, happily in the position of having no official role, I have some sympathy for the view that any capability that we had to delay or derail putting nuclear weapons in the hands of an unstable regime, to put it mildly, you'd have to seriously consider the appropriateness of that. Jane, you spent years in the Army before you headed into this current career. So you've sort of seen this kind of problem from many places. How well does cyber translate to this issue? And how do we navigate talking about it publicly? That's one area where I don't think the Obama years did well: the piece of math that Mike just did, they never did in public, or rarely did in public, even when they talked about defense at a much more detailed level. You know, I'm probably gonna say something that's unpopular. 
I mean, during the Cold War, the concepts that dominated US foreign policy and defense decision-making were abstract concepts: deterrence, mutually assured destruction. I mean, civilians sat at the table and had these conversations. Civilians invented these concepts and could talk equally, and very often from a position of sort of intellectual higher ground, it seems to me, about these issues. But the concepts that have dominated US foreign policy and defense decision-making since the end of the Cold War are operational concepts. Can you move a brigade in 96 hours? Do you know what a brigade is? And we have now opened up this chasm in the dialogue. And we have people using military terms and talking in military metaphors, and it's troubling to me. Because the phrase boots on the ground is one I particularly don't like. I mean, the United States Army has not put boots on the ground since the end of the Vietnam War. It deploys military solutions that have personnel that have been trained, that have a command structure, a support structure, and an entire concept. And when civilians, and I am one, say boots on the ground, that's sort of what they mean. They mean bodies, and the army never means bodies. Never means bodies. There's this total concept of a military solution. That's why they argue so strenuously: what problem are you trying to solve? And what do you need to see to know that we've won? I think it's a good thing that our military has evolved in that way. We are not gonna manage cybersecurity like an extended battle. Yes, it's difficult, and it's in cyberspace, and yes, there's a lot of dangerous and difficult, and I think increasingly so, things that are going on, and Google sees it every day in addition to everyone else seeing things every day. But we're not gonna manage cybersecurity like an intelligence program. I mean, if Snowden and Manning haven't taught us anything else, they should have taught us that secrecy does not scale. 
But what we know is security must scale. So how do we achieve it? We achieve it, I think, and this is a bias of mine, recognizing that the issues you're talking about and we're talking about here, about war and cyberspace and nation-states, sound a little quaint, quite frankly, to me. I think those are issues that really may be very important, but they don't feel nearly as urgent as the threats that we're seeing every single day: from spear-phishing, from a lost or stolen device and the access that that gives you, from an insider threat or someone who really is trying to do malicious activity targeted against you, or frankly taking advantage of you, as in ransomware. I mean, this will undercut public confidence, which is already pretty thin, in our public sector institutions. I mean, I think we're witnessing a near total collapse of trust in public sector institutions, and what's really deeply worrying is not so much that we don't trust the banks or business or the media or whatever; it's that we're not sure how to architect trust in public space. And there is a role for technology in that architecture. There is a role for norms. But I think we need to have that conversation as a society. So for me, what's gonna be the game changer in cybersecurity? You've heard this from me before, David, and Mike and Chris: basic cyber hygiene. And we're not doing it. And the government ought to be telling us the things that we could be doing right now to protect our enterprises and systems from 80 to 90% of the stuff that we are likely gonna fall prey to, because we know what it is. And we know how to fix it. And we're seeing that in certain places. You know, the critical security controls that CIS, a not-for-profit of which I am a director, promulgates. So here are the basic things you should be doing. The California AG said, if you're doing business in California and you're not doing the CIS critical controls, we will determine you're not providing basic security. 
Major banking enterprises use them; a law association just came out last week with an endorsement. The Europeans have endorsed this. Whether it's these critical security controls or something else, you know, Verizon uses the controls. We ought to be talking about the basic hygiene we can use to prevent this. And we're not doing it. So let me pick up on that with you, Chris, because you mentioned the G7 and how they're trying to get this together. And then meanwhile, you look at the end of the tunnel and you see this approaching headlight, and it's the internet of things coming, with far more devices, many of which don't meet the basic hygiene standards that Jane has just laid out here. You know, with no password on them, or the password is password. And you saw that in the Mirai botnet attack last year, where a whole bunch of security cameras and other internet of things devices were used basically to attack a firm in New Hampshire. And while there may not have been that much damage done, it was a pretty scary incident, because you realized how quickly you could turn internet of things devices to one purpose. So do you feel like we're getting ahead of that problem internationally, or do you feel like it's gonna overwhelm everything else we're trying to do? So I think we recognize it as a problem and we're trying to deal with it. I don't think we're ahead of the problem. Look, this is just the latest iteration of, for instance, the cloud was a big challenge, and the cloud is not a cloud, as everyone knows; things are actually grounded in different places, and some of the solutions apply when you're doing that. The internet of things, I think, is a new challenge, and I agree with you. These are commodities, and the difficulty is that when they are commodities, added security adds to the price, and that price differential makes them less of a commodity and makes them less desirable, and that's a problem. 
And I think there has been a lot of attention, especially after that attack. There were certainly things that happened before that: a number of connected devices, including refrigerators, that attacked financial institutions, which is a whole new spin on freezing your assets, by the way. That's my one internet of things joke; it's the only one I have. Those were research and development exercises. But what I've seen is there have been a number of consortia set up in the US; both DHS and NIST, the National Institute of Standards and Technology, have issued principles for this. What I worry about is a lot of solutions that may be incompatible. And I agree with Jane that when people talk about cyber war, I don't even like that term; it's very overplayed. I don't think there's ever been really a cyber war. It'll be integrated into traditional conflicts, and I think we'll see that, and that makes sense. We have to think of this as one of the tools in the arsenal. But that focuses all the attention on that part of the problem, and we need to look at all the problems, including the resiliency problem. For the internet of things, I worry about incompatible standards. The EU is thinking about how they regulate this. There's a consortium in Singapore that's trying to pull together people to think about this. In the US, we're concerned about this. And for me, one of the lessons I've learned being in the security community for so long, and I saw this at the NSC too, is there are very different communities around this. There's the economic community, there's the security community, there's the human rights community, and they often never talk to each other. Particularly when we're talking about these new developments, you wanna promote the innovation that the internet of things brings. At the same time, you have to recognize that when you're talking about self-driving automobiles, when you're talking about health devices, this could be a huge problem.
And it could, as Jane says, undermine confidence in the entire ecosystem. So we have to do this in a smart way, and the way to do that is to bring those communities together instead of having them in separate, disconnected silos. And it is an international issue. We can't do this piecemeal, because then I think we'll have conflicting regulations, we'll have conflicting approaches, and you're gonna stymie the innovation and you're not gonna serve security. So you have to bring it together. So Jane raised, and Michael, I'd like you to address this, one interesting way to do this. You've got standards set, I think you said in California, right? You can't put the software or the devices out there if you don't meet that standard. You have the rise of cyber insurance, where insurance companies are gonna put standards out. There's some role for government regulation here in the internet of things. And if we can require people over time to wear seat belts, you might require that there's got to be a basic level of cybersecurity in their cars so that somebody isn't taking command of them from afar. So is the fundamental solution here going to be government, private, or some bizarre mix that we're just stumbling our way toward? Well, I agree with Jane that a lot of the actual implementation is gonna be in private hands, and even in consumer hands, because to the extent consumers insist upon a level of security, either to protect their confidentiality, or to protect the operation of their control systems, or to avoid injuring third parties and incurring liability for doing that, that's gonna create demand. The challenge is sometimes there's a lag in that, and the liability system is an important part of what drives that behavior. So when you talk about self-driving cars, for example, it's not just that question, or just the internet of things in general.
Until the hack that was just discussed, people tended to look at the internet of things problem as: look, if you have a refrigerator that's unprotected and it's connected to your network at home, that's gonna be bad for you. If you don't care enough to insist upon a refrigerator that has the capability to protect itself, then you're gonna pay the price. But the most recent hack indicated that the refrigerator can be the attack vector against a third party. So now, all of a sudden, your refrigerator could injure a third party, and now who pays the price for this? Traditionally what we've done in this country, even before we get to regulation, is turn to our tort system, and we've basically said if you injure a third party, you're gonna have to compensate them. And where I think you'll see government involvement in the first instance, whether it's the courts or the legislatures, is gonna be: okay, who is ultimately gonna be saddled with the responsibility for that liability? Is it gonna be the consumer or is it gonna be the manufacturer? And my hunch is it's gonna be the manufacturer. And what you will see over time, either through the legislative process or through the common law process, is the beginning of a movement to create economic incentives for manufacturers to upgrade the level of security of their devices. So I should just say I'm certainly not preaching for regulation here. I think that is fraught with difficulties. However, for critical infrastructure, what we've done, again using the National Institute of Standards and Technology, is take a voluntary approach with industry on best practices, what industry should do. And it's not one size fits all; it depends on the critical infrastructure you're dealing with. The problem with the liability regime, and I'm a recovering lawyer like Mike, is that you need a standard of care.
You need an accepted standard of care before you have negligence, before you have liability, before you have insurance and all of these other things. I think that develops over time. I remember people talking about suits and liability 20 years ago, and it hasn't happened. But now industry is saying, look, there's a benefit to this. There's also an economic driver in this, because consumers might actually value better security when they think that something could actually hurt them. That will drive all these regimes, including insurance, which will help do it in a way the government itself can't. I think that's a fair characterization, though, of the useless conversation that's been going on in this town on the subject for about five years. In the sense that, and Chris was part of it, and it may have extended earlier, I just know it vividly from my time in the administration, the manufacturers would say, well, if consumers want security, they'll demand it, and the marketplace will solve this problem. Let's just leave it to the market. And then at the other extreme, you had people say, no, it's so dangerous out there that only the United States government, in fact only the United States military, in fact only one agency of the United States military, can handle this problem: give it to us. Really? I mean, that was the state of the conversation. Not very sophisticated, not very informed. We've got a lot of analogies. They're not perfect. You mentioned seat belts. When were seat belts introduced into the American automobile? Anybody know? 1936. When were they mandated by law? 1982, in New York State alone. So that was 50 years, more or less. When did Ralph Nader write Unsafe at Any Speed? 1965. So we're slow learners about some of this stuff. Insurance companies say, well, we really can't write policies because we don't have enough data. Who says in 2017 that you don't have enough data?
And when was the first automobile insurance policy written? In the 1890s. Flooded with data, right? Wrong. What they did was treat the driver like an 18-year-old with a red Corvette. And that's how we should treat everybody online right now, because everyone online right now is an 18-year-old in a red Corvette, posing a menace not only to yourself but to everyone with whom you are connected. We can do better than this. What role should our government play? Well, while we all sit here thinking about our red Corvette, which I'm certainly missing and would like, we're going to open this up to questions. We have people walking around with microphones. We only have about 20 minutes for your questions, so make sure the questions are brief, to the point, and actually have a question mark at the end of them. And if you want to address it to any individual, let us know. So who is our first victim here? Here in front. Hi, thanks. I'm not sure I want to be the first victim, but Rick Weber at Inside Cybersecurity. So we're talking about liability. What already exists out there in terms of government structure and policy? The SAFETY Act, Protected Critical Infrastructure Information. I mean, can any of the panelists talk about the existing protections that are out there and how their use could be expanded? Jane, is that you, given your background? Or Mike. The only thing I'd say is the SAFETY Act was really designed to allow innovation in private industry to try and help solve the terrorism problem. It wasn't really designed for this, I think. And I'm not really current on the state of play. There have been some efforts to extend, quote unquote, SAFETY Act protections to cyberspace. I don't have an update on that, though I think it has applied in a few cases. What the SAFETY Act is designed to do is, if you create a security technology, give you a certain measure of protection if, notwithstanding that technology, something bad happens.
So that addresses a slice of the problem. It induces people to be willing, perhaps, to invest in security capabilities that they would then put in the marketplace. What we haven't seen yet is something that was actually proposed in a bipartisan way several years ago, which I was involved with, which would have created similar protections for companies that built their own cybersecurity to certain standards, so that even if there was a failure notwithstanding the effort that had been made, there'd be a measure of protection against limitless liability. Basically what it was designed to do was give companies an incentive to invest, because it would, A, limit their liability if something bad happened notwithstanding the investment, and B, induce the insurance companies to be able to write insurance, because they would know what the downside risk was in terms of having to pay damages. That effort, as I say, which was bipartisan, unfortunately foundered when Snowden arrived on the scene; the words "internet" and "government" in the same sentence were then viewed as anathema, and so Congress dropped it. But that would be worth reviving. Okay, let's see. Who else do we have out here? Yeah. Is the mic coming to you? Thank you. Shanthi Kalathil, National Endowment for Democracy. My question pertains to the evolution of the internet of things, and in particular the role that China might play in advancing the internet of things. As you know, it's placed quite a great priority on enhancing its capacity, and this stated priority is laid out in its latest five-year plan. And I'm curious, and perhaps this is specifically for Chris: do you know of any efforts that are now underway to develop standards that would perhaps respect privacy and certain rights around the internet of things, and how would one bring China into such a regime? So I think there's a lot of international attention on this, and different international bodies want in on this game.
As I said earlier, my concern is that you have all these incompatible regimes that actually don't advance security in the way that's important. Yeah, China, I think, is a player and wants to be a player in this and other standards bodies. That's fine as long as there are international standards that make sense, but how you do the standard setting, whether you do the standard setting, and when you do the standard setting are all very important. Since you brought up China, one thing I am concerned about, beyond the other kind of threat actor we've talked about, is that China has a new cybersecurity law, and that cybersecurity law in a lot of ways is a law that deals with indigenous innovation, trying to make sure that everything built for Chinese consumers is built in China. That's a problem, I think, and that doesn't advance security either. So part of the problem is also looking at countries' cybersecurity policies, where often they're not cybersecurity policies but really trade policies. Now, who's next? There was another hand I saw before, and I've now lost it. Back there, yeah, ma'am. Hi, my name is Jimin Kim from the Millennium Project. Unlike cyber warfare, information warfare mainly plants information trusted by a target without the target's awareness, so that the target will make decisions against their interests but in the interest of the one conducting the information warfare. As a result, it is not clear when information warfare begins or ends, or how strong or destructive it is. Inability to trust media is a potential residual effect of information warfare. So how do we prevent this, and if we're already engaged with Russia, North Korea, or others, how do we end it? So I think in part you have to disaggregate, as Mike said, you have to disaggregate this issue. Information warfare as you described it, or influence operations, have happened for centuries, and cyber has been either an accelerant or an enabler that allows this to happen.
The cybersecurity aspect of it really is the original taking of the information; how it's used later on is not necessarily a cyber issue. And so when you're dealing with these issues, you can't just have the cybersecurity experts at the table, because they're not gonna have the whole solution. They may have part of it, the resilience part, but you need to bring in the people who traditionally deal with how you counter influence operations, how you do counter-messaging. We've done this with ISIL, for instance. How do you figure that out, and not get to the point, as Mike and Jane also said, where the government is the arbiter of what's true and what's not true? And companies have a big role to play here, and I think this is something a lot of the companies are stepping up to: making sure not that information is accurate in terms of some gold standard, but that you don't have bots and other things driving a lot of the debate. But it's a difficult issue, and it involves more than just the cyber folks. I agree. The one thing I would add is I've come to think the phrase "information warfare" or "weaponizing information" is actually a dangerous phrase, because once you start to view content as a weapon, you've now opened the door to banning it or modifying it. There are very limited types of content that I think really can be banned. Child pornography, pretty much everybody believes; literal incitement, where someone gets online and says go put a bomb in a place, that's generally not considered to be protected speech. But other than those very narrow categories, I think you have to be very careful about beginning to treat ideas as if they're in the same category as bombs. The answer for dealing with ideas you disagree with, or what we used to call propaganda, is counter-messaging and things of that sort. Mike, can I? I'd say one thing more on that.
The countries who have been most adamant, for many years, about trying to do cyber treaties, their main goal often is that control of information, and that should tell you something. Let me just press you, Mike, on one part of your answer. Prior to the Russia hack, when you listened to US intelligence officials give their annual assessment of threats to national security to Congress, the one they brought up the most in the past two years was data manipulation. They didn't mean ideas; they meant changing digits so that the car goes off the road, the missile goes off course, the financial statement suddenly changes. And that's in a different category from the kind of protected speech we're discussing. It's also one of the most impossible things to police. But that's basically what you've described; in many ways it's the essence of what all malware is. In some ways what you're doing is creating an instruction to a computer that's changing the way it behaves, and in the case of a car, for example, it's actually affecting the control system. So I would not consider that to be information war or an information operation. That is attacking the integrity of data. And again, a lot of the key here is to separate out related but different concepts. To me, information is content; it's communicative activity that a person listens to, understands, and discusses. Data is a more general description, and it includes things that simply operate the lights or the refrigerator or the car. And I think in that category, manipulation that affects a control system is certainly not protected and would actually begin to look a lot like a weapon. You know, one of the things that struck me about your question is: how do we know it's begun? Both my husband and I were in the army for a long time, and I've done a lot of work on conflict, conflict resolution, conflict prevention, war termination. One of our daughters said to us, Mama, how does war start?
And I was like, well, vital national interests are threatened and they're exacerbated by economic tensions. And she said, no, I mean, does somebody call up the other person and say, we're at war now? And I said, you know, it sort of does not work that way. And so to your question about when information warfare starts: I don't like to use the word war either, but people use it. And so, you know, we could say we're in it already to a certain extent. What we do about it, who should do what, and what the assignment of responsibilities is, those are unanswered questions. On this question of data integrity, I guess I would just differ slightly from Mike in saying that it happens both in process and in content. If you change my blood type, that could kill me. And you can change my blood type without my knowing it. That's corrupting the integrity of the data that is transmitting my blood type. And you can corrupt the integrity of data not only on the content side, but on the timing and delivery and direction and all manner of things. When I was in Homeland Security, people used to say, oh, it's really complex, it's a really big agency. It may be complex, but it's not complicated. We just had to answer two questions: Are you who you say you are? And do we have to worry about you? If you're getting on an airplane, if you're crossing the border, if you're attending a presidential event. So there are real, I think, existential questions in play right now. We have to sort out, and this conversation has been one effort to do it: What problem are we trying to solve? And how will we distribute the responsibility and our expectations for delivering that solution in consistent ways? I'd say I'm as worried or more worried about the integrity of data than I am about things like denial of service attacks, which are temporary and low level.
I think Jane used the example, also a favorite example of our friend the former president of Estonia, who uses it all the time. But if you change financial data, or you change the financial system's clearing houses so that trades cannot be resolved, that has huge systemic effects. And I do think there are ways to go after that. It goes to resiliency; it goes to making sure you have backups. It's not a content issue in the sense of political or religious speech. The data is verifiable. The data is there. But I think that is a huge vulnerability that we need to guard against. It is, and a hard one to write rules against. I mean, Mike said before, if we can go after the North Korean nuclear program by some cyber means and avoid letting a maniac get nuclear weapons, that's a pretty justifiable goal. But if your way of doing that is some form of data manipulation, so that missiles go to a strange place or nuclear tests don't go off, you've set a precedent that is gonna be hard later on when you deal with others who might be operating more in the blood type arena that Jane's described. I'm not sure I agree with that, because in the physical world, too, we recognize that the fact that you can use kinetic means to physically disable an adversary doesn't mean that everything is open. You can't bomb hospitals and schools. So I think likewise in cyber, you could say that attacking a military target like a missile launching capability and altering the data so that it fizzles is a very different thing from affecting the blood type of an innocent person, or even of a soldier, which would be like bombing a hospital. But I think what all of this demonstrates is that often the discussion in this area lacks precision. We tend to conflate a lot of different things, and we may think we know what we mean, but we're not communicating effectively about it.
And so to me, the baseline requirement is clarity and precision about what exactly is the thing we're worried about, and then how do we think about a precise way of addressing it? And will different cultures see that distinction in the same way you do? Okay, we have time for just one or two more. The mic is coming to you. Bill Schoker, the OECD. I'm just wondering, as a sort of jumping off point from what you've said, sorry, is this working? Yeah. The kind of not yelling fire in a crowded theater: is there any sort of legal thinking about that kind of baseline rule that would apply, do you think, at all in cyber? I mean, there's a lot of thinking about how traditional law applies. This goes back to people thinking cyber is this unique thing that has no grounding in the physical world, which is a flaw. The rules, the laws that apply in the physical world, also apply in cyberspace, including at the very high level, things like the UN Charter, et cetera; the question is how do they apply? There are unique aspects of cyberspace. So there's a lot of thought around this. There are a lot of people saying, okay, how do you apply these laws that have brought us to where we are relatively safely since World War II? How do you use them now in the cyber realm? There are distinctions; when you talk about distinction in cyberspace, meaning that you don't go after civilian targets, what does that mean when this all runs on civilian infrastructure, and how does that work? So there's a lot of thinking about that. We participate in a lot of that thinking. I think there's a lot more to go on that. But having this understanding of what the rules in the world are really makes cyberspace more stable over time. We have time for just one more, and we'll go right over here. Hi, Frank Konkel, reporter for Government Executive. I was curious: we talked a little bit about the prior administration's efforts in cyber following the OPM breach.
What do we think of the Trump administration's positions on some of these issues? I know we've had a few; we had a Buy American, Hire American executive order yesterday, I believe it came out. And that might relate to something Chris said about Chinese policy and how they build up their IT internally. I'm curious what your positions are on that going forward. I think this administration will do one of the things that every administration has done, and I think that's really to its credit: we may not have progressed at the speed everyone would have liked, but people have been building on the work that's gone before, and there has been genuine accumulation. I think there's been a more positive relationship with the private sector, which really needed to be rebuilt in the wake of the Snowden leaks. And that's important. What are the priorities for this administration going forward on cybersecurity? It seems to me one would be to narrate a posture for American business and the American public, to understand the role that cyber is playing in our lives, what each of us can do about it, and how we'll distribute responsibilities. The government has a role to play, but not the only role. That's the point I was making about hygiene. There will be a conversation during this administration about privacy. It's not a word that we've brought up yet, but it will be brought up. The American view of privacy is really quite distinctive: it's the ability to limit government's intrusion into our lives. That, for example, is not the European view of privacy, or the prevailing view of privacy, which is more about ownership of data. What responsibilities go with a sensibility about privacy and the need for increased security? I mean, there's nothing you can do online and be confident that your identity or your information is not subject to exploitation.
And so this government, this administration, will have to narrate a purpose and objective in this dimension and be clearer about the role that the American people can expect of government. It will not be the case that government can do all that needs doing here. And I think that conversation is gonna be the main feature of the cyber agenda in the next four years. I would just add that this is no longer a fleeting issue. This is an issue that's gonna stay on the front burner for any government. And it did start with each administration building on the last: we had the Comprehensive National Cybersecurity Initiative at the end of the Bush administration, which was built on by the Obama administration, and I think it will be built on by the Trump administration. We've already seen there's an executive order that is coming at some point. We've seen some speeches by Tom Bossert, who is the new assistant to the president for homeland security and counterterrorism, and cyber, where he talked about the importance of, for instance, norms, and the importance of working with countries to sanction bad behavior. I think this is something that is top of mind and is gonna continue to be for years to come. Well, Chris, Mike, Jane, thank you very much. This was a great way to open the session. I've been asked to ask all of you to stay in your seats while we do a quick swap of microphones with the next panel, because they want to keep everything on time and move straight on. So thank you very much.