Thank you, and a very warm welcome to everyone on behalf of the Berkman Klein Center. I'm Rob Faris, the research director there, and I'm really thrilled to act as the very brief host for this occasion: the launch of Bruce Schneier's new book, Click Here to Kill Everybody. I hope you're going to tell us that that isn't true, that we can't yet do that. I think you all know Bruce. Bruce is a longtime member of the Berkman Klein family. He's a cryptographer and an expert in security. He is perhaps one of a handful of the clearest and most incisive thinkers on cybersecurity out there. And more than that, he's really good at thinking about systems and institutions, and at understanding how technology intersects with political and social structures. He's a very prolific author; I'm going to have to get a new bookshelf with a dedicated Bruce Schneier section, which is a very good problem to have. He's going to be in conversation today with Abby Everett Jaques. Abby is a postdoc at MIT, a philosopher working on the ethics of AI. And without further ado, I'm going to turn the floor over to Bruce and Abby. Welcome. Let me start by saying there are chairs. Is there a chair there? Is there a chair there? Who has a chair next to them? Raise your hand. Okay, so there are a bunch of chairs; please come in and sit down. I also forgot some housekeeping. Number one, you are all under surveillance. Know that. This is being filmed and recorded. Number two, for folks out there who want to lob a question in over Twitter, the hashtag is #BKCHarvard. And number three, after this talk, books will be for sale, probably over in that general vicinity, where Ruben is. Anything else I'm forgetting? Okay. All yours.

Okay. So I wanted to start by laying some groundwork. Bruce's really terrific book — everyone, can everyone hear me? Yeah, great — Bruce's excellent book calls our attention to a crucial inflection point. We've gotten used to dealing with computer security, and insecurity. We know that sometimes we have data loss, sometimes we have data breaches. So we try to protect against identity theft, we keep backups, things like this. But Bruce points out that we're entering a new era, and this new era is characterized by the fact that now everything is becoming a computer. All kinds of things that used to have computers in them, or maybe didn't have computers in them at all, now are computers with various other systems attached to them. Our phones, obviously, but also cars and power plants and airplanes and all kinds of other systems. And these are things that operate in the physical world, not just the digital one. That ends up making a really big difference. A hacked car can lose control of its brakes on the highway. A hacked power plant or a hacked water treatment plant can cause blackouts or public health emergencies. A hacked bioprinter could release a deadly virus into a hospital. And not only that: if these kinds of hacks were carried out at scale — if someone could suddenly hack all the cars, or all the airplanes, or all the power plants — we would face genuinely catastrophic new kinds of risk. So Bruce calls our attention to this moment, and he very carefully lays out what it would take to protect ourselves from this, what a better, more secure, networked world would look like.
And the kinds of forces that are going to make that difficult to achieve, and then how we might navigate the imperfect situation in which we find ourselves. So he's really showing that there are technological solutions for the majority of these problems, but that the real challenges come on the policy side: the political will, the incentive structures, actually getting the changes to happen. So, Bruce, is there anything you want to add about what's going on in the book that I haven't covered?

I think that's the crux of it. I'm trying to write about the changing environment. We all know this: the old way of dealing with computers was to stare at a screen, and the new way of dealing with computers is to interact with them in our environment — our refrigerators, our cars, other appliances, and toys, and systems. We're interacting with computers in this room. We're being recorded, but I don't see the computers. That's the new way to interact with them. So there's the pervasiveness of these systems, and then there's the new power these systems have: they directly affect the physical environment. This is automation, this is autonomy, this is physical agency. And that changes things. All of our old assumptions are a bit off. We had negotiated a kind of détente with security: we do authentication this way, we do patching this way, we know software's not that great, but we're managing. And that's all worked reasonably well. But when you move into this new environment, maybe those arrangements don't work so well. That's what I'm exploring. I think it's a really important change, and it means that those of us who come from the security world have a lot to teach the other parts of the ecosystem that don't have that history.

Yeah. So, because we're here at Berkman Klein and we have people here who think a lot about policy, I wanted to ask you about the levers available in the policy domain. You talk quite a bit about how important it is to get this right, that we're going to need good government doing good work. You really think this is a place where market-based solutions or voluntary self-regulation just aren't going to be enough, and that seems very plausible in the context of the kinds of solutions you're proposing. But of course you're also very clear-eyed about the challenges that presents in our moment. I think I may be even more pessimistic than you about our moment. You mention that once these systems start killing people, governments often decide that's the moment to regulate; I worry that that moment will be an excuse to make things much worse. But if we're worried about the federal level, do you think there are other avenues? You mention that states are often doing better on these things — Massachusetts, our home, and California sometimes. Is there enough leverage at the state level, given that software is this write-once, run-anywhere thing — the way the GDPR seems to be having effects because the EU's regulations trickle outward? Could we tackle this by going around the federal institutions and using state levers, or are we just going to have to make federal solutions work? I'm curious where you think we can locate these things.

I think the answer is going to be all of the above.
Right now we have a pretty dysfunctional federal government, and it's not the place to look for answers. But software has this interesting advantage: it's write once, sell everywhere. The car I buy in the United States is not the same car I'd buy in Mexico. Environmental laws are different; the manufacturers tune the engines to the local laws. But the software I buy here is the same software I'd buy anywhere, because it's much easier for a software manufacturer to write it once and sell it everywhere. So if a law is passed in California — and California has a bill about to be signed making a very small change in IoT security: you can no longer ship default passwords; of the 50 horrible things in IoT security, this is one of them — it's not going to make a big difference, but it'll make a little difference. If a company wants to sell a product in California, they'll remove the default password, and they're not going to maintain a separate build of the software for us here in Massachusetts. It makes no sense. So we will all benefit. Similarly, the EU is about to pass a law regulating the security of internet-connected toys. There were some internet-connected dolls with pretty bad security that allowed for super creepy spying on children. That's going to get fixed, and we will benefit from it. I'm sure the existing old toys will be dumped on our market because they can't be sold in Europe, but once that stock is gone, the ones we buy will be the better ones. And that's something we really don't have in the privacy realm. With GDPR, the big European data protection regulation, Facebook very much wants to figure out who is under that jurisdiction and who isn't, because if they can, they can differentiate their offerings: they can spy on the non-Europeans more and make more profit. That desire to circumvent doesn't exist in safety. In safety, all you have is the desire not to spend the money on the fix. But once you're forced to spend the money, you fix it everywhere. The refrigerator gets improved everywhere. The thermostat gets improved everywhere. Even the car, in a way you don't get in privacy. So I think it gives us an advantage in this particular area that we don't have when someone is trying to steal our data for profit.

So I want to ask about something that's an interest we share: AI. Your focus is really on bad actors — hacking, or corporations under the influence of surveillance capitalism exploiting their users. But when people think about AI, they often think about the kind of thing my own work focuses on, which is the unintended problems: whoops, the autonomous vehicle won't do what we expected; the helper robot is out of control; there's bias built in that we didn't anticipate. So I'm curious what you think the specific risks are around AI, and whether you see a real difference between the domain of bad actors and the domain of unintended consequences.

There's a debate that has always existed in the security field: is security a subset of safety, or is safety a subset of security? What you're thinking about are mistakes, things that happen randomly. What I'm thinking about is an intelligent, malicious adversary. And that difference matters for defense.
If you're doing, I don't know, environmental safety, and you need to secure buildings against hurricanes, there are things you can do — there's lots you can read about making buildings hurricane-proof. But you know, deep down, that the hurricanes will never change what they do based on your security. Hurricanes don't get smarter. They don't adapt. That's very different from doing ATM security, where your adversary immediately adapts and figures out how to circumvent what you've done. So they're very related. And actually, when you get to things like crashing cars, even when it's by malice, it's more the realm of safety: it's going to be the safety regulators who look at bad actors taking over cars, not the security people. In some ways the two are very similar. For some security events, after the event happens, the safety and security responses are the same. After the terrorist attacks of 9/11, one of the stories I would tell is that in 1945 a plane crashed into the Empire State Building. It was an accident; it wasn't on purpose. But everything that happened in the moments after the crash was exactly the same: we need firefighters, we need to rescue people, we need people who understand how buildings stay up. Everything was identical. If you're an emergency responder, you don't care whether it was a deliberately set bomb or someone accidentally punctured a fuel line and it exploded. It's the same response. So there's a lot of overlap between safety and security. The difference is in the adaptation. The bad actors adapt; programmers don't make different and new mistakes just because you've protected against the old ones — they keep making mistakes in some Gaussian bell curve of mistake space. I don't know, I'm making this up.

Yeah, I mean, the overlap point seems right to me. I'm not super clear that there's a bright line. You mention at one point that, of the catastrophic risks that scare you, the one that really worries you is a criminal attack that gets out of hand. And we've seen some of these before, where there's a bad actor, but what they thought they were doing ends up being not what they actually did — the Mirai botnet and things like that. So it looks like these are going to be cases with a combination of these effects, and it seems right that we shouldn't think there's a big separation between the bad-actor cases and the inadvertent cases, between the security cases and the safety cases.

And they share the same issues around transparency, and around systems that adapt to the point where they can't be understood. Who do you hold accountable when an algorithm does something you would normally hold a human accountable for? Which human do you pick? What do you do if there isn't an obvious human? What if the human says: yeah, I wrote that algorithm, but it's way different now, I don't know what it's doing, I'm not in charge, don't blame me that it's gone weird? Now what? Can you hold an algorithm to account? What does it mean to incarcerate an algorithm? Can you? Maybe we can program it so it doesn't like that? I don't know. You have to solve this stuff. Do robots have feelings?
I mean, you talk about whether we can pretend they do, in some useful way, even if they don't. And you've talked about how the courts have been reluctant to hold programmers responsible for vulnerabilities when they were exploited — it's tended to be: oh, it's the hacker who did the bad thing, not the programmer who put the vulnerability in there. And you might worry that there would be analogous problems with assigning responsibility in this domain.

It's worth looking at the history of software liability in the United States. We deliberately decided not to hold programmers responsible, and the belief was that doing so would be a huge drag on innovation. Instead, we stuck with the proximate cause. So: Windows is lousy, someone finds a vulnerability, a hacker breaks into your machine and steals your money. We could blame a lot of people on that chain, but we blame the hacker. We deliberately choose not to blame Microsoft for selling you a shoddy piece of goods and pretending it's not. We don't even blame the person who discovered the vulnerability and published it. We blame the proximate cause. I think that holds less well now, and this again gets to the change in where computers are. It will hold less well in a car. We have a lot of case law saying that when a car has a flaw in it and someone crashes, we assign liability all the way back to the manufacturer of that car, of that part — and some, of course, to the driver, and the road conditions, and maybe whoever designed the bad intersection. We consider a lot of other causes on that chain, in a way we don't in software. And I see this changing, because we already have all these rules on cars and medical devices and consumer goods and appliances, and those rules aren't going away. As software moves into those worlds, those rules are going to work their way back up the chain.

Let's hope so, right? Let's hope so. So, one thing — sort of a small point, but I found it striking — late in the book you mention that you think we need to demilitarize the internet. We need to change the models and metaphors we use for thinking about cybersecurity. In particular, you say this militarized talk isn't necessarily the most productive, and that maybe what would make more sense is talking about pollution, or public health — different models that naturally suggest a different way of relating to the problems we find in our networked world, and of solving them. And it struck me because another theme in the book is how tempting and common it is to focus on offense rather than defense. That feels to me like a particular kind of militarized, macho-posturing mindset that's cued by this way of thinking, and if we switched over to something like a public health model, we might get a really different intuitive sense of how to approach these problems. It would also help us bring certain kinds of problems under the same umbrella. I think about the way that Cambridge Analytica-style problems look like unauthorized, illegitimate experimentation on human subjects. So you might think it's worth thinking about the models we use to structure our thought about these issues.

And this is hard.
I mean, the military attack-defense model is pervasive in security. I use it all the time. It is how we talk about these issues, and I think it really does limit the way we think about them: in terms of attack and defense, this very adversarial framing. A public health model just gives us different tools to think about things. When I talk about cyber peace, I'm really channeling Camille François, whom many of us remember, who very much talked about this other way of thinking: that when you talk about cyber war, even if you don't like it, you're buying into the frame that there is this natural hostility. And there kind of is, right? I see it. I know it's there. But you can use a public health model, which still has attackers and defenders, bad actors and good actors, without thinking of it in those militaristic terms. This is kind of idealistic. I think what we can do right now is expand the frame of the discussion. I don't think we can go to U.S. Cyber Command and say, surprise, you're now like the NHS, and have them say, great. But it's a step, and I think it's important. These other ways of thinking are going to give us windows into answers we don't have right now. Again, no time soon. But these problems aren't going away.

So let's talk about power for a second. It looks like, with regard to these problems, most of the power resides with governments and corporations, and you bring out very well how those are precisely the locus of many of the problems. And yet there's this hope that government power can be the way to mitigate and manage these risks. So I'm wondering: how can we leverage government power in the right ways when it is itself so much a part of the problem? Is this just a remember-to-vote moment? Is this a we-need-to-cultivate-sites-of-power-outside-governments-and-corporations moment? Is this a man-the-barricades moment? How do you think we can disrupt the power structure enough to actually get these changes to happen?

This is hard — otherwise we would have done it already. I tend to think our best answers lie in multiple power sources watching each other. We know that unbridled government power is bad. We know that unbridled corporate power is bad. But if we can get the two of them watching each other, they can keep each other in check. And it can't be just the two of them. There's a strong role for civil society, for NGOs, to monitor both. There's a strong role for journalism to monitor both. In any robust political environment there are multiple political parties monitoring each other, and multiple sources of corporate power monitoring each other. The problems we have today we can, very broadly, blame on large monopolistic powers that don't have the checks you'd get in a more dynamic and fluid market, and — at least in the United States — on a very narrow spectrum of acceptable political thought: we have a far-right party and a middle-right party, which really limits the amount of monitoring they do on each other. For NGOs and the press, these are hard issues. Plugged-in people will have noticed that Julia Angwin has a new media venture. This is exciting. She's left ProPublica, with Jeff Larson, and is going to do data-driven investigative journalism. This is fantastic.
And this will serve as a check against some of these problems with algorithms and autonomy and automated decision-making that we wrestle with. That is a phenomenal thing. We need more of those. We need dozens of those. Every time we get an email from Ron Deibert telling us about another great piece of investigative computer forensics he's done — exposing government abuses of power, surveillance and control in various countries — we need dozens of those. There isn't a good answer here. We want to push power down and push autonomy up. But, you know, hand-waving: we need more distributed power, and I think that's how we get it. This is where, again, I turn to you political philosophers. How do we do that? How do we make government work in the 21st century? What does a representative democracy look like in this century? You could really make the argument that the current constitutional democracy is the best form of government mid-18th-century technology could produce. When travel and communications are hard, we need to pick one of us to go all the way over there and make laws on our behalf. That made a lot of sense. But now travel and communications are easy, so maybe that makes less sense. What will replace it?

All right, we're going to open up for questions in a minute, but I have one more question for you. You talk about the need for the Internet+, as you call it, to be resilient. And that connects to something I've been thinking about, which is that we're going to need to spend more time figuring out how to make our systems fail gracefully. Part of the problem with the Google Photos debacle wasn't just that black faces weren't recognized at the same rate, or as well, as white faces. It was also the particular way the failure happened: a photo of a black person was classified as a photo of a gorilla. That's a whole different thing from differences in recognition rates. And I think that especially in this new era you've called our attention to, where these systems are reaching out into the real world in various ways — there's no undo button once we've got a problem in a 757 — failing gracefully is going to be an important feature. Is that part of what you have in mind when you talk about resilience? Maybe you could say a little more about what you're thinking there.

It is, very much. And we know how to make systems fail gracefully. There are sort of two ways. There's the airplane way, where there are multiple ways to do something: if the landing gear fails to deploy at the press of the deploy button, there are two or three backup systems, including going down under the bottom of the plane and hand-cranking the gear down. That's one way systems can fail gracefully: multiple backup systems (there's a sketch of that pattern below). The other main way is this building, which doesn't really have multiple ways to hold the roof up; it's just been overengineered. If we think the load is going to be X, we design the struts to take a 2X load. Both of those are ways to fail safely.
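To make the multiple-backup-systems pattern concrete, here is a minimal sketch in Python. The mechanism names and the deploy interface are hypothetical illustrations, not anything from the book; the point is simply that each redundant path is tried in turn, and a failure in one path never takes the whole system down.

```python
# Graceful failure via redundancy: try each mechanism in order, and treat a
# crashed mechanism the same as one that reports failure. The names and
# callables here are hypothetical stand-ins for real actuators.

def deploy_landing_gear(mechanisms):
    """Return the name of the first mechanism that succeeds."""
    for name, deploy in mechanisms:
        try:
            if deploy():              # each callable returns True on success
                return name
        except Exception:
            pass                      # a failed backup must not crash the rest
    # Last resort, per the example above: the crew hand-cranks the gear down.
    return "manual hand-crank"

result = deploy_landing_gear([
    ("primary hydraulics", lambda: False),    # pretend the primary failed...
    ("backup electric motor", lambda: True),  # ...and the first backup works
])
print(result)  # -> "backup electric motor"
```

The design choice worth noticing: the system degrades one step at a time, and even when every automated path fails it lands in a defined safe procedure rather than an undefined state.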
Your car, as much as possible, fails safely: if you take your hands off the steering wheel, it doesn't radically lurch left and right. You could imagine a steering design that did, where you'd have to hold it steady. But no, it naturally stays steady, because that's a better way for it to fail. I think we need to start doing that with our systems, because fail-catastrophically isn't going to work. That's really what I'm talking about with the cover, with the title. It's a little bit science fiction — but not really — this notion that you could have a system where, in one click, you ruin it for everybody. That is how our computers work. I don't know if people followed this: there's a lock company called Onity. They make locks for hotel rooms — those key-card locks. A vulnerability was discovered in their product, I think earlier this year, or maybe last year, because I think it's in the book. What it means is that every single hotel room secured by this lock is now insecure. Surprise. Everything. That's a particular mode of failure for computers, and they do not fail gracefully. The way you fix this one is you walk up to each lock in the world, one at a time, and fix it. That is not failing gracefully. That is failing maximally catastrophically, and I think it's the wrong way of doing things. So we do need a better way, because we're not going to design absolute security. Nothing is absolutely secure; this stuff is too complex. But maybe we can contain the insecurity. After the blackout in 2003, when we lost power in the northeast quadrant of the United States and the southeast quadrant of Canada, the power grid was redesigned so we wouldn't have those kinds of catastrophic failures. The failure was one particular power line in mid-Ohio, and it started a cascade of failures that became a huge blackout. We've tried to limit those. Airlines have finally realized they can rejigger how planes are scheduled, so that a weather failure in one city doesn't affect the entire country — it just affects the planes going in and out of that city. These are ways to decouple, to decentralize, to disengage, in order to fail more safely and securely.

And there's a lot here. You talked about some of the tech problems, and there are a lot of tech solutions that aren't being deployed for all these policy reasons, and a lot of tech solutions we don't have yet. What I tell people is that this stuff is hard, but it's send-a-person-to-the-moon hard; it's not faster-than-light-travel hard. We can do this if we have the economic incentives. But we're missing all the incentives for companies to do better. The one-year anniversary of Equifax was a couple of weeks ago. Big deal: everybody's personal information in the country was stolen. Big press event. Legislators were annoyed. I testified in front of one of the House committees, and there were angry legislators on both sides of the aisle saying this cannot stand, something must be done. Fast-forward one year: nothing was done. Zero. The lesson you learn is: skimp on security, hope for the best, and if the worst happens, weather the press storm, get beaten up verbally by Congress, and then nothing happens. Facebook — I bet the same thing's going to happen. I don't think anything is going to be different. And that's unfortunate.

On that note, I think it's time to open it up for questions. I think there are mics. There's a question right there — wait, let's get the mic to you.

Thanks, that was incredibly interesting. I'm Uli Köppen. I'm a Nieman Fellow.
I'm a journalist, and I'm interested in algorithmic accountability reporting. I had my own little party when I learned yesterday about the $20 million grant to The Markup, the venture you mentioned. I'd be interested in whether you could expand a little on the role journalism has played in this field up until now, whether there's been a positive example that changed the field, and whether you have a wish list of issues journalists should focus on during the next year.

Let me talk a bit about the history. There have been great wins. Julia Angwin has done some great reporting on racism in the algorithms that make bail and parole decisions. Kashmir Hill has also done reporting on algorithms and discrimination and bias and opacity. Frank Pasquale has written a book called The Black Box Society, about algorithms and the lack of transparency, and how that's bad for society. Right now, I think journalists are the only people holding algorithms and algorithm designers to account. Maybe in Europe there's some government accountability, but I don't think it's very much. So journalists are what we've got here. And non-journalists too: Latanya Sweeney, here at Harvard, has done some great work on algorithms. She's the one who proved that some of Google's ads were racist — in horrifying ways, in ways where you just look and say: don't you pay attention?

Yeah, I would add that the examples Bruce mentioned are doing such important work, and this really is one of the only areas where this stuff is being called to public attention; it's vital. As for what journalists should be doing — the other part of your question — I'd say precisely focusing on the kinds of issues those pieces are about. There's a temptation in other parts of the press to focus on what Bruce likes to call movie-plot scenarios, the really wild, extreme disaster scenarios. And I think that's not helpful for people's understanding of what the technological challenges actually are, or where the real, plausible harms lie. So focus on the less sexy scenarios, the ones you don't feel Michael Bay would turn into a film: bail and parole, lending, even things like hotel room keys, just so people get a sense of how ordinary objects work.

Universities — I mean, Harvard doesn't, but many universities use automatic scoring mechanisms. We know Palantir has been hired by the U.S. government to use big-data analytics to find illegal aliens. That sounds horrifying. But can we understand: what's your false positive rate? What's your false negative rate? How good are you? What sorts of controls, what sorts of legal protections, what sort of appeal is there? Any of those things. Algorithms are going to make more and more decisions, and they're going to be hidden. You're going to be denied a government service. You're going to be denied admission to some kind of corporate event. You're going to see a certain ad when you go onto Facebook, and not see another ad. Algorithms make all those decisions, and they make them using some definition of fairness. David Weinberger isn't here, but he sent around a great little essay recently on five definitions of fairness. Go read it. This stuff is robustly hard.
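As a concrete illustration of why "fair" is underspecified, here is a toy Python sketch using two standard notions from the algorithmic-fairness literature — my choice of examples, not necessarily the essay's five, and the numbers are invented. The same decision rule selects both groups at the same rate, yet wrongly flags people at very different rates across groups, so satisfying one definition can mean violating the other.

```python
# Toy bail-style decisions (1 = detain) and ground truth (1 = reoffended).
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

def false_positive_rate(decisions, truth):
    fp = sum(1 for d, t in zip(decisions, truth) if d and not t)
    negatives = sum(1 for t in truth if not t)
    return fp / negatives

group_a = dict(decisions=[1, 1, 0, 0], truth=[1, 0, 0, 0])
group_b = dict(decisions=[1, 1, 0, 0], truth=[1, 1, 0, 0])

# Demographic parity holds: both groups are detained at the same rate.
print(selection_rate(group_a["decisions"]),
      selection_rate(group_b["decisions"]))                      # 0.5 0.5
# But error-rate parity fails: group A's false-positive rate is 1/3, B's is 0.
print(false_positive_rate(group_a["decisions"], group_a["truth"]))  # 0.333...
print(false_positive_rate(group_b["decisions"], group_b["truth"]))  # 0.0
```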
But a little bit of transparency goes a long way. — Am I doing this right? Can I slip a question in while the mic is traveling? — You're in charge. — Exactly. So, Bruce, you wrote several years ago about the feudal Internet. I was wondering if your thinking on that has changed.

It has not. And by feudal I mean feudal with a D, not futile with a T. What I mean is an Internet where you have a protector. We know this: some of us are Apple people. We have iPhones and Apple computers, our data is in iCloud, they keep our calendar and our email and our photos, and they are our protector. Others of us use Google in the same way, or Microsoft. It's almost like we are serfs to these feudal lords: they offer us protection in exchange for all of our data. And, kind of annoyingly, it's not half bad a deal, because doing it yourself is hard. You don't want to be a ronin; you want to have a protector. I don't think that's changing. As we move to the Internet of Things, we're seeing these big ecosystems, and the fight now is over who's the controller. The Amazon Echo is all about being the central hub for all the things in your home. Right now, if you have an IoT anything, you control it on this — your phone is the controlling hub, which means it gets to set the rules. The coffee maker doesn't; it's just a coffee maker. But if it wants its app in the iPhone store — and if it doesn't have one, nobody's going to buy it — it has to follow Apple's rules. So Amazon wants the Echo to be that hub. Google is using Android and whatever its voice thingy is. Everyone wants to be that chokepoint, and that's all about control. That's going to be another locus of feudalism. And you'll say: I don't know anything about these things, but they've got an iPhone app, so I trust that Apple has done some vetting. This is good as long as our feudal lords are benevolent. It goes bad if they turn evil. The history of corporations doesn't bode well. But yes, that's what this is all about.

Hi, my name's Parker Abel. I'm a secure and assured systems engineer at Draper Labs. MITRE recently released its Common Weakness Enumeration list of most common weaknesses, and a number of the top 25 are hardware-based. Yet all our discussions here have been about policy issues and software issues — and the more software you add, the more insecure a system gets. That's why DARPA started a challenge for inherently secure processors; it's actually been won, and they've been created. So the defense industry is really concerned about hardware-based security, and we often see the military being ahead of the curve on advancement. What is it going to take for industry to start thinking: okay, hardware-based security is crucial and we need to focus on it? Because I work on this every day, and what I see is everyone plugging fingers into holes in the dam when we need to rebuild the dam.

So, I talk about that, and I think the reason no one talks about it is that it's insurmountably hard. This is supply-chain security. You saw it in the public debate recently over whether we in the United States should buy Kaspersky antivirus — should we trust a Russian antivirus program — and in the debate over whether we should buy ZTE phones and Huawei network equipment.
Should we buy Chinese-made devices that plug into our network? That's an important question. And of course it's not just the U.S. In 2014, China banned Kaspersky — and they also banned Symantec, because U.S.-based can't be trusted. India has banned Chinese-made hardware. In 1997, there were debates in the United States about whether to trust Check Point, an Israeli company, with our security. And remember Mujahideen Secrets, the encryption program written by al-Qaeda — because of course you can't trust Western encryption. But that's really just a discussion of which country the company making the product is located in. This is not a U.S.-made product — that's long gone. It's made in one of several Asian countries; the chips are made in several others; the programmers are probably carrying a couple hundred different passports. And any one of the steps in that chain can subvert the security of the device. There was a great paper last year: you can break an iPhone's security with a malicious replacement screen. Surprise. So yes, these hardware problems. The reason they're hard is that the industry is deeply international. No one will ever buy a U.S.-only iPhone; it would cost ten times the price, and nobody wants it. The reason nobody's thinking about this is that nobody wants to, because it's hard. Even the U.S. military just kind of pretends: it buys chips from China because there's no choice. There's another paper, from about four years ago. You design your chip and you make a mask — which is basically what you give the chip maker to say, make me a couple million of these — and they can take your mask and slip in another layer that you don't know about. You get back something you can test from today until tomorrow; it's exactly what you asked for, nothing more, as far as you can tell. But it has been subverted, and you don't know how. So that's doable. If I were a country, I would be doing that to other countries. Wouldn't you? So yes, these are big problems. What will it take to get people to think about it? I think it's going to take a disaster. And even then, it's really hard to get people to deal with something that is expensive and that nobody wants to deal with; you have to be forced to, and this one is super expensive. We really have built a deeply international tech industry: we get our expertise in programming from all over the world, our expertise in hardware and fab from all over the world. We go where labor is cheapest for some parts, where labor is smartest for other parts, and onto the net for the parts that can be distributed. And nobody wants to undo that. That is a terrible answer, and I'm sorry. But I'm glad you're thinking about it. I'm glad somebody is.

I want to pause — you mentioned the title and the cover. So: the title is mine. I'm so proud of it. And I'm happy with the cover. I'll give you two reasons I like it. One, there's a button that says OK — only one button, and it says OK — and it's clearly not OK. And two, it looks like this thing has been throwing error messages for the past hour and no one's been reading them. So the cover has curb appeal, which is what we want. I have actually seen it in airports, which is kind of cool. And this is my theory of book writing — we might have another book-writing seminar in the spring; I'm thinking about it. There is a chain of readers. The title gets you to read the subtitle.
The subtitle gets you to read the flap copy, also known as the Amazon summary. And that gets you to read the book. So it is very much a chain: at any step I can lose my reader. It all has to work, or I don't get a reader. This is my first-ever clickbait title, and I kind of back off from it on, like, page three of the book — all right, I got you here, but let's be reasonable, guys. But it really is about that flow. No one is going to pick the book up unless they kind of know me already, and that's not the reader I'm trying to hook; I'm trying to hook a reader who will go through those steps. So I tend to like a provocative title, a descriptive subtitle, a slightly sensationalist but not inaccurate flap, and then a kick-ass good book. That's my recommendation if you are writing a book. — Mission accomplished.

You've been quite patient. Is oligopoly our friend or our foe when we're trying to deal with this kind of problem? It sounds like there's one manufacturer of hotel keys, and when they fail, every hotel in the world fails; if there were 20 of them, the fire would be more contained. On the other hand, imagine if there were 20 dominant operating systems instead of three or four for your phones and your computers — it seems like that would make things harder to fix. So how do you see this?

Well, this is the trade-off between having a few and investing in them, versus having many and getting some kind of safety in numbers. You see this in reproductive strategy — there are two basic systems of reproductive security. One is what we primates tend to use: have very few offspring and invest a lot of resources in bringing them to adulthood. Then there's the lobster method: have a couple million offspring, ignore them completely, and play the numbers game. Both work. Now, we're likely going to have some hybrid, because there are costs to multiplicity that aren't security costs. Multiple OSes are annoying for a lot of reasons, including interoperability. And in some places we want everyone to use TCP/IP, we want everyone to use PDF files, because we want the ability to transfer things, to use the same photo format and video format; otherwise nothing works together. So in some places there are natural monopolies of interoperable formats. Some monopolies accrete because they just get more valuable as they grow. Facebook: no one's on Facebook because they like Facebook. Nobody. We're all on Facebook because if we're not, we don't get invited to parties, or whatever — because the people we need to communicate with are on Facebook. Everyone remembers the moment they had to join Facebook, when you said: well, I guess I have to join. There was a thing you couldn't do otherwise. And I am probably the only person still not on Facebook, and that's okay. But it has a social cost. There are things you don't know about in your friends' lives; there are social events you don't hear about. It is a social cost. I notice it. I feel it. I am ornery enough to pay it. But that makes me three-sigma, and not a useful example.
On the other hand, there is a lot of benefit to having multiple sources — of social media platforms, of lock manufacturers, of operating systems, phones, apps. There is more security in that diversity. So it's going to be some combination, and different industries have different sweet spots for where you draw that line. I don't know where it will be. It probably won't be one line; it will be different for different things.

Hi, Bruce. Thanks for this fascinating and slightly scary talk. I want to do a bit of a dive into one of your examples: the hotel key problem. Pretty much everybody gasped when you said you have to go to every lock to fix it. But I had a completely different reaction, which was: well, every hotel employs people who go to every lock pretty much every day, to perform something you could consider a public health function — cleaning the room. Now, I'm sure the hotel lock industry isn't geared to have maids fixing locks. But why not?

I think it's a good question, and I think the answer is that the company never envisioned it. You could easily imagine: if we were designing a better hotel lock, we would say, hey, we're going to need to do maintenance, we're going to need software updates, so let's make it so you can just plug a USB stick in. Or maybe that's a bad idea — I can't pretend to know. But, like, gen one: plug a USB stick in. Then an unskilled person could do it, and you could integrate it into the normal life cycle of a hotel room, instead of thinking: well, we designed it perfectly, nothing bad can ever happen, we don't have to think about that. When we think about failing safely and failing securely, one question is going to be: what is the update mechanism? And that's actually a really good idea. So maybe I don't make it a USB key. Maybe we mail each hotel a specialized device, and they plug it in, push a button, maybe type a code, and it pushes the software update (there's a sketch of what that check might look like below). Yes. But that is going to require some better engineering, and the lock company isn't that sophisticated. This is one of the problems with IoT devices. The reason this phone is secure is that there's a team of engineers at Apple — and teams at Microsoft and Google for their devices — designing it to be as secure as possible in the first place. And when a vulnerability appears, they write a patch, they push it to my device, and the thing improves. That lock was designed offshore by a third party, by an engineering team that came together to design it and then dispersed. There's no group of engineers waiting to patch it. And maybe that lock isn't even patchable. Your router at home: the way you patch it is you throw it away and buy a new one. That's the mechanism. There's no patching mechanism, let alone a team that could write the patch — which there also isn't. Now, throw-it-away-and-buy-a-new-one is a valid security upgrade mechanism. This phone is also secure because every three to five years we all get new devices. These things have a pretty fast churn, and the new iPhone, the new Windows, is better designed and more secure than the previous one. But when you get to consumer goods, you do not have that. You buy a DVR, you replace it in 10 years. You buy a refrigerator, you want it to last 25 years. I bought a new thermostat at home last year. I expect to replace it approximately never.
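Stepping back to the update mechanism sketched a moment ago: a minimal Python sketch of what the lock-side check might look like, where the lock applies new firmware only if the image verifies against the manufacturer's public key. The Ed25519 signature scheme, the key handling, and the flash routine are my illustrative assumptions — nothing here is how Onity's locks actually work. It uses the third-party cryptography package.

```python
# Hypothetical lock-side check for a field update delivered by a technician's
# dongle: verify the manufacturer's signature before flashing anything.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Placeholder key bytes: a real lock would have the manufacturer's actual
# public key baked into read-only memory at the factory.
MANUFACTURER_KEY = Ed25519PublicKey.from_public_bytes(b"\x00" * 32)

def flash_firmware(image: bytes) -> None:
    ...  # stand-in for the lock's low-level write routine

def apply_update(image: bytes, signature: bytes) -> bool:
    """Flash the new firmware only if the signature checks out."""
    try:
        MANUFACTURER_KEY.verify(signature, image)
    except InvalidSignature:
        return False              # reject unsigned or tampered images
    flash_firmware(image)
    return True
```

The point of the design is that the maid's dongle doesn't need to be trusted at all: even a malicious update device can't install firmware the manufacturer didn't sign.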
We don't know how to deal with that kind of life cycle. Or think of a car. You buy a car today; its software is, let's say, two years old. You drive it for 10 years and sell it. Someone else buys it, drives it for 10 years, sells it. It probably gets put on a boat and sent to Latin America, where someone else buys it and drives it another 20 years. Okay: go home, find a computer from 1976, boot it up, try to run it, try to make it secure. We actually have no idea how to secure 40-year-old systems at the consumer level. We don't have the faintest clue. So how do we make this work? Option one is to replace cars on the same life cycle as phones. That won't work — that would literally cook the planet. That is not going to be the answer. Is Ford going to maintain a test bed of 300 makes and model-years and test every patch? Anyone who's an engineer will cringe at that notion; we don't know how to do that either. We're going to have to figure this out — at the level of cars, the expensive things, and at the level of the cheap things. You have a DVR; it could have been part of the Mirai botnet, through a really dumb vulnerability. One, you have no way of knowing. Two, you kind of don't care. And three, the only way that vulnerability gets removed is when you turn the thing off and throw it away. We're stuck. This is hard.

We're approximately 40 days away from the midterm election. Could you take everything you've said about unique systems and disparate responses, apply it, and maybe give a prognosis on our voting systems?

I've written a lot about voting and election systems. A good place for information is verifiedvoting.org; that's where I send people who want to learn which machines are used in which jurisdictions and what the vulnerabilities are. I think there's a lot to be worried about. There are three areas of concern: the computerized systems that determine who's eligible to vote and where — the registration systems; the actual voting machines; and the computerized systems that tabulate all those machines into a final result. All three are vulnerable in different ways, and we know that at least the first two were targeted at some level by the Russian government in the 2016 U.S. election. It's hard to know what will happen. Certainly there are lots of vulnerabilities. I worry just as much about appearances as about reality, and this is important: elections serve two purposes. The first is to choose the winner. That's the obvious one. The second is to convince the loser. To the extent that an election doesn't convince the loser, it has failed as a democratic mechanism. If the loser says that election wasn't fair, I didn't actually lose — we've lost everything. So elections need to be secure in appearance in addition to reality. I don't know whether anything will happen. My guess is not, just because it's dangerous and propaganda is so much easier. But we don't know. Lots of things have happened that we don't think were enemy action, that we think were mistakes — and there have been some weird mistakes. Machines have been opened up with zero votes in them. Machines have been opened up where some candidate got a negative number of votes. But those all seem to have been errors, not actual malicious action. So: I don't know. But it's hard. I want to hear what Abby's thinking about this — you've been sitting there listening.

About voting?
Well — one of the questions I get from listening to Bruce speak is where the locus of control and responsibility lies, and the same question applies to where the public interest, where ethics, resides in this system. Are you happy with the answers you're hearing? What are we going to do about this? What's your perspective?

Well, I'm thinking... quick, call a philosopher. In thinking about voting, I take Bruce's point really seriously about the communicative role of the voting process. And just this week there's a piece in the New Yorker about Russia turning the election through Facebook activity in a few targeted places — it looks like there's actually pretty good evidence that the election may have turned on social media manipulation. It didn't need to be the voting machines, just as you were saying. And as a philosopher, what you worry about is how we can manage all of these questions about what to do about our elections precisely when the system is distributed and vulnerable in so many ways. It becomes the kind of thing where we can't just say: oh well, we'll secure our elections. It's about how we're going to communicate about this. Is it more worrying, from the point of view of keeping our system going, to really publicize these vulnerabilities? If we think we're already in a vulnerable moment for democracy, is it better on the margins to try not to say too much? There are really puzzling questions about what to do in this moment, when things feel like they're teetering on a brink.

The U.S. has two particular problems that other countries don't have. One, we don't have a bureaucracy to ensure the integrity of elections in the way other countries do. For security, we've relied on mutually distrustful parties watching each other: you put a Democrat and a Republican at the table, and each watches what the other does. That was great against mid-20th-century threats — a reasonable security solution — but it works less well today. Our second problem is that we don't have one election; we have something like 52 separate elections, all very different, under different rules, with different machines, different systems, different authorities. We can't, as a country, secure our elections, because we don't, as a country, have an election. We kind of pretend we do. Those two things make this harder for the U.S. than for the U.K. or Australia or France or Japan or any other country that tries to run free and fair elections.

And philosophers talk about the difference between ideal theory, which asks what things should be like, and non-ideal theory, which asks what we should do from where we are. This feels like a moment when we are deep in the weeds of non-ideal theory, and there are striking puzzles about that. I'll end there.

Hi, thank you very much for your talk. You were speaking about not blaming developers for unintended flaws. But what about the growing industry of zero-day vulnerabilities? That's another actor that's now playing a huge role.

So, there is a market. There is basically a cyber-war military-industrial complex that has sprung up, and it has several tiers. It has major defense contractors selling cyber weapons to countries like the United States.
It has these mid-tier cyber-arms manufacturers selling weapons to countries we'd probably all agree shouldn't have them — Kazakhstan and Sudan and Uganda and Mexico, Syria, all the countries you hear about in the Citizen Lab reports. And there are people who sell cyber weapons to criminals. And there are markets in vulnerabilities and attack tools, in exploits. One way to judge how secure your system is, is to look at the going price for a vulnerability in it. Which means if you've got an iPhone, you're doing pretty well; if you've got an Android phone, you're doing less well. I think a good iPhone exploit is now worth half a million dollars. And that market perturbs the world, because if you're a software engineer, you can make a legitimate sale — this is not selling to criminals; this is selling to actual companies that have offices and mailboxes and pay their taxes. You can sell an iPhone exploit to a cyber-arms manufacturer. It'll be used in ways you probably don't like, but maybe you don't have to look that carefully — if you can get past the ethics. Or you can send it to Apple, and they'll give you a bounty of a few thousand dollars. You'll probably feel better for the world, but what's that worth? This is hard. And it shows, again, how where we are makes solutions harder. I argue in the book that we need to adopt a defense-dominant strategy: to say that defense has to win, because these systems are too important. What's his name — it'll come to me in a second — Dan Geer. He argues that the U.S. should buy all vulnerabilities: pay top dollar, buy everything, and immediately give it all to the defense. That would be a smart use of our dollars. Buying vulnerabilities and using them for offense is actually a dumb use of our dollars, and letting other people buy them is also dumb. We should corner the market — corner the market and destroy it. That's radical, but it's an interesting way of thinking about it, and it's much more of a public health way of thinking. If we can eradicate malaria in Africa, that improves things for us here; that's not just foreign aid, that's planetary health. If we can subsidize China to produce cleaner energy, that's not foreign aid, that's helping us here. Come on, people, we're all in this together. That's a different way of thinking.

Jim Gettys has been arguing that the best approach to dealing with some of these security issues is to require everything to be open — firmware, hardware specs, and software. Essentially the bazaar approach to trying to ensure system security, as opposed to the cathedral approach of trusting Apple or Google or the Chinese government to protect everybody. What do you think of that trade-off?

You know, I don't think it's that important. Things that are theoretically open are often practically not open. I think there's some value in openness, but there's also value in proprietariness. I'm not convinced it would make an appreciable change in security. It might change other things.
It might be good for society in ways broader than security. But on security: Microsoft actually has a really secure OS right now. They did a good job. I think they're more secure than Linux — if you know what you're doing, it can be done. And that's not because they're closed; it just shows that closed isn't necessarily worse. So I don't have an ideological dog in that fight, although you can certainly argue for openness from a lot of other social goods. Certainly, if there's going to be an algorithm that determines whether I get released on bail, I think it's important for society that the algorithm be open. If an algorithm decided I was drunk — a breathalyzer algorithm — I should be able to examine that source code and contest it in court. That just seems like a no-brainer. That's less security and more public process. Because what we learn, again and again, is that when these algorithms are subjected to scrutiny, they're lousy. They're embarrassingly bad. They occasionally work at random. And we have this bias to trust computers. We might not in this room, but outside this room, "the computer is always right" is what people think. It's a computer; of course it's correct. It does calculations; it doesn't make mistakes. We can laugh, but ours is not the prevailing opinion. And we know how to deal with this. We can put the algorithm in escrow. We can deputize a commission that signs whatever agreements and analyzes it, and we would all accept that. We do this elsewhere: I'm going to a reception tonight, and I'm not going to vet the food, but I know there's an organization that did. There were health codes and inspectors, and it all happened. We can set up a system where somebody we all trust looks at Google's search algorithm and makes sure it is not racist or sexist or otherwise biased, or classist, or subservient to Russian trolls — whatever things we agree we don't like. We don't all have to look at it. We can solve that. And it's not the only time we've had to publicly vet proprietary things. I really think that's a bullshit argument.

I might take the prerogative to ask the last question. Before we disappear out into the world: what should we do? You've been thinking about this. Who are we going to lobby? What are we going to write? What are we going to study? What do we invest in?

I'll tell you something I'm working on that I would like help with — so maybe you can help me. I'm trying to think about how we educate people in different pieces of the process, so they can play a role in making these things better. I'm working on a curriculum for engineers to help them identify and address the ethical issues created by their work. And there's a program I'm involved with at the Media Lab that's about democratizing AI through K-12 education, so that all kinds of kids can grow up fluent with these tools. We need to think long-term about things like this: how we make it the case that all of us are more equipped to engage with these issues in the places where we find them — in our lives, in our work, and in our politics. That, I think, is the crucial thing. And also: vote.

I guess my answer is similar. I ended my book with this call for getting policymakers and tech people to understand each other.
Because not just my issue, not just cybersecurity: pretty much all of the hard policy problems of this century are deeply technological — AI, the future of work, climate change, food policy. And to the extent that we have policymakers and technologists talking past each other — go watch the Facebook hearings if you want to see what bad looks like — we're going to get terrible policy, and we'll get terrible tech. So I love the idea of teaching programmers and engineers ethics, and I want to teach policymakers what software is like. We need to have this discussion across what C.P. Snow called the two cultures. This is not a new problem, but it's become, I think, much more urgent. The going-dark debate — whether the FBI should be able to break into iPhones — is all tech and policy talking completely past each other. And this is what places like Berkman should be doing: getting tech and policy together. We need technologists on congressional staffs, at federal agencies, at NGOs, in the press. I'm trying to teach computer security at the Harvard Kennedy School — going the other way, trying to get policy people to understand tech. Everybody's got to meet in the middle. Any long-term solution is going to include that.

That's great. This has been wonderful. So buy a book — Bruce, are you willing to sign? I'm willing to sign. Excellent. Please join me in thanking Bruce and Abby. Thanks. Thank you. Thank you.