Well, James, I see an odometer of participants joining us that is just skyrocketing. We've just crested 80. This is great. You know, I feel like this event is really going to be like the Woodstock for the modern era. I think people who do not show up will tell others that they were there. It's going to be generation-defining. That's at least my hope for the meeting. Hopefully, as a moderator, you can lift us all up to reach that goal.

No, I enjoy setting low expectations and then barely meeting them, and you're not helping with that. It is strange that Zoom appears to be adding people one at a time. It's kind of a turnstile. Does that represent...? Yeah, it is very strange. The counter seems to be slowing.

So I'm going to welcome everyone to what is already the beginnings of a great conversation between Jonathan Zittrain and James Mickens. We will be talking today about cybersecurity: how far up the creek are we? Which just seems like a great question for this moment right now. Before we begin: we will not have a chat function going on today, but if you would like to pose a question for the question-and-answer phase of this conversation, please use the Q&A function in Zoom. I'd like to welcome James and Jonathan to the conversation today. James is an associate professor of computer science at Harvard, and we are excited that he is also the newest member of the board at the Berkman Klein Center, so welcome, James. Jonathan Zittrain is the George Bemis Professor of International Law at Harvard, as well as many other roles that he plays at Harvard. He is the co-founder of the Berkman Klein Center and also a faculty director. I welcome you both.

Thank you so much, Liz. Thanks for having us today, and thanks, everybody else, for joining. We know you have a choice in Zooms at any given instant, and we appreciate your choosing this one. James, welcome. Welcome. Welcome.
So glad to have you both on the Berkman Klein Center board and here today to gauge just how far up the creek we are on cybersecurity. A question: if you had been asked that ten years ago, in, you know, approximately 2010, what would your answer have been then?

Well, you know, in the current era time seems to flow so much differently than it used to; it sounds like you're talking about, you know, the medieval era. But I would say that, look, I mean, there's never been a time at which cybersecurity has been great. There's never been a time at which we could sort of look at the landscape and say, yep, we're dotting all the i's, crossing all the t's, everyone gets to go home early, spend time with your family. I think that the challenge that's getting sort of worse over time, though, is that computers are becoming more ubiquitous, right? So whereas before, you know, back in the olden days, even back in 2010, you didn't have the pervasiveness of things like IoT. You didn't have the pervasiveness of things like machine learning algorithms being used to think about who should get credit, who should get mortgages, who should go to jail or not go to jail, whose applications to a job should be passed on to the next stage. And so, you know, because of this increasing ubiquity of technology, I think that it's actually incumbent upon us to sort of scope this issue of cybersecurity more broadly than what it used to be. You know, what cybersecurity sort of used to be was, loosely speaking: can someone hack into my system, you know, can my data be stolen, and almost in this sort of very binary yes-or-no way. But I think that now, as the technology becomes more pervasive, you have to start thinking about these bigger questions of not just, you know, can someone hack into my account, but if they can just access my system in some seemingly benign way, can they game it in some way? Can they influence it in some way to create, you know, societal outcomes that are not as easy to quantify as yes, I was hacked,
no, I wasn't, but which still might have huge societal impacts.

So back in the good old days, we'd worry about data exfiltration: having something on a platter on your machine, and then somebody gets, quote, into the machine and makes off with the data and then does something with it or shares it further. We'd have so-called privilege escalation, where some piece of software that's just supposed to show you a dancing hamster instead is able to get into the machine and muck with all sorts of other stuff. And you're pointing out that the machine today might not just be some laptop precariously perched on a shelf but could be a refrigerator, or a Fitbit, or some kind of, I don't know, SCADA system that controls whether a dam opens and closes. But it sounds like you're even going farther than that, that your definition of cybersecurity itself is broad. And would that then apply (I'm just coming up with examples here) to, like, hacking the college admissions process? Is that a cybersecurity issue that's different from the old... was it Matthew Broderick in WarGames who just logged in and changed his grades?

Well, I think that, you know, it sort of gets to this larger issue of algorithms being pervasive, and, you know, computational systems being pervasive, and what does that mean when potentially untrusted or, you know, chicanerous participants can submit things to those algorithms, can submit things to those systems. You know, I think the problems are getting worse in part because these systems that we've created to take in this data and to compute on it and then give us some answer, increasingly we don't really understand how those things work. And this has always been a problem, right? I mean, like I said, this mythical time you talked about, the 2010s. You know, even back then, when we look at, you know, say, operating systems, for example: you look at Linux, you look at Windows, you look at macOS.
There is no single person that understands every single line of code in those systems. You know, we've been dealing with sort of this problem of: are our constructions transcending our ability to understand them? That's been happening for a very long time. But I think that what's happened, as technology has become more ubiquitous, is that certain segments of the population have not been as concerned about this as they should be, and they sort of look towards computational systems as these sort of magical, opaque answer boxes. They say, oh, well, you know, how are we going to determine how to, you know, admit students into a job or into college? Oh, why don't we use a computer to do it? You know, because that seems like that's what computers are good at doing: they take in input and then they output answers. But in reality, there's all this sort of underlying complexity in terms of, you know, are these systems secure sort of in that old-school cybersecurity sense? And also, are these systems secure in sort of, like, the new-school cybersecurity sense? That is, are they gameable, or can you influence them in ways that were, you know, not envisioned by the original creators of the algorithm?

So the old-school way of defending was some combination of trying to be extremely alert oneself, like: this is a link and it looks like my utility company, but don't touch it or it's all over; and having some, you know, good code to defeat the bad code. I'm running McAfee, although McAfee seems a bit bonkers, the founder, so I'm running, I don't know, some reliable Russian thing like Kaspersky, or Icelandic, or whatever it is. What's the equivalent today of doing that? What virus definitions, by metaphor, am I updating? How do we defend against the new generation of threat you're talking about, whether in theory or by example?

Well, you know, we've reached a very awkward point in the conversation. You know, I wish that I could tell you that, like, look, my friend JZ:
Just go to the app store, download this app. It's called Security. It's great. It's got 4.7 stars; people can't agree on everything, you know. But, like, sadly, such a thing does not exist. And, you know, I think that one reason why security, sort of broadly writ, is increasingly more difficult to achieve is that it's not easily definable in the sense of: I just take this checklist, I do these things, and if I do these things, therefore my system must be secure. You know, I think instead that trying to achieve high security is somewhat of a design attitude, where at every level in your system design, you're sort of thinking about: what are the possible things that could go wrong? What are the ways this system can be influenced? And, you know, what are sort of the circuit breakers that you might have in place in case something unforeseen happens? And, you know, that sounds like a very vague answer, in a certain sense, because it is, you know. There isn't a magic way to do stuff. But, you know, what I frequently find, for example, as a computer science professor, is that sometimes people will want to rely purely on quote-unquote testing to ensure notions of security and safety. They'll say, hey, I tested my code with these, you know, 15 different test cases; surely it must be ready to push to production. And the problem is that, you know, typically those tests.
They don't think adversarially. Broadly speaking, they don't think about, well, here are, you know, for example, certain political goals that someone who uses your system might have, and then how might those goals influence how people use the system. So I think that it is really more about changing the way that we talk about the design of our technical projects. You know, in the same way that we say, oh, well, there's no simple way to figure out if our system is going to be used ethically or not, because even ethics itself is very poorly defined, when we think about security we ought to have a similarly sort of broad attitude of saying there are just these sort of fundamental questions which are ambiguous, which have no clean answer. You know, what is security? How do I make my product secure? And so, as a result, we just have to be more imaginative than we are right now in terms of defining how we test our products for security.

So I somewhat see what you did there, which was: interestingly, I asked a question, without even thinking about it one way or the other, that was more about, from a user perspective, what do I go get in the app store to secure my stuff? And what's the equivalent of that for my fridge? Do I need to buy an extra add-on so the ice maker doesn't start spitting fire?
But you were shifting to the supply side, before even letting that fridge off the assembly line, or, more systemically, before cutting the ribbon on a new system writ large for college admissions: you need to have a more imaginative approach to security. And I don't know, then, how much does that mean we should be licensing, or otherwise scrutinizing, or having some regulatory overlay on people producing code? Because, you know, the incentives are such that you race something to market and you can fix the bugs later, kind of thing. You know, if the benefit is going to have to be applied on the supply side, what's going to incent the suppliers to worry about systemic risk that might not even be traced back to them?

Those are all great questions. Sadly, I have no spiritually satisfying answers, but, you know, because I'm a professor, I've learned how to filibuster my way up to the next question. So I think, like, there's one side of me, which is the citizen side of me, which says yes, certainly we need regulation to force these tech companies to quote-unquote do the right thing, because, you know, evidence suggests that the current arrow of late-stage capitalism is not pointing towards these tech companies sort of doing the right thing, in many cases. That being said, when I look at this from sort of the computer science side, sort of the engineering part of my job, I think, my goodness, I get worried about what might happen if the legislation or the regulations that come out are, you know, technically inarticulate, if they're written in a way that doesn't understand the underlying technology. I mean, you and I have spoken a couple of times about the GDPR, which I think is a great example of how things can go well and can go poorly.

So, to just clarify for those listening who might not know: GDPR is European, and it stands for "goddamn privacy rules."

That's right.
That's right. That is basically what you should type into Google if you want to know more about that; turn the safe search off. So, yeah, basically the GDPR is this leviathan set of rules produced by the EU that, among other things, gives users several rights that, at least at face value, seem like they're good. You know, the right, for example, to have your data be enumerable: you can actually go to a service provider and say, what are all the things that you've collected about me? You know, you get this right to be forgotten: you can go to a service provider and say, hey, all the stuff that you have that belongs to me, get rid of it; I don't want to be known by your service anymore. So at a high level, that's great. But almost immediately you start seeing all these corner cases, right, and all these subtleties for which the idea of what the right thing to do is just isn't clear. So, for example, what happens when you upload some data, let's say from a fitness tracker or something like this, and then a service provider runs a machine learning model over that data and derives some insights from it, like, here's maybe a better exercise routine you could do, based on our understanding of your own unique physiological profile? Well, when you exercise your right to be forgotten, what happens to that model?
Is that model yours in some sense, because it was derived from your data? Well, what if it wasn't derived solely from your data: if the company did some type of meta-analysis of data belonging to a bunch of different users, then distilled down a plan specifically for you? So, you know, the GDPR doesn't really speak to a lot of these sort of thorny issues that arise in practice. And so, you know, when you look at how companies try to comply with the GDPR, what I hear, sort of off the record, is that a lot of it is sort of, like, prayer-based, because they don't understand exactly what the GDPR is asking of them. And then, furthermore, from the sort of hard-tech side, there isn't a lot of good tech support, in terms of, like, operating system primitives or things like this, that would help people comply with these laws. So it's a bit of a mess. So I simultaneously say yes, I do think that we need to have more regulations, but I also think that the only way they're going to succeed is if we get more buy-in from the people who are actually making the tech. And of course, that's a double-edged sword, because whenever we start talking about self-regulation, right, then people say, oh, you know, in the same way people say, why should we trust the oil companies to write environmental law, you know, why should we trust these tech companies to, you know, write laws involving, you know, data privacy, for example?

Well, it calls to mind: I feel like there are basically two laws of internet governance that, if we could just abridge them or figure them out, we'd all be set. The first is, we don't know what we want, and the second is, we don't trust anybody to give it to us. And if we just had a better idea of what to do, and trusted anybody (this is what you were just talking about), like any governmental entity, to responsibly implement that vision and align people towards it, we'd be set. In the absence of those two things, what do you foresee as the trajectory here?
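The right-to-be-forgotten gap described a moment ago, that erasing a user's raw data says nothing about statistics or models already derived from it, can be sketched in a few lines. The fitness-tracker numbers and the "model" (a simple population mean) are invented for illustration; this is not any real GDPR-compliance API.

```python
# Toy illustration: deleting a user's raw records does not touch
# aggregates already derived from them.
records = {
    "alice": [7200, 6800, 7500],   # e.g. daily step counts
    "bob":   [3100, 2900, 3300],
}

# A "model" derived from everyone's data: the population mean.
all_steps = [s for steps in records.values() for s in steps]
population_mean = sum(all_steps) / len(all_steps)

# Alice exercises her right to be forgotten...
del records["alice"]

# ...her raw data is gone, but the derived statistic was computed
# before the deletion and still reflects her contribution.
print("alice" in records)   # False
print(population_mean)      # unchanged by the deletion
```

Whether `population_mean` (or a trained model standing in its place) must also be recomputed or discarded is exactly the question the regulation leaves open.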
I mean, fast-forward: are we still going to be on Zoom in ten years, or is Elon Musk going to have, I don't know, put implants in, or something? What is this conversation going to look like ten years from now? Are we going to be like, oh, we were on the right track, and at last we solved it? Is it going to be like, no, we thought it was hard then, but, oh gosh, it's even harder now? What's the direction this is going?

You know, once again, another unsatisfying answer: every direction. We're going every direction. I mean, I think that this issue that you touched upon about, you know, who do we trust: that's an issue that pervades a lot of these questions about cybersecurity. You know, for example, the debate over encryption: when should encryption be used, should backdoors be put in, so on and so forth. Because I think that, like, at a high level, encryption seems like a good thing. Why would I want someone looking at stuff that wasn't intended for them? But then, you know, you look at sort of issues of, you know, who actually uses some of these encrypted messaging apps, who's actually using Tor, what's actually being communicated using these technologies, and then you have this tension between, like, you know, quote-unquote regular citizens wanting to not be surveilled, and then also us not wanting criminality to flourish. So it's tough. It once again gets down to trust. And so I don't really know that those fundamental tensions are going to be resolved, you know, cleanly in the next ten years, because I think that, you know, sort of what's ended up happening is that the rate at which some of these new technologies are being introduced is outpacing the rate at which we can understand the implications of these things.
So, to a certain extent, you know, I think a lot of the current state of cybersecurity is actually driven by, you know, how quickly do new companies get formed, and how willing the stock market, you know, sort of private equity, is to fund these companies. Because, for example, we could imagine a world, which is certainly not our world, but we can imagine a world where, you know, the people with the monocles who fund startups basically say, you know, slow down there, young company. We're actually not going to give you an additional round of funding until you think deeply about, you know, how your technology might be exploited by hackers, how your machine learning models might have bias in them, so on and so forth. We could certainly imagine such a world, because it's at least describable using a human language like English. But of course, we do not live in that world right now. And so one of the big problems, I think, with cybersecurity is that currently, you know, the funding model for a lot of startups is one that does not emphasize things like security; it emphasizes, for example, things like user growth.
And so, you know, if that situation continues, it's going to continue to create these imbalances in terms of what these companies prioritize.

Well, one, I guess I'll call it a theory rather than a hypothesis, but one sensibility I have about the past ten or even fifteen years of consumer-facing technology has been a movement originally from what I call "owned," which is to say: you're running Microsoft Windows on your laptop, and then you're going onto CompuServe if you're online, and if there's a problem online, CompuServe has an 800 number, and you call them and you yell at them. And if you want to regulate them, you go to Columbus, Ohio, and you know where to find them. And then it moves from that owned nature, where some vendor is responsible, to unowned, namely the internet, and now I'm just double-clicking on stuff and downloading it onto that Windows PC, but it's just running, and, you know, Bill Gates doesn't have anything to say about what you're doing online. And that creates this profusion of startups and services that aren't vetted, aren't thinking about security, and you run it all anyway. And my theory had been, circa 2005 to 2008, that that was going to create its own backlash, because people were going to find their experience so insecure, at multiple levels, that they would demand a return to the CompuServes of the world, so there'd be some vendor responsible for being the umbrella over everything. And then fast-forward from 2005 to 2008 to today, and it feels like the world is a lot more owned: when we're online, we're spending our time on just a handful of apps that may or may not even be websites that we visit, and they might not have toll-free numbers, but they have CEOs, and they are, so long as the regulators are willing (a big asterisk), regulable. And I don't know how this plays into your story, but does it mean the following?
Some dodgy startup, the way it really moves today, gets bought pretty early by Facebook, which has, like, an early-warning radar for a startup that could conceivably compete with it in ten years. So they buy it up, at which point, okay, I know whom to call if I've got a problem. So some of the story that, in a competition, an antitrust story, would be one of worry about consolidation... from a security standpoint, is that actually a green shoot? Is it that, well, it's not so chaotic out there as it was ten, fifteen years ago?

Yeah, that's an interesting question. I mean, it's definitely true that, you know, if you empirically look at, you know, sort of, like, the startup landscape now, particularly in tech, yeah, a lot of these sort of young tech companies that could threaten the incumbents, they just get eaten up. They get bought out by these larger companies, and then, you know, sometimes that's the last we hear of them; sometimes that stuff gets merged into the mothership; you know, it depends. As to whether that's good for security, though, it's not entirely clear, because when you start, you know, having these sort of, like, data hegemons, it's not obvious that that centralization incentivizes people (by people I mean these companies) to sort of do the right thing. And I would also say there's this interesting aspect to, you know, sort of the walled-garden-ness of the modern computational experience. Because, on the one hand, you know, particularly in the Apple ecosystem, this is exactly what Apple wants. Exactly what Apple wants is to say, okay, you buy our Apple box, and then what happens?
You only plant little app seeds in that Apple box that we have blessed, and if it hasn't gone through our review, if it hasn't met our, you know, standards of quality, you get kicked out. So in a certain sense, Apple wants you to live that experience there. But, you know, if you look at, you know, say, Android, for example: the Android app store is comparatively super wide open, and if you look, you know, at what apps people are running, you know, yeah, the Facebook app is popular; yeah, maybe the New York Times app is popular. But, my goodness, the long tail on that app store is insane. And, in fact, you know, from the security perspective, if you look at how a lot of people end up getting hacked... I mean, this happened to me yesterday. Not the hacking part, just to be clear; I have to maintain my credentialing; this is how rumors get started. But I was on Duolingo, which is an app to help teach you languages. And because I'm miserly, I don't pay for the ad-free version, so I saw this ad for this game, and this game had clearly been designed in a week. And it was basically, you're trying to redirect the flow of water to make sure that a fish gets water so it can breathe. And so I'm looking at this app, and I'm like, that's malware, 100% of the time, you know. And I go and I look at the app reviews on Android, and half of them are clearly written by bots, you know. It's like, "this game the best it is for sure, 18 stars out of five," you know. And so that's on the app store, and that can be downloaded right now. So despite the fact that, you know, we could, let's say, look at Android and say the security of the Android platform in and of itself might be good, we might say the Google-provided apps might have these high levels of security, you know, when you allow sort of an open app store, that's where you allow a lot of vulnerabilities. And so I think that, you know, in my opinion, it's not clear that we're definitely going down this route whereby, you know, you can't side-load apps; everything has to be blessed
by a central authority. At least in the Google world, and to a certain extent the Microsoft world as well, you can still load things onto your computer that might not be good for you.

Yeah, it just kind of seems like the worst of both worlds: what most people see and are offered, unless they're bothering to get Duolingo Premium, are very mainstream things from the usual suspects, and yet there's still the stray link that can creep in, and you or your kid or whoever can click on it, and then everything is terrible. And in the analog counterpart world, if we're thinking about stuff that affects human health and flourishing, there are some standards for what I can buy at a supermarket, or what's available at a hardware store, and whether the light bulb I screw in is going to blow up when I flip the switch. And it does seem like we've long ago given up, not even started, any form of scrutiny of that sort; that we're just relying kind of on Pinkertons, you know, commercial vendors, to serve that role now. Maybe... even as I say it, I hear the heresy that it represents. It's like, I'm not looking for a government panel to judge every applet or extension on a browser, but at the same time, I'm not jonesing to give up that kind of scrutiny on product labeling or at the supermarket. So I'm just trying to explore my own inconsistency. Is it just rank status-quo-ism?

Well, I think... so here's a thought experiment, right? So you go to the app store, and let's say that you want to buy a flashlight app. You just want an app that's going to turn on the flashlight on your phone. You're a simple man; you like simple things. So you download the flashlight app, and then it says, huh,
here are the permissions that this app is asking for. One of them is the permission to turn on your flashlight. All is well in the kingdom. But another permission it asks for is the permission to look at your contact list. Seems curious, right? Why should my flashlight need to know what my grandmother's phone number is? So, like, to us, right now, in this sort of clearly laid-out discussion, it seems obvious something is fishy there. But, you know, at a higher level, who would prevent, or decide, that a flashlight app having contact-list permissions is wrong, for some definition of wrong? You know, like, there are these kinds of interesting questions of scale when it comes to regulation and certification and things like that. And in part it's because, and this gets back to something we were discussing earlier, you know, how do we concretely, and probably automatically, if you want this stuff to scale in an app-store sense, define what it means for something to be secure, or for something to have too many permissions? And so the reason why I think that the flashlight-app example is pretty funny is because it very clearly identifies, you know, permissions that should not be given to an app, and yet it's not entirely clear how we would sort of adjudicate such a thing. You know, getting back to what you were just saying: are we to have some council of trusted elders who sits around and looks at all these things and says, verily, you know, Zeus has told us that a flashlight app shall only have access to the flashlight? That doesn't seem quite right. But, on the other hand, if we don't allow any type of sort of censoring or regulation or things like that, we get into these very clear problems.

Well, I think, like, I'm really wanting to take that question very seriously, and it makes me start thinking that, all right, as between government and some industry, whether it's the same industry producing the stuff or some industry that springs up to do the monitoring, I can maybe see now why, if
it's about sending inspectors to slaughterhouses around the world or the country, you might need a government for that, because there's a lot of physicality involved. There are a lot of economies of scale for that that you only achieve when you're doing all of them at once, and it's a common public good. So that augurs towards government expertise. Whereas here, if you're talking about an app store, maybe that's not as much the case; the government isn't in particularly better shape to go look at the flashlight app than Apple is, or somebody else, to do it. And here, at least from your example, we do know what we want. We don't want flashlight apps that can look at your contacts. There's no reason, unless it's about some obscure funding model: the only reason that flashlight is free is because it's selling grandma's phone number, at which point now we're just arguing with the Cato Institute. So if we know what we want, then it's just, whom do we trust most to give it to us? And if it's nobody, is there some new institution, or institutional relationships, we could create? I mean, do we trust Wikipedia's vetting of the many contributions offered at any given moment?
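The flashlight test described above is, in simple cases, mechanizable: compare what an app requests against what its category plausibly needs. The permission names and per-category allow-lists below are invented for illustration, not any real app-store policy or Android API.

```python
# Hypothetical allow-list: permissions an app category plausibly needs.
EXPECTED_PERMISSIONS = {
    "flashlight": {"CAMERA_FLASH"},
    "messaging":  {"CONTACTS", "NETWORK", "NOTIFICATIONS"},
}

def suspicious_permissions(category, requested):
    """Return the requested permissions that fall outside the
    category's allow-list (unknown categories allow nothing)."""
    return set(requested) - EXPECTED_PERMISSIONS.get(category, set())

# A flashlight app asking for the contact list gets flagged...
print(suspicious_permissions("flashlight", ["CAMERA_FLASH", "CONTACTS"]))
# ...while a messaging app asking for the same thing does not.
print(suspicious_permissions("messaging", ["CONTACTS", "NETWORK"]))
```

The set difference is the trivial part; the governance question in this conversation, who writes and maintains the allow-list at app-store scale, is the hard part.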
I don't know, trust is a big word, but we might not say we do. But my guess is, when we're looking up something and Wikipedia is the first hit, or Siri knowledge is just slurping it right from Wikipedia, if that's how we're going to find out how many claws a crab has, we're going to trust it. And similarly, I suppose, for those systems running GNU/Linux or something: there's a bunch of people purporting to contribute code to it, and then there's a council of elders, right, within the GNU/Linux community, that owns different tributaries of that. And I don't know if any of those examples of kind of hybrid or novel governance are scalable. But at least if we try to hold constant for a moment the definition of the project of cybersecurity and its boundaries, the 2010 definition, such as it was: if there are enough best practices emerging, we do know what we want, and then it's like, all right, do we use a free and open-source software model? Do we use an industry council? Do we use government? We can just start to try to answer it. Now, as we move towards an ever-larger definition of cybersecurity, where there aren't best practices anymore for these larger, societally implicating systems, I find myself a little more at sea again.

Yeah, I think that, you know, it's tricky, because as soon as we start looking at, for example, you know, the government's role in things like security: do we start caring about the government's role in performance, for example? Do we start looking at the government's role in accessibility? You know, is your service accessible to people who are blind or who can't hear, things like that?

In America, there is a government role for that, right? There is always the specter, for those who are designing and not thinking very carefully, of ADA requirements kicking in. And for performance,
I guess there's at least enough government regulation that says you shouldn't lie about the performance. If you say you've got a quad-core, 16-thread, 18-piston processor, like, that had better be inside, right?

Yeah, so we can, I mean, there are definitely sort of analogies or precedents we can draw with existing technologies, although I think that, you know, a lot of the things that we're talking about with respect to, let's say, cybersecurity aren't so easily quantifiable. So, you know, it's one thing for me to say, like, you know, I'm going to build an elevator, and that elevator has to be, you know, 14x load-capable, such that, you know, if you overload it by some enormous amount, then, like, nothing bad is going to happen. But, like, what would it mean, for example, for me to say your app must be 14x secure in terms of, like, you know, hacker resistance? And so I think, in part, one of the problems is that some of these security metrics we have are qualitative. And, like, you know, even if we all kind of agree that, like, these are some best practices, like, the extent to which someone satisfies them can sometimes be subjective. So let me give you an interesting example of this: think about Huawei. So, you know, people have done, like, some reverse engineering of, like, Huawei's equipment. And for some of this equipment, people found not that there was an explicit backdoor that, like, literally said, like, you know, hey, you know, Chinese Communist Party, like, come in here, we've got the door open for you. But instead, some of this Huawei equipment was using outdated libraries, like, outdated code that was known to have some security vulnerabilities. Now, one could interpret this observation in several ways. One could say, well, you know, Huawei just wasn't using best practices when designing, you know, this router or whatnot, and they got unlucky, and they can always change this. Another way to interpret this, which is, like, what many people in the American government
currently believe, is that this was not a mistake or a coincidence: this was done intentionally, precisely as a way of, in a certain sense, laundering away the backdoor capability, because then the Chinese government could always claim, no, anyone technically could have taken advantage of this problem. So imagine that this came up in front of a litigator, and let's say that you had certain different types of laws, one of which was just for negligence (and by the way, for all the people in the audience, I'm not a lawyer; when I say negligence, I mean it as an unwashed layperson would say it). So maybe one of these laws says you're just negligent, whereas another law says you have specifically aided and abetted a foreign combatant, right? That's a much more over-the-top, aggressive charge. What would you do in this case? There are arguments to be made on both sides. And so that's why I think that looking at this from the regulatory perspective, although I believe it is necessary, there are a lot of gray areas there.
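The outdated-library problem in the Huawei story is the kind of thing automated dependency auditing tries to catch. Here is a toy sketch in Python; the package names and advisory data are made up for illustration, and real tools query a live vulnerability database instead:

```python
# Toy dependency audit: compare a project's pinned package versions
# against a list of releases with known vulnerabilities. The package
# names and advisory data here are invented for illustration only.

ADVISORIES = {
    # package name -> versions with publicly known vulnerabilities
    "openssl-compat": {"1.0.1", "1.0.2"},
    "libpng-shim": {"1.2.0"},
}

def audit(manifest: dict[str, str]) -> list[str]:
    """Return 'package==version' strings that match a known advisory."""
    findings = []
    for pkg, version in manifest.items():
        if version in ADVISORIES.get(pkg, set()):
            findings.append(f"{pkg}=={version}")
    return findings

manifest = {"openssl-compat": "1.0.2", "libpng-shim": "1.6.0"}
print(audit(manifest))  # → ['openssl-compat==1.0.2']
```

Note that tooling like this can only surface the match; deciding whether a flagged dependency reflects bad luck, negligence, or intent is exactly the gray area the conversation describes.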
Maybe it has to come to proving intent, for example. You can probably speak more about this than me, but these questions are ambiguous.

Yeah, it's a common question, and has been for years, from people rolling into, say, law school: why aren't there huge damages owed for building vulnerable software that is then quite predictably exploited, with horrible consequences, when there are for, you know, putting bad soup on a supermarket shelf? And the weird answer turns out to be a happenstance of American common law: that purely economic damages or dignitary damages usually aren't recoverable from mere negligence. If somebody does something they really shouldn't have done, something that falls below the standard of care and could hurt you physically, but it doesn't happen to hurt you physically, it only makes you deeply upset and traumatized, and reasonably so: no case. It just doesn't go. And then, of course, we teach the exceptions to it, but the exceptions are rare. Now, that could always be changed, and I've always assumed that a big reason why it hasn't been changed is not on the law side of the ledger but on the technology side of the ledger: figuring out blame when there are so many bugs to go around. I mean, it's funny to think of Huawei making the case that the catastrophic bugs are merely that, that there's not even negligence. It's like, what do you expect? It's a router.
Of course it's vulnerable, rather than intentionally so. It's so common, and often the mistakes are the result of multiple problems at once being exploited, that we wouldn't know upon whom to pin the blame. Now, it's possible to do it, and I suppose to the extent that there's an umbrella over it, like an app store, you could blame Apple for any bad apps that work their way in. It would just have, by design, the predictable consequence of having Apple switch from an "it's permitted until it's prohibited" model to an "it's prohibited until it's permitted" one, and whether we want that, and the hit to innovation from that, I don't know. But I guess it raises for both of us, maybe, the broader question of: do we need to have some transformation in our thinking around cybersecurity for things to get any better? Or, if there were a big enough check to write, would you know how to spend the money, and to whom, to kind of fix the problem?

Well, one way to look at that question is to say: maybe trying to come up with a crisp and finite enumeration of things that should be done, or otherwise you're going to get sued, maybe that's a fool's errand, and maybe instead what we want to regulate, or incentivize, is the use of a good process. And whenever someone uses the word "process" like that, just distrust them immediately and unsubscribe from their mailing list; that's a weasel word, right? But it could be that what we want to do is say: we want to see evidence that you engaged in a process of war-gaming what might happen if things go wrong, of thinking about unintended consequences. And if you go through that process,
then we will say, well, okay, bad things could still happen, but at least you were able to do what we would consider to be due diligence. I think that might be an interesting model to look at. I think, though, that the constant challenge you always bump up against, and it's not clear to me how to adjudicate this, is that things like regulation, in my opinion, objectively throttle innovation. You have to jump through more hoops; you cannot do things as quickly as you might want to as an engineer. Now, as an engineer, I personally am fine saying I'm willing to take that hit. The food pills and the jet packs may not be coming for five more years, but at least we're not killing people with dangerous food pills. But that is a tension there, and different countries, I think, will come up with different ways to balance these issues. I think that, unfortunately, what's going to happen is that there's going to be some huge disaster: some part of the power grid is going to fail, or some big chunk of hospital infrastructure will fail, due to what we will say in retrospect was some preventable cybersecurity issue. And then there will be some legislation that comes out that will be better than nothing but not optimal, and we'll just have to refine it. And so, to my mind, one of the reasons a lot of my research focuses on tools that allow developers to make their code more secure, for some definition of secure, is that I want to try to make things better, and give developers the power to do so, before that disaster happens, before we have to have some tragedy and look backwards and say, ah, if only we'd done this, that, and the other.

Yeah, your example of a terrible thing happening makes me
wonder if a division that seemed cleaner in 2010 than it does now, between industrial systems and consumer-facing systems, could be a division that decides how much regulation there is. If it's something controlling a power grid, it's not clear it needs to be tethered to the internet at all times, or be usable on Android or something, whereas if it's just my laptop, what's the big deal? Or is there enough interdependence that, no, you add up enough laptops, and what is deep inside the guts of a Tesla but a laptop at the end of the day? That might be a distinction that's harder to maintain, and I don't know if you have thoughts on that. I was also thinking we could turn to some of the questions that have been rolling in from the world at large.

Yeah, so maybe I'll just briefly address that last question, and afterwards we can look at the audience-submitted questions. I definitely think that's a good idea. I definitely think there should be differences in what, let's say, cyber-physical systems for power grids have to do in terms of certification and regulation, versus the proverbial Fruit Ninja or something like that. The scope for harms is different in the two cases, and conveniently, for some of the cyber-physical stuff, security is more narrowly defined, which is convenient from the perspective of regulatory-type things. So I think that's a good idea.

All right, well, turning to some of our questions. One of them is whether you think that standards like NIST and ISO are a way to formalize trust in best practices for new technologies. How much do you buy the alphabet soup of organizations that have stepped forward and said, well, we'll come up with some process, or some other form of label?

I'm not against them per se. I think NIST does some good work.
I would say, though, that standards aren't going to save us all, because, at a high level, many people on this call have probably heard of this concern that there's going to be a "splinternet": that at some point China and aligned countries are going to form their own standards and define their own notions of interoperability, and then go do their thing while the rest of the world does its thing. So it's interesting to think about what happens when there are competitions among standards bodies, because it once again boils down to this very basic question of who you trust. On the one hand, I like the fact that NIST can weigh in and say, this crypto algorithm is good, this crypto algorithm is bad. But NIST has not been anointed by the gods as the single standards body, and so if we're looking at security more broadly, for example, if we care about securing communications that travel between multiple countries, which may have multiple different standards bodies, some of which are competing, then the issue becomes more subtle. But yeah, getting back, JZ, to something you said earlier about different standards of regulation for different settings: I think NIST-style certification, or NIST-style standards, are particularly valuable when things like cybersecurity can be defined in a crisp way. Say, here's a checklist: do this, this, and this, and then everything will be, roughly speaking, fine. For some more complex stuff, for example, how do we know if machine learning algorithms have bias?
I think NIST, at least in my reading, is less qualified to comment on those things, in part because some of these questions are cross-cutting and interdisciplinary. If you want to say something like, is this machine learning algorithm biased, you'd better start bringing in sociologists, ethnographers, historians, things like that, and at least historically, NIST has not had that wide a set of expertise.

You want to put the HIST into NIST.

Here we go. That's the name of the rap album right there. Hope everybody heard it. That's the mixtape.

All right, well, exactly on that note: one of our new fellows says that in every cybersecurity training I've had to take, they tell you that the weakest layer of a security system is the social layer, e.g., the person who presses the link in the phishing email. And it's true, it's really the people that make things so awful; that's my own editorialization, to be clear. In your opinion, how do the social layer, and the vulnerabilities associated with it, change given the expanded definition of cybersecurity that you've explored?

It's true, humans are oftentimes the weakest link, and that's one of these really dark realizations you come to. It's like you realize, oh man, bad things happen to good people, and then this observation slowly follows shortly thereafter. User education is always a tricky thing, because many problems society faces could be solved with better user education. But one of the problems with cybersecurity that we're currently seeing, and that is very relevant, is, look at misinformation, for example, which I would personally put in the domain of cybersecurity, even though it's not strictly speaking about, can I be hacked, or can my passwords be stolen.
The question of what misinformation is, and who should decide which sources are trusted, is a political question. And so the idea of user education ends up being very thorny, because, for example, if you talk to a lot of political conservatives in America, they think this question is being analyzed incorrectly in some cases by Twitter, by Facebook, and so on and so forth. So maybe it's interesting to think about what types of user education are perhaps non-controversial, here are the signs of a phishing email, stuff like that, versus what parts of user education are more ambiguous, like, should this particular Facebook ad be treated as true or not. But I do think user education is a big problem. I would say, though, that part of the reason why it's a problem is that, for many products, security was not thought about as a first-order design principle from the beginning of the project, and things change rapidly in a way that sometimes confuses users. If you want an example of this, and this is sort of like a homework assignment, go look up the history of what the Google Chrome browser shows when you browse an HTTPS website. This has changed several times, right?
It used to be that if you went to an HTTPS website, you'd see a little green lock up in the left-hand corner of the URL bar. This has changed to have various different visualizations, depending on whether Google thought they should call attention to the fact that you're on a good site, an HTTPS site, or whether they should instead call attention to the fact that you're on a bad site and let the steady state be unremarkable, so on and so forth. So I think, in part because of UI issues like that, it becomes very difficult for users, even well-intentioned ones, to figure out what's going on with respect to cybersecurity.

Yeah, it also suggests that some of these things we really do wish the elders of science could just fix. We don't think of it as important to living in a free society that we understand how our refrigerators work, and the fact that they might be Wi-Fi-aware shouldn't change that. But when we talk about mis- and disinformation, or some of, again, the broader social things you're bringing into the rubric of cybersecurity, it seems like having people consider that is innately part of it now. Maybe there's a way to try to fix information so that all we see is the truth, but, to your point, people are going to disagree about that. I don't know; it suggests somehow that if somebody in 2020 is saying, I want to go into cybersecurity, then by your definitions, what they are going into as a field is going to be quanta more broad than what they'd have thought they were going into if they were joining the field in 2010. It kind of calls for an interdisciplinary center of some kind.

Right, somehow, if we could have a clearinghouse of people from a variety of different backgrounds, if only.

Let's brainstorm about that afterwards. I think you may be on to something.
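Returning for a moment to the "non-controversial" end of user education mentioned above: the signs of a phishing email can be partially mechanized. This is a toy sketch with made-up heuristics (urgency words in the subject, link text that names a different domain than the link actually targets); real filters rely on far richer signals such as sender reputation, SPF/DKIM results, and trained models:

```python
import re

# Toy phishing heuristics, for illustration only: score an email by
# urgency words in the subject and by link display text that claims a
# different domain than the one the link actually points to.

URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify now"}

def suspicious_score(subject: str, links: list[tuple[str, str]]) -> int:
    """links is a list of (display_text, actual_domain) pairs.
    Higher score means more suspicious."""
    score = 0
    lowered = subject.lower()
    score += sum(1 for word in URGENCY_WORDS if word in lowered)
    for text, domain in links:
        # Display text names one domain but the link goes elsewhere
        match = re.search(r"([a-z0-9-]+\.[a-z]{2,})", text.lower())
        if match and match.group(1) != domain.lower():
            score += 2
    return score

print(suspicious_score("URGENT: verify now", [("paypal.com", "evil.example")]))  # → 4
```

As the conversation notes, the controversial cases (should this ad be treated as true?) don't reduce to heuristics like these; that is exactly where the checklist approach runs out.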
Yeah. A beautiful friendship.

This is where people realize that this is not the ad-free version of the webcast, because they didn't pay for it, so there's a little bit of sponsored advertising in the middle for the Berkman Klein Center for Internet & Society. All right, let's see, other questions. With an increasing rate of companies signing up with the three major cloud providers, I imagine AWS, Azure, and Google, for their back end or to host their website, what are your thoughts on the cybersecurity issues that arise from this heavy concentration of information, centralized on three main services?

Yeah, it's a problem. I think one of the reasons why, back in, let's say, the late '90s and early 2000s, you saw so many attacks being launched on Windows, even after Microsoft got serious about security, was that because they were the monopoly, it just made financial sense, if you're an attacker, to focus all of your malicious criminal energy on Windows, because that's where all the users are.

It's reminiscent of the Willie Sutton quote: when asked why he robbed banks, he said, because that's where the money is.

Yeah, exactly. And so it is true that, in general, when you have more consolidation, what does that do?
Well, there's a good thing, which is that maybe by consolidating, that company gets access to more devs, and they can have more extensive security teams, so on and so forth. And it is true, I would definitely say, that the big tech companies, let's say Microsoft and Google, have better security shops than smaller startups, for example. But it is also true that the eye of Sauron that is the world of criminality slowly tilts its malevolent gaze towards those companies. And we've seen this, right? We've seen problems where AWS goes down, and then sites that you as the end user would not associate with Amazon, they now disappear. They do not belong in the same material universe as you do at that point, because some data center in Northern Virginia went down. Now, what you are starting to see some companies do is try to diversify across multiple platforms, and they do this for at least two reasons. One reason is security or availability: they want to say, well, if Google's data center goes down, at least Amazon's will still be up, with high probability. And they also do this to try to prevent vendor lock-in and have some negotiation leverage: they can always go to the other data center provider and say, hey, look, you're cool, but I'm getting these signals from this other data center provider, so maybe give me a deal on the next contract. So I think that's actually a really promising way to try to improve security: intentionally designing your distributed services such that you store data in multiple providers.

One of the long-time advisers to our Assembly program, which you are also an adviser to, that's pkmla.org, HTTPS to get there, for those watching, asks: is poor cybersecurity just like the rest of tech today? It just works poorly most of the time: dropped audio, reboot, web page does not load, order does not go
through. Or will it just take a long time for things to normalize? Automobiles started out pretty unreliable. I should add my own observation: thanks to the assiduous application of tort law, they got a lot better, after a bunch of payments for featuring an ornamental spike on the steering wheel of the early Pinto. But are people's standards just too low when it comes to tech? That suggests we just wait ten years and somehow we'll have figured it out, the way that we have with automobiles.

I like this question a lot, because I personally think that software quality has gone down over the past five years, and I think a major reason for that is that a lot of companies have been very inspired by the model of the web. Back in the day, for those of you who are old enough to remember this, there were these physical stores you'd go to if you had to get software. If you wanted a new version of Windows, you would drive somewhere, go to, like, a Best Buy or an Office Depot, and get a physical object. It looks like a frisbee, but it's smaller, and it's shiny.
It's called a CD, a compact disc, and that's how you got your code. And then, once every year, or once every six months, let's say, but not frequently, your computer would grind to a halt while it downloaded this huge security update. That's how things used to work. Software these days oftentimes uses this model called continuous integration. The basic idea is that your software is always downloading a bunch of tiny feature updates and a bunch of tiny security updates, all the time. This is very similar to what happens on the web, where, to a first approximation, every time you go to a web page, you're fetching a bunch of new content, and that content can change. So when we think about what version of Amazon's web page is running right now, it almost doesn't make sense to say. It's almost like, I forget if this is ancient Greek or Roman mythology, but you have this ship, and one plank is changed every day: at what point do you have a new ship? That's, roughly speaking, how modern software works today. And I think this has been a mistake, because what ends up happening, and I agree with the questioner here, is that now software is much more flaky, and it is harder to understand how it integrates with the outside world, or even with itself. In part, people went towards this continuous integration model because they feel that not to do so would be to cede the race to get new features to companies that do perform continuous integration. But the classic example I give is: if you go to, let's say, facebook.com, that web page never completely works at any time, right? I mean, you'll hit page down, and it'll give you a little Knight Rider thing, like, no more posts to show. My friends are not all dead! I know they've been posting stuff. What's going on? Facebook is broken.
Then I click on something, and I get some weird thing that washed up on the beach, like, it's supposed to be a photo of my friend; that's also incorrect. Why does that happen? Because they're constantly pushing out new features, and all major websites are like this. So to get to the question: unless companies decide to de-prioritize pushing out new features and prioritize stability and security more, I don't personally think that in 10 years software is going to be better. Some companies have managed to do this rapid-release strategy well, Google Chrome is a great example of that, but many other pieces of software do not do continuous integration well.

Well, an hour has flown by. It's perhaps to be understood as a sign of our times that it's not as if we were able to come to a whole lot of answers, but I would love to pop our tape, to use an old metaphor, into a time capsule and revisit it in 10 years, and see how much we're still talking about the same stuff, or, gosh, how naive we were back then, when the real problem was... cut to The Sopranos ending. But I'm delighted at the prospect of our being able to continue to work together through the center and elsewhere. And I gather you do take graduate students, correct? So if anybody out there is wanting to work on these things...

Slots available, apply now, and continuously. That's right, just a constant background radiation of applications. Nothing I love better than waking up, you know, to 89 messages in my inbox. Yes.

Wonderful. All right. Well, James,
thank you so much for this conversation, and looking forward to the rest of this year, and to being able to conduct, at some point in the not-too-distant future, events like this actually in person, rather than through a so-far-reliable, but no doubt has-its-vulnerabilities, technological intermediary.

Yeah, thanks for the invite to join this conversation at a great time, and yeah, I too look forward to the year 2525, when we're back and we can chat about the doom that is cybersecurity, but in person. Thank you.

Very good. And thank you, everybody, for turning up, and for your questions. All right, Liz, are we done? Is that it? How do I turn this thing off?

Thank you very much, Professors Mickens and Zittrain, for such a wonderful talk, and to the audience for joining us today. We'll see you next time.