With that, we're going to get the session kicked off so you can hear as much as possible. So, of course, I'm sure many of you have already seen the director of CISA, Jen Easterly, and Scott Shapiro, who has written a book. I have not read it, but the title is cool as shit. So I'm in. All right. Enjoy. Awesome. Thanks, brother. How's everybody doing? Seeing as we're the last panel of the night, I'm loving the turnout. Hey, so I have to say, I was super excited that Professor Shapiro agreed to join me. I was reading this book, and it's called, I don't know how many of you have read it, it's called Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks. I'm like, this is pretty good. And the reason why I was so taken with it was because a lot of what Scott wrote about really resonated with one of the big priorities that we've been working on over the past year, which is technology that has to be secure coming out of the box, secure by design. And so I was really taken with the whole work, and I just kind of reached out to you, I think over Twitter. And you should follow Scott on Twitter. He's known as the last shitposter. I doubt that you're the last shitposter, but anyway, he's a very good Twitter follow. And I just reached out to you. I'm like, dude, you want to chat? And so thank you for being here, my friend. Thank you. Is this on? Yeah. Thanks so much. It was a DM, and I was so excited. I remember responding, I'm going to play this simple and say thank you, and then I'm going to call my mom. And then I called my mom. All right, so I'm going to do the interview, which is fun, because I don't have to worry about what I'm going to say. But you can interview me if you have any questions. So I want to make this fun. So really, Fancy Bear Goes Phishing is the history of cybercrime through the lens of five epic hacks. And it starts with the Morris Worm.
How many people remember the Morris Worm? All right, not bad. Not bad. All right, so just give us a quick summary of hacking's greatest hits. Yeah, sure. First of all, thank you so much, everyone, for letting me be here, and Jen for inviting me to talk about Fancy Bear. So the Morris Worm, as many people in this room know, but for those who haven't heard of it: on November 2, 1988, a first-year graduate student from Cornell University, Robert Morris Jr., released a worm on the internet as an experiment, and it went awry and it crashed the internet. And he had to tell his dad that he had crashed the internet, who at the time was the head of cybersecurity for the NSA. And he ended up being the first person to be convicted under the Computer Fraud and Abuse Act. The hacking epicenter switches in the early 1990s to Eastern Europe, to Bulgaria of all places, which acts as the world's virus factory. And I kind of explore the production of DOS viruses, and particularly the reputed best virus writer, Dark Avenger, and try to figure out who Dark Avenger was. And if you're in the audience: Dark Avenger, are you out there? scott.shapiro@yale.edu. Calling Dark Avenger. He probably wouldn't own up to it. It's a great name. So the third hack is 2005, when a 16-year-old South Boston boy hacked the cell phone of Paris Hilton and then leaked nude photos onto the web. The fourth hack that I discuss is the Fancy Bear hack of the DNC in 2016. I'm sure we're all very familiar with that. And finally, the Mirai botnet, which was this giant IoT botnet that in some ways still exists out there. And that was built by a first-year undergraduate at Rutgers as a way of getting out of his calculus exam, and it ended up crashing the internet in October 2016. So those are the five through which I try to tell the story of hacking. So which was your favorite, or at least which one made the biggest impression on you?
Oh, God, that's like asking which of my kids I love the most. Which of your kids do you love the most? Is this being recorded? So I would say, in terms of the person I feel for the most, it's Robert Morris Jr., because he's a grad student and he has this idea, and it's an experiment. And then he crashes the internet and really messes up his dad's career. And I really feel like I could have done that. I feel like that too. Just that complete, like, oh my God, what did I do? And then the whole world is looking at you. What's fascinating is that before the Morris Worm, most Americans had never heard the word internet before. And so it's this new thing, and he's the poster child for this new thing. So I feel sad about that. Did you interview him? No, as a matter of fact, I wrote him, and he did not write back to me, but I was able to interview a bunch of the others. The one who, in a way, I got the most out of was interviewing Cameron Lacroix, who was the boy who hacked Paris Hilton's cell phone, and learning about his story and finally finding out the true story of how he hacked the phone, which shows, in fact, that all the other accounts are not true. So that was really fun to learn. And I was, so don't worry, everyone's going to read the book. Yeah, I mean, I have to have some teasers. Okay, all right, teasers. So read the book, find out. So look, 1988. And then actually there wasn't a ton of stuff that went on until maybe the late 2000s. And then we saw a whole bunch of things: the Sony hack, obviously WannaCry, the things that we saw with Sandworm. So do you think it's significant that there was that gap of sorts, between when we first realized there is an internet and the enormity of how cyber attacks could have an impact, and then this explosion that we've seen, to include ransomware? Like, how do you think about that? Yeah.
So I mean, if you think that in 1988 Americans had never heard the word internet, it takes about three decades for the internet to suffuse throughout all areas of society, which creates an enormous attack vector for nation states. Now remember, there's lots of hacking happening in the 1990s and the early 2000s. That's why this conference exists, courtesy of Microsoft. We'll come back to that. So what happens really in the late 2000s and in the 2010s is that nation states get into the act. And so the first shot, so to speak, in cyber conflict is often pegged to the 2007 Russian-supported DDoS of Estonia. And the reason why it's Estonia is because Estonia at the time is the most digitally integrated society. So it's just this unbelievably great target. We should also not forget the role that the United States government played. Stuxnet is 2009, and a lot of attacks are responses to Stuxnet. And then you have things like WannaCry, which used NSA exploits. So I feel like the United States definitely hurried things up in terms of cyber conflict. When you think about the motivations, though, you write a little bit about motivations falling into crime, espionage, and war. So motivation matters, right? So how do you think about people and orgs and overall behavior? How are you thinking about it when you're writing about it? Sure. So one of the things I love about hacking, not just the activity, which is really fun, but thinking about it, is that hacking, with the same technical activity, can be either crime, espionage, or war. And they're handled in completely different ways, and they have very different threat models. So cyber crime: what's the main motivation for cyber crime? Well, it's the main motivation for crime, because cyber crime is just a kind of crime, and that is money, normally pursued for self-interested purposes. Espionage is so different.
Espionage is done by nation-state actors to collect information for use in national and economic security. And this is something that I think people find very surprising, though it won't be surprising to somebody in the government, but spying is legal. I mean, it's legal under international law. Every country recognizes that every country has a right to spy under international law. So you have one group of people that want to get money, and that presupposes certain types of threat models. But then you have this other group, which basically has infinite resources; they're not after the money, and they believe what they're doing is completely legitimate, and so does the world. What's really fascinating, and what lots of people in the government and in academia worry about, is when does espionage become war? And that also comes down to motivation. Was the hack designed to gain information, or was it designed to disrupt digital systems? And so you have to kind of know why the people are doing it in order to know how to handle it, which government agency is supposed to handle it. But then also, if you own assets, you have to figure out who the bad actors are and what they're after. Yeah. So I feel bad because I'm drinking my beer. Sure. Okay. I'll catch up. All right, good. So you know, it's interesting. I spent a bit of time at the NSA and was part of the team that helped stand up US Cyber Command. And we always used to talk about turning the hat, right? Moving from Title 50 to Title 10, from computer network exploitation to computer network attack. But there's a lot of talk now about what can constitute cyber war, which I think is still sort of unsettled, right? And you are both a professor of law and a professor of philosophy, so I feel like you're the perfect person to ask this question. So can we have an answer? Yeah, yeah, sure. By the way, I am the perfect person.
So I got into this thing because I had written a book before Fancy Bear called The Internationalists, about the evolution of the right to go to war from 1600 to 2014. And so when I would go out talking about the book, people would ask me, well, what about cyber war? And I'd be like, yeah, what about cyber war? I don't know. And it's really, really difficult to figure out. And let me just tell you, there's a very basic answer, which is uninteresting, and then there's a really interesting answer, which I will mention. The really uninteresting answer is that when cyber attacks have the same kinetic effects as kinetic weapons, they can be considered a violation of Article 2(4) of the United Nations Charter and an act of war. But that's not that interesting, because it just says cyber is one more weapon in the arsenal. What's fascinating about cyber war is not that we can take an exploit and mimic what a bomb can do. What's fascinating about cyber is that we can take an exploit and do something that a bomb could never do, which is to affect the availability and integrity of information. And so what happens, I think, is that all of our laws of war, and it makes sense, because they were developed over centuries of land and naval battles, have physical security as their big concern. But we live in a world of information security. And how the laws of war are supposed to apply not to those who are threatening physical security, but to those who are threatening our information security, that is something we have very little experience with. And I would say that the laws of war ought not to be applied to cyber, but there are norms of international law which are relevant, like the norm of non-interference. But instead of trying to retrofit everything into the old kinetic model, I think we need to think about what war would be like when what's being attacked is information security. Fascinating.
I want to get into some of the key concepts that you talk about, and I want to start with a knock-knock joke. Sure. Knock-knock. Who's there? Upcode. This is definitely an updog joke. Upcode who? You wanted to say updog, didn't you? I did. I did want to say updog. What is upcode? Okay, thank you very much. That was the smoothest introduction. So I would say the following. When you're typing on a keyboard, all the code below your fingertips I call downcode: your OS, your firmware, network protocols, whatever. All the norms above your fingertips I call upcode. That is your psychology, your ethics, the rules of DEF CON, website terms of service, social rules, professional rules, rules of professional ethics, and finally, of course, the law. And one of the things that I realized by writing the book is that when I would write about the technical vulnerabilities in the downcode, ultimately I was always trying to explain why the vulnerabilities made it there in the first place. And they made it there in the first place because of the upcode, because the rules that surround us give us the wrong incentives. And so I actually went and rewrote the book, because I thought, wait a second, we come to this conference because we love code, and we love downcode. We love working with this stuff. But in some sense, when we're at the level of downcode, it's almost too late. Well, it's definitely too late. Yeah. What we need to do is intervene much higher in the upcode stack. And this is what I love about your work, and CISA is focusing on that upcode stack, because what you want to do is catch it before it's a problem. So let's go to that. You mentioned software vulnerabilities. So why don't you give us the elevator pitch for why the internet and software are so insecure? And how the hell do we fix it? Yeah, sure. I have like five minutes, right? So yeah. So what I would say is this is the elevator pitch.
The elevator pitch is to use your elevator pitch but to flip it. One thing which is fascinating about the development of the internet is that it was insecure by design and insecure by default, which makes unwinding it so difficult. What do I mean by insecure by design? Well, the internet, as so many of us here know, is built on an end-to-end principle. You push all the time-consuming things, like security, to the endpoints. So it's smart on the outside, dumb on the inside. And so all the pressure is on the endpoints, which would be your OS and your users. And it's a really tricky situation when you put all this pressure on the endpoints. The people who were building the endpoints were, you know, the Unix guys, Thompson and Ritchie. I mean, Dennis Ritchie famously said that when he built Unix, security was not in mind. So you have a system that pushes all the security responsibilities to the endpoint, and the endpoint creators are researchers who are most concerned with sharing their information rather than protecting it from being stolen. So you have an end-to-end system that's built by researchers who are not concerned with security at the beginning. And in some sense, it's baked in. So how do we fix it? Yeah. So the way we fix it, I think, is you have to give people the right incentives. You have to give people the right incentives to change their behavior. Because ultimately, why do we do anything? We do things because of the reasons that we think we have. And the reasons that we're given are almost always given by the social norms, the work norms, the legal norms that we inhabit. What I find bizarre and upsetting is that this product that is supposed to be eating our world, software, is almost completely immune from legal liability.
And one of the things that I know from teaching law is that if you do not give actors reasons to internalize their costs, they will not. And it is the path of tech law to start with a completely unregulated space, or a largely unregulated space, for it to gain an enormous amount of economic and political power, and then all of a sudden people stop and think, we need to change this. And I think we're at that point now. President Biden's National Cybersecurity Strategy has proposed that there should be software liability for security vulnerabilities that have been, I assume, negligently produced. And I think that's a fantastic and long-overdue innovation. And it's going to, I think, put an enormous amount of pressure on developers to start doing secure by design, secure by default. Yeah. And we are, so for those of you who are interested, we really are working on a serious initiative on secure by design. If you go to our web page, there's a lot of work. I've got a lot of teammates here, Bob Lord, Jack Cable, Lauren Zabierek, who are working on this. And, you know, it really is about catalyzing a revolution, frankly, because it's 40, 50 years of original sin going back to the internet. You know, you think about the revered security researcher Dan Kaminsky, right? He famously said the internet was never designed to be secure; the internet was designed to move pictures of cats. And we're very good at moving pictures of cats, right? And so we have this issue: an internet full of malware, software full of vulnerabilities, AI full of, you know, disinformation. And it's one of the reasons why I'm so captured by these AI issues. And we're not going to talk a ton about AI, but I think it's just another flavor of software, right, at the end of the day. But I do want to talk about our awesome community. You've been coming to DEF CON for how long now? This is my fifth time, but my first time was 2017. Awesome.
So we're at a hacker conference today, my favorite conference. And this book really dives into the importance of the infosec community, right? And I believe to my core that this community is the key to saving, frankly, saving the internet, saving technology, right? The ingenuity, the creativity, the curiosity. That's why I come, because we need your brainpower, your creativity, your ingenuity, your curiosity to help us solve this problem. So can you talk a little bit about how the infosec community is shaped, as you think about your understanding of how to best deal with this insecure technology? Yeah, so I want to say, I mean, I don't know all of you, to say the very least. But I actually love the security community. It reminds me so much of the academic community, only nice. And the reason why I say the academic community is because you come here and it feels like an academic conference, in the sense that there are very high standards. People are really into what they do, and they really want to convey to you what they know, and they want to hear what you know. I cannot tell you what it's like to be a researcher and to work in this space, because everything is put in the public domain, I shouldn't say everything, but so much is put in the public domain. And it is an enormous service to people who are less competent than you, because there are those of us who read your white papers, read your reports, and learn how things work that we would not have been able to learn otherwise. One of the things, and maybe we'll talk about this in a bit, but one of the difficulties of this field is that it's absurdly interdisciplinary. It's absurdly multidimensional. You have to know so many things.
Part of the role, I would say, that the infosec community can play, which you already do play so much of, is teaching people, well, like me, but also professionals, sociologists, anthropologists, lawyers, about what you do, because you maybe don't realize how opaque a lot of it is and how young the field is, and that there are not many courses on it. There are not that many textbooks on it. And so if this is going to be a collective effort, it's going to have to be led and supported by the people who know it the best, namely you. So the book ends up talking about solutionism, the death of solutionism. Can you just talk a little bit about the whole concept of solutionism and how it applies to how we think about creating more secure technology? And then I've got two more, wait, I've got one more question, and then I've got a big idea that I want to share with everybody and get your feedback on. Okay, so I'm going to, you know, let you get to your big idea. So I would just say, solutionism was a term invented by the social critic Evgeny Morozov, kind of making fun of the tendency of our society to think that every social problem can be solved with an app. And I love, I don't know who wrote this, but one of your team or you or somebody said, we can't cyber our way out of this problem. And I think that's 100% true. I mean, we've been spending so much time trying to cyber our way out of it, and it's actually an inefficient way to do it. The efficient way to deal with social problems is not at the technological level. It is at the social normative level. It is changing culture. It is changing education. It is getting people to see things that they didn't see before. And I think the problem with solutionism is that it's easy. You can say, oh, well, we're going to earmark several billion dollars to do something that's going to fix things. We're going to call it a cyber moonshot.
We're going to call it a cyber Manhattan Project. But what we really should be doing is the hard work of working our way up the upcode stack, figuring out where we can intervene so that we don't get lousy downcode that needs to be fixed. That is, I feel, the way in which mature societies regulate technology. Yeah. You said Manhattan Project, so I have to talk about Oppenheimer. How many people saw Oppenheimer? What did you say? Oh, don't say the end. Yeah. How many people saw Barbie? I haven't yet, but I'm super psyched to. I haven't seen, I have to be honest with you all, I haven't seen Oppenheimer either, but I'm reading the books. Either way, I have to finish the book first. So you know, it's interesting. Some people talk about this being an Oppenheimer moment with AI. And I'm just wondering how you think about that in terms of how we should all feel like we are good stewards of the technology. Like, everybody out here in some way cares about technology, cares about creating technology that is safe for America to use. So what's your thinking on that? I mean, what I'm going to say is kind of banal, but I think it deserves repeating over and over again. One of the things I find interesting about working with engineers is that they work so much with tools, and they follow the specifications that they're given, that they in some sense think of themselves as tools. They think of themselves as, like, oh, you know, just following orders. I'm just following the specs. But you're not a tool. You're a morally autonomous agent. Most of you. Most of you. I regret saying that. But the thing is, we're all morally autonomous agents. And it's especially the case with AI, where everyone's flipping out, AI this, AI that, everyone's so excited about things.
And we don't have the beginning of a clue of how to make it safe. Which is really, if people would just be honest and open about this, because there's a way in which this community has a credibility, culturally, that lots of other fields don't have. Like, you know, lawyers, academics, all of our prestige is dropping. I feel like the security community has very high prestige socially. And if you come out and say, this is unsafe, this is not a risk that should be taken, it will be taken seriously, because you know what you're doing. And that will not only be a really important contribution to human progress, but it also will enable you to sleep at night. Yeah, that's fantastic. I totally agree with that. So one more question, because you are a professor of philosophy: who is your favorite philosopher, and why? Oh, my God. Okay, so we didn't discuss any of this. Okay, so that's an impossible thing. I would simply say that I'm a legal philosopher, so Herbert Lionel Adolphus Hart, H.L.A. Hart, is my favorite. Anybody ever read The Concept of Law? Oh, you're killing me. Oh, my God. These are not your people. Yeah. No, you're my people, but you could be more my people. You know, The Concept of Law is a beautiful book. But of course, the three greatest philosophers of all time are Aristotle, Hume, and Kant. I mean, that just has to be said. I think the greatest philosophers of all time are John, Paul, George, and Ringo. Amen. Okay, I have a big idea that I just thought of earlier. So Scott is a wonderful human being, a great writer, a professor, but he is also an accomplished musician. And we wanted to bring our guitars, but we didn't. But this is what I'm thinking. Do you know what DEF CON lacks? Like, a freaking music village, right? Come on, people. We need like a rock and roll village. This is what I'm thinking.
Jeff Moss is going to kill me, but I've already previewed this with him. So we need everybody to bring your instruments, whatever it is. We could have a village that's like a jam session, but we could also bring the electronics, we could bring the amps, we could do all kinds of hacking on the amps. So I'm really thinking I'm going to talk to my team about it, like CISA, and perhaps Yale University, right? Is Scott up for all of this? Easy. We'll form a supergroup. Supergroup. So whoever wants to be part of the music village, the rock and roll village, next year: see us after this. But in all seriousness, look, I really want to thank Scott for being willing to do this, for showing up with such expertise, such grace, such humor, and for your friendship. So thanks, everybody, for coming, and have a great rest of the evening.