So thanks, everyone, for coming this morning. Anybody have questions before we get started? Comments? Anything? Is anyone going to say something besides me? Okay, we'll get started. Security. We talk about three things. Somebody tell me what they are and why they're important. And just saying "confidentiality" isn't an answer, because it's on the board, right? What does confidentiality mean, and why is it important? Only the intended people should be able to read or, you know, access certain information. Only internal people? No, intended people, yeah: only authorized people. What about integrity? No unauthorized person can modify or change the information. Yeah, so no unauthorized person can change, modify, or add information. That could be one way to put it. What about availability? Denial of service? Is it just denial of service, though? I don't like that. What was that? Can you say it louder? No interruption of the service. Yeah, no interruption of the service, or degradation of the service to unacceptable levels. Think about it: if I could attack the New York Stock Exchange, not to take it down, just to make everybody else's trades a little slower, or my trades faster, that's not really a denial of service. They still get service, but it's a couple of milliseconds after me, and I make money. Obviously, I don't know how to do that, otherwise I probably wouldn't be here. But that's important. So availability. These are all very high-level concepts, but they are important to the security of a system. Now, I think I ended class this way last time, so I'll do it again. Anybody here write perfect code? Perfect, bug-free, works-the-first-time-you-compile-and-run-it code? I see one hand going up. No, if any of us claimed that, we'd be lying, because we're all human, right? And what do humans do?
We make mistakes, right? We make errors. It's a fundamental part of being human. If we could fix that, at least for software, that would be cool, but in general it would probably be kind of boring if nobody ever made any mistakes, ever. So: software is developed by humans, humans aren't perfect, therefore the software is not perfect. And this shows up when we're developing software, because a human error can introduce a bug, or a fault. So what do I mean by a bug or a fault? What does that mean? What do you think of as a bug or a fault? Unintended performance of the program. Unintended performance of the program? What kind of performance, like it's not fast enough? Like the task it was supposed to perform. Yeah, so some kind of unexpected task that it performed that it wasn't supposed to? Yeah, or something that was unexpected. What else? You've been developing software for years, right? What are some of the mistakes you've made, to be honest? It doesn't do what it's supposed to do. It doesn't do what it's supposed to do, or it does what it's not supposed to do? Yeah. It fails, I don't know, for a particular test case or something? Yeah, it fails on specific data. It could work for 90, 95% of the input data, but that last 5% causes a crash, right? And that would definitely be a bug or a fault. What else? Yeah, breaking functionality: it could have worked before, but it doesn't work today, for the exact same input. Those kinds of things happen all the time. It's not secure. What was that? Like, not secure. Not secure in what sense?
Like, if there's a login page, and you type in credentials and hit the login button, but any credentials at all take you to the home page. Yeah. It's not secure, even though it's "working fine." So that's a little bit of the unintended behavior, right? It shouldn't behave that way, but it does, while still appearing to function correctly. Adverse side effects? Adverse side effects, like what? Memory leakage, or affecting the operating system, or the battery. Yeah, so maybe your program is leaking memory. It's still running correctly, it's running correctly for its workflows, but after a day its usage has ballooned up to two gigabytes, and in three days it's four gigabytes of memory, and then all of a sudden it gets shut down. The funny story I like to tell about that is, I think, the initial 1.0 version of Ruby on Rails. Anybody know the company, or the guy who created Ruby on Rails? I should have looked this up beforehand, because I don't really know the answer myself. He goes by the name DHH. David something. What company does he work for? They make Basecamp. Yeah, Basecamp. So he was making Basecamp, and in these chat logs they've since released, the initial version of Ruby on Rails was leaking so much memory that he had a loop that would restart it every 10 minutes. And that's what they were running in production. He didn't mention that on the awesome page explaining Ruby on Rails and this really cool framework, right? They eventually fixed all those bugs, and I guess it worked, but it's a pretty big fault. So: are all software faults security bugs? I think so. Okay. Not necessarily. Not necessarily? What do you mean by that?
Well, faults don't always have to be security related. They could just compute a value incorrectly; it doesn't have to do explicitly with security. Just compute a value incorrectly? Could that be a security fault, though? Or it could be a UI fault. A UI fault in what sense? Something one pixel off? Yeah, one pixel off, or it's not showing some text. So you could try to put the correctness of the program inside integrity and say, look, it's violating the integrity of the program, therefore it's a security problem. What do you guys think? It's kind of hard. Is it? Not necessarily. Faults could be errors also. And an error is different from a bug? In what sense? A bug is something that isn't in the specification, where it's performing something weird, and an error is something like a memory issue. Okay. So maybe it's a problem of terminology. For me, they're all the same. Errors, faults, bugs: to me, they're exactly the same thing. Whatever we would consider a problem with the software, where it's not doing what it's supposed to be doing, some kind of fault or error or bug, to me they're all the same thing. So then the question is: is any problem with the program, whether it's doing unexpected behavior, doing unintended behavior, or not doing the intended behavior, a security bug? Are all of them security bugs? No? Why not? It's a minority, if you look at all these bugs. Well, anyone who works in security would love to say yes, because it makes security seem really important. No, I'd say security faults are a subset of software bugs. Based on how we define security, you've got confidentiality, integrity, and availability, so if a bug falls outside of those, then it isn't a security bug. Okay. Yeah. So a subset. That seems reasonable. Does that make sense? Maybe you disagree.
I don't think we can prove that not all bugs are security bugs, because there might be an innovative way that someone finds in the future to exploit a given bug. So maybe all faults could be security bugs. And that has happened a lot throughout history. Take buffer overflows, and we'll actually see examples: we've known buffer overflows were a problem since the early 80s, and it wasn't until people started actively exploiting them that everyone said, oh, this is actually a serious problem. They're not just programming faults; they're a security problem. I guess at a high level the answer is no. I think it's pretty clear you can have a bug, say the text is moved one pixel to the left or the right, where it's hard to imagine a scenario where that is a security problem, where it violates the confidentiality, the integrity, or the availability of the system. That's a pretty simple fault. But then you have cases, like somebody was talking about a UI problem where text is missing. It depends on what that text is in the context of the system. Maybe it's the price of an item. Or maybe it's a discount that could be applied to a purchase, something that affects what the customer is paying. That could be violating the integrity of the system. Or think of other examples. You have to think creatively: bugs don't just exist in isolation; it's all about the context of the system they're in. But I think generally we can say that in most cases, not all software faults are security bugs. So if you have some kind of software fault, the software crashes, it's not performing as it should, it's doing something unexpected, or it's not doing something expected, it could be a security bug. Or, the term we'll use from here on: a vulnerability. So a vulnerability exists in the software, or we'll say in the system at this point.
And so, let's say you know there's a vulnerability in your system. Is it a bad thing? Are they all bad? Are they all equally bad? No? Why not? It could be something very trivial, say it prints all the a's as b's for some reason. That could be a vulnerability, but it's not going to affect too much in the UI. It depends on where it's coming from. All the a's printed as b's: is that actually a vulnerability? Does it affect the security of the system, given that we're calling all bugs and faults the same thing? No, they may be, but they're not all, so you're taking the subset approach, right? Some faults, I mean, you have to think very hard, but that would probably be a case where it's not necessarily a security problem. Even then, suppose the product ID shown at the front end was very different from what was stored at the back end. That's not going to affect the user a lot. But if the price changes to a large extent, then that's going to be a huge problem. Yeah. So let's go back to the example that was used earlier. Let's say we have a login form, where we can type in our username and type in our password, but it doesn't actually check the password; it just lets anybody in. Is that really bad? Is it bad in every single case? No. No? What would be a situation where it's not bad? A web page that only has a login form but does nothing with the login information; it gives no extra access to a logged-in user. Yeah. Or maybe, let's say it's on a machine inside a secure facility that you have to be military personnel to access. It's not even network accessible. It can be accessed only by certain people, and they need a key card to even get in. I mean, it's not great; it still could allow somebody to get in there. But you can see all the steps they'd have to take to get there.
So a vulnerability, I kind of think about it as, in some ways, a fact. It exists in the system, but it's just sitting there, not doing anything; it's just part of the system. It's only when somebody actually tries to exploit that vulnerability, or use that unintended functionality, when it's triggered or exploited, that it actually compromises the security of the system. So the way to think about it is: the vulnerability is a problem in the system that could compromise and affect its security, but how it actually gets exploited may vary. And you need knowledge of both things, right? You need to know: what is this vulnerability in the system, and what could it allow an attacker to do at a basic level? And then: how could an attacker actually exploit it? So compare needing physical access to the machine, which is a lot worse for the attacker, with that Dropbox example. A few years ago, Dropbox had a problem with their website where you could type in anybody's email address and any password, and they weren't actually checking the password. That's a huge issue, right? A major, major issue. Incredibly critical, very important, it affected the security of their application. But the same underlying problem in a different scenario becomes something different, right? It's about the exploitability: how easy is it to exploit that vulnerability? Think about vulnerabilities that, let's say, require the user themselves to type a very special sequence into a text box and hit Enter, and then the program crashes. That's probably a vulnerability, because it affects the availability of the system. But how easily exploitable is it? It all depends. You're the attacker, right? You have to somehow trick the user into typing a special sequence into this box and hitting Enter. It's not impossible. People do it all the time.
Has anybody seen those things on Facebook that say, copy and paste this thing into the URL bar? Actually, I think if you go on Facebook now, they've disabled the developer console and they've disabled pasting JavaScript into the URL bar, because they wanted to stop this kind of behavior, because it was working, you know? How many users does Facebook have now? Is it a billion? I don't know. Tons of users. With all those users, some people are going to fall for it. It happens all the time. If it's custom software on the desktop, it's probably harder to convince somebody to do that, but it still could happen. Okay. So software security should be easy, right? Just write perfect software. Yep. Done. Go home. Take the rest of the semester off. Just kidding. So is that actually enough? Let's say we live in a world where we can write perfect software. Is that enough? Nope. Why? Because new requirements might come in. New requirements might come in? Yeah, so let's say we write perfect software once, and then the perfect team that wrote that perfect software leaves, and then maintenance programmers, who aren't so perfect, come in and have to develop feature X and extend it. Yeah, that's a good one. What else? Users. Ooh. What about users? Are you saying that users aren't perfect? The system is used by users, and a human being might mess up the perfect software. Yeah, so we have users to think about, right? Think about, does everybody remember Windows Vista? The User Account Control, UAC, would pop up every time a program ran: hey, this program is trying to run as an administrator, do you want to allow it? It's a good security feature, but it happened so frequently that users just got into the habit of always clicking yes, yes, yes, yes, yes.
So then, when something really bad did happen, they just clicked yes without thinking about it, right? I wouldn't necessarily call that a perfect security feature, but it was a very good security feature that didn't take into account that users aren't perfect. Yeah. And isn't "perfect" subjective? For you, it might be, okay, this is my perfect software, but I might still be able to exploit certain vulnerabilities, or make it do things it's not supposed to do. What's that? No, it's perfect. It does exactly what it's supposed to do, nothing more, nothing less. No faults in the software. What if the requirements are not perfect? The requirements? Yeah. That's a whole other issue. Even the libraries the software is using might get deprecated in the future, and that can break the software. Yeah, right? So, does anybody really write a piece of software entirely from scratch? Like, you start with C, you use the C standard library, and that's it, nothing else? No, right, that would be crazy. We use programming languages. We use the libraries of those programming languages. We use third-party libraries. We use frameworks. Those frameworks use libraries and other third-party things, which probably use other things in turn. All of this is going into your software. So, okay. We talked about it: we need perfect software, and we have perfect users, because we've trained them all. So what else? Are we still perfect here? There can be backdoors, like on the server. Backdoors on the servers? What do you mean by that? Something not programmed properly. There can be one piece of software that is perfect, but not all software can be perfect. Yeah. So let's say all of our software is perfect. All of it. Yeah. Then the hardware could also be backdoored. Yeah.
The hardware could be backdoored, yeah. What about this: let's say I write this beautiful piece of software and I give it to everyone in this room, and you go and run it. It's perfect. Are all of your installations going to be perfect? No. Why? The environment. The operating system. Yeah, the operating system, the environment, the configuration, right? So now we have to make sure not only that we write the software properly, but that it's configured in exactly the right way. This was a big problem in PHP, because PHP had a special configuration parameter to enable magic quotes, which would automatically quote input that came in. Well, if you take that same application, which is secure on one server, to a server with a different version or configuration of PHP that doesn't have magic quotes, it's now insecure. So yeah, you have to make sure the configuration is perfect. And, somebody mentioned this, we have to make sure we're using a perfect operating system, right? Because our application trusts that the operating system is doing everything correctly. It trusts that the operating system is keeping up integrity and confidentiality. So if you write this beautiful, perfect application, great: I'll just find a vulnerability in Linux and exploit that, and now I've got your application anyway, even though you wrote a perfect application and you have perfect users and you've configured it perfectly. So is that enough? Is that all we need? You talked about hardware. What are some other things? What else can go wrong? What we're trying to do here is get into this adversarial mindset. Well, our software, any software, will get converted to machine-level bits and bytes, and then it has to be processed by the processors, and that depends on the transistors.
And that depends on the behavior of electrons, and much of that may depend on the temperature. So anywhere in that sequence, anything can go wrong. Yeah, exactly. What else? Perfect privacy? Somebody could find out what software you're using. Yeah, actually, in this class I'm going to put the privacy aspects to the side, because that's a whole separate thing: maybe I've built it perfectly, but maybe I'm giving up my privacy to do so, or something like that. It's definitely a component, but it's a whole other issue. So, talking about transistors, there's actually this story. When I took a grad class in computer architecture, our professor told us about two supercomputers, I think the military created them, that were built exactly identical. They had one, I'm going to make up the places because it fits the story, let's say in San Diego, and one in Denver. And the one in Denver had twice the error rate of the one in San Diego. So they go through, they look at all the parts, they compare everything, and it's all fine. So they ask: what's the big difference between San Diego and Denver? The temperature? Elevation, yeah, exactly. Denver is the Mile High City, right? It's at a much higher elevation. And when you're at a much higher elevation, you have more cosmic rays hitting your system. And they found out that's what it was: the higher incidence of cosmic rays at that elevation caused random bit flips in the system, which would eventually cause the system to crash. Crazy, right? So they put lead shielding on it and all the problems went away. Yeah. So what about your hypervisor? Any of you running hypervisors? VirtualBox, VMware? Running something in the cloud? Then you're running on top of a hypervisor, like KVM.
So we talked about hardware. What's controlling the hardware? What was that? Firmware? Is that what you said? No, I said power. Oh, power. Yes, power too. But firmware: there's code that runs the hardware, right? Almost every piece of your system has some firmware running it. And there are actually people who have shown this. There's code that controls your network card, and researchers found, I don't know if it was exactly a buffer overflow, but they found a vulnerability in the firmware on a network card. They could send you one packet and exploit your network card. And from there, the network card has direct memory access to the entire physical memory of the machine, so they can completely control and own your hypervisor, your operating system, and all of your applications. Just from one packet, which never even gets sent up to your operating system, because it gets stopped on the network card. So, yeah, it's super easy. You just write perfect software. You have perfect users. Not only do you have to write your own perfect software, you have to somehow ensure, or rewrite, everybody else's software that you depend on. You have to configure that software perfectly. You have to have a perfect operating system, use a perfect hypervisor, run on a system with perfect firmware, and run on a system with perfect hardware. And there's actually been research showing that you can build a CPU such that, with a certain sequence of instructions, the CPU drops down into ring zero, giving complete, operating-system-level access. That's something you can build into the chip itself to be able to control the system. So this is why I still have a job. None of this is easy once you start listing it out and thinking about everything you actually have to do. So, as I said on Monday, we're going to learn how to develop secure software, or at least some strategies and techniques for developing secure software.
But we also need to learn software insecurity. I don't want us to talk about these things at a high theoretical level, about types of vulnerabilities in the abstract. I want us to really understand the technical details of how these vulnerabilities exist. So we're going to answer questions like: how does software break? What are the ways that software can break and cause security problems? And to do this, my main goal in this class, besides making you work a lot while having a lot of fun doing cool, interesting projects, let's phrase it that way, is for you to develop an adversarial mindset. That's the key thing about security: you need to be able to look at a system and ask, what are the goals of the system? What are the intentions of this system? What does the system actually allow me to do? And what's the difference between those? How can I force the system to do something it wasn't intended to do? And not only that: what does it actually mean for the application? Is it actually a security vulnerability? That's the key skill we're going to develop, and it's why we're looking at all of these software problems. We're going to talk about bugs to develop this mindset, because otherwise, how do you know how to defend a system and develop software securely, if you don't know all the ways it can go wrong? I can teach you about all the different vulnerability classes, all that kind of stuff, but what I really want is for you to have the mindset so you can go find entirely new classes of vulnerabilities. So you can look at the new, crazy Internet of Things protocols and technologies, or Bitcoin, and say: huh, what happens if I do this? Did they think about this? What if I send the exact same transaction twice to somebody? How does that change or affect their model? Where does this data go? When I throw data at this system, where does it go? Who touches it?
Does it get tweeted out to Twitter? Is it input from Twitter? Does it get sent to a third party? Is it part of an XML file? All these kinds of things. Who processes this data? Let's say I give data to a system, and maybe I find out it's handled fine there, but then later, every week, an email gets sent out to all the users containing some of that data. Maybe I can influence that email. What can I do with that? These are the kinds of things I want you to think about. And really, the other thing is: what assumptions is this system making? Because that's really what it's about: you want to try to violate those assumptions. And this is why I really like security. To be able to do this effectively, has everybody seen The Matrix, or at least heard of it? Yeah. You need to be Neo. You need to see the system, see all the pieces and how they fit together, and be able to say, ah-ha, if I do this, everything breaks. That's what you need to be able to do. And that's why this class is very demanding work-wise: you need to know everything. We're breaking down all the abstraction layers here. We're not just going to say, here's C, and here's a buffer overflow. It's: here's a C program; here's exactly how this version of GCC compiles it to x86 assembly; here's how that assembly uses the stack; here's how it does function frames and function calls; and look, with a buffer overflow, we can control this return instruction pointer to execute code of our choosing. Right? That's being able to look through, see the matrix, see everything, and manipulate it to your will. So we're going to practice hands-on breaking of software, to practice software insecurity. Any questions here? This adversarial mindset is very similar to the testing mindset, right?
This is something I really liked at Microsoft: we had SDETs, Software Development Engineers in Test, a role I think they're getting rid of a little bit now. But I thought those people were amazing; some of the ones on my team were great. I built this feature and thought it was all cool, I think I was an intern at the time, and they came back and said: actually, when the tool is on the second monitor, not the first monitor, a problem occurs and it crashes. I was like, what? I never thought to test that. But I should have. And that's part of it: doing weird things, looking at corner cases, trying to think through how this software was developed and what they probably didn't think about, that kind of stuff. So as part of developing this adversarial mindset, I find it very helpful to look at the history of, call it software insecurity, or hacking, or big incidents. Because I think it really helps frame the context, so we can understand our current situation, and why you're here in a class about computer security and why we think it's important. Okay? Yeah, good question. Sure: using the previous concept, can I say that security testing is a subset of software testing? In a sense, yes. And here's the difference, to me, between software engineering and software security, or computer security: I don't really care about just finding bugs. You can find bugs all over the place. I want to find bugs that allow me to compromise the security of the application. And to do that, you need to understand the application itself. To a software engineer or a software tester, any bug is great: you can crash the program, whatever.
If it's one pixel off, that's a bug. For me, I don't care about that unless it affects the security. But you have that mindset that says: hmm, okay, now I know I have this functionality, and maybe that thing by itself doesn't let you do anything. But once you find another flaw, you can put them together and say: aha, I can leak a memory address, and that allows me to do a buffer overflow, which allows me to do this other thing, and actually exploit it to defeat ASLR and all this other stuff, and now I've completely owned the program. So to me, that's the big difference. They're very, very similar and they use similar techniques, but the big thing about security is that you need to understand the application and the system in its context. Because ultimately, you need to be able to convince somebody else that it's worth fixing, that it is a real problem. If all you can say is "you have this one pixel off," that would be hard to sell; you need to be able to say, see, that's really important because it affects your confidentiality, or whatever it is. You need to put it in context. So, to me, the history of security incidents and hacking is tied really closely to the development of the internet. Does anybody know when the internet first started, the very first beginnings? DARPA? What about the time, like, which decade? The 70s. The 70s, yeah, a long time ago. So I hope everybody is pretty familiar with the internet, has maybe taken a networking course; that would be very good. The internet is a network of networks, right? And each network is autonomous, which is actually really important, because it means there's an open architecture and every network is different. It has different administrative domains with different goals, right?
So, the ASU network operates a lot differently from the UC Santa Barbara network, which operates a lot differently from the Microsoft network, or the Google network, all these kinds of things. And it's built in that they can do this and each have their own thing. And so this is kind of an understatement: the internet is critical to our lives. Has anybody here not used the internet today so far? Anybody? I mean, maybe if you just got up at, like, 10:25 and rolled over here, maybe you could say that, but it would be hard. You'd have to have not checked your phone for any emails or text messages, not looked at today's weather, anything like that. Okay, so back in the 70s, DARPA created a project called the ARPANET. And the first four nodes, I think this might be an easy one: does anybody know what they are? Universities or institutes? University of Utah? Yep, Utah is one. University of California? UCLA? Yep. Berkeley? Nope. MIT? No. Are we just guessing colleges now? Yeah, so, part of why I like to bring this up: my alma mater, UC Santa Barbara, was one of the first nodes, along with UCLA, the Stanford Research Institute, and the University of Utah. A funny little backstory: these sites were selected for slightly political reasons, not national politics exactly, but I heard they included Utah because they didn't want to fund only the three California institutions, so other states couldn't complain that the money was only going to California. And I have to think SRI was included so it wasn't just a university thing; they had a private institution involved too.
But yeah, so this was the map. This was the internet: UCLA, UCSB, SRI, and Utah. And you can see the different machines here, in those boxes: the IBM 360 at UC Santa Barbara, the Sigma 7, the 940, the PDP-10. I'll be honest, I don't know all of these off the top of my head. So the crazy thing about this thing that we use every single day, that's connecting our fridges to news feeds and our TVs to, I guess, ads and stuff, is that it started with just these four nodes. And the other little tidbit I really like is that it didn't start with TCP. It started with another protocol called the Network Control Protocol. So in the 80s, as more nodes came online, on January 1st, 1983, they just moved. They moved the whole entire network to a completely new protocol, TCP/IP. I think they called it the flag day. So do you know how they did this? Thinking about it here today, how would you do this? Turn it off and turn it back on? Difficult. Who's going to do that? It's a system of autonomous networks. It's a trick question, because I don't think you could do it today. I honestly think it would be virtually impossible to do something like this today. Actually, here's a great story. One of the professors at Santa Barbara, now retired, has been there since the mid 60s, and he was involved in my lab. And in his office, there's a book about this thing. It says ARPANET directory or something like that on the side. And I was like, hey, what's in this book? He's like, oh, that's just a list of every computer on the internet in 1980-something, with the name of the administrator, their phone number, and their email address. What? So yeah, the network was so small that they just called people, and they all decided on a date, and they decided they were going to switch over from NCP to TCP/IP. They just did it.
Yeah, they turned it off and turned it all back on with new protocols, and it just worked. Which is crazy, if you think about it. You can't do that now, right? Everything new, like IPv6, has to be backwards compatible with what we currently have. We cannot completely move from IPv4 to IPv6; you'd just break everything. Think about it. Economies would crash. Economies would crumble. It would be pandemonium. It would be nuts. Around the same time, in the 80s, DARPA funded the development of Berkeley Unix, and the important thing here is that its TCP/IP implementation introduced the socket programming abstraction, which is used in almost every network application now. Then, also in the 80s, the internet grew beyond just ARPANET and just the universities. MILNET is the military network, the military's version. Back in the day they were connected, and then the military realized, oh, it's a terrible idea to let random people have access to our military network, so they were completely disconnected. I don't know if they still are. That's a good question I'm not sure about. Anybody know off the top of their heads? Any military people here? Is MILNET still completely separate? I would assume it would be, no? No, it depends. The top secret stuff would be, but they have separate networks like SIPRNet that are completely cut off. But you can email people on MILNET now. That's within the last 10 years. Interesting. Crazy. And then, I like this one, the NSF created a supercomputer network. And when you have supercomputers, you need to transfer data incredibly fast from one location to the other. So they built this awesome network on a backbone of 56 kilobits per second. Which is just nothing, when you think about it. Well, you've got to think about the time, right? It was back in 1986. That was fast back then, right?
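That Berkeley sockets abstraction is still how almost every network program is written. As a minimal sketch (not from the lecture), here is a toy echo server and client in Python, whose `socket` module wraps the same Berkeley sockets API: `socket`, `bind`, `listen`, `accept`, `connect`, `send`, `recv`. The message and port choice are illustrative.

```python
import socket
import threading

def echo_once(server_sock):
    """Accept a single connection and echo back whatever it receives."""
    conn, _addr = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Server side: create, bind, listen. Port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=echo_once, args=(server,))
t.start()

# Client side: connect, send, receive the echo.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello, ARPANET")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

The same create/bind/listen/accept pattern appears, with near-identical names, in C, Java, Go, and essentially every other language's networking library.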
So in the 90s and the early 2000s, everything started to explode internet-wise. Incredibly fast growth in both size and volume. So what was the killer application of the internet? What was that? Netscape? Yeah, I'd say more generally the World Wide Web. So the web really was the key driving factor of the internet. Pretty much up until that point, only nerds were on the internet, to be perfectly frank, right? They had email, they had mailing lists, Usenet, and a bunch of other services like FTP. But they were clunky, and it wasn't until 1991 that Tim Berners-Lee at CERN created the World Wide Web. His idea was, well, he worked at CERN, so what does CERN build? Yeah, the Large Hadron Collider, that big collider thing, which is huge, right? So he worked to help this project, and they have a lot of people, a lot of different resources, everybody moving every place, and he was like, huh, it would be really useful to have some place people could go to get a list of who's in what office and what their phone numbers are. And the idea of hypertext was already around, so he was like, it'd be great to be able to click on somebody's name and then go to their web page, so you can find out more about them. And then he built this simple implementation on the NeXTSTEP operating system and released it, and it spread like crazy, and, I'll say, the internet explodes. It's really hard to get a feel for how big the internet has become, how big the web has become, and how much it's really changed. So here is a graph of the number of websites in existence, starting from 1991, when it would be one, right, all the way up to now. I'll be honest, I don't know the exact methodology for determining this. But you can see how it goes.
I mean, I thought the internet was big in, like, 2006, but there weren't even 120 million websites in 2006, and now, in 2015, we're hovering around a billion websites. This is probably only public ones, right? It's probably not counting all the internal stuff. And how do you count something like Facebook? It's one website that has a billion users, right? So this growth is just absolutely insane. And what's funny is, my advisor had a picture of this graph from 2006, so just 99 to 2006, and you can see the same exponential growth there. We can't even see it here because the later growth is so big. We're living in a pretty cool time. And this is a really cool visualization of the internet that I like. It's a visualization of networks and how they're connected to each other, based on real data. So yeah, it's cool, and you can see some of the backbones there in white that are transmitting high volumes of data. Yeah, and also the arrival of Linux in the 1990s? No, actually not. Well, I don't know, actually. The interesting thing is that Tim Berners-Lee created his implementation on the NeXTSTEP operating system, which was the computer from NeXT, the company Steve Jobs founded when he left Apple. They made a really cool operating system, which is actually pretty fun to program, and so he developed the client and the server application for NeXTSTEP. I think the web didn't really take off until people ported it to other platforms and that kind of stuff, but I don't know that it was specifically Linux, because I don't think Linux was huge in the early 90s. It was more Berkeley; you saw a lot of Berkeley and a lot of Sun.
I was at MCI WorldCom in 98, and before that I worked for an internet company that did dial-up in 95, doing tech support. Cool. But it was all Unix, it was all Berkeley, right, some variant of Unix, and C. Yeah, I mean, your web pages, your interactive stuff, was all done through CGI, through C, right? Right, cool. Alright, so this is going to be a brief overview of the incidents we're going to talk about. There are a lot of incidents, but I picked some trying to stretch your concept of what hacking is and where it came from. So in 1972, we had phone phreaking. Also, at the end of 1972, Bob Metcalfe published an RFC. So what's an RFC? A request for comments, yeah. I don't know if it was specifically with the Internet Engineering Task Force, but basically we'll look at what he saw as the state of security back then, in 1972. In 86, German hackers tried to obtain secrets to sell to the KGB, which is a crazy, interesting story. 1988 was the first internet worm, which is also really interesting. In 94, Kevin Mitnick, has everybody heard of Kevin Mitnick's name? Some of us, all right, a famous hacker, broke into the San Diego Supercomputer Center. And more recently, in 2010, I really like this one, Albert Gonzalez received a 20-year sentence for hacking. He and his hacking group stole, I think it was, 200 million credit cards, and they were caught and got 20 years in federal prison. And there are dots added here because there are so many more we could talk about, all kinds of stuff. What about, like, the computers used to break the German codes? Ooh, the Enigma machine. I don't know much about it; I think it would be good for a crypto class, because I think it's more relevant there. For me, these are what shaped our current thinking about software security, right? I think these things definitely did, especially the internet worm and the German hacking incident. These were the things that the security community
just started talking about, worrying about, thinking about: how do we solve this problem? Okay, so it all starts with Cap'n Crunch. So what is Cap'n Crunch, for those maybe not from this country who don't know? A cereal, and at some point they used to put a toy in the cereal box. So it's just a box of cereal, and Cap'n Crunch is the mascot. So in 1972, this guy, his name is John Draper, found that the whistle that came in a box of Cap'n Crunch produced a sound at the 2600 hertz frequency that was used on the phone lines. So if you want to, go to the store and buy a box of Cap'n Crunch so you can feel part of the hacker spirit. John Draper, you're totally welcome. This is the whistle that produced that sound. So why does this matter? Who cares? I don't know, a phone dial tone? What was that? A phone dial tone? Yeah, so it turns out that the 2600 hertz frequency was used by AT&T to authorize long distance calls. So what would happen is, and I'll be honest, I'm not an expert on telephony systems, especially circuit switching and exactly how they work, but the basic idea was, when you made a long distance call, the authorization had to be communicated from your operator to the other operator, to the switches in between, and it used audible frequencies over the phone line itself: it would make a sound at 2600 hertz to signal this. So what John Draper figured out is what we now call phone phreaking. His alias, his hacker name, became Captain Crunch, and he built this blue box. Oh, are we out of time? No? Alright, so we'll continue the story of Captain... oh, alright, alright, we'll pick it up next time. Thanks, everyone.
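The whole trick hinged on the supervision signal being an audible tone, so anything that could produce a clean 2600 Hz sine, a toy whistle or a blue box, could speak to the switch. As a minimal sketch (my own illustration, not from the lecture; the sample rate and duration are arbitrary choices, not historical parameters), here is how you might synthesize such a tone digitally:

```python
import math

SAMPLE_RATE = 8000   # samples per second, telephone-quality audio
FREQ_HZ = 2600       # the in-band supervision frequency AT&T listened for
DURATION_S = 0.5

def tone_samples(freq_hz, duration_s, rate=SAMPLE_RATE):
    """Generate a pure sine tone as a list of floats in [-1.0, 1.0]."""
    n = int(rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

samples = tone_samples(FREQ_HZ, DURATION_S)
```

The deeper lesson, which we'll come back to, is that the control channel (the 2600 Hz signaling) shared the same medium as the data channel (your voice), so anyone with access to the data channel could forge control messages.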