All right, see, and we just started right on time. Okay, so to make sure everyone's in the right room, this is CSE 545 Software Security, Fall 2018. Does anything about that seem wrong? Good, okay, just making sure. So the eventual instructor of this course will likely be Dr. Fish Wang. This is a little introduction. This is Fish, you can say hi, Fish. He'll say hi eventually, very soon. He's a very nice person, and he's also incredibly intelligent. So to tell you a little bit about his background: he did his undergraduate, or sorry, actually I don't know where he did his undergraduate, but he did his PhD, and I was about to give my own background, that's how many times I've introduced myself rather than somebody else. So Fish did his PhD at UC Santa Barbara, and he is, it doesn't say it on here, he may have created these slides, probably the best reverse engineer I've ever seen, and he may be the best on the planet. That's not really even an overstatement. So what is reverse engineering? Well, what is engineering? Building something, right? So you have some requirements, you make a design from those requirements, you implement it, and then that gets compiled into some code that gets executed by the CPU. The reverse engineering process is going backwards: starting from the binary and essentially working your way up to understand the logic of the program. So I've seen Fish do this. He also doesn't know I was gonna say this, so I don't know if he's watching; that's what happens when you have somebody else teach your stuff. I've seen him just scroll through binary code and translate it to C in his head, on the fly. You can ask him about this: he's been doing this since he was eight years old, when he got into understanding how Windows works, and Windows, as you all know, is closed source. You don't have access to the source code.
So you have to reverse engineer to understand what's going on. So as part of that, he finishes his PhD, I believe this summer is when he defended, and he's just starting as a brand new professor here at ASU. His research is focused on systems security, which actually downplays it a little bit: it's really about developing tools to automatically find vulnerabilities in binary applications. So he's a core member of the hacking group Shellphish, which is mainly UCSB's hacking team, one of the longest running teams to compete at the Olympics of hacking, DEF CON CTF, which I'll mention in a second. But Fish was also part of a team that created what they call an autonomous cyber reasoning system. That was DARPA's idea. Do people know what DARPA is? They're a government funding agency in the United States that funds crazy things. One of their grand challenges, back in the early 2000s, was: can you build a car that can autonomously go from point A to point B? Which now seems kind of silly, because we have autonomous self-driving cars driving around Tempe, but back then that was a crazy idea. Can you even do it? And so back in, I think it was 2013, 2014, DARPA had the idea to do a Cyber Grand Challenge, where the idea was: there are these human hackers who are incredibly good at analyzing a binary, finding vulnerabilities, writing exploits. Why can't we have machines do it? And so DARPA put on this competition for teams to create an automated cyber reasoning system. I didn't pull up any pictures here, but I'm sure Fish will share super cool pictures. Every team essentially had a mini supercomputer, and when the contest started, they cut all the cords, so there was no outside communication. And Shellphish actually came in third place overall.
And if you talk to them, one of the cool things was they were actually first place on attacking: they had the cyber reasoning system that found the most vulnerabilities and exploited the most things. But their defensive strategy was very poor. There was a whole game theory aspect of patching your binaries to make them harder to exploit, but patching would cost you points, and they were patching indiscriminately. I believe the results were re-run, and if they had not patched at all, not done any defense, they would have ended up first. So this is awesome. And one of the really cool things that came out of that project is this open source tool called angr, which, if you go look, is essentially an open source binary analysis framework. It's all written in Python, it can do all kinds of crazy stuff, symbolic execution, it is very, very cool, and it's actually used by a lot of companies and in a lot of other projects. He's published a lot of papers at some of the top tier security venues. So yes, Fish is awesome. I'm super excited that you guys will get to take a class from him. All right, so I'm Adam Doupé. I actually don't have any slides on myself because I can just talk about myself. So let's see: I do know where I did my undergrad. I did my undergrad, master's, and PhD all at UC Santa Barbara. I did their equivalent of a four-plus-one program, got my master's degree, and said, I'm done with academia, I'm never coming back. I went to Microsoft, worked a full-time job as a software developer, and then decided, ah, I really like doing research. So I went back to UCSB for my PhD, was there for four years, graduated in 2014, got a job here at ASU, and I've been here ever since.
So my research actually intersects a lot with Fish's, in the sense that my PhD was on automated ways to find vulnerabilities in web applications, either through source code analysis or through black box interaction. So we have a lot of overlap. We're essentially hackers at heart: we love breaking things, we love finding vulnerabilities, we love writing exploits. And the really cool thing about turning that into research is thinking about how you can automate it, how you can turn it into an automated system, so that you don't have to hire a hacker who costs, you know, $500,000 a year for a pen test one time a year. You can actually automate that process. Any questions on me, my background? I've also taught this class, I think, three or four times. But I'm not teaching this class. Cool, okay. So, a little bit of background that will give you more insight into Fish's mindset and mine. Fish played in this as part of Shellphish. As you will come to learn, this class has a lot of hands-on security learning, and you will do a lot of capture the flag contests in class. For those that don't know, at a high level, it depends on the exact type, but essentially in a capture the flag the organizers write custom binaries or custom applications that contain one or more intended vulnerabilities. Every team gets them, so you have to study your application, identify the vulnerabilities, write an exploit, and fire the exploit at every other team to steal some of their data, called the flag, which you submit to the organizers for points. This has actually turned into a big thing, where there are CTFs essentially every weekend; you can go to CTFtime.org and see that these CTFs are constantly happening. And the kind of world championship of these CTFs is DEF CON CTF. DEF CON is this underground hacking conference that happens every summer in Las Vegas.
And I say underground because, so it's, I think, a four day conference, there's a Friday, Saturday, Sunday, and to get in it's only like $260, compared to any regular hacking conference, which can be in the thousands of dollars. So it's pretty cheap, and they only accept cash; they're trying to keep things as anonymous as possible. There's a culture of not taking pictures of people unless you have their permission, so that kind of stuff is discouraged. At DEF CON, one of the big events is this capture the flag event. It's kind of an invite-only thing, where there's some kind of qualification event. This year, quals was in May, and the top 24 teams from that qualification event were invited to play in DEF CON CTF. This is all in person in Vegas. I think the Italian team flew in 33 team members from Italy to play the CTF. And so the cool thing was that Fish was playing the CTF, and I was actually part of the group organizing it, the Order of the Overflow. We were creating all these challenges and making sure that the game was fun and could be played by everyone. So I want to share a little bit of this. This is kind of what the room looked like. We're the organizers, here in the center, kind of running everything. And then there were 24, you can think of them like squares of tables, all around us with all the teams, and 24 ethernet cables that were strung and taped along the ground to each of those teams, which was a huge pain, and which we had to do ourselves. Luckily we had some great volunteers to help us. And I actually don't know if you can see him; I tried to find a picture of Fish, and he said he took no pictures of himself during the CTF. So while we were organizing the game, they were playing. And I should mention that most of the organizing team were ex-Shellphish players.
So I used to play with Shellphish when I was at UCSB. Ex-Shellphish, ex-Shellphish, ex-Shellphish, ex-Shellphish, ex-Shellphish, ex-Shellphish. That guy in the back that you can barely see actually won DEF CON CTF one year, I think around 2005. So at the closing ceremony, and I should mention that the CTF goes for a total of basically 48 hours: it was ten hours on Friday, and then we shut the network down; ten hours on Saturday, and then we shut everything down; and then four hours on Sunday. But the teams, of course, would still work on challenges during the night, while we were busily, crazily fixing infrastructure, also during the night. So it was a cool cat and mouse game. This is us at the award ceremony. If you see a bunch of zombies in this picture, it's because, I actually did the math a while ago, and sometimes there's math you shouldn't do: I did the math to figure out how long I slept from Thursday morning to this point on Sunday, about 6 p.m., and the total was like 10 hours. Thursday night we were all coming together trying to get things to work, so we pulled an all-nighter. Friday night we had to completely rewrite our patching system because it wasn't working, so I got three hours of sleep. And the next night I was like, I'm gonna die if I don't sleep, so I slept like six hours. So yeah, at this point we're all a little bit out of it, because it takes a lot of work to put on something like this. And you've gotta imagine, when you invite 24 of the top hacking teams in the world into your network, weird stuff happens. Oh, and I didn't put the winning team on the slide. The good news from our perspective is that Shellphish did not win, because that would have raised some flags in the community, these ex-Shellphish players organizing and then Shellphish winning. So we were actually very happy about that.
So anyways, I think that frames this course a little bit, as I'll talk about in a second. If anybody has any questions on that, I'd be happy to talk more. Yeah? This is only related to DEF CON, but why did you release one of the challenges like five minutes before the closing time? We just didn't sleep that night. Yeah, exactly, that's how it goes. So organizing is a tough mix of deciding when you release things. We know teams are gonna do stuff overnight anyway, so you have a rough plan of when you're gonna release things, and if there's something that we know teams can work on over the night that they don't need the live system for, that's something we may or may not do. It's kind of the nice thing and the curse of organizing this: you're part referee, because whatever you say goes. It turns out some of the teams were making too many requests, so we started unplugging their cables from the network, and when they'd come complain, we'd say, well, you're making over 60 requests per second to the service that we've told you is very slow, so stop doing that and we'll plug you back in. At the same time, you're kind of building an obstacle course, building a lot of puzzles. So maintaining that balance is tricky. And then you're making decisions during this competition on no sleep, which also adds to the fun. Cool, okay. And I should mention, I don't have a picture here, but the winning team was DEFKOR00T, which is a combined group from South Korea and Georgia Tech. The really cool thing is that the winning team gets eight of these black badges, these DEF CON badges. You can see the badges here, like the ones we were wearing to get access to the conference itself. A black badge gets you access to the conference for life, for free. So it's super highly sought after; these black badges are really prestigious. And actually, there's no monetary prize in this contest.
It's just bragging rights and black badges. Cool, okay. So now I want to talk about security, which is why, hopefully, we are all here in this room. Has anyone ever run into an error or bug or failure? Yes. Has anybody not? Has anyone never written code with a bug or error or some kind of failure? That happened to me one time. It was very late in my programming career, where I wrote just a 50 line Python script, ran it, and it worked. I was very worried; I triple checked those results, because nothing should work on the first try, right? So software apparently has errors, bugs, and failures. What are some of the kinds of things that you've seen in terms of errors, bugs, failures? Yeah? Seg faults. Seg faults, so what does it mean when a program seg faults? It tries to read or write memory against what's permitted. Yeah, it reads or writes some memory that it wasn't permitted to by the operating system, or executes memory it shouldn't. Anybody else ever run into any bugs? A lot of null pointers. Null pointers? Yeah, so it's a similar thing, where you're trying to dereference memory that's located at address zero, which does not exist; that's a null pointer. Yeah? Stack corruption. Stack corruption, what does that mean? Buffer overflow, something like that. Yeah, so there are a couple of different ways. You could do a stack overflow, where you call a recursive function too many times, use up all the stack space, and the program blows up and crashes. Or you could accidentally overwrite a buffer on the stack, which, as you'll eventually see in this class, clobbers some really important memory that's located on the stack, and causes the whole thing to blow up. What else? NaN exceptions. What was that? Not a number.
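Since we're on stack overflows: here's a minimal sketch in Python. Python guards against runaway recursion with a recursion limit instead of letting the process trample the stack (a C program would just keep going and segfault), but the failure mode is the same idea.

```python
import sys

def recurse_forever(depth=0):
    # No base case: every call consumes another stack frame.
    return recurse_forever(depth + 1)

sys.setrecursionlimit(1000)  # keep the demo small
try:
    recurse_forever()
    crashed = False
except RecursionError:
    # In C this would exhaust the stack and likely segfault;
    # Python turns it into a catchable exception instead.
    crashed = True

print(crashed)  # True
```

The key difference from C is that the runtime catches the overflow for you; in a native binary, that runaway recursion corrupts or exhausts the real stack, which is exactly the kind of crash we were just listing.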
Not a number, those are super fun. Nobody's mentioned my favorite Python exception: Unicode decode errors, where you're mixing up bytes and strings. That was super fun. And they come in all kinds of forms. So has anybody seen this, or is this so old that nobody's ever seen it? What's great about that? The one with the smiley face? Yeah, the one with the smiley face. This one? Yeah. Which one's friendlier? Both are equal? Yeah, you just don't want to see either. So what does this mean? How is it different from what we were just talking about? This is a kernel panic. It's a face. It's a kernel panic; what does that mean? The operating system had an error. Yeah, the operating system had an error, which means what, for you? You need to restart. Yeah, it means something really bad went wrong, right? Your operating system is this layer that's talking more or less directly to the hardware, and your applications run on top of that, talking to the operating system to do stuff for them. So when your operating system crashes, basically everything goes haywire, and it's definitely not good. And it could be a hardware failure, it could be some software thing. Have you all heard the story of the supercomputers built by, I'm gonna say, government agencies? I don't remember which one. This was told to me by a computer architecture professor. I want to say the army, but I don't remember which branch. So somebody commissioned these two supercomputers. One was up in, let's say, the Colorado, Denver area, and one was on one of the coasts. I don't know, let's say Santa Barbara for ease of use, although I know that's wrong for sure. And one of the systems had three times the error rate of the other system. So what do you do? You're in that situation, and that's clearly not okay, right? So what's probably a likely cause? A bug.
A bug in the software? But you double check the software, and it's 100% identical. Particles coming from the sun? Wait until we get there; nobody jumps to particles first. A network issue? I think this is pre-network, so we'll say all the code was verified, the data was verified, it's all local, no networks. A hardware issue? Right, you think something is broken with the hardware. So they replaced and tested every piece of hardware in the other system. Eventually what they worked out is that because the system with the higher error rate was at a higher elevation, it got more, I believe it's gamma particles or something, I'm not a physicist, so don't quote me on that, but particles from the sun that would randomly flip a bit in the computer and cause it to crash. And because it was at that higher elevation, more particles would make it there than would make it to the lower elevation. So they put lead shielding above both of them, and the error rates dropped to be identical. Which is absolutely insane to me. And everything runs on computers; just think about that. So yeah, maybe you got unlucky, maybe it wasn't your lucky day. And the crazy thing is, think about all the memory in your system: if you choose one bit at random and flip it, it's probably not gonna do anything. But you choose the wrong bit, and everything crashes, and you can get one of these blue screens of death. Or the newer, handier one, where it says your PC ran into a problem and needs to restart, which is actually much more useful. The old one, I had no idea what it was telling you, because it's too hard to tell you. Scan it, scan it. I'm not gonna scan it, but you can scan it. So the interesting thing here, reflecting on it, is that one of the problems is, if your operating system just straight bites the dust and can't do anything, all it can do is show you this screen.
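To make that one-wrong-bit point concrete, here's a toy Python sketch (the values are arbitrary): flipping a single bit can either barely matter or completely wipe out a value, depending on which bit it is.

```python
def flip_bit(value, bit):
    # XOR with a one-bit mask toggles exactly that bit,
    # the same effect a stray particle has on a memory cell.
    return value ^ (1 << bit)

x = 64                 # 0b01000000
print(flip_bit(x, 0))  # 65: the low bit flips, value barely changes
print(flip_bit(x, 6))  # 0: the one set bit flips, value is wiped out
```

If that byte happened to be part of a saved image, you'd never notice; if it was part of a pointer or a kernel data structure, you'd get one of those crash screens.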
How does Microsoft know what to fix? They have no information about what went wrong, and they can't debug it. So now they actually have a cool thing where they collect information about the system and upload it to them, so they can figure out why it crashed and prevent it in the future, which I think is pretty cool. What kind of secrets are they uploading? That is a good question. That's an entire line of research, what other information about your computer goes along. It's kind of a trust thing in some sense, right? Do you trust that they are not going to misuse that information? Personally, I'd rather have my machine not crash in the future. It's one of those situations where it's good for everyone if everyone does it, but the incentive for you individually is to not do it. I usually get these from crappy drivers. Yeah, it's usually drivers. Has anybody ever plugged in some random device, it doesn't work, so you go to the website and download the drivers, right? You know that driver is running essentially as part of the kernel, so if the driver crashes, the entire OS usually crashes. Microsoft has actually done a lot of work on making those drivers better, safer I guess, but it's still difficult. But yes, that is oftentimes the problem. Not to pick on Microsoft, of course. I'm not a Windows person, I'm a Mac person, and I definitely see this Mac error message: you need to restart your computer. It's kind of nice, it gives it to you in multiple languages, in case the first one isn't the language you're using, and then it tells you to just reboot. And don't think that only the Windows and Mac kernels can panic and crash; you can get this on Linux as well, and it's a way less useful error message.
And I've definitely seen this. There was a time, I think this was 2013, 2014, when I was trying to build my own little mini cloud out of random systems we had laying around the lab. I was running a super early version of Docker on them, creating containers, killing them, creating containers, killing them, and over time you'd just get crazy kernel lockups, CPU lockups, whatever. Man, that actually made me negative on Docker for the next four years. I've only just started using it again and accepting that it's mildly stable enough that I can actually use it. Cool. So, okay, so we could say errors, bugs, and failures are something that doesn't work as expected, but where do they actually come from? The programmer? The programmer, yeah. So the programmer, oftentimes, but in what sense? Do they deliberately sit there and write, if Adam presses this key, then the computer should crash? It's either a misunderstanding of the system that they're programming, or a failure to anticipate certain issues with the architecture. Right. So one thing to think about is that the code is the code, right? We know this, this is a master's, a graduate level course. The CPU only does exactly what you tell it to do and nothing more. It doesn't magically think, oh, I should go fetch this other data, or I should go execute this other command. It does exactly what you tell it to do, so if you tell it to do something wrong that it doesn't know how to handle, that can cause a bug or some failure. So, thinking in terms of "doesn't work as expected," a couple of ways to think about this are in terms of behavior. We'd expect that a program, if we give it arbitrary input, would not crash. Is that valid, more or less? Do we agree with that?
What if I give a program a terabyte of data? It should work, but maybe; it depends on my requirements, right? If I'm going to create an in-memory index of every word in that terabyte of data, there's probably no way I can do that on a normal system. So I need to understand the requirements to understand what correct behavior even is. So even something as simple as "does the program behave correctly" can be kind of tricky to pin down. What about incorrect execution results? Let's take an example of that. So we've talked about crashes; we've actually focused a lot on crashes, which seem to be a predominant theme. I've taught 340, so I think I've seen every possible crash that a C program can have. Actually, one of the cool things about teaching that class for a while is when you see something new that you haven't seen before, and you're like, how did you get it to do this? It's just impossible, I've never seen this before; out of like 300 students, you're the first. So what do we mean by incorrect execution results? Yeah? Like an integer overflow or underflow. Yeah, an integer overflow or underflow, right? So maybe the program does some incorrect calculations. Let's use a super simple example: say the integers are capped, let's use a byte, so it's capped at a byte. The largest value of a byte is what? 255, yeah, for an unsigned integer. So if you're counting up how many people made a request to your webpage using a byte, as soon as it gets to 255, when you add one, it becomes zero, right? Which may not be what you want. What are some other examples? So that's one that takes advantage of how computers represent numbers versus how we programmers think about numbers. Do you do an integer division? Say that again? Integer division?
Ooh, integer division. You wanna elaborate a little bit? Like when the result is floored when you divide integers. Yeah, so that's a classic Python 2 mistake, which I have definitely run into many times, where you're dividing two numbers, you expect the answer to be a float, but your language runtime actually truncates the result and keeps it as an int. That can lead to errors, especially if you're doing it in a loop over some calculation, and the error accumulates, and at the end you get this horrible garbage result that means nothing relative to your input numbers and what you wanted to calculate. Any other examples? Equality checking. Ooh, equality checking on floating point numbers. Yeah, that can get super weird. That's evil. What was that? Evil. I like that. I was reading a blog post by a person who was trying to fix a bug in a game that appears if you open the game after the system's been on for a period of days, which we didn't used to do; back in the 2000s, we used to turn off the computer every night. But this game would query the performance counter, and the number got so big that when it was cast to a float it lost all its precision, so the time steps were all off. That's crazy, and a super funny bug. And that's actually upsetting: I went to the Apple Store, I think three or four months ago, and I was like, this phone is slow, I don't understand. He's like, you should be turning it off once a day. I'm like, no, I should not. I refuse to do that. You should make a phone that doesn't need that. This is 2018. All right, cool. So, backdoors: how would you define a backdoor in terms of behavior? Who hasn't said anything today? I can call on people; I'm not gonna be here long.
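Before we get into backdoors: the numeric mistakes we just walked through can all be sketched in a few lines of Python. The 8-bit counter wrap is simulated with a mask, since Python's own integers don't overflow.

```python
import math

# 1. An 8-bit counter wraps: 255 + 1 becomes 0, not 256.
counter = (255 + 1) & 0xFF
print(counter)  # 0

# 2. Integer division truncates. In Python 2, 7 / 2 did this silently;
#    in Python 3 the // operator floors on purpose.
print(7 // 2)   # 3, not 3.5

# 3. Floating point equality is unreliable because of rounding error.
total = 0.1 + 0.2
print(total == 0.3)              # False
print(math.isclose(total, 0.3))  # True: compare with a tolerance instead
```

The first two are the machine's representation leaking through; the third is why accumulated rounding error, like in that game's timer, eventually produces garbage.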
An application has admin privileges, and the programmer hasn't restricted it so that only admin users can access them. That's an example of a backdoor. Ooh, but okay: if it's functionality that only an admin should access, and only admins can access it, how is that a backdoor? Wouldn't that just be functionality? Right, if the design says there should be this admin section where admins can do things. There's no specification saying there should be such admin access, but there still is super-user access. Perfect, yeah. So one example, to go back to the other one: let's say you have this admin page. Has anyone ever accessed their home wifi router? What do you usually put in when you first boot it up? admin and admin, or admin and admin123; you can look up the default, right? So there are hard-coded values for the username and password. So if you had an admin page that said only admins should be able to access it, but there's a default admin/admin user account in the system, you can definitely think of that as a backdoor. And actually, Fish can tell you, he's done some work on automatically analyzing firmware binaries to identify these kinds of hard-coded backdoors and hard-coded authentication methods, which is very cool. Cool, so yeah, backdoors, I think we can argue, are bad: functionality that somebody has injected into a program that makes it do something it definitely shouldn't be doing. What about performance? Why is performance important? Do we care about performance? Yes, you should. Why do you care about performance? Imagine you're a software engineer, you're building a product for a company; why do you care about performance? Right, so, exactly.
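Going back to that router example for a second: a hard-coded credential backdoor can be sketched like this. Everything here is made up for illustration, the user database, the hard-coded credential, all of it.

```python
# Hypothetical user database for the documented login path.
USERS = {"alice": "correct-horse-battery-staple"}

def login(username, password):
    # The documented path: check against the real user database.
    if USERS.get(username) == password:
        return True
    # The backdoor: a credential no specification mentions, hard-coded
    # by the vendor. Anyone who extracts it from the firmware gets in.
    if username == "admin" and password == "admin123":
        return True
    return False

print(login("alice", "wrong password"))  # False
print(login("admin", "admin123"))        # True: the backdoor works
```

Because that second check is baked into the binary, it shows up as a comparison against a constant string, which is exactly the kind of pattern automated firmware analysis can go hunting for.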
So fundamentally, at the end of the day, what are you building a program for? Yeah, some user, somebody to actually do something, right? And if it's not doing what it's supposed to be doing in the time that it should be doing it, then why did you spend all that time and effort and money building it? You can build the best, most complicated algorithm possible, but if it takes multiple months to run... let's say, oh, this is a good example. Let's say you are incredibly smart, which all of you are, and you've decided, I'm gonna build a system that's gonna predict tomorrow's stock market prices. It's gonna use all the historical data, tweets, blog posts, everything available, to predict stock prices. The catch is, it takes a week to run. At that point, it's not even worth running, because by the time you get the results, they're so out of date that they're useless. So yeah, you can also think of performance problems, and you should, in terms of bugs and errors and failures. Anything else? Cool. So this is what we mean by something that doesn't work as expected, but then, as we've been alluding to, there's this question of: expected by whom? In terms of what should happen, and these are things we've just talked about, it may not work as expected by the programmers; there's some unexpected behavior, like we talked about with dividing two numbers and getting an integer instead of a float. That would be something the programmer didn't intend. Some more interesting cases are third party manufacturers or collaborators. Somebody may buy your software and not know that it does something they don't want it to do. So who would care about stuff like that?
End users. So end users may not want, well, we actually just had an example earlier: a crashed system on Windows automatically uploading your crash data after a hard failure. Some people may not want that, and they see it as unwanted behavior. And do companies write all their own software that runs on their own systems? No. Why not? Yeah, software reuse: it's cheaper to just pay for somebody else's stuff than to spend a long time building something usable yourself. What about things like the military? Outside software. Yeah, they buy outside software. They oftentimes have to, because it's way too expensive to build their own, right? And you think about somebody like that: do they care what behaviors are running on the systems they operate? That's a good answer. How about this: they're terrified about it, and they would really want ways to limit some of that functionality, or to rip out parts of that functionality, while still having the main core run. Cool, okay. So we talked about it a little bit: why do errors and bugs and failures exist? Why do I still have a job as a security person? People make mistakes. People make mistakes, yeah. What was that? There are errors that occur because of things people can't anticipate. Yeah, so there are a ton of different reasons, right? I think fundamentally, when you boil it down, for me, it's humans. We're humans, we write software. I don't think we'll talk about it in this class, but you know the halting problem. The halting problem, at a rough high level, says you can't create a system that, for any arbitrary program P, will say whether it halts or doesn't halt on a given input.
And it turns out that writing a program that can find all the bugs in another program is the same problem as the halting problem. We can't solve the halting problem, so we can't make an automated tool that will find all of the bugs in a program that we care about. Which is good news for security people: there are always going to be bugs. And as you'll see, when you think about all of these systems that you're using, security errors, bugs, and failures come at different layers, and they usually come at the interaction between different layers. So, fundamentally, right, humans are the root of all evil in terms of software bugs. If it weren't for us, I guess there'd be no computers, but there'd also be no computer bugs. And so maybe it's something like we said: programming mistakes, or programming misconceptions. Maybe the programmer thinks they're doing things correctly, but because of the exact semantics of that language runtime, maybe on that particular operating system, it's gonna behave differently than they expect. You can even get into cases where people jump to blaming cosmic rays for their bug. So when your program's not working right, what do you blame first? Your code. You, you should blame you. Anybody who said differently has far too great an ego about their own code. You should look internally, look in the mirror, look at your code: there's probably a bug there. One time out of a thousand, it will be the compiler actually having a bug. And probably, I don't know, one out of a thousand of those times, it'll be some hardware thing. If you find a hardware thing, that's crazy. But again, right, if the compiler has a bug, well, humans wrote that compiler, right? And so there's a bug in there that causes your program to have a bug.
But now you can start to see the complexity that we're dealing with, right? You would have to code your program with no bugs, and you'd have to use a compiler that doesn't have any bugs. And so the core idea here is that programmers make errors, which cause a bug in a program. So I think of bugs and vulnerabilities as kind of latent. They're sitting there in the program; maybe it's some code path that never gets executed, but it's still there. And then when you trigger that bug, when you execute it, that actually causes the failure, right? So that's when, if it's in the kernel, we'll see the blue screen of death because we crashed the kernel. If it's in an application, we'll get a segfault. If it's a vulnerability that we've properly exploited, then maybe we get total control of that program and get it to do whatever we want it to do, which ideally doesn't involve crashing it. Okay, cool. So this is a way of thinking about these things. This is why some of the best security people are good developers themselves, or have developer experience: they understand the developer's mindset when they're writing code. So when they're analyzing it for vulnerabilities, they can try to identify, oh, this is where somebody likely made a mistake, like the division with integers, right? You say, ah, that's a common problem in Python code; I should check every division instance to see if this could potentially lead to a vulnerability. So are all bugs security bugs? What's the difference? You know it when you see it. Security bugs... say it louder. Yeah, so security bugs can compromise the user data. So let's say there's a bug that allows an unauthorized user to compromise user data; then that would definitely be a security bug, I agree. Yeah. Bugs that can be leveraged to do something that the programmer does not want to happen.
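The integer-division pitfall mentioned above can be shown concretely. A minimal sketch (the function name is just for illustration):

```python
def average(total, count):
    # In Python 2, / between two ints silently truncated: 7 / 2 == 3.
    # In Python 3, / is always true division: 7 / 2 == 3.5.
    return total / count

assert average(7, 2) == 3.5   # true division (Python 3)
assert 7 // 2 == 3            # floor division must now be explicit
```

A programmer who assumed the Python 2 behavior (or vice versa) gets a silent logic bug, exactly the kind of latent mistake an analyst would go hunting for at every division site.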
Right, okay, so bugs that allow... say it again. A user to do something that the programmer specifically does not want. Right, okay, so a bug that allows someone to force the program to do something that the programmer didn't expect. So kind of this idea of unintended behavior. Yeah, so it's nebulous in a sense, and I think that's getting to a good definition. I like to think of it as: a bug is a mistake in the code. A bug could lead to all the things that we talked about: crashes, incorrect execution results, back doors. But it may not matter. If you have a payroll bug that allows somebody to change their salary and get paid a million dollars, that would be a huge problem. If it's, let's say, I don't know, what would be a weird bug... a bug that adds an extra letter to somebody's name in some system? It's like, ah, that's weird, but it's not gonna bring anything down; it's not a core security vulnerability. So I think of it as really program dependent. It depends on what that bug allows an attacker to do. So for instance, if it allows them to compromise user data, like we talked about, then that would clearly be a problem. And the way I like to think about this is, it's not just the buggy behavior itself, right? So if I told you I have the ability on a certain website to edit any page on that site, is that a security bug? Why does it depend on the site? Do you own the site? Exactly. I may own the site. What about Wikipedia? Right, I don't even need a guest account; I can go and edit any page on Wikipedia. That's clearly not a security vulnerability, but the same behavior on cnn.com would clearly be a security vulnerability, right? So context is incredibly important in thinking about what the application's intended behavior is, right? So you can think of it like a set, right? Just an abstract set.
What is this program supposed to do, which you may have to reverse engineer by interacting with it, and then: can I get the program to do something that it wasn't supposed to do? That's a security vulnerability. So this is actually something that comes up a lot when people report security vulnerabilities to a bug bounty program: they'll write up this whole report, like, I can do this... oh, here's a good example. So one time I taught, I think it was 591, my very first class I taught here, and I told the class: here's a hacking challenge, it's a web hacking challenge. I gave them a shell on the system, like a user account that they could log into, because that's what you needed to do some levels. And I said, if anyone can get root on the system, you'll get an A on the assignment, you'll get extra credit. I probably didn't specify well enough what root is: the superuser account on Linux, like the admin account on Windows, is the root account. So I meant: if you can elevate your privileges from a normal user to the root user, that would be a huge security vulnerability, and I'd reward that appropriately. And a student, very quickly after the class, emailed me a screenshot of them doing `ls /`, like, I can get to the root, the root of the file system, right? And if you misunderstand and think you've earned extra credit, you're gonna try for that extra credit. So I had to explain, and part of that was on me for not explaining, but it's about understanding: the application clearly intended that you could poke around the file system and look at the root of the file system. So that was the intended behavior versus the unintended behavior, cool.
Okay, so for software security: now think about bugs, think about the universe of all possible bugs in a program. Only some subset of those will actually be security bugs, and basically for the reason we just talked about. So with security errors, they're always made by a human: some human makes some mistake, it's a security bug, it's exploited, which generates a failure, and as a result, and this is the key, right, it's what we talked about: the security policy of the system is violated. And again, that depends on the system, right? The security policy of Wikipedia does not say nobody can edit the web pages of Wikipedia; it's actually explicitly the opposite, right? One of the policies of Wikipedia, though, is that anyone can edit a web page, but there needs to be a history of all the changes that have been made. So if you were able to edit a Wikipedia page and erase that history, then that would be a cool security bug. So you see how it's pretty nuanced depending on the site that you're talking about, right? If you came to Wikipedia and were like, I can edit your pages, they'd wave you away, that's not a bug. But if you say, I can edit your pages and no log will show up in the history, I can change it without any logs, that's something that would be really interesting. Questions on this? For security, you've gotta lay this groundwork, right? Because we're gonna train you to be little hackers, well, not little, like, you know. You're all grown, I hope you're all adults. If you're not, that's still cool. I should probably just stop talking at this point. But we're gonna train you to be security professionals, so you need to understand what things are actually bugs and what things are security bugs. So, okay. How long does this course go till? Five fifty. Five fifty? Fun, okay.
So when we think broadly about security, we typically think about three security properties that we want to guarantee about a system. So we talked about one with regards to user data: only authorized users should be able to access their data. So how would you describe that? Privacy? Privacy, yeah. So privacy, or the other way to think about it is in terms of confidentiality. You wanna keep data confidential to only those people that should have it, right? What are some other properties? Is that the only security property that we'd ever want? What was that? Integrity of the data. Integrity of the data. So what does integrity of the data mean? Yeah, right, and that's actually not the same thing as confidentiality, right? So if I can, let's say, edit your bank account so that you now have zero dollars instead of what you had before... maybe I guess for some that would be good, but for most that would be bad. And the attacker doesn't even need to know what your previous balance was, and they don't care, right? All they care about is that they dropped your balance to zero. So they violated the integrity of your bank account without actually violating the confidentiality of your bank account. So confidentiality, integrity: is there anything else we care about? Separation of privileges. Separation of privileges. I would ask, how is that a security property? Because I suppose by getting different privileges, you can violate either of the other two properties. Right, so I would say separation of privileges is more of a design principle for how to build a system such that it provides these properties. You could, I guess, technically design a system without any separation of privilege if you are very, very careful. But yeah, it's a good additional defensive layer. Attack resilience. Attack resilience. So, no one user should be able to hog all the resources.
What do you mean, hog all the resources? Why do you care about that? So, for instance, if you are running a web server, a single person opens multiple connections and essentially drains all the TCP sockets on the machine. But why is that a security problem? It's more of an availability problem than a security problem. Is that the same, is that different? No, it is different. I think so. Denial of service: is that a security problem or not? Yeah, you could halt the whole stock exchange. It depends on the application. Depends on the application; in general, yes. So in general, availability is definitely considered the third security component. And the reason is, if the system is not available, then it's not doing what it's supposed to do, right? So for some systems, it might as well be useless. So yeah, actually a really good attack on availability that I have to talk about... I've heard of stories. So you think about spam filters on email, right? They're pretty good, right? They do all this complicated detection to catch spam messages. But there's a service out there, on probably underground forums or whatever, where you give it an email address and it'll just send random messages, messages with content that's complete gibberish. So why would that be useful? Will that get past spam filters? Yes, there's no way to... literally everything is random. So why is that a service? Why does that thing exist? It overloads your inbox. What was that? It overloads your inbox. Yeah, you can overload somebody's inbox. So I've heard a story of hackers breaking into a bank. And what they did, right when they were gonna do it, is they used one of these services against the security team at the bank, filling up their inbox with garbage so that they wouldn't see the alerts that they were setting off.
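One common mitigation for the socket-exhaustion scenario described above is a per-client connection cap. This is a minimal sketch, not from the lecture; the class name and the limit of 10 are illustrative assumptions:

```python
from collections import defaultdict

class ConnectionLimiter:
    """Toy per-IP connection cap to blunt resource-exhaustion DoS."""

    def __init__(self, max_per_ip=10):  # illustrative limit
        self.max_per_ip = max_per_ip
        self.active = defaultdict(int)

    def allow(self, ip):
        # Reject a new connection once a single IP already holds too many.
        if self.active[ip] >= self.max_per_ip:
            return False
        self.active[ip] += 1
        return True

    def release(self, ip):
        # Call when a connection closes.
        self.active[ip] = max(0, self.active[ip] - 1)
```

A real server would combine something like this with OS-level limits and timeouts; note it does nothing against a distributed attack, where each source IP stays under the cap.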
And so you can think of that as an attack on the availability of those inboxes, right? You're not breaking into them, you're not deleting emails, but you're filling them up with so much garbage that they can't actually get to the good information, which I thought was very cool. So I always remember it as CIA: confidentiality, integrity, availability. You can add in other ways to think about it, and especially when you look at the history of computer security, it came from a military background, because they were the first people to care; they care very much about confidential things. They don't really care about privacy, right? They care about confidentiality: making sure that sensitive and top secret documents are kept that way. They also care about integrity: making sure that the data's not modified or changed. Another way to think about that is in terms of consistency, that it's in a consistent state. And availability is another big one. So this is what you should be thinking about in your head when you're looking at the security of any kind of system, and these give you pretty broad goalposts to work towards, right? So like I mentioned, everything needs to consider the application that you're testing, but at the same time, if you can understand, let's say, the confidentiality security policy of the application, and you find some way to violate that, that's a really cool exploit. So for instance, there is this crazy story, I wish I could find it... I don't wanna make up details, which is what I'm definitely gonna do, but somebody found a bypass of the passcode page on an iPhone. They found that if you open the camera and then go back, and then go right back, really quickly, it would sometimes just let you in. And so that's a clear confidentiality breach, right?
Because you don't want people who don't know your passcode to get access to your phone. So there was a clear case of that being violated. Cool. I think when the iPhone first came out, that was the issue: you could get to the address book to make a call, you could edit contacts and add a URL, you tap the URL and it opens the web browser, and then you can exploit WebKit. Interesting. So this was always a fun game too. If you go to a hotel or something and you use their computers that have a super locked-down version of Windows, it was always a fun game to figure out what you had to do to get to the real Windows, because it's just running some applications. Sometimes it was as easy as Windows-R for the run dialog, then you type CMD and now you're at a prompt. Or, I don't know, there are all kinds of fun ways: help files that link out to somewhere, and yeah. Anyways, all that kind of stuff is bypasses of security mechanisms, right, that are trying to gatekeep you. Or another one that I'll mention briefly that you can actually look up: Dropbox had this issue that somebody noticed, I think in 2011, or it may have been before that, I don't remember. Somebody realized... you've probably done this before, when you're logging into a website and you realize you've typed in the wrong password right when you hit enter, so you know it's gonna fail. Except they did this and it logged them in. So they logged out, put in their email, put in a random password, and it logged them in. So they told Dropbox immediately, and Dropbox had a fix within like a half hour or an hour. But it turned out somebody had pushed some code that messed up the check, so the password checker just wasn't checking the password.
So Dropbox was actually really good about it: they have a whole blog post where you can read exactly what went wrong, I think they may have even included a snippet of the code, and then they ran an analysis on everyone that logged in during that period, including everyone who logged in with a wrong password, so they could identify the accounts that were actually affected. So it's super interesting. But yeah, these security bugs can happen at any time, whenever any code changes, right? So we're gonna use the term vulnerabilities instead of security bugs, because that's a little bit more technical. I guess I shouldn't speak for Fish; I'm much more of an academic, so I like being precise with terms. So, what's the difference between a vulnerability and an exploit? A vulnerability is a bug, and an exploit is taking advantage of the vulnerability to do something. Yes, so it's exactly what we talked about with bugs and failures, right? The bug, the vulnerability, exists in the code until it's triggered, until it's exploited. So the exploit is the actual act of using that vulnerability to get something, or to make the program do something it's not supposed to do. This is one of these pet peeves of mine: people throw around the terms vulnerability and exploit when they mean the other thing. So just be very precise in your mind about what you're talking about and what you mean, and you'll make me happy in the future. Which shouldn't be your goal, but you'll make yourself happy, how about that, cool. Okay, so we touched on this a little bit. Vulnerabilities don't have to be unintended behavior, right? They could be expected behavior, in the sense of what we talked about with weak passwords, right? Or default passwords.
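Dropbox never published the exact faulty code in full, so the snippet below is a hypothetical illustration of the general failure mode described above: a code change that silently guts an authentication check so any password succeeds.

```python
import hashlib
import hmac

def check_password_broken(stored_hash, attempt):
    computed = hashlib.sha256(attempt.encode()).hexdigest()
    # Hypothetical bug of the kind described: a bad refactor drops
    # the comparison, so every login attempt succeeds.
    return True

def check_password_fixed(stored_hash, attempt):
    computed = hashlib.sha256(attempt.encode()).hexdigest()
    # Compare the actual hashes, in constant time.
    return hmac.compare_digest(stored_hash, computed)
```

(Real systems should use a slow, salted password hash like bcrypt or argon2 rather than bare SHA-256; plain SHA-256 is used here only to keep the sketch self-contained.)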
This is something that's expected, right? It was created. Somebody, you think about every router, right? Somebody had to write the code that puts in that default username and password of admin/admin, right? That is expected behavior from their standpoint. And when you consider the use case, it actually makes a lot of sense, because you have end users, probably non-technical end users, that are gonna be setting up these wireless networks. Do you want them to have to, I don't know, read a sticker on the back with some crazy random thing, or is it just an easy admin/admin way in? You may even have... so what's an example of conflicting security policies? If you're writing a client and a server, and the server thinks the client is trustworthy, and their security policies are mismatched, the server might send data expecting the client to protect it, or the client thinks, oh, the server's gonna send this encrypted. Yes, so actually there are a lot of good examples. The web is this way by default, right? On the web, you have a web application running some server-side code. Every time you make a web request, that code generates HTML with JavaScript, and you can think of the JavaScript code in the browser as client-side code. So you have server and client there. The server might be assuming that the client is doing certain checks, like making sure the input is not over some length, or doesn't contain certain characters, and just blindly trusting that input. But you can make any request to any web server at any time. Or, another way to think about this that I just thought of: I used to do this as a kid. Mom and dad had different security policies on "can I have a glass of chocolate milk before bed," and mom would say no, so I'd go to dad, and he'd say yes, and then I'd start drinking chocolate milk and I'd get in trouble, but I didn't understand.
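The "never trust client-side checks" point above can be made concrete. A hedged sketch of server-side re-validation for a username field; the 32-character limit and the allowed character set are made up for illustration:

```python
import re

MAX_LEN = 32  # illustrative server-side limit

def validate_username(raw):
    """Re-validate on the server. An attacker can bypass any
    client-side JavaScript checks and send arbitrary requests,
    so the server must enforce its own rules."""
    if len(raw) > MAX_LEN:
        raise ValueError("too long")
    if not re.fullmatch(r"[A-Za-z0-9_]+", raw):
        raise ValueError("illegal characters")
    return raw
```

The whitelist approach (allow only known-good characters) is generally safer than trying to blacklist known-bad ones.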
I asked one of them! And then you learn which one to ask first, right, so you don't get into trouble. Yeah. I don't know if I'd call Heartbleed a conflicting security policy, but that was the server trusting data the client supplied. I mean, you could think of it that way. It's a pretty common buffer over-read, I would say, trusting the client to send the length properly. But I think that occurs so many times, with buffer overflows in all kinds of situations, that I don't think... I think this is more like what you were talking about: clients and servers that trust the other side to do something, without realizing that somebody else can make that request. So that's a good example, cool. Okay, so we talked about this a little bit, but now we're gonna go deeper, because this is fun. So: are all software faults security bugs? Some of them can be, right? We talked about availability, right? Has anybody ever seen this on iPhone, and I'm sure something similar exists for Android, where somebody would find a bug in the rendering engine for some font, and so if they sent you a text, or if you looked at a tweet, it would cause your phone to crash? And these were terrible until you updated your operating system. So you can think of that as an attack on availability, right? Because if you can do this and send this to people, that could be an availability attack. Okay, so how can we be secure writing software? Write better tests. Does anybody write tests? Nice. How good are your tests? Do you know how good your tests are? All your tests pass. You need to have somebody else review it. Have somebody else review it. Follow best practices. Follow best practices. Try to break your own code. Try to break your own code, which can actually oftentimes be difficult, which may be why you need somebody else.
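The Heartbleed-style over-read mentioned above boils down to trusting a client-supplied length field. The real bug was a C buffer over-read in OpenSSL; this is only a toy Python model of the information flow, with made-up names and a made-up buffer layout:

```python
# Toy server memory: the 2-byte payload sits right next to a secret.
SERVER_MEMORY = bytes(b"hi" + b"SECRET_SESSION_KEY")

def heartbeat_vulnerable(payload, claimed_len):
    # BUG: echoes claimed_len bytes, trusting the client's length
    # field instead of checking the actual payload length.
    return SERVER_MEMORY[:claimed_len]

def heartbeat_fixed(payload, claimed_len):
    # FIX: refuse requests whose length field doesn't match reality.
    if claimed_len != len(payload):
        raise ValueError("length field does not match payload")
    return SERVER_MEMORY[:claimed_len]
```

Asking the vulnerable version to echo a 2-byte payload with a claimed length of 20 leaks the adjacent "secret" bytes, which is the essence of the attack.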
Has anybody ever had this experience where you're staring at the bug in your code and you're like, I have no idea what the problem is? And then somebody walks over and is like, well, you're missing a semicolon right there. And you go, ah, how did I not see that? Yeah, what else? Automated testing. Automated testing. So yeah, you can get pretty far with that and get some good assurance. So all of these are about, I would say, increasing your assurance that the software is secure. But let's say you wanted... huh? Yeah, so you can try to manage risk by doing some kind of least privilege. You can separate your code into modules where each thing does one very small thing. I mean, software runs on airplanes and on things that land on the moon. So how did they do that? Very carefully. Actually, that is the truth, right? They do all kinds of tests. They also write code incredibly slowly: I think it's something like five lines a day, or five lines a week, something like that. So you can write it incredibly slowly and have it triple, quadruple reviewed. You can have multiple teams develop the same things: break your system into components, have different teams each write their own copy of the same component, and only use results when they all agree. Yeah. Or theorem provers, like Dafny. You could try to prove your code correct, which is incredibly difficult and still in its infancy, I would say. Dafny is one of those kinds of tools. Exactly. And so how much does it cost to write software in a way that it can run on a plane, or run a plane, I guess I should say? A lot. Yeah, I don't know exactly, but it's a lot. You can look up the numbers; it's something like thousands of dollars per line of code.
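The "multiple independent implementations that must agree" idea above is known as N-version programming, and the voting logic can be sketched in a few lines. This is a minimal illustration; the function name and majority rule are assumptions, not from the lecture:

```python
def n_version(implementations, *args):
    """Run several independently written implementations of the same
    component and only accept a result when a strict majority agree."""
    results = [impl(*args) for impl in implementations]
    for candidate in set(results):
        if results.count(candidate) * 2 > len(results):
            return candidate
    raise RuntimeError("no majority: implementations disagree")
```

The hope is that independently written versions fail independently, so a bug in one copy gets outvoted; in practice, teams often make correlated mistakes, which limits the technique.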
I mean, I actually can't remember off the top of my head the exact organization that certifies an organization to develop security-critical code at various levels, which means they do all of these things we're talking about. So, let's say you do that and you write perfect code, perfect software. Is your thing secure? Depends on what it's running on top of. It depends on what it's running on top of, right? Why is that a problem? Yeah, so we talked about the operating system, right? The operating system is at the layer below all of your applications. So if I have an exploit against your operating system and I can see all your application's data, do you care that your code was bug free? Do your users care that your code was bug free? No, they don't care, exactly. So, okay, people are also going to use your software, right? So, anybody remember Windows 7? So, Windows 7, for those that don't remember, was this really big Windows release after Windows XP that... had a really bad security posture by default? Vista. Oh, Vista. Oh, sorry, I'm taking it farther back. Vista, yeah, okay, thank you for that historical correction, great. So, with Windows Vista, they said, okay, we're going to have this crazy idea where users won't be administrators on their computer by default, right? And who better to make the decision of whether to run a piece of software as an administrator than the user? Better than having Microsoft decide, right? Have the user decide. And this was touted as this revolutionary system that was going to be amazing and solve all these kinds of security problems. And technically, it's very good. I mean, the stuff they had to do to make this work is probably insane. So, what happened when they released it? It's too complicated for the users. It's not that it's too complicated for the users, I wouldn't say. They turned it off. I would say they didn't even turn it off. So, what happened?
Yeah, they call it user fatigue, or alert fatigue: every time the user tried to run something, an alert box would pop up saying, this program wants to run as administrator, do you want to allow it? And of course they'd say yes, because they're trying to do something. And they keep doing that over and over again. So when they download a piece of malware, and malware.exe is trying to do something... run as administrator, do you want to allow this? Yes, they just click yes. And so here you have technically very good security protection mechanisms, but you're not realizing that these are human beings using your system, and maybe the average computer user of Microsoft Windows can't make a security decision about whether something should run as an administrator or not. So, you need to have perfect users, right? Is that even enough? No, we just talked about that, you need more. So, not only that, you need to configure the software perfectly, right? So, this is actually a story that I tell. I think it was probably the very first Linux server that I was adminning, one of these $5-a-month things that I used to run a Ruby on Rails website I was creating. And I got to some bug or some problem that I didn't understand how to solve. And so, I did a `chmod -R 777 /`. For those that don't live and breathe Unix commands, I set the permissions of every single file on that thing to readable, writable, and executable by every user on the system. And let me tell you, my problem went away. Everything just worked; the thing that was not working worked. And then I logged off, everything was working, and I tried to log back on, and it told me I couldn't log on, because my key was no longer valid, because my SSH authorized_keys file was world writable. And so then I had to make a ticket with support, and the poor person there was like, your permissions are all weird.
And so they fixed it for me so I could get back in. But yeah, that was a good life lesson. So your software can be super secure, but if an admin chmods everything to 777, then you're gonna be insecure, right? Because the configuration, how your software is actually deployed, really makes an impact on the security. So, yeah, to actually do this, right, you need to write perfect software, have perfect users, configure the software perfectly, have a perfect operating system like we talked about, and maybe even go down further: now we're having problems in CPUs, right, with Spectre and Meltdown. Is that where we get to that? I don't remember. And memory. Yeah, Rowhammer attacks, which basically allow an unprivileged user to flip bits. It makes sense when you really boil it down: a hardware memory stick is just a series of electrical cells that each keep a charge of one or zero. And it turns out, I think it's reading the rows around it really quickly, the two rows next to it or something. I don't remember all the details, but if you hammer the rows around it, you can actually cause some bits to flip in the middle row, in memory that you weren't even touching or manipulating, which is insane. So that's Rowhammer. Yeah, you would need to go all the way down and build hardware from the ground up that was completely perfect. So the idea of this exercise is to show that it's essentially impossible; I don't think we will ever get there. You could also use a Faraday cage. A what? A Faraday cage. You can shield it perfectly. Yes, but even then you'd have side channel attacks through basically electromagnetic radiation, I mean, stuff. So yeah, all kinds of stuff.
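The chmod story earlier illustrates why sshd refuses to use a world-writable authorized_keys file: if anyone can write it, anyone can add their own key. A minimal sketch of that kind of permission check; the function is illustrative, not sshd's actual code:

```python
import os
import stat

def is_safe_key_file(path):
    """Return False if group or others can write the file, the same
    kind of check that rejects a world-writable authorized_keys."""
    mode = os.stat(path).st_mode
    return not (mode & (stat.S_IWGRP | stat.S_IWOTH))
```

After `chmod -R 777 /`, every file fails a check like this, which is exactly how a "working" misconfiguration locked the account out.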
I mean, there's a paper on how to extract, you know, encryption keys via fan noise. Because the fan noise is a side channel on the power usage of the CPU, which is highly correlated with what operations are being run. Wasn't there also one about keystrokes from Wi-Fi? Oh, keystrokes, there are a lot. There are a lot of different ways to get keystrokes. You know, even your phone here on the table while you're typing: the gyroscope, picking up the vibrations, mapping that to your keystrokes. All kinds of things. Nothing is secure. You're kidding? No, I'm just kidding. Yeah, so even that's not enough. So now you need to be running a perfect hypervisor. And we can talk about firmware, right? You already know that there's something that runs first on your computer, right? When you turn your computer on, there's a little piece of BIOS that actually runs. How do you know there's not a back door in there that's selling all of your data? I don't remember if it was Lenovo, but someone used something like this to bypass HTTPS. Probably, I would not be surprised. So that they could put their own ads in the pages. That's crazy. I would not doubt it. Was it Lenovo? I don't wanna call anybody out right now, but sure. Yes. Always go with Lenovo. Cool, okay. And so that would be the perfect world, and it turns out the real world is even easier to attack: you don't have perfect software even to begin with. So, does anybody work for a company? Oh, nice, wow, a lot of you, okay. Yeah, so you may or may not be aware that everything doesn't always go according to plan. Companies may not actually be deploying any effective security protections. The administrators may not even be applying patches for known vulnerabilities.
Sites don't monitor or restrict access internally, so as soon as you compromise one of the machines, say a developer machine, well, developer machines are pretty powerful, but even a machine belonging to somebody in a different organization, like human resources, now you can get access to the Git server and grab all that source code. So yeah, the real world is definitely very crazy. And on top of all of this: Spectre and Meltdown, have people heard of these? So, if you've taken a computer architecture course, you know and understand that when there's a branch instruction, fundamentally the CPU does not know which branch is gonna be taken, so it would have to wait until the current instruction is done executing to decide what to execute next. Modern CPUs use pipelines, where they're doing little pieces of each instruction at a time so they can do things very quickly. So what do they do to solve this problem of not knowing where branches go? Branch prediction? Yeah, so they try to predict which branch will be taken. They actually have a dedicated piece of hardware that does that, and then, depending on which one they think is gonna happen, they will speculatively start executing it right behind the other instruction. And if their guess was wrong, then they flush those instructions, get rid of them so it's like it never happened, which is how CPUs should work. The problem people found out is that those speculative instructions actually start messing with the cache. So you can start inferring information by using the cache to figure out what speculatively would have happened. It's a whole thing. You should read the write-ups on Spectre and Meltdown, because even the hardware is being crazy, yeah. You said it was messing with the cache, but how is that visible to the user? As I recall, it was visible through a timing side channel using the cache, but yeah.
And there's multiple variants, and I honestly haven't kept up with all of them, but they're all crazy. I believe there's even a NetSpectre now, one of these has a "net" prefix, where you can do this kind of timing thing over the network. And it only just came out. Yeah, see, everything's messed up. Like, you can't even trust the CPU itself anymore. Cool, okay, so we talked about this, so I wanna get to the actual course stuff. So, how to develop secure software. We talked about this a little bit. There are a lot of different ways to increase assurance; you will never be able to write perfect software, right? But it's very easy to write really crappy, very insecure software, and so the idea is to use some of the best practices we talked about: testing, peer review, pen testing, internal pen testing, blue team review, red team review. These are all good ways to increase your assurance that there are no security problems in your code. Okay, how much time do we have? Until 50? Okay, so we're gonna go through this quickly. So, basically the idea of this course is you're gonna be thinking about analyzing systems: are they designed securely? Are they implemented securely? Are they deployed and configured securely? And security analysis is essentially difficult to automate because it requires world knowledge, right? Like we talked about, you need to understand what the system is supposed to do, the intended behavior, so you can detect the unintended behavior. And it also requires this testing mindset, which you definitely need to start developing, of thinking, okay, how can I break this, and what would it do if I did that thing?
And I also think of it in terms of being a scientist, right? You have some hypothesis: if I do X, then the system should break and give me access. So you do X, and it didn't give you access. Now what? What part of your hypothesis was wrong that you can rethink to try a new approach? That's really what it is; you're just continually going through that loop. Okay, so for this course, the goal is that you will learn to identify design and implementation vulnerabilities in systems, network protocols, and applications, at all those levels. You'll learn about protection and detection mechanisms and techniques, and you will absolutely be learning by example, meaning vulnerabilities and how to exploit them. Fish says the devil's in the details. My philosophy is that you don't actually know how security works until you put your fingers on the keyboard and actually do it. So I can stand up here, Fish can stand up here, and teach you about buffer overflows till we're blue in the face, until you understand buffer overflows, and then we take you to a computer and say, okay, exploit this program. Then you realize it's much more difficult in practice than it is in theory. So figuring everything out, and then when you actually get it working, it's one of the best feelings in the world, it's awesome. So I'm very excited for you all. You will be learning what cyber attackers do in real life: infiltration, stealing valuable information, achieving persistence. These are all things you're going to be learning in this class, and learning by practice, by playing against each other in capture-the-flag events, how to combat those attackers. So: looking at traffic, auditing systems, understanding the state of things. The skills you're going to learn are the ability to understand and assess the security implications of networked systems (they're all terrible, that's a hint), and the ability, after you leave this course, to perform a security analysis of a system.
So essentially teaching you how to pen test a system and understand it, and to understand and contribute to research on the topic. This is an outline of the course. I'm not going to go into it; there's stuff that Fish definitely wanted me to cover, so let me do that. This is all going to be super cool stuff. Cool things: you're going to be a real hacker. All of you have it in you. Hacking is not about leet speak or, I don't know, pretending to be cool or something, or deliberately pretending not to be cool, like the opposite. Being a hacker is all about knowledge: understanding how a system works and how to make it do something it's not supposed to do. So it's based on knowledge. You're all smart people, and with hard work you will be able to learn that knowledge. There will be eight in-class competitions as well as a final competition. You'll work in teams of four to five, and you'll have to prepare for each competition in advance. Fish is going to release the services beforehand so you can actually work on them beforehand, and then afterwards you'll analyze your performance and do some self-assessment to understand how you did and how you can get better for the next competition. So, some important things that Fish wanted me to mention. Requirements: you will need very good programming skills, C/C++ and Python. Basically, you need good C or C++ skills, because a lot of the binaries are written in C or C++; you need to be able to read it and write it. Python or another scripting language; Python is a very good one. You need to be familiar with bash-style shell scripting. You should have a good understanding of basic operating system concepts, processes, file systems, memory management, plus basic networking knowledge. And I will say, from Fish, that it's not a hard class, but the workload will be heavy. Take that as you will.
If you're not ready for a heavy class... so heavy means it's gonna involve work, right? There's time and effort in this course. It's not hard in the sense of being an impossible course; I don't know, it depends on your background, I guess, and your personality. We're not gonna be proving P equals NP or anything like that. Everything is doable, but it requires effort; I think that's the key here. And this is the same way I run 545 too. I get about 40 students out of 130; no, I think I went from 170 to 130. So if you're not ready for a heavy class, drop it. This is not a show-up-and-get-an-A course. It's a, I mean, not hardcore, but an intensive, hands-on course. If you're willing to put in the time and the effort, I think everyone here can do that and get an A, but don't expect an easy workload. Okay, quickly: the grades are not curved, which is what Fish wanted me to mention. There will be no curving; there will be a grading scale up that you'll be able to easily see. Here's the grading breakdown, I'll let you look at that later. It's mainly homeworks, which are projects done on your own, a midterm, a final, the in-class hacking competitions as a group, and the hacking final. Miscellaneous: the course is all gonna be run on Piazza. Please feel free to ask questions online, in class, by appointment, or in office hours. Fish really believes that part of being a good hacker is going out and learning new things, so be proactive and take advantage of the resources you have, meaning Fish. Don't ask me any questions after Fish starts talking; I like you all, but I have my own class to teach this semester, thank you. The TA is Mohsen. Mohsen, stand up and wave. So Mohsen, you'll get to know him really well. Let's see, okay, very quickly, I'm not sure if this is actually permitted in this room. You can ask questions at any time.
Oh yeah, he wants no phones and no laptops during the lectures, FYI, and he is totally gonna say: seriously, no phones or laptops during lectures. Obviously not during the CTFs; you will know well in advance when those will be, so don't freak out. Let's see. Okay, the other important things are logistical, and we'll probably figure those out as we get a little closer. Okay, any questions first? I know we're over time, and I appreciate your patience so I could finish this. I think we had some good discussions. Okay, yes? Is his curriculum gonna be the same as the curriculum that you usually use? I don't know how to answer that. The way I'll phrase it is: every professor teaches a class in their own way, so he's not me. Yeah. Yes? But do you know what the homework will be? Do not know. Do not know. Yes? Is there a book for the course? A book? I think no. There'll be a syllabus up very soon. He's actually, I think, traveling to DC right now for an end-of-grant thing. If you have a couple more questions, you can come up and talk to me. For the CTFs you will need your own laptop. If you don't have one, please talk to me so I can let Fish know and we can figure something out. We'll definitely make sure that happens. There you go. Awesome. It'll be a fun semester.