All right, folks, let's rock and roll. Cool. So yeah, I can see y'all making a lot of progress on the assignment. That is really great to see. Even those of you who have just started are definitely ahead of the game. The thing that worries me is that the number of people in the system is only 356, and we have about 500-ish students in the class. So if you haven't started yet, please start; you're only hurting yourself. I can say it till I'm blue in the face, but I'll remind you once more: it's going to be much better for you if you start early. That way you can ask questions. There's been a ton of discussion on Discord, and discussion on Piazza. There are lots of resources out there. Ian had a review session last week where he made several videos going over various concepts. I think those were super helpful. So you've got all the resources you need; you should definitely be able to get through the challenges. But again, the longer you wait — if you try to speedrun this at, what is it, Friday at 11pm — I think you're going to have a bad time. So don't do that. All right, you're all adults, make your own choices. Cool. Now we get back to the fun stuff. Since class last week was taken up completely with assignment one, now we go back to the overview. So remember, we're talking about the three components of security. I know if I don't ask you this on Thursday, you won't remember on Tuesday. What were they? Integrity? Availability? Confidentiality? Okay, I'll pretend that somebody said that. No, I think somebody did say that. Yeah, those are the three aspects of security. And then we can try to enforce them in a system using security policies and security mechanisms. So what's the difference between a security policy and a security mechanism? Yeah, so a mechanism is the technical aspect that tries to help enforce the security policy. So the mechanism could be a lock, and the policy covers who gets access to the key and what they can do with it. Can you copy the key? Those are all policy-level things. Awesome. Cool. So now we're talking about the correctness of a security policy. We touched on this briefly, so I'll just go over it. There are different ways to define a policy: we can define a policy in natural language like English, we can define a policy using mathematics and a formal language definition, and we can use something in between, like a language that is dedicated to policies. But why do we care whether a security policy is correct? Yeah, because if it's not correct, then what's the point of having the policy, right? If our policy says that we need to lock the front door every time we leave the house, but there's a back door that's unlocked that the policy doesn't say we need to lock, then what's the point of our policy? It's not actually accomplishing our security goals. So we always want to think about our assumptions, and we want to be very explicit about them, because we want to understand in what context our policy is correct. So we can say, okay, we should lock all the doors and windows of the house every time we leave the house. But what is that assuming that could impact the security there? Yeah, that somebody's only going to try to break in when we're not there. That's a great assumption. What else? We assume those are the only ways in. What else? That people follow the policy? Yeah, what else? The locks?
Yeah, we're also assuming that the security mechanisms do what they're supposed to do and can't be broken. So yeah, these are great. We made a lot of assumptions, and being explicit about these assumptions is very important. That way, somebody can read your security policy, and maybe your analysis, where you say, hey, I believe this is correct assuming these things, right? And then you can layer further defenses. So beyond all those things we talked about, maybe if we had automated cameras doing motion detection inside the house, those would be another layer. We'd say, hey, if the mechanisms on the doors fail, if somebody forgets to lock the door, we have a second layer of defense, these cameras inside. Of course, they have their own assumptions, right: that they can actually detect people, that they're on, that the power is on, that the internet's on. These are all things that could impact that policy. One of the main things is we assume that the policy is correct, and that the mechanism actually correctly enforces the policy. This is always something to think about. And the other aspect we always want to consider here is trust. Why is trust important? Yeah, so we want to think in terms of: what does our policy trust? Who does it trust? And what if those people violated our trust? That way, we can maybe build other mechanisms, other policies, to counter the possibility that, say, there's a malicious insider. And again, this is another place where we want to be explicit. We want to say, hey, we're trusting the locksmith who installs our locks not to just keep a copy of the key and then later come back and break into the apartment or the house, right? So there's trust built into the system. Similarly with the motion detection cameras: we're trusting that there's not a backdoor in there that somebody put in. We're trusting the people in the house to follow the policy. So these are all things we want to think about, and we want to ask: is this the appropriate level of trust for this policy, for this situation? And mechanisms, we want to think — wait, have you already covered this? Oh, why didn't you say anything? You just love hearing me talk about this stuff again. All right, that's fine. Cool. Okay, yeah, the procedural stuff, we talked about that on Thursday. That's good. Look at that, we're way farther along than I thought. Okay, we got to assurance. Yeah, that makes sense. Cool. Okay, so we talked about the security of the policy, and we talked about the security of the mechanisms. Hopefully whatever I said was exactly the same or very similar to what I said the last time I talked about this — who knows, I just stand up here and talk about things. So, assurance. This is one of the key notions of the course — the course is named Information Assurance — and it's the question of how to trust that the system is secure. Yeah. One word: don't. Sure, don't trust that the system is secure. Yeah, that's great. Why not? There will always be a loophole. Yeah, there is highly likely to always be some kind of loophole. So does that mean you should stay up at night worrying all the time about the security of your house? Somebody was mentioning in the chat that all this talk has made them paranoid about their own house. No? Why not? Say again? It's life. Oh, it's life. Yeah, life is full of unassurances.
That's one way to look at it. So it kind of depends on your personal view of how well you've mitigated the possible threats, right? It's kind of a weird situation to be in. Take the policy of the house, where we said, aha, we have keys and we have to lock the door every time we leave the house, and that's the only mechanism you have. You might say, I can sleep soundly at night, because if somebody breaks in, it's not really my responsibility — I implemented this policy and this mechanism. But if somebody were to break in, people could come to you and say, hey, what were the assumptions you made that the system was secure? How come you didn't have other layers of defense here to try to detect when somebody broke in — the motion sensor cameras, those kinds of things? Again, this is kind of a nebulous balancing act, because, as we talked about, we can always dump more and more money into something. So it's difficult to say when something is secure. And so the idea of assurance is this notion of: how do we trust that the system is secure? Maybe another way to think about it is: how can we convince somebody else that our system is secure? So you could say, yeah, I don't know, I just bought these locks off eBay and installed them myself, so I think we're good to go in this house. That's one way of approaching it. Versus saying, hey, I hired expert locksmiths, they were highly recommended, we got the best locks, we thought about all possible countermeasures, we thought about threats X, Y, and Z. We have mitigations for some of them; some of the other threats are outside of our budget, and so we're not going to worry about them. So part of this notion of assurance is being able to convince somebody else that you have thought about the security of this system holistically. You've thought about all the threats, you've thought about the mitigations, you've thought about the costs of those things. So can we quantify this? Can you say, I have 90% assurance that this system is secure? Put it another way: if you're selling a security solution and somebody came up to you and said, aha, I have a camera system that you can install in your home that'll make you 20% more secure — should you buy that? Based on the statistics? How did they come up with that number, the 20%? Interesting. So you could do some sampling; you could try to do some analysis to understand the impact of the system on the ability of the system to be compromised. That may or may not be representative for your area, because it doesn't necessarily tell you whether a targeted attacker who wants to get in there can get in there, as opposed to just background — I think of it like background radiation — background criminality. Yeah, that's interesting. And then there's the even trickier question of all the people who got broken into and didn't know about it, who had the system and didn't know about it. So there's a detection aspect there. This is actually one of the most difficult concepts in security: how to quantify this with a number that people actually trust. And especially given context — maybe you could do that kind of study, but it'd be hard to say, well, what about that system and these locks? Or what about that system, these locks, and a security patrol that drives by your house or neighborhood every hour, or whatever.
So how does the combination of these things work to improve your assurance of the security of the system? This is really difficult. So imagine you're on the job. You're the CISO; you're in charge of securing an organization. You go to the CEO and you say, hey, I need $10 million to implement this security solution. They go, okay, great, we definitely don't want to be attacked or compromised. How much is that going to improve our security? And you go, well, more than it currently is, but I can't guarantee that we're never going to have an issue. And so it actually becomes very difficult inside an organization to argue for more resources. The counterintuitive aspect — I mean, it's both counterintuitive and very obvious — is that the budgets of security groups don't increase until there's been an incident. It's only after they've been compromised that the organization goes, oh shoot, maybe we should fund this thing. Which is kind of backwards thinking, and it's counterintuitive in the sense that if you are the security organization, you kind of have a weird incentive to get exploited or hacked, because then your budget will increase after the fact and you can actually do the things you want to do. Not that anyone actively thinks that way, but it's kind of interesting. And so, like I mentioned, the concept of assurance is pretty nebulous, and it depends on all sorts of factors: what type of analysis was done, what types of threats were considered, what types of threats were not considered. Yeah, somebody mentioned pen testing. So one of the ways I could try to see if my house is secure is to hire a bunch of criminals to break into my house. What if they can't break into my house? What does that tell me? It could be that my security is perfect, or it could be that the criminals are bad and not good criminals. Or maybe they're really good criminals, because they just took my money and didn't do any work, right? Or, yeah, even better: they figured it out, didn't tell me, and they're going to break into my house later. Another thing to consider: say they do find something, and they say, aha, we were able to get in because your second-story window was unlocked. Okay, so you fix that; you now make sure you lock all your doors and windows every time you leave. Is your house secure now? No. Why not? Yeah, who's to say they found everything, right? That's the nature of a pen test: it can only show you the holes that the pen testers found. And part of what you actually do is define the parameters of what they're allowed and not allowed to do. So you'd probably say, hey, physical destruction of property — we understand the threat, yeah, somebody could get through our walls, but we don't care about that as much; we're interested in other types of things. So you work with them. And at the end of the day, if they show you bugs or vulnerabilities, that is something you obviously should fix. But once you fix them, you can't just think, okay, I'm done, everything's safe, and we're secure. Still, it's an important part of the assurance process: if a system has never had a pen test, you might have less assurance in it than in a system that has very frequent pen tests.
So that's usually how I think of assurance: are we increasing our assurance, and how does the assurance of one system compare to another, depending on what has been done? You can do a relative comparison — I trust this system more than that one. Actually, a good example is probably Microsoft Windows. Back in the early 2000s, Windows security was a huge joke. They had remotely exploitable kernel vulnerabilities, and that created this whole family of worms that would scan the network for vulnerable systems, exploit a system, and once on there, scan the network for more vulnerable systems and exploit those. So they would propagate exponentially throughout the internet, and they actually took down big chunks of the internet for days — Code Red and Slammer. And at the time, Microsoft kind of realized, huh, we haven't even been thinking about security. We just made this operating system, connected it to the internet, and boom, everything blew up. So they took a huge re-examination and came up with a concept called the Security Development Lifecycle — the SDL, I think — which basically integrated security into every aspect of the software development lifecycle, from design to implementation to testing, trying to identify vulnerabilities as early as possible, all kinds of stuff. And over time, this led to them actually having a much more secure operating system. It took years and years of effort. But now, if you find one of these vulnerabilities that remotely gets into a Windows machine, that's worth millions of dollars, because of the time and effort they've invested into trying to prevent these types of things. So that's how I think about assurance. Okay. And we can think about it in all phases as well. Start with the specification phase. The specification is: what is the system supposed to do? Why do we care about security in the specification phase? Yeah, Brian mentioned in the chat: basically, to not mess with what it's supposed to do. So yeah, remember, part of security is understanding what the system is supposed to do. One of the key examples I like to give: if I told you I could edit any content on a top-100 most visited website in the world, would you say that's a pretty cool security vulnerability? I'm an excellent hacker; I can arbitrarily change the content, do whatever I want to it. It could be a feature? Like what? Like Wikipedia. If I showed you Wikipedia and said, look, I can go in and edit any Wikipedia page, you'd just look at me like, well, of course you can — that's the point of Wikipedia. But if I did the same thing to Facebook or Google or any of those sites, that would be a massive, massive security vulnerability, right? Exactly the same behavior, but in a different context, because in one application it's actually designed as a feature, and in the other application it's not supposed to be there. So that's why thinking about and understanding the security design matters here. And even Wikipedia is not a free-for-all, edit-whatever-you-want kind of thing; they actually have mechanisms in place. They can lock things, they can revert things, and there's a whole system of editors that review changes to make sure I can't just go in and change things arbitrarily.
So, understanding what the system is supposed to do. How have you had specifications defined for you in your career so far? You're building something, writing some code — where do the specifications come from? The requirements. So maybe requirements from the users. Where else? Yeah, maybe a rough outline. You can also think of that as almost prototype-driven specification, where you have some prototype that has some of the features — nothing core in there, but kind of a rough outline. What was that? Yeah, some type of best practices. So here, when we're thinking about specification, we're not actually thinking about the policies or mechanisms; we're talking about the system itself. How is the system specified? It could be a UML diagram, it could be all types of things, and it could be natural language. One of the key questions of assurance is: how do you know, once it's all built, that it actually follows the specification? Because, hey, if we say it should do X, Y, and Z, we should test or verify that it actually does X, Y, and Z. If it doesn't do those things, that may be a vulnerability. The specification may say, hey, in our application we have a concept of administrators and regular users, and administrators should be able to do this set of actions, whereas regular users should be able to do this other set of actions. And you could actually define this mathematically, in a form that could be checked, which gets us into design. So specification you think of as: what should the system do? Design is more about: how do we build the system? We design all the software components — think of it as class hierarchies, interactions. And at this stage, what we want to verify from a security perspective is: does the design actually satisfy the specification? So how can we do that? With test cases? Yeah, and what form would these test cases take? An important thing to realize is that these don't have to be test cases in the sense of input to the program and expected output, because at the design phase we may not even have code. We may just have a whiteboard diagram of how things should work. But we can still go through mental test cases: we can walk through the design and say, okay, what about this design is ensuring security properties X, Y, and Z? A cool thing is, if you actually have your specification defined in a formal language, then you could actually prove that the design matches the specification, which is pretty cool — provided you also write your design in a similarly formal language. And then we implement it, right? We build the thing. We usually implement things in code, on top of frameworks, other types of things. And we want to ask: does the implementation satisfy the design, and also, transitively, the specification? So again, how do we check this? Yeah, we can talk to the stakeholders and customers we got requirements from. We can present them with the prototype or the implementation and say, hey, does this actually do what you want it to do? We can also run through our own manual test cases.
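To make that "checkable spec" idea a bit more concrete, here's a minimal sketch. All the role and action names are hypothetical (nothing here comes from a real system): the specification is a role-to-allowed-actions table, and we mechanically check that a design's permission table never grants something the spec forbids.

```python
# Hypothetical machine-checkable spec: which role may perform which action.
SPEC = {
    "admin": {"create_user", "delete_user", "view_reports"},
    "regular": {"view_reports"},
}

# What a concrete design document actually grants (note the deliberate bug).
DESIGN = {
    "admin": {"create_user", "delete_user", "view_reports"},
    "regular": {"view_reports", "delete_user"},  # violates the spec
}

def violations(spec, design):
    """Return (role, action) pairs the design grants but the spec forbids."""
    return [(role, action)
            for role, granted in design.items()
            for action in granted - spec.get(role, set())]

print(violations(SPEC, DESIGN))  # [('regular', 'delete_user')]
```

A real formal-methods tool does far more than set difference, of course, but the shape is the same: when both spec and design are written in something checkable, "does the design match the spec?" stops being a matter of opinion.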
From a security perspective, like we talked about, we can hire a pen testing firm to do either source code analysis or just black-box testing to try to identify security flaws. We can also do it ourselves, and I think the key thing here is coming up with negative test cases. It's very easy and very common to test the positive case — like I think I mentioned before: hey, when I log in as a user with my username and my password, I get logged into the application. That's a positive test case, right? It's testing the expected behavior. What people often forget to test is the negative: when I log in with my username and the wrong password, I should not get in. And if you mess up your checks and accidentally accept all passwords, the negative test will fail, but the positive test will still pass. This is part of thinking like a security analyst: looking at the system and asking, okay, where are the possible bugs? Where are the things that are complex and tricky? Actually, some of the best security analysts are also really good software developers. They've written a lot of software, so they understand the mindset of a developer, and they say, oh, I bet they forgot to do X, Y, and Z, because I know I wrote code that used to do that, or, I bet they're not aware of some quirky language issue. So now we've done all this. Let's say we have reasonable assurance that things are secure. Does that mean our job is done? We build this great thing, we go home? Say it again. Yeah, okay, so you can overlook something small. That's great, but let's assume that I have not — I've fixed everything, I am now perfect, I found everything and fixed it all. Do we go home? Yeah — two things, actually. One, the hardware underneath it. And two, even if syntactically everything is correct, there could be unforeseen threats in the interactions between protocols. Okay. So the first one, yeah, great: hardware stuff. Depending on what hardware I'm running on, I could have potential issues that way. The other thing to think about is the other systems I talk to; maybe that interaction isn't necessarily correct. Yeah. And we may have to worry about code changes — that's a great concept to think about, and it actually comes up with pen testing too, right? Let's say I could hire somebody who can find every single bug or problem with my system. Great. I hire them January 1st, and then I don't hire them again until next January 1st. How much code is being written and pushed to production in that whole year, right? Ideally you'd want every change to be pen tested, but that's almost impossible, so you need automated systems. Yeah, and hackers are humans too, and they learn new things, so there may be new techniques. Okay, let's say I know all of that. Let's say I'm extra perfect and I found every possible issue. Cool. So, maintenance. I think the key thing there is the difference between — let's go with the fire alarm example. Say I have a fire alarm that I test in the laboratory, and I verify: okay, with this much smoke it goes off, everything's perfect. But then when you put it in the house, what if your fire alarm is actually in a place where smoke doesn't reach, so it doesn't go off early enough, or it doesn't have enough ventilation or something?
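Going back to the positive/negative test pair for a moment, here's a minimal sketch of what I mean. The `login` function and its toy user database are made up for illustration; the point is the second test, the one people forget:

```python
import unittest

# Hypothetical login function under test (names and data are illustrative).
def login(username, password, db={"alice": "hunter2"}):
    return db.get(username) == password

class LoginTests(unittest.TestCase):
    def test_positive_correct_password_logs_in(self):
        # The test everyone writes: the expected, accepting path.
        self.assertTrue(login("alice", "hunter2"))

    def test_negative_wrong_password_rejected(self):
        # The test people forget. If a bug made login() accept every
        # password, only THIS test would catch it; the one above still passes.
        self.assertFalse(login("alice", "wrong"))

if __name__ == "__main__":
    unittest.main()
```

Okay — and back to the fire alarm, because that example is really about where the thing runs.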
So the key thing I'm trying to get you to think about is that software is not just written; it needs to be written and then deployed somewhere. It needs to actually run in some kind of environment. Like pwn.college, right, or the cse365 system. That system's code could be perfectly secure, but if the root user on it could be SSHed into with username root and password root, it would be an insecure system even though the code itself is correct: the way it's deployed is insecure. So even if the implementation is perfect, the deployment matters. Actually, my funny story here: I was creating a website when I was doing my undergrad, and I didn't really know what I was doing. I didn't understand Linux; I hadn't had a class on the security and access control stuff that we'll talk about. I was having permission problems trying to install some software, so I just did a chmod 777. chmod changes the file mode, the permissions; 777 means everybody has access to everything. And I did it recursively from the root. So I went through every single file on the system and changed the permissions so everyone could do everything. And I was like, great, now my problem went away. The software just worked and everything was good. Except the next day I tried to SSH back into the server and it said something like: key rejected. And I was like, what's going on? I had to file a support ticket with my hosting provider. It turns out SSH — and I think some of you are finding this out — even the SSH client, if the permissions on your private key are incorrect, won't accept it. Similarly, the server uses the authorized_keys file and checks: hey, if everyone can write to this file, then I'm not going to give you access, because somebody could have put their own keys in there. So I was locked out of my server, and I remember the support ticket: they were like, your permissions are really messed up on this thing. We can fix this issue, but you should probably just reinstall, because this is not a good state to be in. And it's because I didn't know what I was doing. But if you're not paying attention to deployment and configuration, then you're missing a key step of how that software is actually being deployed, and it could be deployed in an incorrect manner. The other interesting story I have is from a pen test — I used to do some pen tests for some companies — where we basically had access inside their network. We were testing scenarios like: okay, we're employee X, Y, and Z, who has access to this thing; let's see what systems on the network we can see. So we ran a scanner over all the systems, all the IP addresses, did port scanning — we'll talk about that later — to see what services were open. We ran, I think, Nessus, which is a vulnerability scanner. And we found that one of their systems had IPMI exposed. IPMI is a way to do remote management of physical servers. It's a management protocol, and it's actually a separate physical processor that runs on the system. The way to think about it: when the system is actually down — think of a big server farm with racks of pizza-box servers — how do you turn those back on without going in and hitting the button on all of them? It's this IPMI thing, which has a separate processor that can control the whole system.
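Side note on the chmod story: here's a minimal sketch of the kind of check that locked me out. This is illustrative Python, not OpenSSH's actual code, but SSH's rule is essentially this — refuse key material that group or other users can touch:

```python
import os
import stat

def key_permissions_ok(path):
    """Reject a private key if any group/other permission bit is set."""
    mode = os.stat(path).st_mode
    # Any rwx bit for group or others means other users on the machine
    # could read (or worse, replace) the key material.
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# After my recursive chmod 777, every key on the box failed this check:
# key_permissions_ok(os.path.expanduser("~/.ssh/id_rsa"))  -> False
```

Anyway, back to that IPMI interface we turned up on the scan.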
Well, we found this, and it had a vulnerability, like a default username and password. So I started looking at it and thinking, oh, this is so cool, I can go in, upload my own OS kernel, get access to the system, and show them that we could take it over. Before we did anything, of course, we talked to the people we were working with to say, hey, this is what we found, this is the machine, this is what we're going to do. And they saw it and were like: no, no, no, don't do that. That's our credit card processing machine. If anything goes wrong and that goes down, we will be in serious trouble. So do not do that. We trust you, we're going to fix it, put it in the report, but please do not test that actual system. And this was a deployment issue, right? The system itself was secure; a lot of the design was. The interesting thing was they asked me, oh, how did you find that? And I gave them the scanning command to use, because they didn't understand — they did weekly scans of their whole internal network to see what was out there, but they used the defaults, which only scan a handful of common ports. I used the option that scans all 65,535 ports, which found this one thing that was open. So that's an example of a deployment problem. Cool. So this is why, usually, when we do pen tests and these types of things, we test the actual real systems as deployed, not a lab environment where everything's perfect. We want to test the system as it's deployed. And many of the things we're talking about here come down to cost-benefit analysis: is the cost of this security mechanism actually worth the protection it provides? We also talked about hardware problems. Have you heard of, or been aware of, the Spectre and Meltdown hardware attacks? Basically, that was a hardware cache-level issue where one process could read data from another process using a shared hardware CPU cache. It's actually really crazy; you can go look at the details. But the way they mitigated it cost, I think, 10 to 15% in performance on all systems that had the mitigation applied, until new CPUs came out with better defenses. And so this is a key example: huh, we can protect against this attack, but all your stuff is going to be 10% slower. Kind of a crazy trade-off to even consider making. But that's a lot of what we think about in terms of defensive measures. Every defense has some sort of cost — it could be time, it could be money, it could be resources. The way I like to think about it: say you have the best detection system in the world running on all of your systems, looking at all of your network packets, and it's logging what it finds every day — but nobody's looking at those logs. What's the point of having that system if you don't invest the resources to actually look at the alerts it gives you? Yeah, exactly. So this is the key idea: at some point, you're putting in more resources than the benefits are worth. This goes back to what we talked about with risk analysis: thinking about what threats this countermeasure addresses, and whether it's actually worth it in monetary terms.
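Since the difference between "default ports" and "all 65,535 ports" is what made that pen test finding possible, here's a minimal sketch of what a TCP connect scan actually does. Only ever run something like this against systems you're authorized to test; the address below is a documentation placeholder:

```python
import socket

def scan(host, ports, timeout=0.5):
    """Try a TCP connection to each port; return the ones that accept."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# A "defaults" style scan vs. a full sweep (real scanners parallelize this;
# done serially like this, a full sweep is slow):
# scan("192.0.2.10", [22, 80, 443])    # handful of common ports
# scan("192.0.2.10", range(1, 65536))  # full sweep, finds uncommon services
```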
Yeah, so, risk analysis. The question here is: should an asset — something that we have — be protected? We want to think about what threats it faces. And is risk constant? Like, I do this analysis once, I say, aha, that system is worth protecting, and I'm done? What's an example where risk can change over time? The system itself maybe changes. Yeah — maybe before we weren't accepting credit cards, and now we are. So we go, oh shoot, our security risk has definitely changed, because now there's a huge incentive for hackers to break into our systems. Yeah, what else? Say it louder. Yeah — there may be something that we didn't know was vulnerable that now everyone knows is vulnerable, but we didn't know we were running it and we didn't patch it. This was the big one — what was it? Experian? I think that's wrong. It was one of the credit reporting companies; I can't remember the name, because they changed their name. Equifax. Yeah, that was the one I was thinking of. Equifax had one system in their network that was a developer system they had spun up for some testing. They didn't even realize it was on and running, but it was running an old version of software with a known vulnerability, and that's what the attackers used as the foothold into the network. The other one I like to think about is the system that holds a company's quarterly earnings reports. Public companies have to release their financial statements every quarter. Why is that important information? It affects the stock price, right? If they beat expectations, the stock price usually goes up; if they miss expectations, it usually goes down. So this type of information is incredibly important before it's public, before the announcement date. But after that date, it doesn't matter. Somebody could break into that system and who cares, because all that data is public anyway. So you can see that the criticality of an asset can shift over time. You can think about it in a military context too: a telephone system is always pretty important, but when somebody's calling in a military strike, it becomes much more critical that that system works. And then when the mission is over, the criticality goes down. So the risks and the threats are constantly changing over time. And even within a company — another thing to think about — is the laptop of an intern who's here for three months the same level of risk as the CEO's laptop? Why not? Yeah, the intern's laptop won't have important information. The intern's email doesn't have any important information. The intern's laptop can't send an email to the CFO saying, I need you to wire $10,000 to this account so I can close the deal. There's actually a lot of business email compromise — BEC attacks — where attackers get into a high-level executive's email account and then send emails as if they were that person, saying, hey, I need you to wire money over here to complete this deal. And people do it. Cool. What other things do we want to think about? Why do we care about laws and customs? We're just talking about security here; why do we care about this stuff? Yeah — we don't want to get arrested. Yes, that's always good.
Especially — we'll be talking about ethics in this course; I think I mentioned it the first day. We don't want any of you to go to jail. And if you do end up in jail for security stuff, don't tell anybody that we taught you this; just say you learned it on your own. And from the security perspective, laws can actually restrict policies and mechanisms. People kind of jokingly mentioned, when we were talking about the house example, creating Home Alone-style booby traps. But depending on the state you're in, that may or may not be legal. Or even the country. For an actual, real, unhappy example: it's generally illegal to counterattack — to hack back — to protect your data. Yeah, that's a super interesting concept, and it actually comes up in a lot of different ways, even in research. Say we're doing some analysis and we see some malware connecting to a command-and-control server. Could someone get control of that server — maybe we know the hosting company — can we do that? It's tricky. Usually the way you approach that is to work with law enforcement to get that stuff taken down. Yeah. And that's another aspect — we'll get into cryptography, and the key question people think about a lot there is: when you put backdoors into encryption, it's all about who controls the keys to those backdoors. If we look at, say, the Snowden leaks, a lot of people would claim, hey, I don't trust the government to even keep those keys secret. So now it's not just a backdoor that the US government can use, but one that anybody who breaks in or steals the key can use. Yeah — and I think I maybe mentioned this, but back in the 90s there were laws against exporting cryptography software. So what people did — because books were protected — was print out the source code to tools like PGP and GPG as a book, ship the book out of the country, and people could type the code back in to recreate the software on their side, getting around some of these laws. So, what's the difference between laws and customs? Yeah, exactly. For instance, I don't think there's any law stopping Apple from publicizing all the photos on your iPhone. I mean, they technically control those devices, or at least have the data after you back it up to iTunes. But it would be highly, highly violating cultural norms if they were to do that, to just leak everyone's phone photos everywhere. It may actually be illegal — I actually don't know, I'm not a lawyer, so don't hold me to that. Once you upload them to their iCloud servers, I don't think you have much ownership of that data. Maybe some of the newer California privacy laws apply, but it's hard to say. Fundamentally, that's the distinction. Oh, and this is an interesting case to consider, thinking about privacy laws. Let's say an administrator at ASU gets a notice from a professor that says, hey, a student has been spoofing my email and sending out emails as me — can you go take a look in their ASU email account? There may be laws or customs that restrict the administrator's ability to do so, right?
And it kind of makes sense when you think about it, because, like we talked about with insider threats and trust, we don't necessarily want that administrator to be able to just search through everyone's emails all the time, whenever they want. It's a little bit different in the university context, I guess, because I think our emails are FOIAable, since we're government employees. But anyway, it's an interesting thing to consider. Yeah. So there are interlocking mechanisms there, right? And the goal is to prevent somebody — a rogue IT administrator, say — from just going and searching through people's emails. It has to be approved: somebody else has to approve it, and then you can't even do it yourself; you have to give access to somebody else. And from your perspective, on one hand you can look at this and say, yeah, but this is getting in the way of me doing my job — it could actually impede, I don't know, a security investigation or something. But these are important things to think about and be aware of. Customs — I kind of like this one. This was a Swedish company, all the way back in 2017, creating a microchip that you could implant under your skin that would do things like act as an employee badge. Anybody working at a company with employee badges has to badge in to get in, right? At ASU we also have ID cards to get into places. And if you've ever forgotten your card at home, you know how annoying it is to not have access to the building or whatever you need. So: just put it under your skin and never lose your card. What do you think? Great, you want to come up here? I'm not going to do it. But that's gross — actually, I hadn't thought about that. What was that? You don't want stuff in your body — just any stuff, or electronic stuff specifically? Anybody pro-microchip? Nobody? This is boring; nobody wants to be a biohacker? You've seen the ones where they pack implants in like that, yeah. I'd be a little worried about that. Let's say it's really dumb — you can kind of see in the picture, it's just a tiny little microchip. Let's say it's not connected to anything, maybe. Well, obviously it has something on there — let's say an RFID chip, something to recognize you. Yeah. So any time you leave the company and go to another company, you either have to have a way to re-flash it, or they'd have to cut you open, take out the chip, and maybe stick a new one in there. All that just to not forget your card at home. The other thing is: who else could read that chip and track you, right? If that's now something people could use to track you, follow you — and yes, I would definitely agree, we all basically carry tracking devices on our persons almost all the time, our cell phones. But if you really wanted to, you could leave your phone at home and just walk out your door, and nobody is implicitly tracking you — unless, I guess, you have electronics in your body. And this same company — I thought this was amazing. So that was an article from 2017.
This is an article from 2021 — the exact same picture, the same company. I think they're just really good at getting press for this thing, because in 2021 they spun whatever they were already doing back in 2017 into a microchip that lets users carry their COVID vaccine passport under their skin. Basically exactly the same idea, but now, rather than having your vaccine card or a picture of it or something, it's under your skin. "Self-described body hacker" — that's pretty cool, actually. So are you just going to start having a bunch of chips put in? Yeah, and then you have to remember: this finger is for this, this one's for that, and if I'm going to the grocery store, my rewards card is in my pinkie. Is it better privacy-wise, yes or no, since it doesn't actually store any identifying information, just the vaccine status? Yes and no. It all depends on where you store that information, right? Because if you're putting it in you, somebody still has to verify that I am me. It's an identity problem: how does the person scanning it know that I'm me? Maybe they take a picture of me and embed the picture in there or something. But like I say, if what they embed in me doesn't have any identifying information, I could cut it out and put it into somebody else, right? So there are all kinds of weird issues. But anyway — oh yeah. The other thing to think about is human issues. Thinking about an organization: who's responsible for security in that organization? This can be people like the CISOs we mentioned. But one of the things that will become more and more clear to you as you go through your career — some of you who are already at companies will understand this — is that the structure of the organization, and the power dynamics there, have a lot of influence over what happens. Many times, when we talk to people who work in security groups at companies, they say: hey, my job's great. Every day I get to look through the company's software, and I find bugs and vulnerabilities that are really bad. But all I can do is report them to the group that's actually developing that software and hope they fix it. And they just tell me, well, that's not really that important, and triage the issue super low — because organizationally, the security group doesn't have any power. Also think about things like budget: how much budget does the security group have? That impacts what we talked about in terms of mitigations. How much organizational power does the security group have? And this can also go in the other direction, of having too much power — people are very good at finding security workarounds when security impacts their job, right? I know people who have worked at companies that don't allow external software, or don't allow you to go to Facebook or something like that. So what did they all do? They would go on Gmail, which had Gchat built in, and chat with each other over Gchat. Or they'd figure out a way to install applications onto a USB drive and plug USB drives in to run applications. All kinds of crazy workarounds, because people, at the end of the day, need to do their job.
So this is a way of thinking about the different aspects of security. And again, enforcing security is a combination of people and systems. I think the classic example of this is Windows again. One of the big problems with early Windows was that any software you ran, ran as your user account, which was also the administrator of the system with full access. This gave rise to a lot of Trojans, where people would say, hey, download this thing, it's a great new game. You'd run it and boom, it had full control of your whole system. Microsoft, of course, realized, hey, that's not a good idea. What if users ran as a non-administrator user? But then, to do things like install applications, well, we'll just ask the user: put up a dialog box that says, hey, application X, Y, Z is trying to do A, B, and C — do you want to allow this or disallow it? And you think: great, we've built this beautiful security mechanism, we're improving security. This was in Windows Vista, when it was called UAC — I think that's User Account Control or something like that, though I'm not certain what the C stands for. The UA I'm very confident of. Okay, cool. So basically what happened is, almost every time a user did something on the system, this UAC dialog box would pop up. They install software: UAC dialog pops up. Software wants to update: UAC dialog pops up. Eventually users get what's known as alert fatigue, where they're so used to seeing this alert that they always click accept. So when they did download a game that was actually a virus or whatever, they'd double-click on it, it would say, hey, this wants to run as administrator, and they'd just say: yes, do it. Because Microsoft developed this beautiful security system without thinking about the people who were now being forced to make these security decisions. One of the great examples I like to give of how security evolves is our cell phones now. Our cell phones don't run everything as one user account — even though under the hood Android is Linux, and iOS is an OS X/BSD variant, every app is actually running as a different user, which helps keep apps isolated from each other and limits the potential damage one application can do. That's thinking it through: then, as a user, the decisions I need to make are about permissions. Of course, that again is a whole other issue, putting those decisions onto users. But at least it's definitely an improvement over the Windows Vista-style UAC dialog boxes. Questions? Cool. All right, we're going to roll right into access control, so stick with us. Sweet. So we talked about the overview of security — the CIA triad, policies, mechanisms, all that fun stuff. Now we want to think about access control. We'll bring it back to an example. A university's academic integrity policy disallows cheating. It could literally be any university — our university's policy also disallows cheating. This includes copying homework, with or without permission. So some CSE class has students do homework on a shared server. We can say it's like general.asu.edu.
So it's a shared system. Everybody has access to it; each user has a different user account on the system. Some student A forgets to read-protect their homework file. Student B then copies the file and turns it in. So who violated the policy? Who did something wrong, if you want to argue more from a moral perspective? B? Who gets in trouble? Well, it's a harsh group. So, whoever runs the server? Why — what did the people running the server do wrong? So maybe we can actually put some culpability on the people running general.asu.edu: they messed up because they made this situation possible, where a student can create a file that's readable by everyone. Oh, the people who own the system? Yeah. Okay. It's like leaving your door unlocked: it's possible for a student to make that mistake easily, but it's still the thief who made the choice to steal. Interesting. Yeah, but you could push on that argument — so are you saying student A has no culpability? Oh, okay, A has some culpability, but not as much, since he didn't decide to share it; it was a case of forgetting to protect the file. Yeah, and this is actually one of the interesting points here: how do we know that? I'm telling you that the student forgot — maybe that's what they say — but consider an alternate scenario where student A intentionally wants to cheat with student B. They just create a file that is not read-protected, and afterwards they claim, well, I just forgot. From our perspective, there's literally no difference between those two scenarios. What if the temp directory is readable by everyone? That's an even weirder circumstance. Let's say it was my editor — like Vim creates a swap file in the temp directory for the document I'm editing. Maybe it's that file that they read. Yeah, so maybe the person didn't have any control over it at all. That's interesting. Let's see, chat says: the person who forgot to protect it made a mistake; it was an accident. The student who copied the file did it intentionally, so they knowingly did something wrong. Yeah. So the point of this is to think about the importance of access control. The key question we're going to talk about is: should it be possible for this situation to occur at all? If the system really wanted to enforce this policy, could we use some kind of mechanism to ensure that a student can never accidentally make a file readable by everyone? Those are the kinds of questions that come into play here. First, we need a few definitions. This is also known as A and A — authentication and authorization. Actually, we'll get into the first one later. Authorization is basically: what can you do on the system? The "who are you" part — authentication — we'll get to later, because it gets into some cryptography concepts. So we'll do some crypto first, then we'll talk about how to authenticate people. Until we get there, assume we know who users are — say I'm user adamd on the system — and the question is: what can that user do on the system? So authentication answers the question "who are you?"; authorization answers "what can you do?"
And like we talked about, authorization has a really tight coupling with trust. We saw this with the administrator example with emails, right? Even an administrator being able to read emails is an example of trust that we may need to mitigate. So for a system, we want to understand who can do what on the system. We know we can't eliminate risk. One of the key principles we'll talk about is the principle of least privilege: can we give each entity the least amount of permissions, the least amount of access, that it needs to do its job? Why is that a good risk-reduction strategy? Yes — you only get access to what you need and nothing more. If you don't need access to some information to do your job, why should you have access to it? Or, in the previous example, if we wanted to eliminate the risk of that situation: just never let a user set their files to be readable by everyone. Then we prevent that case. Okay, so we'll use these concepts somewhat interchangeably: authorization and access control. The way I think about it is that authorization is the policy — who do we want to be able to do what on the system — whereas access control is the mechanism that actually enforces that. And we'll look at different models of thinking about access control. Some are foreign, some are actually very familiar — part of what you're getting used to in the homework assignment is the UNIX permission model. Some of you are finding out through trial and error that when you try to write to a directory you don't have access to, you get access denied. If you try to write to a file like the interaction level 12 challenge under /challenges, that will not end well, and it will tell you that you can't do that, because you don't have permissions for it. We'll also look at other access control policies that came from the military world, where they really want to be strict about making sure information can't leak out. So there are interesting things here. To help us think about and model these things, we're going to use — I don't want you to freak out — some mathematical notation. This is okay; it's just sets and other types of things, nothing crazy. I promise you'll be fine. Everyone okay? Yeah. Okay. So we'll have a set S of subjects. Subjects you can think of as things that can act. In different systems, this may be different things. In a UNIX system, subjects are actually not users, because a user can't actually do anything; it's the process, which has an associated user ID identifying the user. So think of subjects as things in the system that can act: that can try to read things, write things, execute things. Objects: we'll have a set O of objects. These are the assets in the system that can be acted upon. If you think about just a file system, the set of objects would be all the files in the system. And depending on the system, the subjects may or may not also be in the set of objects — depending on whether subjects can change their own rights or do things like that. The other set we'll talk about is the set R of rights: what can the subjects do to the objects? Think of these as permissions.
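Since the UNIX permission model keeps coming up, here's a minimal sketch tying it back to the student A scenario. The file name `homework.txt` is hypothetical; the point is that the mode bits on a file (an object) encode what the owner, the group, and everyone else (classes of subjects) may do:

```python
import os
import stat

# Check whether the file is readable by "others" — i.e., by everyone.
mode = os.stat("homework.txt").st_mode
print(bool(mode & stat.S_IROTH))   # True would mean "readable by everyone"

# Student A's missing step: read-protect the file by dropping all
# group/other access, leaving owner read/write only.
os.chmod("homework.txt", 0o600)
```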
So, the concept we've seen before, and that we're getting familiar with in Linux: read, write, execute. Those are rights. And actually modeling this is incredibly easy; this is not a complex model. To capture the current access control state, we list all the subjects — I believe we're going to do them as rows. Every subject gets a row, and every object gets a column. So here we have the subjects U and V, and columns for the objects F, G, U, and V. Those could be whatever: F and G could be files, U and V could be user accounts. And at each intersection — each cell in this matrix — is the set of rights that that subject has on that object. So we would say, okay, U has rights r1 and r2 on F but only r2 on G, whereas V has no rights on G, has r2 and r3 on F, and also has no rights on V. So we can take the UNIX model we've been looking at and simplify it down to this example. Each subject would be a process, so we have a process P and a process Q. Files are objects, so we have files F and G. And then rights — there are actually a lot of rights; we won't get into all of them — but we can think of read, write, execute, append. Why is append an interesting right? You can't overwrite, but you can add? Yeah — we can't modify the existing contents of the file, but we can add to them. Why is that useful, and why is it distinct from writing? Yeah, appending to a configuration would be one example. The other one is logging. For log messages, you don't want the user to be able to destroy what was already there — if you have write permission, you can just delete the logs by writing nothing, or by writing over them. Whereas with append, the system ensures we can't ever change the data that was already there; once we write it, it's done. And own: ownership. Who has ownership rights? Think of it as a simplified UNIX model: if I have ownership rights, that means I can do other operations, like change the permissions on that object. But again, at the conceptual, high-level model, these rights don't mean anything by themselves; to actually interpret what the model means, we need to understand what the rights mean semantically. So we can represent them as read, write, execute, append, and own: r, w, x, a, and o. And then we can model things. We can say, okay, process P has read, write, and own rights on file F, has read rights on file G, and has read, write, execute, and own on itself, P. What it means for a process to have ownership over another process, I don't know — this is just a model. P can write to process Q; maybe that means it can send Q messages, but it can't actually read messages back. And Q can append to file F, and reads and owns file G. So let's go over one of the benefits of representing access control in a matrix like this. Yeah — it's easy to visualize who has access to what. If I asked, hey, can process P write to file G, can you answer that question? You can answer that question, and the answer is no. Okay, just making sure — I heard yeses and nos. This matrix has total information about the access control state of the system. Are there any other benefits? What's nice about this? Yeah, say that one more time. I think so, yeah, that's great: for one process P, you can see all the rights that that process has on every file on the system.
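As a quick sketch, here's that example written down concretely — a nested dict standing in for the rows and columns of the matrix:

```python
# Access control matrix from the example: rows are subjects,
# columns are objects, and each cell is a set of rights.
matrix = {
    "P": {"F": {"r", "w", "o"}, "G": {"r"},
          "P": {"r", "w", "x", "o"}, "Q": {"w"}},
    "Q": {"F": {"a"}, "G": {"r", "o"}},
}

def has_right(subject, obj, right):
    """Check whether a subject holds a given right on an object."""
    return right in matrix.get(subject, {}).get(obj, set())

print(has_right("P", "G", "w"))  # False: P can only read G
print(has_right("Q", "F", "a"))  # True: Q can append to F
```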
We could also look at, for each file, the permissions every process has on that file. So we have total information. What's one of the drawbacks of this approach? Yeah — this would mean every time I add a user to the system — and there are roughly 500 or 550 of you in this class — every time I add a new user, I have to create a new row in this table and figure out that user's rights on all of the files. And any time anyone created a file on the system, I'd have to add another column with entries for all 600 rows. And we keep going and going. So if I have a million files and 600 people, that's, what, 600 million entries in this table. I may not even have the memory — or it may take a huge amount of memory — to represent this kind of thing. Cool. So this is not how it's typically implemented. If you think about it, the operating system itself is the one doing these checks — you've seen it yourself when you try to delete a file that you don't have write permissions to, and it says, hey, that's not allowed. So it must have some notion of this matrix. But the giveaway is: it doesn't keep this whole giant table, yet it still needs this information somehow. How does it do that? Start thinking about how you would implement this nice, beautiful, theoretical matrix model as something practical. So let's stop here, and we'll pick up on Thursday.
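If you want a head start on that question before Thursday, here's one possible direction, sketched under the assumption that the matrix is mostly empty: store only the non-empty cells, grouped by object. That's essentially a per-object access control list (only the file objects from the example are shown here):

```python
# Per-object storage: for each object, only the subjects that actually
# have rights on it. Storage grows with the number of grants, not with
# (number of subjects) x (number of objects).
acls = {
    "F": {"P": {"r", "w", "o"}, "Q": {"a"}},
    "G": {"P": {"r"}, "Q": {"r", "o"}},
}

def has_right(subject, obj, right):
    # Answers the same question as the full matrix did.
    return right in acls.get(obj, {}).get(subject, set())
```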