 Started, okay, cool. All right, thanks everyone for joining us. This is Tuesday, the, what is it, the 19th? I'm trying to look at my phone to check. This is the Tuesday recording; we didn't have class on Monday, so we're flipping things. Tuesdays and Thursdays will be live going forward, and Mondays and Wednesdays will be recorded, so I'll be playing this Wednesday night. If you're there, hi everyone, type hi in the chat so that I know you're there. It's always nice to know there are people out there with you, rather than doing this alone. So thank you, everyone. Cool, I hope you had a good, long, fulfilling, nice break. And now we're back to talk about assurance. Specifically, assurance is about how we trust that a system is secure, and how we can think about a system being secure. One of the key things we talked about is how much we trust a system, and again, this is more of a qualitative assessment. We can increase our assurance that a system is trustworthy and has no security issues, but we can fundamentally never fully trust a system. And why is that? Why can't we fully trust a system? Anyone? Chat? "Nothing is perfect." I don't know, I'm pretty perfect. "Potential vulnerabilities?" Yeah, potential vulnerabilities, thanks. So there may be potential vulnerabilities, and rather than saying no system is perfect, I'd actually put the blame solely on us, on humans. Fundamentally, humans make mistakes when we're implementing things. We'll look at different ways to think about that, but fundamentally there are many different ways that bugs, mistakes, or errors can get into the process. We also talked about quantification, and quantification is a difficult thing. So when thinking about assurance: how much do I trust a system?
So if I asked you, okay, how much trust should I place in this system, on a scale of one to five, where one is no trust at all and five is very high assurance, what would you need to understand in order to set that level? Yeah, from the chat: you need to know something about the product. This goes back to our house example, right? We need to actually understand what it is we're protecting so we can understand whether the security measures are sufficient for it. This gets into the notion of threats: how realistic are the threats, how important are they, and how can we combat them? Also the level of importance of the system. And we can look at the system from many different levels. We'll think about this in terms of what I like to think of as the software design cycle: how do we create software? We first have some sort of specification; we need to know what we should build. So we'll go from the specification (what should the software do?) to the design, to the implementation, to actually deploying and operating it. For the specification, this is exactly what we talked about: what is the system supposed to do? So why would knowing what the system is supposed to do help you think about the assurance in the system? Why does this impact assurance? Oh, that's great. Nathan had a really good point in the chat: you also know what it doesn't have to do. And so you can find weak spots, somebody else said. So you want to know, and this goes into the concept of risk, right?
If this is a system that is supposed to control the launch of nuclear warheads, you would need a high level of confidence in all aspects of the system, as opposed to, say, a system that wakes you up in the morning. Or, somebody just had a good example in the chat: James asks, does my hello world program need to have security? Well, probably not, right? Or at least the level of assurance you need in a simple five-line hello world program is going to be much different than for something that controls airplanes, to take another example. So we need to understand what the system is supposed to do. We talked about, in terms of security, different ways to define it. How do we define specifications? We'll focus just on software systems, but this mentality of thinking through these different levels can definitely help us along this path. So how can we define a specification? "Would it be like the user's input?" Yeah, thanks, Ethan. So for a specification, one way of thinking about it is in terms of the user: what can the user do with the program? Somebody in chat wrote "features." So, what features does the program support? Do any folks here work, or have worked, at a software development place? Yeah, some of you. So how were specifications defined? How did you know how you were supposed to build things? Yeah, user stories; it could be a product manager. So it could be text, or it could be your manager telling you what to do. Or spec documents, those are great: maybe a Word document that describes what the feature should be and how it's supposed to be developed. So all of these are different ways we can write specifications in English.
Now, what are some problems, which we've talked about before, with defining specifications in English? Yeah, the chat's already blowing up: vagueness, ambiguity, misunderstandings, difficult to describe, right? And also think about this: you have the specification of what it's supposed to do, and then you've built something. How can you actually verify that what you built is what it's supposed to do? So these are the problems with English text. And how can knowing about the specification maybe improve our assurance that something is correct? Is there anything we can study at the specification level to either increase or decrease our assurance here? Should we just wait until the system is built to try to think about possible security problems? Yeah, you can think about testing; this is a good thing coming up in chat. We can think about testing the specification. We can have someone else read it. Actually, one of the great examples I've heard of this is asking: should this feature even be built? Do we actually really need this feature? Maybe it opens us up to increased security problems. And you may think that's silly, but how many people here, you can type in chat, have ordered something from Starbucks through the mobile app? Done mobile ordering? Yeah. And do you use a Starbucks gift card that you auto-load with money, or do you use a credit card? Yeah, some people are saying both, or either. So I actually talked with a person who worked, I don't know if he wants to remain anonymous or not, but a person who worked at Starbucks, and when they were developing this feature, they were saying, okay, we want to be able to mobile order from our phones.
And this is actually a much more difficult problem than you'd think, because Starbucks the company owns the mobile order app, but that order needs to go out to an individual Starbucks store. Setting aside all of that complexity, think about just the problem of credit cards. One of the specifications they wanted at the beginning, when they were thinking about ordering Starbucks directly from a mobile phone, was: you need to be able to put in your credit card. But where do you store the credit cards? When they were developing this, there wasn't a clear notion on mobile devices of what a secure way of storing credit cards would even be. Plus, now Starbucks itself has to store credit card information and deal with all of that. So this person, who's a security person, said: hey, what if we didn't do that? What if we didn't support credit cards right off the bat? What if we let you just use what we already have, these Starbucks gift cards that you can auto-load and put money on? We have a whole mechanism in stores to add money to those. And so that's what they launched with. They actually got rid of some features. Right at the specification level, they said: hey, this actually opens us up to a lot of complexity and a lot of security problems. We know we'll do it eventually, but can we deliver a kind of minimum viable product to see if people actually like this, and reduce our risk at the same time? And so this is a really good example that I like, showing that you can stop a feature even at the specification stage.
That can actually increase your assurance in the system, because if you're never collecting credit cards, you don't have to worry about whether those credit cards are securely stored, encrypted at rest and in transit, and all of those other things. So they dealt with those problems later, once they realized that people actually wanted this feature. Cool. So the next stage is the design phase. We've got the specification; now we need to design how our software is going to work, how the different components talk to each other. So now we have the question of how to design the system. Why is this important? What aspects of the design of a system can impact the assurance we have that the overall thing is secure? Or think of it a different way, in an adversarial mindset: are there security problems that can be introduced here, in the design phase? Yeah, some people are talking about usability: how usable is it? Other people are talking about permissions, or access control: who should have access to the system, and specifically how to control and manage that access. We want to do that at the design phase. Why is it better to do it now, in the design phase, than in the implementation phase? Yeah, and this is being brought up in the chat, I know what you mean: it's actually cheaper to fix things in the design phase. In the design phase, it's maybe still on a whiteboard, or in some document or diagrams. In the implementation phase, you've already built the code, and maybe you've codified that design in your code; if that design is fundamentally broken, it can be difficult to bolt on security afterwards.
So this is why it's really important to be thinking about security through all aspects of the software lifecycle. We want to think about which aspects of the design can impact security, and which aspects are security-relevant, so that we can test them later. So, do we care about the design satisfying the specification? Yeah, we should. Why? Exactly; that's the point of the specification. That's a great thing that just came up in chat: does the design actually satisfy our specification? Yes, because if we're designing features that aren't in the specification, those may introduce security problems, and if we're designing fewer features than are in the specification, that can cause security problems too. If we just say, okay, we need different user accounts: well, how do we verify people's identities? How are we storing their passwords? How are we validating that they are who they say they are? So we want to make sure that the design actually satisfies the specification. Now, how can we prove this? Can we prove that the design actually satisfies the specification? Yeah, how? Some people are saying testing. So we can actually test the design: we can walk through test cases and ask, does the design satisfy these specifications? But is testing a proof? Can you prove that a design satisfies the specification with a test? Some people are saying no in the chat. Why not? So an important thing to say, and this is a concept that will come up a lot: with testing, we're testing a specific case. We're asking, does the design satisfy the specification in this particular case?
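To make that point concrete, here's a tiny sketch. The function and its one-line "specification" are made up for illustration; the point is that every hand-picked test below passes, yet the specification is still violated for an input we never tried:

```python
def clamp_percent(x):
    """Spec: for any input, return a value in the range [0, 100]."""
    if x > 100:
        return 100
    return x  # Bug: negative inputs pass through unchecked.

# Three hand-picked test cases; all of them pass.
for value, expected in [(0, 0), (50, 50), (150, 100)]:
    assert clamp_percent(value) == expected

# But the spec is still broken for an untested input:
print(clamp_percent(-5))  # prints -5, which is outside [0, 100]
```

Exhaustively testing every possible input is infeasible; showing the property holds for all inputs would take the kind of mathematical proof discussed next.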
Now, if we get a failure and we say, aha, the design does not satisfy the specification, a test is great, because now we have a concrete test case: look, the design does not satisfy this part of the specification, and here's my test case. But fundamentally, we can't test every single part of the program. To prove it, we'd have to use math and formalism to show that it's never possible for the design to violate the specification, and when specifications and designs are written in English, this is incredibly difficult. But that doesn't mean it's impossible, or that we shouldn't try. This is the notion of assurance again. If I have a design and I can say, hey, we had a red team come and test our design, and I can point you to the 30 different test cases in five different categories that they went through, and this is how the design satisfies the specification, you will have greater assurance that there are fewer security issues than in similar software that does not have that level of review. So that's great, awesome, cool. So what's after design? You've designed the software; what's next? Build it, yeah, implement it. You want to build the thing. And of course, I'm presenting a very abstract model, like the old waterfall model, where you have phases: the specification phase, the design phase, the implementation phase. Of course, these can happen concurrently, but it's helpful to think about them separately, especially as we're talking about assurance and what types of things can go into each. So we have the implementation phase: how do you actually implement the design? This should be the easiest part. So, in your classwork, what's the normal design that you're working with?
So think about the implementation: what's the input for you, in terms of design? Yeah, the assignment instructions. Or you could think of it as separate levels: maybe the assignment instructions are the specification, your design is when you sit down and say, okay, how should I structure this, maybe as a UML diagram, and then you actually implement that in code. So at this point, we actually have code. What we want, as far as assurance goes, is: does the implementation satisfy the design, and why is this important in terms of security? Why do we care that the implementation satisfies the design? Oh yeah, please, somebody just unmuted. "I was going to say, if our implementation doesn't match the design, we've added vulnerabilities we didn't plan for, probably." Yeah, especially in the context of security, we want to verify that our implementation implements the things we talked about in the design. If, for instance, our design and specification say, hey, we should have different user accounts, and these different user accounts should not be able to log in as each other unless they have the same username and password, but that turns out not to be the case based on the way it's implemented, that introduces a security problem. And that's a great point: Dean in the chat wrote, if you designed security features but don't implement them, then they don't help. Exactly right. And again, in terms of assurance, we would like to prove it. So how can we prove that the implementation satisfies the design? Yeah, I can have tests: unit tests, quality assurance (which is testing in the general sense), feature checks, user stories, and I can verify that the implementation passes those.
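As a sketch of what such a unit test might look like, here's that "users can't log in as each other" design requirement expressed as a test. The `AccountStore` class is a stand-in I made up, not real course code, and a real implementation would hash passwords rather than store them in plaintext:

```python
class AccountStore:
    """Hypothetical in-memory account store standing in for a real
    implementation. Plaintext passwords keep the sketch short; a real
    system would store salted hashes."""

    def __init__(self):
        self._passwords = {}

    def register(self, username, password):
        self._passwords[username] = password

    def login(self, username, password):
        # Only the matching (username, password) pair succeeds.
        return self._passwords.get(username) == password


# Design requirement under test: users cannot log in as each other.
store = AccountStore()
store.register("alice", "correct-horse")
store.register("bob", "battery-staple")

assert store.login("alice", "correct-horse")        # own credentials work
assert not store.login("alice", "battery-staple")   # Bob's password must not unlock Alice
assert not store.login("mallory", "correct-horse")  # unknown users are rejected
```

A failing assertion here would be exactly the kind of concrete evidence described earlier: a specific case where the implementation does not satisfy the design.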
But again, just like before: unit testing, or even pen testing, where I bring in an outside party to try to break the software, doesn't guarantee at this stage that it's secure. But it clearly increases your assurance in the security of the system compared to somebody who doesn't do that. So again, we're thinking about all the ways we can actually do this. Okay, then great: we've implemented everything, and we're ready to deploy it. So why do we care about how something is deployed or configured or operated? Shouldn't we just say, hey, here's the implementation, I built it, it's secure, now take it and install it and run it? How come our job's not done? Let's think, yeah. "It's like how with security features, you're not supposed to do certain things if you want it to be secure; you might want to make sure that whatever it's being used for, it's not being used in ways that could compromise the security." Yeah, that's great. We want to make sure it's not being used in some insecure way. And the really important thing to keep in your mind, the way I like to think about this, is: even if we implemented it 100% correctly, even if we assume for a second there are absolutely no security flaws in our implementation, are there still ways this can be deployed or configured in an insecure way? Think about the wireless router you have in your house. Depending on what version it is and how old it is, a lot of them have default username/passwords like admin/admin. Having that built-in default is clearly nice for a user, because then you always know what that password is, and if you have to reset the router, you can always get in.
But it means that if somebody untrustworthy is on your network, they can now access your wireless router, get at the heart of your network, and man-in-the-middle your connections. This is why newer devices will have a random, per-device password printed on a sticker on the router itself. So this is something that seems to be a feature but, in terms of deployment, can actually introduce insecurity, because the designers didn't ask: is it a good idea for all the users to have the same default username and password? And if your assumption is, hey, the person using this will change the default configuration, that is usually an incorrect assumption. So we always want to think about how the implementation is deployed. One of the classic examples and stories I have of this myself is that it's hard to prevent the kinds of mistakes users can make. Let's say your application relies on your operating system having the concept of two different users. The general server is probably a good example: you have access to general, and you each have user accounts on that server. Now, suppose I deployed a homework testing system and said, okay, you can submit something on general and it'll test it and give you results right away. If I made a mistake in my deployment and allowed the directory that the homework assignments and testing infrastructure ran in to be world-readable by any user on that system, that would be a massive security problem, because you could disclose all the test cases and all of the testing infrastructure. This is actually something I've done myself. Embarrassingly, when I was in undergrad I was creating this website called the Woot Watchers.
Anyways, the details aren't important, but I didn't really know what I was doing. I didn't have a class like this, so I didn't understand permissions and those kinds of things. I had a website on some server and something wasn't working, I can't remember what, so what I did was use the chmod command that we talked about: I ran chmod -R 777 on slash, the root directory. So I recursively set every single file on the whole operating system to be world-readable, writable, and executable by any user on the system. And all my problems went away, so everything was fine. Then I disconnected from the server, and a few days later I tried to connect and couldn't. That's because SSH has a feature where, if your authorized keys file is too permissive, it won't let you log in. So I had to contact support and say, hey, sorry, I can't get into my server. And they said, okay, let's look at it... your file permissions are really messed up. I said, I know, I was trying to solve a problem and it got out of hand. So they let me back in, and I think I re-imaged the machine and figured it out. Anyways, that's a clear case where I, as a user, had way too much power and deployed things in an incorrect way. So we always want to be thinking about how the implementation is deployed. And we can design for this at the specification, design, and implementation phases. For instance, if you expect that a certain folder or directory is not world-readable or world-writable, you could test that in your code and refuse to run if those conditions aren't satisfied. So you can prevent these kinds of things, but you need to be actively thinking about those cases. We always want to be thinking about how things are deployed, configured, and operated.
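That last idea, having the program check its own deployment assumptions and refuse to run otherwise, might look something like this sketch (the grader directory path is made up for illustration):

```python
import os
import stat
import sys


def refuse_if_world_readable(path):
    """Refuse to run if `path` is readable by every user on the system,
    instead of silently operating in an insecure deployment."""
    mode = os.stat(path).st_mode
    if mode & stat.S_IROTH:  # "other" read bit set: world-readable
        sys.exit(f"refusing to run: {path} is world-readable "
                 f"(mode {oct(mode & 0o777)})")


# Hypothetical startup check for the homework-grader scenario:
# refuse_if_world_readable("/srv/grader/testcases")
```

This is the same behavior that saved me in the SSH story: OpenSSH refuses to trust key files whose permissions are too open, rather than trusting a file that any user could have tampered with.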
And this is something you should take with you and think about when you're developing software. So, how can we prove that the way the implementation is deployed, configured, and operated actually meets the implementation, design, and specification? What do y'all think? Yeah, we can again apply testing. This is why testing at this stage, especially for security, is called a penetration test: you hire a company to break into your deployed systems, the actual systems running in deployment, and deliberately test these kinds of things. The implementation may be secure, but is it deployed securely? Is it configured securely? Is it operated securely? We can also try verification, but again, that's a bit difficult. And we can think about deployment, configuration, and operation earlier in the cycle, so that we can make sure, like somebody just mentioned in chat, we have good defaults, and I would augment that by saying secure defaults. Having secure defaults is incredibly important, because we know that users will not change their defaults. Great, cool. Okay, so now we're going to touch on some other broad topics of security, and then we'll eventually move on to access control. This came up when we were talking about the house: we always have to be thinking about the cost-benefit analysis. Are the security measures and mechanisms worth the cost? Why is this important, and what are we measuring here? Yeah, somebody said risk, great. Risk is what actually goes into that; it's part of the benefit equation.
So we want to think about the cost of implementing the mechanism versus the cost of actually having a security breach, including the cost of our reputation being damaged if we're seen as somebody who has had a breach. Is the program or system worth the cost we're investing in it? If, let's say, our company makes a million dollars a year, does it make sense to spend $5 million every year on security? No, right? The cost-benefit on those terms just doesn't make sense. We talked about this with the house example, securing my house versus securing Bill Gates's house: a clear example of the cost-benefit ratio being different, because different risks and different threats impact the system. Cool, so we talked about what factors to consider, and we talked about the system itself. Some people brought up complexity, and a great way to think about complexity is as a form of risk. The more complex your system is, the more likely there are to be vulnerabilities. So it's really helpful to think about how to make the system less complex, which means less risky. If you have a complex system that can be in 10 different states, you have to make sure you're doing your security tests in all 10 of those states; if it only has one or two states, that makes the system much easier to test. Cool, and this gets into the notion we've been talking about of risk analysis. In a risk analysis, there are assets in our systems. In the case of the house, the house itself was an asset we wanted to secure; in the context of an organization, an asset may be a server or a desktop machine. So we want to think about: what assets should be protected? Think about all of the laptops in an organization.
Should the same level of security be applied to every single asset in the company, like every single laptop? I see some nos in the chat; do you want to defend your position, through either voice or chat? "I mean, a laptop routinely used by, like, a manager with admin access would need more protection than one used by lower-level people who don't have as much access." Yeah, thanks Patrick. It's super helpful to think about the extremes. Think about the CEO's laptop: the CEO can, with a single email, order a wire transfer of hundreds of thousands of dollars from the company to somewhere else, and has access to more private and proprietary information. The CEO's laptop should probably be kept to a much higher security level than a standard laptop. Another example of this: do y'all remember BlackBerrys? BlackBerrys were these early quote-unquote smartphones that had a physical keyboard with buttons you could type on, so you could do email on your phone. Apparently, if I remember correctly, Obama liked his BlackBerry so much that the NSA had to make him a custom BlackBerry just for him, so that he could use it securely. So this is an example of why we always need to be thinking, and why we talked about it from the beginning: what threats does an asset face? What are the consequences if it's attacked? What's the likelihood of these threats? And then, at what level should we protect this asset? One thing I want you to think about is: does risk actually remain constant over time? If I say we've set this level of risk for the system, is that constant throughout time? And if you say no, what would be the counterexample?
And by counterexample, I specifically mean: can you give me an example of an asset where the risk would vary over time? "If you go back to that BlackBerry example, not a whole lot of people are using it, and depending on their position, that BlackBerry in my hands is not as big of a deal as in the hands of a CEO; that specific device could change hands a number of times if it belongs to a company." That's great, yeah. So a device itself could be used by multiple people as it's passed around internally. Or a shared computer, right, that you all log into, like a classroom computer. This could actually be a good example, if you still remember what it's like being in a classroom: they had those shared computers. It's different if I'm there putting in my ASU username and password to use it versus if Michael Crow is doing that, because inherently at ASU he has a higher position of authority, and a compromise of Michael Crow's account would be much more severe for the ASU organization than a compromise of my account. Okay, another thing: think about Bitcoin exchanges. For a Bitcoin exchange, the risk of being a target could actually increase or decrease depending on the value of Bitcoin, because the benefit to an attacker changes with it. If Bitcoin were suddenly worth $0, nobody would care about hacking, let's say, Coinbase; but when it's worth, I don't know whatever the current number is, that changes things. Another thing to think about: consider a server or machine that stores a company's profit reports. Public companies have to file quarterly reports, and these are really important because they drive stock market prices.
So we talked about, I think before, how if you had access to a company's earnings reports before they're released publicly, you could trade on that information and make money. So that would be an example of a server where the risk is really high, say, the day before those results are released, but once they're released, the risk drops close to zero, because that server only holds public information, and you don't really care if anybody hacks into it. Yeah, and that's a good example somebody mentioned: taking over somebody's MySpace page. The risk of that has probably decreased significantly over time. Cool, this is great; excellent chat. The other thing we need to think about in terms of security are laws and customs. Why is that? Why do we care about the law or customs? Yeah, oh, go ahead, please. "Sorry, if someone's hiring us to implement a system, it would probably be preferable that it's not illegal, that we haven't done anything that'll get them arrested." That's great. So yeah, if somebody's hiring us to do it. Another way to think about it, in the context of what we've talked about, is that laws can restrict what policies and mechanisms we can apply. Various countries have, or have had in the past, laws regarding cryptography and what kinds of algorithms you could use at what levels. There's actually this crazy thing where, back in the day, it used to be illegal to export a program that implemented certain cryptographic algorithms outside of the US; it was subject to export laws. The loophole, of course, was that books were not subject to this requirement. So people would take the source code of a cryptographic algorithm or protocol, print it in a book, ship that book out of the country, and then other people would re-implement it from the book.
So this is kind of an example here: privacy laws. So somebody brought up privacy, so this is great. So what would be an example? Let's say there's a law that says a company cannot read your email, right? Now, I report to ASU, hey, my email's been hacked. Well, if I have these laws that restrict privacy, how can an admin actually do their job and investigate this breach, right? So we need to think about these things in advance and be aware of them to make sure that we're following all proper laws. How do customs differ from laws? So not customs in terms of importation. Yeah, that is one way of importing goods, right? So for customs, you could think of societal norms as another way to describe that, right? So it could be things that are maybe legal, but cross a barrier in terms of customs or societal norms, right? So these could be things like, think about your company requiring you to go through, let's say, a metal detector to detect if you have any USB drives. If it's illegal or against policy to bring in USB drives, you could have that policy. You could also have a mechanism that says anyone who enters our facility has to go through this really deep inspection process to make sure you're not bringing in any USB drives. But given the cultural norms of your company, you'd probably get a backlash to that because of these customs. And the thing here is it definitely can vary from place to place. But it's something we need to think about, because again, as we talked about, users are very good at evading security mechanisms if they get in the way of doing their job. Yeah, so Kenan wrote a great thing in chat: there's no legal penalty for violating customs or social norms, but there are other consequences for breaking them, right? That's great. 
So here's an example I like to use to get you thinking about this. Okay, so a company, so we talked about using badges and cards to be able to access a company. So there's a standard process at a company: you wanna know who's accessed the building, you wanna make sure that they're a valid employee. And so you have an ID card, like my ASU ID card. So we have this Isaac system at ASU where you have your ID card and you present it to the door, the door checks that you're valid, that you're actually a real person, and then gets you inside the building, or checks that you're authorized to enter that area, let's say. But one of the most annoying things about those, has anyone ever accidentally left their card at home and had to file some sort of, at Microsoft I did this a few times and they'd have to make you a temporary badge that you'd have to wear all the time, and they'd have to get somebody to vouch for you, verify your identity, all this stuff. It's a hassle, right? So why not get rid of that problem and put a microchip in your finger, because that's essentially all it is with your employee ID or something. And then that way when you wanna get into somewhere you just, boop, touch the door and it gets you in. Doesn't it sound great? So let's have a little poll: you can write in chat, would you be for this or against it? I don't know if you can do emojis in there, thumbs up, thumbs down, or you can say it with text. So against, yeah, so most people, there's some people for, that's pretty good. I think that's fine. Who knows, maybe people viewing this in the future would say, hey man, we're all basically cyborgs now, so what's another microchip in my finger? Yeah, so it's a small chip, about the size of a grain of rice, and so you can swipe into the building, you can pay for food in the cafeteria, all just by waving your hand over it, right? So, yeah, so this is super interesting, right? 
So there's a lot happening in the chat. This is why I deliberately chose this example. And it says at the bottom that it wasn't mandatory, but more than 50 out of 80 employees actually had done this. And this happened in Wisconsin, the company was located in Wisconsin. So it wasn't even like a foreign company thing. This happened at a United States company, which I think is super interesting. So yeah, I just wanna bring this up because this raises a lot of questions. There could be law aspects here: could the company track you when you're home? How do you actually know that the chip is doing what it's supposed to do? Maybe it actually has a wifi thing in it, or 3G, a cellular connection, so it's reporting on your location. People have discussed in the chat, how do you remove it, right? When you leave the company, do you have to get surgery, cut something out of your finger, every time? Yeah, so these are all great points about whether you would want it or not. I mean, I think we can all agree it can be a clear convenience win, right? If you look at it from that perspective you could say, okay, yeah, we actually do understand how, as a user, forgetting your card sucks, and so this could be a clear convenience win for the users. And maybe you can also say it makes it easier for them to follow the security policies and security mechanisms, because if it's just putting your hand up there, maybe that's easier than a badge or something like that. But then there's the clear downsides that we talked about. This is one of the really important things: you have to look at a potential security policy or security implementation from all angles. Wouldn't this be also a security threat too, because you have to store that data in a server, and someone could actually hack into that system to locate the employee? 
Yeah, so I'd say that there's probably not an increased risk of that as opposed to a normal ID card, because your location is tracked at least inside the company anyway. What I'd say there is, what information is actually on that chip, and how difficult is it to read it, right? There's actually a problem with passports and other devices where people can read the information that's on the RFID chip inside, let's say, your passport. And so maybe before you did this you'd wanna say, okay, but what information is on there? Is it my employee ID? Does it say my name? Does it say my employee number? Can somebody with an RFID reader just read that information, or is it encrypted somehow by the company? These are all things you definitely wanna bring up and think about. Cool, this is great, okay. And now we're gonna go on to the last thing. So we've kind of been hinting at this a lot; now we're gonna talk about it directly. We also always wanna be thinking about the human issues. And this is, again, what we talked about with end users: what are end users going to be doing to get around possible security issues? Another thing is, who is responsible for security in an organization? So anybody that works at an organization, is there a group that you maybe know about that's responsible for security? Yeah, so people are saying in chat that, yes, all employees should be responsible for security, but I do mean, is there a group that is in charge of security? If you just have everybody in charge, then actually nobody's in charge, right? So everyone should be in some sense responsible for security, but by the same token, you need people who are actually gonna be driving this, right? Yeah, so usually there'll be one; at ASU we have a chief information security officer. So there's a high-level employee that has an organization that is dedicated to the security of the organization. 
Many companies will have that. And so again, why does the budget they have affect the security of the organization, or maybe our assurance in the security of an organization? Yes, this goes hand in hand with what we were talking about, like the appropriate level of security. Yeah. So it's like giving enough budget to match what the risks are. Exactly, thanks, Patrick. Yeah, so that's that cost-benefit analysis, right? If we actually don't have enough budget to implement the features or mechanisms that we need in order to ensure the security of our systems, then we're actually operating at a higher risk threshold than we should be. So these are important things that have an impact, right? These are things that oftentimes we don't think about in terms of human issues, but it is a human issue. Somebody just wrote in the chat that the CEO is responsible for defining the budget for the security teams, right? So somebody in the organization, whether it be the CEO or the CISO's boss or something, defines the security budget, which has a direct impact. So this is a human issue. And another thing to think about is how much organizational power do they have, right? Is the security group completely separate from everyone else? What happens if they find problems in products? Where do they go to fix those things? And so these are all actually human and organizational factors that impact the security of the system that are beyond what we normally think about in terms of just end-user security, right? So to understand the security of an organization, you have to understand the context of these different things. Another thing: who enforces security, right? At the end of the day, it's all people and systems, right? So you can have the best systems on earth, but if the people either aren't enforcing your policies, aren't following policies, or are circumventing your mechanisms, that can introduce security issues. 
Cool, any questions on any of this stuff that we talked about so far? Awesome, cool. Well, thanks for that, that was a fun discussion. Hopefully, you'll be looking at your hands thinking, would I want a microchip in there? And if you think about that, you think of who should be able to access that information. Ah, man, time this up. I changed that, but not that. Cool. Okay, so now we're going right on to access control. So we want to be thinking about, and access control really tries to answer, that question of who should be able to access what. There's a question, that we'll get into later, that tries to answer, how do I know you are who you say you are, right? So if I say that all students in the class should have access to this system, and TAs should have greater access, and the professors should have even more access, we have the problem of, well, how do I know who is who and who belongs in which group? But we're going to ignore that problem for now and just focus on thinking about access control. So as an example: the policy, the university's academic integrity policy, disallows cheating, which ASU's does, in case you were ever wondering about that. This includes copying homework with or without permission, right? So this is an important policy, right? It's there to ensure integrity for everyone and to make sure that grades are fair, all that fun stuff. So a certain CSE class has students do homework on a shared server, which is similar to what we talked about earlier. So it's a shared system similar to general.asu.edu. Okay, so student A forgets to read-protect their homework file and student B copies the file. So who violated the policy? Some people are saying both. Does somebody want to make an argument for that? Maybe somebody who hasn't spoken yet. If you haven't spoken yet, contribute to the classroom, we can hear your nice voice. 
I think it's B, because, well, protecting your file is convention, it's not a rule per se. I don't know if it is a rule in this case, but if it's not, then it's not a fault of student A, but not copying is a rule. So in that case, student B was the one who did something wrong here. Okay, great, thank you. So we have a case for student B, who copies the file, because they're the one who deliberately violated the policy. I think we can all say that B definitely did something wrong, right? Actively violated the policy. What about student A? So we have an argument for student A not being responsible. Does anyone think that student A did something wrong? I think we should give a warning to student A, because even though he didn't do anything wrong, a student in the future can say they forgot to do it even though they did it on purpose. Yeah, that's a great point, right, Peter? So maybe A, I guess we can't really tell the difference, and this is kind of the problem, right? So student A, how do we know that they didn't intentionally forget to read-protect the homework so that they could use this as an excuse when they got caught and say, well, I didn't do anything, I just forgot to read-protect my homework file, it's student B's fault? So yeah, so maybe actually the problem here is with the policy, right? Maybe the policy has a problem in that it doesn't say that you have to make sure that your homework files are not readable. A question along those same lines, because I was going to point out, maybe the policy, or is there something that the university could have done to enforce that policy, to enforce that this happens every time? I don't know if that's possible or not, but... Yeah, that's great. So this is actually, that's at least a question they should have asked, right? Yeah, thanks, Stephen, I appreciate that. Yeah, so this actually kind of goes back to our discussion: so we have the policy, right? 
But do we actually have a mechanism that can enforce that? Could we actually make it so that a student can never have their homework files be readable by other people, right? So this could be a problem of our implementation of the system or a deployment of the system. Can we do something to general.asu.edu such that a student can never make their files world-readable? Okay, so actually I'll give a brief history. So Matthew in the chat asked, how is student B even able to access A's homework in the first place, sounds like partially the school's fault? It's really funny actually, the way computing has evolved. The first computing systems were basically mainframe systems where every user of the system used a dumb terminal that would give them access to the mainframe, where they'd all be able to execute, because machines were so expensive. And so this is why systems like Unix, Linux, all these things, evolved to have multiple user accounts, so that multiple people could use the system at the same time in a secure way. But then what happened, basically, is desktop computing became cheaper and cheaper, such that it actually made sense for people to have machines locally. And now we have a computer in our pocket and in our laptop. And so now we think about things in more of a single-user mode; it's your laptop or your phone that you use. But actually these concepts still apply. So for instance, in any modern mobile operating system, so both iOS and Android, every app that you use runs as a different user account so that apps can't directly access each other's files. So that's great. And actually you'll see in the future an example where you'll actually be using a shared server to do homework assignments, which will be fun. So yeah, so this is setting us up for this problem of access control, right? And so we're gonna be thinking about different concepts. 
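To make the "read-protect" idea concrete, here's a small sketch of how Unix file permissions work on a shared server like the one in the example. This is illustrative, not the actual general.asu.edu setup; it assumes a Unix-like system, and uses Python's standard `os.chmod` and `os.umask` calls. The last part shows one deployment-level mechanism: a restrictive umask makes new files private by default, so students don't have to remember to protect them.

```python
import os
import stat
import tempfile

# A scratch "homework" file so the example is self-contained.
hw_dir = tempfile.mkdtemp()
hw1 = os.path.join(hw_dir, "hw1.txt")
with open(hw1, "w") as f:
    f.write("my answers\n")

# Student A "forgetting to read-protect": world-readable permissions.
os.chmod(hw1, 0o644)  # owner: rw, group: r, others: r
print(oct(stat.S_IMODE(os.stat(hw1).st_mode)))  # 0o644 -- anyone on the server can read it

# Read-protecting the file: strip all group/other access.
os.chmod(hw1, 0o600)  # owner: rw, everyone else: nothing
print(oct(stat.S_IMODE(os.stat(hw1).st_mode)))  # 0o600

# Deployment-level fix: a restrictive umask so files a student
# creates are private by default (0o666 & ~0o077 == 0o600).
os.umask(0o077)
hw2 = os.path.join(hw_dir, "hw2.txt")
with open(hw2, "w") as f:
    f.write("more answers\n")
print(oct(stat.S_IMODE(os.stat(hw2).st_mode)))  # 0o600
```

Whether the server's administrators can truly *prevent* students from running `chmod` back to world-readable is exactly the mechanism-versus-policy question the lecture raises.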
So we're gonna be thinking about authorization. So authorization is specifically what you can do on the system, right? What does the system allow you to do? And this is in contrast to what I talked about earlier, versus authentication. So authentication is who are you, and authorization is what can you do on the system. So we're gonna cover authentication later; we're gonna focus the rest of this on authorization. And so, tying into what we talked about, we have these notions of, and we go to, yeah, 15, okay, good, we got some time. So we have this notion of what you can do to the system, which is authorization. We also have these notions of trust and risk that we've talked about before. Why do we care about authorization and trust? What do those have to do with security? And what do they have to do with each other? Well, certain authorizations are gonna get certain trust levels, right? It's kind of like that teacher versus the student, or the CEO versus a general worker. Yeah, so if we think about authorization in terms of what you can do to the system, right, if you can do more things to the system, then you have a higher trust level in the system. Think about administrators of a system: they have full control over that system, therefore they inherently have higher trust in it; the organization has higher trust in them. And if we aren't authorizing correctly, we may have a breach, right? If you get authorization to something that you shouldn't have access to, then we have problems, that's great. And we also use authorization and trust to manage risk. So how does that help us manage risk? Maybe somebody who hasn't talked yet. Well, I think it's because you don't want other people to access your private information. 
And so you want to protect information and you don't want other people to see it, just like a student and professors; students are definitely not going to be able to see what the professor gave the TAs, like the homework assignment solutions. Yeah, that's great. Thanks, Duchenne. So I think maybe a good example of this would be Gradescope, right? So when you go into Gradescope, you have the authorization to see your own homework assignments, but not those of others that have been submitted. But me, as an instructor in Gradescope, I have authorization to see the homework submissions for my class, for all of you, but I can't see homework submissions for another class at another university, right? And this manages risk because it helps mitigate the possibility that somebody is able to do something they shouldn't be able to do. And we know, we've talked about this before, I will answer this: can you eliminate risk? The answer is no, right? We can never eliminate risk, but we can help reduce and manage risk by making sure that people have the correct authorization. And specifically, the security concept we're driving towards here is the concept of least privilege, which we'll get to. So you should only have as much authorization as you need to do your job and no more, because anything more opens up the possibility of you having access to stuff you shouldn't need. Cool. Great, okay, so the way I think about this is in terms of authorization versus access control. They both kind of answer the high-level problem of who should be able to do what to the system. The way I think about it, and the way I think is helpful, is: authorization is the policy, so who should be able to do what, and access control is the mechanism, what actually implements the fact that people should be able to do different things to the system. And so this kind of puts that in context. 
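The Gradescope-style rule above can be sketched as a tiny authorization check. To be clear, this is a hypothetical illustration, not Gradescope's actual code; the function name, data layout, and course names are all invented. The point is least privilege: a user sees their own submission, an instructor sees submissions only for courses they teach, and everything else is denied by default.

```python
def can_view(user: str, submission: dict, instructors: dict) -> bool:
    """Least-privilege check: deny unless a specific rule grants access."""
    if submission["owner"] == user:
        return True  # you can always see your own work
    # instructors maps course name -> set of instructor usernames
    return user in instructors.get(submission["course"], set())

instructors = {"cse365": {"prof_adam"}}
sub = {"owner": "alice", "course": "cse365"}

print(can_view("alice", sub, instructors))      # True: her own submission
print(can_view("prof_adam", sub, instructors))  # True: instructor of that course
print(can_view("bob", sub, instructors))        # False: another student
print(can_view("prof_adam", {"owner": "carol", "course": "math101"}, instructors))  # False: different course
```

Note the design choice: the default answer is `False`, and access is granted only by an explicit rule, which is the deny-by-default posture least privilege calls for.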
The terms are used a little bit interchangeably, and access control can refer to both the policy and the mechanism, but it's helpful to think about them in those ways to be clear. And one of the important things that we want to be able to do is model access control. So when we create a model of it, we're creating an abstraction that allows us to reason about access control, to reason about the desired access control of the system. And why is this important? Why do we care about modeling access control? Yeah, so we can, okay, go ahead, please. I think it's because you want to control who has access; let's say at a military level, you definitely don't want low-level people to access the high-level stuff. Yeah, so we wanna be able to, again, increase our assurance that the access control mechanisms actually enforce the policy that we want. So if we think about, let me pause the recording real quick. Oh, actually I will pause share. Oh, that's nice, okay, cool. So you can't see my screen, that's correct. Bring up notes from the class. Can y'all see this? Yeah, I can see it. Okay, cool. Let's pull up the slide next to it, okay, cool. We'll go through a quick example to think about why modeling access control is important. So let's say I have an access control system with students and professors, right? So my policy is: students should only be able to read their own homework files, okay? And the access control is: professors can read all homework files of all students, students can create homework files, and students can read homework files. Is everyone following along with this example? So our access control system, the policy that we wanted, is that students should only be able to read their own homework files. This is the security property we're trying to guarantee. 
And students can create homework files, and they can read homework files that they create. And professors can read all homework files of all students. So does this access control satisfy this property? Yes or no? I think somebody said in the chat, yes and no? Okay, so somebody says one problem is "can read," okay. So maybe I'll change it a bit to say "can read only the homework files that they create," or "can read only homework files that they own." How about this? Are we still good? So when you say yes or no, how are you coming to that conclusion? How should we be thinking about this? How do we answer that question of yes or no, it does or it does not? We can look at what we do not want happening. So in this case, the students should not be able to see the files that they don't own. And that was specified in the access control, that they were able to read their own files. So I think that is how it's specified. Yeah, so one way to think about it is we have some property P, right? So this is our property. Let's see if I'm gonna mess things up, but that's fine. And we wanna see, is it ever the case that not P can be true, right? Can we ever get the system into a state where not P is true? Can we ever get into the case where students can read other people's homework files? So what we can do is walk through this system, right? The system is in some state, and we can try to say, can I get the system into that state? So professors can read all homework files of all students. So no matter how many homework files there are, the professors can read them. Does that violate P? No. No, it doesn't violate P, right? Okay, so students can create homework files. So you're student one; we can walk through this, right? Student one creates homework one, and homework one is owned by student one, based on our policy. And then we say, can S1 read homework one? The answer is yes, based on our policy. 
If I have student two, can student two read homework one? Yeah, so they could if they can make themselves a professor, but if we assume that that's not possible, we'd say no. And now there's no other states we can move into, right? We can't move into that state because student two does not own homework one. This is excellent. Okay, great. So this is kind of a hand-wavy argument, but you can see how you can reason about these steps that you could take. And we'll see that if we model this access control system, this could actually allow us to prove this. But what if I now said professors can change who owns a homework file? Is my policy still correct? No, why not? Right, so I need a counterexample, right, using all of the rules of this access control system. So I'd say: student one creates homework file H1, so now student one owns homework file H1. The professor changes homework file H1 so that S2 owns homework file H1. Now student two reads homework file H1, right? And again, this isn't getting into whose fault it is, right? But because I've now added this ability to change the access control system, now I have the ability for somebody else to change who owns what. Now I can actually reach a condition where not P is true, right? Because another student can read another student's homework file now that I've added this, that would be a problem. So I'd need to amend the policy to say something like, unless the professor is the one to make the change, right? And this is where we get into the problem of a policy, right? Even written in English, right? 
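The walkthrough above can be captured in a few lines of code. This is a hand-rolled toy model, not any standard tool: state is "who owns which homework file," the property P is "a student can read only files they own," and adding the professor's chown rule lets the system reach a state where P fails, exactly the counterexample from the lecture.

```python
professors = {"prof"}

def can_read(subject, hw, owners):
    # Professors read everything; students read only what they own.
    return subject in professors or owners.get(hw) == subject

owners = {}
owners["hw1"] = "s1"          # s1 creates hw1 and therefore owns it

assert can_read("s1", "hw1", owners)      # s1 reads their own file: fine
assert can_read("prof", "hw1", owners)    # professor reads it: fine
assert not can_read("s2", "hw1", owners)  # P holds so far

# New rule: professors can change who owns a homework file.
def chown(actor, hw, new_owner, owners):
    if actor in professors:
        owners[hw] = new_owner

chown("prof", "hw1", "s2", owners)
# Counterexample: s2 can now read a file that s1 created -- not P.
print(can_read("s2", "hw1", owners))  # True
```

Running this is the mechanical version of the hand-wavy argument: we enumerated the reachable states and found one where not P holds.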
My intention here is, of course, that it wouldn't be an academic integrity violation if the professor chooses to share that homework file, let's say it's a good homework file that's used as an example for everyone on the next homework assignment, right? If I didn't have something like this, then that would be against the academic integrity policy, and this access control system could allow me to violate it. And if I now added the ability for students to change who owns a homework file, now we have an even clearer example of a violation of this policy, right? And then we have questions: who should own the homework file? Can multiple people own it? All that stuff needs to be thought about. So what we're gonna do is have a way of thinking about and modeling access control so that we can reason about the steps that happen in a system. So we're gonna use a little bit, and I urge you, don't be scared by the notation, just a little bit of formal notation. So we're gonna call the subjects in our system S. So subjects are things in the system that can act. So in our previous example, students and professors are both subjects in the system. Were homework files subjects in our system? No, right? People are saying in chat, no, they're not, because they can't act. The homework file in our system couldn't actually do anything. It couldn't try to read things, it couldn't try to write things. Subjects are only things that can act. But a homework file is an object in our system, so it's included in the set O. So these are assets or objects in the system that can be acted upon. So thinking about before, were students and TAs objects in our system? No, why not? They can't be acted upon, right? 
Right, I can't act upon a student in the original system, but what if I say that a professor can kick a student from the class, such that they no longer own any homework files, or they can't log in, let's say, right? So the objects in this system were homework files, but now students can be acted upon too, and so they are now also objects in our system, right? So students can be both a subject and an object. We'll see that this seems a little bit weird, but when you actually get into it, it makes a lot of sense. And in this case, professors can't be acted upon, so they would still be only subjects and not objects. Cool, okay, great. And then we have the rights: what can a subject do to an object, right? So in our previous system, we had the right of reading a file, we had the right of ownership, who owned what, and we had the right of being able to change ownership. Okay, and actually let's keep going with this model, I kind of like this model. So we have the students, we'll say lowercase s, s1, s2. So in our model, our set of subjects is, let's say, the set containing all students. Oh, it's class time. Man, okay, cool. We'll get back to this later. Thanks for letting me know, I appreciate it. All right, see y'all on Thursday. And if you're on Wednesday,
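Since the lecture cuts off mid-definition, here is a sketch of where that notation is heading: subjects S (things that act), objects O (things acted upon), rights R, and an access control matrix A mapping (subject, object) pairs to sets of rights. The specific entries are invented to match the class example; note how students appear in both S and O once the "kick" right exists, while professors stay subjects only.

```python
S = {"s1", "s2", "prof"}            # subjects: students and professors can act
O = {"hw1", "hw2", "s1", "s2"}      # objects: files, plus students (they can be kicked)
R = {"read", "own", "chown", "kick"}

# Access control matrix: absent entries mean "no rights".
A = {
    ("s1", "hw1"):   {"read", "own"},
    ("prof", "hw1"): {"read"},
    ("prof", "s1"):  {"kick"},      # professors can act on students...
}                                   # ...but nothing in A acts on "prof"

def allowed(subject, right, obj):
    return right in A.get((subject, obj), set())

print(allowed("s1", "read", "hw1"))   # True
print(allowed("s2", "read", "hw1"))   # False: s2 has no rights on hw1
print(allowed("prof", "kick", "s1"))  # True: s1 is both a subject and an object
```

The matrix view makes the earlier reasoning mechanical: property P becomes a check over the entries of A, and each rule of the system becomes an operation that adds or removes entries.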