All right folks, it is noon on Thursday the 31st. Any questions before we start? Yes, we're talking about mechanisms. So what happens if, for instance, we build our mechanism on the concept of SCANF, and it turns out SCANF no longer works? And how do we... What is SCANF? SCANF, I see. Okay, so there are a couple of things to think about there, right? The question basically boils down to: how do we trust our mechanisms, or in some sense, how much trust do we put in our mechanisms? Have we got thoughts on that? How can we trust our mechanisms? Should we trust our mechanisms? Only if they're our friends. If it's certified. And what? If it's certified. So maybe we can delegate the trust to an authority, and say there's some authority that certified that, yes, we should trust this. How else can we trust our mechanisms? You can do research to see how reliable the mechanism is. You can look at the past history of the mechanism to see whether there have been security problems, and maybe use that as an indicator of future quality, right? But still, it's kind of the same thing as looking at the stock market's past to predict the future, right? Past performance isn't a predictor of future performance, but it can help increase your assurance. If there aren't a lot of security vulnerabilities or security problems in your mechanism, that could mean one of two things: either it's written fairly well and it's fairly secure, or nobody's bothered trying to break it. You could also test it. So maybe you don't trust that the organization is doing proper testing; you could take the mechanism and test it yourself. If it's software, that can involve reading the source code to try to determine whether they're doing things securely. There, you're limited by your knowledge and what you can actually do, right?
There could be an attacker who could find other ways to bypass your mechanism. So should we just give up? Should we not use any mechanisms, since we can't really trust them? You have some kind of communal assurance, because if all of your mechanisms are the same as everybody else's mechanisms, then at least everybody is equally vulnerable, right? So the security software that I buy for my website is the same as Google uses. If someone can hack me, that means they can also hack Google. And Google's stuff is more valuable. So I can depend on someone else using their resources to verify whether the product actually works. Yeah, so this would be the idea that if everybody in my neighborhood has the same door lock, my house is no more likely to get broken into than anyone else's. But is the attacker just going to move on to the next house? Or is an attacker going to rob every single house on the block with a crew of 20 people? So there are interesting dynamics here, right? One thing this makes me think of is taking your really crappy car and always parking it next to very nice, expensive BMWs, right? The theory being, hey, if someone's going to bother to break into a car, why would they break into my car when there's a really nice car right next to it? So it's similar to the value of the target, right? Say I'm using some very popular security mechanism that I know a lot of companies are using — take Microsoft Windows as an example. A lot of organizations use Windows. Maybe I'm trying to decide if I want to use Windows. There probably exist vulnerabilities in Windows. The question is, would an adversary want to use a Windows vulnerability against me, as opposed to against Google or another large organization? That may depend on you and your threat model.
Well, the inverse of that is also true, because they could also spend their time trying to hack you, as opposed to a bigger company, and if they hack you successfully, then they can go hack the bigger company. Possibly. It depends on the context, right? In the case of Windows, they can get a copy of the software and just test it on their own local instances, so in that sense there's probably no reason for them to test it against you first. Because remember, every time an attacker launches an attack, there's a chance that somebody will detect it, and if it's something that was previously unknown, now the entire world knows about it. So you just burned this exploit, this attack, against a low-value target, and now it's essentially worthless because people know about it. It's an interesting question. So I guess the flip side is: should we just get rid of mechanisms altogether? Just say, we'll stick to policies? Yeah, policies don't actually enforce anything, right? So you want mechanisms in place. You also need to consider the trust you put in the mechanisms — whether they're actually going to implement your policy, and whether that's appropriate for the cost-benefit analysis for you and your organization. You may want different mechanisms, and there is a little bit of value in standing out from the crowd. So it used to be, for a long time, that Macs did not get viruses. Why is that? Because Apple employs better programmers than Microsoft and writes better software? The market share of Mac users versus Windows users back in those days was probably around 10% to 90%. And so here you have a software monoculture. If you find a security vulnerability that you can exploit in Windows, you can hit 90% of computer users.
So an attacker trying to maximize their impact is not going to bother writing viruses or malware for the Mac, because who cares? It's 10% of the people, right? If it's the same amount of effort, then I'm going to put my effort into something where the payoff is much bigger. Now that is no longer the case — there is malware, there is ransomware, that actually targets macOS too. So on Tuesday, I drove home the point that we should always have somebody else look over our code, check our mechanisms, or do our verification. And it's not because I'm lazy, of course — it's that my eyes skip over my own mistakes. There's this idea that a lot of people, not everybody, get married to their code, and they think this thing that I brilliantly created, even if I designed it for security, is beautiful, and I cannot break it. And just because you can't break the thing you created doesn't mean somebody else can't. We talked about ensuring that whoever tests our controls is actually smarter than us, and that we shouldn't hire script kiddies. And I just wanted to bring up the point that that's not necessarily true, because if you hire a script kiddie and they download some utility and run it against your website and they get in — give that script kiddie the $10,000, because your website's not secure. So the point is that you can't do effective tests on your own code, because you have this creator's bias. And there are lots of assumptions that you might not even be stating, but they're in your head, subconsciously, about your code. Right. So yeah, I definitely agree with the overall point that it can be very difficult to test software that you wrote. I'm sure we've all faced this when writing a programming project, right? The code looks correct to you. It looks like you've implemented 100% of what the specification said it should be, but it's not passing the test cases and you don't understand why.
So you keep slamming your head against it, or you use other kinds of tools to try to help point you toward what you're missing. Yeah, I've actually experienced this in my professional software development career. At Microsoft you have SDEs who develop, and SDETs who create test cases, build testing infrastructure, and handle the testing side of things. And I remember when I was doing my internship there, I was shocked. I had to develop some feature — I can't even remember what it was — I spent a month on it or whatever, some internal-level thing. And I was super proud. Like, yeah, I fixed this bug, now this code is awesome. I spent a lot of time looking at it. And then the testers got it for like a day and they're like, yeah, it works fine, except when there are two monitors and the application is on the second monitor and not the main monitor. And I was like, what? It didn't even occur to me to think about testing anything like that, right? So somebody else, who didn't have the same bias that I had in writing the fix, was able to find that. So I definitely agree. I think the interesting part comes in when the outside person is an inside person. A lot of orgs have security groups internally that work with the product development teams to do these kinds of analyses. And that's a pretty effective model. Part of you taking this class, even if you don't go into security, is getting you exposed to and aware of security problems and security issues. If you're not aware of something, why would you ever think that it could even possibly be a problem? Yeah. What's your second point? That was it. You had a second point, but I don't remember. He doesn't remember either. Okay. That's fine. It was a good point. I'm not saying it was a bad point. I just forgot while I was responding to the first one. Yeah. Very good discussion.
Which ties into what we left off on Tuesday: the cost-benefit analysis. Are the mechanisms effective for the price that you're paying and the benefit that you're getting out of them? Are the security measures — the policies, the mechanisms — actually worth the cost? So what's easy to quantify about this question? The cost, in dollars. It's very easy to measure. You bought a $10,000 security product. You're hiring a security engineer for $120,000 a year. These are all things that you can quantify. What about the other side of this equation, the benefit side? It's how much those security measures actually secure our products. Yeah. So part of this is you could try to measure what assets you are protecting. That would be part of the cost-benefit analysis. If you're spending $20,000 to protect a server that, if it got compromised, would only cost $5,000 worth of damage to the organization — it doesn't really affect anybody's job or work — why are you spending that much money to protect that asset? Now, it could be because you don't want a compromise there to lead to a larger-scale attack. So there are definitely some thoughts there. For the benefit part, you can maybe measure benefit in terms of what harm could come to the organization if there was a security problem. Yeah. I mean, some companies are relatively insecure in certain areas and very secure in other areas. For instance, take Coke's secret recipe. Supposedly only two people each know half of the recipe. But if that were true for every single part of Coca-Cola, it would not really work. So why do they do that for the recipe? Why don't they do that for, say, the aluminum that their cans are made of? Why doesn't every single company do this? It's an efficiency issue. I mean, if every single person only had like half...
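The $20,000-versus-$5,000 server example above is an instance of what's often formalized as annualized loss expectancy (ALE). A minimal sketch — the exposure factors and occurrence rates below are made up for illustration; real risk analysis has to estimate them, which is exactly the hard part discussed here:

```python
# Classic quantitative risk formulas:
#   SLE (single loss expectancy)     = asset value * exposure factor
#   ALE (annualized loss expectancy) = SLE * ARO (annualized rate of occurrence)
# A control is arguably worth buying when it reduces ALE by more than it costs.

def ale(asset_value, exposure_factor, occurrences_per_year):
    """Expected loss per year for one threat against one asset."""
    sle = asset_value * exposure_factor
    return sle * occurrences_per_year

# Hypothetical server from the discussion: $5,000 of damage if compromised.
# Assume (invented numbers) one compromise every two years without the control,
# and one every ten years with it.
risk_before = ale(asset_value=5_000, exposure_factor=1.0, occurrences_per_year=0.5)
risk_after = ale(asset_value=5_000, exposure_factor=1.0, occurrences_per_year=0.1)

control_cost = 20_000  # the $20,000 protection from the example
savings = risk_before - risk_after
print(f"annual savings: ${savings:,.0f}, control cost: ${control_cost:,}")
print("worth it" if savings > control_cost else "not worth it")
```

Running the numbers makes the lecture's point: the control saves an expected $2,000 a year against a $20,000 price tag, so on this asset alone it fails the cost-benefit test — unless the compromise could cascade into something larger, which the simple formula doesn't capture.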
Or, like they were talking about with the keys — if it took two people to do every single process, it would take forever to do anything. Okay, so it doubles the effort. What else? The reason they do it is because it's a trade secret, versus being trademarked or patented or anything. But the point behind it, too, is that that's all their value. If that gets out and people can replicate it, then they lose all their value. They put these really harsh security mechanisms in place, even though it makes doing business difficult, because they need that level of protection for that system. Or they think they need it. Or now it's become such a PR thing that they think it's good for public relations that people believe they have this, right? So there are definitely multiple different levels to think about. Is this the core — essentially, you can think of it as the crown jewels of the company? And then Eric brought up the point of, well, maybe they could have patented the recipe for Coke, right? But then what would they have to do? Hire a lawyer. Then could someone counterfeit it in China? No, well, that's a different issue — they probably weren't worried about that when the company started. Don't you have to actually show what it is? Yes, you have to demonstrate and describe exactly what it is in order to get a patent on it, right? So they did the cost-benefit analysis and said, hey, it's better if we keep this secret. But they don't spend that level of effort and secrecy on every single aspect of the company, because the rest isn't as important to the company. So this is the key point. We're spending a lot of time on this because this is the thing that we often forget. You're inside an organization. You're trying to accomplish some organization-level mission.
And so if, for instance, security is getting in the way of that, people are going to find ways to bypass your systems, or you're going to be the most hated group in the company, because you're preventing people from getting their jobs done and preventing the company from continuing to function. Yeah. Let me go back to the housing comparison. You don't care who comes up onto the porch of your house, but you do care about who comes through your front door. So sometimes you don't really care about the security of certain places, versus the security of some particular area. Right. And it may be context- and time-dependent, right? When Amazon leaves a package on your doorstep, you may care about who's coming up to your porch. But at other times, when there's nothing there, maybe you care less. So yeah, this context sensitivity and time sensitivity is interesting. So, part of the role of security — developing a security policy, developing mechanisms that take into account this cost-benefit analysis — is doing what we've been hinting at and talking about: risk analysis. The first key question is: should an asset be protected? What does that mean, asset? I think we talked about it a little bit. I guess anything that has value. Or could have value. Or could have value, right. To the organization, to you the person, to whoever is creating this security context. So should all assets be protected? It depends, right? Let's take a good example: laptops in a company. How many people have a laptop as their sole device — not including a phone, as opposed to a desktop? Some. So you probably, as an employee, want a laptop as your work computer, right? It's mobile, you can take it home, you can work from home.
Companies are going to love that you're working from home outside of nine to five. So should you protect that asset, just a laptop? You probably should protect it, since it has a higher risk of getting stolen once it's outside the office building, and there could be confidential data on it. Yes. But again, it depends, right? It depends on who's using the laptop. Even if, let's say, there's no confidential, no important data on it — the CEO just uses it to play games at home — the CEO's laptop is a pretty big deal to the company, and if it gets stolen or lost, you could have problems. So you may want to protect it without even considering what's on it. Now, if you're bringing corporate confidential data onto your laptop and bringing it home, where it can easily be left in an Uber, be lost, be stolen — when that happens, the company's data and important information is out there. Another way to think about this is you and your student records, right? There are the FERPA requirements, which require that we keep all your academic records confidential. It also means that if your parents ever come to me and ask what your grade is, I can't tell them unless you specifically authorize me to do that. And part of that is making sure that, with the grades and everything that I have on here, if I ever lose this laptop, that data is not going to be compromised. They do that by putting certain mechanisms in place: by ensuring that I have my hard drive encrypted, so somebody can't just pull the hard drive and read all the data on it. They have password policies. They have rules for how long the computer can sit idle before the lock screen pops up. All of these different mechanisms they put in place to enforce this policy about not losing student data. So the other thing we didn't talk about — going back to the beginning:
What threats does the asset face, right? We said a laptop probably faces different threats than a desktop. What's the difference? One's easily mobile. Right, one is easily mobile and the other is more difficult to move — and, like in the computer labs, the computers are locked to something. Yeah, so you can even try to physically secure the desktop machines to ensure that they don't leave, don't get stolen. So this comes into risk analysis. Data on a laptop would be much riskier than, say, data on a desktop inside a company building where the doors have security badges and everyone has to wear a badge to prove they're an employee, right? The risk levels there are different, and so there may be different policies, different mechanisms in place for data on a laptop versus data on a desktop, even though the data is the same. Another key thing to think about is what happens if it's attacked — an attack in our CIA triad sense, so the confidentiality is compromised, the integrity is compromised, or the availability is compromised. Why might this change your analysis of the risk? You might not care if something isn't confidential anymore, because it's not really of value to anyone else, but if it's not still available to people in your company, it might become a much bigger deal. Yeah, exactly. So there could be that. The military has a process for marking data as confidential and for declassifying information, right? Once something is declassified, they don't really care; you care a lot less about protecting it than you do when it's actual confidential information still under that designation. So, for instance, take Coca-Cola again:
If somebody were to figure out and steal their recipe and display it to the world, maybe at that point they'd stop caring about having only two people know the recipe, right? All of these security mechanisms in place would seem silly, because everybody knows the secret. So the consequences can change based on context and based on what the data is. We also need to think about what level to protect an asset at. Should we be protecting the CEO's phone the same as a level-one customer service rep's? When we talk about houses — should I protect my house the same way the White House protects their house, the same way Fort Knox protects theirs, the way military bases protect theirs? Probably not. The same ideas apply when we're talking about computer systems. Because security is not free — that's what the cost-benefit analysis is, right? Everything takes time, takes money, takes people, and that's time, money, and people that could be spent doing something else: either improving your product or securing a different asset that is more critical to the organization. Does risk remain constant? If you take the laptop on business trips, into hotel rooms, or work in a Starbucks, that risk is going to go up dramatically. Exactly, and your risk may change based on what you do with that laptop. When they first give it to you, fresh from the box, a brand-new install of Linux or macOS or Windows or whatever you're running on it, the risk is pretty low. If you lose it, there's literally nothing on there. You haven't done anything to that laptop. The moment you place confidential information on there — student records, company quarterly earnings reports, employee salaries, Social Security numbers — then the risk really increases, right? And so what should we be thinking about when we're trying to analyze the risk of a system? What should we be considering?
Should we consider the baseline risk, the highest-level risk, the average, and... Well, using the example you mentioned, Amazon delivering a package to your front porch — it's kind of contextual. You kind of have to consider it all, but that's part of the risk analysis: at what times or what locations do we need the highest security, and at what times can we cut it back? Right, exactly. So yeah, it's all of the above in some sense. In the laptop example, it makes sense to essentially assume that there is, or will be, confidential information on the laptops and protect them accordingly. It doesn't really make sense to try to pinpoint the moment when actual confidential information gets onto the laptop. In the Amazon case, one response to people stealing your packages would be to put up super huge gates and cameras and everything, right? Really secure your porch area — and then your package gets returned because they couldn't deliver it, because nobody wants to go into your fortress to put a package on your doorstep. So you have to think about the mechanisms you put in place: how are they going to reduce risk, and how are they going to affect availability, all those sorts of things. And this is something to think about with the policies and mechanisms of a system. So let's say we have a corporate environment, and we say, okay, the employees — it's the late 90s, they don't need access to the live internet because there's nothing out there, right? So there's no internet access in this office. Then what threats do you now not have to consider? Any non-local or non-physical threat, right? Exactly. If you have something connected to the internet, you have to be aware of any person on the internet trying to attack and compromise your system. So that's part of the threats you need to think about.
If you say, okay, no internet, we're just going to have this local area network, and to get onto this network you have to physically plug in — now I don't care about everyone in the world. I care about people who are physically located near me. So in that sense, you could then provide additional mechanisms, policies, controls to ensure that only authorized people can get onto your network. Now, this is part of the risk analysis of the system. You say, well, the risk of attack is pretty low, because it's only local attackers, we control who can get local access to our network, and we're not connected to the internet, so we're not worried about all possible attackers on the internet. So is that risk level going to remain constant forever? What can change that risk level? Yeah. We could be targeted explicitly. That's a good one. What else? What if employees are like, man, not actually being on the internet is really annoying. It would be great if I could get online. I have this phone that's always connected to the internet. So what if I just tether my phone to my desktop machine, right? Now the desktop is on the internet again — but what has changed? Not your requirements; your requirements for securing things are still the same, right? The threats, exactly. The threats that we're considering have now changed. We used to say, hey, the threat of remote attackers is incredibly low, but now, because of something an employee did, our threat is a lot higher and a lot different from what we were considering. So why is it important to think about that scenario? Because employees are stupid and this will happen. I would not say employees are stupid. I would say that employees are trying to do their job and get things done, right?
And so if your policies and mechanisms allow them to do something, it's probably safe to assume that they will do that thing at some point, right? If there's nothing in our policy, and no mechanism in place, to ensure that somebody can't just hook up a Wi-Fi router — they can just buy some kind of router, plug it into our network, and now all of a sudden we have internet in the office and our threat model has completely changed. Another way to think about this: there's a story I heard a couple weeks ago at a cybersecurity conference, where a general was saying they had an incident. The military network — this milnet, they call it — is completely disconnected and separate from the internet. They're on completely different systems, using, they hope, completely different physical infrastructure — I've never been in the military, I don't know 100%, but it's physically separate. So you assume that random people on the internet cannot attack you, right? Part of why you do that is you create this super draconian mechanism that says we will never connect to the public internet. So: you're the Russians, and you want to hack into the US military computer systems. You put malware onto USB sticks, and you supply the local convenience store near a military base with these USB drives. When one of these USB sticks is plugged into a computer, it auto-runs a piece of code that phones home back to the Russian systems and gives them access into the network. And this is what they did. They — influenced, we'll say — a supplier of USB drives to a convenience store that was located right by a military base, and they just waited.
And eventually, even though it was against policy to plug a USB drive into a computer, somebody was in a crisis where they really had to do it, and they just happened to have one of these USB drives. They plugged it into the system, and just like that the Russians had hacked in and bridged that gap. So what do you do then? How do you fix that? Well, you have to treat employees as a significant threat, precisely because of the amount of trust you already give them. You've got to keep training — training, going over policies, these things can't happen. Training, policies — or you could do the super draconian mechanism that I heard about. Allegedly, the Marines, in response to this incident, said: okay, no more USB ports. So they epoxied every USB port. I've got to say, that's a pretty effective mechanism to prevent this kind of thing and enforce the policy. And that can help you contain the risk. But it's important to consider these things beforehand, right? If somebody beforehand had said, hey, even though we have this policy in place that you can't plug in USB drives, this is a known infection vector; we're not patting people down to make sure they can't bring in any USB drives; and all it would take is for somebody to take one out of their pocket and plug it into a computer — that might have led people to think, well, maybe physically disabling the ports is a bit much, but we could probably implement a group policy across all the systems to disable USB storage devices, or something like that. Or you could whitelist USB drives, so that unless a drive presents its authentication code, the USB port does not activate it.
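The whitelisting idea can be sketched as a simple allowlist check. Everything here is hypothetical — the serial numbers are invented, and a real deployment would enforce this in the operating system (group policy, udev rules, or endpoint software), not in application code like this; the sketch just shows the shape of the policy decision:

```python
# Sketch of the allowlist idea: only USB drives whose serial numbers were
# pre-registered by IT are activated; everything else is rejected by default.
# The serial numbers below are made up for illustration.

APPROVED_SERIALS = {"0451-ALPHA-0001", "0451-ALPHA-0002"}  # hypothetical IT registry

def may_activate(serial: str) -> bool:
    """Policy decision: activate the device only if it is on the allowlist."""
    return serial in APPROVED_SERIALS

print(may_activate("0451-ALPHA-0001"))     # an issued, approved drive
print(may_activate("FREE-USB-FROM-STORE")) # the convenience-store drive
```

The design choice worth noting is default-deny: the convenience-store drive is blocked not because anyone recognized it as malicious, but because nobody ever approved it.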
I would still be concerned about that, because if you ever took that USB drive home and plugged it into your personal computer, and your personal computer was infected, then that's how you let something in. So in these situations, when the stakes are really high, it can be better to have these very draconian policies and mechanisms. How do we quantify risk? How do we put a number on risk? I'm 20% risky in my system. I improved from 25%, so I'm doing much better — now I'm only at 20. And if you give me a billion dollars, I'll bring that down to 5%. Maybe base it on what kind of information you're storing. If you're storing Social Security numbers, that's going to correlate to a higher risk than just names. Yeah, so maybe you could think about quantifying it in terms of the sensitivity of the data. The quantification is the difficult part. Qualitatively, you could say, hey, we just adopted a new policy where we're never going to store credit card information — our credit card processing agent is going to do that for us, and we can still function. So we've clearly reduced the risk of an attack, because we no longer have the sensitive information. But what number do you put on that? Was it a 5% reduction in risk? 10%? 20%? This is a hard question. What did we talk about quantifying before? Assurance. So it's similar — two sides of the same coin, right? It's very difficult to quantify risk. You could maybe try to say, well, hey, last month we detected 10 pieces of malware on our employees' computers, and within a 24-hour period we detected, identified, quarantined, and reprovisioned the machines. And this month, we brought that down to 5, again handled within 24 hours. So have we reduced our risk by 50%? Really, you just had less malware.
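The malware comparison can be made concrete with the hypothetical numbers from the discussion. The arithmetic below just totals downtime across detected incidents — the point being that even this tidy metric only measures what your tooling saw:

```python
# Hypothetical monthly malware stats from the discussion. Detection counts
# only measure what your tools can see, so compare impact, not just counts.

incidents_last_month = {"detected": 10, "hours_to_recover_each": 24}
incidents_this_month = {"detected": 5, "hours_to_recover_each": 24}

def total_downtime_hours(month):
    # Total exposure among *detected* incidents: count times recovery time each.
    return month["detected"] * month["hours_to_recover_each"]

print(total_downtime_hours(incidents_last_month))  # 240
print(total_downtime_hours(incidents_this_month))  # 120
# Downtime among detected incidents halved -- but this says nothing about
# incidents the tooling never saw, so "we reduced risk by 50%" doesn't follow.
```

Response time, unlike raw incident count, is a metric the defender largely controls, which is why it can be a more honest thing to optimize.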
Yeah, it doesn't mean there weren't other things you didn't find the first time. Exactly. If that were the only thing we were looking at and measuring, then maybe. And really, in that scenario — I think it still goes back to the quantification being difficult. Even at that point, I don't think you can say definitively, yes, we reduced risk. Isn't it subjective? It's subjective, and it's dependent on the tools you have to detect these things, right? If you're only measuring yourself on things you detect, what about all the things that you don't detect and don't find and don't ever see? Now, if you were able to say, hey, last month we had 10 malware infections and we were able to get each of those machines back online within 24 hours, and this month we had 20 malware instances but we were able to recover each instance within 30 minutes — I'd probably argue that that's a good reduction of risk. You've probably reduced the impact of attacks. And the raw number of incidents is maybe less under your control than you think. So you can try to improve that response cycle, to get a system back up and functioning, but it's still difficult to quantify. Yeah. I mean, in a lot of respects, security is a lot like an iceberg. You're only aware of what's on top, the part that's visible, but the reality is there are so many threats you're completely unaware of. Yeah — so Chase, I think it was back in 2014: JPMorgan Chase had, I believe, 76 million households in the US have their information leaked and stolen, and the attackers were in there for months without them noticing. So that's the kind of thing that could be happening and you never see it. You can have these large events. The other things we need to think about are the laws and customs involved, right?
So that's what drove these FERPA requirements, right? The university has policies that people's laptops, actually I think every computer system, have to be encrypted and have to follow certain policies, because of laws in the country. So laws can restrict the policies and mechanisms that you can enforce. I'm not a lawyer, so I'm not going to talk a ton about this, but as part of your mechanism, you can't just shoot people. I think that goes to the booby trap thing we talked about, right? I believe booby traps are illegal in every state; it's not a state-by-state thing. So I believe you're liable for any booby traps you set in your house. That would be a very ineffective mechanism for deterring people from entering your property, and it would also be an illegal mechanism, so it would leave you liable and get you in trouble. That's just bad news all around. And if you are touching health care data, there's HIPAA, H-I-P-A-A; I can never remember where the doubled letters go. HIPAA is a regulation that restricts how an organization can handle health care data. Anybody do any PCI compliance? Is that a law? Yes? No? That's a trick question. It's a credit card industry regulation. The credit card companies said, we don't want the government to dictate what we should do, so the companies came together and created, I believe it's called the PCI council, and they developed a series of things you can do to certify yourself, and I think there are different levels of PCI compliance. Is that correct? And the credit card companies state: if you want to store credit card information, you must be certified at one of these PCI compliance levels. So this may be more regulatory, or it gets more into the customs part. If you fail your audit, they fine you. Yes. Yeah, so that'll be bad.
Right, but it's not the government saying you have to do that; it's the industry self-organizing to do it. It's part of the contracts that you sign with those companies. There have been lots of different laws, and we won't go through all of this, but lots of craziness involving cryptography. It used to be the case in the United States that cryptographic software was considered a munition. So if you were working with a company in another country, and part of your software had cryptography or encryption components, you would have to apply, I believe, for an export license to ship that software to them. This is something that you have to be cognizant of as a developer and as an organization, because if you're developing this software and shipping it out, you could open yourself up to huge legal liability. One of the funny stories here is that this applied to software, but it did not apply to books. So cryptographers working in this time period would write crypto software and new crypto routines and publish them as books, which they could then sell and ship to people in other countries to get past these export restrictions. Very weird times. Privacy. So I said student private data is of the utmost importance. Now what happens if a systems administrator, an IT person, finds a virus on my machine, or something weird is happening, and while performing their job they see that an Excel spreadsheet looks like it has a malicious macro in it. They open it up, and it has all your grades in there. Did they just violate the law? Or, I guess that's a separate issue, but should they not open that file if they think there are grades in there? Well, I'm sure they would have some level of access to those things, or would get permission to have access to them.
So permission is one thing. Oftentimes when you sign up for computer systems, part of the terms of use is you giving the system administrators the authority to look at your files in the course of doing their duties. Yeah, that's definitely part of it. Sometimes the law will have specific provisions for these types of things, saying that administrators doing their jobs are exempt from these privacy restrictions. It's an interesting question; I don't have a good answer. Obviously privacy is important, and privacy laws are also important. At the same time, you have to balance that with these types of things and be able to respond to incidents. Yeah. But then who is authorized to see your grades, right? You own your grades in some sense. I don't know that that's a technical definition, but basically I can tell you your grade, but I can't tell you anyone else's grades in the class, and I'm not even supposed to post grades with a number or anything else that could be matched back to you. I couldn't sort you by last name alphabetically, turn each name into a number, one through 132, and then post that, because you could easily figure out who got what grade. That's why they have posting IDs that you can use, which are secret, confidential information. Now, you can authorize me to share your grade: if you sign something, or I have confirmation from you that I can share your grade with the class, then I could. This actually often happens when people apply for jobs. They want a letter of recommendation or something from me, and all I can legally say, I think, without authorization is that you were a student in my class and that yes, I know you.
But I can't really give any more information unless you authorize me to do so. Then I can say, yes, they were a top student in the class, they were awesome, they got an A plus, or whatever, but I can't say that unless you authorize me to. Similarly, I can't just go tell the system administrators what your grades are, because that would be breaking those privacy laws, but if as part of their job they come across a file that has them, then I think that would be covered. What's the difference between laws and customs? A reduction in the percentage of crime during the Super Bowl. The Super Bowl is a custom; it's a custom that you don't go out and rob people because you're too busy watching. Interesting, okay, so yes, that's a very interesting example. Well, that's not quite true, because if you're at the Super Bowl, certain crimes are a lot more prevalent. I'd say watching the Super Bowl in general is a custom, an example of a thing society does, which would be one way of describing it. What we're talking about here is what the social norms allow you to do or not do; you don't need a law in that respect. Interesting point, I hadn't thought about that. So maybe the customs would say that you don't really need laws if everybody acts a certain way. I think that places a very high level of faith in people that I don't know is warranted. It's a common agreement. The consequences of violating a law are often a lot more severe than violating a custom. Ooh, interesting. Yes and no, it kind of depends, right? Let's say you're at a company and you violated some social norm or custom, everyone who bought your product found out, and now literally everyone is abandoning your product and has stopped buying from you. That could be far worse than the $1,000 fine or whatever a government can slap on you, right?
So that's an interesting point I hadn't thought about. But usually the government can do things that individuals can't do. From a slightly more dictionary look at it, a law attempts to be more rigid, set in stone, and may not always be agreed upon by everyone, whereas a custom is more like, hey, we all kind of agree this is how things should be done; it's kind of like an informal law. Cool, that's a good way to phrase it. So this is an example I pulled up from the news; maybe you saw this a while back. Did anybody see this news article? A company was offering microchip implants to its employees, I believe under the skin, like in your finger. The idea was that instead of having a badge, you'd have a microchip in you that you could use to open the doors in the office building and pay for food in the cafeteria, all by just touching a reader. So does this violate any laws? The honest answer is, I have no idea, you'd have to ask a lawyer, but I believe this is not illegal. As long as it's voluntary, I wouldn't see why it would be an issue. Even if it was mandatory? Yeah, you can imagine that situation, but I think this specifically was not mandatory; they didn't mandate that everybody do this, it was voluntary. But would you work for a company like this? Some of you are going, I wonder how much they're paying? Some are emphatically shaking their heads back and forth: absolutely not, no matter what the price. Some of you are like, I'd put five chips in each of my fingers, perfectly fine. Yeah, so I think this is an interesting case, because it's clearly not something that's illegal, to my knowledge, but it does rub the social mores and the customs the wrong way. Like, wow: A, where are these chips coming from? B, who's implanting these chips?
C, how can I trust what the company is putting in these chips? D, when I go home at the end of the day, I can put my badge in a specific place and then go about my life; it's very easy with a badge. But you could put GPS tracking in these implants and figure out where your employees are going. Even when I was working at Microsoft, one of the employee benefits was an ORCA card, a bus pass kind of card which would get you on the buses and all that. But we didn't own it; they gave it to us. So we always wondered, if we used it, whether the company could track everywhere we were going with these passes. I definitely think they could have if they wanted to, but it's a free bus pass, and it's hard to complain about that, so we used it anyway. Customs can be a lot harder to pin down than laws, because laws are ideally fixed, I think, around a defined geographic location with specific consequences, while customs are harder to identify because they apply more to social groups, which can span many different places, and they can either hinder you or help you. For instance, many different cultures in America have different customary holiday greetings: Happy Hanukkah, Merry Christmas, et cetera. So some companies make a policy of having their employees say happy holidays so as not to offend some people, yet that will still offend other people who demand that Merry Christmas be the holiday greeting. Exactly. So in this example, you can see maybe only the cyberpunk techno crowd being like, yeah, it's super cool, I can't wait to put a chip into my brain, this is the first step toward that. And there's some merit there; I mean, that would be cool. See, some of you think it's terrible, I can already see. So yeah, this is just kind of, yeah, please. Yeah, that could be. Maybe the coolness blinds you in some sense; actually, that "super cool" angle is a really interesting example.
Attackers will often use these customs against you. They will usually try to attack a system on, say, a Friday night, which gives them a weekend of people not being there to do anything about it; that's when they launch attacks. They'll try to launch attacks on Christmas and major holidays, when they know a lot of people will be taking time off and won't be able to respond as quickly. So as part of this, some security organizations have what they call a follow-the-sun model, where they have security centers all around the globe so that there will always be somebody up, available, and on call. But if the people who can fix it are away on holiday, right, that's where it gets really interesting. So yeah, I think this is an interesting example to bring up. For physical security, when we talked about that, you could maybe have a policy of frisking all your employees every day when they come in to work. And that would probably be legal, I don't know, but it would be butting up against some strange social norms, right? Whereas when we go to the airport, we don't think twice about that happening, mostly; we've come to accept that it is a part of air travel. Or if you go to a military compound, they take you into a secure facility, and you have to put your phone in a special box before entering a meeting, right? That would probably not fly if I made everyone do it to come to office hours to talk to me. So I think all of this, the laws, the customs, is feeding into something we've touched on before when talking about employees and what they do: the human issues are very important.
So part of being an effective security person, effective in the sense that you're going to help your organization achieve its mission, be more secure, and keep people safe, is this: you're going to be very effective if you are not only very good at security, but you understand the business, so you can make the cost-benefit analysis, and you can also deal with the human issues. So what are some of the human issues? A lot of people can't remember their passwords, so they just write them down. So you have difficulties in implementing policies or mechanisms. Your policy is that you want people to have complex, hard-to-guess passwords. Your mechanism is to enforce that their password meets certain requirements. But what you're really trying to do is make passwords not easily guessable, and people just write the password down and tape it to the monitor or right next to it. So there are human problems: people follow your policy, or at least satisfy your mechanisms, but still undermine the security goals you're trying to enforce. Yeah. Blackmail. In order to get a security clearance, they ask you questions about whether you've done anything incredibly stupid in your former life that no one knows about, because something like that could be used to pressure you into violating these policies, say, into giving up a student's password. So you may have insider threats to think about: the humans in the organization. We may not want to put as much trust in these humans as we think we should. I mean, we know these people, they're good people, but like you said, things can happen depending on the level of involvement. That's what it all comes down to: the risk and the cost-benefit analysis, yeah.
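As a minimal sketch of the password-requirements mechanism just described, here is what such a check often looks like. The specific rules (12+ characters, lowercase, uppercase, digit) are hypothetical, not any particular organization's policy, and the point stands that a password can pass this mechanism and still end up taped to a monitor:

```python
import re

def meets_policy(pw: str) -> bool:
    """Hypothetical complexity rules: 12+ chars with lower, upper, and digit."""
    return (len(pw) >= 12
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None)

print(meets_policy("Tr0ub4dor&3xtra"))   # True: satisfies the mechanism
print(meets_policy("correct horse"))     # False: no uppercase or digit
```

Notice the mechanism only measures what it can see, the characters typed, not the real goal, which is that the password stays secret and hard to guess.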
So humans make mistakes, right? And that's 100% true: no matter how good, qualified, smart, or intelligent we are, we're going to make mistakes. There are going to be problems in our policy, things we didn't think about, problems in our mechanisms, all kinds of things we need to think about. Yeah. Employees leaking confidential information. On purpose? On purpose. Yeah, that ties in a little with the blackmail issue. This is what we talk about with insider threats: somebody who's inside the system, in a system where we completely trust everybody on the inside, and that trust may be misplaced. Even major organizations like the NSA found this out the hard way. Well, humans are just fallible in general: they make mistakes, they make bad decisions, they're vulnerable to so many things that you just can't put them in a box and protect them. Yes. Humans have a lot that can go wrong, but you're focusing a lot on the security aspect. Put yourself in the shoes of somebody working at a company. When you're working in an organization, what do you care about, human-wise? You care about doing your programming work, being a good little employee. Satisfaction, happiness. So if you're a manager, you probably care about the satisfaction, the happiness, the morale of your employees in your org. Usability: that's the first one that pops into my head, mechanisms that are really hard to use, and then training. So yeah, two aspects of having effective security elements, and here's something we didn't bring up: you may have policies and mechanisms, but if they're so cumbersome and unusable that they basically defeat their own purpose, then why have them? Anybody use Windows Vista when it came out? Some of you? Yeah. So Windows Vista, if you missed the marketing: Windows XP was incredibly vulnerable.
I mean, the kernel was a giant ball of bugs. There were tons of worms, exploits, everything. There used to be a statistic that if you plugged an unpatched Windows XP machine into the internet, it would be compromised within five minutes, which is terrifying but probably correct. And so with Windows Vista they said, hey, you know what's insane? A, you run by default as the administrator of your machine, and B, any program that you download and run from the internet also runs with administrator privileges. So part of this was saying, hey, we're going to completely solve the security problem on Windows. We're going to make it so that when any program wants access to important resources, we pop up a box that says, look, this program is going to be accessing this resource, do you want to allow this or not? Now we can give people power, they can be more secure, and the world will be a better place. There was a ton of marketing. This was UAC, User Account Control. So what happened? It's a beautiful, awesome technology, right? And regardless of what you feel about Microsoft and Windows, they have a lot of very smart people working on these things, so they can make this about the best possible version of the idea. So what happened when it was released? Anybody who played with it remember? You want to tell me what happened? You just click on it. You don't even read it, you don't even process it. You get trained, because a lot of software that legitimately needed those privileges would pop up that box. You'd be running a game or some other kind of software, it would say, hey, I need to use this stuff, and you'd say, great. Over time that happens so much that you just get trained. They call it alert fatigue, or alert blindness: the alert pops up and you are Pavlovian-trained to always click OK. So when you download some delicious piece of garbage from the internet, you're executing it, and
you see the box, you just click OK, and you don't even think about it. So even though there was a beautiful, awesome technical mechanism trying to enforce a really good security policy, it completely failed in terms of usability, and it really caused even more problems, because the bad guys didn't even have to do anything clever. They just had to get you to double-click on the EXE, and it would just go, and you'd say yep, yep, yep, administrator permission granted. So one aspect of human issues we haven't talked about is where you sit in the organization. Say you're a security officer, you're in charge of security, but who's your boss? Why is that important? For people who work in companies, what can your boss do? Drive direction. So a new CTO walks into a company that has no security, and all of a sudden they have power that the previous CTO didn't exercise: they can create security roles where there was no security person, they can push for different things. What else can your boss do? He can go to lunch, I guess, but that's not really what I'm after. They can fire you. They hired you, they can fire you. They can tell you what to do, right? That's part of being an employee: your boss says jump, you say how high. That's part of the agreement, and if you say, no, I don't feel like jumping today, they fire you and get somebody who will jump, because that's what the organization needs. So let's say you're in charge of security, but your boss is the chief technology officer. They're in charge of product too, and their main concern is product; they just got stuck with the security component. And you go and say, hey, there are major problems with this product, we have huge gaping security vulnerabilities, we've got to delay launching. They say, yeah, yeah, no, we're not going to do that, and by the way, stop looking so closely at our stuff. Right? So this is actually why security can be incredibly
difficult when you embed it in an organization: where a group sits and how much power it has in the organization influences how much control and influence you can have as a security group. There are stories about Microsoft back in the early 2000s, right before all these worms happened. I'm sure there was a security group somewhere in Microsoft screaming their heads off about all the problems, about how insecure the software was; I don't think it was an unknown problem. But it took virus after virus, worm after worm, and so much bad press that Bill Gates himself had to write a memo that said: we have to completely change our security culture at Microsoft. As part of this, they developed the Security Development Lifecycle, to think through how to develop software securely. It really came from the top, and they had to embed this culture throughout all of Microsoft, so that if you're developing a new feature, you know which group within Microsoft to talk to about threat modeling, to think through the threats to your design, that whole lifecycle we talked about before. They have a really nice security model now because they had these huge problems. And if you don't have that, if you don't have anybody with enough power in the organization to actually make change, then you're going to have a very difficult time. How much budget does the security org have? How much organizational power do they have? This influences how effectively you can run and secure an organization. So who is in charge of enforcing the security? Maybe physical security is separate from system security and computer security. Maybe there's a networking group in charge of network security. If they don't have a common boss forcing them all to work together, then you as a leader need to figure out how to build those relationships with your peers and
other organizations that you don't have any kind of direct power over. So these are all incredibly important things to think about when we talk about the human issues here. Any questions on this? No questions, good strategy. Humans are hard; computers are easy. Programming is easy compared to people, I mean, for some people. Security is hard; security plus people is even more difficult. But it's important to keep that in mind, because you can end up securing something that a human cannot use. So we actually realized that this area was essentially mislabeled: we said you could do a broad overview of security in a day, but we really dug into threats, policies, and mechanisms, looked at the complexity of the problem, and thought about all of that. Now we're going to drill down and look at different areas of security, and the next area is access control. This actually fits in really well with what we were just talking about. So let's model, not necessarily exactly our university's policies, but an academic integrity policy that disallows cheating. Is it in the university's interest to do this? Why? Why do they care? You're paying money; why would they care if you cheat?
It could devalue the degree, because you're graduating students that actually can't do what their degree says they can do. You potentially discourage employers from investing in the school and recruiting here; they might lower the number of people they recruit, which lowers where your students get placed in jobs. What about for all the people who don't cheat? So I've seen all kinds of stuff in my time here, but what really bugs me is when people cheat. There are people who worked really, really hard, like 20, 40, 50, 60 hours on a project, and got a C or a C plus, and somebody else cheats, gets an A, didn't learn anything, and didn't put anything in. So for me, part of this is making sure that it's fair for everyone: putting a lot of time and effort into this doesn't necessarily guarantee you an A, but if somebody's going to cheat and take the easy way out, we should disallow that and do what we can to make things fair for everyone. And this policy includes copying homework, with or without permission. So, some CSE class (the slide says a different class; I'll never fix it), a class of CSE students, does homework on a shared server. Ever used general before? No? It's a shared Linux system that everyone has accounts on and access to. So student A is doing their homework assignment and forgets to re-protect their homework files, by accident. We talked about how humans make mistakes, right?
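For concreteness, here's one hedged sketch of how a file ends up unprotected on a shared Linux host: with a common default umask of 022, every file you create is world-readable unless you explicitly re-protect it. This is a generic POSIX permissions illustration, nothing specific to the server mentioned above:

```python
import os
import stat
import tempfile

# Demo of default file permissions on a shared POSIX host.
d = tempfile.mkdtemp()

old = os.umask(0o022)                  # a common default umask
fd = os.open(os.path.join(d, "hw.c"), os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)
lax = stat.S_IMODE(os.stat(os.path.join(d, "hw.c")).st_mode)
print(oct(lax))                        # 0o644: every user on the box can read it

os.umask(0o077)                        # stricter umask: new files are owner-only
fd = os.open(os.path.join(d, "hw2.c"), os.O_CREAT | os.O_WRONLY, 0o666)
os.close(fd)
strict = stat.S_IMODE(os.stat(os.path.join(d, "hw2.c")).st_mode)
print(oct(strict))                     # 0o600
os.umask(old)                          # restore the original umask
```

With the lax default, "forgetting to re-protect" isn't even a step you skipped; the file was readable by everyone from the moment it was created.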
They accidentally create a world-readable file. I should say here that one of the worst things I ever did on a server was as an undergrad. I was trying to set up a website for something, my first time as an admin on a server, and as I was trying to configure everything, it would not work. So I just chmodded / recursively to 777, which is world-readable, -writable, and -executable for everyone. I did it, and then everything worked, which was great, and I logged out of the machine, tried to log back in, and got connection denied, unauthorized user, or something like that; I can't remember the exact error message. When I submitted a ticket to the customer support system, they said, yeah, you did this to yourself: SSH has a policy where, if your authorized_keys file is world-writable, it will not let you in, so you can't SSH into the machine. And I was like, sorry, I was just trying to get things to work. So it's definitely easy to make this kind of mistake. So: student B copies the file. Who did something wrong? Student A at fault? Student B at fault? Both at fault? Well, B shouldn't have taken it, but A shouldn't have left it out for them to take. So what about B? B copies the file. Who agrees with that? You don't think person B violated the academic integrity policy? No? B... did I say B?
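As an aside, the chmod -R 777 story above can be reconstructed as a short sketch. The paths here are hypothetical temporary directories; the point is just that recursive 777 leaves authorized_keys writable by everyone, which sshd (with its default StrictModes checks) refuses, and that the conventional fix is 700 on ~/.ssh and 600 on the key file:

```python
import os
import stat
import tempfile

# Hypothetical reconstruction of the chmod -R 777 mistake.
home = tempfile.mkdtemp()
ssh_dir = os.path.join(home, ".ssh")
os.mkdir(ssh_dir)
keys = os.path.join(ssh_dir, "authorized_keys")
open(keys, "w").close()

# The "just make everything work" mistake: rwx for everyone, recursively.
for path in (home, ssh_dir, keys):
    os.chmod(path, 0o777)

# sshd's StrictModes check rejects key files that other users could modify.
print(bool(os.stat(keys).st_mode & stat.S_IWOTH))   # True: world-writable, login refused

# The conventional fix:
os.chmod(ssh_dir, 0o700)
os.chmod(keys, 0o600)
print(oct(stat.S_IMODE(os.stat(keys).st_mode)))     # 0o600
```

Everything "working" after 777 and the lockout afterward are two sides of the same change: you removed all the protection, and one of the things that protection was guarding was your own login.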
You're not raising your hand. Let's raise our hands so we can all see. So who thinks A is in the wrong? Does the policy state anything further? So far the policy just says copying homework, with or without permission, is not allowed. So according to the policy, student A hasn't done anything wrong. That's right. I've had instructors who said they made a personal classroom policy that if something gets copied, both parties are out, so don't share your code. There are two different issues here. The university could have an academic integrity policy, the CSE department could have its own academic integrity policy, an instructor could have their own academic integrity policy, and an instructor could have a separate policy for a specific assignment, like a group assignment, that would clearly and deliberately go against the broader policy but be allowed by the instructor's policy. So it may depend on that. Let's say there are no other policies in play. The second question got into who is punished, technically, as far as punitive measures from the university, if that's what we're talking about. Sorry, yes, I remember. So is there anything else to this policy? Let's say no. "Wrong" is a little bit of a loaded term here, maybe deliberately so; I'm trying to separate fault from actually breaking policy. Yes, you. So B intended to do something wrong, there was intent there, whereas with A it was most likely just negligence, just an accident, we're going to hope, at least in this case. Does the policy say anything about intent? The policy as written doesn't apply intent to punishment. But where's the school in this? We're the administrators, I guess, is where I'm going: shouldn't they be preventing the copying? Shouldn't there be some kind of higher-level policy, say a mechanism? Is there a mechanism in place? Even if there's no mechanism in place, did student B violate the policy? Yes. Could we think of enforcing a mechanism such that A could not make
this mistake? Do we need to, to enforce this policy? Not necessarily. So I think it's a little tricky as an example, but we're toward the end here. In this case, my interpretation would be that the policy does not say anything about sharing; it does not say that you cannot share your code with anybody, it just says that copying homework, with or without permission, is not allowed. The policy could say, and maybe should say, but our syllabus doesn't, that if you even share your code you would be in violation of the policy. And the policy could say it's up to the discretion of the instructor to determine intent, and if they determine that the intent is not there, then maybe different levels of mechanisms apply. Do you want to say something about the club? Yeah, come on up, you can feel like a rock star. Introduce yourself. Hey guys, my name is Will. I'm the president of pwndevils; we're a hacking club here at ASU, and I just wanted to let you know about us. We have meetings Tuesday 4:30 to 6:30 and Thursday 4 to 6, with free food at our meetings, so you get food. We focus on binary exploitation; we do some web stuff as well, but currently we're focusing on binary exploitation. The purpose of the club is to compete in CTFs, or capture-the-flag competitions, which are basically big organized hacking competitions. We actually have one this weekend. I just wanted to let you know about us if you're interested in any of that or in coming to any of the meetings. Can you post the message for everyone? I was going to say, email me and I'll add you to the mailing list.