Hello everyone. Welcome to the DevOps track; I'm glad you're all awake this afternoon. I'm Jeff Shelter, I'm one of the track chairs, and I just wanted to introduce our first featured speaker, Kees Cook. He previously worked at Canonical, where he led the Ubuntu security team, and he's currently working at Google on Chrome OS security, so I'll just let Kees take it from there. So this talk is mostly just some ideas about securing your systems, and some low-hanging fruit. The slides are already uploaded, so you can download them from there rather than copying things down. Every presentation I give, I always cheat: I show people how to pronounce my name, which is just "Case". It's spelled K-E-E-S, the Dutch spelling. But that's my name; I'm not responsible. So, just a little introduction about me. Basically, I like breaking into computer systems, so I figured I should do something about that that was productive. I've been going to DEF CON for quite some time. DEF CON, if you're not familiar with it, is probably one of the largest computer hacker conferences in the world, and they run a contest each year called Capture the Flag. It's a team-based challenge where teams break into other people's computers and defend their own. I sort of kept working on my skills, making friends and building teams, until we got a team that won in 2006 and 2007. It was a lot of work. I'm also a Debian developer. Ultimately, I managed to channel my tendencies and sneakiness into securing computer systems as opposed to breaking into them. I worked at the Open Source Development Labs that was here in Portland. It was sort of the predecessor to the Linux Foundation, the folks that paid Linus Torvalds. Quick trivia I like to share with everyone about having hired Linus Torvalds at OSDL: he worked from home, of course, but any time we gave tours at OSDL to show off the lab and those other things, people really wanted to see where Linus Torvalds sat. And we always had to say, oh, well, he works from home.
We'd get these blank stares. That wasn't working. So one day we decided to actually just print out a nameplate for him and put the Linus Torvalds nameplate on an empty cube at OSDL. And then any time anyone asked, hey, where is Linus Torvalds? we could say, well, he works from home, but here's his cube. And people would take pictures of the empty cube and things like that. So anyway, OSDL was a lot of fun. I moved on to working at Canonical on the Ubuntu security team, and then to Google, where I'm mostly focusing on Chrome OS security. So, about this talk: we're going to talk about security, and the first thing I want to do is try to convince you that it's important and that the direction I'm going makes sense. I'm going to cover some areas for how to design your systems, and some low-hanging fruit, and then finally try to get you to start working on it. So, the first part: what do I mean by post-intrusion? The idea here is that most security breaches and problems with service and system security aren't a single bug anymore. It's a long chain of attacks that gets an attacker what they want. The standard progression of an attack is more like this: you find a bug in the public-facing service, then you find a bug underneath that which gets you a privilege escalation, then you find a bug that maybe gets you into the kernel, and maybe from there you can move on to doing more remote attacks on another system, and this sort of continues. An example is the somewhat recent kernel.org penetration, which was a series of stolen SSH keys, kernel bugs, and backdoored SSH daemons.
There's this whole chain of attack. Then, more recently, something was affecting some web servers: people discovered that they were having iframes injected into their outbound web traffic, and went to figure out where they were coming from, and eventually they located a kernel rootkit that had been installed and was actually injecting iframes containing malware links and whatever else into the outgoing TCP streams. It was pretty scary. All these things in the information security community tend to get called the "advanced persistent threat", and by "advanced" I think it just means "successful". And for those of us using Debian and Ubuntu, the acronym APT is very confusing. Anyway, these are the things that are going on in real-world attacks on actual systems, and that's what I mean by post-intrusion: that first attack just gets you in, and from there you really have to go through a lot of steps to expand the reach of your attack. So it's important to defend against that. And that gets me to layered security. Any well-designed system is going to have a lot of layers of security. This is basically doing more than one thing to protect the entirety of what you've got set up, because there is no perfect security. Ideally, you prepare for a breach at every single layer that you've designed, and think about how you'd contain a bug at any one of those layers. You can go about this pretty systematically. The reason perfect security doesn't exist is that there are bugs in everything. People think, okay, I have this file that only my user can read, which means no other user around can get at it. And while that is the design, and that's a good first step, what if there's a kernel vulnerability? A kernel vulnerability can completely bypass permission checking.
So you end up in a situation where, if we can just reduce the scope of what people have access to and what interfaces they have available, we're in a better position to defend against things. Another problem that exists in some upstream software development is this reasoning that, oh, well, our code doesn't need to be defensive because there are no bugs here, so why should we be defensive about it? That doesn't leave any room for mistakes. Everything has bugs, so why not position the code so that if it encounters something unexpected, it deals with it gracefully as opposed to being exploitable — even if that condition, by your estimation, isn't possible? But that's more about development than operations. So one of the first areas to look at is privilege separation. I've broken this down into a couple of layers here. I'm going to talk about dealing with authentication tokens, whatever that means; then discretionary access control, or DAC, which is the standard UNIX permission model most people are familiar with — the idea being that those access controls are ultimately up to the user. Mandatory access control is a stronger version of that: it tends to be dictated by the system administrator, and it can further confine existing DAC permissions. And then, finally, a little piece at the end: I'll talk about multi-factor authentication, in the sense that it's actually pretty easy and quite powerful. So, moving on to authentication hygiene. I don't want to confine people's thinking to just SSH keys here — this applies to anything you use to prove to a system that you are who you say you are. It's pretty broadly applicable, in my opinion. As a quick aside, I want to encourage people to stop using password-only authentication.
In fact, I have been trying not to say the word "password", because that implies just one tiny little bit of text, when you really want to say "passphrase". But anyway: keep your tokens — your keys, your SSH keys, all of this — encrypted. And tie them to a specific device, and don't put them on devices with remote access, because if you lose control of that system, you've lost control of all the authentication tokens it contains, and with them access to however many more machines. So confine your SSH keys, for example, to just your laptop, which doesn't have SSH listening, or your phone, which in theory doesn't have too much listening — though it's kind of scary anyway. This also gets you better logging and finer control over the replication of those tokens, because you know where someone is coming from. So, for some examples: on a local device — your laptop, your desktop, whatever it is you've got — you'll find your keys. Here, those are the two halves of the SSH private/public key pair; that's all fine. And if you didn't put a passphrase on your key, go add one now with that command — you can actually set that up after the fact. Then, if someone steals your key, they would need to also have stolen the passphrase to unlock it. Theoretically, it is arguable that that is a form of two-factor authentication; I'm not going to claim that, though. On a remote system, in the lower section, you can see there are no keys. All we have is the authorized_keys list: who's actually able to get into this machine, and where do they come from? I like retaining the SSH key comments, because they tell you who generated the key and on what machine. It's a little bit easier to figure out what's going on, and you're in a better position to revoke access as you need to. On top of this — and this is a little bit SSH-specific — when confronted with host-key confirmations, actually check them. This is sort of a dead horse: check your SSL certificates, check your SSH host keys, whatever.
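A sketch of retrofitting a passphrase onto an existing key, as described above. The paths, key comment, and passphrase here are stand-ins (interactively you'd just run `ssh-keygen -p -f ~/.ssh/id_ed25519` and let it prompt):

```shell
# Hypothetical setup: a throwaway key in a temp dir stands in for ~/.ssh/id_ed25519.
keydir=$(mktemp -d)

# A key generated with NO passphrase -- what you often find on old setups.
ssh-keygen -t ed25519 -N '' -C 'me@laptop' -f "$keydir/id_ed25519" -q

# Retrofit a passphrase onto the existing private key.
# -P is the old (empty) passphrase, -N is the new one; -p means "change passphrase".
ssh-keygen -p -P '' -N 'long random passphrase here' -f "$keydir/id_ed25519" -q

# Now the key only unlocks with the passphrase; a stolen key file alone is useless.
```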
But I find that a lot of the time, the reason this gets skipped over is just that people go, oh, well, I don't remember what the host key is anyway, and I don't really remember how to actually find it — because if you look on the system, you're kind of like, well, there's the file, and the man page for ssh-keygen doesn't have an obvious "list" or "show" option or anything like that. It's not obvious how to find that stuff, but you can dump the key fingerprint with that command line there: you basically point it at a specific file, which is the host key itself, and ask it to report it. And then, if you want, you can do the cool little ASCII-art thing, which is kind of fun. I understand the reasoning for doing this, but it's still just as unmemorable to me as the long string of hex — it's fun to look at, I guess. So anyway, yes: please check before you confirm. So, moving on to DAC. Again, this is standard UNIX permissions, and what I've got here is a design I like to use — obviously you'll want to adapt it to your environment — but the basic idea is this: you've got an initial account that is tied to an actual human person. If you're dealing with a lot of admins, or a lot of users, developers, whatever, this makes it a little bit easier to track who's got access to what, because you don't have to go, oh, well, who's part of that role, or who's part of this thing. If you're going to make accounts for people, actually make accounts for people, and manage membership in other accounts separately, which I'll get to in a second. And then you've got your web services, and the user under which those web services run is distinct from the other pieces. It doesn't have access to actual people's personal information. It doesn't have the ability to reconfigure itself or change its execution mode, because it's just supposed to do what it's been told and configured to do.
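The host-key fingerprint lookup mentioned a moment ago might look like this (using a freshly generated key here as a stand-in; on a real server you'd point at something like /etc/ssh/ssh_host_ed25519_key.pub):

```shell
keydir=$(mktemp -d)

# Stand-in for a real host key.
ssh-keygen -t ed25519 -N '' -f "$keydir/ssh_host_ed25519_key" -q

# -l prints the fingerprint you compare against SSH's confirmation prompt.
ssh-keygen -l -f "$keydir/ssh_host_ed25519_key.pub"

# -lv adds the ASCII-art "randomart" mentioned in the talk.
ssh-keygen -lv -f "$keydir/ssh_host_ed25519_key.pub"
```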
That runs in that small environment. The idea being that when an attacker breaches your service, they're in as that user, with a very confined set of permissions on that system, and they actually have to start jumping through hoops to elevate or change their privileges. Then there's another user for doing service maintenance — database updates, software upgrades, whatever. They have the ability to change the behavior of that web service, but they don't have the ability to completely alter the system the way root, the full-sysadmin superuser, does. And that gets you to the final one, which is for configuring device layouts and hard drives and whatever else. That's the very bottom of the stack, and you want to use that user as rarely as possible, because there will be mistakes made. So this is sort of the classic application of discretionary access controls in a service environment, and a lot of people set things up this way. Oh, and another thing about this: it's good for logging, because you can actually see transitions between privilege levels. I mentioned it earlier, but when dealing with DAC permissions, actually pay attention to them — look at things to make sure you're really keeping scripts and data separate. There have been a lot of fun bugs where people could upload avatar images or whatever, but they could also upload a PHP script, and because the avatar image area was marked as executable for the server, they could just run arbitrary code. So keep careful control of that separation. Those permission bits are there for a reason; be careful about keeping things separate, and think about why things are set up the way they are.
And for transitioning between privileges, as I hinted earlier, you can use sudo. I like using sudo for this because you can define specific roles and groups of people and what they're able to change to — or you can do it through the SSH keys I showed earlier. In this example, I've got the someservice group of people listed there, and then you define that they, from all hosts, can become the somemaint maintenance user and run any command, and that gets you those privileges. After discretionary access control, there's mandatory access control. Many people are familiar with SELinux. I prefer AppArmor, which I find easier to use, but there are actually several available — SMACK among them — and I have my personal biases, but all of these basically do a similar thing: they provide very explicit confinement over what a service is able to get at, beyond the discretionary access controls. SELinux has a tradition of being a little bit more difficult to deal with, so I'm using a little demonstration here of AppArmor as mandatory access control to confine a specific website that I host. The first URL there is the upstream documentation, a full walkthrough of how to confine a web service using AppArmor, starting from no confinement and working your way through each service. There are a lot of notes in the Ubuntu default profile, too — the second line is the default profile in Ubuntu for the Apache prefork daemon, and it defines a series of profiles and "hats", as they're called, that get used when you're switching between virtual hosts. It also has pretty extensive documentation about the steps you can take. As an example, I've got this site here, and it includes the things that are common to all Apache vhosts, plus the base abstraction, which covers what a process needs just to operate at all — the shared libraries and things like that.
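The sudoers rule described above might be sketched like this (the group and user names are hypothetical; edit with visudo, for example into a file under /etc/sudoers.d/):

```text
# Members of the someservice group, from any host, may become the
# somemaint maintenance user and run any command:
%someservice ALL = (somemaint) ALL
```

With that in place, an admin runs `sudo -u somemaint <command>` to do maintenance work, and the transition shows up in the logs.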
And then this site happens to run PHP, so I've included the PHP abstraction, which covers all of the PHP-specific things — where it stores session cookies, local caches, and things like that — as well as the specific paths that are allowed for this service. So when an attacker breaks into this virtual host, they're going to be running in this tiny, tiny box where they can run basically PHP, but it can only look at the files that it was already allowed to look at by this profile. It keeps things very tightly confined, even beyond whatever might be available to a web server running as its own user. And of course, all of this assumes they don't have a kernel exploit, which would bypass this layer entirely. And that gets me to multi-factor authentication. Some people say "two-factor", but you can layer these, so really you can have as many factors as you like. One downside to recommending sudo for privilege separation is that you basically have one passphrase for multiple accounts: if I'm going to sudo to a number of things, it queries me for my own password. So if my password is compromised, suddenly all of the accounts associated with my login's sudo access are compromised. Again, you should use SSH keys anyway, but still. So I like adding another layer here that is very difficult to sniff or recover, because you actually need some physical object that I have to carry. There's a bunch to choose from; people have seen them all over the place. There are the standard HID RFID cards. The RSA token is a little number that cycles. The YubiKey is similar to the RSA token, but it sends a one-time password. Google Authenticator is another one-time-password system that runs as an application on your phone. And then a company called Duo Security has a duo_unix thing. They're free for personal use. I packaged it, so I really like it, and it does good things.
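The kind of per-site AppArmor "hat" described above might be sketched like this (the site name and paths are hypothetical; see the upstream mod_apparmor walkthrough for the real procedure). It pulls in the shared abstractions and then lists only the paths this one site needs:

```text
^example.com {
  #include <abstractions/apache2-common>
  #include <abstractions/base>
  #include <abstractions/php5>

  /srv/www/example.com/**         r,
  /srv/www/example.com/uploads/** rw,
}
```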
But any of these work, and it's actually pretty simple to put them in place. I think the YubiKey is probably another good one, but I like duo_unix because it's entirely phone-based: it's either calling you or SMSing you. I don't need any special application, I don't need anything installed — Duo just knows what my phone number is. Here's an example of installing duo_unix. It is really straightforward. It requires an account on their site, like I said, and you just tell it what your phone number is and it SMSes you things to confirm that. Then you configure the two keys it gives you so that your server can identify itself to the Duo servers. And then you turn it on — and this is a nice piece of Debian and Ubuntu, pam-auth-update. You don't have to go around editing the PAM configuration files; you don't have to figure any of that out. The maintainers have already worked out where each PAM module should fit into the global scheme of your PAM configuration. So you run pam-auth-update, you say, I want to turn on duo_unix — okay, and you're done. Very, very straightforward. No crazy editing, nothing like that. And then here's an example: I clear my sudo cache and re-sudo; it asks me for my password, and then it kicks into querying Duo, and it says, well, do you want me to call you, or do you want to type in the next one-time password that we SMSed you, or do you need a new batch of them? It's really pleasant that way. What's fun about this is that with SSH, if you're using SSH keys, the SSH daemon isn't using PAM to authenticate you, because it's using its own database — its authorized_keys. So it does not actually run through the PAM auth stack; it'll use the PAM session stack once it's validated who you are.
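The "two keys" configuration step described above ends up in a small config file; a sketch, with placeholder values (the real integration key, secret key, and API hostname come from your Duo account):

```text
; /etc/duo/pam_duo.conf
[duo]
ikey = <integration key from Duo>
skey = <secret key from Duo>
host = <api hostname from Duo>
```

After that, on Debian and Ubuntu, running `pam-auth-update` as root and ticking the Duo entry wires the module into the PAM stack without any hand-editing.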
So what's interesting about this setup is that if you install a second factor like this, but you're using key-based SSH, your SSH key gets you into the system, but if you try to actually change your privileges with sudo, you're going to hit a two-factor query. And I kind of like that, because you end up with two different two-factor authentication schemes in place. If you're still using SSH with passphrases, you have to flip the challenge-response config, and then you'll actually get the full prompt — you can still do it on connection. There are a bunch of different configurations you can look through if you really want to; there are a lot of details about how to tweak it. So, on to kernel tunables. This is more low-hanging fruit. This is a small selection of options that I get the impression some people don't entirely understand, or options that are relatively new to the kernel. Some of these are already configured in your distro, but many aren't, for reasons I'll cover. So do look through these. As an aside, while looking for a good picture for this slide, I googled things like "too many knobs" and "many buttons", and I found this image and I just could not pass it up. I mean, look at the hard hats. You know they're serious. It's great. And as a completely random bit of trivia, this room does not exist anymore. It turns out this is actually part of the Trojan nuclear plant here in Oregon, on the Columbia River. It was turned on in 1976, shut down in 1992, and went through a controlled demolition in 2006, which was really fun to watch. But anyway, random trivia about the guys in hard hats. Okay, back to security. So, if you're not familiar with Linux kernel tunables, or sysctls: they all live in /proc/sys. I tend to like using the command-line tool sysctl to work with them, but they're just files — you can read and write them.
The documentation is a little scattered, but mostly it's in the kernel source itself, in the Documentation/sysctl directory. It's good once you find it. Some of it is sort of in the man pages, but the main place is Documentation/sysctl. It's too bad I can't point you to a better place to read about these, and I feel it's important to mention, because there are something like 1,300 of them. Most of them no one ever wants to look at, and only a small subset are particularly interesting for security, but there really are a lot of them. So here's an example of looking at a specific sysctl. This is the randomize_va_space sysctl, and it controls whether user-space processes get a randomized memory layout as a security defense. If your system doesn't say 2 here, you need to fix something and/or yell at someone. That should really be 2 by now. Next is the IPv4 TCP syncookies setting. If you're not familiar with this piece: when the kernel starts accepting an incoming TCP connection, normally it stores the details about that connection in memory and waits for the other side to finish the TCP handshake before continuing. When a lot of connections are left pending — this is the SYN flood attack, a denial of service — the kernel has the option of switching to encoding all of this information into its response during the TCP handshake, so it doesn't need to take up any memory at all. What this means is that it overloads a bunch of the TCP header options, so things like window scaling don't work anymore. But frankly, in the face of a denial of service, I'd rather still have a connection at all than have perfect bandwidth negotiation and scaling. An important thing to note is that if you turn this on, you don't suddenly lose all those options.
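Checking the two settings just discussed can be sketched like this. Reading is unprivileged; changing them needs root (`sysctl -w`, or a file in /etc/sysctl.d/):

```shell
# Full user-space address-space randomization: want this to read 2.
cat /proc/sys/kernel/randomize_va_space

# SYN-cookie fallback armed for SYN floods: want this to read 1.
cat /proc/sys/net/ipv4/tcp_syncookies
```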
The kernel will only actually enable it once it sees it's under memory pressure and has lots of outstanding connections. So it's a dynamic system: once it's under attack, it will engage this to try to protect itself. Fundamentally, there's no reason for this to be off — you basically only benefit from it. Some distros turn this on, but it's worth checking, because I think historically there's been some misunderstanding about what happens with it and what the effects of enabling it are. It's a nice front-line defense. The next one is about a debugging facility in the kernel. This sysctl controls ptrace, which, if you're not familiar with it, is used to examine and control other processes on the system — it's what's used on the back end by tools like strace or GDB. However, it's really quite dangerous. The thing that really drove this home for me is that there are processes running on your system that hold credentials in memory, in their own process, and except via ptrace there is no way to access them. You can't get at them through any other interface; they're in that process's memory, and when it shuts down, it just gets rid of them. An example of that is an SSH connection to another system. So imagine you're logged into a bunch of machines over SSH, and some attacker manages to gain control of your user ID on that system. Without elevating privilege in any way — just as you — they can walk through all of the SSH connections you have and actually establish an additional tunnel through each SSH connection to those remote machines, because of the credentials that SSH has sitting in memory. That's really, really scary, and we don't want sibling processes to be able to manipulate each other in that way. So Ubuntu has this feature — it's upstream now, and I wrote it — that basically disables that. And you can crank it up even higher and completely knock out ptrace on the system entirely, although that has really bizarre effects on some things.
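The ptrace restriction just described is a sketch away to check. The file only exists if the kernel was built with the Yama LSM mentioned below; the values run from 0 (classic ptrace) through 1 (only direct children may be traced) and 2 (admin-only) to 3 (no ptrace at all — the "bizarre effects" mode):

```shell
# Read the current ptrace scope, or note Yama's absence.
cat /proc/sys/kernel/yama/ptrace_scope 2>/dev/null || echo "Yama not available on this kernel"
```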
If you're going to go crazy with it, that really only works in very specialized environments, or embedded systems and things like that. I've been trying to convince other distros to turn this on, but if you build your own kernels, I recommend enabling Yama and turning this on, because it'll gain you a little bit of defense on one of the kernel interfaces — yet another layer. So, like the randomize_va_space setting I mentioned earlier: if this next value isn't set on your machine, you need to find someone to yell at. I'm going to demonstrate with a program with a very, very small change. People might be familiar with the classic null pointer dereference crash, as demonstrated here: you've got a pointer to a structure, it's set to NULL, you dereference it in the printf, and it crashes because there's nothing at NULL. But if people haven't taken a close look at this, it's not obvious that NULL is just a special value that's been overloaded. It's just address zero — and zero is still a valid memory address; it's just a convention that it isn't mapped, that it isn't available. So if we change this to add a request to the kernel to please map memory at address zero as a fixed location, this program completes with no problem: it dereferences the structure that's at zero, looks at the memory, the memory is there, and it finishes successfully. No problem at all. The real danger here is that the kernel itself runs in the same virtual memory space as user space. When a process makes a system call into the kernel — open, write, any of the other interfaces — the CPU switches into kernel mode but continues running in that memory layout. So if the kernel happens to perform a null dereference, it will start looking at the memory that user space has mapped, and this can lead to really ugly things and a quick privilege escalation. It's very ugly.
The point is that no one actually uses this range of memory — or they shouldn't be, except in some really special situations. So just disallow mapping of the first 64k of memory; not a lot of structures are going to be bigger than 64k. Really make sure this one is set, because it's quite a nasty way to jump from user space directly into kernel space with no special privileges if someone finds a bug. Luckily, this is the default upstream, and if it's not set, freak out and figure out why. I just really want to call attention to it because it's a bad one. This next one is somewhat recent: kptr_restrict. There are a lot of kernel debugging facilities in the proc file system, and in my opinion there are basically two types of people interested in them: kernel developers and attackers. And if you're a sysadmin debugging the kernel, you've just become a kernel developer, so that still counts. In the same way that kernel developers use this information to fix bugs and solve problems and whatever else, attackers use it to mount their exploits. If an attacker has to guess at a value instead of being able to trivially look it up, their exploit becomes a lot less reliable. As an example, if they don't know the value of one byte — although it's usually quite a bit more than that they need to know — they have a 1 in 256 chance of landing the attack. And if they decided to mount that attack on, let's say, a large cloud host with lots and lots of machines, and scattered the attack across thousands of machines, technically they're still going to land it a couple of times — but I'm pretty sure the sysadmins are going to notice 99% of their machines going down. The nice thing about the kernel is that if you mount an attack against it and screw it up, there's a really good chance you just took that machine out completely.
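The "first 64k" restriction described above is the vm.mmap_min_addr sysctl: the kernel refuses unprivileged mmap() requests below this address, so a kernel NULL dereference can't land in memory an attacker mapped at address zero. A quick check:

```shell
# Typically 65536 (64k); if this reads 0, freak out and figure out why.
cat /proc/sys/vm/mmap_min_addr
```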
So it takes a potentially exploitable vulnerability and turns it back into just a denial of service — and people are usually monitoring their systems well enough to notice a machine going down before the attacker gets it right. Again, this stuff is mostly for debugging. It shows the specific addresses of where things are located in the kernel, in a number of /proc files, and for most users that's just not useful. So turn on the restriction, and all those values read as zero for everyone who is unprivileged. Ubuntu at least ships this way. I wish more did, but some people are really resistant, because it turns out that the people involved in making Linux distributions tend to be kernel developers. So they look at this and go, oh my god, I can't lose this debugging information, I don't think our distro should do this because I use it — without really considering, oh, but what about the other millions of people using your distro who do not want this? Related to kptr_restrict is dmesg_restrict. It's very much like kptr_restrict, but for dmesg. No distro I know of actually ships with this restriction enabled, because so many debugging scripts have, for so long, called dmesg to get as much information as they can about a system to report a bug or whatever — it's part of nearly every bug-information collection script out there. However, limiting dmesg to root seems fine to me in daily operation. I've been running all of my machines with this set for a long time, and all it does is reduce the amount of information exposed to a potential attacker. It doesn't stop anything else from working, so why not make it harder for an attacker to deal with your system? You don't have to outrun the bear, you just have to outrun the person next to you — make the attacker move on. So turn on this restriction. It's another easy win. Now, some of the more recent ones.
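A sketch of checking the two restrictions just described. With kptr_restrict set, kernel pointers printed via the restricted format read back as zeros for unprivileged users; with dmesg_restrict set, plain users get "Operation not permitted" from dmesg:

```shell
cat /proc/sys/kernel/kptr_restrict
cat /proc/sys/kernel/dmesg_restrict

# One place to see the effect: kernel symbol addresses in /proc/kallsyms
# show as all zeros for unprivileged users when the restriction is on.
head -n 3 /proc/kallsyms
```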
Much more recently — since the 3.6 kernel, I think — there are protected symlinks. This addresses an entire classic class of privilege escalation vulnerability known as the /tmp symlink race with a predictable temp file name, which boils down to a time-of-check/time-of-use race: a program goes, is this file here? Oh, it's not? Well, then I'll open it. But in between those two steps, maybe it got created — and a bunch of other variations on that theme. So here is an example of an attacker setting up to do bad things by making a symlink at a predictable file name in /tmp and pointing it at something like /etc/cron.d/evil, hoping that they can get it populated with a script that will get run by root, and then they'll have privilege escalation. They've got the symlink there, and here's the script that checks for the file and echoes stuff into it. Then root, at some point, runs the dangerous script — or a buggy program, or a buggy daemon, or something — that works on this predictable temp file, and when the write happens, it follows the symlink and lands in a bad location. So the idea is that you've got root following a symlink created by someone else who was able to predict that behavior. And this is a giant class of problem — I've seen it forever. It's a very large class of attack, but the embarrassing news is that there has been a solution for this flaw for something like 16 years. It's been in really specialized distros like Openwall and grsecurity and things like that, but there was a lot of political pressure against using it, because it changed the semantics of the POSIX file system. I was crazy and decided to actually try to get this upstream into the kernel, so I put on lots of layers of asbestos and fought for it for a couple of years, and it has finally been merged.
The problem was that the corner case it changes was only there, really, for an attacker; nothing was actually depending on it in daily life. So if you look here, if you turn this on, what happens is that script runs and says, I'm not gonna touch the symlink. What it does is this: for symlinks in a world-writable sticky directory, which is basically /tmp and /var/tmp, the places where you end up with multiple users working in the same file system area, the symlink can't be followed unless the owner of the symlink matches the user who's trying to follow it. Really straightforward. And it solves the entire class of vulnerability. Though as an aside, I would note that if you had multiple controls in place, this would also get caught, because, you know, let's say your mandatory access control policy said, okay, /tmp, you can write to. Then the daemon starts up, starts working in /tmp, follows the symlink off to some other location, and the mandatory access control says, no, I didn't want you touching that. So there are multiple ways to solve these things, and obviously when you have many layers of security, sometimes you're solving the same problem more than once. Anyway, like I mentioned, it's a recent setting, since Linux 3.6. It's disabled by default upstream, unfortunately, because there was one piece of software written in 1992 that happened to use this behavior and broke, so it got turned off. But it's enabled in Ubuntu, and I think I've managed to convince the Fedora and Red Hat folks to do the same now. So in theory this should be set, but please check it, because it solves a whole bunch of problems in one shot. Related to the symlink protection is hardlink protection. This one still surprises me a little bit; hardlinking is a lot of fun.
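Before moving on to hardlinks: the follow rule just described can be sketched in a few lines. This is an illustration of the policy, not the actual kernel code; the kernel's version also allows following when the symlink's owner owns the directory itself:

```python
import stat

def symlink_follow_allowed(dir_mode: int, dir_uid: int,
                           link_uid: int, follower_uid: int) -> bool:
    """Sketch of the fs.protected_symlinks policy.

    In a world-writable sticky directory (like /tmp, mode 1777), a symlink
    may only be followed if the follower owns the symlink, or the symlink's
    owner also owns the directory. Everywhere else, symlinks work as usual.
    """
    sticky_world_writable = bool(dir_mode & stat.S_ISVTX) and \
                            bool(dir_mode & stat.S_IWOTH)
    if not sticky_world_writable:
        return True
    return follower_uid == link_uid or dir_uid == link_uid

tmp_mode = 0o1777  # /tmp: sticky, world-writable, owned by root (uid 0)

# root (uid 0) following a symlink planted by uid 1000 in /tmp: refused.
assert symlink_follow_allowed(tmp_mode, 0, 1000, 0) is False
# uid 1000 following its own symlink: fine.
assert symlink_follow_allowed(tmp_mode, 0, 1000, 1000) is True
# Outside sticky world-writable directories, nothing changes.
assert symlink_follow_allowed(0o755, 0, 1000, 0) is True
```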
In sort of a principle-of-least-surprise test, assuming we've got two directories on the same file system, because you can only hardlink within the same file system, what do people think happens when you try this? You hardlink to a file you have no access to whatsoever. The answer is: it goes ahead and makes it. And every time I see it, it surprises me. This means that any user can control the file system location of a file that they have no access to. In fact, the link even retains the permissions, the owner, everything about the file. So this allows me, as that user, anywhere I have write access, to create hardlinks to files that I have no business dealing with. And this leads to all kinds of problems. I can DoS the file system by filling up the inodes, just by creating hundreds of thousands of entries for this one file. Another good one is that you can pin a flawed setuid binary. Say that today the sudo binary has a bug in it, and your home directory is on the same file system as /usr, which is not a good idea anyway. You can create a hardlink to the setuid sudo, and it has all the same permissions: it's owned by root, it's still setuid. The system comes along and upgrades its software, and it believes it has removed and replaced the buggy sudo. But since you have a hardlink, you retain a full copy of the vulnerable binary, with its root ownership and setuid bit intact. This is a really, really bad place to be. And additionally, similar to the symlink attacks, if you can trick a vulnerable daemon or service into following a path you control that happens to be hardlinked to a file it has write access to, you can get that file modified. Anyway, the point is, this all gets really nasty. So if you turn on protected hardlinks, this isn't allowed: you can't hardlink to something you don't have read and write access to. And if you read the POSIX spec very carefully, this is perfectly valid.
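The pinning problem is easy to demonstrate without root by using throwaway files in place of the real /usr/bin/sudo; every path and file name below is invented for the sketch:

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
binary = os.path.join(d, "sudo")          # stand-in for the vulnerable binary
pinned = os.path.join(d, "pinned-sudo")   # the attacker's hardlink

# Install the "buggy" binary with the setuid bit, as on a real sudo.
with open(binary, "w") as f:
    f.write("#!/bin/sh\necho buggy version\n")
os.chmod(binary, 0o4755)

# The attacker pins the old binary with a hardlink.
os.link(binary, pinned)

# The "upgrade" removes the buggy copy...
os.remove(binary)

# ...but the pinned link still exists, with the setuid bit and the original
# contents intact: the vulnerable binary never actually went away.
st = os.stat(pinned)
assert st.st_mode & stat.S_ISUID
with open(pinned) as f:
    assert f.read().endswith("buggy version\n")
```

With fs.protected_hardlinks enabled, the os.link() call against a file you can't read and write would fail with EPERM, and the whole scenario never gets started.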
A POSIX implementer is allowed to choose how to deal with this situation: they can either allow the link or block it. And basically nothing depended on this unexpected feature of POSIX, similar to the symlink stuff. Strictly speaking, the at daemon did rely on this behavior, but that was kind of a mistake anyway, and it was a two-line change to fix it. This is enabled in Ubuntu as well, and it should be on Fedora, similar to the symlink protection, but again, it's pretty recent. Keep an eye out for it and check for it, because I think this is valuable, and it's unfortunately disabled by default upstream, so I'm trying to make sure people are aware of it. Another fun one is the modules_disabled sysctl. Kernel modules, as you may be aware, allow user space to extend the functionality of the kernel. Another way to look at this is that the root user can trivially run code in kernel space, since they can hand kernel modules to the kernel, and running arbitrary code in the kernel is sort of the definition of a vulnerability. Normally this is there to extend hardware support and do these other things on a machine. But while there are plenty of completely normal kernel modules, your Wi-Fi module, whatever USB thing you plugged in, kernel rootkits are also kernel modules that get loaded into the kernel using this same mechanism. And kernel rootkits can be pretty evil stuff; they're hard to detect. The iframe injector that I mentioned earlier was a kernel module that got loaded by malicious software and was messing with people pretty badly. So it makes sense to draw a pretty hard line between the root user and the kernel execution environment, ring zero. These should be considered separate things. And this tends to be especially true for hosting services, usually VPSes, that have an off-disk kernel, where you're booting your image from a kernel that is defined somewhere else and isn't necessarily on the disk.
Many of them tend to turn off modules anyway, but just in case, this is another good idea. Now, for laptops and desktops, since you're supporting various hardware and you want to be able to plug things in and whatnot, disabling modules tends to get in your way more than it is a benefit. But on a server there's significantly less call for loading unexpected, arbitrary modules. So once you know what you need, you can turn off module loading. And suddenly, if someone has managed to get through your service, get through your DAC, your MAC, whatever you've got set up, escalate all the way through, and then tries to attack your kernel execution environment, they're stumped. They have to find an actual kernel exploit, as opposed to just handing modules to the kernel and saying, hey, please be evil. Now, it's a little confusing to me that module loading is done by the modprobe program but controlled by a sysctl value, so what I like to do is define an alias for modprobe called "disable". When modprobe encounters this "disable" module, it just runs the sysctl to turn module loading off. And this lets you use it in the normal places: at the end of rc.local, you can say "modprobe disable", and suddenly module loading is off. You can even list it in your /etc/modules file, where you're listing all the modules you definitely want loaded: at the end, you say "disable". It loads all the modules, then disables loading, and you're done. You don't want to do it too early, because you might be racing other things: the thing that's loading modules might be running right alongside your firewall bring-up script, and if halfway through the firewall script module loading gets disabled, suddenly the firewall can't load all the modules it needs to do its thing. So putting it at the end of rc.local, or anywhere else late in boot, seems like a good place. And that's the end of that.
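A sketch of that modprobe trick, using the standard `install` directive of modprobe.d, which runs a command in place of loading a module; the file name is hypothetical:

```shell
# Hypothetical /etc/modprobe.d/disable.conf: "disable" is a fake module
# whose install command flips the sysctl that blocks all further module
# loading (irreversible until reboot).
install disable /sbin/sysctl -w kernel.modules_disabled=1
```

Then a single `modprobe disable` at the end of /etc/rc.local, or as the last line of /etc/modules, shuts off module loading once everything you actually need is in.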
And now I beg you: start today. Make a plan for any of it. Whatever in here looked good, or other stuff that you thought of or have been meaning to do, just make that plan. If nothing more, it's a plan you can deviate from, and then you've still got a plan, but you know how, why, and where you deviated from it, which is really good for the next plan that you make. And obviously prioritize the changes. If you're logging in as root over telnet, it really doesn't matter whether you configure your mandatory access control; it just doesn't make sense. So get things in the right order and pick the low-hanging fruit. And then, ultimately, verify these changes. These are system settings like anything else. If you've got some automated checking service looking for, say, port 80 listening for your service, check these settings too. Make sure something didn't accidentally turn them off, or that a new image install didn't lose that customization or something like that. Make sure you can't load some arbitrary kernel module. Make sure that /proc/kallsyms is filled only with zeros. Actually read those settings; it's really the only way to be sure. So that's the end. I think I have a little bit of time for questions. If anyone has anything, there's a microphone there, because I don't have a handheld microphone and they're recording this. I can be reached at some combination of those addresses, and that's the link to these slides again, if you want to take a look at them more closely. Well, I hate to do this to you, but I actually need to make an announcement, if you don't mind. My name's Matt Harmon. I'm the web manager for FEMA, the Federal Emergency Management Agency. As you know, we had a little bit of a situation in Oklahoma yesterday. And what we are looking to do is pull together an initiative, Drupal for Oklahoma, actually.
And what we'd like to do is meet at 7:30 tonight in the Coater's Lounge at the DoubleTree Hotel. If you'd like to help out, we are looking to stand up a site that will help get the victims and emergency responders off the ground. We want to develop a website that will coordinate transportation and help deal with housing issues; those are two immediate needs. Hopefully we can get a site set up by tomorrow morning. It's one of those things where my team is already in town for DrupalCon, and we figured we've got this resource here, all these wonderfully intelligent folks, so hopefully we can do a little bit of civic good and get it done. So if you'd like to help: 7:30 tonight, the Coater's Lounge in the DoubleTree Hotel, come out and help us. My name's Mike. I'm a systems administrator with Focke Concept Consulting in Ottawa. I don't want to give away too many details, I guess, because, as you mentioned DEF CON at the beginning of your presentation, I don't know how many black hats are in the room. But we're using sudo on our servers. Our developers log in using SSH keys, we use sudo, and we have sort of a common account for managing our Drupal sites. And a couple of my developers are mildly frustrated, because they also have root access, since they're capable people and they need to do, you know, Apache changes and sysadmin-type stuff in addition to just managing the Drupal sites. So a couple of them are saying, it's really annoying to become this Drupal management user, but then I also have to become root because I have to change this other thing, and I'm always switching back and forth. I don't know if you have any suggestions for that. Well, I don't know. It depends on whether it's a social need or a technical need. If it's really just frustrating to do, then maybe some tooling needs to change or something like that.
But ultimately, there's a reason for separations, and there should be a documented process, so that if anyone has questions about it, you can go, hey, look, this is why we do it, this is why it's important to us to have this separation. If it can't be supported by any good design reasons, then it makes sense to change it. So, I don't know, in those situations I tend to have two terminals running. Yeah, well, me too. I've gone through the process of sort of scripting up some stuff, which is very common as well. And then sudoing a lot, like allowing that through sudo. Exactly, and that works pretty well. And when I get really paranoid, I try to actually put access control around that script, just in case it does crazy stuff. Yeah, that's a good idea. I just like confining processes. Okay, thank you. If anyone does come up with any other questions, feel free to email me. Thank you very much.