All right. How are you guys doing today? Are you guys paying attention to the American election at all? So we're going to make SELinux enforcing again, instead of America great. It's already great enough. The Canadians sent us a video and told us we're great, so it's fine. So in this talk I'm going to go deep into some technical stuff, and I want to show you that you can architect things in a new way with containers that could possibly be net security positives in some places and negatives in others, and help you understand the risk behind the technical decisions, what you get for free and what you don't. So I joke, you know, just because you're paranoid doesn't mean that they're not after you. That's from Catch-22, by Joseph Heller, written back in the United States after World War II. And the idea here is that you can be as paranoid or crazy as you want. Security, at the end of the day, is a risk assessment, and you have to decide how much is enough.

So I'll give you an example. I had my house robbed a couple of years ago, and, you know, I had some watches stolen and some other things, and I realized that I wanted to buy a safe. So I looked online. I had had a little fire safe, but it was locked, and so the robbers just stole the safe. They carried the whole safe out, and I was like, wow, that's not good. I should have just left it unlocked. So then I decided I was going to buy a giant safe. I bought this huge safe that comes up to my arm, a giant metal safe. And the first thing I was going to do is put it in the basement. And my friend goes, no, you can't put it in the basement. If your house ever catches on fire, the firemen will put water on it and it will ruin everything in the safe. And I'm like, fine, so I'll put it upstairs. And then I started reading online and people are like, oh, if there's a fire, the safe will fall through the floor back down into the basement. And I was like, well, I've got the solution to that. I'll buy another little fire safe, put it in the big metal safe, so if there's a fire, at least one of them holds. And they're like, yeah, but the water will still get through it. So I got plastic bags and I put my hard drives and my passport and other things in plastic bags, in a fire safe, in a giant safe, on the second floor of my house, with an alarm system on the house. And I'm like, I think we're good. But I mean, it still bothers me, because I know there are ways around it. They could jam the alarm system. They could do this or that. And next thing you know, you're driving yourself crazy. So you can become paranoid very easily. But that doesn't mean it's not really a risk. I mean, there are risks to what I engineered for a solution at my house, but I'm accepting that it's enough security.

In the container world, though, there are two major problems. Dan Walsh, who I dedicate this shirt to, is famous for saying containers don't contain, right? For a long time we've done process isolation, and if you think about containers, they're really just process isolation with extra controls in place. The magic is that we ship a user space around, mount that user space, and then run the binaries inside of it with extra technical controls, things like user namespaces, SELinux, cgroups, and so on, to limit what those processes can do.
But at the end of the day, they still all share the same kernel. So there are certain things that happen in this scenario that people don't think about. There are root-only exploits. So if you do things like user namespaces and you delegate root inside of a container, and there happens to be a privilege escalation that comes out that only root can execute, which does exist, and that blew my mind when I first thought about it. Nobody cared before, because you always had root, so who cares if you escalated from root to root, it didn't matter. But if you do it in a container, you now have root over that kernel, and you can shut off SELinux, you can shut off seccomp, you can do other nasty things, depending on how well you can escalate.

The other major place I think people are concerned is the images themselves. At the end of the day, if you think about how a server works today, it's a collaboration of generally three different teams that modify the user space on a server. There's typically the operations team or architecture team that will lay down a core build. Then, in a traditional enterprise, you'll have middleware people come along and install a database, or a Java app server, or something else, maybe Ruby, maybe a web server, whatever. Then you'll have application developers that want to add stuff on top of that to make it actually into an application. So who controls what in a container world? Now we have this user space that we can ship around. We can move it to the desktop, we can move it to a dev server, we can move it to prod, but at the end of the day, now we're fighting over who controls what's in that user space. Before, the user space was stuck on a server and you couldn't move it around, but now that we can, it's obvious that we're competing for who controls what's in it. So like I said, things like bad content: who's responsible for what?

Another challenge with containers is that it takes a lot of hard work. People think you can just run anything in a container, and technically there's a lot of gray area, right? It's very easy to run things, and I came up with this after putting many, many things in containers: you need three basic principles. If it's easy to separate out the code, the configuration, and the data, it's fairly easy to run in a container. If you look at the bigger world, take MySQL: there's a binary, mysqld, that's pretty easy, that's the code. There's one configuration file, /etc/my.cnf in a Red Hat distribution, pretty easy. /var/lib/mysql is one single directory where all the data is. That's pretty easy. So now if I want to run a Docker command, I can do a docker run with -v bind mounts for /var/lib/mysql and /etc/my.cnf. That's pretty easy, and I've separated things out. And if I want to do it in Kubernetes, I create a PV and mount a persistent volume in with the data, and so on. It's the exact same kind of process. But as long as I've separated out that code, configuration, and data, it's easy. If I have an application that can't do those things, that's very hard.
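To make the easy case concrete, here is a rough sketch of what that code, configuration, and data separation looks like with Docker. The image name and host paths are hypothetical; in Kubernetes the same split becomes a persistent volume for the data directory and a mounted config file.

    # code: the mysqld binary lives in the container image
    # config: bind-mount the one configuration file
    # data: bind-mount the one data directory
    docker run -d --name mysql \
      -v /etc/my.cnf:/etc/my.cnf \
      -v /var/lib/mysql:/var/lib/mysql \
      registry.example.com/rhel7/mysql   # hypothetical image name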
So say I have something very big, like an application that's made up of a bunch of little sub-projects and it puts data all over the system. Or imagine I end up with a Turing-complete problem where it's a web application, but the web application allows people to create directories on the file system and put data wherever they want. That's a bad security practice, but there are applications that do that kind of stuff, and you don't know where all the data is, so it's really hard to nail down. So that becomes a challenge to actually doing containers effectively and securely. But if you can do this, I'm going to tell you how you get a bunch of stuff for free. In another talk, on migrating, I go deeper into all kinds of other problems, like licensing and networking, and there's a whole bunch of other things. But if you can't even do the basics of code, configuration, and data, it's pretty hard to put something in a container. If you can, you end up with a lot of benefit from a security perspective that you kind of get for free.

So we always talk about container security without a lot of depth. We just say containers are less secure. So I went back and revisited my old Department of Defense training from when I was younger. I worked for the government for seven years. And I thought about it from a confidentiality, integrity, and availability perspective. Confidentiality is the idea that I won't leak data to people that shouldn't have that data. Integrity is the idea that if I load the file off the file system or I read the website, it really is what I think it is, so that I'm establishing that person A is actually communicating to me the data that I should be getting. And availability is the idea that I can't be denied access to that data, right?

So from a container perspective, I would argue confidentiality is pretty much identical. It's a wash between containers and regular VMs or physical servers. At the end of the day, if you share an NFS volume or a Fibre Channel volume and you don't encrypt it, it could be mounted by another VM or another malicious user who just reads the data off. You're going to distribute keys to the container or to the application. If you kind of just leave those keys laying around on a VM, it's the same thing as if you bind-mount them into a container when it starts, and then that container will have the keys to decrypt the data. So it's honestly pretty much a wash.

Availability, I'll jump over to the other side, I would argue is actually better in a containerized world than a non-containerized world. Take a web server: as long as the physical or virtual machine doesn't fail underneath it, you could probably get to five nines pretty easily, you know, six or eight minutes of downtime a year, that kind of thing. That's pretty easy to get to. But in a Kubernetes environment, it's pretty easy to get to almost continuous availability. As long as your routers are highly available and you have multiple masters, it's pretty hard to take out a Kubernetes cluster. So if that thing dies, it's probably going to be restarted within seconds somewhere else, on a different node. In Kubernetes we call them cockroaches, they're hard to kill, right? You smash one, it'll come up somewhere else, smash it, it'll come up somewhere else. It's really hard to kill a process in Kubernetes.
It does a very good job of keeping it up. So I would argue availability is a little bit better than just running regular processes or VMs. But from an integrity perspective, this is where I think we really don't get a warm and fuzzy feeling, because we're sharing a kernel. The idea that somebody might have broken out of one container and messed with the kernel means the trust I have in my running application, versus the image that I deployed into the registry server, is probably lower.

So let's look at it from an integrity perspective. We always think of the two in the center here: running a container or running a virtual machine. But in reality, we can run containers on virtual machines, and then we get the isolation of the VM added to the container. Honestly, that's probably the best of all worlds. We also have the opposite, where we have a container and then virtual machines running in those containers. Red Hat Virtualization and OpenStack do that by default, right? If you use a Red Hat hypervisor, it uses sVirt, and it puts all of the different VMs in different contexts. It auto-generates MLS labels for SELinux, so all of those VMs run in different containers, essentially. So with those two, people go, all right, that's what we'd talk about if I were at an OpenStack conference, right? We'd be talking about running OpenStack on containers, or running containers on OpenStack. But where it gets hairier is when we start talking about, well, can I get rid of OpenStack if I have containers? Or why do I need containers if I have OpenStack? I've seen that argument a lot, and I think that's where we get mad about, well, containers aren't as secure, the integrity's not as good, blah, blah, blah.

So I'm going to argue that there's more to it than this: there are three layers you've really got to think about with containers, not just the host. The host is kind of what we're arguing about, like, is the host secure enough, right? Do we need to run a VM or a container? But I'd argue it's layers: you've got the container image in the middle, you've got the container host, you've got the container platform. And in a virtualized world, we didn't typically give people access to the platform. We would typically file a ticket, have a VM created, blah, blah, blah. We never got to the full self-service world that we wanted to, but with OpenStack, we do at least get to that level, and we delegate at that level. In an OpenStack environment, we don't let people SSH into a Nova node and fire up a VM, right? We don't do that. We give them access to Horizon or to command-line access, and there are role-based access controls, blah, blah, blah. The exact same thing with containers: we want to delegate to Kubernetes and let the Kubernetes master, the OpenShift master, handle that. So I'm going to dig through each of these a little bit deeper. But before I do, I want to point out, again, that we always like to compare a VM and a container, which is a false analogy in a lot of ways. Process-only isolation is still very common in, like, high-performance computing workloads.
Like, if you go out to most HPC clusters, they're not running containers, they're not running VMs. They have a bunch of researchers logging into a scheduler and scheduling their jobs. And there's a lot of potential for hacking each other there, right? But if you're at a university and you have geology researchers and biology researchers, you're not really that worried about it a lot of the time. You're like, oh, well, the geology guys aren't really trying to hack the biology guys, and you go: process-only isolation, good enough. Nobody would ever get fired for doing process-only isolation in an HPC environment unless it's some kind of high-security government thing. And then, typically, what happens is they go, well, we don't want to share the cluster with these guys. We want our own cluster. For some reason, we trust our own people but not those people. And so what we do is we kind of shard to get enough tenancy. So I argue it's really a tenancy problem when we're deciding where we want to put something.

In corporate IT, if you think about SAP and Oracle and web servers and DNS, I typically wouldn't even run DNS in the same cluster that I would run an Oracle database in, right? In a VMware environment, who would do that? Typically you would have it in a separate cluster. There'd be kind of an infrastructure cluster that would have enough isolation and tenancy to run external services; you might run DNS and web servers side by side there, and the internal data center stuff I would run separately. So I'd argue: think about the tenancy.

And this is a funny question. Again, getting back to paranoia and going ad absurdum, going as crazy as you want. The first thing that would happen when I was at a data center five and a half years ago, before I came to Red Hat, was a customer would say, well, we want a VM. And I would say, okay, but what if the VM dies? They're like, well, can you restart it? I'm like, yeah, but what if the node dies? We want two physical systems. We want the VM to be able to go between two physical systems. I'm like, all right, well, what if the rack dies? They're like, well, we've got to make sure they're in different racks. Yeah, exactly. All right, well, what if the data center goes down? Or what if one of the network links goes down, or gets slow and partially fails, blah, blah? They're like, well, okay, we need the application running in two different data centers with two different weather patterns. We don't want earthquakes affecting one another. And then you could go crazy and be like, you know what, let's just put one on the moon. There'll be enough isolation. But that's absurd, right? Because if the Earth blows up, it's not really going to matter that the data center's running on the moon. So you can make yourself crazy with enough isolation and tenancy. That's why I tried to come up with this tenancy scale: how much isolation is enough? And I'd argue that it's pretty similar to what we're doing today. I wouldn't run DNS and SAP servers in the same Kubernetes cluster. I would never do that. It just doesn't make sense. I would probably have an infrastructure cluster that ran DNS and some of the external services that we run. Then I'd probably have a web cluster that runs the web applications. And I'd probably have an internal IT one that runs the internal IT things like SAP and Oracle, things like that.
So I'd say you use the same kind of logic that you use now when determining whether you would move between clusters. It's not really about VMs or containers; it's about thinking through the tenancy requirements of the application. So that was the conceptual stuff I think you need to know, but now I'm going to start digging into the technical things.

Red Hat is a big supporter of SELinux and seccomp. So by default, all of our containers run what we call sVirt, and it's the same technology we used for VMs. We generate an MLS label, and I jokingly say, if you have people revolting against you, if you're Russia and you want to stop people from revolting, or whatever, and I shouldn't pick on Russia, if you're in the United States, we do it too, you try to divide up the people that are revolting against you, right? You want to filter what they can say to each other. You want to filter who they can talk to. SELinux is kind of like who you can talk to, and seccomp is what you can say. So you can imagine, if I can limit who you can talk to and then what you can say, it makes it a lot harder for bad things to happen. SELinux is about isolation between data structures within the kernel, so ports and file systems and files and processes, preventing them from talking to each other in certain ways. And seccomp is about the syscalls that enable that to happen, and limiting which ones, so, what they can do to each other. And I'm going to demo some of this.

So let's think through the container images first, and then I want to demo something about container images. I try to think through it from what we have today and what we get if we adopt containers: a standard application versus a containerized application. From a container image perspective, if you think about it, it's a user space. It's just like a server that's out there now, except I've packaged it up into a container image, put it on a registry, and now I can ship it around and move it. But what do I get nowadays? I get trusted content. The same value proposition Red Hat has of offering a Linux distribution with a proven user space that's tested, with a security response team, et cetera, et cetera. You have all those things, right? So those are the basic things. And we do this now. We do scanning. We do patching. We have a CVE database. We have a bill of materials; if you look in an anaconda-ks.cfg you can see what we've installed at core. And if you do an rpm -qa, I don't know how many of you do that, you can list everything installed, and with rpm -V you can verify what's changed. Like if you see that /bin/bash has had its permissions changed, somebody's been mucking with something that I installed through an RPM, and that's kind of scary. So we have that. We have security response teams. We limit root today. We limit users, right? I don't just give everybody root on my system. I had a guy argue with me about this, by the way, just a couple weeks ago when I gave this talk. He's like, well, we do. I'm like, well, that's awesome for you. But I wouldn't give root to anybody if I could help it, right? I wouldn't even give it to sysadmins. I would prefer to have audited access to root if I can, so that I know even if a sysadmin does something malicious. So long story short, those are the things we do today.
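As a quick sketch of that rpm-based bill of materials and verification, the commands below are standard RPM usage; the output line is illustrative, not from a real system.

    rpm -qa            # list every package in the user space: a rough bill of materials
    rpm -V bash        # verify the bash package against the RPM database
    # example output (illustrative):
    #   S.5....T.  /usr/bin/bash
    # S = size changed, 5 = checksum changed, T = mtime changed:
    # somebody has been mucking with a file that an RPM installed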
In the future, though, we add some other things, right? The bill of materials actually gets easier. And the signing. I jokingly say, back in the day, back in the late 90s, I ran web servers for NASA Glenn Research Center, and we had really investigated and played with trying to run read-only web servers. Most of you look old enough to remember this, but there was this huge push to try to run read-only web servers. It was all the rage, like '98, '99, maybe early 2000. It was like, well, people are hacking into web servers, so we need to make them read-only. So what we would do is burn the web server and all the content onto a CD, boot up the CD, and then every time we wanted to update the content, I would literally go down to the basement, we had a server room. I would burn it on my desktop, run Tripwire on it, make sure that I audited all the stuff, then we would take it down to the basement, reboot the server, take an outage, all this blah, blah, blah. It was a pain in the butt. So that died off and everybody stopped doing it because it was too hard. But we get that back for free in a containerized world with read-only containers. If we sign the images and then we run read-only containers, we're in a pretty good space. We're actually back to read-only servers, but it's software-defined and we don't have to fart around with re-burning CDs.

So I'm going to demo this. It's something that I think a lot of people don't realize. If you don't run... So here... Actually, can you guys see this? Okay, good, it's high enough. Let me make it a little bit larger. So this Docker command... Oh, and this is actually... Here, let me take out the privileged flag because we don't want to do this, I was demoing something else. But unless you do this option, and a friend of mine, Gareth Rushgrove, gave a talk a couple weeks ago, maybe a month ago, in Amsterdam, and he said there's the threat model people have in their mind, and then there's reality. Most people think containers are read-only, but they're not by default. You actually have to run this. You have to do a --read-only. So when I do... In fact, here, let me do a more advanced version of this. I have it in my history here. Let's just run it from this. So, vim demo... Sorry, I want to show you guys a slightly more advanced version of it. So let's look at this one, demo01, read-only. In this, I'm going to show you how easy it is, again, once I've separated out the code, the configuration, and the data, to log access to the configuration and the data, make the code read-only, and now I actually have a net positive, probably, from an integrity perspective. So what I do here: the first part is really just about adding an auditd rule. So I'm adding an auditd rule at the top. Then I'm showing you here, you're running it with a --read-only, but I'm mounting the data from /mnt/container01, and then I'm mounting it into the container, and then I'm running some stuff, and then I'm going to run some tests, and I'll walk through what they are. So, bash -x demo01, I want to show you guys this. So I just restarted the logging, and it all happened, but let me walk through what happened.
So you see right here, I ran it read-only, and then I tried to touch a file in /tmp, and it failed, right? If I don't make it read-only, I can touch files in /tmp, I can modify that user space while it's running as a container, which is dangerous. Again, the threat model and the reality are different. A lot of times in people's minds they're like, oh, containers are read-only. Not by default, they're not. You have to add that. But then notice that I was able to touch this one, right? I was able to touch this, and in fact, I didn't show you that, actually... oh, I did. All right, so notice this. Actually, let me go back up here and show you this. This capital Z: how many of you know what the capital Z does? Raise your hand. All right, so some of you, that's good. The capital Z, when we fire up the container, forces SELinux to relabel that file system. So the MLS label that was generated for that process when it's running, it labels that file system with the same MLS label. So this process is the only process that can access data on that file system now. As I've brought the process into existence, I've ensured from an SELinux perspective that this process is the only one that can access that data. In fact, even if I go in from the host, I can't access data in there. So, essentially, I'm getting almost better security: I'm getting SELinux contexts enabled, and I'm only able to write to a single directory. And then I show that, with a simple auditd rule, I'm able to log everything that happens. So the touch got logged. If you look, /usr/bin/touch, it shows who did it and what time they did it. I'm able to audit all access to that data because that data has been limited to a single directory. And the nice part is, if I can figure out that there's only a single configuration file or a single directory for the data, it's pretty easy to systematically mount things in a way where I just have a few auditd rules and all of that gets logged now. Which is something we couldn't easily do without containers. And then I kill it at the end. There's nothing magical there.

So this is something I would argue: if I was running whatever process needs access to that /mnt data as a regular process, I would actually have a lot less security than if I'm running it as a containerized process. It would be harder to build SELinux policies to make that happen, right? Nobody goes and builds custom SELinux policies; it requires a lot of skill to do that. And then another one I didn't point out: atomic diff, or docker diff. You can see differences between layers in the file system. That's pretty cool. You can actually see what's changed in a container that's read-only, too. And again, how would you know that, say, if you fired up a regular process, without something like Tripwire? Which is another thing that we typically disabled because it was too hard, because you have to burn a database, trust the system for the time that you actually create the database, essentially do a hash of every single file, copy it off, put it on something that's read-only, because that's the only way to know for sure. And then, again, hope that you didn't get compromised during the time that you were doing that. So there's even risk in that. But at least in this scenario, we get that pretty much for free with containers.
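A rough reconstruction of what a demo script like that could look like; the paths, audit key, and image name are hypothetical, so treat it as a sketch rather than the speaker's exact file.

    # auditd rule: log every write or attribute change under the data directory
    auditctl -w /mnt/container01 -p wa -k container01-data

    # read-only root filesystem; :Z tells SELinux to relabel the volume with this
    # container's MLS category, so only this process can touch the data
    docker run -d --name demo01 --read-only \
      -v /mnt/container01:/var/lib/data:Z \
      registry.example.com/rhel7/myapp        # hypothetical image

    docker exec demo01 touch /tmp/foo          # fails: read-only file system
    docker exec demo01 touch /var/lib/data/ok  # succeeds, and gets logged
    ausearch -k container01-data               # shows /usr/bin/touch, who ran it, and when

    docker diff demo01                         # nothing should have changed in the image layers
    docker rm -f demo01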
Since we've packaged up the user space, we've signed it, we've scanned it, we know exactly what we have in that read-only user space. It's actually much better than running a regular process, because doing this for a regular process is much harder. Nobody does it, because the process can modify anything in the user space it wants, and we haven't limited it to writing to a single directory. I mean, at the end of the day, nobody does that. Although SELinux by default does limit things, for example Apache or MySQL can only write to places like /var/lib/mysql, things like that. But running multiple web servers that way becomes much harder. With this, you just keep firing it up and it keeps labeling new volumes. It's the exact same security for twenty web servers or one web server. You only get the one if you run a regular VM.

So, moving on to the container host. Again, what do we get today? Kernel quality. I mean, Red Hat's kernels are known to be pretty good; we always fix security issues pretty quickly. You can drop capabilities. You can do things like that. Nobody does it, but it is available. Read-only images, again, are a nightmare. Booting up servers that are read-only is not something that anyone does. Even in Amazon, nobody does it, even though you could. Limiting SSH access, we do that pretty well. We know how to configure things pretty well at this point. We know how to use Ansible, Puppet, Chef, cloud-init, and push-driven or policy-driven or declarative ways to define what's in the user space. We're pretty good at tenancy. I think today we know what we're doing from a tenancy perspective. But things that we don't really do: we don't generate an MLS context for every process. We don't do that. It just doesn't happen. I did talk to one user that was actually generating an AppArmor profile for every single one of their apps. I have to admit, I gave them a slow clap, because I haven't seen anybody else doing that. How many of you develop a separate SELinux policy or AppArmor policy for every one of your apps? There are, I don't know, maybe a hundred people in here, and nobody's doing it. Oh, one. You're in the top one percent. You should get a tattoo. One-percenter. So again, seccomp makes this pretty easy in a containerized world, and sVirt is brain-dead easy.

We're going to fire up the next demo. It's a funny one, but it'll be fun. So let's go. We're in here. So I have this. Let me show you this. Reboot. So I created this seccomp policy. It's very simple. It blocks you from running the reboot system call. It blocks you from mounting stuff. The mount part will become apparent as I show you the rabbit hole of why I had to add it. So let's do this. Reboot. What we're going to do is imagine I wanted to give a sysadmin, or some kind of power user, the ability to go run tcpdump, to go troubleshoot some network outage, right? But I don't want them to be able to do nasty things like reboot the server. So, does everyone understand that super privileged containers are like giving somebody root? A super privileged container can do pretty much everything that root can do. And in fact, you can do all kinds of nasty things, I found. In my first version of this demo, a guy pointed some out, and I'll go into it, I'll just demo it. I was like, all right, well, let's stop them from being able to run poweroff -f. If you type reboot, reboot actually tries to talk to systemd. There's no systemd running in this container, so it will never try to shut down.
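To make the shape of this concrete, here is a hedged sketch of roughly what a seccomp profile and container invocation like this could look like. The JSON format varies a bit between Docker versions, the image name is hypothetical, and this sketch uses capabilities plus host namespaces rather than a fully privileged container so the seccomp profile stays in effect.

    # block the reboot and mount syscalls, allow everything else
    # (older Docker versions use "name" per entry instead of "names")
    cat > no-reboot.json <<'EOF'
    {
      "defaultAction": "SCMP_ACT_ALLOW",
      "syscalls": [
        { "names": ["reboot", "mount"], "action": "SCMP_ACT_ERRNO" }
      ]
    }
    EOF

    # a troubleshooting container that can run tcpdump against the host's
    # network stack, but cannot reboot the box or remount /proc
    docker run -it --net=host --pid=host \
      --cap-add=NET_ADMIN --cap-add=NET_RAW \
      --security-opt seccomp=no-reboot.json \
      -v /proc/sysrq-trigger:/proc/sysrq-trigger:ro \
      registry.example.com/rhel7/rhel-tools     # hypothetical tools image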
But poweroff -f actually has the syscall hard-coded into it. So when we hit it: operation not permitted, right? The first time I showed this to another Red Hat guy, he goes, yeah, but what about sysrq-trigger? And I'm like, oh wait, what about that? So I was like, well, we'll just make it read-only. So notice I started it with /proc/sysrq-trigger mounted read-only. If you send a b to that... let's do an h. Oh wait, I can't send anything to it, because I made it read-only. So I can't send a b, which would actually reboot the system, and we tested it, it does happen. So I was like, dammit. And he's like, yeah, but I could just mount it. I'll just do a mount -t proc proc /mnt. I was like, you bastard. And so I added the mount one, and I was like, now you can't do that either. So now I've limited you from doing all kinds of things, and you're still root. You can do all kinds of other stuff. You can tcpdump. You can do all kinds of things. I could allow you to share the same TCP stack with the host, so if you want to snoop traffic, you could. I could allow you to do a lot of administrative tasks, but I can also very granularly limit what you can do with syscalls. So this is a way of showing that there are ways to delegate. And again, try doing this with a regular file system and a regular server and a regular user space; it's pretty hard to delegate root in a way that's sane. I'd still argue this probably needs some more testing, but I wanted to do it as a demo to show that it's pretty powerful, actually, that I can limit them from mounting, I can limit them from echoing stuff into /proc, I can limit them from running the syscalls. I'm pretty sure there's still no way to reboot the system unless they actually hack it. But again, there might be another way, and I'll figure out a block for that too. Security is one of those rat holes where you can go crazy.

The final place I'll really go to is the platform, right? You want to think about the platform. This is just like OpenStack, right? At the end of the day, it's all the things you want to do: you want to do the role-based authorization there, you want to do quotas there, you want to do your centralized authentication from that single point. And that's the demarcation point for the user at the end of the day. I want them to interact with the kubectl command, or the oc command in OpenShift. That's where network separation happens. And again, it's very similar to OpenStack at that point. That's where key management and things like that happen. I don't want to allow people to SSH into my clusters. They have a thousand nodes. I don't want them SSHing in and firing up containers. I want to log it all at the platform layer and I want to delegate control there.

So I have another demo that I want to show you. And this one is actually really cool, because people always have this problem of, well, how do I know where my stuff's running? And I'm like, you don't really need to know where your stuff's running. You just do an oc get pods. So let's say users are calling me up and saying there's something wrong with the registry. I don't know, I've got to push an image and it fails halfway. And you're like, well, I don't want to deal with this. I administer the container platform.
Somebody else is in charge of the storage and the registry server or whatever. So I'm like, okay, well, let me just let this dude log in, exec into it, and then let's do this. Or let's say it's a network person. We want to have a network person go do it with this one. Actually, I haven't tried this one. Let's see if it works. Okay, cool. So now watch this. All right, so we do this all the time. Let's troubleshoot the HAProxy config. I don't know where this thing is running. I couldn't care less, right? I just oc exec into it, because I can list it. I know which process I want, which is essentially an application, if you will. Maybe I'm the network person. I just want to be able to go mess with the router. That's all I care about. I don't care about the registry. I don't care about all the apps that are running. Somebody's complaining; I want to go look at the config file on the HAProxy and make sure it's working. That kind of delegation is pretty powerful. Again, I didn't have to give them full root. I can give them access to just certain pieces of infrastructure within the cluster and allow them to mess with that.

The same is true for applications, right? I don't allow application teams to mess with each other's stuff. I can just give them access to a single project. They can oc exec into it, troubleshoot it, do whatever they need to do. Do I give people access through SSH? There's no reason to. They can have a jump box. We could have a centralized jump box that only allows people to run kubectl and oc commands from there, and maybe we log them. We could do all the normal things that we would do to limit control. But at the end of the day, we wouldn't let them go straight into the cluster. We would demarcate them at the platform; the point of presence, essentially, would be that platform.

So I just... I have two more slides I'm going to rush through. But at the end of the day, I kind of just showed this. So, a standard web application: I think these are pretty commonly understood, right? This is all the stuff we have today. But some of it is very inconvenient, right? Like Tripwire. A mutable user space is a pain in the butt, the fact that anybody can go and change the user space. There's no temporal understanding. We have a tool called CloudForms, which can do what we call smart state analysis: snapshot the file system, essentially go and suck out a bunch of data from the file system once a day, once an hour, once a minute, whatever you schedule it to do. But even when you use configuration management, the configuration management doesn't touch everything on the file system. It's not like Tripwire. It only touches the stuff that it's aware of. So if somebody goes and does something nasty outside the scope of the configuration management, I have no spatial understanding of what they changed. I have no idea what they changed, again, unless I'm using Tripwire, but not a lot of people use Tripwire because it's inconvenient. And then I have no platform-level granularity of delegation, right? I mean, I can delegate control to, like, OpenStack. I could do that, and then they would have complete access over that VM, and that's fine. Maybe that's enough tenancy for what we're doing. But there's no process-level delegation. I can't delegate down to the process level.
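For contrast, the process-level delegation from the router demo a minute ago looks roughly like the sketch below on the platform side. The pod name, project, and config path are illustrative, not from a real cluster.

    # scope the network person to just the project where the routers live
    oc adm policy add-role-to-user edit netadmin -n default

    oc get pods -n default                    # find the router pod, wherever the scheduler put it
    oc exec router-1-abc12 -n default -- \
      cat /var/lib/haproxy/conf/haproxy.config   # hypothetical pod name and path
    oc rsh router-1-abc12                     # or grab a shell in just that pod to poke around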
And then, honestly, people just leave stuff around and they don't patch it very often. With a containerized web application, though, we have all the same standard tools as a regular server. We have read-only containers. We have signing. We have all the regular tools. We have platform-level delegation down to process-level granularity. We have spatial and temporal understanding. So when you build the container image, I know that on this day, at this time, this is exactly what was in it. It got signed. I scanned it. And as long as the signature matches, I have temporal trust that it hasn't changed since I signed it, because I can verify it. I have spatial understanding because it's a finite user space that's much more limited than the user space that's just laying around on a server and could change at any time. So I have space and time understanding of it. And then it's easier to do updates, because now, if I do an update, I roll forward to a new container image, or a new layer of the container image, and if that doesn't work, I roll back. So I've taken a lot of the pain out of doing updates.

And I have a funny story I share with that, but I'll keep it short. It was one where a developer and I messed around for an entire day. We were supposed to do our quarterly patching, and it still wasn't working. I restored from the backup because it still wasn't working. And all we had done was a yum update. That's all I'd done, a yum update, and the web app just wasn't working. And I'm like, what are you doing to test it? He goes, I'm logging in with a user. And I go, are you sure the user works? And he goes, oh. And this was, mind you, Saturday afternoon, after about eight hours of messing around trying to restore this thing back to what it supposedly was before. He goes, oh, I'm using the wrong username and password. And I was like, I hate you. I hate you. I hate you. But with a container, I'd have been like: rolled back, not my problem. I'd have said, something's wrong, but I have temporal and spatial understanding, I have a signature, I know that the code is identical to what it was before. I would have rolled back and said, have fun. And better, we would have done it the day before. We wouldn't have done it on Saturday. We would have tested it on Friday during work hours, while we were there. He would have said, oh, it doesn't work. I'm like, I don't know, have fun with that. He would have come over to me, because I guarantee he would have bugged me, and I'd have asked him the same question, and eight hours later, except I'd have been working the entire time, he'd be like, oh, I'm using the wrong username and password. I'd be like, cool. And then he'd say, yeah, it's good. And then on Saturday, I literally would have done the update in two seconds. Nobody would have been the wiser. And we would have taken that synchronous time and made it asynchronous. That, to me, is one of the biggest benefits from an operations perspective.

The limitations, though: tenancy is still not well understood. There's a shared kernel. Applications are hard to break up into code, configuration, and data, and there's a whole bunch of other things. I have a whole talk where I go into migrating into containers. And there is more infrastructure. Another problem people bring up is, well, if I add Kubernetes, can I get rid of OpenStack? And I'm like, nah, I wouldn't look at it that way.
I would look at it as adding on top, because nobody has an entirely containerized environment. There are web teams that might be able to do that. There are no corporate IT teams that can containerize everything they have. It's not happening in 2016. Maybe in 2036 we'll be able to make it all containerized, and then maybe we can have the conversation about that being enough. But at the end of the day, you're going to have a mix of containers and VMs. So I'd argue it's adding more infrastructure, and that becomes more of a spend and an investment in understanding and training and figuring out how to put things into containers and then add all these security controls. That's a large investment. So that's one of the downsides. And honestly, people just need a better understanding of how all this stuff works. I'd argue it's Unix 101. It's what I've known for 20 years. But a lot of people need to go back and understand processes and SELinux and how isolation works, how mounts work. When you look at all the namespaces in the kernel, user, mount, network, all of those things go back to Unix 101. It's like Unix internals 101. But that's not well understood anymore. I think we forgot about a lot of it.

So with that, I have about maybe two minutes. Actually, I'm looking at my watch, I have one minute. But I will go to questions. Yes. It can and it can't. So his question is: if I have a bunch of applications, and say I have a base layer that they all inherit from, and I need to patch a CVE in that base layer, how do I propagate that to all of the images? So I've been a proponent of a thing called a three-tier supply chain with middleware layers. Maybe you have a Ruby layer and a PHP layer, and all the apps inherit from those: all the PHP apps inherit from the PHP one, all the Ruby ones inherit from the Ruby one. I built a demo with OpenShift using what are called build configs and deployment configs, and I can show you automatic propagation of all those patches. What I do is I go in and I patch the base layer. The base layer gets built and stuffed into the registry. An image change trigger fires, and once the PHP image gets put in the registry, it triggers all the PHP apps to rebuild themselves. Then the Ruby one gets put in the registry, and it triggers all the Ruby ones to get rebuilt. I can do it at the push of a button. Now, it took me a long time to build that. I had never done it before, and nobody had ever looked at how to do this. But in OpenShift, it's trivial once you've built it, if you stick to a regimented hierarchical structure, as long as you have really solid Adam and Eve images, as I call them, and you have good demarcation that maps to your business. Again, I have PHP specialists and Ruby specialists. I have operations guys that know how to do the base layer really well. I have app team specialists that know how to put their app on it. I've now got a single language that we're all speaking. If they give me their Dockerfile, I can rebuild that. If they give me their CI/CD tests, I can run those to make sure the app's still working. I could test it the night before. And I show that, even as the applications get rebuilt, I can either automatically redeploy the app if I have tests and I trust those tests, but I'll admit that scares a lot of people.
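A hedged sketch of how those image change triggers can be wired up with the oc client; the project, image stream, and app names here are hypothetical, and the exact flags may differ by OpenShift version.

    # rebuild the middleware image whenever the base image changes
    oc set triggers bc/php-middleware --from-image=myproject/base:latest
    # rebuild each app whenever its middleware image changes
    oc set triggers bc/my-php-app --from-image=myproject/php-middleware:latest
    # optionally, redeploy the app automatically when its own image is rebuilt
    oc set triggers dc/my-php-app --from-image=myproject/my-php-app:latest -c my-php-app
    # patching then cascades: rebuild the base, and the triggers do the rest
    oc start-build base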
Or I can just rebuild the images, and then whenever the app teams go rebuild their app, they'll pick up the changes without even knowing it. Then they'll go do their tests, they'll do their user acceptance tests, and blah, blah. So even if you have a manual process to allow that UAT to go into prod, you can still leave that in place. Or, if you have really good automation and you're all in on automated deployment, you can do that too. OpenShift makes it pretty trivial. It's a lot of upfront investment to get everybody on the same page and figure it all out from an operations and policy perspective. But once you get it in place, and I do a demo of this, it's freaking awesome. It blows my mind, because I've been doing this for too long and I remember the pain of doing patching for the last 20 years. This is what I've wanted forever. It does make things easier once you buy into the right way to do it. Any other questions? All right. Thanks, guys.