All right, I'm seeing that it's time to get started. Y'all feel free to move on in; there's plenty of space up front. Appreciate you all being here. My name is Thomas Cameron. I'm a digital transformation and cloud strategy guy at Red Hat. I actually developed this deck with Mrunal Patel, one of our principal software engineers at Red Hat, who does a lot of development work around things like runc and a lot of the internals for containerization. So what we're here to talk about today is container security. This is an introduction to container security. We've only got a short amount of time, so I want to make sure we all understand that in 45 minutes we're not going to make you an expert in container security; this is just a brief overview. What we're going to talk about today is a little bit about what Red Hat has been doing around containerization, what containers are, how they work, what containers are not, and then we'll dive into container security. What makes up container security? We'll talk about things like kernel namespaces and control groups, the Docker daemon itself and how to secure it, using Linux kernel capabilities, and my favorite topic, Security-Enhanced Linux. And then we'll talk about some tips and tricks and come to a conclusion. So who am I and why should you care? Like I said, I'm Thomas Cameron. I've been in IT since '93. I literally had to shave my beard because I became a graybeard. That was heartbreaking. But I've been doing this since '93. I was a police officer in a former life. I come from a military family and from a security and law enforcement background, but I realized that I would like to not live on beans and rice for the rest of my life, so I changed careers into IT. I've been at Red Hat since 2005. I'm a Red Hat Certified Architect and a Red Hat Certified Security Specialist, and I have other certifications as well.
Dating myself terribly, I started out as a Novell network administrator, so yes, I do stand on the porch yelling, "you kids get off of my lawn." I've spent a lot of time focusing on security in organizations like banks, retail organizations, manufacturing companies, e-commerce, things like that. I don't know everything. You know, one of the things that's really cool about being at Red Hat is that it is incredibly humbling. Anytime I think that I really know a lot, I'll just go hang out with the engineers who actually write the code for a while, and I'm like, yeah, okay, I'm definitely still just a technician, you know? So, Red Hat and containers. We've been involved with containerization since probably 2010. We acquired a company called Makara back in 2010 and rebranded their technology as OpenShift. It used what we called at the time cartridges, which used SELinux, kernel control groups, kernel namespaces, and so on. But in about 2013, Docker really came to prominence, and we realized that it made sense for us to join the Docker community rather than trying to push our own container format. So we joined in 2013, and we are one of the top contributors to upstream Docker; last time I checked, and it's been a while, we were something like the number two contributor. And Docker has done some amazing things. They've been through multiple successful venture capital rounds. Apcera, Cisco, EMC, Fujitsu, et cetera, et cetera: all these companies, including Red Hat, are totally on board with container standardization around Docker. Even Microsoft has announced that they're going to get on board with Docker containers. So what are containers? Containerization, and today I'm specifically talking about the Docker container format, is a technology which allows applications like web servers, app servers, databases, and so on to be run abstracted from, and in some isolation from, the underlying operating system.
The Docker service can launch containers regardless of the underlying Linux distro, and that's one of the really cool things: they're super, super portable. Containers give you the option of incredible application density, and that to me is actually one of the most impressive things about them. You don't have the overhead of a full OS image, because it's not virtualization, and Linux control groups also allow for maximum utilization of the system. You can schedule as many containers, and allocate as many resources to those containers, as you'd like, to make sure that your system is 100% at capacity. And the same container can run on different versions of Linux, right? Ubuntu containers can run on Fedora. CentOS containers can run on RHEL. It's super, super flexible. So the old ideal that we've all heard about for years and years, write once, run anywhere, is a lot closer to reality with containerization. So that brings us to what containers are not. Well, first off, they are not a panacea. Containers are not the cure for all that ails you. They're great and they have their place, but containers are not a fit for every application, at least not yet. We're heading in a direction where it may be that containerization is good for everything, but I don't think it's realistic to say we're there yet. And to be clear, containers are not virtualization. I've heard conversations in the hallways here where people are like, oh, this is just the next generation of virtualization, right? And it's not. Containers can be run on bare metal; I run containers on my laptop all the time. Not the same thing. So let's talk about what is going on under the hood, so to speak, to secure containers. The first thing I want to talk about is kernel namespaces. A kernel namespace is just a way to make a global resource appear to be unique and isolated to a process.
The namespaces that the Linux kernel can manage include mount namespaces, PID namespaces, UTS namespaces, IPC namespaces, network namespaces, and user namespaces. Every process belongs to exactly one of each of these namespaces. A process that calls the unshare() system call with CLONE_NEW* arguments (CLONE_NEWNS, for instance) will get its own new, isolated instance of a namespace, and the clone() system call with those same flags can be used to spawn a process directly into its own namespaces. So let's dive a little bit deeper into what these mean. Mount namespaces are a collection of file system mounts that make up a process's view of the file system hierarchy. The root of a container is made up of files packaged by the container author, so when you pull an image from a registry, or build one internally, and that container process spins up, you're going to see the root of the file system based on what's inside of that image. Other mounts are added to the mount namespace of a container for security and convenience; say you have multiple containers that you all want to access the same /var/www/html directory, for instance. /proc is mounted so the container can see the processes running inside its PID namespace, and /dev/pts/ptmx is mounted so containers can spawn their own isolated terminals. And then some locations, such as /proc/kcore, are masked over with /dev/null so the containers cannot access raw memory, because again, you don't necessarily know who's running what inside of the container. Raise your hand if you think that giving container users access to raw memory is a good idea. Yeah, we'll laugh at you. Okay, just wanted to make sure. And then other locations, such as /sys, are made read-only, as sysfs isn't completely virtualized.
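If you want to poke at this yourself, every process's namespace memberships are exposed under /proc as symlinks, and util-linux's unshare command wraps the unshare() call described above. A quick sketch, assuming a Linux host; the unshare invocation is shown commented because it needs user namespaces enabled:

```shell
# Each entry under /proc/<pid>/ns is a symlink whose target names the
# namespace type and its inode, e.g. "mnt:[4026531841]". Two processes
# are in the same namespace if and only if the inodes match.
readlink /proc/self/ns/mnt /proc/self/ns/pid

# To actually get new namespaces without root (kernel permitting):
#   unshare --user --map-root-user --mount --pid --fork bash
# Inside that shell, the readlink above reports different inodes.
```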
So if you look at this, and this is kind of an eye chart, you can see that as I spin up an Ubuntu container on my Fedora box, you see that big hunkin' crazy name right there, that big long UUID? That is the mount namespace for this container. All right, process ID namespaces. PID namespaces are, again, that same sort of isolation, this time of PID numbers, so that containers see their PID numbers as a totally separate table from the PIDs of the parent operating system. The first process in the container becomes PID 1 inside of the container. In this example, what I did was fire up a Fedora container and run bash, and you can see that process ID 1 in here looks like bash. We all know that's silly, because normally you've got to have some sort of init process, or systemd. The reality is that if I'm on the host and I run ps ax, that PID 1 that we saw inside of the PID namespace in the container is actually that one right there, 18557. So, user namespaces: these isolate UIDs and GIDs. Now, I see a bunch of you taking pictures. You're absolutely welcome to take pictures of these slides; they will be available after the presentation. I'll post them on my people page at people.redhat.com, and then they'll also be available on the conference website as well. Feel free to take pictures if you want, but I promise I'll give you the slides. All right, so user namespaces: really, it's the same thing we've been talking about. They isolate and abstract the user IDs inside of a container, so that even though inside the container it looks like I might have root access, I might be UID 0, it's really still running as a non-privileged user on the machine.
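The kernel exposes that UID mapping through /proc as well, so you can see exactly who you really are. A small sketch, assuming a Linux host; the container-side values are illustrative:

```shell
# /proc/<pid>/uid_map has three columns: UID inside the namespace,
# UID outside it, and the length of the mapped range. Outside any
# user namespace you get the identity map over all UIDs:
cat /proc/self/uid_map
# prints something like:  0  0  4294967295

# In a container whose root is mapped to host UID 1000, the same
# file would instead read:  0  1000  1
```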
So if I use, for instance, the OCI tools and take a look at my user maps, I can see that it looks like I'm root, but in reality, I'm mapped to user ID 1000 on the host OS. So again, that gives us the ability to do things like spin up applications inside the container that bind to privileged ports, without my actually having to give somebody root access on the host OS. Inter-process communication namespaces: same thing with IPC. The container has no IPC objects mapped in this example, so if I run ipcs inside of my container, there's nothing mapped, whereas if I go to the host and run ipcs, I've got all kinds of inter-process communication objects mapped. So again, it's just abstraction of IPC inside of the container, so that if I do need to have multiple processes doing IPC inside the container, they're isolated from the host operating system, and we keep those secure and segregated. Now, changing gears a little bit: Linux control groups are awesome for managing resources. Control groups provide a mechanism for aggregating and partitioning sets of tasks, and all of their children, into hierarchical groups with specialized behavior. In other words, I can put various resources inside of a control group and then limit how much CPU it gets, what percentage of disk I/O it gets, what kind of network bandwidth it can consume, and things like that. And then the PID cgroup is used to allow a cgroup hierarchy to stop any new tasks from being forked or cloned after a certain limit has been reached. So for those fresh college CS grads: no, your fork bomb attack is neither unique nor clever. We will stop it. All right, this ensures that even if a container is compromised, or even if it just has badly written code and spins out of control, we can put limits around it using Linux control groups, so we can say you are going to have no more than two gigs of memory, for instance.
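You can see which cgroups any process belongs to straight from /proc, and the limit itself is just a flag on docker run. A sketch, assuming a Linux host with cgroups; the docker invocation is hypothetical and shown commented:

```shell
# Every process is already in a cgroup; containers simply get their
# own. This prints one line per hierarchy on cgroup v1, or a single
# "0::/..." line on cgroup v2:
cat /proc/self/cgroup

# A memory-capped container would look something like this; once the
# processes inside exceed the cap, the kernel's OOM killer steps in:
#   docker run -it --memory=2g fedora bash
```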
And even if somebody does something really, really crazy and it starts gobbling up memory, once it hits that two-gig limit, it's going to be stopped. Note that when I run the command systemctl status docker.service, I get the control group and slice information. So I'm on a Red Hat Enterprise Linux machine here. When I run systemctl status, notice that I see my control group information for the Docker service, and then each container that gets spawned underneath it is going to be in its own control group. You can also navigate the /sys/fs/cgroup pseudo-directory to see what resources are allocated to your containers. Now, there are something like 8,500 entries in there, so I'm not going to go over all of them, but you can get really, really granular and look at exactly how much memory, how much I/O, things like that, are allocated to each one of those containers. So in this case I do docker run, and this is kind of a cool demo that Mrunal did: I create an instance of bash, but I only give it 100 megs of memory. So if I look at the cgroup inside of there, I get all the information, all the crazy UUIDs I was telling you about earlier, and those are all of the resources that are available to me: cpusets, hugetlb, devices, my block I/O, CPU, and so on. And if I look in /sys/fs/cgroup/memory, I can see that that's how much memory this thing thinks that it has. And if I do a fork bomb, what'll eventually happen is I can run it, boom, boom, boom, and it drops out of that Docker container; it just terminates the container. If I go look at the logs on the host, you can see where it goes: nope, out of memory, and it kills it, because it only had 100 megs. So I can force an out-of-memory condition pretty much any time. All right, so let's change gears again a little bit and talk about security vis-a-vis the Docker daemon.
The Docker daemon itself, /usr/bin/docker, is responsible for managing the control groups, orchestrating namespaces, and so on, so that Docker images can run and be secured. Because of the need to manage kernel functions, Docker itself runs with root privileges. Make sure that you are aware of that, and don't just grant access willy-nilly. Some of the things you need to consider: as with any service on your machine that runs with root privileges, be aware of who's going to be able to use it, and only grant access to trusted users. Now, older Docker documentation, back when I first started doing this, recommended that you add users to the docker group so they could run Docker commands. We don't allow that at all; if you're using an RPM-based distro, like Fedora, RHEL, or the other one, CentOS, then you have to actually run it with root privileges. And if you're using the REST API to manage your hosts, make sure that you don't have any vulnerabilities exposed. Make sure that you're using SSL or TLS, and if you're using REST over HTTP, don't expose it except to secured networks or VPNs. So another one of the components of container security is Linux kernel capabilities. Now, historically, if I am root, I have access to everything on the system. I can do whatever I want; I am omnipotent. Linux has the ability to break root up into 38 distinct capabilities that can be enabled or disabled independently. The cool thing about that is it allows you to grant or deny a process only the capabilities it requires, so it doesn't need to be granted full root access like before. So for instance, with Linux kernel capabilities, I can grant a regular non-privileged process the ability to bind to a port below 1024, or mount file systems, or do those things that have typically been reserved for the root user. The default list is a minimal list that works for most applications.
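You can watch those capability sets directly in /proc, and docker run lets you trim them per container. A sketch, with a Linux host assumed; the docker line is a hypothetical illustration, and nginx here is just an example image:

```shell
# CapEff in /proc/<pid>/status is the effective capability set as a
# hex bitmask; for a fully unprivileged process it is all zeros, and
# for root on most systems it has many bits set.
grep '^Cap' /proc/self/status

# Best practice from the talk: drop everything, then add back only
# what the workload needs. E.g. a web server that must bind port 80:
#   docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx
```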
There is a catch-all Linux kernel capability called CAP_SYS_ADMIN. Docker doesn't grant that one by default, because it's kind of generic and it's kind of dangerous. So the best practice is to audit and drop all of the capabilities that aren't actually being used by your container. All right, so this next slide is kind of an eye chart, I apologize, but this is just looking at the Linux kernel capabilities in the Docker source on GitHub. The graphic turned out not to be what I was hoping it would be, but my point here is that you can actually go look at the Docker source code on GitHub, or at least I think so; I don't know, that's kind of changed with Moby, hasn't it? Anyway, you can look and see exactly which kernel capabilities are exposed. So the next layer of container security is Security-Enhanced Linux. SELinux is a mandatory access control system: processes, files, memory, network addresses, ports, all kinds of things like that, are labeled, and there is a policy in place, administratively set and fixed, which determines how processes can interact with each other, how they can interact with the file system, with users, with network ports, and so on and so forth. SELinux is primarily concerned with labeling and type enforcement. So if I have a mythical service called the foo service, the executable on disk might have the label foo_exec_t, the config files might be foo_config_t, the log files might be foo_log_t, and that underscore-t part is the type. The data files might be foo_data_t, and when foo is running in memory, the process might have the label foo_t. So that's labeling. Type enforcement is the rule that says that when a process running in the context of foo_t tries to access a file on the file system with the label foo_config_t or foo_data_t, that access is allowed, right? That kind of makes sense.
The foo process needs to read the foo config files or the foo data files. Any other access, though, unless it is explicitly allowed, is denied. So if a bad guy takes over the foo_t process that's running in memory, and then they try to access a file on the file system with the shadow_t label, well, that probably doesn't make sense, right? Because we don't want an internet-facing service to access the /etc/shadow file. So if that foo process running in the foo_t context tries to access the directory /home/tcameron, with the label user_home_dir_t, even if the permissions are wide open, the policy can stop that. Or, if you specifically allow it using SELinux policy, you can permit it. But by default, unless it's explicitly allowed, it's stopped. SELinux labels are stored as extended attributes on the file system, or in memory for processes. Labels are stored in the format SELinux user, SELinux role, SELinux type, and then optionally the MLS or MCS labels. So for the mythical foo service, the full label might be user_u (that's the SELinux user), object_r (the object role), foo_t (the type), and then s0:c0. Now, when you're dealing with SELinux on Red Hat based systems, so Fedora, Red Hat Enterprise Linux, CentOS, Scientific Linux, or any of those derivatives, the default policy for SELinux is called the targeted policy. In the targeted policy we don't really use the SELinux user or role, so we can ignore them. We can also mostly ignore the MLS (multi-level security) and MCS (multi-category security) labels, but I'll talk more about those in a little while. So think of MCS labels as just extra identifiers. With SELinux for containers, we can be super, super granular about which processes can access which other processes.
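Just to make that label format concrete, here's a tiny shell sketch that splits the mythical foo service's context into its four fields. Pure illustration; the context string is the made-up one from the talk.

```shell
# An SELinux context is user:role:type:level. Note the level itself
# can contain a colon (s0:c0), so we split off the first three fields
# and keep the remainder as the level.
ctx="user_u:object_r:foo_t:s0:c0"
IFS=: read -r se_user se_role se_type se_level <<EOF
$ctx
EOF
echo "user=$se_user role=$se_role type=$se_type level=$se_level"
# -> user=user_u role=object_r type=foo_t level=s0:c0
```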
So even though these two labels look almost completely identical, you know, I've got user_u and object_r, and they're both the foo_t type, and I've got s0 for the sensitivity but c0 versus c1 for the MCS category, as far as SELinux is concerned, those are totally different. So if a bad guy were to compromise one of these processes and try to get it to access the other one, SELinux would say, nope, you two are running in different SELinux contexts, I'm not going to allow that. So again, type enforcement says that a process with the first label is different from a process with the second one, so it prevents them from interacting. And neither of those processes would be allowed to access anything under, like, /etc, because that has different labels. Now, on a standalone system running the Docker service, like if I'm doing development on my Fedora laptop or my RHEL laptop, all the containers run in the same context by default. But our PaaS offering, OpenShift, which is our implementation of Kubernetes plus systems management and orchestration for containers, actually gets a little bit more granular: each container gets its own context, and again, we really rely on labeling to get granular control over those. So even if a bad guy does gain access to a Docker container process on the host, SELinux prevents them from being able to access the other containers, or the host itself. So again, you can get super, super fine-grained. Now, I did an example to show you what that looks like if somebody were to exploit something and take over a container, and how we stop them from having access. What I'm going to do is, as root, emulate somebody taking over my process: I'm going to change my context, and I'll show you that everything gets blocked.
So in this case, I'm logged in as root, so I run id, and you see that my SELinux context is actually the unconfined type, and I've got a broad range of access: s0 through s0, and c0 through c1023. That's my ID. Now what I do is use runcon to change my context, basically, and take on the context of an OpenShift process. As soon as I do that, I immediately get an error message saying, oh, I can't read your .bashrc file, because that .bashrc file has an SELinux label that says it's only supposed to be readable by root. I take a look at /etc/shadow, and again, I get permission denied. I try to create a file in the root of the file system: permission denied. I try to look in /home/tcameron: permission denied. But being the good little hacker that I am, I'm like, well, I've got root access, right? So I'm going to run setenforce 0 and turn SELinux off. But again, because I don't have the correct SELinux context, even though I am still UID 0, even though I am still the root user, just because I've changed SELinux context and I'm not in the right context, that gets denied as well. So that's one of the really cool things about SELinux: even if I compromise a process that has root privileges, if it doesn't have the correct SELinux context to go do other things on the file system, it still gets blocked. So, big fan of SELinux. I like SELinux. How many folks in here have historically done the setenforce 0 thing, or turned SELinux off? Bad users, bad, bad. Go to YouTube, watch the video "SELinux for Mere Mortals." It's about 45 or 50 minutes, and it's actually pretty decent; it's been viewed something like 80,000 times at this point, and I've gotten fairly good feedback on it. So go watch that. So, seccomp. Seccomp is syscall filtering: it lets you match system calls, or even their arguments for more specific matches, and then kill the process, return an error number, allow, trap, or trace.
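For a feel of what those filters look like on disk, here's a minimal Docker-style seccomp profile plus the way you'd pass it in. Sketch only: the file path and image are hypothetical, and the profile shape follows Docker's JSON seccomp format.

```shell
# Write a profile that allows everything except getcwd, which
# returns an error instead (SCMP_ACT_ERRNO):
cat > /tmp/no-getcwd.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_ALLOW",
  "syscalls": [
    { "names": ["getcwd"], "action": "SCMP_ACT_ERRNO" }
  ]
}
EOF

# Hypothetical invocation; inside this container, `pwd` should fail:
#   docker run --security-opt seccomp=/tmp/no-getcwd.json -it fedora bash
```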
You can use either whitelists or blacklists, and Docker uses a whitelist approach by default. What it does is disable calls like kexec_load, init_module, delete_module, et cetera, et cetera, and it also disables 32-bit syscalls. What that allows you to do, again, is have really, really fine-grained control. And again, this is not typically something you're ever going to mess with as someone who's a consumer of containers, or even spinning up your own, but under the hood, you can do some pretty cool stuff. In this case, we've got a JSON file that says we're going to trap the getcwd (get current working directory) system call and stop it. And so what happens is, we do docker run and say, use the security options that are defined in that JSON file, and as soon as it comes up, we get our Docker container, and when you run pwd, like, just show me the working directory, it's blocked. You can get as granular as stopping individual syscalls like getcwd, right? Now, in the real world, this is a silly thing; I would never do this. But it's a great example to show that you can get incredibly fine-grained control over your container using seccomp. And it's just JSON. I mean, it's fairly easy to read, and you can shamelessly copy other people's JSON files and modify them, which is what I do, because I'm not a developer. All right, so some tips and tricks: some things that you should do, and some things that you shouldn't do. Remember that at the end of the day, containers are just processes running on your system. We put some pretty cool security around them, in layers, but at the end of the day, they're just processes running on the host. So use common sense, the same common sense that you would use in running anything else on your system. Do have a process in place to update your containers, and follow it.
I cannot tell you how many times I've had conversations with developers who are like, yeah, I had an itch to scratch, we had a project, I had a tight deadline, I put something together, I containerized it, I pushed it out there, they're running it in production, and now I'm off on another project. And I'm like, well, what are you going to do to make sure that if there's a security vulnerability in that thing, it gets fixed? I don't know, when the operations guys call me to take a look at it, I guess I'll take a look at it. Eh, that's dangerous. So make sure you have a plan in place, and follow it. Run services in the containers with the lowest privileges possible. Drop unused capabilities as soon as you can, and we showed you a number of ways that you can do that. So make sure that you are using the minimal capabilities and the minimal privileges possible. Mount file systems from the host read-only wherever possible. Remember that with containers, we have the ability to mount file systems from the host across multiple containers, so if you don't need write access to something, don't grant write access to it. Treat root inside the container just like you would on the host. Run your containers as non-root wherever possible; for a MySQL container, for instance, you want to run it in the context of the mysql user. This is enforced in OpenShift: we force the lowest privileges possible. And definitely watch your logs. This is pretty good software, and it will tell you if things are going sideways, if things are being done that are unusual; in many cases you can see in the log files when something has gone wrong, or somebody's trying to attack, or something like that. So pay attention to the log files. And consider using no-new-privileges as a security option for your container.
In other words, even if somebody does manage to compromise the container and tries to do a privilege escalation, if you're running it with the security option no-new-privileges, they can bang against that door all they want; they're not going to get any additional privileges. So, some don'ts. Don't go to Bill and Ted's Excellent Docker Registry and download any old container that you find out there. If you're going to develop containers internally, that's fine, just use proper hygiene and have a plan to update them, or use Docker registries that are trusted: there's Docker's registry, there's Red Hat's registry, or you can build them internally. Just, again, use common sense. Don't use SSH inside of the container. If you are using SSH inside of the container, and you're administering your container via command-line login, you're doing it wrong. Build your containers in such a way that if you need to make changes, you just spin up a new version of the container and push it. If you're logging in and administering multiple containers, that kind of defeats the whole purpose of being able to spin up a thousand of them, right? That's not good practice. Don't run with root privileges; as I said, we enforce that, but some distributions do not. Don't disable SELinux. Seriously. Don't disable SELinux. And don't roll your own containers once, roll them out into production, and then never maintain them. That's something I see happening a lot. Don't run production containers on unsupported platforms. Hey man, we downloaded this thing for free, and I can run it, and I don't have to pay anything. And then you have a Heartbleed, or some sort of vulnerability, and it's a fire drill: who are you going to reach out to? All right, so all that being said: go forth and contain stuff. Containers are awesome. I love what we, what the industry, are doing with containers. They make application deployment incredibly easy.
They give you ridiculous density capabilities. They can be very, very secure if you just use a modicum of common sense. By design they're relatively secure, but there are some of those gotchas. As with every other piece of software out there, containerization just requires some feeding and maintenance. You've got to take care of your systems. Well maintained, these containers can make your business a lot more agile, a lot less complex, and, if done right, absolutely safe. So, any questions? Yes, sir? "You spoke about hygiene and not downloading any old container. Is there the ability to code-sign containers, so that you cryptographically sign them and I can validate: you developed it, and I trust you?" So the question, in case anyone didn't hear it, is whether there's a way to cryptographically sign containers and make sure that they're valid and trusted. I don't believe that right now there's a standard way of doing it. I know that it's something there's a huge desire for, and I know that internally we've been working on it. I don't know if it's been accepted upstream yet, but the answer there is: stay tuned. That's something there's definitely a huge desire for, and a need for. Yep, yes, sir? Right, mm-hmm, mm-hmm. Right, so the question is, you're asking for clarification on "don't run SSH inside of the containers." So again, the goal of containerization is automation, right? I want to be able to have one Dockerfile, one Docker image, whether I need to spin up one container or a thousand containers. So if you have configuration changes that need to be made, theoretically, when you spin the container up, you're going to have your configs inside of the container, and they're going to be adaptable: you're going to use things like, if the hostname is this, then do this, else do that, right? The point is that you want to have your logic inside of the container, so that you don't ever have to physically log into it and make any changes to it.
That defeats the purpose of that automation, right? Yes, sir? "With regards to either your own or third-party sourced containers, are there tools or best practices for sanitizing and auditing them?" Yeah, I mean, honestly, you would use the same rules for building your containers that you would use if you were deploying any workload into production, even a VM or anything like that, right? Least privilege possible, least attack surface possible, load the minimum software possible into the container. I don't want to say it's nothing new, because it is kind of new, we're doing containers, so your brain needs to bend slightly differently, but the concepts behind security are the same concepts we've had for years, right? Keep it as small and as simple as possible, keep it as easy to update as possible, keep it updated. So it's really the same sort of methodologies, you're just applying them to container files instead of to VMs or bare-metal machines. Yes, sir? Yes, maintain? Okay, so the first question is, what does it mean to maintain a container? So when you're creating your containers, right, containers can be made in layers: you've got the file system layer, you've got application layers on top of it, so you can actually create containers out of a bunch of different components. When I say to maintain your container, what I mean is, don't just have some poor developer on a laptop somewhere build it one time and then go put it into production. Have a cycle where you've got development, and QA, or penetration testing, or user acceptance testing, whatever you want to call it, and then it gets out into production. And then have a way to do either introspection, or have a scheduled task that says, hey, I know that I used Drupal, for instance, so have a task that's going to go, oh, hey, I see that a new version of Drupal has been released.
Make sure that then I'm gonna spin up a new container, put it into dev, put it into QA. So this is where you get into the whole concept of CI/CD, right? So that's what I mean by maintaining your containers. Make sure that you have an environment where you don't have this fire-and-forget, one-time type of thing, because that will bite you in the butt, I promise you. And he said he had a second question. So, no, you can wait right there, just, yeah. So the question is: how do I maintain containers? Like, if I'm running MySQL, how do I do backups and restores of the database and stuff like that? Yep, so in that case, what I would say is, again, we don't want to SSH in and run a bunch of manual commands if you can avoid it. So either do something using REST, do something where maybe I've got a console on one machine that's gonna be able to go out and talk over API calls to the database, for instance, and do whatever maintenance tasks you need to do. Or, potentially, remember that we want containers to be throwaway, so when that MySQL database is running in a container, I would set up my container so that it's gonna mount the underlying file system from the host, or some sort of shared storage, in a way that my databases live there, and if I needed to run any kind of maintenance or do extractions or anything like that against them, I could do it there. So that even if the container gets compromised or I have to throw it away, my data is still secure. Yep, thank you. Yes, sir? My question is about software that requires privilege, and privileged use of resources, and it's something that you don't own. I have a couple of examples. Vault is one of them. Which one, I'm sorry? Vault server. Okay. HashiCorp Vault requires a certain amount of privilege. Yep. So I was wondering what the best practices are and what exactly that means. Sure, so remember what I said. I didn't say don't run privileged, sorry, I can't even talk. Don't run containers with privilege escalations.
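That host-mounted data pattern can be sketched with the Docker CLI like this. Treat it as an illustration, not a hardened recipe: the host paths, password, and image tag here are hypothetical.

```shell
# Keep the data outside the throwaway container: bind-mount a host
# directory over MySQL's data directory, so the container can be
# destroyed and recreated without losing the databases.
docker run -d \
  --name mysql01 \
  -v /srv/mysql/data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mysql:5.7

# Backups then run against the network-exposed service (or against the
# host copy of the data), never by SSH-ing into the container:
mysqldump -h 127.0.0.1 -u root -p --all-databases > /srv/backups/all.sql
```

If that container is compromised or thrown away, the data under `/srv/mysql/data` survives, which is exactly the point being made above.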
What I said was: run them with the least possible privileges. If you have an application, a third-party application, that absolutely, positively requires running as root inside of the container, then that is your least-privileged container, right? I mean, you're using the least privileges necessary. I would follow up with the vendor of the software and say, hey, this is a concern, maybe we wanna see if we can do a privilege step-down to a user account or something. But if the application requires it, then just be aware of that, and make sure that you're doing things to keep any unauthorized access away from it, et cetera, et cetera. It's kind of the standard: if you gotta do it that way, do it that way, just try to isolate that container as best you can. Okay, yes, sir. So you asked about using Kubernetes, and you asked if the tools were inherent to... were you talking about Kubernetes? Oh, I see what you're saying. So honestly, most of the stuff that you saw me talking about here is really happening under the hood. Very rarely would you actually need to go and make changes. You did see, though, that I can pass security options to the Docker process. If you're doing it standalone, or if you've decided that you're gonna build your own container platform, then those are some things that you could do. If you use a container platform like Atomic Host from the Fedora project or OpenShift from Red Hat, we'll handle that sort of thing under the hood. We'll do those things for you. Yes, sir. How does a container know that it's running in a container environment? Hopefully, it doesn't. The whole point is that we want each container to think, this is my domain, I am king here, I do whatever I want. And if you do it right, then they're not gonna know that they're a container, and that's a good thing.
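A few of those least-privilege knobs are right there on the Docker command line. A sketch, with a hypothetical image name and UID:

```shell
# Least-privilege sketch: drop every kernel capability, add back only
# what the app actually needs (here, binding a port below 1024), run
# as a non-root UID, and keep the container's root filesystem read-only.
docker run -d \
  --user 1001 \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --read-only \
  myapp:latest
```

The idea is the one from the talk: even when an app demands more privilege than you'd like, you shrink everything around that requirement as far as it will go.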
Now, having said that, if in the container you look at the proc file system and you see those big honkin' UIDs for, like, the root of the file system, that's a dead giveaway that you're running inside of a container. Any other questions? Was this helpful? Awesome, good job. Well, guys, thank you very much for coming. I appreciate it.
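As an aside, that kind of proc-filesystem giveaway is easy to check for. Here's a tiny sketch; the `in_container` helper is hypothetical, and it just scans a cgroup listing (normally `/proc/1/cgroup` on the machine being checked) for container-runtime names:

```shell
#!/bin/sh
# Hypothetical helper: guess whether we're inside a container by
# looking for container-runtime names in a cgroup listing
# (by default /proc/1/cgroup; a file path can be passed for testing).
in_container() {
  cgroup_file="${1:-/proc/1/cgroup}"
  grep -qE 'docker|lxc|kubepods' "$cgroup_file" 2>/dev/null
}

# Example usage: in_container && echo "probably containerized"
```

It's a heuristic, not a guarantee; as the talk says, a well-built container shouldn't need to care either way.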