All right, I think it's time to get started. I'm showing that it's 11 o'clock. Welcome, everybody. My name is Thomas Cameron. I'm a global cloud strategy evangelist at Red Hat, and today we're going to be talking about container security. I'm reusing a presentation that I gave at DockerCon this year, because Mrunal Patel and I collaborated on it, and he filled in a lot of the gaps, honestly, that I had in my knowledge. I am very much from a system administration, systems engineering background, while he's a developer, so I was able to shamelessly pick his brain for some of the content here.

What we'll be talking about today: a little bit about me and about Red Hat, then a little bit about where containers came from and what Red Hat has been doing with containers. We'll talk about what containers are and how they work. We'll talk about what containers are not, because I think there's still some confusion about when to use containers, when to use virtualization, whether they're the same thing, and so on. Then I'll talk about the components which make up container security: kernel namespaces, Linux control groups, the Docker daemon and how it works and the security it provides, Linux kernel capabilities, and Security-Enhanced Linux — how it works and why it matters. And then I'll cover a few tips and tricks with container security and draw some conclusions.

As far as who I am: as I said, my name is Thomas Cameron. I'm a cloud strategy evangelist at Red Hat. I've been working in information technology since 1993. The reason I enjoy security is that I actually started my career out of school as a police officer. I really enjoy law enforcement and forensics and security, so I'm kind of a weird hybrid: I come from a security background, but now I'm a computer geek.
I've been with Red Hat since 2005. I'm a Red Hat Certified Architect and a Red Hat Certified Security Specialist, and I have other certifications. I've been in IT long enough that I actually have a very strong background in Microsoft security, and before that I was a Novell certified engineer, so I'm really dating myself there — I remember IPX and SPX. I've spent a lot of time focusing on security in environments like financial services, retail, and manufacturing. So I certainly don't claim to know everything. I've been at Red Hat long enough to know that no matter how much I learn, there's always somebody who knows a lot more than I do, and I always have something to learn.

So let's talk a little bit about Red Hat and what we've been doing with containers. Red Hat has actually been working on containerization since 2010 or a little before. In 2010 we acquired a company called Makara, a platform-as-a-service company. When we bought Makara, we liked what they were doing, but we really rewrote it and rebranded it as Red Hat OpenShift. Makara had technology that was really analogous to containers today, except they called them cartridges. Those cartridges used Security-Enhanced Linux, Linux control groups, and kernel namespaces. But as tends to happen in the open source community — open source is really a meritocracy — even though we liked the technology we started with from Makara, in about 2013 the community and the industry really started showing that what Docker was doing made a lot of sense. So we rebranded OpenShift to use Docker as the underlying cartridge technology, and we started participating in the Docker community. The last time I checked, we were the number two contributor to the upstream Docker project. We're very actively involved. And that's not a "look at Red Hat, we're the number two contributor."
It's a "we recognize that we have a responsibility to the greater community to be good stewards of the code." So we contribute as much as we can upstream to make sure that we are being those good stewards. And Docker as a company is actually doing some pretty incredible things. They've been through multiple successful rounds of venture capital raising, and companies like Apcera, Cisco, Goldman Sachs, Intel, Pivotal, and also Red Hat are all on board with container standardization, with Docker as a standard container format. Even Microsoft has announced that they will support the Docker format.

So let's talk a little bit about what containers are. Containerization — specifically Docker — is really just a technology that allows you to have applications, whether they're web servers, databases, or application servers, run abstracted from, and in some isolation from, the underlying operating system. The Docker service can launch containers regardless of the underlying operating system or the underlying Linux distribution. Containers can do some amazing things. They can drive very high levels of application density: you can run a whole lot of applications on a single machine because you don't have the overhead of full virtualization for every application. Linux control groups also enable you to really maximize utilization. A lot of times we think about control groups as stopping something from taking too many resources, but the flip side is that control groups also allow you to really manage utilization. Say I've got a system with 32 gigs of memory, and — just for a nice round number — I grant one gig of memory to each of my applications. That means I know for sure that I can run at least 31 applications, since I want to reserve some for the operating system. It allows you to be very granular in your control of the underlying systems.
Now really, we know that with one gig per application, not all applications are going to use that full gig. The point is that you can be very, very precise in how you allocate resources using containers. And the same container can run on different versions of Linux. You can run an Ubuntu container on Fedora, a Fedora container on Ubuntu, or a CentOS image on Red Hat Enterprise Linux. It's very, very flexible, and it allows for the distribution of containers across pretty much any version of Linux you have, as long as you've got the right version of Docker.

Now that leads to the question: what are containers not? Well, containers are not the cure for all that ails you. In the American West, there was a joke about snake oil salesmen — people would sell snake oil and say it'll cure everything from bad breath to cancer. A lot of times I see people saying containers are the fix for everything. Maybe not. There are a lot of things that containers can do, but they are not a panacea. Containers are not yet a fit for every application out there. There are application vendors who simply won't support their application in a container, and there are some applications which don't lend themselves to containerization — if you have a really massive monolithic application, or a giant database, or something like that, it's not necessarily a good fit. And containers are not virtualization. I hear a lot of people comparing containers to virtual machines. They're really not the same. I run containers on the bare metal operating system on this laptop all the time, so nothing needs to be virtualized.

So let's talk a little bit about some of the features of containerization. The first one I want to talk about is kernel namespaces. Namespacing is just a way to make a global resource appear to be unique and isolated to a process.
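On any Linux box you can see this namespace membership directly — the kernel exposes one symlink per namespace type under /proc/&lt;pid&gt;/ns. A quick sketch, nothing Docker-specific:

```shell
# Each symlink names a namespace type plus the inode that identifies the
# particular instance this process belongs to, e.g. "pid:[4026531836]".
ls -l /proc/self/ns

# Two processes are in the same namespace exactly when these inodes match.
readlink /proc/self/ns/mnt
```

Compare the output for a process inside a container and a process on the host: the inode numbers differ for every namespace the container was given.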
The namespaces that the Linux kernel can manage include mount namespaces, process ID (PID) namespaces, UTS (Unix time-sharing) namespaces, IPC (inter-process communication) namespaces, network namespaces, and user namespaces. All processes belong to exactly one of each of these namespaces. A process that calls the unshare() system call with the CLONE_NEW* flags will get its own isolated instance of a namespace, and a clone() or fork() system call can also be used to spawn a process in its own namespace.

So let's drill down a little into what each of these namespaces means. Mount namespaces are just a collection of file system mounts that make up a process's view of the file system hierarchy. The root of a container is made up of the files packaged by the container author — so when you create your container, you're going to have a root file system, for instance. Then other mounts can be added to the mount namespace of the container for security and convenience. /proc, for instance, is mounted so that the container can see the processes running inside its own PID namespace: when you look at /proc inside a container, you only see the processes for that container, not processes outside of it. Things like /dev/pts/ptmx are mounted so that the container gets to spawn its own isolated pseudo-terminals. Some locations like /proc/kcore are masked over with /dev/null so that containers cannot access raw memory — you don't want containers to be able to get to any memory outside of their own namespace. And some locations such as /sys are made read-only, as sysfs isn't really completely virtualized yet. I've got an example here — sorry, it's kind of a dense slide — but you can see that that crazy long UUID from the parent, from the host that the container is running on, gets mounted as root. And then you can see that /proc is mounted.
And /dev — all of these are mounted, and they're very similar to what you would see in the host operating system, but they're isolated, so that inside the container you only see those file systems which are presented through the mount namespace. That's a security thing: we don't want the container to be able to see anything on the host operating system unless you mount a file system from the host into the container at run time. So when you do docker run -it, you can mount /var/www/html, for instance, or something like that. But the whole point is that you don't want the container to see the contents of the host operating system unless you explicitly allow it, and mount namespaces are how we do that abstraction.

Process ID namespaces, or PID namespaces, just isolate the PID numbers inside the container from the PID numbers on the host. For instance, I've got an example here where I do docker run -it fedora and just run the bash command. When I do a ps ax inside, the container thinks that bash is process ID number one. We all know that it's not really process ID number one, but the PID namespace takes that bash command and says, inside the container: yeah, yeah, you're number one, you're my only process — where the reality is that it's really process ID 18596 on the host. The whole point, again, is that we want the container to have no awareness of what's going on on the host, so that if something bad happens in the container, it can only affect those process IDs which are presented to that container.

User namespaces just map UIDs and GIDs, so that inside the container I can have a UID or a GID that appears to have root privileges — and in fact does have root privileges inside the container — but doesn't have root privileges outside, on the host.
So as an example here, you can actually add ranges if you want to. One of the benefits of user namespaces is that if I spin up 50 containers, inside each of those 50 containers there'll be a UID zero with root access, but user namespaces are what map that to the user that actually spawned the container. So again, it's isolated. One of the things Mrunal talked about when we presented this is layer sharing — the ability to share layers between containers. We're still doing some work on that; it still needs some work in the VFS, the virtual file system. He did an example where he used the OCI tools to generate UID mappings: he runs the runc command, which just runs a test command, and you can see that UID zero inside the container is mapped to UID 1000 — a regular user account — outside. That's how you can have root access inside your container but still not be able to compromise the host it's running on. And again, this is all with an eye toward security: we want to give you as many privileges inside the container as you need to do what you need to do, but we don't want you to be able to compromise the host.

Inter-process communication namespaces — same concept. It's just masking IPCs so that within the container, the table of inter-process communications you see appears to be global, but it's really not; it's just those within your container. So in this case, I run a bash instance in a Fedora container, I run ipcs, and according to what's going on inside the container, I don't have any inter-process communications mapped — whereas the truth is that on the host running that container, I've got zillions of them.
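That IPC demo looks roughly like this at the terminal (the prompt and empty tables are illustrative of what an isolated IPC namespace shows):

```
$ docker run -it fedora bash
bash-4.3# ipcs

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages

------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status

------ Semaphore Arrays --------
key        semid      owner      perms      nsems
```

Run the same `ipcs` on the host and you see the real, much longer global tables — the container's view is scoped to its own IPC namespace.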
So with IPC, again, we just want to make sure that we've isolated what's going on inside the container, so that it can't see what's going on on the host and we can't have any of that interaction.

Now, changing gears a little bit: Linux control groups. Essentially, control groups are just a mechanism for aggregating or partitioning sets of tasks, including their children, into hierarchical groups with specialized behavior. Basically, you can put system resources like CPU, memory, disk I/O, network I/O, et cetera, under a control group, and you can assign limits to it. The big benefit is that if something happens to a process inside a control group, because we have limits on that control group, even if somebody does something really fancy — you know, every year when we get our next group of graduates, they're like, "oh, I developed this cool attack called a fork bomb." Yeah, it's not very creative; people have done it before. But the cool thing is that if you run that within a control group, you might exhaust the resources for that one control group, but you're not going to take down the rest of the system. So even if a container is compromised — or even if you just have poorly written code, right? Because we're humans, we make mistakes — that misbehaving container should not be able to impact the host or other containers.

If I do a systemctl status docker.service, I'll get the control group and slice information. So this is what that looked like: I just do systemctl status docker, and I can see where it's running, and you can also see what control groups the master docker service is running in — including the fact that it's an SELinux-enabled control group — and then each docker instance after that will also get its own control group. You can navigate through the /sys/fs/cgroup pseudo directory to see what resources are allocated.
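Rather than digging through all of /sys/fs/cgroup, you can also ask the kernel which control groups any one process belongs to — a minimal sketch; inside a Docker container, these paths show the container's own scope:

```shell
# Each line is hierarchy-id:controller-list:path. On a host, a containerized
# process shows a path like /system.slice/docker-<id>.scope here.
cat /proc/self/cgroup
```

Swap `self` for any PID (e.g. the one from `systemctl status docker`) to see that process's cgroup placement.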
Now, there are over 8,500 entries, so it doesn't make sense for me to try to go into all of them, but like I said, you can go in there and see every container, every process. I did a find command and piped it to wc -l, and there are over 8,500 just on my little laptop — so you can imagine that it gets pretty crazy.

In this example, I did docker run -it, limited to only 100 megs of memory, on an instance of a Fedora container running bash. If I look at /proc/1/cgroup, I can see all of the various control groups and slices allocated to that container — you can see they're all the same scope. The cool thing is that if I look at memory.limit_in_bytes, for instance, I can see what my memory limit is: I've only got 100 megs of memory available to that container. In this case, I then basically ran a fork bomb inside my container. And essentially what happens is that after a very limited amount of time, that docker image dies and exits back out to a command prompt. So even though I did a fork bomb, thinking "ah, I'm gonna take down the whole system," what winds up happening — if you look in the log file — is that the out-of-memory killer gets invoked for that one container. So the nice thing is that we kill that container, and the rest of the operating system is unaffected. That's what control groups allow you to do.

[Audience] What if the memory limit is higher than the available resources? — I'm sorry? — What if the memory limit is higher than the available resources? — We will give you the gun. We will tell you where your foot is. What you do after that is up to you. Okay. That's interesting, I lost the image — there was a pretty picture there. We're gonna move on.

All right, changing gears a little bit: the docker daemon itself. The docker daemon is responsible for managing those control groups, orchestrating namespaces, and so on, so that you can run images.
Now, docker — because it's doing things like accessing network and storage — does run with root privileges, so be aware of that. There are some considerations for running docker. Only allow trusted users to run docker. Older docker documentation, like when I first started playing with docker, had you add a user to the docker group so you could spin up docker instances as a regular user. At Red Hat we don't allow regular users to run docker; you have to give them privileges either through sudo or some other mechanism. Only delegate the ability to run docker images to trusted users, and remember that they can mount the host file systems in their containers, potentially with root privileges. So just be aware of that. If you are using the REST API to manage docker, which is very, very common, make sure that you are using the latest versions of all the docker software. Don't have any vulnerabilities exposed — in other words, keep your systems up to date. And make sure that you have strong authentication, preferably allowing access only over a VPN, or using TLS/SSL, or something like that.

What is going on with my images? That's really weird. I promise this worked just a little while ago. Hmm, I don't know. So we're gonna talk about Linux kernel capabilities — I'm not sure why that text got all wonky. We're going to change gears a little bit and talk about Linux capabilities. Historically, the root user had the ability to do anything: once you have root privileges, you have complete access to the system. But Linux capabilities break root privileges into 38 distinct controls that can be enabled or disabled independently. This allows one to grant only those privileges required to do a particular job. The cool thing about Linux capabilities is that you can use them to take away privileges that the root user has, but you can also use them to grant privileges to a non-root user.
For instance, you could let a regular user bind to a privileged port — a port under 1024. So there are a lot of neat things you can do with Linux capabilities. The default list is a minimal list that works for most applications. There is a catch-all capability called CAP_SYS_ADMIN. With Docker, we drop that, because it really is a catch-all — it's like, "well, I'm not sure where this capability should fit in with the rest of root privileges, so I'm just gonna put it in CAP_SYS_ADMIN." Because it's a catch-all, we disable it for Docker containers. The best practice really is to audit and drop all the capabilities that aren't actually used by the container. So unless you explicitly have a need to bind to a port or something like that, don't allow it.

Yes, sir? [Audience asks how to determine which capabilities a container actually uses.] You can do it programmatically, or you can also just look at it from a common-sense perspective: take a look — what is my application doing? If I don't have a need to bind to a privileged port, or to manipulate file systems, or something like that, you can actually drop those privileges on the command line. Or — if you look, this is actually the default template for Docker, and it shows the privileges that are allowed, and also the namespaces that are allowed as well, so you can see this programmatically. Yes, sir? Oh — yep, finished. Okay. So you can see what privileges are enabled by default, and you can also use security tools: you can run strace against it, or use whatever auditing tools you're most familiar with, to see what calls are being made. And if you don't have a specific need for something, you can just drop it.
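A low-tech starting point for that audit is the capability bitmask the kernel already tracks for every process — root shows a full mask, an unprivileged user mostly zeros. A sketch, not a full audit (`capsh --decode`, where installed, turns the hex into capability names):

```shell
# Effective, permitted, and bounding capability sets of the current shell,
# as hex bitmasks straight from the kernel's per-process status file.
grep -E '^Cap(Eff|Prm|Bnd)' /proc/self/status
```

Run the same command inside a container (e.g. via `docker exec`) to see exactly which capabilities Docker left in the container's bounding set.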
[Audience] When you say it's a catch-all, does it include some of these other capabilities, or only capabilities that the others don't include? It is only those capabilities which don't fit into one of the other categories. — So if I grant someone CAP_SYS_ADMIN, it doesn't mean they can bind to a privileged port? — Right, right; at that point you would actually need to grant them the specific Linux capability for that — I think it's called net bind? — CAP_NET_ADMIN. — CAP_NET_ADMIN, yeah, yeah. Okay. So again, it's kind of a catch-all, and there's a ton of stuff in CAP_SYS_ADMIN that's just random stuff that didn't fit anywhere else.

So let's talk a little bit about Security-Enhanced Linux. That was Linux capabilities; SELinux is a mandatory access control system. Processes, files, memory, network interfaces, and so on all have SELinux labels, and there is a policy for how those labels can interact with each other. That policy is administratively set and fixed. The policy determines how processes can interact with files, with other processes, with network ports, and so on. And SELinux — at least for the context of this presentation — is primarily concerned with labeling and type enforcement.

Let's say we have a mythical service called the foo service. The executable on disk might have the label foo_exec_t, the foo executable type. The startup script might be foo_config_t. The log files might be foo_log_t. The data might have the label foo_data_t. When foo is running, the process in memory would probably have the label foo_t. Type enforcement is just the rules that say that when a process running in the foo_t context tries to access a file on the file system labeled foo_config_t or foo_data_t, that access is allowed. And that makes sense, right? You want the foo process to be able to read its config files and its data.
When the process in foo_t tries to write to a log file labeled foo_log_t, that's allowed as well; that's part of the SELinux policy. Any other access, though, unless explicitly allowed by policy, is denied. So if the foo process running in the foo_t context tries to access, for instance, the directory /home/tcameron, which has the user home directory type (user_home_dir_t) — even if the permissions are wide open, if that access isn't specifically allowed by policy, it will be denied.

SELinux labels are stored on the file system as extended attributes, or managed in memory by the kernel. The label format is: the SELinux user, the SELinux role, the SELinux type, and then the multi-level and multi-category security (MLS/MCS) fields. So for this foo service, the full label of the running process might be user_u:object_r:foo_t, with no additional MLS or MCS labels. Now, the default policy for SELinux is the targeted policy. We don't really use the SELinux user or role, so we can ignore those. We do use MLS and MCS for things like OpenShift containers, which I'll talk about in a little while, but we really only care about the type — remember, this is about type enforcement — and the MCS label. Think of MCS as just extra identifiers. With SELinux for containers, we can be super granular about which processes can access which other processes. So even though two labels might look identical — user_u on both, object_r on both (we don't really care about those), foo_t on both — because there's a difference in the MCS label, the multi-category security label, they are, according to SELinux, totally different.
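You can see that user:role:type:MCS label format on any SELinux system — here with the real httpd service standing in for the mythical foo (the PID shown is illustrative):

```
$ ls -Z /usr/sbin/httpd
system_u:object_r:httpd_exec_t:s0 /usr/sbin/httpd

$ ps -eZ | grep httpd
system_u:system_r:httpd_t:s0     1234 ?   00:00:00 httpd
```

Note the pattern described above: httpd_exec_t on the executable at rest, httpd_t on the running process, and the `s0` sensitivity where container processes would additionally carry MCS categories like `c1,c2`.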
So if I have something running with c0, and somebody compromises it and tries to attack the next container over, running with c1 or c2 or c3, SELinux will not allow that access, because according to SELinux those are totally different. Type enforcement just says that a process with the first label is different from a process with the second label, so policy prevents them from interacting. There's no policy allowing a process running with those labels to access the file system unless it is explicitly labeled foo_config_t or foo_content_t or another defined label. So neither of those processes would be able to access, say, /etc/shadow, or the home directories, or anything like that, because the SELinux labels are so different.

On a standalone system running Docker, all of the containers run in the same context by default. In OpenShift, though, every container gets spun up with a separate context: you'd have the OpenShift type with c0, c1, c2, c3, c4, c5, and so on. So even if somebody were able to gain access to the Docker container process on the host, SELinux would prevent them from attacking other containers or the host itself.

In the following example, I'll show you what this looks like. I emulate somebody who exploits a container: I use runcon, which changes what context I'm running in, to set my context to that of an OpenShift container. I attempt to access the /etc/shadow file, try to write to the file system, try to read from a home directory — and you'll see that even though I am running as root, I get blocked. So here I'm running as root, right? I run the id command and I can see that I'm running unconfined. Now I use the runcon command to change to the unconfined_u user, the system_r role, and the OpenShift type — remember that SELinux is about type enforcement and labeling — and I run the bash command.
And the interesting thing is, as soon as I run that command, I get an error message saying: whoop, I can't read the contents of the .bashrc file. If I try to cat /etc/shadow, for instance, even though I'm still running as root, I don't have access to it. I try to create a test file in the root of the file system: permission denied. Even though I'm running as root, because I'm running in the OpenShift context, when I try to access /home/tcameron it doesn't allow me to, because my SELinux context has changed. And if I say, "well, that's okay, I'm just gonna turn off SELinux" and try to run setenforce 0, that fails as well. So this is an example of how changing to the context of an OpenShift container — or any container — is still not going to give me access to the file system.

All right, so let's talk about seccomp. Seccomp is syscall filtering. You can match system calls, or even their arguments for more specific matches, with actions like kill, errno, allow, trap, trace, et cetera. You can use either whitelists or blacklists, and Docker uses a whitelist by default. So you can disable system calls like kexec_load, init_module, finit_module, delete_module, ioperm, swapon, et cetera, and disable 32-bit syscalls. In this example — one of the ones Mrunal did — he created a JSON file that blocks the getcwd system call. Which is kind of silly; you would never really do that in the real world, because that's a really important system call. But it's a great example: he runs docker run -it with a security option pointing at that seccomp JSON file, and when he runs busybox with an sh command — just give me a shell — it says: nope, operation not permitted. So I can't see what directory I'm in. He does pwd, and the operation is not allowed.
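A sketch of that profile and run (field names per the Docker seccomp profile format of that era; the default action allows everything and only getcwd is denied — a real profile would whitelist instead):

```
$ cat seccomp.json
{
    "defaultAction": "SCMP_ACT_ALLOW",
    "syscalls": [
        { "name": "getcwd", "action": "SCMP_ACT_ERRNO" }
    ]
}

$ docker run -it --security-opt seccomp=seccomp.json busybox sh
/ # pwd
sh: getcwd: Operation not permitted
```

The same `--security-opt seccomp=` flag takes any profile, so the real-world use is to deny the dangerous module-loading and raw-I/O syscalls listed above.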
So it's a silly example, but it's a good one for showing how you can take even core system calls, blacklist them through a file, and — if you make that part of the command line that launches your containers — block those system calls.

All right, let's talk about some tips and tricks: some of the things you want to do, and some of the things you don't want to do. Remember that containers, at the end of the day, are just processes running on your host, so use common sense when you're running those processes. Make sure that you have a process in place to update your containers, and follow that process. Don't just download some container from Docker Hub, run it in your environment, and never update it. You have to have a process in place for running updates. Always run your containers with the lowest possible privilege. Drop unused capabilities as soon as you can. When you mount file systems from the host, mount them read-only if you can; that way you don't grant any ability to write information back to the host. Treat root inside the container just like you would treat root on the host. Run your containers as non-root if possible — so if you run, for instance, a MySQL container, run it as a MySQL user. This is actually enforced on OpenShift: if you're using our PaaS offering, either the commercial version, OpenShift Enterprise, or the upstream open source version, OpenShift Origin, we do enforce that. Have a mechanism in place to watch your log files, because if something weird is happening in your environment, it should get logged, and you can react to that and make changes. And enable no-new-privileges for your container if possible — you'd run docker run -it --security-opt no-new-privileges fedora bash. That way, even if somebody does manage to try something bad inside the container, we're gonna block them. Don't just download Bill and Ted's excellent container from the internet.
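Several of those tips fit on one command line — read-only root file system, a read-only bind mount, no new privileges, and unused capabilities dropped. A sketch using Docker 1.11-era flags; the kept capability here is just an example, so adjust it to what your application actually needs:

```
$ docker run -it \
    --read-only \
    -v /var/www/html:/var/www/html:ro \
    --security-opt no-new-privileges \
    --cap-drop=ALL --cap-add=NET_BIND_SERVICE \
    fedora bash
```

Starting from `--cap-drop=ALL` and adding back only what the audit showed is the lowest-privilege posture described above.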
Try to use trusted sources — whether it's the Docker repository or elsewhere, look at the history of your Docker image and make sure it's something that's being updated and kept current. Don't just download any old container. Don't run SSH inside the container — it kind of defeats the purpose of containers; you want to build your containers and push them out, not log into them and do updates. Don't run with privileges; in other words, drop privileges as quickly as possible. And don't disable SELinux, please. If you don't understand how SELinux works, and the discussion we had today is not enough, I did a video at Red Hat Summit called "SELinux for Mere Mortals" — it's on YouTube, go watch it, it's only about 45 minutes, and it's a pretty good explanation of how SELinux works. If you're not comfortable with SELinux, turning it off is not the right answer.

Yes, sir? [Audience] What kind of interactions are there when someone runs a Debian or Ubuntu container, which doesn't use SELinux, on an SELinux system, so they don't have contexts on the files in the container? So, inside the container, SELinux won't be enabled. That's bad — it's not a showstopper, but it's definitely not good. But at least that container, running on a Fedora host or a RHEL host or a CentOS host, will be protected by Security-Enhanced Linux. So even if somebody does compromise that Debian container, they won't be able to attack other containers on the host. I recommend — and this is not a dig on Debian or a dig on Ubuntu or anything like that — that you use containers that do understand SELinux, so that at least the file system labeling and things like that are happening within the container. [Audience] Are there any conflicts, though, if they're using something like AppArmor? There are no conflicts. I don't know if AppArmor would actually run inside the container; I'm just not sure. I'm gonna try it. [Audience] It's the same kernel, so yeah —
If it's SE Linux on the host, could it be SE Linux inside too? Yeah, I don't know, I haven't tested that. I haven't tested it, but my recommendation is, use SE Linux aware containers. It's not always practical, but at least if you're running a container that doesn't support SE Linux, at least that container is still protected by the host. Let's see. Don't roll your own containers once and then never maintain them. I see that a lot. Some developer somewhere gets asked, hey, will you create something for me? The developer creates it, hands it over to the ops team, and then the developer is on to the next project. Don't do that. Have a life cycle management process in place, and don't run production containers on unsupported platforms. The Wild Wild West is not the place to do production. All right, so in conclusion, we're right at the end: go forth and contain. Containers are incredibly cool technology. They make deployment really, really easy. If anyone sat through Adam's session in the previous hour, where he talked about how to build layered containers, there's some amazingly cool technology going on around the building and deployment of containers. Containers do leverage some incredibly cool capabilities within the Linux kernel, and by design, they are relatively secure. Nothing's perfect, but if you follow just kind of common sense rules of software life cycle management, containers can be very, very secure. They can make your business, your organization, your community more agile and potentially less complex. And if done right, they can be very, very safe. So with that, any questions? Yes, sir? A question about SE Linux inside containers: my understanding was that SE Linux on the host labels only the processes which belong to the container, but it doesn't care about what happens inside the container, so it's like, what happens in Vegas stays in Vegas. Yes, so right now, SE Linux really only cares about the process running on the host. 
So that container process running on the host is confined within SE Linux, and we have rules around it. I've actually run a bunch of Fedora containers, and I've found that SE Linux inside of the container is turned off. I'm not sure what the status of having SE Linux inside the container turned on is yet. I actually meant to go bug Dan about that while I'm here. Do you know a time frame or roadmap? We're not gonna do that. It's not gonna happen? It's just not gonna happen. There's a lot of, I mean, we can sit down, but to keep it simple for the question, there's technical reasons. We're not gonna namespace SE Linux. Really? Okay, so there's your answer. Okay, thank you. So I had hoped that it was a not yet, but apparently it's a no. So speaking of SE Linux, we run a service that we don't want to be able to send emails, because if it gets compromised, it's gonna send spam. Is there an easy way to do that with Docker, so that this container can only receive connections but cannot contact anyone outside, or do I have to do that by hand with iptables rules? Right, so the question is, is there a way to secure containers so that they can accept connections but cannot create new ones. Oh, okay, so they can't do outbound? Yeah. So you can do that a number of ways. iptables, for me, would probably be the easiest way. Yeah, but you can have SE Linux policy. You could have SE Linux policy that would block outbound. I think that would probably be a lot more work than just an iptables rule set. I'm not even sure how I would do that. You could. Yeah, I would use iptables. So, like where you are blacklisting the system calls, is it possible inside the container to remap the system calls? So you think that we are using this system call, but we are actually using a different one. So it's just syscall mapping, basically? Like an abstraction? Yeah, exactly, like the user mapping. 
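To illustrate the iptables approach suggested for that question, here's a minimal sketch of host rules that would let container traffic on the default docker0 subnet answer established connections but not open new outbound ones. The subnet is the Docker default and is an assumption about your setup; the rules are printed rather than applied, since applying them needs root:

```shell
#!/bin/sh
# Sketch: allow a container to accept inbound connections but not
# initiate outbound ones. 172.17.0.0/16 is Docker's default bridge
# subnet (an assumption); rules are printed, not applied.
rules="iptables -A FORWARD -s 172.17.0.0/16 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 172.17.0.0/16 -m state --state NEW -j DROP"
echo "$rules"
```

Replies that belong to a connection someone opened into the container match ESTABLISHED and are accepted; any connection the container itself tries to originate is in state NEW and gets dropped.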
Like what? I'm sure that it can be done. I don't know how to do it. So, seccomp. Yeah. And there's certain things, like the big ones are the socket calls. Yeah. Right, they're multiplexed, and libseccomp takes care of that management for you on the right platforms. And in fact, if you follow libseccomp, there's the recent change that went into, I think it was 4.3, where Andy hooked up all the socket syscalls so that they can be called both direct wired and through the socketcall syscall. So there's about 87 different ways on 32-bit x86 you can call socket, and libseccomp will just take care of those for you. It should; if it doesn't, it's a bug. Yeah, file a bug if not. All right, was this helpful? Good, okay, any other questions? Why the whitelist rather than the blacklist? So I don't know the answer to that. I've heard a couple of different versions. Do you know why we chose whitelist by default instead of blacklist by default? You can actually do both, and it's fine; this just came up maybe a couple of weeks ago too. Like QEMU, I don't know what the status of that was, but they used syscall filtering also. And there was some talk about doing an initial blacklist for setup, because QEMU has a lot of stuff to do to get the VM right, and then right before you actually say go to the VM, you know, you tighten that down a little bit further. You could do something very similar with containers. The thought being that, for example, let's say there's a kernel vulnerability found in a syscall with these particular arguments, not something you're going to call in a typical pattern. You could have your normal list, and then you could quickly throw in a specific, you know, a separate additional syscall filter to isolate it and say, okay, I don't want you to call that syscall with these specific arguments. Yeah, so I'm not sure where that stands, but there were some discussions around that. 
I'm not too involved with QEMU, but we'll see where that goes. Yeah, the explanation that I heard, but this was kind of third hand, was if you blacklist by default, then you have to explicitly go through and allow everything that you want to have happen. If you whitelist by default, then you really are just going back and blocking the things you don't want. And I think that the nod was made to usability before security, and there are arguments for both sides. It's kind of like, why don't you use the strict SE Linux policy by default? Well, because it's a pain in the ass, and it's really hard to go through and open everything up that needs to be opened up to make your system usable. I get the impression that it was, we're gonna choose whitelist by default and only turn off those functions that we don't want, because if you do blacklist by default and you forget to turn something on, the user experience is gonna be terrible. Now that's third hand, don't take that as gospel. I think that's a pretty good argument. I think it's a fair argument. I mean, there's always the balance, right? The more security goes up, the more usability goes down. So that's, again, that's kind of what I heard in a side conversation, so I think that's the case. All right, guys, we got seven minutes left. If anybody has any other questions, otherwise, go get some coffee, have a cigarette break, whatever. Thank you.
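To make the whitelist-versus-blacklist distinction concrete, here is an abridged sketch of a whitelist-style seccomp profile in the JSON format Docker accepted around this time. The syscall list is purely illustrative and far shorter than a real profile would need; the default action rejects everything, and only the listed syscalls are allowed:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    { "name": "read", "action": "SCMP_ACT_ALLOW", "args": [] },
    { "name": "write", "action": "SCMP_ACT_ALLOW", "args": [] },
    { "name": "open", "action": "SCMP_ACT_ALLOW", "args": [] },
    { "name": "close", "action": "SCMP_ACT_ALLOW", "args": [] },
    { "name": "exit_group", "action": "SCMP_ACT_ALLOW", "args": [] }
  ]
}
```

A blacklist-style profile just flips the logic: defaultAction becomes SCMP_ACT_ALLOW, and the dangerous syscalls are listed with SCMP_ACT_ERRNO. Either way, you'd hand the file to Docker with --security-opt seccomp=profile.json.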