All right, so it looks like it's 2:50 on my clock. Welcome, everybody. My name is Thomas Cameron, and I am the Global Solutions Architect Leader at Red Hat. Today, we're gonna be talking a little bit about containers and security. This is an introductory session. We've got 40 minutes together, and so we're not gonna be able to dive terribly deeply into containers and security, but what I really wanna do is talk a little bit about who I am, a little bit about what Red Hat has been doing with containers, containers in general in the industry, what are they, how do they work, what containers are not, and then talk about the components that make up container security, including kernel namespaces, control groups, the Docker daemon and how it works and how to secure it, Linux kernel capabilities, SELinux, one of my favorite topics, and some tips and tricks and some general conclusions. So to start off with, who am I? I'm Thomas Cameron. As I said, I'm the Global Solutions Architect Leader at Red Hat. I've been doing this since about 1993. I have sort of a cool interest in security because I actually started out my adult life as a police officer. I was a corrections officer when I was a teenager, I was a police officer when I was 21, and when I was 24, I went, holy crap, I can't afford to do this anymore. And so I changed careers into IT, and I've been in IT ever since. I have been with Red Hat since 2005, been in IT since '93. Got all kinds of Red Hat certifications. Before that, I started out like most folks in IT: I started out in Novell NetWare. Yeah, I'm kind of dating myself. Then I became a Microsoft guy and got Microsoft certified, and fell in love with Linux back in about 1995 and have been doing Linux ever since. I have spent a lot of time working on security in organizations like banks and manufacturing facilities and e-commerce companies, things like that.
I certainly have learned, the longer I've been in IT, that I don't know everything, but I've certainly got some impressive scars. Generally though, just a big old nerd working in IT. So let's talk a little bit about where I've come from at Red Hat with containers. We've actually been working with container technology since before 2010. A lot of folks don't realize that. We bought a company called Makara back in 2010 because we saw that we needed to have a platform as a service offering. The Makara acquisition was eventually rebranded to OpenShift, which is our container offering today, our PaaS offering today. We started doing containers, except we called them cartridges, using SELinux, control groups, and kernel namespaces, which should sound a little bit familiar if you're working with containers today. In about 2013, though, Docker really started doing some amazing work. And in the true spirit of meritocracy and open source, we kind of realized that, holy cow, this Docker thing has really taken off. We had been doing some contribution to it. We really ratcheted up our contributions to Docker. And last time I checked, and I haven't checked in probably a month or so, we were the number two contributor behind Docker to the upstream Docker project. And industry adoption of Docker is phenomenal. Docker's been through multiple successful venture capital rounds. Apcera, Cisco, EMC, et cetera, including us, have all invested in and worked on standardization of containers with Docker. Even Microsoft has announced that they will support Docker containers. So, what are containers? At a very high level, containerization, specifically Docker, is a technology which allows for applications like web services or database services or application services to be run abstracted from, and in some ways isolated from, the underlying operating system. So for instance, the Docker service can launch containers regardless of the underlying Linux distro, which is very cool.
We've had the promise of software abstraction, write once, run anywhere, for a long time. And it's kind of worked, but with containers, we're getting a whole lot closer, I think. Containers can enable incredible application density, since you don't have the overhead of a full OS like you do with virtualization. And Linux control groups allow for really, really impressive utilization of the system. Control groups are not only, and I'll talk more about them in a little while, control groups are not only about stopping a process from taking over the system, but control groups are also about carving the system up into little bite-sized pieces to get the best utilization possible. And the same container can run on different versions of Linux. Ubuntu can run on Fedora. CentOS can run on RHEL. Human sacrifice, dogs and cats living together, mass hysteria, or at least really cool things for developers to do to roll their applications out. So what are containers not? Now, containers are not a panacea. They are not the cure for all that ails ya. And they are certainly not, yet anyway, a fit for every application. We see folks that are like, oh, we should run that in a container. Sometimes it makes sense, sometimes it doesn't. If you are beholden to third-party ISVs, for instance, if you're running big enterprise databases, or ERP applications, or something like that, those vendors may not yet, and probably won't yet, support those in containers. So like I said, it's not necessarily a panacea for everything that you wanna do. Containers are not virtualization. You can certainly run containers in virtualized environments, just as you can run containers on bare metal machines, which I do all the time. So I do get questions periodically where you can kinda tell by the way that someone's asking the question that they're kinda thinking in terms of virtualization, and that's really not what it is. So let's talk a little bit about container security.
Containers use several mechanisms for security. And it's a layered approach. It's kind of the old onion idea. You've got multiple layers of security, multiple ways of keeping bad guys out of it, and let's face it, at the end of the day what we really want is for the bad guy to go, this is too hard to target, I'm gonna go next door. So: Linux kernel namespaces; Linux control groups, or cgroups; the Docker daemon itself, which has security built into it; Linux kernel capabilities, libcap, which has the ability to limit activities or limit privileges that root processes can run; and then other security mechanisms like AppArmor or SELinux. I know SELinux, so that's what I will talk about. So let's talk about kernel namespaces. I've had a lot of conversations with folks where you have sort of the stock conversation about, well, how do we secure it? You talk about kernel namespaces, and you've got mount namespaces and PID namespaces and user namespaces, and you get a kind of blank nod and people are like, yes, but what does that mean? And so we'll talk about that and I'll show you some examples of what these mean. So namespaces are just a way to make a global resource appear to be unique and isolated, and the namespaces the Linux kernel manages are mount namespaces, PID namespaces, UTS, IPC, network, and user namespaces. And let's talk about how those look. So with mount namespaces, what this allows a container to do is, the container will think that a directory that it has access to, which is actually mounted from the host OS, is the exclusive domain of the container. So for instance, when you start a container with -v, the path on the host, and then the path inside of the container, and optionally a read-write or read-only argument, you can mount a directory that exists on the host within the container. The container sees that directory in its own mount namespace and doesn't know that it's actually on the host.
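A sketch of what that -v invocation looks like on the command line (the paths and image name here are just examples):

```shell
# Bind-mount a host directory into a container, read-only.
# General form:  docker run -v <host-path>:<container-path>:<ro|rw> ...
docker run -it \
    -v /var/www/html:/var/www/html:ro \
    fedora bash

# Inside the container, /var/www/html looks like an ordinary local
# directory in the container's own mount namespace; with :ro, any
# attempt to write to it will fail.
```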
So the cool thing about that is, if you have any sort of shared resources that you want multiple containers to have access to, especially if you wanna make sure that that content is not gonna be modified by the containers and it's gonna be identical across all the containers, instead of trying to copy it a whole bunch of times, you just mount it one time, or you make it available to the containers so that it's always the same in every container. And as an example, what I've done here is, on the host, I cat /var/www/html/index.html. Yep, that's my silly web page. I use docker run -it with the -v argument, and I say take that file system on the host, put it there in the container, run an instance of fedora, and my executable is bash. So now you can see that my prompt changes from my user prompt to a root prompt, so I'm now inside of that container. And if I cat that /var/www/html/index.html file, I see the same content inside of that container. Now the cool thing is, this is just inside of one container. If I spin up 100 containers, they're gonna see the same thing. And depending on how I mount that, whether I mount it read-write or read-only, and you should do it read-only, then the content of that file is going to be immutable, and within the container, you're not gonna be able to damage that content. So you can make content available to containers and make it read-only so that the person operating the container can't do anything about it. And because we're on kind of a tight schedule, I'm gonna move a little bit quickly. So the next namespace that I wanna talk about is process ID namespaces. So PID namespaces really just let the container think that it's its own contained instance of an OS. It's its own instantiated operating system. So when you start a container on a host, it's gonna get a new process ID. PID namespaces enable the container to see the PIDs inside of the container as if they were unique, as if it was a new instantiation of the operating system.
So in the following example, I launch a Fedora container running bash and I run the ps ax command. And what that looks like is, when I run docker run -it fedora with the executable bash and I run ps, it thinks that that bash instance is process ID one. It's the first process. It's a self-contained operating system. But then if I open up another console on my host and do a ps, and actually, there's a typo on here, I apologize for that. I should have gone further up into the Docker process because I did, where'd I do it? Where'd I do it? I accidentally copied that process ID number and I shouldn't have. The bash command that we're talking about is actually that one right there. So when I show you that, that bash instance on the host, that's actually process ID 18596. That's just that isolation, or that abstraction, of those process IDs within kernel namespaces. So I also want to talk about user namespaces. When you start a container, assuming you've added your user to the docker group or however you've done it, it's started as your user account. So I start the container in this example as my user, tcameron. But immediately, once it's started, inside of that container I'm user ID zero because of that namespacing. So you can see I run the id command and I am tcameron. I run docker run -it fedora with the executable bash, but my ID changes to UID zero. Now, am I root on the entire system? No, I am only root inside of my container. I still need to pay attention and do smart things, because I am root inside of my container and I can do all the silliness that I want to. But this is an example of that UID namespacing, where a non-privileged user, not through su or anything like that, but just through the privileges that are granted from within the container, can have elevated privileges inside of the container through that UID namespacing.
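Rendered as commands, the PID demo looks roughly like this (the container hostname is made up; PID 18596 is the one from his demo):

```shell
# Inside the container, bash believes it is PID 1:
docker run -it fedora bash
#   [root@3f2a9c /]# ps ax
#     PID TTY      STAT   COMMAND
#       1 ?        Ss     bash

# Meanwhile, from another console on the host, that very same bash
# process shows up under an ordinary host PID:
ps ax | grep bash
#   18596 pts/1    Ss+    bash
```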
Now networking namespacing, or network namespaces, is a pretty cool capability where basically it allows the container to think it's got its own IP address. It's not gonna be in the network range of whatever your physical interface is. In fact, the Docker service will set up an iptables masquerading rule to make sure the container can get out to the rest of the internet. And in this example, let me just go to the page, I use docker inspect to take a look at the network settings IP address within the container that I've fired up, and I get this address, 172.17.0.7. When I do ip addr show on my interface, my interface isn't even connected to a network, so I don't have an IP address at all. That address, that network namespace address, gets set up, and then the Docker daemon is smart enough to set up iptables masquerading so that inside of my Docker container, I can get out to the network and pull things down and stuff like that. It doesn't necessarily, depending on how you set it up, it doesn't necessarily grant access to that container from the outside directly, because we are doing masquerading, but it's going to segregate that container off from the rest of the network stack on the host and make it safer. Interprocess communication namespacing, or IPC namespaces: same thing with interprocess communications. Essentially, it just abstracts them out so that within a container, for instance, I can run ipcs and see nothing. Inside of the container, the thing thinks I'm my own operating system, I don't have any IPC stuff running or any processes running or anything, really. It's not doing anything. But if I go to another console, see, right there, I'm root inside of my container, but if I go to my main console on the host and I run ipcs, I've actually got page after page after page of interprocess communication mappings going there.
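The two checks he's describing look roughly like this (the container name web1 is hypothetical; 172.17.0.7 is the address from his demo):

```shell
# Ask the daemon what address the container got in its network namespace:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' web1
#   172.17.0.7

# The host's own interface doesn't carry that address; outbound traffic
# from the container is NAT'd through the masquerading rule Docker added:
iptables -t nat -L POSTROUTING -n
```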
So again, all this kernel namespacing is doing is isolating the container so it thinks it's its own unique little world and it doesn't know about anything that's going on in the host. All right, UTS, or Unix Timesharing System, namespaces allow the container to think it's its own separate OS with its own host name, its own domain name, and so on. So if I look on the host and I run the hostname command, that is my actual fully qualified domain name for my laptop, t540p.tc.redhat.com, but if I fire up a container, you notice that it changes. I've got the root prompt there, and when I run hostname in there, it thinks that it's this randomly generated, you guys have all heard the pets versus cattle analogy, this is a cattle name, right? It's almost just a serial number for the host name for that container. It thinks it's its own instance, its own network name, and it's segregated from the activities of the host. All right, so that is kernel namespaces. Again, the whole point of kernel namespaces is really just to let the container think it's doing its own thing, segregated from the rest of the networking and the rest of the capabilities of the host, so that if anybody does anything bad to it, it's isolated within that container. Now, control groups allow for some really cool, fine-grained control over resource utilization on the host. So, there's a ton of really good documentation about control groups in the kernel's cgroups.txt file, and really, all it allows you to do is to aggregate and partition sets of tasks and their future children into hierarchical groups with specialized behavior. This allows us to put various system resources into control groups and apply limits to them, like how much disk I/O, how much network I/O, how much memory you can use, how much CPU you can use, so you can have some really fine-grained control over what's going on.
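A sketch of what those per-container limits look like from the docker command line (the values here are arbitrary examples):

```shell
# Standard docker run resource flags map onto cgroup controllers:
#   --memory        hard memory ceiling (memory cgroup)
#   --cpu-shares    relative CPU weight, default 1024 (cpu cgroup)
#   --blkio-weight  relative block-I/O weight, 10..1000 (blkio cgroup)
docker run -it --memory=256m --cpu-shares=512 --blkio-weight=300 fedora bash
```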
In the case of the Red Hat Atomic Host, for instance, which is our specialty container server, we actually set up control groups that are very fine-grained for every set of containers that gets kicked off. And basically what that does is it ensures that even if an individual container is compromised, whoever takes over that container, or even if somebody just has a poorly written app and their container spins out of control, you've got some Java job or some craziness going on, or somebody does a fork bomb, because someone always has to do that to prove something, does something inside of a container, we're gonna limit it to just taking down that container. So, for instance, when I run the command systemctl status docker.service, I get the control group and slice information. So, you can see, when I run, again, systemctl status docker, there's my control group, and it is in its own control group, including the SELinux context, which we'll talk about in a little while. So, even if a bad guy does something silly to it, you can apply rules to that control group that say no more than 10% CPU, or no more than 10% network, or 5% network, or whatever. You can look through the /sys/fs/cgroup pseudo-directory to see what resources are allocated to your containers. Now, there are over 8,500 entries in that directory, just on my little laptop. So, it's not practical to be able to dive into the depths of it, but essentially, you can look inside of there and get information about, again, memory, CPU, block device I/O, network I/O, and so on in that environment. I just showed that when you go to the /sys/fs/cgroup directory and do a find . | wc -l, there's like 8,500, almost 8,600 entries in there. All right, so, we've talked about kernel namespaces, we've talked about cgroups, let's talk about the features of the Docker daemon itself.
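Poking at that pseudo-directory yourself looks roughly like this (the container ID in the path is a placeholder):

```shell
# Count the cgroup pseudo-files on the box:
find /sys/fs/cgroup | wc -l

# On a systemd host, each container gets its own scope; for example, a
# container's memory ceiling lives in a file like this (ID elided):
cat /sys/fs/cgroup/memory/system.slice/docker-<id>.scope/memory.limit_in_bytes
```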
So, the Docker daemon, /usr/bin/docker, is responsible for managing the control groups, orchestrating namespaces, and so on, so that these Docker instances can be spun up and secured. Because of the need to manage kernel functions, Docker runs with root privileges, so be aware of that. Now, at Red Hat, we don't enable the ability to do things like, if you read the Docker documentation, they say to create a docker group and add your user to the docker group, and once your user's a member of the docker group, they can run docker from the command line. We actually don't enable that. We would rather see you actually run things as root and let the Docker daemon drop privileges. So, there are some considerations for running Docker. Only allow trusted users to run Docker. I recommend that, so let me back up. The Docker documentation recommends that you add users to the docker group so they can run the docker commands. That's fine, but just be aware that there are some risks associated with that. Make sure you only delegate that ability to trusted users, and remember that they can do things like mount the host file systems and potentially do bad things to you. So again, we recommend that you actually only grant privileges to the users that you want to by using a mechanism like sudo, for instance, so you can edit your /etc/sudoers file and say that they're only gonna have access to certain docker commands and things like that. If you're using the REST API to manage your hosts, make sure that you don't have any vulnerabilities exposed. In other words, keep your systems up to date. It's kind of common sense, right? But a lot of times we get something working just the way that we want it to and then we're like, don't touch it. So, don't do that. Make sure you keep your systems up to date, and make sure that you're using strong authentication if you're doing things over REST. If you're going to use the REST API over HTTP, please make sure that you're using SSL or TLS.
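One way that sudo delegation might look; the group name, file name, and command list here are all hypothetical, just a sketch of the idea:

```
# /etc/sudoers.d/docker-ops  (edit with visudo)
# Members of the dockerops group may run only these docker subcommands as root:
%dockerops ALL=(root) /usr/bin/docker ps, /usr/bin/docker images, /usr/bin/docker logs *
```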
Don't expose it except on secured networks, or maybe over VPNs. So, Linux kernel capabilities, or libcap. Really cool technology which allows you to essentially set limits on root privileges. Historically, the root user has had the ability to do anything, anywhere, to anyone, right? Once you're authenticated: I am root, bow before me. But Linux capabilities is a set of fine-grained controls which allow a service, or even users with root equivalence, to be limited in their scope. So even root users can be cut down on what they're able to do, but you can also use Linux capabilities, or libcap, to grant regular users the ability to have elevated privileges without having to do su or anything like that. For instance, a user could be granted the NET_BIND_SERVICE capability, and they could bind a service to a privileged port, in other words, a port below 1024, even if they're not running as the root user. Now, in containers, a lot of the capabilities to manage network and other services are not actually needed. For instance, SSH services, cron services, file system mounts, things like that, really not needed from within the container, typically. And SSH, you never need to run SSH in your container. Don't use SSH in your container. It's dangerous, it's silly, it's gonna get out of date and people are gonna do bad things to you. So, a lot of these things are not needed. By default, Docker does disallow a lot of root capabilities, including the ability to modify logs, change networking, modify kernel memory, and the catch-all CAP_SYS_ADMIN, which I'll show you some more about in a little while. So, if you look at, and I'm sorry, this is kind of an eye chart for those of you in the back, this is the libcap page on GitHub, and man, that came through horribly badly, I'm sorry. It looks really good on my screen, but basically, I'm not gonna go into it, but this is a table of all of the capabilities, all of the root capabilities, which can be managed by libcap.
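If you want to be explicit about it rather than relying on the defaults, docker lets you drop and add capabilities per container (the image name here is made up):

```shell
# Drop every capability, then add back only the one this service needs
# to bind a privileged port (below 1024) without running as root:
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE mywebimage
```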
So, I've talked about a lot of them: networking capabilities, changing kernel memory, and stuff like that. But if you look through the libcap GitHub page, it's actually really informative. It's really cool to go through and see all of the things that you can limit, and these are actually the Docker filters which use libcap, so you can go through and see all of those by looking at that page. And again, you guys will get this presentation at the end, so you're welcome to go follow that link. All right. So, one of my favorite topics. I actually present about SELinux on a pretty regular basis, I really enjoy it. I've got a YouTube video called SELinux for Mere Mortals. If you don't like SELinux, that's cool. Go watch, give me 50 minutes, go watch SELinux for Mere Mortals on YouTube, and I think I'll probably change your mind, or at least I'll make it so you don't hate SELinux quite as much. SELinux is a mandatory access control system. Processes, files, memory, network interfaces, and so on, all have labels that are maintained by the kernel or as extended attributes on the file system. Everything's labeled, and there's a policy which is administratively set and fixed. So, the policy is gonna determine how processes can interact with files on the file system, how processes can interact with each other, network ports, and things like that. It's a really cool technology. It can be a little bit complicated, but the thing about SELinux that I tell folks is, SELinux is really only about two things: labels and type enforcement. So, for instance, if I have the mythical service foo, the foo service, the executable on disk might have the SELinux label foo_exec_t. The startup scripts might be foo_config_t. The log files, foo_log_t. The data may be foo_data_t, right? It's actually fairly intuitive. When you're doing SELinux, it's all about labeling, right?
When the foo process is running, it may have the label in memory foo_t. So, that's labeling. Type enforcement is just the rule that says, if I explicitly allow the foo_exec_t type, for instance, to access the foo_config_t files, then when the foo service starts up, it can read its config files. Then when I set a policy, it says, oh yeah, the foo_exec_t type can also write to foo_log_t files. Again, that's fairly intuitive, right? You want your process to be able to write to its log files. But type enforcement says, unless I've explicitly allowed it, I'm going to deny it. So, for instance, any other access, unless explicitly allowed by the policy, is denied. So, as an example, if the foo process running in the foo_t context tries to access, for instance, the directory /home/tcameron, which has a label of user_home_dir_t, even if the permissions are wide open, even if I have done chmod 777 on my home directory, right? We'll give you the gun and point you to your foot. We'll tell you how to do it. But SELinux will step in and go, no. Unless it's explicitly been allowed, I'm going to deny it. So, SELinux is really cool. It can save your bacon in the event of misconfigurations. I've seen it happen. When I talk about type enforcement and labeling, the labels are stored in the format of the SELinux user, the SELinux role, the SELinux type, and then, optionally, the MLS and MCS labels. So, for that mythical foo service, the full syntax for the label of the running process might be user_u:object_r:foo_t, and then we can have the MLS and MCS labels, s0 and c0. Now, when we're talking about SELinux, the default policy for SELinux is the targeted policy. In the targeted policy, we really actually don't care about the SELinux user or the SELinux role. We really care about the type label, because remember, it's all about labeling and type enforcement.
So, we can also ignore the MLS, or multi-level security, labels, since those are really only used in the MLS policy, which is usually only used in, like, Department of Defense or CIA or places like that. We really only care about the type and the MCS label. So, think of the MCS labels really as just extra identifiers, and the reason that's important is we can use this in containerized environments to provide very fine-grained control between containers. So, for instance, these are totally different labels. I've got user_u:object_r:foo_t:s0:c0, and then down here, I've got s0:c1. Even though those are identical except for just that MCS label, from an SELinux perspective, they may as well be as different as black and white or whatever. Type enforcement says that the process with that first label is different from the second one, so policy would prevent the two of those from interacting. Also, there's no policy allowing the processes running with those labels to interact with the file system unless it's labeled foo_config_t or foo_content_t or another predefined label. So, if one of those processes, for instance, was compromised and it tried to access a file on the host, let's say /etc/shadow, which has the label shadow_t, by default, with SELinux, if it's not explicitly allowed, it would be denied. So, on a standalone system running Docker, for instance, all the containers run in the same context by default. If you look at OpenShift, for instance, or the Atomic platform, that's not the case. Each container actually runs in its own context. You can do that on a standalone laptop machine. You'll have to tweak your Docker command to do it, but you can absolutely do it on a standalone machine. But as a for instance, I've got three instances running. They're all running in the OpenShift context, or with the OpenShift label, I should say, but you've got different contexts here.
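Just to make the "identical except for the MCS category" point concrete, here's a little shell sketch pulling the label fields apart (using his mythical foo_t type):

```shell
# Two labels, identical except for the trailing MCS category.
label_a="user_u:object_r:foo_t:s0:c0"
label_b="user_u:object_r:foo_t:s0:c1"

# Label format is user:role:type:sensitivity:category.
type_a=$(echo "$label_a" | cut -d: -f3)
mcs_a=$(echo "$label_a" | cut -d: -f5)
mcs_b=$(echo "$label_b" | cut -d: -f5)

echo "same type ($type_a), but categories differ: $mcs_a vs $mcs_b"
# To type enforcement, that one differing category is enough to put the
# two processes in separate domains that may not touch each other.
```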
So, even if somebody were to access the Docker container process on the host, even if they compromised the Docker process and they got into one of your containers, they still would not be able to access the other containers on the machine. So, what I'm gonna do is, I'm gonna show you an example of a simulation of somebody exploiting your Docker environment. So, what happens is, on the first line, I'm logged in and I have the context, let's see, we really only care about the unconfined_t type, s0-s0:c0.c1023. So, when I run the id command, you can see that is my SELinux context. What I'm going to do is, I'm logged in as root. Root is omnipotent. I could do anything on the system that I want to as root, right? But what I'm gonna do is, I'm gonna use runcon to change my running context, and I'm gonna change over to the openshift_t label with s0:c0,c1, and I'm gonna run the bash command. I am still root. All I've done is I've just changed my SELinux context, and the funny thing is, as soon as I run bash, it goes, whoop, permission denied to .bashrc, because I'm no longer in the right context. If I try to cat /etc/shadow, for instance, even though I am root, permission denied. If I try to touch a test file in the root of the file system, even though I'm still root, I've just changed my SELinux context, permission denied. If I take a look in the home directories, even though I'm still root, because I'm no longer in that correct context, I have changed over from the unconfined context to the openshift_t context, and openshift_t does not have SELinux access to /home/tcameron, I immediately get permission denied. I'm root. Well, as root, that's easy. All I need to do is just disable SELinux, right? Nope. If I try to run setenforce 0, because I am no longer in the right context, SELinux will see that and go, nope, permission denied. So setenforce failed. So SELinux is an incredibly powerful capability.
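The demo he just walked through, written out as the literal commands (this assumes a host with SELinux enforcing; the contexts match his slides):

```shell
id -Z                                    # unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
runcon -t openshift_t -l s0:c0,c1 bash   # still UID 0, now in a confined context
cat /etc/shadow                          # Permission denied
touch /test                              # Permission denied
ls /home/tcameron                        # Permission denied
setenforce 0                             # Permission denied: policy beats root
```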
The things that I've talked about previously are obviously all really, really important. Kernel namespaces, really important. Control groups, for keeping compromised systems, or compromised containers, from taking over your system, really important. In my humble opinion, not that I'm biased or anything like that, because I never present on SELinux or anything, but my personal opinion is SELinux is really the linchpin of security in a containerized environment. So let's talk about some tips and tricks. Containers are, at the end of the day, just processes running on the host, right? I mean, containers are not magic. They're cool, but they're not magic. So some of the things that you do want to do in a containerized environment: do have a process in place to update your containers. Follow it. It is so easy, I get it, it is so easy for a developer to come up with something and it's like, hey, I got it to work, it's working perfectly, we're gonna throw that bad boy out in production and then I'm gonna move on to the next project. It happens. We know that. But have a process in place to update your containers and follow it. Run services in the containers with the lowest possible privilege. Drop root privileges as soon as you can, whether it's web services, database services, I don't care, Bill and Ted's excellent service. Make sure you drop privileges. Use services that allow you to do that. Mount file systems from the host read-only unless you absolutely, positively have a good reason not to. Treat root inside of the container just like you would on the host. Watch your log files, pay attention, and don't just download any old container you find on the net. Bill and Ted's excellent container repository may have some cool stuff in it, but unless you've vetted it and you know what's going on with Bill and Ted, you probably want to be real cautious about downloading them. Please don't run SSH inside of the container.
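A few of those tips translate directly into docker run flags; a hypothetical hardened invocation might look like this (the image name, paths, and UID are made-up examples):

```shell
# Read-only root filesystem, read-only content mount, and a non-root
# service account inside the container:
docker run -d \
    --read-only \
    -v /srv/content:/var/www/html:ro \
    --user 1001 \
    myapp
```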
Use the system management tools of the host, or use Git or something like that. Please don't run SSH. Don't run with root privileges unless there's absolutely, positively no other option, in which case find another container or use another piece of software. You shouldn't run with root privileges. Don't disable SELinux. If you really think you need to disable SELinux, go watch SELinux for Mere Mortals and then send me an email, thomas@redhat.com. I am available. I will talk to you about SELinux. Don't disable SELinux. Don't roll your own containers once and never maintain them. Again, have a policy in place to keep those things up to date. And again, don't run production containers on unsupported platforms. It's a shameless little marketing plug from Red Hat there, but you really want to have a certified platform so you're gonna be able to pick up the phone and get help if something bad happens. So in conclusion, go forth and contain stuff. Containers are awesome. I really, I've been doing this for a long time. I'm the first one to admit that I'm pretty jaded, right? You know, we always hear every year, this is the next new big thing and it's gonna be awesome and it's gonna change everything. And after about 10 years in the industry, you're like, yeah, okay, whatever. Containers are pretty cool. I don't know if I'm gonna say that they're gonna be like a tectonic change in everything we do in IT, but containers are pretty cool. They make app deployment really, really easy. They leverage some incredible capabilities of the underlying operating system. And by design, they're pretty secure. They can be secure if you maintain them well. There are some gotchas, though, as with every other piece of software out there. It requires some care and feeding, right? You gotta take care of your systems. But well maintained, containers will absolutely make your business more agile, less complex, hopefully, and if done right, safe. So thank you very much for coming.
I appreciate it. We'll open it up for any questions. I can't see anything right now because I've got these flamethrowers in my face, but if you have a question, please go up to the microphone and I'd be happy to answer it if I can. Somebody's gonna stump me, though. It always happens. Yes, sir.

When you used runcon to change your context, would you have been able to use runcon again to change it back?

Nope. That was a one-way trip. That was just for demonstration purposes. Hey, look, here's your foot, there's the pistol. Yes, sir.

Hey there, thanks for the talk. I'm Rich from Cloud Engineering at Box. I'm just curious, what work has Red Hat done, if any, around regulations like FedRAMP and PCI and container configuration?

We've actually done a whole lot of work on getting the Atomic platform certified for Common Criteria and things like that, in conjunction with the folks in the DoD and folks like that. As far as specific projects, I don't have that right here with me, but yes, we are absolutely aware of the requirements, and we're working with the federal government to make sure that we are at least pursuing, if we haven't already received, a lot of those certifications.

Okay, thanks.

Thank you. Hey, real quick, guys, if you need to reach me, I don't think I put it on my slide: I am thomas@redhat.com, and you can follow me on Twitter at @thomasdcameron. If you have any questions, don't hesitate to follow up, and these slides will be available on the website when we're done. Yes, sir.

What do you think is missing to get better multi-tenant security between two different containers on the same system?

What is missing? That's interesting. There are a lot of things that we need to get better at around just doing simple stuff like enforcing security and enforcing updates within containers.
I think some of that glue, the plumbing around that, is probably something that the industry in general is weak at.

From a kernel standpoint, do you think there are any capabilities right now that you're missing, or do we still need to run different VMs to keep tenants separated?

No, actually, I think that with containerization we're doing a lot better. If you look at what Docker specifically, and Red Hat as a contributor, are doing around libcap, that is changing almost weekly, it seems like. So what we're really having to do is spend a lot of time taking a look at what capabilities are absolutely, positively needed and weeding out the rest of them. That's probably where I'm seeing the most activity, and then also SE Linux policy, and doing things like the SE Linux segregation we do with OpenShift. I'm lobbying internally at Red Hat to make that available for every instance, every place that we use Docker. It's not there yet, but we're working on it.

Thanks.

Thank you. Yes, sir.

So you mentioned more than once not to use SSH. Can you elaborate on that a little bit? Why not, and what would you use in place of it?

So the problem is, again, SSH is potentially an open doorway to the world. If you secure it correctly and you use keys and no passwords and stuff like that, it's better. But the thing is, if you need access to your systems, don't have a million instances of SSH running. If you need to log in, SSH into the host and then make whatever changes, do a docker attach or whatever. Running SSH inside of the container is an invitation for disaster, because invariably what's gonna happen is you're gonna have an old, outdated version that has security holes in it. People are gonna think, ooh, I'm running PHP or Java or whatever,
so I'm gonna pay attention to the application and update the application, but they'll forget about the SSH daemon, and you just eventually wind up shooting yourself in the foot.

So the same could be said about any system, not just containers?

Yeah, if you're not maintaining it. I don't know if I'd say it's critical, but I think the reason it's so common in containerized environments is you have app devs, and even if the app devs are being really smart about keeping their Java or Node.js or whatever up to date, they're not sysadmins, right? So they don't even think about the SSH daemon. The guy whose sleeve they tugged to set that up isn't involved anymore, and that's where you see that kind of thing slide. Okay, how are we doing on time? Do I need to get down? Two minutes, okay.

Hey, thank you very much for the presentation.

Thank you.

So, to the previous question around SE Linux and security: what additional security capabilities do you get by running within a VM? Or do you believe that containers with SE Linux do not require VMs to be truly secure?

So if you run your containers inside of a VM, using something like Project Atomic or the Atomic Host or whatever, you're obviously gonna get some additional abilities. Let's say, for instance, you're doing a really heavily multi-tenant environment and you want this customer to have these 50 containers. It may make sense to spin up a VM for them so they can spin up those 50 containers. You've got your control groups set up inside of that VM, and maybe you're even using control groups in the underlying hypervisor to make sure that that VM doesn't lose its mind. So there are potentially gonna be cases where it absolutely makes sense to use virtualization in addition to containerization.
I think as we get more sophisticated with containerization, and don't hold me to it, we're gonna see less of a need to have segregation at the VM level. I think we're gonna get to the point where you're gonna be able to have these big honking hosts and just spin up zillions of containers and apply security through control groups and SE Linux and so on, so that you don't have to have those multiple layers. Does that make sense?

Yeah, thank you very much, appreciate it.

Thank you. Okay, one last question.

Is there any development around the isolation of containers from a networking point of view? I mean, you can totally disable communication between containers, but I haven't seen how I could be more granular, say, allow some communication but not all of it.

Yeah, so we're looking at it, but there are also some third-party tools. Oh, crap, I was just talking to him last night; he's a former Red Hatter and I'm drawing a blank on his company name, but they're using Quagga. They're using Quagga to do dynamic routing and dynamic networking, so that you can get all the way down to the individual container layer and set up really strict rules that say this thing can only get out to the internet, and traffic can only get back in, and they can't see each other. So it's not just Red Hat, it's not just Docker; there are a ton of folks working on that. It's clearly a gap, and a lot of people are trying to figure it out. In open source, hopefully the meritocracy means the best solution will rise.

Okay, thank you.

Was this helpful? Was this good? Okay, good. Thank you very much. Thank you for coming. I appreciate it.
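As a footnote to that last networking question: the coarse-grained, all-or-nothing control mentioned there is the Docker daemon's inter-container communication switch. A sketch, assuming the daemon flags of roughly this era (container and image names are made up); anything more granular than "explicitly linked containers may talk" is where the third-party work comes in:

```shell
# Daemon side: turn off inter-container communication by default,
# and let Docker manage the host iptables rules that enforce it.
docker daemon --icc=false --iptables=true

# With --icc=false, containers can't reach each other unless linked.
docker run -d --name db registry.example.com/db
docker run -d --name web --link db:db registry.example.com/web
# "web" can reach "db" (Docker inserts an ACCEPT rule for the pair);
# all other container-to-container traffic is dropped on the host.
```

The granularity stops at the linked-pair level, which is exactly the gap the questioner is pointing at.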