Good morning and welcome to DEF CON Sunday. How are we doing? Yeah. That is a disturbing level of enthusiasm. Wow. Welcome to the most coveted speaking slot: first thing in the morning on Sunday. Funny story, a couple years ago I spoke at Black Hat, and I spoke at the same time that Dan Kaminsky was giving his talk on DNS. Hardly anybody came to mine. So here I am. And David was my speaker handler there, so I'm now returning the favor by introducing him in a highly coveted speaking slot as well. It's going to be an interesting talk. I'm really excited to hear more about this stuff. Let's give David Mortman a big hand. Good morning everyone. Thank you for coming out at this really stupidly early hour. I appreciate the effort. So today I'm going to talk about Docker and containers in general, and that whole security thing with regards to that. A little bit about me: in my day job I'm the chief security architect for Dell Software. In spite of that, they do let me use a Mac for the most part, unless I'm going to customers, in which case I actually have to pull out the Windows thing. It's kind of scary when that happens. Anyway, I do cloudy stuff most of the time and I've been poking around a bit at Docker. So Docker seems to have gotten a lot of publicity in the last few years. Everyone's like, oh my god, Docker this, Docker that. You can't go anywhere near a tech blog without someone talking about how awesome Docker is. So what is the big deal about Docker? In some sense it's not a big deal at all. It's a container. For those of us who have been around for a while, remember jails and chroot? Yeah. I remember setting up a chroot for an FTP server because, hey, that's more secure. The cool thing about containers is that you're taking basically standard chroot or jails or LXC, which is the modern version of that stuff, and you're wrapping it with metadata. So now you're giving it some context about what's inside the container.
So now you're not just saying, hey, I've sort of contained this executable; you can tell the rest of the operating system what's inside it. And so now you can say, hey, this is a portable format. What you've actually done is taken a container and made it a packaging format. So now it's just like any other packaging format, except rather than being a single executable with a list of dependencies you need to download yourself, or rely on apt-get or your favorite package manager of choice for, all the dependencies are self-contained in this little package. So that's pretty cool. It's very effective from an operational perspective. Life gets a lot easier, especially when you start looking at things like, hey, I'm developing something and I need to hand it off to QA, who then hands it off to, you know, some security team for evaluation, who then hands it off to production. If you're lucky, that's the order it goes in. If you're not lucky, we get called three weeks later: well, it's in production, can you scan it? But in theory that's the way it works. The great thing about that is that what actually goes from dev all the way to production is the exact same code. So you avoid things like "it worked on my laptop," or "well, we thought you had this version of the library in production, but in dev we were three versions later." It's really convenient that way. So from an operational perspective, it's awesome. But the problem, of course, is that everything has security issues in it, because, you know, what doesn't? And in the last year people have gone to lots of effort to say, oh my god, containers don't contain. They're not secure. They're not like a VM. Because VMs are secure, we know that, right? And containers, in some absolute sense, are not as secure as a VM. They are much lighter weight in terms of isolation, but they're pretty good.
And the fact is, for the most part, they actually do contain. There are a couple of issues; I'll get to the places where they don't do full containment a little later. But even in the current state, compared to what it was like 20 years ago with chroots and jails, you've significantly reduced the attack surface that someone can go after. And realistically, if you think about it, if someone escapes the container, they're just where they would be if you were running on bare metal. So it's not actually a huge loss in security at that point. In particular, there were a few blog posts over the last year, since Docker was released, where people were like, oh look, trivial escape from the container. There was a beautiful one where you could actually, as a Docker user, launch a container, create a setuid bash shell, copy it out of the container, and get root on the host OS. Oops. And I was like, oh, that's scary, I should validate that. So I was sitting around the other week going through all the posts of container escapes that people have done in the last year, and Docker has fixed all of them. Mostly through the expedient of changing the default configurations. Funny how that works, you know. And I'll get more into this a little later, but it's things like: you know what? Don't run Docker as root. Okay, most of them actually fall under "don't run Docker as root" and do a few other basic hygiene-type things. You know, the sysadmin equivalent of washing your hands and putting away the trash. So that's good. Escapes aren't trivial anymore. However, there's still a lot to do. So let's start off with where we are today. What do we get from Docker? What does Docker give you? Or other containers, frankly; they're all more or less the same. There's appc, and there's Intel's Clear Containers thing that's not quite a container, but they all have the same basic structure going on.
They all have some sort of basic container management to limit what you can do. So, getting ahead of myself here. They all have cgroups, and they all have namespaces, mostly. Most of the subsystems are namespaced. This is good. This means that if you're in one network stack, you can't see another container's network stack. Generally a good idea. Excuse me. They all have things like iptables. The file system is in its own namespace; processes are in their own namespaces. This gives you a fair amount of protection. There are two key places that are not yet namespaced that are being fixed. The first is that there's no user namespace yet. This means that if you're operating as a particular user in a container and you escape the container somehow, you get to operate as that same user outside the container. Not so good. They're fixing that. The next release of Docker — I'll talk a bit more about that later — is implementing namespaces in the underlying structure, so pretty soon we'll actually have user namespaces. Another big issue, however, is that the kernel keyring — you know, that place where you put crypto secrets or passphrases that need to live in memory — is not namespaced at all. This means that if the host OS puts, say, a critical credential into the kernel keyring, any of the containers can see it. Not so good. Also, if you have multiple containers running and any one container happens to put something into the kernel keyring, all the containers can see it. So if you need to use containers, be really careful about what kind of key management, what kind of secrets you're dealing with. And similarly, be careful about user-space stuff. So what you want to do in this situation — this is why the state of the art, and I'll repeat it, is to run one container per VM, or one container per bare-metal box.
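To make the keyring point concrete, here's a rough sketch of how you might poke at it on a Docker host. It assumes a host with keyctl installed and a stock Ubuntu image; the key name and value are made up, and the exact behavior depends on your kernel and Docker version, so treat this as illustrative rather than a guaranteed reproduction.

```
# Put a secret on the host's session keyring (name/value are made up):
keyctl add user demo-secret "hunter2" @s

# Spin up a container and dump its view of the keyring; on kernels
# where the keyring isn't namespaced, host keys can be visible:
docker run --rm ubuntu keyctl show @s
```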
You still get a lot of the benefits of containers, especially in production, without running risks around that, especially around the keyring situation. So that's a really useful thing to consider. The keyring stuff is addressed by running SELinux. Does anyone here actually run SELinux? Okay, keep your hands up. Okay. Keep your hands up if using SELinux means the first thing you do is turn it off when you get your operating system up. Exactly. So SELinux is a really cool tool, but for most of us — if we're not Dan Walsh — we're not actually capable of using it to the full extent of its capabilities. So this is one of the pain points still in containers. Running SELinux by default actually solves this particular keyring issue, is my understanding. But to really get the benefit out of SELinux takes a lot of time and effort. Namespaces — right, so there are dedicated network stacks, as I mentioned. Now, when Docker first came out, there were no signatures, no way of validating that the container you were downloading from a registry was actually the container you thought you were getting. Nothing. That's comforting, kind of? Okay, it's not comforting at all. It's terrible. Then, in Docker 1.3, maybe 1.4, they started offering signed manifests for official Docker containers. So if you were going to download a container from Docker Hub that has the official Docker stamp of approval on it, the manifest that described the container had a signature on it. That's a good step forward. Except for the part where the container itself isn't signed. So there's no way to actually validate that what's in the manifest is what's in the container. But boy howdy, is that manifest signed. So I was like, okay, what does this get me? It gets me a validated manifest. So I feel real comfortable. Okay, I don't. But they're fixing that, and I'll talk about that a little later as well, because that's kind of cool. So what has Docker done?
The Docker folks have hired some really smart people in the last six months to a year to work on securing Docker. I've spoken to them several times now. They basically released Docker knowing there were these security issues. They're like, this is beta code, we have a roadmap for fixing the security issues, and every single release adds extra functionality on the security front. So this is good. We're getting better. That's the trend we want, right? Definitely not the opposite direction. They just recently released a really great high-level white paper on securing Docker. I'll be posting a new version of the slides online, and there'll be a whole section with links to the various resources I mention over the course of the talk. So they have a great high-level white paper on how Docker works, how containers work in general, and some high-level security things you can do. They also recently released, with CIS, a document on how to harden Docker. It's 190 pages. So I had a lot of spare time, apparently, and I read it all. And I've pulled out some highlights for you so you don't need to read it all, but it is worth going through. One thing you're going to find is that as you go to lock down Docker, this list is going to sound very familiar to locking down anything else. I mean, there are a few special things around Docker, but realistically speaking, it's an application with some special corner cases, and in the end there's a lot to do, just like anything else. So one thing they recommend, and it's a good idea, is to restrict network traffic between containers. If you're running multiple containers on your host, don't allow the containers to talk through internal buses, through the internal operating-system guts. Make it go over the network. Excuse me. That's a great thing.
Make sure everything goes across the network, because then you maintain that network namespace and you maintain the integrity of those separate network stacks. As soon as you start allowing the containers to communicate through the host OS, you start losing protection. So always, always, always make containers talk across the network. Even if it's just loopback — I mean, they'll generally just use loopback anyway — at least that way it's going out the stack and back through, and any network controls you have in place, like iptables, also take effect. Here's a clever one: turn on auditd for all of the Docker files and the daemon itself. And then here's the radical part: you actually have to read the logs. I know, I know, we don't generally do that in this industry. We just collect logs or spray them to /dev/null. But please, for me — it's 10 a.m. on Sunday, most of us are somewhat hungover — please review the logs. They'll make your auditors happy at least, and they'll be nicer to you. So that's worth something right there, I think. The other thing — this is a good default, don't turn it off — is to only use SSL/TLS when you're connecting to Docker registries. I think we all know this, but don't turn it off. And in fact, don't let the Docker daemon itself listen on the network. If you're in production you may not be able to avoid that, but if you're doing local development there's no reason to have your Docker daemon listening on the network. That provides an immense amount of protection. The Docker client is sitting right there on your machine anyway. Especially because, by the way, the Docker API has no authentication. It has no concept of identity yet. It has no concept of roles. It's just wide open, sitting there saying: use me, abuse me. So please, don't put it on the network.
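As a sketch of the auditd point: the hardening advice boils down to watch rules on Docker's binaries and data directories. The paths below are typical but vary by distro, so take them as a starting point; lines like these would go in something like /etc/audit/rules.d/docker.rules.

```
# Watch Docker's binary, data, and config paths; tag events with a
# "docker" key so you can pull them back out with: ausearch -k docker
-w /usr/bin/docker -p rwxa -k docker
-w /var/lib/docker -p rwxa -k docker
-w /etc/docker -p rwxa -k docker
-w /etc/default/docker -p rwxa -k docker
```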
And if you do have to put it on the network, at least enable some sort of certificate-based authentication on top of it, using nginx or something like that. At least that way you get some comfort that only the people you know are actually using it. Since the API itself has no authentication, proxy something on top of it. Give yourself some safety if you have to put it on the network. Realistically this happens in any sort of larger environment, or if you're doing orchestration using third-party tools — you're going to have to put it on the network. Which sucks. But give yourself some protection. Another radical idea: lock down all the config files to root only. Make ownership root:root. The config files generally don't contain anything critical, so they shouldn't be writable by anyone else, but they can be world-readable. And if you're using any certs or keys, make sure they're all, again, owned by root, perms 400. I mean, this is more or less obvious, but I've seen multiple test installs of Docker where someone put a cert there and left it perm 666. So check those things. This is not rocket science, but that's a different talk. Don't run Docker as root. Don't run your containers as root. Run them as non-root users. This gives you some protection if someone manages to escape the container: at least they're only running as that user. Just like we do with Apache. Just like we do with Tomcat. Just like we do with MySQL. Basic stuff here. And then only use trusted images. This is kind of a weird thing, I know. Just don't download random shit off the internet and click on it, right, people? Come on. That was funny. I'll get to that later, though, because there's a whole general problem space around trusted images; it's not just an operational issue. Minimize your package installs. Again, basic sysadmin 101.
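A minimal sketch of that file-lockdown hygiene, using a scratch file so it's safe to run anywhere; on a real host the targets would be things like the certs and keys under /etc/docker. The chown needs root, so the demo lets it fail quietly.

```shell
# Create a stand-in for a private key, lock it to owner-read-only,
# and verify the resulting mode is 400.
f=$(mktemp)
echo "pretend-private-key" > "$f"
chown root:root "$f" 2>/dev/null || true  # needs root on a real host
chmod 400 "$f"
stat -c '%a' "$f"   # prints: 400
rm -f "$f"
```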
Don't install shit you don't need in your container. One app, one parent process per container. Keep it simple. Containers are fast to spin up. Applications are increasingly getting distributed — webified, SOA, things like that. So if you have a container, just have one app running inside that thing. If it's a microservice, fine. If it's a web server, fine. But you don't need to build your entire application stack top to bottom in one container. It's tempting, especially in dev, to be like, oh, my web server, my app server, my database — it's all cute in a little package. Well, it's not much harder to spin up three separate containers and keep those communications more secure. It's also easier to audit that package. And it's much easier to avoid dependency conflicts and security issues brought in by third-party libraries, which, again, I'll get to a little later. Take advantage of kernel capabilities. Linux has this concept of kernel capabilities; take advantage of it. Restrict the container to only the capabilities at the kernel level that it needs. The CIS benchmark actually has a great list of what all those capabilities are. And the defaults are actually pretty good. If you start getting into some weird raw-packet stuff, you might need to adjust that a bit, but the defaults are pretty good. This is one where I'll say: trust the defaults. But be aware of certain things: ping — ICMP in general — does funky things with the network stack, so that might break. They're working on fixing that with the capabilities thing as well. The slides are blinking for everyone, sorry. And generally speaking, the capabilities you care about are things like NET_ADMIN, SYS_ADMIN, SYS_MODULE. The defaults generally cover it; that's pretty much all you need. Don't use privileged containers.
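The capabilities advice translates into run flags like these — the image name is hypothetical; the idea is to start from nothing and add back only what the app actually needs:

```
# Drop every capability, then add back just the one that lets a
# non-root process bind a port below 1024:
docker run -d --cap-drop=ALL --cap-add=NET_BIND_SERVICE mywebapp
```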
A privileged container is one that actually has root-level access, that lets you do root-level functionality. Generally speaking, if you're using privileged containers, you're actively defeating the point of containers and negating them. So that's not so useful. Avoid privileged containers unless you really, really can't. Okay. Another rocket-science item: don't mount sensitive host file systems, directories, et cetera, in your containers. You know what? Your container doesn't need the actual /etc mounted. I know. It doesn't need /dev. And it really doesn't need /proc. So don't mount that shit. Another thing — this was one that surprised me — is don't SSH into containers. Don't put SSH into your containers. You don't need it. If you need to access a container, log into the host operating system and use nsenter, which basically gives you the ability to jump into your container. Generally speaking, SSH is hard to secure and hard to manage, and it's kind of funky — it does interesting, bizarre things with the stack, and you need to greatly expand your capabilities to make it work properly. So avoid SSH if at all possible. It adds complexity you really don't need. Also, if at all possible, don't use privileged ports. Obviously you can't avoid that if you're running Apache or a similar application that needs to run below port 1024. But generally speaking, anything other than those front-facing services, don't run on privileged ports. Anything that has to run on a privileged port needs greater access to the kernel — it needs more capabilities. It's adding to your attack surface. So your mid-tier stuff, don't run it on a privileged port. Your database, don't run it on a privileged port in that container. Just don't do that shit.
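The nsenter workflow described above looks roughly like this from the host — the container name "web1" is made up for the example:

```
# Find the container's init PID, then enter its namespaces:
PID=$(docker inspect --format '{{.State.Pid}}' web1)
nsenter --target "$PID" --mount --uts --ipc --net --pid /bin/sh
```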
Again, set reasonable limits for memory usage. Has anyone here ever had the pleasure of configuring Java — you know, on the JVM, max memory, min memory, all that shit? Yes, I've got a couple of front-rowers nodding. You're going to have that same joy as you start running containers. But this is a good idea, particularly if you're going to a production environment: set those maximums. Give yourself some protection from DoS attacks, or even runaway processes. For the most part, there's no reason a container needs all the memory on a box. If you're running something that's memory-intensive, you probably want to be running bare metal anyway, possibly not even in a VM. Containers are not your best suit there. Set reasonable CPU priority. Again, this makes sense. Make sure you're not going to have a container go awry and kill your entire machine. Not crazy rocket-science stuff. Set reasonable ulimits. Does anyone here actually like ulimits? They were, generally speaking, the bane of my existence when I was a sysadmin, especially when running databases, of either the SQL or NoSQL variety — you always end up upping ulimits constantly. But pay attention to those. Again, that's a great way of protecting yourself. A container goes awry; make sure you have a reasonable ulimit on it. At least that way you see that pain and suffering coming, and it prevents the container from getting out of control. And for the most part, with containers, there's literally no reason to ever mount your root file system as anything other than read-only. If you need to make changes to your container, what you're actually going to do is take a copy of that container offline, make the changes you want, generate a new container image, and then launch it.
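Pulled together, the memory/CPU/ulimit advice is a handful of docker run flags; the numbers and image name here are illustrative, not recommendations:

```
# Cap memory, weight CPU shares, and bound open file descriptors:
docker run -d \
  --memory=512m \
  --cpu-shares=512 \
  --ulimit nofile=1024:2048 \
  mywebapp
```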
I'll talk a little more about configuration management later, and the ways containers really change configuration management from a security perspective. Only bind your containers to the appropriate network interfaces. Don't go with the default of having them bind to every network interface on the box. For the most part, most of your containers can just be hooked to loopback, and there's no reason for them to ever be exposed on an external interface unless you actually have multiple boxes talking to each other. In a dev environment especially, there's no reason for containers to ever listen on anything other than loopback. This is an exciting one: limit your restarts. Containers will automatically restart when they die. That's a cool feature. You want to limit that to like three or five, maybe a few more than that. The last thing you want is your container constantly restarting and hosing your box. This is just generally good practice, even in dev environments. Instead of having a DoS attack take down your box, you'd have a DoS attack force a constant reboot cycle and take down your box. That's just as painful. Don't share namespaces. By default the namespaces are separated between the host and the containers, and they're there for a reason. If you share namespaces, you destroy the point of namespaces. The default is that they're not shared, but some nice people think, I know, we'll make this easier, let's share that namespace. Well, you could do that, but then you may as well run everything in one container, or just not use containers at all. You've just shot yourself in the foot in that case. Backups. I know, I know, it's weird. Back up your shit. I have this guy in front going, no man, don't do that. And logs. I know, logs.
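Several of these points — read-only root, bounded restarts, binding only to loopback — are again just run flags. A sketch with a hypothetical image and port:

```
# Read-only root filesystem, give up after five failed restarts,
# and publish the port only on loopback:
docker run -d \
  --read-only \
  --restart=on-failure:5 \
  -p 127.0.0.1:8080:8080 \
  mywebapp
```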
So logging is still a little bit tricky. The last release of Docker just added syslog hooks, finally, so that makes it a bit easier. Every single major SIEM and log-correlation vendor now has a mini tutorial, posted on their websites, about how to enable Docker container logging in their product. So that's easy. I mean, it's not ideal yet — it's still a little tricky — but there are directions posted, so it's like programming off Stack Overflow: a cut and paste and you're probably in good shape. Work with a minimal number of images. Anyone remember when we first started doing VMs and people would generate a VM for every single application they had, as opposed to having, like, three or four base VMs and then adding the applications on using Chef or Puppet? Don't get yourself in the same situation with Docker. Image sprawl is already a huge problem for folks, especially when you get to maintenance windows and things like that. So really, keep a minimal number of images. Start with a base image; you can always add things on as you need to. Every time you add an image, your problem gets not quite exponentially harder, especially once you get above like 12 or 13. People are really bad at managing large numbers like 12 and 13, it turns out. Keep a minimal number of containers per host. For anything remotely production-oriented, I recommend one container per host. Keeps it simple. If someone escapes, it's not the end of the world. If I'm going to run multiple containers per host, they're going to be alike as services — okay, I have a big honking box, so rather than running 12 VMs on it, I'm going to run 12 containers all running the web server or something like that. And then you still want to make sure you have some diversity across boxes, just like with VMs, so that if you lose your hardware, you're not down. Just like anywhere else. So, I talked a bit about trusted containers.
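The syslog hook mentioned above is the log-driver flag; a sketch, with a hypothetical image and remote collector address:

```
# Send a container's logs to the host's syslog...
docker run -d --log-driver=syslog mywebapp

# ...or straight to a remote collector:
docker run -d --log-driver=syslog \
  --log-opt syslog-address=udp://logs.example.com:514 mywebapp
```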
You want to actually know that the container you're using is the container you think it is. So this becomes a supply-chain problem: how do you know that you actually have what you think you have? As I said earlier, right now Docker-published images have manifests associated with each image, and that manifest is signed. That's a start. It's not ideal because, like I said, the container itself is unsigned, so you don't actually have any proof that what's in the container is what's in the manifest. They're fixing that in the 1.8 release, which is due out any second now — it may already be out. I've been off the internet mostly this week, because there was a security conference or two going on, so my laptop has not been on the wireless here. Yeah. I didn't want to end up on the Wall of Sheep for some strange reason. So anyway, supply chains: you want to watch that supply chain. You want to validate that your containers are what you think they are. If at all possible, given the current state, don't use public repositories. Set up a private repository and validate that the image is what you think it is. Run that repository TLS-only, and then continue to regularly sort of double-check things — and especially for that repository server, that's kind of the keys to the kingdom there. So audit, monitor, have appropriate protections in place to make sure that those containers have not been violated in any way. There was a blog post about a month ago, maybe six weeks, where someone claimed: you know what, 30% of the images on the public Docker repository are insecure, and this proves Docker is insecure. I was like, that seems like a really big number. I bet it's actually on the small side.
So I did a little bit of research and poked around; some other folks did some deeper analysis, and it turned out what they meant was that 30% of the containers they found had a library or an application inside that was vulnerable to some sort of exploit. So yeah — if you're going to use one of those containers, you do what you do with every container, which is run apt-get upgrade after you download it, to make sure you're running the latest versions of the code, and you move on. Just because it has a vulnerability or out-of-date code in it isn't the end of the world. But it does mean you can't just assume that your container is up to date, which is why I said earlier: patch. You have to actually pay attention to this stuff. You have to patch your containers and keep them up to date just like anything else. Now I'm going to go out on a limb here and do something a little bit radical. This one was not actually recommended by the CIS benchmark, but: don't use Chef with your containers. In fact, don't use any online configuration management with containers. I know, I'm getting some quizzical looks in the front row. I can't actually see the second row, so maybe there too. The reason I'm telling you this is that Docker, and containers in general, are the ideal candidates for immutable servers. Here's the idea. The reason configuration management was invented was the concept of configuration drift. Configuration drift is: you have the binder on the shelf, or the Excel spreadsheet, that says, this is the configuration of my web server. And over time you make changes to that configuration, but they don't get copied to the spreadsheet. They don't get printed out and put in the binder for your disaster recovery. And then three years later, when you have an issue, no one actually knows what the configuration looks like.
So tools like Chef and Puppet were invented, and one of the benefits they have is that not only does everything get automated, so you get consistent configurations across all your boxes, but now you basically have a CMDB. You actually know that what Chef or Puppet thinks is the configuration is the configuration. And in fact, if you're running these tools and someone changes the configuration on a box, Chef and Puppet do a sort of tripwire-type thing — they say, ah-ah-ah, and shift it back to the way it was. So any changes that happen outside that space get pushed back. Well, Chef and Puppet are kind of heavy clients. And in the container world, as I said, you don't want to run extra shit in your container. You want to run one process, and that's not going to be Chef. It's kind of pointless to have a container just sitting there running Chef or Puppet, right? You're not doing anything at that point — but hey, your configuration is good. So instead, because containers are so fast to spin up — we're talking milliseconds in most cases — what you want to do when you need to make a change is create a new container, spin that up, and then shut down the old container. And if there's an issue — the new container doesn't work — well, you shut that down and bring the old one back up. Or run them in parallel, the classic A/B thing: use your load balancer and start shifting load over to the new containers. But any change you make generates a new container. This is what Netflix does. They don't ever make configuration changes to running VMs on Amazon. They burn a new AMI, spin up a whole new instance — or, you know, hundreds of instances — and then transition their load balancers. Facebook does similar things, as does Amazon. The great thing here is that you then have a history of what everything looks like, and you're not worrying about configuration management failing.
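A sketch of that immutable-server loop, with hypothetical names and tags — the point is that a change means a new image, never an edit to a running container:

```
# Build the changed image under a new tag, start its replacement,
# shift traffic, then retire the old container:
docker build -t myapp:v2 .
docker run -d --name myapp-v2 myapp:v2
# ...move the load balancer over to myapp-v2, verify, then:
docker stop myapp-v1 && docker rm myapp-v1
# Rollback is the same dance in reverse: start v1 again, shift back.
```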
And you can keep your container nice and tight and clean. Related to this, in terms of trusted containers: because you only have that signature on the manifest, how do you actually have any sort of attribution over the life span of that container? So there's some interesting stuff coming. There are some other things we can do beyond these basics. You can run AppArmor — actually, the Docker folks recommend you run both AppArmor and SELinux. The cool thing is that if you run SELinux, once you get your configuration right, it actually lives with the container, so you don't need to track it separately. You figure out what your ideal configuration looks like, it's built into the container, and as it transitions across your infrastructure, it goes with it. So at least that stays consistent. There's a cool tool called seccomp. I just found out about this a few weeks ago, and it's really cool. It actually lets you limit syscalls and syscall arguments on a case-by-case basis. So now you can get some really tight control over what those system calls are doing back to the kernel, back to the operating system in general. The Docker folks also released a cool tool called Docker Bench Security. It's up on GitHub; I'll post the links with the other stuff when I get the slides redone. What Docker Bench Security does is go through your setup and validate, or alert you on, all the recommended configurations and settings for your Docker containers. Most of the recommendations I've made have checks already built into Docker Bench Security. So you can just download it, run it against your containers, and make sure you're in reasonably good shape.
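A sketch of what a seccomp profile might look like, written out via a here-doc; a real application needs a much larger syscall whitelist, and the exact profile format varies with Docker version, so this is illustrative only:

```
# Deny every syscall by default, allow a tiny whitelist:
cat > profile.json <<'EOF'
{
  "defaultAction": "SCMP_ACT_KILL",
  "syscalls": [
    { "name": "read",       "action": "SCMP_ACT_ALLOW" },
    { "name": "write",      "action": "SCMP_ACT_ALLOW" },
    { "name": "exit_group", "action": "SCMP_ACT_ALLOW" }
  ]
}
EOF
docker run --security-opt seccomp=profile.json mywebapp
```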
So for the most part, you know, although it's a complex list of things you need to do — and checklists, while useful, are really boring to go through every time you do something — Docker Bench for Security automates that for you. So that's a win right there. There are also two third-party things you can do to lock down your containers more. The folks at Canonical have released a project called LXD, and what that is, is basically a container hypervisor. Rather than run your containers in a whole OS, they're building the container version of a VM hypervisor. That way, should you manage to escape your container, there's only a very thin layer underneath, just like you have with a traditional hypervisor. That's out, it's open source, and it's continuing to mature, so it looks promising. And there's a third-party commercial package from Apcera. They do policy and security of containers in general — both VMs and containers, actually. It was originally built with platform-as-a-service in mind, but it's grown as containers have taken off in general. Basically, it's a policy-based language which lets you set permissions on what containers themselves can do. This is looking promising; I haven't had a chance to deep dive on it. Derek Collison, who is the primary author, was the primary author of Cloud Foundry, and he's been involved in the cloud and virtual computing space forever. So it looks very promising — that's another one to check out, and they do have some free offerings you can try. There's some cool stuff coming from Docker too. Docker is not resting on their laurels, saying, oh, it's good enough. They keep adding security. At DockerCon several months ago, they announced a new project called Notary. What Notary is, is a secure package management system based on The Update Framework (TUF).
And what they have done with Notary — which is coming out in 1.8, which, as I said, is going to be out any day now, maybe it's out in fact — is give you the ability to have signed manifests not only from Docker; it lets anyone have signed manifests. And more importantly, Notary, which is part of the v2 registry, is a content-addressable registry, which means that your manifests now contain a list of hashes of all of the contents of your container. So you don't need to sign the container or encrypt your container, though you could do that if you wanted to. Now when you get the manifest, you validate the signature on that manifest, and you have a list of hashes of all the contents of the container. So you can actually validate that what's in the container is what you think it is, and it's not restricted to official Docker containers at this point. You can do this yourself in your private registry, and you can have much more confidence that the container you downloaded last week, and validated as acceptable for your standards, is still the same container. Now, The Update Framework, which Notary uses, is pretty cool, because not only does it enable this content-addressable file system, but it has this concept of freshness. What this means is that the signatures are unique enough that when you go to the registry and say, hey, I want this container, and the registry says, go to this mirror over here in the western U.S., or this one in England or Ireland, your client looks at what's on the mirror, looks at what's on the master, and validates that it's the same thing. So you know you're actually getting the most recent version even if you're going to a mirror, because obviously, for large mirror sites, things often get out of sync — especially when there are recent updates, it takes a while for those mirrors to catch up.
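The content-addressable idea above reduces to something simple: verify the manifest's signature once, then check every blob you fetched — from the master or from any mirror — against the digest the manifest lists. A minimal sketch with plain SHA-256 (real Notary/TUF metadata is considerably richer; the layer names and helper are invented for the example):

```python
import hashlib

def digest(blob: bytes) -> str:
    """Content address of a blob, in Docker-style sha256:<hex> form."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def verify_container(manifest: dict, layers: dict) -> bool:
    """manifest maps layer name -> expected digest; layers maps
    layer name -> raw bytes as fetched (possibly from a mirror).
    Assumes the manifest's own signature was already validated."""
    for name, expected in manifest.items():
        blob = layers.get(name)
        if blob is None or digest(blob) != expected:
            return False        # tampered, corrupted, or missing layer
    return True

# Publisher side: build the manifest from known-good content.
good_layers = {"base": b"ubuntu rootfs", "app": b"my application"}
manifest = {name: digest(blob) for name, blob in good_layers.items()}
```

Because the manifest pins content by hash, a stale or malicious mirror can't substitute different bytes without the verification failing — which is exactly the "same container as last week" guarantee described above.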
So now you know not only that you're getting what the manifest says is in the container, but that you're actually getting the most recent version that you want. Or if you want a particular older version, you now know it's exactly the right older version as well. So that's pretty cool. It also has the concept of snapshots — again, like I was saying, versioning of your containers — which makes it that much easier to securely roll back to a different version of a container, or roll forward. And it's designed to survive key compromise. The root key is supposed to be stored offline, and if any of the other keys get lost, the spec is designed to allow for surviving the compromise. So that's kind of cool. The folks at Docker are having this audited by a well-known security firm. They did ask me not to say who it is; when they release the results of the audit, they'll say who did the work. But I know who they are, and they're very talented folks you've heard of. So that looks promising as well — they're doing all the right things in terms of adding security in that space. And speaking of space, they're finally adding user namespaces. So that's good. Once you have user namespaces, it means you can have what the container thinks is a root user while the main operating system thinks it's just a regular user, without doing complex things. This is currently in runC in 1.8 — runC is the underlying infrastructure that makes Docker containers work at all — and it will bubble up all the way through Docker in the next release or two. There are a few places that still need some help. I already talked a bit about how the kernel keyring isn't namespaced; that's the problem where you inject secrets into your kernel and other people can see them. SELinux does solve that. Sort of, kind of, but it's not ideal yet.
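User namespace remapping, mentioned above, is essentially arithmetic over UID ranges: an `/etc/subuid`-style entry such as `dockremap:100000:65536` maps container UID 0 onto unprivileged host UID 100000. A sketch of that translation — the range values are illustrative, not what any particular system uses:

```python
# Sketch of user-namespace UID remapping, as configured by
# /etc/subuid-style entries: (start_of_host_range, range_length).
# An entry like "dockremap:100000:65536" gives the values below.

SUBUID_START, SUBUID_COUNT = 100000, 65536

def host_uid(container_uid: int) -> int:
    """Translate a UID inside the container to the real host UID."""
    if not 0 <= container_uid < SUBUID_COUNT:
        raise ValueError("UID outside the mapped range")
    return SUBUID_START + container_uid

# "root" inside the container (UID 0) lands on host UID 100000,
# an ordinary unprivileged user as far as the host kernel cares.
```

This is why a container-escape as "root" is so much less scary with user namespaces on: the escaped process holds an unprivileged host UID, not host UID 0.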
But in terms of managing secrets, there are two open source products — Vault from HashiCorp and Keywhiz from Square — designed to help you manage secrets and keys, especially in a container environment. They're both open source, so check those out. As I mentioned, the Docker API has no concept of authentication or authorization at this point. They're working on that, but be aware: if you have to expose the API publicly, or even on your network, put a proxy in front of it at minimum, so at least you can get SSL/TLS on the thing, certificate-based auth — anything to layer on top. I mentioned seccomp, SELinux, AppArmor. The people who are big fans will say, oh, it's easy. No, it's not. At this point, my opinion is that seccomp, SELinux, and AppArmor are really for the 1%, for the most part. The tools for managing them are not there yet, especially at scale. This is actually my biggest nervous point about Docker: the tools you need to make these things much safer are really hard to use, and really hard to use at scale, which means a lot of containers are going to get deployed in a less than ideal state because of that. I am totally one of those people who tries to run something with SELinux turned on, and then the solution to fix it is to turn SELinux off, because that is the fastest route. So I am one of those people. Logging is getting better, but it still needs some help. Orchestration — again, if you do anything at scale, you probably end up looking at something like Kubernetes or Mesos, or both. Those are still sort of for the 1%; they're early on. If you're not Google or a handful of others, you're probably not using them yet, and they're hard to use. So that's where things are; that's what's left at this point.
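Until the API grows real authentication and authorization, the cheapest mitigation is exactly what's described above: terminate TLS in front of the socket and require client certificates. A minimal sketch of the server-side policy for such a proxy, using Python's standard `ssl` module — the certificate file paths are placeholders, and a real deployment would wrap this around an actual reverse proxy:

```python
import ssl

def api_tls_context(ca=None, cert=None, key=None):
    """Build a mutual-TLS context for a proxy sitting in front of the
    Docker API: encrypt the wire and reject any client that doesn't
    present a certificate signed by our CA. The ca/cert/key arguments
    are PEM file paths (placeholders here); pass None to configure the
    policy without loading files."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if cert:
        ctx.load_cert_chain(certfile=cert, keyfile=key)   # the proxy's identity
    if ca:
        ctx.load_verify_locations(cafile=ca)              # CA that signs clients
    ctx.verify_mode = ssl.CERT_REQUIRED   # no client cert, no API access
    return ctx
```

With `CERT_REQUIRED` set, the handshake itself becomes the authentication layer — anyone hitting the API without a cert from your CA never reaches the Docker socket at all.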
And then like I said, I'll post the resources — I'll send the latest slides along with all the resources, because you really don't want to be trying to take screenshots of some of these URLs with all the tools and everything. Just to finish up: basically, it's not as bad as it used to be. A year ago it was horrible; six months ago it wasn't so bad. We're pretty much in a place where Docker is usable. If you are at that far right end of the curve, it's really usable. But it's relatively safe to use at this point. And again, if you go into production, please: one container per VM at this point. And that's my story for the day. I have just a minute or two for questions, if there are any. Otherwise I'll give you five minutes back. [Audience question, partially inaudible, about container-focused operating systems.] I'm not familiar with the second one. Those are coming along. I haven't done a real deep dive into the security of those in particular. I assume at this point that they all have the same general issues to deal with. Containers are containers, regardless of the OS, at this point. [Audience comment:] Last month Docker was ported to FreeBSD, implemented in a FreeBSD jail on ZFS. That combination itself should make for some interesting security issues — or rather, fixes, rather than issues. Just to make sure everyone heard that: our audience member said that last month Docker was ported to FreeBSD, working with jails. So that will lead to some interesting stuff. I agree. I wasn't aware of that; that sounds cool, though, so I'll definitely have to check it out. Cool. Thank you very much, everyone.