Let's welcome Andres Cidel, and let's find out something new about how to create secure production environments using Docker.

Hello. Can you hear me? Yes? Okay. Thanks for coming. First of all, as my mate said, I'm going to talk about Docker security, and I'm going to give you some best practices you can apply when you use Docker. First of all, who am I? I'm Andres Cidel from Vincorvis, which is a Mexican software company, and I'm a full stack developer. I've been using Docker for almost two years now, but I've also been doing some dev operations, automating tasks, managing infrastructure. So nowadays, I don't know what I am.

So let's talk a little bit about the content of this talk. We're going to see how containers work, what happens behind the scenes when you use Docker containers. We're going to list the main concerns you have to keep in mind when you use Docker, how to create and maintain secure images, because images are the base of security in Docker, and how to limit risk with good practices. There are a lot of tips that we're going to share with you.

So how does Docker work? The first thing we have to keep in mind is that containers are not virtual machines. Virtual machines use a hypervisor to manage the execution of a guest operating system. Containers are quite different: containers are a bunch of processes. Containers can run services, you can install packages in them, containers have network interfaces, but they are not virtual machines. They feel like virtual machines, but they are not.

Containers are possible because of two kernel features, which are cgroups and namespaces. So what are cgroups? This feature limits, accounts for, and isolates the resources of the host: CPU, memory, disk, network I/O. It says, okay, I'm going to give two gigabytes to this container, I'm going to enable these network features for this container.
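As a sketch of what those cgroup limits look like from the Docker CLI (the image name and the exact limits here are just illustrative, not from the talk):

```shell
# Give the container at most 2 GB of RAM and one CPU;
# the kernel enforces these limits through cgroups.
docker run -d --memory 2g --cpus 1 --name capped nginx

# Inspect the memory limit (in bytes) that Docker configured.
docker inspect --format '{{.HostConfig.Memory}}' capped
```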
And it manages this as a hierarchical group: all the children of a process are going to have the same limits.

And what are namespaces? Namespaces are the feature that isolates containers: processes get their own view of the system. When you run ps in a container, for example, you're just going to see the processes that are running in that container. So you can isolate the file system, memory, users, and networking. That's the reason people say containers are like chroot on steroids. So cgroups limit how much you can use, and namespaces limit what you can see.

Okay, and next, what are kernel capabilities? In a traditional Unix system we have two kinds of processes: privileged processes, whose effective user ID is zero — basically the root processes — and unprivileged processes, whose effective user ID is not zero. These processes are subject to full permission checking based on the process's credentials. And Linux kernel capabilities allow us to fine-grain this access control system. For example, the CAP_CHOWN capability makes arbitrary changes to file owners: it allows us to change the ownership of files, and we can make these changes on every single file on the system.

Okay, if you browse the source code of Docker, you're going to see a list of capabilities. This is the list of capabilities that Docker grants by default. And if you want to see the complete list of the capabilities supported by the kernel, that's the URL, and you can study it.

So what are the main risks when you use Docker? What are the concerns? First of all, the Docker daemon. The Docker daemon requires root privileges, so if you control the Docker daemon, you will have root access. And if you enable the REST API, it is not authenticated by default. So if an attacker discovers your API, remember that...
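To make capabilities concrete, on any Linux box you can read a process's capability sets straight out of /proc; this small sketch (not from the talk, and Linux-only) inspects the current shell:

```shell
# Each Cap* line is a 64-bit bitmask; CapEff is the effective set
# actually used for permission checks. A root shell typically shows
# a non-zero CapEff, an unprivileged one shows 0000000000000000.
grep '^Cap' /proc/self/status
```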
Well, if you control the Docker daemon, you will have root privileges on the host. So how can I secure the REST API? Well, you can enable TLS by using the --tlsverify flag when running the daemon, and you can create a CA, a server key, and client keys. That's for authentication, but what about authorization? Docker's out-of-the-box authorization is all or nothing: you can do everything, or you can do nothing. But Docker provides a generic plugin API, so you can create an authorization plugin yourself and get around this problem.

And container escape is another concern. This is caused by allowing privileged operations, not removing all unneeded capabilities, weak network defaults, and, obviously, buggy application code. It means that containers sometimes have a lot of capabilities, and if you add other capabilities that you may not need, this could be a problem. Remember that a user in a container with root capabilities could be root on the host.

So how can you prevent this? Well, I'm going to explain each item of this list. First of all, drop capabilities. As we saw, with capabilities we can perform operations that require root privileges. But, for example, in this case I'm dropping all the capabilities: running a container like this, you can basically do nothing privileged. You can also drop a single capability — drop CHOWN, for example, and that container won't be able to change file ownership. And you can combine flags: for example, drop all capabilities and add back only the capabilities that you're going to use.

And I think you're going to start asking: how can I know which capabilities I'm going to need? The answer is that you have to keep in mind which capabilities you could use, or you can study them. And if you think a process in your code could use an unusual capability and you are not sure, you can run your tests.
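The capability-dropping examples described above look roughly like this (alpine is used as a stand-in image, and my-web-server is a made-up name):

```shell
# Drop every capability: the container still runs, but privileged
# operations such as chown(2) are denied even to root inside it.
docker run --rm --cap-drop ALL alpine chown nobody /tmp || echo "chown denied"

# Drop a single capability.
docker run --rm --cap-drop CHOWN alpine chown nobody /tmp || echo "chown denied"

# Drop everything, then add back only what the service needs,
# e.g. binding to a port below 1024.
docker run --rm --cap-drop ALL --cap-add NET_BIND_SERVICE my-web-server
```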
If you have a set of tests, you can run them and see whether you need to drop or add a capability. And remember: containers should have no more than they need.

Okay, and this one is easier: enable AppArmor. AppArmor is a Linux security module which is in charge of securing the operating system and its programs. AppArmor uses security profiles to create a granular configuration on top of capabilities for your containers. And if you are using Ubuntu right now, it's probably already installed and running. You can check it with the aa-status command, which lists the profiles that are loaded. Well, once you create your profiles, you can load them with a simple command, and if you want a container to run with a profile, you just have to indicate the name of your profile, and that's all. Sometimes it's quite simple. And there is a tool called bane, which is used to create profiles in an easy way.

And define a user. Always, or in most cases, it's better if you create a user inside your Dockerfiles with the useradd command and the USER directive, rather than running your containers with root access.

And immutable containers. This is a big topic — it could be another one-hour talk — but I'm just going to list the benefits of this approach. Basically, the benefits are limiting attack scenarios, helping prevent compromise of your containers, simplifying development, and allowing for easy upgrade paths. And it's very easy to run, just with the --read-only flag: you can freeze the file system. If you run a container with this flag and, for example, an attacker breaks into the container, the attacker won't be able to write or edit any file. Nothing. So it will be better. And you can combine it with volumes: you can freeze the file system of your container, but add a volume and write on that volume, so you can combine these operations.
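Putting the AppArmor and read-only pieces above together, the commands look roughly like this (the profile name, image names, and paths are made up for the sketch):

```shell
# Load (or replace) a custom AppArmor profile in the kernel...
sudo apparmor_parser -r my_profile

# ...and start a container confined by that profile.
docker run --security-opt apparmor=my_profile nginx

# Freeze the root file system, but keep one writable volume for data.
docker run --read-only -v /data/logs:/var/log/app my-app
```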
And another big concern is image provenance. When you use systems that communicate over networks, trust is the central concern. With the Docker engine, you pull images and you push images. So how can you verify that you're getting the exact image that the developer created? Or how can you know that the image has not been tampered with? Docker has solved this problem with Docker Content Trust, which basically signs the images with certificates, using digital signatures. And it's very easy to activate: just export DOCKER_CONTENT_TRUST=1. After that, the operations that you do as a publisher — for example build, push, or pull images — are going to work with this feature. If you're a publisher and you are using Docker Content Trust, the first time, Docker Content Trust is going to create the keys, and everything happens behind the scenes, so you don't need to worry about anything. You don't have to learn a special combination of commands, etcetera.

And why does Docker use Docker Content Trust and not plain GPG? Because Docker Content Trust creates digital signatures with timestamps, so a publisher can expire images: for example, this image is no longer valid, so you have to download this newer one. With this approach, you will have up-to-date images in your containers.

And of course, you have to create secure images, which is the next topic. How can you create and maintain secure images? First of all, verify the software. This is very important: you have to verify the authenticity of the software that you are downloading. When you're using a package manager, it takes care of this for you, so you don't have to worry too much. But if you are downloading raw files or binaries, you should use, for example, HTTPS instead of HTTP, and you should check for signed files and valid checksums, with GPG for example, when it comes to third-party repositories.
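For the "verify what you download" advice, here is a minimal checksum-verification sketch in shell; the file and its digest are fabricated on the spot just to keep the example self-contained (in reality the publisher ships the .sha256 file alongside the download):

```shell
# Pretend this is a binary we just downloaded over HTTPS.
printf 'example payload\n' > package.bin

# The publisher would provide this digest; we generate it here
# only so the example runs on its own.
sha256sum package.bin > package.bin.sha256

# Refuse to install unless the checksum matches.
if sha256sum -c package.bin.sha256; then
    echo "checksum OK"
else
    echo "checksum MISMATCH, aborting" >&2
    exit 1
fi
```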
And obviously, you could apply this in your bash scripts or your build files too, not just in Docker.

Write better Dockerfiles. This is important, because if you want consistency in your images, it's better to pull a specific tag: for example, FROM alpine:3.4 instead of FROM alpine. And never run as root. This is important — this is super important. Always add the USER directive; and if you use the USER directive, you may need the useradd command first. Drop privileges as soon as possible. And if you think you have to use sudo, don't use sudo: it's better to use gosu.

You can use minimal base images. The Ubuntu and CentOS images, for example, are tens of megabytes, while Alpine, a minimal Linux base image, is about 5 megabytes. You can reduce the attack surface, the complexity, and the size of the images. This is an example of using Alpine: with these lines you can install the Python runtime, very easily. apk is the package manager of Alpine.

Some other best practices: upgrade whenever possible, especially when it comes to security features of the Docker daemon and client, the Docker engine. Avoid using Docker with the --privileged flag: this flag removes almost all the limits that a container has, so it provides no security. Avoid giving access to the docker user or the docker group — as I mentioned, if you have control of the Docker daemon, you can have root access on the host. Avoid providing access to the Docker Unix socket or the REST API to potentially untrusted code or containers; especially when you use Jenkins, for example, and your Jenkins manages the Docker daemon in a certain way, you have to keep this tip in mind. Consider using Docker Bench for Security: as the repo says, it checks for dozens of common best practices around deploying Docker containers in production. It's a script, and it requires elevated privileges to run. And remove setuid and setgid binaries.
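A small Dockerfile pulling these points together — a pinned Alpine tag, Python installed via apk, and a non-root USER. It is written here via a shell heredoc so the whole sketch is one script; the user name and image tag are just examples:

```shell
cat > Dockerfile <<'EOF'
# Pin a specific tag instead of a floating one.
FROM alpine:3.4

# apk is Alpine's package manager; install the Python runtime.
RUN apk add --no-cache python

# Create an unprivileged user and switch to it.
RUN adduser -D -u 1000 appuser
USER appuser

CMD ["python"]
EOF

docker build -t my-python-app .
```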
I'm sure you are not going to need them in most cases, so it's better not to have them. When publishing ports and exposing containers to the network, keep in mind that Docker binds to all interfaces by default, so you have to be sure that you are exposing the container to the right interface. Follow best practices when writing Dockerfiles; on the internet you're going to find plenty of information about this topic. Limit inter-container communication: by default, containers can communicate with other containers, and even if you are not using the --link flag, they can send raw packets. And limit the memory; this can help you prevent denial-of-service attacks. This is the guide that I based this presentation on; Docker is working a lot to provide good documentation about security. So, thank you for listening. Questions?

Hello. I had a few surprises using Docker with iptables, because it injects certain rules for networking. Do you have any tips on how to deal with that elegantly, so I don't write some rules and then notice that Docker is actually bypassing them?

Is the purpose of your container to manage the network, or what are you going to do?

So I had a container; basically I was running Kibana. I just needed to expose the Kibana port and block everything else. So I wrote some iptables rules on my host, and then I realized that Docker had inserted its own. I wanted the Kibana port to be accessible only from localhost, and then I noticed that my iptables rule saying "accept only from localhost" was being bypassed, because Docker inserted its networking rules and they were short-circuiting mine. It seems to be a common problem. Do you know of an elegant solution?

I don't know; it depends on your stack, but we can discuss it after the talk if you want.

Any more Python-specific security issues with Docker?
For Python — I've been using Docker with Python for almost one year, and I think this advice applies to almost all languages, Python included. For example, I try to create immutable containers: if your code has a vulnerability, you can mitigate it with a read-only file system. So that's all.

Any other questions? Okay, if not, let's say thank you once more to Andres. Thank you very much.