Very big discussion. You made me scared, but I encourage the discussion. Okay, so a little background about myself. My name is Vincent, I'm originally from Belgium, and I lived ten years in Vietnam. While I was in Vietnam I got interested in Linux containers and started playing with Docker back in 2014, maybe a year after it got popular. I started to join the meetup groups there and give presentations about it; I was really interested in it. My job was not directly related to Linux containers. I'm a software engineer by background, but I was doing more consultancy kinds of things, and I really liked this evolution that was happening, so I started to invest more time into containers, and that's why I'm here.

So I'm curious: how many of you have, first of all, heard about Docker? Okay, pretty much everyone. Who has actually played with Docker? Okay, a good amount. Who thinks Docker is totally secure? Okay, one person. That's basically a good illustration of where we are. Yes, do you have a question? [Audience: Who thinks Docker is the only container concept out there?] Not me. Yes, right. Oops, I don't know what happened here. That's a good question: of course Docker didn't invent containers, right? They just popularized them.

So, when I initially proposed this talk to one of the organizers, I said I was playing with Rocket (rkt), the alternative container runtime from CoreOS. One of the core principles of Rocket is that it is composable and secure from the bottom up. They launched Rocket back in 2014, and it resulted in the container wars: a lot of people looking at Docker were saying it's really not secure and Rocket is a much better approach. I was playing with Rocket, and I thought, wow, Rocket is really amazing. But while preparing for this talk, I decided to have another look at Docker. So when was the last time anybody here looked at Docker? Is it more than a year ago, or who has recently, in the last few months, played with Docker? Right, just a few people. Docker has in the last year been working a lot on content trust, image verification, and all of these processes around containers.

Now, back to the concept: Docker is not containers, correct? There were containers long before. What Docker did was come up with a tool set that lets you easily build, ship, run, and deploy containers. As far as I can see, unless you can contradict me, that was not so easy before; it was not the developer on his laptop just running a container. Sorry, I'm not a salesperson. [Audience comment: container technologies existed for years before Docker.] Okay, right. Yeah, and one of the latest announcements is actually going in the same direction. After Docker became so popular, at one of the first Docker conferences back in 2014, Google open-sourced the way they run containers internally and made it an open-source project called Kubernetes.
I started to play with Kubernetes, not right at the beginning, but later on, and I think the way it works is amazing; it's really nicely composable. But then what Docker announced just two weeks ago, on June 20: all of this Kubernetes-style orchestration inside the engine itself. At first I thought, what has happened? Why would you ever do that? But if you look at it, it's the same move they made before. Maybe I'm drinking the Kool-Aid, but it's the same trade-off: either you go for the composable, nice, but harder-to-deploy, more technical thing, or for the easy package that's easy to run, easy to use, and that everybody uses. So it's a trade-off, I don't know. Yeah, go ahead.

[Audience: What I'm missing in the talks about containers and the engine is use cases. We get a lot of people saying this is proper and that is not proper, but nobody actually stands up and says, okay, let's look at the use case and decide whether it fits. Many of the problems being discussed in the Docker universe exist because people try to do things they shouldn't be doing. Take the discussion about only having one service running in a container: people want to work around it and find ways around it, but if you cannot live with just one service per container, then maybe Docker is not the container solution you need for your problem. These kinds of talks could be better about that.]

Yeah, please. Also: I've been in Singapore for a month now and I'm involved with Docker Singapore, so please come. I think we would really appreciate your talk there as well. Or maybe not, but we can at least have a good discussion about it. You made a face like maybe you won't appreciate it, but anyway. I actually have a Docker shirt, but I didn't wear it because I didn't want to look all Docker. Also, I have to apologize a little, because I gave a talk yesterday about the new DockerCon announcements and I had underestimated the effort of preparing for this talk, so if I don't go into as much detail as you want, I'm very sorry. I hope I can at least give you a guideline, or pointers to the new announcements if you don't already have them.

So, first thing: Docker received a lot of criticism about security. They have been driving the security topic hard since 2015, and they keep going on about it. The first thing is that they visualize a development pipeline where security applies at every stage. It begins when a developer builds the first image, where you need to have actual trust in the developer: you need to be able to sign that image, and you need to be able to verify that what you download from an image repository has been signed by the proper authority, meaning the actual developer of that image or repository.
So it starts with signing the images built on developer laptops, goes on to verifying signatures on the repository (the registry, in the Docker world), verifies again when an image is deployed, and also makes sure that what you deploy, on the cloud or on-premise, runs securely. They have been focusing on every stage of this pipeline to make it more secure.

First off, I think everybody is familiar with containers, right? The difference from VMs is that containers share the host kernel. They use kernel features such as control groups (cgroups) to control access to resources in the kernel, and namespaces to isolate processes, basically isolating everything running inside a container. So it does feel like a lightweight VM, and that's where a lot of the confusion comes from: people treat it like a virtual machine, but it's not.

Very quickly on cgroups: cgroups are hierarchies attached to resources such as CPU or memory, and each process lives within a group in the tree. You can assign and control the resources of the whole group of processes. That's how, when you run a container, you can assign the memory that the container is allowed to use, and so on. That's been in the kernel for a while.

Namespaces: Docker also uses the kernel's namespaces to isolate processes. A container's network is isolated with the net namespace; there's the mount namespace, UTS for hostname isolation, the PID namespace for process isolation, and user namespaces, which Docker has supported with UID/GID remapping since February 2016.

Apart from this isolation, which by itself is not really a security feature, there is also a default set of capabilities assigned to containers. By default, Docker drops a whole bunch of capabilities, such as the ones you would need to run something like an SSH daemon. But when you spin up a container, you can identify, fine-grained, which capabilities you want to add to or drop from that container; you have that control. Additionally, AppArmor profiles can be passed in, or more recently seccomp profiles, to specify exactly which system calls the processes within a container can execute. So there is this additional configurability; all of this is configuration of how a container runs on your system.

[Audience: What about SELinux? Are these optional?] These are applied at the kernel level, and there are defaults. The defaults differ because different distributions ship different security modules: some use AppArmor, while Red Hat-based distributions use SELinux. You're asking whether they're optional or whether you must have one of them? It depends on the distribution what's available, and Docker will use a default profile for the distributions it supports. The defaults are not necessarily the best for your use case, so they need to be reviewed. For example, for AppArmor profiles there's a tool from Jess Frazelle, one of the Docker contributors who went to Mesosphere and now works at Google. She developed a tool called bane that helps you create an AppArmor profile for a container.
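Her bane tool aside, the knobs just described all map onto standard `docker run` flags. A minimal sketch, where the nginx image, the profile paths, and the limits are just placeholder assumptions:

```bash
# Drop every default capability, then add back only what the workload needs;
# nginx needs NET_BIND_SERVICE to bind to port 80.
docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE nginx

# Constrain resources through cgroups: a 256 MB memory limit and a
# relative CPU weight for the whole group of processes in the container.
docker run -d -m 256m --cpu-shares 512 nginx

# Pass a custom seccomp profile that whitelists specific system calls
# (profile.json is a placeholder for your own profile file).
docker run -d --security-opt seccomp=profile.json nginx

# Or apply an AppArmor profile, e.g. one generated with bane
# (docker-nginx is a hypothetical profile name already loaded on the host).
docker run -d --security-opt apparmor=docker-nginx nginx
```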
So you can configure the profiles, but the defaults are there; by default a certain set of capabilities is granted, and a lot of people just use the defaults. In terms of security, the actual defaults are definitely something you want to review. [Audience question.] Yes, this one is on the host: it's the Docker host, running the Docker daemon, that spins up the container and applies a certain profile to it, giving the container the necessary access or eliminating what it does not need.

All right, next. With all of these things, they say it's "secure by default": out-of-the-box default settings and profiles, plus additional granular controls to customize the settings, give you the ability to do a secure-by-default setup. Now, I took these slides from the Docker presentation, and I actually thought about putting a question mark here, because I will illustrate some of the defaults that are not very secure.

Okay, so that was the first part. The second part is that a lot of people have been asking: what's inside my container? How do I control the packages and vulnerabilities inside a container? When you build an image, it contains all of the packages within the image, and who is responsible for upgrading those? Is it the developer, who defined the whole file system that will be inside the container? Who is going to take responsibility for updating if there are vulnerabilities? How do I know where the image came from? How do I verify the trust? How do I verify who built this container? And how do I keep everything safe? (I'm reading out the slide.)

So, the first thing is their commercial offering, which runs on every official repository on the Docker Hub. The Docker Hub is the public registry where people push images: public repositories that everybody can download, or private repositories you can pay for. Initially there was an announcement of Project Nautilus, which was doing deep image scanning on the images on the registry; it was enabled only for the official images at first and later expanded to more images. I think this is actually one of Docker's main commercial targets, because if you are using the open-source registry to host images on your own infrastructure, you don't get any of this image scanning; it's part of the Docker Trusted Registry, the commercial offering for enterprises.

So how does it work? When a developer pushes an image to the Docker Hub or a repository, a security scan is triggered. It extracts all of the image layers, extracts the binaries from the layers, and sends every binary to the scanner, which checks the hash of every binary against the CVE database to identify any vulnerabilities within it. The results are stored in a database, which is presented to the user when he goes to see his image overview. That's how they explain it. Additionally, they subscribe to CVE notifications, so you should get a notification by e-mail in case a new vulnerability has been found and disclosed.
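Coming back for a moment to reviewing the defaults rather than trusting them: a quick way to see what a running container actually got is `docker inspect`. A small sketch, where `web` is a hypothetical container name:

```bash
# What was explicitly added or dropped relative to Docker's default set?
docker inspect --format 'CapAdd:  {{.HostConfig.CapAdd}}'  web
docker inspect --format 'CapDrop: {{.HostConfig.CapDrop}}' web

# Which security options (seccomp profile, etc.) were passed in,
# and which AppArmor profile ended up applied?
docker inspect --format 'SecurityOpt: {{.HostConfig.SecurityOpt}}' web
docker inspect --format 'AppArmor:    {{.AppArmorProfile}}' web
```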
The alternative to this Docker image scanning is CoreOS Clair, which was announced in April or so, not too long ago. You can run Clair yourself, pull your image locally, and run Clair against it to verify whether there are any known vulnerabilities. When you spin up Clair, it downloads the CVE databases, and then you point Clair at your image; so with Clair you can do the scanning locally as well.

Then the second part is the Docker Bench script. It's a bash script that you run against your Docker host and its running containers; it inspects them, gives a full overview, and offers general advice about settings you should maybe change, based on the CIS Docker security benchmark recommendations. It's also open source, so you can integrate it into your pipeline. Actually, yesterday a person giving a talk at the meetup said he runs Docker Bench as part of his CI pipeline: every time an image gets built, the Docker Bench script runs and gives a full report back on any bad practices in there. He did have one comment: sometimes the recommendations conflict. One recommendation is not to write to certain paths and to use a tmpfs file system, and on the other side there's another warning when you are using a tmpfs file system. So it's a little bit conflicting, but Docker Bench is an interesting tool.

Then the next part is content trust. Docker Content Trust is implemented by Notary, which implements The Update Framework. Anybody familiar with The Update Framework (TUF)? TUF grew out of the Tor project's updater; it was spun out to provide a general way of verifying that updates being shipped are signed and were created by the people actually authorized to make those updates. TUF does that by defining a root key and then a hierarchy of keys under it: among others, a timestamp key to verify that the content is still fresh, a snapshot key, and several more. I don't know them all by heart, but if you search for The Update Framework, it's very interesting to read.

So Docker Notary is a Go implementation of The Update Framework, and it is integrated into the Docker engine. And this is one of the things that is not on by default; I think for backwards compatibility, when you do a docker push or a docker pull, it just pulls the image and doesn't do any verification. If you want to use Notary, if you want to actually trust the content you're pulling and verify the signatures, you need to set the DOCKER_CONTENT_TRUST environment variable. If you set it, then from then onwards the docker pull, docker build, and docker run commands will all verify the signatures of the images. So by default it's not on; it has to be turned on. That's one thing.
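To circle back to Docker Bench for a second: running it is essentially a one-liner on the Docker host, following the project's published README:

```bash
# Fetch and run Docker Bench for Security; it audits the host, the daemon
# configuration, and all running containers against the CIS Docker Benchmark
# and prints a WARN/INFO/PASS line per check.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```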
With Docker Content Trust enabled, as soon as you create a new image and push it into the registry, you will also have to generate a root key and a repository key, which are then kept on your local laptop. I actually hadn't played with any of this before I started preparing; it's very interesting. Anybody played with Docker Notary, Docker Content Trust? No? I have a bit of a demo script, basically following the standard documentation, so maybe I can show that. Right. Well, that's the part about verifying the image; together with the image scanning, that's the verification and the bill of materials.

And then the very last part I want to talk about is the new announcement of the Docker engine with built-in orchestration. How many of you are aware that this new version, Docker 1.12, was launched? Three, four people. It includes an orchestration framework, so I will talk a little more about that. I did a demo yesterday; I didn't set up the demo environment today, but I may try, because I'm going very fast and I'm almost finished.

So let me have a quick look at some of the things that were interesting to find out. First: if I do a docker login, it asks me to enter my username and password. And this is not my real password; I mean, it's the real password for this account, but it's a separate account, not the one I normally use. Okay, that didn't work. On this machine, my local OS X machine, I enabled the credential store, and this is a Linux virtual machine where I did not enable the credential store, so I should do the login here. Docker login. Okay, wrong again, sorry; unless the internet has stopped working. Okay, now it worked.

So if I now go to the .docker directory and open config.json, in there is a string which is my credentials, base64-encoded. If you have access to this file, you basically have the username and password. Here I'm using jq to read the JSON file, picking up the auth entry, passing it to openssl to base64-decode it, and echoing the result. So by default my username and password are right there, essentially in plain text. I was a little bit surprised to see that.

The interesting part, though: you have the same on Linux, but I've done this on my OS X machine now. If I look at my config file here, I've set up my config to use the credential store of the OS X keychain. To achieve this, I had to download a separate tool. So this is what I did earlier: I did a docker login, looked at the config file, queried the data store, and decoded it. On OS X you can use the native keychain by downloading the Docker credential helper for the OS X keychain. When you extract and enable it, you can set up your config file to use that credential helper, and then whenever I log in, the credentials are no longer stored inside my Docker config file; they're inside the credential store.
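Put together, the two configurations from this demo look roughly like this; the registry key is the Docker Hub default, and overwriting config.json wholesale is just for illustration:

```bash
# Without a credential store, docker login writes base64(username:password)
# straight into ~/.docker/config.json; anyone who can read the file can decode it.
jq -r '.auths["https://index.docker.io/v1/"].auth' ~/.docker/config.json \
  | openssl base64 -d

# With the docker-credential-osxkeychain helper installed, pointing the
# config at it keeps credentials in the OS X keychain instead of the file.
cat > ~/.docker/config.json <<'EOF'
{
  "credsStore": "osxkeychain"
}
EOF
```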
So if I ask the credential store to give me my credentials, obviously it's going to return them, because that's how the Docker engine gets them back. Of course, being in the credential store, I assume it's safer; I mean, tell me it's encrypted and tied to my login, right? So that's one of the things Docker doesn't do by default: you have to go in, install the credential helper, and change your configuration file to set Docker up like that.

The second part is Docker Content Trust. As I said, you need to set the DOCKER_CONTENT_TRUST environment variable. When you do that and try to pull an unsigned image, in this case an image that I created without signing it, it says it cannot pull the image. From this point forward, my Docker client does not trust unsigned images anymore. Again, it's something I have to set explicitly; it's not done by default. But then, initially when they were doing Docker, it was all about just getting things up and running, right? They didn't care so much about the security part, so now you have this backwards compatibility.

The way it works is through Notary. With content trust enabled, when I push a signed image it actually prompts me: you are creating a new root signing key, and that signing key is then stored locally for Notary. With the notary client you can get the signatures and the metadata back. This is what happens behind the scenes with Docker: Notary is integrated into Docker, and behind the scenes it extracts the tags. Images are signed by tag, because in Docker every time you push a tag it's a different image, or a different layer, and Notary gives back the digest of each tag and who signed it.

In The Update Framework you have several roles. One is the root key, one is the timestamp, I mean the snapshot; I forget. If you see "targets", that's basically the administrator of the repository; if you see "targets/releases", that's a delegated role: the administrator of the repository has delegated access to another person, another contributor. So you don't have to share keys with other people; you can ask them to generate a certificate, sign it, and then import it into your registry, so you authorize them to sign images as well. You have control over who publishes data to the image repository.

And then you can also see the keys that are currently stored on my machine. This is showing that I have these keys stored in my trust folder, and there's probably something there I shouldn't be showing, I don't know. So that's Docker Content Trust. There was a lot of media about it, but I had never really looked into it, and as I asked earlier, not many people have played with it, so it's very interesting to look at. And those are the types of roles: root, timestamp, snapshot, targets, and then delegations. I mentioned targets and delegations, and snapshot and timestamp.
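The demo steps, condensed into commands following the standard Docker documentation; vincent/whalesay is a hypothetical repository:

```bash
# Opt in to content trust for this shell: pull, build, push, and run
# now refuse unsigned images and verify signatures on signed ones.
export DOCKER_CONTENT_TRUST=1

# The first signed push prompts for a new root key and a repository key
# passphrase; the private keys land under ~/.docker/trust on this machine.
docker push vincent/whalesay:latest

# Inspect the signed tags, their digests, and the signing roles with the
# standalone notary client, pointed at Docker's hosted Notary server.
notary -s https://notary.docker.io -d ~/.docker/trust list docker.io/vincent/whalesay

# List the root, targets, and delegation keys stored locally.
notary -d ~/.docker/trust key list
```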
Back to the roles: each of these keys does a different job. The timestamp key guarantees freshness: in case there is an image on the Docker Hub that has not been updated for a long time, you will get a warning; the Docker engine will tell you this image has not been updated in a while and ask whether you're sure you want to run it. So you can be sure the image you're getting is fresh. That's part of The Update Framework's features.

And then, finally, the Docker 1.12 announcement. In Docker 1.12 they took part of the etcd clustering work, a Raft consensus implementation, and put it inside the engine. They call it SwarmKit, and with it the managers form a consensus cluster and do leader election. The second part is that once a leader has been elected, the leader runs a certificate authority, and that authority issues certificates for every worker in the cluster. If the leader dies, another node can take over the leader role, and it will verify the workers that are accessing the cluster.

Now, by default, when I did my demo, if you do docker swarm init and then docker swarm join, nodes were automatically accepted, which is not the recommended way to deploy Docker swarm into production. You should either use a secret token that nodes present to be able to join the cluster, or use manual acceptance, in which case an operator has to approve each node joining the cluster. They have built that in. Every certificate is rotated; the keys are constantly refreshed. They have a minimum expiry of 30 minutes, and rotation happens within a 50-to-80-percent window of that lifetime, because if every key expired at exactly 30 minutes, all of the nodes would request new certificates at once and the certificate authority would be swamped. So they stagger it a little.

That's quite interesting, because if you're familiar with setting up a Kubernetes cluster, you need to set up a certificate authority, set up your public key infrastructure, sign your keys, distribute all of this, and it's really, really not easy to set up. I was very pessimistic about this, but after actually playing with it and seeing how easy it is, I think it actually has quite a lot of potential. And that's where I finish my talk.
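Before questions, here is the hardened join flow condensed into the commands as they landed in the final 1.12 CLI; the addresses and the token value are illustrative:

```bash
# On the first manager: create the cluster; this node starts the built-in CA.
docker swarm init --advertise-addr 192.168.99.100

# Print the secret token a worker must present to be accepted.
docker swarm join-token worker

# On each worker: join with that token instead of being auto-accepted.
docker swarm join --token SWMTKN-1-<generated-token> 192.168.99.100:2377

# Tighten certificate rotation; 30 minutes is the documented minimum.
docker swarm update --cert-expiry 30m
```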
Any comments, questions? Hopefully not too hard, but yeah. Yes.

[Audience: A question about adoption. We demonstrated this technology for one of the top 20 banks, alongside high-tech startups, and after seeing all this great technology the seniors at the bank said, when are we going to adopt this? It will take us a few years. How do you see this?]

Right, let me tell you: this Docker 1.12 is a release candidate, and it still needs to grow a lot, I think. So it will still take a while for all of this to be fully enterprise-ready. [Audience follow-up: With all this exciting technology, what's your own approach to adopting it?]

For me right now: I just joined as a DevOps person, well, DevOps isn't really a role definition, but anyway, I joined to work on the build automation system. What I definitely want to do in the organization I joined is drive the adoption of containers, at least for setting up developer laptops, so that when a developer comes on board he doesn't have to install the whole stack. Running Docker in production? Maybe; we're not there; long-term, maybe. I'm actually interested in Kubernetes and the alternative flavor of it, "rktnetes", which uses CoreOS rkt as the runtime, although after investigating all of this I'm still looking at options. I'm a big fan of the CoreOS tools like rkt and Clair. In terms of container technology there are still so many players, and even rkt is not nearly as established as Docker. So for me it would purely be about accelerating the build pipeline and the development process; that's the area of implementation.

I see people nodding, so I said something good. [Audience: You're asking whether adding this stack to an organization will benefit them quickly, or whether it takes a long time to see the benefits?] It depends. I think one of the key questions is what the developers' attitude towards containers is. If you're looking at using containers on developers' laptops, you may face some difficulties; they will definitely not want to develop inside a container. I see it more like this: if you're developing in a new architecture, you can at least use containers to set up all of the other services you're not currently working on, and develop against them with just your own service running natively on your laptop. It's a much faster way than setting up a Vagrant box and running all of the services there, or running them all locally. So, your question was... I didn't really answer your question, did I?

[Audience: You have to analyze the status of the team in the company. I see a lot of companies that don't really benefit from using any of this, because they have a lot of very simple services that do not require much management, and their ops people know how to deal with them.]
I forgot to mention, I joined a startup.

[Audience: Right; if you're working at a startup, of course you really benefit from all of this, because it fits what you're offering to your customers: you have to scale out fast and tear it all down when you don't need it anymore. But that does not apply to so many companies out there, including the more traditional ones some people here work for. What I like to use a lot, for instance, is LXC, because it has a very simple learning curve for a lot of people: it's a system container, so they know how it looks inside. If you're a developer on Linux and you need to try out a lot of things, installing packages and libraries and development environments and whatever, and once in a while you have to reinstall your notebook because it's totally busted with stuff you can't get rid of anymore, then you use containers, the way you would otherwise use a virtual desktop. The nice thing about containers is that they tear down so cheaply that you can run far more containers than virtual machines. Then you have legacy systems that use resources in ways that make it sensible to migrate them into containers: you can keep copies of them, and if a machine has a problem, you use the capability of moving containers around as a means of keeping alive legacy applications that you cannot manage anymore. And things like microservices: that was not the original use; microservices were the thought that got this started, and of course now they're taking it to different areas.]

Sorry, which one? [Audience: I mean the microservices solutions that they are now trying to take to different areas.] You mentioned LXC, and that reminds me: Lennart Poettering has given a lot of talks about systemd-nspawn, basically saying he uses systemd-nspawn all the time to debug the boot process for the actual kernel and things like that, which is very interesting. CoreOS rkt is built completely from separate parts, using systemd-nspawn, plain GPG signatures, and a whole different meta-discovery of images. I really like that, and I wanted to show it as well, but I thought I'd just stick to Docker for now.

I just want to apply the same idea from before about security. Very often people ask: if you are root inside the container, can you get out? By default you cannot, but nobody can tell me it's impossible, so treat it as insecure; again, though, it depends on what kind of application you are running and whether you let users do that. I think Joel also highlighted some of the security issues back in April. One good comment about that is to reduce the attack surface by building on scratch images: just static binaries deployed inside a container. Then why use a container at all? Because you benefit from the shipping and the discovery and all of that: you have an infrastructure that knows how to run containers, so you package your binary inside a container, which is basically a tar file with a little bit of metadata describing how to run it. So it's always good to use that as a format.
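As a sketch of that scratch-image idea (the binary name and build steps are assumptions; any statically linked binary works):

```bash
# A scratch image contains nothing but what you copy in: no shell, no
# package manager, no libraries, so almost no attack surface.
cat > Dockerfile <<'EOF'
FROM scratch
COPY app /app
ENTRYPOINT ["/app"]
EOF

# "app" must be statically linked, e.g. a Go binary built with CGO disabled.
docker build -t minimal-app .
docker run --rm minimal-app
```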
You can also continue talking after this, so let's wrap it up officially, and anybody who wants to continue the discussion can continue. Thank you very much.