So my name is Michael, I'm the CTO of AMAZIO and also of the other AMAZ companies. We do hosting, and we do a lot of things with containers. In the last couple of conversations I've had with people, I've seen a lot of confusion about what containers exactly are, what they're here for, whether they're going away, whether they're VMs or not, and things like that. So this session should really be an overview: what are containers actually, what are they used for, and will they go away or not?

We're going to talk first about what containers are. Then we're going to talk about containers and Docker, because there's a lot of confusion between the two. And we're going to look a bit into the future of containers, and you can already guess: no, they're not going away.

But first, let's look at containers. When I hear people explain containers, they say they're kind of something like VMs. I'm sorry to say: no, they are not. They're very similar, and you can do similar things with them, but there are very important differences. To explain what a container actually is, I want to use the analogy of a city, a house, and an apartment.

Let's look at a physical server as a city where many people live. In cities we have things like streets, a water system, and a power grid. If you compare that to a physical server, we could say the CPU is like the streets, the RAM is the water system, and the storage is the power grid. And the highways between cities are the network between multiple servers. So cities provide us humans with the infrastructure we need, and physical servers provide our apps with resources like CPU, RAM, and storage, plus the possibility to talk to each other, just like we travel between cities and visit friends. So now think about a virtual machine: a virtual machine is a house.
And a house is within a city, so a VM is within a physical server. The house, though, has its own plumbing system, its own heating system, and its own power distribution system. And very importantly, a house has doors and fences, so that not just anybody can walk in and take things. If you have a house and need another one, you take the infrastructure of the city, you put your new house there, and you have another house.

With VMs it's exactly the same. A virtual machine has a full operating system, its own full network stack, its own kernel, and its own firewall. If you want to clone a VM to another physical server, or just clone a VM in general, you copy everything: you start the full operating system, the full network stack, the full kernel, and the full firewall. The first problem is that booting an operating system and everything around it takes quite some time. The second is that VMs are very heavy. The smallest virtual machines can maybe go down to 200 megabytes, but usually a virtual machine is a couple of gigabytes, with the full operating system, the network stack, and all that stuff in there.

That doesn't make VMs bad. In the past we started with physical servers: every time we had a new app, or a new version of our app, we actually put a new server in place. So VMs made a lot of things possible.

Now let's look at containers, and you can guess it: containers are apartments. An apartment has a shared plumbing system. An apartment has shared heating: you're reusing an existing heating system, and you can still control how hot or cold your own apartment is, but the actual heat is produced somewhere in the building. You also have a shared power distribution system: the power comes into the building and is shared among everybody. And we also have shared doors and fences.
Not everybody can have a very secure door, so we put one very secure door at the bottom, where people actually walk in. The doors of the individual apartments don't need to be that secure anymore, because we have one very secure one. So we also share security.

And containers, you guessed it, are exactly that. A container is a minimal operating system, much, much smaller than what a full virtual machine needs. It shares the kernel: all containers running on the same server share the kernel. They have a common container engine, like the power distribution system: they all use the same container engine. And they have a shared firewall: like the one big door in the apartment building, the containers have one big firewall in front of them that handles everything. Does that make sense?

So what are containers exactly? First of all, they're super lightweight. A container can start at two or three megabytes. You can start and stop a container, and everything necessary to run your app is inside that single container, so it's much smaller than a full virtual machine. They're also really, really fast: a container can start within milliseconds. Unlike a virtual machine, which has to boot a kernel, load init systems and so on, a container just starts. And it stops again very fast too.

They are still secure, though, because containers are separated from each other. It's like your apartment building: from inside your apartment, you can't just walk into another apartment, because there's a door again. It's the same with containers. If you're inside one container, you can't see what the other containers are doing, and you can't change what the other containers are doing. You know they're there, but you don't see what's happening inside them. And because of all of that, containers enable microservice architectures.
That's actually a whole other topic that would fill another three hours of talking. But because containers are so small and start so fast, they allow us to put one single service into one container. If we talk about Drupal hosting: PHP-FPM can be in one container, nginx can be in another container, Varnish can be in another container. You want things to be as small as possible, because then you can reuse them, distribute them, and so on.

And if you have VMs, that's fine: you can still run container systems on VMs. Like with the house or the apartment, you still need a plumbing system and a power distribution system; they're just shared. The same goes for VMs.

If you look at the usual diagrams of how Docker is used, you have the physical server at the bottom, then the operating system, then the Docker engine. What that Docker engine, or a container engine in general, exactly does, we will see later. On top are the different apps: these are our containers, our apartments, and all the stuff at the bottom is what usually sits in the basement of the apartment building. Each container has our app in it, maybe some libraries, our Drupal code, maybe an nginx or something like that, and we have multiple of them. In this picture, though, there's no virtual machine.

It is still possible to run containers inside a virtual machine. In that setup we have a hypervisor, which is basically what makes virtual machines work: it supervises multiple operating systems. We still have the physical server at the bottom, then the hypervisor, then an operating system with a Docker or container engine, and then again multiple apps. And this is actually what you're getting if you go to any cloud provider today: they do the hypervisor part for you.
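As a rough sketch of that one-service-per-container split (the image names here are hypothetical, and the commands assume a running Docker daemon):

```shell
# Create a private network so the containers can talk to each other by name.
docker network create drupal-net

# One service per container: PHP-FPM, nginx and Varnish each run separately.
docker run -d --name php     --network drupal-net my-drupal-php-fpm
docker run -d --name nginx   --network drupal-net my-drupal-nginx
docker run -d --name varnish --network drupal-net -p 80:80 my-drupal-varnish
```

Each container can then be updated, scaled or replaced on its own, without touching the other two.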
So if you go to Azure, AWS, Google Cloud or whatever, they run the hypervisor and the physical servers for you. Yes, it's the cloud, and it's somebody else's server; it's still there, you're just not seeing it. What you're doing is the top part: you install the operating system on top of that virtual machine, and you run the Docker engine, or another container engine, that then runs the actual containers.

A lot of people say, okay, that's cool, but who actually uses containers? Well, it turns out all your Drupal-friendly hosters are using containers; you just don't see it. And there are other companies. Visa, for example, runs 100,000 transactions through containers every day, and they're starting to do more and more. The Netflix series you maybe watched on the weekend got encoded inside a container: Netflix starts one million containers per week, and yes, they also throw them away again. They moved all of their infrastructure onto containers, and everything that happens on Netflix happens in containers. And have you ever heard about Alibaba's Singles' Day? They made 12 billion dollars in one single day, and it all ran on containers.

So the big companies are already there, they're already using containers. Why? Because containers are fast, and because you can use your existing infrastructure even better: you don't have to move virtual machines around, you're just moving containers around. That means these companies have sometimes reduced their amount of hardware by up to 50%. And it's better for developers as well; we will see later how exactly that works.

All of that was about containers. But what does it have to do with Docker? Because everybody uses the two in the same sentence, as if they were the same thing. Well, Docker is an implementation of containers.
If you do a docker run, it starts you a container, but Docker is much more than just running a single container. Docker, for example, has the Docker engine; we saw that a couple of times now. The Docker engine is responsible for running the containers and making sure they're actually running: you can tell Docker, if the container dies, please restart it automatically for me. You have networking in there, so it creates virtual networks, virtual interfaces, stuff like that. It makes sure you have storage: you can connect containers to your physical storage or to storage somewhere else. It makes sure nobody else can access your containers, that the security is given. And Docker even has a plugin system, so if you want to extend the functionality, you can do that.

You get all of that if you install Docker on your local computer. You install the Docker engine, which brings all these things that technically have nothing to do with the container itself. It's just stuff that has been built around containers, because containers have actually existed for a long, long time. They were just not really usable, not easy to use. You couldn't just run one single command, docker run, and have it. Docker brings that in.

Docker also introduced the idea of images. A Docker image is a full representation of the app that you would like to run. I can build a Docker image on my computer and send it to a hosting company or to my server, and everything that is necessary, all the files and all the configuration of that container, is in that single Docker image, and I can push that around. So it's a very easy way to distribute stuff. It's like a virtual machine image, just much smaller: the smallest Docker images are maybe two or three megabytes. You can still build them very big, but you shouldn't.

And the very interesting thing Docker did: they implemented a layer system. What does that mean?
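These engine features map onto plain CLI flags; a minimal sketch, assuming a local Docker daemon (the container, network and volume names are made up):

```shell
# Supervision: restart the container automatically if it dies.
docker run -d --name web --restart unless-stopped nginx:alpine

# Networking: a user-defined virtual network with its own interfaces.
docker network create backend

# Storage: a named volume that the engine manages and attaches for you.
docker volume create appdata
docker run -d --name app --network backend -v appdata:/data nginx:alpine
```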
If I build a Docker image completely fresh and push it to my server, the whole image gets pushed up there. Let's say my image with my Drupal site in it is 400 megabytes: I'm pushing 400 megabytes up there. Now I change just one single file, or a couple of files, in my Drupal, build the Docker image again and push it again. The layer system will realize that only parts have changed and will only push the changes, let's say 10 megabytes. The other 390 megabytes are not pushed, because what happens is that the layers are hashed, the two hashes are compared, and the system realizes it doesn't have to push that layer anymore. So it's very, very fast: you can deploy changes very quickly, because the system actually knows what has changed. That is something Docker introduced, and it's one of the reasons Docker is so successful: you can push a tiny change to the production site within seconds.

The next thing is the Docker registry. The Docker registry is the storage for your Docker images. The most famous one is Docker Hub, where a lot of people go, but the Docker registry itself is a container, so you can run it in your already existing Docker environment, and it gives you storage for all your images. You can push images there, you can have private ones, and you have authentication so that not everybody can run your Drupal site on their own servers. That would actually be a cool idea, but people don't like doing that, so the registry also keeps the whole thing safe.

And the last thing you maybe heard about is Docker Swarm. Docker Swarm is an orchestration system. Not a familiar term? Okay, let's look at that. So what is orchestration?
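The layer comparison works roughly like this content-hashing demo (pure shell, no Docker needed; the file contents stand in for image layers and are made up):

```shell
# Two "layers": a base layer and an app layer.
mkdir -p layers
printf 'base OS files'  > layers/base
printf 'drupal code v1' > layers/app

# Hashes the remote registry already stores (base + app v1).
remote_hashes=$(sha256sum layers/* | awk '{print $1}')

# A new build changes only the app layer.
printf 'drupal code v2' > layers/app

# Push only layers whose hash the remote does not know yet.
for f in layers/*; do
  h=$(sha256sum "$f" | awk '{print $1}')
  if echo "$remote_hashes" | grep -q "$h"; then
    echo "skip $f (remote already has it)"
  else
    echo "push $f (changed)"
  fi
done
```

Only the changed layer gets transferred; the unchanged base layer is skipped because its hash already exists on the other side.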
Docker Swarm, which we just talked about, is Docker's own invention, and that's obviously what they're pushing, but there are other orchestration tools too: there is Kubernetes, there is OpenShift, there is Rancher.

The thing is, because containers can start very fast and you can move them around very easily, they also die very fast. And dying containers are not a lot of fun, because then your Drupal site is down. So you need a system that orchestrates a group of containers. You tell the orchestration system, I want five containers of my Drupal site running, and the orchestration system looks at which servers or VMs are available, which ones are in use, and distributes the containers. If one of them dies, let's say a whole VM, the orchestration system realizes, oh, I only have four running, and starts another one. It can also do scaling: it can see a lot of traffic coming in and start more containers. So it's the system that sits on top and watches what exactly happens and what is needed, et cetera.

If you actually want to run Docker in production, you will not have three servers and just do a bit of docker run in your console. You will have an orchestration system that takes over these things for you, because containers are going to die. They're not made to be there forever. Going back to Netflix: yes, they're starting one million containers per week, but they're also stopping one million containers.

And I think that's the weirdest part to understand. When we talk about physical servers or virtual machines, we usually think about something that is started once and stays there until we stop it. With a Docker container, for example, when we deploy new Drupal code for you, we're not exchanging the Drupal code in the existing container: we're starting a new container with the new code and making sure that everything is good.
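The reconcile behaviour described above can be mimicked with a tiny shell loop (a toy simulation with made-up replica names, not a real orchestrator):

```shell
desired=5
running="web1 web2 web3"   # pretend two replicas just died

count=$(echo "$running" | wc -w)
# Keep starting replicas until the running count matches the desired count.
while [ "$count" -lt "$desired" ]; do
  count=$((count + 1))
  echo "starting replica web$count"
  running="$running web$count"
done
echo "running $count/$desired replicas"
```

A real orchestrator runs exactly this kind of loop continuously, comparing desired state against observed state and correcting the difference.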
Then we move the traffic from the old container to the new container, keep monitoring for any errors or problems, and only after some seconds, when we're sure the code in the new container is good, do we stop the old one. If during that time we realize something is flaky and there are errors, we just move the traffic back to the old container, stop the new one, and tell you: hey, there was an error, and here it is. That was never really possible with virtual machines, because with virtual machines we replace code in place all the time, and going back to what you had an hour before is very hard. With containers it's possible. And again, all of this is handled by an orchestration system.

So why should you care? The very interesting thing about Docker, or containers in general, is that Dockerfiles can be written by developers. In the past, if you as a developer wanted to deploy, let's say, your own Elasticsearch, you had to learn about configuration management systems like Puppet and Ansible, and they were complex. Now with Docker you can do it yourself, and you can run it on your local computer first. You write the Dockerfile, you create an image, you run it locally to make sure it works, and then you push the image up to the system, it runs there, and you can be sure it's exactly the same.

With VMs, technically, that's also possible: you could build your own VM locally and push it somewhere else. The problem is you'd wait ten minutes, because it would transfer multiple hundred megabytes. With Docker, even if you push the whole image once, the next time it will only push the changes. So it's very fast.

And now you wonder, why do I care about that? Well, Drupal itself is much more than just an nginx, a PHP and a MySQL. Right now we have three different PHP versions.
We have Varnish, we have nginx, we have sites that use Node.js, we have MongoDB, Elasticsearch, CouchDB. So suddenly it's much more. The hosting company that tries to run your Drupal site doesn't even know anymore what they should run, because there are so many different ways of running a Drupal site now and so many different systems to connect to. With Docker, developers can locally configure everything they need for their website. It's their own choice whether they need PHP 7 or PHP 7.1, whether they want to use Elasticsearch or MongoDB, whatever. Once they have all of that working, they push the images to the hosting company, and the hosting company just runs the images. There's no need to go to the hosting company's UI and select, oh, I want an Elasticsearch, even though I've never tested it locally and I don't know if it works. You can run everything locally and test it.

And the next thing: containers are used more and more. I was at DockerCon last week, and there are companies thinking about running containers in cars. They're building self-driving cars, and the neural network that will drive your car runs inside a container. Why? The same reason: they can very easily distribute it. If you want to ship a new system to millions of cars driving around in the world, and overnight you want to update all of them, you push them a new container image, and all the distribution problems are already solved for you. So a lot of people are using containers in ways that maybe seem very weird at first, but if you think about it, it makes a lot of sense.

So let's look at the future. Docker just turned four. It's a technology that has really evolved over the last four years. That's very short; that's like yesterday in technology terms.
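The local developer workflow described a moment ago, picking your own PHP version and shipping the result, could look something like this (the base image tag and registry hostname are assumptions, and a local Docker daemon is required):

```shell
# The developer chooses the PHP version in the Dockerfile, not the hoster.
cat > Dockerfile <<'EOF'
FROM php:7.1-fpm-alpine
COPY . /var/www/html
EOF

# Build and test locally first, then push the exact same image to the hoster.
docker build -t registry.example.com/mysite:latest .
docker run --rm -d -p 9000:9000 registry.example.com/mysite:latest
docker push registry.example.com/mysite:latest
```

What runs in production is bit-for-bit the image that was tested locally, which is the whole point of the workflow.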
Looking at the future: we now have operating systems that, instead of directly running a service, say your network stack or your syslog, start a container that runs it. So everything inside the operating system runs within a container: it's just the operating system running, Docker running, and then every single thing is a container.

Microsoft announced, and they already have it running, that you can now run Windows inside containers. And not only that: you can also run Linux containers on Windows machines. I'm not sure people really understand what that means. If your client runs everything on Windows and forces you to run on Windows, you can still do everything on Linux. You just send them the Docker images, they do a docker run, and it runs. They're happy because it all runs on Microsoft infrastructure and they don't have to change anything, and you're happy because you can use the Linux we all use all the time to host.

Then there's the Open Container Initiative, which defines how a container should look and how a container should behave. Because there's not only Docker: other container systems are coming up, like rkt, and other companies are also thinking about building their own. So it's not only going to be about Docker itself; it's going to be about containers, where Docker is just one of the flavors. We want a container that has been built with Docker to also run on the rkt container engine, and so on. That's what the Open Container Initiative is about. It's like HTML: we all have a common standard for what HTML looks like, so I don't have to build my site separately for different browsers that all do something different. We do the same with containers: we decide together, as a community, how a container should look and how it should behave.
And that's going to boost the adoption and the possibilities of containers much more. That's it. We have three minutes for questions. Yes, it's a very short presentation. Can you come to the microphone?

Can you talk about the relationship between containers, Docker, and the underlying operating system, like CoreOS or something like that?

Yes. So technically it's not necessary for the operating system to know what you're running: you can run Docker on almost any Linux operating system. On Ubuntu, I just install Docker and run it. The thing is, there are some things the Docker engine has to change on the operating system in order to work very well. So what happens now is that we have operating systems that are implemented, or architected, from the beginning to run containers. One of them is RancherOS, which has everything in there; there's CoreOS as well. That just makes it easier to run Docker on them, because if you never plan to run, say, a web server directly on the operating system, you can remove that part from the operating system from the beginning. And that's what happens: the operating system builders are building things directly around containers.

Hi, thank you. My name's Mark. I love the idea of making everything repeatable by replacing containers. We do some of that, not with containers, but by using Ansible to script our configurations, things like that. I'm curious: you talked about just replacing a container. How does that work with a database container?

Persistent storage is very hard, because if you have something that dies and starts all the time, you want the storage to be persistent. You can run databases in containers. It's not easy, but it's possible.
Basically the key is this: let's say you have a Docker cluster of 500 nodes; technically your MySQL container could start on any of them. So what do you need? An underlying storage system that allows you to connect the storage to any of those nodes. On AWS, for example, the storage of your container is actually attached to something like an EBS volume or an EFS volume, and wherever the container starts again, the storage gets attached again. That's all handled by the orchestration system: a container can declare, I need persistent storage, and when I start again on another node, I want the same storage again, and the orchestration system connects everything together. Good question, though; it's not easy to run.

Okay, we have to move on already. If you have more questions, I'm around the whole week, happy to answer anything.
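With plain Docker volumes, the database pattern from the answer above looks roughly like this (the names and password are made up; a Docker daemon is required):

```shell
# A named volume lives independently of any single container.
docker volume create dbdata

# Start MySQL with its data directory on that volume.
docker run -d --name db \
  -v dbdata:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7

# Replace the container; the data in the volume survives.
docker rm -f db
docker run -d --name db \
  -v dbdata:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql:5.7
```

In a cluster, the orchestration system does the same job with network-attached volumes (for example EBS or EFS on AWS), reattaching the storage wherever the container lands.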