Okay. So that was a great first introduction to how Docker Machine helps you provision a machine either locally or remotely. And then there was a mention of Docker for Mac. Docker for Mac is, I think, more or less production ready now. As was mentioned, Docker for Mac hides the complexity of the virtual machine that's running underneath. When you do a docker-compose up and expose a port, instead of having to find out where the virtual machine is and what its IP is, Docker for Mac opens a proxy service on your laptop that proxies the request directly to the virtual machine. So it looks like it's running locally, but it's not. Let me show that quickly. Is that readable there as well? It needs to be bigger. So: docker version. I'm using Docker for Mac here, and as you can see, my client is running on Darwin. Sorry, I won't run out of the camera. My client is on Darwin, and my host is on Linux. So even though it behaves as if everything runs locally, it's still actually running a Linux virtual machine. That's the trick Docker did: they hide the complexity, but it's still using a virtual machine in the background. And if you want to play with things such as multiple machines, you still need Docker Machine; if you want to create a swarm cluster, you need Docker Machine. Okay. So Docker for Mac makes it very easy for developers to just get started with Docker. Continuing in that trend, Docker announced back in June two projects called Docker for AWS and Docker for Azure. Those projects are still in beta, and you need to go to beta.docker.com to request access. They need to control how many people have access so they can handle the support queries, so they are doing a phased rollout. In case I was going first, I was going to explain a bit about Docker itself, but I assume everybody here knows Docker, so I'm not going to explain much about it now. So, Docker for AWS. This is the part where you get access: when you request access for the Amazon one, they require your AWS account ID, because they will give you access to some Amazon Machine Images that you cannot access normally. Even though my screenshot shows the template URL you get when you open it, you could quickly write it down and try to run it, but it won't work; it will say you don't have access to the machine images needed to actually set up the stack. The way it works is very similar to Docker Machine, where you create a machine on the command line. Here I can use either the command line or the console: with the AWS CLI tools I run aws cloudformation create-stack, providing the stack name, the template URL, and all my parameters on the command line. Or I can do it through the actual CloudFormation template in the Amazon console. It gives me options to define how many managers I want to run and how many workers. When we talk about a cluster environment, we have some kind of masters that control on which node the containers will run, and many worker nodes that actually run the containers. Every single clustering solution I know of has this: Kubernetes has masters, and its workers were originally even called minions; Mesos has masters as well.
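For reference, this is roughly the check run here; the output lines are illustrative, but the Darwin/Linux split is the point:

```bash
# Under Docker for Mac, the client and the engine report different platforms,
# because the engine still runs inside a hidden Linux VM.
docker version
# Client: ... OS/Arch: darwin/amd64
# Server: ... OS/Arch: linux/amd64
```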
Every solution I know of has this concept of managers and workers. So you need to specify how many managers, and you have a choice; maybe I can just show it. Once you get access, you can click on the Launch Stack button, click Next (let me make it bigger), and you get this Specify Details page: give it a name, and you have options. The first option is one manager, which means that if it goes down, your cluster dies. So, not good for high availability. The second option is three. Why not two? Because if you know clustering or distributed storage systems, they need to have resilience against network splits. If you have two nodes and one node fails, the other node doesn't know if it's alone or merely partitioned, so it usually cannot make up its mind; and if they're both up but split, you have a split brain. If you have three and one node fails, you still have two nodes to maintain consistency in the cluster. So the managers use a distributed storage mechanism, and the options there for high availability are one, three, or five; at the moment only up to five. Five gives you resilience against two node failures. For my stack, I just used one manager. You can actually increase that later, because in Amazon it's an Auto Scaling group, so you can change the Auto Scaling group later on to have more resilience. Then the number of worker nodes: you can actually go up to 1000. It's going to take a while and it's going to cost me money, so I'm not going to do that. So I can create many different worker nodes, and those will run the containers. Then I can specify the type of machines I want to provision for this, and the SSH key I want to enable for access. Okay. So it's very easy; you don't need to worry about how it actually works. I don't know if I should create the cluster now. Did I put 1000? Oh, yeah. Thanks. See, that's why I make beautiful demos. It's my personal account too. I have not played with the advanced options. Then you come to the point where you acknowledge that this stack will create VPCs, subnets and all of that, and when I click Create, it will create. But let me not click Create right now; let me run it on the command line instead. I will try to push this markdown file so people can repeat what I did. In this one, I'm setting my default profile (I have them preset on my laptop), I'm setting the region, and then I'm doing a create-stack. So let me run that one. Oops. What did I do? Okay, it's a bit of a mess. Yeah, that's not going to work. Sorry, I'll just paste it here. I just want to change the name: docker-aws. Ah, I already changed the name, so I can just execute it normally. I don't know what's happening here; I had this happen before as well. Okay, forget about it. I'm going to click the button. Just click the button, it's simple. All right. So right now it's creating the stack. Last time when I created it, it started at around 3:14 and it finished at around 3:24, so about 10 minutes to complete this cluster with one manager and three workers. What's happening in the meantime? The first thing it does is create a VPC, a Virtual Private Cloud, and create subnets and security groups within it, all preconfigured.
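A rough sketch of the equivalent AWS CLI call; the stack name, key name, and template URL are placeholders, and the exact parameter keys should be taken from the template the beta invitation gives you:

```bash
aws cloudformation create-stack \
  --stack-name docker-aws-demo \
  --template-url "https://<url-from-the-beta-invitation>" \
  --parameters \
      ParameterKey=ManagerSize,ParameterValue=1 \
      ParameterKey=ClusterSize,ParameterValue=3 \
      ParameterKey=KeyName,ParameterValue=my-ssh-key \
  --capabilities CAPABILITY_IAM   # the stack creates IAM roles, so this is required
```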
It creates two Auto Scaling groups, one for the managers and one for the workers, set with the desired capacity that you defined in the template. Then, as the managers come online, they join and form a quorum using Raft. Raft is a consensus protocol, which is what I mentioned distributed storage systems use. After the managers have started, the workers join the swarm cluster. If you did this manually, you would need to create the EC2 instances, provision Docker on them, and then, once the Docker engine is ready, enable swarm mode, create one manager, and get a join token (joining can be token based or discovery based). Then you have to join the other nodes to the managers, and promote nodes to managers if you want more managers. That's very manual, and here it's all happening in the background; by the time I stop talking, it's probably finished. After that, it creates two ELBs: one for SSH access and one for any services that you create within the cluster. Right now I actually choose the ports to expose on the ELB manually; I'm pretty sure there's a better way to do that, but I have not looked into it. And then it's ready for deployment. What I wanted to show, and I actually forgot to stop there: if I click on the Launch Stack button, I can click "View/Edit template in Designer". This is the standard CloudFormation designer from Amazon, it's loading, and it shows me an overview of what the stack looks like. There's a whole bunch of resources it creates. Oops, I just dragged things around, which I didn't want to do. One of the interesting things in here, and I actually have a screenshot, is that it creates an SQS queue, a notification queue. I read about this online, and I put the link to the information in my notes. The purpose of the SQS queue is this: whenever Amazon is going to terminate an instance, there is a grace period and a notification, and the cluster reacts to that via SQS. The instance can learn that it's pending doom and then gracefully exit. I hope to show you the manual way of draining, how you drain the containers off an instance when you want to retire it, but the stack can also react to notifications from the Amazon cloud and do that for you automatically. There's a little link to the source where I got this from. Okay, so now I actually talked about Docker; let's continue. I had a few slides here that explain that we have a cluster of managers that form a Raft group, and there's gRPC, a binary protocol, for efficient communication between the workers and the managers. The managers are strongly consistent, while the workers are eventually consistent, because if you have a cluster of 1000 workers, it would take a long time to propagate the information for all of them to have a consistent view. That means the networks we talked about earlier, the virtual networks we connect the containers to, actually only exist on the workers that are involved with those containers.
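For comparison, the manual steps the stack automates look roughly like this on plain Docker 1.12 hosts (addresses and the token are placeholders):

```bash
# On the first manager: enable swarm mode and print join tokens.
docker swarm init --advertise-addr 10.0.0.1

# On each worker: join using the token printed by 'swarm init'.
docker swarm join --token SWMTKN-1-<placeholder> 10.0.0.1:2377

# Optionally promote extra nodes to managers for an HA quorum.
docker node promote <node-name>
```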
So the propagation is only contained within a certain part of your cluster; it's trying to do everything as efficiently as possible. If you look on the Docker blog, they just posted a lot of videos from, I think, the beginning of this month, around the 8th of October; there was a Distributed Systems Summit in Europe focused on exactly how this works. They actually explain how they do the heartbeat between the manager and the worker, all of that. Very interesting, very good to understand, but you don't need to, because Docker for AWS does it all for you. Okay, I'm not going to go into too many details; I'm just going to go through the actual demo that I have. Part of Docker for AWS is that it runs the new Docker 1.12 engine, which has its own clustering system, basically what I just explained. This was announced back in June; in June we had a meetup here as well, where I demonstrated how Docker 1.12 does this, and I'm reusing a little bit of the same visualization tools. Okay, that's not cool. Did I just break my demo environment? Probably. Let me just kill this here. What I'm doing here is adding a load balancer listener, TCP/HTTP, port 3000 to port 3000. Please work. Okay, there it is. So this is a container that I deployed. In my script you can see I'm running Mano Marks's visualizer, a Node.js application that talks with the Docker API over the Docker socket. It shows this fancy graphic of my cluster: all the nodes, and whenever I create a service, how its tasks get assigned to the nodes. So that's a cool little thing. Because this runs in a certain container, I'm manually setting up my load balancer to expose this port; I'm not using the correct syntax for running it as a service. I should probably do that, but I didn't have the time. Next I'm going to run a simple Nginx. Actually, I skipped a few steps here. After I create the stack, CloudFormation has outputs, and I can query them: as you see here, I'm querying the outputs from the stack, getting the key DefaultDNSTarget, and storing it in an environment variable. So if I echo it, this is my ELB DNS name; I can hit that and it will hit my cluster and find the containers. If I curl it right now, there is nothing there; I can't hit anything. There's another thing that I did here: I'm running an SSH tunnel from my local machine to the cluster in the background, and I'm exposing the remote Docker socket on local port 2374. It's in the guides when you run it. If I look here, I have this job running in the background. So basically, earlier, when Sergei talked about Docker Machine: when you use Docker Machine, it gives you a whole bunch of environment variables to set, like the TLS certificate paths and so on. Because I'm using an SSH tunnel instead, I'm not using certificates when I connect; I don't have TLS verification on my client, but I can do docker ps, and the Docker client on my laptop is talking to the cluster.
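A sketch of those steps, assuming the stack exposes an output named DefaultDNSTarget and an SSH endpoint reachable as user docker (the stack name and SSH hostname are placeholders):

```bash
# Grab the service ELB DNS name from the CloudFormation outputs.
export DEFAULT_DNS_TARGET=$(aws cloudformation describe-stacks \
  --stack-name docker-aws-demo \
  --query "Stacks[0].Outputs[?OutputKey=='DefaultDNSTarget'].OutputValue" \
  --output text)

# Forward the manager's Docker socket to local port 2374 over SSH
# (needs OpenSSH 6.7+ for Unix-socket forwarding), then point the
# local client at the tunnel; no TLS settings are needed this way.
ssh -fNL localhost:2374:/var/run/docker.sock docker@<ssh-elb-dns>
export DOCKER_HOST=localhost:2374
docker ps   # now lists containers running in the AWS cluster
```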
So I can see my visualizer running there, plus a whole bunch of controller containers working. I can do docker node ls and see those same nodes that I see in my visualizer here; those are the nodes inside my cluster. That all happened in the time I spent creating the cluster. Okay. I could show the whole docker info output, but it's a whole bunch of information and I don't know what to pick out, so let's skip it. Now I'm going to finally deploy something: I'm going to create a service, Nginx, and I'm telling it I want to expose port 80. So what's happening here: earlier somebody asked how you expose a service. Well, I have an ELB; let me go to the ELB console. Okay. I have two ELBs here: my original stack that I created at 3 p.m. today, and the one that I just created now, and I'm looking at this one. If I look at the listeners, right now only the port 3000 listener I manually added is there, right? Now I'm going to run this command, and Docker for AWS is automatically going to update my ELB with the ports that I want to expose. If I refresh this, now port 80 is exposed. But I never told Docker about my own manual addition, so obviously it got rid of it. If I want to see my visualizer, I need to add that listener again. Okay, so I manually add it back to the load balancer with the command; otherwise my visualizer won't work. My visualizer is now showing that Nginx is running on one of the nodes. Next, let me do that curl command that I had before; it's probably very confusing with all the things on screen, but anyway, I'm doing a curl, and now it works: I'm seeing Nginx is available. I can open it in a browser as well; that's going to open Safari, but okay. Where is Safari? Okay. So I have "Welcome to nginx!", and I'm hitting it inside the cluster. Wonderful. So basically what I did here was create a service, and the service name is nginx. I can say docker service ls, and it shows me all the services that exist within my cluster; there's one, nginx. And I can inspect it to find out information about this service. If I specify --pretty, I get human-readable output, kind of like YAML; if I don't, it's JSON. It basically tells me that the target port is 80 and it's published on port 80. That's the interesting part for me: which port is it published on? I can also list all the containers that make up the service; they call them tasks. I have one at the moment, and I can scale it up. So I tell Docker: please scale my service to five. If I do an inspect, it tells me that the desired state is five, and if I do a docker service ps, it shows me that five are running. In my visualizer we can see the tasks across the nodes, except one that's not running yet, it's preparing; the other manager is still pulling the image. And now it's running. Okay. So now we have five Nginx tasks across all of the servers. If one of the nodes goes down, the service will still be reachable, because we have a load balancer directing traffic over port 80. So that's all working.
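The commands from this part of the demo, in order; all standard Docker 1.12 swarm-mode commands:

```bash
docker service create --name nginx --publish 80:80 nginx   # ELB listener for port 80 is added automatically
docker service ls                        # list services in the cluster
docker service inspect --pretty nginx   # human-readable output instead of JSON
docker service ps nginx                  # the tasks (containers) backing the service
docker service scale nginx=5             # raise the desired replica count to five
```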
I can't really show you the load balancing yet, because whichever container I hit, it always returns the same thing. Later we'll try something that returns something different depending on which node it runs on, so we can see that it's actually load balancing. All right. Could I overload the cluster? I haven't tried; it would be hard, because the nodes have one gigabyte of memory each and Nginx takes very little, so I would need to scale it very high. But yes, I can scale it to, let's say, 100. Okay. The thing is, these containers are running connected to an internal network, and then there's a routing mesh across all the nodes. I don't know why that one node is not taking anything. Ah, because I set it inactive, didn't I? Just one second. Which IP is it, .223? Sorry, .222. Okay, I set it to drain, so let me put it back to active before the next scaling. It doesn't rebalance automatically unless one of the tasks dies; I will show you when I actually set this one to drain. If I make this one active and set that one to drain, you will see everything move. We'll do that. It's fun, isn't it? Looks beautiful, doesn't it? It's not me, I didn't write any of that; it's all from the Docker people, and they did a wonderful job. So, yes, I can scale; and yes, this is where I was draining, right, it was my next demo, but I forgot to make the node active again. Your question was: what if there are multiple services, how does it handle ports? Do I have a slide for that? I don't have a slide. Basically, it's very similar to how Kubernetes works, and I'm more familiar with Kubernetes. When traffic hits the published port, it actually hits the port on the node, and the packets are redirected into an internal network of all the containers (I was about to say pods). All the containers are fully isolated from the external network; it's just that there's a proxy service running on every node that opens port 80. And I believe this uses IPVS. It's very interesting: it's a layer-4 kernel routing mechanism. There was a presentation back at DockerCon Europe last year from Andrey Sibiryov, who was working at Uber, I think, about this IPVS kernel technology that allows you to load balance across nodes; a very interesting presentation. And basically, between then and June, Docker implemented it inside their cluster. So it's a very impressive routing mesh across all of the nodes. Really cool stuff. And they open sourced all of this; they open source each part, like VPNKit or InfraKit, so everything that makes up Docker 1.12 is open source, because they like to involve the community in improving all of the services they develop. Okay. If I keep talking like this, I'm going to take two hours. So, next part: draining a node. I can do a node list here, select one node, let's say .247, and do the drain. And replace it? Maybe; I haven't tried. Don't mess up my demo, man. I'm not going to try it right now. Let's go to the visualizer. Okay. So it's draining that one node that I selected, totally randomly, and now that node is available for maintenance.
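Draining and reactivating a node is done with docker node update; the node ID below is a placeholder:

```bash
docker node ls                                        # AVAILABILITY column shows Active or Drain
docker node update --availability drain <node-id>    # tasks are rescheduled onto other nodes
docker node update --availability active <node-id>   # accepts new tasks again, but existing tasks
                                                      # are not rebalanced until something dies
```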
And I can set it up again, reactivate it; you can try that out with the node ID. It doesn't automatically rebalance when the node comes back, right? But if I drain another node, let's say I pick .223... okay, let me try it. Oops, what did I do? Oh no, I just broke my terminal. Oh, that's going to be a pain. Yeah, it works. So it's draining that node and putting the tasks back on the other nodes. All right. It goes quite fast, actually. Okay. So we did the update: we set a node to drain and then back to active. The next thing is: what if we want to replace one of the managers? Then there will be a process where, if I choose the leader, so if I do docker node ls... right now, if I told it to drain the leader here, these other two managers would have to do a leader election. I haven't tested it, and I'm not going to demo it, but if you want to try it out, I've put in a link; there's a reference. I'm going to push this and share it on the meetup group. If you go there, you will see this draining part that I showed you, and you will be able to see how leader election and all of that works. The next thing I wanted to do was to actually show the load balancing. I'm going to create a service called city. This is not my example; I think I put the reference here as well. No, I will add the reference; I rebuilt it because I made a bit of a change anyway. So, okay, let me do this: docker service create, I'm giving it the name city, and I said I want five replicas. Maybe I should remove the nginx service before I do that, or I will be overloading the cluster: docker service rm nginx. Done, they're all being deleted. Okay. Bye-bye. And create. All right. In this case, I cloned this small application; this is version 2, where it says "really suggest to visit" and then a certain city. It randomly picks a city, shows the hostname and the city that was chosen, so it kind of shows us which one of the containers is serving us. I exposed the port, but my visualizer is broken again right now, because when I asked Docker for AWS to expose the new port (I don't remember which), it automatically removed my manually added listener, so I need to add that again. Okay. It will take a while, but there it is, and the five instances of the random city app are running. So now I'm going to show that if I hit the port that I exposed, port 8081, using this command... let's put it up here. Okay. So now it's hitting it, and because it's version 1.0, it shows me the hostname and "suggests to visit". The interesting thing: not every time, but you can see there's load balancing going on; different containers get hit. I can confirm the containers by doing docker service ps city; that's my service name. So I have one ending in 31901; where is that host? I don't see it somewhere. What's going on? Okay, for sure it's hitting one of the containers: this one, the first one here. I don't know if you can read that; let me try and make it bigger. The 319 one, is it readable? That's the one that was getting hit, and that's basically the name of the container. That one should be in the list, though; if I do ps, I should see it somewhere.
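Roughly what was run for this step; the image name and its internal port are placeholders standing in for the cloned demo app:

```bash
docker service rm nginx                      # free up the small nodes first
docker service create --name city --replicas 5 \
  --publish 8081:8080 <dockerhub-user>/random-city:1.0
curl http://$DEFAULT_DNS_TARGET:8081/        # repeat: different container hostnames answer
docker service ps city                       # confirm which tasks are serving
```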
I don't know why I don't see it. Anyway, let's ignore that. What I'm going to do next is the famous rolling update that every cluster presentation has to do. I'm going to tell docker service update to change the image to version 1.1. As I already indicated, version 1.1 is improved: it says "really suggests", it's a little bit more insistent. So I do the update with version 1.1, and then... no, you don't see anything yet; check the visualizer, maybe. Containers are being replaced, right? There's a 1.1 running, there's another one being replaced, and as they are replaced, the load balancer should start hitting the new versions. Actually, is it round-robin or load-based? Yeah, I don't know the exact load balancing mechanism. There are a few version 1.0 tasks still, but in the meantime, there's a 1.1 there as well. So the rolling update worked. Yeah, cool. And here we actually still have one 1.0 running, and we should see the 1.1 pick up. And 1.1, it's replaced. So now we should not be getting any of the 1.0 responses anymore. Don't ask me how to stop a rolling update. Oh, we still see one. Don't ask me how to stop it or how to do a rollback; I don't know exactly, but I think a rollback is very declarative: something went wrong, I want to go back to 1.0, you just specify image version 1.0, and obviously it starts another rolling update. Okay, so that's one thing. Oh yeah, when I do this, you actually see that there's an "update in progress" message here. I haven't played more with it, so that's all I have for this part of the demo. I mean, that's part two; I have a part three as well, and that's the example voting app. The app that was just shown running on one machine: how can we deploy it on this cluster, so it runs across all of the nodes? I also cloned it, and I can show you it running locally. Sorry, a lot of things that I should have closed. Okay, here: if I do docker-compose up, I get the same as in the earlier presentation. It's starting all of the containers and showing me the logs; I didn't daemonize it, so it's not detached. But the point is, yes, I can run it locally, and I have the same situation: I press cats, it updates; I go back, I press dogs, and it updates to dogs. It works. All right, so that's running locally. Wonderful. So let's close it: docker-compose down, which is going to remove everything. Done. Let's say I'm happy with this, so the next step is deployment. When you work with containers, you use a Dockerfile, you build an image, you push the image, and you deploy it. With bundles, you have a Compose file, and you need to build something called a distributed application bundle. To do that: first, I had to make some changes to the original voting application's docker-compose file, and I had help from Marcos Nils, another one of the Docker captains. I basically added an image tag to each service; in the original one there isn't any. So I added that, and I documented the changes needed to make it work. I have to say one thing, though: this is still very experimental, so some things are not working 100%. One of the things I had to do was edit the Compose file; then I can do docker-compose build.
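The rolling update and the declarative "rollback" are both just docker service update with a different tag; the image name is the same placeholder as before:

```bash
docker service update --image <dockerhub-user>/random-city:1.1 city
docker service inspect --pretty city   # shows the update status while tasks are replaced
# Rolling back is simply another declarative update to the old tag:
docker service update --image <dockerhub-user>/random-city:1.0 city
```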
And because I built it before, it goes very fast; as you see, everything is using the cache. So now if I do docker-compose bundle, that's going to create a bundle file, and I can specify push. Okay, so it actually did the bundle, and it should also push the images, but because I didn't change anything, it didn't push them again, apparently. Let me check that; I don't know why it didn't push. Yeah, it is docker-compose bundle with the push option: if you use it, it's going to create a bundle and push all the images to the registry. The registry is a centralized repository, so my cluster can pull the images from it; this is the way to ship the application from localhost via the registry to anywhere. I'm using a public repo for this. And now I'm going to deploy it. Actually, that doesn't work. No, it works, but I have to do one more thing. There's a bug right now in Swarm: if you have two published ports on one service, it freaks out, so I have to remove one port. There's one service here, this one, that has two ports; I have to remove one, and if I do that, everything keeps working wonderfully, like this. It was Marcos who helped me identify this. Okay, so now I've removed the port. Did I forget anything? So, ad hoc fixes: I had to edit it to remove the port and keep only port 80, and after I finish, I have to publish it, but that didn't work. So, okay, I'm going to deploy it. Let me go back; this is my rolling update, let's stop that here. docker service ls; docker service rm city, sorry. I'm removing the service from the cluster. Okay. Now I change my directory to the example voting app, where I have my bundle (I have a backup in case I made a mistake): docker deploy examplevotingapp. This Docker client is talking to my cluster; I'm giving it the name, and it's automatically going to look for the distributed application bundle file. If I include the file extension, it says it can't find it, so I have to remove just that, just like this. So now it created a default network and created all of the services. If I do docker service ls, I can see that they are all running. Let me open the ELB address to show the visualizer. Yeah, the visualizer is showing me that I have the worker, Postgres, and Redis running. Obviously Postgres is not being persisted: if this container dies, the data is gone, right? This is not how you would deploy it normally; same for Redis. If you want Postgres persisted, you have to define a volume, mount the volume, and all of that; I'm not talking about that right now. So the vote server is there, the result is there. Okay, so: docker service inspect examplevotingapp_result, I think. No, it's docker service inspect, sorry, I'm not always copy-pasting. Let me do it with --pretty. Okay, it tells me that this is published on port 30001. Normally you should be able to do a docker service update with a publish option, something like publishing port 30001 and pointing it to port 80 of the result service, but when I tried earlier, it didn't work. I need to figure out why, but in the meantime, I'm just adding the listener manually.
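The bundle-and-deploy flow from this part, as it worked in that experimental release (docker-compose 1.8+ for bundle, an experimental Docker 1.12 client for deploy); the project name is whatever your Compose directory is called:

```bash
docker-compose build                  # images must carry explicit tags in the Compose file
docker-compose bundle --push-images   # writes <project>.dab and pushes the images to the registry
docker deploy examplevotingapp        # reads examplevotingapp.dab, creates the network and services
docker service ls                     # verify everything is running
```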
So the thing is, when you deploy a distributed application bundle right now, it doesn't automatically open the listener in the ELB. I hope it will someday, but right now I'm adding the listener to the ELB manually, and the ports there are not right, so I should fix that. Let's do the first one; actually, I can use my history. So this is the one that's already there, oops. And okay, so now if I go to my target DNS on port 30001: there are no votes yet. It's running in the cloud. And if I hit the other one, click on cats, and go to the result page, it's updated. So it worked. Yay. Okay, so that was everything. Basically, I went through using Docker for AWS to prepare a cluster; deploying an Nginx service and how it gets exposed; draining nodes, scaling the service, putting drained nodes back to active; and then converting a Compose file to a distributed application bundle and deploying it. And there are a few rough edges: due to a bug, we cannot have multiple published ports per service, and the publish command should work, but I don't know why it didn't. I'll probably figure it out five minutes after I finish. And that's about it. So, any questions? Yeah. "When you deploy the application bundle to a cluster, how does it map it? Do you specify it somewhere?" Let me show you the contents of the .dab file. Basically, the distributed application bundle, if I do less on it, all it contains is a list of services, and every service has the image it needs to run and the network it needs to connect to. So your question is how it maps these to the cluster? "The reason being, there are multiple clusters running at the same time." Multiple clusters, yeah. So the way that works is: I have my Docker host set, and I have a tunnel running from my local laptop to this particular Docker for AWS SSH ELB. So I'm connecting to this manager, and I am forwarding the Docker engine control endpoint to my local laptop. So I am effectively telling my laptop to talk to this particular cluster, right? If you're familiar with Kubernetes: in Kubernetes, you have the kubectl configuration, and you can set the context to one cluster or to another cluster. Actually, Kubernetes also has the concept of cluster federation, where another layer runs across all clusters, and then you are able to schedule services across multiple clouds, things like that. But I don't think that's there yet with Docker 1.12. I think 1.12 is quite exciting. Oh, there's one more thing maybe I wanted to say, sorry. Docker for AWS has this architecture where you have the AWS-specific parts, VPC, EC2 under the management, EBS, ELB; it's very specific. And then on the Docker side of things, we have the user application, the storage plugin, and infrastructure management. Docker announced back in October something called InfraKit, which is a way they want to abstract the infrastructure. Why do I talk about this? Because Docker for AWS is kind of the start of this infrastructure management tooling, but it's not there yet, right? Right now, Docker for AWS runs the orchestration within the cluster; I don't think it's using InfraKit right now, because InfraKit is very, very much at the beginning stage.
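Adding the missing listener by hand; the load balancer name is a placeholder you would read from the EC2 console:

```bash
# Forward the ELB's port 30001 to the same published port on the nodes;
# the swarm routing mesh takes it from there on any node.
aws elb create-load-balancer-listeners \
  --load-balancer-name <stack-elb-name> \
  --listeners Protocol=TCP,LoadBalancerPort=30001,InstanceProtocol=TCP,InstancePort=30001
```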
So I just wanted to mention that, though I don't remember exactly why. But yeah, you were asking about clusters, and I found it very exciting because of the way they actually built it: it's built on top of a lot of experience. Google designed Kubernetes and made it open source, and Docker had their own Swarm solution, which worked in a very different way. They basically took a lot of the lessons learned from the first version of Swarm, took a lot of the good aspects of Kubernetes, and combined them in a way that I think is very promising. To be honest, Kubernetes right now is being used in production by a lot of companies, so Kubernetes is really mature. Docker for AWS is still in private beta, and Docker 1.12 is stable, but it's not as advanced as Kubernetes, I must say, honestly. But it's definitely going in a very interesting direction, right? So that was a part of your question. Anybody else? And now everybody's afraid: oh my God, he's going to take so long to explain again. Volumes? Is that working? I have not played with volumes, and you reminded me, because we talked about logging, right? How do we get the application logs? One very exciting thing with the latest release of this beta is that container logs are automatically sent to CloudWatch. What happens is, when you run Docker for AWS, it actually runs a logging driver in the background, and everything is automatically accessible in CloudWatch. Let me find it here. If I go to my CloudWatch log groups, I can see my Docker for AWS group; there should be two now, yes, because I have the second cluster. If I click this one, I can see all my logs; it's a bit difficult to know which container is which. But last time I was having an error with, for example, this container, which is the vote server, EAD-something. So if I take that ID... oh, my terminal broke, I did something. Okay. If I put that here, yeah, there it is: I can see all the logs of that container directly in CloudWatch. So it's very, very integrated. I don't think ECS even has that ability; I'm not sure, I don't know how the log shipping in ECS works. But I find this very interesting. Okay, why did I talk about this? You asked me about volumes, but then I remembered that I forgot to show the logs. As for volumes, I'm not sure how to handle them yet. I really don't know, because I would assume this will be part of the distributed application bundle definition, right? When you write a Compose file, you specify the volumes, so when you do docker-compose bundle, the bundle should define the images, the volumes, the ports, everything. But when I ran the command: yeah, "not supported". So, as I said, it's experimental. Where is it? I was there. Okay. Oh, yeah, my terminal is broken, sorry. Yeah, it was complaining about some things not yet being supported; like I said, distributed application bundles are still experimental. "How does this compare to ECS?" Oh, ECS is Amazon's own solution to run containers. ECS spins up EC2 instances, runs an agent on them, and Amazon manages the managers. So what I did here, when I spun this up: where are my EC2 instances? Right.
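Browsing the same logs from the CLI instead of the console; the group and stream names are placeholders in the shape the stack generates:

```bash
aws logs describe-log-groups --log-group-name-prefix docker-aws   # one group per stack
aws logs get-log-events \
  --log-group-name <stack-log-group> \
  --log-stream-name <container-id-stream>   # stdout/stderr of a single container
```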
If I go to my instances, I have (and I'm probably going to have a big bill) my Docker for AWS managers running. If you were using ECS, you would only see the workers; the managers would be managed by Amazon, so you don't need to worry about them. Yeah. And the difference is, yes, with ECS you only pay for the EC2 instances, the workers; you don't pay for the management plane. So ECS may be cheaper, but it doesn't have the same functionality. I don't know how many nodes you can scale ECS to; I have no information there. Too bad Kai is not here; where is he when you need him, right? "Do managers actually work as workers as well?" Oh, yes. As you saw in the visualizer, some of the managers, we have three managers, are actually running workload. You can change that; you can say that a node should not run anything, usually through labels and constraints. "How much overhead do the managers have?" I don't know exactly, because swarm mode is integrated inside the Docker daemon; it's a flag you enable, and I don't know how much more memory the engine uses when it's a manager. But as you saw, the managers also run some containers, and those containers are the integration with AWS: the SQS notifications, setting up the ELB, things like that. So on one side we have the manager logic running in the daemon, which is the same no matter which cloud you're on, and then additional containers that manage the cloud specifics. "Is it possible to try this out faster, locally?" Yes, you can; I've not done it, though. "What about persistent data?" Persistent data is a very big topic, because even Kubernetes is very good at running stateless workloads, and although it has support for persistence using volumes, most people I know are still running their databases outside of the cluster. You can have network-attached storage to provide the volumes, or you can use EBS to mount a volume inside the instance; then, whenever the container moves to a different instance, the orchestrator needs to move the volume to the instance where the container moved. There are many solutions for that. So it's a very big topic; it's what everybody wants to work on, how to manage persistence. But for now, most people are still running the database outside. We are running Elasticsearch with shards on a Kubernetes cluster; we do not have volumes or persistence set up, but if a node dies, or if containers die, the Elasticsearch replication automatically re-replicates. So it has its own persistence as it runs in the cluster; but if everything dies at once, the Elasticsearch data is gone. Anyway. "Is there a way to have node affinity of services, where, for example, you have a service and a database on the same node?" Like in Kubernetes? No, not really like in Kubernetes, but you want to co-locate certain services. I would imagine that you would do that in the task definition; I'm not sure, actually. I'm not sure if you can have multiple containers per task. The way it works: when I do docker service create, it creates a service, and then the tasks are the actual things running, implementing the service or serving the service.
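How you would keep work off the managers or pin services, using the labels and constraints just mentioned; node IDs are placeholders:

```bash
docker node update --availability drain <manager-id>   # managers keep managing, stop running tasks
docker service create --name web \
  --constraint 'node.role == worker' nginx             # schedule only on worker nodes
docker node update --label-add storage=ssd <node-id>   # then: --constraint 'node.labels.storage == ssd'
```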
So the tasks could be containers, could be VMs, could be anything; that's why they didn't call them containers, they called them tasks. In Kubernetes, they're called pods. So for co-location, maybe tasks are the solution; I'm not sure. Yeah, but if you want to know more, Google has a very detailed explanation of why they use the pod concept, how you co-locate containers, and what the advantages of that are. I'm not even sure co-location would help here, actually; I'm sure that by default the traffic would go out and come back in again. But in Kubernetes, co-located containers definitely communicate over localhost; they talk locally and they share volumes and things like that. In Kubernetes, you can have two different containers with their own file systems, one can be Ubuntu and one can be Alpine, but they can share a volume and they can communicate over localhost. So you can co-locate services; that's a very strong concept, and a lot of orchestrators add it. "What about Red Hat's solution, OpenShift?" Yes, OpenShift Origin is there. OpenShift is a commercial product built on top of Kubernetes, and Origin is the open source version. Red Hat promised me access, but they never gave it to me. It's all implemented on top of Kubernetes. For persistent volumes, they have a concept where you can assign a persistent volume on the local host and just use it for a database. So on top of Kubernetes, you have OpenShift; you have Fabric8, which has its own solution, I don't know exactly what it's called; then there's Tectonic; a lot of hosted solutions. Even DigitalOcean is implementing Kubernetes as a service on top of DigitalOcean. So Kubernetes, like I said, is used in production, and a lot of companies like Red Hat build on top of it; Red Hat also contributed a lot to the actual Kubernetes source code. There are a lot of orchestrators. So Docker here has their own orchestrator; it's not as mature, but I think it's going in a very interesting direction. Anybody else? No? Everybody's tired of hearing me. All right. Then thank you.