Okay. Hello everyone. I'm going to try to get started right on time, because I want to make sure we're able to get out on time, and I'm also hoping we can have some time for questions. So welcome. This is the talk on Docker orchestration. My name is Mike Goelzer, and I'm joined here today by my colleague Victor Vieux, also from Docker. We both work on open source at Docker, on what we call the Docker project. We're also employees of Docker, so of course we have a biased perspective, but we're going to tell you our take on orchestration, and we hope that will be of interest to folks.

Let me back up a little bit, because I think it helps to start these things by explaining why this is going to be an interesting talk, why you should care. Here's our take on it. There are a lot of orchestration systems out there today, and each one reflects a different angle on orchestration, different views about what's important and what's not. The philosophy we have approached Docker orchestration with is one in which we focused a lot on ease of use, while providing the modern orchestration features that you really need to run a real production system. I'm talking about features like declarative desired state for applications; a strongly consistent internal store, a Raft store, and we're going to talk more about that; load balancing; service discovery; and security. Basically, we've provided these features, but we've tried to expose them in a really simple way. That's the philosophy you're going to see throughout this talk.

Let me tell you a little bit about the structure of what we're about to present. I'm going to start with a quick history lesson on Docker orchestration: where Docker has been in this area, where we are today, and where we're going. Then I'm going to walk you through what the system looks like from a user perspective: how to use it and what the main abstractions are. Then my colleague Victor is going to do a bit of an engineering deep dive. He's going to go into how the system works internally and how it was designed, and he's going to do a demo, which will give you a sense of what the experience of actually using the system is like. Then we're going to try to have about a ten-minute block for questions, because I think that's often the most fun and most informative part of these sessions. And then we're going to get you out of here by, I think it's 45 past the hour, so you've got ten minutes to use the restroom and so forth before the next session.

So let me start with the history lesson. Back in 2013, the original Docker debuted, and it became popular almost overnight, really very quickly. I think that's because for the first time there was a very easy way to run containers. And not just to run containers: there was a library of pre-built container images. That was really compelling to people, because before this point containers had been a kind of esoteric technology. You had to know a lot about Linux kernel features, you had to write your own C code, and it was a very difficult process. I think we really made it easy, and I think that was a huge contribution to the space.
Our first foray into multi-container orchestration came in the 2014-2015 timeframe. We actually went through several iterations of this, but the best-known one, and the current one we're still supporting, is called Docker Swarm. It's a separate set of binaries that you install on top of Docker: you install Docker on your machines, and then you install these additional binaries, which come in the form of containers. Docker Swarm gives you orchestration capabilities that make many Docker hosts appear like a single Docker host. It's a proxy system: one host fronts many hosts, and the result is that the one host appears to have a very large amount of resources, a large number of CPUs, a large amount of RAM. That's a really simple abstraction, very easy to wrap your head around, but it has limitations, and it doesn't have some of the features that people have come to expect from modern cluster management and container orchestration systems.

So starting in the middle of last year, we embarked on a project to build a more modern container orchestration system that would address some of the real-world challenges of clustering. That was finally unveiled to the public at DockerCon this summer, just in June, so we're talking about technology that was announced less than two months ago. That's what you see at the bottom there, and it starts to give you a taste of what the commands look like. You can see that in the Docker Swarm era, commands were a little more complicated; there are a lot of port numbers in there. Now things are much simpler: it's one command to get a swarm created. What others call a cluster, we call a swarm, but it just means a collection of hosts running Docker. And once you do that, you can launch Docker services, and I'm going to go into that in more detail because it's really a core concept of what we've built.

At this point I'm going to do a feature walkthrough. Basically, I'm going to show you a plausible sequence of commands that you would use to control the system, along with some corresponding diagrams of what those commands do. So the first issue is, let me get rid of that title bar, okay, the first issue is: how do you create a swarm, one of these clusters? It's dead simple. On the first machine you type docker swarm init, and that puts that engine into swarm mode. This is significant because everything we do is totally backwards compatible. If you don't want to use swarm mode for whatever reason, you're happy with your existing setup, you're not interested in new features, then don't type this command; everything will continue to run the same way it always has for you. But if you want to experiment with these new features, the first thing you do is docker swarm init. Now you've got a swarm of one: one machine that is functioning as a cluster manager. To join additional machines to that swarm, you run docker swarm join on a second Docker host, pointing it back to the first machine's IP. Now you've got a two-machine cluster. You can keep doing docker swarm join on as many hosts as you want. So this slide shows, okay, we ran the docker swarm join command four more times, and now we have a cluster of six machines.
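For reference, a minimal sketch of that bootstrap sequence; the IP address and token below are illustrative placeholders, not values from the talk:

  # on the first machine: create the swarm, making this engine a manager
  docker swarm init --advertise-addr 192.168.0.10

  # on each additional machine: join using the token printed by init
  docker swarm join --token <worker-token> 192.168.0.10:2377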
Once you've created your swarm like that, you can start to use something called Docker services, which is a new concept we've introduced. Fundamentally, a Docker service is a long-running process that can have one or more replicas, and it will be scheduled and distributed across your cluster. I'm going to show some of the unique features of this as we go through the slides. When you launch a set of containers this way, they're all on the same overlay network, which means they can communicate with one another over TCP/IP. In this first example, I'm imagining that you have a container image called frontend. This would probably be some kind of web front end, part of a larger web app, and you want three replicas of it, which makes sense from a load balancing standpoint: you have traffic that you want to distribute between three different instances of your service. So now you've got three containers, all connected together on an overlay network.

You can have more than one service on the same overlay network. The red container is a Redis container; it's actually a Redis service with a replica count of one, which means there will be just one container for that service. So with these two commands, you've got a two-tier application: a web front end and a database back end that those front-end containers can communicate with.

Now we get to some of the power of services: what happens when a node fails? In a real-world production cluster, that's happening all the time. Machines are going up and down; you've got to design your system with the assumption that hardware is going to fail all the time. So let's say that machine on the right goes down, and it currently has two containers from our three-replica frontend service. We've lost that node, so now the desired state of the system that we declared with those commands at the bottom is deviating from the actual state. The desired state we declared was three replicas of the frontend container; the actual state we're in right now is that there's only one replica. So the system is going to take it upon itself to restore the desired state. Without any operator intervention, it's going to detect this deviation and restore the desired state by bringing up two new instances of that frontend container on other nodes. That's what that looks like: the desired state and the actual state have now been brought back into sync.

Scaling is another example of a feature that's very easy with Docker services. I've previously shown you the docker service create command; there's also a docker service update subcommand, and you can update any aspect of a service. You can update the image, you can update the ports that are exposed, but you can also update the replica count. In this case, I've updated the number of replicas for the frontend service from three, which is what I was showing on the previous slides, to six, which is what I'm showing here. So now we've got six copies of that black container. And you can scale more: here's ten copies. Again, same command, docker service update, but to ten replicas.
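For reference, a sketch of the commands behind those slides; the image names and network name are illustrative:

  # create an overlay network and two services on it
  docker network create --driver overlay mynet
  docker service create --name frontend --network mynet --replicas 3 frontend-image
  docker service create --name redis --network mynet --replicas 1 redis

  # scale the front end by updating its replica count
  docker service update --replicas 6 frontend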
Now let me get to global services. This is another scheduling model that we have. Sometimes you want a container that runs once on every node in your cluster: instead of scaling it to a fixed number, you just want to make sure there's one copy on every node. A good example would be a monitoring agent, where you need exactly one copy on each node in the cluster; another example might be an antivirus agent. There are a lot of scenarios where you want one copy on each node, and that's what global services are. If you look at the command at the bottom, the syntax is slightly different: it's a docker service create, but this time we specify --mode global to indicate that we want global scheduling. And as a result, we get this green container, one on each node in the cluster. The example I've used here is Prometheus, which is a monitoring tool, but there are any number of situations where you may want to use this.

Now let me talk about constraints. A really common question people ask is: how can I have finer-grained control over scheduling? Maybe I have a service with a certain number of replicas, but I don't want those replicas going to just any random machine in the cluster. I don't want the scheduler to arbitrarily pick where they go; I want to control where they go. For example, maybe I want my replicas to only go to machines that have solid-state drives. What you can do here is use engine labels, a feature that's been in Docker forever. An engine label just means that when you start up the Docker daemon on certain machines, you give it some extra arguments; engine labels are just key-value pairs. In this case, I'm using a key-value pair to indicate that two of the nodes have SSD drives. So now I can do another docker service create. This is actually the same command I showed you before, except I added the bold part, and what the bold part says is: constrain the scheduling of these replicas so that they only run on machines that meet this key-value requirement. As a result, these three replicas will be scheduled only on the machines with the SSD drives. And you can again scale the service; because of the constraint, the new replicas will again only be brought up on the machines with that label. That's constraint scheduling.

Before I turn it over to Victor, I want to talk about a few additional features. These don't lend themselves as well to simple diagrams, so I'm just going to talk through them a little bit. One of the things we added in Docker 1.12 that we're really excited about is this notion of health checks embedded in container images. When you create an image, what you can do now is specify a health check line in the Dockerfile. That's what the Courier font up there is showing (sorry to cast a shadow): a line in your Dockerfile. What it specifies is: the way to determine the health of this container is to run a command, a curl command, inside the container; run it every five minutes and require that it return within three seconds. And if the command fails to return within three seconds three or more times in a row, then that container is considered unhealthy.
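For reference, sketches of the two features just described; the label value, endpoint, and image names are illustrative, not the exact lines from the slides:

  # start the daemon with an engine label, then constrain a service to labeled nodes
  dockerd --label storage=ssd
  docker service create --name frontend --replicas 3 \
    --constraint 'engine.labels.storage == ssd' frontend-image

  # a Dockerfile health check line along the lines of the one on the slide
  HEALTHCHECK --interval=5m --timeout=3s --retries=3 \
    CMD curl -f http://localhost/ || exit 1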
And this is something the engine itself will take on; it'll do this automatically. If the container is not passing these requirements, it will be marked unhealthy, and the swarm mode orchestration system will notice that the container is unhealthy, kill it, and bring it up somewhere else. So that's the new health check feature, and that's how it integrates into our vision of services in Docker 1.12. And by the way, just on that last line: we're working to get health checks into all of the official images, because if you look at the Docker pull statistics, the overwhelming majority of images that people use are the top 20 or so of our officially curated images. They're things like Redis and nginx, the things you would guess are likely to be commonly used. So what we're trying to do is put these health checks into the official images. You can always override them; it's just a script, and you can put any script in your container image. If it exits with a non-zero status, that's an indication of unhealthiness; if it exits with a zero status, that's an indication of success. You can see the shell command here is basically forcing curl to exit with a non-zero status if the request fails. So we're going to work on getting that into the official images, but it's always overridable: if you have an image derived from our official nginx, you can override this. And you can also control it at the orchestration level, so you're not obligated to use this, but we think it will be really useful to a lot of people.

Let me talk a little bit about what we've done in networking, because we've done some cool stuff with port exposure. I'll run through this briefly because I don't want to cut too much into Victor's time. In a lot of cases, you have a service and you want to expose it to the public internet on a port. What we do is: if you run a docker service create command like the one I'm showing at the bottom of this slide, you're mapping a port through to every node in the cluster. Every node in the cluster will now listen on port 8080, and it will route traffic on port 8080 to those frontend containers. It actually doesn't matter which machine in your cluster the traffic goes to, because our internal routing mesh will take traffic arriving on port 8080 at any machine in the cluster and route it to the service. Even if the user browses to that far-right node that doesn't have any copies of the container running, the routing mesh will internally reroute the request to node one or two, which do have copies. Victor's going to talk more about that.

Let me speak very briefly about security by default. Basically, there is no insecure mode in Docker 1.12 orchestration, in swarm mode. Everything has TLS encryption between the nodes, with TLS mutual authentication so that the nodes can trust each other and be confident there's no man in the middle. That allows us to have a notion of cryptographic node identity. So if you want to segregate workloads, if you have a certain set of machines where you want payment card workloads to run, you can do that, and we can talk more about this in the Q&A; unfortunately, I'm not going to have time to discuss it much. We at Docker have not yet done official scale testing; it's on our roadmap for the rest of this year. But we have such an incredible community that people just do stuff for us.
And in this case, we have a wonderful community member. His name is Chanwit; he's a professor at a university in Thailand called SUT. He did his own scale testing using crowdsourced nodes, and he got really impressive results: he was able to get to about 100,000 containers on 2,300 nodes. These are completely crowdsourced nodes, spread across all different clouds. This was totally not the test that we at Docker would have designed from a marketing standpoint, if we just wanted to show how great our system was. We had no control over this; we didn't pay the guy a dime. We gave him some advice, you know, you're going to get better performance if you do this rather than that, but basically we had no control over what he did, and he got really great results. He published all the raw results on GitHub, and he also posted a lot on Twitter.

So at this point, I'm going to turn it over to Victor Vieux, and he's going to do a deep dive into Docker orchestration. Thank you.

Hi. So, yeah, as Mike showed you, it's quite easy to use the new swarm mode: it's just a few commands, swarm init, swarm join. But it's actually quite complex behind the scenes, so we wanted to do a quick deep dive on the internals. Let's start with the topology. As Mike explained, this is the topology of a basic swarm cluster: it's basically a bunch of nodes interconnected together. But those nodes have certain roles. Right now we have two roles: managers and workers. And it's very important to understand these roles, because they are very different. The managers, usually you have just a few of them. They can change the state of your cluster; they can make scaling decisions; they know the state of your entire cluster. And usually they're on bigger, separate machines that you know and can identify. On the other side, you have workers. Workers have only one job: they receive tasks from the managers, they execute those tasks, and they report status. That's it. They cannot change the state of your cluster; they cannot do much. If you like the pets versus cattle analogy, managers are pets: you have a few of them, you know where they are, they are important. The workers are cattle: you can lose a few of them and that's fine; containers will be rescheduled on other nodes.

As I said, every node has a role, and the roles are dynamic. If you want, you can promote a worker to become a manager, or you can demote a manager. Once again, managers are very important, so swarm will never promote a node for you. Let's say you have three managers and you lose one: you will end up with two managers. If you want, you can promote an existing worker to become a manager, but swarm won't do it for you, because adding and removing managers from your cluster is really a big deal.

Now if I take the same diagram but lay it out horizontally, as you can see from the different shapes of the arrows, every manager in the cluster takes care of a fair share of the workers. Here I have three managers and six workers, and every manager takes care of exactly two workers. We do that on purpose, so that every manager has a fair number of workers and the load is spread across every manager.
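For reference, the promote and demote operations mentioned above are single commands; this is a sketch, and node2 is a hypothetical node name:

  # promote a worker to manager, or demote a manager back to worker
  docker node promote node2
  docker node demote node2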
Let's talk about HA. When you have multiple managers, only one is actually the leader; the others are followers. You can talk to any manager to schedule a workload. If you're talking to the leader, it's going to make the scheduling decision and schedule your work; if you're talking to a follower, your request is basically going to be redirected to the leader. So what happens if we lose the leader? Two of those workers become orphans. Then the two remaining managers talk to each other and elect a new leader, and once we have a new leader, this leader is going to tell those two orphans to reconnect to the cluster. And once again, we do this in a fair way: here, every manager ends up taking care of three workers, not four and two. It really helps us manage scale.

Now let's dive into the internals, starting with the communication models, because managers and workers communicate in very different ways. The managers have a built-in Raft store, and I'm going to talk more about it in just a minute, but every manager participates in Raft, and when they want to share information, for example when we make a scheduling decision, we commit that decision to Raft, and that's how it gets propagated to the other managers. The workers talk over a gossip protocol. And both workers and managers communicate over gRPC.

If we zoom in on the managers, the communication is strongly consistent. Raft holds the whole desired state of your cluster, and it's really simple to operate and fast. Basically, how it works is: when you want to schedule some containers, the leader needs the state of your entire cluster. It needs to know, okay, this node has these containers running on it, that node has those containers, and that's how it makes a decision. Every manager holds the state of your entire cluster in memory, so when you want to make a scheduling decision, you read everything from memory, which is really fast. You make your decision, and once the decision is made, you just commit it to Raft. And as Mike said, it's secure: everything is over TLS.

Now if we look at the workers, it's slightly different. As I said, it's a gossip network, so it's peer-to-peer: a worker talks to one of its neighbors, and so on. Everything is also over TLS. When a worker connects to a manager, it receives the TLS key, and every worker uses this key for its secure communication. By default, those keys are rotated; I don't remember how often, I think it's a couple of hours, but you can change it in the settings. Once the keys are rotated, they're pushed down to the workers, and the workers start using the new key to communicate. Mostly, workers share information such as load balancing rules and IP addresses: when a container is started on one worker, its IP address is shared with the other workers.

Now let's zoom in on the nodes and explain what happens when a user does a docker service create. At the top here, you have the CLI. When a user does a docker service create, it makes an API request, and the API component of the manager receives this request. It makes sure it's a valid API call, that the JSON is correct and so on, and then it creates a service object and commits it to Raft. Then the orchestrator part of the manager picks that up and compares the actual state with the desired state. Let's see in this example.
I did a docker service create with four replicas. The desired state says I want four tasks; the actual state says I have zero tasks. So the orchestrator is going to create four tasks. Then the allocator takes those tasks and allocates the resources they need; most of the time that's IP addresses. Next, the scheduler picks up those tasks, looks at the list of nodes in the cluster, and finds adequate nodes to place those tasks on, respecting the constraints and everything Mike showed you before. And the last part in the manager is the dispatcher: the dispatcher handles connections and talks to the workers. If we look at the bottom, on the worker side, it's simpler. The worker connects to the dispatcher on the manager and basically checks whether it has work to do, and when it receives some tasks, it talks to its executor and launches them, so it launches the containers.

One thing that's very important to note here is that everywhere in the manager, and in some parts of the worker, we only talk about tasks, not containers. In the code itself we have an abstraction: swarm mode doesn't really care about containers. It schedules tasks, scales tasks, updates tasks. Today a task is a container; tomorrow it could be a VM, it could be a unikernel, it could be anything. The system is not specific to containers.

Next, let's talk a bit about networking. In this example, we have two services. The one on the right is service two, and it has one replica. The one on the left is service one, and it has three replicas. So what happens when service two wants to talk to service one? Basically two things. First, every service has a virtual IP, and we have a built-in DNS server. So when service two wants to talk to service one, it does a DNS request for service one and gets back the 10.0.0.98 virtual IP. And then, behind that virtual IP, we round-robin across every container IP: when service two makes requests to service one, first it gets the .1 IP, then the .2, then the .3, and it keeps rotating like that.

And the next thing, before I do the demo, is the ingress load balancer. This one is quite complex, so let's assume we have this setup: a two-node cluster, and we want requests to port 8080 on our load balancer to reach the frontend service. It's basically two steps. First, you do a service create and say, for example, I want four replicas, and you expose the service on a port, say port 31000, on your cluster. As Mike said, this port is going to be bound on every node in your cluster: every node will bind this port and redirect traffic to one of the replicas. Then you configure port 8080 on your load balancer to route to port 31000 on every node. So the diagram looks like this: at the top, when you make a request to the public IP on port 8080, it routes to your load balancer. And what's really cool with our routing mesh is that you don't need a load balancer that is container-aware. Your load balancer doesn't need to know, okay, this service is only on this node or that node. You can use a regular load balancer like HAProxy or nginx: you just put every machine you have behind your load balancer and tell it, okay, this port is reserved for this service. So when you hit the public IP on port 8080, HAProxy or nginx routes your request to any node, and it doesn't matter whether that node actually has a replica behind port 31000 or not. From there, it's pretty much the same as before: the request is routed to the virtual IP of the service, which is present on every machine, and your request finds its way to an actual instance.
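For reference, a sketch of the publishing step just described; the service name, image, and container-side port are illustrative:

  # bind port 31000 on every node and route it to the service's replicas
  docker service create --name frontend --replicas 4 --publish 31000:80 frontend-image

An external load balancer such as HAProxy or nginx would then simply forward its public port 8080 to port 31000 on any, or all, of the nodes.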
So now let's do a demo; hopefully that will make everything easier to understand. Here on the left, I have a small visualizer, where you will see new machines appear, and on the right, I have three terminals for the three nodes in my cluster. First, I'm going to SSH into every node. I hope the font is big enough. And first, as I said, I'm just going to do a docker swarm init. So here I just created a swarm with one node. As you can see, when you create a swarm, it prints a command; you just have to copy and paste this command on any other machine, and that machine will join your swarm. Note that we have a unique token here: you need this token to join the cluster, and the command also includes the IP to connect to. So if I paste this on node 2, for example, I have a two-node cluster. I can check that by doing a docker node ls. You do see I have two nodes. The first one, node 1, is a manager, and it's the current leader, because I have only one manager.

Now I'm going to add a third node. I could type the same command to add node 3 as a worker too, but what we could do instead is add node 3 as a manager. To do this, we have a docker node, sorry, docker swarm join-token command; that's where you can manage those unique tokens I was talking about. Let's get the token for managers. If I do docker swarm join-token manager, it gives me basically the same swarm join command, but the token here is different: this is the token you use to join as a manager. So let me copy this and paste it here. Now I have three nodes in my cluster. Again, I can do a docker node ls: you see node 1 is still the leader, and node 3 is a reachable manager. It's important to note that the join-token command also has a --rotate flag, so you can rotate those tokens and change them as you want. These tokens are only used once, when you connect; after the connection is established, it's all over TLS. But you can rotate the tokens every so often.

And to make the demo simpler, I'm going to promote node 2 to manager as well, so I'll have three nodes that are all managers. It's very simple: you have to be on an existing manager, so in this case either node 1 or node 3, and you do docker node promote node2. And that's it: node 2 was promoted to manager. So now if I do a node list, you see I have managers everywhere.

So let's start a few services. First, I'm going to create a network: docker network create, using the overlay driver, and I'll call it vote. Again, here, compared to 1.11, you don't need any external key-value store to create overlay networks. On top of our Raft system, we've built an internal data store, and it acts as the backend for networking. So here, I didn't need any dependencies; I just have three machines with Docker 1.12 freshly installed, and that's it, I can create overlay networks.
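For reference, a sketch of those token and network steps from the demo:

  # print the join command (and token) for adding managers; --rotate invalidates the old token
  docker swarm join-token manager
  docker swarm join-token --rotate worker

  # create the overlay network used by the demo
  docker network create --driver overlay vote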
So if I do a docker network ls, you see the network is here, and its scope is swarm: it was created in swarm mode. Then I'm going to create a Redis service: docker service create, I name it redis, I put it on the network vote, and I use the redis image. You see on the left, the Redis was created. I can list my services, and I have the redis service with one replica running. Then I'm going to start the front-end application; it's just a simple web app that connects to the Redis. So docker service create, I name it vote, I put it on the same network, vote, and I use this image. And since it's a web app, I want to publish a port to be able to access the web app from the outside, so I publish it on port 8080. And it started: you see it was started on node 3. As Mike said, I can go to node 3 and look at port 8080, and I have the web app. If I refresh, you see at the bottom which container ID processed the request; since I have only one replica of this service, it's always the same container ID. And again, with the routing mesh, I can hit any machine in my cluster, for example node 2. Node 2 has nothing: it doesn't have the Redis, it doesn't have this front end. And still, I can hit node 2 on port 8080, and the request is routed to the right container.

Now, let's scale this service. So docker service scale, and let's say I want six replicas of the service vote. Sorry, I need to use update here: docker service update. All right. So I had only one, and I asked for six, so the reconciliation loop added five more. And if I go back here, again, I can hit any node, but here I'm on node 2: when I refresh, you see that the container that processed my request at the bottom is different every time.

And for the last part of the demo, because I think we're going to run short on time, just one last thing. Let me remove every service I have, so vote and redis. I'm going to create a service, again the same one, with three replicas, and I'm going to create another service, this time in global mode: docker service create --name vote2 and --mode global. When I say global here, I'm saying I want one instance of this service on every machine. So here you see, hopefully in different colors: at the top, you have vote2, and at the bottom, you have vote1. And if I go to node 2, for example, and do a reboot, after some time swarm is going to detect that node 2 is down. For vote1, we said we want three instances, so the missing container is going to be rescheduled to another node, either node 1 or node 3. So it was rescheduled on node 1. But vote2 is not the same: for vote2, we asked for global mode, so we want one instance per node, and since we now have only two nodes, we have two instances. In a few minutes we're going to go to questions, but first, node 2 will come back up in a moment. And here we go. As you can see, the vote2 container was scheduled on it right away, because when node 2 came back, the desired state says I want one replica of vote2 on every node, and node 2 didn't have any, so it was restarted on it. All right.
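For reference, a sketch of the service commands from the demo; the vote app image isn't named in the talk, so vote-image and its container-side port are placeholders:

  # a one-replica Redis and a published web front end on the same overlay network
  docker service create --name redis --network vote redis
  docker service create --name vote --network vote --publish 8080:80 vote-image

  # scale the front end, then add a global service that runs once per node
  docker service update --replicas 6 vote
  docker service create --name vote2 --mode global vote-image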
Mike, you want to talk about this one? Yeah. We wanted to use the rest of the time for questions; I just want to leave this slide up. We wanted to put in a quick plug for some upcoming Docker sessions. Some of these are by Docker employees, some of them are by friends of Docker. In particular Jérôme Petazzoni, he's kind of our chief evangelist, and he's just a really great teacher. He's doing an all-day workshop tomorrow, so I would highly recommend it if you want to go deeper into this stuff in a really hands-on way; he has a great talk that covers this material. So, any questions about what we showed? Yes?

So, yeah, not right now, but we definitely want something where you can use Compose. At DockerCon, we showed a really early version of what we call the Docker Application Bundle, where basically it takes a Compose file and, from that Compose file, creates a bunch of services. We got a lot of feedback from DockerCon, so it's still experimental; that's why we didn't talk about it here. It's going to change in the near future, but yes, soon we will have some way to create services from a Compose file. Yeah, one of the points of confusion is that a lot of people see that the Docker Compose v2 format has a key called services, so they assume, oh, that's going to start a Docker 1.12 service, which is a totally reasonable assumption. It happens not to be correct, because Docker Compose hasn't quite caught up to where we are with services. So that's the state of things. But this DAB file, I think, once it goes GA in the next version, that's going to change. Yeah, that's the plan. It'll be from the Docker CLI.

Is there a question up here? Yes, sir. So, the nodes, they communicate over the gossip protocol, but that's just for the routing information, to route the network between containers. Everything else goes through the managers. Only the managers know every node in your cluster and all that. No, you have to talk to a manager; you cannot talk to a worker directly. Yeah.

Yes. So this new swarm mode, unlike the previous one Mike talked about, everything is built into the engine. We added a bunch of new API endpoints, so we have /services, /nodes, all that stuff. Yeah. One of the big goals is that everything should be API-driven if that's what you want to do, because we know a lot of people want to automate. So we added all these endpoints to the Docker engine API, so that if you don't ever want to touch the CLI, you don't have to.

Yes, sir. No, actually, we don't, because, I mean, you can do it manually, but it's not a core concept of swarm. Although one thing I didn't talk about is that we have several availability modes for workers. By default, a worker is active: it can receive work and do work. We have two other modes. We have pause, where the worker will continue running the workload it has, but it won't receive any more work. And the last one is drain, where it will basically stop every workload it has and just wait. So what you could do is put some workers in drain mode: they are part of your cluster, but you won't have any workload on them, so they're like a reserve.

So we just got a sign in the back saying, I think, that it's time to wrap up. So we're going to let you go, but we're around and happy to take questions one-on-one. Thank you to everyone who showed up. We really appreciate it.
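For reference, a sketch of the availability modes mentioned in that last answer; node2 is a hypothetical node name:

  # stop accepting new tasks but keep running existing ones
  docker node update --availability pause node2

  # evict all tasks from the node and hold it in reserve
  docker node update --availability drain node2

  # return the node to normal scheduling
  docker node update --availability active node2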