Should we start? One moment. OK. OK, guys, thank you all for coming. If you have any feedback about the talks or the conference, feel free to leave it in the schedule app. We also have some perks here for the best questions during the workshop. And that's all from me.

Good morning, guys. Good to see a full house here. My name is Neependra Khare. I'm currently running my own startup in Bangalore, India, for container training and consulting. Earlier I was at Red Hat for three years; I have actually been at Red Hat twice, and the last time I was there for three years doing performance engineering and looking after clusters on both OpenStack and OpenShift. Last year I also wrote a book on Docker called Docker Cookbook. So that's a short intro.

Before we formally start: how many of you have used Docker? OK, that's really good. And how many of you have used any of the Docker tools, like Swarm, Compose, Kubernetes, Mesos? A really good number. Good. So hopefully the people who are new here can learn something, and those of you who have already used something can be part of the conversation we have today.

While creating all these containers with Docker or anything else, our end goal has always been to run applications, right? A container by itself is nothing; it's just a bunch of features from the Linux kernel, and maybe soon on Windows as well. You can take an application like nginx and run it individually, and that's fine, but the eventual goal is to run a three-tier or multi-tier application in production. That's where you want to go, rather than just running things on a laptop. When you create an application on a laptop, you type a docker command and it just runs fine. But when you go to production, you need a group of machines working together, and you should be able to say dynamically, "I want to scale my application to n replicas," and things like that.

Think about a use case: you are running an e-commerce company, and during the Christmas season you know a lot of sales are going to happen, so you plan for that. If you're just running one node with one nginx, one web server, or one database, your site is going to fail when that load hits on a given day. So what do you want? Some kind of autoscaling mechanism: when I see a lot of load I should be able to scale my web tier up by n, and shrink it back down when fewer people are coming to my website. So one of our goals is autoscaling and features like that.

Another feature you'd like to have: if you're running version X of your application right now and you want to upgrade to a new version, one way is to shut down the entire application and then deploy the new version of the image. The other way is to do rolling updates: you run part of the application on the current version and part on the new version.
Eventually you move everything to the new version slowly, without hitting any downtime. So that's where we want to go with Docker in production, and that's what I'm going to look at today: the tools around Docker which allow us to do all of these things.

I was thinking of preparing a Vagrant setup on which we could try the workshop hands-on, but if everybody tried to do it right now we couldn't finish the workshop. So what I have is everything written up in GitHub Pages; I'll share the link. For now, just follow along with me, and if you are at a good enough speed you can do it with me, because there are going to be about four VMs for Swarm, then three or four for Kubernetes, and then again four for Mesos. So I would recommend that you just follow along and understand the concepts, and if you have a setup handy, feel free to use it. This has been a good learning experience for me as well: I had been focusing on Kubernetes until now, but because of this workshop I got the chance to learn about Swarm and Mesos too.

So, a few reasons why we want orchestration. Everybody knows about pets and cattle, right? If you have an application which is complicated or critical, you want to make sure it is updated very carefully, and ideally automatically, rather than you having to go and do these things by hand. That's why we say we don't want pets in our server farms; we want cattle. Even in production you should be able to upgrade and autoscale dynamically. This is what I will talk about.

Another thing we want is to go across cloud providers. Let's say you are running all your stuff on AWS right now, but you want some part of your infrastructure on Google Cloud or DigitalOcean. You are running the same application, but you should be able to use multiple cloud providers. Some of the existing tools allow you to run across different cloud providers, and if not, they will be adding those features so that you can have your application running on multiple clouds.

These are some of the tools that are out there: Docker Swarm, Kubernetes, Mesos, Diego, Apache Aurora, Amazon ECS, Azure. They all allow you to run containers, each in a somewhat different way. Today we are going to look at the top three: Docker Swarm, Kubernetes, and Mesos. This page is going to get updated over time, so even for the ones we are not covering today, eventually you should be able to find them there, and I'd be happy if you contribute; pull requests and other things are really welcome.

So what do we need to run containers with an orchestrator? First of all, running containers in production requires running them on multiple nodes. You don't want to run containers on just one machine: if the machine fails, everything fails. Then you need a way to bind those nodes together into a cluster. If you take five machines, you can say some of them are masters, which delegate the work to the nodes, or slaves.
They keep everything in sync so that you can update and manage your applications. What we also need is a container engine. We know Docker; you can use Docker to run containers. You can also use rkt (Rocket), which is from CoreOS, to run containers.

The next one is very important: there should be a single source of truth which everybody in the cluster can believe. That source is usually some kind of key-value store. Think about it: you have a cluster of machines, and one of the masters goes down, or some of the nodes go down. How would anybody know whether my cluster is still in the state it should be? Or say you are deploying an application and you ask for five replicated containers for your app. How would you know that five of them are actually running? If something goes down, how would you know that one went down, that only four are now running, and that you should go back up to five? There should be a single point of truth that helps the cluster orchestration figure this out. etcd, Consul, and ZooKeeper are the kinds of key-value stores you can think of here. I can just set a key saying, OK, my app name is such-and-such and its containers are these, and that gets stored in the key-value store; anybody who wants to retrieve it can go and get it.

Any questions here? Is the pace OK, are you able to hear and understand me? Cool.

Now, running containers on just one machine is fine, but then you try to connect containers running on different machines. Think about running a web tier on one machine and a DB tier on another machine. The containers, when they talk to each other, have to go through the host machines. Think of two hosts, each running containers: a container on this machine, if it wants to reach a container on the other machine, has to pass through the host; it cannot go any other way. So how do you connect these containers on multiple machines together and form a flat network? That's one problem you have to solve.

One of the solutions is VXLAN. How many of you know VXLAN? It's tunneling, the same idea you may have seen in OpenStack: to transfer data from a container on one machine to the other machine, I use my existing network, but I wrap the packet I want to send inside another packet, so I'm using a kind of tunneling. Let me clarify with the VXLAN packet format. If you recall from whatever CS course you have done, a packet has an IP header and a payload; that's the kind of packet you send from one machine to another.
So here is what you have: these are two different machines, this is container one, this is container two, and this is the network interface on each machine. Container one can easily reach other containers on its own host, like a typical LAN environment, but if it wants to send something to a container on the other machine, it creates a packet, let's say P1; that goes down to the host, and the host wraps it in an outer packet to send to the other machine. So what you are seeing is an outer packet being sent from one machine to the other, and the packet the container generated, P1, travels as the payload of that outer packet. On the other side the outer packet gets stripped off and P1 is delivered to the destination container. That's the top-level overview. In the packet diagram you can see the outer packet with its MAC and IP headers, then the VXLAN header, and then the inner source packet as the payload. So if container one wants to send something to container two, the packet it generates becomes part of a bigger packet and is sent across. This is the tunneling that VXLAN gives you, and it is the kind of problem you have to solve if you want to create a cluster of nodes running containers. VXLAN is one example, but there are other ways to do this at different layers of the network. With all of this, what Docker and Kubernetes have done is create a pluggable architecture in which you can attach different kinds of drivers that handle container communication between nodes.

Now, when you create a cluster of nodes, to make container orchestration work there should be some masters and some slaves. The job of the master is to say: run N copies of this application on my cluster, and if one container dies, create a new one. The master takes care of that management. It can also schedule a container on a particular machine. Think about a cluster where one machine has SSDs, and you want to make sure a container gets started on the machine which has the SSD; you can put those kinds of rules in place when creating the container. The master collects information from all the nodes and decides where things should go. You also get affinity: you can say run this container next to that one, those kinds of things.

Once you have all this, a cluster of nodes, a master, a network and so on, you need a way to discover a service, because containers are mortal; they can die and come back at any point in time. Always remember, when you run containers you have to be prepared for a container to die at any time and come up somewhere else. Think about a container running on machine one: if it dies for some reason, it shows up on some other node of the cluster. If you are in the middle of a transaction, it should still carry on, because the container dies somewhere and comes up somewhere else. So you need some mechanism to identify that my container is now running on some other machine. That's one part of it.
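That desired-state-versus-actual-state bookkeeping is exactly what the key-value store mentioned earlier holds. As a minimal sketch, with etcd's older v2 command-line client (the key names here are made up purely for illustration):

    etcdctl set /dockchat/web/desired-replicas 5     # record how many replicas we want
    etcdctl get /dockchat/web/desired-replicas       # any component can read it back and reconcile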
The other thing is service discovery: you need somewhere to record where my application is currently running, so that you can decide where it should run and broadcast that this particular application now runs on this particular node. You need to dynamically update your connections; you have to maintain the connection even as things move between nodes. We'll come back to that and make it clearer.

Then you also need load balancing. Say you are running the same application, the web front end, on five different machines; for the outside world you still want to reach just one endpoint, and that endpoint has to do the load balancing. As a client you connect to one IP address, the IP of the load balancer, and it connects you to the different containers, C1, C2, and so on. The load balancer makes sure connections get spread across each of them, round robin or some other way, so that you don't just keep hitting one container.

Any questions here? Either I'm doing a good job, or a bad job, or you don't know what to ask. So the question is: what happens when a container crashes in the middle of its work? That intelligence you have to build into the application. You cannot assume you will never die; you need to build the application with the assumption that it could crash at any point in time. And what counts as the work, you have to define. If the work is, say, serving a web page or running a compilation, and the container crashes, you don't get the output you wanted, and it has to start again from scratch; it can't just resume, because the client never got the result. The load balancer would simply say, OK, this one died, it's not working, send the job somewhere else. Someone suggested that in a similar scenario you could keep the state in storage that is not inside the container, a volume container. But that container still cannot pick up work where half has been done here and half somewhere else; that's not something you can stitch together in this kind of scenario. And if you are talking about transactions, bank transactions for example, then that intelligence has to be built into the application: it should know that because the transaction wasn't completed, it should not accept it and should do it again. That kind of intelligence has to live in the application.

OK, so once you start building with containers: whatever storage a container has locally, if you create a file and don't save it back to some disk outside, it's gone when the container dies. That's one thing. The other thing is, say container one moves from one machine to another machine while you are doing some operations and saving some data to disk. What happens to that data? Because the container moved from one machine to another, the other machine does not have the same backend storage.
So you need some shared storage which is available to all the nodes in the cluster; think of NFS or GlusterFS. Those kinds of shares have to be given to all the nodes in the cluster, so that when a container moves from one place to another, the same storage is on the other side as well and it can keep working as before, because the storage is shared across them. You need plugins for this kind of thing. That's why, when Docker came along, it got a lot of hype; people were using it, but as soon as they moved towards production they hit these problems, and two of them were networking and storage. That's why the networking plugins and storage plugins came along: Docker provides something of its own, but other players can also come along and plug in theirs. For example, at Red Hat we are doing GlusterFS and Ceph plugins for Kubernetes and for Docker. So when you create a cluster of nodes, you should have an automated way of creating a volume and mounting it on all of those nodes. All of these things have to come and play together; only then can you run your containers in production. OK, any questions? There was a question concerning these storage plugins, what is available in Kubernetes for example and what the best practice is; we will come to that later anyway.

Now, for this kind of workshop, what I wanted to do eventually is take a real-world application which we can deploy on all three platforms. How many of you have heard about Magento? Magento is an e-commerce platform, so you can take it, deploy it on your nodes, and you have an e-commerce site. Once you build that, I wanted to do a little stress testing on it and see how it scales up: if I push, say, 10,000 requests per second, how would Kubernetes behave, how would Swarm behave. But I could not finish that in the time frame I had, so I did a smaller thing: our app is called DockChat, which we are going to see. We still have something about Magento to talk about, and we can keep it as a common example over time; it's just not the full thing I wanted for the workshop. As I said, if you watch this repo, or if you want to contribute, these things will get updated over time anyway.

Coming back to the DockChat example: we'll see a simple chat application, DockChat, which was already used for the DockerCon EU workshop in Barcelona last year. There they only covered Docker Compose; I extended it to work with Kubernetes and with Mesos. So we take the same application and run it on three different platforms, and we'll see what would happen if we move the application from one to another. It's a common-ground approach: take a simple application, deploy it on three different platforms, and see what it takes to do that.

Let's do a quick overview of Swarm and then see the examples, and then Kubernetes and Mesos; we'll go step by step. The link should be here. Now, try to relate this to the things I mentioned earlier about what you need for orchestration to work.
Think about it in terms of Swarm, Kubernetes, and Mesos, and how the pieces come and play together. Let's take a second to review this diagram and then move on.

What you see here: has anybody used Docker Machine? Docker Machine is a tool for this use case: running Docker on the laptop is fine, but now you are running Docker nodes in the cloud. One option is to log into each of the nodes every time and do your Docker operations there, docker run, docker pull and so on. The other way is to stay on your laptop, set the right environment variables for cloud node one, cloud node two, cloud node three, connect to them, and do the work without logging in anywhere. So this is the local laptop you are sitting at, and there are different machines in your cloud; you connect to one of them, get the operation done, then go to the next one and move on. This way you can easily manage your containers while sitting locally at your laptop. We'll see that example.

So what do you have here? First of all, these are nodes, typical cloud nodes, all of them running containers as they are supposed to. They can be managed as a cluster by Docker Swarm, and then you have a CLI, and from the CLI you connect to Docker Swarm. Swarm becomes a kind of master, and the job of the master, as we know, is first of all to schedule the containers, decide where they should go, and to handle things like networking, so that we can go and manage all of it. There can be multiple masters: masters can also fail, so if one master goes down, another takes over. We mentioned key-value stores; Docker has a library for this called libkv, so you can have a single key-value mechanism as the single source of truth in your environment. The backends can be etcd, Consul, or ZooKeeper; you can choose whichever one you like. I think this is good enough for now; it will be much clearer with the example.

Docker Swarm manages a unique token ID for the cluster. You can say, OK, in my cluster I am going to have one master and two slaves, or two masters and three slaves; but how do you identify that a node is part of this cluster, given that on the network you might have multiple environments? What Docker Swarm has is a unique token ID. When you create your Swarm master and nodes, you give the same token ID to each of them; because they share the token ID, they know they are part of the same cluster, and that's how they work together. As I mentioned, Docker is trying to keep a pluggable mechanism for all of these pieces: for networking, for the key-value store, and so on.

On the networking part, which is the VXLAN-based overlay we will use in Docker Swarm: what you see here is a container, inside the container you have a network sandbox, and this is the endpoint. Think of the endpoint as basically the IP address, and the sandbox as the container's network namespace.
You can connect endpoints to different networks; one container can have multiple endpoints, so one container can be part of multiple networks. And there are different drivers in libnetwork: when you choose the networking driver for your container you can pick null, bridge, overlay, or remote. Null gives you no IP address when you create a container. Bridge is what you typically already see: you start a container, you get a docker0 bridge on the machine, the container gets an IP address, and they talk to each other. Overlay is when you go across nodes and connect multiple nodes together. Remote is used to create custom network plugins; companies like Cisco and others, when they create their solutions, build them on top of libnetwork using the remote plugin.

Now, once you create containers on the Swarm, you can choose how they should be placed on the nodes. If you look at this example with three nodes, you can see where the containers land; this is the strategy you decide when you start the container. The strategies can be spread, which spreads containers across all nodes; binpack, which means you want to fill up one node completely first and then move to the next one, so if you have a small number of nodes you use them efficiently; and random, which is what it sounds like. One question: is one agent one Docker server? Yes, one agent is one Docker server, one machine in the cloud, running as a separate machine; it's a typical node, a VM or a physical machine, and Swarm manages those agents and ties them together.

Then you have filters. There are node-based filters and container-based filters. With a node filter you can say, for example, schedule containers depending on the health of the node. With container filters you can use affinity: I want to run this web container where my DB container is already running, or where the DB is not running. There is also dependency: only if my DB container has started do I start my web container. And you can filter on ports as well. Those are the kinds of things you can choose as part of the scheduling mechanism. Another question: when you say start a container, it is actually a container, not a server? Yes, it's a container; think of nginx running as a container.
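To make those filters a bit more concrete: with the classic standalone Swarm used in this workshop, constraints and affinities are passed as environment variables on docker run, roughly like this (the storage label is illustrative and would have to be set on the Docker daemons):

    docker run -d --name db -e constraint:storage==ssd mongo      # only land on nodes labelled storage=ssd
    docker run -d --name web -e affinity:container==db nginx      # land on the node where the db container runs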
Once you have that, there has to be some way to communicate from the outside world to the inside world, basically a load balancer, so you need some way to figure out how clients should connect. Has anybody done HAProxy load balancing? I'm guessing everybody has. So what is Interlock? Interlock is a plugin created for Docker: whenever a new application or a new container comes up in the environment, it says, hey, I have arrived, please make an entry for me in your database; I am a web container, forward to me from this particular endpoint. That's roughly what Interlock does. And then you have different plugins through which you can manage storage for the containers. What happens is that your shared storage gets mounted on each one of the agents, so when a container moves from one node to another, it keeps working wherever it lands.

OK, now let's look at the workshop part. Currently I am on my machine and I run docker-machine ls; let me just reduce the font size. Are you able to see at the back? Visible? OK. Docker Machine is a way to manage your VMs, your Docker hosts; once you go beyond a single machine, you bring in the cluster with Docker Swarm. What you are seeing here is that I have four or five different machines: one dev, one swarm-master, another swarm-master-overlay, one swarm-node-1 and one swarm-node-1-overlay. So there are two different setups: a typical Docker Swarm environment, and then the same thing in an overlay environment. Think of two combinations here: swarm-master plus swarm-node-1 is one environment, and swarm-master-overlay plus swarm-node-1-overlay is a second environment, a different setup with a different mechanism.

How many of you have linked containers before? Have you used a Docker Compose file? I'll give a quick overview of how the Docker Compose file looks and how you work with it. This is the Compose file through which we tie our entire application together. What you see here is that there are two containers I want to create, because we want to deploy the DockChat application, on which anybody can do simple chatting. For that we create two containers: one is the db container and the other is the web container. For the db container I take a Mongo image, create a container from it, and expose the database service on port 27017. Then there is the web container, which is built from a particular image, and I say: link this web container with the db container, with db as the link name inside the container.

Why do we do that? Because of what we have written in the web application. In the web front end you want to talk to the database, so somewhere in the code you have something like: my Mongo client connects to db on port 27017. That's the code written in the application. Now, how do I resolve this name db inside my application? That is why I have given this link: my db container is linked in with the link name db, so when I run my web application, the Python program, it connects to the db on that port. It's sort of hard-coded; rather than using the IP address of the container, you use a link name called db, and whenever they get linked together, the web container automatically gets a db endpoint, and the Mongo connection to db:27017 works. Make sense?
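For reference, the Compose file just described looks roughly like this (version 1 Compose format; the image name is approximate and only for illustration):

    db:
      image: mongo
      expose:
        - "27017"
    web:
      image: nkhare/dockchat:v1      # illustrative image name
      links:
        - db:db
      ports:
        - "5000:5000"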
OK, now I'm going to connect to my single dev machine; this dev is not part of any cluster. How do I do that? Let me show you one thing first: docker-machine env. What you see here are the variables that would be set in my shell. What does this mean? The Docker client can connect to any of the nodes, so if I change these variables, I'm saying that my Docker client should connect to this particular host. If I change this to the swarm master, you see that it is a different IP address; this one ends in 100, this one in 110. So depending on which DOCKER_HOST I choose, my Docker client connects to that particular endpoint. Make sense? If I want the dev machine, I just eval the env for dev; now if I do docker images, I see the images on the dev VM.

Now I go to my swarm folder, which is on my GitHub repo, into the compose example, and run docker-compose up. It says it has created the containers, and the app is available on my dev machine's IP address on port 5000. If I want the IP address of my dev machine, I just ask for it, go to the browser, connect, and there is the chat application, built from what we just did. Make sense? Any questions?

Yes, that is true: this can actually fail with a connection timeout if the web container comes up before the database is reachable. Until the database is up, the web application cannot connect, so when you write this chat application you need to build a retry loop saying, keep trying until I can reach the database, or you can declare a dependency so that the container only starts after the one it depends on.

Another question: if you want to run Docker, you install Docker, and you get the Docker client and the server, both on your laptop, right? Yes, but this is the simple case. Think about managing multiple Docker hosts: Docker servers running in VMs on AWS, on Google Cloud, and on a local laptop, all managed from your one laptop. One way is to log in and install Docker on each machine and work there; the other way, the one I showed you, is to set the environment and connect remotely; and Docker Machine can manage all of that. Docker Machine can even go and create a VM in a cloud environment. Let me show you that, it will make much more sense: I can say docker-machine create -d virtualbox and give it a name, let's say devconf. This is going to create a VM in my local VirtualBox, with all the settings for how Docker should be configured so that I can access it from my laptop. The driver here can also be AWS or Google Cloud, so you can tell Docker Machine: create me a host on AWS, create me a host wherever. The VM comes up as devconf and I can do any operation on it. I'm going to cut this short here so that we can finish everything we have to do; you can play around with Docker Machine later on.
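A minimal sketch of the Docker Machine flow just demonstrated (devconf is simply the VM name used in the demo):

    docker-machine create -d virtualbox devconf     # create a local VirtualBox VM with Docker installed
    docker-machine ls                                # list the machines Docker Machine knows about
    eval "$(docker-machine env devconf)"             # point the local Docker client at that VM
    docker info                                      # now talking to the daemon inside the VM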
That was one example. Now let's move on to setting up Docker Swarm. I already have the setup ready, but you can follow these instructions to create the environment for this particular lab yourselves. As I mentioned earlier, the Docker Swarm nodes need a common token so they know they belong together. So first I have to generate a token; Docker provides a way to create one at run time. I run docker run swarm create, where swarm is the name of the image, and that gives me back a token ID. When I then create the master and the nodes, I pass this same token ID, and they become part of the same cluster. So I create a new token and store it in a TOKEN variable.

Next I say docker-machine create: create me a new virtual machine with 2 GB of RAM, make it a Swarm master for me, my token is this token, and name it swarm-master. That creates the master for my Swarm. Then I create a node: the same thing, with the same token ID I created earlier, so create me a new Swarm node with that token so that it becomes part of the same cluster. Yes, you can use the same approach to set up more machines: similarly I could add swarm-node-1 and swarm-node-2. The instructions show one swarm master and two swarm nodes, but in my current setup I have one master and one node only, because of the RAM limitation on this laptop. You can see they are in the same cluster because the same token is there.

Now I connect: I want my Docker CLI to talk to Docker Swarm, so I use docker-machine env for the swarm master. How do I do this with the swarm keyword? Let me show you. If I look at docker-machine env swarm-master, the endpoint I get is the IP address of the swarm node with port 2376; 2376 is the port on which the Docker daemon is listening. With that, I'm just connecting to that one Docker daemon: the output I get is the two containers and however many images on that node, the typical docker info output. But what I want is cluster-wide output, so rather than just docker-machine env swarm-master I have to add the keyword --swarm, and if I now do docker info I get a cluster-wide view: I have a swarm-node and a swarm-master; this one has two containers, this one has one. This way I get the cluster-wide view, but I have to go to the right endpoint: I cannot just connect to the Docker endpoint, I have to go to the Swarm endpoint.

OK, now I take the same Compose file and again run docker-compose up. I get an IP address, because it is running on one of the nodes; if I go to that IP address, I get the same application. Just remember that right now you are being served from one container. Make sense?
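Roughly, the commands behind that Swarm setup look like this (the node names and the 2 GB memory setting follow the demo; treat it as a sketch of the docker-machine and standalone Swarm workflow of that time):

    TOKEN=$(docker run --rm swarm create)                   # generate a discovery token for the cluster
    docker-machine create -d virtualbox --virtualbox-memory 2048 \
      --swarm --swarm-master --swarm-discovery token://$TOKEN swarm-master
    docker-machine create -d virtualbox \
      --swarm --swarm-discovery token://$TOKEN swarm-node-1
    eval "$(docker-machine env --swarm swarm-master)"        # talk to the Swarm endpoint, not a single daemon
    docker info                                              # cluster-wide view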
Any questions? No questions? As a speaker you are happy when you see more questions, so I would appreciate more of them; then I know that what I'm talking about actually makes sense. OK, next question: you created the swarm master, and the swarm nodes one and two the same way? Yes, you just make one change: you don't specify the swarm master keyword for the nodes, you skip that part, because those are just nodes. And can you show what containers are running on your nodes right now; have you started the same application on the nodes? No, not yet; I've only just created the cluster. Let me do the shutdown first and show you what I have done: I run docker-compose rm to kill the old containers. Oh, I'm in a different folder; let me go to the right one. So right now I actually have no containers. Now, with the same Docker Compose file, nothing changed (Docker Compose being the tool through which you link containers together and create the complete application), I just run docker-compose up -d, and if I do docker ps I get two containers, one for the db and one for the web; they are connected together and I can access the application over the web. You would like to see this container running across two nodes? We'll come to that; this is just a single node right now.

A simple question before that: let's say you are not using Docker Compose; you create the containers directly from the image. How do you build the same thing with plain docker commands? Let me just do it, it takes a moment. For the first container I say docker run -d --name db1 mongo, running in the background; now you can see I have a container running with the name db1. Then I create a new container, web1, and say --link db1:db, because inside the application I have used the name db, so I have to link it as db, and I change the image name to the DockChat image, v1. It gave a warning because of a conflicting name, so let me fix that. Now you see that I got the link. Next I can get the IP address with docker inspect... oh, I did not do the port mapping, so I need to redo it: I create a new container with a new name, say web2, and with -p I say port 9000 of the host goes to port 5000 inside the container, and again link db. OK, so now I get the IP address of my current node, my Docker host, go to that endpoint on port 9000, and I can use the app. So this is what Compose does for you with links, but you can also do it by hand. But this is all on the same node, right?
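A cleaned-up sketch of that manual equivalent of the Compose file (same caveat on the image name):

    docker run -d --name db1 mongo
    docker run -d --name web1 --link db1:db -p 9000:5000 nkhare/dockchat:v1
    # the application resolves the host name "db", which --link db1:db provides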
But eventually you want to scale, right? In production you have multiple nodes and you want the application spread across them. I mentioned Interlock earlier, so let's look at that now. Look at how the Compose file has changed: we have added a new container called interlock alongside the web and db containers. Interlock's job is basically, whenever a new container comes up, to register it, so that one entry point can forward to many containers behind it. Let me show you; that will make more sense.

One more thing I want to mention here: we understand the db container and the web container, but if you recall the earlier Compose file, the web container had the port mapping 5000:5000. We have now removed that mapping, because we no longer want to expose a port on whichever machine the container happens to be running on. What we also have in the web container is an environment variable saying that my web container should be exposed as dockerchat.com, because we need to tell nginx that anything arriving for dockerchat.com should be served by this application; Interlock here is essentially nginx with some extra configuration. Then I have the interlock container itself, which we get from its image, and I say: map port 80 of the host it runs on to port 80 of the container, and I give it some details so it can manage the global configuration and volume information.

And this part I have to explain, because it's a bit of a trick: if you can see it, I have put a constraint here saying run this interlock container only on the swarm master, nowhere else. Interlock needs the Swarm signing keys, the permission keys, so I'm saying mount that particular volume, /var/lib/boot2docker, which contains the keys for Docker Swarm, inside the container, and when it connects to Docker Swarm it uses the keys from that folder. If you go into the details you'll understand it.

So what I do now: I go into that interlock folder, clean up the old containers, and again run docker-compose up -d, and then docker-compose ps. Now, instead of accessing the web container directly, I'm going to access the interlock container, and the IP address for that is my swarm master, because that's where I bound it. Let me go to that address... sorry, which one do I need, the one ending in 110? OK, the backend application seems fine. Don't you need to access it by some host name? Oh yes, correct, good point: I need to use the host name dockerchat.com, because that's what the setup expects.
The nginx configuration would not serve it by plain IP address. So now I go to dockerchat.com and I can access it. This is the same thing you already had, but now look at the magic we can do: we can scale it dynamically. Rather than one web container, let's scale it to five: I just say docker-compose scale web=5, and my application is now scaling. If I now look at docker-compose ps, though, what you see is that all the containers are starting on node 1. What we want is for them to go across the nodes, and this will not work yet. It's fine for running on one node, but it's not going across nodes, because of this: if the web container wants to reach the db, it just asks for db on that port, and it has to find the db on the same node. That is the current constraint: we cannot reach a container across nodes, because we don't have an overlay network; the current network is not an overlay network. The current setup is that on one machine you create the db container and the web container, and, if you recall, the web container tries to reach db on some port; in this scenario web cannot reach a db on another machine. Only when you configure the overlay network can containers talk across nodes; with plain Compose and links you can only connect containers on the same machine. If you want to cross nodes, you have to define a network that both sides understand, so we have to create the Docker overlay network first.

For now, I'll just scale back down to one, so four of them go away, and we are back to the original state. Now we go to the other setup, the overlay one, with its own master and node. Here we no longer have a link parameter: if you recall, the web service used to have link db:db; now instead we have a network called dockchat. We are going to create an overlay network named dockchat, and once we create this global network, we can go across nodes. The Compose file is modified with net set to dockchat on each of the services. Again the same flow: docker-compose down first, and before bringing it up we need to create the network, so let me go to the instructions so we don't miss anything.

Remember I have two setups here; docker-machine ls shows them. Now I'm going to connect to the swarm master which is set up for overlay, so again the eval thing, and I connect to the setup which has the overlay network, just changing the name to the overlay one. There are no containers there yet. I run docker network ls, and what you see is the libnetwork networks, the different kinds of networks I mentioned: bridge, host, null.
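Collecting the steps that follow in one place, the overlay-based version looks roughly like this (the network name matches the demo; the subnet is illustrative):

    docker network create -d overlay --subnet=10.0.9.0/24 dockchat    # global network, backed by the key-value store
    # in docker-compose.yml each service then carries:  net: "dockchat"   (instead of links)
    docker-compose up -d
    docker-compose scale web=5        # replicas can now land on any node in the cluster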
So now I have a network named dockchat here. How do you create that? You just run docker network create with the overlay driver and a subnet, that's it. Now I do the same docker-compose up -d, and you see that this web container is running on the node ending in 113. Then I say scale to 5, and see: now the web containers land on different machines as well; they can go across nodes.

How does the configuration work now, since nothing is linked? When you create these containers, this is where the discovery part comes in: something has to dynamically record where things are. First of all, the Interlock service we created knows how to connect to these containers, and secondly, there is an entry on each machine. Let me go inside one of the containers and look; what I see in there is the Interlock data. What used to happen earlier, with linking, is that the host name of the linked container was written into the other container. Here, because of Interlock, the application registers itself under dockerchat.com and Interlock maintains a database of how the pieces are connected, dynamically. I need to put in some time to understand all the internals, but essentially Interlock has a mechanism to identify where the containers are running, and you access them through it. I'm going to close this part on Docker Swarm now and move to the Kubernetes part, because we have just 20 minutes left, but I think the concepts are clear.

The question was: I understand the link, that it modifies /etc/hosts so one container knows the other, but when we connect over the network, how does it know where things are? Correct; so look at this part: I create different web containers, but they all reach the same db; see, one master, the overlay nodes, and db1. In each of these Python applications it's the same code, and in the code I have written the host name db. Interlock, together with the service discovery, manages that for you: as a web application, I ask where the db is running, and I'm given the address. This is how the HAProxy-style piece and the discovery come together. When you look at the Kubernetes example it will be much clearer.

So let's move from the Docker Swarm world to the Kubernetes world. We looked at Swarm, which is a container orchestration tool; Kubernetes is the same kind of tool. What you get with Docker Swarm and Compose is that you keep the same Docker binaries, the same command line: docker, docker-machine, docker swarm. At first not much changes, but the cost is that you have to add something like Interlock, wire it up yourself, and so on.
Docker Swarm does not have a built-in HAProxy-style piece, so you have to bring something external to do it; that's the first cost with Docker Swarm. Otherwise you don't currently have a mechanism for the DNS part: as we saw, Interlock has been doing that for you, dynamically keeping track of where my web is running and where my DB is running. Kubernetes has all of those things built in and linked together.

The overall Kubernetes architecture looks more or less the same; there are differences, but the ideas repeat: you again have one master and you have multiple nodes. This is your master, these are your nodes, and there can be many of them. The master communicates with the slaves, telling them: run my container on this node, and so on; it hands the work to the slaves.

One more difference between Swarm and Kubernetes: in Swarm we were dealing with one container at a time, a web container, a db container. But eventually you want to deploy an application, and for that Kubernetes has the concept of a pod. Think about the linking we did with the link command: in Kubernetes, that linking happens inside the pod. A pod can have one or more containers, and they are linked together on localhost. So you have to change your thinking: your unit is no longer the container, your unit is the pod.

Question: can you run the containers of one pod on different nodes, for example one local and one remote? No; they will all be on the same node. In Kubernetes you have one master (or more than one), and then multiple nodes, the minions, which you can call slaves. A master can technically also be a slave, but generally you don't do that. The master says deploy my application, and in Kubernetes terms the application is a pod: in Swarm the unit was a container, here the unit is a pod, and in the pod you have the web and db we saw earlier, the DockChat thing; that is one pod. You then deploy those pods across machines, so the pods go across machines rather than sitting on one node. Make sense? So does a minion map to a slave node, and is it an abstract concept or a machine, a host? It's a host: a host or a VM that you configure to work as a minion by starting the node service on it. To the earlier question: if I want the parts of my cluster in very different physical locations, I thought a minion was one Docker host, one physical machine, but in this visualization it looks like the pod, the whole collection of containers, must be on one node? Yes, exactly: a pod is a unit, and you cannot divide that unit; if you wanted to split it, you would create two pods and connect them. Your application, one web and one db, goes in one pod, and that pod can then be replicated multiple times; the application is replicated by replicating the pod, however many times you say, so you can leverage multiple machines.
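As a minimal sketch of that idea, a pod bundling the two DockChat containers could look like the following (image names are illustrative; in the demo that follows, the db and the front end are actually kept in separate pods):

    apiVersion: v1
    kind: Pod
    metadata:
      name: dockchat
    spec:
      containers:
      - name: db
        image: mongo                   # reachable from the web container on localhost:27017
        ports:
        - containerPort: 27017
      - name: web
        image: nkhare/dockchat:v1      # illustrative image name
        ports:
        - containerPort: 5000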
Right, so it's one group of containers, call it the web app, and you get pods of that web app. I'll just show you a quick demo, which will make it much clearer, because we don't have enough time. And don't worry: once you go back, just try these steps out and you should be able to make sense of all of it.

So this is the definition. Where Compose had one file, in Kubernetes you'll find separate service files and separate replication controller (pod) files. What I'm saying here is that my container is going to be created from this particular image, and, a bit like what we did with Interlock, I'm giving it a name: frontend. This thing will be referred to as frontend; I create a container from this image and it will be called the frontend. That's all you have to remember. And when I need to look the frontend up by name, I look it up in DNS, so DNS becomes the service discovery: ask for frontend, and the address of frontend comes back from DNS. And this is the port we exposed earlier anyway, 5000. That's how you define a pod. In this pod I don't have the db; I could have db and frontend as part of one pod, or I can keep them separate, that's up to you; here I'm not putting them together. Then there is the service definition as well.

Let's just look at the demo. I manage my cluster with the kubectl command. kubectl get nodes: I'm running three nodes on Google Cloud right now for this example, again because of how much I can run locally, and they are all ready. Now kubectl get pods: no pods right now. kubectl get services: nothing except the default service. First I create the db service, and then the db controller; the controller is what decides how many pods to run, so if you are running three or four pods, I want to refer to all of them by the db service name, one common service name in front of multiple pods. Similarly for the frontend (the db is the Mongo one): I create the frontend service and the frontend controller. kubectl get pods now shows that I'm running two different pods, and I have some services here. What I have also said is that my frontend service has type LoadBalancer; since I'm running this on Google Cloud, I'm going to use the load balancer provided by Google. And I can scale to n frontends the way I did with Docker Swarm; I can go and say scale my frontend to whatever number I specify.

But before that, I wanted to show something: I get the external IP address of my service, for this application which right now has one db pod and one frontend pod, I go to port 5000, and I can access the same application I showed before. Right now there is just one frontend container; now I'm going to scale it, let's say to five.
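The rough sequence of kubectl commands used in this demo, with illustrative file names, is:

    kubectl get nodes
    kubectl create -f db-service.yaml
    kubectl create -f db-controller.yaml
    kubectl create -f frontend-service.yaml
    kubectl create -f frontend-controller.yaml
    kubectl get pods
    kubectl get services
    kubectl scale rc frontend --replicas=3     # scale the frontend replication controller
    kubectl get pods -o wide                   # shows which node each pod landed on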
So how do I run this scale command? I'll just say 3, and now if I do kubectl get pods, I see that my frontend pods are being recreated. If I go back to the app, the number changes here too. This way I can scale up and scale down the number of containers depending on my need. Question: that tier description, in the YAML file you showed before there was a tier key, was that like an instance of a service? Tier is nothing special, I think it's just a label. I would recommend that you go and look at this particular presentation; I think we have just five minutes left, or maybe less, and it will give you the overall concepts. One quick one: when you scale, is it always on the same node? No, they go to different nodes now; I'll show you: if you do kubectl get pods -o wide, you will see different pod names on different machines, the physical machines they are running on. That's the point: you want fault tolerance in the environment; there is no point running all the pods on one machine. That was just simplified in the diagram; I showed multiple examples, so if you go back and look at the whole thing the picture should become clear, and you can definitely drop me an email if you have any questions.

Now let's quickly look at Mesos. We won't do a demo here, but just quickly: how many of you have used Mesos before? Anybody? OK. With Mesos you have the same analogy again, master and slave: you have one master, you have more slaves, and then you have ZooKeeper as the key-value store. Mesos is not just about containers; it is a general framework for running any distributed jobs. For containers it has the Marathon scheduler: Mesos talks to frameworks, and the framework for containers is Marathon. What does the Mesos master do? It collects the resources of all the machines, how many CPUs, how much memory, puts them into a kind of pool, and offers them to the frameworks. The slaves first go and say, OK, I have one server, I have four CPUs, I have four GB of memory; that goes to the master. The master then makes an offer to the framework, which here is Marathon, and the framework might want to run five containers with this much RAM and so on. The master gives the resources to the framework, the framework says, OK, now I have enough resources, and it would go and give this
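To make the Marathon part a little more concrete, an application definition for a Docker container, posted to Marathon's /v2/apps API, looks roughly like this (the app id and image name are illustrative):

    {
      "id": "dockchat-web",
      "cpus": 0.5,
      "mem": 256,
      "instances": 2,
      "container": {
        "type": "DOCKER",
        "docker": {
          "image": "nkhare/dockchat:v1",
          "network": "BRIDGE",
          "portMappings": [ { "containerPort": 5000, "hostPort": 0 } ]
        }
      }
    }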