Okay, we're getting ready. Good morning, thanks for coming. This orchestration lab is something that myself and Mario, and also Haïkel Guémar, had previously run at ContainerCon in Berlin last October. So, just to present myself: I'm Mike Bright. I work for HPE, in the domain of NFV, Network Function Virtualisation, basically cloud for the telcos. That's my day job, and my fun job is playing with containers and even unikernels. Mario? Yes, I'm Mario Loriedo. I've worked for Red Hat for six months now, on the Eclipse Che project. Che is a web IDE, and our main goal at the moment is to run it properly on OpenShift. I'll be presenting Che on OpenShift this afternoon. And Mike has another presentation tomorrow, on Sunday. Okay, just a few shows of hands, I'm just inquisitive. Who's already used Docker Swarm? Okay. And who's completely new to Docker Swarm? Okay, quite a few. Good, get them while they're young. And who's new to Kubernetes? Okay, quite a few too. Although with so much of a Red Hat contingent, there might be a lot of Kubernetes people in the room. And Apache Mesos: who is completely new to Apache Mesos? Okay, good. So it's fairly mixed. I was going to ask whether there's any part you particularly want to see or don't want to see. Okay, so we're going to look at these three orchestrators. There are other orchestrators for containers, things like Cattle from Rancher, Nomad, and Kontena (that's K-O-N-T-E-N-A), but these are really the three main contenders; they're battling it out, and the battle is heating up.
So we're going to look at each of these, starting with Docker Swarm mode, Swarm mode being the version of Docker Swarm orchestration that is now integrated into the Docker engine itself. Then we'll look at Kubernetes, from the CNCF, the Cloud Native Computing Foundation. It's something that Google open sourced, based on their experience with their own in-house container management, which they've been using in production for 10 years or so. And then Apache Mesos, which is commercialised in particular by the company Mesosphere. Obviously, in two hours you won't become an expert, but you'll get some pretty good experience. Prerequisites. Sorry, this isn't as well prepared as I'd like, because it's changed a lot compared to the last time we ran it. I've started to put some stuff onto USB keys, which we can hand around. Apart from the Docker Swarm lab, for which I've found a way to do it completely online if you want, you'll need a suitable hypervisor on your machine, so that Docker Machine can create virtual machines. You'll need a recent Docker client, in particular for the Docker Swarm lab. And then for the Kubernetes lab, you'll need an executable called Minikube. Minikube is basically the official way of getting a quick test of Kubernetes these days; it's part of the official Kubernetes project, and it gives you a one-node Kubernetes cluster very simply. The lab instructions are at this address; I'll remind you of that as we go along. These slides are at this address. There's a README, by the way, which isn't up to date, so be careful: these slides should be the reference for this session. OK. As I mentioned, although you can run the Docker Swarm lab with Docker Machine, you can run it completely online, thanks to Play with Docker. Who's heard of play-with-docker.com already? Basically, it's a website where you can fire up containers online.
Sessions only last for four hours, but that's fine for what we want to do here today. It's pretty cool. OK, so I'll go straight to the Docker Swarm lab; I've repeated this information there. One point: everything's on GitHub, and any contributions are welcome. A contribution could be opening an issue saying "this isn't clear, make it better", or preferably code, adding a feature to the labs. That would be great. So, on to the Docker Swarm lab. Just a slide to say what we're talking about. With Docker Swarm we can run a cluster of nodes which are going to be running containers, and Swarm is going to let us schedule containers onto those different nodes; have replicas of a service, such that if one node goes down, we still have other nodes handling that service; perform things like rolling upgrades of a service to a new version; and do load balancing between the different nodes. And this model of having master and worker nodes, with a client above that, is pretty much the same whether it's Docker Swarm, Kubernetes, or Apache Mesos. They all have some sort of distributed cluster configuration: basically, you need 2N+1 manager nodes to keep a quorum that agrees on which masters are running, and which machines you should be talking to. So we have master nodes, which are the controller nodes, and a set of worker nodes which run the Docker engine, and on which, in Swarm terminology, "tasks" run; tasks are essentially the containers running on the Docker engine. A side point on the word "tasks": Docker are saying that Docker Swarm can manage containers and, in the end, other things too, one of those other things being unikernels. Anyway, let's talk just about containers.
One little detail: a master node can actually run as a worker node as well, so we'll see during the lab that when we create five replicas, we may have an instance on each of the two worker nodes and on the three master nodes. OK. What I recommend, because I think it's nice, is to run the lab on Play with Docker. Otherwise, you can run the whole lab on your own machine using Docker Machine. I leave you the choice. [Some fumbling with the projector and the URL to the slides.] Yeah, you're right, that's a bit of a long URL; sorry about this. My recommendation is that you go to the slides address first, and from there you can get to the other addresses. So, let me show you what I captured already. This is following the link to the Docker Swarm instructions for Play with Docker. Let me show you, by the way, what Play with Docker looks like. "I am not a robot." OK, this is a Play with Docker session. Sessions run for four hours, and you can create up to five instances here, so let me do that straight away. Basically, this is a website hosting Docker, where you can create up to five instances online that last for four hours, and then they're gone. But you can do some pretty cool stuff with that. If you step through the lab document, one thing to notice is that up here you see the IP address of each node that's been created, an internal IP address of course. So our node1 is 10.0.30.3, and so on. And as we create services and expose ports, those ports also show up here as links, and clicking on one takes you directly to a running service on our cluster, which is pretty nice.
Play with Docker was actually the result of a hackathon someone ran just after ContainerCon in Berlin, and I just think it's really nice to use. This document, let me make it a bit larger so you can see the images, will step you through running the lab. The images aren't very visible on screen. For example, here I'm doing a docker swarm init command: initialising a cluster, which makes the first node a master node. As you move down, once I've done that init, node1 here has a blue icon next to it, saying it's a master node in a swarm cluster. And there are commands we can then use to add new nodes to the cluster. If you run docker swarm join-token on one of the master nodes, it tells us the command to add a node as a worker, and similarly, probably just above here, we can do the same to get the command to add a master node. OK. Sorry, for the docker swarm init you need the IP address of the node you're running on. So you go onto node1 and do this command with the IP address of your node1, which is quite possibly the same as this one. [Question: could you show that page again?] Yeah. The best is if you go to the slides, and then if you go to page 8 you'll get this link. I think that's the easiest way, so that you have all the links available to you; when you're on the slides it's the last link that's the one you want. Sorry, it could have been a bit easier to use. So I just want to show you how this is structured, then let you work through the lab, and we'll wander around and help you. The basic structure of what we're doing is: we obtain the command we need, then we go to another node, like node2, and do a docker swarm join command with the token that we've already recuperated.
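On Play with Docker the cluster bootstrap boils down to a couple of commands. A minimal sketch, where the advertise address is an assumption: use whatever IP the page shows for your node1.

```shell
# On node1: initialise the swarm; this node becomes the first manager.
docker swarm init --advertise-addr 10.0.30.3

# Print the join command (including the token) for workers...
docker swarm join-token worker

# ...and the one for additional managers.
docker swarm join-token manager
```

You then paste the printed join command on each of the other nodes to add them in the desired role.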
And then node1 and node2 are master nodes. We perform the same thing to get node3 as a master node, and then for the remaining two nodes we get the worker token, which allows us to join them as worker nodes. Skip to the end, and we have three master nodes and two worker nodes. OK. And the master nodes can also act as worker nodes. Then we'll work through a series of exercises, starting with a service, an example service that Mario created. As I mentioned before, where we have the IP addresses up top, once we've created a service and exposed one of its ports, we get a link in the page which allows us to actually access the service itself. The service Mario created just serves a nice blue whale image, telling us as well which container it was served from. Then we'll go on to scaling that service, and as we access the page again, we'll see the "served from container" address changing each time, as requests are load balanced across the nodes. Quite nice is the rolling update of our service: we start with the version that just shows a blue whale image, and we do a rolling update to the next version, and when that's done we get a red whale.
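The service exercises follow roughly this shape. A sketch only: the service name and image tags here are placeholders standing in for Mario's example image from the lab.

```shell
# Create a service with 5 replicas, publishing port 80 on every node.
docker service create --name whale --replicas 5 -p 80:80 example/whale:1.0

# Scale up; the scheduler spreads the new tasks across the nodes.
docker service scale whale=10

# Rolling update to the new image (blue whale -> red whale), task by task.
docker service update --update-parallelism 1 --image example/whale:2.0 whale

# See the tasks and which node each one landed on.
docker service ps whale
```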
Then we show how to drain a node. So we have five nodes, three masters and two workers, and maybe you want to take one down for maintenance, so you want to gracefully remove all the containers from that node. If you've asked for five replicas and you have a container running on the node you want to take down, it will of course get rescheduled elsewhere, but draining a node allows you to empty the node of containers so you can take it out of service. And I think that's essentially it. OK, so let me just put that link back. Is anyone wanting to do the lab on their own machine with Docker Machine? As you wish; I think Play with Docker is a pretty nice option, and on that website you have both options if you want to do it again on your own. Just let us know if you need help; we'll wander around. [Audience questions while people work through the lab: one attendee had joined nodes with the worker token and so couldn't create another master that way; the answer is to use the manager join token, so that node2 and node3 also join as managers, giving three managers. Another attendee was using a local Docker install rather than Play with Docker; another asked whether there is a UI for the swarm.]
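Draining, as described above, is a single command run on a manager; the node name node4 here is just an example.

```shell
# Move all tasks off node4 so it can be taken down for maintenance;
# the scheduler reschedules them onto the remaining active nodes.
docker node update --availability drain node4

# When maintenance is done, put it back into the scheduling pool.
docker node update --availability active node4
```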
No, there's no UI for the swarm built in. OK, one correction: there was a typo in the lab markdown file already on screen. It's not docker swarm join-token master; it's docker swarm join-token manager. I've updated the slides. If anyone's at the stage of having added those nodes, then if you type docker info on one of the manager nodes, you should now see Swarm: active, Managers: 3, Nodes: 5. [Helping an attendee:] you created one manager, then added the others as workers, then had them leave the swarm; so now rejoin them with the manager token, and each node that joins with the correct token becomes a manager as well. Once you have three managers, create two more nodes, join them as workers, and then go on to creating the service and scaling it, with the containers spread across those nodes.
[As people work through the lab, Play with Docker becomes very slow.] Is everybody able to create their instances? I have no idea what the capacity of Play with Docker is. If you go back to node1, this time do docker swarm join-token worker, and run the command it gives on the remaining nodes. Are your instances slow? Very slow, OK, we have a problem. Did anyone manage to get to the end, to actually upgrade the service and get to the red whale? It looks like we might have reached the limits of Play with Docker; everyone is getting similar sorts of errors now, a lot of people showing the same registry errors and that sort of thing. I'm sorry, but I think we've just killed Play with Docker. I should have suggested that you all create three nodes, not five. But okay, we learn
from experience. So I propose we go on to the next lab. Again, use those slides as your reference if you want to redo the Play with Docker lab itself, or there are also links to the version using Docker Machine on your own machine. The Docker Machine lab is functionally the same; I did it with just three nodes, and it will guide you through creating three VMs and creating a Docker swarm on top of them. Okay, so we move on to Mesos. So, we're going to see Mesos now. Mesos was quite popular around three years ago; now Swarm and Kubernetes are probably more popular. The popularity of Mesos is also due to the fact that it's used by big services: Twitter, for example, and if you have an iPhone and use Siri, Apple uses Mesos to instantiate new tasks. It's an orchestrator, but not only a container orchestrator; it can execute tasks that are not containers. We're going to see it as a container orchestrator, but it's more flexible than that: it's really intended to be like the operating system for the data center. That was the concept of Mesos. We're going to see that there are three different pieces here. The first piece is Mesos itself: the master, which receives requests and dispatches them to slaves, so we have masters and slaves. But with Mesos we also have frameworks. Frameworks sit on top of the Mesos master; they can have a UI, and they are like plugins that extend the functionality of the Mesos master and do the scheduling. Marathon is a framework made for long-running tasks, which is why it's called Marathon, and it's the most popular framework for Mesos; that's the one we're going to use for the lab. Unfortunately, here you do need Docker Machine, so if you don't have it installed, you should install it. And it's really handy to have Docker Machine, because in a matter of seconds you can create and destroy VMs that have the latest version of
Docker installed, so you don't have to care about which version of Docker you have locally or mess with creating containers locally; you can just instantiate a new machine, and it comes with the latest version of Docker. So the first command is this one, docker-machine create, to create a virtual machine where we're going to run all the containers: the master, the framework, and the slaves of Mesos. Once we've done that, I'll quickly show you the content of the lab. It's a short lab, shorter than the Swarm one, and then I'll give you the link so you can follow it easily. Once I've created the VM: in the same GitHub repository there is a compose file that allows you to create, in one step, the framework container, the master container, and the slave containers. There will be three slaves, one master, and one framework, and all these containers are started with this one line here. You'll be able to see in the compose file how these containers are started. Once you've started that, the first thing you can do is go to the Marathon console from your laptop, and you can also access the Mesos console. There are two different consoles: every framework comes with its own console, and the Mesos master has its own, where you can see all the slaves that have registered and all the frameworks that have been activated. Then we can deploy an application. To deploy an application you don't have to docker run anything: to deploy a new task with Marathon on the Mesos cluster, you just do an HTTP POST request. You'll see the content of the HTTP request that we send to the Marathon framework to say "start a new container with this image". And we're going to start two different things: the first is just plain nginx, the official nginx image with nothing else, and the
second one will be a Pacman game that you can play. You can choose how many instances you're going to start, two instances here, and you'll see how to increase and decrease that, playing with the Mesos console. So I'll leave you 10 or 15 minutes to do that, and then we'll do it together. [Audience question: my Docker is a little old; is that the reason?] Yes, you'll need to download the latest Docker Machine at the link in the lab. If anyone needs to download Docker Machine, this address takes you to the latest release, and if you scroll down a little you'll find links to the binaries for Linux, Mac, and Windows. [Discussion about download sizes and the boot2docker ISO; the conference network is struggling a bit.] So, once you have that, you can just do a docker-machine ls, and you should see, I've got two here: the default VM, which Docker Machine comes with, and a second one I created called mesos-master, which is running; it gives you also the IP address, etc. On that machine, we are going to run some containers.
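The Docker Machine steps just described can be sketched as follows; the VirtualBox driver is an assumption, since any docker-machine driver would do.

```shell
# Create a VirtualBox VM with the latest Docker engine installed.
docker-machine create -d virtualbox mesos-master

# List the machines, their state and their IP addresses.
docker-machine ls

# Point the local Docker client at the new VM.
eval "$(docker-machine env mesos-master)"
```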
The containers that we're going to run: let's have a look at the compose file. The first one is ZooKeeper, which does service discovery. ZooKeeper is used by Mesos for the election of the primary master of the cluster, so all the communication goes through ZooKeeper, which lets the services find Mesos. We're going to use host network mode, and the port that will be used is port 8081; that's the port of the console of the Netflix OSS Exhibitor. The second container that we're going to start is the Mesos master. It's the official Mesosphere image, started with host network mode, and we provide some configuration properties; probably the most important one is the Mesos ZooKeeper URL. This one is mandatory, you have to put it: the Mesos master uses it to communicate with ZooKeeper and tell it "I'm a master; if somebody needs me, you can contact me here." Once the master is up, we start the framework, in this case Marathon. Again, it's the official Mesosphere image, and we provide the information to tell Marathon where the Mesos master is and where ZooKeeper is, so there are the two URLs for the framework. And then we start three slaves. The slaves are actually the ones that are going to start containers. You can figure that out because one volume that we mount is the Docker socket, so that the slave, which is itself running inside a container, is able to use the Docker socket of the host to create new containers. That's the way, because we're running everything inside Docker: if you want the slave to be able to create new containers, it needs to run in privileged mode, and we need to mount the Docker socket.
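A heavily trimmed sketch of what such a compose file can look like (v1 compose syntax, one slave shown). The image names are the official Mesosphere and Netflix OSS ones, but the tags, environment-variable names and ZooKeeper paths here are assumptions and may differ from the actual file in the repository.

```shell
# Write a minimal docker-compose.yml for the Mesos stack (illustrative only).
cat > docker-compose.yml <<'EOF'
zookeeper:
  image: netflixoss/exhibitor:1.5.2       # ZooKeeper plus the Exhibitor console
  net: host
master:
  image: mesosphere/mesos-master:1.1.0
  net: host
  environment:
    - MESOS_ZK=zk://127.0.0.1:2181/mesos   # the mandatory ZooKeeper URL
marathon:
  image: mesosphere/marathon:v1.3.6
  net: host
  environment:
    - MARATHON_MASTER=zk://127.0.0.1:2181/mesos
    - MARATHON_ZK=zk://127.0.0.1:2181/marathon
slave1:
  image: mesosphere/mesos-slave:1.1.0
  net: host
  privileged: true                         # needed so the slave can start containers
  environment:
    - MESOS_MASTER=zk://127.0.0.1:2181/mesos
    - MESOS_PORT=5051                      # 5052 / 5053 for the other two slaves
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock   # the host's Docker socket
EOF
```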
So that's a common practice: if something you run inside a container needs to use Docker, it's common to mount the Docker socket. So, there will be three slaves. The only difference between the slaves is the Mesos port: this one will be port 5051, the second 5052, and 5053 for the third slave. Okay, we've seen the compose file, so let's start it. I'm going to use the Docker machine that I called mesos-master, and the docker-compose file that is in my mesos folder, and I'm going to do a docker-compose up, putting it in the background. Once that's done, I can do a ps to see that they're all up, because it can sometimes happen, if you make a mistake with the IP for instance, that one of the containers hasn't started. When everything seems okay, we can have a look at the consoles. So, this is the IP address of the VM that is running. On port 8080 the Marathon console is exposed: no applications running yet, no deployments running either. And on the Mesos console you can see the frameworks that are active, we have our one Marathon framework, and we also have three slaves that are ready to serve if Mesos needs to dispatch some tasks. So now we're ready to dispatch some tasks to the slaves. The command to do that, as we said before, is just a curl, but I need to refresh this. Okay, so the payload of the HTTP request that I'm going to send is in the nginx.json file, so let me just run this command; I need to set the Mesos master IP first. It just outputs information about the container, well, not the container, the task; that's the output of the Mesos API. So, let's see what we've done exactly with the JSON file.
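Put together, the deployment steps look roughly like this. A sketch: the repository path and the exact contents of nginx.json are assumptions based on the description above, though POST /v2/apps is Marathon's standard REST endpoint for creating applications.

```shell
# Point the local client at the VM created earlier with docker-machine.
eval "$(docker-machine env mesos-master)"

# Start ZooKeeper, the Mesos master, Marathon and the three slaves.
docker-compose -f mesos/docker-compose.yml up -d
docker-compose -f mesos/docker-compose.yml ps

# An illustrative Marathon app definition: the "docker" section marks it as a
# Docker task; cpus, mem and instances are the resources discussed in the lab.
cat > nginx.json <<'EOF'
{
  "id": "nginx",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 80, "hostPort": 0 } ]
    }
  },
  "cpus": 0.1,
  "mem": 64,
  "instances": 2
}
EOF

# Deploy: a plain HTTP POST to the Marathon framework.
MESOS_IP=$(docker-machine ip mesos-master)
curl -X POST "http://$MESOS_IP:8080/v2/apps" \
     -H 'Content-Type: application/json' \
     -d @nginx.json
```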
So, we have sent to Marathon a JSON file that describes the task that we want to run, and there is a "docker" section; that means we want to run a Docker task. The image is the nginx image; the network is bridge; we can define the ports. You have all the options that you usually have for a docker run command: you can choose to have a health check, the amount of CPU that will be needed by this task, the memory, and how many instances you want. Here we've said two, so we've started a couple of instances of nginx. We should now be able to see on Marathon that there are two instances of nginx started, and the port where they're exposed is this one. If we want, we can just scale and put six; so now we have six containers running nginx. Perfect. So, basically, that's it for Mesos. You see that the way to deploy an application is different. I've also added another application, more fun, a Pacman game, so you can try that later if you want. Now we're going to look at Kubernetes. OK. So, I'm sorry about the problems you've been having, basically with downloading the binaries and the Docker images, and we will almost certainly have similar problems with the Kubernetes lab. For the Kubernetes lab, as I said, I've based it on Minikube, and I've put the Minikube executable on USB keys, which I'll pass around. I'm not sure how far we'll get; there might still be images to download, so maybe that will mess us up, I don't know. On the key there's a directory, container orchestration; that's the name of the directory. Just copy the whole directory and pass on the key, please. OK. It's five to twelve; we should stop at 12:20, because there's another lab starting at 12:30.
So, let me say just quickly that the Kubernetes architecture, I didn't show the Mesos architecture, but it's similar; there's actually something quite different about Mesos, but the architecture itself is quite similar. Again, as with Docker Swarm, we have a set of master or manager nodes, and a set of worker nodes where the containers run. The Kubernetes model is a little different in one respect: as I said, with Swarm the unit of execution is tasks, which we can consider to be containers. With Kubernetes, we deal in pods, and a pod is actually one or more containers that have some relation to each other. The advantage of pods is that you might group some things together, like a database and some other function strongly linked with the database, and when you want to scale up, you want to scale those things up together. So, a pod basically has a single IP address: each container within a pod shares the same network namespace, so they have the same IP address. That means the ports the containers use can conflict, so containers within a pod have to use different ports. But otherwise, the model is quite similar to Docker Swarm. OK, I put up the link to where you can download Minikube, but please don't all do it now; I forget the size, I think it's about 50 megabytes or something. But Minikube is very nice, because it allows you to set up a single-node cluster of Kubernetes, and you can do quite a lot of things even with just a single node. Basically, if you go to that container orchestration directory on GitHub and drill down to labs and then Kubernetes, you get to the markdown file that describes the lab. I'm going to go back to the slides here. OK, that's all I had. What I suggest is that, for all these labs, you have the instructions, and you have links to them from the slides; I will update the slides to make those links a little bit clearer anyway.
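To make the shared-network-namespace point concrete, here's a minimal two-container pod. The names are made up for illustration; the busybox sidecar fetching nginx on localhost shows that both containers share one IP and network namespace.

```shell
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: web
    image: nginx          # listens on port 80
  - name: sidecar
    image: busybox
    # Same network namespace: nginx is reachable on localhost, no service needed.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]
EOF
```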
So if you're interested, you can do the labs at home. Again, I do invite you to let us know if things aren't clear or if you think improvements could be made: either open an issue, make a pull request, or just email us. OK, so I'll run through the demo. I'm not quite sure what state my cluster is in. I'm just in the terminal here; I'm cheating by using a notebook where everything is already set up for me. So I'm running a Docker 1.12.1 client. It's normal that there's no server response there, because I think I haven't started my cluster yet. OK, I'm using Minikube 0.14, which is part of the Kubernetes project. And there's also a command-line tool, kubectl, at version 1.5.1; the latest Kubernetes is 1.5.2, so that's almost the latest version. So on those keys being passed around, you've got Minikube, you've also got the appropriate Docker client if you need it, and, as I said, there's kubectl. OK. So basically you access Kubernetes through an API, of course: either from an application you write yourself that uses the API, or through the kubectl CLI tool, which is based on the same API. And there's also a dashboard, which hopefully we'll look at. Now, I think my Minikube, OK, it's stopped. Good. I'll jump straight to a minikube start. It'll take a little while; it shouldn't need to download the ISO again, I think, but it takes a minute to actually launch all the same. [Question: doesn't minikube start download an ISO?] There's one Minikube executable, but yes, on first run minikube start does download an ISO for the VM. OK, so my Minikube is set up. It's interesting already: Minikube is very nice to use. If I run minikube docker-env, it gives us the parameters to be able to connect a local Docker client to the Docker engine that's inside the Minikube VM.
[Audience: Which screen is that? Where is that from?] It's at this address: within the GitHub repository, container-orchestration-labs, under orchestration/kubernetes, and then it's Kubernetes.md.

OK. So I'm going to do an `eval` of that docker-env output. Now, if I run `docker ps`, we'll actually see the containers that are running within my single-node cluster. Not very readable, I'm afraid; it's just to show you that with our local Docker client we can now access and see what our Kubernetes cluster is made of. These are all the, let's say, system containers that make up our single-node Kubernetes cluster. Minikube also allows us to just SSH into the cluster node. We see that it's been up two minutes, as expected, and I think the hostname is minikube, yeah. I'm not going to do the cleanup, because I haven't created anything; it's a new cluster.

So, kubectl is quite nice. Two of the main commands are `kubectl get <something>` and `kubectl describe <something>`. If I do `get nodes`, it tells me how many nodes are in my cluster; as I've said ten times, we have just a one-node cluster in this case. `get pods` will get a list of the pods. A pod is sort of like a virtual machine (I'm not saying it *is* a virtual machine), in the sense that it's something with a single IP address that can have several containers on it. So we haven't launched any user pods yet. Ah... OK, I should have done some cleanup; I still have some stuff hanging around from earlier. Never mind, it doesn't matter. Do I have a service going? I'll do some cleanup. Where was I? Whoops, first time on a clean system. So, as I said, if I do `get pods`, we have no pods now, because I've cleaned up. Hmm, again it seems the cleanup didn't fully go through: no pods, but I still have a service there. Strange. Well, I'll continue with it as is anyway; something's not quite right there. OK.
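The basic query pattern mentioned here can be sketched as follows (the node name `minikube` is Minikube's default; on another cluster it would differ):

```console
$ kubectl get nodes                 # list cluster nodes; one line per node
$ kubectl get pods                  # list pods in the default namespace
$ kubectl describe node minikube    # detailed node state, plus recent events
```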
So, just showing the kubectl command: it's quite nice, and there's quite a lot of help if you just type it with no parameters. Similarly, if you type it with `get`, you'll see the set of, let's say, nouns you can put behind it: all the different types of Kubernetes objects that exist and that you can query.

OK, I showed `get nodes`: one node. You can also do a `describe` to get more information. OK, I see "sufficient disk space"; I thought it said insufficient. So it tells us a bit of information about our cluster. There's a lot of information, and nothing in particular to pull out, but one thing that's nice to see is the list of events that have happened on the cluster.

That's not really what I want to show, though; I want to get on to some service stuff. We can get some information about the cluster itself, and in particular there's this dashboard, which is an add-on to Kubernetes. If you run the `minikube dashboard` command, it will actually open the dashboard in another browser window. Normally this should be pretty empty, but I think I still have a service hanging around: one service is Kubernetes itself, and another is Kubernetes Bootcamp, which is one of the examples I give; it should have been cleaned up, and I don't know why it wasn't. Again, I just want to show you that tools like kubectl exist, and I find Minikube pretty nice as well for investigating a simple one-node cluster; it makes things pretty easy to use.

Which tab was I on... OK. So, if I do a `kubectl run`, I think it's semantically similar to a `docker run`, except this time we'll be deploying a pod. It creates what we call a deployment, within which we have these mynginx containers, and because I put `--replicas=2`, we've already created two replicas of our service. OK.
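The run step described above might look like this; the name `mynginx` and the underlying `nginx` image mirror the demo, but treat the exact flags as a sketch for kubectl of that era:

```console
$ kubectl run mynginx --image=nginx --replicas=2   # creates a deployment with two pod replicas
$ kubectl get deployments                          # desired vs. available replica counts
$ kubectl get pods                                 # the two mynginx-* pods
```

Note that `kubectl run` with `--replicas` created a Deployment in Kubernetes releases of this vintage; in much later releases `kubectl run` creates a single bare pod instead.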
I was a bit slow so they've already been created probably because when I originally created this notebook we didn't have the MyNGNX image so it took time to download it so we already have that. I'm basically on any object you can do get or describe to get more information. So what's interesting to look at labels we have a label here run equals MyNGNX CubeNX uses the notion of labels. I actually Docker swam this as well. They can be very useful to then be able to use selectors to select a group of objects in fact CubeNX is you can apply objects to, sorry, you can apply labels to a node to a pod to deployment and so on and it allows you to basically create subsets. You might for example want to say about a particular node this node this has SSD drives hard disk drives and then if you want to deploy some big data application which really needs optimal disk access then you can do a CubeCTL run but you specify the selector saying okay use my nodes that have the SSD label. That's one example or you could also label the service saying this is this is a production version this is a test version this is a staging version and so on. Okay so that application really created one way to access it is to run a command I've cheated here already CubeCTL proxy because otherwise that service is running within this VM and we don't have access to it but CubeCTL one way of accessing it is to use this proxy to be able to gain access Okay Okay so I've just a script here that's just picking up the names of our pods and now that we have this proxy running I can actually do a curl request on those two pods and we see the results of those we basically just have two HTML requests HTML outputs back to back here two what Oh yeah I put a break in the ford so we only see the first one because we're not interested in both of them. 
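The proxy-based access just described reaches each pod through a URL of a particular shape. Here is a minimal sketch of constructing one, assuming the Kubernetes 1.5-era proxy path layout and a hypothetical pod name (a real name would come from `kubectl get pods`):

```shell
# Hypothetical pod name; in the demo this is picked up from `kubectl get pods`.
pod="mynginx-1234"
# Path layout assumed from the Kubernetes 1.5-era proxy API;
# `kubectl proxy` listens on localhost:8001 by default.
url="http://localhost:8001/api/v1/proxy/namespaces/default/pods/${pod}/"
echo "$url"
# With `kubectl proxy` running in another terminal, you could then:
# curl "$url"
```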
So that's one way of getting into the node and accessing our service, but the standard way is to actually expose our deployment as a service. So, if I run this command, `kubectl expose` on my mynginx deployment, and I say `--type=NodePort`, what that means is: we only have one node here, but normally we would have a cluster of several, maybe hundreds of nodes, and `--type=NodePort` says we want an exposed port for this service on each of those nodes. Of course, if we had hundreds of nodes, maybe we wouldn't want it exposed on all of them; we could then use a selector to choose just a subset of the nodes. OK, so I'm going to expose that, and we now have a service running.

There are different ways we can access our service, but as a convenience the minikube utility itself will tell us the URL for accessing it. So if I access that: amazing, OK. Again, it just shows that the minikube utility is something that facilitates experimenting with Kubernetes.

Otherwise, in my little for loop earlier I just picked up one of the pod names, so we can look at the logs on that pod. [Audience: I'm just wondering when this is from.] OK, so it's all from just now; I've just curled the deployment. These are just whatever was on standard out on the console, essentially; you can see it's the output from nginx. We can see a 404, I guess... do we have a 404? OK, there we see the 404, and here we see the "no such file or directory" corresponding to it.

Similarly, I can do an exec into the pod. Now, you might say: didn't I say that a pod contains one or more containers? In fact, to do an exec, if there's more than one container you would normally have to specify which container within the pod you connect to; if there's only one container, you can just give the pod name itself, which is what I'm doing. OK, so I'm exec'ing in, and we can just
see the environment variables that are set. It's a bit similar to the way Docker links things, by setting up variables like a MySQL port; we also see the name of our pod.

OK, so getting back to our services: we can list services, or list a specific service. To access this service, well, we actually already did that, because minikube, with its `service --url` option, pulled the URL out for us completely. But if we wanted to construct it for ourselves: we know it's the IP address of our single node (which in our case is actually our Docker host), so with a bit of sed we can pull out the IP address on its own. And if we do a `describe` of the service, we can see on what port we're exposing it. As I said, this is a NodePort service, so all nodes will expose the service on some port: the internal port 80 is being mapped to 30591 in this particular case. So again, we can extract that port number and then do a curl to access our service.

OK, so I mentioned labels. We saw that our mynginx deployment and pod have this label, `run=mynginx`, so providing that as a selector actually doesn't change anything; we'd have got the same result anyway. But what we can do is set new labels on a pod. So I'm going to set a label, `app=v1`, and if we do a `describe pod`, in fact this will describe both pods that we have running. We see that the first one has the label `app=v1`, which is what we just set, and down here is the second one with its labels, which doesn't have `app=v1`. I'm just showing how we can easily add labels and then select on them. (I'm going very quickly because we're getting close to 12:20, so sorry about that.) If we do `get pod` specifying `app=v1` as a selector, then we get just that one pod. Let me redo that so you can see: with the selector I get one, without the selector I get both of them. OK, we can scale up our
application fairly easily. So I did a scale of my mynginx deployment, and straight away afterwards I did a `get deploy`; we see that the desired state now is to have four instances, but only two are available. It takes just a little time for that to complete... and now we have four available, so that's done.

OK, we have the next speaker waiting, but we can also perform a rolling update, similar to what we did with Docker Swarm (I don't know if you got that far). This one might not work, because I had that service hanging around; I'm not quite sure. I skipped a step: I have this new service, Kubernetes Bootcamp, which just prints "Hello Kubernetes Bootcamp", amongst other things. Very similarly to Docker Swarm, we can roll up to the new version.

OK, sorry, I'll stop there. If you're interested, you have the links to the lab and you can do the rest on your own, and we really are interested in feedback: whether it's useful to you, or frustrating, or whether you'd like new features. Thanks for coming, thanks for your patience despite the problems, and enjoy the conference.

[Speaker changeover] Hi, nice to meet you. Hi, Michal. OK, so it's my turn now. Thank you. So, you have HDMI, right? Yes, we have, but the university's tech people tell me it doesn't always work. Oh, OK. Do you have VGA?
Yes. OK, but you can try HDMI and we'll see. OK, let's try. It's always worked for me, so let's see if it works. Oh yes, that might be useful. Do you think it's OK if I host some files on my computer and tell people to pull from my computer? Because otherwise they would have to pull them from the Fedora servers, and I'm not sure that will work. You can either connect to Wi-Fi, or there are Ethernet cables somewhere, but I'm not sure people will be able to... honestly, I don't know. OK, we'll try, and if it doesn't work they'll just use the Fedora server. OK. So how do you start the Minikube... did you start it as the user? OK. So, can we get the screen? Oh yes. Nice, awesome, so it works. Thank you. OK, so...