All right, if you're all ready we're going to go ahead and get started. Good morning, everyone. My name is Al Kari, I'm a global cloud evangelist with Microsoft, and my partners in this session are Kambiz and Will, who are around the room helping everyone. If you need help, please reach out and raise your hand; they're going to be available to help you with any issues you run into. This is a lab. Yes, and they are with Red Hat.

So, this is a lab. If you have any difficulties, please reach out. The intent is for us to walk through the lab together. If anybody is behind or needs a little extra time, please let us know and we'll slow down to your pace. The lab is simple and straightforward; it's intended to be really easy to go through, with few complications. Hopefully we won't run into a lot of trouble.

The lab is on building microservices using Atomic, and we're going to be using two versions: Fedora 23 Atomic and CentOS Atomic Host 7. The environment is online on trystack.org, an OpenStack sandbox where you can go and try out applications and do testing for whatever it is you want to test on OpenStack. It's available to the public, and you can try it at home if you like. If you would like to follow along with me and see the slides on your laptop, they're on pochub.com, and we're going to be using another link there with the steps, so if you have that open in a back window in your browser it will be handy when we get to it.

Red Hat Atomic, CentOS Atomic, and Fedora Atomic run a full stack of Kubernetes and Docker. Go ahead — a few people are asking about the passwords already, and I just wanted to mention that it's going to be on one of the subsequent slides, but it's redhat123 for any passwords, if anybody's curious. All the passwords are going to be redhat123 today.
So whenever you need a password, it's just redhat123.

We will use Kubernetes, which is an orchestration engine that allows us to run a cluster of Docker containers across Docker hosts. We are using Docker for running those containers, and we are going to be utilizing etcd as a datastore. For those of you who are not familiar with it, etcd is a very simple data store that allows you to store key-value pairs and retrieve them from wherever you want using API calls. It's used mainly for networking, so networks can store their configuration and retrieve it from the different containers and different objects within our environment.

Flannel is what we've selected for this lab to run our overlay network. For those of you who are familiar with Open vSwitch, flannel is a really miniature version of it. It's a simple overlay network, and we're going to be using it in a very lightweight way. It will run VXLAN tunnels between our containers; it gives you access to the containers from within the hosts, and lets containers talk to one another.

For our lab architecture today — and please pardon the layout, it got messed up when I moved to Windows — we're going to have three nodes to work with: three VMs hosted on TryStack. Those VMs are just standard instances in an OpenStack environment. We are using only a small size, because there are many of us and the environment needs to handle all of us doing all kinds of stuff at the same time.

Our master node is where we'll be logging in. It's a Fedora instance with a full operating system stack and a floating IP address connected via the router to the outside world, and this is where you're going to be logging into the environment from. We're going to be using a terminal session, whether that's on your Mac or a PuTTY session on your PC — or if you're using Linux, you're lucky.

We're going to build a registry cache for Docker, so we're not pulling container
images all the time from Docker Hub; we're going to store them locally on one of our nodes. That's the registry, on the right. We can deploy as many minions as we like — we're only going to deploy one today. Minions are your Docker nodes; they are where your containers are going to be running. Because we're limited to three nodes today, we are going to run the master also as a Docker node, so the master will also be a minion.

From OpenStack, if you've already logged in to www.x86.trystack.org with the username you have on that small piece of paper and the redhat123 password, you will see the environment. You will see that instances have already been started for you, pre-configured with name resolution and with the software we're going to be using today — so the software is pre-installed.

The network topology view will show you how the current network is laid out. It's a little prettier in this version of OpenStack — this is Liberty. So we've got the router, our network, and our three nodes, and this is how we're getting into the environment from the Internet.

I'm going to spare you the slides; we're going to get into the lab very quickly. The slides were just to get you oriented. So here are the details for logging in. Your username is redhat1 — I'm sorry, your username is whatever is on that piece of paper, and redhat123 is your password. To log in to the master, use fedora as the user and the public IP address that you have on that piece of paper. The registry and minion1 nodes are on the internal network, and you'll be able to log in to them from within the master only; you're unable to log in to them directly.

Can we get the new credentials here and here, please?

One thing to note: make sure you're going to www.x86.trystack.org, not just x86.trystack.org. We actually have two parallel OpenStack implementations; if you forgot the www, you're logging in to a past version.
This is the most recent one. Was everybody able to log in? Anyone still not logged in? Okay.

For public use, try trystack.org when you get back home, and the full lab setup is on digiradis.com if you want to configure your VMs or install the software yourself. We skipped those steps for you because installing software takes some time, and we wanted to save you that wait time here. It also puts a lot of strain on the resources of the environment — all of us doing it at the same time makes it crawl a little.

Also, please remember to sudo everything as we go through this. We're logging in as a non-privileged user, so we're going to need to run everything with root access. So please use sudo. All right.

If I can ask you to please go to the URL below, pochub.com/steps.txt — I put the text for the lab in a txt file so you can copy and paste and not have to do a lot of typing. Do I get a thank-you for that? All right, so the text file is open here; I hope you can see it well. And I'm going to be using PuTTY.

Also, just to be clear, if you have issues getting into the Horizon dashboard: the student number is going to be your tenant, so that's specific to you, and redhat123 is the password there. The fedora user and the other users are for your actual instances running on OpenStack.

So we're going to walk through this lab together. If anybody is not yet at the login prompt to the master, please raise your hand. Okay, I think we're all logged in. And if I can do something here to make this a little bit bigger... Is the display okay? Is everyone okay seeing this? All right.

So we'll walk through this together. Please let me or one of my peers know if you're behind, so we'll slow down, go back, and catch up with you.
But the first thing I'm going to do is log into my registry. We're going to build the registry first, because that's where our Docker images are going to be stored, and frankly it's the easiest to build. It will take us only a few minutes to go through this, and then we'll get to configuring our master and nodes. So I'm going to SSH to my registry with redhat123 as the password, and once I'm at the registry prompt I can start configuring it.

I'm going to start copying and pasting, so if you want to do the same thing — or if you feel like typing, please have at it. If you're copying from the text file, please copy after the space following the dollar sign. Do not copy the text that has the hostname; it's only there to show you which host you're going to be putting that information into.

What we're doing here is, just as I mentioned, replicating a cache for the Docker registry, so we can have our images downloaded locally and not have to go out to Docker Hub every time we download stuff. You see here it's downloading all of the repos; it's going to take only a few seconds and we should be all set.

Say that again, please? So right now we're just creating a repository, a local cache. We're not creating any containers, we're not creating any fancy stuff. We're just pulling the Docker registry into a new container that serves all of the Docker repos to the environment. We do that only as a cache, so as we go to the next steps we won't be pulling images directly from Docker Hub all the time. This is basically a one-time thing. Yes — okay, mine is done. Who else is done? Okay, some are still going through.

If yours is done, you can start configuring the service. In the next step we're going to create a local service for that container so we can start using it to service requests locally. There's a little bit of text here that creates that service in your current kernel and runs it. I'm just going to copy it and paste it; it shouldn't take very long. Then we'll reload the daemon and enable and start the services. I did not wait long enough... so it's loaded and started. I'm going to set an SELinux context for it so we don't have any conflicts. And that's that — once you get to that point, your registry is configured and ready to go.

In the past five minutes or so since we started this exercise, we just downloaded all of Docker here, so you have a cache now that allows you to launch any Docker image from right here. This is one of the strengths, something that makes containers so powerful: they're so lightweight, so mobile, and so flexible. It takes only this long, and this many commands, to pull the entire repository down.

I'm going to wait a couple of minutes. If anybody is still not ready, please let us know. Okay, keep going — slowly. Yeah, there are a ton of us doing it at the same time. The good news is the rest of the environment is pre-installed with all the software, so we're not going to have a lot of bottlenecks like this. Anyone not complete yet? Okay, we'll wait.

Al, the slide that you're on with the steps — is that hyperlinked on the slide page, or do you still have to cut and paste the steps? Right there, it's pochub. Right, but is it hyperlinked, so you can just click on it? You mentioned you were going to change that. You may have to type it; somehow the link isn't working. I'm sorry about that. No problem. Not going to blame this one on Windows — it's me. Kambiz, can we give her the microphone, please?

In the meantime, while we are waiting, is it possible to go through a slide or something that gives us a little introduction, at a high level, of what we are trying to accomplish today?
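Stepping back to the registry just built: the actual commands pasted from steps.txt aren't reproduced in the transcript, but a minimal sketch of the two steps — pulling and running a local registry container, then wrapping it in a systemd service — could look like this. The image name (registry), port 5000, data path, and unit name are all assumptions; the exact text comes from the lab's steps.txt.

```shell
# Pull the upstream registry image and run it as a local cache
# (names, port, and volume path are assumptions for illustration)
sudo docker pull registry
sudo docker run -d --name registry \
    -p 5000:5000 \
    -v /var/lib/registry:/var/lib/registry \
    registry
```

A hypothetical unit file to manage it, matching the daemon-reload / enable / start sequence described:

```ini
# /etc/systemd/system/registry.service -- hypothetical unit name
[Unit]
Description=Local Docker registry cache
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/docker start -a registry
ExecStop=/usr/bin/docker stop registry
Restart=always

[Install]
WantedBy=multi-user.target
```

followed by `sudo systemctl daemon-reload`, `sudo systemctl enable registry`, and `sudo systemctl start registry`, as walked through in the session.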
I did not prepare slides. This was intended to be a hands-on lab, so I didn't really bring slides to walk through any theory. This is really about rolling up your sleeves and doing it. So my apologies, I did not prepare for that.

So I think as we go through the lab, maybe if you're also going through it, you can kind of explain, like you did with me, what it is that's really going on behind the scenes as the various components come up? Certainly. Okay, makes sense.

There's also one thing I haven't mentioned: this environment is going to be available for about a week after the summit, and you can keep the credentials. You can change the password if you like to something you're comfortable with, and if you'd like to try things out and test with it, feel free — we're only going to have it for one week after the summit. But beyond the summit, if you want to try things out again, trystack.org is available and open, and you can build on it from scratch.

Do we still have anyone who's not done? I see four people, five people. Okay. Is anyone way ahead of us? Oh, cool. All right, one more minute before we keep going. All right, just curious, anybody still not done? I see only two people, three people. I'm going to go ahead and get going, and for those who are still working on it, I hope you can catch up with us. Please ask Kambiz or Will for help; they can help you get there faster.

For the next step, I'm going to be working on my master. So let me move this up.
So you see where we are. The first thing: I did not have time to set an SELinux context for our lab, so we're simply going to set SELinux to permissive in this case, and we're going to make that permanent by changing enforcing to permissive in the config. Any hands in the back? Kambiz, if you please — or Will — check all the way in the back.

I'm going to enable Docker and Cockpit. Has anybody here heard of Cockpit? Cockpit is part of Atomic; it's a dashboard interface for Kubernetes. Basically, it's very similar to Horizon for OpenStack — it's just a pretty interface. You just enable it and start it.

Now we're going to start getting our hands dirty with configuration files. For the configuration files, I'm going to walk quickly through each line — what it's doing and why we're doing it. If you have questions, please stop me.

The first thing — someone asked if I can expand this so you all can see it — is etcd.conf. We're going to make etcd listen on all network adapters, on port 4001. The next thing we're going to configure is Kubernetes. Kubernetes has very involved configuration options and it's very flexible; it has a lot of details. So I cut the number of changes we need to make down to a handful that are really easy to apply
and that will fit pretty much any environment. You can customize it to do a lot more, but for our purposes today the customizations we're going to make are straightforward. Kubernetes has four configuration files: the config file, the apiserver file, the controller-manager file, and the kubelet file. We'll talk about those in a second.

We'll configure the config file first, to make it listen on its IP address rather than the localhost address, and then we're going to define the server port. So the first line is simply changing 127.0.0.1 to 10.10.10.3, which is its IP address, and the second line is just defining who the server is — in this case it is .3, our host.

The apiserver file is where we configure the API endpoints, like you would do in OpenStack. This is your API server; this is where all of the services are going to be talking to Kubernetes. So first we'll set it to listen on all addresses, and we'll tell it where its etcd server is — remember, etcd is that key-value store — so it can communicate with it. We'll give it an IP range for the cluster, so the instances that are created are going to pick from that range. And then this line here, admission control, has all the specific options for our setting. They are too detailed to go through, so if you're interested in what each one does, look at the documentation, but they basically help us do what we're trying to do today; all the different options that are available you would change in that line.

So we're just going to copy that section. I'm sorry.
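As a sketch, the etcd and Kubernetes edits just walked through could look like this. The 10.10.10.3 address and port 4001 come from the session; the file paths follow the Fedora packaging, and the service IP range and admission-control list are typical defaults that should be treated as assumptions.

```ini
# /etc/etcd/etcd.conf -- listen on all adapters, port 4001
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:4001"

# /etc/kubernetes/config -- point at the master instead of localhost
KUBE_MASTER="--master=http://10.10.10.3:8080"

# /etc/kubernetes/apiserver -- listen everywhere, point at etcd,
# set the cluster service IP range and admission controllers
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.10.10.3:4001"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota"
```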
I didn't back it up first, so: backing up the original file and creating a new file. The controller-manager file tells our Kubernetes host where the nodes are and how to find each one of them, so we're basically just telling it the machine IP addresses.

The kubelet file is specific to the node itself. For every minion, there's a kubelet file that tells it where to find its master. This is where we change localhost to make it listen on all IP addresses, we change the hostname to master — in our case with a fully qualified domain name — and we set which network adapters it's going to listen on.

A small plug, because I've noticed there have been a couple of speedy people who have gone through the lab. Just as an FYI, this is basically the new TryStack environment that's going to go online sometime next week. These accounts that you are using need to be deleted, just to make room, because we're basically at capacity. However, if you wanted to go through the lab again, or if you know someone who might be interested in seeing the work that went into putting it all together, please feel free to join TryStack. Currently authentication is done via a Facebook group, so if you're not averse to Facebook and you have a Facebook account, look for the TryStack group and request to join, and we'll approve your joining. If you noticed when you went to the Horizon page, there was a "Log in with Facebook" option. All the student accounts were set up statically outside of that, but if you're in the Facebook group you'll get redirected to Facebook, and as long as you're logged in you'll get sent back to TryStack, and then you'll be able to run through the lab for yourself using your Facebook-integrated account. I just wanted to mention that to everyone. Thank you.

One other thing: trystack.org is an OpenStack Foundation project.
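Going back to the two Kubernetes files described above, the changes could be sketched like this. The 10.10.10.x addresses follow the session's layout; the second minion address and the FQDN are hypothetical placeholders.

```ini
# /etc/kubernetes/controller-manager -- tell the master where the nodes are
# (the .5 address is an assumed example for minion1)
KUBELET_ADDRESSES="--machines=10.10.10.3,10.10.10.5"

# /etc/kubernetes/kubelet (on the master, which also runs as a minion)
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=master.example.com"  # hypothetical FQDN
KUBELET_API_SERVER="--api-servers=http://10.10.10.3:8080"
```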
So it's a completely volunteer effort, it's completely free, and you can use it to do pretty much whatever you like, as long as it's within the tenets of normal usage and legality. I have to add, also: they don't download your pictures from Facebook, so don't worry about that.

So, next step: we're going to be enabling and starting the services. These are the services we pre-installed for you: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and kubelet. We're enabling and starting those services right now.

A comment, Al — a request that you share the list of packages we pre-installed on the Fedora cloud image. So the cloud image is Fedora for the master host, and it's basically the vanilla cloud image plus pre-installation of about five or six different packages, which I was going to share. I'm going to put these on the screen real quick. Okay, those are the packages we pre-installed for you: kubernetes, docker, etcd, flannel, cockpit, and cockpit-kubernetes.

Another issue that a lot of people seem to be running into is that during the start of the services, the kube-apiserver service spits out an error that it didn't start correctly. I'm not sure exactly if that's related to overall load on the TryStack platform, with 200 accounts doing things at the same time, or if something else is going on. They're getting an error that says it failed to start and to look at the journal, but if they run the start command again, it just comes right back. Okay — look at the systemctl status on the service; it appears to be running. So if something didn't start, please try again. Give it a kick; then see if your laptop is slow or if the internet is slow.

So far we've configured etcd and Kubernetes; the next component we'll configure is flannel, our overlay network. Now, to configure flannel, we're going to do some interesting things.
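The enable-and-start step just described can be sketched as a simple loop over the service names listed in the session; etcd goes first, since the Kubernetes services depend on it.

```shell
# Enable and start the master-side services in order
for svc in etcd kube-apiserver kube-controller-manager kube-scheduler kubelet; do
    sudo systemctl enable "$svc"
    sudo systemctl start "$svc"
done

# If kube-apiserver reports a failure under load, re-run the start and check:
sudo systemctl status kube-apiserver
```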
We're actually going to push a JSON file to a key in the key-value store we built in etcd, and then we're going to tell flannel where that key is. In that JSON file, we define the network for the overlay network. Down here it says VXLAN — you could use GRE if you want, but nobody does anymore, so we're going to be using VXLAN. We're going to use a /12 network — a really big network — and then we're going to tell every host to have a /24, a tighter, smaller network. This way all the networks can talk to one another — they're on the same subnet — and they can talk to the host as well.

Pushing that JSON file starts with just creating the file. Again, if you feel like launching vi and typing that file yourself, have at it. I'm going to paste it here. It's a very simple text file in JSON, and I just typed it. And now we're going to push the configuration into etcd. We're going to use curl to do that: curl with an uppercase L and -X PUT will actually push that file with that key name, atomic-key/network/config. You could call it Papa Smurf if you like — you could call it anything you want, as long as you use the same name in the flannel configuration. Then we use curl one more time to validate that our configuration was pushed.

There's a question. Yeah, we'll configure one minion today, but you'll be able to add as many as you like in your environment.

On the next line we're testing our configuration using curl, and in our case we're going to use a simple tool to make it look pretty: jq, the JSON query tool.
So this is a lookup into Hcd right now looking into that Key value store and showing you what's in that in that key And that's that we're done with configuring our master Next we're gonna be configuring minions We we spent about five ten minutes on on the registry Roughly about fifteen twenty minutes on Master many and should take us ten minutes and we should be ready to play Well, there's a hand over there. Well, is anybody done with master? Okay, we have five people at least so the intent of this session is To give you the tools available to Create a cluster really quickly if you have an environment already or if you want to test things out online and Once you got a cluster of Kubernetes running and you have a bunch of hosts And you're able to launch different containers and different applications You get you get a feel for what the power of and what it can do for you and you can start testing things out Very quickly. You don't have to make any investment to do that. All of the tools we're using today are open source Sent us fedora and atomic are all open source projects that are available for download on here You can go and there's a link to to go and download the images For fedora and for sent us Those and and also a link for the updated So if you're building this a month from nowadays a new virgin go to the link and download the most recent But all of that is is available at at your disposal for free anytime So you you are way ahead It will take a few minutes, but so just out of curiosity where you able to Get to the cockpit dashboard and you are you seeing it in? So it should it should theoretically reboot very quickly, but with so many of us in the environment It may be maybe taking a few minutes That's it. It's everyone done with master. Who's not done? Everybody is done with master. Okay. I'm gonna move on to to the minion and this is very quick It should should be simple. 
We're simply going to be doing pretty much the same things, but in this case we're only configuring the node, without all those other services.

Okay, first thing we're doing for the minion: we're going to tell it not to go to Docker Hub to download containers, but to go to the repository we created at the beginning. Then we're going to configure flannel to use etcd, and tell it which network adapter to use. Notice here, this key right here says coreos.com — that's the original key that's packaged in flannel. We're changing it to atomic-key, which is the key we created — it could have been Papa Smurf, as I mentioned. And next we're going to tell this minion where its master is. The kubelet file is where we give it its name, and we tell it its address and the master's address.

The question here is: once we build this nginx cluster inside of containers, behind a load balancer — is that, in this demo, all on the private network, or did we do anything to set up a port forward on the Fedora host so you can actually, literally, hit that backend cluster? You can — it's not in the lab right now, but it's a couple of extra steps that could be done easily. So right now it is just going to be internal. Sorry, go ahead.

Okay, I think I'm done configuring. I'm just going to enable and then reboot, so we are enabling the flanneld, kube-proxy, and kubelet services, and I'm rebooting my instance. I think I'm behind some of you — a lot of you seem to have gone way past that.

There's a question on one of the steps where the flanneld configuration file is modified using sed, and I'm wondering if, at the time the instructions were made, the vanilla configuration file may have included coreos.com as opposed to atomic.io. Can you go to that file and look at it, see what's in there, please?
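The minion-side edits just described could be sketched like this. The registry address is an assumed example, the file paths follow the Fedora packaging, and the exact option names should be treated as assumptions — the lab's steps.txt has the real text.

```ini
# /etc/sysconfig/docker -- use the local registry cache instead of Docker Hub
# (the .4 address for the registry node is an assumed example)
OPTIONS="--registry-mirror=http://10.10.10.4:5000"

# /etc/sysconfig/flanneld -- point flannel at etcd and the lab's key prefix
# (atomic-key replaces the coreos.com default discussed in the session)
FLANNEL_ETCD="http://10.10.10.3:4001"
FLANNEL_ETCD_KEY="/atomic-key/network"
FLANNEL_OPTIONS="--iface=eth0"

# /etc/kubernetes/kubelet (minion) -- who am I, and where is the master
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=minion1.example.com"  # hypothetical FQDN
KUBELET_API_SERVER="--api-servers=http://10.10.10.3:8080"
```

followed by `sudo systemctl enable flanneld kube-proxy kubelet` and a reboot, as described.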
Yeah, well, so the flannel etcd key is /atomic.io/network, and I'm wondering if it needs to be something else — atomic-key — in the flanneld file. Right, so you're trying to substitute coreos.com with atomic-key, but if you actually look at the contents of the file on disk, there is not a coreos.com, but rather an atomic.io for the key. So could you look at that on yours, Al? Yeah — it's still rebooting, just give it one second and I'll log back into it. Yes.

We had another gentleman back here with a similar issue, and we went through Horizon and did a hard reboot. Did that host end up coming back? Somebody else had that issue.

Yeah, for those of you having issues with the minion coming back: just as Al is doing, go into the Horizon interface for TryStack at www.x86.trystack.org, log in with your student account, look at your instances, and select the minion1 instance. There's a small drop-down in the upper right corner; select the hard reboot option and reboot that instance. It should really only take, you know, three or four minutes for the reboot to come back and allow you to SSH from the fedora account. So if that's not working, please try to use Horizon and do that hard reboot.

So what Kambiz is referring to is actually brute-forcing it to reboot — because with so many of us working through it, apparently it says the machine restarted, but it's hung there. So we can brute-force it by going to Instances on our dashboard, then on the drop-down here, next to the minion, we can actually force it to reboot. Kambiz, did we use soft or hard reboot? I guess maybe I used hard, because I'm impatient. But I haven't had any issues with doing a hard reboot, so give it a strong kick if you like. You can always come back here and watch it reboot if you like; the console shows it to you as if you're looking at bare metal. Is everybody else having the same problem?
I'm having trouble rebooting that minion. Yeah, it does seem like it's refusing to reboot, for me at least — starting the boot process, but hanging there. Yeah, okay, please. So, I promise you, when there aren't so many of us doing this, this takes only a minute. Sorry — Will, if you don't mind when you're done...

So, regardless of the minion coming back up — it should have come up by now, but for some of us it's taking a little while — we can still log into Cockpit and check things out if you like. Cockpit is running on the master, on port 9090, with the same IP address that's up here. So if you just put http:// plus this IP address and port 9090 in your browser, you should be able to log into it. Here's the file — which line was it?

To log into Cockpit, it's fedora and redhat123. That's it — done. So Cockpit is working in my environment, and it pops a certificate error because we're using a self-signed certificate. Just ignore that and keep going; it shouldn't be a problem.

The minions are optional, right? I mean, that's just to illustrate the fact that it's load balanced — is that correct? I can't hear you, I'm sorry. The minions are optional, if we were to shut them all down? Yeah, you could run the environment without the minions, because we're running the master as a minion. All you would have is just a one-node cluster.

So, everyone who's got a minion that's actually running — would you mind shutting them down going forward, just so we can proceed with the rest? I think what we're seeing is heavy load contention on the OpenStack infrastructure.
There are only seven compute nodes, and there are six hundred guests running to facilitate this environment, so it would help — I think everyone would enjoy the rest of the demonstration — if they just shut down their minion.

So let's do what Kambiz is suggesting, in the interest of time, so you all can see the actual Cockpit. If we can shut down our minions — you can come back and start them later, when not all of us are doing it at the same time, and try it out if you like. To shut them down, I'm just going to come back to my dashboard, to Instances, and I'm just going to click shut down on minion1. It's that simple. I'm going to shut it off.

Can everybody see it? So they say demos are hard; labs are even harder. We didn't expect this contention, or this number of people, but we frankly didn't have a chance to test it with so many people either. So apologies that this didn't work as planned. But after the lab, if you have a few minutes during lunch break, if you launch your minion and go through the steps, you'll see it comes back up and it should perform. Oh — let's not all do it at lunch at the same time. On the positive side, this is the first time I've seen a 320 load average on an OpenStack node. So that's always entertaining.

I'm going to go in Cockpit to the Cluster view. I was going to show you how to add nodes — it's really straightforward.
It's really straightforward But I'm gonna go to the cluster node here and demonstrate how to launch new applications We're gonna launch a an engine X Web server Really simple straightforward nothing special nothing complicated everything just out of the box I'm gonna keep this on the topology Tab here just to show you what's gonna happen when we launch that application So just like in Docker the command to run a new Cluster and in this case we're not saying Docker run We're saying coop CTL run Kubernetes will take over and will start multiple containers on the host that are available those hosts could Could potentially go offline. They could potentially have contention or whatever the The way Kubernetes Work as it will actually restart Services on other nodes when when those nodes go down or are having resource contention it's also going to schedule your Your your containers based on resource availability So I've if one host in our case right now We only have the one node to play with but if you have multiple nodes It will actually scale it across the environment based on on resource load We're gonna do in this case. This says for replicas. We're gonna run for Containers in a load balanced way Just because we lost one node we're not we're working with one at this point I'm gonna ask you to do two for now But if you feel like it after the after the lab is done and you're you're on your laptop testing things out Have at it launch 20 containers at the same time and see how fast that happens Obviously when there's not so many people in the room working on it See how fast it will actually launch multiple like multitude of containers for you all load balance and all running the services that you want and In a scale out fashion, so I'm gonna copy and start This here though. I want you to see it happen in the background as I actually Launch this it will be so much more fun. 
If we actually could... I lost connection. I lost internet connectivity altogether; both my PuTTY session and the browser stopped. So we had a brief interruption. Did anybody else have a network drop? Okay, I'm back online. I'm going to run that command with two containers. Those two containers will run nginx, listening on port 80. You can see them in Cockpit getting created. They're still light in color, which means they're not online yet; when their color turns dark blue, they're online and listening. There's one service here, the Kubernetes service, there's one node right now, and we have our nginx servers running. In the next step of the lab we're going to load balance them, so they'll be listening on port 80 on the outside network. We just run the next command to create the load-balanced service, and that instantly creates the service in Kubernetes and in Cockpit. [Audience question] Correct, this needs to go out to the public network via OpenStack; I'll take that up with you separately afterwards if you like. When these two come online, and they're obviously taking their time, they usually come online within seconds, they will automatically plug into the load balancer, which is this guy right here, and they will start servicing requests. If there's a failure on one of the nodes, they will automatically relaunch on different nodes that are available. Okay, just briefly covering the rest of the lab here, if you're doing it on your own. There it is. Really, the next steps just delete those services, so you've seen how to launch new services, how fast that happens, and how quickly you can delete them: you just delete the service and delete the load balancer along with your whole set. If you have questions, please find me on Twitter.
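A hedged sketch of these remaining steps, assuming the replication controller created earlier is named `nginx`:

```shell
# Expose the replicas behind a single load-balanced service on port 80.
kubectl expose rc nginx --port=80 --type=LoadBalancer

# Check the service and the pod endpoints plugged into it.
kubectl get service nginx
kubectl get endpoints nginx

# Cleanup, as in the final lab steps: delete the service, then the
# replication controller (which also removes the pods it manages).
kubectl delete service nginx
kubectl delete rc nginx
```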
I'm al-kari on Twitter, and DigiRidings.com has the full details of setting up your instances, if you want to know what to install and how to configure networking and all that. Kubernetes.io has phenomenal documentation, with a lot of different examples and scenarios you can run right out of the box: just read what a scenario is trying to do and deploy the services. You could deploy a full stack if you like, a back-end database server, an application server, a front end, a full stack of applications, within seconds, and once you have multiple minions, multiple nodes, it will automatically do the load balancing and the scale-out for you. Give it a try if you have a chance; go through some of those documents. They will really help you understand the whole multitude of scenarios you can build. Now, why are we doing this? Consider that Google actually runs everything on containers, something to the tune of about two billion containers every single day. That's how important containers are today. If you have an OpenStack cluster that you built in your environment and you want to try things out, use it for something useful. If you don't have that and you're interested in learning how containers work: Docker in itself is a great container technology, but without orchestration it's really just a container host. There are multiple orchestrators out there; the Docker folks will tell you Swarm is the way to go and so much better than Kubernetes. There are many ways to skin a cat, and this is one of them. Try this, try Swarm, and see if Swarm works better for you.
See if it has better application for what you're doing, but don't discount containers. And OpenStack is a great platform to develop your containers on: it provides you with infrastructure-as-a-service tools that make it really straightforward to build whatever it is you're trying to do with your PaaS. If you have any questions, please ask me outside or reach out on Twitter; I'm al-kari, as I said. We're going to stick around here for a few more minutes. We have five more minutes, but without the second node coming online we're not going to be able to demonstrate the failover options. With that, I want to thank you very much for being here today.