Welcome, ladies and gentlemen. This is a short-notice demonstration that we put together because another presenter was not able to make it, so I hope you will excuse the change of content. We would like to show you how to add a compute node with Mirantis MCP. This is a live demo, so bear with us; it's going to take a little bit of time. This is not a process that completes immediately, but we will show you the start, how we initiate the process, and then we will show you the end state, what it looks like after the whole thing is complete.

Just to talk a little bit about how we add a compute server, or how we do anything, in an MCP cloud: this is a system similar to a software-development CI/CD mechanism, where we have a code base in which a complete representation of the cloud is put into a directory hierarchy, and SaltStack is used to implement the changes from that directory structure into your actual cloud. So, as you see here, at the very first step code comes in and goes into Gerrit, where it needs to be approved. We have the Gerrit component removed for this specific presentation, because requiring approval would just muddy the waters. After Gerrit receives the data, reclass builds the representation of, in this case, the compute node. This is a specific file that Jenkins later uses to create the compute node, put the operating system on it, put nova-compute onto it, and add all the other necessary components. Then, at the end, the control plane needs to know about the compute node, and this is also done in Jenkins. So SaltStack is the configuration mechanism, and Jenkins triggers pipelines that use SaltStack to transfer the data to the respective environments.
In our case we only have one environment, but normally you would have a staging environment and production environments that only receive the data and the changes after the staging environment has tested good. And actually, one of the most unique parts of this is that when you are registering a node, whether you're adding it to an existing cluster or creating the cluster for the first time, the node itself becomes registered with all of our operational support systems. As a result, it begins to log, monitor, and alert from inception: from the moment MCP knows about it, it is already pre-registered with that cluster and continues forward. So you can operate it basically from StackLight on the monitoring side all the way back to the Gerrit and Jenkins side.

Right, and there are actually a couple of layers. The lower layers provide the absolute basics, the middle layer provides some enhancement to those basics, and the top layer provides things like node info and inherits the settings from the lower layers. For instance, if you say all your nodes will have this functionality and that functionality, StackLight on them, Cinder in a specific configuration, you don't have to put that into each node's configuration again; you only put in the things that are unique to that node, such as the host name and the domain name.
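That layered inheritance can be pictured as a recursive merge of parameter dictionaries. The following is a minimal Python sketch of the idea, not the actual reclass implementation; all the parameter names and values in it are made-up illustrations:

```python
# Sketch of the layered-inheritance idea described above: cluster-wide
# defaults are deep-merged with a node definition, and the node file only
# carries what is unique to that node. This is an illustration of the
# concept, not reclass itself.

def deep_merge(base, override):
    """Return base recursively updated with override (override wins)."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Layer shared by every node in the cluster (hypothetical values):
cluster_defaults = {
    "domain": "trainings.local",
    "stacklight": {"enabled": True},
    "cinder": {"backend": "lvm"},
}

# The node-level file only overrides what is unique to this node:
node_overrides = {"hostname": "cmp002", "cinder": {"backend": "ceph"}}

effective = deep_merge(cluster_defaults, node_overrides)
print(effective["hostname"])           # unique to the node
print(effective["domain"])             # inherited from the cluster layer
print(effective["cinder"]["backend"])  # node-level override wins
```

The point of the sketch: the node definition stays tiny, while everything shared (monitoring, storage backends, domain) comes from the layers below it.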
So here's the basic flow we're going to follow. We're not going to do the first step because, quite frankly, we're using a virtual machine to represent a physical node. We're just going to run through the modification of the two files that are required to establish this new node in the configuration, and then we're going to run the Jenkins job. After that, because it takes about 20 minutes to add that particular node, we're going to switch to a completed version of it, so you can see the difference between where it started and where it ends. Right, and this also shows that the node as such does not have to be anything specific: it can be a virtual machine or a hardware node; it just has to be something that can run the appropriate code base.

Okay, so let's start with yours. Can we switch screens, please? Perfect, thank you. Can everyone see this? Let's start from the beginning and make this a little bit larger. The very first thing that we need to do is unique to virtual-machine-based nodes: we have kvm.yaml, the file that creates the control plane and the compute nodes as virtual machines, since we do not have physical nodes to add here. I am going to add a segment for this node. As you can see, the segment has only a hostname and a whole bunch of inherited parameters, like the cluster domain. The cluster domain is not set in here; it is set farther down in the code base, so it will be inherited by everything that is part of this cluster. The only unique things in the YAML file are the name of the node itself and the type, OpenStack compute. And there is one space too many here; this is one of the things you have to make sure of, that the formatting is right. Okay, that was file number one. Now we are going to tell OpenStack that the node is needed, so we go to the compute node section, and
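The kvm.yaml segment being added looks roughly like the following. This is a hedged sketch patterned on the MCP cluster model, not a copy of the demo file; the class paths, image name, and provider host are assumptions:

```yaml
# Hypothetical kvm.yaml entry for the new VM-backed compute node.
# Everything except the node name and type is inherited from lower
# layers or interpolated from cluster-wide parameters.
parameters:
  salt:
    control:
      cluster:
        internal:
          node:
            cmp002:                        # the only truly unique bit
              name: cmp002
              provider: kvm01.${_param:cluster_domain}
              image: ${_param:salt_control_xenial_image}
              size: openstack.compute      # node type, as in the demo
```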
here you can see there's already one compute node set, and we are adding a second compute node. Again, most of what is in there is inherited; the only real differences are the name, the hostname, and the network parameters set for this node. This can be automated, but we want to show it manually. Okay, that looks good. The next step is that we are going to tell reclass.storage — yes, this will run reclass and generate the files necessary to build this compute node. Right, and I was just going to mention: for every node that gets added, there is an entry in a directory called _generated inside the node directory that defines the cluster, and that holds a YAML file containing all of the physical attributes of the particular server you're going to add. Exactly. Okay, so now you see 37 tasks are done, and only one has actually changed something: the one that generates the configuration file for this node.

The next step, which is also unique to the KVM methodology, is that we need to apply the state that will launch that virtual machine and create it with an operating system. This is going to take a little while, because what's happening in the background right now is that KVM is launching this instance from an image, and then the instance has to boot, which also takes a bit of time. We can watch it here; at the moment it's not up yet. This is looking at the console of the compute node that is coming up, and it takes about a minute or so until the compute node is far enough along that the KVM console is even going to respond. So this is the point where we tap-dance for about 30 seconds. Technically we should probably ballet, being crazy flexible. Yes. So, obviously, this process does not only apply to adding compute nodes; the same
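The second file edit, the reclass storage definition, gains an entry along these lines. Again a hedged sketch in the style of MCP's cluster model; the class names, parameter names, and the address are assumptions, not values from the demo:

```yaml
# Hypothetical reclass.storage entry for the second compute node; the
# generated node file lands under the _generated directory mentioned above.
parameters:
  reclass:
    storage:
      node:
        openstack_compute_node02:
          name: cmp002
          domain: ${_param:cluster_domain}
          classes:
            - cluster.${_param:cluster_name}.openstack.compute
          params:
            linux_system_codename: xenial
            single_address: 172.16.10.102   # assumed network parameter
```

With both files in place, the demo applies the reclass storage state on the Salt master to generate the node file, and then the state that creates and boots the VM on the KVM hosts; in MCP these are typically invocations along the lines of `salt-call state.sls reclass.storage` and `salt 'kvm*' state.sls salt.control`.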
process is used for pretty much all the other changes that you may possibly want to make to your cloud: changing configuration files, upgrading versions of packages, and so on. Pretty much everything. I can show you that later — actually, I can show you now. This is the Jenkins view. We have a large number of pipelines here; you can see, for instance, that the third one from the top is the one we are going to use, "Deploy OpenStack compute node", a pipeline made specifically for this purpose. We have "Update cloud": when new packages come out, they are automatically distributed and used to upgrade the software versions on the cloud. Upgrading the control VMs is an important piece for the control plane. OpenContrail has a lot of pipelines that you could possibly use, and it's also possible to create custom pipelines for tasks that are either unique to your environment or address something where a pipeline does not exist in MCP yet; those can be created.

So, let's see whether it's up. That's what you're on — kvm01, is that correct? It should be, yes. Sorry, that is the problem — hey, we're doing this live. Still not up. Do a `virsh list`. For cmp002 I need to use the complete name — yeah, exactly. So you can see that right now this is actually an Ubuntu operating system booting; we're going to wait for just a minute for that to complete. If the operating system is not fully booted and you trigger the Jenkins pipeline, it's entertaining, but it's going to be very messy and not particularly useful for any purpose. The networking here is very slow, so this takes a little bit. While we're waiting, perhaps you want to show OpenStack and Grafana, showing that we don't have the node yet.
Right, this is the OpenStack Horizon dashboard. Here you can see we only have one compute node, and later on we should see two compute nodes in this list. The same goes for Grafana: you see here we have only cmp001, and later on you will see the CPU usage, RAM usage, and disk usage split across two nodes. So finally, Ubuntu has gone beyond that stage, and now we should be very close. What's running here is cloud-init, which is used to set the parameters that make it possible to bind the node into the cloud, and once this has finished and shows us the operating system — there we go.

Okay, so now I can kick off — ah, this was the wrong one; that one, "Deploy OpenStack compute node", is the right one. "Build with Parameters". Obviously we need to put a parameter here, because Jenkins does not know anything about that compute node yet. trainings.local — you misspelled "trainings". Yeah, there you go. Okay — wait, wait, that's not right either. Okay, then we can actually show something here: what happens when the build fails. If you go to the console you'll be able to see the failure — on 01, yeah, like that, and to the right, "Console Output". So it's in process at this point.

Basically, all of these jobs have these stages built into them and follow them sequentially, and that's why Jenkins makes a perfect candidate for this: it's a step-by-step process to add or configure a node, and as a result you're able to do that in an automated fashion, repeatedly, ending up with exactly the same result, which is better than trying to do it by hand. Okay, so just go back to Jenkins — hit "Jenkins" at the top and then go back into the job. There you go. Okay, there we are. We have one stage that has completed — that one does not need the hostname — but here you can see exactly what happens if you mistype the hostname: it
still doesn't know the node is there. You can actually look at the logs here — look at the console, basically. Yes, there's the failure, and this is actually the right error message: no minion was targeted. Because we had the wrong hostname in there, this was never going to work. So let's try this again, this time with the right hostname. "Build with Parameters", cmp002.trainings.local — is that right? That should be fine. Okay, so now the build should start again and continue up there. I think we have two minutes left, so we are going to show the end state. Can we please switch to the other console? Okay.

So when this is finished, what you end up with is the node hooked into StackLight. Notice here you've got cmp001 and cmp002 — that is the new one we've just added. Bruce, can you first show the Jenkins job here? Yeah, you can see this is an endless litany of what the system does, and at the end, when it says SUCCESS, it has gone right. So this Jenkins pipeline actually completed properly — this is what failed before — and you can see there's a whole bunch of other stages: it sets up the repository, upgrades packages, and sets up the networking for that node. "Highstate compute" is what actually installs the OpenStack compute components; it is an endless list of things that need to be done to make a working compute node. And then "update and install monitoring" — that is what we mentioned before: StackLight is already on the node and is already able to see it as an operational compute node. So, as you can see, cmp002 has now shown up automatically from having run that job. And cmp002 isn't just there as a new registration; we were actually able to immediately install a VM instance on it. And just to prove that our fingers never leave our wrists as we do this,
here's the overview of it, and the actual console, and of course you see the login right there. So — "Go Cubs, go" — ta-da. Somebody really loves the Cubs. Okay, so this is basically how easy it is to add to and modify an MCP-deployed OpenStack environment. You run through the steps, and all of the processes to manage your cluster have pipelines, so you're not really dealing with Salt or any of the back-end stuff; you're dealing with native tools that most people are familiar with: Gerrit for change approval, Jenkins, and then Grafana to display the state of that node, so you can see what's going on with it and whether it's healthy or not. Right, and in real life you would not modify that file manually on the master: you would check it out from Gerrit, modify the file on your computer, check it back into Gerrit, have it approved, and then go from there. And that's it from us today. Thank you very much; I hope it was instructional and at least a little bit entertaining.