the cloud, perhaps. Well, it claims to be doing it, anyway. And if I get a link at the end of this which says it actually recorded, I'll share it with everyone as a follow-up to the session. I hope people are following along.

So let's go back and have another look at our controller node now. There are several interesting things that have happened. Firstly, we've gained some new users, kolla and stack. We've gained some bridge devices. We've gained a Docker runtime, but no Docker containers yet. Other things could happen too: we could have LVM configuration, for example, and other things besides, in terms of repos and what have you. So there's a good deal of preparation work that can be done at this stage to turn our bare-bones CentOS VM into a working thing.

Let's start the next bit, which could take a little while: the container image pull. At this stage we're bringing down all of the Docker container images to the control plane, so to the controller and to the hypervisor, the compute node. This is done according to the OpenStack Kolla configuration: which services are defined to be running, and placed on which nodes. What's happening here is that we're talking to our local Docker registry, which is inside the deployment in Equinix Metal and local to the region, but not on the lab VM itself, and it has a list of containers to download. I can show you where that comes from; I'll just change tabs while it's doing that download.

So, back into the Kayobe config repo, our infrastructure as code. There's a file called kolla.yml, and this is largely at defaults, I think, which means we're not going to get anything special. The only significant difference here is that we're not enabling Ironic. By default the control plane includes the Ironic service for managing bare metal, certainly up to the Ussuri release. If any of these kolla_enable flags had been set to something other than the default, we would start pulling in the container images for those services too. And within the Universe from Nothing lab environment, it's perfectly possible and quite straightforward to change the configuration of a running setup, run the playbooks again, and have it reconfigured to the new setup.

Let's see how it's doing. Still a little way to go. If we were doing this on a running machine, for example if we were updating the versions of some of our containers (and you can update them on a very fine-grained basis, so you can easily take one service and update it from one point release to the next), then pulling the containers has no effect on the running system. It simply stages the software images local to where they run, which then enables us to shut down the active instance and replace it with a new version in a fairly snappy way. So the time that any single service is down during an upgrade is just the time it takes to restart that service. But also, Kolla Ansible has the logic of orchestration, so for things like the stateful services, the databases and the message queues, the update across the control plane is done in an orchestrated way, with no disruption to service while it happens.

My system has completed the pull of containers. Now, this is a bit that could easily go wrong with everybody in the lab pulling from the local registry, so if you hit problems with yours, do shout out.
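For reference, the enable flags and the pull stage just described boil down to something like this. A minimal sketch, assuming a standard kayobe-config layout; the flag value shown is illustrative:

```bash
# The kolla_enable_* flags live in etc/kayobe/kolla.yml in the Kayobe
# config repo; in this lab most of them are left at their defaults.
grep 'kolla_enable' etc/kayobe/kolla.yml
# kolla_enable_ironic: false        <- illustrative output

# Stage the container images on the controller and compute hosts.
# Pulling has no effect on running services; it only stages images locally.
kayobe overcloud container image pull
```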
And hopefully Mark or John can step in and help you get the bits that are missing.

Overcloud service deploy, then, is the process of writing the configuration for the services we're going to deploy onto the controller node and the compute node, and then starting them up in a coherent fashion. The way it does this in the Kolla model is quite nice, and it seems to work quite well, so I can quickly take you through the different stages in which the configuration of the different containers is written. Kolla is what you might call opinionated: it has defaults for pretty much everything, apart from parameters that simply cannot have a sensible default in every deployment. That means the configuration we actually need to worry about is where we differ from an essentially sensible set of defaults, so the configuration we store under source control is not unduly big, and is as simple as it can be without being simpler than that.

I'll show you a little about how that works and the stages in which configuration makes its way from our Kayobe config repo onto the running machine. There are three key stages, maybe four. I'll just change terminal again to our config here. Inside this directory there is a subdirectory called kolla, and inside that one called config. This is where we can put a lot of service-specific config in a very fine-grained way: specific files, or specific data we want added to files inside the OpenStack control plane configuration. It's really quite targeted and precise.

There are several ways we can apply data. The first is that there will be a service.conf file for pretty much every OpenStack service. For example, here we have Neutron, and there is some custom configuration here applied to a file called neutron.conf. Pretty much every OpenStack service follows that service.conf naming convention, so the easy pieces usually go in there. What we can see here is a fragment of an ini file, but it has some Jinja parameterization to it as well. We can see this variable, aio_mtu, and that is expanded from our set of Kayobe variables, which were a couple of directories up in networks.yml. So this is a symbolic representation of the MTU on our all-in-one network, and that's quite a powerful thing for a start: in our neutron.conf we can add particular fragments of config and refer to them in this sort of symbolic algebra of our Kayobe configuration, so we don't have to carry a load of numbers around, repeat them everywhere, and then change them everywhere too.

The next level of detail is that within the Neutron service we might have other config files which don't follow this fairly simplistic naming convention, ml2_conf.ini for example. So within a subdirectory of our kolla config directory we also have a place to put other files, which will all be transferred over into the running configuration of our services. Again, ml2_conf.ini has a variation on the theme with the parameter path_mtu, and this will be inserted into the other global config that Kolla applies on the machine itself.
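A hypothetical reconstruction of the two fragments just described; the variable name aio_mtu and the exact option names are as recalled from the lab config and may differ:

```bash
# etc/kayobe/kolla/config/neutron/neutron.conf -- the service.conf convention,
# with a Jinja reference to a Kayobe variable defined in networks.yml:
cat etc/kayobe/kolla/config/neutron/neutron.conf
# [DEFAULT]
# global_physnet_mtu = {{ aio_mtu }}

# etc/kayobe/kolla/config/neutron/ml2_conf.ini -- an extra file that doesn't
# follow the service.conf convention, carried over the same way:
cat etc/kayobe/kolla/config/neutron/ml2_conf.ini
# [ml2]
# path_mtu = {{ aio_mtu }}
```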
We can go further than this: we can make subdirectories within these subdirectories and apply specific config to specific groups within our control plane inventory. A good use case would be a group of GPU-enabled hypervisors, where perhaps we want to pass through PCI whitelist details on those servers and not on others. There's a nice way we could create a group subdirectory within our kolla config here, specifically for Nova config on that group of servers, and that config would be uniquely contained within those servers. So it's very powerful and very flexible, but at the same time it's intuitive and plays to the strengths of Ansible's inventories.

The next stage the config passes through is that if we go back up to the top of our Kayobe config directory, we get this macro-expanded Kolla configuration as well. Remember that Kayobe drives Kolla Ansible to manage the orchestration and deployment of containers. So within the Kolla configuration, then, I think there's a neutron.conf. There we are. It looks pretty much the same, except that our parameterized Kayobe variables have been expanded: details from the infrastructure have gone from being symbolically represented to being concrete instantiations of those parameters. But by itself you'd never have a one-line Neutron config, so the next stage is that the other details Kolla applies, based on the parameters of the deployment, are feathered into this ini file as well.

If we go on to the control plane itself, onto the controller node and become root, the final stage of the flow of configuration is a directory on all of the managed nodes of the control plane called /etc/kolla. Within this directory there are subdirectories for each of the containers being deployed. Right now we haven't quite got to the point where Neutron has gone through, so we're probably a little ahead of the service deploy here; I think it might be coming up soon. What's going on is that the Nova configuration files are being written and the containers are being deployed. I'll go back to the controller node and we can see that happening. Yes, it looks like nova-api came up 17 seconds ago, so that's very much still underway.

Another interesting piece here is that we can see the container releases we're using. Most Kolla containers will just be pulled from builds upstream, but we have the option to build our own, and there are two ways to do that. The binary kind simply creates a root image and installs RPMs into it from the RDO project. The source kind is where we pull down the tarballs or the Git checkouts, and essentially bypass RDO and install from the sources themselves. In terms of the tags, ussuri is a rolling tag tracking the stable branch releases, but point versions are also made within a release cycle, so it's perfectly possible to pin to a single point release, which gives you greater repeatability of the containers being deployed from one day to the next. It looks like Neutron is on its way now.
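To make the group-scoped example above concrete, a hypothetical sketch: the inventory group name compute-gpu and the PCI vendor/product IDs are purely illustrative, and the group-directory layout is as described in the session rather than taken from the lab config:

```bash
# Group-scoped override: Nova config applied only to hosts in an assumed
# "compute-gpu" inventory group (illustrative group name and PCI IDs).
mkdir -p etc/kayobe/kolla/config/nova/compute-gpu
cat > etc/kayobe/kolla/config/nova/compute-gpu/nova.conf <<'EOF'
[pci]
passthrough_whitelist = {"vendor_id": "10de", "product_id": "1db4"}
EOF

# On the controller itself, the final stage of the flow: one config
# directory per container under /etc/kolla, plus the containers themselves.
sudo ls /etc/kolla
sudo docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
```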
So over here we're gaining a few directories, and our neutron.conf, for example, should have the MTUs. And there is the MTU parameter, interleaved with a lot of other configuration parameters applied by Kolla Ansible as well. This is the longest stage in the exercise, because it can take a good amount of time to get through the deployment of the OpenStack containers and their configuration. Looks like we're nearly done.

Okay, so the OpenStack control plane is up, and we can now source the overcloud credentials generated by this deployment. I'm in the wrong directory, so for this we source the OpenRC file under the Kayobe config's etc/kolla directory. Kolla generates this file with the credentials of the cloud it has just deployed; it's just an RC file. With the public credentials of the system we can now run a post-configure, which doesn't do much in these systems. Usually it's the point where we'd download things like Grafana dashboards and set up the pieces needed for Ironic management, such as the RAM disks, so we just run it for completeness.

I just sourced the public RC, and now we can look at pulling down things like a CirrOS image and defining flavors, networks and the other pieces we need to boot a machine. At this stage there's a script we can run, which is in the Google doc that comes with this presentation; I'll just open it here. I'm going to run this again, because as part of the lab environment we do some things to the host instance's networking, and I think we need to rerun this script, so I'll just type that in. What it does is set up forwarding and some ports to connect to.

Okay, now we step away from Kayobe and start interacting with the system itself, with OpenStack, through its own APIs. I'm going to leave the Kayobe environment set up here, and I'll go over to one of the other terminals or tabs in my tmux. Oh, I'm root, that's why. Excuse me, long day. I'll also source the overcloud credentials here. So now I should be able to create a VM; I'll just check that my OpenStack environment is set up and run some simple commands. I think m1.extralarge is probably pushing our luck, so let's go for a tiny. There's some preloaded state here which should make this command the same for everybody. There it goes, and the machine is created.

So we have a VM running. Let's step on, though, and see how well it works. Make a floating IP: the scripting there is an easy way to extract the floating IP programmatically, but I'll just use some copy and paste. So this is our floating IP. Obviously this is meant to be an external network, but in the lab case the local networking setup we've done is doing some iptables translation, so we can use any public IP address on the outside and translate it to a quasi-public external network here, which is 10.0.2. Add that floating IP to my VM, and I should now be able to SSH to it, and hey presto, there it is. We've made an OpenStack deployment, we've made a VM inside it, and now we can connect to that VM. You can see it's a one-core, 512 MB VM, not very big, but it's enough to demonstrate that we can make these test environments, representing a multi-node control plane, on one server. There's a little bit more we can do as well.
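The CLI sequence for that last part looks roughly like this. A sketch only: the openrc path follows Kayobe convention, and the image, flavor, network, key and address names are illustrative stand-ins for what the lab's preloaded state provides:

```bash
# Source the generated overcloud credentials (adjust the path to wherever
# your kayobe-config checkout lives).
source ~/kayobe-config/etc/kolla/public-openrc.sh
kayobe overcloud post configure   # mostly a no-op in this lab

# Boot a small VM against the preloaded image/flavor/network (names assumed).
openstack server create --image cirros --flavor m1.tiny \
    --network demo-net --key-name demo-key demo1

# Attach a floating IP from the quasi-public external network and log in.
openstack floating ip create external
openstack server add floating ip demo1 10.0.2.130
ssh cirros@10.0.2.130
```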
So, if you remember the IP address you had from your email: if you've deployed the control plane, you should be able to open your web browser at that IP address and see Horizon working. You can also get it from Bonzero here, I believe. So if I open a tab here, we should see a login on your OpenStack web interface. Now, what was the password again? It'll be the same one; the admin account uses the same credentials we had before. I'll just make this a bit bigger. So that is our little VM, running inside our lab environment and accessible from your machine.

How are people doing? We've got a few minutes available to work through any problems, if people have not quite made it to the end, but I wanted to go on and show you what we could do next. I'm afraid that in an hour and a half we don't have time to cover the follow-on exercises, but I'll take you through what they look like. These are covered in the handout as well, actually. Now that you've made your Universe from Nothing environment, you can use it in all kinds of ways to understand more about how we might transform an OpenStack control plane: to add services, to make changes, to understand it better and how to operate it. It's quite easy to take this environment and develop it further, and there are a few nice examples in the handout which you're very welcome to work through. As I say, please do try this at home, because there's a lot we can do with it.

If I go back to the controller node, we can see the full set of Docker containers that have been deployed, more than a page here, so let me just bring that down a bit. This is what a Kolla control plane looks like: a suite of containers interacting through message buses and REST APIs, not through a common file system or anything like that. On this machine, a default vanilla Kolla config will have Fluentd aggregating logs between all of the different services, but there won't be any developed, user-friendly logging service on top. Fluentd can easily be configured to send data elsewhere, forwarding it on to hosted logging infrastructure, that kind of thing. But we can create our own as well: with a one-line config change we can deploy Elasticsearch and Kibana to take all of the logging from Fluentd and present it for interactive exploration in Kibana. It takes a little more time than we have now, I'm afraid, but it's well worth doing if you're interested. It's surprising how well Elasticsearch works in a limited footprint like this; it surprises me, anyway.

In terms of other exercises, one of the really nice things we can do in the Kayobe configuration, given that everything is Ansible-driven, is to generate all of the configuration that would result from a parameter change: expand everything to the final level of expansion, which is the config files written into the hosts' config directories themselves, then bring that back and compare it with what's currently active on the system.
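Both of those follow-on exercises come down to short command sequences. A hedged sketch, assuming Kolla Ansible's kolla_enable_central_logging flag and Kayobe's configuration save/generate commands; exact option names may vary by release:

```bash
# Follow-on 1: deploy Elasticsearch + Kibana with a one-line config change.
echo 'kolla_enable_central_logging: true' >> etc/kayobe/kolla.yml
kayobe overcloud container image pull
kayobe overcloud service deploy

# Follow-on 2: "look before you leap" -- capture the live config, regenerate
# it from kayobe-config after a change, and diff the two without touching
# the running services.
kayobe overcloud service configuration save --output-dir /tmp/live
kayobe overcloud service configuration generate --node-config-dir /tmp/generated
diff -ru /tmp/live /tmp/generated
```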
And so, if you were ever worried about the full effect of a change, this is a way to see that effect in full and to realise, "actually, it's changing a bit more than I expected it would." You can look before you leap. It means that an OpenStack reconfiguration isn't a leap of faith: you completely understand its consequences in terms of OpenStack configuration state changes. That's a pretty cool thing as well, and it's all facilitated by these Ansible models.

There are a couple of other nice exercises I should call out before we go too far. We've looked at Elasticsearch and Kibana; that was that one. Enabling Skydive is also quite a cool thing to add here: Skydive is an interactive network analyzer which helps you understand how OpenStack's Neutron and hypervisor networking strings together. It's quite an intuitive graphical presentation of hypervisor networking, well worth a look if you're interested.

I think now, though, we've only got a couple of minutes left, so were there any questions anyone would like to raise before we finish?

I see a comment about SR-IOV networking. Yes, that fits in well here. It's a nice example of how we can use the deployment hooks: if we have specific hardware from an SR-IOV-supporting vendor, we can set it up in our host boot process in a way that works for that class of NIC, and then set up the environment as well. Similarly for some of the more advanced hardware offloads, things like ASAP2 from Mellanox. Those can be configured using the deployment hooks, so they fit into the steps of the standard process and extend it in useful ways.

There's a question about docs on how to start with Kolla and Kayobe for production environments. And Lukas, you're very welcome, thank you. Yes, the OpenStack online documentation for Kayobe makes an excellent reference for getting set up. For this end-to-end approach to deployment, it's sometimes nice to start from a reference example of another cloud deployed using the Kayobe environment. If you look in the Google doc that was the handout for this session, there is an example of a production system: an OpenStack cloud for a radio telescope project here in the UK. That config should be a good reference implementation for various scientific-computing facility features, quite useful for getting started. But the online documentation for Kayobe is, I think, very good, very articulate, and explains a lot of the reference details well.

Okay, we should probably stop at this point. Thanks a lot, everyone who's made it through; I hope you've found it useful. I think it's a good, fun environment, and as I said, I do hope you'll take it away and have fun with the Universe from Nothing workshop. It's all open and we look after it, so it usually works pretty well, and we add new exercises to it all the time. I'll follow up with an email, if the recording is successful, with the details so you can take that away.

Yeah, good point, Mark. Kayobe and Kolla Ansible and Kolla are on IRC; #openstack-kolla is a helpful channel, with people from all over the world in there, so it's usually pretty busy. Oh, go for it, Stanislav.
The seed node will deploy all of the servers in the control plane, including storage and hypervisors; by default it will do all of those. We often deploy a Ceph cluster, which is connected to and well integrated with, but usually slightly separate from, the OpenStack environment, on its own hardware. So it's perfectly possible to deploy a wide range of other pieces of infrastructure using the seed node approach; you can deploy pretty much anything you choose with that system. Okay, in the reference architecture there is... all right. So Ironic is just being used to deploy any of the servers in the control plane. Does that answer your question? So, okay, it's not about providing Ironic to users; it's about using Ironic to install the compute and control plane, all the resources.

Warren, I think the VMs, the lab instances, are going to be up today, and I think they'll disappear either overnight or first thing tomorrow. Okay, well, good luck everybody. I hope I'll see you all around, and thanks a lot for making it to the end. Cheerio.