My name is Chris Janiszewski, and this is Ken Holden. We both have the same role: we work for Red Hat as OpenStack solutions architects. What that means is that we work with customers, we listen to their business needs, and then we try to find a solution. Hopefully it's OpenStack; if not, we point them to other partners or products. That's pretty much it in a nutshell.

So we're going to talk about TripleO a little bit, and this presentation has two parts. The session has been designed to accommodate mostly beginners, but maybe a few advanced users as well. The first part is about what TripleO is and why we care, and then we're going to dive deep into the upgrade process, and Ken Holden will show you how to upgrade in 15 minutes, maybe. My approach to this comes from the fact that OpenStack has always been perceived as a pretty complex platform to deploy, and we want to show you that you don't have to be a rocket scientist to run OpenStack in-house, operate it day two, and upgrade it. So we're going to try to keep it in this wizardly theme here.

So what is TripleO and how did it all start? It actually got started by our good friends at HP. They came out with a great idea: OpenStack should be really good at deploying infrastructure, right? So why don't we use OpenStack to deploy OpenStack? If you look at the slide, TripleO is really designed for installing, upgrading, and operating OpenStack. But what's really important here is that it uses services like Nova, Ironic, Neutron, Heat, and a bunch of others to do that. And just think about that: I'm using Fedora on my laptop, and just a second ago I was worried that I wouldn't be able to display this on the big screen, because Fedora is not your mainstream operating system, right? Neither are other Linuxes. By using TripleO, we're contributing to all of these projects that ultimately you are going to use in production. So I think this is important.

So here's my elevator pitch for why you should be using TripleO. Everyone knows this picture; that's Star Trek, right? First, we see a lot of companies trying to move toward consuming OpenStack as a managed service. TripleO tries to make your deployment, upgrades, and operations as simple as possible in a product way rather than a managed-service way. There are pros and cons to each method, right? With managed services it's definitely easier to have someone else get you to an OpenStack deployment, but you lose control, and as with public cloud, it's easy to get in but not always easy to get out. You have to trust the managed-service company you're working with, et cetera. With a product, you pretty much have full control. And again, I want to stress that you don't have to be a rocket scientist to deploy and maintain OpenStack, and TripleO is the tool you want to use to make it simpler. It's not just for POCs and cookie-cutter deployments; one of the biggest values of TripleO is its ability to customize your deployment. When we picked up TripleO, from the Red Hat perspective, it was around the Kilo release, and I've got to tell you, it was rough.
We would go to customers and even a simple POC would take us a week, sometimes more. Right now, with the latest TripleO bits, we are in and out for a simple one like that in a day, or sometimes less. So it has definitely come a really, really long way. As new versions are released, a lot of new features are being added, and then of course there's the in-place upgrade, which is the big part of this talk.

All right, so really quickly, what is TripleO? Again, it's deploying OpenStack from OpenStack. Think of it as deploying a single-host seed OpenStack, like a Packstack all-in-one OpenStack on a single node where you have all the standard OpenStack services. That's the one on the left. And then from there, you deploy your actual production cloud, what we call the overcloud. So it's a pretty simple concept.

I'm going to try to be as agnostic as possible, I promise, but there are two slides that are Red Hat related. The first is how we present our products. Everything Red Hat produces in terms of software, we contribute upstream first. In terms of OpenStack, we have this model of upstream, midstream, and downstream, where upstream is TripleO and midstream is the RDO manager, the one you can download from RDO, the repackaged RPM bits. And then the enterprise version is what we call Red Hat OpenStack Platform. This is the second Red Hat related slide, and then I'll be done with Red Hat: why did we pick TripleO to be the installer of choice? I gave you a bunch of reasons before; contributing to all of these services was very important to us. But also, prior to TripleO, we had something like six different installers in-house, and people all over the company would use different installers to deploy. It was pretty messy, as you can imagine. So we decided, hey, TripleO is the way to go. We want to be the good guys, we want to contribute upstream to all these projects, but we also want to make it as simple as possible for end users.

All right, so the actual deployment. How do you deploy OpenStack using TripleO? This is a little workflow I put together. You start with the undercloud, this seed OpenStack usually running on a single host, and you deploy it. You either create images for your overcloud, your production cloud, or you download them. You register your bare metal nodes, then inspect them, to make sure they have enough NICs and whatnot, or to find out all the details about them. You deploy the overcloud, you validate and do any post-deployment tasks, and profit.

Really quickly, how do you deploy the undercloud? There's one configuration file where you specify all the DHCP ranges and what IP addresses you're going to use for both the undercloud and your overcloud, your inspection ranges, plus some extra parameters that will help you, for instance, monitor or run Tempest to validate your deployment, et cetera. It's a pretty quick and easy example. And once you've done that, all you have to do is type openstack undercloud install, and you have the undercloud up and running.
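For reference, a minimal undercloud.conf along the lines Chris describes might look roughly like this; the interface name, addresses, and ranges are made-up example values, and the full list of options is in the TripleO docs:

    [DEFAULT]
    # NIC on the undercloud that faces the provisioning network
    local_interface = eth1
    local_ip = 192.0.2.1/24
    network_gateway = 192.0.2.1
    network_cidr = 192.0.2.0/24
    masquerade_network = 192.0.2.0/24
    # DHCP range handed to overcloud nodes while they are being provisioned
    dhcp_start = 192.0.2.5
    dhcp_end = 192.0.2.30
    # separate range used during hardware introspection
    inspection_iprange = 192.0.2.100,192.0.2.120

    # with the file in place, one command builds the undercloud:
    $ openstack undercloud install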
Then, if you're using upstream or midstream, you can download the overcloud images; I'm providing a link for the midstream ones. Or, if you're running Red Hat, you can just do a yum install and all the images get downloaded to your undercloud. We use Glance to manage the images that get deployed to the overcloud, so you just use the regular Glance commands to upload them.

The next step is registering the nodes. You put in the credentials and the MAC addresses for your nodes, either through the CLI with a JSON file or through the new TripleO UI, and then you're ready to inspect them. Inspection runs a little ramdisk image that goes over each machine and tells you how many NICs you have, what type of storage you have, and gathers all the information you can use later for the actual deployment.

And then the last step: if you want a really cookie-cutter deployment, all you do is issue one pretty simple command, openstack overcloud deploy. In this case I'm deploying one compute and one controller, and I'll have a proof-of-concept OpenStack ready. Pretty straightforward.

But of course, as I said, the main advantage of TripleO is its customization. This is not an official list or anything, it's one of my buzzword bingo slides, so you can pick things from here pretty easily. I split it into core and advanced: you can make your networking look however you like, there's storage, security, and metering, and then some more advanced things. If you want to deploy a cluster that's going to handle big data, or bare metal, or hyperconverged, you can do that as well.

And this is one example, one of our reference networking topologies, if you will. You can see we can split your NIC configuration into bonded interfaces and provisioning, and then separate those networks even further with VLANs. But again, this is just one example. It works really well, but if instead you have, say, ten NICs in your machines and you want each network to run on a separate NIC, there's no problem at all with doing that. The sky is pretty much the limit with the networking configuration here.

And then this is a slightly more complex example. I don't know how many of you have ever tried to put SR-IOV on a running deployment. Okay, there are some hands. Good job. It's not an easy thing if you do it manually, post-deploy. If you use TripleO, there's really just one template, just one example, that lets you put the things you would normally configure post-deploy into your configuration. Even though this might be cryptic for some of you, if you've done it before, it will make perfect sense. I guess what I'm trying to say here is that there are a lot of features built in that you can use right out of the box.

And there's another example of that: you can write your own custom extensions. If it's not built into TripleO, if there's a feature you want that's maybe even more extreme than the previous example, you can pretty much write whatever you want yourself. Pretty awesome.

There's a concept that got introduced in Newton with TripleO called composable services. This is by far my favorite feature. You don't have to build your cloud to be monolithic; you can pretty much create your own custom roles. For example, you can create a networking role, pick the services that should be part of that role, and deploy it to your bare metal nodes.
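As a rough sketch of what a composable role definition looks like, an entry in the roles data file is shaped something like the following; the role name here is invented, the service list is only a plausible subset, and how you point the deploy at the file depends on the release, so treat this as illustration rather than a recipe:

    - name: Networker                          # hypothetical custom role
      CountDefault: 1
      ServicesDefault:
        - OS::TripleO::Services::Ntp
        - OS::TripleO::Services::NeutronApi
        - OS::TripleO::Services::NeutronL3Agent
        - OS::TripleO::Services::NeutronDhcpAgent
        - OS::TripleO::Services::NeutronOvsAgent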
Here's a little example I've done on my laptop, actually, the one I'm running here. I created a role called Uber Hyperconverged. Even though it's not something we support, it was pretty useful for me on my laptop: I merged all the services, compute, Ceph storage, and all the controller services, into a single role and deployed it on three VMs. This way I get the benefit of running an HA cluster inside VMs, of course, right? But it kind of shows you the benefit of composable services: not only can you detach the different roles, like the Glance API, et cetera, you can also merge them together and deploy them however you like. Very powerful stuff.

And why is this important? Again, you can decouple the resource-hungry services. If you're running a multi-cloud or something and your Keystone is being pounded all the time, you can detach it from your monolithic deployment and run it somewhere else. Or you can merge two or more services, such as storage and compute: if you have limited space in your data center and want to save on hardware, you can certainly do that very easily with TripleO. You can accommodate different hardware snowflakes and create custom configurations. And ultimately, if you remember my previous slide, I had this one-liner at the top and could deploy something in a POC way; in production you end up with a deployment command that looks more like this, with a lot of customization: network isolation, Sahara if you want it, TLS enabled, et cetera.

And this is a quote from my favorite game. Does anyone recognize what the game is? The Witcher, anyone played The Witcher? Awesome, yeah.

So there's a bunch of features that came with the latest and greatest TripleO, and features are being released all the time. I'm not going to go over all of them; this is going to be more buzzword bingo. But we added a lot of functionality around day-two operations in TripleO, so you can deploy things like Fluentd or the Sensu client or collectd, and we added a bunch of Ansible. There are just a couple of screenshots I want to show you. This is one for day-two operations: you can track your logs with a Kibana dashboard and Fluentd running on your overcloud nodes. There's a Sensu Uchiwa dashboard example here, and collectd for performance metrics. We keep adding a lot of Ansible validations; again, we want to make it as simple as possible for the end user to install OpenStack, so the community adds a lot of simple validations that will check things for you. In this example, it checks whether you're flushing your Keystone tokens, whether you have a cron job that flushes tokens, so that over time your database doesn't grow to a massive size until you can't run it anymore. There are a lot of validations like that, and here they're being run from the TripleO UI. Everyone contributed to those, too.

And TripleO finally got a pretty UI. I think that was one of the things we were hoping to get earlier; we were always CLI-driven from the TripleO perspective, and now we have a nice UI that may help people who are new to OpenStack get their foot in the door and be able to deploy in an even quicker way than I showed you before. And this is my last TripleO buzzword bingo slide.
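To give a feel for that fuller command, a customized deployment might look something like this; the environment files named here are ones shipped with the upstream tripleo-heat-templates, network-environment.yaml stands in for your own site-specific template, and the node counts are just examples:

    $ openstack overcloud deploy --templates \
        -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
        -e ~/templates/network-environment.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/enable-tls.yaml \
        --control-scale 3 --compute-scale 2 --ceph-storage-scale 3 \
        --ntp-server pool.ntp.org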
There are a lot of features beyond that that came to TripleO. If you have questions about any of these topics, please see us after this talk, or we're going to be at the booth and we'll be more than happy to tell you what it is, how it works, and how you can take advantage of it. And finally, I'm going to switch it over to Ken, who's going to walk you through the upgrades.

Thank you, Chris. So for this, I recorded a video. The upgrade process is lengthy, right? So I tried to consolidate three and a half to four hours down into a 15-minute video that I can walk through. This is a nine-node bare metal OpenStack environment: three Ceph nodes, a three-node controller cluster running active-active-active, two compute nodes, and running instances on it.

So without further ado, let's talk about the process. The workflow covers a minor upgrade and a major upgrade. A minor upgrade keeps you within the same trunk release: if I'm running a Mitaka environment, or the Red Hat version of that, OSP 9, I run an update that applies patches within the Mitaka release, OS patches as well. A major upgrade takes you from one major trunk version, like Mitaka, to the next, like Newton, which is what we're going to do in this video.

Prior to starting this, it's something you want to plan out. It is a fairly lengthy procedure, and how long it takes depends on how large your environment is. My environment took, as I said, about four hours with nine nodes; if you have 100 nodes, it could take a little longer. You want to back up your config files, back up your environment as you normally do, and do this in dev/test first to get familiar with it before you take it to production.

The basic process is that we upgrade the undercloud Chris was talking about. In a similar fashion to how we installed the undercloud with openstack undercloud install, we use openstack undercloud upgrade. First I do a yum update of the python-tripleoclient, which brings all the bits on the director node up to OSP 10, or Newton. Then we run openstack undercloud upgrade, which brings the running TripleO version and the OpenStack running on that director up to Newton. Once we've done that, there's one final step before you actually begin the major upgrade: we run an update of all the patches within the Mitaka release, so that Mitaka is on the latest and greatest bits before we move to Newton. And we also make sure the OS is up to date, because Red Hat builds patches into the RHEL updates that often go along with a major release of OpenStack.

So let's go to the actual demo. How are we on time? Ooh, just a little bit ahead, so let's get to the meat. All right, the way we deploy an overcloud is that we use templates and call deploy. And if you're a good little admin, instead of typing this out all the time, you create a deploy script, so that when you do a future deploy, maybe expanding your OpenStack environment, adding compute nodes, adding Ceph nodes, and so forth, you make sure you run the same routine over again.
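In command terms, the preparation Ken just described boils down to something like this on the director node; exact flags differ a little between releases, so take it as a sketch rather than the canonical procedure:

    # bring the director's tooling and the undercloud itself to the new release
    $ sudo yum -y update python-tripleoclient
    $ openstack undercloud upgrade

    # then bring the existing Mitaka overcloud up to its latest minor level
    # (interactive minor update; include your usual -e files as needed)
    $ openstack overcloud update stack overcloud -i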
So what we do is take that overcloud deploy script we use, copy it, repurpose it into another file, and source one of these YAML files. This is the first step of the upgrade: this step moves Ceilometer to a WSGI service, from a standalone service. So we take that, and once we save the file, we run it. That process does a full overcloud deploy; it's obviously not adding more nodes at this point, but it is going to take a while to go through all the nodes and make sure the code is up to date. This usually takes 30 or 40 minutes, but we trimmed a lot of the video out, so it should go pretty quickly.

Once this is completed, we want to head over to the controller nodes and make sure that Pacemaker, the clustering software, is all clean and happy. It's just finishing up now. Right now we're still at OSP 9, essentially; we haven't updated any of the bits yet, we've just moved that service. Now we SSH to a controller, become root, and run pcs status. If you take a look, this is the long list of resources in this cluster: in OSP 9 and prior, or Mitaka and prior, we were running about 124 resources in the cluster, because every single service was a resource in the cluster. With OSP 10, you're going to see a huge change in that.

So now we go to the next step. The first step succeeded, so next we start the initial Pacemaker upgrade, which also changes the yum repos from OSP 9 to OSP 10. If you have a Satellite server or a custom yum repo, you would source those YAML files, but in this case I'm just tied to Red Hat's RHN. So we go ahead and run this second upgrade. It's the same kind of process and takes pretty much exactly the same amount of time. Once it's completed, we head over to the controller to make sure everything looks good. One note: we're still running OSP 9, and if you look down at the bottom, the GUI is showing OSP 9; we'll look at that in a second, and later it will show a newer version. We still have all the services in Pacemaker, because we haven't done the actual Pacemaker upgrade yet; that's the next step. But I'm about to do a yum repolist, and you'll see the OSP 10 repos and the Ceph Storage 2 repos instead of the previous versions. There's still one OSP 9 repo in there, but it's not used for this update. So the upgrade did all of that magic for you; you don't have to worry about doing it yourself.

Now we go to the third step, which is where we actually upgrade the controllers and update the software. We bring them from the Mitaka code to the Newton code, apply the OS updates, and we also update the configuration so that Pacemaker, and the configuration files, are in line with OSP 10. In my OSP 9 environment I had deployed Sahara, and I'm going to undeploy it; if you saw, there were two lines changed in that script, and one of them was to remove Sahara, because I didn't need it anymore. So at the same time as upgrading, I'm also removing or adding features. We go ahead and run this upgrade script, and this one's longer, because this one is running yum updates, so you're dependent on how fast you can download your software.
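Pulling those controller-plane steps together, each one is just the regular deploy command with one extra environment file sourced. The file names below are the Newton-era ones as best I can recall them, so verify them against the docs for your release, and ~/templates/site.yaml stands in for whatever environment files your normal deploy script already uses:

    THT=/usr/share/openstack-tripleo-heat-templates

    # step 1: move Ceilometer under WSGI (no package upgrades yet)
    openstack overcloud deploy --templates -e ~/templates/site.yaml \
      -e $THT/environments/major-upgrade-ceilometer-wsgi-mitaka-newton.yaml

    # step 2: stage the upgrade scripts and switch the yum repos
    openstack overcloud deploy --templates -e ~/templates/site.yaml \
      -e $THT/environments/major-upgrade-pacemaker-init.yaml

    # step 3: upgrade the controllers one at a time (packages, OS, Pacemaker)
    openstack overcloud deploy --templates -e ~/templates/site.yaml \
      -e $THT/environments/major-upgrade-pacemaker.yaml

    # between steps, check cluster health from any controller
    ssh heat-admin@<controller-ip> sudo pcs status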
It's doing quite a bit with Pacemaker, and you're going to see that in a moment. Now, as between every step, we go to the controllers and take a look at Pacemaker, and this time we look at Ceph as well to see its health. The number of resources configured is now down from 124: all the OpenStack services are independent services now, and the Pacemaker cluster handles the heartbeat, the VIPs, Galera, RabbitMQ, and Redis.

As I said, at the bottom you saw version 9.0.1 before. We log back into Horizon. In OpenStack Platform from Red Hat, we put the version number in the bottom right, and we go to Admin, then Services. However, when I did this upgrade, I noticed there was a bug: instead of saying 10, it says Red Hat version. So I need to open a bug on that; I just discovered it last week, and you'll see it in a second. As you can see, the output is a lot different and the services have changed. So where it says Red Hat version, it should say 10. Under network agents, I can now see and view where my HA routers are, which wasn't a feature in OSP 9. And it's just showing that all those services are up.

So basically, what we've done is a rolling upgrade of your OpenStack environment, your control plane. The three-node controller setup lets us do it one node at a time: bring one node out of the cluster, upgrade it, put it back in, bring another one out, upgrade it, put it back, and so forth. I have an instance that I had deployed two and a half days before doing this upgrade, and as you can see, it's still running. The only thing that gets disrupted is floating IPs, which I had on that instance; they suffer a disruption because we're restarting Neutron, so there will be a slight blip. But if you were on provider networks, it shouldn't be an issue. So I'm just running uptime on the instance.

The next steps: now that we've upgraded the control plane, we need to upgrade the Nova compute and Ceph nodes. We do this from the director, but we're not using a deploy command; we're using a script that was delivered when we did the second step of this upgrade, which dropped a couple of extra scripts onto the director. We run this script against the Ceph nodes, and it goes to one Ceph node at a time, stops the OSDs, upgrades the Ceph version and the OS and all the other bits, starts it back up, brings the cluster back to a good state, and then goes to the next one, and the next one. As you saw, before we had three Ceph nodes with 18 drives; we'll see that state again in a second, and it'll show that they've been updated to Jewel and are in a healthy state. As you can see at the bottom of the screen, it says Ceph was upgraded to Jewel, and if we do a ceph -s, you'll see the status. It shows a warning, because there's one final step that isn't in this automated process: I need to set a flag on the OSDs. That's why it says health warn, because it wants the Jewel OSD flag set. But as you can see, 18 OSDs are up, the cluster is in an active+clean state, and my instance never went down, because the data was distributed across the three Ceph nodes, so we were good to go.
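The non-controller upgrades Ken is describing use a helper script that the earlier step drops onto the director; roughly, the Ceph part of the run looks like this, with the node names and the final flag shown as I remember them, so check the docs before copying anything:

    # one Ceph node at a time: stop OSDs, upgrade packages and OS, restart
    $ upgrade-non-controller.sh --upgrade overcloud-cephstorage-0
    $ upgrade-non-controller.sh --upgrade overcloud-cephstorage-1
    $ upgrade-non-controller.sh --upgrade overcloud-cephstorage-2

    # check cluster state between nodes (run on a controller/monitor node)
    $ sudo ceph -s

    # once every OSD node is on Jewel, this clears the remaining HEALTH_WARN
    $ sudo ceph osd set require_jewel_osds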
The next step is to upgrade the computes. Although we could upgrade them all at once in a rolling fashion like we just did, that wouldn't take into account the instances running on the Nova nodes. So here's what we do: we have compute one and compute two, and an instance is running on compute one. I'm going to live migrate it to compute two and then upgrade compute one. Before I upgrade compute one, I disable its Nova service so the scheduler doesn't send anything to it. Now I can see that the instance is running on compute two, I disable the service, and then I run the upgrade script for that Nova node, specifying the specific ID of the node. This goes out, changes its repos from OSP 9 to OSP 10, sets the Ceph 2 repos, applies the updates, applies the configuration changes to Nova, restarts the services, and brings it back into the cluster. Once it's complete, I turn Nova back on; here's where I enable the Nova service for it. And then I'm going to fail the instance back.

I made one mistake in this; let's see if anybody catches it. In OSP 9 and previous OSP versions, there was a bug with the OpenStack unified command line where if you typed block migration, it actually did a shared-storage migration, and if you typed shared-storage migration, it did a block migration, so you had to switch them. But in OSP 10, that's not the case. So by doing a block migration right there, I actually shut the instance down; it shows as running, but it will actually update to a down state. If I had typed the shared-storage migration, the correct command in this case, it would have kept running. And of course, if you had multiple instances, you'd want to do a nova evacuate rather than a single instance at a time.

Now we upgrade the compute node. This and the Ceph node portion were the lengthiest parts, because there were a lot of packages being downloaded and a lot happening. Due to video magic, this is the fastest yum update you've ever seen. My son watches PJ Masks; he would say, super cat speed. So he has kids; my kid watches that, too. So now we re-enable the final compute node, so workloads can start going back to it. We check on the controller real quick just to make sure that Ceph and Pacemaker are happy, and we see, again, Pacemaker is all clean; normally the bottom would say if there were an error, and I'm showing no errors.

So now that we've essentially upgraded the computes, the controllers, and the Ceph nodes to OSP 10, we need to do a few finalizing steps. This one consolidates all of the changes we've made in Pacemaker and brings the cluster in line with the new YAML files that ship with OSP 10 on the director. So we run this major-upgrade Pacemaker converge. It's not stopping any services or anything like that; it's just kind of redoing Pacemaker. This one usually goes pretty quickly, and at this point your users aren't seeing any kind of disruption; OpenStack is happy, and it's been happy the whole time, as I said, apart from the minor interruption to floating IPs. So now we just make sure Pacemaker is good and Ceph is good.
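The compute flow, in the same sketch form as before; the instance and host names here are made up, and the converge file name is the one I recall from the Newton-era templates:

    # move the workload off the node that is about to be upgraded
    $ openstack server migrate --live overcloud-compute-1.localdomain my-instance

    # keep the scheduler away from it, then upgrade it from the director
    $ nova service-disable overcloud-compute-0.localdomain nova-compute
    $ upgrade-non-controller.sh --upgrade overcloud-compute-0

    # put it back into service (and optionally migrate the workload back)
    $ nova service-enable overcloud-compute-0.localdomain nova-compute

    # finally, the converge step re-aligns Pacemaker with the OSP 10 templates
    $ openstack overcloud deploy --templates -e ~/templates/site.yaml \
        -e /usr/share/openstack-tripleo-heat-templates/environments/major-upgrade-pacemaker-converge.yaml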
And then the last routine we run: in OSP 10, or Newton, we move the Ceilometer database from MongoDB to MariaDB, so this is the final step to do that. Using the same overcloud upgrade script I had before, I just copy it and point it at that specific YAML file; you don't need to do it that way, that's just how I did it, you could type all of this in manually if you wanted to. I don't have to make any changes to the file; it's completely template-based, and there are no changes necessary to that YAML file or any of the YAML files I've used today. This runs through the final deploy, and once it's complete, the upgrade is finalized. A little bit of video magic. If you look at the clock on the Mac on the screen, it says 12:34, and I think I started around 12 o'clock; I did have to do the undercloud beforehand, which I didn't show in this video. Now we just validate one last time that Pacemaker is happy and, on the Ceph nodes, that Ceph is happy, and your environment is completely done. Then you can finish your change-control weekend and go back to bed. I'm just going to look at Ceph one last time; I do still need to run the ceph osd set command with the required Jewel OSDs flag to make that health warning go away. And that's it.

So, are there any questions about that? As Chris said, if you don't ask now, feel free to hit us up after, but please use the microphone. If you guys don't mind, there are mics in the middle; we're recording this session so folks can hear it as well.

Question: Can you summarize, for the minor, or especially the major upgrade, what exactly is the impact on the services from a user's point of view? Does that depend on the live migrations?

Sure. It depends on your environment. In the environment we ran this on, I have nine nodes, so three active controllers. I can bring one controller down, or two controllers for that matter, without losing services, because I'm running active-active, highly available routers, and my Ceph cluster is highly available. So as long as your environment is highly available, the upgrade process should be seamless to your users. The major upgrade process, though, is going to stop Neutron, so there will be an interruption to L3 traffic coming into the OpenStack environment. If you're running provider networks, which is L2 coming in, there won't be a disruption. Is that... Okay. Sir?

Question: I've seen you have several scripts, obviously for package upgrades and such, but the rest of it looks like the user has to do it manually. Like the orchestration of the compute node upgrades, that's up to the user? Manual steps like live migrations and such?

The live migrations are up to the user, yeah. This upgrade workflow does not take into account instances on the running Nova nodes, so if you care about them, you need to live migrate them before you upgrade. That's just for the Nova upgrade portion: for Nova, I move instances from one host to another, update the host that's not running anything, and then move everything back, so I can do a rolling update. When you live migrate, you're not going to feel a loss of connectivity. So, does that answer your question?
Is that what you're asking?

Question: Hi. Do you have a table showing which release to which release is actually an in-place upgrade, and which are not? Because my understanding is that if I run MariaDB and save all the information into the tables, what happens if the schema changes? Do you have scripts to do the schema update as well and move the data across?

With Red Hat, we support specific version to specific version, so version plus one. If I'm running Mitaka, I can go to Newton; if I'm running Newton, I can go to the next one, and so forth. And we've tested that, to make sure that if you've deployed a Red Hat OSP 9 environment, it will upgrade to an OSP 10 environment, so that is taken into account. But if you've done a lot of manual changes to your overcloud, you're kind of at a loss for being able to upgrade, because from a support standpoint we assume you used the director to deploy 9, did updates using the director methodology, and then went to 10. There are tables that list software version to trunk version, if that's what you mean, but from a supportability standpoint it's version plus one. Okay. All right, yeah. Thanks.

Question: But also, say you used Nova Network before, and now Neutron is what's suggested or recommended, right? And all those tables would need to...

Yeah, if you're using Nova Networking, I don't think this would be a candidate, but Nova Networking has been deprecated since before the director came out, so from a director standpoint it's Neutron across the board. And Nova Network, as Ken mentioned, wouldn't apply here because it was deprecated prior to us adopting TripleO. But we have some other examples: Ceilometer is a good one, right? It got broken down into a bunch of smaller services. If that's the case from one release to the next, TripleO will make sure all the right steps are in the script, so you don't really have to worry about it; TripleO is going to take care of it.

And one thing to add to that, too: what we're doing right now is an in-place upgrade. If you wanted to add features within a major version, say I didn't have Sahara and wanted to add it, I would add it after the upgrade, not during. Or if I wanted to move to DVR, or use SSL for the endpoints because I wasn't using SSL here, I would do a deploy again, sourcing the YAML files that do that specific routine, and I would just make sure I do it after my upgrade, once everything is settled. During the upgrade process you don't want to be expanding the environment or adding more nodes; you want to let the upgrade just do the upgrade, and add things afterward. Good question. Thank you.

Question: Does this work for OSP 8 to 9?

Yes, and 7 to 8. The example we showed here was 9 to 10, but it's supported and works. If you look at any of the upgrades, they're going to be very similar; the notion of upgrading looks exactly the same, you're just executing different steps, different YAML files, to upgrade from one to the other. From the Red Hat perspective, Ocata is not out yet, I think it's coming really soon, but you're going to have exactly the same...
Well, not exactly the same, but the same type of workflow in future releases as well.

Question: It doesn't have to be... it can be, let's say, Kilo to Mitaka, right?

You have to go step by step; it's only version plus one at a time. So Kilo to Mitaka, yes, eventually, but you couldn't jump from Juno straight to...

Question: The other question is about the floating IPs. During this process, do the VMs' floating IPs stay the same, or are they going to change?

No, they won't change. But the L3 agent, and the Neutron server behind it, has to restart, so that will flush it and cause a temporary interruption. If you look at the docs, the docs are done really well; they call out the scripts step by step and list specific gotchas like floating IPs and so forth. So take a look at the docs; especially the OSP 10 one, the latest, is excellent. To do this upgrade, I literally followed the docs step by step without a single issue. It was flawless.

Question: Are you talking about the TripleO docs or the Red Hat docs?

The Red Hat docs, yeah. Last question.

Question: One more question: you mentioned that a lot of services came out of Pacemaker. Are those now controlled by systemd?

Yes. The idea behind it is that systemd does a great job of starting them; they don't need Pacemaker to start them. They can just start, and when whatever they're talking to comes back up, they'll resume connectivity. The important ones to keep in Pacemaker are the database, obviously, because we need Galera to replicate, Rabbit, and the VIPs. And I think another reason we moved these services off Pacemaker is that it's easier to detach them into this concept of composable roles if they're not managed by some cluster mechanism.

Question: A couple of quick questions. We would still need composable upgrades to order services within an upgrade, right?

Ah. You mean you want to add more services, like DVR or something like that?

No, I want one service, or one role, to be upgraded before the other one.

Ah. It's documented using the method that I showed, but you can also upgrade services individually. If I were to, say, SSH to a controller and do a yum update on Neutron, or take it out of Pacemaker first, upgrade Neutron, restart it, and bring it back up, I could do that manually if I had a specific need to. And actually, composable roles were new in Newton in TripleO, and upgrading composable roles is going to be part of the process in the next version, Ocata. So you're going to have the ability to upgrade your composable roles, with services running separately. In OSP 9, since composable roles didn't exist, you can't go from OSP 9 with no composable roles to OSP 10 with composable roles: if you're upgrading from 9 to 10, you still have the control plane layout that you had in 9. So that's one caveat. But that's changing too, actually, in the future. There's more coming, right? Yep.

Question: So if I have OSP 10 and I have a composable role, then 10 to 11, I should be good?

Yep. Absolutely.

And 10 is a long-term release, right? 11 and 12 are short-term releases, and 13 is the next long-term release. We're going to support 10 to 13 upgrades so that customers can stay on a long-term release and upgrade to the next long-term release.

Excellent. How long until that comes out?

Well, we usually follow trunk; we're usually a month or so behind trunk. So every six months, we come out with a new version.
And right now OSP 11, which is Ocata, is about to hit. Then we'll follow in line, with 12 being the Pike version, and 13 will be six months after that. Out of the box, we support OSP 10 for three years, but you can extend that to five, five years from the point of release. So five years of support for your overcloud. Excellent. I don't see any other questions. I appreciate everyone for coming, and thanks for those great questions. Thank you.