All right, here we go. OK, well, we have a lot to cover, so let's go ahead and get started. Welcome to Thursday. Hope everybody had a good night last night and is ready to have some fun today. Today we're going to talk about what's traditionally been a little bit of a controversial topic in OpenStack, which is skipping releases on upgrades. My name is Mark Voelker. I'm the OpenStack architect at VMware. My name is Carl Stimniewski. I'm a senior member of technical staff at VMware. Hi, my name is Dhar Sarana. I'm a staff engineer at VMware, working on VIO.

So I want to set the stage a little bit for talking about skipping releases, and to do that, we have to think back a little ways. Let's think back to about 2011, which is about when I started working on OpenStack. The Diablo design summit, I think, was my first one, and Cactus was my first release in the lab. In that day and age, in 2011, there were not very good incentives to skip releases. In fact, pretty much the opposite. In those early days, each subsequent release of OpenStack was a big change, and it was delivering a lot of stuff that everybody needed. The core functionality of OpenStack was really still being developed in those early days. So there really wasn't a lot of incentive to skip releases. No matter what your timelines were for adopting releases, it made a lot more sense to bring in those new releases as fast as you could.

Fast forward five years, and a lot of that core functionality is now very well established. When we talk to customers out in the field, a lot of times they're saying, yeah, that thing they added would be a nice thing to have, but the things that I've got to have to keep my cloud up and running, they're there. So we can bundle a lot of those new features that I want but don't necessarily need immediately into one larger upgrade, and maybe that's a better path for me.

And in those early days especially, upgrades were really, really hard. I remember trying to go from Cactus to Diablo in my lab and, man, what a disaster. It was basically build a new cloud and start all over from scratch. Disrupt all my workloads and bring them up somewhere else and, man, what a mess. So to its credit, the OpenStack community recognized what a big issue that was, and it made a lot of improvements over time in being able to upgrade from release to release. Now the focus upstream has traditionally been on N and N minus one, right? But it turns out a lot of the infrastructure we need to go a little bit wider than that is actually there, if that's what suits your organization. There's a lot of backward compatibility in some projects. We now have DB migrations for almost everything. There are versioned objects and a lot more real-world upgrade experience. Groups of operators have gotten together and talked and figured out strategies for coping with things like upgrades that are fairly big changes. So this is not 2011 anymore, right? This is 2016, almost 2017, and we've come quite a long way.

Still, though, if you're running an OpenStack cloud in production today, chances are actually pretty good you're not running the most recent release. If you look at the user survey that came out, look at the numbers for who's running Juno, Kilo, or Liberty: they're all higher than Mitaka. And I guarantee you they're higher than Newton, since that's really fresh off the presses. So there's definitely some lag in what people are actually running, right?
So we can kind of see that deployers themselves aren't necessarily closely following the upstream release dates. And then we ask, well, are they actually following the six-month release cadence? We've talked to operators in the field, and it turns out a lot of them aren't, right? There are still a lot of clouds running fairly old versions, maybe a year or two back, that have that core functionality that they need, and they want to de-risk their operations by not introducing upgrades too often.

And we've also now seen that OpenStack fits this huge diversity of organizations, industries, and use cases. If you've listened to Jonathan Bryce talk on stage, he'll talk about automotive, he'll talk about NFV, he'll talk about e-commerce, he'll talk about CI/CD — a whole wide variety of use cases. So OpenStack is a really flexible, powerful thing that can fit all those different use cases, right? And it turns out that not everybody wants to stay close to master, because that's not what fits their particular industry or their particular use case. And it turns out that when we're looking at upgrades, there are a lot of different patterns that we can fit in for all those different needs too. So it's a big credit to the OpenStack community that we developed something that flexible.

So when we think about doing skip-release upgrades, one of the things you need to ask is, what's your organization actually like? What does the battle rhythm for your organization look like? Do you need to qualify hardware and software upgrades? Do you have this rigorous, six-month process that your IT has to go through just to introduce new things into your data center? Or are you pretty loose and fluid and can move fast? Is the current version that you have working really well for you? Do you have a compelling reason to go from A to B? Or is it something where, you know, maybe there are nice-to-haves, or maybe it's a middle priority and you have some other things you need to get done first, so it's okay if we wait a little longer on this, right?

Do your upgrades coincide with hardware refreshes, maybe maintenance freezes, maybe shopping seasons? I know we've got some e-commerce customers who are like, you know, when it comes to be Black Friday, nothing's changed in our data center for about a month or two before that, right? So it doesn't matter when the OpenStack release comes, because back to school or Black Friday, those are the same time every year, and we're not touching anything then. Maybe it's also things like audits or fiscal-year calendars, right? Maybe you don't want to disrupt things right toward the end of the fiscal year, when the budget's expended and you don't want to risk bringing people in, paying overtime, and introducing new things in your data center, right? So there are lots of different reasons, lots of different cadences that people might have for changing the timelines of when they deploy things.

Also ask if you're an aggressive feature adopter or you're just primarily using that core functionality, right? Is there new stuff that you want to introduce in your cloud that's critical for your next set of workloads? Or are you pretty well established, and there are just some nice-to-haves that you want to introduce? And then finally, look at the products that you're actually using. There are a lot of different ways to consume OpenStack.
There are public clouds, there are managed private clouds, there are distributions, all those things. And in your case, you may want to stay close because you really want to stay close to the upstream security releases, right? OpenStack provides security releases for a couple of releases back, and you may not want to go outside of that window, because maybe you're not working with a vendor, maybe you're rolling your own, so you're a little bit more dependent on upstream for the security fixes. Or maybe you're working with a vendor who has promised you that they're gonna backport all those security issues to even older releases, so you're a little more comfortable with a more flexible timeline, right? So these are some questions that you want to ask when you think about what your upgrade strategy is actually gonna look like.

So all that to say, today's OpenStack has evolved quite a bit from where it used to be, right? More models of consumption, more stuff out there that we can use, more industries, more verticals that we're talking to, more use cases that it's fitting into. And the upgrade strategies are now a lot more diverse than they used to be as well.

And with that, we're gonna talk a little bit about why you might not want to skip releases. Again, if you're working with a very small team and you're very dependent on outside services, maybe skipping releases isn't for you, because you really are dependent on upstream for those bug fixes, right? You're not gonna get those unless somebody upstream pushes them to you. Or maybe you need those new features, or maybe you're very dependent on individual project APIs. It turns out that over time, OpenStack changes its APIs, right? And we do that in a fairly friendly way. We have microversions now. We're pretty good about saying what the current version of an API is when a major API changes. In most cases, you can run more than one version of an API. So I know in our clouds right now, we're running Cinder v2 and v3, and Keystone v2 and v3. Not always the case, though. It turns out there's some database stuff under the hood that makes it hard or impossible to run LBaaS v1 and v2 at the same time, right? So maybe if you're really dependent on some of those APIs, you need to stick with that older release for a little while longer, until you can get your applications ready for the new change. And maybe you just like living on the bleeding edge, right? There are people that do this. They have very well-established CI/CD pipelines, and the concept of bringing in small incremental changes works better for them than doing upgrades that are fairly large.

Okay, so, skipping releases. Now possibly a thing, right? We have OpenStack in enterprises, we have OpenStack in big public clouds, small public clouds, lots of different use cases that move at different paces. So we know there's some demand out there for people to maybe not upgrade every six months. So how do we actually get it done? To answer that, we gotta think a little bit about what an upgrade actually entails when we upgrade an OpenStack cloud. We gotta deploy the new Python bits, obviously, right? And there are a lot of underpinning libraries as well. We gotta do DB schema migrations, because the database is one of those things that changes in between releases. We gotta deal with the potential removal of old APIs and the introduction of new ones. Like we say, there are different versions of APIs for different projects.
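A quick, concrete way to see which API versions a service actually exposes — say, to confirm that Cinder v2 and v3 or Keystone v2 and v3 really are both there before and after an upgrade — is to GET the service's unversioned endpoint, which most projects answer with a version list. A minimal sketch (the URLs here are placeholders for your own cloud's endpoints):

```python
# Minimal sketch: ask a service's unversioned endpoint which API versions it exposes.
# The endpoint URLs below are placeholders -- substitute your own cloud's endpoints.
import requests

ENDPOINTS = {
    "keystone": "https://openstack.example.com:5000/",   # identity
    "cinder":   "https://openstack.example.com:8776/",   # block storage
}

for name, url in ENDPOINTS.items():
    body = requests.get(url, timeout=10).json()
    # Services typically return either {"versions": {"values": [...]}} or {"versions": [...]}.
    versions = body.get("versions", [])
    if isinstance(versions, dict):
        versions = versions.get("values", [])
    print(name)
    for v in versions:
        print("  %s  %s" % (v.get("id"), v.get("status")))
```

Comparing that output between the old and new control planes is a cheap way to catch an API that quietly went away between the release you're on and the one you're skipping to.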
Then there's the potential addition of new components due to things like project refactoring, right? The Ceilometer of two years ago doesn't look very much like the Ceilometer of today, because we've decomposed some of that functionality into things like Gnocchi and Aodh and Panko, right? Going back to 2011, Cinder didn't exist; it was all Nova — nova-volume. A whole new service was carved out of that. Neutron didn't exist in 2011. So over time, there are things that you're gonna want to introduce into your cloud.

You may also wanna look at potential upgrades of some of the underpinning stuff, whether that's your MySQL database or your RabbitMQs, maybe the hypervisor software that you're running, network, server hardware, storage — all of those may be things you wanna think about during upgrade cycles as well. And you may wanna make changes to your deployment architecture too. Maybe we start out with 15 VMs in my control plane and we wanna carve that down to eight, because we find out we don't need all that extra capacity, or we wanna free up some IP addresses. Or maybe we wanna add some more stuff — maybe separate out, I don't know, the Nova database from the other databases and put that on a separate database cluster, right? So you gotta think about architecture changes over time as well. And you gotta think about testing it all, and then how to turn it all on for your end users, right? So a lot of moving parts.

So what we're gonna demo here in a little while is how we've chosen to do this when we skip releases during upgrades, and that's with the blue-green upgrade pattern. For those who aren't familiar, very quickly and very simply, what it is: you start out with a control plane, and it's behind a load balancer, right? In our case, we actually load balance both the incoming APIs from end users of your cloud as well as the internal APIs — things like Nova talking to Neutron to plumb VIFs into your networks and get VMs connected up. So we've got this layer of indirection that we can take advantage of.

What we'll actually do is build a second control plane. And that's kind of cool, because basically when we do this, it's just a vanilla deploy. We're not actually switching bits and having to worry about, oh, what happens if a package upgrade fails? It's just a deploy, just as if we were building a new thing. The downside is it takes a little bit of extra resources. In our case, like I said, it's about seven or eight VMs, so it's honestly not very much for somebody who's running a cloud, right? It's a pretty small amount of capacity. Honestly, we have more people that have trouble just making sure they remember to set aside IP addresses than anything else, right? I don't think we've had anybody that really balked at storage concerns or CPU, RAM, or anything like that.

The other cool thing is that when we bring up that new control plane, it's a fully functional control plane. It's just not actually accepting any real-world traffic yet, right? So that means as an operator, I can go into that new control plane and actually test it out. I can go do some functional testing and make sure it's all working together. I don't really have to worry about a Kilo Nova talking to an Icehouse nova-compute, or talking to a Mitaka-based Cinder, right? I've actually deployed all that together as a new deploy, and I can then go in and functionally test the thing and make sure it's actually running.
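That functional testing of the green control plane can be as light or as thorough as you like. As a minimal sketch — this uses the openstacksdk client, and the endpoint, credentials, and domain names are placeholders — you'd point a client at the green plane's temporary VIP and walk the basic APIs before it ever sees real traffic:

```python
# Minimal smoke test against the new (green) control plane before it takes real traffic.
# auth_url points at the green plane's temporary VIP; all credentials are placeholders.
import openstack

conn = openstack.connect(
    auth_url="https://green-vip.example.com:5000/v3",
    project_name="admin",
    username="admin",
    password="changeme",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Walk the core APIs end to end: identity, image, flavor, network, and server listings.
print("services:", [s.name for s in conn.identity.services()])
print("images:  ", [i.name for i in conn.image.images()])
print("flavors: ", [f.name for f in conn.compute.flavors()])
print("networks:", [n.name for n in conn.network.networks()])
print("servers: ", [s.name for s in conn.compute.servers()])
```

In practice you'd also boot and delete a throwaway instance, since that exercises the scheduler, the network plumbing, and the hypervisor driver together rather than just the API endpoints.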
So at some point I can then plumb that into my load balancer. Now I can actually start accepting API traffic. Maybe I wanna roll that in carefully and do some more testing with inbound connections. And when I'm ready, now I can actually start syncing data between the two. This is the only point, in our case, where we incur any control plane downtime. And keep in mind there's no workload downtime. All the stuff that's running in your cloud, all your client applications, all those workloads, they're still running without interruption. This is the only point at which we take any downtime at all on the management plane side of things. So at this point, basically, we freeze the incoming APIs so that we're not making changes while we're moving data from point A to point B and running the schema migrations, right? There's actually room for optimization there where we can make that really tiny, but as it is, this is a very, very small window. We're not actually having any real problems with this. Again, it's just the management side; it's not the data plane side.

The other cool thing about this is that if we did find problems when we brought up that new green control plane, we can stop the whole thing and we haven't interrupted anybody, right? This is the only point at which we have the potential for loss if something goes wrong, and it's very, very simple: all we're doing is moving data and running those schema migrations. So it's a pretty simple set of stuff. It's not like we're mid-flight swapping out software in place and having things get into a very broken state. We've minimized the set of operations where there's a window for harm to be done.

So once that's all operational and the data is synced, we basically take the blue control plane out from behind the load balancer. Now we're sending all those incoming API requests to the new control plane, and we're up and running. And the nice thing about this is that even at this point, if we missed something during our testing and there's something wrong with that new control plane — or maybe a storage array goes south, or who knows what else could happen, right? — that old control plane is still there. We could flip that load balancer right back around and switch back to it immediately. We'd lose some of the data that happened after we did that data sync, but that's a pretty small price to pay if something's majorly wrong that we missed on the new side. So here again, this is really about minimizing the risk. And once all that's done and we're happy with the new control plane, we can drop the old one, reclaim those resources, put those CPUs back into the pool, all that storage, all that good stuff.

So it makes for a pretty simple way to do this. And if we don't want to depend on upstream having, say, N minus two compatibility — if we want to go from Kilo to Mitaka — that's actually super important. Because again, we're deploying a Mitaka control plane and not worrying about whether Mitaka can interoperate with Kilo. The only point at which we gotta worry about the two having some common ground is when we run those database schema migrations, and basically we just take the set of schema migrations between those two releases and run them together. So it makes for a pretty simple way to do a skip-release upgrade. So with that, I'll turn it over to Carl and let him show you how it's done.
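Before the demo, here's the cutover Mark just described, reduced to a sketch. None of the helper names here are real APIs — they're hypothetical stand-ins for whatever your deployment automation, load balancer, and database tooling actually provide — but the shape shows where the single, small, control-plane-only downtime window sits and why rollback stays cheap:

```python
# Hypothetical sketch of the blue-green cutover described above. None of these helpers
# exist as-is; each stands in for real tooling (deployment automation, an HAProxy-style
# LB's API, database copy/replication, per-project db-sync commands). Callers supply them.

def blue_green_upgrade(lb, blue, new_release,
                       deploy_control_plane, smoke_test,
                       sync_database, run_schema_migrations, destroy):
    """Run one skip-release upgrade; the callables are supplied by real tooling."""
    # 1. Build the green control plane as a plain, fresh deploy -- no in-place package swaps.
    green = deploy_control_plane(release=new_release)

    # 2. Functionally test it while it carries zero production traffic.
    if not smoke_test(green):
        destroy(green)  # nothing was ever exposed to users; just throw it away
        raise RuntimeError("green control plane failed validation")

    # 3. The only control-plane downtime window: freeze API writes, copy the data over,
    #    and run the combined schema migrations for the releases being skipped.
    lb.freeze_api_traffic()
    try:
        sync_database(source=blue, target=green)
        run_schema_migrations(green, from_release=blue.release, to_release=new_release)
        # 4. Flip the load balancer to green. Data-plane workloads never stopped.
        lb.switch_backends(to=green)
    finally:
        lb.unfreeze_api_traffic()

    # 5. Keep blue around until you're confident; flipping the LB back is the rollback.
    return blue, green  # the caller later destroys blue and reclaims its resources
```

The point of the shape is that every risky operation happens on a plane that carries no traffic yet, and the one shared step — the data copy plus migrations — is both short and reversible by flipping the load balancer back.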
All right, so I'll show you a quick demo of how we do the upgrade. Let me close that and go here and let's start that.

So what we're gonna show you now is a demo of upgrading where we go back a bit: we'll start with Icehouse, then we'll upgrade it to Kilo, and then we'll upgrade it to Mitaka. The first part will be upgrading Icehouse to Kilo, from VIO 1.0 to VIO 2.0. What you can see on the screen is a browser with our VIO plugin in the vSphere web client. You can see version 1.0.0 and OpenStack version 2014.1.3, which is Icehouse. You can also see a Horizon where we have one instance that will be there for the entire process of the upgrade. It has this IP that we'll connect to in the top-left terminal, and we'll use a poor man's way to track that it's available: we'll just run the uptime command under watch, and we'll see that it's constantly there and we're not losing traffic.

In the bottom-left corner, you can see another terminal. We're connected to OMS, which is the OpenStack Management Server. That's our control center for VIO. We have two files preloaded. Those are Debian packages that contain the bits for upgrading VIO to the consecutive versions, from 1.0 to 2.0 and then to 3.0. So the first command we run here is viopatch add. That command adds our Debian file to our repository, where we will then use the viopatch install command to install this Debian package. This process updates our internal repository with the Kilo version of OpenStack. It also updates all our control commands, like the VIO CLI commands and other things that are VIO-specific.

All right, so we've done that. We've installed the VIO upgrade Debian file. Now what we have to do is log out from the vSphere client and log in again, so that our plugin is also updated. So that's what we're gonna do now. We're logging back in. What we're gonna see here is that VIO has been updated to 2.0. We'll go to our plugin again and we'll see that the version has changed — let's just make this a little bit bigger. So the version is 2.0.0, that's VIO 2.0.0, and OpenStack is 2015.1.1, which is Kilo. But we still have our cluster left to update; that's only our management server that has been updated.

So what we're gonna do now is go to the upgrade tab and go through this wizard. This wizard will guide us through the upgrade process. We specify the deployment name for the green control plane that Mark mentioned. We specify the public virtual IP and the private virtual IP for this green plane — the plane with the newer bits, the Kilo bits. And of those IPs, the public virtual IP is a temporary IP that we'll be able to use to verify the new deployment. That's pretty much it; that's all the data we have to specify. We click finish, and here you see three timers. Upgrade time is the total time of the upgrade. Control plane downtime is basically the downtime of the OpenStack cluster. And data plane downtime is the downtime of our instances, the actual workloads we run on top of OpenStack. As you can see, the upgrade timer is going right now, but the others are not, which means we have no downtime at this moment.

There you go, the first step finished. We still have our Horizon available. Nothing has changed, everything is available, users can use their cloud. The only thing that changed is that we provisioned the new control plane, meaning we provisioned VMs and we did the pre-configuration. So the next step is the migrate data step. That's the step that actually incurs downtime, as Mark said.
That's the step where we migrate the database from the blue control plane — in this case, Icehouse — to Kilo. So the control plane downtime has started. That's when users cannot use their cloud for the moment, but all their workloads are running. As you can see in the top-right corner, the uptime command is still there; we didn't lose the connection, and the time is going up. So we're migrating our data. This lab is actually pretty slow, but we've seen times of just a couple of minutes in some of our customers' environments.

So here we have the temporary public virtual IP that we can use to verify our migration phase. Basically, we have a Kilo control plane that is not exposed to our users yet, but we can check if everything is correct. So we log into the Kilo Horizon to see if everything is there. As you can see, in the Kilo Horizon we have our instance, everything is fine. Users also run some API verifications — check if their CI is working correctly against the new control plane, and so on.

So the last step we're gonna do here is the switch to new deployment command, which pretty much does one thing. Well, there are more things, but the most important part is that we're switching the original public virtual IP — the one that all the users know, the one that's configured in every CI and so on — to the new control plane, and that's what happened here. There's no overlapping: the old control plane is stopped, so the public virtual IP is only configured on the new control plane. So what we're doing here is using the original IP, the real public virtual IP, to verify that our OpenStack cluster has been upgraded, that everything is at the Kilo version, and that everything is running correctly. So what we're gonna do now is just create a new instance, a Kilo instance, using the OpenStack Kilo version, just to ensure everything is fine, and we'll also keep it for the next steps of the demo. The instance has been created, and that's it. We're pretty much done with the upgrade. The last part that's left, since we know everything is fine, is that we can delete the old control plane, the Icehouse control plane, since we don't need it anymore, and we can free up some disk space that was occupied by it. And that's it. That's the Icehouse to Kilo upgrade.

Now let's do part two of it, which is — oops, that's actually not Icehouse to Kilo, this should be Kilo to Mitaka, sorry for that. VIO 2.0 to VIO 3.0, that's correct. So we're gonna do the Kilo to Mitaka upgrade now. All the steps are pretty much the same. We're again gonna use the instance to track availability during the upgrade. We're gonna connect and run the watch command with the uptime command, and we'll also go to OMS to install the new Debian file. As you can see, we started with version 2.0.3 this time, so we did some patching in between. We didn't include that, but patching is pretty much the same workflow, except we just do it in place. But we started with OpenStack 2015.1.1, which is the Kilo version. So again, viopatch add, that's the first step: we add the new Debian file to our repository. We see that the patch is there, but the installed column says it's not installed yet, so we install it. That again updates our management server bits. It updates our plugin, it updates all the Python libraries, et cetera, et cetera. So the next thing is to log out, log in, and update the UI plugin.
Those steps are pretty much the same, so the user knows exactly what he or she has to do to upgrade VIO to the next version, because it hasn't changed since the first release. So here you have version 3.0 and OpenStack 2016.4.7, which is Mitaka. One more thing that we're gonna do as a part of this upgrade is change our architecture — the number of VMs that we're running. Here we had around 15 VMs; that's the Kilo version.

So what we're gonna do here is run the wizard again and upgrade to Mitaka. The steps are pretty much the same. We specify the new deployment name, this time VIO3. We specify the public virtual IP. We don't need a private virtual IP anymore in version 3.0 — we just need the public virtual IP, and we take the private virtual IP from a pre-configured pool of IP addresses. We finish the wizard, and we provision the new control plane, which includes all the new VMs for Mitaka, pre-configured but without data yet. So that's the first step that we're gonna do. It should finish soon. It takes a while — there you go, it's finished. So we're gonna see that in Mitaka, in VIO 3.0, we have far fewer VMs. That's another thing that we do besides skipping a release: we also change our architecture. We reduce the footprint from 15 VMs to around seven. So that's another benefit of doing an upgrade this way, using the blue-green approach.

So we again do the migration of data. What we do here is migrate the data and move the database from Kilo, this time to the Mitaka version. We run migrations, et cetera, et cetera, as part of the step. And we'll soon get a Mitaka deploy — whoops, failed. Okay, well, it's recorded, so I can't hide it from you. I knew it was gonna happen. So what happened is that I actually had a valid error: I had a misconfiguration in my infrastructure, so the upgrade failed in my lab. I could have just gotten rid of it and recorded it again, but instead I decided to show you the feature that Mark mentioned, which is the rollback.

So what we're gonna do here is we want to fix it, but we want to restore our cloud for the time while we're fixing the issue. So we just run a rollback to our original control plane, which was Kilo. We're bringing it back with the same data that it had. We now verify that it's actually working. We see the Horizon again, we see that it's running, it's the Kilo version, and you can see that it's all good. So users can use their cloud while we're fixing the issue. We delete the broken control plane, since it didn't configure correctly, and we're just gonna restart the upgrade process.

So we go fix the issue, and we do the actual part two, which is the upgrade. And again, the label is wrong — it's VIO 2.0 to 3.0, Kilo to Mitaka, not Icehouse to Kilo; we did that already. Let's go through these steps really quickly, because we already did them. It's all the same. We maintain the same process for users so they know it; there are no surprises here. We have exactly the same steps. Again, we run the upgrade. You see the timers again. Again, the first step doesn't incur any downtime, so users can use their cloud as they were using it, without any issue. This step is long, so we don't want to incur any downtime on this step. So we provision new VMs, this time seven VMs, as I mentioned before. All our cloud is running correctly. We have Horizon, we have all our APIs accessible, as you can see.
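Throughout both demos, the watch/uptime loop in the corner terminal is the evidence that the data plane never goes down. If you'd rather script that check than eyeball a terminal, a minimal stand-in — it just confirms the test instance keeps answering on a known port; the address is a placeholder — could look like this:

```python
# Minimal data-plane availability probe: keep checking that a workload VM stays reachable
# while the control plane is being upgraded. The IP address below is a placeholder.
import socket
import time

INSTANCE_IP = "192.0.2.10"   # the test instance created before the upgrade
PORT = 22                    # any port the workload is known to listen on

while True:
    start = time.time()
    try:
        with socket.create_connection((INSTANCE_IP, PORT), timeout=3):
            status = "up"
    except OSError:
        status = "DOWN"
    print("%s  %s  (%.2fs)" % (time.strftime("%H:%M:%S"), status, time.time() - start))
    time.sleep(2)
```

Any "DOWN" lines during the migrate-data step would mean the upgrade touched the data plane, which is exactly what this pattern is designed to avoid.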
And we go back, and now we go to the migrate data step. That's where we incur the downtime again. So here the data is actually migrated from Kilo to Mitaka. This time there should be no failure, hopefully — no network failure. Well, as I said, it's recorded, so I can't keep you in suspense: this is gonna succeed. We have the data migrated. We have the temporary public virtual IP available again, so we can verify everything is fine. As I mentioned, our users often run some testing scripts; they check that there are no API errors for their workloads, whatever they have right now, before they actually expose this to the users. So we can verify it's running. As you can see, the Horizon login page changed, because in VIO 3.0 we actually expose multiple domains to users. So we can log in and see that Horizon — and basically the Mitaka cloud — is up and running. That's the stage where the control plane is still in downtime, so we want to make it quick. We're just gonna check that the instances are there. We're gonna see that it's actually Mitaka, because Mitaka has this fancy launch instance wizard — or maybe it was added in Liberty, I don't remember. Yeah, it's there.

So once we've verified that, we're gonna do again the step that we did before, which is switch to the new deployment, and we'll have our public VIP — the original public VIP — switched to the new control plane, and we'll expose the new control plane to users. And that's the last step of the upgrade. Once we finish this step, our upgrade is completed. Let's just wait for that.

So we can summarize what happened in the demo. And there you go. What happened here is we upgraded from Icehouse to Mitaka in two steps. In each step we skipped a release: the first time it was Juno, this time it was Liberty. We skipped the release, and the second time we upgraded, we also changed the architecture of our cloud: we reduced the VM footprint from 15 VMs to seven VMs, and we also reduced the space required by the VMs. And we swapped out a broken switch, right? Yeah, exactly. And while we fixed that, we could easily roll back the deployment without an issue. So the last thing that's left is just to delete the old control plane, the Kilo control plane, because we don't need it anymore. We've verified that our Mitaka cloud is running. As you can see again: old control plane, 15 VMs; new control plane, seven VMs; we delete it. And that's pretty much it. Okay, Sidharth.

All right, thanks, Shaila. Let's go back to our presentation. There you go. All right, so demos are great, but the million-dollar question: is it real? Does it really work for our customers? Well, the truth is that a lot of our customers, especially with VMware Integrated OpenStack, have actually gone through the same process that you just saw in the demo — fast-forwarded — in production. In fact, some of our customers were brave enough to go through the upgrade process without even telling our support team. And after the fact, when they were done with the upgrade, they'd say, oh yeah, by the way, we upgraded to your latest release. Oh, fantastic. You know, one of the favorite quotes that we have from one of our customers: as you can see, doing an upgrade while just drinking beer and watching a game, that's how it should be, right? You're upgrading a production cloud without having to worry about what's gonna happen around it.

So, what's so special about VIO? How do we pull this off? What's the secret sauce? I mean, it really boils down to the overall architecture.
When you run OpenStack on the VMware infrastructure, the basic architecture is something like this: you have the control plane, and you use all the VMware drivers underneath the main projects, which are essentially talking to your data plane — the VMware software-defined data center: vCenter Server, NSX, and so on and so forth. That gives us the unique ability to upgrade the control plane — we can literally swap it out from one release to the other — while all your real workloads that are actually running and doing the work are never affected at all. You never touch them. And that's what you expect out of an upgrade.

Given that, there are still things that you wanna plan for, and some nuances that you have to keep in mind. The simple one: plan for extra resources. You can't really plan for the broken switch, but yeah, things like that. Make sure that when you're upgrading, you have the relevant infrastructure carved out for it, because that's what you're gonna run the next version of your cloud on. Next: well, this is OpenStack. We're always improving functionality, adding new APIs, deprecating old ones, and especially when you skip releases, there's a chance that an API that existed a release back may not be there anymore. So you gotta plan for that. Make sure that when you're upgrading, you're mindful that you may not see those APIs anymore, and think about what that means for your customers. Also, as Mark mentioned earlier, there are backwards-incompatible APIs, like LBaaS v1 versus v2: if you have clients built against v1, well, guess what, they can't just work with v2. But ultimately, it's all about preparing the end customers and users who are consuming your cloud to be ready for the change from one version to the next.

So for example, as Mark said, in our cloud we run Keystone v2 and v3, both in the latest Mitaka. What does that mean? Well, even though the APIs are backwards compatible, the clients who have built their automation scripts and whatnot against your cloud need to be informed ahead of time that, okay, folks, we're gonna switch to a newer version of these APIs. You have the new capabilities, but you gotta make sure that your client integrations and whatever you have are capable of consuming those newer versions and are actually talking to the right version of the API to get the new benefit, which is very important, all right? Otherwise, well, they're gonna still use the older APIs, the older functionality, and they're not gonna get the benefit of the newer thing.
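The Keystone v2-to-v3 case is a good concrete example of "prepare your clients ahead of time": v3 introduces domains, so automation written against v2 needs a couple of extra parameters even while both APIs are being served side by side. A minimal sketch with keystoneauth1 — all URLs and credentials here are placeholders:

```python
# Sketch of what client automation has to change when moving from Keystone v2 to v3.
# Both sessions can work against the same cloud while v2 and v3 run side by side.
# All URLs and credentials below are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v2, v3

# Old-style v2 auth: no notion of domains.
old_auth = v2.Password(
    auth_url="https://cloud.example.com:5000/v2.0",
    username="demo",
    password="secret",
    tenant_name="demo",
)

# v3 auth: same credentials, but the user and project are scoped to a domain.
new_auth = v3.Password(
    auth_url="https://cloud.example.com:5000/v3",
    username="demo",
    password="secret",
    project_name="demo",
    user_domain_name="Default",
    project_domain_name="Default",
)

for label, auth in (("v2", old_auth), ("v3", new_auth)):
    sess = session.Session(auth=auth)
    print(label, "token ok:", bool(sess.get_token()))
```

The cloud can keep answering on both endpoints during the transition; the real work is making sure every script, CI job, and integration has moved to the v3 form before v2 eventually goes away.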
But that — I mean, I think that's all we had. Questions? Yeah, folks have questions. There's a mic over here; I think they're recording us, so come on up.

Thank you for the impressive demo. My question is that you didn't mention at which step you update the OpenStack services, like nova-compute, et cetera.

Good. Okay, so we actually don't update them per se, because we do not do an upgrade in place. We actually set up a new control plane, and this new control plane has all the newer versions of the OpenStack components, meaning Nova and Neutron, right? So we already provision them — let's say we do an upgrade from Kilo to Mitaka — we provision them at the Mitaka version, and the only thing we do is reconfigure them in the same way that Kilo was configured, doing the appropriate updates if there are any, and migrate the data, running the database migrations on top of the Kilo database. But we do not have to update components per se, because we provisioned them at the newer version.

So specifically with nova-compute, which I think is kind of the core of the question there: if you think about a traditional KVM architecture, nova-compute runs per hypervisor, right? In a vSphere architecture, a nova-compute talks to a vSphere cluster. So it's not actually tightly coupled to the individual hypervisor host, and that gives us a great deal of flexibility, because it's basically just a VM running a process. So we can do that as part of the blue-green upgrade and just provision a new VM for it as part of the control plane upgrade. And there's a picture up there that actually summarizes what Mark's saying: the workloads and everything on the data plane are decoupled from the OpenStack control plane in a way that gives us that leverage. Do we have any more questions?

One question regarding the database migrations. Some releases, especially Nova, do some migrations not offline but in an online fashion, after starting the new service. And these migrations are usually removed after just one release, so how did you solve that?

So that's essentially part of the planning. For example, when we do skip a release, we actually plan ahead — in this case, let's say from Icehouse to Kilo — and we wanna make sure that when we do that, the specific DB migrations that you're talking about actually work for Icehouse to Kilo. Yes, essentially, if you were to jump directly from, say, Icehouse to Mitaka, those since-removed DB migrations may not be there anymore, and that may not work. So that's part of the planning: if you're planning to skip a release, be mindful of those things — is there a blind spot there that's actually gonna hit you and fail? And if that's the case, then essentially, okay, you cannot skip that particular release. And what that boils down to, really, is that on the vendor side, when we plan out what upgrades we're gonna be doing and what our release schedule is gonna look like, that's part of the risk that we absorb. So the first thing we do is actually try that out, and when something blows up, that's the thing we gotta go fix. So on the vendor side, we kind of handle that for you. And if you're building your own distribution, or rolling your own OpenStack, that's one of those things you have to look out for and deal with as well. The good news is that even if the migrations themselves disappear after a release, Git is your friend. They still exist out there somewhere, right? So they're still out there. Yeah, right, thanks.

For the front load balancer, is that a standalone load balancer, or can we use Load Balancer as a Service?

In our case, it's actually an LB pair. We basically spin up two VMs that run HAProxy, and run keepalived between them. Yeah. So can you use Load Balancer as a Service for that? In our case, no, because what we're actually spinning up is the OpenStack cloud itself. So we don't wanna have any dependencies on the management plane itself when we're actually doing those upgrades. So we do it separately.
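To put the earlier schema-migration answer in concrete terms: in this pattern the migrations run on the green side, against the copied database, using the green release's own tooling, which walks the schema forward through every intermediate version in one pass. A rough sketch of that step — the per-project commands shown are the usual db-sync entry points, but the exact commands and any online data migrations vary by project and release, so treat this as illustrative:

```python
# Rough sketch of the "run schema migrations" step on the green control plane, after the
# blue database has been copied over. The per-project commands shown are the common
# db-sync entry points; exact commands and any extra online data migrations vary by
# project and release, so check each project's upgrade notes for the target release.
import subprocess

MIGRATION_COMMANDS = [
    ["keystone-manage", "db_sync"],
    ["glance-manage", "db_sync"],
    ["nova-manage", "db", "sync"],
    ["cinder-manage", "db", "sync"],
    ["neutron-db-manage", "upgrade", "heads"],
]

for cmd in MIGRATION_COMMANDS:
    print("running:", " ".join(cmd))
    # Each tool reads its own config to find the copied database and walks the schema
    # forward through every intermediate migration, e.g. Kilo -> Liberty -> Mitaka.
    subprocess.run(cmd, check=True)
```

The online data migrations the questioner mentioned are exactly the part that needs release-by-release planning, since their helper commands can be removed a release later — that's the piece a vendor validates, or you validate yourself, for each specific skip path.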
Do we have any more questions? I guess that's it. All right, thanks for coming. Thanks a lot. Thank you.