So, it's time to talk about OpenStack on OpenStack. It should really say OpenStack on ALS, because that's what we're really about here today. About me: I work at HP on OpenStack — I don't know what HP does — specifically on Nova and TripleO. So, we're here to talk about OpenStack. You've all seen this by now, I hope. This is the nice little image from the website, and they call it a cloud operating system. I'm not sure if it is or not, but let's call it that for now. So, what does OpenStack do? OpenStack is a cloud operating system that gives you APIs to programmatically manage, consume, and use compute, network, and storage resources. So, you have your application — let's say it's Jenkins or something — and you need to consume some resources. You say, I need a compute node, I need some storage, I need some networking. You tell the API that, and you get them. Nice and simple. It has some really great impacts, and as many of you hopefully know by now, this is one thing that cloud does: it enables velocity. You get really fast resource allocation — no more "I need a server, so I've got to wait two weeks to get it into the data center, have it installed, networked, DNS'd, et cetera." Agility speaks for itself: now we can develop, test, and deploy in the cloud. It's one environment for everything, which makes it easy. It's really easy to spin up resources, to get rid of them, to scale things out — and we want to take some of those properties and actually use them to make deploying OpenStack easier. So, OpenStack has been very successful to date. These are a bunch of the use cases today — once again taken from the website. We have lots of happy users, but it turns out it's really hard to deploy and actually upgrade. I don't know if any of you here have tried to actually deploy OpenStack in a non-trivial environment.
Turns out it's getting easier, but it's still a huge pain. Part of that is because OpenStack is actually really, really complex. Under that nice shiny API we have a whole bunch of different services. We have Nova, Cinder, Neutron — compute, block storage, and networking. We have Keystone for user management. We have object storage and block storage. Lots of different systems. We have a dashboard, and each of these systems has a bunch of different services running on a bunch of different machines. And we have databases and message queues, all of which you want to be highly available. So, this is actually a very simplified model of what OpenStack in production will look like. Yeah, it's a little crazy, right? Turns out this is really hard to deploy, for some obvious reasons: it's big, complex, and hairy looking. It would be nice if this were easy to deploy — if you didn't have to sort out where all this stuff goes by hand, and deal with upgrading by hand, and all that. To do that, we need some sort of system that allows us to deploy big, complex systems. Turns out OpenStack is designed to do exactly that. So, I want to deploy OpenStack with OpenStack — and this is where you hear the Inception music. This is now officially OpenStack's official deployment program, as of a few months ago, and we're actually working on making sure it works. I'm going to talk about how we get there. This is a quote from the actual description of the project: TripleO is the use of a self-hosted OpenStack infrastructure — that is, OpenStack bare metal (Nova and Cinder), Heat, diskimage-builder, and in-image orchestration such as Chef or Puppet — to install, maintain, and upgrade itself. We're trying to address three main issues with TripleO. One is bugs. We all have bugs. We all hate bugs. We have a few ways of tackling that.
Already in OpenStack today, we have a continuous integration process: we gate every patch on making sure we think it works. Our definition of "works" is never quite what you think "works" means, and we're working on fixing that. Furthermore, in addition to continuous integration, we're moving to continuous deployment. This solves the split-brain problem. I've had this a lot — I don't know if you ever have. On a personal level, you're working on two things at once. On a company level, you have the developers working on next-generation OpenStack while your production people are running something six months to a year old. They come and ask you a question about a bug that you probably wrote and are probably the culprit for, and you don't know what the code does, because that was six months ago. We solve that by deploying trunk all the time — continuous delivery. This helps because instead of thinking back, "man, what did I do six months ago?", it's "oh, I think I did that two days ago." You go back, it refreshes your memory, and you can fix it quicker. We also want a common API and code base for the cloud and everything under the cloud. That's just less code, less complexity: we already have one API we're comfortable with, so why not use it again? We also want to standardize the installation and upgrade process. Everybody does it a little differently, which means everybody hits slightly different bugs, and it's really hard to test a thousand different ways upstream. So, we want one more-or-less standard way, with a bunch of variations in it, that we can actually gate upstream OpenStack on. Another problem we're trying to deal with is entropy and cruft. I mentioned this in the last talk a little bit: you do apt-get install one day, and it's different a week later.
There are a bunch of different ways of solving this. Our approach is to bake everything in beforehand. Instead of doing apt-get install and running something like Chef or Puppet after you stamp the image onto the machine, we do it before — we change the image. We have these golden images with all the code in them, so you know that everything you're testing is the same code you'll see in production. You don't have to deal with apt mirrors and gem mirrors and pip mirrors and so on. We build everything beforehand, once, and then deploy it out. All the software will be in the golden images; configuration and state live outside. And hardware failure — that's a standard one; we use standard high-availability techniques. So, there are a bunch of pieces to the puzzle of deploying something like OpenStack, or many other systems: provisioning, software configuration, state, and orchestration. One of the reasons we can have TripleO is that OpenStack already has two great pieces — orchestration and provisioning — that do big parts of what we need today. Instead of using something else for orchestration, we have this nice tool called Heat that does it. For provisioning, we have Nova, which provisions compute resources for you. Instead of virtual machines, we just use bare-metal machines: the same API on top, and underneath, instead of a hypervisor, a bare-metal driver. The last three pieces are the ones we had to write ourselves; they're more independent, and you could swap them out for things like Puppet or Chef. diskimage-builder builds the golden images. And os-config-applier and os-config-refresh we've actually renamed — we swapped the word order for silly reasons — so now they're os-apply-config and os-refresh-config.
Those are how we manage configuration and state on the machines. For provisioning, we use the Nova API on top of it all: nova boot, instead of giving you a VM, gives you a bare-metal machine. This is done a bunch of different ways; the standard way is PXE and IPMI. This piece of code is currently being pulled out of Nova into Ironic — which the OpenStack Foundation has actually trademarked, so you can't use that name anymore, which is pretty funny. So in the future, instead of plugging in a virtualization driver, this will use Ironic: you say nova boot, it talks to Ironic underneath everything, and Ironic talks to PXE and IPMI or some proprietary third-party API. For software, we use golden images, and this encapsulates a known-good set of software. You build your image, you test whether it works — awesome, you ship it to production. If not, you try again until you get a working image. This excludes all configuration and persistent state: config files and things like databases live outside the golden images. This is sort of the equivalent of packages at the cluster level — at least we think so. Another nice thing is that it moves all the testing up. The usual story is: you test everything in your lab, it all works out great, you go to production, something changed, you don't know what it is, it's not working, and you have to go debug everything in production. I've had to do that a few times at past companies, and it's never pretty. So we move all those problems — things breaking, dependencies changing, and so on — up into the actual testing environment. We have a small tool chain that we wrote ourselves for this, but you could use any image-building tool chain. And once again, we get to deploy things the same way we tested them.
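The "same API on top, different driver underneath" idea can be sketched in a few lines. This is a toy model, not Nova's real driver interface — all class and method names here are illustrative stand-ins:

```python
# Minimal sketch of the driver abstraction described above: the same "boot"
# API call is served by either a virtualization driver or a bare-metal driver.
# These are illustrative names, not Nova's real classes.

class LibvirtDriver:
    def spawn(self, name, image):
        return f"VM {name} booted from {image} under a hypervisor"

class BareMetalDriver:
    def spawn(self, name, image):
        # In real life this would drive PXE + IPMI (or Ironic) to power on a
        # physical node and stamp the golden image onto it.
        return f"physical node {name} imaged with {image} via PXE/IPMI"

class ComputeAPI:
    """One API on top; the driver underneath is swappable."""
    def __init__(self, driver):
        self.driver = driver

    def boot(self, name, image):
        return self.driver.spawn(name, image)

cloud = ComputeAPI(BareMetalDriver())
print(cloud.boot("undercloud-0", "golden-image.qcow2"))
```

The point of the sketch is that callers never change: swapping `BareMetalDriver` for `LibvirtDriver` gives you VMs instead of machines, with the identical `boot` call on top.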
Bit for bit, it's going to be the same, except for your configuration and persistent state. For configuration, you use os-apply-config, a little basic tool we have that combines metadata delivered by your cloud — Heat, or potentially something else — with the templates on your image to produce working config files. Things like nova.conf, Swift configs, anything like that: all the config files go in, and os-apply-config stamps them out. It's a little templating tool, nothing too complicated. Once again, we wrote our own thing here instead of using one of the options out there — Salt, Puppet, Chef. The reason is that we don't want the reference implementation to depend on a third party, and we know people can get very heated about which one they like, so we wanted to stay out of that battle altogether — but you could use anything you want here. os-refresh-config is the other tool in the os-*-config suite, and it's triggered when the metadata changes. In addition to deploying the cloud, you want to be able to upgrade it, too — that means something's going to change, and you need to be able to reconfigure things and handle that. So there's a little service that, every time something changes, talks to Heat, restarts services, coordinates data migrations, things like that. If we're doing continuous deployment, we need a way of doing this often. It's a very basic process: pre-configure, configure, migration, and post-configure stages. A good example: if you're upgrading Nova, you'll do something like turn Nova off, upgrade the code, change the config file, and turn Nova back on — and if there's a database migration, you'll run it in there as well. For orchestration, we use Heat, which is OpenStack's official orchestration tool.
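Toy versions of the two config tools just described make the flow concrete. This is a simplified sketch, not the real os-apply-config/os-refresh-config code — the template syntax, hook names, and metadata keys are all stand-ins:

```python
# Toy sketch of the two config tools described above.
import string

def apply_config(template_text, metadata):
    """os-apply-config idea: merge metadata delivered by Heat with a
    template baked into the image to produce a working config file."""
    return string.Template(template_text).substitute(metadata)

def refresh_config(hooks):
    """os-refresh-config idea: when metadata changes, run hooks in fixed
    phases so upgrades happen in a predictable order."""
    ran = []
    for phase in ("pre-configure", "configure", "migration", "post-configure"):
        for hook in hooks.get(phase, []):
            ran.append(f"{phase}:{hook}")
    return ran

# Metadata would normally arrive from Heat on stack create/update.
conf = apply_config("rabbit_host = $rabbit_host\n",
                    {"rabbit_host": "192.0.2.11"})

# The Nova-upgrade example from the talk, expressed as phase hooks.
order = refresh_config({
    "pre-configure": ["stop-nova"],
    "configure": ["rewrite-nova-conf"],
    "migration": ["nova-manage-db-sync"],
    "post-configure": ["start-nova"],
})
print(conf, order)
```

Notice that the phase ordering is fixed by the tool, not the caller — that is what makes "turn Nova off, migrate, turn Nova back on" repeatable across a whole fleet.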
It's AWS CloudFormation-compatible, and it gives you this concept of stacks for building a whole complex system. So we have all of OpenStack in a Heat template: you say "turn on OpenStack," and it deploys OpenStack for you. It also supports any configuration management system within a machine — Heat works on top. It sends metadata to the machines and manages turning machines on and off, but it doesn't do anything inside the machine, which is why we have os-refresh-config and os-apply-config. This is a screenshot of what it looks like in Horizon these days — a cool little interface that shows you the stack being built and unbuilt. So, we have this crazy concept of underclouds and overclouds. It's OpenStack on OpenStack, so we have lots of crazy layering like this. The basic idea is that the undercloud provisions your overcloud, and your overcloud is what actually produces the KVM servers that you're selling to people or giving to your tenants. The undercloud is a fully HA bare-metal cloud: instead of running Nova KVM, you're running Nova bare metal. It's self-hosted, meaning it runs on itself. The goal is to get down to one or two control nodes or so: if you have 100 nodes, two of them would be an HA cluster for control, and everything else would be blank servers that you can stamp images onto using the Nova bare-metal driver. All those extra nodes are for the overcloud. The overcloud is a fully HA, KVM-based cloud hosted by the undercloud and orchestrated by Heat running on the undercloud. And it can use the same disk images as the undercloud for most services: the two clouds are mostly the same, so you build images once and they serve both the undercloud and the overcloud.
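The core idea behind a Heat stack — resources declare dependencies and the engine creates them in dependency order — can be shown with a toy resolver. This is not Heat's real engine, and the resource names are illustrative:

```python
# Sketch of dependency-ordered stack creation, the idea behind a Heat stack.
# Toy resolver only; assumes the dependency graph has no cycles.

def create_order(resources):
    """Return a creation order where every resource comes after its deps."""
    order, done = [], set()

    def visit(name):
        if name in done:
            return
        for dep in resources[name]:
            visit(dep)          # create dependencies first
        done.add(name)
        order.append(name)

    for name in resources:
        visit(name)
    return order

# A miniature OpenStack-ish stack: API servers need the DB and the queue;
# the load balancer needs the API servers.
stack = {
    "database": [],
    "rabbitmq": [],
    "nova-api": ["database", "rabbitmq"],
    "loadbalancer": ["nova-api"],
}
print(create_order(stack))
```

Running the reverse of this order is also how you can tear services down safely, which comes up again in the Q&A later.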
So, this is what it all looks like put together, which is a little mind-boggling and confusing to look at — it took me a few minutes the first few times. The first problem we have is a classic bootstrap problem: how do we actually get started? Our answer is a seed cloud — something like DevStack — running in a VM on your laptop. You plug this into your data center. You have to register all the machines somehow; hopefully you know what machines are in your data center, or maybe you have them automatically discovered — there are a bunch of options there. Now you tell your seed cloud that you want to scale out to the undercloud: from a single VM into an HA cluster of management nodes. So now you have one machine running the OpenStack core services in your data center, and one copy running on your laptop. You unplug your laptop, and you have a non-HA setup. Heat knows it's non-HA and wants to fix itself — it wants to heal and scale back out. So you tell Heat to scale out again, and Heat takes a second node from the undercloud pool and makes it another management node. Now you have two management nodes running in your undercloud, and it's all self-hosted — that's what that little circle is here. It's actually running in itself: when it's time to upgrade, you tell the undercloud "upgrade me," and it upgrades itself. There's a bunch of strange logic in there to make that all work with Heat — you need to make sure you can do rolling upgrades and things like that; you can't bring down the whole cloud. So we're using two machines out of, let's say, 100, and the other 98 are free — free for your overcloud. And you can have more than one overcloud.
The overcloud runs Nova KVM or Xen, whatever you choose, and this is what the customers will actually see. One of the problems today with the undercloud is that we want to keep it a very restricted cloud. The reason is that things like PXE and IPMI are not too secure yet, so there are a bunch of security concerns about giving a user full access to hardware — BIOSes could be flashed, things like that. So the idea is to keep the undercloud as a private thing for the deployer. The goal is: we get rid of the seed cloud, and now we have the undercloud running, and it looks something like this. We have more than one overcloud on top, and we have our undercloud. We're continuously deploying both the undercloud and the overcloud — they're both HA, and we're continuously deploying the same code to them, so they're always running something close to trunk, always up to date, always HA. This is the basic installation process I mentioned before: create a bootstrap node; plug it into the data center network; enroll all your machines; tell Heat to deploy your HA OpenStack templates — that's a nice Heat template that we have. Heat drives the Nova API to scale out — once again, the same API we use on top and below. It tells Nova to scale out the cloud; nova boot spins up another bare-metal machine. Switch off the bootstrap node; tell Heat to recover. Now we have a fully working undercloud, and we can deploy the overcloud. Then there's managing your deployment. When events occur that change the state, Heat will trigger, causing the system to respond rapidly. You lose a machine — we're not HA anymore; we discover that, and now we need to fix it. Or we want to upgrade the system with rolling upgrades — we need to update all the machines, and Heat will manage that. And once again, Heat also solves some of these problems.
Heat — or any other orchestration tool — makes rolling upgrades really easy, and we can roll back as well: we just deploy the old image we had. Say we tried to deploy something new on a few machines and everything went horribly wrong — we roll back to the previous known-good image, we're back to where we were before, and we can debug what happened. This whole process also prevents entropy, by managing the cloud in cloud ways — which means using images everywhere. We try to keep stored state down: the only things we have to keep around between deployments are the state we actually need, and any configuration, which will live in persistent volumes. The idea there — we don't have it today — is that just as Nova has a bare-metal driver, Cinder will have a bare-metal persistent-volume driver. It'll manage something like an LVM volume on the machine, so you can upgrade your machine and leave your LVM volume, managed by Cinder, around between upgrades. This brings up a whole bunch of interesting possible applications, now that we have this whole concept of clouds on clouds. Well, one thing is we have more clouds everywhere, and everybody likes clouds here, right? We get this strange multi-cloud tendency: clouds on clouds, different users, things like that. We can integrate this into CI/CD infrastructure — continuous integration, continuous deployment. We can take the undercloud and partition off a small part of it for a separate test overcloud. You can have your production overcloud and your testing overcloud — a bunch of different overclouds. Say you're a public cloud provider and you want to make a private cloud for somebody: you can spin up another private overcloud on top, and it'll be completely isolated from every other overcloud on the system.
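Because deployments are whole golden images, rollback reduces to "re-deploy the previous image." A minimal sketch of that bookkeeping, with made-up image names:

```python
# Sketch of image-based rollback: each deploy pushes a whole golden image,
# so rolling back is just re-deploying the previous known-good one.
# Illustrative only; image names are made up.

class Deployment:
    def __init__(self, initial_image):
        self.history = [initial_image]   # known-good images, oldest first

    @property
    def current(self):
        return self.history[-1]

    def deploy(self, image):
        self.history.append(image)

    def rollback(self):
        """Drop the newest image and fall back to the previous one."""
        if len(self.history) > 1:
            self.history.pop()
        return self.current

d = Deployment("overcloud-image-v1")
d.deploy("overcloud-image-v2")   # new image misbehaves on a few machines
print(d.rollback())              # back to the previous known-good image
```

Contrast this with package-based deployments, where "undo" means reversing an unknown set of in-place mutations rather than swapping one artifact for another.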
But you have the same node pool underneath to pull from, so you can very easily, elastically scale up one cloud and bring down another. So, with all this bringing clouds up and down, what's left to do? Turns out it works today, and there's still a lot left to do. We do this today: we have a testing harness that runs all the time against trunk, trying to make sure this always works, and it actually works quite well. We're not running this in production that I know of — but I don't know anything about how HP works. There are some big pieces missing here, and we're working on them; we have solutions for all of them, we just haven't built them yet. There's no persistent storage: Cinder doesn't have a notion of bare-metal volume management. Networking is another big component — we have some concepts for making Neutron work in TripleO today, but we don't have a full networking story sorted out yet for the undercloud and overcloud networking. We have the overcloud working and most of the undercloud working. Rolling upgrades aren't supported in Heat yet, nor is the concept of a canary upgrade. If you have a big system — 1,000 machines, 10,000 machines — you can't just upgrade everything at once; it won't work. You have to upgrade things slowly: do a canary deployment, try a few machines, roll them out, see if that works. If it doesn't, bring it back down; if it does, go and deploy everything, slowly. We don't have the full story sorted out for that yet, but we're working on it. We also don't have the ability yet to upgrade nova-compute without taking VMs down, and that's a really big one. You have your overcloud, and you want to upgrade the image on it.
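The canary-upgrade flow just described — upgrade a few machines, check health, then either continue or back out — can be simulated in a few lines. Purely illustrative; the node names, image names, and health check are stand-ins:

```python
# Sketch of a canary upgrade: upgrade a small subset first, and only roll
# out to the rest of the fleet if the canaries stay healthy.

def canary_upgrade(nodes, new_image, healthy, canary_count=2):
    """Return the final image per node after a canary rollout.

    `healthy` is a callback that checks a node after its upgrade."""
    state = {n: "old-image" for n in nodes}
    canaries = nodes[:canary_count]
    for n in canaries:
        state[n] = new_image                 # try a few machines first
    if all(healthy(n) for n in canaries):
        for n in nodes[canary_count:]:       # it worked: roll out slowly
            state[n] = new_image
    else:
        for n in canaries:                   # it didn't: bring them back
            state[n] = "old-image"
    return state

nodes = [f"node{i}" for i in range(5)]
ok = canary_upgrade(nodes, "new-image", healthy=lambda n: True)
bad = canary_upgrade(nodes, "new-image", healthy=lambda n: False)
print(ok, bad)
```

The key property is that a bad image can only ever touch `canary_count` machines, not the whole fleet.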
You don't want to take down your VMs, because your users are going to kill you if you keep taking down VMs — it's the cloud; you can't take it down every five minutes. So our solution is to not take the machine down and upgrade it. The root partition is read-only; we rsync the changes from the new image in, and then just re-kick the services. The idea is the machine keeps running long-term, and you have a new image, a new version — you fix critical bug 5-4-3-2, and you need to push it out because your users are yelling at you. You just fixed it; it's a big bug. So you can deploy this out: you rsync the new code over, so now you have the new image out there, and you're not deploying from packages in production. You're guaranteed your production images are the same as your test images — but instead of bringing the machine down and back up, you rsync the image over and kick all your services. Now you have the new nova-compute running with the bug fix in it. Rudimentary HA support: we're working on this today, but we need everything to be fully HA, work well, scale out, et cetera. And the general problem — the one big thing here — is that this is sort of a double-down play for OpenStack: using it to install itself means double the fun if it all works, and double the failure. A bug now hits your undercloud and your overcloud. This is going to help us flush out more bugs. We need to be able to handle network failures and more, and we're going to see those on multiple layers now. Hardware failures, too — on multiple layers. So now we have to make sure there are fewer bugs in the underlying system. I know this was very fast and there's a lot here, so I hope you have questions.
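The in-place upgrade idea — sync the new image's files over the running root, then restart services rather than rebooting — can be mimicked with ordinary file operations. This toy uses temp directories and `shutil` in place of a real root partition and rsync; everything here is illustrative:

```python
# Sketch of the rsync-style in-place upgrade described above: instead of
# rebooting into a new image, sync the new image's files over the running
# root and re-kick the services. Real deployments would rsync the root
# partition; this toy uses shutil over temp directories. Python 3.8+.
import pathlib
import shutil
import tempfile

root = pathlib.Path(tempfile.mkdtemp())       # stands in for the running root
new_image = pathlib.Path(tempfile.mkdtemp())  # stands in for the new image

# The running host has the buggy code; the new golden image has the fix.
(root / "nova-compute.py").write_text("version = 1  # has the critical bug\n")
(new_image / "nova-compute.py").write_text("version = 2  # bug fixed\n")

# "rsync" the new image over the running root...
shutil.copytree(new_image, root, dirs_exist_ok=True)

# ...then re-kick the affected services instead of rebooting the machine.
restarted = ["nova-compute"]
print((root / "nova-compute.py").read_text().strip(), restarted)
```

The VMs on the host never notice: only files changed and services restarted, while the kernel and the guests kept running.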
Yes — the short answer to that is Heat. Heat has this idea of the dependencies in the system. You can see here this is a database, and servers, and a load balancer in the front, I think, and it has this notion of the dependencies between them. So you can use Heat to manage that process: you tell Heat the interactions between the services, and it'll take them down accordingly. This goes back to the whole idea that we already wrote this nice orchestration tool — and we're still writing it — so why not use it both for deploying the cloud and for the users of the cloud? There's a Heat core over there, so you can ask him. The answer is: very responsive. TripleO has members on the core teams — we have some Heat cores on the project; I'm not sure if we have a Neutron core, but we have a bunch of cores all over. So we have a lot of good pull on the system, and when we have a critical bug, we know how to get it fixed. So, very responsive. Yes — I don't know anything about that, though, so you'd have to ask somebody else. The answer is: this is all open source. None of this is just the team at HP working on it — we're all working upstream, so everything we do is public. We're on Freenode; all the code is upstream on GitHub, under the openstack organization — tripleo-incubator. It's all there today. We're trying to make this as public as possible. And we think there'll be multiple products — Red Hat's also very involved, and we expect HP and Red Hat will both commercialize this at some point. We also have vested interests: HP has a public cloud, we deploy massive clouds, we want things to work really well, and we want to do continuous deployment. And we believe in open source — we're doing it for everybody. Different part — HP, it turns out, is really, really, really big: 300-something thousand people. I think I'm in the printer department, though. I'm not sure.
I know there's Converged Cloud, Public Cloud, and a few others. No, I work under Monty Taylor — and I don't know who you work for, so I may. I mean, yeah. Once we have the template deployed for the overcloud, it's really the same plan for the undercloud. So we have it. It's sort of like — we need images somewhere; we might as well put them in Glance. Keeping the same API is sort of... That's right. What? That's correct — you put everything in. Yeah, all the golden images go into Glance. So we actually have — well, the problem with this is you need a lot of machines, right? So we actually have a virtualized Nova bare-metal driver. The idea is — that's right — you could do it on your laptop. A big laptop, but a laptop. You spin up some instances outside, and you have SSH as the power driver, and it's a whole big convoluted mess, but it works. Yes — I mean, you can't put a laptop into production, obviously, but yeah, we need bigger laptops, actually. There is documentation about that: if you go to the tripleo-incubator repository, there's a test, I believe — something like a 40-step set of instructions — that will walk you through the whole thing: deploy the undercloud on your laptop, deploy an overcloud on your undercloud on your laptop, and everything in between. So please try it out. We like bugs, so file bugs and we'll fix them for you. It gets slow — you have a bunch of different layers of virtualization nested inside each other, and it's a little iffy at times — so yeah, it's a bit slow, but it does work. Right — in production, the answer is no. Running 50 VMs on your laptop — or 10, or whatever it is — no matter how the layers are nested, is going to be slow. In production there are no such layers: we have the bare-metal cloud, and the idea there is that...
when you deploy an instance on the bare-metal cloud, it's actually stamping an image down onto bare metal, and it's a regular old machine, just as if it were managed by any other system. Then on top of that you have KVM running on native bare metal, and everything's fast. Yeah — Docker containers are one option here for the overcloud. There are some caveats: for one, you have different kernels — you can't really do that in Docker. If you want to run really high-performance things — say you want to run something like Trove, the database service, or something else — you could swap out the KVM overcloud and use a Docker overcloud, say. I don't know if we've actually tried Docker today — no, I think we've tried LXC on top, right? For the undercloud, we don't use containers — we use bare metal; that's the difference there. Yeah, it's been great. We've had great feedback from the community on it, and a bunch of people are joining in from other companies — Red Hat's very deeply involved, along with a whole bunch of others. And the goal is to one day... Yeah, we agree. Oh, exactly. The deployment's the easy part, so to speak; it's the upgrades that are hard — everything after the deployment. So we're trying to make this easy, and we want to make it as standardized as possible. We don't actually prefer any OS. We support Ubuntu; we support RHEL, or maybe Fedora — I'm not sure which one it is — and we'll support anything else anybody wants us to support. We're trying to be fairly agnostic: as long as someone wants to support whatever it is and puts some manpower behind it, we'll happily do that. Yes — there are a bunch of different ways we need to fix that problem, and this is a big part of it. The big step is actually having an upstream tool that we can use to test upgradeability.
And that's the big one. There are things we could do before that, but really — we're all embarrassed that we can't upgrade OpenStack that well. So this is where it gets complicated. Ideally, every patch — or every day, let's say — would be a new release. We ideally want every patch to trunk to be a new release, as much as possible. But we also want to support releases, because OpenStack does have releases, at least for now — and we're trying to change that a little bit. We do want to support Grizzly and Havana too, so upstream testing will hopefully support both. I actually tried that out recently, and it almost worked: I was able to spin up a VM on a Grizzly compute node in a Havana cloud, which was surprising. Something like this could help not only with testing backwards compatibility, but with testing the upgrade process as well. We hope to standardize the upgrade process — there are questions like, what do you upgrade first? Hopefully we'll be able to standardize those in something like this, and then a downstream consumer of OpenStack could say, "this is how they're testing upstream; we don't like that, we'll tell them to do it this way," and maybe they'll do that. So, hopefully, a known path that works — which is not the case today. I mean, networking issues always bring something crashing down. Right — so that's ultimately the goal, and that's why we're trying to make sure we don't have to restart VMs when you want to upgrade software. The hope is — yes, bugs like those will exist, but one benefit of having the same code path on top and below is that we'll hopefully hit the bugs faster, and we'll have one less code path to debug. We're going to hit those, and hopefully we'll sort them out quickly. Kernels are the hard ones.
There are ways of upgrading kernels, but we're not trying that, I don't think. Hopefully you're not going to upgrade your kernel every couple of days. In that case, one option — this is the standard answer, obviously — is you could migrate VMs and things like that. But if we wanted to, or somebody wanted to, there's — whatever they call it — live-patching kernels and upgrading them in place. That's an option, and I think we've stayed away from it for obvious reasons for now — but, you know, case by case, it is possible. Very. Yeah. So, depending on how you want to set things up, you could have one machine. I think right now we have the two core machines in the undercloud running everything, and that's going to change in the future. There are a bunch of different ways you could do it. You could have as many images as you want; we bake the images beforehand, and once you bake them, they're set — they're not going to change. So hopefully having many images won't be a problem. What we hope is that saying "we're using KVM" versus "we're using bare metal" is a fairly small change in the config, so we have the same nova-compute and use the same images for the top and the bottom as much as possible. Yeah — separating things out: we want to do that, but we haven't actually sat down and decided how yet. There are many ways we could do it, and the idea is to do as much of this as possible in Heat and in existing tools whenever possible. We have a bunch of people on the Heat team working on TripleO. Deploying something big and complex like this is a great proof point — if Heat can do this... Any other questions? That's a really good question. I think the answer is no. The problem is you're going to have a whole different layer underneath for deploying everything and managing everything, which will be OpenStack.
And so you'll have to do some sort of migration from there to here. I guess the real answer is: if you want to use this in the future, work upstream with us, tell us what you want, and we'll see what we can do. We welcome more and more people on this — it's an open-source project. If we get more feedback saying "this is a terrible idea, do it that way," that'd be great. Yeah — I don't think we've actually approached that one yet. Right — we'd have to actually migrate all the data underneath, and the databases, and the other questions. We're thinking about it. I don't know the overlap between Heat and BOSH. Yeah, that's it: it's recovery for us, and it's also "I changed something — make it look like the new thing I want it to look like." I change the image; make it do that. Nothing we're trying to address here is new — nothing here should be new to anybody; I think everybody's hit the packaging problem. So it should fit in pretty well. Yeah — you deploy your undercloud, and you just have a different image, a Hadoop image, and you deploy that on your bare metal. One of the reasons you want OpenStack underneath is because we have OpenStack on top. That isn't to say — if you don't have OpenStack anywhere, it's sort of silly to use it for this. But if you already have OpenStack somewhere else in your system and you want bare metal, you could build a separate image that's not OpenStack, with whatever you want on it, and deploy that. Right — when you're using OpenStack somewhere else in the system, you don't want two systems that do almost the same thing with two different APIs on top; we want to use the same code base wherever possible. I mean, I think it would work, but there are other questions to ask if you're not using OpenStack — all of a sudden there are more options that are really worth evaluating.
I'm not saying don't do it if you're not using OpenStack — but when you are using OpenStack, you want to use the same code base. You already have something that does the same thing: instead of provisioning VMs, it provisions bare metal; instead of volumes in a VM somewhere, it's LVM volumes on a bare-metal machine. So there's this nice analogy of treating bare metal like cloud resources, which makes it great if you want to go from a bare-metal environment to a virtualized environment for your system — nova boot will work the same way. So, some nice properties there. Going once. Thank you.