All right, guys, we've got a lot to cover, so I think we should just jump right into this. My name is Keith Basil. I'm the one who got all the distro names right in the answer. Cool. I'm a product manager for Red Hat, on the OpenStack product management team. My focus is on futuristic, strategic things like TripleO, Sahara, things like that. Some of my teammates are here, so a shout-out to Matt and to Sean Cohen in the back. This session is about the concept of using TripleO as a platform for infrastructure management within OpenStack. Some really cool things, so we'll just get started.

A little bit about me. I'm a Virginia hare scrambler. Raise your hand if you know what that means. You guys don't count; they already know me, right? It means I race dirt bikes through the woods with some other crazy guys on weekends, and it's a blast. I play chess, so if anybody wants to play a game, just talk to me. I work for Red Hat, obviously. Before that, I was with Cloudscaling. I was the cloud guy at Time Warner Cable, doing software-as-a-service stuff. I did a lot of cool work in the federal space in the DC metro area: I took a cloud architecture through FISMA Moderate certification, and I was involved with Cisco and a couple of startups. You can find me on Skype, Twitter, GitHub, and IRC as noslzzp.

All right, so here's the agenda. How many people here know TripleO very well? A few? OK, excellent. This is going to be a great talk for you, then. I'm going to set context and go through what TripleO is. We're going to ease into it and then drop into a little bit of detail. Then we're going to talk about looking at TripleO as a management platform, and show some examples of work being done today in that context.

So here we go. About a year ago at Red Hat, I put together a plan to tackle not just OpenStack deployment, but OpenStack management as well. There are a lot of tools to do deployment, but there are three phases that I consider very important for the cloud operator. You sit down with the business person and decide how many cores, what's the return on investment, et cetera — that's the planning phase. Out of that comes, let's say, a bill of materials, a network architecture, et cetera. Then you have tools to facilitate the deployment. And then the big thing: most people just stand up the cloud. But once OpenStack is up and running, how's it doing? There were no real tools to answer the question: how's my cloud doing today?

Internally, we have a persona. My manager came up with this great name: Mr. Coffee Cup. It's the guy who comes to work every day, sits down with his coffee cup, and says, boom, how's my cloud doing? There were no tools to solve for his persona, so this work is very much in line with that. To recap our goals: we want to facilitate planning, we want to do deployment, and we want to do operations and management. The last bullet point here is the most important one as it relates to this talk, because it's about visualizing capacity, looking at metrics, doing instrumentation on your hardware, et cetera.

So this is a great slide — a shout-out to my wife and my son. They'll come home and hear me on a call, and they do an imitation of me: blah, blah, blah, cloud; blah, blah, blah, OpenStack; blah, blah, blah, salamander. And they just walk around the house: blah, blah, blah, cloud.
So the takeaway is that this is very complex stuff, right? That's the Solinea Ken Pepple diagram in the background, and it shows a lot of the complexity in OpenStack. I bring that up because most of the deployment tools are all about solving the complexity part in terms of deployment. On the left there, you'll see some open-source tools: Foreman, which is what Red Hat uses today for deployment; Packstack; Cobbler. Or, if you have enough skills in house, you can roll your own do-it-yourself solution. Or you can go with something commercially supported on the right-hand side: Cloudscaling, Piston, all the guys on the right that you can get a distro from. So the takeaway here is that this is a very fragmented landscape, and the main focus of all of these tools is just deployment. Again: once you stand up a cloud, what's next?

OK, so where's the love? We've got 16,000 community members, 138 countries. But wait — what about the operators? Man, who cares? Just plus-one my code, right? The developers here are having a field day, but at the bottom you've got these lonely cloud operators who get no love at all. Everybody's saying, let's add this feature, add this feature, but nobody really considers the operational impact or aspect of OpenStack. The first day here, I wore a t-shirt that says OpenStack Super User. We got that from the meetup in Sunnyvale; it was a mini summit for operators, and a lot of good things came out of that. So what you see here at the conference is a reflection of a drive to shift more attention to the actual cloud operators.

OK, so our cloud operators need love, too. This is a famous picture on the internet; I modified it a bit. I branded the magical unicorn OpenStack, and the cat is our hero — he's got the operator bandana on. This is our new persona. The people running OpenStack every day are our superheroes, and they're who we're solving for.

OK, so: TripleO for infrastructure. At a high level, these are some of the things we want to provide to Mr. Coffee Cup, or to the cat on the unicorn. We want to give this person a dashboard. We want to help them with capacity planning and resource modeling — so when the CIO says, hey, do we need to expand our cloud?, you can answer: absolutely, we're going to need X number of compute nodes and another petabyte of object storage. Being able to answer those questions is what we want to enable. We still want to facilitate deployment and provisioning, and we want to visualize our cloud: the status of the cloud, the status of core services, what's the rack doing, what's our class of general compute doing, et cetera. You can see at the bottom some sample mock-ups of dashboards.

So we're going to jump right in and explain what TripleO is. TripleO is OpenStack on OpenStack. For those who understand TripleO, you can probably appreciate a word cloud describing a cloud that deploys a cloud — but let's ignore that for now; we're not going into the matrix. Basically, just imagine a simple deployment and management application. It's just an application that deploys OpenStack. If you hold onto that idea, you'll go a long way. This application features deployment to bare metal, it's built through a community-adopted process, and we want it to do the visualization we talked about earlier.
And I think the most important takeaways here are, one, that it's cloud-operator focused — we're trying to give the operator some love — and two, that it's extensible. And because it's using OpenStack, we get all the goodness of scalability and resiliency in the management infrastructure itself. Sounds cool, right? So let's talk about it in a little more detail.

All right, before we go into detail, we're going to do a quick recap of OpenStack, and you'll see why this matters here. So: OpenStack in a minute or so. We have some components on the screen. In the traditional OpenStack use case, these services are used to manage virtual infrastructure. I'm going to talk about three of them, because those are the ones that relate most to TripleO — and this is only a subset of OpenStack.

Nova provides the command and control services: it orchestrates the rendering of virtual machines. Heat provides application orchestration: you define a Heat template, you carve out your resources in that template, you hit deploy, and Heat cobbles it all together and builds your application in a repeatable fashion, again and again. But it's targeted toward virtual infrastructure. And finally, we've got Ceilometer. Ceilometer is used to capture usage data for the VMs, so you can do billback and chargeback for your customers. So that's a really light touch of OpenStack — and this is OpenStack Summit, so you should know that much already.

So now let's look at the same components, but in the TripleO context. The concept of TripleO is to reuse existing OpenStack components, but instead of targeting virtual infrastructure, we're going to target the underlying hardware — the physical infrastructure for managing the cloud. It talks to bare metal. So you have to shift your thinking and look at each one of these through the lens of: how do we use this against hardware?

Nova: same role, command and control. You don't have to take pictures, by the way — there's a QR code at the end, and you can download all of this as a hi-res PDF. Nova provides command and control services, just as it always does, and it orchestrates the rendering of machines — but instead of spinning up on KVM, now we're going to facilitate spinning up on bare metal, using things like IPMI and PXE. Today, Nova has what are called bare-metal drivers that speak IPMI and PXE; it's like a layer of abstraction for the hypervisor, so Nova treats the bare-metal drivers as a kind of special hypervisor and deploys to the bare metal. In the future, this will be 100% Ironic — there are probably some Ironic people in the room. Ironic is a new service within OpenStack that's designed to talk to bare metal, and that's where all the driver support and all the nuances — iLO versus DRAC versus IPMI — will be baked in going forward.

All right, the next thing is Heat. Heat is a very interesting use case when you point it at hardware. In our mind — and this is kind of a Red Hat view — we see Heat as describing racks of gear, racks of resources. It's like a mini reference architecture for hardware.
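To make that idea concrete, here's a minimal sketch of what such a template could look like. This is illustrative only — the image name, flavor, and resource names are assumptions for the example, not what Tuskar actually generates — but the pattern is standard Heat (HOT) syntax:

    # Minimal sketch only -- not Tuskar's actual output. The image and flavor
    # names are hypothetical; the structure is ordinary HOT.
    cat > compute-rack.yaml <<'EOF'
    heat_template_version: 2013-05-23
    description: One rack of general-purpose compute, defined once, repeated as needed
    parameters:
      node_count:
        type: number
        default: 8
    resources:
      compute_rack:
        type: OS::Heat::ResourceGroup
        properties:
          count: {get_param: node_count}
          resource_def:
            type: OS::Nova::Server
            properties:
              image: overcloud-compute   # golden image registered in Glance
              flavor: baremetal          # maps to a class of physical nodes
    EOF

A second identical rack is then just another stack created from the same template with different parameters — which is exactly the repeatability point.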
So if we have a rack of compute, we can say: this compute rack is going to serve our m1 class of hardware for the production cloud, these are the images we're going to run, here's the IP addressing for that rack, et cetera. That gives us a definition we can repeat throughout the cloud. A very powerful use case for Heat.

And then finally, we have Ceilometer. Instead of collecting VM stats, we're looking at the hardware: network I/O for the hardware, maybe a special agent that checks the amount of memory in use on a compute node, et cetera. There's a lot we can do here in terms of instrumentation of the actual hardware.

OK, now I'm going to talk about Tuskar, which is a sub-component of TripleO. It was introduced by Red Hat, and right now Tuskar provides deployment management services. What that means is that we created an API for you to orchestrate your deployment of OpenStack. There's a user interface built on top of Horizon, there's a CLI, and there's an API you can call. In the background of the slide — you can barely see it — there's some code, a command-line script, that actually deploys a cloud. You run a command and it just goes and builds your cloud. Really powerful stuff. We follow the same reuse model as TripleO, so we're reusing Horizon and reusing Ceilometer to facilitate the infrastructure work.

OK, so TripleO is an OpenStack program; its official title is Deployment, and Tuskar's focus within it is infrastructure management. TripleO was started by HP, and the original goal was CI/CD. We looked at that and said, hey, this would be a good vehicle for actual infrastructure deployment of OpenStack. Then we saw some gaps in terms of Mr. Coffee Cup — the cat on the unicorn, the operator — so we introduced these pieces to help facilitate the visualization, the automatic deployment, the API, et cetera. And this is an email from Robert Collins last year saying that the Tuskar code has been merged into TripleO, so we're all one big happy family at this point.

All right, let's talk about the deployment flow. You've been baptized into TripleO a little bit, so let's walk through what deployment actually looks like. Remember: a simple application that deploys OpenStack. Before we go there, I want you to understand one key concept: we have two clouds. Inside this black box is OpenStack — I didn't call out the detail because it's kind of overwhelming, but this is really OpenStack in a box. That's cloud number one, and it deploys what we call the production cloud. That's the one the tenants will see, the one you know and love. If you're talking to an engineer at a design summit, they'll call it the overcloud. In the same fashion, the bottom cloud is what we call the deployment and management application, or the command and control cloud — the one only the operator will see. Remember, everything in that box is OpenStack: Horizon, Glance, Nova, et cetera. From an engineering point of view, we call this the undercloud.

So now let's walk through the deployment process. Our goal is to end up with an operational cloud. The red boxes there are management nodes.
These are the three instances of TripleO — your command and control cloud — and they're going to deploy to the white boxes on the screen. The first thing you do is install TripleO onto one of these management nodes. You can start with just one, but at the last design summit — the mini summit in Sunnyvale — Robert Collins recommended running three for redundancy. So you can go from one to three, or to whatever number you need based on the size of your cloud.

Second, once that's up and running, you log into Horizon as the operator — again, on the command and control cloud. The operator then defines the controller rack and the resource racks: the second rack over could be block storage, the third rack object storage, the last rack compute — whatever your business requirement is for OpenStack. You define those, and then you basically hit the deploy button. What happens then is that Tuskar collects all the data you used to define the cloud, builds a Heat template, and triggers that template for deployment. Heat calls Nova to get the images deployed onto the resource nodes, and Nova talks to Ironic, which talks to the hardware. It's a little complicated, but it works, and it's actually pretty good. We had an early demo of this running in Hong Kong based on the Nova bare-metal drivers: we had a rack of Quanta gear, and we were doing full rack deployments in about 15 minutes.

The good thing about TripleO is that it embraces the golden image, or image-based, deployment model that Robert presented. Because of that, you can deploy a cloud very fast — I think they were saying about six minutes per node, and the nodes can be deployed in parallel, by the way. So, good stuff.
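To give a feel for the moving parts under that deploy button, here's a hedged sketch of the flow from the command line, using the Icehouse-era tools named in this talk. The node details, image names, and parameters are all illustrative:

    # Hedged sketch; addresses, credentials, and names are illustrative.

    # 1. Enroll a physical node with Ironic: power-management credentials,
    #    then the NIC's MAC address (from the vendor's rack manifest).
    ironic node-create -d pxe_ipmitool \
        -i ipmi_address=10.0.0.21 -i ipmi_username=admin -i ipmi_password=secret
    ironic port-create -n $NODE_UUID -a 78:e7:d1:23:45:67   # uuid printed by node-create

    # 2. Build a golden image with diskimage-builder and register it in Glance.
    disk-image-create -o overcloud-compute fedora baremetal
    glance image-create --name overcloud-compute \
        --disk-format qcow2 --container-format bare --file overcloud-compute.qcow2

    # 3. Deploy. Tuskar's deploy button ultimately amounts to a Heat stack
    #    creation along these lines.
    heat stack-create overcloud -f overcloud.yaml -P compute_count=8

The point of the image-based model is that step 2 happens once per image, not once per node — which is why per-node deploy times stay in the minutes and parallelize well.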
OK, so now that you're level-set on TripleO, let's talk about TripleO as management. I want to make one change: instead of calling it an OpenStack management application, I want to call it a platform. Again, TripleO was introduced by HP, Red Hat added the Tuskar part, and the focus was deployment — but why stop there? So let's look at expanding the scope of TripleO. Inside that box was OpenStack, so we already have components to use. They're already there, the APIs are well known, and we're not changing the APIs: we're talking to Heat, to Nova, to Glance, to Neutron, just as before — we're just focused on a different area. The operations focus here is very strong, very natural. And I think the best takeaway is that the community can treat this as an open platform that's distro-agnostic and can be extended as you see fit. We think this is a great platform for ongoing operations, obviously, and because of the easy button it introduces, we think it can increase OpenStack adoption tremendously. I grabbed this last graph from the recent survey that Tim Bell's group did, and a couple of things here resonate with this model: avoiding vendor lock-in for your management platform, cost savings, open technology. I saw a few other sessions yesterday where people were naturally gravitating toward OpenStack as a standard set of APIs and using those APIs to control other IT resources in the organization. So this is a very natural fit for that as well.

OK, so we have some vendor FAQs — there are a lot of OpenStack vendors in this room, and I've talked to quite a few of you. The question is: what does this mean if you're a compute vendor? What does it mean if you're a service-monitoring vendor, a security vendor, a network vendor? What's the context here for TripleO? Well, we see a few integration points. Remember, everything you see on the right is standard OpenStack — well, not everything; it also introduces Ironic. For the dashboard, it's Horizon: if you can create a Django application, you can visually surface whatever value-add you want to roll up for the operator. The deployment orchestration is based on Heat. Within Tuskar, which talks to Heat, there's a concept of roles, services, and elements. A role would be a compute node or a block storage node; within a compute role, a service could be the Nova API or the metadata service, et cetera; and you can go down further and build elements. The deployment orchestration basically lets you take these as Lego blocks, put them together as roles, translate that to a Heat template, and click deploy.

The reason that's important is that if you have a service you want to introduce to this platform, you could just create a role called, say, service monitoring, bring your application into Glance as an appliance, build out your Heat template, and off you go — it would deploy it, and you're pretty much good to go (see the sketch below). If you're a hardware vendor, you might care about the bare-metal drivers. And then there are some supporting components. For example, we have a profile for NetApp: when you bring NetApp hardware online, it's sitting there in a rack, but there's no awareness of the new capacity in the overcloud, the production cloud. So you might have to make some scheduling modifications, or tell the overcloud, hey, there's a new set of block storage that can be consumed. So there are supporting components to consider as well.

This is a little matrix I put together. In the middle you have hardware vendors. They care about the operator dashboard; they care about bare-metal drivers; they care about instrumentation, getting stats on how their hardware is doing; they want orchestration for automatic deployment; and depending on the resource, they may need to inform the scheduler — for block storage or compute, they would inform Cinder or Nova, respectively, that there are new resources to consume. For software vendors, it depends. Take the case of a security company: they would probably want something visual in the dashboard. Do they care about bare metal? Probably not. Instrumentation? Maybe — security is a special case, because a lot of security people are looking for integrity checking and compliance, and I'm not sure there's an instrumentation play there unless they're actually packet sniffing — sorry, I'm going into my federal stuff — packet sniffing and checking for additional vectors that may be affecting the infrastructure. It's possible. But you see the idea: this gives you some ways to think about where and how you can integrate with TripleO.
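As a sketch of what that new-role path could look like at the Heat layer: the appliance image lands in Glance, and a Heat environment maps a custom role name onto a vendor-supplied template that boots it alongside the other racks. Every name here is hypothetical — this shows the shape of the integration, not a real vendor API:

    # Hypothetical sketch; the "MyVendor" names are invented for illustration.

    # The appliance lands in Glance like any other image.
    glance image-create --name svc-monitoring-appliance \
        --disk-format qcow2 --container-format bare --file appliance.qcow2

    # A Heat environment maps a custom role name to a vendor-provided template.
    cat > vendor-env.yaml <<'EOF'
    resource_registry:
      MyVendor::ServiceMonitoring: service_monitoring_role.yaml
    EOF

    # Deploy with the extra role mixed into the stack.
    heat stack-create overcloud -f overcloud.yaml -e vendor-env.yaml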
OK, so we're going to walk through some profiles — and this is a warning. Everybody ready? All right, back to the show. Everything you see here: no promises, no roadmap, no availability announcements. These are sketches only, to illustrate what's possible. So, for all the analysts in the room, ignore everything you're about to see.

All right, this is a profile for NetApp. We're working with NetApp on some early TripleO integration for their product line. They have two product families: the E-Series, which is the more commoditized block storage box, and the more advanced FAS boxes. I'll run down the list and then talk about the profile for each one. NetApp is very good at showing a cloud operator what their gear is doing — that's the value there. Storage utilization, efficiency ratios, reserve capacity: all of this is something they do very well today, along with the benefits of cloning, dedupe, and snapshots. So you can actually get a global view of the cloud based on this particular resource. I put the rack over there because, as I said earlier, a Heat template can be a logical representation of a rack elevation — it can describe everything that's going to be deployed in that rack based on the NetApp gear.

So the integration points would be Heat, for the rack elevation and cobbling things together, and Ceilometer for instrumentation. Right now we're talking about instrumenting their hardware via SNMP and by talking to their APIs directly, so there's some translation going on there. As for Ironic, I think Ironic is only a good fit for the E-Series boxes, because those are the more commoditized ones. It was explained to me that the FAS boxes are very advanced and not something you want Ironic reprovisioning over and over again — imagine a petabyte of data with people actively using it; that's not something you want to tinker with on a day-to-day basis. With that in mind, we're going to look at using Heat to orchestrate carving out virtual volumes and presenting those volumes to Cinder for block storage in the cloud. That's the best of both worlds, and it's actually easier for us to do the integration that way. What you see on the screen is a mock-up based on their existing visualization tools, but we can work on moving that over to Horizon, so there's one unified interface to the cloud for Mr. Coffee Cup.
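On that instrumentation point: once hardware meters are flowing — Ceilometer has SNMP-based hardware.* meters for exactly this — the operator can query physical gear the same way as anything else. A hedged sketch; the resource IDs are illustrative:

    # Hedged sketch; resource IDs are illustrative. The hardware.* meters come
    # from Ceilometer's SNMP inspector rather than the usual VM-centric agents.
    ceilometer meter-list -q resource_id=rack2-netapp-e01
    ceilometer sample-list -m hardware.network.incoming.bytes \
        -q resource_id=rack2-netapp-e01
    ceilometer statistics -m hardware.cpu.load.1min -p 3600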
We're also talking to Dell about doing cool things with their hardware. Today, Ironic is focused on IPMI — there are some other drivers upstream, like the SeaMicro drivers — but we're going to try to work with these guys to bring DRAC support for their hardware and do orchestration of the firmware. Hardware is notoriously complex. Seth and I lived through a really big install where the BIOS firmware on the machines was just too fresh off the assembly line, and machines would randomly turn themselves off. It was actually a good test of redundancy in the cloud in general — the cattle model absolutely works. But orchestrating firmware, BIOS, and RAID configuration: we see a lot of that as a pre-configuration Heat template process, where you boot a RAM disk, do some configuration, boot it again, et cetera, until you get to the state you want, and then it's ready for deployment. So the integration areas here: Horizon — we want to bring the console into Horizon, so you can drill down to a machine and get a console on the hardware — plus Ironic, Ceilometer for instrumentation, and Heat, again, to describe a rack of Dell gear serving a certain resource.

And PrivateCore — these guys are here at the summit, a very cool security company. Oded, one of the founders of PrivateCore, was talking to me yesterday, and he mentioned a concept that I wrote down: privacy of computation. What this means is that they basically have software to ensure that the memory space, the trusted boot — everything from the hardware all the way up to the VM — is secure, trusted, and private to you as a tenant within OpenStack. So those are the things we'll be talking to them about bringing to OpenStack, and to TripleO specifically. They're already using Horizon today, so it's an easy case to bring their Django work over into the latest version of Horizon. I think this would be a great tool to help us solve bare-metal-to-tenant use cases: if you're going to give a customer a complete machine, you can ensure that the integrity of that box has not been compromised and that there's no software left over in the firmware as an attack vector going forward. The integration points here are Horizon, Ironic, Ceilometer, Tuskar — and, interestingly, overcloud scheduling, because now we can give the tenant a family of instances that are trusted. So you could have trusted.small, trusted.large, trusted.whatever, and these guys can make sure that all falls into place.

GroundWork is another company — strong monitoring, and already a Red Hat partner on the RHEV side. We're talking to them about bringing what they have today over to TripleO. They have some cool heat maps, and they can take an image and — showing my age here — image-map it, so the presentation is driven by where you are within that image of the node. Pretty cool stuff. We're working with them on possibly doing some plug-ins upstream, and on Glance support for their application, so you can deploy GroundWork as a VM, as a core service within OpenStack for the production cloud. Integration points: Glance, Ceilometer, Tuskar, and Horizon.

And Solinea — these guys are some ex-Cloudscaling folks who started the company. Really cool service. They do monitoring, but in a different way: they look at all the OpenStack logs to do discovery, and they can answer things like, how is my cloud, how are the core services doing, what's the API performance, what about VM spawns? Really cool stuff that matters to an operator at scale.

And finally, you probably know that Red Hat acquired Inktank, the company behind Ceph, a few weeks ago. Ceph has an enterprise tool called Calamari that gives you visualization of the state of the Ceph cluster. Sean Cohen has been guiding me to think about this from a strategic point of view, and this is just a mock-up I did based on the Calamari service: when you deploy Ceph, you could embrace this to see exactly how your OSDs and so on are doing within the Ceph cluster.

So: TripleO has momentum. There's a lot of work upstream. Red Hat is doing a lot of work on Tuskar; the HP guys are working on TripleO itself; and, believe it or not, Rackspace is killing us in terms of commits on Ironic — they're doing some awesome work with their provisioning process, so hats off to those guys. This has a lot of momentum and some big names behind it. It's here to stay, I think.

So we released a tool called Instack. You've probably heard of a tool called Packstack.
Packstack was kind of a proof-of-concept installation tool, and it didn't do any bare metal. So we have this thing called Instack, which is based on TripleO, and it will actually install everything I just talked about, including the UI. So you can do a real deployment: you click the deploy button, and it takes your cloud to bare metal, using RDO Icehouse today. Also of note, HP announced Helion as their own OpenStack distro, and they actually use TripleO for their installation process. And again, Rackspace is doing some great work trying to solve the multi-tenancy use case.

Now, there's a lot of work that still needs to be done. Discovery of nodes needs to happen. Complex hardware — going back to the Dell use case — BIOS configuration, RAID configuration: all of that has to be solved. And I personally think infrastructure awareness needs to be solved; some of the folks I mentioned, maybe Solinea, can help with that, and there are others — I could have made a whole list. But this has momentum. We, Red Hat, see this as the long-term install tool for our own product. Right now we use Foreman, but going back to that landscape of complexity and fragmentation, we want to consolidate on something community-driven, so that we can shed the technical debt of a tool that's specific to our distro. So with that, I'll take questions. And here's the QR code for all the content.

Yes, sir. You mentioned the undercloud, which has three nodes — why specifically three? Why three? It's for resiliency. Why not two for resiliency — why do you need three? It could be two; it could be one — though one is not resilient. So why specifically three and not two? Right: if you've got more than two, you have the opportunity to start doing a rolling, tested, canary-style upgrade. You can pull one out of the three, move it up to a new release, and start testing it. Is it OK? Good. Maybe grab another machine from a different pool that's not being used and join that one in. Is it working now? Are they working together as a cluster, passing all their tests? Then we'll flip over to those. So it lets you maintain high availability on the existing set while you're building the new one. You're just generally slightly more flexible in that area.

Any other questions? Yes, sir — here, come to the mic. What is the status of upgrades with TripleO? If you have deployed infrastructure using TripleO — since TripleO is an image-based deployment — how is the upgrade going to work? OK — do you want to answer that? This is Hugh Brock. I'm the product manager — I think up the ideas, and this guy actually makes them real. Yeah, the question was: does TripleO support upgrades, and if not, when will it, and how? The answer is that TripleO's core mission, more or less, is to facilitate upgrades. So yes, it does, and it will support them much better in Juno, once we have a whole bunch of stack-update work done in Heat and a full HA implementation in the Heat templates. The reason TripleO is focused so closely on upgrades is that its roots are in continuous deployment.
So we want to be able to upgrade the cloud all the time — every day, every two days, whenever. It's absolutely critical that we be able to do that with no downtime. We're not there yet, because we don't have full HA on the overcloud. Ideally, say you're going to upgrade Neutron and you've got three of them: you take one down, re-image it, bring it back up; take the next one down, re-image it, bring it back up; and then fail over to the new service. And you do that multiple times. There's work going into Heat this cycle — the Juno cycle; there's actually a session on it right now — to make that process automatic. Right now it's manual. We at Red Hat have cobbled together a bunch of stuff to facilitate upgrading your OpenStack cloud, and we feel the TripleO solution is going to be much more robust over time.

Yes? So the question was: can we handle installing services on a mix of virtual and bare metal? Conceptually, yes. The way you would do that is to carve out your bare-metal machines, which get assigned and registered in Tuskar, in TripleO — and then you would also deploy a hypervisor. We were just talking about this for core services: you don't necessarily want core services on bare metal where you could run them virtualized; you want to optimize the placement of everything. So you could have a mixture, where TripleO targets virtual machines — and again, this is the undercloud — and also bare metal, to get more performance for your application or something specific that you need. It's possible, and it would be a really cool Heat template, actually.

Oh, sorry — wow, OK. So, as a management platform, compare this to Foreman today: you make changes in Foreman and it pushes them down, but if you make changes on the system, they don't push back up into Foreman. So you've got to use Foreman exclusively to manage your systems downward; you can't bring changes back up. Will TripleO reverse that, and let you go either CLI or GUI? If I understand your question correctly, this is about configuration management. The default approach is image-based deployment, following the Robert Collins philosophy of golden images — so by default, everything is in sync. Now, you can do a hybrid approach, where you deploy an image and then hook into your existing Chef or Puppet system, and I think that's probably the best of both worlds. As far as detecting deltas on the nodes and rolling them back up: not happening. But you can tinker and play. I will say that we have very solid CLI support right now with Tuskar, so there's no limitation there — you don't have to do anything from the UI. If you want to tinker with parameters, redeploy, tinker again, redeploy, that can all be done from the CLI, and it gets saved every time you do it. Whether you do a deployment from the UI or from the command line, it should be completely repeatable.
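For concreteness, that tinker-and-redeploy loop bottoms out in ordinary Heat calls underneath Tuskar — something like this hedged sketch, with illustrative template and parameter names:

    # Hedged sketch; the template and parameter names are illustrative. Because
    # the deployment is a Heat stack, changing it is a stack-update -- the same
    # primitive the rolling re-image upgrade path builds on.
    heat stack-create overcloud -f overcloud.yaml -P compute_count=8
    # ...tinker with a parameter, then redeploy...
    heat stack-update overcloud -f overcloud.yaml -P compute_count=10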
Yes, sir. So I've been playing with TripleO a little bit over the last six months or so, and it's great. The paradigm I'm seeing at the moment is: you pull a Git repository, you use the build scripts to create the TripleO images, and that creates the seed, which creates the undercloud, which creates the overcloud. Now, you mentioned adding hardware through a GUI. Is your plan to augment that process so you generate the seed ahead of time and it provides the GUI, and then you add the hardware into that? Where in the stand-up process does the GUI you showed — the one where you add the hardware to push the cloud to — actually run?

So: you install the seed node — that's the basis — you log into Horizon there, and then you register your hardware in the GUI, and that's where it's captured. Today, hardware registration is a manual process. So if you install five racks of gear, and your vendor is a good vendor, they'll give you an Excel spreadsheet with all the MAC addresses for that rack elevation. You copy and paste that into Tuskar, and bam — there are your nodes, and you can assign roles to those machines. In the future, we'd like to have some kind of distributed presence within the racks, or maybe some awareness of the Layer 2 domain, so we can discover those machines and roll them up for presentation to the cloud operator.

Yes, sir? I can't hear you — the seed Horizon or the undercloud Horizon? It's in the undercloud; what goes into the seed gets migrated to the undercloud once it's built. Actually, let's not use the word seed: this is the undercloud. We install the undercloud, and that's where you register your MAC addresses. I just want to add one more thing to that. At Red Hat, we've built this Instack tool that bypasses the seed process. What we basically do is take an existing Fedora or RHEL/EL6 machine and run the elements on that machine to create an undercloud. It's a shortcut around the seed process, which is what upstream is using; it makes it a little bit easier for us to just install an undercloud and use that to deploy — if that makes sense to everybody. And yes, if you look at the QR code, there are links to the Instack tool to do the installation with RDO.

Yes, I think we're done. Thank you, guys. This has been great.