Everybody ready? If you're here for the reference architectures discussion, it's over, I think. I think what we learned from that is that it's going to take a lot longer than we thought it would to make a reference architecture. Welcome. How's everybody doing? Day two, we're getting through. The food coma has just about set in. And you came to a great talk. So we're going to talk today about how we're trying to manage OpenStack with Heat. That sounds pretty amorphous, and we may have attracted a few people who think the wrong thing about that. Specifically, on the TripleO team at HP, under the tutelage of Robert Collins, we're actually trying to deploy OpenStack with nothing but OpenStack. We're not going to use Puppet. We're not going to use Chef. Not that we don't love them. But we want to be able to deploy with things that OpenStack developers can advance. We want to get that into the CI. So I'm going to talk a little bit about why that's important.

I don't know if you know this, but you have a problem. You may not know it. You probably do, though. And it's not that serious of a problem, but it's OK, because I have that problem too. And in fact, we all have a problem. And given that we're all here at OpenStack Summit and we're all interested in the same thing, can anybody guess what it might be? Dirty, dirty floors. Everybody has this problem. You have filthy floors at home. And you may not care. Maybe you take your shoes off, whatever. But there's a really simple, easy solution to that. Just vacuum your floors. Somebody automated vacuuming for us. We used to have to sweep. But now we have vacuum cleaners. And because of that, we're free, right? We don't have dirty floors anymore. We just have to go through the drudgery of finding the thing, pulling it out of the closet, plugging it in, and vacuuming our floors. Somebody remove that, man.

In theory, though, all of us here, anybody, even the least educated of us, given enough time, given enough education, we could send someone into space. We could put things in space. We could do whatever we set our minds to. At least that's what they told us in the after-school specials. And I believe them. But we have to vacuum. We have to spend our time. Right? No, stupid. What if you had an iRobot? We should automate more. Sure, the vacuum cleaner is a better version of the broom. But we should automate. We really need to automate. And we all know this. I saw a great slide: automate all the things. It's important.

And we all have jobs. Some of us work in data centers. Some of us work on OpenStack, maybe. And we have dirty floors, too. We have alarms. We have pages. We have graphs. We have things that have to be tended to over and over. They're drudgery. And somebody might say, that's my job. But honestly, if you have a job with little or no automation, or with last-generation automation, your job sucks. So stop wasting your time. Automate. And everybody's already got this message. This isn't a new message. Automate things is an old message. And you say, yeah, I do. I have Chef. I have Puppet. Maybe I have Salt. I have something like that. And believe it or not, I'm here to tell you that that's the current generation, maybe. That's maybe even the last generation. We could automate more. My analogy, going back to the iRobot, is that what you have is really an awesome vacuum cleaner. Maybe even one that's remote-controlled, so you don't have to get up to use it. But you still have to tell it all the things to do.
You have to tell it all the packages to install. You have to tell it all the files to write. You have to tell it what to do. It's still somewhat imperative. It looks declarative. You're declaring the things inside of a server. But how many servers does it take to run an OpenStack? Quite a few more. And I'll back up a little bit. Before we all had Puppet and Chef, did anybody do this? Raise your hand if you did SSH in a for loop. OK, good. Because I was a little worried I was the only one. And this was awesome. The first time you figured this one out, you went, oh my gosh, I have power. And then this happened. And you're like, oh, what do I do? I know. I'll vacuum. Clean it up. But that's a complicated operation, your SSHing in. Or with Puppet and Chef, you do have a good answer now. Vacuuming your server is straightforward: run it again. Maybe you need to update your rule sets or things like that. But the failure handling is still manual. You're still doing things on individual servers, and therefore probably introducing entropy, and probably, at scale, losing time, so that you can't launch your rocket.

And so the new cloud way is upon us, right? The new cloud way is: just delete the server. Get rid of it. It's gone. Put a new one in. I mean, unless you have data. Who has data? The reaction to that is somewhat imperative right now, because we just learned of this problem, because we just discovered the cloud way. And it's thus very complex. So one of the things that we're trying to do with TripleO is to separate these concerns a little bit, to try to get into a space where we have just one tool that does just one job in this new cloud way. There are some really overloaded terms up there, I know: provisioning, software configuration. What do they mean? And I'd be happy to have a discussion on each one with you. But what I really want to talk about is orchestration, because it's something that is only now becoming obvious as a really, really big need. And especially if you're an OpenStack operator, you're going to need this all over the place. You've got lots of services to deploy. And entropy is just waiting for you on each one of those.

So Heat's job in deploying OpenStack under TripleO is to be a structured, declarative, multi-node, multi-service orchestrator. What does that mean? If we go back to our analogy with the iRobot, Heat is the iRobot. And we don't tell the iRobot, go over there, and then go over there, and get the dirt over there, and then come back. You just turn on the iRobot and declare: clean. Clean this space. Maybe you put some boundaries around it; if you've ever used one, it's kind of neat, you can change the modes a little bit. But you declare to it, I want you to clean this space. You put it in the space and it cleans it. That's what I see as declarative orchestration: you declare the architecture that you want. You declare all the infrastructure, the dependencies, the ordering, and Heat goes out and makes that happen. One of the great things about Heat is it can be run completely agnostic of all the things that we've subscribed to as our religion: Puppet, Chef, Salt, Juju, whatever. Inside the instance, Heat is just a tool that gives you the data, the declarative notion of what you should do next, what you should do now. I just have a little example of a Heat template. This is a snippet. This is not the whole thing. They are rather vertical and they are somewhat heady. But what you see here is the YAML representation. You can use JSON as well.
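A minimal sketch of the kind of snippet being described, assuming current HOT-style syntax; the resource and parameter names (notify_handle, server_group, important_config, the image and flavor values) are illustrative, not taken from the talk:

```yaml
resources:
  notify_handle:
    type: OS::Heat::WaitConditionHandle

  wait_for_servers:
    type: OS::Heat::WaitCondition        # Heat pauses here until the servers signal back
    properties:                          # through the handle (the signalling itself is not shown)
      handle: { get_resource: notify_handle }
      count: 3
      timeout: 600

  server_group:
    type: OS::Heat::ResourceGroup        # "give me three servers, as identical as possible"
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          image: overcloud-image         # illustrative image name
          flavor: baremetal
          metadata:
            important_config: foo        # the value you would later diff from foo to bar
```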
The idea that I have here is that we have a wait condition, which is a way for Heat to wait for something. Simple. And we have a server configuration, which tells us how to configure each server in the next thing, which is a group. So I've said, give me three servers, make them as identical as possible, and give them that server configuration. What's nice about this is that when you want to change it, you can diff it. So if I'm changing an important configuration from foo to bar, I diff it before I roll it out. That's nice for at least some kind of prediction. And then we need to roll it out in an intelligent way. Right now, Heat doesn't do that in all that intelligent a way. It's still a copy of CloudFormation. And only about three weeks ago, CloudFormation announced the feature that I need most in Heat to deploy OpenStack: rolling updates. We'll talk more about that later. But what you're doing at that point is basically telling Heat, expose this new view to those servers. Expose this whole new view. And that may even ripple through other parts of the templates. There are even embedded stacks, so this template may expose pieces to other ones. The idea is that I only think about the fact that I need to change that important config, and then I look at what that does in the system.

What's great from a continuous deployment standpoint is that I can deploy this into my testing rack or whatever and get some really good, hard answers on whether or not this is actually going to work. I can deploy the old one and then do this update and see what's going to happen, and have reasonable certainty that it's going to be the same, especially since now I'm using OpenStack tools like Nova for provisioning. So I could even do this in a VM while I'm doing the development of the change, and then finally go to the test rack, and then go to continuous deployment.

So one important thing that I like to say is that bare metal is not special, with an asterisk. It is a little bit special in that we can't just run KVM and start up a server. We can't just create it out of thin air. We have to find it in the rack and we have to turn it on. If you missed Robert's talk on TripleO, I'm hoping those will be online; definitely go see that. It talks more about using all of the OpenStack tools, so bare metal and everything, to manage this. But there are a few new rules, and you need to keep your networking a little more simple. We don't have as much control over your networking, unless you have OpenFlow switches, and then it's pretty much identical. And data is still weird, because you have direct-attached disks. So we can't just say, allocate a gig of space and expose it to the VM. You have what you have.

Also, there's a great effort going on, which is interesting because the talk before this was, I think, somewhat ignorant of the fact that it was talked about earlier in the day, which is an attempt to define, in Heat templates, a reference deployment of OpenStack. That's being championed not only by Monty Taylor, who's also working on TripleO, but also Rob Hirschfeld, who's one of the architects of Crowbar. The idea being that you use one single template format to drive multiple deployment engines. We should be able to get there, because if we stay religiously declarative, we should get there.

We need a few things in Heat. Heat needs to have a canary rolling deploy type of system. That's where you maintain a maximum percentage of change going on during an update, and you also scale it up a little bit. So if you have 100 servers to update, you update one. Make sure it passes the tests. Make sure your monitors say everything's still working. Then you update two. Then you update 10. And then you go forward from there. And if anything goes wrong before some point-of-no-return point, then you roll back. So we're going to be driving that feature in.
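As a hedged illustration of what such a policy could look like, here is a sketch modeled loosely on CloudFormation-style update policies and on what later appeared in Heat; the attribute names and values are illustrative, not the syntax that existed at the time of the talk:

```yaml
resources:
  server_group:
    type: OS::Heat::ResourceGroup
    update_policy:
      rolling_update:
        min_in_service: 99       # with 100 servers, never take more than one out at a time
        max_batch_size: 1        # start with a single canary
        pause_time: 60           # leave time between batches for monitors to catch a failure
    properties:
      count: 100
      resource_def:
        type: OS::Nova::Server
        properties:
          image: overcloud-image
          flavor: baremetal
# Growing the batch size (1, then 2, then 10) and rolling back automatically on a
# failed canary are the parts the talk says still need to be driven into Heat.
```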
Also, all new projects seem to go through this, where it's like, we just need to get it out and get the demo working, and oops, we forgot security. Actually, the Heat team has done a great job, I think. So SSL and TLS should work for all of the Heat services. But nobody's really testing that yet. I don't think many people are even testing Heat. By the way, has anybody out there deployed Heat? Awesome. Just a couple of hands. Cool. Please, more. Everyone try it. It's really interesting and cool. But we're going to need to improve the security. There are also some internals, like using Keystone trusts. And one thing that's good, actually, I don't know why I put that here: API and engine separation.

And we also need to improve the performance. We had a great discussion yesterday on the parallelism that becomes possible in an orchestration system. One thing that's nice is that because we have a declarative graph, we know everything we can do right now. So if we have 100 things to do in our bare-metal OpenStack deployment cluster that are all going to take a long time, Heat knows. It has the ability to know which things we can do. We just have to implement that, and I think Zane and the rest of the team are going to be working on that. And we also need to be able to scale out engines. With Heat, actually, you can only run one of the most important part, which is the engine. It is somewhat stateless, but we can only run one, so it's not going to scale for a large cluster.

If you want to join in on this effort, which I hope you will, the TripleO project is where we're gathering steam around deploying OpenStack on OpenStack. There are a bunch of GitHub repos here that you can look at. Those are all actually also there on StackForge, so if you're an OpenStack contributor, you can contribute right now. That is really baked into what we're trying to do. We want the organizations that are really happy contributing code into OpenStack to also be able to contribute to deployment. So a big part of our goal is to have this at the same exact level as the DevStack gate. We want to be able to deploy OpenStack, with OpenStack, onto a real rack of servers on every commit, before the commit lands. You can see where the value would come from there. And then the other value is that if you start to deploy with this set of tools, you can have some reasonable amount of certainty that you're going to be able to deploy the latest. So if you want to do continuous deployment, you won't have to do anything special. Just run upstream.
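To make the declarative-graph point above a little more concrete, here is a minimal sketch with made-up service and image names: resources with no dependency between them are candidates for parallel creation, while an explicit depends_on forces ordering:

```yaml
resources:
  keystone_server:
    type: OS::Nova::Server
    properties: { image: keystone-image, flavor: baremetal }

  glance_server:
    type: OS::Nova::Server              # no edge to keystone_server, so Heat could
    properties: { image: glance-image, flavor: baremetal }   # create both at the same time

  nova_api_server:
    type: OS::Nova::Server
    depends_on: [keystone_server, glance_server]   # explicit edges; created only after both exist
    properties: { image: nova-api-image, flavor: baremetal }
```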
So I know that was quick. That's all I have for now. I'm definitely open for questions. And if anybody has one, do you want to step up to the mic so everybody can hear you?

So as you start talking about using Heat to deploy production clusters, we are going to need things like rack-awareness for controller node deployment to start looking at availability. What are the other gating factors that you can think of that would hold us up from using this?

That's really one of the first things that definitely comes up: because a data center is a physical thing with physical networks connecting it, and we can't reconfigure a lot of that, we definitely need to have more control over where Nova puts things. And cells don't really work out for this at all; they're kind of abstracted from this concept. So this is Nova work that needs to happen. I think Devananda van der Veen on our team is looking into it. We have a number of use cases. We would love to use this at HP Cloud. I mean, it doesn't work now, so we're definitely not using it, but we would love to use this to deploy our public cloud. And that's definitely something that our ops guys raised right away. It's like, I have two rooms, and I need to make sure that the database servers are in there and in there. They can't be in the same room, or the same rack, or the same chassis. So we definitely need a little more of that. I mean, I think it's very, very doable. There's nothing in Nova's architecture that says you can't give hints to the scheduler, or even constraints, real hard constraints. So that's definitely a next step.

The Opsware equivalent? So I see Heat evolving wherever people need orchestration. We have definitely had a lot of discussions this week on how Heat will evolve. Rackspace has contributed a nice spec, and we'll be contributing some code towards a new format. And we also have the TOSCA spec, which IBM, and HP as well, are interested in supporting in Heat. Heat's a user-driven project, just like most of OpenStack. So where there's a need for orchestration, I see Heat filling that. The nice thing about it is it gathers orchestration work at a large scale into one place.

I'm doing MAAS, as in Metal as a Service. So Metal as a Service would be sort of another backend. There's nothing preventing that. One of the architectural points of Heat is that it has resource plugins. So right now there's an OS::Nova::Server, or, I don't know if that exists yet, the instance resource that's handled by Nova. It's built in, but you could create a MAAS resource and fulfill it with the same method. And that's an important part of Heat: we can't predict everything everyone will want to orchestrate or put into their clouds underneath Heat.

So in theory, we should be able to, with our declarative model, update anything that the server can update from inside itself. Firmware can be tricky. We may need to boot into a minimal image and do that. I see that as something we would have to put into Nova as an extension first. It's definitely another thing that's been discussed. And there's more than just firmware; there's a lot of pre-boot stuff that we might need to do, like configuring your RAID adapter, or configuring anything in the BIOS that has to be changed. Right now we're sort of hands-off on that, driving toward just booting servers and just booting up OpenStack. But that's definitely something that will come up. And I would think, again, users would drive that. So if there are people with HP servers that want to drive that, perhaps maybe even HP would do that, or Dell, or anything like that. That would be something we would drive into Nova.

Any others? All right, well, thanks so much for coming. And again, if you want to contact me, you can find me on IRC or email. Thank you.
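As a hedged footnote to the scheduler-hints answer above, here is a sketch of how a placement hint can be expressed in a template, assuming resource types like OS::Nova::ServerGroup and the Nova "group" scheduler hint are available; the names are illustrative, and real rack- or room-level awareness would need scheduler support beyond this:

```yaml
resources:
  db_anti_affinity:
    type: OS::Nova::ServerGroup
    properties:
      policies: [anti-affinity]          # keep members on different hosts

  database_server_1:
    type: OS::Nova::Server
    properties:
      image: overcloud-db
      flavor: baremetal
      scheduler_hints:
        group: { get_resource: db_anti_affinity }   # hint passed through to the Nova scheduler

  database_server_2:
    type: OS::Nova::Server
    properties:
      image: overcloud-db
      flavor: baremetal
      scheduler_hints:
        group: { get_resource: db_anti_affinity }
```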