No, here, I'll do this, we'll make it so that I'm... yeah, hey, how you doing, guys? There's a lot of you in here. This is fantastic, thank you all for coming. So, if you can't tell from the CalSE terminal window up there, this is the DevOps panel. We have with us representatives from Chef and Puppet and DevStack. Who here is using DevStack? They are using DevStack. We were also supposed to have Juju represented, but I think it's gotten too much traction, so they're all off doing amazing things. So, unfortunately, we won't be able to hear that story. I'm Monty, and you have the benefit of having me moderate this thing, so any complaints you can direct my way afterwards. You can do that in the form of buying me beers; it's the best way to complain about any misuse of this forum that I might do. So you just buy the beer that you don't like, or? No, buying me beer is — I hate it. So that's the best way to get back at me for doing a bad job here. All right, anyway, this is going well. Well, hopefully we actually don't have that occur. No, no, hopefully I get no beer from anybody, because it's going to be lovely. You down there, Dan, seem like you're not as much in the light as the other two guys. I am actually loving it. You can see people. DevOps stand-up comedy. Yeah. Anyway, that having been said, why don't we start with you, Dan, on a quick introduce-yourself-and-your-project. Tell me what you do. Sure, so my name is Dan Bode and I'm actually a long-time employee of Puppet Labs. I started when it was just a few guys in a closet. Wait, you were all in the same closet? Wow. We actually had two closets that we worked out of. His and hers.
Exactly, and then it slowly expanded out into the hall, and then we had to get a proper office, and it's been quite a ride. We write automation software called Puppet — we're the company behind the open source project Puppet. And I do integration work for the business development department, so I write a lot of code but I really report to marketing. It's kind of a strange job. That is very weird. Yeah, excellent. Just out of curiosity, what city are you guys based out of? We are based out of the fantastic city of Portland, Oregon. Excellent, excellent. All right, same question right down the row. So my name's Matt Ray. I'm a senior technical evangelist with Opscode. We're the company behind Chef, and as long as we're talking about fantastic cities, I'm from Austin, Texas. Love it. I'm not getting trolled for Seattle. Ah, see, I was going to have Chef versus Puppet; we should also have Seattle versus Portland. Opscode is based out of Seattle. I'm actually from Austin, Texas as well. See? That's really kind of... We went to the same university, didn't we? UT Austin? Yeah. Awesome. Great. You guys did really, really well in football recently out here. Next topic. Wow, I did not expect to get trolled on that. Boomer Sooner, I think, is the appropriate thing to say? I don't remember. So yeah, we're the company behind Chef, and most of my time is spent working on a project called Chef for OpenStack. I just had a session on that about two hours ago — if you made it, it was great — and I'll post my slides later. It's a community project around deploying OpenStack with Chef. Cool, cool. Jesse?
So my name's Jesse Andrews, I'm with Nebula, and the reason I'm on the stage, I guess, is that I started a project called DevStack, which basically doesn't use either of these things, although you can use both Puppet and Chef to deploy DevStack. The problem it tries to solve is there are hundreds — I think 180 or so contributors in the last six months — of people doing development on OpenStack, and what we want to do is, on every single commit, run as much of a test as we can before we do the merge: does everything still work together, all the different projects? So not just unit tests but integration tests — does it actually work together before we actually do the merge? We had originally started working on doing the packaging, and then, after you get it packaged, we actually had both Puppet and Chef recipes. But what we ran into was the problem that when any of those steps broke because developers made changes in how things need to be installed, they would run away and not actually help us, because they aren't experts at Debian packaging or RPM packaging or Chef or Puppet. Not every developer knew every single tool. So we were like, what does everybody hate? Half the people hate Chef, half the people hate Puppet — I don't know if they hate them — but anyway, everybody hates Bash, so we use Bash. It's a Bash script that's a thousand or so lines of code now, and you can just set some options and it will download everything and install it. The goal, though, is that it helps people like Matt in his job: a developer says we changed something and you need to be aware of it, so hopefully package maintainers and people building products and recipes on top of it can look at what we're doing and then actually update their recipes or packages based on it. Probably the people here have already been to devstack.org — it's great, it has comments for everything it's doing; it's a well-documented Bash script.
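For readers who haven't been to devstack.org, the "set some options and it will download everything" workflow Jesse describes looked roughly like the sketch below in that era. The password variables and `ENABLED_SERVICES` were real `localrc` settings; the specific service list shown here is just an illustrative trim, not a recommendation.

```shell
# Fetch DevStack and drive a single-node install from a short config file.
git clone https://github.com/openstack-dev/devstack.git
cd devstack

# localrc is the "set some options" file; everything else is derived by
# the (well-commented) stack.sh Bash script.
cat > localrc <<'EOF'
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=some-random-token
# Optionally trim which services run, e.g. Keystone + Glance + Nova only.
ENABLED_SERVICES=key,g-api,g-reg,n-api,n-cpu,n-net,n-sch,mysql,rabbit
EOF

./stack.sh   # clones every project from git, installs it, and starts it
```

The point of keeping it in Bash, as Jesse says, is that any OpenStack developer can read and fix this file without knowing a configuration-management DSL.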
I've actually been looking at using the same software that's used — I forget what it's called — for inline documentation for Puppet modules. Yeah, we use something; the idea is Shocco or Docco or Rocco — there's one for each language — but yeah, it's pretty nice, literate programming for Bash. So one thing that I forgot to mention is I lead up the development efforts and the community efforts around Puppet Labs and the Puppet modules for OpenStack — probably important or relevant. Okay, relevant. Great — really, you can use Puppet to install OpenStack? Yes, you can absolutely use Puppet to install OpenStack. Wow, that's fantastic. How's that working? It works; it's reliable, repeatable, and a lot of people are definitely using it and contributing to it. There's no question it could be better. Wait, no question it could be what? Better. And how? So I would like for it to live a little bit closer to the development process. That's a lot of what I'm doing here, asking people where it makes sense for it to live in the actual development ecosystem — which I think is one of the big questions. Do you want it closer to DevStack? Or do you prefer — sorry. No, no, go for it. Or do you prefer to work with releases? So that's a really interesting question. I had assumed for the longest time that it was something that consumed packages, but I think there's a lot of value in having configuration tools that can not only do deployments based on packages but also be used for development. But I think it's going to be a progression to get from where we are now, which is kind of chasing releases, to chasing trunk, and from there to gating — or sorry, post-gating — and then the big question will be, once we're at the post-gating step, how much sense does it make for it to also be gating?
Yeah, which is an interesting question when, as Jesse said, you've got Chef and Puppet and DevStack and Juju and CFEngine — anybody here from CFEngine? No? Okay, weird. But yeah, how do your developers deal with that? Well, I think that's where identifying the minimal core comes in, and I think you actually helped set up DevStack with the CI gating project. But as OpenStack continues to grow, there are so many options for what backend you use, what storage, even how you want your storage and backend to be related — I think Nova has 600 flags. So having it such that we could chain the gating: once something lands in trunk, it could go ahead and deploy with your recipes as they currently exist and build packages as they currently exist. At least then you know if regressions are occurring, and you can go in and fix them. So within Chef for OpenStack, there are many parallel forks, and several of them have CI already applied to them, and my goal is eventually to take the main trunk and offer it up to fit into the other CI branches, because there are forks that are working with source already instead of just packages — some providers need a certain feature that is not stable yet or not released yet. But let's get in there. You've both touched on something that I've been having conversations with people about, and it's interesting that you brought up the installing-from-source thing. I know when we started OpenStack originally, we spent a lot of effort to make sure we had packaging thoughts going on, and it seems like over time, oddly enough, as the project gets more mature, we are doing less and less with that, and we see more and more people going from source. I'm sort of curious where you guys see that going.
I see it as actually a good thing, because what was happening originally is we had developers going in and doing a crappy job of packaging, and what we're having now is the actual distros — people who know about building RPMs and debs and Chef recipes and all this — going in and, first of all, packaging it correctly, and then secondly, sending back requirements: here's things you could do to make it easier to actually implement. So one of the questions that I kind of have for the OpenStack community is: OpenStack isn't really in the business of packaging right now, and everything's running continuous integration from source. Does it make sense to actually have package deployment be part of the gating process? When I think about whether we should be using packages versus source for the Puppet modules, that's kind of my reverse of the question: should packages be part of the continuous integration process? Interesting question — do you want to jump on that? It's a hard thing; there's history, definitely. At one time, a lot of different people were working on lots of different packages and nobody knew what to do. It's certainly, I think, hard for the user community, the fact that we don't really have official packages that we recommend — we have tarballs, basically. Then again, the recommended way is to go get whatever your favorite distro's version is, or Crowbar, or what have you. So would it be better for us to have official packages, or should we have a really easy way for all these different people who are building both open and closed source editions to chain their Jenkinses together, so they run the packaging and then they run the testing afterwards and tell us when it's breaking? Because some of those configuration options require hardware that we don't necessarily even — would you like to really set that up? No, in fact, I'm not going to.
It's just not going to — somebody asked me earlier if they could mail me some hardware, and I was like, sure, I'll put it in my living room? I don't know, we don't have a data center. So yeah, that's not going to happen. But you're exactly right. This is sort of the reason I asked the question about the source thing: when we stopped doing packaging in the project, it was a big question mark, but part of that did come from the distros saying, hey, we can do a better job of this than you're doing. Have the distros been engaged to be part of the gating, for CI? This isn't supposed to be the panel discussion on me now. Dude. But it always becomes the discussion on you. Right now we're shipping our tarballs. Anybody, anybody? Great. No, actually, we've asked for that several times — for them to do more of what Jesse's talking about. And for some legitimate reasons, they're concerned about being too noisy, and I hear that, and that's fine. But the distros have been fantastic at doing a lot of actual work inside of OpenStack, and we've been thrilled to have every single person from SUSE and Red Hat and Canonical who have been in there, and the people doing Debian, even without a company behind that — the folks doing the work have been doing great work. But I think the problem is they've all got processes that are centered around the release of their thing. It'd be the same thing with Dell or HP or Rackspace: they've got their clouds they're deploying and they've got developers working on the core software, but the releases of those things aren't necessarily in lockstep. So I think we've got that same sort of impedance mismatch.
But a lot of the developers, at least in the companies or projects that do have a lot of development resources, as you said, would probably actually benefit from that sort of integration. Definitely. I'm wondering — again, this may be a sidetrack; this entire thing may have been a sidetrack so far — but both from performance testing and the Tempest project and so on, getting that sort of gating up there, and maybe just having a few sample test stacks that are really like: here's what you do, get the script or Chef recipe or Puppet recipe, it deploys a Jenkins that chains to us and then just uses the official Ubuntu and the official Red Hat, whoever — just do two or three of them and then say, hey community, take over. Well, the way I was trying to make this a little bit more incendiary is: if you're already working on a Puppet module that installs from source, does injecting packages back into the mix help? Or are you doing just fine pulling directly from the Git repo? So I have a pretty clear vision of how to make the Puppet modules configurable for doing either source or package installation. Because of the way compilation and parsing is staged in Puppet, you could tag all the services and packages as being OpenStack and then collect and disable those from another scope. So you could use that to toggle a global switch to go from package-and-service to some special source-and-service arrangement. But I think from my perspective the question is: is that worth spending time on? How do I prioritize that? Is that one of the more important things that can be worked on?
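A minimal sketch of the tag-collect-and-override technique Dan describes, for readers unfamiliar with Puppet resource collectors. The class and tag names here are hypothetical illustrations, not the actual OpenStack module layout; `vcsrepo` is a real Puppet Labs resource type for managing git checkouts.

```puppet
# Hypothetical sketch: one global switch flips OpenStack services from
# package-based to source-based installation.
class openstack::nova::install {
  package { 'nova-common':
    ensure => present,
    tag    => 'openstack-package',   # tag it so a collector can find it
  }
}

# From another scope, collect every tagged package and disable it,
# then check the code out from git instead.
class openstack::source_override {
  # A resource collector with an override block rewrites attributes on
  # resources declared elsewhere in the catalog.
  Package <| tag == 'openstack-package' |> { ensure => absent }

  vcsrepo { '/opt/stack/nova':
    ensure   => latest,
    provider => git,
    source   => 'https://github.com/openstack/nova.git',
  }
}
```

Including or omitting `openstack::source_override` is then the "global switch" between package and source deployment.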
Yeah, which is the sort of question I've been thinking about recently too: is it actually helpful? Might be — I'm just saying, is it worth thinking about? I think it would be valuable — like, I don't know of developers who willingly want to break other people's packaging and tests. Totally. So adding length to the gate, or adding complexity to gating to have those tests kicked off — if we all of a sudden have a matrix of 80 different configurations that we actually test, such as MySQL, we may not want to gate on that, but I think we want to know when regressions are occurring. So. That's something that I'm thinking about, and I've spoken with Dan Prince about maybe looking at using SmokeStack and having a master that goes along with Grizzly and does post-gating, like SmokeStack is doing post-gating. But not really being an active developer — what do you guys think about SmokeStack? Are you guys pretty content with the post-gating model? I guess maybe they're drifting off. And they all look at me. Well, I guess both of you guys are involved in the development process. I mean, it's hard to say everybody agrees one way or another, but I think people are pretty happy with the gating that we have right now, unless their feature is in that gating test. Yeah. And, just to follow up on that, there have been some decent discussions this week, and I think this is pulling back around to what you're talking about with checking out the Puppet or the Chef things: ways that we can hook in other people's testing work, I think, is how we start to expand that, because everybody has their pet feature.
And so how about: you test, you hook up a thing, and then you don't have to convince me — because that involves beer again, and there's only so much beer I can consume in a given week. So we've possibly beaten that one to death. Yep. What's that? I know — only three or four people left. I'm surprised that more of you are still here. Yeah, there's internet in here. So, because I'm pretty sure I know where they want this panel to go: why should I use Chef rather than Puppet? Chef versus Puppet. I've never been asked that. So, I was going to bring up an anecdote: there was a session with Mark Burgess from CFEngine and Adam Jacob from Chef where they traded slide decks and presented each other's. You know, the nice things that I can say about Puppet... Yeah. Well, actually, yeah — what if you said, tell me why I should use Puppet? So, you know, Puppet is — I'm in trouble. Yeah. Puppet is embedded in the new Cisco OpenStack stuff. Yeah, it is. That's cool. What's that? And Red Hat stuff. And Red Hat stuff. Yeah. Got the Puppet guy giving me leads. You know, Dan has been involved in OpenStack since I saw you at Austin, I guess, or Bexar. Yeah, I guess I started sneaking around Austin, a year or two ago. We've both been at the last six summits. We've had six of these things. Wow. And so both projects have been involved for a long time, and so you are going to get access to people who know where the rough spots have been, who have seen a lot of deployments already, and who hopefully have got stuff that works and is reproducible and that other people are finding useful. So both tools have their place. Well, so that leads to a bit there.
So you get access to the people who have worked on these things before, and you get access to their knowledge. To speak for our absent Juju: I know that one of the things they put out as a feature is the reusable nuggets — the apt-get-install-WordPress-server, right? The equivalent of that. But that reusability seems to be something I've heard from both camps, both you guys' camps, from day one. Are you guys working on that same thing? Like, how is me getting your knowledge through your system working out? So I think, just in terms of general knowledge: of course, one of the reasons that Puppet Labs is so interested in OpenStack is that the easier we can make it to get to those self-service APIs, the more interesting kinds of automation things we can do on top of those APIs. And talking about reusable content going beyond OpenStack, forge.puppetlabs.com is a great place to go. I think we have around 600 contributions, a lot from the community. We have a lot of content, but one of the challenges now with the Forge is trying to filter that and make it easier for people to not just find lots of content, but find the content they need to deploy their applications. Right. And so, going back to the idea of reusable content, and the embedded wisdom of that content: one of the things we have going with the Chef cookbooks is that these are used by real deployments at scale. It's parallel to the work being done by Rackspace. It's used at DreamHost, it's used at AT&T, HP — a lot of the work that I did with HP early on has filtered back in. And I said Dell, right? Dell's involved. In the session we had yesterday, we had six or seven companies where people were like, I'll do that. I'll do that, you know.
And so we are watching each other's forks and branches, and I'm just kind of a ringleader in the middle, making sure that if you come in to this community, you're actually getting something of value. If I go to, let's say, DreamHost, for example, they're not really incented to make your OpenStack deployment work — they're busy running a public cloud. What you get by having lots of people working on the same code, and someone trying to make it useful in the middle, is someone who is actually incented to help you succeed. So that's what you get with the Chef for OpenStack side of things. Cool. And I think it's similar. I've recently called myself kind of the hub of the wheel with all the forks — sporks. Sporks. You should see my bicycle. I am from Portland. Oh, that's a good point. Yeah. The old hub and spork. You guys eat with nothing but sporks up there? No, actually I have a bicycle that's just a hub with all the sporks. Of course you do. It's amazing. It's actually a unicycle. Yeah. I'm sad there wasn't more of a Seattle-versus-Portland thing, because now it's going to be you and me, and that's not nearly as fun. But I was saying: a lot of what I do is really — all these companies that are deploying OpenStack have the best practices and the knowledge, and it's all about working with them and being the person that makes sure those changes go upstream so that everyone can take advantage of them. So I want to do a follow-up on that. One of the challenges in DevStack is the fact that everybody has different requirements — people want to do Ceph backends, or Fedora, or Ubuntu — so how well do your various recipes and cookbooks do at providing flexibility for all sorts of scenarios? Or at what point do you have to say, here are my cookbooks for this scenario and here are my cookbooks for that scenario?
You first. I think a couple of things there. One is that most of the differences between platforms are really data differences — it's almost an internationalization problem: everything's the same, only the names have changed. Well, how about things like XenServer versus KVM? So right now, libvirt is the thing that's mainly supported, but a great thing about the community is there's someone from Microsoft working on Hyper-V support. Xen support existed originally because the Puppet stuff was done for Rackspace Cloud, but they've definitely forked off, and I'm not sure how supported the Xen stuff is. I think in a lot of ways that's due to demand: the people who are using the stuff will add the extensions that they need. And so it wouldn't be that complex, if people put in the time to upstream those, to have one recipe that would support Xen and Hyper-V and KVM? Everything's been designed to have directory layouts where it's obvious, if you want to add an extension, where that extension goes. And it's also been designed so there are really three layers of code. At the very back end you have very specific classes and very specific configuration interfaces, so you might have something like: this is a Glance file backend, or a Glance Swift backend — and maybe there are a hundred of those. Those are composed into the roles that you may want to deploy: things like Glance, which is a registry and an API, or Keystone, or a database — sorry, a MySQL server with the six databases that have to be added. And then at the very top level there's something called controller and something called compute.
And the interesting thing is I designed it like that thinking that people with different amounts of complexity — people just getting started — would say, I want a controller, I want a compute, but that most big companies would wind up customizing using all the little bits. And what I found is that's actually not true: people are actually fairly happy to say, either give me a controller and a compute, or just give me the seven or whatever roles I want. That's what people prefer as the interface — and then hide all the details of the implementation behind it. And so, Chef? Yeah, so the way Chef for OpenStack is organized is around roles. Currently the N-plus-one model of N computes and one controller is definitely there. All the services can be decomposed to run on separate machines — so MySQL is clustered over here, Rabbit's clustered over there, and the computes are each on different boxes. That's the approach we've taken: the roles are all separable. So as people need to replace an individual component — maybe Rabbit isn't what they want, they want something else — we'll use that role to search against and just plug in whatever the messaging layer is. Then, as far as differences go, I think XenServer versus KVM is the largest, just because with XenServer the recommended way to deploy is using a domU to run most of the things, whereas with KVM we're running directly on the host. I know that Crowbar uses it a certain way — that's KVM if I'm not mistaken — but is there a good XenServer one? I don't support it. There is — over in the SmokeStack cookbooks they've got a XenServer one. No one has really asked for Xen specifically. What kind of happens with the cookbooks is, if somebody wants a feature, we talk about, hey, how are we going to get that supported? So right now we have KVM and LXC, and we're talking about Hyper-V; they all follow the same model.
You fork off an attribute, and it says: I'm using compute; compute needs to include a recipe; oh, what is my hypervisor? If it's KVM, include these; if it's Hyper-V, include this. So if Dan, or whoever is working on SmokeStack, put the time in to upstream that change, and others were interested, that's something that would fit in? Absolutely, absolutely. The model we've espoused is: if there's a feature you want to support, and it's not already pluggable, we'll make it pluggable. And how much of that is magic, where you have to be an expert at your various tools to have that degree of flexibility? Or is that bar pretty low? It's fairly standard — if you're a Chef user already, it's not hard to see this pattern where you set an attribute here in the role and magically I'm using all these things. Well, I don't know how you're building the recipes themselves. Are the recipes magical or are they pretty...? That's actually not that much magic, because Chef does a pretty good job of making reusable components. I think Chef for OpenStack uses twenty-some community cookbooks that we didn't have to go and rewrite — we just grabbed them out of the community and started using them. The OpenStack cookbooks are getting committed back to the community, and whatever comes in, I try to commit back and document. So when somebody says, well, I want to run LXC, but I want a Ceph backend, and I want to use Postgres, we're like: set this attribute, this attribute, and this attribute, and you're good to go. So you've both mentioned people that are using the modules and cookbooks, and then question marks about getting some of their work upstreamed. What are their barriers to getting that stuff in? How are you seeing that going?
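The attribute-driven dispatch Matt describes — a role sets one attribute and the compute recipe pulls in the matching hypervisor support — might look roughly like this in a Chef recipe. The attribute path and recipe names here are illustrative, not the project's actual ones.

```ruby
# Illustrative Chef recipe fragment. Somewhere in the role:
#   override_attributes "nova" => { "virt_type" => "kvm" }
# The compute recipe then branches on that single attribute.

case node["nova"]["virt_type"]
when "kvm"
  include_recipe "nova::libvirt"   # hypothetical recipe names
when "lxc"
  include_recipe "nova::lxc"
when "hyperv"
  include_recipe "nova::hyperv"
else
  Chef::Log.fatal("Unsupported hypervisor: #{node['nova']['virt_type']}")
end
```

Adding a new hypervisor is then a matter of upstreaming one more `when` branch and its recipe, which is the "we'll make it pluggable" model in practice.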
I can say that it seems like time is really the number one barrier; everyone's really racing to have something every six months. And I think part of it is time, but part of it is also culture — companies coming to understand open source — and definitely a lot of the companies that are joining OpenStack are not traditionally open source companies. It's very true. And I think it'll just take some time for people to really understand the benefits so they can justify spending the time to get things upstreamed. So if the release was, let's say, every two months, would that be better or worse? Worse. What if it was every day? Are you providing packages? Do you want me to provide packages? So you're going to go GA in just a few days? I was sort of mildly leading towards: off of that every-two-months idea, a lot of the companies that are doing this are looking at more of a continual deployment model rather than I'm-going-to-go-grab-the-latest-release. Not that the Shuttleworth demo wasn't fantastic, because it was actually really cool, but doing an apt-get upgrade from one release to the next release, I'm guessing, is not how most public clouds are running themselves, at least. Or maybe you guys are seeing more than I am — I only work for one company. But I'm guessing that's not how it's working. What I have seen and heard is usually that, no, people are not apt-get upgrading; they're decommissioning racks and switching them, slowly migrating things over. So that being said, there's a two-pronged thing here. One is the question of: what if we weren't releasing every six months, or what if we were releasing more continually — would that help or hurt? And the other one is: how is upgradeability — rolling upgrades and stuff like that — working out for you? So I think one thing that's important from my perspective is: if OpenStack was going to release every day, then for sure — I mean, the APIs have version one, version two.
And I think you need to make the same kind of commitment for the command line interfaces and also for the configuration interfaces — they need to support some kind of semantic versioning, so you can have expectations about when things may break. That would be nice. Yeah. That's partially what we could maybe get into: I know Vish did some work on the XML interfaces on the public side, so that now, anytime a commit is made, we actually make sure that the XML is still generated the same way — because none of the tools out of the box, all the CLI tools, all the libraries, actually use anything but JSON, so it was just luck whether XML was working or not, whereas now we're actually testing that. So whether we get there soon or late, I think we're putting more tools in place, because there are public clouds actually doing continuous deployment with OpenStack, and there are individual companies I know that are doing continuous deployment on their own products, even though there's then a barrier between when they are done internally versus shipping. But can you guys think of certain priorities that you would add there? If you had three things to ask for, as far as helping with upgradeability, are the CLI interfaces at the top? JSON — I mean, print JSON. And I heard that's a feature that's coming in Grizzly: there'll be an option flag for everything, so as opposed to tables, you can just dump JSON. Okay. I mean, stability in configuration would be amazing. The CLI we don't use a lot in actual configuration and deployment; things are mostly in the same places, but the config files seem to be moving around. Maybe this is then a chicken-and-egg, in the sense that by having the post-commits where we're deploying with the recipes and then running Tempest against them — which I think is very similar to what the public clouds are doing —
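To see why the panel cares about "print JSON": config-management tools that scrape a pretty-printed CLI table break every time the layout shifts, while JSON output parses in one call. A small sketch — the sample output below is invented, standing in for whatever a `nova list`-style command would emit once a JSON output flag exists.

```python
import json

# Hypothetical sample of machine-readable CLI output (the data is made up).
cli_output = '''
[{"id": "3f2a", "name": "web01", "status": "ACTIVE"},
 {"id": "9c1b", "name": "db01",  "status": "BUILD"}]
'''

# One line of parsing, versus regex-scraping an ASCII table whose column
# widths and borders change between releases.
servers = json.loads(cli_output)
active = [s["name"] for s in servers if s["status"] == "ACTIVE"]
print(active)  # a recipe or module can branch on this reliably
```

This is exactly the kind of stable interface that semantic versioning of the CLIs would let Chef and Puppet code depend on.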
Maybe if we have those resources for the community, we could capture those. But I don't know, like, you guys are not running your own CI infrastructure for these projects, correct? Or... I have some. I mean, right now I'm running unit tests, but not integration tests. That's something I'm hoping to get soon. But just like Matt said with his project, there are people who have built products on top of this stuff who are running continuous integration on their distributions of it. And I'm working with one of those people who are, you know, packaging up Chef for OpenStack and who will open source, you know, a CI tool chain so you can do it yourself. Excellent. So just as a general note, because we can do this for a while, honestly (it turns out we all like to talk), there is a microphone in the middle of the room, so if at any point anybody takes offense to anything that we've said, or wants to ask a question or whatever, feel free to sort of jump in. I just wanted to jump up and grab the thing. Oh, wow, that was quick. You know, there are a couple more sessions starting right now, so I thought it would be a good idea to ask it. Guys, do you have recipes for deploying highly available OpenStack? In particular, let me rephrase it: can a regular ops guy take your Chef or Puppet recipes and deploy a full production environment with DRBD or Pacemaker or something on top of it? Not yet. You know, I mean, there are some production environments, you know, of Chef for OpenStack that are being configured that way, and I expect before Grizzly that should be an available configuration. You know, now that there is semi-official HA support for Folsom, we'll be implementing it.
And I would say that we have a repository which shows an example of how you can wrap an HA layer on top of what I've created, and actually all the components have an enabled flag which can work for active-passive. I had implemented the capability to allow active-passive modes for everything as iteration one, because really what I want is active-active. And I'm actually working with members of the community who have a fork that supports active-active for all the components, which I'm currently in the process of merging. What do you find the most complex to make highly available, the queuing or the database? Like, what is the challenge to having the recipes just work in HA mode out of the gate, or do different environments require different things? I know that for the HA mode that I'm working on with a partner, it was that they actually needed to change some of the configuration options to allow multiple rabbit hosts, for example. But I know that that stuff's actually going into Grizzly. So that'll be one of the challenges. When you say active-passive, are you doing just, like, HA pairs of the services? That's what I had done. I'd implemented a prototype basically using Pacemaker, yeah, exactly. But going forward, the thing that I'm really looking at is using Galera, and it's really a partner that has been working on it a lot more than I have. So I know that I am looking at the code as it comes in and merging it. Yeah, so Florian, is Florian here? It's his talk, and it's Pacemaker for the various Nova services, Galera for MySQL, RabbitMQ's clustering. So trying to move away from pairing. And I think tomorrow CloudStack has an HA talk too. Yes, so. Right, right. I'm just giving a shout-out for more HA. Yeah, HA's good. Oh, I've done that again. Oh, it got darker. My monitor was dimming again.
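For context on the multiple-rabbit-hosts point above, a config excerpt along these lines is the sort of thing the Grizzly-era work enabled. Treat this as a hypothetical sketch: the host names are placeholders, and the exact option names should be checked against the release you actually run.

```ini
[DEFAULT]
# Hypothetical nova.conf excerpt: point the RPC layer at several
# RabbitMQ brokers instead of a single one, and mirror the queues.
rabbit_hosts = rabbit1.example.com:5672,rabbit2.example.com:5672
rabbit_ha_queues = true
```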
So we've touched on it a couple of times. I think Jesse and I have actually both mildly asked the question, but it went somewhere else: upgrading, and that sort of path. Upgrading. Yeah. Let me preface this: the OpenStack developers, up until maybe a few months ago, hadn't really put much effort into it. What we were doing in DevStack was trying to do proto-packaging and proto-recipes; we hadn't been doing the work to show how to move between upgrades. In fact, Jesse, do you have a project similar to DevStack that has something to do with upgrades? We're looking forward to how we can do that, and actually Vish and Dean, who work on DevStack, worked on a project called Grenade, and the idea was that it would deploy the old one, then try to run a set of scripts, then deploy the new one, and then make sure the instances and volumes and all that are still there. Now, we caught some bugs, and it helped with catching bugs in how upgrades need to occur and making sure that the migrations actually work, but I don't think it was in time to help you guys with actually knowing how you move from one release to another. So I think that you guys are currently facing two problems, which are: you have to chase developers and try to figure out what actually changed, and then actually get the recipes to work. Yeah, so on the Chef for OpenStack mailing list we had a thread about what are we gonna do about upgrades, and it kind of got turfed, because it seems like all along no one has given enough thought to upgrades. So then it became: we'll do it with Folsom. Yeah. And so it is kind of an ongoing problem, because what really happens is you stand up Diablo or Cactus. I mean, in the MercadoLibre talk, they stood up Cactus, and then over here they stood up Essex, because there was no upgrade path. And so, since we are followers, we are going to be consuming what is available.
So if there is a Grizzly upgrade guide that says this changes to this and this changes to this, we can recreate it. I mean, we can do it programmatically. Yeah, we're good at scripting. Yeah, if there's a document that says how to do it, with some assurance that it will work and not blow everything up, then for sure we can automate it. What if that document is a shell script? It's fine. I read Bash. Great. Yeah, I can read Bash. It'd be interesting to see whether Puppet should model a shell script versus... it depends on what the shell script looks like. Well, the shell script is again just meant to be a proto-thing. Like DevStack and Grenade, the goal is that you guys are actually the audience, the deployers; it's not actually meant to be used for deployment itself. Jesse, are you saying that people shouldn't use DevStack for production deployments? Burn. No. If you know what you're doing. Yeah, sure. It's on purpose that it destroys everything every time you run it. Would you recommend that people who write tools for production deployment should read DevStack and check diffs on DevStack to understand what's going on? The hope is yes. And so if it sucks, I would like to know. We've got a question here. So does Grenade only test the default DevStack changes? Or does it test all the config options that DevStack doesn't use? It doesn't check those. So this was just the first stab at it. And so the question is, first of all, is it even the right tool? Because we are, of course, just taking DevStack and then installing the previous version, then running a set of configuration changes, and then running the new version. But it did enough that we could check Nova volumes to Cinder. And the idea is maybe we could help with the Nova network to Quantum move. So the big changes. Yeah. But again, this was just a month or two of work right at the end, because we realized, during Folsom, a lot of people said, hey, let's work on upgrades.
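The Grenade idea described above (deploy the old release, upgrade, then verify nothing was lost) can be caricatured in a few lines of Python. This is purely an illustration of the checking pattern, not Grenade itself, and every name in it is invented:

```python
# A toy sketch of the Grenade pattern: snapshot the resources that exist
# before an upgrade, run the upgrade, and report anything that went missing.
# The deploy/upgrade callables are stand-ins, not real Grenade code.

def upgrade_smoke_test(list_resources, do_upgrade):
    before = set(list_resources())
    do_upgrade()
    after = set(list_resources())
    return before - after  # an empty set means nothing was lost

# Usage with fake data:
state = {"instances": ["vm-1", "vm-2"], "volumes": ["vol-1"]}

def list_resources():
    return [r for group in state.values() for r in group]

def do_upgrade():
    pass  # a well-behaved upgrade leaves resources untouched

print(upgrade_smoke_test(list_resources, do_upgrade))
```

Real checks would of course also exercise the APIs (boot an instance, attach a volume) rather than just count resources.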
A lot of work did happen behind the scenes, with versioned RPC and other things in the code, to help so that you could actually do live upgrades. There wasn't necessarily that much communication between the people doing the development and the people who actually get asked, how do I do this? And I guess I can say too that that's definitely one of the advantages of Puppet and these configuration management tools: they can really check to see what the current state of the system is and manage state transitions. But the bigger question is, is there actually a set of state transitions that can do a live upgrade? And I think for certain use cases, you could get really close to a live upgrade, maybe with control plane downtime, but not necessarily data plane downtime. So your VMs will continue to operate and everything, but we recommend currently turning off your control plane while you actually do these operations. Moving forward, we're actually adding versioned APIs and versioned RPCs, which are how all the components actually communicate, which will allow you to then upgrade each individual thing on a rolling basis: upgrade some of the workers, make sure it still works, and then continue on. We have the added challenge, though, of the fact that we have the HPs and Rackspaces who are doing continuous deployment, who can do those baby steps to do larger changes. They can support the version today and the version tomorrow and not have to do that gradual change-out if you're doing something large. But when we talk about six-month increments of work, it becomes potentially much harder to do that sort of rolling upgrade in a single deployment. And so that's kind of what I was getting to: at what stage should we look at more rapid releases? As well as, for instance, Nova is becoming smaller, in the sense that we're removing features and it's becoming just about VMs.
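The versioned-RPC point above boils down to workers that can accept more than one message version, so old and new services coexist during a rolling upgrade. Here's a minimal sketch under that assumption; the class, version numbers, and message shape are all invented for illustration:

```python
# Hedged sketch of the versioned-RPC idea: a worker keeps one handler per
# message version it supports, so senders running the old release and the
# new release can both talk to it during a rolling upgrade.

class Worker:
    def __init__(self):
        # one handler per supported message version
        self.handlers = {
            "1.0": lambda msg: ("v1", msg["body"]),
            "1.1": lambda msg: ("v1.1", msg["body"], msg.get("extra")),
        }

    def dispatch(self, msg):
        handler = self.handlers.get(msg["version"])
        if handler is None:
            raise ValueError("unsupported RPC version %s" % msg["version"])
        return handler(msg)

w = Worker()
print(w.dispatch({"version": "1.0", "body": "boot vm"}))  # old sender
print(w.dispatch({"version": "1.1", "body": "boot vm", "extra": 1}))  # new sender
```

Once every node speaks 1.1, the 1.0 handler can be dropped in a later release.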
If it becomes mostly about drivers, like improving the Hyper-V or XenServer drivers, and it's mostly drivers and interactions with other tools, is it better to have more rapid releases? Rapid releases don't become a problem if the APIs have all stabilized. I know the conversation comes up every summit: is OpenStack the APIs or is OpenStack the projects? Both. It's the people. OpenStack is the people. It's the love. If those APIs aren't stable enough to switch out pieces or have them versioned, then we can't handle it. Well, I think that in order to even do upgrades, you have to have them versioned, or at least the ability to have the old thing and the new thing running at the same time. And we have that problem, though, of the six-month release versus the every-day or every-hour or however often the continuous deployment guys do it. So it's an added challenge that maybe the recipes that help with upgrades for the real releases don't actually help the people who are doing continuous deployment, because they need to chase different goals, maybe. That's where I was trying to get to: how much magic is there in these things, such that you'd have to be an expert to be able to talk to both? Like, imagine you were doing a rolling upgrade and you had a subset of your systems in Essex and a subset in Folsom. Would it be relatively easy to have packages in your recipes be able to support moving them, with all the coordination between all the services? I can say, yeah, that it's possible. You might have to write your own coordination layer, but we're working on using the resource model to manage coordination between things. A pretty typical workflow for upgrades with Puppet would be using environments, which allow you to have multiple versions of code, and then targeting in noop mode, which means don't run, but tell me what would happen if I ran against the new environment, to determine change impact and make a decision if you wanna do a live run.
And then run live, while for orchestration you could use MCollective, which is another Puppet Labs tool, essentially a message bus that sits between the person controlling their environment and all the Puppet agents. So you can use that for staggered runs, to say maybe run one, check it manually, and then say, well, let's run the next 30%. Let's run the next 30%. Yeah, so that's the model that you're looking at in terms of how to deal with the fact that you don't really wanna just update the module, change the version, and have all thousand machines splat out and upgrade at the same time. No, you would definitely wanna maintain, just like you were talking about maintaining multiple simultaneous API versions, it's very similar to that concept. Same thing. Yeah. Wait, you also use MCollective to manage your Puppet agents? You can use MCollective, but, I mean, we have environments as well. We have the idea of rolling upgrades, switching between environments; that's how people do stuff for real. Yeah, because these things are getting pretty big. How's the scaling picture looking for you? And this is, honestly, I'm just curious: as people are doing thousands and thousands and thousands of nodes, is that within the capabilities of what you guys are doing? I can't speak for the size of all the users. You could ask three most. They're very public about their deployment. You could ask HP, who's not very public. I don't know if I can ask anything. That's gonna be very difficult for me to do. Right. As if I knew anyone who works there. There are definitely substantial production environments that are being deployed with Chef. But I'll let those guys start. I've got four boxes. Yeah. Wow, that's so many more boxes than I have. And I can say the same for Puppet. There are so many different ways to run Puppet to achieve different kinds of performance.
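The staggered-run pattern just described (one canary, then roughly 30% of the fleet at a time) is easy to sketch. This is not MCollective code; it's a hypothetical illustration of the batching logic only:

```python
# Sketch of a staggered rollout: upgrade a small canary batch first,
# check it, then roll through the rest in fixed-size fractions.

def staggered_batches(nodes, first=1, fraction=0.3):
    """Yield batches: one canary, then ~30% of the fleet at a time."""
    remaining = list(nodes)
    canary, remaining = remaining[:first], remaining[first:]
    yield canary
    size = max(1, int(len(nodes) * fraction))
    while remaining:
        batch, remaining = remaining[:size], remaining[size:]
        yield batch

nodes = ["node%02d" % i for i in range(10)]
for batch in staggered_batches(nodes):
    print(batch)  # in real life: trigger agent runs, then verify health
```

A real orchestrator would pause between batches and abort the rollout if the canary's run reported failures.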
Of course, Puppet apply, just synchronizing code and then running everything locally, is gonna scale forever. The Puppet master itself, if you're running in client-server mode, is by default stateless, except for CSRs, but whatever, it's basically stateless, and they can be scaled horizontally. But there are definitely a lot of feature add-ons. If you're taking advantage of all the features, like maintaining a configuration management database and really using the entire feature set of the Puppet server, then as you add state, it does have limits on scale. Yeah. Fair. Any other questions up there? We're actually at the end of the list of questions that I've got. Any more questions from folks in the room? Or any other things that you guys would like to delve into? We've got a thing. There's a thing. Do you guys maintain different branches for the different OpenStack versions? Yes. We currently have an Essex branch and a master branch. When Grizzly milestones start to come around, we'll turn master into a Folsom branch, and then master will be Grizzly, I guess. And then there's lots of community forks. So somebody like DreamHost, they've got their Folsom branch, but in their branch they have things like sources; they're pulling source for Quantum, I guess, because of stuff they wanted. And so there's lots of forks and branches. Yeah, I also have a branch for... I don't even think I have a branch for Diablo. I think I just gave up. I have one for Essex, one for Folsom. The state that things should be in next week is an Essex branch, a Folsom branch, and then master is gonna be for Grizzly. And I had spent some time briefly last week thinking about trying to simultaneously support all versions, and pretty quickly decided against it, just because. I asked the same question in the Chef cookbooks session yesterday: who wants to keep working on Essex? And it was kind of like, Grizzly.
Yeah, I mean, people are ready to move to Folsom. Well, for me it was more just like, it becomes an accumulation of technical debt to try to maintain a single branch for this stuff. It's tagged. Yeah. Cool. Anything else from anybody out there? I think, yeah. I was gonna say, I think we've made it. There's a hand back there. Oh. Is there one feature in Chef or Puppet that you do not have in the other's tool that you think is, like, a killer feature? I mean, both products are pushed by each other. So they had a noop mode, and so we have a why-run mode. And then one of their devs is like, now we have to make another noop mode that has the same features why-run has. So we have had search; you guys now have search. Yeah, we do have search. You know, it's a good idea. Yeah. And so we use Ruby; they have a Ruby DSL. We did that one first, though. Well, you existed first. But I mean, Chef was written kind of in response to things, you know, like philosophical differences about how Puppet worked. And over the years, those differences have kind of simmered down a little bit. And so both projects, you know, are doing a lot of the same things. It really comes down to, you know, what matches your engineers' minds. Are you thinking about architecting a big system, or are you thinking about it as individual servers? I'm curious, out of those two... So on your website, does it talk about systems, managing individual systems? Because on ours, we're talking about continuous delivery. We're talking about, you know, your business moving everything from code to production. You know, I mean, that's where we're pushing people. I guess there's no question that that's a lot of what we're working on with our customers. In terms of branding stuff on the website for direction, I can't really speak very intelligently about it. Nobody can, it turns out. It's marketing.
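The noop / why-run idea the panelists say both tools borrowed from each other can be reduced to a dry-run diff: compute the actions a run would take without applying them. A toy sketch, with invented resource names:

```python
# Toy illustration of the noop / why-run idea: diff current state against
# desired state and report pending actions instead of applying them.

def plan(current, desired):
    """Return the actions a live run would take, without taking them."""
    actions = []
    for name, want in desired.items():
        have = current.get(name)
        if have != want:
            actions.append("set %s: %r -> %r" % (name, have, want))
    return actions

current = {"ntp": "stopped", "ssh": "running"}
desired = {"ntp": "running", "ssh": "running"}
print(plan(current, desired))  # only ntp needs a change
```

This is exactly the "tell me what would happen" step in the Puppet upgrade workflow mentioned earlier: inspect the plan first, then decide whether to run live.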
It's not just marketing. But I would say that, you know, I think Matt really hit on one of the major differences: do you want a Ruby API or a Ruby-based DSL, or do you want a declarative syntax, which is a little bit simpler, and that simplicity has pros and cons for different use cases. And I think the other thing in Puppet is, you know, the entire reason that you take Puppet code and kind of compile it is that it all compiles down to a data structure. And I think that's really one of the differences of Puppet: everything in Puppet is data, and everything's exchanged as a well-known data format, which is actually a directed acyclic graph. And then Chef is built on the idea of your infrastructure as a code base. And so, you know, at the end of the day, you have a DSL, you have Ruby, the programming language. Is it data or is it code? Data. Yeah. That was your cue. You had a follow-up? Okay, cool. Dying here. Cool. Anybody else? Cool. I think that's about it. Great. Thanks, guys.
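As a closing aside on the "everything in Puppet is data" point above: compiling to a directed acyclic graph means resource ordering is just a topological sort. A minimal illustrative sketch, with made-up resource names, not Puppet's actual compiler:

```python
# Sketch of the directed-acyclic-graph idea: resources with dependencies,
# ordered so that each resource comes after everything it requires.
# Assumes the graph is acyclic, as a compiled Puppet catalog must be.

def toposort(deps):
    """deps maps each resource to the list of resources it requires."""
    ordered, seen = [], set()
    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for req in deps.get(node, []):
            visit(req)
        ordered.append(node)
    for node in deps:
        visit(node)
    return ordered

deps = {"service[nova]": ["package[nova]", "file[nova.conf]"],
        "file[nova.conf]": ["package[nova]"],
        "package[nova]": []}
order = toposort(deps)
print(order)  # package first, then the config file, then the service
```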