keep to schedule or anything like that. And also Rob should be in here in a second, but I'm gonna just start talking because there's a lot of you in this room. Apparently we're gonna talk about something interesting; hopefully we'll do that. So I'm Monty Taylor from Hewlett-Packard. I've been working in OpenStack for as long as OpenStack has existed, doing lots of various things for it, and I won't go into all that because it's just boring to be talking about myself, narcissistic as I might be. Rob Hirschfeld from Dell will be in here any minute now, I promise, or he might be in here and invisible, just standing next to me. He likes to do tricky things like that. And what we wanna talk about is OpenStack reference architectures, and about how we can actually work together on OpenStack deployments, using the technology that we have inside of OpenStack as baselines both for delivering those deployments and for testing people's deployments against them. Unfortunately, I think I can get through some of these slides before Rob gets here. Some of these are his, and so I might just look at them dumbly. But that's how shared things go. So this is all starting from the idea of interoperable OpenStack clouds. It turns out that the value proposition of OpenStack, the whole reason that there are 3,000 people in the Oregon Convention Center, actually probably more than 3,000 because there's also a teachers' conference over there. But I don't think they're here for OpenStack. They might be, actually; they should be. They should be teaching their kids OpenStack. But the value proposition is that we have multiple interoperable clouds. It's not that OpenStack is software that is going to allow HP to stand up a cloud that is going to win all of the cloud stuff. That's not the idea.
The idea is that HP is going to stand up a cloud, and Rackspace is going to stand up a cloud, and Aptira in Australia is going to stand up a cloud, and DreamHost is going to stand up a cloud. And consumers of those clouds can actually consume all of them, and do that in an interoperable way, because it turns out that's what people want. They don't want to be locked into one vendor. They want the benefit of having an ecosystem of places where they can run their workloads. So that's all well and good. I don't know if you guys noticed, but there are a lot of configuration options in OpenStack. I don't know how we did that, but there are a lot of them, and they make things sort of different. And I'll get to that in a second. But we want to make sure that we've got interoperability. Oh, look at that, that's fine. That's fine. I'm sure there was somebody interesting out there or something. Ladies and gentlemen, Rob Hirschfeld. So in order to do that, we need to, yeah, that's Rob's slide, so I'm actually going to go for him: this allows us to share things better and allows us to get that ecosystem up and going. Look at that, ha, ha. And we're at this slide. Take it away. Nice and basic too, that's exactly it. So if you guys haven't learned this yet, you need to; this is the take-home slide that we use a lot. You can get your software, OpenStack, there's a ton of that. You can get your hardware, you can go buy that, that's commodity. Turns out we both sell that. Don't know if you noticed that or not, but yeah. It's good stuff too. Also it interoperates: your servers work the same as mine. Isn't that neat? The thing that trips people up is operations. And so, usually this slide is actually 50% ops. And what we've been doing is codifying ops, using DevOps, and making things repeatable.
So the challenge is: we can sell you hardware that's gonna interoperate, which is awesome, right? We can all use the same software base, great. But if we wanna share tips and we didn't deploy it the same way, likely we're not gonna make any progress. So I touched on this a little bit in the first slide. But why it's important to share the operational knowledge is that, like I said before, if we've got these multiple vendors that are deploying clouds, and they are interoperable, then we actually wind up with an OpenStack cloud, right? Some of it is run by HP, some of it is run by Rackspace, and some of it is run by other people. And each of the zones in that cloud is just a different availability zone in sort of a larger, worldwide pan-cloud, right? Sort of like the internet or the web, which already works this way. Which makes sense if you think about it: we like to talk about OpenStack being the operating system for the data center, and just like the internet brings multiple servers together, we wanna bring multiple data centers together. But if we can't deploy this consistently, right? We're really big into testing things in the OpenStack world, it turns out. If we can't deploy it consistently, then we certainly can't test it, right? We can't tell you that it's any good if we don't know how to install the darn thing in the first place. And then past that, that's all well and good for the one deployment. Everybody, I think, knows that we install OpenStack 700 times a day as part of the CI system using DevStack, but I'm pretty sure most of you aren't deploying OpenStack using DevStack. At least, I certainly hope you're not. And so we probably have different deployments. We need to be able to compare those, right? We need to be able to test that those work with each other. I forgot to mention, there's a panel today on interoperability at 5:20. You all should come to that.
And interoperability is about being able to compare different people's deployments. And this is about getting help too, right? We see a significant number of people wanting to help, and wanting to get help. The first question you ask is: how did you deploy it? And so if you can't tell me how you deployed it, or if I have to parse a long document and then hope it's right and complete, you don't get the level of support and ease of use that we want in the OpenStack community. Because I'll tell you from my experience, a significant number of the bugs that we encounter in the field have nothing to do with the code base. They have to do with somebody not having the right IP address on an individual node, or a wiring problem; it could be anything, right? You used the wrong VLAN. You just configured that VLAN, yeah. Didn't notice that your Mellanox card had weird drivers. Right. The distro told you the drivers were included in the kernel, or not. This is the tee-up for the open operations slide. Open operations is sort of this vision that started a couple of years ago now, for me, and I'll credit Jim Stanton; he and I went back and forth a little bit on this emerging trend, which is: we can take these operations scripts, we can take reference architectures that are published, and we can compare things. And if we can compare things, it's not just that the code is open, it's that the operations are open, because there's very little value you're gonna get if you managed to reach thousand-node scale one way and somebody else does it a different way, right? At the time, a major cloud provider was having a series of outages, and they were very black-box about it. And while they were very good about saying, hey, here's what happened, at the end, nobody could actually go in and say, yeah, if you'd done it this way or that way; it was never exposed.
So even back in the Cactus days, the idea that we could share tips and improve each other's operations by being visible and transparent about that was really attractive. It was something that, really from the very start, has been part of OpenStack's culture. And so what we wanna do is share the what, share the description of what it is that you're gonna install. I wanna be able to tell you: I installed a hundred compute nodes that are all running KVM, and we're using FlatDHCP, and we are using Cinder with a Ceph backend, and we're using Glance, and we've got Swift in there, and blah, blah, blah. These are all whats, right? They're not telling you anything about what machines I've got racked in the data center. They're not telling you specifically what my memory thresholds in MySQL are, but I can tell you that I'm running MySQL and not Postgres. These are all the whats of what you're doing. And understanding that there is going to be environment-specific config, but that needs to be environment-specific config, not an environment-specific how-to-install-MySQL, because I gotta tell you, that's not very interesting. You're not going to provide value-add for your cloud end-user customers by installing the crap out of MySQL. It's just not gonna happen. And that's sort of the next point there: your deployment isn't as different as you think it is, right? We're all beautiful and unique snowflakes, except that we're all snowflakes. We're all snowflakes, right? And so one of the points with this is that there are times when it's really valuable to be distinct and unique, and there are times when it's gonna cost you a lot of money and time and pain, and what we want people to do is do the things that are unique and differentiated where it's valuable, and then stop. And my experience is that a lot of times people do things that are snowflaked because nobody said, here's another way to do it, right?
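To make the "what versus environment-specific config" split concrete, here is a minimal sketch in Python. The field names and structure are purely illustrative assumptions, not any real RefStack schema; the point is only that the "what" of a deployment is a small, comparable description, separate from site-local tuning.

```python
# Hypothetical "what" of a deployment: the comparable description.
# All field names are illustrative, not a real schema.
DEPLOYMENT_WHAT = {
    "compute_nodes": 100,
    "hypervisor": "kvm",
    "network": "flat-dhcp",
    "block_storage": {"service": "cinder", "backend": "ceph"},
    "image_service": "glance",
    "object_storage": "swift",
    "database": "mysql",  # the "what": MySQL, not Postgres
}

# Environment-specific knobs live elsewhere and are expected to differ
# between sites; they are not part of the comparison.
ENVIRONMENT_CONFIG = {
    "mysql_innodb_buffer_pool_size": "8G",
    "node_ips": ["10.0.0.11", "10.0.0.12"],
}

def shared_what(a, b):
    """Return the keys on which two deployments' 'what' agrees."""
    return {k for k in a if k in b and a[k] == b[k]}
```

Two operators can then see at a glance where their deployments agree (and can trade tips) and where they genuinely differ.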
There's no best practice, there's no pattern, right? And I know this from doing VMware virtualization way, way back, 15 years now. VMware took off when there was a pattern, the data-center-consolidation pattern with a SAN, which sold a lot of stuff for EMC. And that pattern hockey-sticked virtualization adoption, and we have the same challenge, right? For everybody here to be successful with OpenStack, we have to have patterns that are consistent and repeatable. And another thing to inject: this is about getting all this stuff located inside of OpenStack. OpenStack has turned into a really great community for sharing things, right? Our companies can collaborate on OpenStack. Our companies, most of the time, don't call each other up on the phone for something that isn't OpenStack. I'm not like, hey, how are you guys doing MySQL? That just doesn't happen. But we can do it here. And so we have to take advantage of that, right? Because then those of us in the technical world who know that we need to collaborate can get our jobs done, and not worry about the folks saying, oh, MySQL, don't tell Dell, you know. But there is a problem, in that we have forked deployments in ops. And so the reason why we're here, and part of this is your responsibility, is to help put community pressure on these ops forks to bring them back into compliance, because you don't want to have to go to HP's or Dell's or Rackspace's or Mirantis's or Piston's or Nebula's way of doing things. There are all these different ways of doing things, right? Some of them you want to be proprietary. But where we can avoid it, you can get compatible operations. The more of that, the better, yeah. So this is why we're talking about, he likes acronyms, so: RAs. I like funny pictures. He also likes funny pictures. All the pictures in here come from Rob, and he's really good at those. So it's not a resident advisor or a rich aunt. I wish it was a rich aunt; that'd be great.
Oh, I didn't even notice that. I think you might have violated the code of conduct there. All right, I'm just gonna fixate on that for a second. Okay, anyway, I'll take over. So we had a long conversation, actually, about half an hour ago outside, about whether we should be calling this a reference architecture or a reference implementation. And what we're talking about here is a description of an OpenStack deployment, right? Which is why the argument was being made that this should be called a reference implementation. But I think that, especially in the OpenStack world, we don't really do a lot of spec'ing out a 500-page document, having that be the description of something, and then writing OpenStack code to match. We're sort of the-implementation-is-the-architecture type of people; for better or for worse, that's just how it is. And so what we want is something practical. We want a description that can be installed, that can be vetted, that can be tested, that describes a thing. And we probably want more than one of them. Turns out we've got those configuration options for a reason. And so having different, I think that's the next one, right? Yeah, two slides, yeah. But I don't want us to read down the slides too much. The point about implementation versus architecture is that it's what people need, right? When somebody asks me for a reference architecture, they're not asking for architecture. What they're really asking for is: show me something that you have tested, that works, and that is repeatable, so that I know if I follow your directions, I'll end up with a cake, right? Not: here's the stuff that I think, in my science-lab project, is gonna be a good thing. And so this is why we favored that, yeah. Flour and eggs bind together really nicely. They do; they also make flour-and-egg mush.
And so the point with this, and this is gonna come up as a theme, right, is that we're talking about something that's tested, repeatable, and usable. Which is why implementation makes sense. People call it a reference architecture because it sounds really highfalutin. But what people buy is a reference implementation. What we test is a reference implementation. And Monty and I both love the word test. We do; it's important, you should test things. Test. If you don't test things, they don't work. And so this leads us to a program we've been putting together. There is an empty GitHub repo. We invite all of you to submit pull requests. That's it. But eventually it'll be a Gerrit repo and you can submit there, but whatever; McKenty doesn't want it there yet. Anyway, there are actually gonna be two different elements to this, and this slide references one of them. RefStack as a program has a couple of facets. One of them is the reference architectures, the reference implementations that we're talking about. Which means that I can, in OpenStack's infrastructure, spin one up, and we can run tests against it, and we can verify that it is what we say it is, that it works. On the other hand, and this is the code that Joshua McKenty is putting into the RefStack repo, or you all will, is a system by which people who are working on clouds can submit endpoints into the upstream system, to have the same testing that we run against one of the reference implementations we spin up run against your thing. So we can say: hey, we made architecture A over here, and you're claiming to be architecture A. How about we double-check and make sure that that's in fact the case. Which is then a sort of service that we can provide to help ensure that when people say they're interoperable, they actually are. One of the things that is going on here, and I did a talk earlier about trying to get to upgrades, and actually I think my destiny this summit is to have aspirational discussions.
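The RefStack idea described here, running one test suite against both a reference implementation and a submitted endpoint and comparing the results, can be sketched like this. Everything below is an illustrative assumption: the real system runs Tempest against live clouds, while this toy uses named check functions over stand-in "cloud" dictionaries.

```python
# Hypothetical sketch of the RefStack comparison idea.

def run_suite(cloud, tests):
    """Run each named check against a cloud; return {test_name: passed}."""
    return {name: check(cloud) for name, check in tests.items()}

def matches_reference(candidate_results, reference_results):
    """A candidate 'is' architecture A if it passes every test A passes."""
    return all(candidate_results.get(name, False)
               for name, passed in reference_results.items() if passed)

# Toy checks standing in for real API probes (boot a server, upload an image).
tests = {
    "boot_server": lambda c: c.get("nova", False),
    "upload_image": lambda c: c.get("glance", False),
}

reference = run_suite({"nova": True, "glance": True}, tests)
claimed = run_suite({"nova": True, "glance": False}, tests)
```

With these toy inputs, `matches_reference(claimed, reference)` comes back false: the endpoint claiming to be architecture A fails a test the reference passes.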
One of the things about that is that there's a connection to the things we have to be able to do to achieve these big goals, right? Of having inter-version upgrades that are smooth and safe and small-stepped. And one of those things is that we have to be able to know what you're upgrading from and to, from an implementation perspective. So this is a necessary ingredient for us to start accomplishing a system-wide repeatable upgrade process. And so the other part of this, so McKenty's thing there is the register-your-endpoints-for-testing piece. The first one is the actual definition of the things themselves. And one of the things that we're looking at really strongly, and some of the guys on the TripleO team have been working on this, and there are other guys working on this as well, is defining what an OpenStack architectural implementation is in OpenStack terms. Because part of the problem that we have is we could define this in a lot of Chef recipes or a lot of Puppet modules. The problem is that half of you can read the Chef recipes, and half of you can read the Puppet modules, and half of you can't read either, and half of you can read both. And unfortunately that doesn't lead us anywhere. That leads us into religious wars over tooling, and that's ridiculous. It turns out we're all here to work on OpenStack. So that's actually probably a pretty fair set of verbiage to use to describe the thing that we're doing, I think. Go for it. So here's the conversation we're having. I want to eat all of that. Mm, hot sauce. So the trick is, Monty and I sat down, actually the whole board, Josh was really involved in this, and they loved this idea at the last board meeting, the one in February. Loved it. We immediately stubbed our toes on: okay, well, how do we do this, right? There are 20 ways that are valid. One, we can't have 20, and so we came up with this concept of flavors, right? And we didn't have a better word than flavors. Oh, and a flavor.
Everything matters in a flavor. So one of the things, and we actually want to try to keep running through the slides quickly so that we have time for discussion, but what we need people to think about is what makes a good flavor, right? Is it networking, hypervisor, workload? We actually have a set of criteria. Yep, this one. What we're trying to accomplish with a flavor, to make it useful, is that we want to make sure, very carefully, that two clouds of the same flavor should be able to interoperate. We chose should, not must. And so if you go to a public cloud and say, I am using the chocolate flavor, and I've deployed the chocolate flavor, you expect your scripts, your deployments, hopefully your VMs, your workloads, should transition between the same flavors, right? And we might be able to say that chocolate and mint chocolate chip are also compatible. Maybe so, yeah. And so we feel like that's important from a consumption perspective. We also feel like it's really important that operators can compare flavors. So if you're using the vanilla flavor of OpenStack and somebody else is using the vanilla flavor of OpenStack, you should be able to have a very deep conversation about that. Chocolate, you'll probably still have some things in common. But it turns out deploying Xen and deploying KVM are different, you know? So your operator scripts to deploy Xen instances might not be as relevant to bootstrapping a machine to be able to do virtualization with KVM, right? Some of them might be, but some of them probably won't be. But one of the fun things about this is that we might be able to come in and say, if you change these components, then KVM and Xen might actually be able to be converged into a single flavor. And that is good for everybody, right? That adds a lot of value. And so we don't wanna start with 20 flavors and then go to 40, it should be... That's exactly, as it says on this slide, exactly right.
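The flavor criterion above, same flavor should interoperate, with room for explicitly declared cross-flavor compatibility like chocolate and mint chocolate chip, can be sketched as a simple relation. The flavor names and their contents are illustrative assumptions borrowed from the talk's own examples, not a real taxonomy.

```python
# Hypothetical flavor definitions: each flavor pins the choices that
# determine whether workloads can move between clouds.
FLAVORS = {
    "vanilla":             {"hypervisor": "kvm", "network": "flat-dhcp"},
    "chocolate":           {"hypervisor": "xen", "network": "flat-dhcp"},
    "mint-chocolate-chip": {"hypervisor": "xen", "network": "flat-dhcp"},
}

# Distinct flavors explicitly declared compatible (unordered pairs).
COMPATIBLE = {frozenset({"chocolate", "mint-chocolate-chip"})}

def should_interoperate(a, b):
    """Workloads *should* (not must) move between flavors a and b."""
    return a == b or frozenset({a, b}) in COMPATIBLE
```

The "should, not must" choice from the slide lives in the docstring: the relation is a promise about intent that testing then has to verify, not a guarantee.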
There should be as few of them as possible. Multiple flavors is not necessarily the best thing for the end users. So we might wind up with a table. There are a lot of question marks in here for a reason. We're at the beginning of this process, right? We're at the beginning of figuring out what should go in here and how we should define this and everything. So I said earlier that we wanted to define this in terms of OpenStack language, right? And it turns out we've got this thing called Heat now. I don't know if any of you have heard of it. It seems like you all have, because everyone in the world was in all of the Heat sessions yesterday, which is great. It's the new piece of orchestration inside of OpenStack. And for those of you who aren't familiar with it, I was joking earlier that it's sort of like a Makefile for cloud applications, which he made me put on the slide. But I think that's actually right, because it is a dependency graph that knows how to spin up multi-node applications. So it's not just about, hey, give me a server. It's about, hey, I've got this service, I've got this application that takes, I don't know, 20 servers that all have to interoperate and get brought up in order, in sequence, and relate to each other. And you describe your top-level application that way. It's right back to the same analogy: OpenStack is the operating system for your data center, and you need a Makefile. Yeah, you gotta build your application to run on that operating system. And one of the things that I believe very strongly, and that we've seen, is that there's really no difference between an application that you would deploy on OpenStack, and OpenStack itself as an application that you would deploy on your infrastructure. Right, that's the TripleO project, just the tooling.
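The "Makefile for cloud applications" analogy is literally a dependency graph: services come up in an order where every dependency precedes its dependents. A minimal sketch, with an illustrative service graph (Heat itself expresses this declaratively in templates, not in Python):

```python
# Sketch of the dependency-graph idea behind orchestration: given
# services and what each depends on, compute a valid bring-up order.
from graphlib import TopologicalSorter  # Python 3.9+

# service -> services it depends on (illustrative, not a real template)
DEPENDS_ON = {
    "mysql": set(),
    "rabbitmq": set(),
    "keystone": {"mysql"},
    "glance": {"keystone", "mysql"},
    "nova": {"keystone", "glance", "rabbitmq"},
}

def bring_up_order(graph):
    """Return a deploy order in which dependencies precede dependents."""
    return list(TopologicalSorter(graph).static_order())
```

Running `bring_up_order(DEPENDS_ON)` always puts MySQL before Keystone and Keystone before Nova, which is exactly the "20 servers that get brought up in order and in sequence" property described above.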
Turns out OpenStack is a really complicated application, but it is nonetheless a multi-node application that has dependencies that need to be deployed in order. So if we've got an orchestration tool that knows how to describe multi-node applications with dependencies that need to be deployed in order, then maybe we can describe ourselves using ourselves, and then all wink out of existence in a singularity. Great. So this is just more of the same thing. Heat will go around provisioning nodes and doing things. It will put some configuration metadata onto them, which is great. It'll trigger things to tell you that it has dropped new metadata down, and then the nodes report back. There are very OpenStack-specific ways that we can describe rolling this all out without going down into the world of Chef and Puppet and stuff, because it turns out we've got a gating infrastructure and a lot of developers, and this becomes religiously problematic. However, some of you out there might like Chef or Puppet and might have a lot baked into them, or you might be doing things other than OpenStack in your world and need to do that. And so what we're talking about here is a way to describe what we need to do, and have that be a full description, so it's not missing bits, but also have it break apart in the right places, so that if you have a vested interest in thinking about some of the elements of your specific specializations locally, you can actually still do that in the way that you want to. By show of hands, how many people are familiar with the Heat project? That's awesome. How many people are actively tracking it beyond familiarity? Not that many. How many people have it deployed in a public cloud? That's sort of it. I would be impressed, though. What? Could either of you raise your hand? No. No. I have not deployed it in the public cloud. No.
And I don't have any public clouds to deploy it in. But this is, you know, Heat, just like we're trying to upstream, and there's an investment that my team is making, and I know other teams are making, to upstream what we're doing from an operations perspective. Heat represents some consolidation back, to try to figure out the commonalities in this. So this is an important thing, right? What we want to be able to do is change the conversation from "I'm working on this way to do it" to "I'm collaborating on a more general way" that we can transcend to. And then what's fun is, you know, I know from talking to customers I deal with, they don't care; the same team is deploying applications and deploying OpenStack, right? The same tool chains, the same problems. A lot of those things are very similar, and it doesn't help us to bifurcate, right? No. There's a good meme that's been going around about using the same tools to deploy your cloud that you use to deploy things in your cloud. Basically, I think we're just saying that OpenStack should be that tool. So anyway, this gets us to sort of the final little bit, which is that what we want is interoperability, the nirvana of interoperability. Which is actually, this is the second time that Eastern philosophy has come up today in conversations around OpenStack. I'm not sure why that is. I was chatting with Troy Toman earlier about needing to teach companies the sort of Zen approach of not achieving something by reaching for it, but instead by letting go and letting it come to you. That's just very Zen. Yeah, very Zen. So we won't diverge here. I want to repeat this again, though: we've got a panel on Tuesday at 5:20 that will be a bunch of us talking about how we can get specifically to the interoperability thing. And so the closing, is this our last slide?
This is, and then we go to the discussion. So here's the idea: interoperability is not an end state, it's a sustaining process. It's not a release feature where we're gonna go to the next summit and stand up and say, we have now achieved interoperability. Yay, we won. We're done. Like a lot of things that make OpenStack a successful project, it's a process. It's something we need to bake into our DNA and into what we do, right? We're establishing things like shared architecture through RefStack, open operations through upstreaming the DevOps work and making a home inside of OpenStack's GitHub for the Chef and the Puppet and the Juju and all the pieces that deploy OpenStack, with comprehensive testing, which is essential in this, right? And then ultimately having the community participate and demand it. Because all this will fall apart if we have a lot of people who say, I don't wanna participate, just tell me what to do and I'll go buy it. We want people to do that too, because we want vendors to make money. Yeah, I have to believe my sales guys. But I'm assuming that you're here in this room because you wanna participate in this process, or you are a vendor and you don't wanna repeat efforts; you wanna let the community work together on some things, and then we'll help users. In fact, I'd like to underscore that last thing: community participation is the other meme that keeps coming up. The community is us; the community isn't those guys who are gonna write some software and give it to me. It's all of us doing it together, and it's pretty easy, you just go and do it. And one of the things that's a challenge, and I think everybody in this room should feel challenged by this, is to take what is their corporate or their financial interest and find ways that we can collaborate together and go after the bigger fish, right?
Not repeat efforts, not create fracturing; I think we've just spent all this time trying to help you make the case that it's in our best interest to work together on operations. Yeah, I don't wanna compete with Rob to try to win the market that we have right now. I'm gonna work with Rob to grow the market that we have right now, so that by doing that, each of our shares gets bigger. Totally right. Anyway, with that, this is all intended to be the beginning of a conversation about how this might work out. So, if there are any questions or thoughts... there would seem to be many hands. I think there is a microphone. There's a microphone, if you would queue up with it. And we would love to take suggestions on flavors. Yeah, I don't have a good slide for this one. The question is whether this is an OpenStack certification. Excellent question. Good question. So let me answer that squarely, and then I'll let him take over. Our conversation about this started in part from a conversation on the board about how we deal with brand, how we deal with trademark, and how we deal with naming. At what point do you get to call yourself an OpenStack cloud? It's sort of that question, which is a tricky question, because it's really configurable. And this led us down the road to this. So there hasn't been a formal decision about this as it relates to a formal trademark policy or a formal certification policy. But the thinking here certainly did spring out of discussions about that. So I would say that we're moving towards a thing where, in keeping with a lot of our stuff, starting from implementations rather than from descriptions of things, I think we'd like to have a thing that we can point at, that we know is tested and can be tested against, and then discuss whether or not that can be the basis of a certification, right?
And so, I think it's fitting that both of us answering are board members here. This is trying to answer "what is OpenStack?", because we have a lot of board discussions that are blocked because we don't have a good answer. And the board will take action on things that are blocking community adoption and acceptance. And if certification is doing that, which I believe it might be, then we will be driven to do that. I can't have that conversation yet, because I don't have a what-is-OpenStack definition. So we've got a lot of circular dependencies, and this is our attempt to resolve some of them. So I have a question here. I would also deploy applications on OpenStack. Do you think Heat is going to morph into a PaaS? Is it going to be a replacement for a PaaS, or what are your thoughts on that? No, I don't think it'll be a replacement for a PaaS or morph into one. I would love to see PaaS solutions that are out there use Heat, right? We've got the capability to describe, inside of OpenStack, something that's pretty rich with Heat. And so people that are doing PaaS things, rather than going and writing direct Nova calls, for instance, can have the interaction between their PaaS and the cloud be done in Heat terms, and that makes their job easier in a lot of ways, right? I have a different definition of a PaaS than some people do. I define a PaaS by the services that your cloud is offering. And so from that perspective, as OpenStack has more and more services, we become more of a PaaS on the services side. The actual deployment of capacity is not the hard part of a PaaS. It's doing what Amazon has done very well: having all these ancillary systems that are very sticky and keep you. That's a PaaS to me. We've actually been down this road a little way in our organization and struggled with exactly what you're talking about. SL Gen7s, SL Gen8s, and now DL Gen8s.
Some of them have SSDs, some of them don't. And it seems like this project is so stacked, right? So what we ended up doing was, once the thing was finished building, we would deploy a set of applications on it, run a set of tests against those applications, and if we got the expected results, we'd say, this thing's good to go. So I think actually bringing Linux into it there at the end is a really good analogy. The thing you described is exactly right: what you care about at the end of the day is whether or not you can deploy your applications on the thing. The thing that we're missing right now from the upstream perspective is we don't have a thing we can deploy to then do the can-you-deploy-your-application test on, so that we've got the baseline for a way for you to say, hey, can I deploy my application on it, right? So we're basically trying to get to that thing. And I think that's where Red Hat and Debian and Ubuntu and SUSE are all different, but there is a level to which you're pretty sure that you can run Mozilla on them. And so we've got to get to a baseline level there, so that we can test the thing we actually care about. We're clearly getting kicked out, and there's a sea of people outside the door. Oh my God. I have a three-word answer for you: we're using Heat. You've got an abstraction layer. Exactly. Well, thanks, guys.