Well, welcome to our session here: bite off more than you can chew, and chew it. We're going to have a discussion about different OpenStack deployment methodologies and everything that entails. The format is an open Q&A, so at any point, if you have a question, come up to one of the mics so it gets on the recording, and go ahead and ask. In the meantime, we'll continue along with our subjects. So first, a round of introductions. My name's Tyler Britton; I work for Red Hat. My name's Kenneth Hoy; I work at Rackspace. And my name's Paul Jakovsky; I work for IBM. So I think the first place we should start, to level set, is the different consumption models for OpenStack, right? There's not just one way to use it; there are different products and ways to deploy it. So I think we could start at the end with the least assistance: DIY. So what's DIY? Usually for DIY, you've got an operations team, you've probably got some good Python developers, and you say, we're going to roll our own OpenStack. You may choose to start with an existing distro like Red Hat or Canonical or SUSE, use their packages, and put together the architecture and get all the software installed and configured on your own. Yeah, and typically what I find with folks who go the do-it-yourself approach is that they have some very specific use case where they decide even what the OpenStack community puts out isn't quite the right fit. They'll actually go in and modify the code and add features that aren't part of the community release, which they then maintain. So you mentioned distros, which is one way you can get it, usually from your Linux vendors: Red Hat, SUSE, Ubuntu, just like they do with Linux.
We take the code, package it up, and have packaged, versioned releases of OpenStack that you can consume by version number. There are also others; Mirantis, for example, has an OpenStack distribution that they bundle together, and that's where you get your support from, right? So that's the first layer where you have someone to call if something breaks. The next level after that is managed, and there are a couple of different ways to do that. I don't know if you want to talk about it. Yeah, so the whole idea of managed OpenStack is basically customers who say, this is not my core competency, and rather than having to learn how to do this, I'm just going to pay someone else to do it. So a couple of companies, Rackspace, IBM, and Cisco for example, have what they call a managed OpenStack, or private cloud as a service, where they may be using a distribution or the community code, but they are doing the day-to-day operations of that platform. The idea is users should be able to just consume it like you would a public cloud, where you don't actually have to do upgrades and things like that. And that's for the foreseeable future, right? You just pay every month and you get OpenStack. Yeah, I mean, that's a fast-changing area. Most of the folks like Rackspace, Cisco, and IBM, their approach is: we'll just keep managing this all the way through the various upgrades. There are some companies, Mirantis for example, who have started to do what they call the build-operate-transfer model, where they'll deploy and operate for a year and then you take it over. There are goods and bads with that model, which we'll talk about, as well as with the other models. Yeah, and there's one more: the concept of a partially managed OpenStack, too.
So there are companies like Platform9 where they're managing the OpenStack bits, the control plane, but you as the user still have to manage the underlying hypervisor and things like that. They're just managing and updating the OpenStack bits, but operationally you're responsible for the rest. So there are all these different ways to consume OpenStack. We've hit the high level of what those options are, and if there aren't any questions, we'll go into why you would choose one of them. Actually, what may be interesting: for the folks in the room who have deployed OpenStack in their companies, how many of you are rolling your own OpenStack? Okay. How many of you are doing a distribution from a vendor? And how many of you are letting someone else manage it for you? Okay, so it's mostly roll your own, actually. We've got a question? Yeah, so when you're rolling your own, as far as hosting goes, what's the integration requirement for, say, a Rackspace, so that we can replicate data between our on-prem and the managed cloud? So in the case where you're rolling your own, you have your own and then you want to integrate with a managed one... Correct. Ken, do you want to talk about that? Yeah, are you talking about private cloud to private cloud, or private cloud to public cloud? Private to public. Yeah, so it's basically at the API level, so you can use some of the same tools. A lot of customers who do that use something like Ansible that abstracts a lot of the differences; they basically say, hey, spin up a VM, and then they just point it at whichever cloud it's going to be. It's still a little tricky, to be honest. Some of the APIs are different, and Keystone Federation, the identity management piece, is still fairly new.
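The "point it at whichever cloud" pattern can be sketched as an Ansible task. This is a hypothetical fragment, assuming Ansible's `os_server` module and a `clouds.yaml` file that defines entries named `onprem` and `rackspace` (both names invented here); the same task targets either cloud just by switching one variable.

```yaml
# Hypothetical playbook fragment: the `cloud` entry refers to a named
# cloud in clouds.yaml, which is where the endpoint differences live.
- name: Spin up a VM on whichever cloud is selected
  hosts: localhost
  vars:
    target_cloud: onprem        # or: rackspace
  tasks:
    - name: Boot an instance
      os_server:
        cloud: "{{ target_cloud }}"
        name: app-vm-01
        image: ubuntu-16.04     # image/flavor names often differ per cloud
        flavor: m1.small
        state: present
```

As the panel notes, this abstraction only goes so far: image names, flavors, and network setup still tend to differ between clouds and need per-cloud variables.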
So it can be done, but there's going to be some work involved; it's not a simple point and click. Yeah, and as far as network connectivity goes, we see a lot of people doing an IPsec VPN or something between them. If you're using VLAN networks on your own OpenStack, it's fairly easy to have a firewall slash VPN device in front that connects that up to what you have in Rackspace or SoftLayer or wherever your other infrastructure is, and then you get sort of layer 2, layer 3 connectivity between them, even though it's obviously VPNing across the internet. And of course, some of the providers have a direct connect and things like that, but that tends to get really pricey and isn't necessary for most use cases. Thanks. So for the next area we wanted to talk about, I think the first place to start is this: Ken, you mentioned that if someone does DIY, one of the reasons is they want to modify it for some very specific use case. So that gets into the conversation of stability versus capability, right? Any change you make to OpenStack off the defaults brings some cost. So, Paul, do you want to talk about what it's like carrying a patch, what that looks like? Yeah, as soon as you start changing stuff from upstream OpenStack, then, depending on how you've done the install, if you're installing from source or building your own packages, it's a little bit easier than if you're using a distribution.
If you're coming from a distribution, you have to modify bits out from underneath the packages the distribution gave you, and then you can end up in a weird fight where the distribution is trying to fix the RPM whose code you've changed out from under it, and you're trying to keep it the same, so you get into a constant fight with them. And if you're following something like the Red Hat STIG compliance rules, which are very specific that anything you get from a package doesn't get changed apart from config files, technically you're losing some of your STIG compliance there, which for some people is a problem and for some people isn't. Then, even if you're building your own packages or going from source, you have the problem that you now have either a fork of Nova, or you're doing something mid-install to apply a patch, and you've got to carry that. Any time you're bringing down new Nova code, whether there's a new OpenStack version or you want newer features mid-version, you're having to reapply those patches and make sure they all work, and often you may hit a bug you weren't expecting three months later because of some weird edge case. So you can certainly get into a lot of issues where you spend a lot of time troubleshooting. Yeah. The people I find have done DIY successfully tend to be operators who also have some development background and really know Python, because you do have to do stuff like get into the code and into the database. You have to have very strong Linux skills to even have a shot at making this work at any kind of scale. Now, you mentioned version changes and upgrades, right? That's always a problem. Hey, day one, we can deploy this OpenStack, whatever.
That's cool. But then there's the ongoing piece. If you're doing one of the managed options, that's generally taken care of, right? Yep. So that's one of the things you're trading off that you would normally have to handle yourself with DIY. And at least with a distro, someone's testing the packages, like you said, and if you're not modifying them, it's usually a pretty well-understood process. But what does it look like if you're doing managed? Is it right away? What's the delay? Yeah, so I'll speak for Rackspace and you can talk about IBM. The way we do it is, when we have a new version, we work with the customer and say we need some kind of window to do the upgrade. We rely on the customer to handle whatever needs to be done at the application layer and the VM layer, and once they give us the okay, we go ahead and do the upgrade from one version to the other at the OpenStack layer. But it's a shared responsibility, right? Because we don't have responsibility for the application layer. Yeah, we're fairly similar on Bluemix Private Cloud. One thing we do is build our own packages, and we don't version them the standard way your distro packages are versioned, so we can have two versions of Nova, or six versions of Nova, on the same machine. So when we want to upgrade, say, the Nova API, at the upgrade step we're flipping a symlink and restarting a service, rather than doing an RPM upgrade or an apt upgrade and having to remove bits and re-add bits. It tends to be a lot smoother process. So most of our upgrades are in place and have very little effect on the customer and their workload.
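The "flip a symlink and restart the service" style of upgrade can be sketched roughly like this. Everything here is invented for illustration (the paths, the version names, the directory layout); it only shows the core idea that two versions sit side by side and the switch is a link swap, not a package replacement.

```python
# Sketch: side-by-side installs with a "current" symlink, as described above.
import os
import tempfile

root = tempfile.mkdtemp()

# Two versions of a service installed side by side (names are made up).
for version in ("nova-13.1.0", "nova-14.0.1"):
    os.makedirs(os.path.join(root, "versions", version))

current = os.path.join(root, "nova-current")

def activate(version):
    """Point the 'current' symlink at a versioned install directory.

    A real deployment would restart the service afterwards; the swap itself
    is a single rename rather than an RPM/apt upgrade.
    """
    target = os.path.join(root, "versions", version)
    tmp = current + ".tmp"
    os.symlink(target, tmp)
    os.replace(tmp, current)  # atomically replace the old link

activate("nova-13.1.0")
activate("nova-14.0.1")       # the "upgrade": just re-point the link
print(os.path.basename(os.readlink(current)))
```

Rolling back is the same operation in reverse: re-point the link at the previous version and restart, which is part of why this tends to be smoother than removing and re-adding package bits.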
Unless, of course, you're rebooting for a kernel update or something like that, in which case we'll coordinate with the customer and make sure their workloads are in a safe place to be restarted or migrated to another machine if need be, and roll through, or whatever the appropriate method is for that customer. Yeah, and there are some managed providers, Platform9 and maybe others, that actually do a blue-green deployment of the OpenStack control plane. They can do that because they run the control plane in AWS, not on premise, so they can just spin up another OpenStack and move everybody over to the new one. So there are different approaches, but the main thing that's the same about all of them is they're trying to make it as hands-free for the customer as humanly possible. We have a question. In the managed offerings, or in addition to your distro specifics, how easy is it to select from a full menu of OpenStack components, to swap out different things? Have you found that most people have chosen the path they want to go down, and that's what they offer? Do you mean which projects, or versions of projects? Or both? More like components that are easily swappable, where there are multiple solutions. So, meaning like the Cinder driver options and things like that. Yeah, like the various SDNs and whatnot. Yeah, so I think it depends. With Bluemix Private Cloud, we're very prescriptive, and we basically provide the entire BOM from the networks on down, and we say: we are Linux bridge with VXLAN, we are Cinder with a Ceph backend, et cetera. So we're super prescriptive about that, whereas I think Rackspace and some others may be a little looser and give more options. Not really. So I talked about how one of the primary reasons for DIY is that you want to be able to customize things. And you can sort of do that, because no one else will consume this OpenStack except you.
If you're a distro, and especially if you're a managed service provider, the worst possible thing you could do is manage 100 snowflakes, right? Because that doesn't work at scale. So the way we get around that is to give you a very opinionated approach to OpenStack. Rackspace, for example, currently supports eight projects, and we basically say that unless we feel a project is production ready for 80 to 90% of our customers, we're not going to roll it in. Because if we have to manage, and I won't name those projects, a few of these projects at ten customers, that takes away from our ability to manage the whole fleet of customers we have. And I think it's that prescriptiveness that gives us the ability to do upgrades in a very reliable, resilient manner. We certainly do upgrades frequently through the year, and when we roll out a new release of our product, we're immediately upgrading our production customers. So it's very rare for a customer to be lagging a version behind, say, three or four months after we've cut a new version. So that's one of those decision points, right? If you're a customer and you say, I'll just pick a project, we need Trove for our use case, and it's not included in the offering, generally you're not getting it. That's when you'd have to go to a distro or DIY. Yeah, that's right. And even the distros only support a small group of the projects. Some may support Trove, some don't; I can't think of them all off the top of my head, but some may support Ironic, some don't. So if you have very specific, very deep needs in some of those newer or lesser-used projects, you may end up needing to DIY, or you may need to DIY just that particular piece. Like, you could bring your own Ironic and tie it into the Red Hat distro.
You may have some support issues with Red Hat and have to figure that out, but in theory it could be done if you wanted to. Usually, though, if you have that specific a use case, you're going to DIY. Yeah, and specifically, if it's the type of project that just consumes the other OpenStack services, versus one that has an agent that goes on the nodes and all that kind of stuff, it's even easier to just drop it on top. Yeah, that's right. Before we supported Heat in Bluemix Private Cloud, we actually had some customers running their own Heat and just connecting it up to our APIs, and that worked quite well for them. Of course, yeah. Can you compare and contrast the impact that each of these consumption models has on data compliance, data governance, and data sovereignty? Yeah, that's a good question, from the standpoint that we talked about the models but didn't mention where they can be located, right? I think DIY is pretty obvious: it's up to you; wherever you want to run it, you can run it. Same thing with a distro: if I have Red Hat OpenStack, I pop the ISO image in and can pretty much run it wherever. What about the managed options? Well, some managed options are in your data center, and some are in the data center of the company managing it. But the hardware is dedicated to you, so even if it's in the managed provider's data center, it's still yours. And so from a data sovereignty point of view, that covers most cases. Then compliance was mentioned, and I know both Bluemix and Rackspace are doing a lot of work around compliance and meeting the STIG requirements. There's been a lot of work in OpenStack-Ansible, and also in Bluemix's tool Ursula, to support that.
And then to even do reporting against it: in Bluemix, we actually get alerted by our monitoring systems if a system comes out of STIG compliance, so we can react very quickly and fix whatever caused it to go out of compliance. Yeah, I think one thing that balances it out: if you're doing your own and you have the right people, you can probably move faster than a distro vendor or a managed provider in terms of supporting all the various compliance rules, because you can custom fit the platform to meet your particular compliance requirements. For distros and managed service providers, because we're trying to meet the needs of many more customers, things will always move a little slower. Now, the flip side is that with DIY you have to provide all the resources to do that. If you're IBM or Cisco or Rackspace, we can bring in other teams that can help provide some of the compliance capabilities you need. So there are trade-offs; you've got to figure out how much you want to control your own destiny. One thing I tell people who do DIY is: you've now become a software company, because you are now the product manager for your own product, and you have to do all the things a product team has to do to manage the life cycle. Any question over here? Yeah, so as you mentioned, in managed or the various consumption models besides DIY, you potentially have an opinionated stack, or a set of services. Besides customer demand, how do you gauge the readiness of a service in terms of maturity? When do you decide, or how do you decide, that you're comfortable putting one in? Yeah, so from a Bluemix perspective, we try to make it customer-driven, but we will bring early support for a project into our install base without actually installing it for our customers. That way we can have it in our test clouds and do a lot more rigorous testing.
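The "alert when a system drifts out of compliance" idea can be sketched in miniature. This is a toy illustration, not how Ursula or any real STIG tooling works: the rule set, paths, and permission check are all invented, and real STIG checks cover far more than file modes.

```python
# Toy compliance-drift check: compare file permissions against a rule set
# and report anything out of line, the way a monitoring check might.
import os
import stat
import tempfile

def check_modes(rules):
    """Return (path, expected, actual) tuples for files out of compliance."""
    violations = []
    for path, expected_mode in rules.items():
        actual = stat.S_IMODE(os.stat(path).st_mode)
        if actual != expected_mode:
            violations.append((path, oct(expected_mode), oct(actual)))
    return violations

# Simulate a config file whose permissions have drifted too permissive.
tmpdir = tempfile.mkdtemp()
conf = os.path.join(tmpdir, "nova.conf")
open(conf, "w").close()
os.chmod(conf, 0o666)

drift = check_modes({conf: 0o640})
for path, want, got in drift:
    print(f"OUT OF COMPLIANCE: {path} expected {want} got {got}")
```

In the setup the panel describes, the output of a check like this would feed a monitoring/alerting system rather than stdout, so an operator can remediate quickly.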
So if you look at our installer, you might see roles for Ironic and roles for Magnum and some other projects, but we don't actually support those in our product. We just have them there so we can install them in our own test clouds, test them out, and try to gauge where they're at: not just whether they're production ready, but whether they're easy to use. Do we have to teach customers how to build a special image for Magnum or for Trove or whatever else? So there are a lot of considerations other than just whether the project is ready. I'm going to say something that will probably come across as negative, but it probably is. The fact of the matter is, when OpenStack projects get rolled out, depending on the project, there's sometimes not a lot of concern about whether this is something that can be easily monitored or managed or troubleshot; it's more about, I've got a new feature, let's get it out. So one of the things any provider, particularly the managed providers, and distros to some extent, has to do is pick projects that you can actually manage, that will actually scale. The OpenStack community may roll out a project and call it ready when it works on five nodes, but we have to be concerned with whether it works across 100 nodes. That's why things will always take a little longer: we're making sure it's upgradable, that you can monitor it, and that you can troubleshoot it in a way that minimizes customer downtime. Yeah, and on the distro side, there's no specific formula you could punch in, like: it has this many selections in the user survey, plus this, plus this, equals we're going to support it. Some of it is technical readiness, but a lot of it is customer demand. It's like, customers really want this project and they're really looking for it. That, I mean, like most product management, right?
That's the main driver of supporting new projects. Question? Okay, thank you. Another operational challenge we face: imagine we already have a new deployment. Is there a maturity model, so we can grow on a roadmap in a controlled fashion for deploying new projects or adding new services into a service catalog? Yeah, so you gave a talk about that. Yeah, the nice thing about this is that everyone's different, right? What may be ready for this person's cloud, you might not consider ready, because you're looking for more things. So the OpenStack Foundation has created a thing called the Project Navigator. If you go to the OpenStack site, there's a Project Navigator, and it gives a lot of statistics across all the projects. It'll show you the number of contributors, whether they meet regularly, whether they're meeting the release cycles, all the different boxes they've checked. So then you can decide: does this project meet your personal readiness test? And you can also keep an eye on projects and say, we don't think that project's ready for us yet, but that can help you plan: we want to add, say, Magnum later, and these are the areas where we think it's deficient. And if it's a feature, say rolling upgrades aren't supported, that may be feedback you want to give to the project: hey, we'd love to run your project, but until you support this, we can't really use it. That's where user feedback can come back to the projects. And on that note, that's where how you're deployed comes in, too. If you're doing DIY, well, guess what: log into the IRC meetings and voice your opinion. If you're going through a distro vendor, probably your easiest avenue is to go to the vendor and say, hey, we need this, but it doesn't support that; when are you going to support it?
Can you push it upstream? Have that customer feedback loop. And the same thing on the managed side: hey, we'd really love to use this on your service. Then they have the appropriate upstream resources to either make that voice known, or even actual developers who can contribute to get it over that hurdle. Yeah, and if you're DIY, don't choose a project solely on whether it has a feature you think is good. It's really important to see how many people are actually contributing and when the last patch was rolled out, because otherwise you may end up leading the project unintentionally. Speaking of upgrades, and which projects and so on, we talked about packaging, right? Obviously, distribution vendors make packages, and it's part of the deal: if I have a Red Hat Linux box, I can yum install packages and new versions of Nova and things like that. What are the other ways you can get packages? Paul, you mentioned this earlier. Yeah, so for Bluemix Private Cloud, for a long time we were deploying from source, and we would pin the versions at the git SHA of the version we wanted. That worked fine, but it wasn't very deterministic, because while we were pinning Nova to a very specific version, upstream dependencies could change, and that could cause all sorts of issues. So we built a project called Giftwrap, where you basically tell it what versions you want, and it goes and collects all the dependencies and builds either deb files or RPMs, and I think we support Docker images in that tool as well. That way you've got a very specific deployable artifact that's always the same, versus mostly the same. That definitely helped improve our longer-term stability and predictability. Yeah, so in the DIY mode, you would still want to be building packages at that point, right?
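The pinning idea described above, turning a set of project-to-git-SHA pins into an exact, repeatable artifact list, can be sketched in a few lines. The project names, SHAs, and URL layout here are invented for illustration; Giftwrap itself does considerably more (dependency collection, actual package building).

```python
# Sketch: render pip-style VCS requirements pinned to exact commits, so
# every build resolves to the same code instead of "whatever is latest".

PINS = {
    "nova": "4f2a9c1",
    "neutron": "b81d0e7",
}

def requirement_lines(pins, base="https://opendev.org/openstack"):
    """Render one pinned, pip-installable requirement per project."""
    return [
        f"git+{base}/{project}@{sha}#egg={project}"
        for project, sha in sorted(pins.items())
    ]

for line in requirement_lines(PINS):
    print(line)
```

The point the speaker makes is that pinning the top-level project is not enough: its transitive dependencies must be frozen too (for example, with a fully resolved requirements file), or the resulting artifact is only "mostly the same" from build to build.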
You're not just doing a git pull and restarting services or anything. Yeah, I mean, there's nothing wrong with doing a git pull, and I guess if you were using a newer project that's changing more frequently, maybe going directly from source is a good idea. But for the more stable stuff, Nova, Neutron, et cetera, I really think it's a good idea to be building packages. Even if you don't want to go all the way to a .deb file, you can build it into a Python wheel, or there are a few other options: build it into a virtualenv and tarball up that virtualenv, anything in that sort of realm. Yeah, and on our public cloud, we actually took a different approach from our private cloud. On our public cloud, we're pulling from source all the time; our goal is to be no more than two weeks behind the absolute latest release. Frequently we fall behind more than that, but I can tell you, in my experience, that's not something a lot of companies are willing to handle. You really have to know what you're doing, because you can really screw something up very badly. On the private cloud side, because we're working with customers that tend not to need to be on the bleeding edge, we actually just take the community release when it comes out, and then we use Ansible playbooks to deploy it. We have a project called OpenStack-Ansible; some of you might be familiar with that. When you deploy OpenStack-Ansible, you're essentially deploying Rackspace's reference architecture for an OpenStack private cloud. Yeah, I think one of the other things we're talking about here is upstream. If you're not familiar, upstream is the actual OpenStack code being developed on. Obviously, as a distribution, we pull the releases, similar to what was described with Rackspace, and we package it up and test it and those types of things.
And we talked earlier about carrying patches. One of the options is: hey, we're DIY, OpenStack doesn't do a thing exactly how we want, so we're going to patch it, and then we're going to have to carry that patch constantly between versions and test it. One of the other options is to contribute that code upstream, depending on whether it's something the community wants, a normal thing, not something weird. Getting stuff upstream is preferable, right? Yeah, that's right. But there can be a significant lag. We have put features or bug fixes up and had a six-to-twelve-month lag in getting them into master and then pushed to stable/mitaka, or whatever the stable version is right now. So it's not unusual for us to briefly fork stable/mitaka or stable/newton, add our patches in there, and then jump back off that fork once they've caught back up. And the same with features. Occasionally we have things we need for our own internal compliance, around password complexity and such, that may not be supported upstream, that upstream may not want to support. Then we have to make a call: are we forking the code, or are we going to inject some middleware somewhere, or something like that? Yeah, so one of the challenges of doing all this is that sometimes people don't realize there's a whole process involved in getting a patch or new code actually reviewed and approved. We've had customers come to us and say, we tried to do it ourselves and we haven't been able to get a single patch through because we're missing this, that, or the other, and they rely on us to help them push it through, because we have so much experience doing that. So, Rackspace also supports Red Hat's OpenStack distro, and one of the things Red Hat does is have a long-term release and then a shorter-term release. So on the short-term release, you're basically getting code:
Every release of OpenStack is going to be a release you can use. On the longer-term release, they may stick with something like Mitaka and say you can run Mitaka for two to three years. And what they'll do is, if there are fixes in a newer release, instead of making you upgrade to Ocata, they'll backport those fixes into their distribution of Mitaka. There's even some work in the OpenStack community on stable branch maintenance, because all the distro vendors do that, right? It's a pretty common thing to have a long-term release model and then have to backport patches. The idea is to get the distribution vendors to collaborate so not everyone repeats the same backporting work. From a DIY or managed perspective, one of the advantages is you get the option to patch a lot sooner. So if a security vulnerability comes out, you can patch the same day, whereas if you're coming from a distro, really you want them to update their packages and then update from their packages, versus getting out from underneath them and, again, changing code out from under their packages, which, as I said earlier, can get pretty hairy. Yeah, and that's why, even in those cases, say a new CVE comes out, a distro vendor like Red Hat may release their own patch as an initial thing and then rev the package. So the costs of that, from a supportability perspective and a people perspective, can get high if you're not aware of them when you're doing DIY. Question?
Yeah, I want to get all three of you to talk about backup, not replication, because that's not backup. What's your strategy for dealing with anything from malware or ransomware, to a rogue sysadmin, to "I want to restore my environment from a day ago," whether that's a VM, some Nova compute, some data, or the entire infrastructure in a DR case? Yeah, so from our perspective, we don't really have a ton to back up from the underlying systems themselves. We obviously have the databases, but RabbitMQ doesn't really have any stateful data. From the VM perspective, we let our customers choose however they want to back up; we don't force a particular method, because everyone already has their own way of doing backups. So we kind of leave that to the customer. There are a few things on the controller and the compute nodes we have to back up, a few config pieces and such, but a lot of that is captured in our config management code anyway. So there's not a ton we need to do, and then we work with the customer to give them connectivity from their backup software to us via VPN or whatever else they need, and help them figure out the best way to back up, but we don't do backups for our customers. Yeah, so I think the similar theme for all the managed providers and distros is that we let the customer choose how they want to back up at the VM level. Again, like I said earlier, if you go with a vendor, they may have other resources as an option, but it's not going to be a mandate. An example: at Rackspace, because we have a very large OpenStack public cloud with Swift, we have customers that snap images and throw them into Swift, or what we call our Cloud Files implementation.
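If you do go the "snap images into Swift" route, you still have to decide how long to keep them. Here's a small, hypothetical sketch of the retention half of that workflow; the snapshot names and the 7-day window are invented for illustration, and the actual upload to and delete from Swift are out of scope.

```python
# Sketch: split nightly snapshots into keep/prune sets by age.
from datetime import date, timedelta

def prune_candidates(snapshots, today, keep_days=7):
    """Split (name, date) snapshot pairs into (keep, prune) lists by age."""
    cutoff = today - timedelta(days=keep_days)
    keep = [s for s in snapshots if s[1] >= cutoff]
    prune = [s for s in snapshots if s[1] < cutoff]
    return keep, prune

# Ten nightly snapshots leading up to "today".
today = date(2017, 5, 10)
snaps = [(f"app-vm-{d:02}", date(2017, 5, d)) for d in range(1, 11)]
keep, prune = prune_candidates(snaps, today)
print(len(keep), len(prune))
```

As the panel stresses, snapshots alone are closer to replication than true backup; a retention policy like this is only one piece of a strategy that also has to cover databases and application data.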
We're also a very big Commvault partner, and I'm not necessarily saying Commvault is the best solution for backing up OpenStack VMs, but it's what Rackspace has, so customers use it for that. At the end of the day we'll offer some resources, but it's up to the customer to choose which method works. Yeah, and maybe for the larger IBM corporation, or the larger Rackspace corporation, there are parts of the org that will do backups for you, partnering with us and whatnot, but just us as the OpenStack team, we punt on that to the customer or to our partners inside our own companies to work with that customer. Yeah, and one other thing: some of this discussion about backup is actually relatively recent, because if you go back through the history of OpenStack, there were some assumptions that you would never need to back up your VMs, right? Because they're completely stateless. So to be honest, no one really thought about how you would back up VMs, because the thought was you don't need to. That discussion has been changing somewhat with more enterprise customers who want more stateful workloads, but it's still a work in progress, and it's still very far behind, say, Nova or Neutron. Yeah, I would say the same thing. There are some things, even with Glance, like snapping VMs, but I think most enterprises have something else for backups, and they're leveraging it to back up the databases and the instances, both because of the workflow and because it's probably more advanced than what's native to OpenStack right now. We're just about done, but if anyone else wants to hop up with a question, go ahead. You again? Does someone else want to ask a question? No? So, question.
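The "few config pieces" and database backups on the control plane mentioned above can be sketched as follows. Paths and database names are typical defaults, not a mandate; the `mysqldump` step is shown as a comment because it needs a live database to run.

```shell
# Minimal sketch of a control-plane backup: the stateful pieces are the
# service databases plus a handful of config files. This demo builds a
# toy /etc layout in a temp dir so the tar step actually runs.
set -e
work=$(mktemp -d)
cd "$work"
mkdir -p etc/nova etc/neutron etc/glance
printf '[DEFAULT]\n' | tee etc/nova/nova.conf \
    etc/neutron/neutron.conf etc/glance/glance-api.conf > /dev/null
stamp=$(date +%Y%m%d)
# On a real controller you would also dump each service database:
#   mysqldump --single-transaction nova > nova-"$stamp".sql
tar czf control-plane-"$stamp".tar.gz etc/
tar tzf control-plane-"$stamp".tar.gz
```

As the panel notes, if the configs are already generated by config management, the database dumps are the part that genuinely needs a backup schedule.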
So obviously you provide different options: managed, distros, do-it-yourself. But I'm curious, as a provider of OpenStack, if you're doing managed, why don't you make your code a distro as well, so you can support the maximum spectrum of customers? Why are people either managed or distro, and while there might be some that are both, not many? Well, I think it's hard to do both because you're focusing at a different level. At the managed level you're really focusing on stability and operability, whereas as a distribution you're also thinking about distribution and packaging and other things. So they're two fairly separate businesses, and there's no reason a large enough managed provider couldn't do both, but I guess it just depends on what they're doing. Yeah, and I think a lot of the providers now are actually using a distro as their OpenStack layer within the managed service, and I think that's a callback to what Paul is saying. I'll say this for Rackspace: our core competency isn't building distros or packages. Our core competency is building all the other services, like monitoring and upgrades and operations, around OpenStack. So we don't really want to try to do both at the same time if we can avoid it. Yeah, and if you're a Red Hat customer, you don't want my distribution, you don't want his distribution, you want Red Hat, right? So it's better for us to work with Red Hat and provide Red Hat's OpenStack Platform as an option for our managed offering. And I think both Rackspace and IBM actually do offer that. Can you say Red Hat one more time? No, Canonical? Okay, question over here? Yeah, when would someone choose a managed private cloud versus a public cloud? What are the differences between the two? So that's tough, and it's really up to the individual customer.
Some of it's about predictability of costs, some is capacity planning, and it can be about data sovereignty. Yeah, I mean, there's also whether they're comfortable using Amazon. Walmart's the perfect example, right? Amazon's a competitor, so they don't want to use Amazon. They could go to GCE or something else, I guess, but... Yeah, the three I most frequently hear talking with customers: the first is predictable cost. I'm not sure how IBM or Cisco does it, but at Rackspace, for example, we don't charge an on-demand VM price; we basically charge by the compute node on a monthly basis. You can spin up 1,000 VMs on that node or you can spin up 10, we don't care, it's the same price. So that's a predictable cost no matter how many VMs you use. The second reason is single tenancy. Sometimes, for data or compliance reasons, customers don't want to share hardware with other customers. And the third thing is, in some cases, avoiding the noisy neighbor. We've had customers who were running in some public cloud and had performance issues because they were sharing bandwidth and hardware with another customer, so they went to our private cloud because they can get predictable performance. Yeah, and on the noisy neighbor point, even inside their own company, because it's a private cloud and we manage it for them, we can help set up availability zones and host aggregates and such, so that some groups get their own hardware if they have very specific needs. Or, you know, one group has GPUs and they don't want anyone else using their GPUs because they cost so much, and we can help out with that. One last thing before we run out of time: what we want to cover is the decision-making process, right? So how do you decide which of these makes sense for you? Do you have any quick tips, yeah?
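The GPU carve-out just described is typically done with Nova host aggregates. This is a hedged sketch: the host, aggregate, and flavor names are hypothetical, it requires admin credentials against a live cloud, and it assumes the `AggregateInstanceExtraSpecsFilter` is enabled in the Nova scheduler.

```shell
# Hedged sketch: dedicate specific compute nodes to one group by
# tagging them in a host aggregate and gating access via flavor
# extra specs. All names are illustrative.

# Group the GPU compute nodes into an aggregate tagged gpu=true.
openstack aggregate create --property gpu=true gpu-hosts
openstack aggregate add host gpu-hosts compute-gpu-01

# Only flavors carrying the matching extra spec will be scheduled
# onto hosts in that aggregate.
openstack flavor create --vcpus 8 --ram 65536 --disk 100 \
  --property aggregate_instance_extra_specs:gpu=true g1.xlarge
```

To keep everyone else off that hardware, the other flavors need a non-matching spec (or the scheduler configured so unmatched flavors avoid tagged aggregates), which is part of what a managed provider sets up for you.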
Do you want OpenStack to be your business's core competency? If not, have someone help you do it. And on that note, the Foundation's been trying to help there as well, to help people navigate this, so there have been a number of publications. Right now if you go to openstack.org/enterprise, there are a couple of different e-books, and I think there are some physical copies here in the Foundation lounge. They help you make those decisions about which operational model you want to choose, and cover some of the things we talked about so you can pick out what makes sense. And maybe you want to change: you're doing it one way and you're like, hey, actually, now that we're looking at this, we don't want to do this. So there are lots of good resources to help you decide which way makes sense. Thanks everyone.