Welcome, everybody. Thank you for taking the time out to come sit and chat with us. We want this to be a very interactive session. We don't want to just talk at you, and we're not going to do death by PowerPoint. We want to be able to hear your feedback as well as share some of our thoughts around the topic. So thank you again for coming. I'm going to turn it over to my friend Tyler here. Yeah. So the topic we're talking about is kind of what happens when you go live with OpenStack and the decisions you make there. And the main piece of that is customizations. When we decide to do something non-standard with OpenStack, is it a good idea? Where is it a good idea? Where is it a bad idea? We have a couple of different topic areas we want to cover, and I think the first place we want to start is just consumption models in general, right? So Walter, you're with Rackspace. What does Rackspace offer, for example? What are the different models that you see with OpenStack? Right, right. So if you're not familiar with Rackspace, we were co-founders of OpenStack, and we believe very heavily in a managed cloud approach. That is actually our tagline: the number one managed cloud provider. We believe that we can provide you a great service by helping you manage your OpenStack clouds and alleviate the stress of doing it yourself. And when I say managed, I mean at the very deep operator level: standing it up, making sure it's running, doing upgrades and all the engineering behind the scenes, so that you can just consume the top layers, right? You can just consume the services, consume the resources, consume the storage, and not have to worry about managing the environment. So that's our belief as to how OpenStack should be approached. Yeah, and being with IBM, we're kind of in that similar mindset. 
But there are some other options out there, right? Obviously, we can just go download it ourselves. What does that look like, just doing it yourself? Yeah, DIY. So I'm from Cisco. We actually do OpenStack too, believe it or not. We also have a managed product, which is what I work on, called MetaCloud. And one thing I've noticed in this space is that as time passes, more and more vendors move to a managed offering. But there's a full spectrum, right, from complete DIY, where you're building your own packages and patching everything manually, all the way to never even logging into the hypervisors or the control plane: another company manages that for you, and you consume it like you would a public cloud. And there are pros and cons to each of these. So why would I want to do it myself? Usually, if you're doing it yourself, it means you have a good amount of OpenStack expertise in-house already. You've got some really awesome OpenStack engineers. You want to run trunk projects, do a lot of active development yourself, write your own patches, have bleeding-edge features and functionality, and support it all yourself, right? Which you don't get if you go with a distro, which is usually weeks or months behind trunk, or with a managed service, which is usually even farther behind. Sure. Yeah, like you mentioned, the other option is a distro. Generally, the distro vendor is doing the packaging for you, which is very nice, versus one of the challenges with DIY being packaging up those OpenStack services. So that's the trade-off: someone else is taking care of that for me, but I have to wait for them to do it. But now you have someone you can call when you have a problem. That's one of the areas where you see people go to a distro: we have some OpenStack people. 
We want to work on it ourselves somewhat, but we need some handholding, a hotline to call when things hit the fan. So what does that mean for upstream? If I'm a Rackspace customer and we've identified a bug that's affecting us, how does that get upstream? Right, right. So that's actually a really good question and a really good point. What we've found is that it can be a love-hate relationship in those situations, primarily because we have a product that we sell to our customers, and we're able to give you four nines of SLA on top of your OpenStack cloud because we standardize on that product, right? So if a bug is found, we will actively work with the customer to try to contribute code back upstream to fix that bug, but it takes time, right? It's not immediate, not overnight. And then there's the flip side of that: if there is a service that maybe is not as mature as you would like it to be yet, we can actually help make that service more mature and then contribute that code upstream as well, for everyone else to consume later on. So that's kind of how that works. Yeah, and that's across the whole spectrum, right? With DIY, you're responsible for upstreaming the code, for interacting with the community, maybe writing the code and submitting the patch yourself. With a distro vendor, you probably have the ability to submit a ticket through them, Red Hat or Canonical or whoever, and hope that they fix it in a timely manner. With a managed service vendor, it's a similar situation, but usually you have a little more leverage to twist their arm, because you're paying them a lot more than you're paying Red Hat to make sure that your service stays up, and there are SLAs involved and everything. And ideally, they have a larger footprint with whatever vendor you're talking to. 
So they have a little more clout to twist Red Hat's or Canonical's arm to get the packages pushed out faster, right? So in that case, the workflow is: we've identified a bug that's affecting us; it has to go to the vendor, say the distro vendor or the managed service vendor, who then potentially commits it upstream; it gets into trunk; and then it has to feed back down through that vendor's packages and down to you. And with a managed service vendor, I know in our case, and probably in Rackspace's case as well, if there's a critical bug, we can patch it ourselves as an emergency interim fix until it gets upstreamed. Fortunately, OpenStack's been a lot more stable recently, so that hasn't been as much of an issue as it was back in, say, the Diablo/Essex timeframe, when it was happening all the time. Sure. And it's the same for us at IBM with our managed offering. It's a similar kind of thing: we try to get it upstream, because once you start living off a fork, you're adding a lot of work every single cycle. So getting it upstream is key, but being able to apply it locally is important as well. And you don't have to necessarily choose one consumption model and stick with it. I have a couple of customers that have 20-plus AZs, and they have a few that they built DIY, and they use those to test upcoming OpenStack projects, figure out how they're going to incorporate them once they're supported on the managed service side, and just give them some in-house expertise on OpenStack; they generally use that stuff for dev/test. That's actually a really good point, and I'm happy you brought that up, because, just to repeat that message: you don't have to choose just one consumption model. 
It doesn't mean that if you do it yourself, you have to keep doing it yourself. We have customers who have spun up their own OpenStack clusters and are running them, and they want to spin up additional ones, maybe in different parts of the world, different regions, or just outside of their data center, and they'll come and talk to us about that. So you can actually mix and match. OpenStack is OpenStack, which is the good news. Yeah, we saw that interop demo this morning, right? Absolutely. And the thing is, you can staff for a dev environment. You can have three full-time heads for a dev environment and that's great, right? But you can't do that for prod. If you want four nines availability, you're talking eight-plus heads. And that's a lot of OpenStack engineers. That actually brings us to a great point: staffing. So if you're doing DIY for an entire environment, what are we looking at staffing-wise? You think eight? I think eight is the bare minimum for prod. I actually did a talk yesterday called Broken Stack, where we interviewed a whole bunch of customers who had failed OpenStack deployments, and the consensus was basically that eight is the bare minimum starting point for a production 24/7 environment. Sure. Sure. So let's say I'm doing managed with one of our companies. What do you think the minimum staffing is? It's obviously not zero on the customer side. Right, right. So I'm actually a firm believer in reusing resources. What I mean by that is, I talk to a lot of different customers and give workshops on the path to the cloud: how do you go from virtualization to the cloud? 
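As a rough sanity check on the "four nines" figure that keeps coming up, here's the arithmetic behind the availability target; the numbers are just illustrative:

```python
# Rough arithmetic for availability targets ("nines").
# Four nines (99.99%) leaves roughly 52.6 minutes of downtime per year,
# which is why the panelists treat it as a serious staffing commitment.

MINUTES_PER_YEAR = 365 * 24 * 60  # ignoring leap years for a rough figure

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, avail in [("three nines", 0.999), ("four nines", 0.9999)]:
    print(f"{label}: {allowed_downtime_minutes(avail):.1f} min/year")
```

Going from three nines to four cuts the yearly downtime budget from about 526 minutes to about 53, which is hard to meet without around-the-clock coverage.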
And one of the slides I always present is that you have technicians and engineers and sysadmins who work at your company who can very easily become OpenStack operators, right? If you have a solid Linux background, or even if you're a DBA, believe it or not, you have skills that can apply to dealing with OpenStack. So I would say you definitely want to have around three or four employees on it even if you're dealing with a managed service provider, because you want to have staff in-house who can — I don't want to say challenge your managed service, because I don't want you to challenge us — but who have the expertise and the knowledge to know what you want, how to ask for it, how to set things up, and how to make sure things are working the way they expect. A big part of the role of the OpenStack operators is end-user enablement, right? Yes. Your managed service provider, if you have, say, 1,500 developers working for you, is not going to be able to support all of them. So you do need some people in-house who are skilled at consuming OpenStack, to help enable those people to use it properly, right? Absolutely. It's a different level of skill, right? I mean, if you're going to totally DIY your OpenStack, you're troubleshooting RabbitMQ queues, you're doing pretty intense stuff, whereas on the managed side that transition is much easier: hey, you have to understand how OpenStack works, how to create projects, do things like that, and understand how to communicate when something's broken — like, hey, we think there's a Nova scheduler issue. Right. That's a great distinction, right? 
Because on the managed service side, you need three or four people who are pretty skilled with OpenStack, but not necessarily people who can operate and maintain four nines of uptime on an OpenStack cluster, do upgrades and all that. Whereas on the DIY side, you're doing everything from soup to nuts, and a distro is somewhere in between. I was going to say, what do you think about distros? It's kind of the mix, right? I think a lot of it depends, in my mind. When I talk to customers, my question to them is always: what's your strategy with OpenStack? Is OpenStack just a platform that you want to consume? Or is having OpenStack expertise in-house a competitive advantage for you, a way that you want to differentiate your company? If that's the case and you want to build a practice internally, a managed service might be something you use to get out of the gate faster, right? Get it up and running. But eventually you're going to want to transition to probably a distro-based model, or possibly DIY, depending on whether you want to do OpenStack development and things like that. That's generally the message that I give. That makes sense. Speaking of building a practice: how do you suggest customers go about it? They say, hey, look, we have a bunch of VMware admins. We want to do this cloud thing. We want to do it with OpenStack. We want to go open. Where do we start? What I've heard consistently is that, in general, you cannot cross-train VMware admins to be OpenStack engineers. If you want to train someone on OpenStack, they need to already be a full-stack engineer. They need to be a solid sysadmin. They need to understand development, at least be able to read and troubleshoot Python, right? 
They need to understand security, monitoring, networking, right? And at that point, someone with that skill set can become an OpenStack engineer pretty easily. It'll be a six-month ramp period, right? But the thing is, there usually aren't that many people in an organization with that skill set. So you need at least a couple of really solid people like that. And I'd say bring in an expert from the outside if you can to bootstrap your practice — maybe two or three heads, if you can swing two or three OpenStack engineers, which is challenging. Yeah. Well, I think the thought process is, VMware is virtualization, so this must be close to it — but the companies being more successful are the ones saying, hey, we're Linux admins, we're comfortable there. That's a better building block, a better starting point, than "I have a Windows background, I do VMware." Totally. "I do Linux," or even DBAs — DBAs aren't Linux experts, but running databases on Linux, they have enough of the Linux skill set to go that route. Yeah, I agree with that. And working at Cisco, you know our customer base: we have cutting-edge people, but I'd say 80% of them are on the late end of the adoption curve, right? Very conservative customers. And we see a lot of them struggle with the conceptual shift from mode-one virtualization with hardware-mediated HA to mode-two agile cloud with application-mediated HA. Yeah. And I think the biggest challenge there is even more about organizational policy than "we don't understand the technology." Yeah, it's: we're trying to figure out how we apply security groups, and it's like, yeah, you need to go through this manual approval process to provision a VM, fill out this piece of paper. Exactly. Yeah. I mean, you know, it's funny. 
I also agree that there are a lot of organizational changes that need to happen in order to really successfully adopt OpenStack. One of the first things that I realized is that the cloud has so many features — agility, self-service provisioning, more hands-on control over your environment — but all those capabilities may not necessarily be the most important capabilities to your organization. So the message I want to give is: pick the capabilities and features that are most important to your organization and to your business units, focus on those, involve the business units in making those decisions, and look at your OpenStack cloud as a service — not as individual services, but as a service you're going to offer to your business units. So it's a different consumption model. They'll have to learn to consume it in a different way versus what they do now, which is, instead of asking for one VM, they ask for ten, because it takes you 30 days to get them another one. You have to teach them that they don't have to ask for extra; it is there for them. But it is not infinite. In other words, it does have an end. People think the cloud does not end. It does end. A private cloud has a max capacity. But have them consume it as a service. So those are just some of the things I've realized. And I think you hit on one thing there: deciding what goes into your cloud. That's a big thing that's often discussed between vendors and customers: hey, there are all these plugins for OpenStack, there are all these capabilities available. It's Python code; we can just edit it. 
What's your thought process if a customer says, hey, we want to run this hardware and we want to run this networking when building their OpenStack cloud? How do you handle it? Yeah, I think that's a good question, right? And this is where we've all kind of been on the same page up until this point, but our opinions are going to diverge, right? There's this spectrum of trade-offs between how opinionated and rigid your distro is, versus flexibility and potential instability, requiring longer times for patching, upstreaming, and everything like that. With MetaCloud, we have an extremely opinionated OpenStack install. We're not flexible at all in terms of which projects we support, and our threshold for adopting a project is extremely high. Like, we are not using Ceilometer today. We have an alternate solution that we've created that collects all the metrics and scales, because we have customers running thousands of nodes, and Ceilometer doesn't work at that scale for us. So what options are available for a customer, then? Well, where I typically do design, you kind of size node capacity around CPU and memory; the biggest flexibility is network and storage, right? You can build low-latency, extremely high-throughput networking — and obviously on the Cisco side, that's one of our focuses. Sure. One thing that we do that's a little different is all of our Neutron runs on hardware. It runs on Cisco ASR hardware as virtual routers — so no software routers: hardware-based failover, hardware-based performance, which is pretty cool. And then on the storage side, you have a million options, right? You have ephemeral, SSDs or spinning disks. You have host aggregates you can build. 
You've got Ceph converged storage. You've got external enterprise storage. You've got Swift object storage. And across all of those there are a million different use cases customers have, right? I'm a storage SME, so I'm probably a little biased, but talking to customers, there's usually a very specific storage configuration that's ideally suited, and most customers, in my experience, have not been good at identifying it. Everyone says, oh, we'll just run everything on NetApp, or we'll run everything on ephemeral, and it's like, no — you've got some more legacy-looking persistent data stores that need to run on something like Ceph or an enterprise storage platform. You want to run as much on ephemeral as possible, because it's the cheapest. Sure. And then archival and everything on Swift, et cetera. No, that makes sense. So that is definitely a place where, from a mindset perspective, we kind of disagree. From the IBM Bluemix private cloud perspective, we are very prescriptive — not just on the OpenStack side, but also on what you can bring: the networking is set, we use Linux bridge to provide our networks, and we use Ceph. You want block storage? It's Ceph. You want object storage? It's Swift. Those are your options. Well, what if we want to use, say, OVS? No. And our thought process around it is, the first question is: well, why do you want to do that? A lot of times, the response comes from the infrastructure architecture side saying, well, we have this vision of how we want to plug our cool Cisco stuff together and we want to use this feature. Instead of: well, do the developers need that? And it's like, oh, we didn't even ask them. OK. So a lot of what we've seen is, when we push back and say, well, why do you want this? 
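The storage guidance above — ephemeral where possible because it's cheapest, Ceph or an enterprise platform for persistent stores, Swift for archival — can be sketched as a toy placement rule. The tier names and logic here are purely illustrative, not a sizing tool from any of the vendors on the panel:

```python
# Toy sketch of the storage-placement guidance from the discussion:
# map a workload's needs to a backend tier. Illustrative only.

def pick_backend(persistent: bool, archival: bool) -> str:
    if archival:
        return "swift"      # object storage for archival data
    if persistent:
        return "ceph"       # or an enterprise block platform, for persistent stores
    return "ephemeral"      # cheapest tier, for workloads that can lose local disk

# Cattle-style web tier: no persistence needed, so it lands on ephemeral.
print(pick_backend(persistent=False, archival=False))
```

The point of the sketch is the ordering of the questions: start from what the workload actually needs, not from which array the team already owns.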
Like, well, on the networking side, for example: we want to be able to limit VMs from talking to each other. We want to micro-segment our VMs, so that VMs, even if they're on the same host, can only talk to each other on certain ports. We say: that's security groups. We want to be able to create networks on the fly and connect them — you can do all that stuff without any of those other things. So that's been our main push: we don't want to add something that affects those cycles you were talking about. So, OK, cool, we have this custom networking config — say it's an IBM-specific custom network thing. Now, every release cycle, we have to deal with integrating it, or getting it upstream, and maintaining all the variations — especially since, for us, we're managing over a hundred clouds. If we start doing them all slightly differently, that upgrade process suffers. See, on the network side, we're more flexible, but we're working on — this is not released yet, but soon — Neutron ACI integration, with ACI being Cisco's network orchestration and policy framework, right? So you can do all of that stuff via ACI, via Neutron. Sure. Well, I think from a Cisco perspective it also makes sense: you're going to do more stuff on the network side, you have the engineering, your customers tend to be a little more networking-biased, and you literally can't throw a rock in Cisco without hitting a CCIE. Yeah, yeah. So how does Rackspace handle that? I think you guys are kind of in between on that point, right, as far as your customer-focused approach. 
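The micro-segmentation ask above — VMs on the same host only talking on certain ports — is exactly what security group semantics provide: ingress is denied unless a rule allows it. Here's a minimal toy model of that default-deny behavior; this models the idea only, and is not the actual Neutron API:

```python
# Minimal toy model of security-group semantics: traffic is allowed
# only if some rule matches its protocol and port; everything else
# is denied by default. Not the real Neutron API, just the concept.

def allowed(rules, protocol: str, port: int) -> bool:
    """Return True if any rule in the group permits this traffic."""
    return any(
        r["protocol"] == protocol and r["port_min"] <= port <= r["port_max"]
        for r in rules
    )

# A hypothetical web-tier group that only permits HTTPS between VMs.
web_sg = [
    {"protocol": "tcp", "port_min": 443, "port_max": 443},
]

print(allowed(web_sg, "tcp", 443))  # HTTPS gets through
print(allowed(web_sg, "tcp", 22))   # SSH between these VMs is blocked
```

That default-deny-plus-allow-list model is why the panelists keep saying the micro-segmentation requirement needs no custom networking add-on.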
Well, yeah, and again, this is where a lot of our time and hours go in: while we have a reference architecture that we stick to, we do allow the customer to be a little more flexible. At least, I'll say we try to accommodate what the customer asks for, in the sense that we have certain core services that are part of our core product, right? That's our reference architecture. But if, for example, you came to me and said you have a specific shared storage device and you wanted to integrate that into Cinder, we would work with you to make that work. Now, of course, we would give the caveat that there's no guarantee just because the driver exists. And anyone who deals with OpenStack already knows this fact, but I'm going to say it out loud anyway: just because the driver exists does not mean that it works, right? So we remind the customer of that. We can give it a go and we can see how it pans out. We also will be clear with the customer that your SLA may be adjusted because of this additional feature, right? But we will work with the customer, through our enablement services, to include a service that's not part of our core product — such as Ceilometer, such as Ironic, because Ironic is not currently part of our core product — again, with the caveat and understanding that it can affect your SLA and that our support for you may be limited. Because for us, it's all about being able to provide you support. It's not actually about the projects themselves; it's about what level of support we can provide when you call us and you have a problem with that service. 
If we don't feel comfortable giving you a superior level — a fanatical level — of support... I was waiting for that. Yeah, I had to throw it in. I'm paid to say "fanatical" at least twice a day. If we feel that we can't provide that to you, then we won't offer that service as part of the core product; we'll offer it through our enablement services, and it'll have a different SLA and different support around it. So that's how we approach it. And that's an even higher level of flexibility. Yeah, and trust me, it's pain and pleasure — pleasure for everyone that's paying us. But again, that's the approach we've taken. Just so you guys know, you can feel free to get up and ask questions at any point. Yeah, don't let us ramble on. We said we were supposed to be interactive, but we've been talking for the past 20 minutes. Sorry about that. This is what happens; this is literally us talking in the hallway. Side note. That brings up a good point: if you're saying, hey, look, we have these services, we can set them up — how do you help a customer decide? Because obviously there are additional costs, right, if there are services involved. So how do you help a customer decide if it's worthwhile? Is the juice worth the squeeze? Like, hey, we did all this extra stuff to get this project supported — cool, we just wanted to kick the tires on it. Generally it comes down to a good use case. So, our path to that — and again, I used to be a cloud architect for Rackspace; now I've moved on to doing some technical marketing stuff, but once an SA, always an SA — the way we handle that is we hold a two-day workshop. We come on-site to the customer and we literally pull out all the dirty laundry, from the top of the architecture on down, dealing with security, the network admins, everybody. And they all get a chance to put this stuff out on the table. 
And then from there, we evaluate what we think is the best route to go, and we provide them back a cloud design that they physically have in their hands at that point, and they can go off and either choose to go with one of you guys or choose to go with us, right? That cloud design does not mean you're tied to us. It just means this is the path we suggest you go on. Sure. Now, when that gets to the point of, well, this one extra thing the network team wants is gonna cost us X extra — how do you help the customers there? Or is it just, hey, look, this is what it costs, do you want to do it or not? Well, I'll be honest with you: we do talk a lot of people off the ledge with some of the wacky stuff they wanted to do. And it's for the same reason you said: it didn't come from an ask from the business, right? It came from the network admin who thought it would be pretty cool to do this thing. Yeah, why do you want to do this? Yeah, absolutely. And even though that thing was published online — he read a blog post where someone did it with OpenStack — that doesn't mean it's gonna work at his scale or his capacity, right? So a lot of times we do end up going the same route you do, which is talking them off the ledge. By the same token, if we feel it's gonna be $110,000 worth of professional services to get that service, and they're paying 15K a month, right? There's an offset there that doesn't really make sense, right? And I will be open and honest: we do turn down opportunities like that, because we feel it's not beneficial for us to get involved. It's like, what's the business value, right? You have to ask the customer: how do you quantify the business value of the service? 
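The $110,000-versus-15K-a-month example above is easy to put in back-of-envelope terms (the figures are the panelist's hypothetical, not real pricing):

```python
# Back-of-envelope for the example in the discussion: a one-time
# professional services fee compared against the monthly spend it
# would sit on top of. Figures are the speaker's hypothetical.

services_fee = 110_000   # one-time professional services, USD
monthly_spend = 15_000   # managed service spend per month, USD

months_of_fees = services_fee / monthly_spend
print(f"one-time fee equals about {months_of_fees:.1f} months of service fees")
```

Seven-plus months of the entire service bill for one feature is the kind of ratio that makes the "what's the business value?" question unavoidable.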
Like, if the CFO looks at this, how do you justify it? "Oh, I read a blog post that looked really cool." Probably not gonna fly. "I wanna put it on my resume." Yeah, that's really what it boils down to a lot of the time. Yeah, like you said, for us, a lot of the time we spend with customers is really talking them out of something. We're like, that's cool, that sounds good, but let's talk through the process. And like you said, it goes beyond the initial "can we get it working?" Then: how do we keep it working? How do we upgrade it with the next release? What's the life cycle of that individual project or tool? Where's the direction of that going? Does it make sense? Because it has these ongoing effects. Let's just say it's something that's gonna slow down your upgrade cycle. If it slows it down enough that you're falling out of the easy upgrade window, now it has all these pile-on costs and effects. So that initial $50,000 thing may be fine on its own, but it may have all these drag-on effects. I know there are still people out there running on Essex and Folsom, so. Oh, yeah. Yes, there are. Which is a really bad idea. OpenStack is so much better now. Oh, yeah. So if you haven't tried anything newer than, say, Icehouse, you need to just stop and try the new stuff — try Newton, try Mitaka — you will be amazed at how much better it is. Yeah, the customers we get who are totally new to OpenStack are like, what is everyone complaining about? This is stable; it actually works. Absolutely. But the upgrades — that's the big deal. So speaking of that, with all your different customer clouds, how do you guys handle upgrades on an ongoing basis? Is it pretty regular? 
Ours is pretty straightforward. We're agile, so every four to six weeks we'll do a point release that's just bug fixes, minor patches, minor UI enhancements, and stuff like that, and then every six months we do a major release. We only replatform once a year. So we'll do a major release every six months, which sometimes pulls in projects from a more recent OpenStack release — Heat was the most recent one — but we only actually upgrade the whole base version of everything in OpenStack once a year. We've found that's a good compromise between speed of upgrade and the fact that it can be a bit of a pain to do those upgrades. So do you run into any challenges with, I guess, API compatibility, if you're pulling in projects from different versions? There's generally no issue. We wouldn't do it if there was going to be a major issue, right? I'd say 90% of the projects we run are all the same version, but for example, back when we were running Icehouse, Icehouse Heat was kind of meh, right? Juno Heat performed really well; it was a heck of a lot better. So we pulled in Juno Heat, and there were no compatibility issues. And we're very conservative on that stuff. Our main goal, as we tell our customers, is that we're not trying to be cutting edge; we're not trying to give you the latest release, the latest versions of packages. What we're trying to give you is a rock-stable platform that's ready for enterprise production workloads. 
Yeah, I mean, we're similar, but a bit more aggressive from the standpoint that we'll do more releases, generally about quarterly, and we will pull in new OpenStack versions. But it goes back to that: if you're really strict about what your clouds look like, it's a lot easier to get there. So I think we did our Mitaka release in June, and all of our clouds are already on Mitaka, and I think it was four to six weeks or something to get them all there. That's good. And we just did another quarterly release, it's still Mitaka, it's not Newton yet, but then we'll do all those clouds. So for us, that's the payoff of, well, we know exactly what all these clouds look like and they're all the same, so this goes a little quicker. How do you guys handle upgrades? Right, so, this is where we're probably not as aggressive, in the sense that we probably give our customers too many options as far as whether they choose to upgrade. Every time a major OpenStack release comes out, we will release a .0 release of that new OpenStack. So for example, when Newton comes out, our 14.0 will be out in the next month or so. We won't force a customer to upgrade to that. There will be an upgrade path available for them at 14.1, so 14.1 will be all about making a path to upgrade, but we give the customer the option to upgrade or not to upgrade. And this is where we run into some situations, because of the end-of-life support for certain releases. I will openly and honestly say we need to do a little bit better with that, but we have the capability of doing the upgrade, we just aren't as forceful as we probably should be. But it is what it is, so you have the options, and the ability to upgrade is there at your own leisure, basically.
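The release model just described, where a new product major ships at each OpenStack release but the supported upgrade path lands with the first point release, can be sketched in a few lines. This is purely illustrative: the function name and version numbers are made up for the example, not any vendor's actual tooling.

```python
# Illustrative sketch of the release model described above: a new product
# major (e.g. 14.0 for Newton) ships first, and a supported upgrade path
# arrives with the first point release (14.1). Versions are examples only.
def upgrade_supported(current_major, target_major, target_point):
    """An upgrade to a new major is offered once its .1 point release exists."""
    if target_major <= current_major:
        return False            # not an upgrade at all
    return target_point >= 1    # .0 is available, but the path lands at .1

# Customers on 13.x can't move to 14.0 on day one, but can once 14.1 ships:
print(upgrade_supported(13, 14, 0))
print(upgrade_supported(13, 14, 1))
```

Whether the customer then takes that path, per the discussion above, is left to them.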
Sure, how do you think having that flexibility affects the customer? We were talking earlier about skill sets, like, hey, if you're doing managed, generally you don't need super crazy OpenStack people. Do you think that has any effect for you if you do stuff that's a little more, you know, farm-to-table OpenStack? Does that require the customer to be a little bit more advanced, do you think? Well, yes and no. A lot of times, customers that are adopting are usually coming from virtualization, right? So their enterprise is used to virtualization. Virtualization didn't change that fast; you don't do a major upgrade in virtualization every year, you don't do it every six months, you do it every two years, every three years, right? So they're used to that model. So it's one of those situations where they're like, well, I don't wanna upgrade every six months, I don't wanna upgrade once a year. I know. And we do have customers still running older versions of OpenStack. We fill in the gaps, and what changes is the level of support we're able to provide you, right? Because there's no more upstream support for it, so we can only do but so much if you find a severe problem in that older release. But, you know, I kind of lost my point there, I'll be honest with you. We've found a way to bridge the gap. Sure. It comes from trying hard to accommodate your customers, you know? Yeah. He's a former Racker, yeah. Yeah. It's part of the Rackspace culture. Yeah, it's part of our culture to give you what you want, which, like I said, can be a good and a bad thing. Sure. Yeah.
So, we've talked a lot about required skill sets and what a good baseline is. These skill sets are pretty hard to find, right? Would you agree, in general? Yeah. If you're like, hey, I want an OpenStack engineer. No. Hard to find usually equals expensive, right? Absolutely. So, hard to find equals expensive, so potentially as a customer, you're looking for someone you may be paying at a higher rate than the rest of your engineers. That's, yeah, absolutely. So how do you get, and how do you keep, that kind of talent? That was one of the points in the broken-stack talk I gave, because it's a major issue, especially for traditional IT companies, where they're paying their regular admins half the going rate of an OpenStack admin, and they bring in OpenStack talent and then alienate their existing talent, and it's incredibly challenging. For us, a big part of our value prop is, our message is basically OpenStack private cloud, consumed like you would a public cloud, right? So you need experts in consuming it; you don't need to know anything about running it, right? And obviously it's to the customer's advantage to have some familiarity with OpenStack from an operational perspective, but we do everything soup to nuts, from the hardware all the way up through the APIs, right? So they don't actually need to. And when it comes down to, like, how do you do that? You need to budget for it, and if you're a really traditional company, how do you retain someone on the cutting edge? How do you make that job interesting? You know, I don't know the answer to that.
And if you look, there've been a couple of mass exoduses from major companies, and if you watched the video of the talk I gave yesterday, we didn't mention any names, but there may have been some logos put up there, where like 30 senior OpenStack engineers from one company, in like a six-month period, moved to another company, all with job title increases and raises. And I don't know that anyone has figured out how to solve that problem, right? Yeah, I totally agree. I mean, we're in a similar mindset where, hey, you don't need those people to that degree. You should know something about OpenStack, and we do some training as part of our onboarding to help people get the basics. But I think it really comes down to, to me, it takes me back maybe 10 or 20 years, when you didn't just go out and hire people, your people learned. Yeah, right. So it was, hey, here's this new tech. You have some spare time and some spare kit. You can go learn about this tech. Now we have someone that knows about it, and we're getting a home-team discount, because we let that person or those people go learn that stuff all the time, so they're happy with what they're doing. I think OpenStack is no different in that aspect. Yeah, I mean, that's perfect. Rackspace is a perfect example of that, right? They've homegrown hundreds and hundreds of OpenStack engineers. Well, yeah, and that's primarily because our model is a little bit different, in that we want you to see us as a staff augment to your organization, right? Not replacing your organization, but adding to it. And that's because we have a few models, right? We have a model where we'll host it and manage it in our data center. We have a model where we will support it in your data center. And we also have another model where we will support it in a third-party data center.
That's not even ours or yours, but somebody else's, right? So with that being said, we're a little bit more flexible, in the sense that we're here to add value to whatever staff you have. It may be good to have some staff that does know OpenStack and will at least be able to talk to us on that same level. And it's hard to retain OpenStack engineers, I'm gonna be honest with you, because they go to the highest bidder sometimes. The only thing I will say, and I'll attest to this from Rackspace, is that it's about the culture, right? If you create a culture that people enjoy, they'll hang around, until somebody in the audience offers me more and I'm yours. Right? No, I'm at Cisco. Yeah, I know. We were peers, coworkers on the same team. That's true. He decided to leave me, which is okay, I'm still hanging out. Well, you did leave the SA side and now you're a TME, so. Well, yeah, because, you know, anyway. But to Tyler's point, I think that's a big risk you have to take into account if you're looking to DIY or do a distro-based solution. In order to be successful, you need real commitment from senior leadership in terms of budget and staffing. Because if you can't create an interesting environment full of smart, extremely talented people, you're gonna have a huge retention problem, and the acquisition costs, talking to HR people, I've heard numbers anywhere from $40,000 to $100,000 in acquisition costs for a single OpenStack engineer. So building a team, I mean, that's a lot of money just to hire them, and if you're churning through them at a rapid rate, like this. The pool's not that big to begin with. You can't just keep reaching in for another one. And then there's also geography. Well, if you churn through five or six OpenStack engineers from different companies and they were like, man, this place sucks, right?
Like, you're not gonna be able to hire anyone else, because they're gonna talk to their friends, they're gonna come to Barcelona, you know? And they're gonna be like, don't work for these guys. Yeah, I think that's a key piece. Oh, I think we have a question back there. So back on topic, oh. No, no, no. You can't answer the question. We're not allowed to answer the question. Okay, so the question, for the recording, is: hey, cool, I like all your stuff, I like your services, but I just bought a bunch of gear and I have it here, I have servers, because we tried to do our own OpenStack, so can I use your stuff? So in our case, you wanna start, Walter? Well, here you go. Walter's like, yes, give me that one. We're saving that. I will modestly say that we would love to take that challenge. We will be able to consume your hardware, manage it, give you an SLA on top of it, and life is great, right? So you don't have to go out and throw away all that gear you bought. Are you making faces at me? It's true, it's true. No, and again, that's one of the consumption models: we can manage and operate an OpenStack cloud that is running on your hardware in your data center. He's had too many sangrias. No more sangrias, sir. That's in the recording too. Walter's gonna be like, Walter's a jerk. No, we can manage and operate an OpenStack cloud running on your hardware in your data center. How about you guys? So, we have the ability to do that as well, but it would be SOW-based work, which no one really wants to do. The way that we prefer to do it, we have a prescriptive control plane, which consists of three controller nodes and a bunch of network gear, and that's because we do the hardware-accelerated Neutron, right?
So you need to have the ASR routers, and we're pretty prescriptive on the networking side, because we have something that we think is really special in the industry and works really well. Okay, that's all the plugging for Cisco. That's it. We're cutting it off there. So, we have this control plane kit that we would prefer our customers to run on. The other advantage is, when we do upgrades and everything, we know it's gonna work, because it's on exactly the same gear, right? But when it comes to the hypervisors, you can do whatever you want. You can run it on a pizza box or a bunch of blades you've got sitting around or what have you. As long as it meets the minimum hardware specs and isn't like an ancient POS, it's fine. Yeah, so for us, we don't do that, right? We won't consume existing hardware, and it goes back to that whole SLA thing of, hey, if you're banking on us meeting these SLA requirements, saying we're gonna keep this whole control plane and everything up and running, we're gonna keep you on these versions, we gotta know what that hardware is, we gotta know it's supported. And we even, so, we offer a hosted version that's in our data centers, or we can put it in your data center, and we even match the server design. From the standpoint of CPUs, RAM, and disk in the servers, they're the same; they're not the exact same servers, but they're the same specs in customer data centers. Again, a very predictable environment means it's very predictable for us to keep up and running, but the side effect of that for customers, or anyone, even our ISV partners that build stuff on top, is they know exactly, say, oh, it's 20 enterprise nodes or something like that, we know exactly what that looks like. So we've got the full spectrum here: we've got, like, we'll run on anything; we've got prescriptive control plane and flexible hypervisors; and we've got prescriptive everything. Yeah, yeah. Well, do you let them run it on Intel NUCs, right?
Well, you know, hey, you line up a whole bunch of desktops that you've had laying around and wire them together. No, I mean, we have, obviously, we have- As long as the check clears? Yeah, as long as the check clears, no. Obviously, we have a reference architecture that we have to follow, but it's only tied to, as long as that hardware can run a certain OS, which most hardware can, and is a supported platform, that's pretty much it. And then, of course, we apply our reference architecture to whatever your hardware is. We slice it up, or whatever storage you're using, we figure out how to consume it. That actually brings up a good question, you mentioned operating systems. Obviously, you guys are managing and curating the OpenStack piece, but operating-system-wise underneath, do the customers have options there? Or are you saying, we're managing it, so it's none of your business? Yeah, Red Hat for us. Yeah, no, it's not a, well, I like how you phrased it, we're managing it, it's none of your business. But no, because we use OpenStack-Ansible to deploy our clouds for our customers, right, the underlying OS right now has to be Ubuntu, because Ubuntu is the only OS that really works for it right now. They are working on one for CentOS, but it's not ready yet. So Ubuntu 14.04 or better is the base operating system that we rely on. Yeah, we're on Ubuntu right now too, and same thing, we have a tool called Ursula. It's very similar to OpenStack-Ansible, uses Ansible, and is focused on deploying. For what it's worth, we literally just, in our last release, re-platformed from Ubuntu to Red Hat, so until our last release, we were also on Ubuntu. Very cool, very cool. We also have a Red Hat, well, anyway, I won't play really bad competitor on stage. No, that's what they're here for, go for it.
We have a Red Hat offering as well, but I mean, we partner with Red Hat, just like Cisco, we're both partners with Red Hat, and we will support their offering, even though I'm true-to-trunk OpenStack in my personal life, but we do have that as well. Well, what are you doing in your own personal life? I know, I know, but, you know, anyway. All right, well, we're just about out of time. Any last questions? Yes? Hold on, hold on, we're told you gotta have a mic, because it's important. Otherwise we'll have to repeat it, and we won't get the whole question in there. Yeah, and we want the recording to hear your fantastic question. I don't know how fantastic or annoying it is, but when you mentioned updates: OpenStack, and your customer skips several releases, and then suddenly he decides, oh, I need the newest version, and he's behind two releases. How challenging do you find that? Well, I'll start this way. As long as they're on our Icehouse release, which is version nine, and most customers are there because we pushed. There was a period of time when we actually stopped doing releases of our product, because we wanted to make it better and build a new reference architecture for it. So from version four to version nine, we stopped, right? If you were on version four, we encouraged everybody to go to version nine, which was Icehouse, and at that point you were consuming our new reference architecture. So if you stopped, and say you didn't move from Icehouse, and now you want Newton, we can actually make that happen, because it's still following our new reference architecture. It's a little bit more painful, it may require a longer maintenance period, but we can actually do that for a customer, because it is on our new reference architecture. If it predated that, then it would be a nightmare, I'm gonna be honest with you.
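The reason skipping releases is painful is that OpenStack upgrades are generally supported only between consecutive releases, so a customer two releases behind walks the chain one hop at a time. Here's a minimal sketch of that idea; the function is illustrative, not a real upgrade tool:

```python
# Illustrative sketch: OpenStack upgrades are generally supported only
# between consecutive releases, so catching up means walking the whole chain.
RELEASES = ["essex", "folsom", "grizzly", "havana", "icehouse",
            "juno", "kilo", "liberty", "mitaka", "newton"]

def upgrade_path(current, target, releases=RELEASES):
    """Return the sequence of hops needed to go from current to target."""
    i, j = releases.index(current), releases.index(target)
    if j <= i:
        raise ValueError("target must be a newer release than current")
    return releases[i + 1 : j + 1]

# Two releases behind means two upgrades, not one big jump,
# and each hop typically needs its own maintenance window:
print(upgrade_path("liberty", "newton"))
```

An Icehouse cloud wanting Newton would walk five hops, which is why the panelists push customers not to fall too far behind.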
Yeah, I mean, for us, we have the, like, well, technically you can delay updates. Thank you. But then the next piece is talking them out of it, explaining why that's a terrible idea. You're like, well, again, it may be timely, costly, whatever, if you're not doing the upgrades. Think about it: whether it's AWS, Google, or Rackspace's public cloud, they're doing updates all the time, and you don't know it. It's that mindset of traditional software, like, oh, I had ESX 3.5 and I'm upgrading to ESX 4, and that's a big upgrade, and OpenStack used to be more like that model, right? We have software versioning, but when you're providing it as a service, it's like, hey, well, think about it as a service. We're just chugging along. Now obviously, if it's disruptive, we wanna schedule it, make sure we can get through those, but if they're not disruptive, we wanna just do them, unless we have a really good reason not to, and generally it's just a comfort-level thing rather than a technical requirement. Yeah, we try really hard not to let customers get too far behind. Back a year-plus ago, when we switched from Nova Network to hardware-accelerated Neutron, that was a big migration. So that was one where, I think, we still have one or two customers on Nova Network, because it means new gear, you know. But once again, we're kind of in the Rackspace situation now, where we have a new reference architecture that we've really firmly established, works really well, we've got customers running 1,000-plus nodes. So they're good from here on out, right? But yeah, there's probably a couple of stragglers still out there. All right, I think we are out of time. Yes, thanks everyone, thanks for coming. Thank you guys. Any more shameless plugs? Any more shameless plugs, Cisco? No more shameless, okay, could I get one shameless plug?
This soccer ball, there are only 22 people at the summit who have one of these soccer balls. You wanna know why? They took the OpenStack Fix Your Stack challenge at the Rackspace booth. If you are interested in one of these soccer balls, come take the challenge tomorrow at our booth. It's not hard, I promise you. I broke the clouds, so I can tell you how to fix them, if you're really nice to me. You doing any more book signings, Walter? No, my book signings are over, I'm sorry. So if you didn't get a copy of my book, I apologize. Don't talk about my book, I hate talking about it. Stop by the booth, come on, you know you want one. This soccer ball, only 25 people have these at the summit, this is a commodity.