All right. Welcome to Cloud Wars. Thanks for coming to the last session of the day. I hope it's exciting. We're going to have a fun debate, and I hope you guys all join in. Just a couple ground rules. No missiles allowed, but you can throw chairs up here if you agree or disagree strongly with something we say. Hopefully it's exciting. I'm Grant Kirkwood. I'm the Chief Technology Officer with Unitas Global. And I'm Arturo Suarez. I'm a cloud product manager with Canonical. You'll see an army of orange shirts just around the corner. That's Canonical, the company behind Ubuntu. And the army of AstroChimp shirts, they're from Unitas. All right, so I want to start by asking people in the audience: who is using public cloud out there? A few hands. Who is using private cloud out there? A few hands, and you see some of them are the same hands. So according to 451 Research, and analysts are always right, 70% of enterprises are using a combination of multiple clouds to meet their enterprise requirements. And I will expand that to bare metal and containers, and containers in the public cloud, and any combination of substrate. So the world out there, maybe not in here, not that much in here. We always do our OpenStack things. But the world out there, outside this convention center and the OpenStack world, is very, very hybrid. So it is all about the economics of private versus public infrastructure. It's all about which workload goes where. So let's start with the public cloud. So I'm going to introduce these two concepts, reservation versus usage, with something that you might relate to, that might resonate. When you are buying or getting infrastructure, at the end of the day, you're going to have a combination of these four types of resources, right? You need CPU, you need memory, you need storage, you need network. This is pretty much something that we all understand. There is a trick when we do that in the public cloud, right?
So you have an application that's going to consume some amount of those four types of resources. When you're buying virtual machines in the public cloud, those four resources come in packages. We call them flavors, and they are predefined. So you have one virtual CPU, one gig of RAM, 20 gigabytes of disk, and whatever number of IOPS, right? And you need to make your application fit into those flavors. So you need to get the least restrictive one. If your application is consuming that many IOPS, you'll get the flavor that covers that. If it's consuming that much memory, you'll get the flavor, the highest flavor, that fits that memory. So what we end up having is spare resources in each of those four categories, right? And what happens to those? They go back to the pool, of course. And then they are consumed by yourself or someone else. In the public cloud model, someone else is also paying for those same resources you were paying for before. In the private cloud model, it's always you consuming those very same resources, right? So I get to talk to customers a lot of the time, and I like to do that. And clients often tell me, well, we're going to put everything out in the public cloud because, of course, it's going to be cheaper. You just pay for what you use. And so I have to remind them, you're not actually paying for what you use. You're paying for the reservation, right? How many times do we reserve something, and then if we don't show up, we still have to pay for it, right? There's a penalty. And so there's a really important distinction between paying for consumption and usage, and paying for reservation. Has anybody ever found the exact right size infrastructure in a public model for every application? Of course not, right? There's always unused resources of some type or another. And when it goes back into the pool, somebody else, or you, is paying for it again.
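The flavor-fitting problem described here can be sketched in a few lines of Python. The flavor names, sizes, and prices below are made up for illustration, not any real provider's SKUs:

```python
# Sketch of the flavor-fitting problem: you must buy the cheapest predefined
# flavor that covers your worst resource dimension, and the rest is waste.
# Flavor names and prices are illustrative, not real provider pricing.

FLAVORS = [
    # (name, vcpus, ram_gb, disk_gb, usd_per_month)
    ("small",  1,  1,  20,  15),
    ("medium", 2,  4,  40,  60),
    ("large",  4,  8,  80, 130),
    ("xlarge", 8, 16, 160, 280),
]

def pick_flavor(need_vcpus, need_ram_gb, need_disk_gb):
    """Return the cheapest flavor that covers every resource dimension.

    Assumes at least one flavor is large enough in all dimensions.
    """
    candidates = [f for f in FLAVORS
                  if f[1] >= need_vcpus and f[2] >= need_ram_gb and f[3] >= need_disk_gb]
    return min(candidates, key=lambda f: f[4])

# An app that needs 3 vCPUs but little RAM/disk still forces the "large" flavor.
name, vcpus, ram, disk, price = pick_flavor(3, 2, 30)
spare = {"vcpus": vcpus - 3, "ram_gb": ram - 2, "disk_gb": disk - 30}
print(name, price, spare)  # the spare resources go back to the provider's pool
```

The point of the sketch is the `spare` dictionary: every dimension you over-buy goes back into the pool, where in a public cloud someone else pays for it again, and in a private cloud it stays yours.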
And there's a little secret that kind of goes along with this. If you've ever taken a good look at public cloud resource pricing, you'll find that larger instance types actually cost more per gig or per cycle or per IOP than the equivalent smaller instance type. I have a number of theories as to why this is, but I think it has to do with the fact that cloud providers are pretty smart and have realized that if you're not using the t1.micro instance, if you're using larger and larger instances, you're probably actually using the resources more than the really small instance types. And so if you look at that cost, you know, in the enterprise space, we're used to thinking, hey, if I buy more of something, I'm gonna get a discount. I'm gonna get a volume discount. But it's actually somewhat the opposite when you're buying larger and larger instances in public cloud. It's kind of like going to a restaurant. If Arturo and I go and have dinner, we're just gonna pay the normal price on the menu, but if we bring 20 of our friends, there's gonna be an extra surcharge. So next topic: is anybody in the room here familiar with the law of large numbers? Okay, so the law of large numbers basically says that when you have an expected outcome and you do an experiment lots of times, you'll start with kind of a spread of responses, but the larger the sample size, the more predictable the result becomes. The simplest example of this is rolling a die. You've got six different possible outcomes when you roll one die, right? And if you average that over time, you would expect the average result to be 3.5, right? But there's not a three-and-a-half dot on a die, there's only whole numbers. So you're getting one of those six every time, but if you roll that die 200 times and average them all out, it comes out to three and a half. And if you keep averaging over time, you'll see that the deviation in your results becomes narrower and narrower.
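The die-rolling example is easy to check empirically. A minimal simulation (seeded so it's repeatable):

```python
# Law of large numbers, checked with fair die rolls: the sample average
# drifts toward the expected value of 3.5 as the sample size grows.
import random

random.seed(42)  # fixed seed so the run is repeatable

def average_roll(n):
    """Average of n fair six-sided die rolls."""
    return sum(random.randint(1, 6) for _ in range(n)) / n

for n in (10, 200, 100_000):
    print(f"{n:>7} rolls: average = {average_roll(n):.3f}")
```

With 10 rolls the average bounces around; by 100,000 rolls it sits within a few hundredths of 3.5, which is the same narrowing of deviation the talk describes.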
So how does that translate to workloads? You can see there are basically four types of workloads. There's linear growth; there are the ones that have seasonality, where you can predict somehow when they're gonna grow and when they're gonna go down; there are the most boring ones, the stable ones, applications that are always running at the same capacity, at the same level; and then there are the ones that are really a challenge, the ones where you really don't know how they are gonna be consumed, how many of those there are gonna be. They're completely unpredictable. We separate those out, and those are the ones that are really interesting, right? So let's have a look at a single virtual machine and how it utilizes different resources. We see it's basically completely random, or sort of random. You can't really make any sense out of that, and you can't really predict any underlying infrastructure from that. Let's see what five of these look like. You can't really see a lot here, but there will be different peaks, different values of resource utilization. We're gonna put them in a way that you can understand where we're trying to get. Here you're gonna see how those resource utilizations are added up for all those five virtual machines, and you see that some of the peaks are evened out by some of the valleys, right? When you look at the added curve for that, and you compare it to the initial single virtual machine, a composite of five virtual machines looks a lot more predictable than one single virtual machine. We've done the same thing with 20 virtual machines, and we're gonna put all three of those graphs together. So this is one versus 20, and this is one and five and 20, right? So the larger the number of VMs you have running, the more predictable your workload looks. Who's got just 20 VMs? Most environments have hundreds or thousands, right?
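The smoothing effect in those graphs can be reproduced with a toy simulation. This sketch assumes each VM's utilization is an independent uniform draw, which is cruder than real workloads, but it shows the same shrinking relative variability:

```python
# Sketch of the aggregation effect from the graphs: the relative variability
# (standard deviation / mean) of summed VM utilizations shrinks as the fleet
# grows. Each VM is modeled as an independent uniform 0-100% draw, which is
# an assumption for illustration, not a model of real workloads.
import random
import statistics

random.seed(1)

def fleet_utilization(num_vms, samples=500):
    """Time series of summed utilization across num_vms VMs."""
    return [sum(random.uniform(0, 100) for _ in range(num_vms))
            for _ in range(samples)]

for n in (1, 5, 20):
    series = fleet_utilization(n)
    cv = statistics.stdev(series) / statistics.mean(series)
    print(f"{n:>2} VMs: coefficient of variation = {cv:.2f}")
```

The coefficient of variation drops roughly with the square root of the fleet size, which is why 20 VMs look so much more predictable than one, and hundreds or thousands more predictable still.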
So that law of large numbers, the more you scale, the more impact it has. So let's talk a little bit about urban sprawl. I happen to live in Los Angeles, and one of the funny things about it: if you fly in from the east, you're over the suburbs for like 45 minutes. It's actually kind of amazing. What does that have to do with cloud? So, kind of a funny story. My grandmother used to tell me about a goldfish, and that's not her, just to be clear. But goldfish have this interesting tendency to grow to the size of the tank that they're in. It's kind of amazing. Actually, when I was looking for this picture, I found a lady with like a two-foot goldfish. Cloud's kind of the same way, particularly public cloud. So how does this happen? We're gonna pretend that I'm Bill and Arturo is Peter. So, Arturo. Hey, Bill. I'm sorry to mess up your day, and I know you're busy, but I've got a really important project for you to work on, just a small little thing. I'm sure you're not busy, you can handle it. I've got kind of a new application that I need you to go and deploy for us. It's just a small, very simple... Oh, right, it's just small, right? Okay. Very simple application. Sure, it looks simple, yeah. And I really need you to stop whatever you're doing. All right, so this is high priority, right? Is it higher priority than the one you gave me in the morning, or is it just the same priority? Yeah. Super high. I'm gonna need you to work on Saturday. Yeah, sure. You said you were sorry? Yeah, sure, sure. By the way, we need to get this done next week. There's a deadline here. All right, okay, I'll figure it out. Thank you, Bill. All right. I'm gonna go back to my office now. Sure, enjoy it. All right, so I need to get this stuff done, right? So I have work to do. The first thing, always, is I need infrastructure; to run that small application, whatever I'm gonna do, I need infrastructure. And then I need to build the application, right?
So let's take one of the simplest parts of that simple application, which is the database, right? So the first thing we need to do is build it. And to build it, I have to estimate and model demand. I have to size the VM. I have to build the VM. I have to put an OS on it. I have to actually harden that and have some sort of authentication so I don't get hacked. Then I need to build or attach an existing storage volume to it, and then install the database and configure the database. You get the idea, all right? You sound really busy, Peter. I mean, Arturo. I am extremely happy that you came by my office, Bill. I get my stuff done and I feel like this. Congratulations. But then this happens. Peter, I've got a project for you. Hey, Bill. How are you? So you get the idea. You get how this thing works and how you spin up these kinds of projects. So what happened? What happened to that stuff that you were building in the middle of it, when I came in and interrupted you with the next project? What happened to all that stuff that was kind of out there, that you were building on, and maybe some of it was unused? I sure had enough time to clean all that stuff up. You know, I canceled all my VMs in my public cloud. I'm sure they're all down and not consuming a single dollar. Are you sure? Absolutely positive. I had plenty of time to do that on Saturday. Are you sure enough that if anything's running and still costing us money, it'll come out of your paycheck? Nothing at all. I care a lot about that now. So, you know, I made sure I did my things. So we've got some orphans. So how do we prevent orphans, Arturo? Well, thank you. So there are two approaches, and neither of them is really a good thing, right? You can implement some really strict kind of procedure, so you fill in forms and make sure you clean up all your mess after you do one of those high, high, high priority projects. Get those TPS reports in. Exactly.
Or you can pay some money for someone to do the exact same thing for you, right? So I'm not gonna do it. You're not gonna implement these processes. But someone has to go clean up the mess. The tools out there are either immature or expensive, for both approaches, right? So you end up paying more than what you would be saving at the end of the day. All right, thank you for that. Oh, where'd we go? I lost the screen. Oh, we're in the black hole. I was kidding. It was a joke. I don't know if you could see that. We're in the black hole. So let's talk about the black hole. I like physics and space. Who knows how a black hole works? Okay, so a black hole sucks in mass, right? And mass creates gravity. And gravity sucks in more mass. And so the more mass that gets sucked into that black hole, the stronger the gravity becomes. Why is this relevant to cloud? So let's talk a little bit about features. Different public clouds differentiate themselves by having different features that they make available to consumers, right? And some of these features tend to be pretty convenient. We just showed you one example of standing up a highly available database cluster. Arturo, I mean, Peter, is there ever a time that you're not under a deadline? No. Okay, so let's go back to that example. You did a whole bunch of stuff. Yep. What's your alternative? Well, the alternative is the one-click thing that you can get. Those features that we are talking about, mainly in the public cloud, but not only the public cloud, are the one-click button. It's the easy way into deploying a database that is already highly available, that I can use to build the other components of my easy project. So which are you gonna choose? I guess I'll click the button. Because you're on a deadline, right? Exactly. That boss Bill, he's so hard. All right. And would you say that's a good thing? Sure, would you guys say that's a good thing? So is convenience always good? It depends on the perspective, right?
If you end up with the consequences of that convenience, well, it depends who you ask. As a developer, as Peter, it's obviously good. Maybe Bill, or Bill's boss, or the CFO might have another thought, right? So it's kind of interesting to me that the industry calls this the feature black hole. I think of it more like jail, because you kind of get trapped in that convenience, right? So I'm gonna coin the phrase: the feature trap. It's a sad, sad place to be. So, kind of a little extension of this theory, going back to the black hole for a second. Someone once told me about this concept of the data gravity well, and I think it's interesting. There's a tendency, particularly in the enterprise space, but I think it applies universally: the more data you have in a place, the harder it is to move out of it, and therefore, as time goes on, the more data it tends to attract. Again, very much the same concept as the black hole. So I think it's actually a better analogy, and it rings true, right? The larger the environment that you have in one place, the harder it is to ever change that. So you're trapped by features, and more and more data is getting ingested, and it makes it harder to do anything different. Data equals mass, so therefore it equals gravity. All right, so with all these factors that we've talked about, let's try to build a model, right? So we basically have factors that we can just measure, that are predictable, and that we can put a price tag or a cost tag on, right? We have a second category that we can't really measure precisely, but we can make an estimation; there's data out there on how many orphans we have per used VM, for instance. And then there are some that we just need to keep in the back of our head when we're doing this model, that we're not really able to add to the model and predict, because there are implied human factors.
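A toy version of that three-bucket model might look like the sketch below. All of the numbers are placeholder assumptions, not figures from the real client engagement discussed next:

```python
# Toy version of the three-bucket cost model: measurable costs, estimable
# costs (orphans per used VM), and human factors that stay out of the math.
# The per-VM price and orphan rate are placeholder assumptions.

def monthly_cost(vm_count,
                 price_per_vm=50.0,   # measurable: a published rate
                 orphan_rate=0.15):   # estimable: orphaned VMs per used VM
    measurable = vm_count * price_per_vm
    estimated_waste = vm_count * orphan_rate * price_per_vm
    # Human factors (feature lock-in, data gravity) are deliberately left out:
    # they belong in the back of your head, not in the spreadsheet.
    return measurable + estimated_waste

print(monthly_cost(100))  # 100 VMs -> 5000 measurable + 750 estimated orphan waste
```

Even this crude split is useful: it makes explicit which part of the bill you can negotiate, which part you can only estimate, and which part never shows up as a line item at all.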
So when we started to put this together, I was gonna sit with Arturo and think about some hypothetical customer and generate a model off that. But then a few months ago, we were asked to run an exercise for a real-life customer. So this is real data. Actually, the virtual machine usage that you saw before, those were real machines, and this is a real economic model that we built for a client. So I'll give you a little background on this. It's a global company in the life sciences space. They came to us because they were trying to decide what their cloud strategy was going to be: public, private, a mix of both, all of the above. Today, they have data centers that they own, so most of their infrastructure is on-premise, and it's very traditional legacy infrastructure, and they're trying to decide between public or private, or how to mix the two together. They asked us to consider a number of different scenarios. So they said, well, we could keep everything that we currently have and just scale up, and life goes on and things are easy. We could move to a hosted OpenStack private cloud that Unitas and Canonical provide. They said we could also put that on-premise in our existing locations, since we have this free space and power. And they said, well, we could actually split it, so we could put our HIPAA data in our hosted facility and the non-HIPAA data in our on-premise facility; that way we no longer have to certify our on-premise facilities. Or we could go to one of the major public clouds, and they were specifically looking at AWS and Azure. So then, as we go through the model, there are a couple of assumptions that are important to note here. We started with a baseline environment for one particular workload that is running on a fixed set of legacy infrastructure, and we took that and turned it into a monthly all-in OPEX number. Then we did three, what I call, scale-up scenarios.
So we took the baseline and multiplied the workload requirement, not the infrastructure, but the workload requirement, by two, five and 10 times to see what the trend looks like. We assumed three-year pricing for everything. So in the hosted private cloud model, that means that we amortize hardware over a three-year period. In the public cloud model, it means that we bought everything on three-year reserved instances, prepaid up front, with the assumption that you would finance that. We used standard published rates and then applied enterprise discounts based on what we hear in the market, kind of the typical volume purchase discounts. And we only included infrastructure costs. So we said, we've got to make this as apples to apples as possible. In a public cloud model, you're consuming resources, and kind of everything underneath that is run by the provider. And so we modeled it the same way, assuming that that was being run by Unitas and Canonical. So here's what the results came out to be. And I can kind of walk you through these numbers, or you can take pictures, that's fine too. But I think it's easier to look at this in a chart. So I'll give you a second here and then I'll flip the chart up. So starting with the current environment, that baseline was, I think, about 37,000 a month for the fully managed legacy infrastructure spend. So we took the workload that was running in it and said, okay, let's multiply it with those scale-up numbers. And we got to just shy of 200,000 a month at the 10X number. So there you can see it didn't increase by 10X, it didn't go to 370,000; it was 198,000, I think, was the exact number. So there is some economy of scale that you get even with the legacy architecture in that environment. So then we compared that to the hosted OpenStack private cloud, again, the Unitas and Canonical approach. And you can see there's a pretty wide margin.
So we've been talking with this client, and we said, look, even if we just take the easy path and build your private hosted cloud, you're gonna save a ton of money as your workload scales, which they knew it was going to. We then said, okay, let's put that on-premise, since you have data center space and power and you wanna use that to the extent possible; let's just put it there. You can see it's basically parallel, with a small discount. In a lot of cases, this would be higher, because there is a cost of space and power. If you have to build a facility, obviously it's super expensive. I don't think most people are doing that. But in this case, they said, don't attribute any cost to it, because we have free space and power. So that's why it's just parallel, lower. We then split it, so we did half and half. And you'll see that actually started at the 2x, because at the baseline, there wasn't enough infrastructure to make it HA in both environments. So we had to get to a certain size before we could effectively split it and maintain HA in both the hosted and on-premise environments. And it's a little tough to see in this, but one of the interesting surprises here was that at the 2x, the split was slightly more expensive, and then as we got to the 10x, it was slightly less. I thought that was kind of interesting. So then, drumroll, we looked at AWS. And it turned out, and this was another surprise actually, that it was more expensive even at the entry level, and significantly more than the hosted OpenStack cloud as it scaled up. And then, drumroll number two, Azure: it was even more pronounced. Now, before we move on, I'll say that this is highly workload dependent, but we've done this for lots of customers, and we always end up with some variation of this. There's a very pronounced split at a particular point, an inflection point in there. And so we said, okay, what is that inflection point? So we started to run some models.
We took the fully managed cloud approach and said, okay, let's take it as small as we possibly can and compare that to public cloud. The first thing about public cloud, though, is that you can start at zero, right? You can literally have one virtual machine, and you're paying a really small rate for that, or even free if it's a micro instance, right? That doesn't work with private cloud. There is an initial set of infrastructure that you have to have just to have it, right? And so we modeled that out, did a flat-line assumption, and found that inflection point, and it turns out it's about $17,000 a month, which was actually quite a bit less than we had originally thought. And again, this is just comparing infrastructure as a service, as apples to apples as we could make it. So that means, as that grows, that delta, which increases over time and with the size of the infrastructure, is all the potential savings, which is really quite significant. All right, so just to let these ideas sink in. At scale, the public cloud is, I wouldn't say always, I'm gonna say always, it's always more expensive than the private cloud, right? A legacy-vendor private cloud is less expensive, but it's still expensive. The hosted OpenStack private cloud, that's a private cloud that's operated economically, pretty much the same way the public clouds are operated, right? Keeping that operational cost down will save you money. All right, so you saw the footnote there, though: for predictable workloads. So this was actually another interesting follow-on thing that we started to dig into the data with this client. This is their kind of generalized workload over the year. They had these three processes that ran, where they took a whole bunch of data and ran some kind of batch analytics process on it. And within their fixed infrastructure, they had 10 or 15% of available overhead.
And so they'd kind of ship this out into that space and let it run, and it would run for two months, and then they'd get the analytics report that they were looking for. So that was kind of all the capacity they had. But they said, you know, it'd be really cool if we could spin up a huge amount of resources just for a short period of time and run that, and we'd get those analytics much, much faster, but then spin it down. And they realized, well, we could build a whole bunch of stuff in our data centers, but we'd run out of room. And all the rest of the time it's sitting there kind of empty, right? Not being used. So one of those often-used adages is, you know, own the base and rent the spike. And that's literally what we proposed to them. And so we modeled this out as well, and we found that the predictable, consistent workload was most cost effective in the private cloud, as we've just demonstrated. But those short-term spikes, if we put them into public cloud and let it spin up all the resources it needs for two days, they were getting their analytics done within that really short period of time, which was better for the business and better for decision-making. And then it spins back down, so they're not paying for it when it's not being used. Good. So, I'm a little concerned about time. We're gonna talk about a couple of other factors you can see there, but there are a couple of mics there. If you want to ask any questions, please don't be shy, go ahead and interrupt us as you want. So, some of the factors you can see there: the technology you're using in the private cloud. Obviously you have no say in the public cloud; it is what you get. But in your private cloud, you are free to decide what technology you want to use. So we have some ideas about, well, do you want some sort of legacy integration, because there are some things that you're never gonna move, like the IBM mainframes and all that.
There are some things you're never gonna move to the cloud, of any kind. And you get control of that technology on things like oversubscription, or host aggregates, which is a cool feature that allows you to have pretty much different configurations within the same cloud. You need to be aware of what your workloads are and where your workloads work better within those different types of configurations. Things like containers are coming on strong, and you want to include that. So you build a private infrastructure today, with the workloads you have in mind to start off with, but allowing for a future evolution of that infrastructure; this thing is gonna be in your data center, or your global data centers, for the next 10 years, right? Second part of that, obviously: in a private cloud environment, it's as customizable as you want, right? And you can go down rabbit holes pretty easily with this, and you have to make some decisions about whether you want to build the team to do that, or whether that's something that is better left to a partner. But the point is, most enterprises have complex IT environments already, and being able to customize a private cloud is really key to integrating it with existing systems. Obviously, you're here, so you're thinking about making that cloud an OpenStack cloud, hopefully, and not just for the fun of it. So be aware of the upgrades. This is, for me, a little embarrassing graph from the OpenStack Foundation user survey. There's still Essex running out there, which is a little crazy, right? When you think about that, all the innovation that OpenStack brings, all that flexibility, all that container story we mentioned, needs to be made available to your cloud users, right? So make sure your cloud is upgradable, that it's designed to be upgradable, and hopefully to the latest version of OpenStack. So, security is really important.
There are a lot of opinions on security as it relates to public versus private cloud, and we could spend an hour on that, so we won't. But there is certainly a perception amongst a lot of the companies that we talk to that say, if it's private, I know it, I can wrap my arms around it, I can get as deep into the architecture as I want and ensure security. And then governance is another really important topic, apropos of the image here: being able to set policy around where data exists, how you partition your cloud environments and separate them. It's, I think, easier to do in a private environment. I don't know if you would agree. Yep. So we're gonna go quick, because I know we're short on time. Performance is also really important. This data comes from a company called Crystalize that benchmarks public cloud providers, really, really interesting data here. What they do is they go and spin up large numbers of resources in public clouds, then run benchmark agents on them to see what the performance disparity is. So if you look at the graph there on the left, these are Linux four-core VMs across different regions. The green band represents the spread of measured best and worst performance, considering CPU, memory, disk throughput, network throughput, et cetera, and giving that as an average. I'll show you the breakdown in a little bit. But as you can see, that Linux four-core machine should behave the same way anywhere you buy it, and it clearly doesn't. And that impacts your relative cost. Similarly, this is the breakdown by resource type, so you see file operations and memory. For some reason, memory in Oregon this particular month was really unpredictable. Don't know why. And then if you look at this over time, here's a Linux two-core machine over a six-month period. If you go from July through December, you can see there's actually a really wide band of performance that was detected on that machine.
So if you buy that, you would expect it to behave consistently over time, and this data shows that it actually doesn't. Until I saw this data from Crystalize, I was a skeptic to a certain degree. I said, there's probably some reality and probably some FUD there, but this was actually pretty eye-opening for me. And lastly, the complexity of building your cloud. Not only building your cloud, but operating your cloud, right? So you can choose to build your own car, or you can choose to just drive the car. If you're not in the boring infrastructure-as-a-service business, you might have your engineers, your smart people, working on something that actually is core to your business. If you're doing e-commerce, if you're doing VNFs, if you're doing any of the workloads out there, software development, the infrastructure is something you just need to take for granted, right? So that is the smart way of consuming OpenStack if you're not really an OpenStack provider. So we talked a little bit about the fact that Canonical and Unitas are partners. We have what we refer to as the OpenStack hamburger, but basically this delivers an end-to-end solution with fully managed OpenStack private clouds that you can consume in a model very much like public cloud, where you're not worrying about how it's working underneath. We're delivering it to you as a managed service anywhere in the world. So I know everybody likes to take pictures of the summary takeaway. Go ahead and take a picture. There's a link there. It's a little description about how Unitas and Canonical are working together to solve this problem. And we have four minutes for questions. Good news is this is the last session, so we're hoping. There's no beer out there anymore, so. Yeah. I'm sure there's gotta be some opinions on this, so. Hi, you mentioned orphans and waste as sociological problems. Have you seen any successful strategies, either cultural or internal economic, to combat them?
In a private cloud. In a private cloud. Not really good ones, actually. So the tooling that I've seen kind of falls into two buckets: either immature and not fully baked, or super expensive, right? With the former, you're not really getting the full benefit; with the latter, you know, you're spending more than you're potentially saving. Though I would say this: part of the benefit, and this only works to an extent, but in a private environment, stuff that's orphaned and no longer being consumed, you are getting some of that back, right? The data that's sitting on the disk, it's gone, right? You're not gonna overwrite it. But if CPU utilization drops down to zero because the workload's no longer on that VM, those cycles are back in the pool for you to consume. So there's some benefit to be realized. That's part of the part that's hard to model. So when you think about those lines doing this, right? In a real-world environment, you'd probably see it a little bit steeper, but yeah. I mean, processes are people-driven, right? You're only so good at filling out your TPS reports; you're gonna miss some. And we find that to be pretty common. If there's not another question, a follow-up on that. So this is essentially one of the things we're trying to fight. It's the tragedy of the commons story. A multitenant internal cloud has multiple, you know, business units all using it, and they're not getting charged back for it. Is chargeback essential? Like, I kind of have an instinct that it needs to be, but I was looking for any kind of confirmation. Obviously with AWS, you have the external economic pressure that would incentivize usage reduction and reservation reduction, as you said. Yeah. Have you seen chargeback effectively optimize internal private cloud usage? Yeah, for sure. Yeah, I mean, there are cases where it's not essential, right? If it's just kind of one application, one group, and that's it, but that's pretty rare.
Most of the companies that we talk to are complex enterprises with groups and projects and things like this. So for sure it's essential. There are things like CloudKitty and Talligent that have really robust software to provide that chargeback capability. And yeah, I would say it's about understanding what different groups are consuming, creating a kind of price catalog to associate with your fixed private cloud costs, and being able to attribute that to business units. For most enterprises it's essential, I would say. Thank you. Thank you. Hey guys, it's clear that there's a disparity between public cloud consumption at the larger workload level and the private cloud. In your model for the customer, though, on the three-year time horizon, did you account for pre-purchasing actual reserved instances for that baseline level? On AWS, purchasing reserved instances for what that baseline would look like, is that in your scenario? We purchased all of it on three-year terms. So, okay, so the entire model was based on pre-purchase. Correct. Okay, reserved instances. Yeah. So in that case it wasn't a graph over time. What we did was take each of the baseline, 2X, 5X, and 10X scenarios as a moment in time, assuming, like with the 10X, we're gonna start there, right, and say here's the infrastructure, and let's say that's day zero: what does it cost on a go-forward basis in both models? So it was three years across the board. Thanks. Sure, thank you. In terms of parameters for the decision making or the calculation, what were the line items that you went through: the infrastructure cost, the resourcing cost, data center power and cooling, the networking costs, the upstream connectivity cost? What was the list of parameters that you considered, especially from a private cloud perspective? Yeah, sure. So we tried to make it as apples-to-apples as we could with public cloud.
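Since the question was about how chargeback works in practice, here is a minimal sketch of the price-catalog idea: fixed internal rates per resource, multiplied by metered consumption per business unit. All of the rates, resource names, and business units below are invented for illustration; they are not figures from the talk or from any specific tool.

```python
# Hypothetical chargeback sketch: attribute fixed private-cloud cost to
# business units via a price catalog and metered usage. Rates are invented.

PRICE_CATALOG = {          # internal rates, $ per unit per month (assumed)
    "vcpu": 15.0,
    "ram_gb": 5.0,
    "storage_gb": 0.10,
}

usage_by_bu = {            # metered consumption per business unit (invented)
    "ecommerce": {"vcpu": 40, "ram_gb": 160, "storage_gb": 2000},
    "analytics": {"vcpu": 20, "ram_gb": 256, "storage_gb": 8000},
}

def chargeback(usage):
    """Monthly charge for one business unit: sum of rate * quantity."""
    return sum(PRICE_CATALOG[res] * qty for res, qty in usage.items())

for bu, usage in usage_by_bu.items():
    print(f"{bu}: ${chargeback(usage):,.2f}/month")
```

The point is not the arithmetic but the attribution: once each unit sees a bill derived from its own consumption, the tragedy-of-the-commons pressure mentioned in the question starts to work internally, much as the AWS bill does externally.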
So the private cloud included internet bandwidth, it included storage, the full management of hardware maintenance, the space and power in the data center for the physical hardware, and then obviously the cost of the hardware amortized or financed over the three-year term, right? So if you think about launching a virtual machine in a public cloud, that's sitting on a server in a data center, consuming space and power and connectivity, ports on switches, all of those elements. In the private cloud model, we're supplying the same thing. So it's racks of space in a data center, space, power, cooling, network connectivity, internet bandwidth, and then the hardware: switch ports, firewalls, load balancers, et cetera. So it's really all-inclusive of everything that you would get in both models. And the people cost, this is interesting: in a public cloud model, you're consuming that virtual machine and the provider has a whole bunch of people who handle the care and feeding of the cloud. In the private cloud model, we're doing the same thing. So we assumed that the private cloud was fully managed and supported. The user gets the same experience: you get your Horizon or API and go and spin up resources, and you don't have to think about how things work behind the scenes. So that's the important factor there, right? The operational cost: when you're doing that at scale, the operational cost per VM is reduced, right? We are able to do that at scale because obviously we do that for many customers. From a single customer's point of view, if they have an infrastructure that's 50 servers or 100 servers or 200 servers, they still need to dedicate a full operational team, so their VMs become expensive. That's why I was saying that only when you have operational economics at that scale does it make sense to have your own operations team. If you don't, then your VMs are always gonna be more expensive.
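The apples-to-apples structure described above can be sketched as a toy model: amortize the private-cloud hardware over the same three-year term used for the public-cloud reserved instances, then add the recurring colocation, bandwidth, and managed-operations line items. Every dollar figure below is an invented placeholder, not a number from the actual model the speakers built.

```python
# Toy three-year TCO comparison, assuming a fully managed private cloud
# whose hardware is amortized over the same term as 3-year reserved
# instances. All dollar figures are invented placeholders.

TERM_MONTHS = 36  # three-year term, matching the reserved-instance horizon

def private_cloud_monthly(hardware_capex, colo_month, bandwidth_month,
                          mgmt_month):
    """All-in monthly cost: amortized hardware + space/power + network + ops."""
    return hardware_capex / TERM_MONTHS + colo_month + bandwidth_month + mgmt_month

def public_cloud_monthly(reserved_instances, ri_month_each):
    """Monthly cost of pre-purchased three-year reserved instances."""
    return reserved_instances * ri_month_each

priv = private_cloud_monthly(hardware_capex=360_000, colo_month=4_000,
                             bandwidth_month=1_500, mgmt_month=6_000)
pub = public_cloud_monthly(reserved_instances=200, ri_month_each=120)
print(f"private ${priv:,.0f}/mo vs public ${pub:,.0f}/mo")
```

The crossover the talk describes falls out of the shape of this model: the management line item is largely fixed, so as the footprint grows it is spread over more VMs in the private column, while the public column scales linearly per reserved instance.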
But there's this, and it was announced by the OpenStack Foundation as a category in the marketplace this week: the remote managed service, which is what we do with Unitas. We'll get you there, right? That's when you start comparing apples to apples. Any other questions? So the question was how do we manage the day-to-day operations of the OpenStack cloud. Sure, so we do have a team of operators. We just need access to the data center. There's obviously a NOC in the data center. We do keep monitoring 24/7. We back it up with an SLA, and we have OpenStack experts around the globe providing, well, any fixes, any upgrades, any updates, security patches, you name it. Does that answer your question? Yep. There is a specific way in which we encapsulate the operations. We at Canonical encapsulate the operations in our software. The software is model-driven, so it allows us to perform these upgrades and these operations as actions. So for us, it's sort of that easy-button kind of thing, but there's a lot of engineering put into that software. So we do it in that particular way. Well, thank you all for coming. Yep. One more, go ahead. Did you do any cost modeling on the storage side between private cloud and public cloud? Yeah, so storage was included in this. And if you want to send one of us an email, we can send you the model itself, but there was a fixed amount of storage required, and we profiled it: there was some amount of it that was on SSDs, some that was kind of day-to-day ephemeral storage, and then some that was archival that would sit, you know, on big, cheap, slow disk, right? And we built that using Ceph for the storage, for both block and object. So we were really trying to make it as equivalent as possible to the amount of storage that you would consume in the public cloud, including IOPS.
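The IOPS point connects back to the flavor-fitting problem from the start of the talk: you have to pick the cheapest flavor that satisfies your most demanding resource, and everything else in the bundle is reserved but unused. A hypothetical sketch (the flavor names, sizes, and prices are all made up, not any real provider's catalog):

```python
# Sketch of fitting an application to the least restrictive flavor:
# pick the cheapest flavor meeting every requirement, then measure
# what is reserved but unused. Flavors and prices are invented.

FLAVORS = [  # (name, vcpu, ram_gb, iops, $/month), sorted by price
    ("small",  2,  4,  3000, 50),
    ("medium", 4,  8,  6000, 100),
    ("large",  8, 16, 12000, 200),
]

def fit_flavor(need_vcpu, need_ram, need_iops):
    """Cheapest flavor meeting all three requirements, plus the waste."""
    for name, vcpu, ram, iops, price in FLAVORS:
        if vcpu >= need_vcpu and ram >= need_ram and iops >= need_iops:
            waste = {"vcpu": vcpu - need_vcpu, "ram_gb": ram - need_ram,
                     "iops": iops - need_iops}
            return name, price, waste
    raise ValueError("no flavor large enough")

# An IOPS-hungry app forces a large flavor even with modest CPU/RAM needs,
# so most of the bundled CPU and RAM sits idle while still being paid for.
name, price, waste = fit_flavor(need_vcpu=2, need_ram=4, need_iops=10000)
print(name, price, waste)
```

In the public model that idle headroom is paid for by you and, once returned to the pool, sold again to someone else; in the private model the spare capacity goes back into a pool that only you consume.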
It's one of the things we hear a lot that companies are buying virtual machines based on how many IOPS they need and wasting some of the other resources or paying for provisioned IOPS, which then, same thing, you're not using it all the time. And that's oftentimes one of the most expensive resources in addition to memory. So we modeled the same amount of storage and the same types of storage in both and the same amount of IOPS in both models. Okay, well, we'll hang out for a few minutes if there's any other questions, but thanks for coming, you guys. I really appreciate it. Thank you.