So hello everyone. We're going to do a talk on what I would characterize as an opinionated view we have for people looking to move off traditional virtualization, looking to modernize what they have in light of recent disruptions we're seeing in the market, like whatever Broadcom is doing, with the general push from on-prem to public cloud, but then also: where might you land if you're looking at something on-prem? In our case, we're looking at the combination of products that Red Hat brings to the table to facilitate that on-prem private cloud and hybrid cloud model. There are a number of customers that are successfully leveraging our product suite with regard to OpenStack and the broader portfolio. Generally speaking, this is part of a broader talk track that we have; this is our quick-and-dirty intro to where we see ourselves in this landscape and the various benefits we can provide customers. So, just briefly about me: I'm Greg, I support the national sales team in Canada for cloud and storage. I've been working in the IT industry for about 25 years, with a number of backgrounds from health care and public sector to telco and FSI. I've been using OpenStack for a number of years now, and I've been with Red Hat for just a little while. So why are we here? As I mentioned earlier, Red Hat is dedicated to transforming traditional IT models into agile, automated ones using DevOps, cloud-native technologies, and service level objectives, and the solution brief provides a blueprint for organizations aiming to adopt these technologies and lead their IT transformation. So this is the high-level agenda we're going to talk through here; we're already through most of the introduction. 
What's a little different here is that rather than talking exclusively about technology, which is the classic way we address this since a lot of us are engineers, we're going to tackle this from a top-down, strategic point of view. There's a gentleman named Roger Martin who has a strategy framework which we've incorporated into this presentation. We'll then move on to the hyperscaler experience, where we talk briefly about why people like them and why they're so wildly successful, and then how we see the Red Hat OpenStack Platform fitting in alignment with what the hyperscalers are doing. Then we're going to get into a few problem statements, and some solutions, that are common to customers undergoing this kind of digital transformation. We'll look at how we can apply that strategy framework to cloud transformation, and how that cascade of choices manifests in the transformation. So, briefly, about Roger: he is a renowned academic and business consultant. He's been ranked number one on Thinkers50, which recognizes strategic and integrative thinking. He's published a number of papers on how to address business problems by aligning technology with business needs. What we're looking at doing here is taking the five-step process Roger outlines. First: what is your winning aspiration? Here we're talking about an organization's primary vision, for example, transitioning to a cloud-based infrastructure. Then, very discretely: where will we play? That's about selecting operational areas, such as making the decision to adopt something like Red Hat OpenStack for on-premise. And then: how are we going to win? 
When you look to move toward a solution, there has to be some criteria as to what strategy you would employ to be successful at what you're doing and how you would surpass competitors; for example, making strategic decisions on automation, like incorporating Ansible with your cloud infrastructure. Then: what are the core capabilities that are going to enable this, the essential skills, technologies, and proficiencies required to be successful? And finally: what management systems do we have in place to run this whole show? With that, let's talk briefly from a hyperscaler point of view. One of the things we find when talking to customers who are looking to change everything is that they pay very close attention to things like ease of use and interoperability between capabilities in the public cloud, backed by standardization and comprehensive documentation. The hyperscalers have put a lot of effort into making a very vast portfolio that's fairly easy to consume. And so one of the things we've noticed is that the predominant industry preference leans toward the service model adopted by the major three hyperscalers, which aligns with businesses shifting away from traditional setups. So now let's look at the Red Hat take on the hyperscaler experience. In this concentric ring diagram, at the very heart, OpenStack provides a foundational level of capability: storage, networking, compute, and management. Outside of that ring, we have a number of well-defined use cases; OpenStack is one of the dominant forces for things like NFV, AI/ML, and HPC. 
We also see it used quite frequently in test clouds and so on. But the purpose of this conversation, where we're focusing, is that in light of what's going on in the industry right now, we feel there's a prime opportunity to position OpenStack for traditional enterprise hosting services. Let me qualify "enterprise hosting services": most mid-to-large enterprises are hosting providers of a sort. There is some app that they can't modernize; it's just a thing they have to run that provides some vital business service, and they have to figure out how to live with it. There's not really an opportunity to containerize or modernize it, so there has to be some thought or mechanism for living with that. Having come primarily from those types of organizations, this is where there are a lot of similarities between what you would find in a traditional hosting provider and what these mid-to-large enterprises are doing. And then finally, on the outside, we have the portfolio products that are very tightly integrated with what we're doing at Red Hat with regard to private and hybrid cloud, whether it's Shift-on-Stack, the Ceph integration, or the Ansible integration into OpenStack. There's a very powerful interplay between all of these technology capabilities and solutions, and we see that as a competitive advantage we're able to bring to customers. Much like the earlier slide on the hyperscaler experience, we see that OpenStack integrates fairly seamlessly with orchestration tools like Ansible, and you can position things like infrastructure as code via GitOps, so it aligns with DevOps practices. The Red Hat OpenStack Platform ensures standardization and interoperability. 
I mean, Red Hat is a fairly trusted name in the industry for that type of stuff. So when we talk about maturity and stability, we tend to see ourselves as another mature and trusted name, on par with what we see from the hyperscalers. We actually work very closely with them with regard to deploying our products in cloud, in both public and private deployments. One of the things here, when we talk about the standards we've all gotten used to: this example is the typical minimum level of what a customer would expect when they're looking to deploy a cloud data center. There are these four pillars, right? You expect there's going to be something that runs my workloads, something that provides networking for those workloads, storage for those workloads, and ultimately management of those workloads. And one of the things we find is that, while everyone expects this from the proprietary cloud-based model, the open source data center equivalent maps very well to it. From an OpenStack point of view, we have all the same buckets, just different open source projects. Where we see Red Hat taking that open source data center model and extending it is in our product portfolio. OpenShift, OpenStack, Ceph, Ansible: these are all tested enterprise software portfolio solutions, and what we're essentially talking about is de-risking your deployment. This is a situation where we let customers focus on running their businesses and not necessarily becoming a development house for OpenStack. 
That seemed to be the temperature of the room with the majority of mid-to-large enterprises we've been talking with over the last couple of years. One of the things I'd like to bring to your attention as well is that the term hybrid cloud gets thrown around quite a bit. And one of the things we find is that when you look at Ansible, Ansible is kind of this great glue, right? It's able to do so many different things. One of the capabilities we've been educating customers on is how you can leverage Ansible and its integrations, both with traditional virtualization technologies and with the hyperscalers themselves, to strategically move workloads around as you need them. Contrast this against, for example, the hyperscalers' native toolkits. Whether you're using AWS's migration tooling or the migration services from Azure, it's a one-way trip. Once your VM leaves your data center, there's no mechanism for it to easily move back out of that cloud. It's fundamental to their business model to suck up as many of those workloads as possible and position them in those public clouds. But as you may have heard, a lot of customers that do that find there's no real great way to get out of the cloud short of rebuilding; there's no migration capability that makes it easy. So one of the things we've focused on is bringing awareness to Ansible and the modules we have for things like Hyper-V, VMware, and Nutanix, as well as the hyperscalers themselves. What we're really trying to say is that hybrid cloud for customers should really be about choice, right? 
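As a rough illustration of that Ansible "glue," a playbook can read a guest's details out of one platform and stand up an equivalent instance in another. This is a hedged sketch, not a turnkey migration: the collections (`community.vmware`, `openstack.cloud`) are real, but the credentials, names, image, and flavor below are placeholders you'd size from the gathered facts.

```yaml
# Hypothetical sketch: survey a VMware guest, then create a matching
# instance in OpenStack. All names and sizes are illustrative.
- name: Pivot a workload from vSphere toward OpenStack
  hosts: localhost
  tasks:
    - name: Read guest info from vCenter
      community.vmware.vmware_guest_info:
        hostname: "{{ vcenter_host }}"
        username: "{{ vcenter_user }}"
        password: "{{ vcenter_pass }}"
        datacenter: DC1
        name: legacy-app-01
      register: src_vm

    - name: Create an equivalent instance in OpenStack
      openstack.cloud.server:
        cloud: mycloud              # clouds.yaml entry (assumed)
        name: legacy-app-01
        image: rhel-9-golden        # placeholder gold image
        flavor: m1.large            # sized from src_vm facts in practice
        network: provider-vlan-101
        state: present
```

The same pattern runs in reverse, or against `amazon.aws` and `azure.azcollection` modules, which is the point: the workload's definition lives in the playbook, not in any one platform.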
It should be about: I run workloads where it makes sense to run them. There may be something in public cloud that's advantageous for a given workload. For example, I may have a batch job that requires a substantial amount of GPU power to get a chunk of work done, but then I don't need it anymore, and those workloads can relocate, in a smaller footprint, somewhere else. That's where we want people's thinking to start, rather than "I'm stuck here, how do I make the best of it?" One of the things, when we talk about customer experiences and the knowledge we've gained from our partnerships with hyperscalers, is that one of the foundations of cloud performance is establishing service levels. When you think about it, hyperscalers have this really interesting problem, and they've solved it quite elegantly. They have a requirement that they must be able to run any workload, with loose to no requirements, at any time. That is a very interesting problem to solve. The way they've done it is they've essentially created a menu of service level objectives through which customers consume their cloud, and that has allowed them to be extremely malleable and adaptive. Those lessons haven't been lost on us. One of the best practices we see emerging is that when you're looking to deploy something like OpenStack, you pay special attention to defining the unit of consumption and creating service level objectives that can be met and metered against. That starts turning things like capacity planning and forecasting from "I need a tool, or some type of magic agent that goes out and figures it out" into something more academic, right? 
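To make that "more academic" idea concrete, here's a toy sketch of deterministic capacity math: once a flavor and an overcommit ratio are fixed, the number of guests a host can honestly sell is just arithmetic. Every figure here (core counts, reservation, the 4:1 ratio, the 8-vCPU flavor) is an invented example, not a number from the talk.

```shell
# Toy capacity model: how many 8-vCPU guests fit on one host at a fixed
# overcommit ratio, once the hypervisor's own reservation is carved out.
# All figures are illustrative assumptions.
HOST_PCPUS=96     # e.g. 2 sockets x 24 cores x 2 threads
RESERVED=4        # held back for the hypervisor/OS
RATIO=4           # deterministic 4:1 vCPU:pCPU overcommit
FLAVOR_VCPUS=8

SELLABLE_VCPUS=$(( (HOST_PCPUS - RESERVED) * RATIO ))
GUESTS_PER_HOST=$(( SELLABLE_VCPUS / FLAVOR_VCPUS ))

echo "sellable vCPUs per host: $SELLABLE_VCPUS"
echo "8-vCPU guests per host:  $GUESTS_PER_HOST"
```

With numbers like these pinned down per flavor, forecasting stops being guesswork: "when do I need to scale" becomes "how many guests until the next host."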
From your day-one point of view, when you build a cloud, you can go in there, exercise the cloud with Rally or Browbeat or whatever tool you're using, and then create profiles and units of consumption, such as volume types or different flavors, that portion out what it is you deliver; we'll get into that a little later. It's no longer "how fast can my cloud go?"; it's "how much can I deliver within the cloud before I need to scale?" That's the change in mindset, going from "build it and they will come" to policy and prediction for what you're doing. As examples: replicating something like AWS GP2 and GP3 volumes, which you may be familiar with, IOPS per gigabyte or fixed-performance IO; doing deterministic oversubscription, recreating the mechanism AWS Nitro uses for deterministic oversubscription of instances with service guarantees so that VMs don't clobber each other; or doing traffic policing between different networks. These are all building blocks, and getting them figured out when you stand up your cloud really counts toward living with it. One of the things we've been socializing with customers as well, because they'll say: "How does cloud bring value? That's all really cool, but how do I transform my traditional legacy workloads? It's great that we have this amazing plumbing and I can define the unit of consumption, but how might I take cloud-native practices and modernize old applications?" A thought that has occurred to us, drawing on lessons we've learned elsewhere, is positioning cloud to enhance and transform lifecycle management. 
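As a sketch of what a GP2/GP3-style unit of consumption can look like in OpenStack, Cinder lets you attach QoS specs to volume types: `total_iops_sec_per_gb` gives AWS-style IOPS-that-scale-with-size, while `total_iops_sec` gives a fixed ceiling. The type names and numbers below are illustrative, not a recommendation.

```shell
# Hedged sketch: two metered volume classes, one scaling IOPS with
# size (GP2-ish), one with a fixed performance ceiling (GP3-ish).
openstack volume qos create gp2-like \
  --consumer front-end --property total_iops_sec_per_gb=3

openstack volume qos create gp3-like \
  --consumer front-end --property total_iops_sec=3000

openstack volume type create gp2
openstack volume type create gp3
openstack volume qos associate gp2-like gp2
openstack volume qos associate gp3-like gp3

# Consumers now pick a metered class, not a backend:
#   openstack volume create --type gp3 --size 100 app-data
```

The point is that the volume type becomes the unit of consumption you meter and plan against, exactly like picking a volume class in a public cloud console.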
Lifecycle operations are a thing everybody deals with, especially in these traditional environments. By leveraging things like Ceph storage and capabilities like Red Hat Image Builder, which we recently showcased, a toolkit-slash-service you can get on cloud.redhat.com for customizing RHEL images, and then using cloud-init to program in identity and configuration, you have a mechanism where the lifecycle of a host can really start looking like a software pipeline. The app is the OS image, and all the customization that most customers do, some type of hardening, "these are my logging servers," whatever it is your organization does, goes in as edits that get streamed into the image. Then you do unit testing with Ansible or other mechanisms, so that when, for example, Red Hat pushes out a new point release of RHEL, and this isn't limited to Red Hat either; when Microsoft has a new point release of Windows Server and it's streamed in, you have this pipeline that creates the latest and greatest version with your customizations. And when you ask, "well, how does that improve lifecycle for me?": consider a workload that leverages this. A new version of the OS comes out, and the pipeline generates the latest customized image. When you go to lifecycle that workload, what you're really doing is having the workload take a knee, detaching the data volumes, which are being held by Cinder, and then rehydrating the latest version of the customized image, using cloud-init to punch the identity back into it. And if that sounds familiar, it should, because that's what we do with containers. 
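The "take a knee and rehydrate" cycle described above can be sketched with plain OpenStack CLI calls. This is a hedged outline; server, volume, image, and file names are placeholders, and in practice the steps would live in a pipeline rather than someone's shell history.

```shell
# Hedged sketch of an image-based lifecycle: data stays on Cinder
# volumes, the root disk is replaced by the newest pipeline-built image.

# 1. Quiesce the workload and detach its persistent data.
openstack server stop app01
openstack server remove volume app01 app01-data

# 2. Replace the root disk with the latest gold image, letting
#    cloud-init user-data re-inject identity and configuration.
openstack server delete app01 --wait
openstack server create app01 \
  --image rhel-9-golden-v2 --flavor m1.large \
  --user-data app01-identity.yaml --wait

# 3. Reattach the data volume; the app picks up where it left off.
openstack server add volume app01 app01-data
```

Where the network layout is unchanged, `openstack server rebuild --image rhel-9-golden-v2 app01` collapses the delete-and-recreate step into a single call while preserving the server's ports.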
The data lives on volume mounts; the images are immutable. So cloud is letting me take the lessons I've learned from containerization and apply them to traditional workloads. The extension of that is where you take, for example, your customized image, and application teams in your environment fork that project as they would any other Git project. Now I can take the base OS image my corporation has blessed as the minimum, and start layering in that DB2 database, that Oracle database, all the customizations required to make this a thing I can consume. What we're really turning this into is a catalog of capabilities, where we take the layered, customized image, base plus whatever the application is, and then position things like Ansible to do the detailed configuration on deployment, or things like Heat to configure network, storage, and security groups in a persistent way. This ultimately creates shift-left opportunities for an internal marketplace. Because when you think about it, it's great from a technologist's point of view to have all this really cool stuff, but what's valuable to the business? Mapping back to Roger's framework, the competitive advantage, the core capability we're leveraging to win in the area we're targeting, is that we don't necessarily want or need super-technical people to do this. With all this framework and tooling in place, I can capture it in, say, a ServiceNow catalog that triggers a workflow that provisions this, and then I can hand that off to a call center. That is the value to the business, and this is what we have seen the hyperscalers position to be super successful at. 
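As a sketch of the Heat side of such a catalog entry, a minimal HOT template can pin down the security group, port, and server as one versioned artifact that a workflow engine can launch. The image name, network, and firewall rule below are invented for illustration.

```yaml
# Hedged sketch: a minimal Heat (HOT) template for one catalog item.
heat_template_version: 2018-08-31

parameters:
  image:
    type: string
    default: corp-rhel9-db2-v1   # layered gold image (placeholder)

resources:
  app_secgroup:
    type: OS::Neutron::SecurityGroup
    properties:
      rules:
        - protocol: tcp
          port_range_min: 50000   # e.g. a DB2 listener (assumption)
          port_range_max: 50000

  app_port:
    type: OS::Neutron::Port
    properties:
      network: provider-vlan-101
      security_groups: [{ get_resource: app_secgroup }]

  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: m1.large
      networks:
        - port: { get_resource: app_port }
```

Because the template is just text, it forks, branches, and reviews like any other project, which is what makes the internal-marketplace model auditable.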
So, let's get into...
"Hey, Greg, before you go, I think there's a question in the chat, if you don't mind handling it. Kurt is asking: is there a tool or process that allows us to bake identity management, like IPA, into a VM image deploy? It's currently handled by a post-install process."
Absolutely, yes, the answer is yes. And baking in, so Red Hat IdM, which is the productized FreeIPA, is that what the question is about? Kurt, are you able to get on voice? I don't know if you can speak or just type.
"I think I can speak. Can you hear me?"
Yeah, there you are.
"Good, I have a voice. It's just one of the things that's on my mind often as I roll out images: I'd like to roll out baked images that provide IPA access, through roles defined by groups, onto systems. And what I'd like to be able to do is have all of that baked into an image, so when it deploys, I can tell it 'you're this thing,' and as it deploys, it does that. I don't know if there's a way to do that in FreeIPA. I guess I could do it with cloud-init, now that I'm saying it out loud."
Yeah, I was going to say, it's funny, in another life I was an IPA admin. I would leverage cloud-init for that. In IPA you have this notion of an OTP, a one-time password. The management component can generate an OTP, which can be streamed into the bootstrap process via cloud-init. So when that image comes up, you have the configuration data for the instantiation of that workload, and that facilitates the auto-registration inside of it. That's a standard thing, right?
"Okay. I just haven't done it. Now that we talk about it out loud, it does make sense. Okay, thanks."
Did that answer the question? It's a good question. Okay. 
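The OTP flow described in that answer can be sketched roughly as below. This is a hedged outline: `ipa host-add --random` and `ipa-client-install --password` are real FreeIPA/IdM commands, but the hostnames, domain, image, and the exact shape of the command output parsing are placeholders to verify against your IPA version.

```shell
# Hedged sketch of OTP-based auto-enrollment via cloud-init.
# Run step 1 from an IPA-enrolled admin host; names are placeholders.

# 1. Pre-create the host entry and let IPA generate a one-time password.
OTP=$(ipa host-add web01.example.com --random --raw \
      | awk '/randompassword/ {print $2}')

# 2. Stream the OTP into the instance via cloud-init user-data.
cat > user-data.yaml <<EOF
#cloud-config
fqdn: web01.example.com
runcmd:
  - ipa-client-install --unattended --password '${OTP}' --domain example.com
EOF

# 3. Boot the gold image with that user-data (OpenStack shown as example).
openstack server create web01 \
  --image rhel-9-golden --flavor m1.small --user-data user-data.yaml
```

The OTP is single-use, so nothing sensitive lives in the image itself; the identity is punched in at instantiation, exactly as described above.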
"Yeah, John Mills from NASA. One of the biggest problems we've had with image building is that images tend to be one flat partition a lot of the time. In the federal government, in conjunction with NIST and CIS benchmarks and the security scans that follow, we have a minimum of about six different partitions: /, /var, /var/log, /var/log/audit, /var/tmp, /tmp. Stuffing all those different partitions into a dynamically built VM image has been tricky."
Yeah, so there are a number of different ways to handle that. From an Image Builder point of view, I believe Red Hat actively pursues Common Criteria certification, and one of those requirements is having, for example, those separate partitions, the age-old concern about filling up file systems, right? I get it. By leveraging a proper software pipeline, whether you're using GitLab, GitHub, whatever it is for your actions, these are customization steps that can either be done by Image Builder itself or in the pipeline via something like Ansible.
"Is Red Hat Image Builder different from DIB, the diskimage-builder that OpenStack uses, like Ironic uses?"
Yes, it is. Image Builder is something we showcased this year at Red Hat Summit. It was originally a CLI, but now you can actually use it through cloud.redhat.com, so there's a web interface to it, and there's still a CLI as well. That's something we can talk about offline.
"Okay, sure."
Yeah, no problem. But yes, there are powerful mechanisms available, from what Image Builder can do, what Ansible can do, and what cloud-init can do from a bootstrap point of view. The point, though, is that by leveraging the pipeline approach to delivering gold images, you bake in all of that standardization, whatever your requirements are from a compliance point of view. 
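As a sketch of the Image Builder side of that answer, blueprints support filesystem customizations that carve out exactly the kind of CIS-style partitions the question describes. This is a hedged example: the blueprint name and sizes are placeholders, and the `minsize` field name should be checked against your osbuild-composer version (older releases used `size`).

```toml
# Hedged sketch of an Image Builder blueprint with separate partitions.
name = "hardened-base"
description = "RHEL base with CIS-style partition layout (illustrative)"
version = "1.0.0"

[[customizations.filesystem]]
mountpoint = "/var"
minsize = "10 GiB"

[[customizations.filesystem]]
mountpoint = "/var/log"
minsize = "4 GiB"

[[customizations.filesystem]]
mountpoint = "/var/log/audit"
minsize = "2 GiB"

[[customizations.filesystem]]
mountpoint = "/var/tmp"
minsize = "1 GiB"

[[customizations.filesystem]]
mountpoint = "/tmp"
minsize = "1 GiB"
```

Because the blueprint is a plain TOML file, it versions and reviews in Git like everything else in the pipeline.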
You would have that done in the pipeline, and the release of, say, the NASA 1.0 image would be produced by that pipeline, and people would base their work off of it. It's very much a layered approach.
"All right, so that's kind of what we're doing. We start with the stock image, fresh from the image builder, put that into a VM in OpenStack, lay in, with Ansible, all of our benchmarks and compliance, then snapshot it. That prepped image becomes the gold one that gets pushed out to bare metal or VMs and configured by Puppet or Ansible for whatever that particular purpose is. And the partitions can get tricky."
Oh, for sure. I guess to summarize: there are new tools Red Hat has made public that may facilitate that, but it would also mean taking steps similar to what you're doing yourselves and positioning them as part of a software development pipeline. So let's say you have a new requirement, the standard changes, and you need to tweak something in the image. That goes in as its own branch, right? You create a branch, apply your change, run your unit testing based on standard DevOps processes, someone reviews the change, they review the workload, and then it gets merged in; you have a 1.1 or a 2.0 release. And that's just a really good way of doing it, from an auditability point of view and a compliance point of view. One of the things we'll talk about a little later, part of why you would look at changing your philosophy or methodology for conducting business, is that most organizations have some kind of change control in place, right?
"Yeah, the ITIL change record."
That was beaten into everybody's heads in the 90s, ITIL for everyone, right? But it is an important part of how companies manage this. 
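A pipeline stage wired to that merge event might look roughly like this. It's a hedged sketch using `composer-cli`, the on-prem Image Builder CLI; the blueprint name, playbook, and output parsing are placeholders to adapt to your CI system.

```shell
# Hedged sketch: rebuild and test the gold image when the blueprint
# branch is merged. Names are placeholders; adapt to your CI runner.

# Push the updated blueprint and kick off a qcow2 build.
composer-cli blueprints push hardened-base.toml
COMPOSE=$(composer-cli compose start hardened-base qcow2 | awk '{print $2}')

# Wait for the build, then pull the artifact for the test stage.
while composer-cli compose status | grep -q "$COMPOSE.*RUNNING"; do
  sleep 30
done
composer-cli compose image "$COMPOSE"

# Unit-test the candidate image with Ansible before promoting it to gold.
ansible-playbook verify-hardening.yml -e image_file="$COMPOSE-disk.qcow2"
```

The promotion step at the end is what turns "merged to main" into "this is the 1.1 release," which is exactly the audit trail a pre-approved change process wants.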
By implementing industry-standard practices like software pipelines for delivering infrastructure, you get to have a different conversation with the business: "I'm looking to deliver services and content, and I want to move toward pre-approved changes." And they'll say, well, in order to have pre-approved changes, for that agility, you have to have certain controls and audits in place. A software development pipeline is a really good way of doing that, right? So in any case, we're going to go through three very top-of-mind problem statements we've been working through with customers. The first one: I have legacy workloads that live in existing virtualization systems and/or on bare metal, and I need to be able to move them as-is to the new solution without disrupting their use. This is typically what we'd call the lift and shift, but for a lift and shift from traditional virtualization to OpenStack, we leverage some of the good plumbing inside OpenStack: we have this notion of a super tenant. Consider what vSphere, Hyper-V, or even RHV looks like. With these traditional virtualization technologies, if you talk to any of those admins, they're used to having all their workloads in a single pane of glass, and they'll typically have their VMs plumbed with VLANs from all the different parts of their network. This is how they need things to be until they're ready to uplift their workloads. And we're able to recreate that traditional management monolith inside OpenStack, logically. For the OpenStack folks here who are familiar, this is nothing more than creating a tenant with a number of provider networks on which the VMs' ports are provisioned. 
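The super-tenant plumbing just described can be sketched in a few CLI calls: a provider VLAN network plus a port that pins the guest's original MAC and IP, so the migrated VM comes up unaware anything changed. VLAN IDs, subnets, and names below are placeholders.

```shell
# Hedged sketch: recreate a legacy VLAN and a guest's exact NIC
# inside the OpenStack super tenant. All values are illustrative.

openstack network create legacy-vlan-101 \
  --provider-network-type vlan \
  --provider-physical-network datacentre \
  --provider-segment 101

openstack subnet create legacy-vlan-101-sub \
  --network legacy-vlan-101 --subnet-range 10.1.101.0/24 \
  --no-dhcp --gateway 10.1.101.1

# Pin the guest's NIC exactly as it was on the old hypervisor.
openstack port create app01-nic0 \
  --network legacy-vlan-101 \
  --mac-address fa:16:3e:aa:bb:cc \
  --fixed-ip subnet=legacy-vlan-101-sub,ip-address=10.1.101.50

openstack server create app01 \
  --image app01-migrated --flavor m1.large --port app01-nic0
```

Because the MAC, IP, and VLAN all survive, upstream firewalls and access processes keep working untouched, which is the whole point of this first step.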
The advantage of this approach, and again, there's a lot of strategy in how you convert from traditional virtualization to OpenStack, is that you don't boil the ocean by reinventing how access and networking are done for the workloads. All of the firewalls, all of the InfoSec processes, all of the change workflows required for provisioning and maintaining access to those VMs still exist. How this plays out: the workload that lives in vSphere, RHV, or Hyper-V takes a knee, it gets migrated, and when it comes up, the ports attached to it have the same MAC addresses, it gets the same IP addresses, and it's on the same provider networks. Those VMs don't know anything's changed, right? They come up and everything just works. It's an important first step for getting people moved without making it an unbelievably daunting task. Our second problem statement: I have workloads that can run in a cloud-native fashion but are being hosted in a legacy virtualization system, and I need to pivot those workloads to the new solution. One of the things we find is that a lot of customers have deployed OpenShift or Kubernetes, and I'll be agnostic here because the solution works for both. So whether it's some flavor of Kubernetes distribution or OpenShift running in vSphere, Hyper-V, RHV, or whatever, by positioning Red Hat Advanced Cluster Management you're able to manage those Kubernetes clusters and then do a workload migration. This is a fairly simple, fairly straightforward way of going from old to new: you stand up new Kubernetes or OpenShift clusters on OpenStack, and we have a very good story about running OpenShift on OpenStack. 
With ACM, you would have the klusterlet operator, or Helm chart if it's plain Kubernetes, installed inside your Kubernetes cluster, and you would do a workload pivot from ACM, which redeploys the application, recreates all the other bits as part of the deployment on the new cluster, and then you cut away the old nodes. So whether it's a legacy application or a container-based application, there's a fairly straightforward way of migrating, leveraging something like ACM or Ansible for the migration. Was there a question? The last problem statement is not really a technology problem. It's more: our central IT model creates bottlenecks, limits scalability, and slows innovation; the entire burden sits on one team, so inefficiencies arise and costs increase as demand increases. Who here has heard, or has lived, the situation of "I'd love to do that, but I just don't have any time"? I'm sure everybody has experienced that at some point. I know in a different life there was never any time to really do anything, because of how many places you were being pulled. So this is poignant, and we're going to briefly talk about why that is as we see it. When we talk about new business, that typically means a number of new applications, new technology being considered, and ultimately new expectations associated with that new business. Traditionally, that gets funneled through central IT, which digests all of it and comes up with some form of service delivery. But when you think about that digestive process, what is central IT actually doing? There's a whole bunch of stuff: coming up with the requirements for security, monitoring, and compliance, setting a schedule, figuring out capacity management for the new thing. 
There's a whole bunch of thought that goes into this, and it all gets pushed down on central IT. What happens is that as your business scales and more and more projects kick off, you end up with these discrete little work packages funneled through that central IT organization, and the problem is that the operational bandwidth and the context-switching just become too much. That's where we get into the whole "yeah, I'll deal with it next month, I have 20 other things I'm doing." We liken this to high school or college, where you'd have different subjects on different days: the more subjects you took, generally speaking, the lower the quality of the work. We're only human, and time-slicing has diminishing returns. That's very true when you look at traditional central IT organizations. One of the typical answers we'll hear is, "Oh, I can just scale the IT organization; this is a problem money can solve." Unfortunately, that's not true either. What we find is that as IT organizations grow in number, you end up with tribal knowledge. It becomes less about the technology and more about "Bob knows how this customer likes their application, he knows their tolerances, he's familiar with when they can take outages." Bob can't ever leave the company, right? Bob can't take vacation either, and Bob isn't a very happy guy most of the time. I'm sure everybody at some point has been Bob. So there are diminishing returns in scaling that central IT organization. Interestingly enough, this was a problem the hyperscalers had as well, and they came up with a fairly interesting solution. Our recommendation is kind of a reverse-engineering of what those organizations have been doing very well. 
So what they've essentially done is create a strategic compartmentalization of how they run their organization. In this example, in the context of what we're here to talk about, let's take an OpenStack or cloud team and a line-of-business team, the line-of-business team being the consumer of the infrastructure. So what would the cloud team take care of? There's a really interesting demarcation point here. Much like the hyperscalers, the cloud team would be responsible for all the servers, the components, the software-defined networking, all of that, but in addition, the lifecycle and CI of the infrastructure: your compute nodes and the environment they run in. Which compute nodes, from what vendor? Typically you need some oversight on that; you can't just give people completely free rein. But there are some interesting ways of delineating that and provisioning it out for management by customers. Generally speaking, though, the cloud team focuses on making sure the cloud is running and providing services. Well, what about the LOB team? The LOB team is your classic customer, your consumer of the cloud. While there are many personas inside these organizations, we've found there are three that stand out. You have the line-of-business site reliability engineers: much like you would see in AWS or Azure, these are the folks provisioning the infrastructure, creating security groups, and creating all the policies needed to run your operation from a technical point of view. Then you have your developers, the other major persona you will find at customers, writing the applications and putting them together. And then some type of BSA, a business systems analyst, administrating the services that the developers create, which run on the infrastructure that the SREs provide.
And the idea here, what the hyperscalers figured out, is that this strategic compartmentalization works because it's almost hubris to think that a central organization would know a line of business better than the line of business itself. There are substantial advantages to that: for example, a line of business knows the tolerances of its own customers. The central cloud team doesn't need to know when that customer can take outages; their SRE does. Just like with AWS: AWS doesn't care when your servers are down, as long as it wasn't their fault. They let you take care of that; they're just there to provide services. So when you look at this from a 50,000-foot view, you have a decoupled, disaggregated, scalable operating model, in the sense that I have my central cloud team, but they are detached, and far enough away from where the workloads run, to be able to keep operating at a very focused scale, and they let customers worry about customer things. And, with apologies to the hyperscalers, there are a number of very interesting OpenStack references for this too. If you've ever been to a CERN talk at an OpenInfra event: that's something like seven people running what is probably the largest OpenStack cloud in the world. They stay out of the way of the scientists and the organizations that come in and run particle physics on the LHC. So it's a model that works and is proven. It's not that we think we're so smart; it's that we've been paying attention, and we're trying to make people aware of how cloud can enhance their business. It's not just the technology. It's not just workflows and automation. It's also about bringing in, and evangelizing across the rest of your organization, a new way of thinking about how you consume and do work.
I'd like people to move away from those central IT deployment patterns and thought patterns, and move toward what we've seen proven successful in private and public cloud. So we move on to the strategy framework for cloud transformation. Essentially, when we circle back to Roger's framework, to succeed in cloud transformation one has to identify the key challenges, such as technical debt and skill gaps, and ultimately align those things with your organization's strengths: for example, expertise in DevOps or cloud-native technology, an emphasis on iterative improvement, and cultivation of a culture that embraces constant change and innovation. That's where we position things like DevOps methodologies, which allow for the customization of specific images using golden-image techniques, enhancing operational resilience; employing more of a cloud-native approach to handle things like virtual machine lifecycle; and positioning orchestration tools, whether it's Ansible or Heat or whatever your flavor of orchestration is. I would have said Terraform, but its licensing situation is changing right now. Regardless, there are a number of different ways to do it; the point is taking that approach of automating in a repeatable, pipeline-driven manner, and then positioning things like service-level objectives for managing customer expectations. Because especially when you look at that disaggregated model, the cloud team isn't in the weeds with the customers anymore; the customers are consumers of the cloud. So they need to look at creating an environment where, much like when you go into public cloud, there's no volume I can provision in AWS EBS that will drain your SAN.
They have policy controls on what an io1 or io2 volume can do and what a gp2 or gp3 volume can do, and you can do the exact same thing; Cinder is very, very powerful in that regard. Or take how we would position things like cgroup tiering: it's actually part of a different talk we have, on how cgroups can provide service guarantees for running virtual machines with different performance demands, your gold, silver, and bronze tiers. You can even use the clock frequency of the processor on the compute node as a dimension for oversubscription when you apply a guarantee. That notion of defined service-level objectives is kind of the secret sauce. And then finally we get to the choice cascade, where we take strategic decisions, like adopting a cloud-native approach, then move to operational choices, such as determining which key technologies and platforms you're looking to use, and lastly make practical decisions that influence daily operations based on those foundational strategic and operational choices. Ultimately, that choice cascade enhances operational efficiency by aligning decisions, eases the transition of workloads from legacy to cloud-native systems, and empowers those lines of business with responsive, aligned teams. But I'll circle back: ultimately, it is about the power of choice. That choice cascade transcends the technology; it focuses on organizational transformation, innovation, and scalability. Making informed decisions within this cascade future-proofs the organization, ensuring resiliency and adaptability as you grow and scale; you're not being pigeonholed anymore.
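To make those policy controls concrete, here is a minimal sketch of tiered service guarantees: per-tier IOPS and throughput ceilings, in the spirit of EBS volume types or Cinder QoS specs, plus a per-tier CPU oversubscription ratio. The tier names and all the numbers are assumptions for illustration, not Red Hat, OpenStack, or AWS defaults.

```python
# Illustrative gold/silver/bronze service tiers. Ceilings mimic what a
# Cinder QoS spec or an EBS volume type would enforce; cpu_overcommit is
# the vCPU:pCPU oversubscription ratio a tier tolerates. Numbers are made up.
TIERS = {
    "gold":   {"max_iops": 16000, "max_mbps": 1000, "cpu_overcommit": 1.0},
    "silver": {"max_iops": 6000,  "max_mbps": 250,  "cpu_overcommit": 4.0},
    "bronze": {"max_iops": 1000,  "max_mbps": 125,  "cpu_overcommit": 8.0},
}

def clamp_volume_request(tier: str, requested_iops: int) -> int:
    """Cap a volume's IOPS at the tier ceiling, the way a QoS policy would."""
    return min(requested_iops, TIERS[tier]["max_iops"])

def sellable_vcpus(tier: str, physical_cores: int) -> int:
    """vCPUs a compute node can offer under the tier's overcommit ratio."""
    return int(physical_cores * TIERS[tier]["cpu_overcommit"])

print(clamp_volume_request("silver", 20000))  # capped at 6000
print(sellable_vcpus("bronze", 64))           # 512 vCPUs from 64 cores
```

The point of the sketch is the shape of the guardrail: no matter what a tenant asks for, the tier policy bounds what the infrastructure will hand out, which is exactly why no single volume can drain the SAN.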
And then transitioning to this disaggregated operational model, which, as we've seen with the hyperscalers, boosts overall efficiency and empowers business sectors by integrating DevOps and cloud-native technologies. DevOps is not just for containers; it can be for everything. These are ultimately just the lessons we've learned from that journey, and they can be applied to traditional virtualization as well. And that was the whirlwind tour. So I think we have a little time for questions, if there are any. [Audience] Can I get copies of your slides? [Speaker] Someone else was asking for those too. There will be some slides we'll be able to share, but I won't be able to share this deck specifically, not yet. This is kind of a new angle we've been pursuing lately, in light of what's been going on with Broadcom and so on, and I think there's a pretty interesting opportunity here for traditional mid-to-large enterprises. But for anyone that's interested, if you can send a note to Chris, we'll make sure we get something out to you.