Let's get into the big topic, open source, and something that we actually didn't realize is so awesome: we are an open culture that actually makes space in the process for a developer or, let's say, for the communities of the system to really bring them along. Welcome to another episode of In the Clouds. I am your host, Stu Miniman, still coming to you from our home offices; things are starting to open up a little bit here in the United States as well as around the globe. Earlier this week, I actually went to the global EBC (Executive Briefing Center) for Red Hat in Boston, which was the first time I'd been there this year, and got to see the studio we're working on there. We have a temporary setup, and I got to see the space where we're going to build a more permanent one. I'm looking forward to the day when I'll be able to sit down with an actual guest in person for this program. But until then, I'm happy to bring the guests in remote; it makes it easy. We can just go meeting to meeting to meeting, and some of them get broadcast out to the world. So I'm really happy to welcome to the program Mike Barrett, who leads product management on the OpenShift team, which I'm a part of. Mike, it is great to see you. We've got some exciting things to talk about here. How are you doing today? Great, and thanks for having me, Stu. I love this program. All right, awesome. So, hey, just for our audience, could you give us a bit on how long you've been at Red Hat? And, as I said, you lead the product management team. We're going to put a link up in the chat; your team actually does a 90-minute deep dive, so if people want to know all the wonderful features and all of the things that are there, they can go watch that. We, of course, are going to up-level a little bit and talk a little bit more about the cloud piece. But yeah, why don't you do your own introduction, please? Yeah. 
I mean, a little bit about myself and how I got here. You mentioned Boston. We have a Boston connection, Stu. I lived in Boston for 14 years. I was in Chestnut Hill, Brighton, Chelmsford, Watertown, Medford, Lynn. So I bounced around quite a bit. But, interestingly enough, I got into computers through finance and marketing. I was a finance and marketing major at Boston College, and I realized very quickly that in order to excel in those studies I would have to do some coding: models, stats, predictive analysis, all that good stuff. And when I started to get into the computer science areas, I sort of fell more in love with them. That brought me to Sun Microsystems in the late 90s. So I'm at Sun Microsystems during the internet boom, during the dot-com bubble, where everybody's talking about how we're all going to be millionaires, right? And I'm working on servers that are more expensive than my apartment. I'm working with these huge industry names who have since gone out and started major companies. It was sort of the time of my life back then. I got into Sun through tech support, and that experience has stayed with me, and it still stays with me here at Red Hat: you have to be very connected to the customer. During that time in my career, I was taking probably 50 support incidents a day. You would hang up the phone, you would pick it up again, and you'd have 25 cases in progress. You have that caseload always in your queue, turning. But that really brought me to an understanding that I wanted to get into product management. It was: how did this happen? How could that fail? How did somebody not think about that, right? And so I gradually went from support, to sustaining engineering and bug fixes, to ISV partnerships, to field enablement, to technical marketing, and then into product management. That was my trajectory into product management. 
Yeah, it's interesting, Mike, you and I have some parallels there. I started out doing some coding, studied engineering, had a few field roles, and then took a product management role. When I joined EMC, I worked there for 10 years, from 2000 to 2010. And it's funny, Sun hardware and all of that stuff, we could talk about that forever. But as the junior product manager, they were like, "Oh, hey, give Stu the stuff that nobody understands and we can't really figure out." And one of the things I got handed was the product manager role for Linux, back, I have to admit, more than 20 years ago, which predates when RHEL existed. If RHEL had existed, the job would have been easy. But back then it was, oh my gosh, how do we support that? We only had like one engineer doing the integration testing of how that worked. And when Red Hat came out with Advanced Server and then RHEL, it made our lives a lot easier, because at EMC we were all about enterprise production workloads; that's what we brought to the market. So it's been interesting to watch that journey, something I'd watched Red Hat do for a long time before, of course, joining a year and a half ago. But it does tee things up: my background is infrastructure. And one of the questions we've had for the last decade or more is, with cloud, does hardware matter? For those of us that watch the cloud, there was a rather small acquisition that Amazon made a bunch of years back, of Annapurna Labs. It was only $350 million. I mean, is $350 million that big? Well, let's compare: IBM bought Red Hat for $34 billion. So $350 million is not an insignificant acquisition, but it has had a large impact on the cloud, because Annapurna Labs designed the chips that are now the Graviton servers. 
What I've heard in the market is that it might be the largest-volume ARM server in the world, and that brings us to where we are with 4.10. So, Mike, ARM servers in the cloud with Amazon: tell us about what we have and why we think this is a pretty important thing for our customers. Yeah, I mean, I'll go really high level first, and then we'll dive into the actual release and what it does for hardware. But yes, to answer your question, yes, hardware still matters. And in fact, that's probably the biggest lesson I learned when Oracle purchased Sun Microsystems. We were tasked with: how do you accelerate application patterns and application benefits with operating systems, with virtualization, and with hardware accelerators? And what I realized for the first time in my career was that the application stacks have a lot of RAS: reliability, availability, scalability. They have those features, and they almost try to duplicate what the operating system and what some of the platforms are trying to provide. And so PaaS, this PaaS layer that came to be years ago, was a good battleground to consolidate what an application wanted out of the hardware. It was a new way to vocalize: how do I get CPU? How do I get memory? How do I get another instance of myself? How do I make sure that my persistence is there? And Kubernetes really helped do that for us. So now we have that layer, and we want to bring back hardware. And I've got to tell you, when we first started this project of bringing hardware attributes into Kubernetes, there weren't necessarily a lot of partners or friends. There were some, like Intel and NVIDIA and companies like that, that were like, yes, let's double down on that, let's go after it. But a lot of the Kubernetes ecosystem was primarily focused on clouds and virtual machines at that time. 
So we coined a phrase, Kubernetes-native infrastructure: what if we could take a step back and take some investments from other areas of the company, like our virtualization stack, our RHV, our KVM, our QEMU knowledge, our ability with OpenStack, and everything we do with network interfaces and IPv6 and NUMA awareness? I mean, you want to go in the back room and stretch after going through all these words that you have to be aware of when you start touching metal. How do we bring that into Kubernetes? How do we podify them? How do we get them to run in a way that would be Kubernetes-native? The project took a couple of years to really get off the ground, and 4.10 is really a declaration. 4.6 was the first step, where we had real features that customers could touch and feel in production. It took a couple of releases to round off some edges, and now in 4.10 we're really hitting the home run. So in 4.10, we announced the support for ARM. And I've got to tell you, it wasn't as easy a decision as you would think. You would think everybody loves ARM, that this would be an easy decision to go invest in ARM and do it. But you really have to know your customer base, and you have to know how they're going to be using ARM. That has a lot to do with the fact that there's no one ARM. When you get into the world of ARM, there are a lot of different server specifications, a lot of different ways people build those ARM servers, and you really have to focus on very precise investments. We decided to go out with AWS ARM first. So in 4.10, you have AWS support. You also have some bare-metal on-premises support for a server specification: not the cool stuff in your office; the more expensive ARM servers are what we're supporting right now. That has a lot to do with our Edge journey and where we're headed there. Azure will most likely be next on the roadmap that we'll declare support for. 
And right now with the ARM support, it's a homogeneous cluster. So everything in that Kubernetes cluster is going to be ARM. Our releases at the end of this year will allow you to mix those in a heterogeneous way with other x86 workloads. We do that because a lot of the feedback on these use cases for ARM right now is QE: just massive cost reduction in your QE spend on a lot of these clouds as you move to ARM. And that was a driving force there. Mike, I'm glad that's what I wanted to get into with you. ARM's been around for a number of years. Everybody from, hey, Apple's got their own chips that people are getting their hands on, laptops running the Apple chips, to, oh yeah, I've got a Raspberry Pi sitting over there, can I just run it on that? Well, no, there's a certain amount of resources we need underneath. There was all the work that our Linux team needed to do, so, RHEL support for ARM. So I guess, could you, Mike, just tell us about that journey for ARM? You said we started touching it in 4.6. Here we are with 4.10. This is now generally available; you can run this in production on certain workloads. And then we've got a robust roadmap going forward. Is that right? That's it, exactly. So, we entered into this multi-chip world first supporting Power, IBM Power, then we added support for IBM Z. During that time, we built our chops on how to run a multi-arch engineering team. You've got to change your internal CI/CD test harness. You have to do a lot of heavy lifting there. So, we were doing that heavy lifting, we were bringing those on, and we were lucky enough to stand on the shoulders of RHEL. RHEL had been doing this for a decade previous in terms of multi-arch development practices. 
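To make the homogeneous-versus-heterogeneous point concrete: once mixed-architecture clusters arrive, keeping an arm64-built image off x86 nodes comes down to ordinary Kubernetes scheduling. Here's a minimal sketch using the well-known `kubernetes.io/arch` node label; the image name and registry are placeholders, not anything from the release.

```python
# Sketch: pinning a workload to arm64 nodes in a mixed-architecture cluster.
# Kubernetes publishes the well-known label kubernetes.io/arch on every node,
# so a nodeSelector on the pod spec is enough to keep an image built for one
# architecture off nodes of another.

def pod_for_arch(name, image, arch):
    """Build a minimal pod manifest pinned to a CPU architecture."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "nodeSelector": {"kubernetes.io/arch": arch},
            "containers": [{"name": name, "image": image}],
        },
    }

# Hypothetical QE workload pinned to the cheaper ARM instances Mike mentions.
arm_pod = pod_for_arch("qe-runner", "registry.example.com/qe-runner:latest", "arm64")
print(arm_pod["spec"]["nodeSelector"])
```

The same pattern with `"amd64"` keeps x86-only workloads where they belong, which is why a heterogeneous cluster can stay safe for both image types.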
So, it was a great combination of standing on RHEL, bringing those features in, and getting that pipeline out. Outside of ARM, though, when you talk about bare metal, bare metal brings with it a lot of stuff that you have never really considered. And 4.10 has a lot of those other features in it as well. When you talk about bare metal, you have to have a way to land. You're adding bare metal to a distributed system, right, where everything is supposed to be the same, where the end user is not supposed to have to carry the burden of understanding that this box has a GPU but that box doesn't. So, there's a lot that you have to add to the kubelet in terms of having it discover those features and broadcast them up to the scheduler, and getting those engineering projects into upstream Kubernetes and then downstreamed into the product itself, through our node tuning or NUMA topology awareness in the scheduler. We support Precision Time Protocol (PTP), which gives you hardware timestamps and clock time in the APIs you're calling out of the kernel. Device manager, being able to compile drivers for Intel cards as you're deploying the workload: mind blown. These are all things that build on top of each other. I mean, SR-IOV throughput, being able to talk to a service processor on a server. There are those nuts-and-bolts features, and then there are even philosophical differences. You hear a lot in this world that devices or operating systems or nodes, members of this distributed system, should be ephemeral, and they should be able to be rebooted whenever you want to reboot them. A very big design premise in Kubernetes and in OpenShift is that you design your cloud-native applications so that you can bring your nodes down, and it should be absolutely no problem for you to reboot your nodes. You should repave them when you reboot them, and so on and so forth. With metal, you don't necessarily want to reboot them as much as you're rebooting your virtual machines. 
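The "discover features and broadcast them up to the scheduler" step Mike describes is what Node Feature Discovery does: it stamps nodes with labels under the `feature.node.kubernetes.io/` prefix. A rough sketch of how a component might act on those labels, with entirely made-up node data (`10de` is the PCI vendor ID NVIDIA uses, which NFD encodes in its PCI labels):

```python
# Sketch: filtering nodes by the hardware-feature labels that Node Feature
# Discovery publishes. The label prefix is NFD's; the node objects here are
# fabricated stand-ins for what the Kubernetes API would return.

def nodes_with_label(nodes, label, value="true"):
    """Return names of nodes whose labels advertise the given feature."""
    return [n["name"] for n in nodes if n["labels"].get(label) == value]

nodes = [
    {"name": "worker-0",
     "labels": {"feature.node.kubernetes.io/pci-10de.present": "true"}},
    {"name": "worker-1", "labels": {}},
]

# Only worker-0 advertises an NVIDIA PCI device.
gpu_nodes = nodes_with_label(nodes, "feature.node.kubernetes.io/pci-10de.present")
print(gpu_nodes)
```

In a real cluster the scheduler consumes these labels through node selectors and affinity rules rather than a hand-rolled filter; this just shows the shape of the data.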
That was a big thing to put in the product. As of 4.10, we allow you to do that. We allow you to have more control and awareness over when your nodes get cycled and how they get cycled. Mike, a lot there to unpack. Actually, we've got Coder Grammar, who has been watching, really excited to get their hands on 4.10, and they actually set us up for an issue you want to talk about. They asked about the sandbox containers, Kata Containers. We have some updates in 4.10, and we've also really seen a maturing of OpenShift Virtualization. Maybe explain those two topics, because they tie into the things you were just talking about. You have your container on one end of the spectrum, and then you have your virtual machine on the other end. We cater to both of those: with OpenShift Virtualization, which is the KubeVirt project, and then, obviously, RHEL containers. In between, you have sandbox containers. These are what a lot of people in the industry will call micro-VMs. RHEL invests in QEMU, and we have a more slimmed-down QEMU that's multi-purpose. We feel that's a very robust solution for this middle ground, where you want some of the isolation of the virtual machine, but you also want some of the performance and some of the management of the container. Lo and behold, you have Kata Containers as the interface. It's going to help us a lot on build strategies. Anytime you were faced with using a privileged container, or had this need to put root into a container, before, we would mitigate that in deployment practices, making sure those only landed on certain boxes. Now with Kata Containers, you have more of a distributed nature. You can really fulfill those use cases cheaper, if you will. We're super excited. It finally went GA. It's been tech preview, I think, for two releases now. So, stable and ready to take your production workloads. Yeah, it's always interesting. 
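From the workload's point of view, opting into the sandboxed runtime is one field. A minimal sketch, assuming the `kata` handler name that OpenShift sandboxed containers installs (treat the handler name and image as assumptions):

```python
# Sketch: selecting a sandboxed (Kata) runtime for a pod via RuntimeClass.
# runtimeClassName is a standard Kubernetes pod-spec field; "kata" is the
# handler name assumed here, and the image is a placeholder.

def sandboxed_pod(name, image):
    """Build a pod manifest that runs inside a lightweight VM boundary."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "runtimeClassName": "kata",  # route this pod to the Kata runtime
            "containers": [{"name": name, "image": image}],
        },
    }

# The build-strategy case Mike mentions: isolate a build that wants root.
pod = sandboxed_pod("untrusted-build", "registry.example.com/builder:latest")
print(pod["spec"]["runtimeClassName"])
```

Everything else about the pod stays ordinary, which is the point: same management, extra isolation.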
Mike, you and I both get to talk to a number of customers, and we've reached a maturity level. In the early days of Kubernetes, it was all about building new applications and modernizing everything: I want to use all these wonderful new workloads. We made announcements at NVIDIA's conference to highlight some of the AI and ML workloads we're doing. But there are some workloads where it's going to take me longer to modernize them. So, we need to give those pathways: can I manage them in a new cloud-native way? And that's where things like OpenShift Virtualization, built off of KubeVirt, can help. And, of course, security is a huge concern. Everybody wants to make sure it's there. I remember when containers first came out, it was like, oh my God, is there any security here? Let's just line it up: every container tied to a virtual machine. Well, it's matured a lot. There are so many pieces throughout the entire stack that we do. So, can you speak a little bit to customers, some of the use cases they might have, and that maturity of the entire space? Yeah, I think you hit it perfectly. With OpenShift Virtualization, which is the KubeVirt open source project, we have a lot of people who are lifting and shifting virtual machines. These are people who have virtual machines that they hold in production, and they want a new way to launch them. They want a new way to consolidate them back to that API we were talking about before, where I have a consistent way to ask for CPU, memory, disk, and I/O, and routes and IP addresses. That was the point. When we designed OpenShift Virtualization, we could have totally taken the path where we take OpenStack and RHV, put them side by side next to each other, and put a user experience on top. And one of our competitors went down that path with their virtualization technology. We felt that we needed to do some heavy lifting, right? 
That if you truly believe that you're into containers, that you're going to have more containers than virtual machines over the next eight years, then you probably want to bring the virtual machines into the same dynamic, the same distributed system, right? It should be able to scale up and scale down in the same way, with the same YAML template. It should be able to ask for CPUs and memory in the same way. It should be able to get an IP address, a route, a URL. These all need to be the same in the system for you to be successful mixing these old virtual machines in with these new containerized workloads. And that's what the design premise was around OpenShift Virtualization. In this release, 4.10, it actually picks up support for service mesh, which is mind-blowing, right? A layered product is now sitting on top and bringing these products together. So yeah, it's been very successful for us, and we're happy we made the investment. It is a very successful upstream project as well. With upstream projects, you look for multi-vendor contribution, and KubeVirt is one that does have a lot of multi-vendor participation. Yeah, absolutely. It's something that I remember first hearing about, Mike. Architecturally, you ask: what is my center of gravity? Do I want to move things forward, be more modern, with containers and Kubernetes? Is that the experience I want to build around? Or do I want to try to backport things and be, as you said, more of a sidecar with older environments? So it's a little bit of work for customers to move to a new learning environment, but overall that's going to be helpful for the business. Everybody needs to be able to move faster. That kind of leads us to our next topic of conversation. We mentioned the AI/ML workloads. That's a big focus we've had for a number of years. 
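Mike's point that a VM should "ask for CPUs and memory in the same way" shows up directly in the KubeVirt API: a `VirtualMachine` object carries resource requests in the same declarative template shape a Deployment does. A rough sketch of that shape (field values are illustrative, not a tested manifest):

```python
# Sketch: the rough shape of a KubeVirt VirtualMachine object. The
# kubevirt.io/v1 API group is KubeVirt's; the field values here are
# illustrative stand-ins, trimmed down from a real manifest.

def virtual_machine(name, cores, memory):
    """Build a minimal VirtualMachine manifest with container-style requests."""
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "running": True,
            "template": {  # same template pattern as a Deployment
                "spec": {
                    "domain": {
                        "cpu": {"cores": cores},
                        "resources": {"requests": {"memory": memory}},
                    }
                }
            },
        },
    }

# The lift-and-shift case: a legacy VM asking for resources like a pod does.
vm = virtual_machine("legacy-app", cores=2, memory="4Gi")
print(vm["kind"], vm["spec"]["template"]["spec"]["domain"]["cpu"])
```

The nested `template` is the design choice being described: the VM rides on the same distributed-system machinery as containers rather than a side-by-side stack.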
The automation that's really needed for the scale and the constant change of those environments has been a nice match for what we've seen in Kubernetes and OpenShift. So, what does 4.10 enable that I might not have been able to do before when it comes to those sorts of workloads? Yeah. So when you start talking about bare metal, I've got to say, bare metal, although it's not the biggest population when you look at all the possible environments running OpenShift, is the one that's growing the fastest year over year. A lot of that growth has to do with intelligent applications and this need to allow more and more people to have access to GPUs. This era where only data scientists and only the most privileged in the company have access to the best hardware is at the tipping point of going away. So now we need a way to have pipelines for the use of these GPUs, with PyTorch, with CUDA containers, with a lot of the NVIDIA stack. And NVIDIA, at their conference, released this wonderful suite of tools where, if you just add the platform, OpenShift, and you load these tools on top of it, you now have an out-of-the-box experience that allows you to really add security, add what an enterprise would need in terms of compliance, and add pipelines to bring people in and start mixing those coding patterns with the results of a lot of the intelligent applications that only data scientists used to have access to before. We have a lot of customers building that themselves, and that's a successful project as well. We helped there with Open Data Hub, which is a reference architecture that allows you to plug in solutions from a lot of other people: other vendors, our ISV partners. But this NVIDIA solution that just launched this week is incredible, and we're proud to have them qualify it on OpenShift and operate it on OpenShift. 
If you want to see it, they also have something called LaunchPad, where you can go to the NVIDIA site and, for free, launch it on OpenShift and get that experience with the toolset. Along with that comes support for another one of their cards, GPU sharing, and GPU utilization is now shown in our user interface, where we used to just show the cores. There's actually GPU support on ARM in 4.10, so you can mix the two new features together. Yeah, a lot of incredible partnerships. And I've got to say, it's been great working with NVIDIA over the years. We first created Node Feature Discovery and special resource operators and device managers and a lot of automation in those areas. NVIDIA was able to bring those technologies together in the NVIDIA operator. They support it on everybody's Kubernetes, but we had a really great partnership bringing that into the market. Yeah, so, Mike, we got an interesting question, one that tees up that customer journey. When I think back to the early days of PaaS, before containers and Kubernetes were a thing, it was really kind of narrow as to what applications we could put on it. It was: okay, I want microservice, 12-factor applications, and that's it. And I'm like, well, I'm going to have some of those. But the question says many customers aren't even ready to do lift and shift. They want to move slowly. To paraphrase the questioner: how do we meet customers where they are and move at the pace they are ready for? So how does OpenShift help there? Yeah, I would say OpenShift, going as far back as even pre-Kubernetes and then coming into Kubernetes, was one of the only platforms whose vendor felt you as a user had the right to say, I want to go at my own pace, and that we weren't necessarily ever going to force you into a full 12-factor sort of position. 
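For the GPU-access pipelines discussed above, the user-facing contract is a Kubernetes extended resource: NVIDIA's device plugin advertises `nvidia.com/gpu` on nodes, and a pod claims one in its resource limits. A minimal sketch (the image name is a placeholder):

```python
# Sketch: requesting a GPU through Kubernetes extended resources. The
# nvidia.com/gpu resource name is what NVIDIA's device plugin advertises;
# the image is a placeholder for, say, a PyTorch training container.

def gpu_pod(name, image, gpus=1):
    """Build a pod manifest that claims GPUs via an extended-resource limit."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }],
        },
    }

pod = gpu_pod("pytorch-train", "registry.example.com/pytorch:latest")
print(pod["spec"]["containers"][0]["resources"]["limits"])
```

The scheduler only places the pod on a node whose device plugin has advertised enough `nvidia.com/gpu` capacity, which is how "more people get access to GPUs" without anyone hand-picking boxes.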
And what that means is we were very concerned about persistent storage. We were very concerned about how you get your IP addresses. We were very concerned with placement and with scaling practices, making sure that horizontal scaling wasn't necessarily the only scaling possible. I want to say it was education or awareness that drove us there, but we are an operating system company at our heart, right? We are trying to bring workloads into a platform. And so we're going to go out of our way to make sure that you can run complex applications inside our ecosystem. We were one of the first to really invest aggressively in StatefulSets in Kubernetes. We were one of the founding leaders in custom resource definitions. We were able to control a lot of things that the clouds couldn't care less about, but that more legacy applications will break without. Those are a lot of the projects we were investing in within Kubernetes to really allow customers to go at their own pace. So it's something we're definitely proud of. Awesome. Yeah, Mike, that's great. And something I think people maybe aren't as aware of; as we said, when we learn a technology at first, that's kind of how we think of it. So, Mike, I know in the last year and a half that I've been here, one of the toughest conversations is with customers that did a lot of work on OpenShift 3. OpenShift 4 is almost a completely different product. There are so many things that matured in the Kubernetes space, and your team learned a lot and, as you said, made some important decisions about how to build things. This space moves so fast, as we know; we've now gone from four releases to only three releases a year, but there are so many pieces in it. So we got one question, and I want to open it up to you as to some of the other cool things you're excited about in this release. 
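The StatefulSet investment Mike cites is exactly the "go at your own pace" primitive: stable per-replica identity plus a persistent volume claim stamped out for each replica. A trimmed-down sketch of that shape (names and sizes are illustrative):

```python
# Sketch: a minimal StatefulSet with volumeClaimTemplates, the Kubernetes
# construct that gives legacy-style applications stable identity and
# per-replica persistent storage. Values here are illustrative.

def stateful_set(name, image, replicas, storage):
    """Build a StatefulSet manifest whose replicas each get their own PVC."""
    return {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": name},
        "spec": {
            "serviceName": name,          # stable network identity per replica
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
            "volumeClaimTemplates": [{   # one PVC stamped out per replica
                "metadata": {"name": "data"},
                "spec": {
                    "accessModes": ["ReadWriteOnce"],
                    "resources": {"requests": {"storage": storage}},
                },
            }],
        },
    }

db = stateful_set("legacy-db", "registry.example.com/db:latest",
                  replicas=3, storage="10Gi")
print(db["spec"]["volumeClaimTemplates"][0]["spec"]["resources"])
```

Replica `legacy-db-0` keeps its own `data-legacy-db-0` claim across restarts, which is what databases and other stateful legacy workloads need before they can live on the platform.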
The audience question was serverless, which is an area I've spent a lot of time watching, and I was really excited when the eventing capability landed in Knative, which is the container-based serverless, and that's there. So maybe talk a little bit about serverless and then some of the other pieces you're excited about in the new release. Yeah, well, serverless. And this is going to go down at KubeCon as well, in EMEA, in Spain, I believe, right? Next month there's going to be... Valencia, in May, yeah. Yeah, they're really going to celebrate the fact that Knative goes to 1.0, right? And so in this release, in 4.10 (it's a different release number for serverless), we graduate it and move it to 1.0. We also have a nice round-out of runtimes we're supporting, from Rust to Go to Quarkus to Spring, so a lot of content is now available to you there. There are some UX changes in our user interface that really help the developer experience around serverless. For somebody interested, an API-driven topology is important and definitely there. But yeah, a lot of stuff in 4.10 that we're proud of. 4.10: ten releases of 4.x. There's that movie Grosse Pointe Blank where it goes, "Ten years. Ten years, man." And it's hard to believe that we did ten releases in about four years. I think this will be year number four since we released 4. Early on in any product, you're going to be building windows and doors and floors and foundations, big items. And then as you graduate into the higher-numbered releases, you get to go back and put in the trimmings and make things extremely nice. And that's where we are, right? We were able to close 45 RFEs in this particular release, requests for enhancement coming from actual end customers, which is a lot to get into a minor release. So I'm definitely proud of the team. There's a lot of... 
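The Knative primitive behind all of this is the Knative Service: one object that gives you a deployment, a route, and scale-to-zero revisions. A minimal sketch of its shape, using Knative's GA `serving.knative.dev/v1` API (the image is a placeholder):

```python
# Sketch: the shape of a Knative Service, the scale-to-zero serverless
# primitive under discussion. serving.knative.dev/v1 is Knative's GA API;
# the image is a placeholder.

def knative_service(name, image):
    """Build a minimal Knative Service manifest."""
    return {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "template": {  # each change to this template becomes a revision
                "spec": {
                    "containers": [{"name": "user-container", "image": image}]
                }
            }
        },
    }

svc = knative_service("hello", "registry.example.com/hello:latest")
print(svc["apiVersion"], svc["metadata"]["name"])
```

Applying this one object is enough to get a URL and request-driven autoscaling, which is the developer-experience simplification the UX work builds on.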
we talked about the bare metal, but there's a lot of stuff happening in infrastructure support, right? And don't get me wrong: OpenShift is Kubernetes. You can run Kubernetes anywhere. The difference is what automations you want with the installation you're experiencing. When we looked at what we wanted to do with 4.x, we wanted to recognize what infrastructure you were installing into and add automations around that infrastructure. And so it's taken us some time to go through every infrastructure on planet Earth and put a lot of those automations in, on by default, right? In this release, we do it with Alibaba. And we do it with IBM public cloud, and we do it with Azure Stack Hub on-premises. Azure Stack Hub kind of exploded in requests around the time of the big government bid. I think Microsoft got a lot of the attention there because they had an on-premises solution, and a lot of the government agencies wanted support for Azure Stack Hub, so we added that in. So when you take a step back, we now have just about every major cloud and every major infrastructure provider on-premises with automations out of the box that help you get the most out of those infrastructures. So that's it. Mike, I'm glad you highlighted, first of all, Alibaba. That's a great proof point globally of where we're pushing. My understanding, Mike, is that the Alibaba piece is tech preview, right? And similarly for the IBM platform. Yeah. But the other one, Azure Stack Hub. So, we've had OpenShift on Azure for a long time. And the way Microsoft positions Azure Stack Hub is: we're extending the public cloud to your data center. I'm curious what the nuance is there. Okay, we could support OpenShift in Azure; well, this is just a rack of certified hardware running the same software. So it's great that we can extend and cover the hybrid solution for Microsoft. 
The question always is, the devil's in the details: how hard was that to do? Does it look like for like, or were there some things where, oh, it's a little bit different, and therefore that's why it took us a little longer? It's definitely different. And, no slight on Microsoft, but when you look at all the infrastructure providers, there are literally differences between regions even. The APIs change on you, and they're not necessarily documented. And when you're doing the sort of low-level automations that we're trying to perform for the customers, those things are big hurdles to get over when they are different. And so the APIs we were looking to call are definitely different in Azure Stack Hub. We even had to go into RHCOS, our operating system, and make some changes to how we inject username and password and networks at time of boot. So that needed some tuning: how we do networks, how we do DNS, how we do load balancers. A lot of changes in those areas. Well, I mean, Mike, that really proves one of the key value propositions of OpenShift: if I'm a developer, I don't want to have to think about whether I'm deploying on Azure, on Azure Stack, or on VMware. Oh, great, OpenShift lives in all those places, and I'm not going to have to think about that; from a development and from an operations standpoint, we take care of that for them. And that proof point is good. I know we work with a lot of the cloud providers, as you said, and as they go more hybrid, we should be able to live in those environments too. Yeah, that's our problem to solve for you. Yeah, so no shortage of things for your team to work on going forward, though, I'm sure. Yeah, definitely not. And there's going to be another infrastructure that pops up that people want that's completely different, and we'll have to go work on it. 
You know, another area of the release that we're really proud of is the combination of our Advanced Cluster Management and our OpenShift Data Foundation projects coming together. What could they do together to solve a big customer problem? And it's around disaster recovery. So when you look at disaster recovery and you look at these projects we were invested in, we found a pattern and a picture that we could bring together. When you look at the projects we were investing in and moving forward last year and the year before, we have something called OADP, the OpenShift API for Data Protection. This is being used as an API on OpenShift by a lot of backup vendors in our industry. So third parties outside of Red Hat are calling this API to help them discover all the application components. Say you want to back up an application. Kubernetes doesn't really have a good definition of an application; it's got all these bits and pieces and labels, different secrets and routes and all these other components. There's a project called Velero that helps you find those pieces and scoop them up, but you also want to be able to call a snapshot interface through the CSI plug-in in Kubernetes at the same time. You want to get the persistence and all the guts of the application. And through this single API call, you can snapshot it, you can back it up, you can restore it, you can schedule it. And so we have all these people entering the OpenShift experience through a third-party backup vendor calling that API. We also have a project called Ramen, of all things, the noodle dish. This allows you to sit outside of a cluster that you care about, up at the ACM level, and have operators that help maintain the deployment of an application in more than one location, backing up and restoring. 
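The "single API call" in the backup flow above is a Velero-style `Backup` custom resource: scope it to the application's namespace and ask for volume snapshots along with the Kubernetes objects. A rough sketch of that shape (field values are illustrative; check the OADP documentation for the full supported spec):

```python
# Sketch: a Velero-style Backup custom resource, the kind of object the
# OADP flow described above revolves around. velero.io/v1 is Velero's API
# group; the name and namespace here are made up.

def backup(name, namespaces, snapshot_volumes=True):
    """Build a Backup manifest scoped to the given namespaces."""
    return {
        "apiVersion": "velero.io/v1",
        "kind": "Backup",
        "metadata": {"name": name},
        "spec": {
            "includedNamespaces": namespaces,  # gather the app's bits and pieces
            "snapshotVolumes": snapshot_volumes,  # also capture the persistence
        },
    }

b = backup("nightly-app-backup", ["my-app"])
print(b["spec"]["includedNamespaces"])
```

One object captures both halves Mike separates: the scattered Kubernetes components (secrets, routes, labels) and the CSI volume snapshots underneath them.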
So now I can deploy an active-active application, and I have something intelligent enough to look not only at what we just talked about, all the application bits and pieces, but also at the underlying storage layer, using something like Ceph geo-replication. The two clusters use that underlying Ceph geo-replication to replicate the data across the two locations, with a consistent way to migrate automatically, whether for maintenance or because of an outage. All of that comes together in OpenShift 4.10, ACM 2.5, and ODF 4.10, and it ships next month when those layered products release. It's something we're really looking forward to getting into customers' hands. We have always been asked for disaster recovery for OpenShift, and we're finally delivering it.

Yeah, on a sort of related topic, Mike: we talked about hybrid and edge, but one of the things we have a lot of experience with is that, for many reasons, customers often need to run in a disconnected mode. That's very different from the cloud providers, where I've always got that heartbeat back. So what's the latest when it comes to disconnected operation, or any pauses in connectivity?

Yeah, disconnected has been a hard egg to crack. The container technologies all have a prerequisite that you have a container registry: if you're going to have a container, you need a registry to stick it in. But if I'm the one giving you the registry, what do you do? Are you standing up your own on the side, pulling content down into something you're maintaining, and then deploying the real registry that you wanted? There was a dynamic we needed to help customers work through, and we decided to take our Quay investment and create an all-in-one, slimmed-down Quay.
Internally, it was project "baby Quay," and it gives you not only a place to pull things down into, but also a command to help you pull those things down. When I say pull down, it sounds like an easy task, but if you just blindly pull down, you're pulling down something like 400 gigs of software across the OpenShift ecosystem, all those CNCF projects we're invested in. So we needed to give you a filtering command that says, hey, I only want the latest of those operators, or I only want this channel of OpenShift, so you can really filter in on exactly what you want to pull down.

And then, how do I stay up to date? The biggest problem for a disconnected customer isn't necessarily the first install; it's how do I maintain that investment I just installed, disconnected? Will something in the product tell me that I'm out of date, that a new version is available? Through our CoreOS acquisition, we had for years been providing an update service for connected customers: it lets you connect up to Red Hat, and in your console we show you that there's a new version available and that you should upgrade, and then you click an upgrade button. We were not providing that in a very clear, consistent way for our disconnected customers. So now we have a way for you to put that metadata about new releases into that baby Quay, let your clusters connect to it, and get that same in-console experience that says, hey, I'm out of date, there's a new version.

So a lot of niceties went into it, and we were lucky enough to have a very strong public sector presence among our employees, which really helped us, even down to some coding practices, get what their customers wanted from a disconnected experience showing up in the product itself. It was definitely a great project to be a part of.

Yeah, awesome.
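The filtering Mike describes is expressed through the oc-mirror plugin's `ImageSetConfiguration`. Here is a hedged sketch of one that narrows the pull-down to a single OpenShift channel and a single operator package; the registry hostname and operator package below are hypothetical placeholders.

```shell
# Sketch of an oc-mirror ImageSetConfiguration for a disconnected mirror:
# one OpenShift channel and one operator package instead of everything.
# The registry host and package name are hypothetical placeholders.
cat > imageset-config.yaml <<'EOF'
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  local:
    path: ./mirror-metadata      # oc-mirror tracks what you already pulled here
mirror:
  platform:
    channels:
      - name: stable-4.10        # only this OpenShift channel
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
      packages:
        - name: advanced-cluster-management   # only the operators you ask for
EOF

# With connectivity, you would then mirror into your internal registry:
#   oc mirror --config=imageset-config.yaml docker://registry.example.com:5000
echo "wrote imageset-config.yaml"
```

Because the metadata path remembers what has already been mirrored, re-running the command pulls only what's new, which is how a disconnected site keeps that investment current between updates.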
Well, Mike, thank you for helping clarify at least your pronunciation of Q-U-A-Y; I've heard it pronounced "key," I've heard it pronounced "quay," so there are always those debates. We've been putting links in the chat and the comments where you can hear more, and apologies if we didn't get to some of the questions you have. But Mike, I've got one last question for you, looking at the last two years. You've got a large organization. The good news is that Red Hat, pre-pandemic, already had a global team used to doing things asynchronously between product managers and engineers, and I'm sure you've got way more Slack channels and workspaces than I would want to know about. But there have been changes; you've had people that have never met any of their peers. How do you, as a team, celebrate the successes, like 4.10, and build that camaraderie? Share a little bit of your leadership on how you're helping your team through the current times.

Yeah, so T-shirts are important in this industry, Stu. When you celebrate in the IT industry, you have to get a T-shirt, so we do a lot of swag in those areas. And you have to use the tools you're given: if you're in Google Meet as a team, if you're in Slack as a team, you need to make time for happy hour over Google Meet, virtual happy hours. We share those with our engineering teams in different locations, and that's a great time to unwind and have the conversation you would have called a water-cooler conversation. We've tried to do those things over the pandemic, and it's definitely helped us, I believe. But we're also very excited that things seem to be returning, that travel to shows and whatnot is starting to turn back on.
And I think we're willing to hit the road again. What we need to be conscious of, and probably a lot of people are going through this, is making sure that we're visiting customers before we're visiting ourselves. It's been a while since we've seen some of our best and most friendly customers, so we're definitely looking forward to getting out there and shaking hands again.

Yeah, absolutely, Mike. I mentioned at the top of the program that I got to go to the Boston office, our global briefing center, and I'm excited; I've got some upcoming meetings with customers. And to your point, working in tech, it's not just a job, it's also a wardrobe. I'm sporting the OpenShift Commons North Face from some of the Commons events. Quick shout-out to those attending KubeCon EU, as Mike mentioned: in May there will be a hybrid OpenShift Commons event, and you can look online for how to join that.

So, Mike, 4.10, so many things in there. Super exciting. Again, in the comments we posted links if people want to dive in deep: the blog, the press release, the 90-minute session that your team did. It is great. Thank you for joining us, sharing with the audience, and I look forward to talking to you more in the future.

Yeah, thanks for having me, Stu. It was a great time.

All right. I really appreciate this topic; there's lots of interest from the community, so do reach out. Mike and I are really easy to get in touch with online, and we're happy to answer your questions. Hit us up through the social channels and through the websites. We've got lots of great programming coming up; every two weeks we try to bring you this program. Thank you so much for joining us on your journey in the clouds.