Good. Hey, we're on. Test, test. How are you today? Good. We're keeping you from the reception, so hopefully this will be the best part of the day, and then we can all go socialize. My name is Jeff Brent. I'm a director of product management at Red Hat. I focus on Red Hat Advanced Cluster Management for Kubernetes, which is a mouthful; you can call it RHACM or ACM, and you'll hear me use those interchangeably through this presentation. I'm Kirsten Newcomer, director of product management with the OpenShift team, responsible for OpenShift security and Red Hat Advanced Cluster Security, also known as ACS. So we're all here because of Kubernetes, for the most part, and RHEL and Ansible, and it's a great summit for us. What we're seeing is that Kubernetes is adopted across the enterprise. It's ubiquitous. It's the new compute, network, and storage platform, and no one really has just one cluster. You very quickly get into a situation where you have a cluster for production and a cluster for test at a bare minimum. But then you might have multiple lines of business, and three lines of business means six clusters at a bare minimum. So we see large numbers of clusters, and traditionally we've treated these clusters as pets, not cattle. They've been very hard to stand up, configure, and secure, which is exactly what ACM and ACS are intended to address from an ease-of-use perspective. And there are all kinds of reasons for creating new clusters: application availability and business continuity, disaster recovery, clusters in different regions, sometimes for regulatory reasons like geopolitical data-residency guidelines, where I need in-country clusters storing information for citizens within that region. And also edge deployments, reducing latency and moving workloads closer to where the data is, closer to the customers or users interacting with the platform.
Multiple analysts tell us all the time that it's not just a single cloud provider, either. So we see a mix of infrastructure: vSphere, IBM Power, or IBM Z for your on-premises clusters, and bare metal is also an option on-premises. But then you also have clusters out in your hyperscaler cloud environments, some on AWS, some on Azure. And many of our customers have an internal mandate to use multiple clouds to avoid vendor lock-in. So you not only have clusters running on different platforms; many of our enterprise customers, often through mergers and acquisitions, are also using different types of clusters. Maybe they're using AKS or EKS or GKE or IKS, or managed Red Hat OpenShift: ARO, ROSA, or ROKS on IBM Cloud. So there's a multi-cluster problem, a multi-infrastructure problem, and a multi-vendor problem that many of us are dealing with on a regular basis. When it comes to security, one of the things we've been doing since we acquired StackRox a couple of years ago is run a regular State of Kubernetes Security report and survey. We had more than 300 respondents across a wide range of roles, and we see that even today, after Kubernetes has been out here for, what, ten years now, and in enterprise production for a long time, security concerns are still delaying deployments; 67% of our respondents say that. We also see that some organizations have had revenue loss due to those delays or due to a security incident. On the more positive side, DevSecOps is becoming more widely adopted. However, we still have 17% of respondents saying they have no DevSecOps initiatives, and 30% have concerns about vulnerabilities. So why DevSecOps? Because Kubernetes is a declarative, automated environment.
If you need to fix something that happens at runtime, you need to rebuild and redeploy. You aren't going to step in and fix a running container and have that be an effective solution. So DevSecOps is key, but it's still a challenge for customers. Thank you, Kirsten. So how do we help, from a Red Hat perspective? Within the OpenShift portfolio, we have Red Hat Advanced Cluster Management for Kubernetes, as I mentioned before. Who in the audience is familiar with ACM, RHACM, has used it, downloaded it, played with it? Great, a number of you. So sit back and relax. For those who haven't been introduced to RHACM yet, it runs as a hub-and-spoke architecture. You have your hub cluster, which runs on top of OpenShift, and then a number of spoke clusters that make up the managed fleet. When you install ACM on the OpenShift cluster, it comes with an empty management domain; it takes about five minutes to install ACM, and then you have a domain to populate. It works in both greenfield and brownfield scenarios. You can immediately start creating clusters from this experience, or if you already have existing clusters, or infrastructure as code creating clusters, we're not asking you to change that at all. You can import those into ACM for management and beyond. Architecturally, when you import a cluster into your management domain, we install what's called a klusterlet onto that managed cluster. The klusterlet provides a consistent means of certificate-based communication back to the hub, and within the klusterlet there are a number of add-ons for all the things I'll talk about in just a moment. We think of ACM as a collection of open source projects, much like OpenShift isn't just Kubernetes; it's a collection of open source projects bundled together into a package.
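To make the import flow concrete, here is a minimal sketch of the kind of manifest that registers an existing cluster with the hub in the Open Cluster Management model ACM is built on. The cluster name and labels are invented for illustration; the `hubAcceptsClient` field is what tells the hub to accept the klusterlet's join request.

```python
# Hedged sketch: a ManagedCluster manifest (as a Python dict) for importing
# an existing cluster into the hub's management domain. Name and labels
# ("prod-east-1", "environment", "region") are illustrative assumptions.

managed_cluster = {
    "apiVersion": "cluster.open-cluster-management.io/v1",
    "kind": "ManagedCluster",
    "metadata": {
        "name": "prod-east-1",  # hypothetical cluster name
        "labels": {"environment": "prod", "region": "us-east"},
    },
    "spec": {
        # Tells the hub to accept the klusterlet's registration request.
        "hubAcceptsClient": True,
    },
}

def is_accepted(mc: dict) -> bool:
    """Return True if the hub is configured to accept this cluster."""
    return bool(mc.get("spec", {}).get("hubAcceptsClient"))

print(is_accepted(managed_cluster))  # True
```

In practice you would apply YAML like this to the hub (and run the klusterlet import manifest on the managed cluster); the dict form here just makes the shape of the resource explicit.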
We do the CVEs. We do the maintenance. We do the testing. We make it enterprise-ready through our security pipelines. Within ACM, you'll find open source communities such as open-cluster-management.io, which is the basis of ACM and provides the klusterlet framework, the add-on framework, as well as the cluster registry. In addition to that, we have Submariner in the box. Submariner gives you Layer 3 IPsec communication across clusters, so you can have pod-to-pod communication at Layer 3. We integrate very well with Argo, and there are a number of use cases. We think of it as pancakes and syrup, peanut butter and jelly: we want to make sure you're using GitOps with Open Cluster Management for many reasons, primarily business continuity, consistency, and automation. We've been contributing a lot to the Argo community since the announcement of the partnership with Intuit brought Argo into OpenShift in the box. Our ACM team has been busy delivering ApplicationSets, a form of our placement that does one-to-many placement across clusters through Argo, and we've also recently started contributing the pull-model concepts into Argo. So we're integrating what we have within ACM into the Argo community. Prometheus provides the foundation on the managed clusters, sending information and metrics back to ACM as a hub, as a centralized resource, and we use Thanos to aggregate those and show the metrics in the out-of-the-box dashboards. Open Policy Agent Gatekeeper is also included in the packaging with ACM and covered by the ACM entitlement, and that gives you a really strong one-two punch. Using ACM, you configure your clusters in a consistent way, storing those policies in Git and delivering them to the hub, and then OPA Gatekeeper's admission controllers let you lock down and prevent configuration drift. So it's a really good combination.
As I mentioned before, it's not just OCP on bare metal or VMware or any of the platforms that OCP supports; we also support AKS, GKE, IKS, what we call the star-KSes. When we think about ACM, we think in terms of pillars, and these pillars haven't changed in quite some time. For our infrastructure and administrative personas, we've been doing the same things for decades, even as the technology changed underneath us: we provision things, we upgrade things, we take those provisioned things out of circulation. That's the cluster lifecycle pillar, and we try to keep cluster lifecycle to just that: I've created things, I'm going to upgrade them, I'm going to deprovision them and reclaim those resources. Configuration we drive through our second pillar, policy-driven governance. If there's a Kubernetes API for a given Kubernetes resource, or you want to deploy operators, you can pull together a policy, write some YAML, and ACM will adhere to that configuration using placement and labels. Application lifecycle is in the box. As I mentioned, we're going further and further in our integration with Argo, and eventually, since this is a roadmap presentation, as the pull model and ApplicationSets gain traction in the Argo community, you'll find Argo running underneath ACM. But you'll still enjoy the application topology views and the like, enabling your SRE teams to search for Kubernetes objects, interact with those objects, and edit YAML if you need to. Our fourth pillar is observability. You have observability in policies: policies are either compliant or in violation. You have application availability through the application lifecycle views: you can see where Kubernetes resources are deployed to the cluster correctly or not.
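The "placement and labels" idea in the governance pillar can be sketched in a few lines: a policy declares a label selector, and every cluster whose labels match gets the policy. The cluster names, labels, and selector below are made up for illustration; the real mechanism is the Placement/PlacementRule API, not this Python.

```python
# Hedged sketch of label-based placement: a policy is bound to clusters
# by label selection. All names and labels here are invented.

placement_selector = {"environment": "prod"}   # clusters the policy targets

clusters = [
    {"name": "prod-east-1", "labels": {"environment": "prod", "region": "us-east"}},
    {"name": "dev-west-1",  "labels": {"environment": "dev",  "region": "us-west"}},
]

def select_clusters(clusters, selector):
    """Mimic placement: keep clusters whose labels match every selector key."""
    return [
        c["name"]
        for c in clusters
        if all(c["labels"].get(k) == v for k, v in selector.items())
    ]

print(select_clusters(clusters, placement_selector))  # ['prod-east-1']
```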
But most of the time, when people think of observability, they think of metrics, and with Prometheus and Thanos as the open source projects, we bring those metrics back into out-of-the-box dashboards for you. And again, multi-cluster networking is provided through Submariner. So let's talk about the roadmap. From a fleet observability perspective, our multi-cluster dashboards have been very cluster-based. What we're providing, as Tech Preview in 2.8 and eventually GA (2.8 is coming out around June 12th, which is our current date), is fine-grained access for metrics within our dashboards. Right now, you can see all the cluster metrics, and we've had requests from our community saying, I only want certain people to see certain namespaces in those dashboards. So fine-grained access will be part of that. Down the road, we want to take steps toward helping with capacity management. If you go into the ACM dashboards today, you can find the clusters that have been over-provisioned, that aren't being used well enough. You can drill down to where namespaces have been over-provisioned, drill down further, and see pod limits set well above what they need to be. So we want to provide capacity management that doesn't make you hunt and peck around the dashboard, but gives you recommendations: these are the things you should look at on a regular basis to see whether you can reduce and reclaim resources in the enterprise. And another: we aggregate alerts in the hub today, but we want to make managing alerts from the hub experience much richer, centralizing alerts in the ACM hub. Excuse me. For policy-based governance, we're constantly making improvements to our policy model.
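The over-provisioning detection described above boils down to comparing requested resources against observed usage. A rough sketch, with invented namespaces, numbers, and threshold (the real feature works on Prometheus metrics, not hard-coded rows):

```python
# Hedged sketch of the capacity-management idea: flag namespaces whose
# requested CPU far exceeds what they actually use. Data and the 2x
# threshold are illustrative assumptions.

usage = [  # (namespace, cpu_requested_millicores, cpu_used_millicores)
    ("team-a", 4000, 500),
    ("team-b", 1000, 900),
]

def overprovisioned(rows, ratio=2.0):
    """Namespaces requesting more than `ratio` times their observed usage."""
    return [ns for ns, req, used in rows if used > 0 and req / used > ratio]

print(overprovisioned(usage))  # ['team-a']
```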
And in this coming release, we have support for ranges, so you can iterate over a Kubernetes object and we will create n different objects; you only have to declare one object template, and we'll iterate over it. We're also adding conditionals to policies, so you can have if/else-type statements. We've seen an explosion of policies within our customer base, which is great, but a large number of policies isn't always great from a maintenance perspective, so we're doing what we can so you can write a policy once and reuse it many times over through ACM. Also coming in the next couple of releases: how do we make the use of Gatekeeper more seamless within ACM? We deploy Gatekeeper through our config policy service today, but how do we get more native configuration and deployment of the Gatekeeper runtime, as well as the Rego that makes that runtime work? And also in policy-based governance, we have a policy set coming out for those who are OpenShift Platform Plus entitled: how do I install the OpenShift Platform Plus portfolio using ACM, from a policy perspective? How am I doing on time? Am I leaving you enough time? Yeah. OK, good. Just yank on the cord if I run over. No, we're good. So that's some of what we're doing in policy. What we're seeing in our customers, for different reasons, is more and more hubs. I think a majority of our customers now run more than one ACM hub, and we see that for a couple of reasons. One: ACM is managing a fleet of clusters, and there's a certain limit with etcd as the data store for ACM. That's great in that you didn't have to install a database to get ACM up and running; it only took five minutes to install. But it comes with etcd's limitations.
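The range-and-conditional feature means one declared template can fan out into n concrete objects. A rough Python analogy of that fan-out (the real feature uses Go-style templating inside the policy YAML; the namespaces, quota, and the `team-` condition are invented):

```python
# Hedged sketch: one object template plus a range and a conditional
# produces n concrete objects, here one ResourceQuota per team namespace.

namespaces = ["team-a", "team-b", "team-c"]

def expand_template(namespaces, quota_pods=10):
    """Expand a single template into one object per qualifying namespace."""
    objects = []
    for ns in namespaces:                 # the "range" part
        if ns.startswith("team-"):        # the "conditional" part
            objects.append({
                "kind": "ResourceQuota",
                "metadata": {"name": "default-quota", "namespace": ns},
                "spec": {"hard": {"pods": str(quota_pods)}},
            })
    return objects

objs = expand_template(namespaces)
print(len(objs))  # 3
```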
As a hub grows in fleet size, as resources are deployed to the fleet, more and more applications, more and more policies, we find that etcd becomes a bottleneck. So sometimes you need multiple ACM hubs to scale to thousands and thousands of managed clusters. Those are typically your edge scenarios, telco 5G RAN-type scenarios, where you have 1,000 clusters in this region, 2,000 clusters in that region. But we also see people creating hubs for other reasons. We have a large customer that runs five hubs with 200 clusters per hub; they've organized their business such that only these operators are allowed to touch these clusters, only those operators are allowed to touch those clusters, and so on. So how do we provide visibility across multiple hubs? That's the global hub concept, and it's coming out in Tech Preview in 2.8. We're really focused on the global hub as an observability point. We press, over and over again, on your configuration for ACM and the configuration of your applications: we're strong proponents of using GitOps. From a GitOps perspective, how do you distribute a policy to multiple hubs? You use GitOps with multiple channels. If you have multiple hubs, you don't have to run between all those hubs and configure them individually; you shouldn't be doing that. You put your YAML into a Git repository and use GitOps to distribute it out to the hubs, so it doesn't matter how many hubs you have. All your hubs get consistent configuration through GitOps. We're using the global hub more as an aggregation point for management across regions. In conversations with customers very recently, and we've been working on this concept for well over a year, they wanted observability at that global hub level.
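The "one Git repo, many hubs" point can be sketched in a few lines: the same policy payload is synced to every hub, so adding a hub never means hand-configuring it. `sync_all_hubs` is a stand-in for whatever GitOps tooling actually does the delivery; hub names and the policy are invented.

```python
# Hedged sketch of GitOps fan-out across hubs: one source of truth,
# identical configuration everywhere. Names are illustrative.

policy_from_git = {"name": "require-etcd-encryption", "severity": "high"}

hubs = {"hub-emea": [], "hub-apac": [], "hub-americas": []}  # hub -> applied policies

def sync_all_hubs(hubs, policy):
    """Deliver an identical copy of the policy to every hub."""
    for applied in hubs.values():
        applied.append(dict(policy))

sync_all_hubs(hubs, policy_from_git)
print(all(a[0]["name"] == "require-etcd-encryption" for a in hubs.values()))  # True
```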
And so one of the things they wanted to observe at the global hub level is: I've got multiple hubs, I've got multiple clusters in each of those hubs; how is my policy compliance doing across all of them? So the first thing we're focused on is an aggregated view of policy compliance, no matter how many hubs you have. The other request is: from this global hub interface, this observability point, I also want to search for Kubernetes resources across those individual hubs. So not just searching one ACM hub, but searching across multiple ACM hubs to find the resources you want. And then there's what we internally call Uber Thanos. Eventually, we want to aggregate the multiple Thanos instances, up to the point where we're not going to be able to store all that data; you're doing a great job of creating a ton of metrics with a ton of clusters and a ton of applications. But what is the experience of centralizing alerts from multiple hubs up into a global hub and then being able to drill down very quickly to your troubleshooting points? I think you've got five. Five minutes is cool. All right. Finally, we've been really focused on scalability for our edge use cases, and to date that's meant the industrial edge and telco 5G RAN-type use cases. Today, you can run 3,500 single-node OpenShift clusters with a DU profile under a single ACM hub. But that's not everyone's use case; not everyone is going to have 3,500 single-node clusters. So we've recently started doing analysis, and it's been very promising, on what a mixed-fleet scenario looks like: what if I had 1,000 single-node OpenShift clusters, 300 or 400 three-node compact clusters, and then some big ones in there?
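The aggregated compliance view described above is, at its core, a roll-up: each hub reports per-cluster compliance, and the global hub sums it. A sketch with invented hub names, counts, and report shape:

```python
# Hedged sketch of global-hub aggregation: roll per-hub policy compliance
# counts into one fleet-wide view. All numbers are illustrative.

hub_reports = {
    "hub-emea":     {"compliant": 180, "noncompliant": 20},
    "hub-apac":     {"compliant": 95,  "noncompliant": 5},
    "hub-americas": {"compliant": 290, "noncompliant": 10},
}

def aggregate_compliance(reports):
    """Sum per-hub counts and compute an overall compliance percentage."""
    total = sum(r["compliant"] + r["noncompliant"] for r in reports.values())
    compliant = sum(r["compliant"] for r in reports.values())
    return {"clusters": total, "compliant_pct": round(100 * compliant / total, 1)}

print(aggregate_compliance(hub_reports))  # {'clusters': 600, 'compliant_pct': 94.2}
```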
So we're mixing up our performance test suites and doing a lot more analysis on what an ACM hub running a mixed fleet of different-sized clusters looks like. One thing we have been able to conclude from this analysis is that hub CPU actually goes down. When you have fewer clusters to manage than, say, 3,500 single-node OpenShift clusters, that's fewer endpoints I need to inspect and interact with; with a mixed fleet, CPU and memory usage of the hub goes down while the node counts underneath remain consistent. So we're seeing really positive results in mixed-fleet testing. We'll hear from Kirsten very soon about the advancements in ACS, but we're often asked... No stealing the thunder. I'm trying not to. I thought about this coming up here; I'm like, man, I'm going to steal the thunder. But: are we going to have ACM as a service? The short answer is no, and the long answer is yes. What does that mean? I'm not talking out of both sides of my mouth. When you look at Red Hat's portfolio, when you go to console.redhat.com, we already have OpenShift Cluster Manager in there, which is sort of like cluster lifecycle. We've heard in the announcements today about secure supply chain; we've heard Chris Wright talk about secure supply chain being able to deploy applications to clusters, so that's kind of like application lifecycle. So where does ACM fit into a managed form factor, which for us today is console.redhat.com? How do we infuse more of ACM's pillars, specifically the other two around policy and observability, into that experience? You're not going to get a catalog experience saying, give me an ACM, and we'll provision you a whole ACM instance. Our goal and strategy for providing ACM-like capability in a managed form factor is integrating it directly into the console.redhat.com experience. Awesome. Thanks, Jeff.
So we'll switch gears to Red Hat Advanced Cluster Security, ACS, very much a companion to ACM. You can use them together; you can use them separately. ACM can be used to deploy ACS, and, also very similar to... other button. Yeah, OK, I had it turned the wrong way. Similar to ACM, it's a hub-and-spoke model. But let me start by talking about why security policy matters and why a solution like ACS is so important for containers and Kubernetes. In a traditional architecture environment, security teams have been pretty siloed. There's security, there's ops, there's the app dev team. Oftentimes the app dev team goes off and does their thing; they may or may not know what security guidance they have to meet until they deploy. When the app dev team doesn't have that kind of information early, it creates all sorts of churn. It delays the business. It delays getting value to your end users. And it's frustrating for the developers: they don't want to release insecure code, they just want help to do it right the first time and not get bad news at deploy time. Also, I talk to lots of different security teams; quite a number are Kube-savvy, but many are not yet Kubernetes-savvy. So one of the key things ACS does is provide out-of-the-box security policies for Kubernetes and containers in language that makes sense to the security team. So how many folks have actually tried ACS? OK, a pretty good set. Happy to take feedback as we go. So really, we focus heavily on enabling a DevSecOps solution. We provide policies that can be applied at build, deploy, and runtime. Even if you've got vulnerability management policies applied at build time, new vulnerabilities show up all the time; a new one might be discovered after that image has been deployed to production. We need a way to feed that information back to the app dev team so that they can do the fix.
And ideally, information gathered during the build or CI/CD process can inform security policies in production, at ops. For example, one of the things we've added recently, and I'm sure I have another slide on it, so I'm talking ahead of my slides, is the ability to generate Kubernetes network policies from static application deployment data and the image, prior to deploying your application to production. We're also providing checks that assess whether Kubernetes network policies exist on all the namespaces in the cluster. Network security is one of the areas where, even if the security team has really gotten comfortable with Kubernetes and containers, Kubernetes networking becomes the next challenge, the next leap. So we're working to make it possible for a developer to auto-generate a Kube network policy, which honestly isn't that easy for them either, compare it to the policies defined for the cluster, and know prior to deployment whether they need to adjust or ask for permissions. So DevSecOps is really key for us, and again, these controls can be applied at build time, deploy time, and runtime. Very much like ACM, it's a hub-and-spoke architecture, though we deploy a few more things than ACM does. In our case, what ACM calls the hub, we call ACS Central. That's deployed onto an OpenShift cluster, and then you connect your secured clusters, the clusters you want ACS to protect, to Central. We also support OpenShift, AKS, EKS, GKE, IKS, ROKS, the alphabet soup, ARO, ROSA, right? But we do deploy a few more components. ACS has its own admission controller, and this is one of the things Jeff and I will be working on together over time: what are the really great use cases for OPA Gatekeeper in ACM, when do you want to use that, versus when do you want to use the ACS admission controller?
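The policy generation just described takes known service-to-service traffic and emits a minimal NetworkPolicy per destination. A hedged sketch of that idea, with invented flows and service names (ACS's actual generator works from your deployment YAML and observed traffic, not hard-coded flows):

```python
# Hedged sketch of network policy generation: from a set of allowed flows,
# build one least-privilege Kubernetes NetworkPolicy per destination.

observed_flows = [
    {"src": "frontend", "dst": "backend", "port": 8080},
    {"src": "backend",  "dst": "db",      "port": 5432},
]

def generate_policies(flows):
    """Group flows by destination and emit an ingress policy for each."""
    policies = {}
    for f in flows:
        pol = policies.setdefault(f["dst"], {
            "kind": "NetworkPolicy",
            "metadata": {"name": f'allow-{f["dst"]}-ingress'},
            "spec": {
                "podSelector": {"matchLabels": {"app": f["dst"]}},
                "ingress": [],
            },
        })
        pol["spec"]["ingress"].append({
            "from": [{"podSelector": {"matchLabels": {"app": f["src"]}}}],
            "ports": [{"port": f["port"], "protocol": "TCP"}],
        })
    return list(policies.values())

pols = generate_policies(observed_flows)
print([p["metadata"]["name"] for p in pols])  # ['allow-backend-ingress', 'allow-db-ingress']
```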
So the ACS admission controller and policies are tightly focused on security concerns, such as: does the pod require privileges that I don't want to allow on my cluster? Are there misconfigurations I don't want to allow? Gatekeeper can be great for resource management configuration, for example. Both are useful. Sensor is the component that communicates between Central and the secured cluster and shares the information. There is, of course, a scanner, for vulnerability analysis but also for application configuration analysis; that's critical, because you don't just want to be looking for vulnerabilities. And finally, on every node of a secured cluster, we deploy an eBPF probe. Historically it had been a kernel module; we're moving away from kernel modules to all eBPF, which simplifies things and is lighter weight. We use that to collect the system processes running in the environment, and one of the cool things about ACS being Kubernetes-native is that we correlate that system process information to the Kubernetes objects in your cluster. Like ACM, ACS has a single dashboard across multiple clusters. This is your landing page: you get a view of policy violations, deployments most at risk, images at risk, vulnerabilities, compliance, a whole range of things. In recent releases we've added a few things, including, with ACS 4.0, the ability to scan RHEL CoreOS nodes. So you can now use the ACS scanner not just for your application images, or the Kubernetes components that are deployed as images, but also for the RHEL CoreOS node itself. And we're working to present the data in those categories, node, platform, and workload, so that you have a cleaner separation. Often the individuals responsible for those different things are not the same, right?
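The privilege check the admission controller makes can be sketched as a simple predicate over the pod spec. The dict below stands in for the real AdmissionReview object, and the rejection message is invented; this is the shape of the check, not ACS's implementation:

```python
# Hedged sketch of a security-focused admission check: reject pods that
# request privileged mode. Pod specs are plain dicts for illustration.

def review_pod(pod: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for a pod spec."""
    for c in pod.get("spec", {}).get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged"):
            return False, f'container {c["name"]} requests privileged mode'
    return True, "ok"

bad_pod = {"spec": {"containers": [
    {"name": "app", "securityContext": {"privileged": True}}]}}
good_pod = {"spec": {"containers": [{"name": "app"}]}}

print(review_pod(bad_pod))   # (False, 'container app requests privileged mode')
print(review_pod(good_pod))  # (True, 'ok')
```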
You potentially need your cluster admin to update the platform if there's a fixable CVE in RHEL CoreOS or in the platform components, whereas your app dev team needs the information about workload vulnerabilities. Also with 4.0, we moved from RocksDB to Postgres as our back-end database, which gives us performance and scalability improvements. Right now, bring-your-own-Postgres is Tech Preview. Upgrading from 3.74 to 4.0 does require a backup prior to upgrade; we provide automated migration, but we really want you to back up your data, and for that reason we disabled automatic upgrades between those two versions. I also already mentioned some of the work we're doing around network security. In addition, we've been improving our network graph, and we've added support for securing OpenShift on IBM Power and IBM Z. Anybody in here using IBM Power or Z? A small number, yeah? All right. Collections is something that also helps with scale. You heard Jeff talk about policy sets, policy collections; we're taking a common approach to making it easier for you to manage policy at scale, including custom policies, and collections is a way to help us do that. And here's a little more on visualizing network security, our network graph 2.0. How many folks here are aware of, or have tried, the OpenShift Network Observability Operator? OK, a handful. This is another place where we're working to provide a common look and feel across network observability solutions. ACS has deep network observability: you can look at what traffic is happening in the cluster, compare it to what the network policies allow, auto-generate network policies based on observed behavior, and simulate what the traffic will look like under that auto-generated Kube network policy. And again, this connects to our broader story about shifting network security left.
And, as Jeff alluded to, we are announcing at Red Hat Summit the Advanced Cluster Security Cloud Service, a managed service from Red Hat. It gives you the ability to use ACS without needing your own instance of Central, without managing your own deployment of Central. You go to console.redhat.com, find the service, click on it, and it launches an instance of Central for you and then provides information about how to connect your secured clusters. It's supported by Red Hat with 24/7 support and consumption-based pricing, and it supports the same platforms that ACS supports today. One other thing worth mentioning, especially if you're going to console.redhat.com: there is a vulnerability dashboard. It's under the Insights moniker, which is a little gray, so it's hard to see. That dashboard provides information about vulnerabilities in platform components, and that's where it stops. We're working to provide an easy connection from the vulnerability dashboard to ACS for a broader view of vulnerability data, including your workload data. So if you just want that quick view of vulnerabilities, the dashboard is there with your subscription; you do have to log on to console.redhat.com, and your clusters have to be connected. ACS itself works with disconnected clusters; you do not have to connect back to Red Hat. Of course, ACS Cloud Service is, by definition, a connected offering. Definitely the wrong button. All right, happens to me all the time. So, a little more on our roadmap. In addition to some of the things I've already mentioned, we're going to add the ability to generate SBOMs, software bills of materials, and also to import a software bill of materials and provide vulnerability data for that SBOM. How many of you have been hearing about SBOMs lately? Fewer than I would have thought.
In the States, providing a software bill of materials for code that is going to be used by the US government is becoming a hard requirement, so there's a lot of conversation about software bills of materials. I was at Black Duck before coming to Red Hat, so I've been working with SBOMs for, I don't know, 15-plus years. I know people think SBOMs are brand new, but they've actually been around for a while; they just have a new prominence and use case. We want to extend our security analysis capabilities from Kube network policies to service mesh policies. Also, one of the cool things is that we're collaborating with the Knative team, the serverless team; they're adding something called Knative Guard that will provide some additional security capabilities for Knative deployments, and they're going to have that available. But if you want the multi-cluster view, if you want policy management of those security capabilities, if you want to be able to apply policies to manage exclusions, that's where ACS comes in. So we go beyond what's available. Historical trending data is going to be key, and also a tighter integration with the OpenShift Compliance Operator. Today, for your OpenShift clusters, you get compliance data either in ACM or in ACS through integration with the OpenShift Compliance Operator. ACS also adds workload compliance and star-KS compliance, like the Kubernetes benchmark. The Docker benchmark is still there; we don't know how much longer it will actually be needed, but it is there. We want this tighter integration so you can manage your Compliance Operator scans from ACS and get trending data. We're also evaluating whether to keep the workload compliance checks and the star-KS checks as ACS standalone items, or migrate some of that code into the Compliance Operator. So various things we're looking at there.
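Circling back to the SBOM import mentioned on the roadmap: at its simplest, it means matching SBOM components against a vulnerability index. A hedged sketch, where the SBOM shape loosely follows a CycloneDX-style components list and the vulnerability identifiers are deliberately fake placeholders, not real CVE data:

```python
# Hedged sketch of "import an SBOM, get vulnerability data": look up each
# (name, version) component in a known-vulnerability index. All data,
# including the EXAMPLE-CVE id, is invented for illustration.

sbom = {"components": [
    {"name": "openssl", "version": "1.1.1t"},
    {"name": "zlib",    "version": "1.2.13"},
]}

vuln_index = {  # (name, version) -> list of vulnerability ids (fake)
    ("openssl", "1.1.1t"): ["EXAMPLE-CVE-0001"],
}

def scan_sbom(sbom, index):
    """Return one finding per vulnerable component in the SBOM."""
    findings = []
    for comp in sbom["components"]:
        key = (comp["name"], comp["version"])
        for vuln in index.get(key, []):
            findings.append({"component": comp["name"], "cve": vuln})
    return findings

print(scan_sbom(sbom, vuln_index))  # [{'component': 'openssl', 'cve': 'EXAMPLE-CVE-0001'}]
```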
And also, for anybody using Quay, sometimes pronounced "key" by my colleagues from the UK: Quay includes the Clair scanner for vulnerability analysis. ACS has its own scanner, which was built from Clair v2; it was forked from Clair v2 back when StackRox was StackRox. The Quay scanner continued to evolve into Clair v4. We're going to bring them together. Each of them has different strengths, and by the end of this year there will be one Clair-based scanner available from Red Hat, so whether you are running the Clair scanner with Quay or the ACS scanner, you will get the same results. Okay. From here, we just wanted to do a quick overview of all the ways OpenShift Platform Plus provides security and defense in depth for you. Automated configuration and operations, not only on the individual cluster with OpenShift operators, but also with ACM for scale. Integrated node management, integrated host operating system vulnerability management, protecting data at rest and in transit; again, you can do that on an individual cluster, but if you want to configure it across multiple clusters, ACM is where you want to go. Out-of-the-box deployment policies for workload privileges, application placement, network security and segmentation, compliance remediation, and locality, which, as Jeff said, becomes more and more key for the customers we talk to. And then runtime behavioral analysis, runtime vulnerability management, security policies, and event response. One thing I didn't mention is that on an incident, ACS gives you the ability to either kill a running deployment or scale it to zero. If you kill a deployment, that kind of assumes you think the image itself is okay, because as soon as you kill it, Kubernetes is going to redeploy it from the registry. If you scale to zero, that means: I don't want this running at all, and I need time to look at it.
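The difference between the two incident responses just described can be made explicit in a few lines. The dicts stand in for the effect a responder would have on the cluster; the deployment name is invented and this is a conceptual sketch, not ACS's API:

```python
# Hedged sketch of the two runtime responses: "kill" deletes the pods and
# Kubernetes recreates them from the registry image; "scale to zero" stops
# the workload entirely while you investigate.

def respond(action: str, deployment: dict) -> dict:
    if action == "kill":
        # Pods are deleted; the replica count is untouched, so the
        # scheduler immediately recreates them from the image.
        return {"delete_pods": True, "replicas": deployment["replicas"]}
    if action == "scale_to_zero":
        # Nothing runs until a human scales the workload back up.
        return {"delete_pods": False, "replicas": 0}
    raise ValueError(f"unknown action: {action}")

dep = {"name": "suspicious-app", "replicas": 3}
print(respond("kill", dep))           # {'delete_pods': True, 'replicas': 3}
print(respond("scale_to_zero", dep))  # {'delete_pods': False, 'replicas': 0}
```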
One of the other cool things that is coming, which is actually not on this slide: the Kubernetes community is working on a feature called Forensic Container Checkpointing; I think it's alpha at this point. We'll be supporting that in OpenShift, and we'll be able to leverage that feature with ACS as either an inform or an enforce action. On specific kinds of runtime alerts, you'll be able to say: I want to checkpoint that environment for forensics. So that's something I've been asked about for a lot of years. A couple of other quick things to note, really. Again, this is how we think about delivering capabilities that help you control application security at build time, protect your cluster at deploy time, and detect and respond to threats at runtime. I'll just call out a couple of things. I've already hit shift-left network security. You'll be hearing a lot tomorrow about the Trusted Application Pipeline, TAP, so I'm not going to steal their thunder, but it relies on some of the functionality mentioned here: Tekton Chains for attestation of your CI process, encrypted containers, and rootless builds. If any of you are using OpenShift builds, you know that they've required a certain level of privileges; we've been working to reduce the privileges that are required by Buildah. Platform integrity has been a big investment, which actually also extends to container runtime integrity. So in RHEL, we've been investing in Keylime for a remote attestation server, extending beyond the Secure Boot that RHEL has today to a deeper level of attestation for your RHEL server. We'll add that capability to RHEL CoreOS as well. But we're also looking at what we need to do to support confidential containers. And the first place you'll start seeing confidential containers, which are an investment on top of Kata Containers, also known as OpenShift sandboxed containers, is in the public cloud.
The reason for that is that the cloud providers have the attestation services that are necessary to work with confidential containers. Those services tend to be tied to a particular chip, whether it's AMD or Intel, right? It's hardware-level attestation, and so the cloud providers have generally picked someone and landed on it. As Red Hat, we need to be neutral. Our goal is to provide, at some point, a solution that could be used across the multiple platforms and multiple infrastructures that we run on. But in the near term, you'll see that with cloud providers, including peer pods. And then, again, compliance policy and some of these other things I think I've hit already, including host scans, which is available now. So that's what we had, with three minutes to spare. So we've got three minutes for some questions, if you have any questions or comments. Yes, sir? So the question was: with all the development around Backstage and those capabilities, which are really exciting for us, are there plans to integrate that with ACM? And we are looking at that. I don't have any solid plans to share at this time, but there's been a big push for getting that set up for Summit, and I think the team has done a great job with that, and we will be looking into how we can integrate Backstage with ACM in the future. Any others? Yes, sir? 20, 2-0? 2-0. So you're looking for remediation guidance for non-Red Hat content? Yeah. Yeah, okay, noted. It's something we certainly have talked about. It requires some additional data that we don't have today; the Red Hat data is easy for us to get, since we have that connection to the RHSAs. But understood and noted that you'd like remediation for non-Red Hat content as well. Thank you. Okay, thank you. We've got one down here. Yes, a couple of questions. One: right now, you can't configure in ACS which compliance profiles are run.
So, yeah, you'll even see the Docker profile run on an OpenShift cluster, which is totally irrelevant. So that is something we'll be working to fix, so that you can choose which compliance policies or profiles are run for individual clusters. And the other thing about CIS benchmarks: there are always going to be a number of controls in a benchmark which require end-user decisions, such as log retention policy. Do you want to encrypt etcd or not? CIS recommends you always encrypt etcd, but there are other ways to protect that sensitive data: you can encrypt the RHEL CoreOS node, you can encrypt the cloud storage. And so, while OpenShift meets the majority of the CIS OpenShift benchmark recommendations out of the box, there will always be some that you as an end user need to think about. And that's where the ability to tailor a compliance operator profile comes in. You can do that today in OpenShift, but it's clunky, it's not easy. We're going to make that easier in ACS so that you can tailor those. Was there anything else? Yes. And in fact, in 4.0, we've temporarily put in place exclusions for OpenShift operators that we know require privileges, so you won't see those as policy violations. As a security product, though, we think the right thing to do is to have a filter, and that's where we'll be going: if you want to see what privileges OpenShift components use, you can view that, but it won't be on by default. Yes. We did in fact exclude the vSphere ones. Hopefully all of them; we might have missed something, so reach out. Well, thank you very much. I think that's all the time we have, but Kirsten and I are going to be here all week, and we can take your questions after. We can go have our reception crawl. I'm going to be at the booth tomorrow from, I think, 10:30 to 1, if you want to come by then. And Kirsten, when are you at the booth?
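The tailoring being discussed here is done today through the compliance operator's TailoredProfile resource. A minimal sketch of carving one control out of the CIS profile might look like the following (the profile name, rule name, and rationale text are illustrative placeholders; the actual rule names can be listed on a cluster with `oc get rules -n openshift-compliance`):

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: TailoredProfile
metadata:
  name: cis-site-tailored            # placeholder name
  namespace: openshift-compliance
spec:
  extends: ocp4-cis                  # start from the shipped CIS OpenShift profile
  title: CIS with site-specific exceptions
  disableRules:
    # Example of an end-user decision like the etcd encryption one above;
    # the rule name here is hypothetical.
    - name: ocp4-etcd-encryption
      rationale: Sensitive data is protected by node-level disk encryption instead
```

The tailored profile can then be referenced from a ScanSettingBinding in place of the stock profile, which is the workflow the speakers say ACS will make easier.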
I am at Ask the Experts from 10 to 1, although, let's say 11 to 1, since I already have a commitment at 10. And I'll be at Ask the Experts for a little bit after this session as well. Excellent. Thank you. Appreciate it.