Hello, and welcome to What's New in OpenShift 4.11. My name is Stephen Gordon, and on behalf of the entire product management team at Red Hat, I'd like to thank you for taking time out of your day, wherever you are, to join us as we take a walk through what's new in the upcoming OpenShift 4.11 release. OpenShift is Red Hat's open hybrid cloud platform. So today, we're not just talking about Kubernetes and the cluster services like logging, monitoring, and networking that you need around it to be successful, but also OpenShift 4.11 changes focused on enabling multicluster management using Red Hat Advanced Cluster Management for Kubernetes, ensuring cluster security using Red Hat Advanced Cluster Security for Kubernetes, building, managing, securing, and mirroring images using Red Hat Quay as a global registry, and cluster data management using Red Hat OpenShift Data Foundation. Together, these provide the solutions organizations need to manage the applications and the platform across multicluster and multicloud environments, delivering on the promise of deploying, running, and managing applications on any footprint, anywhere. In terms of high-level themes that we've been working on through OpenShift 4.11, and even releases leading up to it, we've again focused on installer flexibility, enabling the ability to deploy and manage both OpenShift and your applications on any footprint, from public cloud to a data center to the edge. In 4.11, to deliver on this aim, we've added the ability to purchase OpenShift from cloud marketplaces, including Amazon and Azure, and soon Google Cloud, so that you can use committed cloud spend to purchase and deploy OpenShift in those environments. We've also moved full stack automation for Nutanix AOS to generally available and fully supported to bring the best OpenShift experience we can offer to that platform. We also have a new developer preview of what we call an agent-based installer, aimed at providing an easy and repeatable way for customers to deploy their first cluster on-premises and disconnected without requiring a provisioning or bastion node, as well as a tech preview of hosted control planes for those who are running many clusters and want to consolidate their control planes and make the overall experience more efficient. We've also added some niceties in terms of external DNS support, using the External DNS Operator to work with existing solutions people may be using, and more composability of OpenShift itself: the ability to turn off certain additional pieces of what has traditionally been part of the cluster version operator's payload. In support of automated operations, we've added a FedRAMP High profile for the compliance operator. We've made fully supported the disconnected mirroring workflow we introduced in 4.10, and added an automatic upgrade recovery path for failed operator installations when a new version becomes available. To enhance workload extensibility, we've added support for NVIDIA AI Enterprise with OpenShift on public clouds, Windows Server 2022 workers for Windows containers, and the custom metrics pod autoscaler based on the upstream KEDA project to allow you to scale your application based on essentially any metric you could desire. At its heart, OpenShift 4.11 is based on Kubernetes 1.24. There are some major themes and features in the Kubernetes 1.24 release that we'd like to pull out. There's now the ability to define gRPC startup, liveness, and readiness probes, as they've graduated to beta.
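As a rough, hedged illustration (not taken from the release notes), a gRPC liveness probe in a pod spec might look like the sketch below, assuming the container implements the standard gRPC health checking service on port 8080:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: grpc-probe-demo            # hypothetical pod name
spec:
  containers:
  - name: server
    image: quay.io/example/grpc-server:latest   # hypothetical image
    ports:
    - containerPort: 8080
    livenessProbe:
      grpc:
        port: 8080                 # kubelet calls the gRPC health service directly, no sidecar or exec needed
      initialDelaySeconds: 5
      periodSeconds: 10
```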
These allow you to add probes to a gRPC application without having to expose an additional HTTP endpoint or an additional exec command, and therefore reduce the overhead of running those probes. The Container Storage Interface volume expansion and storage capacity tracking interfaces have graduated to stable at this point. Volume expansion allows you to expand PVs. Storage capacity tracking enables other features like storage-capacity-aware pod scheduling. These do require driver implementation, so there will be more work on these in future releases, but having those interfaces there is an enabler for that to happen. The ongoing work to make the Kubernetes core smaller involves migrating in-tree CSI plugins out of tree, and as part of that ongoing effort, the Azure Disk and OpenStack Cinder drivers have moved out of tree. This is a change that should be transparent to our users. Kubernetes 1.24 also introduces mixed protocol support for services with type LoadBalancer, which enables a service that has different port definitions with different protocols behind the same IP address. As part of the effort to provide more determinism around the Kubernetes alpha/beta/stable graduation process for features, a large number of things have also graduated to stable. I'm not going to go through all of those, but I will highlight that the Kubernetes 1.24 release announcement goes into this in more detail and is available at the link at the bottom of the slides, which is also a good juncture to highlight that these slides will be available after this presentation. I'll now hand over to Heather, who's going to go through some of the notable and top RFEs that we've implemented for our customers before we dive into spotlight features. Thanks. We shipped 43 requests for enhancement, or RFEs, in 4.11. These are direct customer asks that come to us. As you can see, the largest customer-requested ones are here. A lot of them are network configuration related; everyone's network looks very different, and we are continually adding features to account for that. And with that, let's get to the 4.11 spotlight features, and I'll hand it over to Daniel. Thank you, Heather. Let's start with OpenShift on marketplaces. This complements our existing offering of managed OpenShift services that you can already pay for with your cloud provider budgets, such as ARO and ROSA. We are now adding the option to procure self-managed OpenShift through the marketplaces of AWS, Azure, and GCP, and therefore pay for your OpenShift subscription fees with your committed spend in the cloud provider budget. We are already available on Azure in North America and the Azure Government regions, as well as EMEA. For AWS, we are available to transact in North America and the GovCloud region; AWS in EMEA will follow by the end of this month, and we are also going to GCP by the end of this quarter. It's all based on custom virtual machine images that you get when you procure one of these OpenShift offerings from the marketplace, which you essentially just feed into the OpenShift installer. So on the one hand, it's a fully customizable self-managed OpenShift installation that you get, but because it's paid for either by the hour based on vCPUs or upfront on a yearly or three-year basis, you pay for both the infrastructure costs as well as your OpenShift subscription fees with a single bill through your cloud provider, and that allows you to tap into your committed spend budget to also pay for OpenShift.
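Looping back to the mixed-protocol LoadBalancer services noted in the Kubernetes 1.24 highlights, a minimal hedged sketch of a service exposing TCP and UDP ports behind the same IP could look like this (the selector and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mixed-protocol-lb          # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: game-server               # hypothetical workload label
  ports:
  - name: http
    protocol: TCP
    port: 80
  - name: game
    protocol: UDP
    port: 7777
```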
On the next slide, we are also going to cater for our disconnected customers. Those customers have to maintain a mirror of all the OpenShift container images that they need to run their disconnected clusters. Previously it was fragmented and hard, with the earlier tooling, to get all this data together, and it was also very manual. So in 4.10, we already started to introduce a new central utility called oc-mirror that helps you do that in a central location, for all of your clusters and for all content types. It's all based on a single file-based configuration in which you granularly define which OpenShift releases and which operator releases you need in your disconnected clusters. That is very automation friendly: you can run this tool essentially overnight in a regular cron job, and in that process it will also automatically pick up new OpenShift releases and new operator versions if you ask it to do so. This in general lowers the download volume of images that you need to download and store, but also keeps your clusters supplied with updates as they are released by Red Hat. In 4.11, we are graduating this tool to general availability, and we're adding the ability to specify a min/max version range for both OpenShift and the operators that you need behind your firewall. As your clusters update, and as you update these operators on those clusters, you can start to adjust these version ranges towards newer releases, which allows you to make use of the tool's auto-pruning capabilities, which will delete images that are no longer needed in your registry mirror. It's also able to integrate with the OpenShift Update Service, which is a utility that you install in your disconnected environment to give you a nice graphical experience when updating those clusters. With that, I'm going to hand it over to you. In 4.11, you can now deploy OpenShift clusters on Nutanix AOS using installer-provisioned infrastructure, which is the full stack automation deployment method. OpenShift deployments will be supported on both long-term support and short-term support Nutanix AOS releases. For 4.11, the cloud credential operator, or CCO, will support manual mode for the credentials integration with the Nutanix platform. The CSI integration is going to be available post cluster deployment; in the future, when Nutanix open sources the CSI driver, we will look to move that operator into the installation workflow. Next slide. Yes, in 4.11, we released a new profile for the compliance operator. The compliance operator is used by more than a thousand customers worldwide today to provide compliance for their OpenShift deployments. In 4.11, we added a profile for FedRAMP High that allows customers to achieve the FedRAMP High baseline for their OpenShift deployments and use them for federal government workloads. External DNS: the External DNS Operator basically allows you to control your DNS records dynamically via Kubernetes resources in a DNS-provider-agnostic way. We use this to synchronize exposed OpenShift services and routes with the DNS providers. You can install this via OperatorHub. We're really happy to announce that this is generally available for AWS Route 53, GCP Cloud DNS, Azure DNS, as well as Infoblox, and it's currently in tech preview for additional providers. Next slide. Over to Gaurav.
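Before Gaurav picks up, here is a rough, hedged sketch of an oc-mirror ImageSetConfiguration using the new min/max version ranges described above (registry, channels, and versions are illustrative only):

```yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: registry.example.com/mirror/oc-mirror-metadata   # hypothetical mirror registry
mirror:
  platform:
    channels:
    - name: stable-4.11
      minVersion: 4.11.0           # lower bound of the OpenShift release range to mirror
      maxVersion: 4.11.5           # raise this over time as clusters update, enabling auto pruning
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.11
    packages:
    - name: serverless-operator    # hypothetical operator selection
      channels:
      - name: stable
```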
Hi. So VPA, the vertical pod autoscaler, has a component called the recommender, which recommends what the CPU and memory of a pod should be. In 4.11, we are introducing bring your own recommender. What that means is, let's say, for example, every Monday at 10 a.m. you have a spike in traffic. You can build a recommender which understands that behavior and suggests what the CPU and memory of a pod should be based on it. Next slide. Custom metrics autoscaler. HPA, horizontal pod autoscaling, scales out pods based on increased consumption of CPU and memory, and that kind of scaling might not fulfill all your application needs. So in 4.11, we are tech previewing the custom metrics autoscaler, where you can use predefined scalers, for example scaling based on Prometheus or Kafka, or you can build your own scaler and scale your application. It also allows you to scale the application to zero. And basically, under the hood, it uses HPA to scale out the pods. Next slide. Hey, everyone. I'm very excited to share with you all today the new console features. We have two main enhancements focused on cluster upgrades. First is the ability to do partial upgrades. This allows users to upgrade the control plane, or the control plane plus select machine pools. Users now have the ability to pause and unpause the upgrade of each of the defined machine pools, and we do this in order to cause minimal disruption to your applications. You will now have a total of 60 days to complete the upgrade. The second enhancement we have in this area is conditional updates. This gives users the ability to select supported but not recommended updates. With this, you get a new level of added transparency into why certain versions are not recommended or even blocked. Next slide, please. So Kubernetes has many tools to make your workloads resilient. The pod disruption budget is a great feature to protect your applications and critical workloads. With pod disruption budgets, application owners can state what the minimum number of pods can be at any given time. In OCP 4.11, users can now see all PDBs from a central list, create PDBs from our new form-based experience, select a PDB and see the list of pods that are affected, or even select any workload, i.e. a deployment, stateful set, daemon set, or replica set, and view whether a pod disruption budget has been attached to that workload. Our main focus here is to make PDBs as user-friendly as possible; a minimal example manifest is sketched at the end of this console section. Next slide. Every release we like to tackle some of the highest requested feature enhancements for our customers. This release we were able to provide dark mode. You can now find a section in the user settings to select the theme of your choice. Next, we did a new form-based experience for creating and editing routes and config maps. This helps simplify the experience for our users; again, highly requested as well. Next slide, please. So the web terminal comes with some great pre-installed CLIs. Now, with our new help command, users can easily see what the set of CLIs are and the versions available to them. Even better though, users can now customize their terminal with the image of their choice. This means users are allowed to install any necessary CLI that we don't provide out of the box. Plus, another added benefit with customization is that users can customize the terminal timeout, so after a terminal has been idle for a certain amount of time, you can set that as well. To enable the web terminal feature, you will need to install the web terminal operator. Finally, we now support multiple tabs in the terminal drawer. Users get a max of eight tabs. Next slide, please. Awesome.
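Here is the minimal pod disruption budget example referred to above; it protects a hypothetical 'web' deployment and is only a sketch of the kind of object the new console form creates:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                    # hypothetical name
spec:
  minAvailable: 2                  # keep at least two pods running during voluntary disruptions
  selector:
    matchLabels:
      app: web                     # hypothetical label on the protected workload's pods
```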
So for the developer experience, we have so much stuff. We now have a dedicated session just for the developer edition of what's new, so please check out the prerecorded video session and slides that are linked right in the slide. A few highlights of this are the improvements we made to the developer perspective in the OCP console, odo v3 beta 1, Podman Desktop, and much, much more. So I'm very excited for y'all to see this great material. I'm gonna hand off now to James Faulkner. Thank you so much. All right, thank you. Just sticking with the developer theme here: as you know, OpenShift is not only a great operational platform, it's a fantastic developer platform, especially when we have things like Kubernetes-native Java with Quarkus. These are for Java developers who are looking to build and move apps, or build new apps, on the platform. Some key features in the latest version of Quarkus that comes along with OpenShift 4.11: Java 17 support. As a tech preview, we now have GraphQL support, which is a better performing way to access mountains of data as opposed to traditional HTTP REST methods. We also have a new capability in search which automatically indexes your entities as you read and write from the database, puts those into Elasticsearch, and then performs some really powerful search operations, things like word stemming, so a search for 'going' also finds 'go', sounds-like matching, and other common things like that. So really powerful search capabilities are included in the latest version, as well as some intelligent client-side service discovery and selection. This is for applications that need a bit more intelligence as to how a service is discovered or how it's load balanced, as opposed to what you get out of the box with OpenShift. Two great new features in an innovative platform with Quarkus. Also, moving to the next slide, some new updates for Red Hat Single Sign-On. This is our federated identity management solution. A couple of new features with fancy names for things I'm almost positive everyone has used: step-up authentication. For example, after you've logged in to your retail account, you go and update your credit card information and you have to log in again, even though you're already logged in. That's an example of step-up authentication, now supported. We also have client secret rotation policies, to be able to automatically rotate secrets for clients, which historically has been difficult if you can only have one secret, because as soon as you rotate the secret, all of your clients are suddenly no longer valid and have to re-authenticate. So you can keep a backup for a little bit of time to provide that graceful transition. We also now support web authentication as GA, so it's moved from tech preview to GA. Web authentication is another thing you've all used: things like two-factor authentication using hardware tokens, fingerprints or other biometric information, or even software tokens. So that now moves to GA. There are several other capabilities in the newest release of Red Hat Single Sign-On, which is also included with OpenShift 4.11. Fantastic for developers to really build that security in as quickly and as early as possible in the development phase of your applications, as well as providing a solid identity management system for OpenShift applications. So that's it for that. For the runtimes updates, I'll pass it over to Rob.
So there are a bunch of maintenance items in builds for this release, but there are two items we'd like to call out. One, we've removed Jenkins from the OCP payload. It's not gone, but it's been relocated to the OCP Tools repository. This has allowed us to reduce interruption to the builds team, who are working on new features as well as contributing to the upstream Shipwright project. In doing this, the Jenkins payload has been streamlined so that we don't go through the same release process for each supported version of OpenShift; instead, we're able to publish a Jenkins release compatible with all the supported 4.x versions. And two, we now have the Shared Resource CSI driver, which supports fine-grained access control of shared secrets and config maps, including some of the nicer-to-have features like revocation. This allows our cluster admins to start to give developers and applications access to sensitive information, RHEL entitlements, and similar details, while maintaining the principle of least privilege. This is part of an ongoing push, with more updates along these same lines, to ensure that we have the best balance of security and usability throughout the builds ecosystem. And now Kustav is going to talk to you about pipelines. Hi, so OpenShift Pipelines is basically a cloud-native CI/CD solution, and it gives the benefit of moving away from the centralized execution of traditional CI/CD pipelines to a more serverless and distributed model. So what's coming in 4.11 for OpenShift Pipelines: one is external database support for Tekton Hub. Tekton Hub is fundamentally a set of curated tasks that can be reused by other customers, and what we are doing is letting customers bring their own external databases for it. We are also seeing that many of our customers are using the ARM architecture, for which we are now supporting OpenShift Pipelines on ARM. Pipelines as code is another area where we have focused in 4.11, with a couple of improvements there: for example, multiple pipelines can now be run, we have added additional Git provider support, and we have added certain triggers which can be used by third-party providers, for example, to execute a pipeline. We have also made certain platform improvements for pipelines, specifically around pipeline bootstrapping, for example configuring a Git repository or creating a GitHub application for that. Next slide. OpenShift GitOps version 1.6 will be available with OpenShift 4.11, and it will include Argo CD version 2.4. Some highlights in this release: we've got ApplicationSets being made GA. We're also adding initial support for notifications as tech preview, so you'll be able to set up triggers based on events like health degraded or sync successful and then send out a notification in an email, or by Slack, or one of the other available services. Also, plugin support has landed in 1.6, with config management plugins enabling you to bring in tooling that you need even when it's not part of Argo CD core; you can bring in specific package versions or extend Argo to utilize tools like SOPS for secret management. Communication between Argo CD components and the Redis cache is now using TLS encryption, so any secrets or sensitive data moving to and from the cache now has more protection during transit. Finally, OpenShift GitOps is going multi-arch with support for running on IBM Power and Z.
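For orientation, a bare-bones Argo CD Application as you would manage it with OpenShift GitOps might look like the following hedged sketch (the repository URL, path, and namespaces are made up):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook                  # hypothetical application name
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-config.git   # hypothetical Git repository
    targetRevision: main
    path: apps/guestbook
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true                  # remove resources deleted from Git
      selfHeal: true               # revert drift back to the Git state
```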
More info about all of these features and the full list of updates and fixes will be available in the 1.6 release notes. I'll hand over to Naina to talk about serverless. Thank you, Harriet. OpenShift Serverless is based on Knative, which adds an abstraction layer on Kubernetes to bring serverless qualities to containers as well as functions on Kubernetes. New features include support for init containers, so that users can implement initialization logic for the application, and support for persistent volume claims for applications that need permanent data storage. We're also introducing serverless integration with the cost management service, so that users can see how serverless is helping them. For tracing and debugging of serverless applications, we have added integration with distributed tracing in addition to Jaeger. And for our Kafka broker tech preview, users can now connect to externally managed Kafka topics. In the developer experience, with the installation of the Camel K operator, event sinks can be added through the form experience, and two new serverless dashboards have been added to the developer perspective. For the functions tech preview, we have added on-cluster builds using OpenShift Pipelines, support for Red Hat S2I in addition to CNCF Buildpacks, and IDE extension plugins for VS Code and IntelliJ for creating and deploying functions. Last but not least, we are very excited to add the first developer preview of Serverless Logic, which will add the orchestration of microservices and functions into workflows, managing failures, retries, parallelization, and service integrations. The developer experience is through a plugin to the kn CLI and a workflow editor for a visual view and experience. Please try these new features and let us know how we can improve and what features you would like to see in the future. And with this, I will hand it over to Jamie. Thank you. Thanks, Naina. As a quick reminder, service mesh provides fine-grained network security and the ability to handle and control traffic based on application identity. We recently made Service Mesh 2.2 available, which updates Istio to 1.12 and Kiali to 1.48. This brings full support for service mesh on Red Hat OpenShift Service on AWS, which includes multi-cluster federation. A notable change in this release is the new WasmPlugin API, which deprecates the ServiceMeshExtensions API for extending service mesh; this is used by the 3scale API management extension. Kiali also includes a whole bunch of new features, in particular many features to help with managing a large service mesh. There are also some exciting new tech preview features to play with: the next generation Kubernetes Gateway API is in this release, the ability to perform dry runs on authorization policies without impacting traffic, and a new proxyless option for gRPC applications. I'll now pass it back to talk more about the installer. All right, yes. So in OpenShift 4.11, there are now four installation experiences that target varying levels of control and automation. First, there's full stack automation, or installer-provisioned infrastructure. In this method, the installer controls all areas of the installation, including infrastructure provisioning, with an opinionated best-practices deployment of OpenShift.
With the pre-existing infrastructure deployment, or user-provisioned infrastructure, you're responsible for provisioning and managing your own infrastructure, which allows you greater customization and operational flexibility. The more recent one is the interactive connected experience with the Assisted Installer; using this installation method, you'll be able to create your own cluster through a hosted web experience. And then lastly, we have a new installer. This is for disconnected experiences, where we have an agent-based installer which provides a streamlined experience for deploying OpenShift in fully disconnected or air-gapped environments, and this is in dev preview in 4.11. We'll talk a little bit about it in a moment. For 4.11, we've also expanded our supported provider list to include Nutanix AOS, which is now generally available with full stack IPI. On the cluster infrastructure side, we continue to enhance the seamless integration between OpenShift and our cloud providers. We've done some work around ARM, as well as bringing in some of the top customer asks like Ultra SSD support on Azure and EFA support on AWS. We're making good progress on enabling Cluster API, or CAPI, which will eventually become the de facto way of provisioning, upgrading, and operating multiple Kubernetes clusters. Looks like we're on the next slide. For OpenShift 4.11, let's take a look at the Azure, AWS, and vSphere enhancements. For Azure, we've added support for Azure Ultra Disks, which are Ultra SSD disks from Azure that provide better IOPS and throughput. Users can now install clusters using user-managed encryption keys in Azure for control plane and compute node disk encryption. We've also enabled the accelerated networking feature in Azure for control plane nodes by default, so customers can now benefit from performance improvements at no additional cost. On the AWS front, we've expanded to support the new secret region, us-iso-east-1. We've also added support for Elastic Fabric Adapter, or EFA, on AWS. And then on the vSphere front, you can now use your own load balancers for external API and ingress traffic for your OpenShift on vSphere deployments using installer-provisioned infrastructure, or IPI, and this is in addition to the load balancer that is deployed with the cluster. Next slide. So as mentioned, we have a new installation experience for disconnected OpenShift deployments. Currently, what we've heard from customers is that our deployment methods are sometimes too opinionated or too complex for standing up one's first cluster. With that in mind, the agent-based installer provides an easy and repeatable way for customers to deploy their very first OpenShift cluster in on-prem disconnected or air-gapped environments without requiring a provisioning or bootstrap node. For 4.11, this is in dev preview, and the initial focus will be on bare metal environments. The agent-based installer provides the flexibility of user-provisioned infrastructure deployments and leverages the Assisted Installer engine to perform fully disconnected deployments. Using this agent-based installer, you're able to deploy all supported OpenShift topologies, including single node OpenShift, three-node compact clusters, as well as standard HA clusters. And with the agent-based installer, no bootstrap node is needed, and you'll be able to use your preferred automation tooling to fully orchestrate and automate the deployment.
The agent-based installer runs as a new sub-command of openshift-install. You begin by specifying your cluster details in your install configuration, where you can configure things like pull secrets and your host network configurations, and after you've defined your cluster configuration, you can then use that openshift-install agent sub-command to generate a bootable image to begin building your cluster. If you're interested in giving this new agent-based installer a try, please do reach out to us. And now I'll hand off to Anand. Thanks, Shiv. So in OpenShift 4.11, we have introduced support for something called Composable OpenShift, which provides a mechanism for cluster installers to exclude one or more capabilities from their installation, which determines which payload components are or are not installed in the cluster. In 4.11, we have taken the first steps to allow disabling the installation of the following operators: the bare metal operator, the marketplace operator, and the OpenShift samples content that's stored in the openshift namespace. You can disable these features by setting the baseline capability set and additional enabled capabilities in the install config. So it's basically a new API in the install config which lets you opt in to or out of these capabilities. The installer will then validate this information and pass it to the CVO. The CVO will calculate an effective status, as you can see on the right, which has known capabilities of bare metal, marketplace, and OpenShift samples. This is going to be a phased approach, and in the first phase, which is 4.11, we have done the following things: we have provided a way for the CVO to allow the enabling and disabling of these operators; we have given the optionality to disable three operators, which are bare metal, marketplace, and samples; we've made oc aware of cluster capabilities; and the installer allows users to select what should be included at install time. So that's what we've delivered in phase one. In the following phases, what we will deliver is cluster lifecycle integration for OLM, and also to see if some of these components can be moved into OLM, and whether some of the other operators, like the node tuning operator, can be made optional or not. Over to Duncan to discuss our ARM plans. Thanks, Anand. You know, everyone, it feels like every month, if not every week, there's an announcement around ARM. For instance, hopefully you've all heard about Google's and Azure's ARM infrastructure. And in a similar vein, our OpenShift ARM journey is going to continue in this release. Like the other non-x86 architectures we offer, our initial focus, with one interesting exception we'll get to, is on platform adoption. So with this release, we're adding, I guess, quote unquote, the missing bits from the current platforms. What that means is IPI for bare metal and UPI for AWS, and that leaves us with Azure and now Google to address in the future. Interestingly, we're now starting to bring the other features in. So we've enabled disconnected install for those users that don't have a direct connection, usually for security reasons. And in the last release, even I'll admit we were a little light on storage options, so we've filled that out with pretty much everything that you want, the only missing piece being Fibre Channel SAN, for which we're going to wait for RHEL to bring in support for running on ARM. And finally, we have a tech preview for you.
Well, we're seeing ARM conquer the world in its march to full adoption. We are aware that not everything that you want to run, not all your applications or services, is currently available on ARM. So while we expect that to be fully addressed in the near future, we don't want to stall any adoption. With that in mind, we're moving to this idea of a heterogeneous cluster, and to be specific, that is a single cluster that can have compute nodes of different architectures, something that up until now we couldn't do with OpenShift. I want to be really clear on this: this is a really early tech preview. There's a lot we're going to address going forward, and right now, with 4.11, we'll just let you do this wonderful thing on the Azure cloud platform, and you'll be able to add those interesting ARM-based nodes as a day-two operation to your existing x86 clusters. But there'll be more around that later on. And with that, I think I feel the urge to hear more about Red Hat CoreOS, so I'm going to hand over to Mark. All right, thanks Duncan. So, a quick few odds and ends for RHEL CoreOS and the machine config operator. A notable change to the MCO, which is the operator that's responsible for rolling out updates to your cluster nodes, is that updates will now roll out to nodes alphabetically by zone and then by age, oldest nodes first. This allows for more robust HA pod scheduling when using multi-zone container deployments, as well as a predictable order of reboots when updating OCP. RHCOS is updated to RHEL 8.6 content; of course, this comes with the usual complement of hardware enablement, performance improvements, and updated packages. kdump is moving to full GA support on 64-bit x86 systems; other architectures will follow in subsequent releases and remain in tech preview status for now. And finally, there are new packages available: nvme-cli has been added to the base image, and as alluded to earlier, a couple of key Kerberos packages have been added to the CoreOS extension system. And with that, I'll hand it off to Adele. Take it away. Hey everyone. So what is Hosted Control Planes? Today we offer OpenShift to all of our customers for different use cases with different deployment models. These deployment models all share one thing in common, which is that they all require a dedicated control plane. Now let's take a step back. If you've ever used a carpooling application, you know that you can share the same car with all the passengers who are going towards the same destination. You know that you would be saving fuel, but also allowing room for other people on the road who are going towards the same or other destinations, so everyone is happy. We're trying to do the same with OpenShift. OpenShift Hosted Control Planes allows the control planes of your clusters to share the same infrastructure. It's also decoupled, so the lifecycle and upgrades are independent, and the control planes are hosted as pods; they are just another workload on your cluster. So if we take that carpooling analogy back, we'll be saving the costs of hosting the control planes because we're doing that on the same infrastructure, and because the control planes are hosted as pods as part of your deployments, you also get faster provisioning times for your clusters. So it's a win-win for all use cases.
Additionally, you're also introducing network and trust segmentation, because the control plane persona, your SREs and your admins, are just looking at control planes, and you as an end user are just looking at workloads. So you achieve strong separation of concerns. Hosted control planes is most useful at scale. I'm not talking about 10 clusters or 20; when you have hundreds of clusters, this is where you can get the most value out of your hosted control planes deployment. Next slide, please. Right. Now we know what hosted control planes is; let's see what's new with hosted control planes. The way you can consume hosted control planes today is through the multicluster engine. This is the part that handles lifecycle when you have a multi-cluster deployment; again, as I said, it makes sense at scale. HyperShift is the project that is used underneath; we ship HyperShift through the multicluster engine, and the whole feature is called hosted control planes. Optionally, you can also use ACM, which requires an ACM subscription, so if you have ACM and you want to use hosted control planes, this is one way for you to go. We today offer it as tech preview on AWS, and we're also going to offer it on agent-based platforms built on Assisted Installer technology, and on Azure, as technology preview providers. In the future, we're looking to add more providers, and you're going to be able to also see it in other form factors and other management models of OpenShift. With that, I'm going to hand over to Gaurav for worker latency profiles. Thank you, Adele. Worker latency profiles. So OpenShift ships with some default reaction times. Let's take an example: say there is an increase in latency between the control plane and a worker node. By default, the controller manager will mark the node as unavailable within 40 seconds. This 40 seconds might be too short, failing a node too quickly for certain use cases. So we built some profiles where we say, hey, you can increase the time to two minutes or five minutes and reduce that churn in your infrastructure. Second, once a node is marked as unavailable, a deployment takes five minutes to spin up the pod in another location, which might be too long. So with these profiles, we have decreased that time to 60 seconds. That means you can spin up those pods faster somewhere else in the infrastructure, thus decreasing the downtime of those applications. Next slide. Blocking the payload registry. So for customers who want to be MARS-E compliant, we have provided the ability in 4.11 to block the payload registry in a disconnected environment. Next slide, please. Ron, are you back with us? Yes. Now I will cover what's new in ACS, Advanced Cluster Security, over the last quarter, the second quarter of 2022. Red Hat Advanced Cluster Security continued to create and enhance capabilities designed to improve security programs, including supply chain security and zero trust networking for Kubernetes. Our latest updates, releases 3.69, 3.70, and 3.71, include improvements to vulnerability management, security policies, and scale, and additional guardrails to help protect against misconfigurations that can create security risk.
The key new capabilities outlined in these releases are: scanning of the embedded OpenShift container registry, improved detection of Spring vulnerabilities, new policies to manage the operational readiness of deployments, identification of inactive software components, verifying image signatures against Cosign and Sigstore public keys, and identifying missing Kubernetes network policies to enable zero trust networking within the cluster, and many more. We will publish a what's new session for ACS as a separate session, and it will be available, I hope, soon. Next slide. So in OpenShift 4.11, we have improved the audit logging feature. The audit logging feature now enables you to have logs that contain both login and login failure details. So what sort of events are logged into the audit logs? Like I said, events including logins and failed logins are now logged at the metadata level. As you can see in the audit logs, under annotations there is a field called authentication.openshift.io/decision, which will either be allow or deny, and then you have another field with the username. The expected result is that we will now show login failures as well as login and logout events as part of the audit logging feature. Next slide please. As you know, the Kubernetes API has been changing: pod security policies have been deprecated and will no longer be served from Kubernetes 1.25, and they have been replaced with something called PSP++. OpenShift APIs have been changing to react to those needs, and we have introduced a new API called SCC v2 to comply with those changes. With the introduction of this new PSP++, or what we call pod security admission, namespaces, pods, and containers can be defined with three levels of policies, as you can see: privileged, baseline, and restricted. Pods and containers that are not configured according to the security standards defined globally or at the namespace level will not be admitted or cannot be run. There is an FAQ at the bottom of the slide that talks about how pod security admission integrates with OpenShift and the new changes that are coming in as part of SCC v2; I highly recommend reading that FAQ. And now over to Jeff Nguyen to talk about all the awesome innovations in ACM. Thank you, Anand. Welcome everyone. I'm here to talk to you about the enhancements we have to our management platform for managing all of your Kubernetes fleet, and we have some great enhancements with regards to our policy and governance capabilities. We've had a lot of customer requests around the fact that once I deploy a policy, it's going to create some Kubernetes resources, and when I delete or remove that policy, I'd like those resources to be deleted as well. So that's new in ACM 2.6, along with better control and flexibility for selecting namespaces with labels and being able to place policies more effectively. If you remember the last session, in 2.5 we introduced policy sets. It's a great way for you to manage and organize policies as groups, and the upcoming release will be introducing a couple of great policy sets leveraging Kyverno or Gatekeeper, depending on your choice of admission or mutation controllers.
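Looping back to pod security admission for a moment, the namespace-level policy levels described above are applied with standard Kubernetes labels; a hedged sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                                       # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject pods that violate the restricted profile
    pod-security.kubernetes.io/audit: restricted     # record violations in the audit log
    pod-security.kubernetes.io/warn: restricted      # warn clients that submit violating pods
```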
These Kyverno and Gatekeeper policy sets are meant to provide a quick time to value by capturing our best practices, and we encourage you to work with us in the community on these policies to help contribute the ones that are most meaningful to you. The next one is an area where we hear a lot of requests from our customers: how do I onboard a number of users into my managed fleet? So we're creating a blog and a multi-tenant role-based access guide, with some sample assets for you to use, so that you can onboard users quickly onto the entire hybrid cloud platform. And then finally, with policies in our governance framework, we always recommend that you put those policies under governance in a GitOps repository, or a Git repository. We've done some great work with our OpenShift GitOps team and with the policy generator, so that now you can have a GitOps repository with your Kubernetes objects and then leverage Argo CD and OpenShift GitOps to deploy those policies to ACM, and then, through placement, those policies will be mapped to the fleet. Next slide please. In the spirit of better together, as we work across the portfolio and provide a more comprehensive experience: as you know, today you can see your Argo CD based applications in ACM's application views, and that's a great tool for our SREs. With 2.6, you're going to be able to see applications that were deployed through Flux as well, which will be an enhancement for our customers that are using different GitOps tools, and that will also enhance the ability of our SREs to manage those platforms. In 2.3 of ACM, we introduced the ability to call out to Ansible, and in 2.6 we're providing the capability for Ansible to call ACM. So think about the value of providing the integration between these two very powerful automation platforms: Ansible will then know ACM's fleet of clusters and the inventory associated with that fleet, and we're looking forward to getting some feedback from you about the playbooks that you'd like to call from Ansible to manage your Kubernetes domain at scale. In addition to that, we are introducing community operators for ACM and the MCE that Adele just told us about a few moments ago. Those will be coming soon, and you'll be able to get a community operator to deploy and work with ACM and MCE functionality. And then the other bit coming GA in 2.6 is VolSync. Think of VolSync as a general purpose, low-level tool for syncing persistent volumes across different clusters; it's kind of a staple in the business continuity scenario. VolSync will now be GA, so you can do it yourself and roll your own disaster recovery scenarios. But before you do that, wait a couple of slides and we'll talk to you about what we're delivering with ACM and ODF. If you've been using multi-cluster networking, you know that we've GA'd Submariner. Submariner has some enhancements with regards to Azure and auto-configuration that will make it easier to deploy and manage Submariner at scale. Now, speaking of scale, when we look at managing a large number of clusters in a fleet, we have now verified and validated with our performance team that you can have 2,500 single node OpenShift clusters associated with a given ACM instance. We're going to continue to push that envelope, looking for 3,000 in the very near future, and we've also made some scalability enhancements to our search capability. The underlying search capability has been re-architected for better resilience and scalability in collecting information about the resources that are on the cluster.
One of the things that helps us with scale is letting you configure exactly the types of Kubernetes objects that you would like to have in that single pane of glass interface, so you can configure which Kubernetes objects are returned back to the hub through search. And we've also taken steps to make sure that the dynamic metrics collection is configurable. In resource-constrained environments out at the edge, you don't want to collect all the metrics, but there are some that you really want to bring back, and we've made it very, very easy for you to decide which metrics you want to bring back to the hub. And then, of course, in prior releases we've integrated Alertmanager into that hub infrastructure. Next slide please. There we go. As I mentioned before, business continuity and disaster recovery is top of mind for many of our customers, and you could use VolSync for doing it yourself, but we highly recommend that you join the journey with us and leverage a couple of different patterns we have for business continuity. First up is regional disaster recovery for applications. When you think of disaster recovery, in both of these scenarios ACM's role is to create clusters and configure those clusters consistently with policy, and then we team up with our friends from ODF to make sure the persistent volumes are replicated. In a regional DR scenario those are replicated asynchronously, so you have a non-zero RPO and RTO, but we can get that down to several minutes for those workloads and use cases that require regional disaster recovery and failover. If we go to the next slide, the same story plays itself forward where we are providing synchronous replication, but with synchronous replication we can't ignore the laws of physics: we have latency constraints, and we are using this external ODF cluster here at the bottom to do the synchronous replication. In that case you can achieve an RPO of zero, and depending on the readiness of the cluster that will be receiving that synchronous replication, we can get you to an RTO of minutes as well. We have both of these options available, working with our customers, as tech preview today, as noted on the slides. We encourage you to reach out to us; in the slide deck that you will receive there will be a URL that you can click to join us in our early access program and help us validate and refine these solutions as we bring them GA in the subsequent releases planned for 4.12 and, from an ACM perspective, 2.7. One more slide for me, and then I'll turn it over to Roger: a general purpose backup utility that we provide based on OADP. OADP provides the low-level data protection pieces of our framework. It's a CLI-based scheduling and maintenance capability, so you can target very specific things on your cluster, back them up to S3-compatible storage, and do what you want with them from there. This is tech preview in 4.11 as well; in future releases, look forward to having a user interface associated with it, and we intend to also integrate it into an ACM policy set so that you can target fleets of clusters to be backed up with this capability in the near future. With that, I think I can turn it over to Roger to take us through the rest of the slides. Thank you, Jeff. My name is Roger, and I'm going to take you through the observability part of this update, and we will start with monitoring.
For monitoring, we have added some features for the UX, some for security and reliability, some for users, and some community updates. If we start with the UX: in OpenShift 4.11 we focused on some major UX changes, as we have removed UIs from the stack that we can no longer support, but we continue to invest in improving the overall monitoring experience, and we will do that within the OpenShift console, to provide a better, native, coherent user interface allowing administrators and developers to access the information that they need. So we will continue in further updates to add information and dashboards as part of that transformation. For security updates, in the cluster monitoring operator we now allow OpenShift customers to configure remote write with all the authentication methods supported by upstream Prometheus, and that means support for OAuth 2 and the other authorization methods. In the user section, we now also allow cluster admins to configure the retention size for the metrics, and what that means is it's now possible to define the maximum amount of data to be retained on the persistent volume. We also now allow users to manage the Alertmanager for user-defined alerts, and that feature is fully supported through the OpenShift console, which I will show you in a later slide. We also support metrics federation for user-defined monitoring, and we do that by exposing the Prometheus federate endpoint, both from within and outside the cluster. Some other updates: we now allow doubling of the scrape interval to reduce some CPU usage on single node OpenShift, and another config option for the cluster monitoring operator now allows you to interact with systems outside the cluster by providing an easy way to add the cluster ID to the data that is sent out. We move to the next one. Some improvements in the OpenShift monitoring experience: there is quite a lot here, but the team has continued to invest in improving the overall monitoring experience inside the Observe section, and that's to provide a better, more native and coherent interface for all users, administrators, and developers. As a reminder, in OpenShift 4.10 we provided an experience that included metrics and targets in a single view in the OpenShift console, and now in 4.11 we provide additional improvements. For the UX improvements around the metrics page, we allow you to show functions and metrics; we can highlight functions and labels and get better expression readability in this section. Around the dashboards, we have provided a higher sampling rate; for example, in the old one the time span was 30 minutes, and we have now switched to a sampling rate where you can go from every minute to every 30 seconds. Some updates from the monitoring section that also show in the UI are that users can now manage their user-defined alerts, and what that means is that, for example, the user can now silence alerts within the OCP console. Some notes on the UI that you will probably recognize: the Prometheus UI has been removed, but redirects for things the Prometheus UI had, like alerts, have been added, so the features are there, just not within the Prometheus UI. And starting from 4.11 and later, Red Hat will no longer provide the Grafana dashboards for visualization or customization out of the box; for 4.11 and going forward, we are replacing the dashboard visualization and management features in the OpenShift console instead of Grafana.
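Looping back to the remote write and retention size settings mentioned above, a hedged sketch of the cluster-monitoring-config ConfigMap might look like this (the endpoint and size are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retentionSize: 50GiB         # cap the metrics data kept on the persistent volume
      remoteWrite:
      - url: https://remote-write.example.com/api/v1/write   # hypothetical receiver endpoint
        # authentication settings such as oauth2 or authorization would be added here
```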
As noted, Grafana is deprecated in OpenShift starting from 4.11, and the functionality provided in Grafana is being moved into the 4.11 console. For the logging section, OpenShift continues to expand into Vector as the alternate collector and Loki as the alternate log store, so Vector is now a supported collector and Loki a supported store, but the default is still Fluentd for the collector and Elasticsearch for storage. As we continue to evolve from this tech preview, there is a growing demand for this transition, and customers can now take advantage of it if they want to. One example is the ability to assemble log messages such as a stack trace into a single log entity; before, you had to manage separate multi-line logging sources. And as we keep working towards this consistent, unified experience, we have also added some UI changes that I'm going to show you on the next slide. Some minor updates within logging: we changed a max setting that allows significantly reduced upgrade times, and we also allow customers to filter some of the collected logs based on pod labels. And if you're going to use Vector as the log collector, it will allow you to send logs to CloudWatch destinations, meaning the AWS CloudWatch log service. We also continue to upgrade Fluentd to the latest dependencies. And, as I said, for the console experience we keep adding native dashboards, and with Logging 5.5, available for OpenShift 4.11, the logging and observability UI team have invested heavily in building a logging view that you can find within the Observe section via a plugin. In this example you can see that this view shows the original log line, and the user can click and zoom and see the additional data. The layout is similar to the metrics view, but here there is a log histogram and you have nice log queries. So this is available, and I think we have the next slide, about Insights. Insights continues to provide proactive recommendations based on Red Hat's experience of running OpenShift and on remote health data from all the OpenShift customers out there. What we do is analyse this remote health data, find symptoms, form diagnoses, and provide them to our internal teams, and that is to improve OpenShift and make the support experience much more effective and enjoyable. So that's the observability part, and I'll hand off next to Deepthi. Thank you, Roger. So let's go through some of the major highlights with respect to networking, right?
First up is MetalLB. The MetalLB operator with BGP support went GA as part of the 4.10 release. Given how customers are consuming MetalLB and how it is evolving, we find that there are many times where we need complex configurations. Taking this into consideration, we have added support to make it possible to choose a specific set of peers for a given address pool by adding an optional list of BGP peers. Along with that, we have a node selector configuration, where we can have different sets of nodes exposed to different subnets, and the load balancer IP can be reached only via a subset of these nodes. Moving on: we have had support for IPsec since much earlier versions of OpenShift, but it had to be enabled at cluster installation time. What we have done is added the ability to turn this on and off at runtime. IPsec ensures the traffic between pods on the pod network is confidential, authenticated, and cannot be tampered with. This, however, requires adjusting the cluster MTU to accommodate the IPsec header overhead; currently this adjustment has to be performed by the cluster administrator before enabling IPsec at runtime. Also, what we have seen is that when working in a highly regulated environment, one might need the ability to secure DNS traffic when forwarding requests to upstream resolvers, to ensure data privacy. Cluster administrators can now configure TLS for forwarded DNS queries through this feature; the encryption provided by TLS eliminates opportunities for eavesdropping and on-path tampering with DNS queries in the network. Next slide please. Let's look at some of the ingress enhancements. We have undertaken multiple ingress enhancements over the last cycle to enable more functionality, performance, and so on. First up, we have the ALB operator. The AWS Load Balancer Operator is currently in tech preview. This basically deploys and manages an instance of the AWS Load Balancer Controller, which helps us manage elastic load balancers for a Kubernetes cluster. This project was formerly known as the AWS ALB Ingress Controller. It satisfies Kubernetes Ingress resources by provisioning the Application Load Balancer, which is more of an L7 ingress load balancer that integrates very well with a lot of native AWS services, including DDoS protection, authentication, and many more that customers want to use. Next, we have added support for users to specify a subdomain on a route and have the OpenShift router fill in the ingress domain automatically, which in turn makes it very easy to complete the route's effective host name; a small example is sketched a bit further down. This will help ease how we configure router sharding today, and also helps us roll out public and private router deployments with much more ease. Next, we have exposed port configuration of the ingress operator. I think this is one of the most requested features, which will enable customers to run multiple router deployments on the same node on different ports. This will cut down the infrastructure costs incurred in scaling ingress, as it becomes possible to configure different ports for each of the router deployments, all of them running on the same host. Based on multiple customer inputs, we have also looked to enable configuration of HAProxy, the first item being the router max connections. This gives the ability to tune the maximum number of simultaneous connections that HAProxy will accept, for better performance.
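Here is the small route subdomain example referred to above, a hedged sketch only (service name and subdomain are made up):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello
spec:
  subdomain: hello                 # the router appends its own ingress domain to build the host name
  to:
    kind: Service
    name: hello-service            # hypothetical service backing the route
  port:
    targetPort: 8080
```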
As mentioned, this feature allows us to set a discrete value, configured by the cluster administrator, or to use the auto option, where HAProxy itself dynamically computes the maximum number of connections it can accept. And lastly, we have the router backend health check interval. A cluster administrator can set the health check interval to define how long the router waits between two consecutive health checks. We understand that too-frequent probing can flood the application nodes, especially on large clusters and under heavy load, and that is why we want customers to have the ability to tune this for all of their workloads in one shot, so that they can maintain cluster stability. This value is applied to all routes, and the default value is five seconds. That covers most of networking; handing off to Peter to cover virtualization. Thank you. Thanks, Deepne. Let's talk about OpenShift Virtualization, which is the ability to run KVM VMs in Kubernetes and is based on the upstream KubeVirt project. We've got a couple of enhancements here to make it more natural and comfortable for traditional virtualization infrastructure administrators to work in a Kubernetes cluster. We've got new dashboards and panels for individual VMs to allow you to manage and configure those very easily without using YAML, and we've improved the new-VM workflow to allow you, or your users, to create new VMs with two clicks. We've also worked with our partners at NVIDIA to allow you to share GPU resources across multiple VMs and containers, which gives you much more efficient use of your hardware; they have a new operator out that's in tech preview now. And lastly, as we've said for a while, performance is at parity between all the virtualization platforms, so a workload running on RHV, RHEL, OpenStack, or OpenShift should perform approximately the same, and we now show that at scale as well. There's a white paper dropping very shortly which will show over 5,000 VMs running on hundreds of bare metal nodes getting the same performance. I think there was also a question in the chat about hosted control planes and the KubeVirt provider: we are working on that actively, it's not quite ready yet, it's still baking in the oven, and we expect to have a dev preview hopefully by the end of this year. Now let's turn it over to Joachim to talk about sandboxed containers. Thank you, my friend, and welcome to the section on OpenShift sandboxed containers, a product that has been available since 4.10. For those of you who do not know sandboxed containers yet, the main focus here is to harden your container environment by implementing isolated kernels within each of your containers instead of having a shared kernel on the host. This prevents containers from having any unwanted effect on others or on the host, and vice versa. In 4.11, our key milestones are the following. We are enhancing our footprint in the direction of cloud computing by having an AWS bare metal option as a tech preview; this enables AWS bare metal today, and we will give you a lot more options in the next and following releases. Next, single node OpenShift is now fully supported and available with sandboxed containers.
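For readers who want to see roughly how this is consumed, here is a minimal sketch of enabling sandboxed containers with a KataConfig resource and then selecting the kata runtime class from a pod; the label, names, and images are illustrative assumptions and should be verified against the sandboxed containers documentation.

apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  kataConfigPoolSelector:
    matchLabels:
      kata-enabled: "true"      # hypothetical label selecting which worker pool gets the kata runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example
spec:
  runtimeClassName: kata        # run this pod with an isolated kernel via the kata runtime
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # any existing image works unchanged
    command: ["sleep", "infinity"]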
Besides that, we are making your life much easier with enhanced observability options to assist your administration, with visible metrics on performance, health, potential bottlenecks, and a lot more. That's what comes with 4.11; stay tuned, and handing over to the next presenter. Anand, are you back with us? Oh, sorry for that, I was just looking at the live chat for some of the questions. I'll keep the Windows updates short. The first update is support for containerd. Upstream has removed support for dockershim as the default runtime for Windows nodes and is switching to containerd. Long story short, none of your workloads should be impacted: the existing tools you use to build your Windows container applications with the Docker CLI should all continue to work, so the impact on users should be as minimal as possible. The underlying container runtime is simply changing from dockershim to containerd. That's pretty much it; next slide, please. The next quick update is that we will now be supporting Windows Server 2022 across the board for AWS, Azure, vSphere, and bare metal or platform-agnostic, which is platform type equal to none. The reason is pretty simple: Windows Server 2022 has a mainstream support end date of October 2026 with an extended date of 2031, while other Windows Server versions like 20H2 are only months away from the end of their support. That's why we're switching to Windows Server 2022, which gives you somewhere between a 5 and 10 year runway of support for your Windows Server hosts. And now over to Erwin to talk about all the awesome NVIDIA enhancements in full. Hello, I'm Erwin Guerin, and I'm an OpenShift product manager for AI and hardware accelerators. With OpenShift 4.11, we are providing new features to make data scientists' lives easier and to run MLOps platforms at scale. We already introduced NVIDIA AI Enterprise this year, which is an end-to-end, cloud-native suite of AI and data analytics software. The new NVIDIA AI Enterprise 2.1 suite with OpenShift is now supported on public clouds, including AWS, Google Cloud, and Azure, so you can easily run supported GPU-accelerated frameworks such as RAPIDS, TensorFlow, or Triton Inference Server in the public clouds. We're also enabling multiple new features in the NVIDIA GPU operator with OpenShift. The first feature is GPU time sharing. You have already heard of MIG, which is a feature to share GPUs on specific Ampere GPUs, but if you don't need memory or fault isolation, an OpenShift administrator can now define a set of replicas for a GPU, and users can simply run multiple pods per GPU; this feature works with all GPUs supported by the NVIDIA GPU operator. Another new feature concerns GPU metrics from the DCGM exporter: previously these metrics were only available in Prometheus and Grafana, but we have developed a GPU dashboard in the OpenShift 4.11 console, so you can now see GPU utilization and GPU quotas and directly monitor GPU usage from the OpenShift console. We're also continuing work on multiple topics like OpenShift Virtualization with vGPU, where the GPU operator can now install and configure the host, and we are introducing OpenShift on Arm as a tech preview.
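As a rough illustration of the GPU time-sharing model described above: the administrator publishes a sharing configuration and each replica then appears as a schedulable nvidia.com/gpu. This is a minimal sketch assuming the NVIDIA device plugin's time-slicing configuration format; the exact way it is wired into the GPU operator's ClusterPolicy is documented by NVIDIA.

apiVersion: v1
kind: ConfigMap
metadata:
  name: time-slicing-config        # hypothetical name, referenced from the GPU operator's ClusterPolicy
  namespace: nvidia-gpu-operator
data:
  any: |-
    version: v1
    sharing:
      timeSlicing:
        resources:
        - name: nvidia.com/gpu
          replicas: 4              # one physical GPU is advertised as 4 schedulable GPUs
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:11.7.1-base-ubi8   # illustrative image
    command: ["sleep", "infinity"]
    resources:
      limits:
        nvidia.com/gpu: 1          # up to 4 such pods can time-share the same physical GPU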
And finally, if you have an AI project and want to test OpenShift with vGPU but don't have the hardware, you can contact your Red Hat account team, who can provide two weeks of free access to a remote lab called NVIDIA LaunchPad, where you will be able to test OpenShift on bare metal with NVIDIA AI Enterprise. That's it; I'm handing over to Tony. Thank you. For the Operator SDK in 4.11, the most notable feature is the long-awaited Java operator plugin, released in tech preview. The goal of this plugin is to enable writing operators in Java and to easily manage those operators with OLM. The Java plugin helps scaffold out an operator project with the Java Operator SDK and the Quarkus framework, so operator authors can not only develop an operator comfortably with all their existing Java tooling and libraries, but also take advantage of Quarkus, with its faster boot times and optimized memory footprint, to create enterprise-grade operators. Additionally, with this new release operator authors can now easily package their operators in OLM bundles so their operators can be installed by OLM and ultimately manage their Java workloads on OpenShift. Lastly, our engineering team is also working on a more feature-rich Java operator, so stay tuned for a TrinoDB operator as a showcase in the future. With that, I'll pass it over to Daniel for the OLM updates, please. Yes, a quick update on the Operator Lifecycle Management piece: we added a new tunable for updates. Today, when OLM updates your operator and the update fails, fortunately most of the time the previous version keeps running, but OLM will not re-attempt to apply the update; this is a sort of safety net to help an administrator debug what went wrong and revert any half-started migrations. However, this can become a burden if you're running many clusters in parallel with many operators and you need to log into each of them and manually clean up potentially failed updates. So in 4.11 we introduce a new setting that allows you to have OLM automatically move forward from such failed updates when a newer version of the operator appears in the catalog. This is aimed at SRE personnel who manage fleets of clusters with many operators installed and can release new operator updates in their own catalogs quite quickly. That helps keep updates moving forward and eliminates manual intervention. With that, I'm going to hand it over to Kiana, who will walk us through Quay 3.8. Hi, I'm Kiana, a PM for the Quay UI, and I'm here to introduce Quay's plans for the rest of the year. For new users, Quay is a container registry that builds, analyzes, stores, and distributes your container images. I'm excited to announce that Quay is undergoing major product renovations this year, and our main goal is to update and migrate Quay's product offerings and UI/UX experience to console.redhat.com. We strive to create a one-stop shop for all of your container needs alongside all of the other Red Hat tools, aiming for the end of the Q3 release. Next slide, please. For what's to come, we will align the Quay UI experience with the rest of the Red Hat product portfolio. Sorry, actually, go back. Some key value-adds include bulk deletions for quickly cleaning up workspaces, advanced filtering for exploration and search, and the ability to modify configuration. In the longer-term future, we'll add additional UI coverage for other content types like signed images and charts. As a whole, we'll deliver a cohesive user experience with our other console.redhat.com offerings. Next, please. Thank you.
Stemming from your requests, we will grant superusers superpowers by removing the need to request explicit ownership to see all of your content. You have weighed in and we have listened. We want to keep listening and ensure our product updates continuously reflect your changing needs. I invite you to reach out to me through the usability testing sessions to shape the future of the Quay product for and by you. If you're not able to attend, I've created a quick 10-minute survey for you or your team, and I'll share the link in chat. Now over to my wonderful colleague, Daniel Messer. It turns out that we're also doing some under-the-hood changes in Quay 3.8. The thing that users talk to us about quite often is the quite open permission model, because everybody who has a login in Quay can start to create new content by pushing images either into their own account, or by creating new organizations and pushing images there. That makes it a little hard, in highly regulated and compliance-driven environments, to control registry growth and limit who can do what in the system. So in Quay 3.8 we are introducing a new permission model with a new user persona that we call the restricted user, and this aligns very much with how OpenShift handles permissions: by default, no permissions, and a higher-privileged user has to grant you permissions to get started in the system. In Quay, this means that a restricted user out of the gate cannot create new organizations, and they cannot push images to their own account either. Only another user who already has access to a shared organization can invite them into that organization, and that's where they can start to create content. This should help keep overall registry growth, and specifically storage consumption, in check, and also help in environments with heightened regulatory requirements. Next slide. Finishing off with Quay 3.8, we are also introducing a couple of additional features. We are introducing native IPv6 support, which is important in situations where IPv6 is the only available choice. We are also looking to graduate the proxy pull-through caching feature to general availability, and we will add the ability to set a size limit upon which the cache will automatically start to remove cached images to make room for new ones. Lastly, the Container Security Operator is something every Quay customer can use to relay vulnerability reports from Quay into the OpenShift console, to see which of the images they are running from Quay are affected by known vulnerabilities. That is something that will see enhancements for disconnected environments: the Container Security Operator will become more friendly towards mirrored images, with support for image content source policies, and will also automatically detect if the cluster is behind a proxy via the cluster-wide proxy setting. With that, I'm going to hand over to Greg from storage. Thanks, Daniel, and hello everyone. Let's have a look at what's new in storage. Starting with cloud provider CSI drivers, we are promoting the Azure File CSI driver to GA, allowing OpenShift clusters running on top of Azure to consume file storage with RWX PVs. Worth noting that we only support CIFS, and due to a driver limitation there is no snapshot support at the moment.
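A minimal sketch of requesting shared Azure File storage as described above; the storage class name shown here is an assumption and will depend on how the driver is installed in your cluster.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany                    # RWX: the same volume can be mounted by pods on multiple nodes
  storageClassName: azurefile-csi    # assumed name of the Azure File CSI storage class
  resources:
    requests:
      storage: 100Gi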
We also have an important announcement around CSI migration: OCP 4.11 will be the first release where migration is enabled, and this includes Azure Disk and OpenStack Cinder. Migration for other drivers will be available in upcoming releases. As a reminder, CSI migration allows the replacement of legacy in-tree plugins with the newer CSI standard. The term migration may sound a little concerning, but there is actually no data migration per se: it works by translating in-tree PV API calls to CSI in memory, not on disk. CSI migration requires no manual intervention from our users; it is transparent and enabled by default for both drivers I mentioned. In terms of storage classes, new clusters will have CSI storage classes set as the default, whereas upgraded clusters will keep their current default storage class. That said, we recommend switching to CSI after your upgrade. Next slide, please. Next up is the introduction of CSI volume expansion as GA. This feature allows you to expand your PV and the related filesystem on the fly, and it works for both attached and detached PVs. To expand a PV, the admin needs to ensure that the storage class exposes the functionality, and once that's done, users simply edit the PVC and update the requested storage field with the desired value. Worth noting that your CSI driver needs to support expansion, and shrinking is not supported. Lastly, we are introducing generic ephemeral volumes as fully supported. A generic ephemeral volume is a way to consume scratch space, like emptyDir, but in this case via CSI. The PVC is defined inline with the pod specification, and you can set a fixed size to avoid the application filling up the storage. Similar to other ephemeral storage, the PV follows the pod lifecycle: it's created and bound when the pod starts and deleted when the pod terminates. It's supported by all CSI drivers that support dynamic provisioning, and because it is backed by CSI and a network-attached backend, you can benefit from features such as snapshots, expansion, and cloning. Next slide, please. Now looking at OpenShift Data Foundation updates, the two main highlights are the tech previews of regional and metro disaster recovery; those two were previously covered by Jeff. Additionally, we are starting our journey to provide an NFS service, in tech preview, as part of ODF. We will provide support for workloads running on the OpenShift cluster at first, and going forward we will support external NFS consumers using LDAP and Kerberos. Next, we are bringing multi-cluster ODF monitoring capabilities with ACM, and one last important update regards the ODF LVM operator, which provides a dynamic storage solution for single node OpenShift: we are adding support for thin provisioning as well as snapshots and clones, while retaining the very low resource consumption footprint. And that's it for storage; thanks, turning over to Steve.
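A minimal sketch of the generic ephemeral volume pattern described above, with the PVC template inline in the pod spec; the storage class name and image are assumptions, and any CSI driver with dynamic provisioning should work.

apiVersion: v1
kind: Pod
metadata:
  name: scratch-example
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal   # illustrative image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard-csi    # assumed CSI storage class with dynamic provisioning
          resources:
            requests:
              storage: 10Gi                 # fixed size so the application cannot fill up the backend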
Thank you, Greg. So finally, we have a large investment in OpenShift around telco 5G and, more broadly, edge computing, and we continue to deliver features for that use case in the 4.11 release. First of all, single node OpenShift, which we introduced in OpenShift 4.9: some customers who use single node OpenShift in a per-site model with site failover, which is to say that if a site fails the failover is to another site in that region, have asked for the ability to add capacity within a site. To support that, single node OpenShift now supports site capacity expansion by adding workers. You can do that with clusters created from 4.11 onwards, using either the cloud.redhat.com assisted installer experience, ACM, or manually by generating the worker ignition file and using that. By default, ingress in that case will remain on the single node OpenShift control plane, and obviously that single node OpenShift remains a single point of failure for the site, so it is very important to consider how you want to manage HA as you scale up and keep adding workers. Obviously, a single node control plane is not going to be able to handle as many workers as a full three node control plane, so that's another thing to factor into such deployments. In the background, we have also reduced the memory recommendation for single node OpenShift to 16 gigabytes of RAM. I'll now throw to Rob Love, who's going to talk us through some of the more telco-specific improvements that we've made. [inaudible] Your voice is quite garbled, Rob; I'd suggest maybe turning off the video just to go through this one. That's still no good, Rob, so I think I will try and get us through this section, if that's okay. The Performance Addon Operator was something we introduced to help with tuning of OpenShift nodes, specifically for 5G RAN applications. In the 4.11 release we have taken the step of actually merging that into the OpenShift core. So in today's workflow you would install OpenShift, install the PAO operator, and then apply the performance profile you wanted; in the new installation workflow you just install OpenShift, it's already there as part of the operators we have in the cluster version operator, and you can straight away apply your performance profile. The upgrade workflow is pretty transparent: the performance profile API is unchanged, the operator is automatically uninstalled, and the performance profile functionality is now implemented as part of the Node Tuning Operator. So, basically, a great example of something we developed specifically for tuning a 5G RAN distributed unit that we've now been able to incorporate as core OpenShift functionality, extensible for other use cases as well. We now have the ability to permanently offline CPUs using that performance profile. For example, if the worker nodes of a cluster have been deployed with extra CPU capacity that will be used in the future, you can turn those cores off until you actually need them; there's now an offlined parameter in the performance profile that allows you to do that. This is done at boot time, so if you want to change it later you incur the cost of a reboot, but you do gain the advantage of having those cores offline and not being used until you actually need them as part of your deployment.
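A minimal sketch of a performance profile using the offlined CPU parameter mentioned above; the CPU ranges and node selector are purely illustrative, and the field name should be verified against the Node Tuning Operator documentation.

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-du-profile
spec:
  cpu:
    reserved: "0-1"        # CPUs kept for OpenShift housekeeping
    isolated: "2-27"       # CPUs dedicated to latency-sensitive workloads
    offlined: "28-31"      # assumed parameter: CPUs taken fully offline until needed
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""   # hypothetical node role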
Next, sysctl support on secondary interfaces for MACVLAN and SR-IOV, so basically kernel-only interfaces, not DPDK. The context here is that cloud-native network functions use the Linux kernel stack to implement part or all of the network function, and that requires tuning of the kernel stack at a fairly deep level. So how do we add a secondary interface with a sysctl value without having to elevate the pod's privileges? The solution we have here delegates the setting of the sysctl to a CNI plugin, which itself runs with elevated privileges, so we don't have to give that privilege to the CNF itself. There is also an allowlisting approach here: some sysctl settings can obviously create security holes, so there is only a short list of what are considered safe sysctl settings that you can allow through this mechanism. We have PTP enhancements specifically to support those telco RAN distributed units, the things that my cell phone would connect to, or rather that would process the traffic from the antennas. These distributed units receive time from what's called a grandmaster clock; that time is set on the node system clock, and when in boundary clock mode it's passed on to network entities on the same NIC the time was received on. The problem statement here is that we want to pass the time to more radio units than there are ports on a single NIC, so OpenShift can now configure multiple NICs as boundary clocks on the node, so that it can serve basically twice as many radio units as it could in previous OpenShift releases. Obviously, at the scale that people are trying to operate these networks, that's a pretty important improvement. In terms of additional things we've gained in PTP, the Linux PTP stack is also updated from 2.x to 3.x as part of receiving RHEL 8.6 into the 4.11 release. The final one here is failed single node OpenShift upgrade recovery. We've introduced what we call the Topology Aware Lifecycle Manager, a cluster operator that backs up a single node OpenShift's artifacts prior to an upgrade, plus a restore script to use in case that upgrade fails; obviously we hope that doesn't happen, but plan for the worst and hope for the best. What gets backed up in that situation? The cluster itself, including etcd and the static pod manifests, backups of all the important folders on the node, any file managed by machine config that had been changed, and the deployment itself, so a pinned version of that node as well as any container images that are in use. There is an API here on the right, the cluster group upgrade, where you can set backup equals true, and that's how you manage this particular capability. With that, thank you for joining us and taking the time. I know this has been an awful lot of information to get through, and I hope that it was informative for all of you who joined us today, whether it was on BlueJeans, YouTube, or Twitch. These materials will be available on the OpenShift.com website shortly after this presentation. In terms of the OpenShift 4.11 release, keep checking the fast channel in the next month, because it will be appearing there in the not too distant future. If you want to get your hands on something in the meantime, we still have guided demos of new features on a real cluster that you can hit at win.openshift.com, we have OpenShift info, documentation, and a sandbox environment available at try.openshift.com, and of course we have the OpenShift Commons group, where users, partners, and contributors can come together, at commons.openshift.org. So again, I thank you for your time; enjoy the rest of your day, wherever you may be.