All right. Hi everyone. Welcome to today's session on OpenShift 4.9. Happy to have you here. I've got the entire PM team with me, so we're going to shoot through a bunch of cool stuff that's in OpenShift 4.9, and you're going to hear it directly from them. A reminder of what we're talking about today: OpenShift Platform Plus. This is the holistic ecosystem we've built around OpenShift, so it includes all the bits you know and love in the OpenShift core, as well as Advanced Cluster Management, Advanced Cluster Security, Red Hat Quay, some of our cloud services, and other related offerings: everything you need to be successful in a multi-cloud, hybrid cloud world. We're going to talk about everything you see on this screen and a little bit more. As always, ask questions in the chat. If you're inside Red Hat, you've got your own chat; if you're outside Red Hat, we'd love to engage with you, get your questions answered, and follow up with you later as well. All right, let's dig into it. What is in OpenShift 4.9? Here are our three themes: extended installer flexibility, security, and, as always, our next-gen developer tools. The first thing under installer flexibility is that single-node OpenShift is GA. Folks have been asking for this for a while. It lets you extend OpenShift out to more constrained footprints: thousands of clusters at retail locations, on all kinds of vehicles, and so on. We're super excited about that, so go ahead and try it out when 4.9 is out. The next one, also long requested, is RHEL 8 workers. This is both for your compute workers and your infrastructure nodes. The control plane keeps running RHEL CoreOS, which is also based on RHEL 8, so you've got RHEL 8 across the board. We'll cover that more in a second. The new platform we're excited about here is Azure Stack Hub, another much-requested feature, which comes with a UPI-based installation. And something we announced a little earlier, but will cover again, is our bring-your-own-Windows-nodes support: if you've got out-of-band management for how you boot and manage your Windows machines, you can bring those to your OpenShift cluster. Then of course we're picking up a new Kubernetes version, currently 1.22. There are a number of API changes in 1.22 that are important to understand, and we'll cover those as well. In the security bucket: some enhancements to our TLS handling for etcd, so shorter expiry and rotation tooling that brings it under management just like the rotation we have for the rest of the core control plane, plus a customizable audit policy for the Kube audit log. The next one is expanding mutual TLS. We've got a big Service Mesh user base, and this extends mutual TLS into Ingress and Serverless, so you get it between all of those components. And last on the security front, we're carrying our FIPS support through other components of the platform: FIPS-validated cryptography for our ACM component, OpenShift Virtualization, and sandboxed containers. So if you're taking advantage of any of that in a FIPS-controlled environment, you're good to go there.
Then last, in the developer tools bucket, we've got automatic RHEL entitlements: the cluster itself gets entitled automatically, and you can use that in your builds and other code artifacts, which is really great and smooths over some friction that used to exist. We've got certified Helm charts coming to the console, living alongside our certified Operator content; a UI for GitOps pipelines that are managed as code, with some special screens to help you with that process; and custom domains for Serverless when you're exposing services to your customers. We're going to cover all of this in more detail, so stay tuned. I want to cover what's in Kube 1.22. The major theme here is API deprecation: a number of Kubernetes APIs have been marked deprecated for a while, and now they've actually been removed. This affects a ton of popular APIs that have moved from beta to stable, and the old beta versions are finally being taken out. OpenShift has a bunch of checks to help you get over this bump, and we'll talk about that more in a second. CSI for Windows nodes is now GA, which obviously makes using storage on Windows a lot easier; we'll cover that, along with bring-your-own-node and some other Windows items, in a moment. Last, I'd put a bunch of things under the secure-by-default category, the main one being a new admission controller that is the replacement for pod security policies. This phases in the new functionality so that the PodSecurityPolicy object itself can be deprecated; it's slated for removal in 1.25, and just like with these other removals, OpenShift will help you through it. A note that the CIS benchmarks still call for using pod security policies, so that's something that will need to get updated, and you might see some friction there from some of your customers. As always, security context constraints in OpenShift keep working; we had your back before pod security policies even existed, they're still there, and they're still good to go. We also pick up CRI-O 1.22, which is versioned the same as Kube, so that's in OpenShift 4.9 as well. All right, here's a quick look at the roadmap. Obviously we're going to hear about a lot of the things in the first bucket. I'm not going to go over everything else, but there's a ton of work going into developer tools, the application platform, and our hosted offerings, the cloud services that build on top of OpenShift. We're going to pick up additional regions for cloud providers where we have a UPI install, and we might pick up an IPI install to get that full range of flexibility. OpenShift on Arm is coming, we've got enhanced Windows support, and better things coming for Serverless, GitOps, and Pipelines. So a bunch of really great stuff here; pause your video if you want to check it out, and these slides will be published as well. All right, with that, let's talk about the 4.9 spotlight features, and I'm going to hand it over to Tony. Thank you, Rob. The first one here is a new upgrade safeguard in response to the Kube API removals. As mentioned earlier, the OCP 4.9 release comes with Kubernetes 1.22, which removes a set of deprecated beta APIs. What this means is that operators using the beta versions of those affected APIs will need to be updated to use the stable versions. Partners and their products have been audited and notified of the updates they require, and to prevent service breakage, operators installed in 4.8 that do not have a compatible 4.9 release will block the cluster upgrade. OpenShift will also alert users that deprecated APIs are in use and show which operators are using them, as you can see in the screenshot here. Another potential source of breakage from the Kube API removals is components that use the affected APIs externally, which we cannot detect. So with the new cluster upgrade behavior, administrators will need to first evaluate their cluster, migrate the affected components to the appropriate new API versions, and then provide a manual acknowledgement before the cluster can be safely upgraded. Lastly, in the future we expect to use this ack functionality for similar Kube ecosystem changes of this magnitude.
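To make that concrete: the manual acknowledgement is just a ConfigMap entry in the openshift-config namespace, and you can see which removed APIs are still being requested on your cluster through the APIRequestCount resources. Here's a minimal sketch; the ack key below is the one I recall from the 4.9 documentation, so double-check it against the alert or release notes before relying on it:

```yaml
# Acknowledge the Kubernetes 1.22 API removals so the 4.8 -> 4.9 upgrade can proceed.
# The data key is version-specific; verify it against the official docs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: admin-acks
  namespace: openshift-config
data:
  ack-4.8-kube-1.22-api-removals-in-4.9: "true"
```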
Next, I'll hand it over to Moran to talk about single-node OpenShift. Hi, everyone. Single-node OpenShift, as the name implies, is OpenShift on a single node. The goal is to provide a consistent application platform from the data center to the edge, extending our OpenShift edge deployment offerings, compact three-node clusters and remote worker nodes, to include single-node OpenShift as well. It is focused on production edge use cases, mostly on bare metal. It has no workload or runtime dependency on a centralized control plane, and it really fits and is architected to address edge use cases. One mechanism we've added is bootstrap-in-place, so we don't need an additional bootstrap node to fire up and initiate the cluster. Upgrade support was added on top of the Dev Preview edition that came out with 4.8, and SNO is now fully upgradable. One thing to remember is that SNO is not a highly available solution: if the server fails, it is not highly available like a regular multi-node OpenShift deployment. That affects upgrades as well, meaning there will be workload downtime due to the need to update the operating system. Deployment can be done via openshift-install in a UPI-like style, and that is fully GA. In addition, we also offer deployment via Red Hat Advanced Cluster Management with the zero-touch provisioning and central infrastructure management mechanisms, which I'm going to cover later on, as well as with the Assisted Installer, the SaaS offering for OpenShift deployment. OLM is available to install operators on top of SNO. Minimum requirements for this type of deployment are 8 cores and 32 GB of RAM, with roughly 2 cores and 16 GB consumed as the platform footprint for vanilla OpenShift; that's what the platform itself uses. There is an attached link showing the deployment model. With that, I'll pass it over. Thank you. Thanks, Moran. While OpenShift installations on public clouds take advantage of native load-balancing services, there hasn't been any native, out-of-the-box load balancer for OpenShift on-premises bare-metal infrastructure deployments. In 4.9 we enhance bare-metal deployments by providing full support for load balancing on bare-metal infrastructure clusters using MetalLB in Layer 2 mode. Layer 2 mode is a first step; the next step is to support the currently in-progress upstream effort that introduces a BGP FRR mode, which is targeting 4.10 for full support. The operation of MetalLB basically involves two components: a cluster-wide controller that handles IP assignments, and the speaker, which runs as a daemonset and speaks the protocol of your choice to make the services reachable. There are also a couple of network modes involved. Again, Layer 2 is what's supported in 4.9: one cluster node is used to announce the Kubernetes service, and, depending on whether you're using IPv4 or IPv6, it uses ARP or NDP respectively to make those IPs reachable on the local network. The follow-on work targeting 4.10 is a Layer 3 BGP mode that establishes BGP peering with nearby routers you control and tells them how to forward traffic to the service IPs; that will allow for truer load balancing across multiple cluster nodes.
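Concretely, once the MetalLB Operator is installed, a Layer 2 setup is mostly just an address pool plus a normal Service of type LoadBalancer. A minimal sketch, using the CR names from the MetalLB Operator of that era as I recall them (newer releases replace AddressPool with IPAddressPool and L2Advertisement), and a placeholder address range:

```yaml
# Address pool the speaker will announce over ARP/NDP (Layer 2 mode).
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: example-l2-pool
  namespace: metallb-system
spec:
  protocol: layer2
  addresses:
    - 203.0.113.10-203.0.113.30   # placeholder range on the local network
```

Any Service of type LoadBalancer then gets an external IP assigned from that pool by the controller.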
Next slide, please. OpenShift Pipelines: with 4.9 we have the OpenShift Pipelines 1.6 release. In this release, the triggers subsystem, which is responsible for the webhook functionality, where events come in from Git providers and trigger execution of a pipeline, reaches GA; it had been tech preview in the previous release and the multiple releases before that. Auto-pruning configuration has been enhanced to allow configuration per namespace. In previous releases it was introduced as a global configuration; now every team or group can customize it for their own needs and define how many pipeline runs should be kept, maybe the last 10 or the last five days, with the rest automatically cleaned up to free space both in etcd and in cluster storage. Pipeline-as-code is a feature introduced in the previous release that lets you follow the GitOps model for the pipelines themselves: instead of creating the pipeline on the cluster, you put the pipeline definition inside a Git repo and add the repo to the cluster. Every time an event comes in, the pipeline definition is taken from the Git repo and executed on the cluster. This was introduced in the last release, but we keep adding more capabilities to it and verifying with customers how it works for them. In this release, private Git repos are supported. Initially we supported GitHub and GitHub Enterprise; this release adds hosted Bitbucket, and since we know a lot of our customers run self-hosted Bitbucket Server, we are now working on adding Bitbucket Server support as well. More customization was added in this release so customers can control, for example, how many metrics the pipelines generate: customers that run a large volume of pipeline executions can push a lot of data into Prometheus, so they may want to trim some of those metrics. We are also giving customers ways to customize how Tekton works on OpenShift in general, the default Tekton configs, in a way that is maintained across upgrades, with the operator aware of the customizations customers make.
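For the pipeline-as-code piece mentioned above, registering a repo with the cluster is roughly a one-object affair, and the PipelineRun definitions themselves live in the repo (conventionally under a .tekton/ directory). A sketch, with the CRD version and the org/repo URL being assumptions on my part from the early docs:

```yaml
# Tells pipelines-as-code to watch this repo and run the pipeline definitions it contains.
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: my-app                    # hypothetical name
  namespace: my-app-ci            # hypothetical namespace
spec:
  url: "https://github.com/my-org/my-app"   # placeholder repository
```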
On the Dev Console side, the pipeline builder, the visual tool that helps users compose pipelines from tasks, got a lot of improvements. The most prominent one is the integration with Tekton Hub: there is a search for tasks, and it doesn't only search the cluster but also searches community tasks coming from upstream, gives you enough description to choose one, and adds it to the canvas in front of you for designing your pipeline. We are also working with the Dev Console team to bring more of the pipeline-as-code views into the Dev Console. What you see on the screen right now is the view of pipeline runs attached to a particular Git repo, with the matching GitHub checks and the PR or commit status integration. So expect more of that in the coming releases as well. Next slide, please. OpenShift GitOps also has a new release with 4.9: GitOps 1.3. In this release we are adding support for user groups and the kubeadmin user when logging in to Argo CD with OpenShift credentials; using OpenShift credentials with Argo CD was already supported, but it didn't support the kubeadmin user and didn't pick up the user's groups, so those capabilities are coming as well. The ACM team has done a great job integrating more of Argo CD into their platform. We've been working with them for a long time, and in this release the ApplicationSet feature, with a generator that allows Argo CD to look up clusters in ACM and generate applications for them, is available. So you get quite a dynamic environment in Argo CD: every cluster that is added to ACM, or imported into ACM for management, automatically gets added to the list of clusters Argo CD is managing, and an application is created for it to sync from a Git repository. More customer-requested capabilities are coming in this release of OpenShift GitOps: we've had multiple requests for supporting external certificate managers for the TLS configuration of Argo CD itself, so that is added if a customer wants to use cert-manager or some other certificate manager, and routing configuration for Argo CD is also introduced. We are doing the same with Argo CD in the Dev Console, incrementally adding more capabilities: in 4.9 you will see more details about the environments an application is deployed to through Argo CD and their status, with metrics like how successful deployments are, deployment frequency, failure ratio, and the health of those deployments across environments. Next slide, please. Go ahead, sir. Hi, so OpenShift Serverless, which is based on upstream Knative, is updated in 4.9 to upstream Knative 0.24. We made security our focus and added encryption of in-flight service traffic; this is an important feature, and we will be porting it back to previous OpenShift Serverless releases. We know how important custom domain mapping is, so we have extended that experience from being available only through the CLI to doing it through the Dev Console; Serena will cover it later in the presentation. New monitoring dashboards have been added for visualizing your serverless apps. We also added support for emptyDir volumes so serverless apps can use them to share files between a sidecar and the main application container. We keep enhancing our tech preview of Functions; this time around we have added TypeScript and Rust in addition to Node, Quarkus, Go, Python, and Spring Boot. Functions can now also access data stored in secrets and config maps, and you can do this with an interactive kn CLI experience, and we will also be enabling Google Cloud functions to run on Knative.
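Since custom domain mapping came up here (and comes back later with the console flow), the underlying resource is a Knative DomainMapping that points a domain you own at a Knative Service. A rough sketch; the domain, namespace, and service name are placeholders, and the API version was still an alpha one in this timeframe as far as I recall:

```yaml
# Maps a custom domain to an existing Knative Service.
apiVersion: serving.knative.dev/v1alpha1
kind: DomainMapping
metadata:
  name: api.example.com        # the custom domain you want to expose (placeholder)
  namespace: demo              # must be the namespace of the target service
spec:
  ref:
    name: my-knative-service   # placeholder service name
    kind: Service
    apiVersion: serving.knative.dev/v1
```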
With that, I believe I'll ask Siamak to take the presentation forward. Thanks, Naina. Automatic RHEL entitlement management for builds: this is a feature that has been one of the top requests from many of our customers because of a regression in OCP 4 compared to OCP 3. In OCP 3, Docker was available on every host, and without that, building container images from Dockerfiles that install RHEL RPMs became quite difficult for many of our customers in OCP 4. We've been working toward this goal across multiple teams for a while, and we're finally at the point where it's released as tech preview, with the aim of going GA in an upcoming release. What it does is this: once it's enabled (since it's behind the tech preview feature gate, it needs to be turned on by the customer), the cluster itself automatically downloads the Simple Content Access certificates for the entitlements of the customer's organization, places them on the cluster in a known location, and manages them on an ongoing basis. It refreshes them automatically, because these entitlements change a lot; since they represent the organization and get invalidated, the cluster goes back, fetches the new ones, and stores them as a secret on the cluster. The Insights team has been delivering that piece, OCM has exposed the API, and build configs have added support for mounting secrets and config maps, so you can use the entitlement secret directly inside a BuildConfig, inside a Tekton pipeline, or directly in pods for that matter, consume the entitlement in your build, and when you run a yum install, subscription-manager automatically recognizes those entitlements and uses them to pull the content and install the RPMs. A requirement for using this feature is that the customer enables Simple Content Access for their organization; that lets them use a certificate that represents the whole organization instead of going to every system and entitling it one to one, which is the traditional way some of our customers manage their subscriptions across their RHEL nodes. So we're really happy this is finally in place, and we can push it forward and make it simpler over the next couple of releases, with more automation around distributing the entitlement secrets themselves and making it easier to deliver them to applications. Next slide, please. With OpenShift 4.9, customers can now have multiple logins to the same registry in a single pull secret. Before this, you could only have one login for an entire registry in a single pull secret, which required the use of many pull secrets for deployments with multiple components, and you can clearly see that was cumbersome. With this change you can use a single secret that contains multiple logins for the same registry, either per registry namespace or per image in the registry. This also allows adding credentials for quay.io to OpenShift's global pull secret without overwriting the existing credentials for the OpenShift core images on quay.io.
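As an illustration of what that looks like, the auths map in a single pull secret can now carry entries at different levels of specificity for the same registry: registry, namespace, or individual image, with the most specific matching entry used for a given image, as I understand the lookup. The org, repo, and secret names here are hypothetical, and the auth values are placeholders for base64-encoded user:password strings:

```yaml
# One pull secret, several logins for quay.io at different scopes.
apiVersion: v1
kind: Secret
metadata:
  name: combined-pull-secret     # hypothetical name
  namespace: my-app              # hypothetical namespace
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "quay.io":                      { "auth": "<base64 user:pass>" },
        "quay.io/team-a":               { "auth": "<base64 user:pass>" },
        "quay.io/team-a/private-image": { "auth": "<base64 user:pass>" }
      }
    }
```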
Next slide, please. So we've got some really great console updates for you all. In 4.9 we supercharged the project selector: not only can you do a quick search and star your favorite projects, but for privileged users we added the ability to filter out system projects. Our goal is to help you cut out the noise and get to the projects you care about. In addition, we added a user preferences section, which includes not only your language preference but also the ability to set your default perspective, your default project, your default topology view, and your default editing method, so if you prefer form or YAML when you come in, the preferences will remember that for you in the future. Next slide. Another big request we got was in the overview dashboard: our users wanted to see cluster utilization not just for the entire cluster but by node type as well. In OpenShift we default to showing both worker and master nodes, and we've added the ability to pick which node type you want; and if users add their own node types, for example an infra or a GPU node type, they can filter on those as well. The next item is the ability to get node-level logs directly from the console: much like pod logs, you can go to your node, select Logs, get a list of all the available logs there, and pick which log you want to look at. And finally, we added the ability to clean up operators. By clean up we mean that on uninstall we not only remove the operator but also remove all the operands that were created by that operator, giving you a full, clean uninstall of your operator. Next slide, please. Thanks, Ali. So yeah, hi, I'm Serena, the PM for developer tools, and this part is about the developer console, or developer perspective, inside the console. What we've done this release is really focus on a lot of usability enhancements as well as a few features. The screen on the top left shows our converged import flow: previously we had three separate flows for importing from Git, Dockerfile, and Devfile, and in 4.9, to improve the experience, we now have a single converged flow where the user just enters a Git repo and we do all the work behind the scenes; that's a great improvement. The next one, on the top right, is an easy way to export your application. This is a new dev preview feature: from the topology view it lets you export your app, which gives you a bundle of YAML that you can use to replicate your app in another project in the same cluster, or in another cluster, just by importing those YAML documents. The bottom-left screen is a form-based edit for build configs. This was an RFE requested by many people; we used to have it in 3.x, so this is a parity feature with 3.x. And last but not least, on the bottom right we have improvements for application observability: our Monitoring section has been renamed to Observe, we now have four dashboards available for developers with more to come in upcoming releases, and we added type-ahead capability in the dashboard selector, making it more efficient to find what you're looking for. That improvement was made not only in the developer perspective but also in the admin perspective, which had been requested by many as well.
On the next slide we'll talk a little more about the serverless changes; Naina already covered some of this. We have domain mapping support for serverless deployments: as we know, each service automatically gets a default domain name when it's created, and this option allows you to map any custom domain name you want to a Knative service through the UI. And in the mock-up on the right, our developer catalog now includes community Kamelets: when the Camel K operator is installed, you get an additional 50-plus event sources available in the developer catalog, so that's a great feature as well. On the next slide we'll talk a little more about the pipelines integration inside the console, aligned with what Siamak discussed. The screen on the left shows the repository list view for pipelines-as-code, which lets you look at those repositories, get to pipeline runs, and so on from the console. And, as Siamak mentioned, there are some nice enhancements to the pipeline builder: a search capability for tasks that lets you see a task's description and filter by the metadata associated with the tasks, plus the Tekton Hub integration, where you can see which version of a task is available and install or upgrade tasks right from inside the pipeline builder as needed. So again, another great enhancement to the pipeline integration in the console. With that, I will pass over to Siamak for the next part of the presentation. Thanks, Serena. Hi everyone, let me start by introducing some of the new features we are adding to the installation experience in OpenShift. If you are familiar with OpenShift 4, you probably know there are two primary installation experiences. The first is full-stack automation, or IPI, where the installer controls all areas of the installation, including infrastructure provisioning, with an automated, best-practices deployment of OpenShift. The second is deployment to pre-existing infrastructure, or UPI, where administrators are responsible for creating and managing their own infrastructure, allowing more customization and operational flexibility. As you can see in the timeline, the list of supported providers grows again this release with a new addition for user-provisioned infrastructure: Azure Stack Hub. Next slide, please. As I just mentioned, we are adding a new provider in this release, Azure Stack Hub. This new capability allows an OpenShift cluster to be deployed onto existing infrastructure on Azure Stack Hub. As part of this provider enablement we have put together some Azure Resource Manager templates, the solution Azure offers for infrastructure-as-code for its customers. These templates will assist the user with creating the required infrastructure where OpenShift will be deployed. Next slide, please. In this release we are also adding support for Red Hat Enterprise Linux 8 workers, starting with RHEL 8.4. If a customer wants to use RHEL for worker or infrastructure nodes, this can be done as a day-2 operation on any cluster deployed via UPI or IPI. In OpenShift 4.9 we are deprecating the addition of new RHEL 7 machines to the cluster, and the path to replace existing RHEL 7 machines with RHEL 8 ones will basically consist of adding new RHEL 8 machines and removing the RHEL 7 ones. Next slide, please.
This enhancement enables the OpenShift installer to create subnets as large as possible within the machine CIDR, rather than always carving out a fixed fraction of it regardless of the number of subnets. This is specifically for Microsoft Azure, and it allows users to create a machine CIDR that is as small as possible while still accommodating the number of nodes that will be in the cluster. There are no changes required in order to consume this, and no additional fields need to be added to the install-config file; for user-provisioned infrastructure, the documentation has been updated to reflect the change. Next slide, please. In this release we are adding support for the new AWS regions in China. Due to government regulations, there is a requirement for an Internet Content Provider license in order to use these regions in AWS. The main difference between these regions and others when deploying OpenShift, if you are already familiar with OpenShift on AWS, is that both the region and the AMI reference must be added manually to the install configuration file, since these AMIs are not publicly available in that region and the user needs to upload one manually to their own account.
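In practice that just means a couple of extra values in install-config.yaml. A trimmed sketch with placeholder values; the region shown and the AMI ID are illustrative, and the AMI has to be one you've copied into your own account:

```yaml
# install-config.yaml (abridged) for an AWS China region.
apiVersion: v1
baseDomain: example.com            # placeholder
metadata:
  name: my-cluster                 # placeholder
platform:
  aws:
    region: cn-north-1             # China region, specified manually
    amiID: ami-0123456789abcdef0   # placeholder: RHCOS AMI uploaded to your own account
pullSecret: '...'
sshKey: '...'
```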
And that's all from my side, passing over to Moran now, I think. Yeah, so zero-touch provisioning is moving from dev preview to a tech preview release status. It is provided within RHACM, now with the additional option to deploy multi-node clusters as well as remote worker nodes, on top of the SNO capabilities that were already there. With requirements coming from the telco market to address multi-cluster regional deployments, infrastructure configuration and workloads are manifested in Git via Kubernetes-native APIs to provide fully automated deployment from a regional location. And basically, for folks who don't know it already, it integrates and leverages our existing technology stack, whether that's Red Hat Advanced Cluster Management, Hive, Metal3, or assisted installation, taking the benefits of all of those to create a fully automated flow from infrastructure to applications running on an OpenShift cluster. It can deploy over Layer 3 networks with no additional bootstrap nodes, so it's really aimed at edge deployments. Beyond that, it's a highly customizable deployment: it fits connected and disconnected environments, IPv6, IPv4, dual stack, DHCP or static IPs; all the supported deployment options are feasible using this mechanism. It is GitOps enabled, meaning it is managed with Kubernetes declarative APIs, and it works with any deployment topology. So that's zero-touch provisioning, and it's provided via the infrastructure operator. Next slide, please. Using the same Kubernetes APIs, we really wanted to address additional flows, ones that are not focused purely on deployment but provide more dynamic capabilities, and basically decouple two personas. One persona is the infra admin, the IT team that manages the on-prem compute across different data centers or locations; the other persona is the cluster creator, the dev or DevOps side that consumes the allocated compute resources and creates clusters from them. We've tried to create a different interface for each of those. One is for managing what we call an InfraEnv, another custom resource we added to the operator, which allows you to organize your infrastructure in a much more structured, still Kubernetes-native way, so you can divide per rack or per location and organize your hardware accordingly. And for cluster creation, while trying to preserve the same or a better user experience, we borrowed many of the practices we've learned from the Assisted Installer on cloud.redhat.com and created the same kind of experience, with pre-flight validation, monitoring, and the same UXD around it, so we keep the same structure and format. This is integrated with ACM and available as tech preview with Advanced Cluster Management, coming soon. With that, I'm going to pass it to Ramon. Thank you. Thanks, Moran. Okay, so let's talk now about bare-metal IPI. One of the features we are adding in 4.9 is the ability to use the regular bare-metal installer and workflow against bare-metal nodes provided by IBM Cloud. If you're familiar with bare-metal IPI, you will know that essentially what we are doing is using bare-metal nodes as if they were a cloud provider, thanks to the bare-metal operator, and this is exactly what we are doing against bare-metal nodes provided by IBM Cloud. This is not a new cloud provider integration that understands IBM Cloud, just your regular IPI workflow; that's an important difference, because we are also working on adding full support for IBM Cloud. Next slide, please. Okay, still on bare-metal IPI: one of the things you may be doing is provisioning your nodes with DHCP and PXE. That's very common when you provision bare-metal nodes, and it's integrated in the standard IPI workflow. When you do this, you need a provisioning network dedicated to provisioning your nodes over the network. But then you may want to expand your cluster with remote worker nodes that won't necessarily have access to this provisioning network, or you may not want any provisioning network whatsoever. You can do this with virtual media: the bare-metal operator is aware of the installation image in your cluster and will present it to the remote nodes' BMCs so that they can be installed from it. So if you've installed your cluster with PXE, you can now expand it with remote worker nodes over virtual media. And with that, I will pass it to Gaurav to talk about control plane updates. Yeah, I'm talking on behalf of Gaurav because he's on PTO. With OpenShift 4.9, customers and users who want to choose a scheduler behavior that fits their workload have two ways to do it: one is the pre-built profiles, which you see on the left here, and the other is building your own custom profile; both are supported in 4.9. As the names suggest, the pre-built profiles are LowNodeUtilization, HighNodeUtilization, and NoScoring, and I think the names are pretty clear about what they do. For building your custom profile, you build an extension using the scheduler plugin framework and then use that in your scheduling profile. Note that you can only use one scheduling profile on a cluster, but hopefully you'll make use of this.
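For the pre-built profiles, switching is a one-line change on the cluster Scheduler resource; LowNodeUtilization is the default behavior. A small sketch:

```yaml
# Select a pre-built scheduler profile for the whole cluster.
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  profile: HighNodeUtilization   # other values: LowNodeUtilization (default), NoScoring
```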
And with that, I'll hand it over for the next slide. Thank you. So I want to talk about the updates to the control plane, starting with custom route names and certs for cluster components. The default route names of OpenShift cluster components now allow for flexibility in customer environments: the current name, which is name.apps.cluster.domain, can be customized for both the OAuth server and the OpenShift console. If you've looked at your OpenShift console, the URL is something like console-openshift-console.apps.clustername.domainname. Customers have talked to us and said they want this to carry the name of their own organization, like XYZ Bank or ACME Insurance, in the cluster name, and so we now allow customization of both the routes and the TLS certs you use for those routes, for the OAuth server and the OpenShift console. To do that, you edit the Ingress config object, add a component route, say what hostname you want, and then specify the TLS cert you want to use. This is supported for the OAuth server from 4.9; it was supported for the console and the downloads page from 4.8. There are a few other components, like the monitoring pieces (Alertmanager, Prometheus, Grafana, Thanos), that are in progress, and the image registry as well, so hopefully soon you'll be able to customize the routes for those components too.
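Roughly, that edit looks like the snippet below; the hostname and secret name are placeholders, the secret holds the custom TLS key pair, and the component name and namespace values follow the documented ones as I remember them:

```yaml
# Custom hostname and serving cert for the OAuth server route.
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  componentRoutes:
    - name: oauth-openshift
      namespace: openshift-authentication
      hostname: login.acme-insurance.example.com   # placeholder custom host
      servingCertKeyPairSecret:
        name: login-acme-tls                        # placeholder secret with the TLS key pair
```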
Next slide, please. So next I want to talk about the updates to the audit config. Before we get to the latest updates in 4.9, a brief background on the API audit log profiles we introduced in 4.6 and how they evolved from 4.6 through 4.8 to 4.9. In 4.6 we introduced an API audit log policy that controls the amount of information logged to the API audit logs by giving you three profiles. First is the Default profile, which logs only metadata for read and write requests and does not log any request bodies except for OAuth token requests; that's the default policy. The second profile we let you set was WriteRequestBodies, which, in addition to logging metadata for all requests, logs the request bodies for every write request, including create, update, and patch. And last but not least, there's a profile called AllRequestBodies, which, in addition to logging metadata for all requests, also logs request bodies for every read and write request to the API server, including operations like get, list, create, update, and patch. The Default profile obviously has the least resource overhead, WriteRequestBodies has a little more, and AllRequestBodies has the most. To change the profile, you edit the APIServer object, add a profile under spec.audit, and specify the one you want: Default, WriteRequestBodies, or AllRequestBodies. In 4.8, the default log policy was changed so that request bodies are logged for both login creation and login deletion; previously the deletion request bodies were not logged, so that's the small change we made in 4.8. Moving to the next slide: now, in 4.9, you can configure the audit policy with custom groups, which means you can list multiple groups and define which profile to use for each of them. For instance, looking at the same APIServer object, under spec.audit I've defined a section called customRules, and under customRules I can have multiple groups: there is one group for OAuth server requests, where the profile I've set is WriteRequestBodies, and another group, system:authenticated, which is pretty much all authenticated requests to the API server, with the profile set to AllRequestBodies. For requests that don't satisfy the above criteria, we set a Default profile. So you can select groups and, for those groups, specify the level of audit logging you want. And the last update to audit logging in 4.9 is that we've provided the ability to disable audit logging: again, you edit the APIServer object and flip spec.audit.profile to None. The reason we've given you this switch is that customers came back and told us even the default level of logging was a little excessive for them, so they'd like the option of turning it off for the whole cluster, and now you have that option.
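Put together, the 4.9 shape of that configuration looks roughly like this (the group names follow the documented examples; pick profiles to match your own compliance needs):

```yaml
# Per-group audit profiles, with a cluster-wide default as the fallback.
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    customRules:
      - group: system:authenticated:oauth   # OAuth-authenticated requests
        profile: WriteRequestBodies
      - group: system:authenticated         # all other authenticated requests
        profile: AllRequestBodies
    profile: Default                        # fallback; set to None to disable audit logging entirely
```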
In the next couple of slides I want to talk about the latest and greatest updates to etcd, starting with cipher customization. You can now customize the ciphers used for etcd: again, you edit the APIServer object, and under spec.tlsSecurityProfile you define the type of TLS security profile you want to use. There are four profiles: Old, Intermediate, Modern, and Custom. The Intermediate profile is the default for the ingress controller, the kubelet, and the control plane, and it requires a minimum TLS version of 1.2. If you want, say, stronger protocols, stronger ciphers, and stronger algorithms, you can choose Custom and replace the ciphers under the custom section; once that's set and saved, you can come back, look at the cluster object, and see all the ciphers that are applied for etcd. Next slide, please. The next update to etcd is automated certificate rotation. There are basically four kinds of certificates used and generated to communicate with etcd. First are peer certificates, used for communication among etcd members; as you know, a default OpenShift cluster has three masters, so three etcd members that need to communicate using peer certificates. Next are client certificates: when the API server talks to the etcd server, it needs to present a client certificate. Third is the server certificate, which the etcd server uses to authenticate those client requests from the API server and so on. And last but not least there's a metrics certificate, which all metrics consumers use to connect to the proxy. The peer, client, and server certificate validity is around three years, and before 4.9, if they expired, you pretty much had to rebuild the cluster to re-mint those certificates. Now we're providing an automated cert rotation feature: these certificates are rotated by the system prior to expiration, so there's less overhead for an OpenShift cluster admin to worry about. Next slide, please. And last but not least for etcd, we've provided an auto-defrag feature in the controller. This enables an automated mechanism that performs defragmentation based on observations from the cluster; the goal is a controller that manages etcd defragmentation based on observable thresholds. Mind you, this is not an API we expose to end consumers; it's just an automated way for the etcd cluster operator to defrag the cluster. It checks the etcd cluster every 10 minutes, and the criteria it uses are: the cluster should be in a highly available topology mode, the cluster members should be healthy, the minimum defrag size (the minimum database size before defragmentation occurs) should be 100 megabytes, and the maximum fragmented percentage (the percentage of the store that is fragmented) should be 45 percent. If those criteria are satisfied, the etcd operator goes ahead and runs a defrag on the cluster, reclaiming space and freeing up the cluster. That leads to fewer resource outages, like lack of memory, resource load, or downtime, and better cluster performance. We did this because we observed the need in large-scale customer clusters and in some of our internal clusters that run our CI jobs, and we figured they could benefit from an auto-defrag feature. At the end of the day it's less overhead for the cluster admin, better reliability, and better stability for OpenShift. With that, I'll hand off to Marc and the team to talk about all the networking goodness. Great, thank you, Anand. There are many new networking features in OpenShift 4.9, and we continue here in this section with a few of those highlights. The first is the addition of enhanced egress IP load balancing for clusters built with OVN. Egress IP is likely a feature already familiar to networking-minded developers and admins: it provides the capability to define a source IP address, or a predefined range of source IP addresses, for a specified application's egress traffic. Cluster admins typically use this source IP address reservation to allow-list traffic at the edge of their cluster deployment, and in doing that they can filter the traffic that's allowed to travel externally from the cluster. This enhancement removes the previous OVN requirement that egress traffic go out a single node's interface, where it got SNATed to the egress IPs defined on that node; it adds the ability to use multiple cluster nodes to distribute egress traffic, avoiding a single-node choke point while still delivering stable source IP addresses for that traffic. Note that this feature was implemented in the last release for our default out-of-the-box networking, so this enhancement completes it for customers using OVN for networking. On to the next slide, please. Thanks, Marc. Let's continue with a look at some of the other major enhancements on the networking side. First up is support for the network adapters on the RHEL fast datapath list: starting with OpenShift 4.9, the NICs supported on the OpenShift platform are aligned with the RHEL fast datapath support matrix. What does this mean for our customers? From now on, any network adapter supported in RHEL will be supported in OpenShift without needing any further certification; the NIC support information can be viewed at the support matrix link right there. Next up, we have support for SR-IOV on single-node OpenShift. Customers want to run real-time, low-latency workloads on far-edge, resource-constrained hardware, and to help them with that we now have the SR-IOV operator running on single-node OpenShift, ensuring we have the high-performance networking in place that's needed to onboard those critical workloads. Finally, it gives us great pleasure to announce the general availability of running DPDK and RDMA applications in a pod with SR-IOV virtual functions attached, for better throughput and performance on OpenShift. This feature, tech preview since 4.3, is now officially GA.
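For the DPDK case, the SR-IOV Network Operator is what carves up the physical NIC and binds the virtual functions to a userspace driver. A rough sketch of a node policy, with the NIC name, VF count, and resource name as placeholders:

```yaml
# Create VFs on a physical function and bind them to vfio-pci for DPDK workloads.
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: dpdk-vfs                              # placeholder policy name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: dpdk_nics                     # pods request this as openshift.io/dpdk_nics
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 8
  nicSelector:
    pfNames: ["ens2f0"]                       # placeholder physical function
  deviceType: vfio-pci                        # userspace driver for DPDK (use netdevice plus isRdma for RDMA)
```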
Moving on to the next slide. Now let's take a look at some of the major ingress enhancements done for this release; the theme has mainly been security. First up is support for mTLS in the Ingress Operator: starting with 4.9 we have a client TLS enhancement in place that enables administrators to configure the OpenShift router to verify client certificates. This facilitates mutual TLS, where both the client and the server authenticate using their own respective TLS certificates. Moving on, we now have support for TLS version 1.3 for OpenShift ingress; this gives you faster TLS handshakes with better performance and stronger security compared to its predecessor. Next up, we have a global option to enforce HTTP Strict Transport Security, or HSTS, policy in this release. An HSTS policy enforces HTTPS for client requests to the hosts the policy covers, without making use of HTTP redirects; this protects users and minimizes security risks based on network traffic eavesdropping or man-in-the-middle attacks. In OpenShift 3.x and in prior versions of OpenShift 4, up until now, customers could provide a per-route annotation to enable HSTS, but for those with plenty of routes, or with regulatory compliance requirements, per-route annotations could be cumbersome. So we've now worked out an option to enforce this policy globally, with ease and flexibility. Finally, we've enabled a bunch of HAProxy timeout customizations as part of this release. The list here consists of newly added configurables on the ingress controller that allow you to customize when your connections will typically time out; these options can be set in the IngressController spec under tuningOptions.
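Both the mTLS piece and the timeout knobs land on the IngressController resource. A combined sketch; the ConfigMap name and timeout values are placeholders, and the exact set of tuningOptions fields is the one I recall from the 4.9 operator API, so check the field reference before copying:

```yaml
# Require client certificates at the router and tune a few HAProxy timeouts.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  clientTLS:
    clientCertificatePolicy: Required     # or Optional
    clientCA:
      name: router-client-ca              # placeholder ConfigMap holding the CA bundle
  tuningOptions:
    clientTimeout: 45s
    serverTimeout: 45s
    tunnelTimeout: 30m
```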
That's it for now, moving to Frank. Next slide, please. Thank you. Exactly, so the VRF CNI has graduated from Technology Preview to general availability. The VRF CNI permits connecting a pod to several networks with overlapping IP ranges by creating multiple routing and forwarding domains within the pod, thanks to the Linux kernel feature named Virtual Routing and Forwarding. The VRF CNI can run on top of any secondary CNI as long as it uses net devices, meaning Linux kernel devices and not DPDK-bound interfaces, for instance. At this point in time, the VRF CNI is deployed on top of the SR-IOV CNI and the MACVLAN CNI. Next slide, please; passing over to Jamie. Thank you, Frank. So, Service Mesh 2.1 will ship shortly after 4.9. It updates to Istio 1.9 and introduces new resources for federating service meshes across multiple OpenShift clusters, which allows meshes to be connected securely in a multi-tenant, multi-cluster fashion. Compared to the upstream Istio multi-cluster models, OpenShift Service Mesh federation does not require the Istio control planes to directly access the Kubernetes API servers of other clusters; instead, the meshes are connected by ingress and egress gateways across which all traffic between meshes travels. This allows remote services to be shared on a need-to-know basis, as determined by each individual mesh administrator, and traffic to and from the remote services can then be managed using Istio resources, such as authorization policies and virtual services, as if those remote services were in fact local. Service Mesh 2.1 also brings the Service Mesh extensions API to GA, which facilitates the use of WebAssembly for extending Istio and Envoy. Look for Service Mesh 2.1 in early November. I'll now pass to Anita; next slide. Hey, thanks Jamie. Hello everyone, I'll be covering OpenShift 4.9 on OpenStack, and today we want to look at the Octavia load balancer and its support as an external load balancer for services. OpenShift on-prem, natively, as Marc said earlier, does not have a native load balancer to support non-HTTP, general TCP traffic; MetalLB is coming in 4.10, but we already have the requirement and expectation that the cloud provider, in this case, provides load balancer services. This is to enable connectivity across OpenShift clusters as well as to connect their workloads. For OpenShift-on-OpenStack use cases, in OpenShift 4.7 we introduced the external load balancer with the UPI installer: you could use OpenStack's built-in Octavia load balancer for L4 through L7 services. Previously Octavia was only available with the Kuryr CNI, but now, with OpenShift 4.9, we have enabled it as a Service-type load balancer, and it's available with the IPI installer. Octavia has two backend options: Amphora, which is the HAProxy/IPVS-based load balancer, and the OVN backend. With Amphora you have to spawn a separate VM to handle load balancing for every OpenShift cluster, and it handles HTTP, HTTPS, TLS termination, and all the different TCP ports; UDP support is work in progress, coming with OpenShift 4.10, since it needs the external cloud provider, which will be added in OpenShift 4.10, and SCTP support is planned for OpenStack 17. The OVN provider for Octavia is tech preview right now: it doesn't have health checks and health monitoring for its members, but relies on the built-in Kubernetes pod health checks to verify that members are up and running. It has the same TCP support, with UDP coming in 4.10 and SCTP with OpenStack 17. The main advantage with OVN is that it's a distributed load balancer: the service is available on the node itself, so there's no need to spawn an extra VM or pay the extra hop in latency. So OVN is definitely a potential choice, with the caveat that it has no health checks of its own and relies on Kubernetes. Moving to the next slide: we are also now looking at Octavia support for router sharding with the Ingress Operator. A typical use case for router sharding is a DMZ, where you want to separate your external traffic from your internal or API traffic and use an ingress controller along with an external load balancer for each type of traffic: separation at the network level, at the namespace level, or at the service mesh level. With OpenShift 4.9 we now support, and have validated, the Octavia load balancer in all of these modes, where you can use a separate load balancer with an ingress controller for DMZ, internal, external, and namespace-scoped services, or for service mesh separation, using labels, namespace tags, and service mesh tags. That's it from me, moving to specialized workloads with Peter. Thanks, Anita. Let's talk about running virtual machines in OpenShift. As you know, we've been generally available for close to 18 months now, whether that's customers using virtual machines in a cloud-native way, like Lockheed Martin does as part of their AI/ML pipelines, or an online retailer that's building on the excellent work Ramon and his team did for bare metal, converting their three-tier applications onto OpenShift and transforming them; right now they have about 1,400 VMs, and they're going to continue to grow from there.
So one of the things we've talked about in the bare-metal case is OpenShift everywhere, and we're actually releasing a tech preview of the first hyperscaler we'll support, which is AWS. This is where you can spin up a cluster and run VMs directly in your OpenShift cluster up in the public cloud. The other thing we're delivering builds on our data protection story. We've got some basic building blocks today; the next release will continue that trend, and then you'll see a tech preview of the OpenShift API for Data Protection, where we work with partners to make sure you get the right disaster recovery not only for your OpenShift cluster but for the VMs within it and those persistent workloads. Now, if you were going to run VMs in OpenShift, you might not pick the biggest workloads like SAP to run on it, but we've done exactly that. Because we're based on KVM and use the same capabilities we do across all the other Red Hat virtualization platforms, we can take advantage of not only that knowledge but that technology to deliver very robust, high-performance VMs, and you'll see the first view of a non-production deployment of SAP HANA; we're heading toward certification in a future release. We've got a couple of enhancements for security and performance. The main one is the ability to run virtual workloads on a FIPS-compliant cluster, which will be a big help and of big interest to our financial and public sector customers. And lastly, as we've said for some time now, we believe that VMs and containers should be able to take advantage of developer tools regardless of what format the workload is in, so we've got some capabilities where you can use VMs in Service Mesh and get improved observability and security for your hybrid application. I'm going to turn it over to Miguel to talk about how to actually get your virtual workloads into OpenShift.
Thanks a lot, Peter. As you said, many customers are bringing workloads to OpenShift, whether in containers or VMs, depending on the pace and the containerization of the workload, if I may use that word. So if you want to bring some workloads that you need next to your containers, or you want to bring a lot of workloads and do the modernization at your own pace, you need a tool to do that, and that is the Migration Toolkit for Virtualization. We have released version 2.1, with the easy-to-use UI we had in version 2.0, improved of course, and we are ready to do mass migrations of VMs from VMware using what we call warm migration, which means copying the data while the VM is running, shutting it down, and copying the delta, or cold migration from Red Hat Virtualization, so customers that are using RHV can start bringing their VMs into OpenShift. We have a validation service to check the VMs before doing the migrations, so we are sure the migration runs nicely, and we have added capabilities to further automate the migration: the ability to run a container before or after the migration is done, to make changes in the infrastructure or in the VM itself, and also to analyze what is happening if there is any issue, with must-gather integrated into the Migration Toolkit for Virtualization. So again, we are ready to start bringing VMs, whether it is to have them next to your containers or as a plan to start migrating at a slower pace by bringing those workloads closer to OpenShift as VMs. And with this, I pass it over. Thank you. Next slide, please. I want to talk about a new GA we had for Windows containers a couple of weeks ago: bring your own host for Windows nodes. We announced general availability of bring-your-own-host, or BYOH for short, support for Windows nodes in OpenShift. With this feature you can now onboard custom Windows nodes, treated as pets, into the OpenShift cluster. We recognize that customers have dedicated Windows Server instances in their data centers that they regularly patch, update, and manage; often these instances run on on-prem platforms like vSphere, bare metal, and so on, and it is essential that we take advantage of these servers to run containerized workloads so their computing power can be harnessed in a hybrid-cloud world. Enabling this BYOH feature for Windows Server can help customers lift and shift their on-premises workloads to a cloud-native world. Next slide, please. So this is how it works: you have a cluster with three masters and, let's say, three worker nodes, and the three worker nodes could be Linux. You can add additional Windows nodes using the IPI installer: say you're running on Azure, AWS, vSphere, or whatever, using the IPI mechanism you can add more Windows Server instances that are managed through the Machine API and treated as cattle. Along with this, if you have, let's say, a dedicated Windows Server instance running in your data center that you treat as a pet, with a static IP address and a DNS name you use for it, that you regularly patch, update, and manage, and you'd like that dedicated Windows Server instance sitting in your data center to be onboarded to that cluster running on AWS, Azure, vSphere, or wherever, you can do that as a BYOH instance. Now you have an OpenShift cluster that comprises machines managed through the Machine API as well as machines that have been onboarded through the BYOH feature.
cattle in one happy animal farm, managed by the same OpenShift control plane, managed and scheduled by the OpenShift cluster. So this feature went GA a couple of weeks back, and with this you can now onboard Windows Server instances on new platforms like bare metal, vSphere and so on, so hopefully this will unlock customers who are on these platforms and who are trying to run Windows container workloads there. Next slide please.

Hello everyone, I am Andiz Alouk and I am the PM for OpenShift sandbox containers. Just a reminder, OpenShift sandbox containers is tech preview in 4.9, as it was in 4.8. It provides an additional runtime for customers who are seeking an extra layer of isolation. It complements our existing stack and follows a defense-in-depth approach, so it's a complementary feature for existing runtimes. It is based on the Kata Containers upstream project, and in 4.9 we're providing a couple of new features. One is that if you have a FIPS-enabled OpenShift cluster, you can now install Kata Containers, that is OpenShift sandbox containers, on top of your cluster without tainting the existing FIPS state of the cluster; that has been validated for both the operator and the Kata Containers runtime. We also now provide paths for updates and upgrades, whether it's for the operator through OLM or for our runtime, virtualization stack or hypervisor stack. Also, for customers who are having trouble, we have provided a must-gather image that they can use to collect information in case they have problems with their clusters, which helps narrow down root cause analysis, and we're working on adding even more information across the entire stack. Finally, if you have a disconnected OpenShift cluster and want to run OpenShift sandbox containers, the OpenShift sandbox containers operator now allows for that and can work in disconnected mode. With this I will hand over, and we'll cover hardware acceleration with Erwin.

Thank you. So hello, I'm Erwin Villene, product manager for OpenShift, AI and hardware accelerators. In previous OpenShift releases we have enabled new hardware accelerators including GPUs, FPGAs and ASICs, and each of these enablements required dedicated operators. So to help standardize hardware accelerator enablement, we are providing two new tech preview components in OpenShift this time. The first component is the Special Resource Operator: an orchestrator that can manage the deployment of software stacks for hardware accelerators. We started to create this component with NVIDIA, Intel and Xilinx, and we are now providing this toolbox with OpenShift. SRO can manage day-2 operations like building or loading kernel modules; you can use it to deploy drivers, deploy device plugins and enable Prometheus monitoring stacks. SRO uses recipes to enable out-of-tree drivers and manage the whole lifecycle of those drivers, and specific out-of-tree driver enablement will fall under the Red Hat third-party support and certification policy. The second component is the driver toolkit: a container image to be used as a base image for driver containers. The driver toolkit contains the tools and kernel packages required to build or install kernel modules, so you can use the driver toolkit for pre-built driver containers or local builds, and it reduces the cluster entitlement requirements we had with the previous container images. So we will use all these components to increase
OpenShift performance with accelerated workloads like deep learning or telco radio access networks. Now I'm handing over to Duncan.

Hi, and let's take a little dive into multi-architecture. I'm hoping you've all heard about this, but we have a developer preview of ARM available for you now. You can go out there and try it, and look, let's be honest, this is going to be huge. Whether you're looking at what cloud providers are charging for ARM or the new systems that are coming out with their low power requirements, it's just the thing that you're going to be asked about. Right now people are probably discussing things like software supply chain strategy, but by the end of 2021, in the next few months' time, everyone's going to be asking about ARM. So we have something now that you can not only see but can go out and touch and try, and we'd be really interested in hearing your feedback. Our roadmap is going to be really strong, so help us guide it. We're also working with IBM on their Power and Z features, and we continue to innovate there where we can. The one that's probably interesting to pull out there is multiple network interfaces: the code for that was always there, but it was the thing that we never really tested, and based on feedback from yourselves and customers we've gone out and made sure that it works. We're not only making that available for 4.9 now, we're going to make sure that you're supported right back to 4.6. We also don't want to forget about our developers, we all love developers, and great work by the OpenShift Pipelines team: not only have they got 1.6 out, but that's available for our Power and Z systems as well. And then for the rest of it, our concentration has really just been on new hardware. On the Z side they've upgraded their virtualization support, so z/VM has gone up to 7.1, and we're making sure that's available. On the Power side, well, they've only gone and brought out some new hardware, so hopefully those of you interested in this area saw the announcements around support for Power10. We have made sure that works well with OpenShift, and we've seen good traction here. One of the things that IBM is offering right now is on-demand pricing, so you can go and get access and run OpenShift in that Power10 environment, and if your workloads don't fill up the whole Power10 system but maybe want to burst later on, you can do that now. And with that let me hand over to Tony on the next slide; I think he's going to chat about Operator Framework. Thank you, Duncan.
So on the Operator SDK front there are three highlights in this new downstream release. Firstly, in response to the API removals in Kube 1.22, the updated bundle validate command helps developers easily review whether any manifests in the operator bundle still use those affected APIs. The command also provides guidance on migrating the affected manifests, so developers can more easily keep their operators compatible with the Kube ecosystem. Moving on to the next highlight: to support proxy-enabled clusters, operators must inspect the environment for the standard proxy variables and then pass the values down to the operand pods. As you can see on this diagram, on a proxy-enabled cluster OLM reads the proxy config and populates it as environment variables in the operator's deployment. So in this new release the SDK provides a helper library for reading the proxy info, along with some code examples that show how to pass those values down to the operand pods. That way developers can more easily make their operator offerings support proxy-enabled clusters as well. Lastly, starting from OCP 4.8 the downstream SDK defaults to UBI and other downstream images in project scaffolding. The downstream base images are also guaranteed compatibility fixes with OCP releases, so developers can more easily create and maintain operators in a Red Hat-supported way. Next, I hand it over to Daniel to talk about OLM updates.

Thank you, Tony. Let's talk about Operator Lifecycle Management. A lot of enhancements went into OpenShift 4.9, and let's start with automatic switching of catalogs. This is something that happens under the hood, mostly unnoticed, in every OpenShift update for all the catalogs that we ship with the cluster, and it allows us to put operators into a catalog that are known to work with that particular OpenShift version. While it has always been possible for customers and partners to create and ship their own catalogs, they didn't have access to this automatic switching of catalogs with a cluster update. We now enable that with a way to dynamically reference the image in which this catalog full of operators, which you can install via OperatorHub, is delivered, for instance by the use of template variables that refer to the current major and minor version of the Kubernetes platform that OLM is running on. So customers and partners can use that now to ship their own catalogs, which will automatically get switched with a cluster update, to also take advantage of this way of shipping a supported set of operators for a particular OpenShift release. Further, in the same sense of operator release compatibility, in 4.9 we introduced the ability for operator developers to denote in the operator metadata the maximum OpenShift version that this operator has been tested with and is known to work with. So when you as a developer set that in your metadata, you are essentially shipping a support matrix boundary to your customers. This is something that administrators will notice when cluster updates are available and they have operators installed whose maximum OpenShift version is set to whatever the current version is. This is actually how we inform administrators on the update from 4.8 to 4.9 that they have operators installed which still refer to the APIs that have been removed in the OpenShift 4.9 and Kubernetes 1.22 release.
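That maximum-version metadata is carried on the operator's ClusterServiceVersion. A rough sketch of how an author might set it, assuming it is declared through the olm.properties annotation with an olm.maxOpenShiftVersion entry (the version value below is just an example):

```python
import json

def with_max_openshift_version(csv, max_version="4.9"):
    """Annotate a ClusterServiceVersion dict with the maximum OpenShift
    version this operator release has been tested against, so OLM can warn
    administrators before a cluster update crosses that boundary."""
    annotations = csv.setdefault("metadata", {}).setdefault("annotations", {})
    annotations["olm.properties"] = json.dumps(
        [{"type": "olm.maxOpenShiftVersion", "value": max_version}]
    )
    return csv

# Illustrative CSV skeleton; a real one carries the full operator metadata.
csv = {
    "apiVersion": "operators.coreos.com/v1alpha1",
    "kind": "ClusterServiceVersion",
    "metadata": {"name": "my-operator.v1.2.3"},
}
csv = with_max_openshift_version(csv, "4.9")
```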
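And a quick aside on the proxy pass-through Tony described: the idea is simply that the operator reads the HTTP_PROXY, HTTPS_PROXY and NO_PROXY variables that OLM injected into its own deployment and copies them into the pod template of whatever it deploys. This is only a rough sketch of that pattern, not the SDK helper library itself; the operand name and image are made up.

```python
import os

# Standard proxy variables that OLM injects into the operator's own
# deployment when a cluster-wide proxy is configured.
PROXY_VARS = ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY")

def proxy_env_vars():
    """Collect the proxy settings the operator itself was started with."""
    return [
        {"name": name, "value": os.environ[name]}
        for name in PROXY_VARS
        if name in os.environ
    ]

def operand_deployment(image="example.com/my-operand:latest"):
    """Build a Deployment manifest for the operand, passing the proxy
    settings down so the operand pods can reach outside the cluster too.
    (Names and image are illustrative.)"""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": "my-operand"},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": "my-operand"}},
            "template": {
                "metadata": {"labels": {"app": "my-operand"}},
                "spec": {
                    "containers": [{
                        "name": "operand",
                        "image": image,
                        "env": proxy_env_vars(),
                    }],
                },
            },
        },
    }
```

The downstream SDK ships a helper library and code samples for exactly this pattern, so in practice it is mostly a few lines of glue.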
So this is now available for developers as well, and they can really make sure that customers stay within the supported boundaries of an operator version compared to a cluster version. Another thing that we saw is that the bundles in which operators ship their metadata grew in size, mostly because the custom resource definitions that operators ship to describe their user interface and APIs have really, really large sections of text that describe the API, the OpenAPI spec. This has grown towards the limit of 1 megabyte imposed by the etcd database underneath the cluster, and in the past we had to ask authors to reduce this data by not shipping the OpenAPI spec for all their APIs. However, we want that spec, because it really drives the user experience with validation and also how our UI builds its dynamic forms. So in OpenShift 4.9 we now use inline compression on the bundle content while we handle and extract it, so we stay far below the 1 megabyte limit in most cases. We've also reduced the amount of resources that OLM itself uses; specifically, the catalog pods could take up a significant amount of memory if the catalogs were large, so the RAM utilization of all the OpenShift default catalogs, almost 500 operators nowadays, is now a fraction of what it used to be. And we've added a lot of status information for debugging and troubleshooting to the user-facing APIs of OLM, namely OperatorGroup and Subscription, so admins have one central place to look at what's causing an install to fail or an update to error out, in one high-level API, without the need to go into logs or subordinate object statuses to figure out what's going on. That's it for Operator Lifecycle Management. Let's continue with Quay.

So Red Hat Quay 3.6 ships almost in parallel with OpenShift 4.9, and the first thing that I'll introduce is actually not really connected to Quay itself. It's a new flavor and version of Quay that we ship as part of OpenShift. We call it the mirror registry for OpenShift, and it is essentially a very streamlined, simple-to-use installer for Quay that deploys a very specialized and stripped-down Quay deployment for the sole purpose of bootstrapping a disconnected cluster install. So users don't have to rely anymore on unsupported upstream registries. We give them a simple way to install a registry just for the sole purpose of mirroring, and within a minute they have a fully set up registry, which they can then point the oc utility at to start mirroring the OpenShift content for disconnected clusters. While this is not the full-blown HA, performant version of Quay that we would like customers to run on top of OpenShift, it allows you to get over this first initial barrier of installing a disconnected cluster by having a registry. It's supported on RHEL 8, the only requirement really is Podman, it's going to ship shortly after 4.9 GA, and it's included with every OCP subscription at no additional cost; support-wise it's covered by OCP as well. Next slide. Another important aspect for OpenShift users using Quay is the operator. This is our go-to installer for HA deployments of Quay, and it has seen a lot of improvements in 3.6. The highlight is the much sought-after ability to let OpenShift take care of the certificate management for routes. This is now the default; customers and users can still bring their own TLS certificates, in which case we will not use an edge route but a passthrough route.
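Staying on the OLM troubleshooting point for a second: those Subscription status conditions are ordinary custom resource status, so an admin can read them from anywhere instead of digging through logs. A small sketch using the Kubernetes Python client, purely illustrative; the namespace is an example and the exact condition types depend on the failure mode.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod
api = client.CustomObjectsApi()

# Subscriptions are namespaced custom resources owned by OLM.
subs = api.list_namespaced_custom_object(
    group="operators.coreos.com",
    version="v1alpha1",
    namespace="openshift-operators",   # example namespace
    plural="subscriptions",
)

for sub in subs.get("items", []):
    name = sub["metadata"]["name"]
    for cond in sub.get("status", {}).get("conditions", []):
        # Conditions such as unhealthy catalog sources or failed resolution
        # surface here, so one API read explains why an install is stuck.
        print(name, cond.get("type"), cond.get("status"), cond.get("message"))
```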
On the Quay side, we've also enhanced the error reporting for all the various parts that make up a Quay deployment, so you can really see which part and which component of a Quay deployment is still deploying or has degraded; we have separate status conditions for that now as well. And we support TLS-encrypted connections to external databases, such as those from cloud providers, in 3.6 as well. There's a sizeable portion of users still on the older operator from the earlier Quay 3 releases, and we are offering direct updates for these users as well. Next slide. Last but not least, a feature for users that are using Quay as a central ingress point for all the upstream registries that they let their developers use, or for storing OpenShift core images for disconnected mirroring, is the support for nested repositories. What this allows is to essentially structure content within one Quay organization, so that you can avoid naming collisions and have some sort of logical structure, if you will, within one bigger bucket. This is important when you bring in content from other registries but want to retain their original structure, so you are able to add more path elements to the repository name, and an admin will also quickly see all the various parts in a certain Quay organization that have been mirrored from OpenShift catalogs and OpenShift core images. There's one small wrinkle with that: like the logical subfolders you have in object storage, these do not come with their own permission management, so all the nested repositories that you see on the left-hand side of this slide are individual images that are still subject to their own individual permissions and access. This feature will also come with Quay 3.6, and it will be available on Quay.io, the hosted version of Quay, towards the end of this year as well. With that I hand it over to storage. Take it away, Gregory.

Hi everyone. So on the storage side we are continuing our cloud providers' CSI transition journey, and to be successful we need both the CSI drivers at GA as well as the migration path for customers that were using the in-tree drivers. In OCP 4.9 we are graduating new CSI drivers to full support with their respective operators, namely Azure Stack Hub and AWS EBS. We are also introducing AWS EFS as tech preview for RWX use cases on Amazon, and we added some enhancements to the vSphere operator so that it automatically creates a storage class along with a vSphere storage policy. On the CSI migration front we are adding GCE Disk and Azure Disk as tech preview. Last but not least, we would like to give a heads-up on the vSphere migration, as VMware is one of our top OCP infrastructure providers. As mentioned earlier, OCP 4.9 is the release target that will trigger the CSI migration, and that will then be the only option to consume storage. The vSphere CSI driver requires VM hardware version 15 and an underlying vSphere 6.7 Update 3, therefore we need to have customers informed and planning to upgrade, so if you have customers in that situation please reach out to them, and we will also work with marketing to advertise that. The next slide outlines what's new in ODF 4.9. Before we go into the features, we want to mention the rebranding of OCS to OpenShift Data Foundation starting from 4.9; the change can be found across the product and the marketing materials. There is no migration or pricing change, as the ODF SKUs were already introduced earlier this year.
On the DR side we are releasing an asynchronous-replication-based regional DR solution managed by ACM, as tech preview, with failover and failback automation. Next, on the security front, we are adding PV-granularity encryption with KMS integration using a service account, and we currently support Vault as the KMS. In addition to that, we are announcing a joint effort with IBM FlashSystem to include it in ODF deployments, with monitoring and dashboards. We added a new capability in the Multicloud Object Gateway, namespace buckets, to replicate data between ODF object storage and the native cloud storage. Last on this list is the managed service for ODF on ROSA, which just started an early trial period for early adopters. This is an add-on offering, and if you have any customer interested in that offer please reach out to ODF product management. That's it from me on the storage side, moving on to management and security.

Awesome, thank you. So this quarter we really wanted to focus on accelerating Kubernetes security innovation, and we wanted to do that in three ways. We wanted to do it through the theme of advanced security use cases; at the end of the day, Red Hat Advanced Cluster Security wouldn't be the same without that. We want to focus on self-service security workflows: developers and the security team need to bridge the skill gap between a security person not knowing Kubernetes, or the application, holistically, and a developer not necessarily understanding security in as much depth as a security engineer would. And we also wanted to focus on expanding platform support. So in terms of advanced security use cases, we've enhanced protections for the Kubernetes API server. What this is going to allow teams to do is monitor access to their most sensitive secrets and config maps in their environment, so you can tell if someone unauthorized is accessing one of your secrets with a cluster-admin role. We also want to help organizations close the cybersecurity skill gap. Many organizations use the MITRE ATT&CK framework, an industry-standard framework used to do gap analysis from an attacker's perspective, so you can see the common tactics and techniques observed in the cybersecurity wild. It also helps with incident response prioritization: if someone is going through potential security incidents, they're going to prioritize something that is indicative of lateral movement in an environment, as in "I've already pwned you", over an initial access attempt on a website. Another primary goal we had was to shorten feedback loops in the environment. When a security incident or a violation occurs, we want to be able to shift left and immediately inform users that security is flagging them. So we want to shorten the feedback loop by enabling teams to use namespace annotations to define where exactly they want that feedback to go. You can use that to send feedback to a Slack channel, an email distribution list, or whatever way your team operates today. And we've also enabled scoped access controls for self-service security. The reason we do that is because teams want to be able to log into a user interface and bridge the context gap between a security and a development team. So this is a way that security teams can share a multi-tenant environment with development teams but only give them access to the organizational details that they need.
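To make that annotation-based routing a little more concrete, here is a minimal sketch: the notifier (Slack, email, and so on) is configured with an annotation key of your choosing, and any namespace carrying that key overrides where violations for its workloads are sent. Both the key and the channel below are made-up values for illustration, not documented defaults.

```python
import json

# Namespace annotated so that violation notifications for its workloads are
# routed to the owning team's channel.  "acs-alert-channel" stands in for
# whatever annotation key was configured on the notifier (illustrative only).
namespace = {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "name": "payments",
        "annotations": {
            "acs-alert-channel": "#payments-security",
        },
    },
}

print(json.dumps(namespace, indent=2))
```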
And finally, we want to continue to improve our support for the OpenShift platform as a whole. This quarter we have certified and tested ROSA and ARO, and those are officially supported. We are currently working on OSD as an active priority. And finally, we've added first-class support for running configuration checks on DeploymentConfigs rather than just Deployments today. And Scott, I want to hand it over to you to round out management.

Right on. Thanks, Jamie. Next slide. Here we go. So thank you all for continuing to be with us. I know some of you probably have to drop, but it's only getting better from here. You probably don't even realize how many things we're putting into OpenShift 4.9, but management is what really makes it easy. Easy to spin out clusters left and right, easy to do GitOps at scale, easy to do all those things, and we do it better together. So ACM really brings us to the center point of OpenShift Platform Plus: bringing those features that Jamie just talked about with cluster security, bringing those features from the OpenShift GitOps teams, bringing application sets at scale with cluster selectors, doing things around governance, risk and compliance by centralizing those alerts into the hub alert manager, with the ability to spread those out to Slack, PagerDuty and other third-party tools that you have implemented. And you asked for cluster health metrics for non-OpenShift clusters, and we brought that to the table: we're now bringing cluster health for EKS, GKE, AKS and IKS, so all major cloud provider clusters are being reported back through the cluster health metrics, centralized at the hub Thanos. On top of that we've brought the business conversation into this. We're reporting service level objectives into that dashboard, so you know the uptime, you know your error budget, and how you're targeting and tracking the trends. This is all done from one single pane of glass. We always say that management makes everything else easier, and you have to put it first. So keep that in mind: to our customers and our partners, as you work with OpenShift across this portfolio, we, ACM, are at the heart of it all. Next slide please. Going from better together, it really gets more about the cluster and the experience of being able to deploy it with FIPS compliance, being able to show our public sector teams what we're doing with Microsoft Azure Government as we look forward to bringing on other private regions like AWS GovCloud in the future. Moving forward, you can now deploy your hub on IBM Power and Z, and that's full GA, so guys like Duncan, don't worry about it, we've got you covered, we can manage your fleet from Power and Z. Centralized infrastructure management, like we're talking about, we can do that for bare metal deployments, and that's at tech preview. Also look for enhancements to advanced image registry configs, so that within your public cloud you can define image registries for the cluster and for the add-ons, for all the features that we deploy out there in those public clouds. We're really trying to make that cluster experience easier for you, so you can do it repeatedly, operationalize your teams, and be successful with clusters at scale. I'm now going to turn it over to Brad Whitemitter, who's going to talk to you about management at the edge. Take it away, Brad. Hello OpenShift world, and belated birthday wishes to you, birthday boy.
Yep, ACM for management at the edge, no matter how far from the traditional core data center your clusters are located. ACM supports deploying a thousand single-node OpenShift clusters and bringing them under management in the single pane Scott mentioned, no jumping from system to system. Zero touch provisioning is a big part of that; we'll call it ZTP from here on out. It's a project that deploys and delivers OpenShift for clusters in an architecture named hub and spoke, where the hub is a cluster able to manage many of those spoke clusters. The hub cluster will use ACM to manage and deploy those spoke clusters, and with that we have IPv6 dual-stack support along with connected and disconnected scenarios. We've heard some industry references to telco, and many of those are in a disconnected scenario, especially as you get further away from the core data center, further out on the edge, where the worker nodes can either directly access the internet or not, in the disconnected mode; that may be by design or some type of act of nature. The policy generator is a big part of this, because, as you've heard other folks mention with the pets-and-cattle analogy, this is where our clusters are disposable, and we can simplify that with a GitOps approach to the distribution of Kubernetes resources managed through ACM policy. We expect these to be stored in Git, and as we deploy this through ZTP we use Argo CD to push to the hub and then the policy engine to push to the managed clusters. Depending where you're at in that edge location, using the telco industry, which would be a horizontal, as the example, we can deploy different profiles, and if it's a vertical market, fewer policies will be deployed. These managed clusters are handled by backup and restore as part of our business continuity story, and yes, you've heard a few of us at Red Hat mention it, but we think of these clusters as being disposable, and they can be replaced more efficiently than with traditional business continuity stories when using ACM and a GitOps approach. So whenever you hear us referring to clusters as cattle and not pets: it's going to save you time and money when you start thinking of it like that, versus the nurturing required with clusters where tribal knowledge has to keep those things running. You can think of ZTP as a project that includes a solution set: it's got the Assisted Installer and single-node OpenShift, referred to as SNO, and these are deployed and managed via ACM. I'd like to go over some of the provisioning building blocks. ACM deploys the SNO, which is the OpenShift Container Platform installed on single nodes, leveraging ZTP. The initial site plan is broken down into smaller components, and the configuration data is stored in a Git repository. ZTP uses a declarative GitOps approach to deploy these nodes, and some of this includes deploying Red Hat CoreOS on a blank server, deploying OCP on single nodes, creating cluster policies and site subscriptions, leveraging a GitOps deployment topology for a develop-once, deploy-anywhere model, then making the necessary network configurations to the server operating system, things such as PTP, performance profiles and SR-IOV, and lastly downloading images to run workloads, things like CNFs. One of the other components of ZTP is the Assisted Installer, a project to help simplify installing the OpenShift Container Platform on a number of different platforms you can deploy on.
The AI service provides validation and discovery of the targeted hardware and greatly improves the success rate of installation; that's something we're striving to improve with every release. Advance to the next slide please. As Scott mentioned, the customers have spoken, we've listened, and we've finally been able to provide crucial features around business continuity. The ACM backup and restore is a tech preview using a backup solution based on the OpenShift API for Data Protection, so the managed cluster configurations can be backed up and restored on a different cluster. Leveraging ODF, that is OpenShift Data Foundation, previously OpenShift Container Storage for those who haven't seen that yet, ACM also enables disaster recovery across stateful workloads; that's also a tech preview. For your business-critical stateful apps, ODF along with ACM will ensure you have a robust multi-site, multi-cluster DR strategy. Both ODF and ACM enable fast and consistent application DR that protects both application data and application state, while ensuring your application data volumes are consistently and frequently replicated, resulting in reduced data loss on recovery. The DR operators enabled with ACM automate the DR failover and failback process, so recovery is fast and less error-prone than manual operations. I also want to call out the PV replication side of this with VolSync, which is a tech preview. It ensures resilience for business-critical stateful apps by enabling a planned application migration strategy across your clusters. You can also use VolSync to create your own DR solution when working with non-ODF storage or heterogeneous storage products, and you can expect this business continuity story to continue to evolve, so please keep the feedback coming. In closing, I would like to reinforce how ACM saves time and cost while using data center compute in an efficient and compliant fashion, as Scott was mentioning with some of the features in his slides, and I'd also like to encourage you to review our blogs at cloud.redhat.com/blog, because a lot of the things we've discussed here today are regularly being published to those blogs, so there are deeper dives on AI or on business continuity; actually, one of the exciting ones is a five-part blog series that covers a lot of the things we just discussed in the ACM slides. Thank you, on to our next presenter please.

Thank you, Brad. So we have been working a lot in cost management to improve the service, so you will see reliability improvements, including a new version of the cost management metrics operator that provides additional logs for supportability.
We are also listening to customers, so you will see improvements to the overall user experience. You are now able to distribute the cost of nodes and clusters based on memory instead of CPU, one of the requirements we had from several customers, so if your cluster is memory-limited rather than CPU-limited your costs will be more accurate and closer to how you want to reflect your business. We also worked on the overall usability of the tool: we are getting rid of the infrastructure-versus-supplementary distinction, and you will see that you can show it only if you need it, just to make the interface clearer and show fewer, simpler graphics all around, so it's easier to understand and follow. Another requirement we had from customers is being able to export labels in CSV files, so when you export the data from cost management you can now get the same labels that we are showing and using, then process that in whatever tool you use for your reporting and build your reports and capabilities out of it. And another thing that was really requested by customers: you can now pause your sources. If you have a source that is no longer there, because your cluster is transient or you basically killed it to create a new one, you can pause that source and you will not see any error messages, because we will know that the source is no longer sending information to the cloud. And with that, next slide, I pass over to Frank, who is going to talk about telco 5G. Thanks.

High-performance applications running DPDK, as in the case of cloud-native network functions, need their CPUs, network interfaces and memory to be located on the same NUMA node; any cross-NUMA situation leads to a significant and unacceptable performance drop. In other words, CPU, devices and memory absolutely need to be on the same NUMA node. The kubelet relies on the topology manager for container NUMA alignment, and so far only CPU and devices were NUMA-aligned. With the addition of the memory manager, the kubelet can now align regular memory and huge pages with CPU and devices. The memory manager is enabled for the whole cluster by the Performance Addon Operator for the topology manager policies restricted and single-numa-node, and NUMA alignment is then enforced for all containers belonging to the guaranteed quality of service class.
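To make that concrete, here is a rough sketch of the kind of pod this targets: requests equal limits for CPU, memory and huge pages, which puts it in the guaranteed QoS class, so with a restricted or single-numa-node policy the kubelet can pin all three resources to one NUMA node. The name, image and sizes are placeholders.

```python
# A DPDK-style workload in the guaranteed QoS class: every resource has
# requests == limits, which is what lets the kubelet's topology and memory
# managers place CPU, memory and huge pages on a single NUMA node.
resources = {
    "requests": {"cpu": "4", "memory": "2Gi", "hugepages-1Gi": "4Gi"},
    "limits":   {"cpu": "4", "memory": "2Gi", "hugepages-1Gi": "4Gi"},
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "dpdk-app"},               # illustrative name
    "spec": {
        "containers": [{
            "name": "dpdk",
            "image": "example.com/dpdk-app:latest",  # placeholder image
            "resources": resources,
            "volumeMounts": [
                {"name": "hugepages", "mountPath": "/dev/hugepages"},
            ],
        }],
        "volumes": [
            {"name": "hugepages", "emptyDir": {"medium": "HugePages"}},
        ],
    },
}
```

With the Performance Addon Operator enabling the memory manager and the single-NUMA-node topology policy cluster-wide, a pod shaped like this ends up fully NUMA-aligned.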
Next slide please, and passing over to Rob. You want to go ahead and cover this one as well? Yes, yes, I can, Rob, as a backup. Hey Rob, we're losing you; I think we're going to let Frank cover this one. Okay. So with OpenShift 4.9 we have enhanced our PTP support to support the boundary clock function, which is really key to the vRAN deployment, and we comply with the overall O-RAN design. We have also contributed to O-RAN by delivering a node-local event bus for PTP events, so we can deliver events super fast, with really sub-microsecond precision, via a sidecar image that can be injected in any CNF belonging to the very far edge node that we call the DU, the distributed unit. Basically, the DU is the first piece, the software running at the bottom of the antenna if you want, and PTP is a mandatory function for O-RAN, because you need all of the antennas to be synchronized in order to avoid destructive interference between them and also to get super high 5G bandwidth. So all in all, without PTP you don't have a RAN, and we have greatly improved PTP in 4.9 with the boundary clock and the event bus for the local events. Next slide please, and I'm passing over to Shannon for observability.

Thank you Frank, good morning, good afternoon everyone. We've continued to make some substantial movement here with OpenShift monitoring. With the introduction of some new kube-state-metrics and Alertmanager functionality we've covered a lot of different customer requests: enhancements to the Alertmanager rules shipped by the cluster monitoring operator, refined triggering conditions, such as alerts on kube-state-metrics and Prometheus to detect more quickly when disk space is actually running low from a container perspective, which is pretty important across customers, more frequent intervals, and everything from error detection at the kube-state and Thanos query level, as well as for monitoring user-defined projects. We've made enhancements to remote write storage for Prometheus metrics, and we also have support for new Prometheus and Thanos versions, so we're making a lot more investments and looking down the road at Thanos. If we could please move to the next logging slide, that would be great. So for logging, the next chapter here: we've added new support and flexibility, as we've seen a lot of requests to improve performance and to start coming up with solutions to the scalability issues with Fluentd. We are now offering a preview, in logging 5.5, consumable in 4.9, to use Vector; Vector collectors are going to be important for future scalability. As we replace Fluentd with Vector over time we plan on offering a smooth transition, an upgrade path from previous GA versions of Fluentd to allow a direct path to Vector. With those growing enhancements, as we evolve forward we're transitioning into a more compatible, API-style design; that way we can extend requests down through Vector via API calls, expand our abilities for how we do Vector collection for logging, and really grow and build upon that in the future as we progress forward with OpenShift logging. We've also seen requests from customers to start assembling multi-line stack trace log messages. Essentially what that means is customers didn't want to have to trace logs across multiple lines of output, they wanted a more unified experience, so we've provided that enhancement for stack traces, where we can now use JSON as the assembly format, and essentially you can read JSON formats
and feed more of that multi-line tracing into a single source, which will then help us considerably down the road, even with distributed tracing across workloads. We also have more flexibility to provide simple log exploration in another area, where we are going to offer a new API experience in the OpenShift console: we can display the contextualized logs inside an individual alert, so we can start to see more relationships between alerting and the capability of using the explorer to expand and drill down for root cause analysis. And finally, we have new support and capabilities: as we grow and expand with OpenShift we are making more investments and looking at Loki. Loki is a logging solution, and our Loki operator is now capable of providing in-cluster solutions, so we have the ability to install, update and manage an in-cluster alternative through Loki, with scalability and improved log performance. That said, I will pass off to the next session.

Thanks, Shannon. So the last thing in this presentation is Insights. Insights is a set of services and features that we offer for free to all our OpenShift customers, available for you on console.redhat.com. We already talked about cost management as one of the Insights services, and I will be talking about Insights Advisor. Insights Advisor is all about delivering proactive support to all connected users. It gives you recommendations on potential issues that might impact your cluster performance or security or some other aspect of your cluster, and through these recommendations you are able to avoid those situations. In OpenShift 4.9 we improved a couple of things around Advisor. First, if you are an air-gapped customer and you are not connected to our infrastructure, we allow you to manually upload the Insights operator archive and still get these Advisor recommendations, so you can still check your clusters, still get these cool recommendations and prevent some issues on your clusters. Also, if you are using the Advisor interface, and you have been using console.redhat.com for the different features that it offers, you might have noticed that there is a feature for notifications. Right now you can configure notifications over email or Slack message or through a webhook for critical and important Insights events. This means that every time a critical or important issue appears in your cluster you will get an email, as you see on the right side here, with all the details about how you can prevent it. The recommendations we are working on are based on work with OpenShift engineers, OpenShift subject matter experts and support people, and with OpenShift 4.9 we released, well, it is actually rolling out on our side, a new set of recommendations that are focused on storage and networking configuration. We look at misconfigurations in user management, and we also have some new recommendations if you are running specific workloads on your cluster, like SAP as an example. The Insights operator actually accompanies our telemetry operator; this feature is known to us as remote health monitoring. We are working with the Insights operator on a smaller footprint with something we call conditional data gathering: when a specific condition happens on the cluster, we collect additional data so we can make the recommendation more spot-on and more accurate, so you can again resolve it. And I guess this is the last slide, so bringing it back to Rob. Thank you very much.
Thank you all for joining us today to hear about OpenShift 4.9. As a reminder, we have tons of live streaming that we do all the time, so please look for the calendar for that. Please do check out OpenShift 4.9 in your existing clusters, or for new installs, in just a few weeks here when it becomes GA. Again, thank you so much for joining, and we'll see you next time.