Alright, hello everyone. Welcome to What's New in OpenShift for OpenShift 4.10. I'm Rob Szumski, here on behalf of the entire product management team, and we are here to tell you all about it. As a reminder, what's cool about this is that it's part of our Red Hat field enablement, but we also stream it live for our customers, so you get to experience all the goodness at the exact same time.

Today we're going to talk about the entire Red Hat Open Hybrid Cloud platform, which includes everything you see here. OpenShift Platform Plus is the package that has all of this in it, covering everything from multi-cloud, multi-cluster management, through all the security tools you need for all those clusters, getting content out to them through a global registry, all the data services, plus all the developer and application platform services that you've come to know and expect inside of OpenShift for building your applications, getting them out there, and keeping them secured. We've got the PMs from each of these areas to talk about what is happening specifically in 4.10.

Looking at 4.10 at a quick overhead view, we've got a bunch of new installer platforms. Our IBM Cloud IPI is going GA, as well as Azure Stack Hub, and following quickly behind that is Alibaba Cloud as a tech preview. This is a pretty mature one, so definitely test it out; it'll be GA pretty soon. Also GA'ing today are ARM instances for Amazon. We've been testing this out and it's been well received, so please take a look at that.

In the automated operations bucket, we've got a bunch of cool things. We covered last time how our EUS policy is changing: 4.10 will be one of those EUS releases, and it'll be one of the times you can do reduced worker reboots during an EUS-to-EUS upgrade. We'll talk about that, as well as some other changes to cluster upgrades around reducing risk. If there is something we're investigating on a specific platform, we may want to suggest that folks don't update on that one, but roll forward on all the rest; there are new features there. Also, for our disconnected clusters, around how you mirror content and run in a disconnected environment, we've got a bunch of great things.

On the workload side: a bunch of new compliance profiles for our Compliance Operator, to keep you secure with your government regulations, and sandboxed containers are GA, so if you need a little extra isolation, you can wrap your workloads in those. On the networking front, two exciting things. One, you can now bridge your virtualization VMs into a service mesh, so if you have VMs and containers that want to talk securely together, they can use your service mesh. And two, for exposing services outside of your cluster with MetalLB, you can now have BGP routes for those services and get everybody connected to the content. There's a whole bunch more, so stay tuned and we will dig into it.

As you know, we ship a new version of Kubernetes with every OpenShift release, and so we're shipping 1.23 today, with a cool set of new features that we're going to pick up. Clusters are going to default to dual-stack networking: this was a feature gate upstream that's now being removed, so it's on by default. This has actually been GA in OpenShift for a few versions now, since 4.8, so no change for us there. Pod security admission is graduating to beta.
This is something where we're doing a lot of upstream contribution, and our plan is to support the upstream pod security admission work when it comes in and goes GA. We'll continue to support our SCCs side by side, so you can use the new stuff or the old stuff; we've got you secured the entire way through. And if you've heard us talk about our CSI migration work, that continues upstream: this is replacing in-tree drivers with the corresponding CSI driver versions. We've been doing this work in OpenShift and will seamlessly migrate to it in the future, so we've been watching this one pretty closely. And then, last, on the software supply chain front, there's been a ton of news about this recently. Kubernetes has put some attention into their own release process and has achieved level one compliance. We also put a lot of emphasis on this on the OpenShift side, and Red Hat-wide, to be honest, so we're excited to see that innovation. So yeah, we've got CRI-O 1.23 coming in, and Kubernetes 1.23.

Then the roadmap. I'm not going to go over all of this; feel free to pause your video or take a look at these slides when they end up on the Red Hat website. We've got a ton of work going on across the entire platform. What you see in Q1 is most of the stuff we're going to talk about today, and then a bunch of stuff going on for Q2. You'll see a ton of investment in our hosted platforms and app and dev services, and of course the platform continues to move forward. Into the second half of this year and early next, there's tons of work going on all across the board, pulling in the best of breed from the Kubernetes ecosystem. As always, if you have questions on this, we love to talk to customers directly, so please reach out to the PM team.

Lastly, I want to talk about RFEs, our requests for enhancement. We shipped 45 RFEs, and these are direct customer asks; they come to us, and you'll see the top ones here. A lot are around network configuration. This is a classic thing: everybody's network looks different, so we want to adapt and meet you in your world, and there are some great options there. Also debuggability: the way that we own management of the nodes all the way up into the cluster means we can emit more events to help with debuggability. And the availability sets for Azure one is interesting because our default is to go across all the zones in a cloud region that you give us, but some regions don't have enough zones, so this adapts to that with availability sets in Azure to still get you an HA cluster. Really cool to see those. Like I said, there are 45 of these; this is just a subset. OpenShift keeps getting better.

And with that, let's jump into our spotlight features. I'm going to start handing it off to the PM team; you're going to hear from the best and brightest, and I will start off with Adel.

Hello, everyone. My name is Adel Zaalouk, and I am the product manager for OpenShift sandboxed containers. The good news is it's going from tech preview to GA, so mark that. But first, for the people hearing about it for the first time: what is sandboxed containers? It's an additional, optional runtime that you can use for your untrusted or third-party applications, or for applications that require extreme and stringent levels of isolation. You get that for free in OpenShift when you install the OpenShift sandboxed containers Operator.
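Before we get into the 4.10 features, here is a minimal sketch of what opting a workload into a sandbox looks like once the operator is installed; the pod name and image are placeholders, and the `kata` RuntimeClass is the one the operator creates:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: isolated-app               # placeholder name
spec:
  runtimeClassName: kata           # RuntimeClass installed by the sandboxed containers Operator
  containers:
  - name: app
    image: registry.example.com/team/app:latest   # placeholder image
```

Everything else about the pod stays the same; the runtime swap is what buys the extra isolation.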
So what did we do at a feature level in 4.10? We added the three most important, or major, features. One: instead of you having to check node eligibility for installing sandboxed containers manually (in the back end, sandboxed containers relies on Kata Containers, which uses virtualization, so things like VMX need to be there; we rely on hardware-level virtualization), we made that easy for you. You can just set a check on our custom resource to make sure that the nodes are eligible for installing sandboxed containers; in the back end we use Node Feature Discovery for this. Make sure you have that checked, and the installation will proceed once all the nodes, or the nodes that you chose for sandboxed containers, are eligible.

The second important one is metrics. We're adding more metrics, beyond the ones you get for free because sandboxed workloads are containers and run the same way as normal pods: we add deeper, stack-level metrics on the Kata agent and on the CPU and memory of the lightweight VM in the back end, plus the number of such VMs you have in your cluster, so you get additional visibility. In the future we'll be adding more and more metrics to give you visibility and to help you get better at deploying your workloads.

And finally, we have increased the debuggability options. We're adding more and more logs; the same thing we're doing with metrics, we're doing with logs. Now, if you set the log level to debug, you get logs across the entire stack, including the Kata agent logs, the Kata runtime logs, QEMU logs, virtiofsd, and more. This does not just help you see what's happening underneath in the background; it also helps when you're facing problems and raising support tickets. That's all I have, so thank you, and I'm going to pass the call to Subin.

Hi, I'm Subin. For OCP 4.10, we have a quicker, safer, and less disruptive upgrade experience when you're going from one extended update support (EUS) version to another. In the diagram, you'll notice node one, which is on OCP 4.8, skipping version 4.9 when it upgrades to 4.10. What happens is that you have a safer upgrade and a quicker upgrade, involving just a single reboot of the worker node. We also have an updated scheduler, which steers the rescheduled pods from 4.8 nodes to available 4.10 worker nodes. In this way you have a quicker upgrade process, and the pods restart less frequently, so there is less disruption to your workloads. Over to Daniel.

Thanks, Subin. Let's talk about disconnected. We know that some of our customers really want to deploy OpenShift in environments without a direct connection to public Red Hat services, like the Red Hat registry. They're doing this mostly in on-prem environments and data centers, but also in some edge environments. For these deployments, which we call OpenShift disconnected deployments, in 4.10 we are delivering a lot of simplifications and streamlined tools to help install these disconnected clusters. In order to install disconnected, you need two things. First, you need all the container images that OpenShift uses at product installation and at runtime.
And you need the bucket to store them all in before you actually carry out the install using the OpenShift installer. So the first thing we did is release the solution for the bucket: it's called the mirror registry, and it's essentially a simple installer that gets you up and running with a streamlined, all-in-one instance of Red Hat Quay on a system with RHEL installed and Podman available. This removes the need for any kind of stopgap solution using upstream registries to store your content in, and it's available as of today at no additional cost for all OpenShift customers.

Now you need to populate this bucket with the content that OpenShift needs to install and run. This is content from a variety of sources and of different types: for instance, the OpenShift core images of the core cluster infrastructure and the core operators, but also the optional operators managed by OLM that add the layered products, as well as any custom images or Helm charts that you may need in your disconnected cluster. Previously, we asked you to use several different tools, each with their own documentation and different command structure, to carry out the different steps to initially create the mirror and then to maintain it. What we are doing with the new solution, which we call oc-mirror, is centralizing all of these steps and all of these content types into a single command of a single binary, which is actually a plug-in to the OpenShift command line client. oc-mirror is really your one-stop shop to mirror the OpenShift images, the operator images, Helm charts, and any custom images. You do this in a very declarative way, by expressing all your desired OpenShift releases, as well as operator catalogs and operator releases, in a text file, and you feed that text file to the oc-mirror command line utility. oc-mirror will then mirror all this content in a single step, a single command, a single transaction, and that gives you your first mirror to carry out the installation of your first disconnected clusters. The tool is very smart about this; it's not just a simple downloader. It takes advantage of things like layer sharing and deduplication, and lets you trim down to just the releases of OpenShift, operators, and catalogs that you really need in your disconnected environment, in order to reduce the amount of data that needs to be downloaded.

But you also want these disconnected clusters to keep receiving updates, and for that you need to keep the mirror up to date. oc-mirror does that as well, by simply rerunning the tool with the same configuration, in something as simple as a cron job. It is very smart when it does this: it looks at what you already mirrored the last time, figures out what has been released in the meantime in terms of additional OpenShift releases, new operator catalogs, new versions in those catalogs, and new channels, and it downloads all of that to give you an update path that you can walk along to update to the most recent version of OpenShift and the desired operators. You basically need to do nothing other than feed it the same config file that you did earlier; a sketch of such a config file follows.
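As a rough sketch of what that declarative config can look like (the API version shown is from the 4.10 tech preview and may differ in later releases; registries and package names are placeholders):

```yaml
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  local:
    path: ./mirror-metadata            # where oc-mirror keeps state between runs
mirror:
  platform:
    channels:
    - name: stable-4.10                # OpenShift releases to mirror
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
    packages:
    - name: advanced-cluster-management   # trim the catalog to just what you need
  additionalImages:
  - name: registry.example.com/custom/app:latest   # placeholder custom image
```

Running something like `oc mirror --config imageset.yaml docker://mirror.example.com` against this file performs the initial mirror, and rerunning it later performs the incremental updates described next.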
oc-mirror will figure out what intermediate releases, if any, need to be downloaded in order to upgrade, and it will also do dependency resolution on the operators it is updating. The result is a disconnected experience that, from the perspective of updating a cluster, is very close to what you would have if the cluster were connected. And with that, I'm handing over to Doran.

Well, I might have lost Doran here, so I'll cover this really quick. We've got three new compliance profiles for the Compliance Operator. This operator runs in your cluster and can enforce these three standards: PCI DSS, FedRAMP Moderate, and NERC CIP for electric-utility-type operations. This is really cool: it's enforcing these standards, where we can, inside the cluster itself to remain in compliance. You can also deploy this across a fleet of clusters, have them all remain in compliance, and get reports on them through ACS and ACM. All right, I'll hand it over to Duncan.

Thanks, Rob. And hello, everyone. I am very sure that you've all heard the buzz going around about ARM-based systems: the new systems from Apple, the offerings at AWS, and all those nice servers coming onto the market now that you can go out and buy. It really feels like this year is going to be the time that ARM sees major adoption. And you know that at OpenShift, we always want to give our customers the choice to run your workloads and your applications on the best infrastructure. So I am really pleased to announce that we're going to have general availability of OpenShift on ARM in the 4.10 release. In this release, we're going to let you run on the latest AWS ARM-based instances, as well as those nice, juicy bare metal servers that you have bought or are going to buy. Of course, OpenShift is such a rich platform, and while we've got all the core pieces in place, we've still got some work to do on the surrounding add-ons; but expect those gaps to be filled really quickly, as we're seeing really nice results in bringing these things along. Out of the box, you'll probably only be able to use EBS and NFS on the storage side (we might sneak EFS in there), but on the plus side, big kudos to the logging and ACM teams: they've pulled out all the stops and will have some ARM functionality around the same time we GA on OpenShift. For those of you looking at hardware, as always, we recommend using certified systems from the Red Hat ecosystem catalog, but the great news is that Red Hat has opened things up as far as choice: we'll also support you running on systems that have met the Arm SystemReady (ServerReady) specification. That really opens up what kind of systems you can buy and run on. Whatever architecture you're looking at, whatever applications you want to run, you can now choose and run OpenShift on top of it with no issues at all. And now, let me hand over to Deepthi and/or Marc Curry to chat to you about MetalLB.

Thanks, Duncan. MetalLB BGP support, based on FRR. OpenShift installations on public cloud providers have always taken advantage of native load balancing services, but up until very recently we did not have any out-of-the-box native load balancer for bare metal infrastructure deployments of OpenShift. That is when we introduced MetalLB in L2 mode as a fully supported load balancer for bare metal deployments. We have further enhanced this in 4.10 by providing support for BGP mode, which boils down to the two custom resources sketched below.
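A minimal sketch of the two resources described in the next paragraphs, assuming the metallb.io/v1beta1 API, with placeholder addresses and ASNs:

```yaml
apiVersion: metallb.io/v1beta1
kind: AddressPool
metadata:
  name: bgp-pool
  namespace: metallb-system
spec:
  protocol: bgp
  addresses:
  - 203.0.113.0/24            # service IPs MetalLB may allocate
---
apiVersion: metallb.io/v1beta1
kind: BGPPeer
metadata:
  name: peer-01
  namespace: metallb-system
spec:
  peerAddress: 10.0.0.1       # placeholder: the external BGP-enabled router
  peerASN: 64501
  myASN: 64500
```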
With BGP, we can advertise direct routes to Kubernetes services to other BGP participants outside the cluster. Basically, MetalLB, running on every node, establishes a BGP peering with nearby BGP-enabled routers and tells them how to forward traffic to the service IPs. To configure BGP, we need to provide information on the address pool from which MetalLB allocates service IPs, which is done via the AddressPool structure, and we also need to provide information on the BGP session connecting each node to each external router, via the BGPPeer structure seen here. One can also pair a BGP session with a BFD session for faster failover detection. With a MetalLB deployment, each node acts as a mini router, but one that does not allow external routers to inject routes into the cluster. With the BGPPeer node selector, you can run MetalLB speaker pods on specific nodes by specifying a node selector. And we also have iBGP and eBGP, plus single- and multi-hop support, with this release. Next slide, please.

Thank you. RHEL entitlement management for image builds: this is an area I've been working on for the past couple of releases, delivering pieces of it. The problem it addresses is that a lot of our customers run image builds on the cluster using Dockerfile builds, and within those Dockerfile builds they want to install RHEL RPMs. Since every OpenShift Container Platform (OCP) customer is entitled to RHEL content, we wanted to make this really easy for them, similar to how it was in OCP 3. There are three areas where these improvements apply: one, how customers get access to the entitlements they have purchased with OpenShift; two, how they manage access to them (perhaps they have a multi-tenant cluster and don't want to expose all the tenants to these entitlements); and three, how to actually use them in a Dockerfile build.

In 4.10, managing those entitlements and getting them into the cluster reaches GA. This is a function that the Insights team has delivered in 4.10: the cluster reaches out to OpenShift Cluster Manager (OCM) and downloads the entitlement certificates of the OpenShift subscription the customer has purchased. The customer needs to enable simple content access on their account; this is a one-time thing done at the account level. From that point on, the cluster can automatically pull the certificates, and the Insights operator manages the lifecycle of these certificates as well, because they can become invalid; it automatically refreshes them to make sure the cluster at all times has a valid entitlement certificate to use. This is only available on OCP, not on OSD, ROSA, and other managed instances of OpenShift, because those do not entitle customers to RHEL content.

On managing access to these entitlements: the work the build team has done with the shared resource CSI driver, moving to tech preview, essentially allows the platform owners, the cluster admins, to share the entitlement that is pulled to the cluster with their tenants across different namespaces, controlling who can and cannot access it. It gives them granular control of who is allowed to consume these entitlements; a sketch of what that sharing can look like follows.
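A minimal sketch of that sharing, assuming the tech preview sharedresource.openshift.io/v1alpha1 API; the secret name and namespace follow the Insights operator convention, but verify them on your cluster:

```yaml
apiVersion: sharedresource.openshift.io/v1alpha1
kind: SharedSecret
metadata:
  name: shared-entitlement
spec:
  secretRef:
    name: etc-pki-entitlement            # entitlement secret maintained by the Insights operator
    namespace: openshift-config-managed
```

Which tenants may consume it is then, as I understand it, gated with ordinary RBAC (a role granting the `use` verb on this SharedSecret), which is what provides the granular control just described.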
And the last bit: once a team gets access to these entitlements, they can mount them inside a BuildConfig through the CSI driver (which moves to tech preview); or, if the admin decides to duplicate the entitlement secret that the Insights operator puts on the cluster, they can mount it as a secret inside a BuildConfig, pipelines, pods, or anywhere else a secret can be mounted, and use it in a Dockerfile build. That secret-based capability remains GA; the use of the shared resource CSI driver, including inside BuildConfigs, remains tech preview for now and moves to GA in a following release. Next slide, please.

Hey everyone, Ali here from the console team. Big news that I'm really excited to share with you all: multi-cluster capabilities will finally come to the OCP console. What does this mean? It means we now have a single interface to view your fleet and to drill down into the details of each cluster or each app running on individual clusters. You can now easily switch context between each of your clusters and come up a level to get a glance at your fleet. Essentially, we're giving you the best of both the ACM and OCP UIs, brought together. How does this work? You start with a single OCP cluster, and then you have essentially two options. You can install the multicluster engine (MCE) operator standalone; this MCE operator is new and will enable the cluster switcher and the basic cluster management capabilities, which give you the ability to create, upgrade, or destroy clusters, plus the cluster inventory list. The MCE operator is included with your OCP subscription from now on. The second option is to install ACM, which gives you the entire cluster lifecycle management you get from MCE, plus all the great stuff like policy and config management, multi-cluster networking, multi-cluster application deployment, and much, much more. Really excited to announce this. It will be tech preview in 4.10; you'll need a feature flag to enable it, and expect to see a quick start to help guide users in enabling the new multi-cluster console. Next slide, please.

Also new in 4.10: dynamic plugins will be offered as tech preview. On the console team, we get a lot of requests to be able to customize the OCP console experience. Previously, we had really prescriptive ways of updating the console, using either console CRDs (like the ConsoleLink CRD to add a link to the UI, as an example) or OLM descriptors for customizing your operand creation. Now, with dynamic plugins, we're giving our partners and customers a lot more flexibility to tailor the UI for different personas, or even add an elevated experience for your service and offering. A couple of big things this allows: you'll be able to come in and update an existing perspective, which means you can add new nav items, new flows, new pages, and actions to existing pages, in either the admin or dev perspective. The other thing we're giving our users is the ability to add new perspectives: if you want to create a persona- or task-based perspective that's completely standalone, you'll have the ability to do that as well. A little bit about the tech behind this: it's essentially based on Webpack 5 Module Federation, so you can write your own Webpack module federation piece. It's built with all the PatternFly 4 components. Plugins are dynamically loaded at runtime and enabled or disabled via the console UI; a sketch of the plugin resource follows.
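To make that concrete, here is a minimal sketch of the custom resource a plugin ships; names and the port are placeholders, and in 4.10 the API is the tech preview v1alpha1 version:

```yaml
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  name: my-plugin
spec:
  displayName: "My Console Plugin"
  service:                    # in-cluster service hosting the plugin's federated module
    name: my-plugin
    namespace: my-plugin-ns
    port: 9443
    basePath: /
```

The cluster admin then enables it by listing the plugin name under spec.plugins in the Console operator configuration.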
Plugins can be updated independently of the host application. This means that if you have an operator delivering the plugin, you can do this at any moment; you don't have to wait for a new release or update of the OCP console. And finally, plugins can provide extension points of their own. Currently we've added a ton of extension points, like the ones I talked about for adding nav items, pages, and actions to the admin perspective; but, for example, we actually used ACM to provide the multi-cluster view in the OCP console, and ACM can go ahead and add its extension points to allow people to extend the multi-cluster view as well. That's going to be really nice. You can also expect a getting started guide to come soon, and if anybody wants early access before the release, contact me and I'll give you access to the samples and template repo that are available. Next slide, please.

So: you asked, we listened. Every OCP release we try to address some of the most popular RFEs; here are the ones we got to this release. Pod debug mode, right from the UI: this allows you to run exactly the same thing as `oc debug pod`, right from the UI. It starts up an interactive shell, which stops the pod from crash-looping, and allows you to check your environment variables and config files and to access your logs and events right from the console, which is fantastic. Next, we updated the user preferences: one of the big asks was to be able to hide user workload notifications from the admins, and you now have the ability to do that. We also added a default method for route creation, so if you want to make sure secure routes are the default option selected when creating a route, that can be enabled as well. And finally, we added the ability for non-admin users to come in and see their quota usage via the applied cluster resource quota; that is now available to non-admins in the UI. All right, next slide, please. And I'm going to be passing off to, I believe, Siamak.

Right, thank you. On builds: for classic builds, I already mentioned the work that has happened around enabling RHEL entitlement management, with the shared resource CSI driver and mounting entitlements inside BuildConfigs. Shipwright builds is the next-generation build system that we are moving to as an evolution of classic builds, and within this release, building from local source is enabled. This is extremely helpful for iterating locally before submitting something to a Git repo: you can use the Shipwright CLI to upload the source from the current directory into a build that executes on the cluster. Custom annotations are now supported on the output images, so you can decide and specify which annotations should be present on an image built through Shipwright, and volume support is another aspect added to Shipwright builds in this release. Next slide, please.

OpenShift Pipelines 1.7 will be available alongside 4.10, and there are quite a lot of capabilities added in this particular release. Pipelines as Code moves to tech preview. Pipelines as Code enables GitOps workflows for your CI: the customer can put their Tekton pipeline in a Git repo, and that's the single source of truth for the pipeline; every time an event comes from Git, a commit or a pull request, that pipeline is executed on the cluster (a sketch of such a repo-hosted pipeline follows).
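A rough sketch of what such a repo-hosted pipeline definition can look like; the pipelinesascode.tekton.dev annotation keys are my assumption of the tech preview trigger syntax and may differ by release:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ci-on-pull-request
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"    # assumed trigger annotation
    pipelinesascode.tekton.dev/on-target-branch: "[main]"    # assumed trigger annotation
spec:
  pipelineSpec:
    tasks:
    - name: unit-tests
      taskSpec:
        steps:
        - name: test
          image: registry.access.redhat.com/ubi8/ubi-minimal:latest
          script: |
            echo "run your test suite here"    # placeholder step
```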
Provenance for both images and task runs is another aspect, delivered through Tekton Chains and signing via the Sigstore project, released as tech preview. We are also bringing to our customers the ability to curate a set of tasks for developers and DevOps engineers on the cluster: Tekton Hub can be brought onto the cluster. It is optional at this point; the customer can enable it and add their own tasks to it, and we are working on integrating our tooling with this instance that customers run for themselves, so they have full control over what kind of tasks they expose to their development teams.

Another important one that has been on the horizon for us for a while: in this release, on OCP 4.10, there is the ability to run tasks (really, the pods behind tasks) in user namespaces, which allows tasks whose images require running as root inside the container to run as non-root on the host. This addresses a requirement that a good portion of our customers have: they do not allow any privileged containers to run, and things like S2I and Dockerfile builds used to require privilege because they use Buildah. With this release, we can run them as non-root, without any particular privilege, SCC, or service account, just using the default service account, by running them in user namespaces.

There are improvements in the Triggers project in Tekton so that we can surface events, within the namespace where the trigger and the pipeline run, when a webhook has come in from GitHub and failed. That makes it really simple to access that kind of information and figure out why, for example, a pipeline was not triggered when you committed to the Git repo. You heard from Adel that OpenShift sandboxed containers moved to GA, and we have verified that it works with Pipelines; that's one of the use cases for customers that require a higher level of isolation within their CI pipelines. The last screenshot at the bottom shows how that is enabled in a pipeline, for example using the Kata runtime for executing the pipeline.

And there are various enhancements within the dev console around the pipeline UI. We now support multiple pipeline templates when a developer is importing an application into OpenShift (that's the top screenshot). This allows platform owners to provide prescribed or predefined pipeline templates to their tenants: they add them to the platform as templates, and when a tenant imports an application into a namespace, they have the option of choosing a pipeline. Before 4.10 there was a single pipeline template you could provide for the entire platform; now there can be multiple, and tenants can choose. Also, when an application is added to OpenShift through that import flow, a webhook is now automatically created for it and added to the Git provider, so the pipeline executes on Git events; previously this was a manual step the user had to do after importing the application. And within the Tekton Hub integration that exists in the pipeline builder in the dev console, we now have links to the documentation of those tasks, so you can jump to Tekton Hub and read more about a task and how it can be used within the pipeline. Next slide, please.
OpenShift GitOps 1.5 will be released alongside 4.10, quite shortly, and provides Argo CD 2.3. There are two new generators added to ApplicationSets. One allows, for example, generating an application for every pull request that comes into a repo; this is really useful if you want to automatically deploy the application for a pull request into a particular namespace, test it, and then decommission it, so it gives you dynamic application creation for pull requests. The other is the ability to merge multiple generators: maybe after an application is created for a pull request, you want to override some attributes of that application from a Git repo, a list in a Git repo, or a different type of generator. The merge generator lets you combine the results of multiple generators and produce the applications you need for a particular event, dynamically adding applications to Argo CD.

There are also improvements to make it easier to deal with resources that are not completely GitOps-compatible. For example, there are objects on OpenShift and Kubernetes where some attributes are managed by operators: when you create the object, an operator comes along and mutates those fields. We have always had support for ignoring these attributes; now there are new, more advanced ways of ignoring them, through managed fields. If you look at some of the objects on OpenShift, at the top there is a managedFields attribute, and under it a long snippet saying which fields are managed by whom, by which operator. In GitOps 1.5 we can, for example, say: for any object, ignore the fields managed by the registry operator during diffing and syncing, so that Argo CD doesn't complain that someone else (that operator) has changed those fields. Also, previously, ignoring fields only happened during the diff, so that Argo CD doesn't complain about an object being out of sync; in this release, you can also respect those ignores during the sync flow. For fields that are managed by something else, you can tell Argo CD: the first time you create this object, create this field, but after that, ignore it in any of the syncs and diffs you do, so it doesn't show as out of sync and you don't overwrite a field that an operator on the cluster is mutating. And there are enhancements in the dev console: the health status of resources has been added. We already had the sync status in the environments view; now the health status (that broken heart you see on the slide) is shown as well, so you can see whether the object is in a healthy condition on the cluster. Next slide, please.

Hello everyone. OpenShift Serverless is based on the upstream Knative project, and for 4.10 it will be updated to Knative 1.0; this is going to be a big milestone for us. We have added some features around eventing for this release to enable developers writing event-driven solutions. One such feature is an Apache Kafka-based broker, as a tech preview feature. This can maximize Kafka performance and persistence, and can also avoid event duplication. It also prevents tight coupling with Kafka, because it eliminates the use of Kafka clients by event producers; a sketch of selecting the Kafka-backed broker follows.
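A minimal sketch of how the Kafka-backed broker is selected, following the upstream Knative pattern; the broker class annotation and the config ConfigMap location are upstream defaults, so treat them as assumptions for the OpenShift packaging:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: example-broker
  annotations:
    eventing.knative.dev/broker.class: Kafka   # selects the Kafka-based broker implementation
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-broker-config                  # upstream default config location
    namespace: knative-eventing
```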
In the same theme, another tech preview feature is the Knative Kafka Sink for OpenShift 4.10: it can receive CloudEvents from a source, subscription, or trigger eventing construct onto a Kafka topic without writing any custom code. Now, for developer experience: a new plugin is being added to the kn CLI for developing, debugging, and testing event-driven applications by sending CloudEvents interactively to apps. In the dev console, a new visualization of event sinks has been added, as you see in the screenshot on the left, and another important addition in the dev console is making any Kubernetes service a target output, or sink, for CloudEvents from the serverless eventing infrastructure: a subscription or trigger can be added to any Kubernetes service available on the cluster, to enhance the development experience for event-driven solutions. I just want to add that we have serverless functions as a tech preview feature in 4.10 as well. We are constantly evolving this, adding new features and creating a better user experience. Of note: various runtimes are available out of the box, and it provides a local developer experience using Docker and/or Podman on Mac, Linux, and Windows. Please try out these tech preview features; we would love to hear your feedback and about the use cases you're trying to solve with them. With that, I will pass it on to Jamie.

Thanks, Neda. For OpenShift Service Mesh, we're aiming to ship Service Mesh 2.2 this April. It will be based on Istio 1.12 and Kiali 1.47, at least; going forward, we're looking to keep our Service Mesh within a release or two of upstream Istio, to pick up upstream capabilities faster. Istio 1.12 also introduces a WASM plugin API, which will deprecate our existing service mesh extensions API; as the upstream API was largely influenced by the one we had in OpenShift Service Mesh, it will be a straightforward migration. By popular demand, we have also recently introduced the ability to override and customize network policy creation and management around Service Mesh. We know that our customers set up their meshes in many different ways, and this provides more flexibility in how Service Mesh and Kubernetes networking work together. Finally, the Kiali update in Service Mesh 2.2 will come with several enhancements for viewing and navigating larger meshes especially; we know that some customers have had challenges in Kiali with viewing the very large meshes they have, and that's something we're working to improve over time. It also comes with a few new debugging features, such as the ability to view internal certificate information and the ability to adjust Envoy proxy log levels, again to help you debug and manage your mesh. They've also shipped a new demo around mesh federation, which can be used as the basis for additional multi-mesh use cases. And with that, I'll pass it on; next slide.

So yes, for 4.10 we have expanded the supported list of providers to include full-stack automation for Azure Stack Hub, IBM Cloud, and Alibaba. Let's take a look at what we're doing on IBM Cloud; next slide, please. For IBM Cloud, you can now deploy OpenShift clusters on IBM Cloud VPC infrastructure. Note that we only support this with IBM Cloud VPC infrastructure; IBM Cloud Classic infrastructure is not supported in this particular release. We're also only supporting Cloud Internet Services (CIS) DNS, not the IBM Cloud DNS Services, and what this means is that only public clusters can be created at the moment; a sketch of the install-config follows.
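A minimal sketch of the install-config for such a public IBM Cloud VPC cluster; the region and domain are placeholders, and the Manual credentials mode (with credentials pre-created by the ccoctl tool) is my recollection of the 4.10 requirement, so treat it as an assumption:

```yaml
apiVersion: v1
baseDomain: example.com          # DNS zone managed by IBM Cloud Internet Services (CIS)
metadata:
  name: my-cluster
platform:
  ibmcloud:
    region: us-south             # placeholder region
credentialsMode: Manual          # assumption: IAM credentials pre-created with ccoctl
pullSecret: '...'                # elided
sshKey: '...'                    # elided
```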
In the future, we're looking to support private DNS so you can create private clusters. One thing to note about IBM Cloud is that we're only supporting IPv4 at the moment, so dual-stack and IPv6 environments are not yet possible. Next slide, please.

Next, building upon the work we've done in previous OpenShift releases, you're now able to deploy OpenShift on Azure Stack Hub, which is Microsoft's on-premises cloud, using full-stack automation (IPI). We've also added documentation, for both UPI and IPI, to support using customers' internal certificate authorities (CAs). On to Gaurav; next slide. Gaurav, I think you're muted; if Gaurav is not able to speak, I'll continue. In 4.10, we are going to tech preview IPI installation of OpenShift on Alibaba Cloud, which means you'll now be able to give it a try in the 4.10 release. We will support the international portal, which includes the world and China mainland. And please note: this is a fully connected installation, with new and existing VPCs. Next slide, please.

With regard to vSphere (VMware), we are now able to take advantage of thin provisioning on the primary disks you use for your VMs in OpenShift. The disk provisioning method now includes thin, in addition to the existing thick and eagerZeroedThick. With thin provisioning, your disks will only consume the space needed and will grow as needed. One thing to remember: if you don't configure disk provisioning, it will default to the vSphere default storage policy, and for NFS it is always thin. Next slide, please.

In 4.10, we're introducing a new capability to enable OEM partners to pre-install OpenShift at the factory. What this means is that they're now able to build turnkey solutions with OpenShift embedded, or rather pre-installed, on the OEM partner's hardware. This is well suited for partners looking to build turnkey solutions for rapid edge deployments: think about use cases where you're trying to power edge workloads, where customers might want to deploy all-in-one OpenShift together with prescribed hardware to serve as a data center in a box in a remote, disconnected, or even air-gapped location. Typically these are used to enable compute and real-time data processing and analysis at the edge. The solution leverages zero-touch provisioning (ZTP), which uses a declarative specification stored in Git repos to deploy the infrastructure using a GitOps set of practices for infrastructure deployments. In 4.10, we've added documentation that walks you through setting up the factory (management) cluster, which you use to build the turnkey spoke clusters. For folks interested in trying this out, do feel free to contact me. And let's move on to Ramon.

Thank you. So, for bare metal, I want to highlight three things today about what we are doing for 4.10. The first one, from left to right, is a long-awaited feature: the ability to configure host networking at installation time with the IPI workflow. That allows you to use the install-config and specify per-host network configurations, and it delivers something many people have told us they required: you don't need DHCP anymore. With this, you can allocate static IPs to your nodes without an external, customer-provided DHCP server; a sketch of the per-host section follows.
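A minimal sketch of the per-host section of the install-config this enables; the interface name, MAC, and addresses are placeholders, and the networkConfig block uses the NMState syntax discussed next:

```yaml
platform:
  baremetal:
    hosts:
    - name: worker-0
      role: worker
      bootMACAddress: "52:54:00:00:00:01"   # placeholder MAC
      networkConfig:                        # NMState syntax, applied at install time
        interfaces:
        - name: eno1                        # placeholder NIC name
          type: ethernet
          state: up
          ipv4:
            enabled: true
            dhcp: false
            address:
            - ip: 192.0.2.10
              prefix-length: 24
```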
Next, we are promoting the Kubernetes NMState operator to GA. For those of you who have been using NMState, this needs no introduction; essentially, you're going to be able to use the NMState operator, fully supported, on day two, to configure your nodes just like you would on day one, with the same syntax, which is the NMState syntax. There is a link at the bottom of this slide to see all the things you're going to be able to do with NMState. And finally, another thing we wanted to highlight is the ability to update your hosts' BIOS settings. We are adding two CRDs: one is HFS, HostFirmwareSettings, and the other one is the FirmwareSchema. What we do is collect all the BIOS attributes and store them in the firmware schema, along with how to use them, that is, what kinds of attributes and values you can set; then you'll be able, again per host, to configure your hardware firmware settings on day two. And with this, I will pass it over to Subin and Anand for control plane updates.

Hi, I'm Subin, the product manager for over-the-air updates. In 4.10, cluster admins will see a new feature called conditional updates. The OpenShift Update Service can declare conditionally recommended updates in the update graph, and the Cluster Version Operator looks at these conditionally recommended updates and gives the cluster admin information on which updates are supported but not recommended for the cluster. If you're using the oc command line tool from version 4.10, you're going to see two new flags when you upgrade. The first option is --include-not-recommended: if you use it, you're going to see all the versions in the channel that are supported but not recommended for the cluster. The cluster admin can look at the data and information provided by the new option and evaluate whether the risk associated with these conditional updates is acceptable for the current cluster. If the cluster admin thinks the risk is okay for the workloads, they can go ahead and update the cluster; to do so, there is a second new option called --allow-not-recommended. Use this option while upgrading, and you can update to a version that is a conditionally recommended update. Now over to Anand.

Thank you. I'm the product manager for the OpenShift control plane; let's talk about what's new in 4.10, starting with cert-manager. cert-manager is going tech preview in 4.10, with GA to be announced subsequently. cert-manager basically helps automate certificate management in cloud-native environments. It is a fully open source project that builds on top of Kubernetes, and it introduces certificate authorities, issuers, and certificates as first-class resource types in the Kubernetes API; this makes it possible to provide certificates-as-a-service to developers working in your Kubernetes clusters. It adds, as I said, certificates and certificate issuers as resource types in Kubernetes, and it simplifies the process of obtaining, renewing, and using those certificates. As you can see from the diagram, it can issue certificates from a variety of supported sources, such as Let's Encrypt, HashiCorp Vault, Venafi, and even a private PKI. cert-manager ensures that certificates are valid and up to date, and it will attempt to renew certificates at a configured time before expiry; a minimal sketch of the resource types follows.
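A minimal sketch of the two resource types in action, using a self-signed issuer; names are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: my-app
spec:
  selfSigned: {}                  # simplest issuer; swap in ACME, Vault, Venafi, etc.
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-app-tls
  namespace: my-app
spec:
  secretName: my-app-tls          # cert-manager writes the issued keypair here
  dnsNames:
  - my-app.example.com
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
```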
Some background on cert-manager: it's based on kube-lego and also borrows some intelligence from kube-cert-manager. It was developed by a company called Jetstack, which was acquired by Venafi, but they've contributed the project upstream, and it's a CNCF sandbox project right now, so it's fully open source. What we're doing is building an operator for using cert-manager on OpenShift. The latest version of the upstream project is 1.7.1, and in the speaker notes you'll see the detailed productization roadmap and the proposal for onboarding this new operator on OpenShift. You'll also see links to the latest cert-manager use cases, for instance, use cases for the cluster admin, for the service infrastructure admin, and for the service developer using cert-manager with OpenShift, and you'll also see the GitHub repo from which you can get the cert-manager operator for OpenShift. Summarizing: cert-manager is going tech preview in 4.10, with GA to be announced soon after. Next slide, please.

Next is syncing group membership from identity providers. In 4.10, we have introduced support for syncing group membership from OpenID Connect providers to the OpenShift platform upon login, and you can enable this by configuring the group claim information in the OpenShift Container Platform OpenID Connect provider. On the right you see an OpenID Connect CR: if you navigate to spec.identityProviders, you can specify the name, the mapping method (which is claim), and right at the bottom of the claims you will see the groups information that you can provide. Once this is configured, the external IDP presents a JSON Web Token on login, and the group claims in that token are used to sync the user's group membership into OpenShift. So those are some of the control plane updates for 4.10; handing it over to the next slide, which I think is Scott or Jeff.

Hey, so now we want to talk about how Advanced Cluster Security has begun to streamline Kubernetes security programs over time. The first real key need was the improvement of developer workflows. We've enabled developers to streamline risk management by giving them the ability to triage risks, by either accepting a risk (with approval from a delegated approval authority) or marking something as a potential false positive; in this way, they can streamline CI and not have to worry about a potential false positive, or something else that is known within the organization, breaking a build. We've also worked to simplify prioritization and remediation in CI: we've added some additional vulnerability outputs and summaries for breaking builds, to help developers really key in on exactly the issues they need to address. In terms of SecOps notification workflows, we've shortened some feedback loops for vulnerability reporting, by implementing scheduled reporting of vulnerabilities for standard remediation stakeholders, and we've made runtime notification enhancements by ensuring that we send detailed system information to a SIEM each and every time a potential policy violation is found in the environment. We've also enhanced administration of the platform. As you know, ACS was the result of an acquisition; we've simplified administration of OpenShift Platform Plus through the reuse of OpenShift authentication for ACS users, so administrators don't need to set up the same standard configuration across multiple platforms. We've also improved the scalability of our registry integrations.
For those of you who work in the AWS space and use Amazon Elastic Container Registry, we've implemented IAM AssumeRole support to allow users to use different roles across accounts and implement authorization at scale. And I want to pass it over to Jeff.

Hey, this is Scott; hopefully y'all can hear me. I always love working with the ACS team: you guys are setting such a high standard and high pace, and I've recently been able to use the new release on OpenShift Platform Plus, and it works beautifully. That's one of the things we want to do here in ACM: build things better together. You'll see our story includes Ansible and the full OpenShift Platform Plus; we're diving into OpenShift GitOps, making sure Advanced Cluster Security works beautifully, and even bringing bits of the storage and data foundation layer up into our ACM experience. Right out of the gate, we've got a dev preview with Ansible Automation Platform that surfaces our cluster inventory into Ansible Automation Platform, for its users to experience the OpenShift world directly at their fingertips; the ops team doesn't have to move out of their comfort zone to start learning new things, so it's a quick on-ramp with a low bar on skills. You'll see the highlights there about OpenShift GitOps: we're leveraging the ApplicationSets feature and bringing it to GA in ACM, coinciding with the expected GA of ApplicationSets by the Argo CD team; that's a great way to bring the application construct to the multi-cluster space. That was introduced as tech preview in 2.4, and the pace of activity moves right along into 2.5. Stronger security: you'll see the upstream Gatekeeper community working really hard to produce the enhancement around mutating webhooks, which allows us to change resources upon their admission, giving you further control over resource admission within the managed cluster. At the same time, we're also providing improvements in secrets management, integrating templating that allows you to handle secrets in a more effective, more secure way when dropping them into managed clusters. And finally, as I mentioned before, our integration with ACS continues: you'll see policy sets dedicated to ACS, as well as an OpenShift Platform Plus integration that puts ACM at the center point for deploying the ACS central server and sensors to its fleet. I'll take the next slide, please.

With our expected release in late April, you are going to see cluster lifecycle support for Red Hat Virtualization (RHV) and for AWS GovCloud in the US. This is awesome: it means you don't have to leave your existing infrastructure; you can be comfortable with any type of infrastructure to deploy your OpenShift clusters, whether that's on-prem with bare metal, in the cloud, or any of the private government spaces we're targeting, like Azure Gov and AWS Gov. As Duncan mentioned earlier, we are working hard to coincide our release with the ARM architecture support; we're targeting that as dev preview, to support both the hub on ARM and managed clusters leveraging ARM for low power consumption; in particular, we see edge as a great opportunity in that space. HyperShift: thank you to Adel and the team who are working hard to bring HyperShift into the fold. The opportunity there to share the control plane at scale, reducing the cost, the hardware footprint, and the time to provision, is super exciting for ACM; we're thrilled to be bringing that to market as a tech preview and look forward to your feedback on it.
And finally, I'll wrap up by pointing to central infrastructure management. This has been such a fun area to work in with the Assisted Installer team, bringing their technology from SaaS into the on-premises space. We give you a self-service style model that allows your users, your infrastructure owners, to carve up and slice up their bare metal hosts just as if it were a cloud service offering; a super exciting space for scale and growth opportunities across the fleet. And with that, I'm going to hand it to my colleague Christian Stark, who's going to take you on to the next slide.

Thanks a lot. So, finally, we have been able to provide crucial features around business continuity. We have RHACM hub backup and restore, using a backup solution based on the OpenShift API for Data Protection (OADP): managed cluster configurations can be backed up and restored on a different hub cluster. You can also leverage ODF (formerly known as OCS) and RHACM for disaster recovery of stateful workloads. For your business-critical stateful apps, ODF along with RHACM will ensure a robust multi-site, multi-cluster disaster recovery strategy. Both ODF and RHACM enable fast and consistent application DR that protects both application data and application state. ODF ensures your application data volumes (PVs) are consistently and frequently replicated, resulting in reduced data loss on recovery, and the DR operators enabled with RHACM automate the DR failover and failback processes, ensuring that your recovery is fast and free of error-prone manual operations. I would also like to quickly highlight another feature: persistent volume replication using VolSync (formerly known as Scribe), which is also currently tech preview. It provides resilience for business-critical stateful apps by enabling a planned application migration strategy across your clusters, and you can also use VolSync to create your own DR solution when working with non-ODF or heterogeneous storage providers. Thank you; next slide, please. Passing to Brett.

Thank you very much, Christian. With RHACM, Advanced Cluster Management for Kubernetes, at the edge, we're excited to announce GA of the capability to deploy and manage 2,000 single-node OpenShift (SNO) clusters, and we're doing this in connected and disconnected scenarios. We've tested out the disconnected case: we validated 150-millisecond round-trip times and 0.02% packet loss, which is helpful for those far-edge scenarios as well. We're generating the DU profiles through the policy generator, so it demonstrates how we can do this at scale. These all go through zero-touch provisioning, which is stored up in Git, so we're able to use infrastructure as code by pulling it down from Git. And lastly, some exciting news: we have the ability to export the hub-collected metrics to external tools, so we can give a more holistic view for customers that need to merge their OCP metrics with other metrics and monitoring platforms for a converged, holistic view. Thank you; handing off to the next presenter, for networking and routing.

Thanks. Next up, let's take a look at some of the networking highlights in this release. We're extremely happy to share the support for ExternalDNS functionality in OpenShift. This functionality mainly provides the ability to dynamically control DNS records of an external DNS server via Kubernetes resources, and this is done in a DNS-provider-agnostic way. The feature can be enabled by installing the ExternalDNS Operator via OperatorHub, which deploys and manages the upstream ExternalDNS functionality.
It's an in-cluster component that makes Kubernetes resources discoverable through public DNS servers; here, we use it to synchronize exposed OpenShift services and routes with DNS providers. This is currently supported on AWS, GCP, and Azure, and is in tech preview; going forward, we will support many more providers, including BlueCat and Infoblox. Next slide, please.

Some of the other highlights include our support for egress IP for clusters on public clouds. You can use the egress IP functionality to ensure that traffic from one or more pods in one of your namespaces has a consistent source IP address for services outside of the cluster. This was supported on bare metal and vSphere environments, and with this release we're happy to extend the feature to public clouds like AWS, GCP, and Azure. Egress IP setup continues to be orchestrated by the network plugins; however, they will utilize a new component called the Cloud Network Config Controller to perform the required setup on the cloud side. One thing to note: on public clouds there is a notion of limited IP capacity per node, and it's a hard constraint imposed by the cloud provider, so this is something that needs to be kept in mind while setting up egress IPs on public clouds. Another thing we have worked on this release is the ability to modify the MTU post-installation. During OpenShift installation, the MTU is automatically detected based on the MTU of the primary network interface of the nodes in the cluster; but if, as a cluster administrator, you want to change the MTU of the cluster post-installation, say because the addition of new nodes needs a different MTU for optimal performance, or because of a change in infrastructure, that was not supported until now. From 4.10, you can modify the MTU post-installation; the only thing to note is that the nodes need to be rebooted to finalize the MTU change. Finally, we're happy to announce support for the Intel E810 NICs, and the other NICs that are listed here, with the SR-IOV Operator on OpenShift 4.10; with this support, the SR-IOV and DPDK functionality of these NICs can be configured seamlessly via the SR-IOV Operator. Next slide, please. And I think it's over to Peter for virtualization updates.

Thank you. As you all know, we're now approaching almost two years since OpenShift Virtualization, the ability to run virtual machines on the OpenShift platform, arrived, and it's a good citizen of the OpenShift platform: you heard Rob say earlier that you can actually use virtual machines with Service Mesh. To continue that, we're now expanding the data protection story for OpenShift itself, provided by ODF, ACM, and OADP, so that the virtual machines in your cluster show up there and are data-protected just like the rest of your cluster. We're also expanding (as you heard earlier about IBM Cloud) the ability to run virtual machines in an OpenShift cluster that you provision on IBM Cloud. We're also improving the VM workflows for users and admins who are used to vSphere, RHV, or even OpenStack, to make them more comfortable and to give better visibility into the virtual workloads in their environment. We've got plenty of work on GPUs as well. And I really want to point out that in the past year and a half to two years we've been very busy: you can see across the bottom here that we have demonstrated ability in every one of these areas, cloud-native, GPU acceleration, modernization, and telco. For those newer to it, a sketch of what a VM looks like as a Kubernetes object follows.
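A minimal sketch of a VM as just another Kubernetes object; the boot image and sizing are placeholders:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true                      # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 2Gi              # placeholder sizing
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/containerdisks/fedora:latest   # placeholder boot image
```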
And I think it's over to the virtualization updates.

Thank you. As you all know, we're now approaching almost two years of OpenShift Virtualization, which is the ability to run virtual machines on the OpenShift platform. And we're a good citizen of the OpenShift platform; you heard Rob say earlier that you can use virtual machines with service mesh. To continue that, we're now expanding the data protection story for OpenShift itself, provided by ODF, ACM and OADP, so that the virtual machines in your cluster show up there and are data protected just like the rest of your cluster. We're also expanding, as you heard earlier about IBM Cloud, the ability to run virtual machines in an OpenShift cluster that you provision in IBM Cloud. We're also improving the VM workflows for users and admins who are used to vSphere, RHV or even OpenStack, to make them more comfortable and give them better visibility into the virtual workloads in their environment. We've also got plenty of work on GPUs. And I really want to point out that in the past year and a half to two years we've been very busy: you can see across the bottom here that we have demonstrated ability in every one of these areas of cloud native, GPU acceleration, modernization and telco. I'd also like to preview a customer success story that we're going to publish very soon: a customer that has replaced their entire virtual infrastructure with OpenShift on bare metal and migrated their three-tier applications into this environment. It's a very exciting story. Next slide please.

So the real question is: how do you actually get your virtual workloads from, say, vSphere or RHV into OpenShift? You would use MTV, the Migration Toolkit for Virtualization. This is based on the upstream Konveyor project and now has the ability to do a warm migration: with the VM still running on your old infrastructure, you copy the data over to a point, take a scheduled maintenance window, bring down the old application, copy the last bit of data and bring up the new service. Provided you have the networking correct, your users will be able to move to the new platform with minimal disruption. Now I'm going to turn it over to Erwan to talk about GPUs.

Hello, I'm Erwan Gallen, product manager for OpenShift AI and compute acceleration. With OpenShift 4.10 we're enabling more advanced GPU deep learning workloads. First, we have worked with NVIDIA to bring cloud native workloads to DGX A100 servers. The DGX A100 servers are the first five-petaflops AI systems, so really big servers, and you can quickly deploy OpenShift on a DGX A100 for testing with single node OpenShift, or deploy a full cluster of DGX nodes. We have also started to provide some high-level metrics of GPU utilization in the OpenShift console, so you get more cluster visibility for your accelerators. If you run OpenShift on virtualization platforms such as vSphere or OpenStack, we have also worked on enablement of vGPU, with a simpler procedure using the Driver Toolkit that removes the need for custom builds or node entitlements. As tech preview we are also introducing new features: with the new NVIDIA network operator you can enable distributed deep learning training. If you use multi-GPU or multi-node communication primitives in your code, sharing GPU memory can be required for performance, and as you can see in this diagram, this operator enables the GPUDirect RDMA stack, so data can be accessed directly from one GPU to another. Red Hat has announced GA of OpenShift on ARM for bare metal UPI; we have also worked on this topic, and the GPU operator will run on ARM systems as a tech preview. Lastly, we are integrating OpenShift Virtualization enablement into the GPU operator, so you will have the option to run VMs on top of OpenShift with vGPUs, and the GPU operator will take care of the host configuration.
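Once the GPU operator has configured a node, workloads consume GPUs through the standard extended resource exposed by its device plugin; a minimal sketch (the image tag is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvcr.io/nvidia/cuda:11.4.2-base-ubi8   # hypothetical image tag
      command: ["nvidia-smi"]                       # prints the GPU visible inside the container
      resources:
        limits:
          nvidia.com/gpu: 1                         # schedules the pod onto a node with a free GPU
```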
That's it for me, and I'll hand it over to Duncan.

Thanks, Erwan. Yes, as usual we've been busy on the Power and Z side as well. This release we've decided to focus on flexibility, networking and security, as you can see on the slide. I guess security is an important topic for everyone; it seems like that's been the case for a long time. Particularly interesting in this release is support for IPsec, so you can take advantage of network encryption without having to make any changes to your application. Also on the networking side, we all agree choice is good, and everyone has different requirements and needs, so we're pulling in those Multus plugins that are available on the x86 side of the house, so you can choose them and decide what networks you want to attach to. And finally, on the flexibility piece, if you're all like me, we all want a really easy life, so the addition of autoscaling functionality in this release just means that you can react more quickly to changes in user demand, and that makes for a much better, more responsive end user experience. And with that, let's hand over to Tony, who is going to talk to you about the Operator Framework.

Thank you, Duncan. For the Operator SDK, the overall theme is to help increase operator maturity. The first notable feature is the hybrid Helm Operator SDK plugin. Compared to Go or Ansible operators, which can both reach operator capability level five, the traditional Helm operator has limited functionality. The hybrid Helm operator enhances this through Go APIs, so operator authors can not only still create an operator from an existing Helm chart easily, but also continue improving it by adding event-driven logic to the new Helm-based reconciler in Go, making it easier to go beyond capability level two. Next, to the right: operators can create many objects as part of their normal operations, and if not properly removed, those objects can hog valuable resources like etcd keys or storage space. The new pruning library is here to help operators remove objects, specified by group/version/kind, using different pruning strategies. Number three: for an operator to run in a restricted network with OLM, it needs to list all the related images it uses to perform its functions, and all those container images need to be referenced by a digest rather than a tag. The updated make bundle command is here to make it easier to package the operator project so it runs successfully in a disconnected environment. With that, I'll pass it over to Daniel for an update. Next slide please.

Thank you, Tony. Let's talk quickly about Operator Lifecycle Management. You heard earlier in the ACM announcement that HyperShift is now available in tech preview, and fortunately OLM is there to help you with optional workloads as well: OpenShift OLM fully supports HyperShift as of 4.10. A couple of enhancements needed to land in the code in order to remove OLM components, such as the catalogs, from the worker nodes, and to ensure that as a tenant of HyperShift you really only see the worker nodes and don't need to worry about anything else. So OLM moves these catalog components onto the management control plane that is managed by HyperShift. For clusters that are really, really dense, that is, clusters that have a lot of namespaces, we sometimes saw installations exhibiting high memory consumption caused by OLM when it had global operators under its management. In today's model that means that into each of these potentially thousands of namespaces a CSV object is copied, in order to make the installed operator discoverable to the tenant via the UI. If that is a problem for your cluster, we built a relief valve into OLM in 4.10 where you can disable that copying. That will not impact any functionality of the cluster or the operator, but it does degrade the discovery experience in the UI a little bit, where tenants would normally see that an operator is available to them; everything still functions as normal at the CLI and API level. And obviously this is just a stopgap as we move to a global operator model in the future, where we will address this with a proper solution.
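The copied-CSV relief valve is toggled through cluster configuration; based on the upstream OLM feature, it plausibly looks like this (verify the exact field names against the 4.10 docs):

```yaml
apiVersion: operators.coreos.com/v1
kind: OLMConfig
metadata:
  name: cluster
spec:
  features:
    disableCopiedCSVs: true   # stop replicating CSVs into every namespace on dense clusters
```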
Some of our products actually ship in the form of multiple operators, in what we call an umbrella operator pattern; you can see OpenShift Virtualization as well as ODF being shipped in this format. What is important here is that users and customers stay on the supported path of all related operators, that is, including the dependencies of these umbrella operators that get installed. In the past it was sometimes possible to update to a newer version of a dependency than the one you actually wanted, because the way you specified dependencies in OLM was a flat list of requirements. Now in 4.10 we allow authors to specify and express these dependencies in more complex terms, with more complex constraints, using a separate expression language, and as you can see in the screenshot, they can also now contain custom error messages that tell the administrator that a certain dependency is missing or not at the right version level, in case they are managing these manually. Next slide.

Let's switch gears quickly to Red Hat Quay, our central registry for distributing images to an OpenShift cluster fleet. In Red Hat Quay 3.7, which ships very closely alongside 4.10, we are introducing the ability to run builds on OpenShift clusters that are not bare metal. So builds, what are they? You can actually use Quay to build your container image right inside the registry. This is a very easy pipeline setup where you connect GitLab or GitHub, for instance, via webhooks to Quay and trigger an image build, thereby alleviating the need to distribute registry credentials or API tokens in your CI pipeline. The build is carried out in Quay; previously this was done in a containerized QEMU virtual machine running on top of OpenShift, and then the build result, in the form of an image, was pushed to the Quay repository associated with the build. The only problem there was that you needed a bare metal OpenShift cluster to do this, because QEMU needs a bare metal machine to run. Now we have moved away from QEMU to a containerized builder environment, which allows us to build a container image in an OpenShift pod running on any OpenShift cluster, on an infrastructure provider of your choice, including your favorite virtualization or cloud provider. In 3.7 you will be able to configure this via the config file, and in 3.8 this is going to be managed by the Quay Operator. We're also planning some future features, like the ability to produce multi-arch builds with this, that is, images for separate compute architectures, and we're also looking to add pipeline support in the future as another way to delegate build functionality from Quay to OpenShift. Next slide.
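Coming back to the richer OLM dependency constraints for a moment: in the bundle's properties, a constraint with a custom error message might be sketched like this (package name, version range and message are made up):

```yaml
# properties.yaml in the operator bundle (illustrative)
properties:
  - type: olm.constraint
    value:
      failureMessage: requires example-operator in the 1.2.x range
      package:
        packageName: example-operator
        versionRange: '>=1.2.0 <1.3.0'
```

If the resolver cannot satisfy the constraint, the failureMessage surfaces to the administrator instead of a generic resolution error.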
A very often requested feature for Quay is to serve as a pull-through cache proxy. That means you can configure an organization in Quay to proxy a particular upstream registry like Docker Hub. You will want to do this when your connection to such an upstream registry is either brittle and spotty or doesn't have a lot of bandwidth, so always pulling the images from the internet is slow, or when, as is very often the case today with, for instance, Docker Hub, the upstream registry has rate limiting in place, which you would quickly run into behind a corporate firewall when you run a CI pipeline that often pulls, or attempts to pull, an image from a popular upstream registry. With Quay you can now circumvent that by making Quay cache these requests. What happens is that the client pulls from the Quay organization that is configured to cache either an entire upstream registry or just a certain namespace in that registry, and Quay streams the image transparently to the client while also caching it in the background, so a subsequent pull attempt against the cache is fulfilled from the cache and is therefore much faster and much more resource-friendly, while also insulating you from any pull rate limiting that may happen upstream. This is also a way you can have admins allow partial access to, quote unquote, untrusted community registries like Quay.io or Docker Hub without opening up the floodgates to the entire registry and allowing all images to be pulled: you can configure the cache to cover just a certain portion of the upstream registry. Furthermore, you can limit the cache size, so it will not eat up indefinite storage. And it is really transparent to users, because they can configure their clients with a mirror mapping to prefer the Quay cache registry when a pull attempt against, for instance, Docker Hub's library namespace is performed. The picture here shows something that will be possible in a later OpenShift release, where we will also have this tag-based mapping of pull attempts supported in OpenShift; please follow the linked Jira ticket on that slide for that.

The next slide is also related to storage: it's about allowing Quay administrators to prevent the registry's storage from growing forever. We do this by showing users information about storage consumption, both at the repository level, so a particular image and all its tags, and, for Quay administrators, at the organization level. As an administrator of a multi-tenant Quay registry, as you would use it in the classic OpenShift Platform Plus setup, you will want to know who the biggest consumer of your precious storage space in that registry is, and in Quay 3.7 you can see that by just looking at the super user panel. The quota system behind that can also be enforced: a registry admin can define quotas at an organization or registry level that limit the amount of storage a certain organization in Quay can take up, and when that storage limit is reached, no further pushes are allowed until the owner of that organization cleans up older, unused images to fall below the quota. To not surprise users of the registry with these kinds of enforcements, there's also a notification system built in that lets owners of organizations define custom warning thresholds, which use Quay's notification system to warn users of that organization that their storage quota is about to be fully consumed. And with that, I'm going to hand over to another storage topic, presented by Gregory.

Thank you, Daniel, and hello everyone. Let's have a look at what's new in storage with OpenShift 4.10. We are happy to announce new CSI driver support this release: Amazon AWS EFS for RWX storage access, IBM Cloud, as mentioned earlier, and Alibaba Cloud. We're also introducing the Azure File driver as tech preview. In terms of CSI migration, 4.10 includes technology previews for vSphere and Azure File. As a reminder, the CSI drivers are automatically deployed at installation time, or after upgrade, when leveraging the installer cloud provider integration. It's also worth noting that the in-tree drivers are still present and remain the default storage class until the CSI migration goes GA. Next slide please.
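As a hedged illustration of the new AWS EFS CSI support for RWX volumes, a StorageClass could look like this (the file system ID is hypothetical; the parameters follow the upstream EFS CSI driver conventions):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0    # hypothetical EFS file system ID
  directoryPerms: "700"
```

PVCs against this class can then request the ReadWriteMany access mode, which is the point of EFS here.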
We'd like to take the opportunity to give a heads-up for clusters running on top of VMware. As I just said, we are shipping the vSphere CSI driver, and it will be deployed by default after upgrade for clusters that are running in-tree today. The introduction of the CSI driver requires virtual machine hardware version 15; that's a requirement of the driver, and hardware version 15 is supported by vSphere version 6.7 U3 or later. So in order to ensure a successful deployment of the driver, please make sure your cluster is running VM hardware version 15. For clusters that are already running CSI today, with a driver coming from an external source: because two versions of the same driver cannot coexist within a given environment, the external CSI driver needs to be removed so that the deployment of the embedded CSI driver can proceed. In both cases, neither cluster stability nor performance is impacted; checks are in place to detect these situations, and the CSI deployment will resume once the requirements are met. However, clusters that are in this situation will be marked un-upgradeable to preserve upgrade stability: they can still perform minor updates, but the upgrade to 4.11 will be blocked until it's resolved. Next slide please.

Now on to what's new in OpenShift Data Foundation. As presented earlier, we are introducing support for regional disaster recovery with ODF and ACM for failover automation; for the sake of time I won't repeat what Christian explained and will move to the next slide for the other ODF highlights. We are introducing cluster-wide encryption with Vault KMS using a service account, for at-rest encryption and better integration with Kubernetes. ODF now supports gp3 as well as gp2 CSI volumes as the underlying backing storage. On the Multicloud Object Gateway side, we added the capability for legacy applications to write and read from the file system while cloud native applications use the S3 API, which allows better migration and collaboration between legacy and cloud native applications. One last important update is the introduction of a new dynamic local storage solution for single node OpenShift as tech preview. This solution has a very low footprint in terms of resource consumption and is based on the TopoLVM upstream project. That's it for the storage and ODF updates, handing over to Rob for Telco 5G.

Hi folks. High performance applications running DPDK, in the case of cloud native network functions, require CPUs, network interfaces and memory to be located on the same NUMA node; any cross-NUMA-node situation leads to a significant and unacceptable performance drop. In other words, CPUs, devices and memory absolutely need to be on the same NUMA node. Kubelet relies on the topology manager for container NUMA alignment, and so far only CPUs and devices were NUMA aligned. With the addition of the memory manager, kubelet can now align regular memory and huge pages with CPUs and other devices. The memory manager is enabled for the whole cluster by the Performance Addon Operator. For NUMA alignment, set a topology policy of restricted or single-numa-node, and NUMA alignment is then enforced for all containers belonging to the guaranteed quality of service class. Next slide please.
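To make that configuration concrete: the Performance Addon Operator consumes a PerformanceProfile, and a minimal sketch requesting NUMA alignment looks like this (CPU ranges, hugepage counts and the node selector are hypothetical):

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: du-numa-aligned
spec:
  cpu:
    reserved: "0-3"            # hypothetical: hyperthreads kept for OpenShift itself
    isolated: "4-31"           # hypothetical: hyperthreads handed to the workload
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - size: 1G
        count: 16              # hypothetical hugepage count
  numa:
    topologyPolicy: single-numa-node   # restricted is the other alignment option
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""   # hypothetical node role label
```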
We've been aggressively strengthening our zero touch provisioning solution for telco use cases, and we've made significant advancements in 4.10. To highlight a few: we now support deploying DUs to a C-RAN hub, whether that be a traditional cluster or a three node compact cluster; we've added better reporting of the ZTP status with ZTP labels; we've tightly integrated ZTP with the Topology Aware Lifecycle Operator; we now allow for custom manifests; we have a technology preview feature for policy-driven multi-cluster upgrades; we added a feature to allow the cluster operator to sequence SNO provisioning across an ACM instance; and lastly, we added support for pre-caching images and artifacts prior to an update or upgrade, so that you can download the necessary bits beforehand, giving you as little downtime as possible when updating and upgrading. Next slide please.

It's very important for RAN DU workloads to have as many CPUs available as possible, so that the DU deployment on an OCP node can support as many cell sectors and radio units as possible. Red Hat has been driving OpenShift's compute needs down for over a year, and in 4.10 we're confident that the OpenShift control plane plus the load applied by the DU workload can fit within four hyperthreads. The workload's characteristics, for example the number of pods and the number of probes the control plane must manage and run, directly influence OpenShift's compute needs. On this slide we define the application infrastructure budget as the allowable load that the application can put on the control plane: if the workload does not exceed an application infrastructure budget of 1,700 millicores, we believe that, when reserving four hyperthreads for OpenShift, the platform control plane processes can fit within three hyperthreads, allowing for one hyperthread of headroom. Next slide please.

In 4.9 we added a node-local low latency event bus vetted by O-RAN. In 4.9 OpenShift supported PTP ordinary clock, with ordinary clock events published to this event bus, and PTP boundary clock but without events. In 4.10 we're now publishing PTP boundary clock events to this event bus, and we've also added Redfish hardware events to the same O-RAN-approved low latency node-local event bus. Now telco workloads can get all relevant events when running in ordinary clock or boundary clock mode, as well as a better understanding of the hardware state, by subscribing to this event bus. With that said, I'd like to pass the presentation over to Shannon Wilbur to review our 4.10 observability additions.

Thank you very much, Rob, appreciate it. We're excited to announce some OpenShift monitoring additions in the OpenShift 4.10 release. I'll start by highlighting the audit logging functionality for metrics: we now have the ability to view and audit which components are requesting and calling the metrics APIs. We have also enabled query logging for all Prometheus instances, with support for both platform monitoring and user workload monitoring, and there is support to use the Thanos Querier to view which queries are frequently executed and the potential impact queries have on resources, for both performance and capacity monitoring purposes. Next slide please.

To extend that, another enhancement we offer is Prometheus client certificate capabilities, improving the reliability of metrics collection. With the new Prometheus client certificate authentication, we're able to scrape metrics from endpoints that require certificate-based authentication, and to reduce the performance impact on our scrape endpoints, which is really good; we've had a lot of feedback from customers asking us to minimize the overhead from Prometheus, so I'm happy to provide that.
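The query logging mentioned a moment ago is configured through the cluster monitoring ConfigMap; a minimal sketch (the log path is arbitrary):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      queryLogFile: /tmp/promql-queries.log   # arbitrary path; records every PromQL query
```

The user workload monitoring config map in openshift-user-workload-monitoring accepts the same option for the user workload Prometheus.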
So we continue to provide our monitoring experience inside the OpenShift console, and I'm going to highlight some of the workflow and integration inside the OCP console and some of the capabilities you can expect. Next slide please.

Our improved OpenShift monitoring UI experience is really focused on delivering that in-cluster monitoring experience, where, rather than having a distributed management experience across different assets and components, from Prometheus to Alertmanager to scrape endpoints, the OCP console now provides a more native and coherent user experience for administrators and developers. In 4.10 we've enhanced the user experience to bring alerting and metrics together in the OpenShift console: we've integrated Alertmanager under the Observe > Alerting menu, integrated the Prometheus user experience into the Metrics menu, and, as I'll highlight on the next slide, integrated scrape endpoint targets under Observe > Targets, which is another enhancement. Next slide please.

This is a key feature intended to provide a lot of value and remove a lot of administrative overhead. We have new Prometheus target endpoints inside the console, under the Observe menu, which allow us to observe and manage all of our scrape endpoints as one entity in the OCP console, versus previous releases, where we had to manage each of those individual targets independently. That gives us a much better aggregated view of our scrape endpoints and is far more efficient. Next slide please.

For OpenShift Logging 5.4, we continue to evolve our strategy around the Loki logging stack with OpenShift, extending our capabilities with Loki and the Vector log collector. There's been a lot of demand and a lot of community interest around Red Hat and the OpenShift team moving the logging stack to Loki and the Vector collector, so we continue the tech preview phase, working with our customers on their use cases and providing that capability, and we're looking to extend Loki support into next year, while also taking advantage of the performance and capacity improvements across Loki and Vector going forward. With that said, I will pass it off to my teammate Maltron for distributed tracing.

Thanks a lot, Shannon. For distributed tracing, one of the technologies most of you are probably already familiar with, it's based on Jaeger. With OpenShift 4.10 we are releasing distributed tracing data collection, which is based on the OpenTelemetry Collector. It can act as an agent working side by side with your application, or it can work as a gateway connecting to existing backends on OpenShift, but also outside of OpenShift. Eventually we're going to release other presentations explaining the OpenTelemetry Collector in more depth, so stay tuned.
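Since the distributed tracing data collection is based on the upstream OpenTelemetry Operator, a collector instance would look roughly like this (mode, endpoint and pipeline are illustrative; check the product docs for the supported configuration):

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel-gateway
spec:
  mode: deployment        # could also run as a sidecar agent next to the application
  config: |
    receivers:
      otlp:
        protocols:
          grpc: {}
    exporters:
      jaeger:
        endpoint: jaeger-collector.tracing.svc:14250   # hypothetical Jaeger backend
        tls:
          insecure: true                               # illustrative; use real certs in practice
    service:
      pipelines:
        traces:
          receivers: [otlp]
          exporters: [jaeger]
```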
With that, I pass the baton to Radek.

Thank you, Maltron. So Insights is a set of services that we offer to all OpenShift customers for free on console.redhat.com, and they provide an additional experience for managing your cluster infrastructure. One of these services is called Advisor, and it's all about proactive support: giving you recommendations about potential or existing issues, and specific remediation steps for how to avoid them. With 4.10 we're delivering a new user experience in the Advisor application. The new view allows you to look at all available recommendations for your set of clusters, filter them, disable recommendations that are not interesting for you, or not interesting for a specific cluster, and get additional views on the recommendations that are available. With this new UI we're also delivering an onboarding tour that walks you through all these available features. As part of this, Insights also delivers a new feature that was already mentioned by my colleague: enabling simple content access for Red Hat content. Together with this new feature, you'll also be able to see, in OpenShift Cluster Manager, your support status: whether your cluster is supported, at which level, and whether your evaluation expiration is already due. All these features are already available in beta on console.redhat.com, and they will be GA with 4.10 very soon. Another service that is available to you is cost management, and Sergio will talk about it.

Thank you, Radek. Cost management is another offering, part of Insights, that we provide to our OpenShift customers. It basically provides visualization of cost and metrics for any cluster that is available there. One of the things we do is mix that information with the bills coming from the clouds, and one of the improvements we've added for 4.10 is the capability to use savings plans: before, when you had savings plans, we just used unamortized values, and some customers needed to see the amortized values to get accurate numbers. Now that support is provided for anything Amazon-related. Another thing we've been asked for a lot is increasing the number of days of available information: from 4.10 on you will have at least 90 days, so you will be able to go to the cost explorer and select more than just this month and the previous month, which was what we supported before. Another thing that has been a real customer request is support for OCP on GCP: we support Amazon, Azure and Google as underlying infrastructure clouds, and in the past we supported the distribution of the cost of those clouds into OCP concepts like projects or tags; with this release we add the same functionality for OCP on GCP, so we have parity across the three clouds that we support. And the last thing is that we are modifying the way we calculate things. One of the things we do is distribute cost based on CPU or memory into your projects, tags or nodes, and previously you could base that on either usage or requests. But customers were telling us that usage is effectively never going to be billed below the request, because the request is a reservation, so whenever they had a cluster where usage was very low and the request was high, the numbers they were seeing did not reflect reality, with neither usage nor requests alone. Now we are adding the effective usage capability: when you're creating a cost model you can select effective usage, and we will take the maximum of usage and request, so the minimum charged is always the request, and beyond that it follows the actual usage. And that's all for cost management.
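To make the effective-usage calculation concrete, with illustrative numbers: effective usage = max(actual usage, request). A project that requests 4 cores but uses only 1 is charged for 4, since the request is a reservation; the same project using 6 cores against that 4-core request is charged for 6.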
And I think I'm the last one, so this is the last slide. Yes, thank you. Thank you everyone for joining; I hope you learned a lot about the new features you wanted to hear about. A reminder that at learn.openshift.com you can check out guided demos of all these types of features on a real cluster. When 4.10 is out in mid-March, that will be updated; right now it has a bunch of cool stuff covering 4.9 and earlier versions. As always, you can find more OpenShift information, documentation and more at cloud.redhat.com. And I want to call out that we have a new gathering coming up at OpenShift Commons, our user group where folks can learn from each other about how they're using OpenShift successfully; that one is about databases on OpenShift and happens on February 23rd, and if you're watching this recording after that date, go check out the video of that as well if you use databases on OpenShift. Thank you so much, hope you enjoy OpenShift 4.10 in mid-March, and we'll talk to you next time.