All right, folks, hello, welcome to the What's New session for OpenShift 4.8. I've got the whole product management team with me, and we're excited to talk to you about what is in this release. As a reminder, we're going to cover all of OpenShift Platform Plus today, so we're going to talk about all the goodness that we have in OpenShift, our platform services, updates to Kubernetes, as well as Advanced Cluster Management, Advanced Cluster Security, and Red Hat Quay. We've got some themes for OpenShift 4.8. We did a bunch of work on installer flexibility: we know a lot of folks in their IT environments sometimes have protected roles for creating new IAM on Amazon, want to use STS tokens, or want to install into different resource groups that already exist, so we've done a ton of work there on Amazon and Azure, as well as picking up Kubernetes 1.21. Some exciting feature graduations: we have a bunch of APIs that are going to their stable v1s in Kubernetes, as well as some features in OpenShift that are going GA. We've got our cron jobs, pod disruption budgets, and vertical pod autoscalers, as well as scheduling profiles going to tech preview, and what we're really excited about is IPv6, single and dual stack. And then our next-gen developer tools: we've got a bunch of investment in OpenShift GitOps, which we were excited to GA, as well as OpenShift Pipelines. And lastly, a new tech preview coming in the developer tools arena is serverless functions. We've already GA'd some of our other serverless technologies; this is getting closer to a functions-as-a-service experience, and I'm really excited to talk about that more later in the presentation. All right, and then I want to dig into Kubernetes 1.21. This is what we're shipping with, excuse me, with OpenShift 4.8. As I mentioned, a bunch of APIs have graduated to stable, and there's better control over node disruption. This is a big thing for us because we do our upgrades over the air, and so we care about how your pods are moved off of nodes and moved back on. A new node shutdown timer gives you a little bit more control over that, and pod disruption budgets, really the main primitive you have there, are graduating to stable as well. Kube 1.21 is required for IPv4/IPv6 dual-stack support, so we've been waiting on that one to call it GA. And then a few other pod scheduling primitives are coming in Kube 1.21: in alpha, there's the memory manager, and in beta, storage capacity tracking. And as always, a reminder that we version Kubernetes with OpenShift, so we've got CRI-O 1.21, we've got Kube 1.21, and we've got OpenShift 4.8, all designed and tested to work together. Then here is the overall OpenShift roadmap. This is about as far out as we can see. I'll let you pause your video if you want; we're not going to dig into all these things, but you can see that we've got a ton of investment happening in the developer tool space, our application space, the platform itself, and our hosted and managed clusters. So take a look, and we'd love to talk to you more one-on-one if you've got questions about anything. And over to Mike. All right, thanks, Rob. So this is probably the only slide that doesn't have anything to do with OpenShift 4.8, but since it's June, and we're all having a lifecycle event, we thought we'd draw your attention to it. So the last release of OpenShift 3 was OpenShift 3.11, and this month, in June, it reaches the end of full support.
We're extremely proud of the 3.x line. It's been nothing but successful with our customers and users and the ecosystem, but it is time to move it on to its next stage, and that is maintenance support. Maintenance support is critical CVEs and critical bugs, and that will go all the way to June of 2022. Now, the next one is a little confusing to some people, but easy to understand once you know what the words mean. It's the extended lifecycle phase, and that simply means that we're still answering the phone, but we're not issuing any fixes. So after 2022, going into 2024, and by the way, 2024 is like the next presidential election here in the United States, so pretty far out there, we're just answering the phone and pointing you to existing patches that have already been pushed out. We're not doing any code changes in between those years. Now, we did announce this month an awesome new offering, and that is at the top of the public lifecycle page. It looks like that blue box, and it says that if you are in a current OpenShift 3 to OpenShift 4 migration, and you want a little bit more time on that maintenance cycle, maybe you just want one more year, right? So June 2022 to June 2023, we do have a for-sale offering that would allow you to buy CVE and critical fixes during that one-year extension. And that is also mentioned on the migration page, so definitely spend some time on our migration page. If you haven't visited in a while, there are a lot of tools and really great content on that migration page. And with that, we'll move back to 4.8 and to the spotlights, and I believe Siamak is kicking us off. Thank you, Mike. So OpenShift Pipelines, that's the downstream add-on for OpenShift that productizes the Tekton upstream project, a Kubernetes-native CI/CD system, for Kubernetes and built for Kubernetes, and with OCP 4.8, OpenShift Pipelines 1.5 will be released. We already GA'd on OCP 4.7, and now the next release will be made available. There are a bunch of really significant customer requests included in this release. The first of those is auto-pruning of pipeline runs and task runs. This has been possible before, but customers had to manually create a cron job; now the operator manages that, and they can configure it to keep, for example, the latest 20 of their pipeline runs, and the rest automatically get garbage collected, so as not to burden etcd. The other big chunk of capability coming in 1.5 is pipelines as code. It enables Git-centric workflows around your pipeline: it's not just about your application, but the pipeline definition itself. So you define your Tekton pipeline as YAML, put it in your Git repo, and that's all there is to it. You don't create anything on the cluster; every time there's a change in the repo, that definition is taken and gets executed on the cluster. There are a bunch of features around that, like event filtering: you can define whether that needs to happen on commits or pull requests or other events, and, for example, which branches or tags should be monitored for those changes. There is automatic task resolution, so you don't have to have the Tekton tasks that your pipeline needs on the cluster; rather, you can keep them in your repo alongside your pipeline, or simply refer to them in another repo, or, if you just name them, the platform automatically installs them from Tekton Hub.
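To make that concrete, here's a minimal sketch of what a PipelineRun stored in your repo for pipelines as code might look like; the annotations follow the upstream Pipelines as Code conventions, and the branch and task names are placeholders (params and workspaces are elided for brevity):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pull-request-ci
  annotations:
    # Run this pipeline when a pull request targets main
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
    # Resolve the git-clone task automatically from Tekton Hub
    pipelinesascode.tekton.dev/task: "[git-clone]"
spec:
  pipelineSpec:
    tasks:
    - name: fetch-repo
      taskRef:
        name: git-clone   # params/workspaces elided in this sketch
```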
There is also access-control support in pipelines as code, via an OWNERS-style file, that allows you to limit who can trigger your CI on which pipelines, to avoid bad actors. You don't want just anyone to create a PR that runs your CI, so you can have a limited set of users or groups that are able to do that. And we support pull request commands. One of those is actually for the previous point: if you have defined particular users that are approved to run the CI, those people can go and put a comment on the PR, /ok-to-test, and the CI gets triggered, or you can run a /retest, for example. Similar flows to Prow, if you're familiar with that from the Kubernetes community. There is integration with the GitHub Checks API, so on your PR you get the result of your pipeline execution as it executes on OpenShift, and the results are available on your PR as well, with details on the tasks and links to the logs of all those task runs back on the platform in the OpenShift console. Currently, in the first release of this, we support GitHub and GitHub Enterprise; it's a preview feature, and over the next releases we're going to add GitLab and Bitbucket as well. The next item is the ability to customize the default templates that we ship with OpenShift Pipelines. You might have noticed that when you're adding an application through the Dev Console, you have a checkbox to add a pipeline as well, and customers can replace the templates that we give them with their own, more sophisticated pipelines that they can provide to their dev teams. And around the Dev Console: the Dev Console team has done an amazing job, really, with a lot of enhancements around the usability of pipelines in the Dev Console. They're much more than what I can list here, so you will notice a lot of improvements, specifically around the UX of pipelines inside the Dev Console. Next slide, please. The next piece of work we have done is OpenShift GitOps. Again, GA'd on OCP 4.7; with OCP 4.8, OpenShift GitOps 1.2 will be released. Another frequently requested capability we have had is that Argo CD authentication should be integrated with OpenShift, so that OpenShift users can reuse their credentials to log in to Argo CD. In previous versions, we had a manual process to set that up; in 1.2, the operator sets it up automatically through a provisioned RHSSO instance, embedded in the same namespace as the one that runs Argo CD. We have also simplified the privilege configuration of Argo CD. A lot of our customers want to give an instance of Argo CD to their development teams and limit that Argo CD instance to only the namespaces that the dev teams have access to, so they can follow GitOps processes only for that development team's applications within the namespaces they have access to. In doing so, they have to create a number of roles and role bindings; it's not complex, but it's fairly cumbersome, a lot of toil. In 1.2, we have automated that: all of it goes inside the Argo CD CR, and the operator takes care of it. You just specify which namespaces Argo CD needs to have access to, and the operator creates the role bindings needed so that Argo CD cannot break out of the boundaries defined for it. And the Environments view that some of you might have noticed in the Dev Console: there are enhancements around the UX of that when you use the GitOps Application Manager CLI, kam, for bootstrapping your GitOps process. You get your application listed there, you can get to the Environments view, and we have more work planned in that area in the following releases as well.
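As a sketch of that namespace scoping: OpenShift GitOps documents a label-based pattern, where labeling a namespace hands it to a given Argo CD instance and the operator creates the scoped roles and role bindings behind the scenes. The namespace names here are placeholders:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev-team-a
  labels:
    # Managed by the Argo CD instance in the openshift-gitops namespace;
    # the operator creates the roles/role bindings scoped to this namespace,
    # so Argo CD cannot act outside it.
    argocd.argoproj.io/managed-by: openshift-gitops
```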
And last but not least, we have done a lot of collaboration with the ACM team. They have done a great job bringing Argo CD more into the experience of ACM. When you are managing clusters within ACM, ACM recognizes that there is Argo CD on those clusters, and it imports the cluster registry of ACM into Argo CD's registry so that you can easily define applications in Argo CD and sync Git repos to those clusters as well. And within the ACM console or dashboard, you would notice Argo CD applications: an Argo CD application is recognized both in the topology and in the list of applications, with a roll-up capability. So if you have the same application across multiple clusters, ACM can roll that up to a single application that can expand to all those clusters if you need it to. And with that, I'll pass it down to Adele, I think. Yeah, thank you, Siamak. So in 4.8, we're also introducing OpenShift sandboxed containers, as tech preview. OpenShift sandboxed containers brings along a downstream of Kata Containers. So what is Kata Containers? Kata Containers is an upstream project that provides you with an OCI-compliant runtime to run your workloads in lightweight virtual machines, with the same exact experience as you would have with normal containers. What we bring with OpenShift sandboxed containers is an operator, and the operator takes care of doing all the grunt work, combining all the bits and pieces to bring Kata Containers to your OpenShift cluster. Some of the tasks that the operator does: it is available in the Red Hat operator catalog in the console, so you can enable it like any other operator with OpenShift. It exposes a CRD for you as a cluster admin, for example, to configure day-one and day-two tasks. It provides QEMU as the backend, or as your virtual machine monitor, using RHCOS extensions. Additionally, it provides the Kata Containers RPMs and installs them on the node with the same method, RHCOS extensions. This allows us to delegate the lifecycle management to a tool that does it very well, the Machine Config Operator. And it also does the CRI-O configuration: without an operator, you would usually need a script to configure the runtime handler for any runtime class you're adding to your cluster. In this case, we're adding Kata Containers as an additional runtime class, and the operator will also create that RuntimeClass resource for you. So it basically automates all the necessary things that you'd usually do with bash scripts, and handles the lifecycling. How you would use it: as a cluster admin, you create a resource called KataConfig, and that resource at the moment allows you to choose which nodes you're enabling Kata Containers on. So you have the choice to configure certain nodes, and not all the nodes of your cluster, to run Kata-specific workloads, or virtual machine workloads.
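Here's a minimal sketch of that flow, assuming the tech preview API group and a placeholder node label; the pod, as described next, only has to reference the kata runtime class:

```yaml
apiVersion: kataconfiguration.openshift.io/v1
kind: KataConfig
metadata:
  name: example-kataconfig
spec:
  kataConfigPoolSelector:
    matchLabels:
      kata-enabled: "true"    # only nodes carrying this label get Kata Containers
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: kata      # run this pod in a lightweight VM via Kata
  containers:
  - name: app
    image: registry.example.com/myapp:latest   # placeholder image
```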
And once you create that KataConfig resource, the operator will create a RuntimeClass for you, and that RuntimeClass has scheduling configuration and a handler that allow you, as a developer or a cluster admin, to create a pod which references that runtime class. Eventually, the only thing you basically need to do at the pod level is set the runtime class name to kata, and what this does is run your workload in a lightweight virtual machine using an OCI-compliant runtime, which is Kata Containers. Now, what are the default use cases with the addition of OpenShift sandboxed containers? In most cases you'd run normal containers, and that covers most needs. When you're re-hosting, meaning you have existing VM workloads that live outside of Kubernetes or OpenShift, you would like to bring them to be Kube-native, and you have not built a container image for them, normal containers are a good default choice. On the other hand, if you have built an image already and you're already far along in your cloud-native journey, then OpenShift sandboxed containers can be a good choice for re-architecting, because it's also an OCI-compliant runtime, so the experience is exactly the same; you're just changing the runtime class. When is it useful? It's useful for kernel isolation, or for third-party apps that you have no control over. For most cases, normal containers should also be fine for your use cases. Yeah, I think that's about it. Over to you, Naina. Thank you, Adele. So OpenShift Serverless brings the serverless platform to OpenShift, enabling users to run almost any containerized workload in a serverless fashion, a.k.a. scale up on demand and scale back to zero. What serverless functions bring is a simple, focused, and opinionated way to author solutions and then deploy them in a serverless manner. It is a collection of tools that enable developers to create and run functions as a Knative service on Kubernetes, and we are delighted to announce the serverless functions tech preview with OpenShift 4.8. Functions offers a simple programming model that reduces complexity and empowers users to no longer worry about platform specifics like networking, resource consumption sizing, etc. Functions takes it a step further and takes care of the application configuration, project structure, and container creation as well; you create and deploy your application in two commands. The Dev Console has a new visualization for these functions, and they play very well with the drag-and-drop topology of event sources; this will be covered later in the presentation by Serena in the Dev Console section. The simplicity of functions is what attracts both power developers and non-developers, such as data scientists, so they can easily author their models, and even web servers to listen on a port for their services. And apart from shielding developers from platform specifics, it also provides a certain level of consistency and safety and security. Functions offers well-loved runtime languages such as Quarkus, Node.js, Python, Go, and Spring Boot, with TypeScript and Rust on the horizon, and with vast event sources, courtesy of Camel K, and the addition of the Kafka event source's GA, paving the way for production-grade event-driven solutions to solve today's modern challenges. With that, over to you, Mark. Thank you. So this one is IPv6 single and dual stack support. This is a feature that represents a huge body of work between us and the upstream community.
It's also something that many of our customers have been waiting for for quite a while, but it has finally landed in OpenShift 4.8 with the GA of Kubernetes 1.21, so we are providing full support for IPv6 when you're using OVN-Kubernetes as the cluster networking. IPv6 comes in two forms: single stack, where you choose one of IPv4 or v6 and then all OpenShift networking is 100% aligned to that choice, or dual stack, where your pods get both a v4 and a v6 address and the cluster can communicate with any internal or external endpoints that are using v4 or v6. The latter configuration, dual stack, represents the vast majority of our customer use cases, but we support both. The reason it represents the vast majority is because there always seems to be that one server somewhere in your ecosystem that is still v4-only that you'll need to work with, even though you've progressed on to v6. IPv6 support is for bare metal deployments currently, but we'll add other platforms that are IPv6-capable in the future. Next slide, and Gaurav, please. Hi, my name is Gaurav Singh. There are a few features that we have graduated from tech preview to GA. The first one is the vertical pod autoscaler, also known as VPA. What it is, is scaling up your pod by adding more resources to it. Let's take an example: your application team builds an application and gives it to you to run in production, and you don't know how to size it. You can use VPA, run some simulated load on it, and based on historic CPU and memory usage, figure out what resources it will need and run it in production. Another feature is cron jobs, which is basically what you have in crontab in Linux, where you can schedule your jobs to run at a particular time of day. Let's take an example: say your master nodes are deployed in the east zone and your workers are deployed in the west zone; when you create your cron job, it is going to schedule your job based on the time zone where your master node lives, so it will be scheduled per the east zone. Pod disruption budgets basically avoid application outages. Take, for example, three copies of your application, where your business requirement is that one copy of the application should be running at all times; you can define that in the pod disruption budget. Then, say you click upgrade: that will drain the node, and it will start evicting pod one and then pod two; when it gets to pod three, it reaches the eviction budget and will pause the eviction process, making sure you have one copy of the application running at all times. Next slide, please. Let's talk a little bit about VPA. In VPA, whenever you accept a recommendation to add more resources, it is going to evict the pod and then reschedule it, so in order to have a fail-safe method, we have put in the definition that, at minimum, VPA can only be applied to a deployment that has two pods, and you can always change that manually by going into the configuration. There are a few modes that VPA runs in, and two are very popular: one is the recommendation mode, where it will just recommend how much CPU and memory the pod needs based on historical data, and the second is Recreate, which is fully automatic; it's going to recommend and apply the changes without your intervention. So, next presenter, please.
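Two minimal sketches of the primitives Gaurav just walked through, with placeholder app and deployment names; the VPA example uses the recommendation-only mode:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: myapp-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  updatePolicy:
    updateMode: "Off"        # recommend only; "Recreate" applies changes automatically
---
apiVersion: policy/v1        # stable as of Kubernetes 1.21
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb
spec:
  minAvailable: 1            # keep at least one replica during drains/upgrades
  selector:
    matchLabels:
      app: myapp
```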
Hi, everyone, I'm over here. We have many customers tell us their IT departments don't allow them to use wildcard certificates, so we now allow users to configure the external console, OAuth server, and CLI download routes hosted on the cluster. The configuration is done on the cluster Ingress resource object under cluster administration; admins can now set the URLs and certs in one spot, and if no cert is presented, it uses the default certs from the ingress controller. In the future, other external routes will be configurable from here as well. Next slide, please. This slide is all about making things easy. On the left-hand side, we added the ability to import multi-document YAML: you can go to the import YAML screen, drag a bunch of YAML files over, and import them very easily. And then on the right-hand side, we added the ability to drag and drop JAR files right into the topology view, so you can pick your favorite Quarkus application, bring it over, we'll get it initiated, and then the logs will be presented, and voila, your application is available for you to use in OpenShift. Next slide, please. Great, thanks, Ali. To continue the focus on the developer experience in the OpenShift console, we're going to talk about the enhancements that were made to the console for serverless in 4.8. We've made progress in three main areas. The first one is what the image here on the screen is about: the make-serverless tech preview action, which actually creates a new serverless deployment next to your existing deployments. Other configurations, including the traffic pattern, can still be modified in the form, so this is a really exciting way to convert an existing workload to serverless, and we'll continue to enhance those features as we produce additional releases. The second one is around topology: we now support visualizing cloud functions, and additionally, as Naina mentioned, we support all the associated commands as well as the ease-of-use drag-and-drop capabilities when interacting with those cloud functions. And the third item is that we now have advanced scaling options for Knative services: concurrency utilization, which allows users to set the percentage of concurrent requests before scaling, is now available, and we also have support for the autoscale window, which allows users to set the duration to look back over when making autoscaling decisions; the service is scaled to zero if no requests are received in that time period. On the next slide, we talk about what we've been doing to continue to improve onboarding. We now have a getting started resources card for both cluster admins and developers. On the left-hand side, the top image is what we're doing for cluster admins on the cluster overview page: it provides resources to help set up your cluster, build with guided documentation, which is our quick starts, and explore new admin features. A similar card is available for developers on the +Add page, in the bottom left corner, and that provides resources for creating applications using samples powered by devfiles, guided documentation (quick starts for developers), and the ability to explore new developer features; one of the things there points to the latest What's New blog.
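Circling back to those Knative scaling options for a moment, here's a sketch of how they surface as annotations on a Knative Service; the annotation names come from upstream Knative autoscaling, and the values and image are arbitrary examples:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-function
spec:
  template:
    metadata:
      annotations:
        # Scale out when concurrent requests reach 70% of the target
        autoscaling.knative.dev/target-utilization-percentage: "70"
        # Look back 90s when making autoscaling (and scale-to-zero) decisions
        autoscaling.knative.dev/window: "90s"
    spec:
      containers:
      - image: registry.example.com/my-function:latest   # placeholder image
```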
Additionally, we also have a great enhancement to quick starts. As you all know, in 4.7 we provided a mechanism for custom quick starts, and in 4.8 we've added an additional feature: quick start authors can now use special syntax in their console quick starts. They can use the copy syntax to provide a way for users to copy a string into their copy buffer, but even more exciting is the execute syntax: if you have the Web Terminal operator installed, there's a way for users to just see the command in the quick start, click an icon, and have it actually execute in the web terminal for them. On the next slide, we're going to talk about two additional ways to customize the developer experience; both of these features have been heavily requested by customers. The first one is the ability to customize which roles are shown in the project access area in the developer console. The screen on the left-hand side shows that this is a way for developers to quickly provide access to their project, and this new feature allows admins to change the set of roles that are available in that drop-down; there's some syntax shown there on how to achieve that. The project access customization lives in the console spec, and a code snippet for this customization is available in the YAML editor. And then on the right-hand side, you can see that we now give cluster admins the ability to hide features from the +Add page, another thing people have asked for in the past. This feature allows the admin to hide whatever entries they don't want developers to access; to achieve this, there is similarly a customization in the console spec for the +Add page, with a code snippet available in the YAML editor as well. The snippet shown on the screen shows how to hide the "Import from Devfile" entry, for example. Okay, and then finally, on the next slide: we've recently announced the certification program for Helm charts and have been working with a few partners to get them into the catalog, so the developer catalog now displays a badge for the charts that are certified. The certified charts are also going to be visible in the Red Hat Marketplace, and additional charts will be made available in the catalog; if you're interested in specific charts from partners, you can engage with the partner team. And now I'm going to hand it over to Marcus to talk about installer flexibility. Thank you. As you might already know, for OpenShift 4 there are two primary installation experiences. The first one is full-stack automation, or IPI, where the installer controls all areas of the installation, including infrastructure provisioning, with an opinionated, best-practices deployment of OpenShift. The second one is pre-existing infrastructure deployments, or UPI, where administrators are responsible for creating and managing their own infrastructure, allowing them greater customization and operational flexibility. For this release, the supported provider list stays the same as 4.7. Next slide, please. With this enhancement, users can now deploy OpenShift into an empty, user-created Azure resource group. By providing your own resource group, for customers who are more security conscious, this allows the Azure service principal to be scoped to only the resource group, the VNet, and the public DNS zone, rather than the whole subscription. To enable this feature, we use the platform.azure.resourceGroupName field in the install-config file during the installation. Note that when destroying the cluster, the user-defined resource group is also deleted. Next slide, please.
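As a sketch, the relevant install-config.yaml fragment looks roughly like this, with placeholder resource group names:

```yaml
# install-config.yaml (fragment)
platform:
  azure:
    region: eastus
    baseDomainResourceGroupName: my-dns-rg   # where the public DNS zone lives
    resourceGroupName: my-empty-rg           # pre-created, empty resource group
```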
Use a pre-existing Route 53 private hosted zone with shared VPCs: support has been added to specify an existing Route 53 private hosted zone in cases where OpenShift is deployed into a shared VPC. In situations where the VPC is owned by a different account than the account used to deploy OpenShift, you can associate the private hosted zone with the shared VPC and specify the zone ID in the install-config file. You can only use a pre-existing private hosted zone when providing your own VPC; this is not for situations, for example, where the VPC and the subnets are created by the installer. Next slide, please. Use pre-existing instance IAM roles on AWS: we have enhanced the OpenShift installer to allow pre-existing IAM instance roles to be passed in, instead of the installer creating them, via the IPI deployment method. While the documented list of permissions remains exactly the same, this allows admins to apply additional permission boundaries or use specific naming conventions for the bootstrap, control plane, and worker instance roles. This is configured via the compute.platform.aws.iamRole and controlPlane.platform.aws.iamRole fields in the install-config file. Just one note: the bootstrap instance shares the control plane IAM role. Next slide, over to Maria, please. Thank you so much. Our customers want to use temporary, short-lived credentials during and post installation. We have started this work with the AWS provider, because STS enables the authentication flow that allows a client to assume a role, resulting in a short-lived credential. AWS extended their SDK to offer the web identity token flow, which allows automating the process of requesting and refreshing credentials using an OpenID Connect identity provider with IAM, and OpenShift can sign a service account token trusted by AWS IAM. These tokens can then be projected into a pod, and the pod can use them for authentication. This feature became available in 4.7, but we are announcing the GA in 4.8, given everything that we've added for it in 4.8. We support new deployments as well as some greenfield cases; later, in 4.9, we plan to continue to expand and improve the upgrade path, as well as automating some of the checks necessary to move forward, and we're also looking at other providers, but that's what we can offer for now. And now we'll pass it to Shar. Next slide.
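Before moving on, a sketch of those instance IAM role fields from a moment ago, with placeholder role names:

```yaml
# install-config.yaml (fragment)
controlPlane:
  name: master
  platform:
    aws:
      iamRole: my-master-instance-role   # also used by the bootstrap instance
compute:
- name: worker
  platform:
    aws:
      iamRole: my-worker-instance-role
```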
Thank you, Maria. Hi, everyone. For those of you who are familiar with OpenShift 4, Red Hat has the hosted update service, which allows your OpenShift cluster to see all the upgrade edges that are available so you can take further action based on that. But unfortunately, this was not available for many of our users and customers who operate in air-gapped, disconnected kinds of environments, which they have to do for various reasons, including their particular circumstances or policies. So here, with the OpenShift Update Service, we're happy to announce the release of the on-premise version of this hosted update service. The OpenShift Update Service is available as an operator, no surprise there, via OperatorHub, and it allows users to host the update graph information for clusters residing in that restricted network. The service is composed of two components. One is the graph builder, which fetches the release payload information from a local container image registry, the one you used to mirror the content, and builds an upgrade graph based on valid edges. The second is the policy engine, which is then responsible for selectively serving updates to the clusters based on a set of filters that you define as admin. So I hope you get to use this, and we welcome feedback on it for sure. Next slide, please. I'm handing it over to Ramon. Thanks, Shar. So let's talk now about bare metal, and the IPI workflow specifically, starting with one of our firsts in 4.8: the option to boot nodes using UEFI secure boot. The use case, you know: you have a number of nodes that you want to protect against malicious code being loaded and executed in the boot process, and this is essentially what UEFI secure boot does. In this release, we are adding the ability to tell the installer directly which nodes you want enabled with UEFI secure boot. It's as easy as going to the install-config.yaml file, going to the nodes where you want UEFI secure boot, and saying that this is their boot mode. Next slide, please. Another addition that we have in 4.8 is the ability to schedule pods based on bare-metal hardware attributes, specific hardware attributes that we've been learning about, especially from the telco industry. With the help of the telco team, we've learned about a number of attributes related to the type of hardware that you may need, for example, to run pods with maximum performance or in real time, and not all of your hardware, not all of your nodes, will have these attributes enabled. So you want to know, when you schedule a pod, whether you can place it on a given node based on these hardware attributes. The first thing you do is use the Node Feature Discovery operator; this is nothing new, and we have been adding hardware attributes, like, for example, checking whether the CPU P-state is active or not. This is something that comes from telco requirements, but in reality any customer may want to know if a specific node supports a specific type of hardware accelerator, kernel feature, etc. In this example, you can see how you can tell a pod, via the node selector, to be scheduled only to nodes that have the CPU P-state active. Yeah, next slide. Okay, and to finish with new features in bare metal, one of the things we've done: say that you have deployed a cluster with virtual media, or with the assisted installer. To do this, you don't need a provisioning network; when you install through virtual media, all the installer does is map an image, over the network obviously, to the BMC of the nodes, and then the nodes boot that image as if it were local. If you install your nodes with the assisted installer, similarly, you're going to have an ISO, you boot your nodes from that ISO, and you don't need a provisioning network. But say that after your cluster is installed, you now have new nodes that you need to add via PXE booting; you need to provision them through PXE, and for that you do need a provisioning network. So in this release, we have the ability to enable the provisioning network on day 2, after the cluster has been installed, once you start having this need of wanting your nodes deployed via PXE. You can now do this directly on your existing cluster, and the bare metal operator allows you to do it.
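A sketch of that day-2 change; the Provisioning resource below is the cluster-scoped bare-metal provisioning configuration, and the interface and addresses are placeholders:

```yaml
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Managed            # was Disabled when installed via virtual media
  provisioningInterface: eno1
  provisioningIP: 172.22.0.3
  provisioningNetworkCIDR: 172.22.0.0/24
  provisioningDHCPRange: 172.22.0.10,172.22.0.100
```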
And with that, I'm finished with bare metal, and I'll hand it off to Moran to talk about zero touch provisioning. Hi, everyone. So, zero touch provisioning: what is it? It's aimed at regional, distributed, on-prem, multi-cluster deployments, and it enables customers with an automated path from zero, meaning uninstalled infrastructure, to a fully functional cluster with applications running on it. The high-level flow, as you can see on the right side, begins with a site plan: all the site infrastructure configuration and application data is fed into a Git repo, manifested into Git, and using some of the capabilities discussed before, the GitOps integration with Red Hat Advanced Cluster Management, we can basically do the entire site planning, and then it's just waiting for something to happen. That basically enables an unskilled technician to go to a remote site and, using a barcode scan or some trigger of a GitOps change, actually start the entire flow: the entire provisioning of the cluster, the configuration, and the application. How do we do that? Basically, this ZTP flow integrates and leverages the existing technology stack, taking components like RHACM (Red Hat Advanced Cluster Management), Hive, Metal3, and the assisted installer, and integrating them together to provide an end-to-end flow for zero touch provisioning. It has minimal prerequisites, enabling an untrained technician to do the physical installation while the actual install is controlled remotely. It can be done over layer 3; it doesn't need any additional external services or a bootstrap node or anything like that, so it's very much edge-focused. It is a highly customizable deployment: it fits connected or disconnected, IPv4 or IPv6, DHCP or static IP, UPI or IPI; it basically covers the entire scope of metal deployment. It's GitOps-enabled, as I mentioned, so this way we can provide this experience. Next slide, please. So, just talking again about the ingredients and the logical phases this installation goes through. First of all, the site planning: the site planning is done and all the data manifested in Git. We do infrastructure as code, so infrastructure is the first segment that we add to Git, and as Git code it's possible to deploy and configure the entire cluster. It begins with the cluster provisioning, it moves on to using RHACM policies and integration to enforce policy and configuration on the cluster, and then it uses an additional RHACM mechanism, application subscriptions (AppSubs), to provide the application deployment and rollout for the applications and workloads on the cluster. And with that, Anand, would you like to share with us some control plane updates? Sure, good morning, good afternoon, my name is Anand. Let's talk about the control plane updates in OpenShift for the next couple of minutes or so. Thank you. The first one is a single service serving certificate for headless stateful sets. This feature provides automatic certificate generation and rotation for direct pod-to-pod communication. Similar to what the service serving certificates feature does for regular services, this lets you generate a service serving certificate for headless services now, and it includes a wildcard subject in the format *.<service name>.<service namespace>.svc. This allows TLS-protected connections to individual stateful set pods without having to manually generate certificates for those pods.
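A sketch of requesting one of these certificates on a headless service, using the standard serving-cert annotation; the names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-statefulset
  annotations:
    # The service CA operator generates a cert/key pair into this secret,
    # including the wildcard subject *.my-statefulset.<namespace>.svc
    service.beta.openshift.io/serving-cert-secret-name: my-statefulset-tls
spec:
  clusterIP: None            # headless service
  selector:
    app: my-statefulset
  ports:
  - name: https
    port: 8443
```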
The only important thing to note: because the generated certificates contain a wildcard subject for headless services, do not use the service CA if your client must differentiate between the individual pods; in that case, generate individual TLS certs by using a different CA. The next feature is supporting the URI scheme in subject claims of OpenID Connect IdPs. The problem was that users of OIDC systems were formerly unable to log into OpenShift in cases where the OIDC IdP used subject claims adhering to the URI scheme. Why is this important? Because we were rejecting logins from users of quite popular OIDC IdPs, like Microsoft, Yahoo, Google, or Okta, even though these followed the RFC requirements for the subject claim; we found those subject claims problematic and rejected them. In 4.8, users of IdPs that use the URI scheme in subject claims will now be able to log into OpenShift. Next slide, please. The next feature in the control plane that we are proud to talk about is improved customization of the audit config. Just to give you some context and set some background: in 4.6 we introduced an audit log policy feature that lets you control the amount of information that's logged into the API audit logs on the nodes. It had basically three profiles: Default, WriteRequestBodies, and AllRequestBodies. Default logs only metadata for read and write requests; WriteRequestBodies logs request bodies for every write request, besides the metadata; and AllRequestBodies logs request bodies for every read and write, besides all the metadata. With 4.8, what we have introduced in the Default profile, which you see on the top, is that request bodies are now logged for OAuth access token creation and deletion, which is basically login and logout. The way you would set this audit profile is to edit your APIServer resource object, edit spec.audit.profile, and set the specific profile you want: Default, WriteRequestBodies, or AllRequestBodies. Then the kube-apiserver pods roll out; make sure that all the nodes come up on the latest revision, and you're good to go. The next control plane feature I'd like to talk about: we introduced two new alerts that get fired when an API that will be removed in the next release is in use. Those two alerts are APIRemovedInNextReleaseInUse and APIRemovedInNextEUSReleaseInUse. So, for instance, if you're using an API that's going to be removed in the next release or the next EUS release, an alert gets fired so you can be aware that you're using APIs that are soon going to be removed. There is also a new API called APIRequestCount that tracks two things: the number of requests for every API, and whether you're using a deprecated API or not. You can get this information in two ways. You can go to the OpenShift console, go to the API Explorer, and look for the APIRequestCount object; under that, you can see all the instances, and for each instance the number of requests in the last 24 hours, and so on and so forth. Or you can go to the oc command line and run oc get apirequestcounts, and that will list all the APIs and their usage, in terms of requests in the last hour and the last 24 hours, but also another column indicating the release an API is removed in. So, for instance, if you're using an API like ingresses.v1beta1.extensions, it will clearly tell you that it's being removed in the upcoming Kube 1.22 release.
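Back on the audit profiles for a moment, a sketch of that change on the cluster APIServer resource:

```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  audit:
    profile: WriteRequestBodies   # or Default, AllRequestBodies
```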
With that, I'll hand it to Duncan for the next section. Thanks, Anand. So let's talk about cluster infrastructure. We've been doing a lot of work on the internals, but that's not to say we haven't got some juicy additions for you; you can see them on the slide. First up is user-defined tags for AWS. For those of you not familiar with them, you can essentially assign metadata to your AWS resources; each tag is really just a simple label consisting of a customer-defined key and an optional value, and that just makes it easier to manage and search for resources by purpose, owner, environment, or other criteria. Next up is Azure disk encryption sets, which, unsurprisingly, help protect and safeguard your data. This is about meeting requirements from your security and compliance teams; it actually uses the dm-crypt feature of Linux to provide the volume encryption, and there is some integration with Azure Key Vault to help you control and manage your disk encryption keys on that side. And then finally, we're bringing vSphere up to date with our other cloud providers and giving you autoscaling from zero. Again, for those of you not familiar with this, it's all about using resources efficiently: previously, to use autoscaling on vSphere, you had to have at least one node hanging around even if it wasn't being particularly used, so we've done away with that requirement now. And with that, I'll say next slide and move us on to some network joy with Mark, I think. That's correct, thank you, Duncan. So there were many new features and enhancements to OpenShift networking in 4.8, and I'll be covering some of them in the next slides as a representative cross-section, roughly divided into ingress and egress enhancements and then general networking enhancements. On this slide, one of the big new things: HAProxy upgraded to 2.2 LTS, so we get all the new features that are detailed in the link on this slide, including some of the major ones listed here in the bullets: performance, security hardening, all the bug fixes, health checks, some improved observability, all things that are very important to us and our customers. Now, along with the HAProxy upgrade, we added several newly supported customizations for it. For example, there's the router "use PROXY protocol" option: this basically allows the source IP address to pass through a load balancer, if that load balancer supports the protocol, like Amazon's ELB does. There's also the router backend process endpoints tunable: this is critical for shuffling endpoints for proper distribution of requests when you're running multiple routers that have a load balancer in front of them, for example an F5. There's the tune.bufsize tunable: some customers have the use case of very large header data, on the order of 48K or more, but if HAProxy's buffer for that header data is not large enough, it gets dropped. So what we did is add support for the configurability of those parameters; we don't limit what value you can set it to, but keep in mind that the larger the buffer, the more memory it's going to consume across the cluster as you increase that configured value. Next, a customizable number of router threads, nbthread: since 4.1 we've supported the nbthread parameter, but we defined a fixed value of four threads, which was determined to be a best-practices value for most, or many, but not all of the workloads that our customers run. Customers with really large cluster nodes asked us to make that configurable, so we have in 4.8.
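A sketch of a couple of those knobs on the default IngressController, using the tuningOptions fields; the values are arbitrary examples:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  tuningOptions:
    threadCount: 8              # HAProxy nbthread (the previous fixed default was 4)
    headerBufferBytes: 65536    # larger buffer for very large header data
```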
Another one: IP failover with keepalived support. The keepalived image has been there in the product for a while, since 3.x, but what we did in OpenShift 4.8 is formalize support for the use of that image to provide HA in OpenShift, and as part of that support, we also now document a best-practices procedure for implementing it. On the Gateway API front, OpenShift 4.8 will present a developer preview of Gateway API, formerly known by the names Ingress v2 and Service APIs. Gateway API represents a unifying technology for ingress, and we're targeting integration of it with Contour as the primary ingress controller for traffic, alongside our current HAProxy; this will represent an enhanced integration with Envoy deployments and OpenShift Service Mesh. The global access option for GCP's internal ingress LB: without that particular option, traffic originating between projects in a shared VPC network has to be in the same region as the load balancer that's being used, so this facilitates cross-region communication for shared VPC deployments. And finally, the last one, an egress IP load-balancing enhancement: this is for OpenShift SDN, and it provides the ability to spread traffic across cluster nodes. Instead of having a single egress IP tied to a single host, where all traffic, no matter where it originated in the cluster, would go out with that assigned IP address, we remove that single-node choke point. We're going to be adding this same enhancement to OVN networking in a future version. Next slide, please. So, in the category of general networking enhancements: we have a rather large effort underway for OpenShift observability in general, and this first enhancement on the slide represents enablement of network flow tracking and monitoring for network analytics. Basically, we're adding NetFlow, sFlow, and IPFIX collector support to OVN-Kubernetes. This gives us a supported way to monitor traffic in and out of the cluster, and it's really helpful for those customers that need to troubleshoot performance issues, do capacity planning, security audits, and so on. Also in 4.8, we added some key SR-IOV-capable NIC hardware support for our customers; those key sets of hardware are listed there in the bullets. One thing to note: in the next version, 4.9, I know this is about 4.8, but in 4.9 we're moving to a model of "whatever RHEL supports, we support", so this will hopefully remove any necessary future work on this. We've also enhanced the OpenShift SDN to OVN-Kubernetes migration that we support: if you want to go from OpenShift SDN to OVN, we support it for all of our currently supported platforms. We already supported IPI deployments, but what we added in 4.8 is all UPI deployments as well, and we've enhanced and strengthened the rollback capability. Just for planning purposes, keep in mind that if you do switch from one CNI plugin to another, a reboot of all the nodes is going to be required, but hopefully that's not too shocking. And then audit logging: for security and compliance reasons, our customers asked us to provide a mechanism to optionally audit network policy events, like denies. That information is presented to the built-in logging stack and some custom Kibana dashboards, and people find this very useful for IDS or post-mortem analysis. And finally, CoreDNS: we upgraded that as well, to version 1.8, and this includes a number of feature enhancements and bug fixes. With 1.8, we also provide the ability to control the DNS pod placement within the cluster.
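A sketch of that placement control on the default DNS operator configuration; the node selector and toleration are arbitrary examples:

```yaml
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  nodePlacement:
    nodeSelector:
      node-role.kubernetes.io/worker: ""   # pin DNS pods to, e.g., worker nodes
    tolerations:
    - key: dns-only
      operator: Exists
      effect: NoExecute
```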
This is for customers with extreme workloads who want to be able to control exactly where DNS is running within the cluster, so they can ensure it gets the proper resources to handle the DNS lookups and isn't overloaded by the rest of the workload. All right, with that, next slide, please. Hi there, my name is Robert Love. I'm here to talk to you about a feature that we're adding in 4.8 that allows NIC sharing with guaranteed bandwidth for each of the entities using that NIC port. Some servers have a limited number of NICs, or they limit the number of NICs on purpose, to minimize the number of cables that they have to pay for and manage. So if you have a single NIC port and you want to share it between the control plane and the workloads, we've added that functionality in 4.8: via configuration of the OVN-Kubernetes SDN and configuration of NIC rate limiting, we will allow that sharing of a single port between the control plane and the workloads. So, for example, if you have a 25-gig NIC and you want to allocate 20 gigs to the workloads and 5 gigs to the control plane, we allow you to do that, and we will enforce the throughput limitations that are configured. Next slide, please. A few releases ago, we introduced an operator called the Performance Addon Operator. This is an operator that configures a lot of complexity on the node; this is particularly important for far edge and telco far edge, where you really need to tune the node such that it's an appliance. You may need to specify the NUMA topology, the CPU layout, isolated CPUs that will be pinned to workloads, huge pages, and other node-level configuration. The Performance Addon Operator requires you to specify a performance profile, and what we're providing in 4.8 is a performance profile creator: a tool that introspects the hardware and then generates the performance profile for the operator, as a convenience in creating a profile that can sometimes be quite complex. With that, I'll hand it over to Peter, and the next slide. Thanks, Robert. I want to talk a little bit about OpenShift Virtualization, which is the ability to run VMs inside of OpenShift; we've actually been generally available for over a year now, since OpenShift 4.5. I want to talk about a couple of highlights here. Storage enhancements: you can have golden images in a particular namespace that are created by an administrator and then instantly cloned to other projects within your OpenShift cluster. The other thing is, it can sometimes be a little tricky to get CSI-compliant storage providers working easily with VMs and Kubernetes, so what we've done is create this idea of a storage profile that automatically picks the proper access mode and storage type for virtual machines as you create them. The other thing I want to highlight is the ability for compute-intensive workloads, such as AI/ML that may still be running in VM pipelines, and video rendering, to use GPUs attached directly to the virtual machines to accelerate those workloads. You can hear more about this in recorded Summit sessions, where folks like Lockheed Martin have done some very clever things scaling their infrastructure using virtual machines, pipelines, and GitOps functionality, and there was also an Ask the Experts session that we recorded as well. Now I'm going to turn it over to Miguel, who's going to talk about how to get your virtual machines into OpenShift.
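Going back to the performance profile creator for a second, here's a rough sketch of the kind of PerformanceProfile it generates; the API version can vary by release, and the CPU ranges, hugepage counts, and node selector are placeholders that would normally come from introspecting your hardware:

```yaml
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: example-profile
spec:
  cpu:
    reserved: "0-3"     # housekeeping CPUs for the OS and Kubernetes
    isolated: "4-15"    # CPUs pinned to latency-sensitive workloads
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - size: 1G
      count: 8
  numa:
    topologyPolicy: single-numa-node
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""
```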
Thanks a lot, Peter. So, as Peter said, we can run virtual machines in OpenShift, but how do we bring the virtual machines that we have currently running into OpenShift? Well, you have the Migration Toolkit for Virtualization, now fully GA, fully generally available, and you can use it to bring those VMs into OpenShift. It has an easy-to-use UI; you can also use it via the API if you want to automate it, and you can mass-migrate VMs from VMware to OpenShift; we will add more migration sources. We have added a feature called warm migration, because we know you normally require an intervention window to do those migrations: what warm migration does is copy the data from the VM while it's running, and then you shut it down and copy only the changes. There's a validation service, a tech preview, that will review the VMs and the configurations that they have, like, let's say, raw device mappings, shared disks, CPU pinning, the kinds of configuration that require manual intervention or that could render the migration not possible under the current circumstances; it will help you avoid having issues in the migration and let you review things before doing it. Of course, we are very focused on performance, so we have added a feature to prioritize the VM conversion to maximize throughput, and of course, in order not to impact other workloads that may be running, you will be able to select the migration network and redirect all that throughput to a network that is not going to impact other workloads. With this, I'll pass it to Erwin. Thanks. I'll cover for Erwin, thanks, Miguel. Hi, everyone. Again, GPUs have been supported on OpenShift for quite some time now, but many customers have asked: how can we share those GPUs? The typical use case here really is: I have a bunch of data scientists, I have fewer GPUs than data scientists, and they want to use them for some development and experimental purposes, maybe not necessarily full model training, but just some development work, and they need access to a GPU so that they can experiment with CUDA or some of the other things that are possible on GPUs. So NVIDIA introduced this idea of Multi-Instance GPUs, or MIG, earlier this year, maybe last year, anyway, and the GPU operator from NVIDIA that is certified with OpenShift did not have support for this so far, but with 1.7 that changes. As you can see, 1.7 works on the 4.6, 4.7, and 4.8 versions of OCP, and it allows for this MIG mode, allowing you to share GPUs. This is native sharing, versus the other way in which you can do it, which is vGPU, which we talked about at the last What's New; this is an additional way. Natively, the A100 and the A30 NVIDIA GPUs can be put in MIG mode, and the GPU operator can be used for that. With that, and this slide, I'll hand it over to Daniel to cover Quay.
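One more note on MIG before Quay: as a sketch of what consuming a MIG slice can look like from the workload side, with the GPU operator's mixed MIG strategy the slices surface as extended resources named after the MIG profile; the profile and image here are examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-dev
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:11.0-base
    command: ["nvidia-smi", "-L"]     # list the MIG device visible to the pod
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1      # one 1g.5gb slice of an A100
```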
Thank you, Tushar. Red Hat Quay is our central, scalable registry platform for a multi-cluster world, and because of that, we actually put it into the OpenShift Platform Plus bundle. There, it is usually installed via the operator on top of an OpenShift cluster, which we call the hub cluster or service cluster, serving all the production clusters in the spokes. For customers where this cluster is in a disconnected environment, you usually have to overcome a catch-22 situation, because in order to get to that cluster you need to install it first, and for that you need a registry. So, to help customers who at that point don't already have a registry, we are going to deliver a streamlined, simplified Quay all-in-one installer that ships as part of OpenShift. It will deploy a lightweight, streamlined Quay instance on the same node from which you usually run openshift-install. It will have reduced requirements: specifically, it does not require object storage, and it will live for the sole purpose of storing the OpenShift core images and related operator images. The mirroring itself will still be carried out via oc, and if the host that runs this all-in-one Quay instance is behind a firewall or behind an air gap, there will also be an offline variant of that installer available. And since the scope of support for this Quay instance is reduced to just OpenShift payload mirroring, it will actually be available to all OpenShift customers with a valid subscription at no additional cost. It's going to run on RHEL 8 using Podman, and it will be released shortly after 4.8 goes GA; you can retrieve it from cloud.redhat.com as a binary, with the image or without the image, from the same place from which you get the oc binaries and openshift-install. Next slide. Another feature that we are adding with Quay 3.6, which is going to trail the OpenShift 4.8 release slightly, is called nested repository support, and this is for all those customers who are using Quay as an ingress for multiple upstream registries and want to organize content further within a single organization in Quay. Organizations in Quay are our highest-level bucket, and all the images live inside at least one organization, usually as a flat namespace, so when you have a lot of images that you mirror down, for instance as part of mirroring the OpenShift operator catalogs, this gets quite convoluted, and there's potential for naming collisions as well. Nested repository support will let users use forward slashes in repository names, creating the concept of sub-folders to structure the content in an organization. You can see some examples of how this will look on the left-hand side of this slide, and while some of these may actually have the same image name and tag, they are all different images and will not collide with each other. This eases mirroring of the OpenShift catalogs with operators, it also eases mirroring with skopeo from multiple upstream registries, and it can all live inside one organization, which makes permission management much easier. What you don't get as part of this is permission management inside those folders: think of it as very similar to object storage buckets, where inside a bucket you can also have folders, but all the access management, the permissions, and the ACLs are still managed either at the bucket level or at the object level. And with that, I hand over to RHCOS's Mark. All right, thanks. In RHEL CoreOS 4.8, two items. It's based on RHEL 8.4 binary content, so all the latest goodness and hardware enablement that that entails. I'd like to direct your attention to Butane, formerly known as FCCT, the Fedora CoreOS Config Transpiler, and available upstream. We found that FCCT didn't quite roll off the tongue, nor was it quite the right name, as the tool is not actually Fedora CoreOS-specific, and so now it's called Butane, and we are shipping it with OpenShift. Butane is there to help you create Ignition configs, and now machine configs, more easily. Other features include, for example, easier inlining of configuration files and fragments: rather than needing to base64-encode the file content, you can just add it directly to the Butane YAML file, like you see on the left, and you can do that for normal files as well as systemd units.
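A sketch of that inlining in a Butane file targeting the openshift variant; the path and contents are placeholders, and running it through butane produces a MachineConfig you can apply with oc:

```yaml
variant: openshift
version: 4.8.0
metadata:
  name: 99-worker-custom-file
  labels:
    machineconfiguration.openshift.io/role: worker
storage:
  files:
  - path: /etc/example/greeting.conf
    mode: 0644
    contents:
      inline: |
        # plain text, no base64 encoding needed
        greeting=hello
```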
This makes configurations much easier to generate and far more readable. New in 4.8, Butane allows you to slurp in a directory of files and create a machine configuration to push the group out to RHEL CoreOS hosts, which is much simpler than trying to manage that before. And lastly, we've also consolidated the LUKS disk encryption and boot mirroring configuration workflows, with documentation of that in the 4.8 docs. With that I will hand back to Duncan Hardy. Thanks, Mark. I guess we're on to storage now. With OpenShift storage we continue our long and arduous journey to CSI, and the end is actually in sight: many of the in-tree drivers already have their deprecation notices in place and are getting ready to be removed in a future upstream release. For us the move comes in two parts: you've got the migration, and the drivers themselves, which in our case are provided as operators. Starting with the migration, that's the thing that gets you seamlessly from your in-tree driver over to your equivalent CSI one, and as you can see we've got tech previews in place for OpenStack Cinder and AWS EBS. Unfortunately this is a per-driver effort, so we're just going to have to keep going through and getting them all moved across. On the operator side we're GA'ing GCE disk and bringing tech previews for Azure and vSphere, and what you'll see us trying to do here is tech preview in one release and then GA in the release afterwards, so you'll see this kind of rolling rollout; you can check the table on the left-hand side to see how we're progressing at the moment. And please don't worry, all the support is still there for the in-tree drivers, this is just us getting ready for the switch-over. Finally on our side there's the completion of the addition of AWS tags, complementing what we did on the cluster infrastructure side. For those of you asleep or ignoring me earlier, these are the tags that help you manage, identify, organise, search for, and filter resources. One little tip there: don't put personally identifiable information in tags, that's just not what they're designed for.
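For context, tech preview storage features like the CSI migrations above are generally switched on through OpenShift's TechPreviewNoUpgrade feature set; a hedged sketch of what that looks like (note that enabling this feature set is a one-way door that blocks upgrades, so it belongs on test clusters only):

```yaml
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster                      # there is exactly one cluster-scoped FeatureGate
spec:
  featureSet: TechPreviewNoUpgrade   # enables the tech preview set, incl. CSI migration
```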
Next slide, please. Then we've got OpenShift Data Foundation, so hopefully you've all caught the name change from OpenShift Container Storage; I'm still trying to get my brain used to talking about it. They've also been extremely busy with this release. You can see all the features there on the slide, but let's touch on a few. Metro DR stretch, which is a stretch cluster with an arbiter across two data centres, is in there; Multus for network isolation between data and control traffic; and you can see per-PV granularity encryption with KMS integration, which is in addition to the cluster-wide encryption already introduced in one of the previous releases. There are two interesting dev preview features in this release as well: the first is what we're calling regional disaster recovery, that's asynchronous replication across clusters deployed in multiple regions, and the second is something called data segregation per host group, for customers that need to isolate and bind tenant workloads and data to a specific set of hosts. Next slide, please. That moves us nicely onto multi-architecture. On the IBM Power and Z side it does actually feel now like we've got all the main features and add-ons in place that we really need to push OpenShift into those markets. That said, there's always more we can do, more we can add, so in this release we've got a few things coming that we've already got on the x86 side. There's cluster log forwarding, so you can make use of the log aggregators if you want to. There are other things like the converged three-node cluster, to let you make more efficient use of your system; this is particularly important on the Z side, where the IFLs are quite expensive. And then we're enabling better security practices by letting you encrypt your data store. There are a couple of things that only make sense for certain platforms. On the Z side we're filling out the portfolio of storage that we support with 4K FCP support; I remember the days when this first came along and we could only do NFS storage, so it's nice to see that story progress. And then on the Power side we've got SR-IOV enablement. I'm sure you all know this much better than I do already, but SR-IOV is about allowing a PCI device to appear as multiple separate physical devices; at the end of the day this is all about performance, and on these systems performance is what it's all about, so bringing that to the platform is really good. And with that I'm going to hand over to Jamie, who will talk to us about, I think, some security goodness. So Advanced Cluster Security is a holistic security solution designed specifically for Kubernetes, and we want to talk to you about how we're helping advance your security program. Over the last quarter we've been hard at work, and we're pleased to announce that we've achieved the Red Hat Certified Technology Vulnerability Scanner designation. What this designation represents is transparency and accuracy on the issues that matter most to containers with Red Hat packages. It's going to help you reduce the cost of your vulnerability management program by helping you apply the appropriate context to software packages that are identified as vulnerable. When this context is applied over a more generalized data source like the NVD, it can change the risk severity of an issue, help highlight potential compensating controls, and even establish when an issue is relevant. By highlighting this relevancy, your teams avoid wasting time triaging issues that don't apply to a Red Hat package, and this helps you focus on real threats to your clusters and reduce false positives, so you can give time back to your development team to focus on delivering business value. And if your organization has risk tolerance policies, this influences them greatly. We're also working on improving on industry-standard OpenShift configurations for security and compliance: we help teams identify compliant configuration, measure and report on compliance status across clusters, and report on opportunities to improve security posture. With our integration with the Compliance Operator, this will add to our existing suite of compliance solutions and help showcase exactly what you need to do on OpenShift to achieve a given compliance standard.
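As a sketch of what that Compliance Operator integration builds on: the operator is driven by CRs like ScanSettingBinding, which binds a compliance profile to scan settings. A minimal, hypothetical example running the CIS profile with the default settings:

```yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance            # hypothetical name
  namespace: openshift-compliance
profiles:
  - name: ocp4-cis                # CIS benchmark profile shipped with the operator
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default                   # default scan schedule and storage settings
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
```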
We're also working on aligning with the OpenShift experience. If you're not familiar, Advanced Cluster Security came to us through the StackRox acquisition, and we're hard at work aligning it with OpenShift. So what are we doing? We're accelerating the operationalization of security use cases with our new operator, and we're creating a consistent user interface and experience; you'll see that the look and feel of our user interface has changed. We also deliver value very differently from the rest of OpenShift, so I want to call out that our release schedule is not on the standard OpenShift 4.8 timeframe: we work on three-week release cycles in order to accelerate time to customer value. Everything here on the slide is already delivered, but because we work on such tight release frames, you're going to see us delivering feature value much faster, and you're going to see us discussing with the market things that aren't ready yet; for instance, our operator is in active development right now and coming soon. So with that, be excited, and on to you, Jimmy. Awesome, thank you so much, next slide please. All right, I just wanted to give everybody a quick overview of what's going on with Red Hat Advanced Cluster Management for Kubernetes. It provides end-to-end management, visibility, and control over your cluster and application lifecycles, plus security and compliance, across your entire Kubernetes domain, across multiple data centers and public clouds. With this release we focused a lot on the end-user experience. We are announcing a wonderful UI refresh to keep it more in line with the OpenShift look and feel. We also have the ability to import and manage OpenShift on AWS, or ROSA; a lot of our customers are migrating workloads onto it, so of course it makes sense to have a management tool to help them through that process. We also have the ability to run the ACM hub, the main place where ACM itself runs, on OpenShift on IBM Power; this was a big ask from a lot of our IBM customers. We can also provision OCP within Red Hat OpenStack, which has been an ask since the product's 1.0 days: a lot of our customers run OpenStack and want to run OpenShift right on top of it, and being able to provision that directly from ACM is a huge advantage, as you can keep the management of your clusters consistent across the different platforms. We have also expanded a lot on the cluster lifecycle support, and we have been... it looks like we might have lost Jimmy there, so I'll take over; I can jump in until Jimmy's audio comes back. So I'm going to jump in on cluster pools and talk about the value that we're delivering. Cluster pools allow you to quickly deploy clusters and have those clusters sit in a hibernated state, so they're not consuming resources on the cloud or generating cost. These are great for development environments, and also great for CI/CD and situations where you need to quickly spin up a cluster and then bring it back down into the pool. We also bring worker pool scaling, so a lot of you have been asking to be able to scale clusters up and down, and we can now do that directly from the ACM user interface. I'm going to move through a couple of these quickly, but cluster sets give us the ability to group clusters for a simplified RBAC experience, and that's also how we leverage the UI support to configure Submariner. It's been a big ask: teams want to quickly make use of Submariner as a cross-cluster networking component, but they don't really know how to do it and don't want to go through a bunch of steps, so we're accelerating the amount of time it takes to make use of Submariner for cross-cluster networking services. The discovery and import is awesome: it brings the reach of cloud.redhat.com into ACM, bringing some of these services directly from the cloud and providing a way to discover your existing clusters and quickly import them into ACM.
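For illustration, ACM's cluster pools build on Hive's ClusterPool API; a hedged sketch of what a small AWS pool might look like (the names, domain, and secret references are all hypothetical):

```yaml
apiVersion: hive.openshift.io/v1
kind: ClusterPool
metadata:
  name: aws-dev-pool               # hypothetical
  namespace: dev-pools             # hypothetical
spec:
  size: 3                          # keep three clusters ready, hibernated until claimed
  baseDomain: dev.example.com      # hypothetical
  imageSetRef:
    name: ocp-4.8-imageset         # references a ClusterImageSet with the release image
  platform:
    aws:
      credentialsSecretRef:
        name: aws-creds            # hypothetical secret
      region: us-east-1
  pullSecretRef:
    name: pull-secret              # hypothetical secret
```

Claiming a cluster from the pool is then a small ClusterClaim CR; the claimed cluster resumes from hibernation and a fresh one is provisioned to backfill the pool.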
This really improves the time to value in getting your existing fleet under management with ACM. And then lastly, so many folks have been asking for the ability to update cluster versions, and we can now do that in batch: we can take a selection of clusters, move their version channel to stable or candidate releases of the next version, and roll groups of clusters to those new versions in bulk, really reducing the time it takes to manage your fleet. Jimmy, do you want to check your audio to see if you're back? Yeah, I should be back, I apologize for that, I'm not quite sure what happened there. No worries, we're ready for the next slide, take it away. Yes, thank you so much, Scott, and thank you for taking over there. All right, going on to the next item, about expanding the portfolio and embracing open source: we are happy to announce that Red Hat Advanced Cluster Management for Kubernetes is now fully open source, and you can see the URL there if you want to go and check everything out. From cluster lifecycle to application to security and governance, everything within the product is now fully open source, and we're very happy to announce that. We're also happy to announce that the Red Hat Ansible integration is now fully GA. A little bit of a history lesson: in ACM 2.0 we introduced the ability to integrate with Ansible, and this brought a lot of value to our customers. We were able to integrate from the application lifecycle perspective, doing pre- and post-deployment tasks, and again this brought big value, giving customers the ability to quickly integrate with third-party tools within their data center, because obviously when you're building Kubernetes clusters the buck doesn't stop at being able to provision clusters; there's a slew of things that happen behind the scenes before you get there. Now we're happy to announce that we have expanded that integration. Besides making it GA, we now have cluster lifecycle integration, so you're able to run Ansible playbooks pre- and post-cluster-deployment. We also have integration from the governance, risk, and security perspective, so we're able to trigger remediations based on specific policy violations: if you want to take an action from your policy, perhaps open a ticket in a ticketing system, or perform a remediation that requires interacting with a third-party system, you're able to do that and bring the power of Ansible into ACM. We also, as Siamak mentioned earlier on the call (it feels like it was an hour and a half ago, and it was), have full integration with Red Hat OpenShift GitOps, or Argo CD, as well. Now, within the application lifecycle, if you're leveraging Argo CD, you're able to fully integrate all of your applications within the ACM UI, and you're able to interact with them in a seamless way: deploy the applications, interact with them, troubleshoot issues. This is a huge value-add for customers running Argo CD as their application deployment engine today, and that's really who we're targeting: customers that have multiple Argo CD deployments and just want to bring it all into one single, centralized place; ACM really enables you to do that. And of course we are constantly adding new governance and compliance policies, so I highly encourage you to go and look at the GitHub repo, as we're constantly updating those as well.
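To make the governance piece concrete, ACM policies are expressed as Policy CRs; here is a minimal, hypothetical sketch that checks, in inform mode, that a namespace exists on managed clusters (all names are placeholders):

```yaml
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-require-namespace       # hypothetical
  namespace: policies                  # hypothetical
spec:
  remediationAction: inform            # report only; "enforce" would auto-remediate
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: namespace-must-exist
        spec:
          remediationAction: inform
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: prod-payments  # hypothetical namespace
```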
Next slide, please. All right, from a multi-cluster observability perspective, we continuously keep enhancing the visibility you have into your clusters. We understand that managing clusters is not just about provisioning them; we have to continue with day-two operations. Part of the day-two enhancements here is the integration with Insights. Red Hat Insights is a tool that runs within the cloud.redhat.com construct and gives you deeper visibility into what's going on with your clusters, from patches to errata information and much more. This now runs as an operator inside your ACM instance and integrates with Insights from cloud.redhat.com, bringing all of those findings right into the ACM UI, so there's no need for you to log into cloud.redhat.com to see them. We also have advanced configuration for long-term metrics. Obviously, as your deployments grow you will have more and more metrics that you want to keep track of, to make sure you can establish patterns as you go through the analysis of your clusters, and now you have the ability to configure that as well. We also have the ability to configure alert forwarding from all of your managed clusters directly into the ACM hub, and this makes it easier to create alerting and integrate it with things like Slack, for example. You can see here a screenshot of an alert in Slack that provides a direct link into a Grafana dashboard, so you can go deeper and analyze more about what's going on there.
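As a hedged sketch of that Slack integration: the hub's alerting uses the standard Alertmanager configuration format (on ACM this typically lives in a secret in the observability namespace; the webhook URL and channel below are placeholders):

```yaml
# alertmanager.yaml (standard Alertmanager configuration format)
route:
  receiver: slack-notifications
  group_by: [alertname, cluster]
receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/T000/B000/XXX   # placeholder webhook
        channel: '#acm-alerts'                                    # placeholder channel
        title: '{{ .CommonAnnotations.summary }}'
        text: '{{ .CommonAnnotations.description }}'
```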
We can also customize specific metrics: you now have recording rules to support those as well. So with this release, as I mentioned, we are 100% focused on the end-user experience: bringing more visibility into your clusters, making it easier to provision clusters at larger scale, deploying applications much more easily, and integrating with other tools within the Red Hat portfolio. With that I'll pass it on to Sergio, thank you so much. Thank you, Jimmy, next slide please. So in cost management we've been working on a few things. The first thing you'll notice is that the navigation has changed: we now sit under the OpenShift Clusters navigation, so it's slightly different how you get there. Meanwhile, we've been working on adding Google Cloud support, but only for infrastructure for now, because we are still working on OCP on GCP; there are some changes we need to make to fully support it, and it's actively being developed. But right now you can add your GCP sources and have the same breadth of functionality you have with Amazon and Azure today. And talking about Amazon, we also lifted the requirement to add a payer account: if you have a child account that generates its own cost and usage report files, you will be able to add it, and that's going to simplify adding Amazon sources when you are not responsible for the payer account, or when you just want to add only a piece of your infrastructure into cost management. We've also created the new cost explorer, which is a full time-series view of your data. You can group and filter by different concepts and see that over time, and in fact it's the core of new developments in the future, where we will be allowing more than 60 days of data. It's also important to note that it's the first view where you can see the line items as numbers: a table below the graphic with all the information, which you can of course download, so it's a very good way of looking at the information and getting new insights into the data. Next slide, please. More important things: we now have a certified operator. We're going to keep Koku, the open source version of the operator, for beta testing and advanced development, but now it's possible to use the certified operator, and in fact it's so easy that you can install both operators in parallel if you want to test the new one; that's perfectly fine. Installation is still supported in air-gapped environments, and it takes less than a minute if you use the installer. We're also working a lot on performance: you will see that the time we take to update the data, and the performance of the tool overall, has improved a lot. And one thing that we like a lot is that now, when you look into OpenShift Cluster Manager, you will have a widget showing you the cost of your cluster. This is the first integration between the overall OCM and cost management, and we plan to continue adding new features so you can have more views of your cost. That's all for cost management, so I'm going to hand over to Christian to talk about observability. Thank you, Sergio. I'm really excited to present the last two slides of today's session. First, we are kicking off with another round of improvements for our native observability experience. As you all know, for a long time we've been striving to give you better tools inside the OpenShift console, so that you don't have to leave the very comfortable experience you have in the console and go to a third-party system.
In 4.4 we reached a milestone by adding monitoring dashboards into the console itself, and we've been improving that ever since; 4.8 is just another round to make it easier to work with the dashboards we have. We have tons of new features coming in. Just two examples: one, instead of just selecting seconds, minutes, or hours, you're now able to select the very specific time range you want to look at in your dashboards; and two, we have groups now, so if you have a very big dashboard with many different charts, we group similar charts into specific groups so that you can easily digest the data you're seeing. And then we have a few others, like zooming: if you zoom into one chart, every chart gets updated as well, and you can also now click into a specific chart to go to our metrics explorer, where you can further explore or drill down into very specific things. So very nice, very good new enhancements, and we will obviously continue to provide the best possible experience we can. Next slide, please. Switching gears a little bit to logging now. JSON has probably been the most requested feature for us for a long time. We had some issues in the past with how we provided JSON capabilities to our customers, and we've put in tons of work to get past those limitations with something we can fairly say we feel good about. The way we expose the JSON feature is through our log forwarding API. We wanted to make sure we support two different types of use cases. Use case one: you are a customer who primarily uses third-party systems, and the only thing you want is to deploy a collector, select the logs you're interested in, parse those logs into JSON objects, and send them off to your third-party system; that is what we now do with the cluster log forwarding API. Use case two: you are a customer using our managed Elasticsearch, and you want specific logs parsed into JSON objects so that you can go to Kibana and query individual fields, to really selectively choose which logs you're interested in; that is also available through the log forwarding API. Your primary interface for setting all of this up is the ClusterLogForwarder CR: there you put in what you need to forward, and you tell us what schemas those logs relate to. You might have a WebLogic application and JBoss that all use the same schema; this is very important for some of the more advanced management systems to know, so that they can group those logs into a single entity, to save operational costs and give you a more reliable view of how things are going. And last but not least, I talked a little bit about selectiveness and flexibility: in 5.1, the upcoming logging release, we also expose the ability to select specific logs based on pod labels. Before, you had the ability to push logs from specific namespaces or projects into your third-party application; now you can also choose by pod labels. So you might have specific apps whose logs need to go to a specific topic in Kafka, as an example, or you might have team labels on specific pods, where you're not using a namespace per team but labels to identify which applications belong to which team; you can do that as well now. So we are pretty excited about this next round of logging features.
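Pulling those pieces together, a hedged sketch of a ClusterLogForwarder that selects application logs by pod label, parses them as JSON, and forwards them to a Kafka topic (the label, broker URL, and topic are hypothetical):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance                  # the forwarder instance lives in the logging namespace
  namespace: openshift-logging
spec:
  inputs:
    - name: payments-logs
      application:
        selector:
          matchLabels:
            team: payments        # hypothetical pod label
  outputs:
    - name: team-kafka
      type: kafka
      url: tls://kafka.example.com:9093/payments-topic   # hypothetical broker and topic
  pipelines:
    - name: payments-to-kafka
      inputRefs: [payments-logs]
      outputRefs: [team-kafka]
      parse: json                 # emit structured JSON instead of a flat message string
```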
And that's it for today; I hope you all enjoyed this session. And I think Rob probably wanted to say something at the end. Yes, thanks for joining! We've got a bunch of great stuff coming out in OpenShift 4.8, so look for that on your clusters and in your upgrades here in a few weeks. And a reminder: this was all about 4.8, but we also have another session coming up in about a month looking ahead at what's next in OpenShift, so we would love you to join us for that too. Thank you all so much.