getting in the car. So to honor that, we are broadcasting this live to the world at the same exact time we're sharing it with our internal company. You are now on this journey with us, and let's see where we're going. Next slide.

Like I said, this is a roadmap presentation, which means that in quarter four, where we stand today, this is what the men and women of Red Hat are coming to work on. This is our best estimate of where we're going. Now, things will evolve, and in fact in these roadmap presentations things evolve and change a lot, so it's good to check back in with us. We're going to be doing this once a quarter, so you'll get to see the change and the evolution of the features. You know the lawyers want me to mention that you should not buy a product based on its futures; those futures will change. Buy the product because it's awesome right now, because it's going to help you with a problem, and that is what we're going to dive into right now. Next slide.

I have brought with me some awesome people, experts in their fields, and we are going to dive into core Kubernetes. We're going to dive into the top of the stack. We're going to go into the infrastructure, and we're going to get into the developer experience. Next slide.

When you look at OpenShift, it is a platform. It is a hybrid cloud platform. It's going to help you get the most out of your investments, whatever investments you may have chosen, and we're going to shine a light on what hopefully made you choose them. We're going to make it easier to use those things. We're going to make it easier to push to production. We're going to make it easier for you to capitalize on this technology stack, and we do it in the core Kubernetes layer, we do it in the services at the top of the stack, and we do it from a multi-cloud point of view. Next slide.

This particular presentation is going to get deep into what we're going to do in January; it might bleed into February a little bit. Then what we're going to be pushing out in quarter two, and then we're lumping the second half of 2021 together, because things get a little less clear the further out you go. Like I said before, this is an honest assessment of what we're all striving for, both in the communities that you're involved with and within the company itself. With that, I'm going to hand it over to Catherine. Catherine, why don't you take us through the core?

Thanks, Mike. Next slide, please. In the following slides, we're going to kick off the core platform roadmap with a look at the overarching themes which form the basis of the roadmap and guide what we've prioritized. This is influenced by industry and market trends, technology and upstream directions, and most importantly customer feedback: the insights given to us through conversations with customers, RFEs, customer cases, bugs, and other methods like voice-of-the-customer sessions. OpenShift is the foundation of Red Hat's open hybrid cloud strategy, enabling users to have a consistent operational and application experience from the data center to the cloud to the edge, across different infrastructure types, providers, hardware architectures, and accelerators. Basically, OpenShift everywhere. This also means users continue to ask for more flexibility and choice while we focus on maturity and security at every layer of the stack. You will see that reflected in the first two pillars here.
We're also finding that container, Kubernetes, and OpenShift adoption is increasing along with the need to support newer workloads, such as AI/ML and databases. Enabling newer workloads is definitely part of our plan. Finally, our customers are scaling not only in terms of size and number of clusters, but also in terms of maturity and sophistication of their deployments. We see observability, management, and automation as the third pillar. You will see declarative, policy-driven management and automation of multiple self-healing clusters, and application deployment across these multiple clusters, as some of the areas we'll be investing in. Next slide.

So let's kick it off with OpenShift install and update. We're going to be focused on a number of key areas there. For the hybrid cloud, we're enabling OpenShift to be deployed on even more platforms, including Alibaba, AWS Outposts, Azure Stack Hub, Equinix Metal, IBM Public Cloud, and Microsoft Hyper-V. We're also extending our existing provider support to cover more regions as well as additional cloud instances like AWS C2S and AWS and Azure China, and planning to make reliability and scalability improvements, such as adding support for vSphere multi-cluster deployments. The next one on this list is for restricted networks. We're planning to release an on-premise version of the OpenShift Update Service, enabling viewing of graph data information right from the console, which will allow you to see what upgrade paths are available for a given cluster, and add support for leveraging on-disk images when installing OpenShift. Finally, we'd like to improve the OpenShift deployment experience with better documentation of cloud credential permissions for both day one and day two, support for customer-managed disk encryption keys on Azure and GCP, and moving the control plane to be machine-set managed for automated node recovery. Next slide.

That brings us to the core themes: compute, networking, and storage. Let's start off with workloads. In the past year, we have added a number of features to compute, networking, and storage, such as NUMA alignment with the node topology manager, remote worker nodes, and Multus for high-performance networking, to name a few. This was in response to the needs of new workloads and markets that include AI/ML, telco 5G, and edge. We're doubling down on those markets as well as the workloads that run on them. We'll continue to add performance, scale, and scheduler extensions to serve those markets, but also plan to start addressing the HPC market. The next theme is stability and scale. While we look at these new markets, we continue to pay attention to the needs of our existing markets, the enterprise market, and bring you advanced features that will help you better manage applications or certificates, introduce networking support for multiple clusters in an autonomous system with BGP, high-performance networking features, and more. We increase coverage for CI/CD testing of OpenShift while adding telemetry and diagnosability to assist SREs in the field and decrease mean time to repair. And the last one here is partner and technology integrations. We continue to support a broad ecosystem of partners through more certifications of providers for networking via CNI and storage via CSI, and provide more capabilities with extensions to the core scheduler that can be node, topology, load, or specialty workload aware, such as gang scheduling for HPC and grids.
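Circling back to the deployment experience item for a moment, the sketch below gives a rough idea of how customer-managed disk encryption keys are expressed at install time. It is illustrative only, shown for GCP; the disk sizing, key ring, key name, and project ID are placeholder assumptions rather than recommended values.

```yaml
# Illustrative install-config.yaml fragment: customer-managed disk encryption
# for control plane machines on GCP. All names and sizes are placeholders.
controlPlane:
  name: master
  platform:
    gcp:
      osDisk:
        diskType: pd-ssd
        diskSizeGB: 128
        encryptionKey:
          kmsKey:
            name: cluster-disk-key        # hypothetical Cloud KMS key
            keyRing: cluster-key-ring     # hypothetical key ring
            location: global
            projectID: my-gcp-project     # hypothetical project
```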
Next slide. We're excited to introduce upcoming support for Windows containers, enabling customers to run Windows workloads on their clusters. This involves letting Windows nodes run Windows application containers and RHEL nodes run RHEL application containers, with OpenShift orchestrating them as building blocks to compose your next-generation applications. We provide full coverage for moving Microsoft Windows workloads to OpenShift, enabling users to run legacy Windows VMs on OpenShift Virtualization, .NET Framework containers on Windows worker nodes, and .NET Core containers on RHEL or RHEL CoreOS worker nodes. At the heart of the solution is the Windows Machine Config Operator, which automates the orchestration of adding Windows nodes to your OpenShift cluster, enabling Windows Server workloads to run on OpenShift 4.6 or later. The operator allows a cluster administrator to add a Windows worker node as a day-2 operation, with a prescribed configuration that allows it to join the OpenShift cluster and enables scheduling of Windows workloads right on that new node. A prerequisite for this, probably just an important note here, is that the cluster must be configured with hybrid OVN-Kubernetes networking. Next slide.

Continuing with that thought, we're going to jump into the roadmap for Windows containers. The general availability of the Windows container operator will be on 12/14, so it's fairly soon, four days away. The Windows Machine Config Operator will be available from the in-cluster OperatorHub. Once installed, it will be able to deploy Windows nodes right on your cluster, and the operator will be available for AWS and Azure at the time of GA, with support for other platforms such as vSphere and bare metal coming later. Finally, logging, monitoring, and storage solutions for Windows container workloads will also be available after GA.

We're shifting to compute. ARM has been in the news lately with the recent acquisition by NVIDIA, as well as the adoption of the ARM architecture by Apple in their new Macs. ARM also forms the basis of a number of technologies including DPUs and SmartNICs. Therefore, OpenShift for ARM and enablement of the next generation of bare metal as a service with OpenShift is an important focus area for us going forward. These changes will enable a new cloud-like way to do software-defined hardware security and isolation, networking, and storage in the data center. We're also planning to provide more capabilities with extensions to the core scheduler, targeted again at running AI/ML as well as HPC workloads. As a first step, we're introducing scheduling profiles. They're defined as several scheduler plug-in configurations that represent the most common use cases and can be enabled via the kube-scheduler operator.

I'm going to round out this slide with the shift to the control plane. Several new enhancements are planned for the control plane. The first one is productizing cert-manager from Jetstack. cert-manager helps automate certificate management. It builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. We're working on providing a Red Hat supported operator for cert-manager that will be available to all workloads running in OpenShift, except the bootstrap components that need certificates before operators exist. Those include out-of-the-box operators that support it as a day-2 configuration, like OLM operators and middleware software from Red Hat, as well as applications deployed by the customer.
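To make that concrete, here is a minimal sketch using the upstream cert-manager v1 API that the productized operator is expected to expose; the namespace, names, and durations are illustrative assumptions, and a real setup would typically point at a proper CA or ACME issuer rather than a self-signed one.

```yaml
# Illustrative: a self-signed Issuer plus a Certificate managed by cert-manager.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: demo
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: demo-cert
  namespace: demo
spec:
  secretName: demo-cert-tls      # cert-manager writes the signed key pair here
  duration: 2160h                # 90-day certificate lifetime
  renewBefore: 360h              # rotate 15 days before expiry
  dnsNames:
  - demo.apps.example.com        # hypothetical application hostname
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
```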
The next one on the list is custom route names and certificates for OpenShift cluster components, and enabling users of identity providers that use a URI scheme in their sub claims to be able to log into OpenShift. Other significant highlights include providing automatic certificate generation and rotation for direct pod-to-pod communication, similar to the service serving certificates operator. Finally, customers will be able to easily integrate with external KMS solutions by using the existing Kubernetes KMS provider capability and existing KMS plugins, including those provided by major cloud providers. Next slide.

We're going to move to networking. Customers are asking for more advanced features than ever for networking, and we've been delivering upon those asks in our next-generation networking platform, OVN. Examples include east-west encryption with IPsec for customers with regulatory compliance requirements, full IPv6 support for telco, performance improvements via hardware offload to SmartNICs, BGP support with externally advertised Kubernetes services, and eBPF as a high-performance replacement for iptables to remove scalability limitations. Network observability is being enhanced as customers grow to multiple clusters, by adding new metrics and telemetry but also presenting the information in a way that is more easily digestible by networking and security admins.

Shifting to the network edge, OpenShift continues to align with upstream Kubernetes, and Red Hat's leadership contributes to Ingress v2, the Service APIs, which define our next-generation Ingress solution for OpenShift. It builds on the promise to close the gaps between Red Hat routes and Kubernetes Ingress v1. Egress Router, a popular customer method for associating a source IP address, is completed in OVN as we close the gap between OVN and our current networking solution. We've also added day-2 configurable custom domains for customers that want application naming that aligns with their corporate DNS. In addition to performance and scalability enhancements in HAProxy 2.2, we deliver on several customer requests for configurable settings such as a variable number of threads, custom error pages, and more. OpenShift will also provide support for the IP failover image, a highly anticipated customer feature that adds a means of providing simple HA for cluster service external IPs.

Shifting to storage, in the nearer term we are going to be supporting snapshots with the CSI drivers that do have support now, alongside OCS and CNV, which already support snapshots. These will be what is known as crash-consistent snapshots, which is what you get if you remove power from a system with a journaled file system. Our work to transition to CSI drivers is bearing fruit, with some tech previews graduating to GA, including AWS EBS, GCP PD, and Cinder. There are also more tech previews coming as we continue to convert our core in-tree drivers. The other thing here is the CSI migration piece, which allows a seamless transition from in-tree to CSI drivers. We see this happening first for the Cinder CSI driver, with migrations for others following after that. This has become a main focus, as a lot of the in-tree drivers have been deprecated upstream. That means they will be removed once CSI migration goes GA for them. In the longer term, we are going to look at other areas like ephemeral and object storage, and at some point we would like to remove FlexVolume, since CSI support will effectively replace it.
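As a rough sketch of what that crash-consistent snapshot support looks like from the user's side, assuming a CSI driver and snapshot class are already installed; the class and claim names below are hypothetical.

```yaml
# Illustrative: taking a crash-consistent snapshot of a PVC through a CSI driver.
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: mydata-snapshot
spec:
  volumeSnapshotClassName: csi-example-snapclass   # hypothetical snapshot class
  source:
    persistentVolumeClaimName: mydata-pvc          # hypothetical existing claim
```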
Next slide, please. Next, we will talk a little bit about Red Hat's observability story. OpenShift provides several different components that help users understand and troubleshoot problems quickly. At the multi-cluster level, we have Advanced Cluster Management, which provides cluster health observability across your entire OpenShift fleet, and at the single-cluster level as well. We also have OpenShift monitoring and logging, which give you a more detailed view of what's going on inside your cluster, and that's built right into the console. The key focus areas here are hybrid cloud observability, so extending our reach and exposing more multi-cluster capabilities to support our hybrid cloud observability story, and cluster observability, so enriching our in-cluster observability experience for better and easier troubleshooting at a higher resolution. Next slide, please.

So in the near term, for both monitoring and logging, we will continue to double down on the stability and quality of our solution, specifically doing a non-feature release in 4.7 as we work through the backlog and test infrastructure. For monitoring, we're planning to enhance our core capabilities, for example to allow more customization of the built-in experience and support for multi-cluster metrics aggregation scenarios. We're also going to continue to provide a top-notch, console-native monitoring experience inside the OpenShift console. For logging, we'll be focusing on introducing some critical, high-interest new features such as JSON support and more multi-tenancy capabilities around log forwarding, architectural changes to provide a more lightweight, easier-to-operate logging experience by introducing an alternative log storage solution, and finally better native exploration capabilities for logs inside the OpenShift console. For multi-cluster, we're going to be adding additional value to the ACM cluster health monitoring with Thanos and Grafana, providing enhancements to metrics collection and enabling customers to allow their own custom metrics to be gathered and sent to the Thanos hub. Customers can define custom Grafana dashboards from a Git source, ensuring they not only have a read-only multi-cluster health and optimization dashboard out of the box, but consistency in their customized Grafana dashboards as well.

As we get to the last area I'm going to focus on, the cluster infrastructure roadmap, we're continuing to move forward in our focus of bringing value and additional features and exposing them to our customers. There's a lot planned in terms of new key features here. The first thing to mention is proxy support. Historically, the machine API did not follow the global proxy settings. It will from now on. We are bringing this change as part of a z-stream release; from that point, the machine API components will obey the HTTP proxy and HTTPS proxy settings. While we believe there will be limited impact to our end users, it's something to keep an eye on. Also, in the near future, we are starting down the path of importing out-of-tree cloud providers. Just like CSI on the storage side, upstream is moving to cloud providers not being part of core Kubernetes. This means eventually removing all existing in-tree cloud providers in favor of their out-of-tree equivalents, with minimal impact to users. The first one we will do is OpenStack, which will be followed by work on the other key cloud providers, AWS, GCP, and Azure, as part of our mid-term goals. This is also tied to a lot of dependencies on other teams; the out-of-tree cloud providers will require CSI drivers for the storage.
Pretty much all of this work is tied together before it can be delivered. In the mid term, we're also looking to be able to treat the control plane compute as machine sets, as I mentioned earlier, and also working with availability zones, with the long-term goal of aligning with upstream and bringing a fully realized vision of a multi-architecture, multi-cloud cluster to OpenShift.

We're shifting to multi-arch. We're continuing to move forward aggressively on the multi-arch work, focusing on IBM systems, but we're also planning to add support for an additional architecture. In the near term, we're bringing in some of the storage options that didn't make the last release. We're also responding to customers asking for support for OCP on KVM on Z; this is because they don't always want to run z/VM, sometimes because of cost, sometimes because of in-house skills. Either way, we're going to be looking to add support there. Mid-term, we'll also be looking at Multus and catching up on NVIDIA GPU support, though the latter has a dependency on NVIDIA. In addition, we're hoping to introduce ARM support for bare metal installs. This will probably be a pilot initially, moving to a tech preview as fast as we can. Initial support will be only for homogeneous clusters, meaning no mixed architectures as of yet. Longer term, we're looking to expand platform support, with RHV IPI and OpenStack on the list. Also on the roadmap is the IBM Cloud Infrastructure Center integration. This will be a way of delivering the vast benefits of installer-provisioned infrastructure to Power and Z. The final item is the heterogeneous cluster work. This has mainly come up due to our move to ARM; this particular use case requires ARM worker nodes to be part of a cluster with a different architecture. This unfortunately is a big effort that affects many areas of OCP, so it's going to be on the longer side before we can support it.

And to round out this final list, multi-cluster with ACM. We enhanced our existing investment in OPA integration by bringing in the development of the OPA operator, via policy, in the ACM governance, risk, and compliance pillar. ACM also offers GitOps via Argo CD and does it at scale. ACM allows configurations to grow with your clusters automatically; as you deploy and import new clusters, ACM configures them and deploys applications and configurations. This is GitOps at scale. Submariner will be included in ACM as a preview, opening up the possibilities of multi-cluster networking, especially in load-balanced scenarios. Finally, we continue to invest in supporting our customers wherever they run OpenShift, extending the existing support matrix and ensuring ACM hub and managed clusters are working for both Azure Red Hat OpenShift, so ARO, as well as the dedicated service. Thank you, and next slide.

Okay, Kubernetes native infrastructure. Next slide. Okay, so focused on on-prem environments, KNI provides the simplicity and agility of the public cloud in on-prem environments. To do this, we are keeping a consistent OpenShift experience across both footprints, public and private clouds. Among other things, KNI is addressing container adoption growth while still running virtual machines, and we'll talk about this later, along with containers, by running OpenShift clusters on bare metal. Next slide. And we're going to start with bare metal. If you remember, in OpenShift 4.6 we introduced bare metal IPI to our family of installers.
And in this model, the KNI model, the OpenShift installer is aware of the infrastructure provided by the bare metal nodes, treating bare metal nodes and interacting with them as you would with machines in any other cloud provider. This is where the magic is in this part of KNI: the ability to treat bare metal nodes as if they were machines in AWS, in Google Cloud, or in any other platform. The resulting cluster is an OpenShift cluster on bare metal, but an OpenShift cluster on bare metal that's aware of the bare metal nodes it's running on and manages them as well. Let's see what's next with this on the next slide.

Okay, as part of the installation, I think one of our biggest additions in KNI is going to be the fully supported assisted installation from cloud.redhat.com. This is an online assisted installer, an online experience to deploy your cluster on bare metal, making it very easy, with a very low entry bar and very few prerequisites, to get clusters up and running, with a UI installer that guides you through the installation process to get your cluster installed. Let me highlight a few other things here, now more generally, not only about the assisted installer from cloud.redhat.com: we are adding improved validations. So, okay, let me start again. In OpenShift 4.6 we introduced bare metal IPI. 4.6 is out, and we have had feedback from users in both telco and enterprise environments in these first few weeks, a couple of months, and that is what we are learning from. We are introducing things like improved validations. One of the things we learned is, look, this fails, but it takes a while; I want it to fail fast and tell me exactly what I am missing. So based on that, we are improving the user experience. We are also adding features, features that initially are coming mostly from telco customers, but let's not forget about our enterprise type of customers, of which we have loads, and where the interest is really high as well in automating and having the ability to deploy fully automated clusters on bare metal. The highlights here: you will see that, for example, UEFI Secure Boot is an addition that anyone can benefit from, but we learned that telco customers require it. You are passing your security audit, and if you don't have UEFI Secure Boot, well, somebody is going to flag it during the audit. Similarly, we are adding FIPS mode support. FIPS is supported in OpenShift, but we are adding the logic to the IPI installer so that you can say, hey, I want to have it enabled, and you can do that at installation time.

Then on the management side of the clusters deployed with IPI, again, we are learning about how people are using these clusters on bare metal and what gaps we have. One of the things you have with bare metal is that when you reboot a node, unlike rebooting a virtual machine, it's going to take some time. We are working on faster recovery times after a bare metal node failure. Similarly, we are also automating the recovery, in this case via BMC management; the BMC is what we use to manage the nodes in general. Not everyone is going to have BMCs on the nodes to be able to recover them after a failure, so we are also doing this through a process that we call the poison pill.
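For context, this is roughly how hosts are described to the bare metal IPI installer in install-config.yaml so it can drive them through their BMCs; the VIPs, addresses, MAC, and credentials below are placeholders, not a recommended layout.

```yaml
# Illustrative install-config.yaml fragment for bare metal IPI.
platform:
  baremetal:
    apiVIP: 192.168.111.5        # placeholder API virtual IP
    ingressVIP: 192.168.111.4    # placeholder ingress virtual IP
    hosts:
    - name: worker-0
      role: worker
      bmc:
        address: ipmi://192.168.111.20   # placeholder BMC address
        username: admin
        password: changeme
      bootMACAddress: 52:54:00:00:00:01  # placeholder MAC
```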
More things that we are doing: we are learning that, especially in the telco market, customers want specific sets of BIOS settings configured, and those are needed by workloads that have to be deployed only on nodes configured in the specific way telco workloads expect. In order for us to do this, we are adding support to get and set BIOS settings, and to also schedule workload placement based on them. On the networking side, still on bare metal, what we are doing is host network configuration. These are the nodes you install the OpenShift cluster on; we want to give you a way to configure the host network any way you want, with VLANs, with bonds, with the standard configurations that you can do manually in your operating system, now automated for you on day one during the installation. Similarly, with the hosts that you are going to use for your cluster, we are learning that not everyone wants or can have DHCP. The ability to configure static IP addresses for your nodes is something that we are working on. As part of deploying an OpenShift cluster on bare metal, one of the things we do is add a load balancer, based on keepalived and HAProxy, and DNS, based on mDNS and CoreDNS as well. Customers are telling us, well, I may want to use my own load balancer and my own DNS, not the ones that are deployed by you, so we are adding the ability to enable and disable those. There are several more things about KNI on the next slide.

Now I would like to talk about OpenShift Virtualization, again still within Kubernetes native infrastructure. I would like to suggest you read the blog post that is linked in this slide, but staying with OpenShift Virtualization in OpenShift 4.6: remember that we went GA in 4.5, and with that release we've been focused on making VMs easier to create and consume in Kubernetes. We've also demonstrated performance parity for virtual machine workloads in OpenShift, even for the most demanding enterprise database. More on this in the blog post linked in this slide. Now let's see a few more things about OpenShift Virtualization on our next slide.

Okay, so let's start with the core of OpenShift Virtualization and core improvements for virtual machines in OpenShift. I will highlight a few of them. For example, running compute-intensive workloads like AI and ML. I'm so sorry to interrupt you, the audio is a little spotty. Can you try turning off your video and see if that clears up? It's been spotty, just internal and external. Thank you. Is it better now? I think there may be some, I'll let you know. Okay, I'm going to keep going then. So, as part of the core: running intensive workloads like AI/ML with GPU pass-through and also via vGPU is something we are adding to OpenShift Virtualization. Deployment on public clouds where bare metal is offered; remember, OpenShift Virtualization requires OpenShift to run on bare metal nodes, and public clouds like AWS and IBM Cloud offer bare metal instances, so that is where we are adding this capability as well. We're also working on tooling for mass workload migration from vSphere and RHV platforms into OpenShift Virtualization. Then on the networking side... I hope I'm not breaking up anymore, hopefully you can follow me. Ramon, the audio is still a little spotty. We may want to, is it possible for us to move forward a section and come back to your slide? You may want to test your connection because it is very difficult for everyone to hear. Yeah, okay.
So, let's move to the developer and platform services, and then I'll start again from here. Next slide, please. Hi everyone, Ali Mobrem here, a product manager for the OpenShift console. Today I'll be guiding you through a journey of developer and platform services, with Siamak as well. So, first I want to talk about our customers. Each of our customers has different needs and requirements, as they should, since they all are unique businesses. We as OpenShift, aka the platform of platforms, need to enable our customers to easily configure, customize, and extend the OpenShift platform in order to meet their business needs. The developer and platform services are specifically engaged in removing all friction, from adding and configuring services on top of the platform to developing and deploying applications to the platform. The following areas highlight how we approach that. You have Operators and Helm; these are key mechanisms for packaging and managing add-on services. Then you have the console, which seamlessly ties everything together and provides users with an exceptional user experience. Next we have Quay, which is our central registry that helps power deployment of services and applications. After that we have Serverless and Service Mesh, which provide an easier way to develop and run applications in Kubernetes with a secure day-2 story. Next we have DevOps and GitOps, which facilitate application delivery and management, and finally the developer tools, which empower developers to take full advantage of the OpenShift platform. All in all, our goal is to provide our customers with a Kubernetes platform that will help them succeed. Next slide, please.

The OCP console has four high-level themes which we focus on with every release. Our first theme is about making developing, building, and deploying applications on OpenShift as easy as possible, with the goal of providing developers everything they need in a fashion that is consumable by them. Below are a few of the highlights that you can expect to see in the future; you can look for things like new experiences for functions, integrations with managed services, and much, much more to come. Our next theme focuses on teaching our users about Kubernetes and the ever-fast-changing ecosystem. Look for improvements to Quick Starts and features like CLI shortcuts that will help our users automate what they just learned in the UI. The next theme is about managing OpenShift. Look for the console and ACM to converge into a single interface for our customers; the need for multi-cluster management grows more and more important every day. Finally, the last theme is enabling the extension of the platform. Look for the new dynamic plug-in framework that will enable operators to easily add and integrate new UI with the OCP console, and look for general improvements to interacting with operators on the platform as well. Next slide, please.

Let's dive a little deeper into dynamic plug-ins. We created the dynamic plug-in framework to empower operators to create exceptional, native UX experiences on top of OpenShift. Previously, in order to do native integrations with the console, our add-on operators like OpenShift Serverless or Pipelines were limited to using our pre-existing static plug-in mechanism. The static plug-ins bound those operators' UI code to the console itself, not allowing our add-on operator teams to release new UI code on their own timelines.
Now, with the new dynamic plug-in framework, teams are able to update their UI code with their operator release. This just adds another level of flexibility for our add-on operator teams. In 4.7, we're building the foundation of the framework, and then in following releases we will migrate the existing internal teams; further down the line, once we work out all the kinks, we will open it up to certain partners and then to the general public. The dynamic plug-in framework will be the ultimate enabler that will allow internal and external add-on operators to create some very cool solutions on top of the OpenShift platform. Next slide, please.

In 4.6, we introduced Quick Starts as a new onboarding pattern that has the major benefit of guiding customers with an interactive experience, reducing the time it takes to get customers up and running. Now, in 4.7, Quick Starts are extensible: operators and customers can utilize the console Quick Start CRD to easily provide their own Quick Starts. Additionally, we will add hint interactions that allow users to highlight components of the UI. Also in 4.7, our Quick Start catalog has been enhanced to be more scalable, now supporting filtering by keyword and status. We will provide additional enhancements, starting with embedded CLI commands. Additional items in our backlog are easy access to Quick Starts from topology for the developer, resizing the Quick Start panel, a grouping mechanism for related Quick Starts, and much, much more. Keep an eye on this area. Next slide, please.

In 4.7, developers now have an enhanced developer catalog experience. The developer catalog contains a number of sub-catalogs, each with their own set of features. Developers will have the ability to see all offerings in the developer catalog, along with the ability to filter by category and/or keyword. Cluster admins will now have the ability to customize the available categories in the developer catalog. The categorization component works off a default set of categories; a category is only displayed if it has associated items in the catalog. Sub-catalogs include items like builder images, event sources, Helm charts, managed services, operator-backed services, Quick Starts, and more. A customized experience is available when drilling into a specific sub-catalog, exposing additional catalog features; for example, in the future, our Helm chart catalog will have the ability to filter by available chart repositories. And finally, cluster admins with RBAC can limit what developers are able to see and access. Next slide, please.

All right. So, building on a solid foundation, we have a one-stop shop for application monitoring. In 4.6, users have the ability to view and silence alerts as needed, exposing alerts on workloads in the topology view for easy discoverability. Next, we will provide a dedicated area to view monitoring targets and their associated status; this will allow users to know at a glance which workloads have custom metrics enabled. We're also working on pulling in performance analysis of Java applications as well as integrated logs and tracing information. In 4.7, we've added the ability to see base image vulnerabilities in your project. In the future, we will enhance the experience to also include application vulnerabilities, identified through our partnership with Snyk, when the appropriate operators are installed.
This integrated experience allows developers to identify and possibly fix application vulnerabilities at a much faster rate. Next slide, please.

Moving on to the Operator Framework. The operator catalog in OpenShift has grown significantly over the last OpenShift releases; over 300 operators are now shipping out of the box with every cluster. So, there's a renewed focus on a smooth update experience, with more controls for the cluster administrator to specify which kind of updates happen at which time. We also want to enable better insight into dependency resolution between operators, so cluster admins can see up front which operators will get installed as a dependency. For developers, we want to provide a broader choice of programming languages to create operators, and we're looking at a prototype design for Java and Python in the Operator SDK as early as next release. Beyond that, we're going to make it easy for developers to design level four and level five operators by creating rich libraries with reusable components and models to cover application management. Next slide, please.

Here's an example of how we want to make operator updates easier to plan for. Smart updates allow cluster administrators to define a policy for operator updates in which patch releases are applied automatically, but changes in minor or major versions wait for explicit approval by the admin. This balances our desire to regularly ship operator updates to fix CVEs in z-streams with the predictability of change sets due to an operator update in production. Next slide, please.

Another way to make operator updates more robust is to actively communicate with the service that it's about to be updated. In the next release of OLM, an operator can communicate a non-upgradeable state. This makes sense for critical operations that should not be interrupted, like an application configuration change or, for example, in the case of OpenShift Virtualization, the live migration of a VM. OLM will delay any update until the operator finishes the operation and reports readiness for upgrade. This operator-to-OLM communication might also prove useful for other scenarios, like missing subscription licenses for paid operators. Next slide, please.

Moving on to Helm. Helm is one of the most popular package managers for Kubernetes, and now that it's GA on OpenShift, we continue to integrate its capabilities across OpenShift. The goal is to provide a self-service application development experience that makes OpenShift's tagline, innovation without limitation, a reality, by enabling developers to use the tools they desire and deploy their applications with minimal intervention, greatly reducing application time to market. We will continue to bring greater integration with the various developer tooling and services, including odo, the Service Binding Operator, devfiles, and more. Next slide, please.

Moving on to Quay. Quay is becoming an OpenShift-native registry. Not only does it run well on OpenShift, fully autonomously managed by the brand-new Quay operator in 3.4, but it will natively integrate with OpenShift monitoring, alerting, logging, and authentication, aiming for Quay 3.5. Many of our OpenShift clusters are running in a disconnected environment, where Quay aims to help streamline the process of creating and maintaining a mirror for OpenShift deployments. Finally, in a multi-cluster world, Quay is the central registry for all Kubernetes artifacts, not only container images but also Helm charts and operators.
Naturally, Quay will be used by many different clusters and tenants, and we aim to provide better support for the ops teams running Quay by adding more controls for the superuser, and quotas to prevent noisy-neighbor syndrome. Next slide, please. Another important area is support for larger multi-tenant deployments. To support the multi-cluster landscape, our customers are steering towards Quay, and we will introduce quota management in two steps. In the first step, in Quay 3.5, quota reporting will be introduced, where various metrics like storage consumption, network egress, and registry operations like pulls or builds are counted. This will enable reporting and showback. A soft enforcement mechanism will follow that allows creating notifications for tenants who are surpassing their quota. In the next step, quota enforcement will be enabled, which can be configured to trigger throttling of network traffic or builds, or initiate pruning, starting with the oldest container images. There will also be options for Quay administrators to temporarily exempt users and organizations from their exceeded quota. Next slide, please.

For public sector customers in North America, FIPS certification is very important. Quay so far has not been supported on FIPS-enabled OpenShift clusters, and Quay is not FIPS certified by itself either. With the rebase to Python 3 in Quay 3.4, there is an opportunity to unblock Quay on FIPS-enabled OpenShift clusters. In 3.5, Quay running on top of OpenShift in FIPS mode will be fully supported by Red Hat, which removes what seems to be the largest blocker for government agencies as of today. Later in 2021, we are aiming to certify Quay itself with FIPS certification. Next slide, please. I am going to pass this off to Siamak now. Thanks, everybody.

Thanks, Ali. Moving on to what is coming throughout next year around the Serverless and Service Mesh add-ons. Around Serverless, one of the major focus areas is day-2 operations. We have had Serverless in OpenShift for a while, it's already GA, and customers are adopting it through the user experience available in the CLI and the console for deploying serverless applications. But as these applications move to production, there are more capabilities needed around monitoring these services, so teams running serverless workloads are able to make decisions based on their performance and get that feedback back into the development cycle. So that's one of the areas: bring more insight into the services deployed, and also focus a little more on the delivery cycle and the CI/CD and GitOps flows for delivering these workloads. These are applications like any other applications that need to fit into the existing CI/CD or delivery flows that customers already have. Another area of focus is integration and ecosystem, especially around event sources. What Serverless enables foremost is event-driven architecture, and it's very important to have a rich ecosystem of event sources available, added through Red Hat products or partner products and ISVs, that customers can integrate into their applications, consume these events, and build workloads around them. And last but not least is developer experience, to incrementally improve the experience we have across the developer tooling, not just the console but also the CLI, VS Code, IntelliJ, and other IDE plugins that we might be working on throughout the next year.
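For a sense of what developers actually deploy here, this is a minimal Knative Service of the kind OpenShift Serverless runs; the namespace and image reference are placeholder assumptions.

```yaml
# Illustrative: a minimal Knative (OpenShift Serverless) Service.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
  namespace: demo
spec:
  template:
    spec:
      containers:
      - image: quay.io/example/greeter:latest   # hypothetical container image
        env:
        - name: TARGET
          value: "OpenShift Serverless"
```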
On the Service Mesh side, scaling service mesh is a big area, because as customers have more and more OpenShift clusters, and as provisioning OpenShift clusters becomes simpler and they deploy more of them, they end up having more service mesh add-ons in each cluster. So there are capabilities needed for those service meshes to talk to each other, in a federated approach, or to be able to have shared control planes, for example. Having multi-cluster meshes, across multiple use cases, is an area of focus. Navigating service mesh, like getting started with understanding the use cases, the documentation, quick starts, and other approaches to get customers going, is another area, to help customers understand the value of service mesh and integrate it into their applications. And also better integration with the rest of the Red Hat portfolio on OpenShift, for example with API management and with our delivery pipelines, Tekton and the like, is the next major area that we'll work on throughout next year. Next slide, please.

More recently, what we are working on in these two areas: on the Serverless platform, functions is an exciting piece that will be released as a tech preview, really expanding the serverless capabilities so that it's not just applications running on Serverless; you just bring your code as a function, and you don't have to worry about the deployment mechanism and configuration of it. Initially it's based on Go, Node.js, and Quarkus, expanding to Vert.x and Python and other types of runtimes. Support for running Serverless on OpenShift Dedicated and on Amazon with ROSA is the next part that is coming, first as an unmanaged operator to make it available on dedicated clusters for customers, and there are discussions to make it a managed service going forward as well, so that customers can have the same OSD type of experience, with the responsibility of management on Red Hat, around Serverless too. Eventing moved to GA; it was released as GA last month as part of Serverless, and there are more capabilities coming in the admin console, for the admin persona, for dealing with events. Monitoring is one piece of that, but also being able to bind event sources into serverless workloads. One of these event sources is Kafka, and the integration with Kafka is becoming generally available in Serverless as well. Around Service Mesh, the same story; this is a theme you will see across most of our add-ons in OpenShift: we want to provide the same experience on OpenShift Dedicated clusters, ROSA, and other flavors of it, beginning with unmanaged and moving to managed throughout the year. Multi-cluster federation is another part that comes in Service Mesh 2.1, and we're expanding support to workloads that are not running on OpenShift, like virtual machines or bare metal, recognizing the fact that many of the applications customers deploy are really hybrid combinations of container and non-container, so you can use the same service mesh across all these workloads. Next slide.

I briefly touched upon serverless functions. It's a very exciting area, especially because it offloads a lot more of the responsibilities that application developers had to bear around serverless and puts them on the platform. You write a function, essentially, in your favorite programming language, Go or Node or Java, and based on buildpacks you can deploy the function and run it on the platform, with the configuration and operations managed by the platform.
So there's a local experience available through a plugin for the kn CLI, the func CLI, that helps you build an image based on the function code you have and deploy it on the platform. There are templates, and you can bind it into different types of cloud events. As we go forward, we're expanding the runtime support for what type of functions can run on this platform; Spring Boot is next. Python especially is important, because there are a lot of AI/ML workloads that fit the function and serverless use case really well, and we want to tap into that community and make that available on OpenShift as well. Next slide.

On Service Mesh, I briefly talked about a federated approach. On the left-hand side of the slide is what you see in Service Mesh 2.1: there will be more capabilities for multiple clusters that have service mesh installed, so the meshes are able to talk to each other and coordinate their capabilities, and it will incrementally gain more and more features around this scenario. Moving later throughout the year, we are heavily involved in the Istio community, alongside IBM, in discussions around being able to have a central control plane. If you have multiple clusters with service mesh enabled, you don't want one instance of the control plane in each of these clusters, similar to how the federated mode looks, but rather a central control plane where each cluster has its own data plane, and the mesh is coordinated across these clusters. So that's more of a long-term item, toward the end of next year. Next slide.

Around DevOps and GitOps, there are three main areas we are working on. OpenShift Builds focuses on building images in the cluster, an evolution of the existing BuildConfigs or classic builds that you might be familiar with. OpenShift Pipelines focuses on Tekton pipelines for CI, and OpenShift GitOps focuses on Argo CD for enabling GitOps workflows, Git-based workflows, for delivering applications and configuring clusters. In OpenShift Builds v2, a buildpack strategy for Java and Node.js will be added to the platform alongside the source-to-image and Dockerfile build strategies that are already available. We are also working to build the community around this, so that there are more community build strategies for other build tools, like Kaniko or Jib or others that customers are using, to be able to have the same experience but as a community build strategy. Separation of build tools and runtime images is another area we're working on: being able to have builder images that contain the build tools, like Maven and the JDK for example, and runtime images that are really lean, so customers can create images that are very small and deploy them on the platform. Volume support and dependency caching is another area we are exploring with OpenShift Builds, to make sure that different types of volumes can be consumed and that builds are fast, by caching dependencies for Maven, NPM, and similar build systems. OpenShift Pipelines metrics and trends is an area that will appear in the platform closer to the beginning of next year, so teams can get insight into how their delivery pipelines are performing and also be able to identify issues when there are anomalies. Pipeline-as-code is a major area, to align the way that people build delivery pipelines more with GitOps practices and treat Git as the single source of truth for the pipeline definition as well.
More assistance around migrating to Tekton: I can see on the slide that it's actually reversed, it's a Jenkins-to-Tekton migration guide, to help the many customers who ask us how, having made investments in Jenkins, they can move some of those workflows to Tekton, and to provide them some guidance and assistance on how that can be achieved. We are also working on getting OpenShift Pipelines onto OpenShift Dedicated and the flavors of that, initially as unmanaged and going forward as a managed service. Tekton Hub integration is another area that we are pushing forward across our developer tooling. Tekton Hub is a place where people can find reusable Tekton tasks and ISV tasks to build pipelines on top of, and it's already launched in the community, but we are bringing more integration into the Dev Console, the CLI, and the VS Code and IntelliJ extensions, so that you don't have to leave or switch context to a web browser to find these tasks; you can, right from where you are, install tasks and use them in your pipeline. OpenShift GitOps will have its first release in the next couple of weeks, in fact, with a productized version of Argo CD. We are also enhancing the UX around the GitOps application manager CLI, which makes use of Argo CD, the Tekton CLI, and other technology that we have, to bootstrap a GitOps process, and there are views that map to that in the Dev Console; we are enhancing those to give you insight into what deployment environments your application has, which version of your application is deployed in which environment, what status they have, and the history of those, to give you a more high-level view of how your delivery looks. The same story about a managed service: we're working to get these onto OpenShift Dedicated, unmanaged first and later throughout the year as managed. Alignment with ACM is also a question that comes up a lot, so we're working on Argo CD being the single, core GitOps engine both on OpenShift and ACM, and expanding capabilities to cluster provisioning and policy and governance through ACM, based on Argo CD. Next slide, please.

Looking a little deeper into pipeline-as-code: this is a new mode we are working on, and it would essentially enable you to treat pipelines as a declarative syntax that lives only in the Git repository. So what you see on the slide is a .tekton folder in your Git repo that contains your pipeline; there is no pipeline on the cluster that the user would manage, and based on Git events coming from that repository, which might be a Git commit or a pull request, the definition of that pipeline is pulled from the Git repo and executed on the platform, and the results are published back to that PR, for example, and are accessible through the CLI or the Dev Console under the repository itself. What you see in the Dev Console is really just the execution of a pipeline in a Git repository; you don't have a pipeline on the cluster that you have to keep in sync with what you have in the Git repo. So we're really doubling down on the GitOps concepts here and making sure everything is declarative and coming from Git rather than live objects on the cluster.
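As a hedged sketch of the idea, a file in that .tekton folder could be an ordinary Tekton PipelineRun with its pipeline embedded; the file name, task, and trigger conventions here are assumptions rather than the final design.

```yaml
# Illustrative .tekton/pull-request.yaml: a self-contained PipelineRun that the
# platform could pull from the repository and execute on a pull request event.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pull-request-checks
spec:
  pipelineSpec:
    tasks:
    - name: unit-tests
      taskSpec:
        steps:
        - name: test
          image: registry.access.redhat.com/ubi8/ubi-minimal
          script: |
            #!/bin/sh
            echo "running unit tests for this pull request"
```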
Next slide, please. One request we get a lot from our customers when we talk about our DevOps portfolio is: they like and use the technologies that we're productizing and bringing, but they ask us how to combine them. How should my delivery look? What should I do with Tekton? What should I do with Argo CD? Our response to that is the GitOps application manager CLI that we'll be building; it is an embodiment of our opinionated way to do continuous delivery using these technologies. The customer runs a bootstrap command, and that would generate the resources, generate a pipeline for the customer, generate a number of sample environments where the application gets deployed, configure webhooks, configure Argo CD, and really bring up that entire workflow you see on the right-hand side of the slide, with that one command, one or two commands, against the cluster, and from that point they can go modify it. So it gives an opinionated and integrated way of doing continuous delivery through GitOps practices with the offerings we have available on OpenShift, and customers can customize it after the tool generates this structure for them. Initially we are using Kustomize for customizing, packaging, and overlaying the config for different environments; we're looking at Helm as an alternative packaging option, and also at different types of secret management integration; right now, Sealed Secrets is what is used throughout this tool. Next slide, please.

So, getting developers started with OpenShift. We currently have a local experience with OpenShift, that's CodeReady Containers, which is a little larger than you would like, and we are actively working on reducing the resources it needs so that you can run it with much less on your laptop, below six gigabytes really. And there are Katacoda scenarios; if you have ever been to learn.openshift.com, there are Katacoda scenarios that give you an instance of OpenShift for one hour and a tutorial to run through. We are expanding that concept with the Developer Sandbox, a shared multi-tenant cluster that lives for 14 days, where the user gets a number of projects and resources to be able to deploy applications and try out different technologies, and the different types of developer and add-on services we have are pre-installed in this environment. So it's really an environment that is ready for the developer to get started, start coding, and experiment with the various capabilities we have on the platform and in our developer tooling. And also a workshop version of that, which is really for running workshops for the public at conferences; a lot of our developer advocates used to use Katacoda for running workshops, which doesn't allow the user to play with the environment after the workshop finishes two hours later. So there will be a version of that on IBM Cloud, available for running workshops, that stays up for 40 hours: the first two hours, maybe, the workshop executes, and people have a couple of days after that to play around with the environment after the conference, for example. Next slide, please.

So, not only activating access to OpenShift, but in other areas we're working to help developers get started with the various technologies we have, across the different tooling we have. odo is an example of taking advantage of devfiles to provide multiple types of quick starts that generate an application and set up the environment for the
developer to start with. This is in some ways similar to the quick start experience in the developer console, but bringing it to odo as a more expanded way of getting developers started there, and bringing similar capabilities to VS Code and the other IDE plugins we create, so that instead of developers starting from scratch and trying to figure out how to build and deploy code on OpenShift through the different runtimes, serverless, and other capabilities we have, we give them examples, start them off from much higher ground, and push them through the flows we have designed for them. Next slide, please.

And in general, our approach throughout next year is to bring more and more emerging technologies and fold them into the existing developer experience we have in the console, so customers and developers can easily get going with the variety of technologies we have available in OpenShift, like Serverless and Istio and Tekton and Argo CD and on and on. The key to that is to make sure this experience is really smooth and well integrated into the existing experience that exists around OpenShift, and that's the focus we have for next year: to make sure they are readily available as part of the existing Dev Console experience, and also to make sure we can support more and more multi-cluster experiences, as we see the trend of customers incrementally adding more and more clusters to their infrastructure, so that when they have applications deployed across multiple clusters, they have access to those multi-cluster deployments through the same interfaces and the same experience as well. Next slide, please. And with that, thank you for this section. I'm actually not sure who is talking next.

So I'll go back to the slides for virtualization so Ramon can finish his section, if that's okay. Yes, thanks for that, there you go, perfect, thank you. Okay, so we are in the Kubernetes native infrastructure section, and let's continue with OpenShift Virtualization. Starting at the core of OpenShift Virtualization, something we're working on is GPU pass-through and vGPU support; those are, you know, addressing intensive workloads like AI and ML by dedicating GPUs to the pods that require them. Then, highlighting a few more important parts of OpenShift Virtualization, another one is the work we are doing with public clouds, particularly with AWS and IBM Cloud, using physical, bare metal instances in these public clouds, where OpenShift Virtualization needs to run. And something else we are working on at the core is tooling for mass workload migration, specifically from vSphere and RHV platforms, into OpenShift Virtualization.
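To picture the GPU pass-through item mentioned above, here is an illustrative sketch based on the upstream KubeVirt API that OpenShift Virtualization builds on; the API version, GPU device name, and guest image are assumptions, not the shipped configuration.

```yaml
# Illustrative only: a KubeVirt-style VirtualMachine requesting a dedicated GPU.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: gpu-workload-vm
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 4
        resources:
          requests:
            memory: 16Gi
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
          gpus:
          - name: gpu1
            deviceName: nvidia.com/GV100GL_Tesla_V100   # hypothetical device resource name
      volumes:
      - name: rootdisk
        containerDisk:
          image: quay.io/example/rhel8-guest:latest       # hypothetical guest image
```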
Then on the networking side, one of the things we're working on as well is live migration of virtual machines with SR-IOV. Not an easy task: think about it, we are presenting SR-IOV, which is a slice of a single physical device, into a virtual machine, and we are then moving that virtual machine to a separate host. So this is coming; we're working on it. Similarly, we're also working on NIC hot plug, so you can hot plug and unplug NICs in virtual machines, and we're continuously enhancing the ecosystem for network partners; we're working with Tigera on Calico. Then on the storage side of things, what are we doing there? One of the things is dynamically attaching storage, with hot plug, to your virtual machines. Also, with the intention of having less downtime while migrating workloads running on vSphere, we are adding support for warm import from vSphere; this will still require a reboot, but it's faster than a cold migration. And then improvements in snapshots and cloning with storage solutions like OCS that have a CSI interface, which results in better protection of the data using native, coordinated capabilities. So that is OpenShift Virtualization; let's move to the next slide, and we're going to cover Kata Containers.

All right, OpenShift sandboxed containers, Kata Containers. Let me again highlight a few things on this slide. Let's start with the nature of Kata Containers as driven by an operator: sandboxed containers are coming on bare metal, bringing Kata Containers to OpenShift via an operator, which will be using operating system extensions with a downsized QEMU build. The operator will be providing OLM-based installs, updates, and upgrades. At the core, Kata Containers includes OpenShift CI and CRI-O CI integrations to gate changes, and we'll use Kata 2.0, which brings metrics along for observability, as well as footprint optimizations to the Kata agent. Then on the networking side, let's call this networking plus plus: IPv6 dual-stack support and also more SR-IOV and DPDK for Kata Containers.

Let's go to the next slide now and talk about edge. Edge is a natural extension of the open hybrid cloud strategy, which I'm sure you are all familiar with: enabling any workload on any footprint in any location. This is important because, as organizations look at the best technologies to help them reach more customers, deliver differentiated experiences, and drive innovation, we are talking about architecture here, and your architecture options can't be limited to just centralized architectures. Edge is required. It's driven perhaps mostly by the telco industry, but it's something you can use with many types of customers, and this is what we are learning. Let's move to the next slide and talk about a few things about edge, starting with topologies. Among the architecture topologies we are working on, a big one is single-node OpenShift. You heard it right, single-node OpenShift, for a specific OpenShift edge use case where you need an entire OpenShift cluster on the remote site but you only have space for one node. This is pretty much the use case we are learning this type of customer needs, and we are really working hard to make it happen.
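Circling back to the sandboxed containers item above: once the operator has installed Kata Containers and exposed a runtime class (assumed here to be named kata), opting a workload into the sandboxed runtime is a one-line change in the pod spec. A minimal sketch:

```shell
# Minimal sketch: run a pod under the Kata runtime class installed by the
# sandboxed containers operator (the "kata" RuntimeClass name is an assumption).
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example
spec:
  runtimeClassName: kata          # isolate this pod in a lightweight VM via Kata Containers
  containers:
    - name: app
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "infinity"]
EOF
```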
More important things here: the Assisted Installer I was talking about before, one of the biggest additions to Kubernetes-native infrastructure for the provisioning of bare metal nodes, also needs to be able to provide compact clusters, that is, three-node clusters. With bare metal IPI in 4.6 (there was a question about this earlier in the chat) we support deploying a three-node cluster already, but you still need a machine to do the provisioning. Now, with the workflow driven from cloud.redhat.com, completely online, we can also provide compact clusters. What else? ACM. ACM is important in this type of use case, and ACM cluster lifecycle now has integration with Ansible. When deploying clusters with ACM we're using IPI, we are using Hive, but this is not enough; there are so many types of topologies and use cases that bringing Ansible hooks to the table will allow us to cover a variety of use cases when starting with ACM and deploying multiple clusters. Then, zero touch provisioning. The concept of zero touch provisioning is: I want to provision my nodes with barely any interaction. That is, go to the data center, plug the node in, power it on, and have the node added to the cluster with virtually zero interaction; anyone could do that. This involves many things: automating that this node needs to go to a specific cluster, needs to be accepted, and needs to be ready to schedule workloads of the type required by each remote site. All of this falls into the zero touch provisioning category, and we're doing a lot of work to make these use cases possible with OpenShift at the edge. And then timing and acceleration; again, these are use cases related to telco customers, specifically for workloads that work with real time, with smart NICs, hardware accelerators, and so on. That is the line of work at the edge for OpenShift we're working on. With that, let's move to the next slide.

This next one is not strictly KNI; now we're going to talk about OpenShift and OpenStack. With OpenShift and OpenStack, if you think about it, we've been making progress since before OpenShift 4, but in OpenShift 4, support in the installer started in 4.2, and I want to say that by now OpenShift on OpenStack is a pretty mature combination, with integration of all the services. We are working on the deployment user experience in every release, with UPI, with IPI, making it easier and easier to use and covering more use cases as we go, but I have to say it's pretty mature at this point, and we are already focusing on some telco and edge use cases. OpenStack, if you think about it, is one of the most popular platforms in the telco industry; in the telco market there are loads of OpenStack deployments, and obviously they want OpenShift in these deployments as well. This brings a lot of flexibility too, having VNFs along with CNFs together on the same platform. Another point I'm covering on this slide is the integration with bare metal. Another thing OpenStack allows you to do is manage bare metal nodes as if you were managing virtual instances, and we take advantage of this with OpenShift: the installer on an OpenStack platform allows you to provision the OpenShift nodes on bare metal, or on a combination of bare metal nodes and virtual machines. This is something that's available, and we are about to release full support for it, but the technology is already there for you to test.
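To make that bare metal point concrete, here is a sketch of how a compute machine pool backed by bare metal could be expressed in install-config.yaml when installing on OpenStack. The flavor name is hypothetical and the exact field names vary by release, so treat this as an illustration of the idea rather than a reference configuration.

```shell
# Sketch of an install-config.yaml fragment for OpenShift on OpenStack where the
# worker machine pool uses an Ironic-backed bare metal flavor (names are illustrative).
cat > install-config-fragment.yaml <<'EOF'
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 2
  platform:
    openstack:
      type: baremetal-worker-flavor   # hypothetical OpenStack flavor mapped to bare metal nodes
EOF
```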
Let's go to the next slide and see a few more things about this integration and what we are working on. At the core, as I said, we're working on bare metal workers. I said this is pretty mature, but thanks to the feedback of our customers, and we have very close relationships with a number of customers using OpenShift on OpenStack, we hear things like this: one of them was saying, when I'm using auto scaling with OpenShift on OpenStack, I still need to have at least one node in there, and that costs my customers money. So we are now working on auto scaling to and from zero nodes: when you don't have any work to process, you don't need any instances and you're not paying for anything. As Catherine was saying at the beginning of this presentation, we're also working on introducing the external cloud provider, as opposed to the in-tree Kubernetes cloud provider we have right now. More things on the network side: as I said, part of our focus now is covering telco use cases, and we're starting by adding support for a fast data path with SR-IOV, OVS-DPDK, and hardware offload. How does that work? Essentially we are allowing pods running on virtual machines that run on OpenStack to use SR-IOV, with the SR-IOV Operator, and the same for OVS-DPDK. On the network side of things we're also working on bring-your-own load balancer and DNS, similarly to bare metal: on OpenStack, when you provision an OpenShift cluster, we also provision a load balancer and a DNS infrastructure (actually the same as the one we use on bare metal), and we are working on allowing you, in a very easy manner, to bring your own load balancer if you so wish, and the same for DNS. We are also working toward full support for provider networks, which is common among many customers, enterprise and telco types of customers. And there is Kuryr, which is our CNI that understands and works with both OpenShift and OpenStack, optimizing the traffic for this integration. We are also adding support for IPv6 dual stack, which is a common pattern across many providers in OpenShift. Then, moving to the storage section: again, as Catherine was introducing earlier, Cinder CSI is something we are also working on; we believe very soon you are going to have Cinder CSI by default. As part of Cinder CSI you have CSI topology in general in Kubernetes, which we want to support as part of the storage integration with OpenStack, with Cinder availability zones in particular. So you will be aware of the availability zones when you create PVs for your pods, and you will be able to influence where these PVs are scheduled.
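To illustrate the availability-zone awareness just described, here is a sketch of a StorageClass for the Cinder CSI driver pinned to a single zone. The topology key and zone name are assumptions based on the upstream driver, so check the labels your deployment actually exposes.

```shell
# Sketch: a Cinder CSI StorageClass restricted to one availability zone.
# The topology key and zone value are illustrative of the upstream driver's labels.
cat <<'EOF' | oc apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-az1
provisioner: cinder.csi.openstack.org
volumeBindingMode: WaitForFirstConsumer   # bind where the consuming pod is scheduled
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.cinder.csi.openstack.org/zone
        values:
          - az1
EOF
```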
And with that, I finish this section. Well, thanks, everyone. Containers right after Kubernetes, nothing slows you guys down. So I'd like to thank everybody for joining; it was a great, great session, and these slides are available, the recording is available. Two quick things: remember to come back once a quarter, this roadmap will change, and we'll update it once a quarter for you. Also, for anything that entices you, that you want to hear more about, there are deeper dives available, so grab the closest Red Hatter to you and get them connected to us, and we can dig deeper into any of these topics. I'd also like to thank everybody out there in the world for the call for papers for Summit that closed last week. We had so many awesome public customers that want to talk about their usage of OpenShift that we will be having those sessions on OpenShift.tv, in OpenShift Commons, and at Summit, to really give a lot of exposure to the amazing things people have done with the platform. And from all of us at Red Hat and the OpenShift community, we'd like to wish all of you happy holidays. Stay safe out there.