Good morning, good afternoon, good evening, wherever you are, and a very warm welcome to the Q3 2021 update of What's Next in OpenShift. My name is Tushar Katarki and I'll be facilitating this presentation. I'm from the OpenShift product management team. As a reminder, this presentation offers an overview of the direction, initiatives and exciting use cases that we are solving with new features over a 6-to-18-month-plus time horizon. We provide this update quarterly. Also note that some of the specific details we won't go into here can be found in the appendix, which you can refer to offline as a reference. One final note: this is a roadmap update, so things that are coming in the future are subject to change. The material discussed here can change; just be aware of that. With that said, I'll move to the next slide.

With me, I have an excellent team of OpenShift product managers and colleagues of mine who will be the speakers. In addition, you'll notice that the rest of the team is also present here and can help answer questions, so please ask away in the Q&A forum that is available to you.

This is just a level set. I've been at Red Hat for 10-plus years, and we have always made the case for the open hybrid cloud and we continue to do so. That is a fundamental driving principle and strategy for us. Over the years our customers have innovated, competed and successfully created value for their own customers through applications built for the hybrid cloud, be they n-tier applications or more modern cloud-native microservices that encode traditional business logic and rules or more modern data analytics and artificial intelligence, and be they in-house custom applications or packaged applications from ISVs. They have developed and deployed these applications across the hybrid cloud footprint, everything from the physical data center to the public cloud to the edge. Red Hat OpenShift, built on Red Hat Enterprise Linux, has been the bedrock platform in that journey.

This is a closer look at that journey, the open hybrid cloud journey, and how it has evolved over time. We started this journey back in 2015, as some of you might recall, with OCP V3, or OpenShift Container Platform V3, with Docker containers and Kubernetes: Docker providing the standard packaging format and Kubernetes providing scheduling and cluster management for portable applications and consistency. Since then, with OpenShift V4 in 2019, we brought a comprehensive and integrated approach to the cloud with a full-stack install experience, Kubernetes Operators, monitoring, distributed tracing, service mesh, serverless, CI/CD, GitOps, a number of developer tools and more. Building on OpenShift V4, and with customers wanting more cloud-based services and consumption, we have added OpenShift as a fully managed service that can be consumed on your favorite public cloud with offerings like Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), OpenShift Dedicated and much more. In addition, we have introduced ACM and ACS for hybrid cloud management and security. Building on our success with managed OpenShift, earlier this year we doubled down and unveiled a suite of fully managed, SRE-backed application and data services that we call Red Hat Cloud Services. You will see us continuing to add more there through this year and next. Now, where are we headed next?
In some ways we are already on this journey, but our point is to bring you a comprehensive hybrid cloud service experience that brings all of these together, such that there is a uniform experience for you no matter where you are, in the cloud, on-premise or at the edge, be it for a developer, an application architect, a DevOps person, a security person, a system administrator or anybody in charge of operations in general.

Just as a quick recap, when we say OpenShift, we are talking about the different ways in which it can be consumed by you as customers and users. It is available as a fully managed cloud service, or it can be consumed as a self-managed platform. Managed Red Hat OpenShift is jointly engineered, offered and managed by Red Hat and the cloud provider so that you can get started with a Kubernetes service quickly. This includes OpenShift Dedicated, Azure Red Hat OpenShift, Red Hat OpenShift Service on AWS and Red Hat OpenShift on IBM Cloud, fully managed for you by Red Hat and our partners. OpenShift Platform Plus, OpenShift Container Platform and OpenShift Kubernetes Engine are our self-managed software offerings that you can deploy in your data center, public cloud or edge locations. You can choose the model that best suits your needs, or a combination of both, and many customers have done both. So OpenShift anywhere, anytime and any way is kind of where we are going with this.

You are probably very familiar with this rendition of what makes up Red Hat OpenShift. As a quick recap, OpenShift Container Platform is the foundational piece here; it is our Kubernetes distribution built on top of Red Hat Enterprise Linux and Red Hat CoreOS, and it also includes many platform, developer and data tools and services. The new thing that Naina is going to touch upon later, as well as Tony, is OpenShift Platform Plus, which in addition to OpenShift Container Platform also includes ACM, ACS and Red Hat Quay, integrated and tested together to address your management, security, governance, compliance and registry needs. More on this a little later.

Next, let's look at the themes for OpenShift. These themes are based on inputs from customers such as you, as well as Red Hat's vision and strategy and the broader market and technology trends and changes. The first theme, starting at the left, is the core platform and developer tools pillar, if you will. This includes our investments in Kubernetes, Linux, and platform and developer tools. While we know that we have added a lot of innovation here over the years, there is much more to come: innovations in response to new hardware accelerators, which now go beyond GPUs to things like data processing units and FPGAs; innovations in use-case-specific scheduling; innovations in networking, including network observability; innovations in GitOps and DevOps; and exciting new things for developers with regard to serverless, service mesh and CodeReady tools. This theme is also foundational for the rest of our themes, such as managed services and telco and edge. The telco and edge theme is in service of rapid innovation and needs from the telco industry in 5G core and 5G RAN. These needs include the desire to run and develop container-native functions or AI/ML applications, to develop at the core and deploy at the edge with containers, or it could be because you are collecting vast amounts of data at the edge. How do you analyze that?
How do you clean it and bring it in for consumption into the core for analytics and things like that? The managed services theme is all about bringing OpenShift as a fully managed, SRE-backed service on the cloud of your choice. In addition to the current API management, OpenShift Streams for Apache Kafka, Red Hat OpenShift Data Science, and management services such as subscription management, cost management and Insights, which you already see, we'll be adding more features and capabilities to those, and we'll also be adding more services over this year and next. Finally, we are very excited about the hybrid cloud experience. As alluded to earlier, this is a comprehensive end-to-end experience for applications across the hybrid cloud. It includes hybrid cloud governance, compliance, security, management and observability, all tied together with the rich cloud.redhat.com experience. All of this is going to come to you through our releases. Right now we are in Q3 and OpenShift 4.8 is next on deck. After that we get into Q4, and then we divide the roadmap into first half and second half, and you will see that we are continuing to innovate across all the different parts that I described earlier. I won't go into the details of each one of these, but this is here as a cheat sheet whenever you need to refer to it, and more details can be found, as I said, in the appendix. With that, I'll hand it over to Naina to take us through the hybrid cloud experience and OpenShift Platform Plus. Naina, take it away please.

Thank you, Tushar. Hello, everyone. I'm Naina and I will be taking you through our plans for the hybrid cloud experience and a bit on Platform Plus. Next slide please. Customers are onboarding more developers and deploying more workloads and applications to OpenShift, which is good news, but this also means more storage, more nodes, more pods, more traffic, both north-south and east-west, and most importantly, more clusters. When you have more than one cluster, you have to think about these things at a multi-cluster level. Now you have multiple clusters distributed across multiple clouds and infrastructures; now you are thinking about the hybrid cloud. For application architects and developers, the questions are: how do I deploy applications across a multi-cluster hybrid cloud? How do my applications in different clusters communicate with each other and exchange data, and how do I do this in a secure, repeatable and automated fashion? The system administrators and operators are meanwhile asking how to provide multi-cluster storage, multi-cluster networking, ingress and egress of traffic and load balancing, and multi-cluster management, security and a registry for container images. One of our main themes for this year and next will be to provide standardized tools that address these needs and challenges, from your first cluster all the way to your hundredth cluster or more. Next slide please. Red Hat OpenShift is the industry's leading hybrid multi-cloud platform, and OpenShift brings Kubernetes to the enterprise with over 3,000 customers across all industry verticals. It is built on a foundation of Red Hat Enterprise Linux, and OpenShift provides a comprehensive container platform. Our goal is to provide you with everything you need to build, deploy and manage applications across the hybrid cloud, and that is our OpenShift Platform Plus; we will cover it in more detail later. Next slide please. Digging a little deeper into multi-cluster, let's cover networking.
So Advanced Cluster Management maintains east-west networking between all of your clusters using Submariner. Submariner is integrated with Red Hat Advanced Cluster Management as a technology preview at the moment, and it provides cross-cluster network infrastructure for OpenShift by extending the well-known Kubernetes networking objects. Main features in this tech preview include pod-to-pod and pod-to-service L3 routing with native performance for your connectivity needs; all traffic flowing between clusters is encrypted by default with IPsec to give you security; and there is compatibility with different infrastructure providers such as AWS, GCP, Azure, IBM and VMware, and network plugins such as OVN and Calico. Service discovery is another aspect, and Submariner provides cross-cluster service discovery DNS with service failover and load balancing across clusters. Next slide please.

Continuing on networking, the OpenShift roadmap for networking has expanded to include multi-cluster and hybrid-cluster scenarios, and so we need unified networking. This slide represents the roadmap goal for getting traffic into and out of the cluster in a unified way, so that ingress and egress are the same regardless of protocol, and to align with how layered products such as OpenShift Service Mesh and OpenShift Virtualization operate. There are additional benefits realized from this model: single- and multi-cluster scoping, port replication for auditing, canary deployments, operational simplicity through ingress unification, cloud provider and community contributions, and extensibility to current and future networking protocols. Next slide please.

OpenShift Service Mesh 2.1, which will be released in late Q3 2021, will introduce federation of service meshes across different OpenShift clusters. This will include new custom resources for configuring interconnectivity between federated meshes, as well as importing and exporting services between different meshes. This will enable secure sharing of services between different meshes, including load balancing and high availability of services in different meshes and clusters. Each mesh in the federation will retain its own control plane, and importing and exporting of services is done in an explicit manner. This allows users to limit the scope of access between meshes where desired. Future releases of Service Mesh will include support for a single service mesh and control plane that spans multiple OpenShift clusters. Next slide please.

Multi-cluster storage is required for a number of use cases, including data federation, wherein an application wants to access data from multiple sources across multiple clusters and clouds. To that end, the OpenShift Multi-Cloud Object Gateway is a lightweight object storage service for OpenShift, allowing users to start small and then scale as needed, on-prem and in multiple clusters, with cloud-native storage. Multi-Cloud Object Gateway addresses that data federation need, specifically the ability to have a local endpoint to read data from multiple locations, with an option to get local caching and mirroring capabilities. The other important need for multi-cluster storage is high availability and disaster recovery. Red Hat OpenShift Data Foundation, which is based on Ceph technology amongst other things, has a number of current and future capabilities, including synchronous and asynchronous replication, that allow for meeting your high availability needs.
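To make the Submariner service discovery piece a little more concrete, here is a minimal sketch of how a service is shared across clusters with the upstream multi-cluster services API that Submariner implements; the service and namespace names are hypothetical:

```yaml
# Exported from the cluster that owns the service (hypothetical names).
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: payments        # must match the name of an existing Service
  namespace: shop
```

Once exported, workloads in other connected clusters can reach the service through the cluster-set DNS name, for example payments.shop.svc.clusterset.local, with Submariner handling the cross-cluster routing, failover and load balancing described above.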
Staying on storage, we are also following some interesting upstream projects like Scribe and Ramen for replication of persistent volumes within or across clusters and for failover and fallback capabilities. Rest assured that all of these will be integrated with and managed from Red Hat Advanced Cluster Management to provide a comprehensive hybrid cloud experience. Next slide please.

Red Hat Advanced Cluster Management provides the central fleet view for all of your OpenShift and non-OpenShift clusters. Integrating with Red Hat Insights here brings the OpenShift fleet-wide analytics and remediations from the traditional single-cluster experience into the Advanced Cluster Management hub console. We further enhance multi-cluster health by ensuring your managed fleet is now benefiting from the telemetry of the entire OpenShift fleet. Enhancements to the user experience will ensure a reduction in the amount of time required for on-cluster remediations to be reported back into the Insights data hub. Tons of look-and-feel improvements will create a more seamless experience across ACM and OCM. Next slide please.

OpenShift Builds V2 will be in tech preview, and this feature, which is based on the upstream project Shipwright, will be a common interface for all application container builds. So users can keep using their favorite and well-known methods such as S2I, Buildpacks, Kaniko or other popular container build strategies, all from a common interface. Tighter integration between Argo CD and Advanced Cluster Management will give users the ability to use both in their GitOps workflows. Users of ACM will be able to view, edit and deploy Argo CD resources and fully visualize applications from the ACM console, and Argo CD will be able to read configuration from ACM to further refine application deployments. We're also integrating with popular secret management tools like Vault or Sealed Secrets for a seamless experience: users will still be able to use their secret manager of choice in their GitOps workflow and will be able to easily integrate it with OpenShift. The application environment view inside the Dev console will provide a rich UI for developers to visualize their application deployments per environment, like dev, stage and prod. And there will be a CLI, the kam CLI, with which developers can go from zero to GitOps in a few commands. This CLI provides an opinionated, best-practices bootstrapping experience that gives the developer complete control over how their applications are deployed using GitOps practices.

Many customers rely on an internal registry as a trusted source of software. However, at the same time, developers may want to rely on public upstream registries for experiments and rapid prototyping, and this usually conflicts with the security boundaries that the networking and infosec teams at the customer set in place. Quay will offer a new middle ground in the future: transparent pull-through caching. This feature works together with OpenShift's ImageContentSourcePolicy, which allows transparently redirecting image pulls from a public external registry to an internal registry. If this internal registry is a Quay instance, it will be possible in the future to configure it to cache a certain upstream image repository. If the image hasn't been pulled yet, it will be pulled by Quay on the client's behalf, and it will be served from the cache, much faster, on subsequent pulls. Quay will autonomously maintain the cache and determine when it needs to update it.
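The ImageContentSourcePolicy half of this already exists in OpenShift today. As a rough sketch, and with a hypothetical internal Quay hostname, redirecting pulls of upstream images to a local mirror looks roughly like this:

```yaml
# Hypothetical example: redirect pulls of docker.io/library images
# to a mirror hosted on an internal Quay instance.
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: upstream-mirror
spec:
  repositoryDigestMirrors:
    - source: docker.io/library
      mirrors:
        - quay.internal.example.com/cache/library
```

Note that this mirroring applies to image pulls by digest today, and the Quay-side cache configuration described above is a future capability, so the exact way the cache repository will be declared in Quay is not shown here.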
This feature will give the administrator the privilege to selectively allow pulling from certain repositories, and an entire namespace can be configured as a cache of an upstream registry. At the same time, developers have the impression that their image pulls are served from the upstream registry: they don't even have to update their image URLs, and they will only notice faster pull speeds and will be able to circumvent any rate limiting that some public repositories apply nowadays. Next slide please.

Effectively securing containers and Kubernetes requires a DevSecOps approach. This approach secures both the platform and the applications deployed to the platform, and OpenShift Platform Plus enables DevSecOps for both layers. As you know, OpenShift Platform Plus includes OpenShift Container Platform, Advanced Cluster Management, Advanced Cluster Security and Red Hat Quay. The capabilities delivered across these components combine to provide policy-based cluster lifecycle management and policy-based risk and security management across your fleet of clusters, and you can see the flow at the bottom of this graphic. The hub cluster is where lifecycle, deployment, security, compliance and risk management policies are defined, and it is the central management point across clusters. DevSecOps for the platform includes pulling images from the Red Hat registry, pulling day-two configuration code from Git via our integration with Argo CD, and ensuring that all optional operators are deployed and configured. Policy-based deployment also specifies which admission controls should be deployed to which clusters. And the hub also provides a unified view of health, security, risk and compliance across your fleet. We have many of these capabilities in place today; however, they each have their own UI, so over the next few releases we will be working to provide an integrated multi-cluster user experience for the admin, security and developer personas in the console.

When it comes to implementing DevSecOps for applications, the OpenShift pipeline is the key integration point, and it requires implementing security gates in the pipeline. There are some new capabilities on our roadmap for this, like signed images in Quay, an easier integration of KubeLinter, and the ability to sign additional artifacts such as deployments for additional protection against tampering. Finally, as new issues are discovered, information about them is fed back to the developers via alerts, closing the DevSecOps infinity loop. I will hand it over to Tony now, who will be covering OpenShift Platform Plus in detail. So Tony, over to you.

Thank you, Naina. So to recap, so far we've talked about standard tools for managing a large OpenShift fleet, and OpenShift Platform Plus is where that experience comes together. It's a set of tools to help you run your OpenShift fleet, with use cases ranging from defining networking and security policy across clusters, where you get ACM, at the top corner shown on this diagram, for your multi-cluster view, to threat detection, where you have ACS to provide vulnerability detection and container scanning, and image distribution across your fleet, where Red Hat Quay, on the top right corner of this diagram, provides a global registry to manage images across all your environments. So in the next couple of slides, we'll walk you through the upcoming features in these areas from ACM, ACS and Red Hat Quay. So let's start with Red Hat ACM.
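As a concrete illustration of the hub-defined, policy-based governance just described, here is a minimal sketch of an ACM policy that enforces a piece of desired state on managed clusters; the policy and namespace names are hypothetical, and a real policy would also be bound to a placement:

```yaml
# Hypothetical ACM governance policy defined on the hub cluster.
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: require-payments-namespace
  namespace: rhacm-policies
spec:
  remediationAction: enforce     # or "inform" to only report compliance
  disabled: false
  policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: payments-namespace-must-exist
        spec:
          remediationAction: enforce
          severity: low
          object-templates:
            - complianceType: musthave
              objectDefinition:
                apiVersion: v1
                kind: Namespace
                metadata:
                  name: payments-prod
```

The same pattern extends to the items called out above, such as deploying admission controls or ACS components to selected clusters, with compliance rolled back up into the hub's unified view.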
We continue to drive cluster management capability across four high-level pillars, as you can see on this slide, and we'll be introducing a new pillar focusing on multi-cluster networking in the coming months. As mentioned on the earlier slide, we expect to see continued work with Submariner and the multi-cluster service mesh use case coming together. So for the first pillar here, multi-cluster lifecycle management: operations teams look to leverage Red Hat ACM's Hive and assisted installer to drive cluster deployment on-prem as well as in the cloud. Today ACM includes cost-effective management capabilities around cluster hibernation and cluster pools for easy and quick access to dev clusters. ACM will also look to drive the same capability into SRE-managed OpenShift, all from the same ACM central hub. Customers have been asking for more flexibility in their deployments, and central infrastructure management will bring a hybrid, agnostic approach more aligned to UPI. Moving on to our application lifecycle pillar on the right-hand side, we are continuing to support our customers' investments in OpenShift application resources and ensuring a GitOps-driven approach with everything we do. We will also continue to provide application health and status signals from any deployment source, including Flux and Azure DevOps, through one single console, with the ability to deploy applications based on placement labels. ACM will also provide cloud bursting with stateful workloads and the ability to replicate data for customers' business-critical applications via Scribe and ODF. On the bottom left, in our desired-state, policy-driven governance and compliance pillar, ACM will add a new policy to seamlessly deploy ACS central and sensors, and we will further enhance the SRE and ops experience by sending policy compliance alerts to your preferred notification systems. Lastly, for the observability pillar, we are looking to bring cluster health metric support to manage your entire Kubernetes fleet, including both your OpenShift and non-OpenShift clusters, all from a single usage dashboard interface, and we are also exporting the ACM hub metrics so that third-party tools can be used for a deeper level of fleet analytics. Next slide, please.

On the ACS front, we have six focus areas here. The first focus is to reduce security program cost: shifting left is the concept that allows teams to address security risks early in the software development process, letting teams reduce the cost of retesting and possible rearchitecture by designing securely by default and catching security issues early. Moving down to the next focus, by prioritizing the issues that are most important, security teams can spend their limited time and resources on issues that will have an outsized impact on their security posture. Next, best-in-class OpenShift support: this includes support for OpenShift Dedicated, Red Hat OpenShift Service on AWS and Azure Red Hat OpenShift, as well as continued development of security features with OpenShift as a first-class citizen. Moving to the top right, commitment to open source: as you are aware, StackRox was acquired as a proprietary solution; our team is committed to and working on open-sourcing the solution and tapping open source innovation as well. Next, ACS wouldn't be a security product without focusing on advanced security workflows for Kubernetes, so we will continue to invest in these workflows to meet the needs of customers.
And lastly, security teams want to track the return on investment in their security program using KPIs, so we are working on gathering KPIs for the program to excel, and on ways to make recommendations to programs around actions they can take to get the best return on investment.

So, moving on to Red Hat Quay. Next slide please. The next big thing for Quay is to support container-native builds. Currently Red Hat Quay can build container images directly in the registry: users can either trigger builds manually or have them triggered by git commits. For security and scalability reasons, Quay maintains its own build system and scheduler that launches ephemeral builds in a containerized virtual machine inside OpenShift, and this requires a bare metal cluster today. Going forward, Quay will be able to delegate build jobs to OpenShift Pipelines, which is based on the upstream Tekton CD project. That means that image builds can run as Kubernetes jobs using a containerized build process natively on the clusters. This allows taking advantage of the OpenShift scheduler and alleviates the need for a bare metal cluster; builds can then run on OpenShift clusters that are hosted either on hypervisor environments or on cloud providers. OpenShift Pipelines builds currently run with reduced privileges, and the plan is to allow builds to execute fully rootless, providing a good balance between security needs and infrastructure requirements by running on virtualized infrastructure. And lastly, thanks to being based on Tekton, Quay can take advantage of the progress in its ecosystem as well, for example expanding support to multi-arch builds in the future. So these are the top-level upcoming features from Red Hat ACM, ACS and Quay. With all that, I'll hand it over to Adel to talk about HyperShift.

Hi everyone, I am Adel Zaalouk and I am the product manager for the HyperShift project. HyperShift basically aims to bring in a new architectural pattern in addition to the standalone mode that we already have on the left, where the control plane and workers are co-located and where the control plane is required to run on dedicated nodes or dedicated VMs. With HyperShift we are introducing a model that allows us to decouple the control plane and the workers, to run the control plane on an existing cluster, and to run more than one cluster's control plane on the same node. This makes a lot of sense when you are running thousands of clusters, or when you, for example, want to run different architectures, ARM or x86, where the control plane and the workers can have different architectures. It also makes the bootstrap time of clusters a bit faster because we're sharing an existing management cluster for the control plane. HyperShift is still under development and we're expecting to roll it out for use in late 2022, based on your feedback. Next slide please.

This is a bit of switching gears: we're going to start talking about telco and edge. Next slide please. We're going to start thinking about edge and telco from the application perspective. We divided the application patterns into four different patterns: operations edge application patterns, enterprise edge application patterns, provider edge application patterns, and consumer edge application patterns. Each pattern has a different usage style. For example, with the operations edge pattern, we're more interested in doing analytics.
An example of this is manufacturing visual inspection, where you really want to automate the process of looking at goods in a production pipeline and figuring out the defects, then sending the analytics data back to the cloud. On the other hand, things like routing and switching become more important for the provider edge application pattern, where we're more interested in mobile broadband or telco 5G use cases, where we really are optimizing for network performance, bandwidth and throughput, and optimizing latency. Finally, there's also another pattern, which is the consumer edge. We're all using that: some of us have IoT devices at home that report back to the cloud. And if we think about these deployment patterns, they exist at different layers of the architecture. You see on the left there's the data center, the cloud, the near edge and the far edge. The more we move closer to the device or the user, the more the requirements increase and change. That also mandates that we forge architectures and deployment models to help cater for these different patterns and application requirements. Next slide, please.

We're going to dig more into the provider edge; that's the telco use case, more or less. There's a lot to take in on this diagram. If you start from the left, we have the radio heads and the endpoints; this is where your connectivity comes in and gets divided. Then there's the baseband unit. Initially, the baseband units for 4G architectures and so on were coupled together and co-located at what we call the cell site, which is also very close to the radio head. However, in 5G the architecture changed a bit: the virtual centralized unit and the virtual distributed unit got divided. The virtual distributed unit is more concerned with functions on the physical layer, like orthogonal frequency-division multiple access, multiple-input multiple-output, or doing medium access control or radio link control like automatic repeat requests. This is more about encoding and decoding to make sure that you're not losing data at the transport layer of the radio access network. There's also the aspect of D-RAN and C-RAN; this is more or less about how close we are to the cloud. The closer we get to the cloud, the more flexible it becomes and the less stringent the requirements on resources and latency. For example, there are two distinctions here between these requirements; we call them the higher layer split and the lower layer split. This is more about which functions we want to host where, and depending on the functions, like the physical layer functions or the medium access control functions, the requirements increase or decrease and it becomes a matter of the use case: based on the use case, do we move closer to the cloud, or do we get closer to the far edge? We are trying to cater for these requirements, again, by offering different deployment models. The vDU, for example, requires more latency-sensitive deployments and more network throughput, while on the other hand the virtual centralized unit does more calculations, does more quality of service, requires more GPUs and more resources on the nodes, and it also gets more flexibility because it's closer to the cloud; it can fetch nodes from a pool and address use cases based on that. Next slide, please.
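To give a flavor of how these low-latency vDU requirements translate into OpenShift configuration, here is a minimal sketch of a performance profile that pins CPUs, allocates hugepages and enables the real-time kernel on worker nodes; the CPU ranges, counts and node selector are hypothetical and would be sized per site:

```yaml
# Hypothetical low-latency tuning for DU-class nodes.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: du-low-latency
spec:
  cpu:
    reserved: "0-1"        # housekeeping CPUs kept for the platform
    isolated: "2-31"       # CPUs dedicated to latency-sensitive workloads
  hugepages:
    defaultHugepagesSize: 1G
    pages:
      - size: 1G
        count: 16
  realTimeKernel:
    enabled: true
  numa:
    topologyPolicy: single-numa-node
  nodeSelector:
    node-role.kubernetes.io/worker-du: ""
```

This is the kind of tuning the DU profile optimization work mentioned later in this section is meant to package up, so operators do not have to hand-craft it per deployment.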
As I said, we have different application patterns, different adoption patterns and different tiers, such as data collection, data aggregation and data analytics, and the closer we get to the user, the harder the requirements become. For example, if we move closer to the user, the latency requirements become very stringent; we're talking about microseconds and milliseconds. This is a requirement that did not exist before in normal 3G networks. Additionally, the more we move toward the user, the fewer resources we have, and so we also have to think about how many resources we require on the nodes and what deployment models we offer for that. For example, in OpenShift we offer three or four main deployment models for standalone OpenShift. The first one is normal OpenShift, where you have the masters and the workers; that's a minimum of three control plane nodes plus the normal workers. The next one, where more is required of it, is high availability in a compact cluster, where you host the control plane and the workers together and replicate that across three nodes. As I said, when we get closer to the core, when we get closer to the data center, that becomes possible; this is a deployment pattern that can be applied closer to the core. When we move to the left, resource scarcity increases and we require fewer resources, and so we provide single-node OpenShift, which basically reduces the footprint and reduces the minimum requirements needed to run an OpenShift cluster. Finally, at the edge we have, for example, Red Hat Enterprise Linux for Edge, and that is closer to the user and the endpoint devices. All right, can you move to the next slide, please?

All right, so if we zoom in on the requirements of the edge, like you saw, with lots of devices and lots of analytics, zero touch provisioning becomes crucial and critical in automating the installation and lifecycle of these devices. Zero touch provisioning is a technology preview in Advanced Cluster Management version 2.4. It is aimed at regional, distributed, on-prem deployments. It gives us a lot of benefits because it integrates with and leverages existing technology stacks: for example, it integrates with Red Hat Advanced Cluster Management, Hive, Metal3 and the assisted installer. It also has minimal install prerequisites, which is what zero touch provisioning implies. It's perfect for automating multiple devices at scale: it's really self-configuring and enables an untrained technician to go through the installation flow very easily; think about just scanning a barcode of the device you're going to install. It also allows highly customized deployments, and it can fit in any of the modes that we offer with OpenShift: it works in either connected or disconnected mode, supports IPv6 and dual stack, supports DHCP or static host discovery, and UPI or IPI deployment technology. Moreover, it is also edge-focused: there's no additional bootstrap node required, as there usually is, because the node is self-bootstrapping. And it's integrated with the ACM GitOps feature, which allows you to manage your zero touch provisioning installations in a Kube-native way and account for the actions being taken to install the cluster. Finally, it also removes the need to dedicate a single node to compute management and cluster provisioning; it allows a pool of nodes to self-discover and allocate themselves to join a cluster.
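Under the covers, zero touch provisioning builds on the same declarative host management used elsewhere in OpenShift bare metal installs, namely Metal3 and the assisted installer. As a rough illustration of the kind of host definition involved, here is a minimal Metal3 BareMetalHost sketch with hypothetical names, addresses and credentials:

```yaml
# Hypothetical far-edge host declared for automated provisioning.
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: edge-site-1-node-0
  namespace: edge-site-1
spec:
  online: true
  bootMACAddress: "aa:bb:cc:dd:ee:01"
  bmc:
    address: redfish-virtualmedia://10.0.0.10/redfish/v1/Systems/1
    credentialsName: edge-site-1-node-0-bmc-secret
```

In the ZTP flow, definitions like this are generated from higher-level per-site configuration rather than written by hand, which is part of what keeps the process hands-off for the technician on site.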
Next slide please. Finally, this is how it all fits together. We, after all, start from the application layer. As I said, there are different application patterns with different focuses: some applications focus on analytics and require more CPU and more resources, some applications focus on data collection and data management, some applications focus on networking like broadband, telco and IoT, and all these applications have different dependencies. They add constraints, and these constraints can come in different shapes and forms. The constraint can be the space or footprint available: as I said, when we move closer to the edge, the space and the footprint decrease and we have to find solutions for that. We need to think about scale: how can I manage multiple deployments of any of the deployment models that we talked about? How do I improve latency and throughput? How do I improve performance and resiliency, especially when I'm dealing with telco and mobile broadband? How do I increase high availability when I'm close to the core and hosting critical applications that all the other layers of the edge rely on? Another layer becomes more important: how do I automate all of this? How do I give my users the ability to delegate the installation and handle the installation and lifecycle of these devices at scale? Finally, we have to support different infrastructure types, as always: virtual machines, bare metal, the different flavors of an installation. How do we cater for these needs in OpenShift? Coming next, especially in the second half of this year and the first half of 2022, as I said, is zero touch provisioning integration that allows us to manage the deployment and installation at scale for single-node, remote-worker and three-node cluster deployment models for HA, which also ties back to the high availability requirement. Then there is DU profile optimization, distributed unit profile optimization, because we deploy single-node OpenShift, or SNO, as the DU in the telco edge model; we need to be able to optimize the profile to further decrease the space and footprint required to run a cluster or a DU deployment. Then we have network optimizations coming up: SR-IOV, dual stack, SmartNICs for network offloading, load balancers on bare metal, and we also have NUMA for performance and optimization, CPU pinning, real-time kernels, forward error correction for the DU (the distributed unit) at the edge, and hyper-thread-aware scheduling. All of this ties back, and it just becomes an interconnected model that we map to from the things we do. Yeah, so I'll hand it over to... Sean?

Thank you very much. Hopefully everyone can hear and see me okay. So if we could move on to the next slide, please. So hey everybody, my name is Sean Pertel. I'm from the Managed OpenShift Cloud Services product team. In addition to all the great features that are inherited from OpenShift Container Platform and the integrations that we have with other Red Hat products and services, I was hoping to provide some focus and visibility with a few simple slides today on some specific roadmap items that apply directly to our managed OpenShift services. So first, I just want to start quickly with a compliance readout, if you will. PCI is now available for both OpenShift Dedicated and our ROSA offering. Our next focus will be on FedRAMP certification, which is currently targeted for the second quarter of 2022. HIPAA Ready certification is also next on the agenda and in the scoping phase for OpenShift Dedicated and ROSA.
HIPAA certification is also next on the roadmap for ARO, the Azure Red Hat OpenShift service, and in addition on the ARO side we're also working on a FIPS-mode install option. Moving on a little bit to the second box there, on security, I wanted to specifically call out a few things. Starting with Amazon STS: now with ROSA and OpenShift Dedicated we can leverage policies within the AWS Security Token Service to gain access to the AWS resources needed to install and operate the cluster. This allows us to do things in a more standardized and secure way on AWS and to more directly enforce least-privilege policies, reducing the overall permission requirements needed for both the installation and the ongoing maintenance of our managed clusters. In addition to that, we've also introduced PrivateLink, which removes the need for direct public internet access for a cluster and allows more VPC network customization. It enables, for example, our Red Hat SRE teams to access the cluster for maintenance or upgrade procedures through a private connection with no need for the public internet. On the ARO side this is already available, but there's an analog for egress, which we call egress lockdown, which allows much more control over the egress traffic from an ARO cluster. Bring-your-own-key encryption is something that's being worked on for both AWS and Azure storage options. For ROSA and OSD, we are working on an additional layer of etcd encryption; it's worth noting that etcd storage is already encrypted, but this will provide an additional layer of security around etcd. And on the Azure side, we're working on an Azure Active Directory group sync mechanism. Next slide, please.

On the compute side, our primary focus right now is on expanding the instance types that are supported, sort of across the board. On both sides this means things like GPU support, which depends on an operator that does need some work, spot instance support, AMD instances, and even dedicated instances on the ROSA and OSD side. We continue to try to maintain parity between OpenShift Dedicated, ROSA and OCP, the self-managed option. On the ARO side, we're trying to support the Azure Government regions, and again working to expand instance type support specific to the Azure cloud. On the infrastructure side, a really interesting feature that is in the works right now is cluster hibernation. For on-demand managed clusters, this is obviously extremely important in terms of being able to manage costs; being able to essentially pause and unpause a cluster is definitely a step in the right direction for those clusters that you may not need running 24/7. There are ongoing integrations with several infrastructure tools to provide a lot more flexibility when it comes to how you might manage a VPC, including CloudFormation, Terraform and Ansible, on the ROSA and OpenShift Dedicated side. For networking, we're working to transition to OVN-Kubernetes as the default network provider, over from the Open vSwitch-based SDN. We're adding additional in-cluster support for network load balancing and for pre-existing Route 53 configurations when installing into an existing VPC.
Again, on the ARO side, we're working on integrations with the Azure portal UI, providing a cluster creation GUI and allowing a little more configurability when it comes to installing a cluster, including being able to determine specific versions, and then native integrations with other Azure tools such as AppLens, which is the next thing that we have here. So next slide, please.

All right, we'll close this out just by talking about some overall platform-specific features that are coming. Again, on the ROSA and OSD side, we're incorporating a more tightly integrated user workload monitoring feature, which is already available for self-managed solutions. On the managed side, the work that's being done is primarily around leveraging custom alerts, because alerts are used by the Red Hat SRE team, so there are just some permissions that need to be put in place to make sure that things will work properly for both the Red Hat team and the customer team. For ROSA specifically, we're working on AWS console integration; the work is always ongoing to provide a more native experience, including things like supporting annual agreements directly from the AWS console. A few other quality-of-life improvements on the ROSA side are in the ROSA CLI, allowing for direct YAML input. For both OSD/ROSA and ARO, we're working on integrations with OpenShift Cluster Manager to be able to provide the same level of experience across all three of these offerings, and that includes the ability to provision both ROSA and ARO clusters directly from the OpenShift Cluster Manager interface, and then, for ARO, being able to adopt and provision clusters, manage add-ons and schedule upgrades directly through the UI. Yeah, I think that covers what I've got here, so the next slide, please.

The last thing I wanted to touch on really quickly is our ongoing effort to provide more visibility and avenues for direct feedback to our product teams; it's very, very important to us. To that effect, we've established public roadmaps for each of our managed services. You can see an example here of the ROSA roadmap, but there are quick links, red.ht links, to the OSD roadmap, the ROSA roadmap and the ARO roadmap, where you can see which specific features are in progress and gain a little bit more information on those features. Next slide, please. And then finally, as part of those public roadmaps, we've enabled RFE tracking and issue tracking through the standard GitHub issue tracker. So again, any feedback is always welcome, any requests for features are welcome through here, and we'd be happy to open up a dialogue. Hopefully this will help provide ongoing visibility into our managed OpenShift services. And with that, I am going to hand it off for core platform and developer tools. Thank you.

Good morning, good afternoon, good evening, everyone. Welcome to the roadmap update on the core platform and developer tools. My name is Arun, and I'm a product manager at OpenShift; I'll be taking you through the rest of this presentation. So what's next for the OpenShift console? The console, as you know, is the face of the product for cluster administrators. First, we have OpenShift Platform Plus, which enables us to do so much more. We will be combining the capabilities of ACM, ACS and Quay so we can offer our customers the ability to manage, secure and deploy across clusters, and this is becoming our foundation for the OpenShift hybrid story.
Our hub-and-spoke model, with SSO enabled, will allow customers to accomplish their goals across clusters. Next, dynamic plugins: the OpenShift web console framework will now work with dynamic plugins that enable our teams to create beautiful layered UIs with minimal effort. With these dynamic plugins, operators can deliver and manage their UI experiences on their own release cycle, giving operator creators much more control and flexibility. And the result of the dynamic plugins and OpenShift Platform Plus is that you have a hybrid console that brings everything together into a single UI with a single URL, and users can now see their entire fleet at a glance and drill down as needed. The hybrid console is a multi-cluster UI experience that will enable an awesome layered experience with core add-ons, third-party ISVs and even customers' own integrations, and users will be able to tailor it exactly to fit their needs.

In this slide, I want to talk about our investments in serverless and our journey to strengthen our portfolio in the serverless space. There are two pillars to it. One is the serverless deployment platform, and for this we want to lead by example and become the community leader and thought leader in Knative and serverless. One aspect of it is obviously to add more event sources like Kafka and strengthen our security story, and we also want to scale our performance on the serverless side to drive and increase adoption. The next pillar is around user experience. We want to attract enterprise developers as well as non-developers, people who cannot code, people who cannot write YAML. Usually, personas like data scientists and content developers use serverless as an authoring platform, and as they do that, they should be able to integrate with other platform services such as observability and integrate with Red Hat and other cloud service providers. And we will also let users take this for a spin through the developer sandbox trial. The next piece is around the serverless platform itself, and we're doing this in two steps. The first is to make serverless the default way of deploying workloads such as customer workloads and other managed cloud offerings. The second step is to make OpenShift Serverless a fundamental and integral part of OpenShift itself, so that regardless of where you deploy OpenShift, on bare metal or Red Hat Virtualization or another managed cloud services provider, serverless will be available for you to use. And all of this leads us to the fact that we create a foundation that is very application-centric and very focused on the centralized hybrid cloud. The developer experience is not in deploying clusters or deploying pods or deploying nodes; the developer experience is in deploying the application itself, and the cluster creation is less important than making sure the developer is productive. Next slide, please.

In this slide, I want to talk about our investments in operators. First of all, you can now write operators in Java. Java is still a very popular enterprise programming language: if you look at any programming language survey, the TIOBE index, Java still rates somewhere in the top five. Red Hat is invested in Quarkus, which is cloud-native Java, and we want to extend that to writing operators with the Java programming language.
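Going back to the serverless discussion for a moment, here is a minimal sketch of what deploying a workload the serverless way looks like with Knative Serving, which OpenShift Serverless is built on; the service name and image are hypothetical:

```yaml
# Hypothetical Knative Service: scales to zero when idle,
# scales out automatically with request load.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
  namespace: demo
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/greeter:latest
          env:
            - name: GREETING
              value: "Hello from OpenShift Serverless"
```

The point of making this the default deployment model is that the developer declares only the container and its configuration, and the platform takes care of routing, revisions and autoscaling.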
We also want to enable granular permissions: as OLM shifts its lifecycle model towards global operators, we want to make sure that additional controls are available to introduce fine-grained RBAC, selectively enabling who can see an operator and use an operator, and what a particular operator can do in a particular namespace, and so on. Next, on the operator investment side, you might have heard of cloud native application bundles, the cloud-native way of packaging distributed applications. We've heard from a lot of our customers that, on the cloud native application bundle side, they would like to combine the goodness of both operators and Helm charts. So in the future, OLM will have a generic API to install, distribute and unpack cloud-native content, such as operators and Helm charts, in OpenShift clusters. And last but not least, catalog files: catalog management in OLM today is based on images, which are incrementally added to a database. As users look to automate this process and have more direct control over update graphs, channel management and package-level metadata, OLM will introduce a new way to declare catalogs using a single YAML file. This will enable releasing catalogs and updates without a bespoke pipeline, and make regular maintenance, like adding additional update edges or channels or deprecating certain versions, a single file-based operation that is Git-friendly. Next slide, please.

So what's next for Helm on OpenShift? Helm is one of the most popular package managers in Kubernetes, and we continue to make investments in that direction. The goal is to provide a self-service application development experience that enables developers to use the tools they desire and deploy their applications with minimal intervention. Along with operators, Helm is a very popular way to deploy applications on Kubernetes, and we're continuously working to enrich the developer catalog to make Helm charts available out of the box. We've recently introduced a new certification program for Helm charts, along with partnerships with HashiCorp, IBM and GitLab, with more partners getting onboarded as we speak, and you'll also see more products and applications from the Red Hat portfolio available on OpenShift with Helm charts. Next, on Helm charts, users often deal with potential security issues and misconfigurations when they pull Helm charts, and this can get quite challenging: the dependency graph can get very large, and each layer can potentially introduce misconfigurations and security risks. So we want to help developers ensure that Helm charts follow best practices and avoid any kind of misconfiguration, and so we'll be providing best practices, documentation and tooling to make sure that the Helm charts are secure and work properly. And last but not least, on Helm charts, we will continue to build greater integrations with various developer tools and services. The developer perspective in the OCP console will allow you to easily test your charts, and we will also provide the ability to install a Helm chart directly from an archive. On the IDE tooling side, the OpenShift plugin for VS Code provides the ability to install charts that are available out of the box in OpenShift, as well as the ones you've configured in your Helm chart registry.
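As a small sketch of the Helm chart registry integration just mentioned: on OpenShift today, a custom chart repository can be surfaced in the developer catalog with a HelmChartRepository resource. The repository name and URL here are hypothetical:

```yaml
# Hypothetical custom Helm chart repository exposed in the developer catalog.
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: example-charts
spec:
  name: Example Charts
  connectionConfig:
    url: https://charts.example.com
```

Once created, the charts served from that URL show up alongside the out-of-the-box charts in the console and the IDE tooling described above.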
OpenShift Virtualization. So OpenShift Virtualization is a very popular way of running virtual machines and containers together, and the investments we're making in OpenShift Virtualization fall under these four broad umbrellas. The first umbrella is hybrid cloud and edge: we want to optimize for smaller deployments like single-node OpenShift and compact clusters, and we also want to support bare metal instances in the public cloud. Next, on the workload side, we want to enhance support for workload acceleration technologies for sharing GPUs across technical workstations, video rendering and AI/ML workloads. On the third pillar of enterprise scale, we continue to see production use from customers who are modernizing existing applications at enterprise scale; one example is a major e-commerce company relying on OpenShift Virtualization to modernize private cloud services that involve millions of active users. And as part of supporting enterprise scale, we continue to enhance our partner ecosystem around data protection, backup and restore, and disaster recovery, and in this regard we're evaluating SAP HANA as a key workload to scale to our enterprise user base. And last but not least, the fourth pillar is migration at scale: to simplify bringing virtualized workloads to OpenShift, we have introduced the Migration Toolkit for Virtualization, which supports warm migration from vSphere, and we enable migrating workloads from Red Hat Virtualization with minimal disruption. Next slide, please.

OpenShift sandboxed containers. OpenShift sandboxed containers is based on the Kata Containers open source project, which provides a more secure container runtime using lightweight virtual machines. Right now it's available as a technology preview, and it adds capabilities for running specific workloads that require extremely stringent application-level security. While most of the applications and services running on OpenShift can be served by strong Linux features like SELinux or seccomp profiles and so on, sandboxed containers provide an additional layer of isolation that's needed for highly sensitive tasks such as privileged workloads or running untrusted code. Think of it as a good combination of containers and virtual machines: you're getting the lightweight nature and speed of containers, but at the same time you're getting the secure isolation goodness that virtual machines bring, so it really combines the best of containers and virtual machines. One thing OpenShift sandboxed containers will provide is compliance: this means that you will be able to deploy OpenShift sandboxed containers on FIPS-enabled clusters, and it will be safe to deploy the operator on FIPS-enabled clusters. Next, the operator will delegate the upgrade of the Kata containers runtime to the Machine Config Operator. Next, we are introducing must-gather to collect cluster information and information about sandboxed containers, so that it's easy for cluster admins to debug them. Sandboxed containers will also support disconnected environments. We will also be introducing a metrics endpoint, kata-monitor, to fetch metrics for the different Kata components such as the agent, the hypervisor and the shim.
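In practice, opting a workload into this extra isolation is just a matter of selecting the sandboxed runtime class on the pod. A minimal sketch, assuming the cluster has the sandboxed containers operator installed and the kata RuntimeClass available; the pod name, namespace and image are hypothetical:

```yaml
# Hypothetical pod that runs inside a lightweight VM via Kata Containers.
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-tool
  namespace: sandbox-demo
spec:
  runtimeClassName: kata        # provided by OpenShift sandboxed containers
  containers:
    - name: tool
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "infinity"]
```

Everything else about the pod spec stays the same, which is what makes it easy to reserve the VM-backed isolation for only the workloads that need it.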
Next slide please. In this slide I want to talk about our investments for OpenShift on bare metal. First, advanced host networking configuration will provide declarative configuration for setting up VLANs, bonds and static IP addresses at install time and on day 2, leveraging the Kubernetes NMState technology. Next, you will be able to run bare metal OpenShift anywhere, which means you could run it on physical hardware in your data center or you could also run it on virtual machines, say for instance on OpenStack or on Red Hat Virtualization. Next, node health checks and remediation: bare metal clusters should provide workload protection against hardware failure regardless of how the cluster was installed, whether IPI, UPI or assisted installer. But machine health checks require the Machine API, which is available for IPI only, so to make sure node health checking is supported for other environments like UPI and the assisted installer, we are introducing a node health check that allows node failure detection without using the Machine API, and we are also working on protecting workloads on single-node OpenShift clusters and on cluster pairs. Last but not least on the bare metal improvements in OpenShift is hardware management and observability. In 4.8 we added a number of performance-related hardware attributes to the Node Feature Discovery operator to help place workloads on nodes based on the performance required, and now we are adding the ability to configure BIOS settings to ensure nodes are profiled according to a desired configuration. We are also working on observability from the console to show hardware information from multiple cluster nodes. Next slide please.

On the next slide I want to talk about our investments in a specialized workloads scheduling framework, and this is really a multi-layered cake, so I am going to read it from top to bottom. At the top are your specialized workloads, like your big data workloads, your self-driving applications for your cars and whatnot. Right below that is the multi-cluster application dispatcher, which helps prioritize workloads in the queue based on customer business requirements. Below that is Open Data Hub, which provides workflows like Kubeflow to run the AI/ML models. Below that is the specialized operator framework, which consists of two parts: the first part is a Red Hat specialized scheduler operator that provides a specialized scheduler, for instance a batch or gang scheduler that will start, execute and finish jobs together; the second part is a customer-developed specialized workload scheduler operator, with which customers can build their own scheduler that best fits their workload needs. Below this layer in the cake is the scheduler profile, which lets users plug and play whichever specialized scheduler they want to use alongside the default OpenShift scheduler. And obviously all of this runs on OpenShift, which is enterprise-grade Kubernetes, and OpenShift is backed by RHEL and RHEL CoreOS, which are enterprise-grade operating systems that can run on any cloud or any infrastructure. Next slide please.

Thank you. In the next slide I want to talk about installation and updates. We are working to enable OpenShift to be deployed on an even greater number of platforms, including Alibaba, Nutanix, Azure Stack Hub, Equinix Metal, IBM Public Cloud and Microsoft Hyper-V, and we want to keep growing this list as we go forward, but we are also expanding our existing provider support to include more regions, more cloud instance types and whatnot.
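Circling back to the declarative host networking configuration mentioned at the top of this section: with kubernetes-nmstate, day-2 host network changes are expressed as policies. A rough sketch, with a hypothetical interface name and address, and noting that the exact API version can differ by release:

```yaml
# Hypothetical day-2 policy: put a static IP on a secondary NIC on workers.
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-static-ip
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: eth1
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
            - ip: 192.168.10.20
              prefix-length: 24
```

The same desiredState structure covers the VLAN and bond cases mentioned above, which is the appeal of doing it declaratively rather than per-host by hand.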
Back to installation and updates: there is also support for RHEL 8. RHEL CoreOS is still the default choice of operating system for the control plane, but for compute and infrastructure nodes we will provide the option to use RHEL 8 for your application workloads.

Next, the unified installation experience. Today we have multiple methods of installing OpenShift, namely installer-provisioned infrastructure (IPI), user-provisioned infrastructure (UPI) and the Assisted Installer, and each method addresses a different deployment scenario. If you want full-stack automation you go with IPI, if you want an à la carte install on your own custom hardware you use UPI, and if you want some help with bare metal installs you use the Assisted Installer. So there are a lot of different ways to install OpenShift depending on the deployment scenario, and we have seen that sometimes these options are too many and it becomes difficult for users to choose, because in some cases there is more than one option for a particular scenario; on vSphere, for instance, you could do either IPI or UPI. The other problem is that we want to be more agile in supporting more cloud platforms, providers and availability zones. If there is a new cloud provider like DigitalOcean or Equinix, or if Amazon introduces a new availability zone, we want to be able to support those new providers and regions quickly using the full-stack automation of IPI. Today, onboarding a new provider or region usually takes multiple releases and a couple of cycles to get right, so we are looking at a more scalable and more agile way of integrating new providers without compromising the installation experience. At the core of it is the install core, which is what actually installs OpenShift, and we want to improve that. Layered on top of that is a cluster lifecycle API, which we also want to improve, and we are also looking at centralized host management, that is, managing hosts across the multiple ways of deploying OpenShift. At a high level, this effort will involve introducing the OpenShift Hive operator, which will provide a cluster provisioning API upon which we can build a new central host management service, along with improving the cluster provisioning experience with OpenShift Cluster Manager (OCM) and ACM.

Last but not least, on EUS-to-EUS upgrades, we are working to improve the experience for customers while minimizing workload disruption. Some intermediary versions cannot be skipped for control plane upgrades, but we are looking at skipping them for the compute nodes. This means that control plane upgrades will still be done sequentially between EUS releases, but for some intermediary versions we may be able to skip the upgrades on the compute nodes when progressing to the next EUS release. For instance, the process will require pausing the compute machine config pools at specific times during the upgrade so that the compute-node upgrade can be skipped while moving through the intermediate release; a hedged sketch of that pause follows below.
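As an illustration of that pause step, here is a minimal sketch, assuming the default worker machine config pool for compute nodes; this is a generic mechanism, not the documented procedure from this update.

```yaml
# Illustrative merge patch for the compute ("worker") machine config pool,
# keeping compute nodes where they are while the control plane steps through
# the intermediate release. One assumed way to apply it:
#   oc patch machineconfigpool/worker --type merge -p '{"spec": {"paused": true}}'
spec:
  paused: true   # flip back to false when ready to roll compute nodes to the next EUS release
```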
As you can see in this diagram, if you are on a 4.6 EUS release, all control plane and compute nodes are running 4.6. If you then upgrade to the next minor release, call it 4.(n+1), you upgrade only the control plane nodes and not the compute nodes, and when you move to the next EUS release we upgrade all the nodes. That way we minimize workload disruption while still keeping up with the latest and greatest updates to the control plane.

The next slide is about bringing your own Windows Server hosts. We are seeing that in the Windows world customers treat their instances more as pets than cattle, and there is a desire to be able to reuse these pet Windows Server instances as OpenShift worker nodes, run Windows workloads on them, and gain benefits similar to what their Linux workloads get when managed by OpenShift. Today we support Windows Server container deployments on OpenShift on three platforms, AWS, Azure and vSphere, using the installer-provisioned infrastructure (IPI) method. We want to extend this to bring-your-own-host (BYOH), so that if you have Windows Server instances as pets, for instance a bunch of Windows Server x86 servers running in your private data center, this feature will let you onboard those instances as worker or compute nodes on your OpenShift cluster. There are only two caveats. The first is that the Windows Server instance has to be on the same network as the Linux worker nodes in the cluster it joins, and it also has to be on the same cloud provider the cluster was brought up on. The second is that the prerequisite for Windows containers is OVN-Kubernetes hybrid networking, so the cluster has to be installed with OVN hybrid networking before you can set up your Windows Server nodes. Next slide.

On the next slide I will wrap up with a preview of cert-manager in OpenShift. The goal is to have a cluster-wide operator for application certificate lifecycle management that supports integration with external CAs. This has been a very popular ask for a while now, and this work will cover provisioning, renewal and retirement of certificates. We are writing a new cert-manager operator, which is a thin layer on top of the upstream cert-manager project, and I want to emphasize that it will be available for all workloads running on OpenShift except bootstrap components that need certificates before the operators exist. So, for instance, you will be able to use it with any day-2 operator, any OLM-installed operator, any application or middleware component installed on OpenShift, and obviously any applications of your own. It is not meant to do day-1 certificate management for etcd, the API server or other control plane infrastructure components; it is only for application-level certificate management. The latest upstream release right now is 1.3, and that is what we will include in the operator.

With that I would like to wrap up this presentation. Thank you for attending; the slides and the recording will be posted on Twitch. Once again, have a wonderful day, and thank you for attending the roadmap update on OpenShift.