Hi and welcome, everyone, to the Q4 2021 update to What's Next in OpenShift. As many of you know, the product team does a What's New and a What's Next presentation every quarter or so. This is the What's Next Q4 update. The What's Next offers an overview of the direction, initiatives, and exciting new use cases and features over a six- to 18-month time horizon. These are heavily influenced by you, our users, via formal and informal feedback, and also by market drivers and trends. This is an hour-long presentation that covers the overview and motivation for the roadmap that we're going to present to you. Specific details for each of the topics covered here can also be found in the appendix, which is a part we are not going to cover, but you will have access to it after this presentation and can use it as a reference. Please note that this is a roadmap, and therefore anything we discuss here can change and is subject to change. Please bear that in mind as you plan your next few months and years. With me today, I have a fantastic team of fellow colleagues and product managers from the OpenShift team, who will be the speakers. In addition, we have the rest of the team present today as well, and this is the hard work of not just those of us presenting, but all of them, and obviously, by extension, all the engineers and the other functions that make this happen. So thanks to all of them, and you'll see and hear them all talk today. Just by way of introduction, I don't think I introduced myself: my name is Tushar Katarki, and I'm one of the OpenShift product managers, kind of shepherding this presentation. For the past 10 years or so, we have made the case for the Open Hybrid Cloud, as many of you know. Our customers have innovated, competed, and succeeded in creating value for their customers through applications built for the Hybrid Cloud. 
Those applications can range from traditional n-tier applications to more modern, cloud-native, microservices-based applications; they can encode more traditional business logic or rules or more modern data analytics and AI; and they can be in-house developed applications or packaged applications from ISVs. No matter what, our customers have developed and deployed these applications across the Hybrid Cloud footprint, everything from a physical data center to a public cloud and to the edge. Red Hat OpenShift, built on Red Hat Enterprise Linux, has been the bedrock platform in this journey. That continues to inform our roadmap, our initiatives, and our future. OpenShift, as a quick level set, as you probably already are aware, can be consumed as a fully managed cloud service or as a self-managed platform. Managed Red Hat OpenShift is jointly engineered and offered by Red Hat with the corresponding cloud providers so that you can get started with a Kubernetes service very quickly. OpenShift managed services include OpenShift Dedicated, Red Hat OpenShift Service on AWS (ROSA), Azure Red Hat OpenShift (ARO), as well as OpenShift on IBM Cloud and Google Cloud. In addition, you have self-managed products from Red Hat. These include OpenShift Platform Plus, which we'll touch upon briefly on the next slide, the OpenShift Container Platform, OpenShift Kubernetes Engine, as well as other self-managed software offerings. You can choose the model that best suits you and your needs, or a combination of all of them. It is, as we say for OpenShift, anywhere, any way, at any time. You are all probably already familiar with this block diagram, if you will, a rendition of what constitutes Red Hat OpenShift. As a quick recap, OpenShift Container Platform is Red Hat's distribution of Kubernetes built on top of Red Hat Enterprise Linux CoreOS, and it also includes many platform, developer, and data tools and services. 
That being said, especially over the past year, and we certainly anticipate more next year, enterprises and organizations need to deploy and manage applications and clusters in a multi-cluster, hybrid cloud environment. They need to answer new questions in this context. How do I deploy applications across multiple clusters and clouds? How do I monitor these applications as well as these clusters and drive updates and upgrades? Are my images free from vulnerabilities? How do I ensure a secure supply chain? How do I store images for connected and disconnected users? How can I integrate security into my entire dev process, from conception to production? OpenShift Platform Plus, which is something we introduced earlier this year, in the first quarter of 2021, comes as the answer, if you will, and includes, along with the OpenShift Container Platform, Red Hat Advanced Cluster Management, Red Hat Advanced Cluster Security, and Red Hat Quay, integrated and tested together to address your management, security, governance, and registry needs, and more. We'll discuss that in a separate section later. As you all know, Red Hat is an open-source company, and everything we do is upstream first, in the open communities of innovation. OpenShift Platform Plus and its component parts are all built from these fantastic upstream communities shown here. Definitely very colorful, to say the least. On behalf of the entire product team, I wanted to acknowledge the many contributions that come to us by way of these communities and say thank you. As we look towards the year 2022 and beyond, our mission is to enable our customers to accelerate the deployment of applications in hybrid cloud environments, which obviously include multiple clusters, through a rich services-based experience that we are calling the Hybrid Cloud Experience. You will see this Hybrid Cloud Experience come to you over time. 
You already see this in cloud.redhat.com, but more is coming, and a lot of the roadmap that you will see is in support of that. This Hybrid Cloud Experience is comprised of three themes. Unified Experience, which you see here on the right, is the first of them; it brings you best-of-breed uniformity of experience for application developers, DevOps engineers, data scientists, data engineers, machine learning engineers, and of course admins and operations folks, spanning the hybrid world end to end. Security Everywhere is the second theme, which offers tooling and capabilities to ensure applications run securely from conception to production, and that users interact in compliance with internal and industry standards. Platform Consistency provides a platform that tastes, smells, sounds, and feels the same no matter what the Hybrid Cloud footprint is, and also provides a rich ecosystem of products and technologies, not only from Red Hat but also from our very strong ISV and partner ecosystem, that gives users the choice to customize and get the best of breed that suits their particular need, no matter what the cloud, all the way to the edge. We have three pillars of execution, which you also see here in the center, and we execute on them in the context of the Hybrid Cloud Experience and the three themes I talked about earlier. You'll find that the rest of the presentation is organized along these three pillars and these three themes. The first of these pillars, here on the left, is the core platform and developer tools pillar, which includes our investments in Kubernetes, Linux, and platform and developer tools. We have added a lot of innovation over the years, four or five of them, maybe even more if you count Linux, for the past 15 to 20 years. This innovation is not slowing down. 
There's more coming in response to new hardware accelerators, be it because of new specialized workloads with new kinds of scheduling, be it because of innovations in networking, including network observability, GitOps and DevOps, and exciting new things for developers with regard to serverless and service mesh, and IDE experiences with CodeReady. This pillar is foundational to our other two pillars, which are the managed cloud services and the telco and edge pillars. The telco and edge pillar is in service of rapid innovation and needs from the 5G core and 5G RAN in the telco industry. We have already seen major customer wins and adoptions in these markets, and we'll continue to do so. Obviously, this segment needs a lot of innovation. These include the desire to develop and run container-native network functions, or AI and machine learning applications developed at the core and deployed at the edge with containers on a 5G footprint. This could also include collecting data at the edge, anonymizing and cleaning it, and then feeding it back into the core, or acting upon that real-time data. The managed cloud services pillar is the third pillar, and this is all about bringing OpenShift and application services from Red Hat and partners as a fully managed and SRE-backed service on the cloud of your choice. This includes ROSA and ARO and OpenShift Dedicated, which I touched upon earlier, but also application services brought to you as SRE-backed services, such as API management, a streams service with Kafka, Red Hat OpenShift Data Science, subscription management, cost management, and Insights, all available via cloud.redhat.com as a rich web-based GUI or also through APIs. In 2022, we'll be doubling down and introducing more innovations in this space with more application, data, and managed services, and that informs a lot of the roadmap that you'll see. As you all know, I'll touch upon this next part very briefly. 
When we released OpenShift 4, we went to a rolling-window life cycle, so the life cycle of a specific 4.y release lasted until the y+3 release GA'd. When OpenShift 4.8, for example, GA'd, OpenShift 4.5 reached end of life. This typically meant about 10 to 12 months of life, but based on feedback we have gotten from customers and users, this was constraining from an operational point of view. We heard them, and therefore we are introducing this life cycle change, which applies to our minor releases, the 4.y releases. The highlights really are: we are changing from our current version-based life cycle policy to a time-based life cycle of 18 months for all minor releases of OpenShift 4. This change takes effect with Red Hat OpenShift Container Platform 4.7 and higher. We are also designating even-numbered releases as EUS (Extended Update Support) releases, so that we can provide you a rich EUS-to-EUS upgrade experience between those even-numbered releases. Then finally, the three OCP releases per year are in cadence with upstream Kubernetes, which has also gone to three releases per year. What this all means is shown in this roadmap in a nutshell, a one-slide nutshell of everything. I'm not going to go through each one of these, obviously, but we'll cover some of them in the rest of the presentation. We continue to innovate with exciting new capabilities across the core platform, application developer, and managed services pillars, as you can see here, through calendar year 2022 and beyond. OpenShift 4.10 will GA in Q1 of 2022, followed by 4.11 in early Q3 2022, and 4.12 will be the last release of the year. You can find details of these features and much more, as I said earlier, in the appendix of this presentation. With that said, and with this introduction, I'll hand it over to Scott Burns to take us through the Hybrid Cloud and OpenShift platform. 
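To make the time-based life cycle concrete, here is a minimal sketch in Python that adds 18 months to a GA date. The helper name and the GA date used below are hypothetical placeholders; the official end-of-support dates are the ones Red Hat publishes.

```python
from datetime import date

def end_of_full_support(ga: date, months: int = 18) -> date:
    """Approximate end-of-support by adding whole months to the GA date.

    Illustrative only: the published lifecycle dates may differ by days.
    """
    month_index = ga.month - 1 + months
    return date(ga.year + month_index // 12, month_index % 12 + 1, ga.day)

# OpenShift 4.10 is slated for GA in Q1 2022; the exact day is a placeholder.
ga_410 = date(2022, 3, 10)
print(end_of_full_support(ga_410))
```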
Hey, thank you, Tushar, and hello, world. I appreciate everybody joining today. I'm thrilled to be talking to you about OpenShift Platform Plus and, really, Red Hat's Hybrid Cloud experience. You can hit the next slide, Tushar. So one of the things we've really started to notice with our customers is that there's more of a concern for the regionality of your hub, this concept of a hub being the central point of management. When you start to see clusters proliferating like cattle out there, management is part of the story from everybody's first breath. And when you get to that point where management becomes key and clusters proliferate, we really start to think about the hub cluster as a unit where you manage all these different bits and pieces. And from there, you can see we're building in concepts for OpenShift Platform Plus with capabilities around ACM and ACS and Quay. We're seeing HyperShift, which is a new deployment pattern and a new way to manage the control planes for how you deploy those OpenShift clusters. And all of that really represents a shift in the thinking that we're seeing, and this is not just one or two customers, but across the board. We're all looking for that infrastructure to be managed and automated. We want our operations teams to be sleek and working at fleet capability. We're just at the point where you can't have one or two clusters anymore. We even have clusters that are being chiseled out and defined for specific applications and specific workloads to run on that shaped cluster. Those can be clusters from the central data center all through the edge tiers and beyond. Let's hit the next slide. To really hold on to that thought: as you standardize in this space with one hub or multiple hubs, you start to see how, from your first cluster to the hundredth or the thousandth cluster, you really need the consistency of networking. 
You need the consistency of storage and tooling, ingress, egress, container registries. Everything that we have built in our hybrid cloud and OpenShift Platform Plus model speaks to the user who has the frustration of the concerns and challenges of managing this new environment: making sure that the east-west traffic is tunneled correctly from cluster A to cluster B, ensuring your ingress and egress are all managed consistently. So what you see on this slide really represents our point of view for how your operations teams and your developers can start to interact with this platform in a way that's consistent and provides the ability to go from one cluster to the next without a huge headache of disruptions in between. Let's hit the slide. So with that, we really start to talk about the center point of management for the Kubernetes space, and that's Red Hat Advanced Cluster Management for Kubernetes. You're going to see these three themes throughout this deck, and we hope that helps you anchor in and define the important points that we're delivering with each product area, so that it's a cohesive picture as we deliver this as a Red Hat portfolio. So the unified experience that we're talking about here with ACM is a single console experience; whether that's an OpenShift console or an ACM console, we want those to look and feel as one, and that's a consistent delivery in the on-premise model or up in the cloud. So as I navigate the managed cluster and my fleet of clusters, from one to a thousand, I don't have these jarring experiences from one tab to the next tab. I want to be able to flow into those consoles and see the metrics and see the data and see the applications as if it's all been unified, which is what we're going for here. We think that ultimately reduces the total cost of ownership. 
It reduces the headache on skills and upskilling and the different tools that you need in that space, as we really bring the unification of these OpenShift Platform Plus capabilities into your teams. Security everywhere, that is such an important facet of the entire package here, making sure that your supply chain is secured from the beginning to the end of that application delivery. We're bringing cosign manifest signing and looking at secrets management as areas that are a huge headache for our customers, especially in the GitOps model, where they want to be able to deploy those workloads out consistently across clouds, again, on-premise with bare metal or up in the clouds with any of your favorite public providers. We think that reduces your exposure and risk. Obviously, those are huge headaches for security teams to take on, and we also recognize that the reduction of exposure and risk lowers your cost as well. There's less of that excitement of your company being in the headlines because of something that got out where it wasn't supposed to. Third, we look at platform consistency, and we think this is a really key pillar for increasing developer productivity as well as lowering your costs. When you look at the deployment models, from single-node OpenShift out at the edge, to compact and multi-node clusters, remote worker nodes, even the HyperShift-hosted clusters that are coming in, and hierarchical tiers of management hubs, you really start to understand that no matter where you are, OpenShift has a distribution that's consistent for you on any cloud, anywhere on the planet, and we think that's the best place to be. Reduction of complexity in the distribution allows you to deliver applications consistently anywhere you want them to be. Let's hit that next slide. When you look at this story about applications and workloads and how they're going to run consistently everywhere, you obviously need a networking layer that can speak to that. 
And our multi-cluster gateway for ingress and egress really points to the unified handling of that traffic. Again, regardless of where you're at, you can see MetalLB, HAProxy, and Istio ingress at various different layers of that traffic gateway. So as you funnel through this map, you understand the inbound traffic coming from the internet, and as it flows through these different protocols, you shouldn't have to care about whether that workload needs to be architected this way or that way; we want to handle that for you. So we're eliminating the risk and challenges around different protocols and ensuring that there's uniformity in that flow of traffic. You can see how we're aligning these layers so that you can take some of that thought off your plate and put it more toward where you're interested, which is innovating with your applications. It's not that there has to be one single point of view; we want to encourage all of these network capabilities in the box. And you'll see there, we highlight the Submariner capability, with that VPN tunnel connecting clusters across clusters in the east-west scenario. So all of this is being built and packaged into OpenShift Platform Plus, ensuring that you have the ability to automate and deliver the operations for your workloads. Let's hit the next slide, please. And to round this out, it really starts to make the most sense that we bring storage into this picture. It's not just an afterthought; it's something that you bring in on the day-one experience. It's something that you plan for and work around. So we're really unifying this experience as well, to bring in storage capabilities like CSI migration from in-tree drivers, and a couple of other key features around security everywhere with Kerberos mounts and secrets stores. And finally, looking at platform consistency with CSI ephemeral volumes. 
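As a concrete illustration of east-west, cross-cluster service connectivity of the kind Submariner enables, here is a minimal sketch of a ServiceExport manifest in the Multi-Cluster Services API style, expressed as JSON (which `oc`/`kubectl` accept). The service name and namespace are hypothetical, and the exact API version may differ by release.

```python
import json

# Hypothetical example: export the "payments" Service from the "shop"
# namespace so other clusters in the cluster set can resolve it.
service_export = {
    "apiVersion": "multicluster.x-k8s.io/v1alpha1",
    "kind": "ServiceExport",
    "metadata": {"name": "payments", "namespace": "shop"},
}

# JSON manifests can be piped straight to `oc apply -f -`.
print(json.dumps(service_export, indent=2))
```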
This is ensuring that ODF, OpenShift Data Foundation, our storage counterpart, is really providing the asynchronous capabilities that you're looking for without having to target separate storage capabilities that you bring into the situation. So we're doing things like the DR operators, which bring failover and failback operations to reduce your downtime, easing that disaster recovery scenario and ensuring consistent data foundation capabilities everywhere you go. We're all about increasing developer and admin productivity, reducing again the risk of disruptions to your business continuity, and reducing your total cost of ownership by standardizing storage across your fleet. Of course, OpenShift storage is available across clouds; you'll find the CSI drivers available in any of the popular clouds. And with that, I'm going to hand it over to my colleague, Jimmy Scott, who's going to run you through some of the what's next features in Advanced Cluster Security for Kubernetes. Awesome, thanks, Scott. And in our path to enable you to secure and manage your first and your hundredth cluster, OpenShift Platform Plus really brings together a bundled set of offerings. It brings together Advanced Cluster Management, as Scott just described, but it also brings together Advanced Cluster Security for Kubernetes, which came from the StackRox acquisition and plays a part as well in helping you enable your first and hundredth cluster in a secure and compliant manner. So through that, Advanced Cluster Security is going to focus over the next year or so on bringing you a more unified experience. What that means is we want to break down cross-functional barriers to help you reduce cost. So we're going to do that by accelerating operationalization with managed services. This will help enterprise teams decrease the swim lanes within their organization and accelerate that operationalization by being able to manage it within their own toolchain. 
We are also looking to improve feedback loops between development and security teams so that they can share the same language as they work to communicate and secure application workloads. And we want to do this by improving network policy and how it's managed across clusters, and through security assessments over time. We're also looking at how we enable security everywhere more effectively. We want to do that because, ultimately, Kubernetes is still new to a lot of people, especially to security teams. We want to enable teams to innovate with confidence by helping bridge the skill gap between a security professional who's lived in the Kubernetes space and someone who might not necessarily have that skill set. And we want to do that by identifying different risk indicators across expanded use cases, and by also enabling teams to remediate issues more effectively, giving them the information they need at their fingertips in order to fix issues versus just identifying the issues. We also want to establish additional platform consistency with the information across the portfolio. And we're going to do that by providing consistent security data across use cases and across different panes of glass throughout the OpenShift and Kubernetes ecosystem. And we want to do this in a way that enables teams to scale their policy workflows in a repeatable manner, so that they can establish guardrails in order to innovate with confidence and reduce the complexity to focus their resources within their organization. And this will enable us to create and evolve the Kubernetes-native security platform that helps teams across the entire lifecycle of an application, through build, deploy, and run, as they seek to secure their supply chain, their workloads, and the infrastructure that's being deployed on Kubernetes. 
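To ground the network-policy guardrails mentioned above, here is a minimal sketch of a standard Kubernetes "default deny all ingress" NetworkPolicy, built as JSON (which `oc apply` accepts). The namespace name is hypothetical; this is a textbook baseline policy, not ACS-specific output.

```python
import json

# Baseline guardrail: deny all ingress to every pod in a namespace.
# Teams then layer explicit allow rules on top of this default.
default_deny = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "default-deny-ingress", "namespace": "team-a"},
    "spec": {
        "podSelector": {},           # empty selector = every pod in the namespace
        "policyTypes": ["Ingress"],  # no ingress rules listed, so all ingress is denied
    },
}
print(json.dumps(default_deny, indent=2))
```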
We do this through a policy engine and an API that work to establish feedback loops that are continuous throughout application development and life cycles, using tools that security teams and development teams use natively, such as PagerDuty and Slack, or a SIEM such as Splunk, Sumo Logic, or QRadar. And we enable you to do this across public cloud, private cloud, and multi-cluster environments in our attempt to enable you to secure your first and hundredth cluster. But further, next slide please. Skip over that one too. We further want to enable you to have a unified experience with Red Hat Quay as well. And we're going to do that by establishing visual consistency across a new user interface with a look and feel that is familiar from the OpenShift console. But we're also going to be working to integrate with Quay.io and console.redhat.com, so that users can log in to console.redhat.com and get the Quay experience you're used to with Quay.io, but also take advantage of different forms of pricing, so that you can use your existing purchase orders, using SKUs that you paid for up front for a year, or use pay-as-you-go pricing with a credit card through console.redhat.com. This is going to enable a more consistent user experience from a self-managed environment to a hosted environment. We also want to establish security everywhere. And we're going to do that by expanding the scanning coverage beyond container base images, looking into establishing language-level support for languages such as Java and Golang packages. And as you've heard, Red Hat is making a significant investment in cosign in order to trust and verify signatures. And Quay will also be looking to support cosign-based image signatures and attestations, so that you can verify image provenance and signature identities directly within the registry, even before an application workload goes into production. And finally, we're looking to establish better platform consistency. 
With a global deployment model reaching all the way from the core data center to cloud regions and the far edge, users of Quay are finding that they need a suitable content distribution model for Kubernetes. Quay helps you reduce the load on the single central registry instance that you're using, and is looking to do that with geo-replication. It also helps you provide a consistent consumption experience through pull-through caching of external registries, so that you can use Quay with external registries proxied as well. And this will enable a hybrid content distribution model across the enterprise. Next slide, please. So we're also making significant investments in establishing workload observability. We want to do that by simplifying the hybrid observability model. So we're looking to have an integrated cloud observability tool set to help you bridge the self-managed and cloud-managed workload solutions. And we're looking to do this by establishing workload monitoring in user-defined projects to monitor flexible hybrid workloads and applications. This will help teams to optimize costs between hybrid environments. We're also looking to establish additional observability for how that consistent information is stored. Red Hat is going to provide the necessary tooling to ensure observability can be delivered across multiple environments. And we're going to do that by improving Thanos and Prometheus support to extend remote write for storage and platform monitoring of OpenShift workloads. And this will help you establish longer-term trends and ingest metrics. Finally, we're looking to establish visual flexibility within our platform. And we're going to do this by providing one choice, one provider, from the data center through your edge tiers for observability. So we want to extend the platforms and locations where you can use the existing dashboards within the OpenShift console and export observability metrics and log metrics. 
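As a sketch of the monitoring-for-user-defined-projects capability mentioned above, here is the shape of the ConfigMap that enables it in OpenShift's cluster monitoring stack, built as JSON. This is hedged: consult the OpenShift monitoring documentation for the exact schema in your version.

```python
import json

# Sketch of enabling monitoring for user-defined projects; the key piece
# is `enableUserWorkload: true` inside the config.yaml data field.
monitoring_config = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {
        "name": "cluster-monitoring-config",
        "namespace": "openshift-monitoring",
    },
    "data": {"config.yaml": "enableUserWorkload: true\n"},
}
print(json.dumps(monitoring_config, indent=2))
```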
We're going to do this by optimizing the API experience in the OpenShift console. Next slide. And finally, to round out our investment in observability, we're also looking to help users observe network traffic more effectively. So whether in one cluster or 100 clusters, developers and cluster administrators are always going to need seamless application connectivity. But they're also going to need to troubleshoot when that connectivity isn't seamless. So we're looking to give you the information on network traffic metrics and tracing in order to perform that troubleshooting. We're also looking to help teams meet their security and regulatory compliance obligations by giving them the visibility into traffic in and around their networks, so that they can establish network policy and governance and use the necessary tools to ensure code is secured across all of their environments. And finally, we want to provide additional platform consistency. And we're going to do that by giving developers and administrators the common understanding they require of their traffic within and across multiple cluster boundaries. We're going to do that by establishing a topology for users to understand and share a viewpoint on network traffic flow and visualization. Next slide. And for those of you who aren't familiar with HyperShift, HyperShift really brings an externalized control plane to OpenShift in a multi-cluster environment. HyperShift is middleware for hosting OpenShift control planes at scale. And it solves the problem of the cost and time to provision multiple clusters, as a means of portably implementing cross-cloud workflows. So HyperShift is going to give you a fleet-level provisioning view for your clusters in a way that gives you a myriad of benefits. And these benefits range from lower CapEx and OpEx, to faster cluster bootstrapping, to the use of heterogeneous-architecture clusters, such as mixing x86, ARM, or IBM Z architectures. 
This allows you to manage these clusters in a way that has network segmentation and trust established throughout the environment. And with that, I really want to hand it over to Deepthi to walk through our telco and edge strategy. Thank you. So built on top of enterprise OpenShift, these are some of the enhancements we've undertaken specifically to support telco and edge workloads on the platform. Telco workloads require high-performance, low-latency computing. And in order to achieve that, workloads need absolute resource guarantees to enable predictable performance. We have been working on PAO, the Performance Addon Operator, and the topology-aware scheduler, to name a few. And through this, we want to achieve optimal resource utilization with enhanced performance on the platform. Today, if you look at typical networks, we have dedicated appliances, we have virtualized 4G workloads, and now we have 5G workloads running in containers. Running telco workloads as microservices has its added benefits. That includes continuous CI/CD, seamlessly upgrading various parts of the network without breaking anything. Now, what we're trying to do is to simplify network operations and management by making it practical to run all telco workloads on a common platform. We do have the CNF certification process in place to ease the move. Finally, we've always looked to enable next-generation hardware, be it CPUs, NICs, SmartNICs, or GPUs, to facilitate an agile infrastructure with the latest and most efficient hardware. Next slide, please. So as we know, telco workloads need coherent and predictable resource alignment. It's about having CPU, memory, and devices, all the resources that are assigned to your pod, belonging to the same NUMA node. And without this alignment, we're cognizant of the high performance penalty that one could see. So we've tried to address this with Topology Manager earlier, which works very well aligning resources at a node level once the pod is scheduled on the node. 
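One concrete detail behind the resource guarantees above: Topology Manager's strictest alignment applies to pods in the Guaranteed QoS class, where requests equal limits for CPU and memory. The sketch below shows such a pod spec as JSON; the pod name and image are hypothetical.

```python
import json

# Hypothetical Guaranteed-QoS pod: requests == limits, whole CPUs,
# which is the precondition for strict NUMA alignment of resources.
guaranteed_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "dpdk-worker"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "registry.example.com/telco/dpdk-app:latest",
            "resources": {
                "requests": {"cpu": "4", "memory": "8Gi"},
                "limits":   {"cpu": "4", "memory": "8Gi"},  # equal => Guaranteed QoS
            },
        }],
    },
}
print(json.dumps(guaranteed_pod, indent=2))
```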
But given that the Kubernetes scheduler itself is not aware of any topology, this can often lead to runaway pod creation when NUMA alignment constraints are not taken into account while scheduling. So we've been working with the upstream communities to enhance the Kubernetes scheduler to make intelligent NUMA-aware placement decisions to optimize performance, specifically for telco. The first implementation of the NUMA-aware scheduler is based on the upstream RTE, or resource topology exporter, component, and we will be switching to the node feature discovery project in the near future. With topology-aware scheduling enabled, workloads should never be placed on platforms that cannot meet their resource needs aligned to their topology preferences. Next slide, please. We've always looked to support leading-edge networking hardware and accelerators on the platform. Coming to 5G, this becomes even more essential and critical. With OVN hardware offload, we're looking to offload all of the data plane traffic flows and services to the NICs and FPGAs. Doing this can benefit telco NFV customers, who can now have high-performance data planes with improved networking services. With SmartNICs, one can look to isolate the control plane onto a separate cluster just for running infrastructure services, say, running on the ARM cores in the NIC while the tenant workloads continue to run on the nodes. This provides managed, accelerated infrastructure services, including networking, storage, and AI/ML, outside of the tenant cluster. Finally, with support for the latest hardware accelerators, be it programmable FPGAs, crypto engines, or GPUs, all managed by OpenShift operators, we're looking to accelerate 5G core and RAN functions like inline encryption, data plane encapsulation, and so on. Next slide, please. So now we get into the RAN. The RAN is at the edge of the network. 
It's a crucial connection point between the end-user devices and the rest of the operator's network. With the current ongoing 5G network transformation, one is increasingly seeing container-based cloud-native solutions for the RAN. It is very important that we simplify network operations and improve stability, availability, and efficiency, all while serving an increasing number of devices with high-bandwidth applications. As it is at the edge, it is essential to have a very small footprint and optimized infrastructure, but with very good performance to meet 5G requirements. We do have single-node OpenShift, which fits the bill right here. And given that we're going to deploy hundreds and thousands of sites with hundreds and thousands of such devices, it is essential to deploy, manage, and upgrade all of this in an automated way at scale via Advanced Cluster Management and zero-touch provisioning. All of these DUs, which typically handle antenna coverage, do a ton of calculation in real time, and we tune the nodes that run real-time workloads to leverage advanced timing and support hardware accelerators on the platform to achieve such high performance. Next slide, please. ZTP, zero-touch provisioning, is a way to deploy OpenShift clusters at scale in an automated way via ACM. It uses a declarative GitOps approach to deploy OpenShift on new compact topologies. We're continuously looking to evolve and enhance this specifically for the edge. On the scale front, we're looking to support more than 2,000 single-node OpenShift (SNO) clusters provisioned and managed by a single instance of ACM very soon. And with policy-based upgrades, we're defining groups of SNOs that can be upgraded independently of each other for more granular multi-cluster management. Ideally, going forward, we would like to ZTP everything. That's right, ZTP everything.
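To make the idea of policy-based upgrade groups concrete, here is a small sketch in Python (the label key and cluster names are invented; in practice this grouping is expressed declaratively through ACM policies, not code like this):

```python
def upgrade_waves(clusters):
    """Group SNO clusters by an 'upgrade-group' label; each group can then
    be rolled out to a new OpenShift version independently of the others."""
    waves = {}
    for name, labels in clusters.items():
        waves.setdefault(labels.get("upgrade-group", "default"), []).append(name)
    return waves

clusters = {
    "sno-001": {"upgrade-group": "canary"},   # upgrade first, watch closely
    "sno-002": {"upgrade-group": "wave-1"},
    "sno-003": {"upgrade-group": "wave-1"},
}
print(upgrade_waves(clusters))
# {'canary': ['sno-001'], 'wave-1': ['sno-002', 'sno-003']}
```

The point of the grouping is blast-radius control: a bad release stops at the canary group instead of rolling across thousands of sites at once.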
The DUs, the C-RAN hubs, the additional infrastructure that is needed: ZTP for all of it would make it easier to deploy, manage, and upgrade clusters at the edge with ease and at scale. Next slide. The challenge with a 5G network is to provide ultra-reliable, highly accurate timing synchronization over the 5G packet network. The answer to this is the Precision Time Protocol, and this is the reason we have invested in enhancing it and making it more robust in the coming days. Given that we already have a single-NIC ordinary clock, a boundary clock, and an event bus for PTP events, we're now looking to build on top of these enhancements to the PTP stack and the PTP Operator on OpenShift. We're looking to enhance precision time control, scale to a larger number of RUs, and build in high availability through richer policies for how the system clock is set, with best master clock selection. We're also looking to upgrade to the linuxptp 3.1 stack, which has much richer features, improved algorithms, and robustness enhancements. And we're looking to support the grandmaster clock via the NIC; this would greatly reduce the cost of the cell site by moving that functionality from the cell-site router to the node. For those who are interested, we have a detailed roadmap for this in the later part of the deck. Next slide, please. We're increasingly realizing RAN functions on generic off-the-shelf servers with open-source software. As part of the next phase, we're looking at how we can optimize these nodes for power savings without any performance penalty. At the end of the day, we do not want the nodes to consume any more power than is necessary. Given that many telecom vendors run thousands of these far-edge DUs, and power is getting expensive, especially at the far edge, every bit of power savings on the node directly translates to huge dollar savings.
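The dollar-savings claim is easy to sanity-check with back-of-the-envelope arithmetic; the wattage, fleet size, and electricity price below are invented purely for illustration:

```python
def annual_savings_usd(watts_saved_per_node, node_count, usd_per_kwh=0.12):
    """Yearly electricity savings for a fleet of far-edge nodes."""
    kwh_per_node_per_year = watts_saved_per_node / 1000 * 24 * 365  # W -> kWh
    return kwh_per_node_per_year * node_count * usd_per_kwh

# Hypothetically saving 20 W on each of 10,000 far-edge DU nodes:
print(round(annual_savings_usd(20, 10_000)))  # 210240 (USD per year)
```

Even a modest per-node saving multiplies into a substantial figure at edge scale, which is why the effort spans the hardware, the OS, OpenShift, and the workloads themselves.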
So we're working up the stack, right from the hardware to the operating system, OpenShift, and finally the workloads themselves, to enable power-saving profiles and modes, with the end goal that all of this working together coherently will result in no wastage of power, especially at the edge. Next slide, please. Now it's over to managed services. Thank you. Hey, DP, thank you. Hopefully video and audio are coming across. Tushar, next slide, please. So I just want to level-set a bit about what I'm going to be specifically talking about relating to managed OpenShift. I know Tushar touched upon that much earlier in the presentation, but as we're all aware, everyone knows about OpenShift Container Platform. This is where you buy a subscription, take the bits home with you, or into your data center or cloud of choice, install it, and you are doing a lot of the day-one and some of the day-two operations as well. But we're going to be talking specifically, over on the left-hand side of the slide, about the managed OpenShift offerings, which bring more of an OpenShift-as-a-service model. We have partnerships with the large cloud providers where we provide a first-party native service, where really the entirety is available on your platform of choice. So with AWS we have Red Hat OpenShift Service on AWS, or what we like to call ROSA; similarly with Azure, Azure Red Hat OpenShift, or ARO; and on IBM Cloud, Red Hat OpenShift on IBM Cloud. And we also have a Red Hat offering that gives you a choice of cloud provider, and that is Red Hat OpenShift Dedicated, so you can choose between running it on GCP or AWS. Next we'll go through some of the upcoming items relating specifically to these offerings. So next slide, please. So here is a familiar theme that we've been going through.
So one of the things we're looking at is giving our users a unified experience in how they work with their managed OpenShift clusters. This gives them one single location where they can deploy, manage, and delete their clusters, rather than having disparate areas depending on the service they're using. This will let them work from one location rather than having to keep track of where their clusters are, really enhancing that hybrid experience: allowing them to go to OpenShift Cluster Manager to deploy or modify their cluster in any way. Currently, OpenShift Dedicated is available from there, we're very close to getting ROSA available through it as well, and hopefully in the not-too-distant future we'll get ARO there also. I am happy to announce that with ARO, just this week we've enabled a UI experience through the Azure console as well. So as we go through this, we're making slow but steady strides in further enhancing our customers' experience with managed OpenShift. Talking about security, right? Everyone is interested in this regard. Here we're making further strides in achieving compliance with industry-leading certifications. One of the big ones ahead is HIPAA, which we're working towards. PCI compliance we've actually already achieved. And we're also working towards other government certifications such as FedRAMP High; we are targeting this for the earlier half of 2022. Really, this gives our customers more flexibility in the kinds of workloads we can accept, and it still feeds into that hybrid cloud model, whereby they can come to us as one location for OpenShift for their workloads, regardless of what those workloads become.
And then platform consistency, right? This is more of a mentality our team has taken on, and it should really be that if it works on OpenShift, on OCP, then it should work on managed OpenShift as well. Obviously there might be certain things where that may not be feasible, but the approach is that it should, at least initially, be treated as a bug if it works on OCP but does not work on managed OpenShift. So really just ensuring further that OpenShift is OpenShift: it's going to work like you expect it to work regardless of how you opt to consume it. Next slide, please. Furthermore, we're working on expanding the choices our customers have, again expanding their flexibility to choose whatever best fits their use cases. So we're expanding the options they have in terms of the kinds of worker nodes they're going to be using: things like Spot Instances, which are actually already available, and we're working on things like GPU instances, AMD instances, Wavelength, and Dedicated Instances as well. We're really offering those to meet customers where they are with the kinds of workloads they want to run. Further on the security front, we've actually already initiated this too: supporting bring-your-own-key KMS for cluster encryption. This is something we've heard repeatedly from our customers that they wanted to see. Before, the cluster was encrypted as well, but we would create the key for you; now, at cluster creation time, you're able to specify your own KMS key, and that will be used during the creation of the cluster. And then lastly, on platform efficiency, one of the things we keep hearing is that our customers just want to pay for what they use.
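As a back-of-the-envelope sketch of what paying only for what you use can mean once idle clusters can be paused (the hourly rate below is invented for illustration, not an actual price):

```python
HOURS_PER_WEEK = 168

def weekly_cost(hourly_rate_usd, paused_hours=0):
    """Cluster cost for one week, billing only the hours it is not paused."""
    return (HOURS_PER_WEEK - paused_hours) * hourly_rate_usd

always_on = weekly_cost(3.00)             # 504.0 USD
weekend_paused = weekly_cost(3.00, 48)    # 360.0 USD, roughly 29% saved
print(always_on, weekend_paused)
```

Because pausing is planned to cover both the infrastructure and the OpenShift subscription costs, the saving applies to the whole hourly rate, not just the cloud portion.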
And going along with this, sometimes our customers might not be using their clusters, maybe over a weekend or an extended weekend like we just had here in the US. They may not be using their cluster, so they want the ability to pause or hibernate it. This is something that will be coming out, hopefully in the near term, whereby customers will be able to pause their clusters, and that will pause both the infrastructure costs as well as the OpenShift subscription costs on top of that. This way they'll truly pay only for their consumption as they are using it. Next slide, please. What I'd like to call attention to: those are just some of the things we went through, and there are more details down in the appendix, but I'd actually invite you to come take a look at the roadmap that we have. We have this published publicly on GitHub; there are three relevant links at the top. So I encourage you to come take a look, see what new features are being added, how they're progressing through the pipeline, and watch what has already gone GA. Next slide, please. And along those lines, if you have other ideas that you'd like to see that you haven't seen yet, please get in touch with us. You can use the same mechanism to open up an RFE, a request for enhancement, and maybe we'll get in touch with you to find out more information, and maybe you'll see it on a future roadmap. So that is it, and I'll turn it over to Karina and Gaurav to talk to us about platform and developer tools. Hi, thank you. I'll be joining with my colleague Karina to talk about this session. Core platform and developer tools: these are essentially the fundamental features and capabilities that make the whole platform faster, better, more secure, and easier to use. Next slide.
Installation: with each release we try to integrate OpenShift with more cloud providers so that you can deploy OpenShift on the provider of your choice. In the near future we'll be integrating with Alibaba Cloud, Azure Stack, IBM Cloud, and Nutanix. We're looking forward to simplified onboarding of the installation by giving you bootable media so that you can boot your cluster zero as soon as possible, or even better, you can have OpenShift installed at the factory and shipped to your data center. We're also trying to mitigate risk during upgrades. For example, starting with 4.10, upgrades will require only a single worker reboot. Zone awareness during upgrades means that if you have OpenShift deployed across different fault domains, then the upgrade will complete in one fault domain before moving on to another. Targeted upgrade blocking means that if a release has critical bugs pertaining to the particular environment you're using, we give you the flexibility and freedom to skip those releases. Next slide: compute. Going forward, we're going to support ARM in OpenShift. When you deploy OpenShift on a cloud provider of your choice, we'll give you the flexibility to use their cloud-native services like KMS, DNS, and load balancers, and we'll provide improved lifecycle management of certificates through a certificate manager. To enhance the experience and decrease operational cost, we're looking forward to providing a self-driving control plane with automatic scaling and automated backups. Previously, RHCOS was a black box; now we'll provide you the capability to customize RHCOS based on your business needs. Next slide, please. Operators: an Operator is an easy way to install your application in OpenShift. While installing OpenShift itself, we will provide you the capability to skip certain operators that you don't want installed.
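As a purely hypothetical sketch of what opting out of optional operators at install time could look like (the field names and values below are invented for illustration, not a final API), one way to think of it is an allow-list in the install configuration:

```python
# Hypothetical install-config fragment, modeled as a plain Python dict.
install_config = {
    "metadata": {"name": "demo-cluster"},
    "optional_operators": {
        "baseline": "None",  # assumption: start with no optional operators
        "enabled": ["marketplace", "openshift-samples"],  # opt back in by name
    },
}

enabled = set(install_config["optional_operators"]["enabled"])
print("samples operator enabled:", "openshift-samples" in enabled)  # True
print("baremetal operator enabled:", "baremetal" in enabled)        # False
```

The benefit is a smaller footprint and fewer components to patch on clusters that never needed those operators in the first place.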
We'll provide functionality for automatic failure recovery of those operators. Specialized schedulers: we'll provide an easy way to install new schedulers on top of OpenShift so that you can deploy AI/ML or HPC types of workloads on OpenShift. For customers who want to deploy OpenShift in an air-gapped environment, we provide functionality called disconnected installation. With disconnected, you will have easy oc-mirror functionality to install all of OpenShift in that environment. Next. Advanced host networking in OpenShift on bare metal: we'll provide functionality like bonds, VLANs, and static IPs. In fact, DHCP is no longer a requirement; you can use a static IP and stand up your OpenShift on bare metal. Hybrid clusters: now you can run VMs and bare metal next to each other. Take a scenario where you're running the control plane on VMs and the worker nodes on bare metal. We'll provide a faster booting mechanism, such as external bootable media, which will help you get your cluster zero up and running in no time. Next slide, please. Kata Containers: with the GA of Kata Containers, we'll provide health metrics functionality, and we'll provide Node Feature Discovery so that you'll know, before even installing Kata Containers, whether the environment is suitable for them. We'll provide integration with ACS so that you can have tighter runtime control, and we'll provide integration with SR-IOV and DPDK so that you can run applications with high-throughput, low-latency types of workloads in secure containers. Next slide. Windows updates: going forward, we'll be moving away from Docker on Windows nodes and migrating towards containerd, and we'll be using CSI for storage. We'll provide health management of Windows nodes and provide self-healing capability for Windows nodes. Next slide. And the next presenter, Karina.
Thanks, Gaurav. All right. Further rounding out that hybrid and multi-cloud experience are key enhancements to the user experience. Soon you'll see the OpenShift Platform Plus unified console experience come together. First will be the integration of OpenShift Container Platform and Advanced Cluster Management into a unified console experience, which provides managed-cluster intelligence for your entire fleet. What that means is you'll easily be able to switch contexts between your fleet view and a single cluster, and it will drive more users into your multi-cluster management hub. Next, we will integrate Advanced Cluster Security, Red Hat Quay, log management, and more into this unified console experience. Our goal is to provide deeper insight about your fleet in this new unified console experience. We also have the advent of dynamic plugins: not only will we create a rich experience for your users and our users, but soon customers and partners will be able to build their own experiences into the console. Some examples of the dynamic plugins you'll be able to use are the multi-cluster overview, cluster inventory, cluster creation wizard, and a cluster selector. Next slide, please. OpenShift GitOps: we see small and large customers increasingly making use of Git workflows for declaratively driving their cluster and application operations. Compliance is also gaining attention from customers due to the challenges they face running multiple clusters across multiple clouds. OpenShift GitOps enables customers to get started with GitOps workflows, configure their clusters, and deliver their applications declaratively.
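The declarative idea at the heart of GitOps can be sketched in a few lines of Python: compare the desired state held in Git with the live cluster state and derive the actions needed to converge. The resource names and states below are invented, and in practice Argo CD performs this reconciliation for you; this is only a sketch of the concept:

```python
def plan(desired, live):
    """Compute the create/update/delete actions that converge live onto desired."""
    return {
        "create": sorted(k for k in desired if k not in live),
        "update": sorted(k for k in desired if k in live and desired[k] != live[k]),
        "delete": sorted(k for k in live if k not in desired),
    }

desired = {"deploy/web": {"replicas": 3}, "svc/web": {"port": 80}}   # from Git
live = {"deploy/web": {"replicas": 2}, "cm/stale": {"data": "old"}}  # on cluster
print(plan(desired, live))
# {'create': ['svc/web'], 'update': ['deploy/web'], 'delete': ['cm/stale']}
```

Because the plan is derived rather than hand-written, the same loop also serves compliance: any drift from the Git-declared state is detected and can be reported or reverted.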
And as your requirements grow, Advanced Cluster Management, Advanced Cluster Security, and Ansible provide a solid foundation for extending your GitOps workflows into a wide variety of use cases, including supply chain security, edge deployments, cluster lifecycle management, compliance and policy management, as well as AI/ML workloads through MLOps. Next slide, please. OpenShift CI/CD and GitOps: we're working towards a consistent GitOps-based workflow to not only deliver applications, but also manage and drive your CI pipelines directly from Git. This also includes additional capabilities, such as approval processes and pipeline concurrency control, driven directly from the Git repo. OpenShift GitOps also focuses on enhancing GitOps workflows for customers who are using Helm charts to deploy their applications; it will simplify the bootstrapping and getting-started experience for GitOps workflows with Argo CD. Also, talking about security: supply chain security in application delivery is such a hot topic right now, and rightfully so. It is a top-of-mind challenge for most customers, and it is a key area of focus for OpenShift Pipelines and OpenShift GitOps. You'll see us enable verifiable builds by signing and verifying your Tekton pipelines, and that will be expanded to image signing, which will be incrementally introduced in the upcoming quarters. You'll also see integrations with HashiCorp Vault secret management, as well as more guidance on other secret managers with OpenShift GitOps. Next slide, please. OpenShift Serverless: to enhance the OpenShift Serverless deployment platform, it's being further integrated into the OpenShift user experience. With Serverless functions, which we've talked about a lot in previous presentations, you'll get a consistent, simplified programming model for non-traditional developers, such as data scientists and content developers.
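The function programming model mentioned here can be sketched as a plain handler. This minimal Python example (the event shape and signature are illustrative, not the exact OpenShift Serverless Functions API) shows why the model suits non-traditional developers: there is no server, routing, or scaling code, just the logic itself:

```python
def handle(event: dict) -> dict:
    """A stateless function: takes a CloudEvent-like dict, returns a result.
    The platform handles invocation, scaling (including to zero), and routing."""
    name = event.get("data", {}).get("name", "world")
    return {"greeting": f"Hello, {name}!"}

print(handle({"data": {"name": "data scientist"}}))
# {'greeting': 'Hello, data scientist!'}
```

A data scientist can write only `handle` and let the platform turn it into a deployed, autoscaled service.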
And in our security focus area, Serverless will be providing end-to-end encryption for internal and external services, as well as support for multi-tenancy. OpenShift Serverless continues to drive towards a much more consistent platform experience by offering a way to deploy stateless workloads and/or managed cloud offerings, which you heard about earlier, as well as providing an application-centric foundation for the centralized hybrid and multi-cloud experience that you have also heard about today. Next slide, please. All right, OpenShift Service Mesh provides a well-integrated, out-of-the-box service mesh that is installed and upgraded via an Operator. You'll see observability and visualizations with Kiali. It will automate your network and ingress configuration, and it integrates with 3scale for API management. Also, zero-trust networking policies for enhancing your security: this allows for the creation of traffic policies based on service identities rather than traditional IP addresses or ports, which greatly reduces your potential risk exposure. Again, a more consistent user experience across the platform. Next slide, please. OpenShift Virtualization: as virtual machines in OpenShift inherit all of the OpenShift features, we're working to enhance that experience by managing your VMs with even more detailed statistics and aggregated views of your VMs, and also pulling in data protection by integrating with the OpenShift API for Data Protection and disaster recovery, all of which is going to be integrated into ACM, Advanced Cluster Management. Security everywhere: integrations with the Compliance Operator as well as Advanced Cluster Security, so you'll see even tighter integrations for OpenShift Virtualization across OpenShift Platform Plus and the entire platform. And then also, for platform consistency, you'll see the ability to run the same workloads in your public cloud.
You can run your VMs in OpenShift as well as on AWS bare-metal instances. That's currently Tech Preview, so you'll see that go GA, and we're collaborating with other cloud vendors. Next slide, please. Migration Toolkit: the previous slide was OpenShift Virtualization, and you can use the migration toolkits to help you migrate your VMs, so keep that in mind. You'll see lots of key enhancements for the Migration Toolkit for Applications. Among them is migration guidance: you'll see even further support for bringing your legacy applications in, with guidance on how to do that. The goal of the Migration Toolkit for Applications is to become your ultimate open source toolkit to safely migrate and modernize your application portfolio, bringing in your legacy applications. You'll also be able to gather more insight as you architect the migration of your old applications into OpenShift. There are so many great things; look at the Konveyor Tackle project upstream to see what is coming down the pipe. Next slide, please. And also the Migration Toolkit for Containers: there are still customers on OpenShift Container Platform 3, and migrating from 3 to 4 has never been easier. So look at the Migration Toolkit for Containers if you're still on 3; it continues to get simpler and easier. Cloud migrations: there is increased demand for migrations to ARO and ROSA, and these will be fully tested and supported by the Migration Toolkit for Containers, regardless of whether you're migrating from OpenShift Container Platform 3 or 4. Also, storage in-place migration: this is key for a lot of people. It will help you migrate your existing storage into OpenShift Data Foundation with minimal disruption to your applications. Next slide, please. Thank you. These slides will be posted publicly, and the appendix contains more detailed roadmap information, as Tushar mentioned, for all the topics and areas we covered today.
And thank you so much for joining us for this quarterly briefing from OpenShift Product Management.