All right, hey folks, welcome to another session of our first quarter roadmap for OpenShift. I'm Rob Somsky, part of the product management team. We're super excited to have you here today, and we're going to cover a lot of what's next for the platform. As a reminder, this presentation is our "what's next": a look ahead as far as we can see into the future. The content is directionally correct, but plans do change, so keep that in mind as you're watching; still, this is a good summary of where we're going. When these features make it into a specific OpenShift release, we'll tell you about it shortly before that release in a "what's new" session, which covers a specific version of OpenShift. Our next one will be for OpenShift 4.8, so keep an eye on the streaming schedule for that session. Also coming up really soon is Red Hat Summit, where we've got a bunch of great announcements that you're not going to hear about here; they're only at that event, so please sign up. It's a virtual event, we're really excited about it, and we'd love for you to join us.

Today I'm joined by a few of my colleagues from the product management team, and we're going to talk you through a bunch of different features in OpenShift. Mark, Daniel, Maria, and Jamie are going to break down some of the pillars that we have for the platform itself and where we're going for 2021. As a reminder, OpenShift is everything you need to run a hybrid cloud. This is what our stack diagram looks like, from the Kubernetes layer up to platform services that help you run your apps and development tools that help you create them. We're going to cover what's next in all of these areas today. If you look really closely, you might see that there's actually a new box on the screen here, right under multi-cluster management: our new Advanced Cluster Security. This is based on our acquisition of StackRox, which closed recently, so we're super excited to welcome the StackRox team to Red Hat. We're still working on the plans for ACS, so you're not going to hear much about it today, but know that it will be coming in later sessions. We're really excited about it; it's going to be awesome. Lastly, we're going to cover some of the exciting work happening in our multi-cluster management sphere, allowing you to seamlessly scale from one cluster to possibly hundreds of clusters, with all the integrated networking and management that you expect out of the platform. It's truly a game changer for hybrid cloud, so with that, let's dig into it.

Here's a quick summary of our key investments for 2021. They're spread across four categories, and we're going to break them down further. For the core platform, we're looking to formalize a practice that some of our customers are already doing today, which is providing OpenShift clusters as a service. This takes new form factors, like smaller footprints, smaller clusters, and more compact control planes, and gives those customers management tools to easily run the hundreds or thousands of clusters that make their internal customers successful. Also in the core, we're looking to augment our security tools to help prevent supply chain attacks; you've probably heard a little bit about these in the news. Customers are already using OpenShift to build and ship their software, and they're using our tools to build containers, so we're going to tie all of these together into the best possible protection for that supply chain.
In the hybrid pillar, we're investing in multi-cluster networking with Submariner, and in features that help you push down configuration, applications, and policy onto all the clusters that you have under management. In our Kubernetes Native Infrastructure pillar, it's all about bringing new capabilities to the unique features of bare metal and edge form factors. We're investing in tools that help VMs, containers, and serverless experiences blend together on one site. If you've got a VM appliance as a dependency, that shouldn't block you from modernizing a different part of your application; if you want to bring in some serverless components, that's totally possible in our Kubernetes Native Infrastructure. And last, in our developer and platform tools, we'll be GA'ing three services that we've been testing for a while: our v2 version of OpenShift Builds, our new pipelines based on Tekton, and OpenShift GitOps powered by Argo CD. The last item in this category is a full functions-as-a-service experience built on OpenShift Serverless, to get that Lambda-like experience in a true hybrid fashion, anywhere you want to run it.

As you can see, we've got a lot going on, and we're going to dig into more specifics. There's a ton in flight, and here's a quick overview of all of it. Obviously I'm not going to walk through every single feature on this larger slide, but feel free to review it and pause your video if you want. These slides will also be up on openshift.com in the Learn section under "What's new and what's next," so you can take a look at them at your leisure. All right. As you can see, we've got a ton going on in our managed services, we're building on the platform, and we're introducing new application and development tools. So let's dig into that right now. I'm going to hand it off to Mark, who is going to get into the details of our core pillar.

Great, thanks, Rob. Yep, I am pretty excited about all the things we have coming down the pipeline. Let's start by talking about where we're going in the product development of OpenShift's core subsystems. Next slide, please. Sorry, the slides may be off a bit; if you could go back to the first slide in the section. So we've split this work into three categories. First, we are improving the user and administrator experience of our core platform. This includes the way clusters are installed, operated, and upgraded across the board, so you will see new tooling for automation of common tasks like installation and lifecycle management, and tooling for analysis and presentation of complex cluster information, like networking security and traffic flows. Second, we're taking workload management to the next level by providing the kinds of advanced scheduling that could only be realized as a result of our advancements in the underlying operating system integrations and in node feature discovery and management tooling. And third, we're building on our existing security model, ensuring that OpenShift is secure right out of the box and provides zero-trust implementations at multiple layers. Next slide, please.

If we conceptualize workload scheduling as a stacked layer cake and put highly specialized customer workloads like genome sequencing, big data, AI/ML, and self-driving cars at the top of that cake, OpenShift provides application tooling to manage those workloads in single- or multi-cluster environments and allows customers to define their SLA quotas and priorities for those jobs.
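To make that last point concrete, here's a minimal sketch of how a team might express job priorities and quotas with stock Kubernetes primitives. The names (batch-low, team-a) and the GPU resource are illustrative placeholders, not anything specific to the roadmap:

```yaml
# A low-priority class for batch/AIML jobs (placeholder name and value).
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: batch-low
value: 1000
globalDefault: false
description: "Lower-priority batch and AI/ML training jobs"
---
# Cap what low-priority jobs in this namespace can request in aggregate.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-batch-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "64"
    requests.memory: 256Gi
    requests.nvidia.com/gpu: "4"      # example extended resource
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["batch-low"]
```

A scope-selected quota like this caps what lower-priority batch jobs in a namespace can consume, which is the kind of SLA and priority control the layer cake above is describing.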
As we move down the layers to the Open Data Hub, OpenShift provides workflows and tools to manage and run these specialized applications, and the topology-aware scheduler helps identify the best places within the cluster to run the specialized workloads. OpenShift further addresses specialized workloads with scheduler plugins, which enable job scheduling based on an application's requirements; a common example might be using gang scheduling for batch jobs. Next slide, please.

As we evolve our security model, there is a common single theme, and that is zero trust. In a cloud-native world, DevSecOps is critical. Our security features and capabilities must be available in every phase of the build, deploy, run application lifecycle, and we're providing advanced options to address every customer's threat environment. Among many improvements to the build phase, we're adding KubeLinter with a GitHub Action. KubeLinter analyzes Kubernetes YAML files and Helm charts and checks them against a variety of best practices, with a focus on production readiness and security. Also in the build phase will be rootless builds. Rootless containers are containers that can be created, run, and managed by users without needing any administrator rights; in this way you can run a containerized process just as any other process, without needing to escalate user privileges. In the deployment phase, we continue to work with DISA to deliver a Security Technical Implementation Guide, or STIG, for OpenShift. And we're adding critical protections like attestation of integrity: we're integrating the Keylime project, focusing first on RHEL CoreOS attestation and then workload attestation in a second phase. We're also going to continue to support security context constraints and define how they will coexist with the replacements for pod security policies (PSPs) in the future. Next slide, please.

There are three key use cases guiding the future path of our storage. The first is secure storage for all. It's a multi-cloud, multi-platform world, and customers want to be able to place storage on any platform they use, ideally accessed with minimum delay and securely. We provide these capabilities today with our CSI driver suite and build upon that with the help of OCS, but we continue to evolve the story with several new features, including OCS support for new platforms such as OpenShift Dedicated and ROSA, the ability to protect your data with WAN DR functionality, and a complete encryption solution for data at rest and in transit. The second key use case is any storage, anytime; OpenShift provides choice. So we continue to focus on CSI drivers and CSI migration. With CSI drivers, you'll be able to benefit from operators delivering updates to your drivers when and as they're available, without having to wait for the next OpenShift release. This means faster realization of new features from a very active upstream community. Finally, stay informed. Admins need a full understanding of their storage: are my devices filling up, how many IOPS am I getting, is there a noisy application stealing all the bandwidth from the others? Some of this information is available today, but generally only at the node level.
Going forward, we're going to look at how we can expose this better and provide more useful, more digestible data, not only with OCS, which already has some pretty good monitoring, but generally across the PVs active in the OpenShift cluster. Next slide, please.

OpenShift today has more than one way to ingress traffic into the cluster. Customers can choose routes or Kubernetes ingress, two different technologies with different feature sets. Starting with a technical preview in 4.8, we hope to begin the process of unifying ingress for OpenShift, starting with Gateway API. Gateway API has undergone several name changes; you may have heard it called ingress v2 or the Service APIs. Gateway API is a set of APIs for deploying Layer 4 and Layer 7 routing in Kubernetes that is expressive, extensible, and role-oriented across the different API interfaces. Gateway API can then be used to unify the deployment of Layer 4 and Layer 7 routing and provide a singular experience for all traffic workflows from outside the cluster to the application workload, whether that's web traffic, service mesh Envoy traffic, an audit capture of traffic, or high-performance throughput traffic where you want to minimize disruptions in the stack; it will eventually handle all of those (a rough example of the API's shape follows below). Gateway API is still in its infancy, but we'll have a technical preview of it available in 4.8 and then build upon that to provide it on more platforms and integrate it with new tooling such as MetalLB for load balancing on bare metal deployments. Gateway API will eventually make deployment to multiple clouds as simple as replacing the cloud provider or adding a new cloud provider to your existing config. Along with Gateway API, the upstream community chose Contour as the ingress controller for all of its testing, and a lot of momentum has built behind Contour. Contour provides the control plane for Envoy and supports dynamic configuration updates and ingress configuration delegation, among many other things. We will productize Contour in lockstep with Gateway API and support it alongside HAProxy. Separately, we'll support MetalLB for Layer 2 initially in 4.9 and then follow up with MetalLB BGP support in 4.10.

Within the cluster itself, there are many new features in the pipeline, but some key ones are listed here. For example, OVN: we've supported it since 4.6, and it becomes the default networking solution in 4.9. We fully support IPv6, single and dual stack, end-to-end in OpenShift at 4.8. We're going to support hardware offload of OVS, initially to Mellanox ConnectX-5 NICs, but that work will directly help us enable additional NICs and also create the framework for offloading other compute-intensive workloads like IPsec encryption. We're creating closer alignment to host networking, for example by providing multi-NIC traffic flow support. We're adding BGP support to OpenShift for advertising Kubernetes services, along with overall increased networking observability for better understanding and debugging of your networking workloads. We're looking at future support of eBPF for greater precision of traffic control. And finally, we'll provide a no-overlay option for environments that prefer to use the underlay networking of the cluster hosts themselves.
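Here is a minimal sketch of the role-oriented model Gateway API uses: a Gateway owned by the cluster or network admin, and an HTTPRoute owned by the application team. This is purely illustrative; the API was still alpha at the time, the group and field names have shifted between versions, and names like example-gateway, demo, and demo.example.com are placeholders.

```yaml
# Infrastructure persona: defines where and how traffic enters the cluster.
apiVersion: networking.x-k8s.io/v1alpha1
kind: Gateway
metadata:
  name: example-gateway            # placeholder name
spec:
  gatewayClassName: example-class  # provided by the gateway implementation
  listeners:
  - protocol: HTTP
    port: 80
    routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          app: demo                # which routes this gateway will pick up
---
# Application persona: defines how requests for a hostname reach a Service.
apiVersion: networking.x-k8s.io/v1alpha1
kind: HTTPRoute
metadata:
  name: demo-route
  labels:
    app: demo
spec:
  hostnames:
  - demo.example.com
  rules:
  - forwardTo:
    - serviceName: demo            # backing Kubernetes Service (placeholder)
      port: 8080
```

The split of roles is the point: the platform team owns the Gateway and GatewayClass, while application teams attach routes to it, which is what lets one API eventually cover web traffic, mesh ingress, and high-throughput cases.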
Next slide, please. There are four key takeaways for our installation and update future. The first is that we're continuing to enable OpenShift deployments on even more platforms, including Alibaba, AWS Outposts, Azure Stack Hub, Equinix Metal, IBM Public Cloud, and Microsoft Hyper-V. We're also expanding our existing provider support to include more regions and cloud instance types. The second key takeaway is that while RHEL CoreOS will still be required for the control plane, we're going to be introducing RHEL 8 server support for compute and infrastructure nodes. This will enable customers to migrate from RHEL 7 to RHEL 8 for hosting their application workloads. Third, today we have several installation methods, like user-provisioned infrastructure, installer-provisioned infrastructure, and the Assisted Installer. Each method is intended to address different deployment scenarios, but one problem we're facing is that having too many options for users to choose from can be confusing. The other problem we face is how to provide more agile support and faster integration of new providers using the installer-provisioned infrastructure approach. That process is very involved and often takes multiple releases per provider, so we need a more scalable way to integrate new providers without compromising the installation experience or overall reliability. Our goal in the year to come is to unify our overall installation experience, starting with the installer core, by making provider integration easier and more modular; we'll follow that up by improving the cluster lifecycle experience along with our fleet management story. At a high level, that effort will involve introducing the OpenShift Hive operator, which will provide a cluster provisioning API upon which we can build a new central host management service, along with improving the cluster provisioning experience within OpenShift Cluster Manager (OCM) and ACM. And finally, for EUS-to-EUS upgrades, where EUS refers to our Extended Update Support releases, we're working to improve the experience for customers while minimizing workload disruption. While intermediate versions can't be skipped for control plane upgrades, we are looking into allowing certain releases to be skipped for compute nodes. This means control plane upgrades will still be done sequentially between EUS releases, but for some intermediate versions it may be possible to skip the upgrade for compute nodes when progressing to the next EUS release. This process will require pausing the machine config pool at specific times during the upgrade, which allows the compute-node upgrade to essentially be skipped when moving between intermediate releases (a small sketch of that follows at the end of this section). Next slide, please.

And finally, in the future, Windows Containers for OpenShift will be available on more platforms across private and public cloud, as well as edge deployments. The installation experience for these platforms will include a bring-your-own-host model. This is important to customers, in particular Windows customers, because they tend to treat their instances more as pets than cattle. Windows customers generally want to be able to reuse these pet Windows instances as drop-in OpenShift worker nodes, so that they can run Windows workloads side by side with comparable Linux workloads and realize all the same management benefits provided by the OpenShift Container Platform.
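Circling back to the EUS-to-EUS note above, here is a minimal sketch of the machine config pool pause that makes the compute-node skip possible. The relevant field is spec.paused on the worker MachineConfigPool; in practice you would set it with an oc patch rather than applying a full manifest, and you would unpause once the control plane has reached the target EUS release.

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  paused: true   # while paused, worker nodes are not drained or rebooted as the
                 # control plane steps through the intermediate releases;
                 # set back to false so they roll straight to the EUS level
```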
Next slide, please. And over to Daniel.

Thank you, Mark. Now that we've covered the core of OpenShift, let's talk about how we accompany our customers and users on the journey to a true multi-cluster landscape. Our goal is for OpenShift to cover all sizes of deployments and seamlessly scale across all the clusters that you manage. Networking, observability, and operations across these clusters is something we are heavily investing in this year. Bits and pieces of this effort have already started, as a lot of the innovation actually happens at the cluster level; for example, hardware offloading at the SDN layer is utilized when we expand to a multi-cluster network. So let's take a look at this in more detail.

Here's what the standard set of management tools looks like when tackling a multi-cluster arena. Policy and app deployment are driven from central locations, making it easy to span cluster domains, while both north-south and east-west networking traffic is routed and secured. Observability of your entire OpenShift fleet is enabled, making it easy for developers to keep track of their apps and for cluster admins to ensure that things like compliance and security policies are uniform. A central, scalable container registry is in place as well, to provide clusters with access to software that has already been scanned for vulnerabilities before it actually hits the cluster. The end result is a standardized experience for all developers and admins across your organization. So let's take a deeper look into each of the specific features that make this possible and build up this architecture. Next slide, please.

With multi-cluster networking, we really want to enable applications to span OpenShift deployments without the need to re-architect or touch the application in any way. We plan to achieve this by productizing the CNCF Submariner project and using ACM to synchronize the network configuration across all clusters. What this does is create encrypted tunnels between the OpenShift SDNs, which gives customers routed connectivity between all the nodes of all their clusters; this is far more efficient than trying to stretch a single cluster over geographically distant locations. What customers can do with this is distribute their multi-tier or multi-component apps across several clusters: think about components that belong to the same application but, for availability and performance reasons, run on different clusters. They can still talk to each other as if they were in a single cluster. This communication path will also enable a single federated service mesh configuration that spans multiple clusters, whereas before, customers were required to run and maintain multiple independent service mesh deployments, one for each cluster. Multi-cluster networking in general makes it practical for applications to cross data centers or cloud regions without forcing the application's internal traffic through the cluster ingress or OpenShift routes, which frees up that bandwidth for actual production traffic from clients.
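To give a feel for what this looks like from the application side, here's a minimal sketch of exporting a service to the cluster set, using the multi-cluster Services API that Submariner's service discovery builds on. Treat the API group, the clusterset.local domain, and the names (nginx, demo) as illustrative; the exact mechanics depend on the Submariner version, and you'd typically drive this through subctl or ACM rather than by hand.

```yaml
# On the cluster that owns the service: mark it for export to the cluster set.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx        # must match the name of an existing Service (placeholder)
  namespace: demo
```

Once exported, pods in the other connected clusters can reach it through a cluster-set DNS name along the lines of nginx.demo.svc.clusterset.local, over the encrypted Submariner tunnels rather than through each cluster's public ingress.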
Next slide. For our single-cluster observability story, we will continue to mainly drive features that help administrators and developers understand their systems, find issues quickly, and optimize the status quo. For monitoring, we're planning to make the stack more resilient in situations of increased demand and, in general, provide better alerting quality. We're working towards supporting multi-cluster metrics aggregation scenarios by pushing metrics to a third-party metrics solution, and we also plan a multi-tenant API for configuring where exactly notifications are sent. Of course, we are also going to continue to provide a top-notch, Kubernetes-native monitoring experience inside the cluster in the OpenShift console. For logging, we will be focusing on allowing customers to search application logs in Kibana by preserving the JSON structure that makes up these log messages. Some more fundamental architectural changes are happening under the hood as well: we will be switching to a more lightweight storage engine for the log messages, an open-source project called Loki, which will reduce the cluster resource requirements for customers using OpenShift Logging. We are also working on tools to administer the flow of logs into the logging system itself, for example to keep noisy applications out of it, which might otherwise impact the ability to process logs for the rest of your applications. And finally, viewing logs will now also be possible in the context of an application from within the OpenShift console, which really alleviates the need to run complex queries in a separate UI; you just see the logs in the OpenShift console where your application runs. Next slide.

When going multi-cluster, observability becomes really critical. To that end, our Advanced Cluster Management solution, ACM, will aggregate all the data that you would normally see in our connected customer experience, Insights, for the entire fleet, in turn easing administration by providing that insights data in one place. This telemetry will be made available in the ACM hub, so you don't have to go to cloud.redhat.com separately. This brings together the benefits of ACM's centralized fleet management console with the benefits of Insights. The aggregated view provides benefits for many IT personas: think of operations and cluster admins, who will be able to resolve problems more quickly and avoid downtime in the first place, or SRE and DevOps personnel, who get better visibility into how applications are impacted and the crucial areas to actually focus on. Last but not least, cost management will also be integrated to provide cluster cost visibility at the ACM level. Next slide.

When working with more than one cluster at a time, customers quickly realize the need for automation. Increasingly, the method here is a GitOps approach paired with continuous integration, and we are addressing this need by productizing OpenShift Pipelines and the OpenShift GitOps add-on. Those will enable customers to centrally manage cluster configuration and application definitions across a multi-cluster landscape. OpenShift Pipelines, based on the upstream Tekton project, supports customers in building those pipelines in a declarative and Kubernetes-native way. This essentially yields pipeline definitions in the form of Kubernetes manifests (an example of the shape follows below), which you can store in source code management systems alongside your application definitions and cluster configuration and manage with the same GitOps methodology. One of the big focus areas in 2021 is going to be making our Advanced Cluster Security offering, as well as other security tools, directly available in those pipelines.
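As a rough illustration of what "pipelines as Kubernetes manifests" means, here is a minimal Tekton-style Pipeline that clones a repo and builds an image. It assumes the git-clone and buildah tasks that ship with the OpenShift Pipelines catalog; the task, parameter, and image names are placeholders and may differ from what your cluster provides.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-push              # placeholder name
spec:
  params:
  - name: git-url
    type: string
  workspaces:
  - name: source                    # shared volume between the tasks
  tasks:
  - name: fetch-source
    taskRef:
      name: git-clone               # assumed catalog/cluster task
      kind: ClusterTask
    params:
    - name: url
      value: $(params.git-url)
    workspaces:
    - name: output
      workspace: source
  - name: build-image
    runAfter: ["fetch-source"]
    taskRef:
      name: buildah                 # assumed catalog/cluster task
      kind: ClusterTask
    params:
    - name: IMAGE
      value: image-registry.openshift-image-registry.svc:5000/demo/app   # placeholder
    workspaces:
    - name: source
      workspace: source
```

Because this is just another manifest, it lives in Git next to the application it builds, which is exactly what makes it a natural fit for the GitOps flow described next.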
And customers who are using pipelines to also build container images can count on that process becoming more secure as well, as you heard earlier with the introduction of rootless builds. OpenShift GitOps is the central pillar that actually makes all of this work at scale across clusters. Here we are productizing the upstream Argo CD project to give customers the ability to use Git as the single source of truth for essentially everything that you would otherwise apply with oc or kubectl. Because of our operator-first approach, this includes virtually everything from cluster configuration to pipeline definitions as well as application deployments. And of course, we don't expect our customers to wire all of this up manually on their own either. We are working on a GitOps application manager CLI called kam, which will bootstrap a typical Git repository layout for common workflows, with pipelines and application definitions included right out of the box. Sometimes these repositories contain sensitive data, such as passwords or credentials for applications and databases, and to protect those we are looking to integrate secret management right into the pipeline tooling, choosing technologies that are commonly used in GitOps scenarios. Next slide.

Let's look at an example of how this could come together (a sketch of the Argo CD piece follows below). Customers will leverage ACM to add OpenShift GitOps to all managed clusters. That includes setting up all those clusters as targets in Argo CD and creating the required credentials for things like cluster access, Git repositories, and so on. From that point on, apps can be deployed and configured declaratively, triggered by a simple pull request against a Git repository. As new clusters are added, bootstrapping them in this way for GitOps is entirely automated via the ACM policy engine. Similarly, those clusters would also be configured for ACS, which allows us to shift security analysis as far left in the process of deploying and managing applications as possible.
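For reference, this is roughly what one of those Argo CD application definitions looks like once OpenShift GitOps is in place. The repo URL, paths, and namespaces are placeholders; syncPolicy.automated is what gives you the "merge a pull request and the cluster follows" behavior described above.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config                        # placeholder
  namespace: openshift-gitops                 # namespace used by the GitOps operator (assumption)
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config.git   # placeholder repo
    targetRevision: main
    path: overlays/production                 # placeholder path
  destination:
    server: https://kubernetes.default.svc    # the cluster Argo CD runs on
    namespace: openshift-config
  syncPolicy:
    automated:
      prune: true        # remove resources deleted from Git
      selfHeal: true     # revert drift introduced outside of Git
```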
Next slide. Similar to cluster and application configuration, multiple clusters also need a central source of truth for the software binaries they are eventually going to run, and Red Hat Quay is that central, globally distributed registry for OpenShift customers today. As soon as you have a central registry, it becomes a really critical and vital service, because in Kubernetes nothing works without access to a registry. Red Hat Quay is well suited and battle-tested for that scale, but some more controls are coming. In such multi-tenant environments, customers frequently leverage quotas to avoid the noisy neighbor syndrome and guarantee their service levels. To enable that at the registry level, Quay will start tracking usage of certain aspects of the registry, for instance how much storage space a tenant's images take up or how often tenants actually pull artifacts from the registry. In a first phase, this will enable customers with a service provider background to bill their internal users based on that consumption, or at the very least to provide some form of showback. In a next phase, Quay administrators will be able to use several levels of enforcement of these defined quotas to make sure that no single tenant of the registry uses up all the resources. The choices here range from alerts and notifications, which can potentially be ignored, all the way to hard denials when pulling an image, or even deleting older content when pushing a new image. Customers have the flexibility to selectively employ what makes sense here, depending on how important that tenant or that cluster really is. Next slide.

Another requirement that our customers often have is that clusters run disconnected from the internet. When we think about running infrastructure workloads like ACM, ACS, or Quay on an OpenShift infrastructure cluster to support those production clusters, this creates a catch-22: a registry needs to be up first before you can even install the infrastructure cluster in that disconnected environment. We plan to solve this with an automated way to stand up a small, single-node, purpose-built Quay registry and provide the required tooling to easily mirror all the OpenShift-related content, which is the release payload, Operator Hub content, and the sample images, into this registry in a single step. Customers in heavily regulated environments frequently also have an air gap in their network layout. In those cases, a second copy of this all-in-one Quay mirror instance can be stood up behind the air gap, and the mirror content can then be transferred via offline media. This really aims to streamline the process of initially creating a disconnected mirror for an air-gapped deployment, as well as keeping it in sync over time as you consume OpenShift updates. From there, the production clusters and infrastructure clusters can be deployed, and the latter would then run a highly available, at-scale production registry thanks to the Quay operator. Next slide.

A major concern when transitioning into a multi-cluster world is losing track of cost. Here, Red Hat Cost Management will help customers retain visibility and make users aware of their cost footprint. We are planning improvements in how we model costs, which will make it easier to represent common customer usage models and, hopefully, drive behavioral change by showing users what it actually costs to run a certain project. In Red Hat Cost Management, we will get a Cost Explorer view, which is our first installment of a real time-based view of costs on a per-project basis; you can also group this by clusters, nodes, tags on infrastructure resources, or labels on OpenShift objects. Beyond this, we are also planning additional features to serve the business, providing actual forecasting and budgeting at all levels of the infrastructure. Speaking of infrastructure, provider support is also being broadened with the inclusion of IBM Cloud and GCP. Next slide.

So to recap: to support the move to a multi-cluster architecture, we are going to provide all the tools and features out of OpenShift. This is built up from the ground, starting with multi-cluster networking, to single-cluster and multi-cluster observability via OpenShift Logging and ACM, all the way to a scalable approach to application and cluster configuration management via ACM, OpenShift GitOps, and OpenShift Pipelines. Those clusters can be distributed, if need be, even geographically, and you still get a central source of truth for configuration in Git and a central source of distribution for software content via container images in Quay, without losing track of cluster expenditure, thanks to Red Hat Cost Management.
We'll now hand over to Maria for Kubernetes-native infrastructure.

Thank you, I'm Maria Ratcham. Let's switch gears and focus on infrastructure. KNI brings the simplicity and agility of the public cloud to on-prem environments, keeping a consistent OpenShift experience across both footprints. Among other things, KNI addresses container adoption growth while still running virtual machines alongside containers, by running OpenShift clusters on bare-metal nodes. Let's continue talking about KNI on the next slide.

Coming up later this year, centralized host management will allow you to keep your infrastructure inventory centralized and request and deploy clusters from there, while the bare-metal hosts also stay centrally managed. For hardware-based pod scheduling, we'll be able to expose hardware telemetry, such as internal temperature, fan speed, power supply failure, and so on, so that users, or in some cases our partners, can create their own integrations to have that data affect workload scheduling; that's coming up later this year and then in OpenShift 4.9. There's also the Assisted Installer and Metal³ integration, basically providing bare-metal nodes for clusters deployed with the Assisted Installer. And then advanced host networking configuration will provide a declarative configuration for setting VLANs, bonds, and static IP addresses at installation time and on day two, leveraging Kubernetes NMState (a small example follows at the end of this section). Moving on, let's continue with virtualization on the next slide.

Since OpenShift Virtualization became GA in OpenShift 4.5 last year, we have continued to deliver important features that make it easier for admins and developers to use VMs in a natural way in Kubernetes. For administrators configuring VM storage and networking, there are good default settings that make it easier to create standard Windows and RHEL VMs. For developers, it should not matter that some data or service is running in a VM; it should be just as easy to use as if it were in containers. Coming up at the end of the year, we will be more closely aligned with the OpenShift API for Data Protection, which is important for disaster recovery and business continuity: it protects persistent hybrid applications that consist of both VM and container workloads in the same namespace. Next, continuing on virtualization: administrators know that there are a myriad of options when configuring storage and networking. Some of the customers we've worked closely with in aerospace, automotive, and financial services have really seen the benefits of reduced complexity from having one single platform to run both virtualized and container workloads. Even though AI and machine learning workloads lean heavily toward containers, there are still training workloads that have not yet been migrated from their virtualized legacy form. You can accelerate these workloads with direct GPU access, and in future releases the admin will also be able to slice up valuable GPU resources with vGPU capabilities.
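Picking up the host networking point from above, here's a minimal sketch of the kind of declarative node network configuration Kubernetes NMState enables. The API version, interface names, VLAN ID, and addresses are all placeholders and will vary by environment and operator version:

```yaml
apiVersion: nmstate.io/v1beta1              # assumed version; check your NMState operator
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vlan100-static-ip                   # placeholder policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""      # apply to worker nodes
  desiredState:
    interfaces:
    - name: ens4.100                        # VLAN 100 on top of the ens4 NIC (placeholder)
      type: vlan
      state: up
      vlan:
        base-iface: ens4
        id: 100
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.0.2.10                    # example static address
          prefix-length: 24
```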
Next, we'll talk about what's in it for developers. Coming up on the next slide: for developers working on modernization of legacy applications and workloads, VMs can now utilize all the power of the OpenShift platform to be more flexible while different parts of the application are still being transformed. These hybrid applications can also leverage the CI tooling that you get as part of OpenShift, improving build and test quality and security by ensuring that your VMs stay up to date with the latest releases of the guest operating system.

Next, we'll talk about MTV, the Migration Toolkit for Virtualization. We're introducing the first release of the Migration Toolkit for Virtualization as a beta. This is an easy-to-use tool to mass-migrate VMs from VMware vSphere 6.5 and later to OpenShift Virtualization 2.6 and later. It performs a warm migration, which reduces VM downtime by copying data while the VM is still running and then copying the delta once the VM is powered down. There's also a set of pre-migration checks that search for possible issues that could make the migration process difficult or non-viable, even before launching the migration. Coming up at the end of the year, we'll add Red Hat Virtualization and Red Hat OpenStack as sources, with OpenShift Virtualization as the destination.

All right, let's talk about containers. OpenShift sandboxed containers is expected to be tech preview in 4.8. It brings the OpenShift sandboxed containers operator, which encapsulates all the tasks required to install and lifecycle Kata containers on an OpenShift cluster. That means installing the Kata binaries, configuring CRI-O with the runtime class handlers, installing QEMU as lightweight virtualization with only the necessary components as an extension, and then creating the required runtime class; all of that you get with the operator. The operator exposes CRDs that initially allow selecting the nodes where the Kata runtime will be available, and potentially other Kata parameters in the cluster. Then using Kata as a runtime is just a matter of changing the runtime class name on the pod spec, as shown in the example below, or you can choose not to use Kata; the operator just makes Kata available for you to use.

Next, let's talk about single node OpenShift. It looks like OpenShift, everything on a single node. We will provide an OpenShift that does not have a dependency on a central control plane; this is not a remote worker node, this is OpenShift on a single node. It provides a consistent app platform from the data center to the edge and fits within the constrained physical footprint of a single, but beefy, server. We're planning to release a dev preview in 4.8 with the ability to deploy a single-replica control plane topology. It will have a bootstrap-in-place installation and deployment of the edge server over L3 networking, with no need for an additional bootstrap node. In future releases we plan to include in-place upgrades as well as OLM operator compatibility and integration.

Then let's talk about zero touch provisioning. This is highly desirable in edge computing; it is used to achieve large-scale edge site deployments that are planned centrally and require minimal on-site physical tasks. For example, imagine an on-site tech scanning a QR code on a machine to trigger the full installation. At the end of that fully automated process, the workload will be fully operational. Red Hat will deliver a set of capabilities, and customers will use those capabilities along with their own specifications to achieve that zero touch workflow. Our Telco 5G team is also building a reference GitOps-based zero touch provisioning workflow as an example that you can take and adapt.
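Going back to the sandboxed containers point: the pod spec change mentioned there looks roughly like this. The runtime class name ("kata" here) and the image are placeholders, since the exact name comes from what the operator creates on your cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-workload            # placeholder
spec:
  runtimeClassName: kata              # runtime class created by the sandboxed containers operator (assumed name)
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi-minimal
    command: ["sleep", "infinity"]
```

Omit runtimeClassName and the pod runs on the default runc runtime as usual; that's what "it just makes Kata available" means in practice.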
And staying on the edge theme, let's talk about 5G. With OpenShift, we've already delivered a solid 5G core platform, starting in the OpenShift 4.5 and 4.6 timeframe. While we continue to improve that core platform, we are now also expanding those requirements to include RAN centralized unit and distributed unit footprints, starting in the first half of this year. Looking at core capabilities: IPv4, IPv6, and dual stack throughout the cluster networking, and continued optimization of the Kubernetes scheduler to provide optimal workload placement. Looking at it from the RAN use case, we talked about zero touch provisioning before this slide; that continues to be an area of focus, as does continuous optimization of latency and the single-node configuration.

Next, on OpenShift and OpenStack: over the next six to twelve months, our focus will be Telco NFV and DCN edge use cases, in addition to continuing to add more storage and SDN options for our enterprise customers. As highly requested, we will look to add support for NFV fast datapath options like SR-IOV, OVS-DPDK, and OVS hardware offload with both IPI and UPI. OpenShift CNF pods will get performance equivalent to OpenStack VNFs by mapping fast datapath interfaces, including OVS-DPDK, SR-IOV, and OVS hardware offload, as PCI devices in the VM, and by using the SR-IOV operator to connect them directly into the VMs and pods on secondary Multus interfaces. We will also continue to add support for provider networks as a primary CNI option, which is covered on the next slide.

There are several benefits to using a provider network as the primary CNI interface. Some of the key ones: pods get direct external connectivity to the physical fabric, which is recommended when there's significant external or north-south traffic. Because you're using this, you don't need Kuryr, since there's no double encapsulation, and it avoids floating IPs and NAT for external connectivity. It uses the default OpenShift SDN option for east-west traffic and load balancing, and is typically used with an external load balancer. One limitation is that the current Terraform-based installer requires admin privileges and metadata services. In the example shown here, the compute node and the VM worker are on the same network, and the top-of-rack switch is the gateway and router for this network. All traffic is sent to the top of rack for routing to its destination. East-west traffic between pod A and pod C is sent on the overlay managed by the OpenShift SDN and routed by the top-of-rack gateway. Incoming external traffic is sent to the OpenShift ingress or an external load balancer and load-balanced between pod B and pod C. And that's it for OpenShift on OpenStack; I'm passing it now to Jamie. Thank you.

Thank you, Maria. We'll now move up a level to our developer and platform services. But before diving in, I just want to talk a little bit about our customers. Each of our customers has different needs and requirements; they're all unique businesses and companies. So we, as OpenShift, the platform of platforms, need to enable our customers to easily configure, customize, and extend the platform to meet their needs. Developer and platform services focuses on removing friction from developing, configuring, and deploying applications on OpenShift. The following areas highlight how we approach that. The developer console seamlessly ties everything together and provides a user interface for all levels of developers.
Operators are our mechanism for packaging and managing add-on services to offer a cloud-like managed-services experience right inside your cluster. Helm increases our support for other packaging and deployment mechanisms. Serverless and service mesh provide easy ways for developers to deploy and run applications on Kubernetes with a pre-baked day-2 operations story; these are also popular examples of the operator extensions that we provide. DevOps and GitOps facilitate delivery and management of your applications. Finally, our developer tools empower developers to take full advantage of the OpenShift platform. Next slide, please.

So what's new for the console? The OCP console is an extensible and customizable Kubernetes UI designed to empower users at all levels, and our major areas of focus here remain consistent for each release. The console is an extendable and pluggable platform, and admins should be on the lookout for a couple of new ways to customize the developer experience. New capabilities include the ability for admins to customize the available roles in project membership, as well as the ability to hide items from the +Add page that don't necessarily align with your group's best practices; we'll let you decide what you want your developers to have access to. We always try to put developers first, so we want to get them productive more quickly; we like to meet them where they're at and tailor experiences to them, whether they're novices or experts. Finally, we put an emphasis on making Kubernetes easy. We've enhanced our extensible quick starts to offer the ability to easily copy, paste, and execute commands in the web terminal. This onboarding pattern guides customers through an interactive experience and helps reduce the time it takes to get developers up and running (a small example of a quick start definition follows below). Next slide, please.

Let's dive a little deeper into our frictionless, cohesive, and pluggable platform. This platform allows customization and extension of the OCP console: as platform capabilities grow, so does your UI. We're looking to enhance features in the following areas: our quick starts, our upcoming metrics dashboard, custom resource definition handling, custom dashboards, and the continued evolution of our dynamic plug-in framework. Soon, teams will be able to update UI code via operator releases. These dynamic plug-ins will be the ultimate enabler that allows internal and external add-on operators to create solutions on top of the OpenShift platform. Next slide, please.

Making Kubernetes easy is one of the primary goals of the console; we want to allow everyone to get up to speed quickly, and we don't want to leave you in the dark. The new Getting Started card featured in the cluster overview provides admins with cluster setup recommendations, quick starts, tools, and guides to help you get up to speed, and a similar card is available on the +Add page for developers to get up and running quickly. Our developer experience continues to focus on ease of use and efficiency. In this release, you'll find the first of our drag-and-drop experiences in the console: devs can now drag and drop their fat JARs from their desktop directly onto the topology page and get them quickly deployed on OpenShift. In the future, you'll even be able to drag and drop Helm chart archives, and potentially even connect a debugger to your deployed application, all from our web console.
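For a sense of how those extensible quick starts are defined, here's a minimal sketch of a ConsoleQuickStart custom resource. The name, text, and task are placeholders, and the exact spec fields may differ slightly between console versions:

```yaml
apiVersion: console.openshift.io/v1
kind: ConsoleQuickStart
metadata:
  name: deploy-sample-app                   # placeholder
spec:
  displayName: Deploy a sample application
  durationMinutes: 10
  description: Walk through deploying a sample app from the +Add page.
  introduction: This quick start shows the basic deploy-and-verify workflow.
  tasks:
  - title: Deploy the application
    description: |
      From the **+Add** page, choose **Container image** and deploy a sample image.
  conclusion: You deployed and verified a sample application.
```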
Next slide, please. So, moving on to the operator framework. The operator catalog in OpenShift has grown significantly over the last few OpenShift releases; we now have over 400 operators shipping out of the box with every cluster. We're putting a renewed focus on smoothing the rough edges around the administration of installed operators, in particular for more complex setups where operators may depend on each other. This becomes possible by moving to a model where operators are global objects with better control over installation, updates, and access management. A lot of our customers are employing a GitOps automation approach, especially to drive larger deployments in multi-tenant environments. Here we want to enable service providers with fleet management features that will allow the rollout of new operators with canary or blue-green-style deployments. Last but not least, we plan to help developers write mature operators more easily by providing higher-abstraction-level APIs in the Operator SDK and supporting more languages. This will leave developers with more time to focus on the management logic of the operator, rather than low-level API concerns or even having to learn a new programming language. Next slide, please.

Going a little deeper into this new Operator API: it will be a fairly noticeable change in that it will eventually replace three separate APIs that we have today to install and update operators. These were initially designed with human intervention in mind, for instance to explicitly approve an update, but we've listened to feedback from our customers who are automating every aspect of cluster configuration and looking for better support for GitOps when dealing with operators. Hence, the new API will support completely declarative installs and configuration, which will make it simpler to integrate Argo CD with OLM. At the same time, many of our customers are service providers for their internal teams, which requires explicit control over what those teams can and can't do; this is also true for operator access. The Operator API will provide first-class multi-tenant access control lists to support this, which will alleviate the need for a lot of the purpose-built automation our customers have been employing today. Finally, a very popular ask from our developers is to be able to install an operator without the need to create a ticket or ask an admin; we don't want to block our developers' velocity, exploration, and initiative where we can avoid it. So OLM will allow customers to give this freedom to cluster tenants as part of an auto-approval plugin, which can be provided on a per-cluster or per-catalog basis. Next slide, please.

So, Helm. Helm is one of the most popular package managers for Kubernetes, it's now generally available as part of OpenShift, and we're continuing to integrate its capabilities across the platform. The goal is to provide a self-service application development experience that makes the OpenShift tagline, innovation without limitation, a reality. We do this by enabling developers to use the tools they prefer to deploy their applications with minimal intervention, greatly reducing application time to market. This is particularly useful for developer-centric tooling and stateless applications that don't necessarily require the advanced management and automated day-2 operations that operators give us. The OpenShift workload ecosystem will also get stronger with a new Helm chart certification program, which will enable our partners to provide their tested and supported Helm charts on OpenShift.
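As a small illustration of the console-side Helm integration, here's roughly what adding a chart repository to the cluster looks like; the repository name and URL are placeholders, and the exact API version may vary by OpenShift release:

```yaml
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: partner-charts                   # placeholder
spec:
  name: Partner Charts                   # display name shown in the developer catalog
  connectionConfig:
    url: https://charts.example.com      # placeholder chart repository URL
```

Once a repository like this is registered, its charts show up in the developer catalog alongside the certified content, which is the self-service experience described above.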
We will continue to bring greater integrations with various developer tools and services, including odo, the Service Binding Operator, devfiles, and more. Next slide, please.

So what's next for serverless and service mesh? These are two big and fast-growing areas. Serverless and service mesh provide easier ways to develop and run applications on OpenShift with a secure, out-of-the-box day-2 story. Serverless allows you to build and run applications without requiring a deep understanding of the underlying infrastructure, while service mesh addresses common challenges with microservices such as security, observability, resilience, deployments, and more, which takes a big load off of your developers. As we move forward with these two areas, there are four themes guiding us. First, better together: as serverless and service mesh are based on the two open-source projects Knative and Istio respectively, we're working to provide an even more seamless experience between these projects and OpenShift. Today we provide seamless installation and upgrades via Operator Hub; as we move forward, we're aiming to improve our integrations with different eventing sources, GitOps workflows, API management, observability infrastructure, cluster management, and much more. Second, on the user experience side, both Knative and Istio are young and fast-moving open-source projects, and as such we aim to smooth the admin and developer experiences around both. For serverless, this includes enhancements to observability, eventing, and new ways of writing applications with serverless functions. For service mesh, this means a greater focus on operations and troubleshooting of the service mesh control plane and proxies. Third, for scaling services, there are two ends to the spectrum. On the serverless side, scaling to zero and startup performance are paramount. On the service mesh side, we recognize that we now have customers pushing the boundaries of both performance and scalability when it comes to using service mesh; we're working to establish benchmarks, and we're also looking at how we can help customers use service mesh at scale more effectively. We're also focusing on extending service mesh over multiple tenants and multiple clusters, which we'll go into a little more later. And fourth, on the security side, security is always a major focus of ours at Red Hat. Our teams perform a thorough review of all upstream features before bringing them into our products; in the case of service mesh, many of the differences between OpenShift Service Mesh and upstream Istio are a result of these reviews, and this continues to be the case as we move across clusters with federation. Serverless provides secure-by-default communications, with encryption between internal and external services as well as for both event data and the transport. Next slide, please.

OpenShift Serverless Functions is going to be coming into tech preview. This is a collection of tools that enable developers to create and run functions as Knative services on Kubernetes, which provides a simple, focused, and opinionated way to deploy applications. Functions are all about making the programming model simple to reduce complexity: users no longer have to worry about platform specifics like networking, resource consumption, sizing, and many other considerations, which can be extremely time-consuming.
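To ground the serverless point about scale-to-zero, here's a minimal sketch of a Knative Service, the kind of resource a deployed function ultimately becomes. The name, image, and autoscaling bounds are placeholders:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                                   # placeholder
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"   # allow scale to zero when idle
        autoscaling.knative.dev/maxScale: "10"  # cap scale-out
    spec:
      containers:
      - image: quay.io/example/hello:latest     # placeholder image
        env:
        - name: TARGET
          value: "OpenShift Serverless"
```

The platform handles routing, revisions, and scaling from zero up to the cap, which is the "don't worry about networking and sizing" experience described above.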
Functions make deploying applications like AI and ML much simpler, since data scientists can easily create a web server that will listen for requests; they no longer have to learn different application configurations or frameworks and only have to worry about deploying their models. Also, specific programming models can now be enforced, which can help keep entry-level programmers and data scientists from being overwhelmed by platform specifics. This provides an extra level of consistency and safety. Next slide, please.

Finally, last but certainly not least, by popular demand the upcoming release of Service Mesh will introduce the ability to federate multiple service meshes. This will allow service mesh administrators to easily manage connectivity between services located in different meshes, and indeed in different clusters. This will be our first step towards supporting multi-cluster topologies for Service Mesh, which will build upon our existing multi-tenant deployment model. That multi-tenant model provides for securely deploying and managing multiple service meshes within the same OpenShift cluster, each of which may be managed by a different team or administrator; we'll be expanding this model across multiple clusters. Users may wish to connect services in different meshes for several reasons, most notably to provide direct access between services that might be managed by different teams or different administrators in different clusters, regions, or availability zones. This is becoming increasingly common as our customers look to scale Service Mesh across their large organizations. It will also enable use cases such as failover between services, where a service is able to fall back to an instance in a different mesh if it does not have a local instance available. From our perspective, this is all part of a strategy around scaling out Service Mesh. Finally, next slide, please.

That concludes it. On behalf of myself and all my colleagues, I want to thank everyone for spending time with us. Please keep an eye out for future presentations on openshift.tv, and if you haven't already, please do sign up for the virtual Red Hat Summit in April. Thanks, everyone, for your time, and we'll see you next time.