Can everyone hear me out there? How's my speaker group, you all right? All right. Well, thank you very much for joining us this morning. My name is Jeff Brent. We've got a number of speakers with us today. We're here for What's Next in OpenShift. This is our second one of this year; we'll have another one later on this fall, and we are going to go ahead and jump right in. We have a great set of speakers with us today: Scott, Mark, Boaz, Brian, Michal, and myself are going to go through all of the content for you and show you what's next, give you some things that we want to help you get involved with in our communities, and move forward on what you can expect to see in our upcoming releases. Think of this as a preview of what you'll see from OpenShift as we go into the third and fourth quarter, what will be available GA and for your usage.

Now, we want to reset everyone in the context of Red Hat's open hybrid cloud strategy. We've been talking about this from the very, very beginning; for at least ten years we've been talking about open hybrid cloud, and really the key here for us is that it is open. So this is a direct invitation to you out there in the field, to our field teams as well as our customers on this call: reach out to us, and if you hear something exciting and interesting, come to us, get involved, and help us in these community efforts, because we are the only platform that is open as we provide the hybrid cloud platform.

As we get past that, we see a lot of challenges with hybrid cloud, and we hear this from you, our customers, on a day-to-day basis. Clouds are everywhere; clouds are ubiquitous. We've got cirrus clouds, we've got cumulonimbus clouds.
We have cumulus clouds, but all the clouds are different; cloud doesn't mean consistency. So it's difficult and error-prone to manage the cloud and to secure it; the security controls are inconsistent across these clouds, and it's overwhelming to go through. It's kind of like repainting the Golden Gate Bridge: once you think you have your configuration set, you've got to go make sure that it's still in its desired state. And analysts, and you, our customers, are telling us: we're using multiple clouds, whether that's multiple public cloud providers for various reasons, or a combination of public cloud and on-premises clouds.

And there are a lot of different reasons for deploying clusters and having this open hybrid cloud strategy. Often we have application availability as a reason why we would create more and more clusters: disaster recovery, or needing multi-region availability for some of our mission-critical workloads. Sometimes I need to get that workload out close to the data for edge deployments and whatnot, reducing the latency for the workload and distributing our cloud from our traditional data center out to the far, far edge. There are also reasons where we're deploying these multiple clusters for compliance and industry standards: maybe I need a PCI cluster, maybe I need a different cluster for different purposes, for cost reasons, or other reasons that we see our customers create more and more clouds. But then there's something we can't avoid: geopolitical data residency and guidance. I have to have a cloud, or multiple clouds, in a given country to support the data, and maybe there's only one cloud provider
that's in that country. So there's a lot of reasons for us to create clouds, and our job here is to make it easy.

Start at the foundation: at the base, containers are our Linux, able to be deployed across physical, virtual, private cloud, public cloud, and edge. That's our foundation for the open hybrid cloud, and on top of that we have our Kubernetes distribution. OpenShift provides the best-in-class Kubernetes distribution because it has these cluster services provided on top. This bottom layer here is something that we've had in the market for a little while: this layer is OpenShift Kubernetes Engine, a nice lightweight way to get started with your open hybrid cloud strategy. Then build on top of that with OpenShift Container Platform, bringing more and more value with platform services, application and developer services, and data services. That's providing the foundation: now I've got all the tools and bits that I need to create workloads and create value-add, whether it's customer-facing or internally, operations-facing. And on top of that, from an OpenShift and Red Hat brand perspective, we've brought to market OpenShift Platform Plus, and that really gives you the complete picture. Now we're really growing with you, and we're able to provide the capabilities that you need across multi-cluster management and multi-cluster security, with a common global registry for all of your images, with scanning, geo-replication, mirroring, and things like that. But compute, network, and storage are also a fundamental part of our story, so we recently added OpenShift Data Foundation to the OpenShift Platform Plus set of products.

Now, all of this can be managed by us, from a Red Hat perspective: we're providing OpenShift on the various cloud providers in a managed form factor, and OpenShift Platform Plus can sit on top of that. It can also sit
on top of an on-premises or self-managed OpenShift platform. When we say self-managed, it doesn't necessarily mean that you have to rack and stack things in your own data center. Self-managed means that you want the freedom and control of upgrades and various other things; you don't want SREs messing around in your business, or maybe you have tighter controls. You can place this self-managed OpenShift in a public cloud provider, using their infrastructure as a basis, or you can have that infrastructure on-premises. Bare metal is one of the things that we have really, really invested in, providing you a solid foundation for creating a bare-metal, on-premises cloud infrastructure that can be self-managed.

Now, as we get into this, we're going to double-click into hybrid cloud and OpenShift Platform Plus. We have a number of speakers today that we're going to transition through. I'm going to start us off by talking about Advanced Cluster Management for Kubernetes, and this is kind of the foundation for your fleet management. Advanced Cluster Management, as hopefully all of you are aware, has a number of different capabilities across observability, cluster lifecycle, application lifecycle, and policy-based governance. We're bringing that together, and as we look forward, we're integrating it into the overall stack: managing tokens and cloud keys, and providing the integration to Ansible and other things that we've provided in the past. We're also expanding that capability, making it easier for you to manage your Kubernetes domain internally, but also to reach out to those traditional resources that are not Kubernetes-based and get a complete open source platform. One of the things that we're really excited about, and we'll touch on a little later in the presentation, so I'll steal a little thunder: hosted control planes are something that we've heavily invested in. It's going to be delivered with
Advanced Cluster Management. You'll be able to set up a hosted set of control planes and then provision clusters, and that gives you some cost savings, and it also gives you management capabilities across that fleet. So look forward to talking about that a little more as we move forward in the deck.

One of the things that I'm really excited about is the Hybrid Cloud Console. When we talk about open hybrid cloud, we're really talking about how we're going to provide a consistent experience both on-premises and in our console experience. And in this we have this great bit of technology at the bottom of the screen called dynamic plugins. This is the foundation for us, as Red Hat, to integrate into a common user experience, but these dynamic plugins are also extended out for you, our customers, to extend the OpenShift interface, and for our partners to extend the OpenShift interface, and it creates a way that you can have a consistent experience integrated into a common interface.

One of the things that we started off doing, and you'll see very soon, is this new multicluster engine operator. The multicluster engine operator is available in the OpenShift operator catalog, and when you install it, it's going to give you all the basic capabilities of provisioning clusters from OpenShift. When you install the multicluster engine operator onto a given OpenShift instance, that instance pretty much becomes your hub cluster, much like you've seen with ACM. Imagine taking the bottom of cluster lifecycle management and getting started with just your OCP entitlement, creating clusters, expanding your OpenShift platform footprint, and then integrating and upgrading into the overall Advanced Cluster Management capabilities around GitOps, observability, and policy-based governance. So we're really excited about what we're able to provide out of the box with
this unified console experience and with the multicluster engine operator available with an OpenShift entitlement.

As we look at what we're trying to do from an organizational perspective, I want to hit this theme of consistency: a culture of consistency through automation. We can provide all the tools and all the ingredients for you to work with your open hybrid cloud, but one of the toughest things that we see with our customers is the transition of culture into a GitOps culture. At the foundation of our GitOps story is OpenShift GitOps, which provides you the declarative capability of managing infrastructure as code, cluster lifecycle, application delivery, machine learning operations, supply chain and security, and governance and compliance, all with just that foundation: the core engine of GitOps. Expanding that out into Advanced Cluster Management, leveraging Ansible GitOps capabilities, and leveraging GitOps capabilities in Advanced Cluster Security, that's creating this culture of consistency through automation. It really takes the tools and it takes the culture to be successful as we move forward creating an open hybrid cloud platform.

The next area that we're going to address, as we talk about OpenShift Platform Plus, is what we're doing at the data management layer. From the bottom up, on the left-hand side, you see all the capabilities as we grow up the stack toward multi-cluster storage. There are a number of things that you'll find coming into the core foundation, expanding our support for the Container Storage Interface: Azure File CSI support, CSI migration, CSI resizing, the Secrets Store CSI driver, all kinds of things down at that foundation that you'll see in the next half of this year, providing a better experience with our multi-cluster storage. And as we move up, we
have Multicloud Object Gateway filesystem namespace support. That's a very long way of saying that this new capability is a way for you to integrate with your legacy applications that are using filesystems, while also integrating with cloud-native applications that are interacting through, you know, cloud-native means. For your edge-type deployments, you've heard us talk in the past about single-node OpenShift, and our data platform is also supporting single-node OpenShift. There's something called the ODF LVM Operator, a logical volume management operator, so we'll provide that capability on single-node OpenShift. That way you can have storage management on single-node OpenShift for your edge deployments. And we have richer IOPS, throughput, and latency statistics integrated into the OCP console through those dynamic plugins that I just described. So better visibility, better integration, and an extension of the data foundation out to your edge through single-node OpenShift is what we're providing in the back half of the year.

At the top, we move higher up the stack, and this is something where you'll see ACM sprinkled through the presentation in various areas. This is one area where we're very proud to be partnering across our Red Hat teams with the OpenShift Data Foundation team. There are really two elements here. There's Regional DR.
I'm really excited about that. Regional DR is about consistency for the application as well as the configuration of clusters, and then, for stateful workloads, you need to synchronize the data: an application running on one set of clusters can fail over to the same application running in a different region. With OpenShift Data Foundation along with ACM, you can have ACM deploying applications in multiple regions and keeping those clusters consistent, and with ODF you're synchronizing the data across them, so if anything were to happen, your business continuity is able to pick up and move right forward. The other thing that we've been working on is the OpenShift APIs for Data Protection, which we call OADP. That is designed for backup and recovery for our partners, and we in ACM are using it for backing up and restoring, providing business continuity for ACM itself.

So with that, I'm going to go ahead and move on to Boaz, who's going to take us through the next two elements of our OpenShift Platform Plus: Red Hat Quay and Advanced Cluster Security. Boaz?

Awesome. Thanks, Jeff.
You're welcome.

So Quay will get aligned with the rest of the OpenShift look and feel, with a completely new UI helping OpenShift admins find their way through a registry managed with Quay. Quay.io is getting the new UI as well, and will over time be integrated into console.redhat.com, so that customers can see their content from that perspective and manage Quay.io billing through the Red Hat Marketplace. This will allow paying for Quay.io via purchase orders, using SKUs that are paid up front for a year, in addition to pay-as-you-go via credit card, which is an often-requested feature from enterprise customers that do not want to manage a registry themselves.

We're planning to consolidate Quay's image scanning components with the ACS scanner, enabling scanning, for example, of Java packages in images stored in a Quay registry. This increases security scanning coverage to the level of programming-language package managers and allows you to get alerted about security vulnerabilities before the image is ever pulled and deployed. The registry is also the place where image signature and attestation information are stored today, making Quay a central source of trust for all production clusters. In the future, Quay will gain more first-class support for cosign-based image signatures and attestations, so you can verify image provenance and signer identity directly in the registry before it ever goes to production. I'll also mention that for ACS in a few slides.

Quay has so far allowed all users on the platform to create content. The only option to control this was to not create additional user accounts and only hand out pull secrets for push and pull access; otherwise, every user is able to create new repositories or namespaces in Quay. This year we plan to change this RBAC model and align it closer with that of OpenShift. Every user in Quay will only get access to a selected set of content, and without explicit permissions given by the administrator, users will not be able to create net-new content. This helps customers
implement discrete and selective permission assignment and keep registry growth manageable.

Now, over to ACS. Our efforts in ACS for the coming months are focused on four main areas. Let's start with security innovations. In the near future, we're planning to add support for image signing validation. We also want to help developers move faster, and plan to add the ability for developers to scan images on their local machines, as well as offer advanced remediation guidance to help them quickly resolve issues. We're also planning to extend node scanning to full host-level vulnerability scanning, starting with Red Hat CoreOS nodes, and provide a consolidated view of image and host-level vulnerabilities.

Applying Kubernetes network policies is an industry best practice that helps isolate workloads, yet most organizations find it challenging. We plan to make it easier to first identify missing network policies and then to build them, by providing intelligent automated recommendations and visual editing tools. Customers using Red Hat ACS in large environments need an easy way to apply policies with specific settings to multiple resource sets, and to consistently approve and track exceptions to policy violations. To facilitate this need, we plan to evolve policy management to offer grouping of resources using attributes like resource names and labels. You'd be able to apply policies to entire resource sets at once, and you would also be able to group the policies themselves to perform bulk operations. We're planning to update the main dashboard and introduce historical metrics to help organizations track and communicate their progress using key performance indicators. And for compliance, we're also planning to introduce an intuitive graphical user interface that combines the Red Hat Compliance Operator with ACS.

Another key priority for us is to tighten our integration within the Red Hat portfolio, in order to improve your experience as users and optimize how we utilize your
resources. We plan to continue to improve the unified experience for Red Hat ACM users, and, as mentioned earlier, we plan to consolidate the ACS scanner, which is Clair-based, with the Clair v4 Red Hat Quay scanner, ultimately pushing everything upstream and ending up with a single Clair scanner across the Red Hat portfolio. We're also starting to look into how ACS could help customers who are operating a service mesh such as Istio.

A third focus area is open source, where we are active in four open source projects. StackRox.io was launched a couple of weeks ago, and we're delighted to see the community is already active with StackRox enthusiasts. We're contributing to the Clair and Falco projects, and hope to fully upstream the developments made by our StackRox team into these projects. We also plan to continue to lead the KubeLinter project. And last but not least, there's the focus area of the ACS cloud service, which we'll talk about later in this presentation. Over to you, Michal.

Thanks, Boaz. Let's talk a bit about the telco and edge topic and what we plan. Can we go to the next slide?
Thanks. Telco transformation is speeding up, and communication service providers are focusing on implementing 5G technology based on cloud. We are using the next generation of hardware to help them build the most efficient and agile infrastructure, and we are deeply engaged in activities to deliver high-performing solutions that reduce the CPU budget required for forecasted traffic. SmartNICs, which we see in the middle, are helpful for isolating the control plane in a separate cluster, just for running infrastructure services on Arm cores, while standard workloads continue to run on the x86 cores. In the past, telco services were supported by highly specialized hardware with DSP processors. 5G technology also requires acceleration, and it is critical for RAN (radio access network) deployments. We can achieve it using programmable FPGAs, crypto engines, and GPUs to accelerate the 5G core and run functions like inline encryption and data plane encapsulation such as GTP. Next slide, please.

To better utilize hardware resources and reduce 5G total cost of ownership, we introduced the Performance Addon Operator, which configures the low-level parameters of OpenShift and Red Hat CoreOS to run network functions. It is a mandatory component for 5G core and RAN deployments. Today a day-two operator, the Performance Addon Operator will become part of OpenShift core, as a subcomponent of the Node Tuning Operator. This will spare users from managing one extra but mandatory component. As we know, only a specific version of the Performance Addon Operator can run on a specific version of OpenShift; with it becoming part of OpenShift core, upgrades of OpenShift and the operator become atomic, and the version compatibility problem is resolved. Also, since its code is tightly coupled with OpenShift core components, future developments are going to be simplified; for instance, it opens the possibility to integrate the performance profile into the zero-touch provisioning workflow. Finally, in the case of single-node OpenShift and the 5G RAN DU, where every spare resource counts, we will have one infrastructure pod
less: the Performance Addon Operator itself. This is a small gain, but an important one for far-edge use cases. Next slide, please.

The O-RAN ecosystem defines different deployment modes depending on the services: enhanced mobile broadband, massive IoT, and URLLC. This is addressed by single-node OpenShift, remote worker nodes, and compact clusters to deliver cost-efficient infrastructure. On top of that, we are working on single-node OpenShift with one or two workers. To simplify network operations, we delivered pre-caching of images and artifacts before upgrade, and we are also working on backup and restore. And we want to reduce deployment time by providing factory pre-installed single-node OpenShift (SNO). We are spending a lot of effort introducing zero-touch provisioning everywhere. Built on hub clusters and Advanced Cluster Management, ZTP is our solution for 5G network scaling and for managing distributed clouds, like a cloud at each radio site. Soon, one ACM instance will manage three thousand SNO clusters, and ten ACM instances can be deployed on a single hub cluster with the hub-of-hubs feature; that allows our customers to manage 30,000 edge clouds from one place. Support for advanced timing and synchronization, like Precision Time Protocol (PTP) and SyncE, at the OpenShift level is a key point for us for RAN site cost reduction. That cost reduction can be achieved by moving the grandmaster clock from the cell site router to the DU server and delivering synchronization directly to the server from GNSS. Next slide, please.

Now let's focus on single-node OpenShift evolution. The initial single-node OpenShift deployment on bare metal will be extended with deployment on Red Hat OpenStack, Red Hat Virtualization, and vSphere. What is also important?
We reduced the memory requirement for SNO to 16 GB of RAM, and, as I mentioned before, we are working on worker-node-based capacity expansion, adding one or two workers to SNO. Additionally, we want to deliver geo-redundancy for SNO. Last but not least is OVN-Kubernetes as the default network solution for single-node OpenShift. Next slide, please.

In the last months we could follow discussions about green sites. Communication service providers want to reduce power consumption to deliver new services in the most cost-efficient way, and we want to support this concept by tuning our platform. Starting from the bottom, we want to allow the operator to start a service in the lowest possible power mode; zero-touch provisioning will allow configuring and validating BIOS settings for power savings prior to deployment. We want to extend the functionality of the Performance Addon Operator to allow offlining unused CPUs. The performance pools concept will give the possibility to define some CPUs in high-performance mode and some in low-performance mode. Also, the workload will be able to define a power performance profile that defines its power needs. With that, I want to hand over to Scott to walk through the OpenShift cloud services. Thank you.

Thank you, Michal, and a warm shout-out to my colleague there in Poland; thank you for all of the great content. As Jeff kicked off and described our OpenShift Platform Plus story, and as we've walked through the enhancements that you've seen throughout the landscape, check it out.
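Before we leave the telco section: the low-level CPU tuning Michal described is expressed today as a PerformanceProfile resource. A minimal sketch follows; the profile name, CPU ranges, and node selector are illustrative assumptions, not a recommended production layout.

```yaml
# Illustrative PerformanceProfile sketch for a latency-sensitive node pool.
# CPU ranges, hugepage counts, and the node selector are placeholders.
apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: ran-du-profile          # hypothetical profile name
spec:
  cpu:
    reserved: "0-1"             # cores kept for OS and infrastructure pods
    isolated: "2-15"            # cores dedicated to the network functions
  hugepages:
    defaultHugepagesSize: 1G
    pages:
    - size: 1G
      count: 8                  # pre-allocated 1 GiB hugepages
  realTimeKernel:
    enabled: true               # low-latency kernel for RAN workloads
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""   # hypothetical node role
```

With the operator folded into the Node Tuning Operator as described above, a profile like this would be reconciled by OpenShift core itself rather than by a separately installed day-two operator.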
We're talking about cloud services now: cloud services that unify the experience across your on-premises and your cloud estate. What we're talking about is platform consistency, and moving the application services into the Red Hat cloud that you are already familiar with in your on-premises OpenShift. This means taking the cloud services that we already have and bringing in things like the developer studio cloud experience, bringing in GitOps and Pipelines as a cloud service, developing out what we call developer access to multi-cluster-aware tools without the toil, and bringing in features around DevOps workload capabilities and new technologies that will integrate Advanced Cluster Security with ODF and storage, all as part of a cloud service offering. This is awesome. Imagine getting your end-to-end software supply chain built on proven open source, without the additional toil and headaches that you're facing today to synchronize this all together. Again, we're talking about consistency from your on-premises base into your cloud. This is awesome stuff. As I move into the next slide:
This is awesome stuff as I move into the next slide What we're talking about here is enhancements that are coming Already you've already seen these coming on console at redhat.com We made some great announcements last year around summertime and we've continued to iterate on those as we move forward You're going to see OpenShift cluster manager become a fleet admin experience Getting new wizards and new UI capability to start managing star KS directly from the open shift console experience at OpenShift cluster manager Creating Rosa everybody's favorite is the Rosa the redhat open shift on Amazon service they're creating that managed open shift cluster directly from the SaaS capability directly from the cloud service and doing things like capacity planning to look at Right sizing your compute for performance and understanding what the workloads are actually consuming So again, we're talking about consistency of your management tools consistency of that operation all from essential experience Workload Explorer is awesome Ali and Rob and the team have really been working hard to make that API explorer experience a first-class citizen And continue to iterate that forward so that you have the ability to dive deep into your clusters and understand What the workloads doing This helps you out in troubleshooting I know sre's love to see that experience as they can navigate From the cloud management experience to a cluster console and start to understand what the API usage is And last but not least let's talk about the application services a humongous opportunity for us to make it easier to integrate application services all the things that were mentioned on the prior slide But showing your team how they can set up workspaces which become Almost like an environment for developers to to tap into and it started deploying Their applications and move through the build process easily Again, we're bringing all of these tools to you that you're already familiar with But in a cloud 
service model, without the headache and without the toil of setting it up in the first place. All of this is going to be pushed to production at console.redhat.com. Let's take the next slide.

We're going to shift gears a little bit here. I just took you out of the application framework, and I'm going to talk a bit about the value of managed OpenShift. This is talking about the platform: this is where you can bring your own opinion into those tools but don't have to worry about the platform itself. This is called simplicity of operations: bring a cluster directly to your teams, remove the extraneous headaches of doing it self-managed, and ensure that your teams have the exact uptime and cluster availability that they need to begin moving forward. I'll mention it again: the user interface and experience that our team is building for the ROSA cluster creation wizard is going to be first-class, and I'm excited to see that grow further out to ARO, Azure Red Hat OpenShift, as well as additional capabilities with OpenShift Dedicated (OSD).

We're talking security everywhere, because that is first and foremost a major hurdle for all of our customers. They need to know that things like FedRAMP High and HIPAA are taken care of for them, so that their workloads can run without the headaches of figuring out how to do that first. So we're going to bring a HIPAA-compliant experience to you right off the bat, from a single click.

Platform consistency, again, is a guiding approach for us. We want to make sure that we are reducing the barriers to adoption of these managed services. So whether it's your on-premises environment or in the cloud, it should just work. We want your applications to run seamlessly on-premises without having to figure out: well, what do I need to do next to make this run in the cloud?
Our mantra and our guiding approach is to make sure that if it runs on OpenShift, it will run seamlessly on managed OpenShift as well. I'll take that next slide, Jeff.

As we talk about the addition of more Red Hat OpenShift cloud services, it really boils down to meeting you where you're at and bringing expanded choice in instances. We've already delivered spot instances, and we have more instance types coming: Wavelength, GPU, AMD, and dedicated. We're talking about the expansion of instance types that you've told us you need, for example Arm, a lower-cost, lower-power-consumption model that a lot of our customers need for their edge workloads. And we heard you loud and clear when you said you needed Jakarta, the ap-southeast-3 region; we're bringing that. It's coming into the managed service, matching what you already see delivered in OpenShift 4.10 on-premises. So we're aligning that experience for you; again, we want your workload to run consistently on-premises and in the cloud.

Security everywhere, and I'm going to hit that again. We're making enhancements for keys, enabling EBS encryption as well as multi-region keys, and also providing opportunities to use short-term, token-based credentials across all the supported cloud providers. Do you hear things about workload identity, STS, and these kinds of token arrangements? We want to make sure that those additional security options are there for our customers that have sensitive workloads and need short-term tokens.

And finally, I've probably told you this 15 different times, but we want to make sure the platforms run efficiently and at low cost. One of the features we're really excited to be working on and looking to deliver to the cloud is hibernation: pausing your environment, which also means stopping the payments when it's not in use.
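As a rough sketch of what hibernation can look like under the hood, the Hive API that backs cluster provisioning models power state on the cluster record; the cluster name below is a placeholder, and the exact knobs the cloud service will expose may differ.

```yaml
# Illustrative only: hibernating a Hive-managed cluster by setting the
# power state on its ClusterDeployment. "dev-cluster" is a placeholder.
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: dev-cluster
  namespace: dev-cluster
spec:
  powerState: Hibernating   # set back to Running to resume the cluster
```

The idea is that the control plane and workers are stopped (and stop accruing compute charges) while the cluster's state is preserved, so it can be resumed later with a single field change.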
That's awesome. This is an efficient use of resources; it ensures that you're not spending when you don't need to be, and it allows your teams to take control of the usage and consumption with which they're delivering their applications. I'll take the next slide.

This one is a bit of a mind-blowing opportunity when I think about kcp and the transparent multi-cluster opportunity in front of us. It is awesome; it's one of our newest open source projects. Rob Szumski and the team have been doing a fantastic job of really educating the world about what it means to have a new control plane that is even more transparent for the user, above Kubernetes itself. It's a multi-cluster layer for your clusters, and it's designed to be transparent for the developers. We want to take away that rough edge of a cluster being something that you have to manage and focus your efforts around. We want to bring these tools directly to the users, directly to the developers that say: hey, I just want to get going, I just want to start using my tools. For DevOps teams that are ready to build out their pipeline, let's make that available to them in a console experience that doesn't have to worry about bringing a cluster in. So for admins, think about how today you have all of these prerequisites to get in place. Just think about how you could deliver that application service directly to your team, get directly involved with building apps and deploying apps, get focused on business requirements, and start innovating.
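To make the workspace idea concrete, here is a sketch of how the early kcp prototype modeled a developer workspace; the API group, kind, and fields shown are assumptions based on that fast-moving prototype and have likely changed, so treat this purely as an illustration of the shape.

```yaml
# Illustrative only: a kcp "workspace", the developer-facing unit that
# hides individual clusters. API group/version and the "type" field are
# from the early prototype and are subject to change.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: ClusterWorkspace
metadata:
  name: team-payments     # hypothetical team workspace
spec:
  type:
    name: universal       # a generic workspace type in the prototype
```

A developer would point kubectl at a workspace like this and create Deployments, Routes, and so on, while kcp decides which physical clusters actually run the workloads.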
This is what kcp is bringing to us. We're looking for the service to launch later this year, and we need you to become part of this community with us and start expanding our understanding of your needs and use cases. Again, I think I've said this word about 15 times, but we want to take the toil out, and kcp is a control plane that brings you the abilities of workspaces in a holistic way, without having to worry about the individual compute underneath.

And with that, I'm going to break protocol and shift the presenter over to my esteemed colleague Boaz, who's excited to present to you, with as much zest as he possibly can, what ACS is doing with their online cloud service. Take it away, Boaz.

Hey, thanks, Scott. Yeah, saving the best for last. As we already told you, we're so excited to be announcing ACS as a service. We hope to have limited availability starting as early as the end of this year. With ACS as a service, you install minimal software on your Kubernetes cluster and you can start securing it in minutes. We support OpenShift on private and public clouds, as well as the Kubernetes variants offered by all the leading public clouds. Forgo the operational activities and let Red Hat worry about them instead: save time on provisioning, scaling, security software updates, upgrades, backup, and recovery. The service is backed by Red Hat, and you'll receive 24/7 support from expert staff. You also enjoy flexible consumption models, including pay-as-you-go, or you can use your committed spend to purchase ACS on the Red Hat, AWS, and Azure marketplaces. So really happy to be announcing that; reach out if you're interested. And over to you, Mark, take it from here.

Hey, thanks, Boaz, and hi everybody. Next slide. So, installation: we are working to enable OpenShift to be deployed on more platforms, including Alibaba Cloud, IBM Cloud, and Nutanix.
We're also continuing to expand our provider support to include more regions and instance types for installations. In the coming year, we're planning to work on a better onboarding path, starting with the installer core, by making provider integration easier, more modular, and more composable. We're planning to improve the initial cluster onboarding experience with an agent-based installer to create your first cluster in on-premises environments and private clouds. We're also planning to pilot externally managed control planes as a form factor to deliver OpenShift with HyperShift, starting with Red Hat ACM and followed by our managed services as vehicles of consumption.

And we're working to make OpenShift more composable in general, to allow more flexibility in cluster deployment. Initially, we're making it possible to disable features that today get included by default in every OpenShift deployment, as well as making some components completely optional. In the future, we want to make it possible for customers and partners to add their own platform-specific components that aren't installed by default.

Next, we want to improve the cluster lifecycle experience along with our fleet management story. At a high level, this effort will involve introducing the OpenShift Hive operator, which will provide a cluster provisioning API upon which we can build a new central infrastructure management service, along with improving the cluster provisioning experience within OpenShift Cluster Manager and Red Hat ACM. For upgrades, we're continuously improving upgrade behavior and targeted blocking, i.e.
conditional updates based on fleet telemetry and per-cluster historical data. And we want to enhance the upgrade documentation to provide more clarity on operator status, upgrade order of operations, debugging processes, and other guidance to help customers better plan and perform upgrades. Next slide, please.

On the compute side of things, we're continuing to extend our reach into the hybrid cloud by focusing on three key areas. First, platform: we want to enable new workloads and reduce total cost of ownership. We're working towards enabling OpenShift across more cloud providers and platforms, like IBM Power and Z and Power Virtual Server. Additionally, we're looking to enable mixed architectures in a single cluster, which is pretty cool. We'll continue work on DPU support, be that in the hyperscalers or in other data centers. In the future, OpenShift will integrate these specialist systems-on-chips, allowing you to leverage the unique architectural approaches that can be essential to success in today's world.

And on experience, we're focused on built-in functionality to help you scale your control plane up and down, provide even more comprehensive backups, and handle DR requirements, all built into OpenShift. We'll also increase the overall quality of OpenShift's out-of-the-box alerting rules, such as reducing alert severity to warnings where appropriate and warning against badly configured admission webhooks that risk stability.

And on a subject close to my heart: last time we hinted there was more customization coming to RHEL CoreOS, and I'm pleased to share a bit more about what we're up to. Next slide, Jeff.

From its inception, RHEL CoreOS was conceived as a container-optimized edition of RHEL, with a very strict separation between the OS and any other content, which should always be running in containers. But over time, many legitimate needs for customization of the base OS have been brought to us. We've been rethinking how we could accommodate those
needs while staying true to the RHEL CoreOS mission of providing a stable, immutable base for OpenShift. So engineering wondered: what if we managed OS configuration like we build containers? What if we did it exactly like we build containers, with a Dockerfile? The team is hard at work making this a reality as we speak. It starts with shipping CoreOS itself as a standard OCI container that you can inspect or run with Podman like any other. Then we apply the cluster configuration and user-defined customizations to that base image in a build process no different from building a container on UBI or another base image. Now you have a compute node image built to your specification to push to your nodes. On the node, RHEL CoreOS natively understands these containers and writes the changed content layers to the on-disk file system. Think of it as an update to the classic golden image model, but one that replaces the challenges of updating, distributing, and rehydrating images with maintaining a working OS deployment pipeline. So, as you can see on the slide: yes, this means we'll be adding support for installing custom packages in a coming release. That could be agents stipulated by regulatory or IT requirements, or extra RHEL packages needed to integrate into your environment. It will also allow for much faster RHEL hotfix delivery.
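To make that concrete, a build like this might look as follows. This is a hypothetical sketch: the base image reference and package name are placeholders, not published locations:

```dockerfile
# Hypothetical Containerfile: layer a site-required RPM onto the
# RHCOS base image (image reference and package are illustrative).
FROM quay.io/example/rhel-coreos:4.12

# Install the extra package into the image and commit the new layer
RUN rpm-ostree install example-compliance-agent && \
    ostree container commit
```

You would build and push this with Podman like any other container image, and the cluster would roll the resulting node image out as a normal update.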
Just add the support-provided RPM to your builds. We're still here to provide that reliable, integrated, and well-tested base image and to orchestrate safe, reliable upgrades. And by the way, if you prefer the appliance model the way it is, nothing really changes; but if you need it, you'll have more power to meet your operational requirements. Stay tuned for a developer preview later this year. Next slide.

The OpenShift roadmap for networking is expanding to include every architecture and platform that we support today and plan to support in the future: single-node OpenShift, HyperShift, MicroShift, multi-cluster, hybrid cluster, and globally unified networking. This slide represents the roadmap goal for traffic into, out of, and between clusters in a unified way, so that ingress and egress are the same regardless of protocol. Eliminate all the "do it this way for this traffic and that way for that traffic." It also helps align how layered products like OpenShift Service Mesh and OpenShift Virtualization operate. We continue to work on Submariner to provide enhanced multi-cluster networking capabilities, with layer 3 and layer 4 interconnect enabling direct networking between pods and services in different Kubernetes clusters, either on-premises or in the cloud. With Submariner, your applications and services can span multiple cloud providers, data centers, and regions. Next slide.

As networking gets more complicated, customers expect improved tooling to understand what's happening below the application layer of their stack and to optimize it. OpenShift is creating tooling within the console to provide the information customers require to optimize network traffic to and from their apps. The tooling will support network architects, operators, administrators, and developers, whether they operate one cluster or 100. The new tooling will provide a range of network information, using relevant standards and protocols from the networking industry to present the data, live-streamed
or as a snapshot in time, using formats such as NetFlow, IPFIX, and sFlow. The complexity of networking can be reduced to an easy-to-understand graphic or table of information. Metadata about all the network devices and endpoints, be they physical or virtual, and how they're layered and interconnected, is combined with network flow data to represent an application's networking topology, for the purposes of improving code development, security, and traffic configuration. Next slide, please.

As Bryan Cantrill, one of the creators of DTrace, said during the 2018 observability summit, observability is the capability for a human to ask and answer questions. And we're delivering tools and capabilities on OpenShift to make sure those questions can be answered by a myriad of users, whether they're developers, security admins, cluster administrators, or SREs. OpenShift delivers integrated monitoring, logging, and distributed tracing as part of the OpenShift entitlement, at no extra cost. Some of the capabilities we're delivering are simplified observability correlation and consistency with improved Thanos and Prometheus support, an easy view on workloads regardless of scale, visualization flexibility with dashboard creation or via the OpenShift console, log exploration tools, and integrated data collection with the OpenTelemetry collector. Next slide.

We are also adding a cool feature, based on the upstream KEDA project, to OpenShift to scale pods based on custom metrics: the custom metric autoscaler. It's pretty straightforward; there are two main components. You have a ScaledObject, the custom resource where you define the metric on which you want to scale the pods, and the horizontal pod autoscaler, which scales the pods. With the custom metric autoscaler, customers will now have the ability to scale their pods down to zero as well. Next slide.

OpenShift on bare metal. All right, so let's start with Redfish improvements.
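Before moving on to bare metal, here is what those two custom-metric-autoscaler pieces could look like in practice, as a hypothetical KEDA-style ScaledObject. The Deployment name, Prometheus address, and query are made up for illustration:

```yaml
# Hypothetical example: scale a worker on queue depth instead of CPU.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker            # Deployment to scale (illustrative)
  minReplicaCount: 0        # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.example:9090
      metricName: queue_depth
      query: sum(queue_depth{app="worker"})
      threshold: "10"
```

The ScaledObject feeds the custom metric to a horizontal pod autoscaler that is managed for you, and with minReplicaCount set to 0 the workload disappears entirely when idle.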
We're adding improvements to provisioning nodes via Redfish; Supermicro and Nokia AirFrame Open Edge are two good examples. We're also letting partners implement their own proprietary solutions on top of our Redfish endpoints. The Assisted Installer, hosted at cloud.redhat.com, has been in Technology Preview for a number of releases, and we're working out all the details to make it ready for full support and GA. And the agent-based installer: this is super exciting. We're working on improving the installation experience on bare metal to stand up OpenShift clusters quickly and easily, on premises and in disconnected environments. Installing your first cluster, a.k.a. cluster zero, to serve as a hub cluster with RHACM for a multi-cluster experience will be extremely simple with this agent-based installer. Next slide.

Sandboxed containers: if you haven't heard of it, this enables Kata Containers as an additional runtime, used via native Kubernetes primitives; RuntimeClass is the one. In addition to being able to select which nodes to enable the Kata runtime on, there's an option to do pre-install eligibility checks, to save time and avoid discovering problems later in the install process. Workloads based on sandboxed containers are no different: they look and feel just the same as normal containers based on runc, including how dashboards and metrics are viewed. So far, sandboxed containers are available as a runtime at the cluster level. But what if you have a fleet of clusters and want to enable sandboxing on all of them? We're working on integrating with ACM policies to make that happen. In addition to standalone OpenShift, we're qualifying sandboxed containers on single-node OpenShift and other future topologies, such as hosted control planes, better known as HyperShift.

On platform consistency: so far, sandboxed containers have only been available on-premises on bare metal.
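A workload opts into the sandboxed runtime through the standard RuntimeClass primitive mentioned above. This sketch assumes the RuntimeClass installed on the cluster is named "kata"; the pod name and image are illustrative:

```yaml
# Hypothetical pod spec: run this workload in a Kata VM sandbox
# instead of the default runc runtime.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: kata        # assumed name of the Kata RuntimeClass
  containers:
  - name: app
    image: registry.example.com/app:latest
```

Everything else about the pod, including how its metrics show up in dashboards, stays the same as for a runc-based pod.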
We're working to change that bare-metal-only situation in two ways. One, we're going to be introducing a Technology Preview for AWS; this is still with bare metal nodes. Two, we'll be working to remove the bare metal restriction through an effort to reuse the cloud-native APIs that cloud providers already have to achieve this additional isolation. Next slide.

Windows. All right, we announced general availability of support for Windows containers on OpenShift way back in December 2020, and followed that by launching general availability of bring-your-own-host support for Windows nodes in September 2021. With the bring-your-own-host offering, you can onboard your custom Windows nodes, some might call them pets, into an OpenShift cluster. The next steps are to move to containerd as the runtime and to support CSI Proxy for storage. We'll also offer better health management for Windows nodes. For example, say a Kubernetes node binary crashes on a Windows node: a controller running on the node will recognize this and work to return the node to a working state. If that's not possible, an event will be generated and the cluster admin will be notified. We'll soon add support for Windows Server 2022, which will be supported until 2026. And as adoption of Windows nodes grows, we'll bring support for more third-party plugins for those Windows nodes on OpenShift, particularly Calico, Cisco ACI, and others. Next slide.

We are adding a new VM-centric overview page to OpenShift Virtualization that simplifies VM operations and monitoring. It's a single page to view the VM configuration (CPU, GPU, memory, disks, network), easily change it, and get metrics on the running VM along with health and alerts. With the combination of HyperShift and OpenShift Virtualization, you can run multi-tenant virtualized OpenShift clusters. And the last things I'm going to highlight here are a couple of notes.
We're providing a tool for low-latency network self-tests on OpenShift Virtualization. This tool allows third-party vendors, for example VNF providers, to easily build self-tests that verify the customer environment is configured properly and ready to serve VMs. And we're building an ecosystem of data protection partners that can provide an easy way to back up and restore. And with that, I'll hand it over to Brian.

So, workloads on OpenShift. We want to continue supporting the success of the operator partner ecosystem we built with OpenShift, and continue to bring our customers and partners together. Starting on the operator front, we're going to be adding support for day-zero OLM-managed operators that are included in cluster lifecycle management, in contrast to today, where OLM-managed operators can only be brought onto the cluster after it's fully operational. We're also going to be adding operator support constraints in the future. For example, if you want to specify the minimum or maximum cluster version, or the CPU or memory that needs to be present on the cluster for a solution to run, that can be included in the operator package. OLM will be able to evaluate those constraints during installations and upgrades and guide customers so they know how to stay within the supported scope of the operator. The Operator SDK will also provide more guidance and examples to help operator developers optimize memory usage and network performance, so operators can better manage workloads at larger scale. OLM is also going to make it easier to run operators at the cluster level while avoiding version conflicts, and without compromises at the security layer, thanks to more granular permission and visibility controls.

We're also making it easier to update operators. Operators will in the future be able to ship updates quicker, with a new catalog format that allows you to release additional update graphs or promote versions between channels without having to release a new
operator version. This will allow customers to get the latest operator version easier and faster, and it will allow developers to release versions out of beta channels and into GA channels quicker, without having to go through a full release cycle. We'll continue to improve our tests in the Operator SDK and build pipelines to find potential cases of operators that still use deprecated APIs or have misconfigurations in their metadata, so partners aren't left behind as we progress through OCP and Kubernetes versions.

Next, we want to make improvements to the disconnected experience. Essentially, we want disconnected customers to have the same experience as connected customers. We're working on a single CLI tool that can manage all disconnected content; it allows granular selection of the desired content and makes keeping that content updated very easy.

Last, we want to continue to expand the partner ecosystem. There will be generic new bundle APIs, so OLM can support other package formats, like Helm charts or even just raw Kubernetes manifests, in addition to operators themselves. The SDK will enable other language SDKs, for example Java and Quarkus, to easily package their projects and be managed by OLM, even if the development team isn't a Go development team. All of this will ultimately allow us to bring new partners into the ecosystem faster and allow more flexibility in how those partners want to operate. Next slide.

That brings us to CI/CD and GitOps. In the user experience area, we want to continue improving the CI/CD user experience. In the near future, that's going to include work on concurrency control: in Pipelines, we will offer concurrency control so users can decide whether, and how many, pipelines should execute concurrently as developers push changes to their Git repositories and trigger CI workflows. We'll also improve support for monorepo Git structures, using pipelines-as-code and maintaining pipelines for each application in a single
Git repository. We want to improve the experience for users of Helm charts: OpenShift GitOps will be enhancing GitOps workflows for customers who deploy using Helm charts. And we're working closely with the Red Hat Advanced Cluster Management team, so OpenShift GitOps will also simplify the onboarding experience through bootstrapped Git repos for deploying applications and cluster configuration.

On the security front: security and software supply chain management is always top of mind for us, and it's top of mind for many of our customers now as well. We'll continue doing security work across our CI/CD offerings. Pipeline and task governance will enable customers to provide a set of curated tasks and pipelines across their application teams, and to centrally manage and roll out updates across those teams' CI workflows. We'll be providing a Tekton Hub instance on OpenShift with a curated set of tasks from Red Hat, which customers can further customize and make their own; that's a step toward helping customers govern the Tekton tasks that should be used across their organization. And we'll be working on providing more guidance on using secrets managers with OpenShift GitOps, as well as adding integrations with HashiCorp Vault and other public cloud secrets managers.

Platform consistency: improving the operational experience of platform owners maintaining OpenShift Pipelines and GitOps is another area we focus on. Pipelines-as-code, and making sure customers can apply GitOps principles for managing and running their CI infrastructure.
That continues to be a focus for us. We'll be working on enabling long-term pipeline history and long-term log retention for audit purposes, while maintaining the existing user experience of the OpenShift web console and the Tekton CLI. On the OpenShift GitOps side, we're aligning the Argo CD multi-tenancy model with OpenShift, and this will simplify the overhead of providing Argo CD as a service for platform owners by decoupling the permissions required for control plane management and for applications. Both OpenShift Pipelines and OpenShift GitOps will be working to expand support for more cloud services and architectures in the future. Next slide, Jeff.

Now we're into OpenShift Serverless. With Serverless, we aim to reduce complexity and free up your resources, with a consistent platform experience. On unifying the experience: it means we're integrating Serverless with platform services such as observability, and with integrations from Red Hat and other cloud providers. We'll be elevating serverless functions to offer a programming model that helps you create event-driven solutions, letting you focus on the value you want to provide rather than on infrastructure or application configuration. And we continue to add more event sources to increase the range of possibilities with Serverless.

For security: we define Serverless as a deployment platform, and as such security should be integral to it. We're focused on adding end-to-end encryption for internal and external services, and also broker and channel authentication and authorization, to secure the entry and exit of events. Multi-tenancy is another important feature; we're in the midst of providing this with Service Mesh, and we'll be looking to bring it natively into Knative itself. Users will be able to create solutions compliant with industry, statutory, or internal security requirements.

Finally, for Serverless platform consistency, or serverless everywhere: we want to work towards offering Serverless everywhere
that managed OpenShift runs. That means OpenShift Dedicated, ROSA, ARO, disconnected clusters, single node, and also different flavors of OpenShift. We're looking to integrate with an application-centric, centralized hybrid cloud initiative, where the developer and operator user experience is the driving force and the clusters, nodes, and pods fade into the background, so much so that cluster creation itself becomes redundant. This is exploratory at this point: bringing OpenShift to the multi-cluster realm by offering a centralized hybrid cloud in a new, innovative way, using Kubernetes-style declarative APIs to offer a powerful but simple hybrid cloud platform that will enable users to create cluster-agnostic solutions. Next slide, Jeff.

For Service Mesh: in the unified experience column, this year Service Mesh will look to include better integration with the OpenShift experience, including OpenShift console and Kiali integrations, a unified metrics infrastructure, and a unified API gateway. On security everywhere: Service Mesh already provides automatic mTLS encryption with certificate management, and allows creation of traffic policies based on service identity rather than traditional IP addresses and ports. This lets you create zero-trust networking policies, restricting communication between services to a need-to-know basis. This will continue to expand into multi-cluster environments and off-cluster services such as VMs and bare metal hosts. And for platform consistency: OpenShift Service Mesh provides a consistent user experience across multiple environments and infrastructures, on-premises and in the cloud, multi-mesh, multi-cluster, multi-region, and this is something we'll continue to build on. This reduces complexity by providing a consistent experience across different types of infrastructure. Next slide, please.

The Migration Toolkit for Applications helps bring legacy applications to OpenShift, which can significantly boost software delivery performance without
requiring a complete rearchitecture. The vision for MTA is to become the ultimate open source toolkit to help organizations safely migrate and modernize their application portfolios to leverage OpenShift, providing value at each stage of the adoption process. As an architect in charge of an adoption initiative, the amount of information to be taken into account when designing the adoption plan can be massive, and it can become overwhelming quickly. Updates this year will assist in obtaining insights from this raw information, enabling leads on adoption initiatives to dramatically reduce risks and provide certainty upon which the right decisions can be made. We're also extending the scope of MTA: it has traditionally been focused on static analysis of application binaries, but we'll start leveraging the Konveyor Tackle project to provide new features such as application portfolio management, application assessment, and automated generation of tests and deployment manifests. Next slide, please.

Finally, the migration operator for Red Hat OpenShift will be a brand new operator, and we'll call it MTRHO. This will be available for developers in mid-2022. This operator is not replacing MTC, the Migration Toolkit for Containers; MTC will continue to support mass migration use cases for OpenShift 3-to-4 and storage migrations. But this new operator will add migration capabilities for developers directly in the OpenShift console, making workload portability easy and self-service.

And that brings us to the end of our slides. Thanks for spending your time with us; we really appreciate all the participation and all the questions and answers. This is such a vibrant community, and we look forward to you joining us offline and in the next session. Reach out to us on any of the topics we discussed today, and join us in those open source communities, so that we can have an open hybrid cloud. With that, I think we'll go ahead and end the call. Thank you.