Well, good morning, everyone. My name is Ramon Acedo Rodriguez. I'm one of the product managers working on OpenShift, and today we will give you a short update on our roadmap: things that are coming, things that we are working on. And if this lets me... sorry. Otherwise, we can do it manually. Yeah, here we go. Thank you.

Okay, so let's start with the hybrid cloud, which is very related to everything we do with Kubernetes and OpenShift. For the past 10 years or so, at Red Hat, we've been investing a lot in hybrid cloud, and our customers and users have also been investing in innovating with their apps, across the different types of applications you may have, from traditional n-tier applications to more cloud-native ones. And all of this can be done across all our footprints. Here you can see the five footprints we have: physical bare-metal nodes where you can install Kubernetes; virtual machines on the traditional virtualization platforms; your private cloud, think OpenStack or others; the public clouds that we all know; and the edge cloud. We will talk a little bit about edge as well today.

Thinking about Kubernetes delivered with OpenShift, you have two ways to consume it. At the top are the managed cloud services, co-engineered between Red Hat and the cloud provider: on AWS, Microsoft Azure, IBM Cloud, and Red Hat's own managed offering backed by AWS and Google Cloud. By the way, after this session we have, as Diane was saying, a session with AWS about application modernization, which falls into that top-right bucket with AWS. And then, across all the footprints OpenShift supports, you can also self-manage your OpenShift, self-managed Kubernetes. With this, I want to pass it on to Daniel, who's going to do a demonstration of OpenShift, and then we will continue with the roadmap update.

Thank you so much, Ramon. Hi, everybody. My name is Daniel. I work for Red Hat as a developer advocate, and I'm a CNCF ambassador. What I want to showcase today runs on OpenShift 4.10; this is where we are. I'm going to walk through this architecture. It's a little bit complicated, but don't worry, I'm going to go through it step by step. It uses a bunch of the capabilities around OpenShift 4.10, for example Advanced Cluster Management, container security, and the GitOps pipeline. This is not only about developers, but also SREs, DevOps engineers, application architects, and your DevOps team leaders. I'm going to showcase a bunch of stuff in the next 15 minutes.

So I'm going to stop my presentation. Here's my OpenShift 4.10 cluster, and as you can see, hopefully you can all see, I have already installed a bunch of operators. You can see Advanced Cluster Security for Kubernetes, ACS, which lets you keep monitoring and securing your clusters with container security checks such as the Docker CIS benchmark or any CVE violations. There is also OpenShift GitOps, based on Tekton Pipelines for the CI/CD pipeline as well as Argo CD; with that, you get a GitOps pipeline driven from your Git application repository.
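As an aside, installing an operator like these boils down to creating an OLM Subscription, or clicking through OperatorHub. A minimal sketch for the OpenShift GitOps operator; the channel value is an assumption and may vary by version:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-gitops-operator
  namespace: openshift-operators
spec:
  channel: latest                  # assumed channel; check OperatorHub for your version
  name: openshift-gitops-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```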
And the last one is ACM. Say your company needs to manage multiple clusters, not just a single on-prem Kubernetes cluster; perhaps you have Kubernetes on Amazon or Google, Microsoft or DigitalOcean. How do you manage multiple clusters as part of your multi-cloud or hybrid cloud strategy? ACM is the answer that enables you to manage that kind of fleet. I already installed the operator, which you can do as long as you have cluster-admin permission.

Now let me show how to get started developing and creating the GitOps pipeline. I have actually already created my CI/CD pipeline, that is, GitOps based on Argo CD; you can create this Argo CD instance using the OpenShift GitOps operator. There's a bunch of stuff in here: I have a development environment and I have a production environment. But here is the command line, the kam CLI; I'm going to make it a little bigger for the people in the back. Okay, hopefully you can all see that. OpenShift also provides this application-manager CLI, and kam bootstrap generates a bunch of YAML files, including the secret files. When you run this CLI, as you can see, I have already set it up: the GitOps repository, which I'm going to bring up in a minute, the actual application repository, and the access token that allows me to access my external container registry. I'm going to use Quay.io; you can also use Docker Hub or Google Container Registry. With this in place, when I change my application, the Tekton pipeline automatically builds and pushes a new image into the container registry, and when the image tag is updated in my GitOps repository, it automatically triggers Argo CD to deploy the application into the target environments, for example the dev cluster, the production cluster, et cetera. (I'll include a sketch of this bootstrap command after the repository tour.)

So once you run this command line, you get a bunch of files, including the secret files, as you can see here. I have already opened my terminal in my IDE; go to the configuration and you can see the Argo CD YAML files, and also the CI/CD configuration based on Tekton for deploying your application to the dev cluster, staging, production, et cetera. There are also the environments for dev and production, and under the application's service base configuration there are the YAML files describing how to deploy the application onto the Kubernetes clusters.

Today I'm going to use a Java application. Everybody says, oh, Java is too old; it's maybe 25 years old as a technology, but it's still evolving. The application is one of the popular games from the Windows operating system back in 1995: Minesweeper. I tweaked that application a little bit with Quarkus, which let me evolve it into a cloud-native microservices application running on Kubernetes. So, going back to the web browser: this is my GitHub repository, I'm going to make it bigger here. Here's the GitOps repository for Microsweeper; this is publicly available. And back in my slides, here is a QR code. I'm going to share these slides later today, so you can scan the QR code and get to the entire demo environment, including a YouTube video where you can find everything step by step; I don't have enough time to go through every step today. So, back to the GitOps repository. Here is the GitOps side: you can see the configuration and the environments. And then the application itself sits in the Quarkus directory; you can go into the source.
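As a reference, the bootstrap invocation described earlier looks roughly like this; a minimal sketch, assuming placeholder repositories, registry organization, and token variables (exact flags may differ between kam versions):

```bash
kam bootstrap \
  --service-repo-url https://github.com/example/microsweeper \
  --gitops-repo-url https://github.com/example/microsweeper-gitops \
  --image-repo quay.io/example/microsweeper \
  --dockercfgjson ~/.docker/config.json \
  --git-host-access-token "$GITHUB_TOKEN" \
  --output bootstrap \
  --push-to-git=true
```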
If you are familiar with Java applications, this is the main project where I developed the application. One of the interesting parts: when you go to Settings in the GitHub repository, there are the developer settings, and I created two personal access tokens there. Here is the kam-ci one; I'm going to input my super-secret token here. This is how the Tekton pipeline gets triggered and automatically detects changes in the Git repository. This is the common practice for how developers actually work: they develop the application, and once they're done, they just push the code into the repository; after that, the nice, smart CI/CD pipeline detects the change and automatically starts the pipeline with the build, the deployment, et cetera.

And then here is Quay.io, the external container registry. In its settings, I created the credentials for registry access, with authorization and authentication based on a robot account, which means that whenever I change the application and retag a new image into the container registry, the change is automatically detected and deployed to the target environment. (I'll show a sketch of wiring such a robot account into the cluster below.) I had to set up a bunch of things, and I'm skipping some of the necessary parts because I don't have enough time, but hopefully it makes sense.

Back to the application, and let's go to the ACM clusters. When you install ACM, you find the cluster management entry, and when you click on it a new dashboard opens. Go to the Overview: it automatically does single sign-on against the OpenShift authentication, I log in with my username, Daniel, and it shows the two clusters, the dev cluster and the production cluster, at this moment. You can use this same capability on your own OpenShift 4.10 cluster today. Once you're in ACM, there are a bunch of applications. Going back to the OpenShift cluster: I deployed the application in the dev namespace, so I switch to the dev console, and you can find the nice dev console UI. It takes some time to render the UI depending on your network bandwidth, so give it a moment. Back in ACM you have the two clusters; go to Clusters, and now we have the dev cluster I already showed in Argo CD, plus the production cluster, just like that one. When you go into the dev cluster, you can find all the related Kubernetes resources: for example, here is the actual application and PostgreSQL, and a bunch of other resources, for example services, endpoints, the route, et cetera. You can see the whole topology, which resources communicate with your application, along with the Kubernetes manifests. And you can see here, I'm going to make it bigger, the two clusters, local and production.

Then go to the application view: it shows a topology similar to the one in the OpenShift console. Here in the OpenShift console you can see the PostgreSQL and the simple Quarkus application, and clicking Open URL automatically opens the actual application endpoint; OpenShift provides the route URL, just like a Kubernetes ingress. So once it opens, you can see the Minesweeper application, like the Minesweeper game back in 1995. I tweaked it a little bit; here is Microsweeper, I changed the title.
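The robot-account wiring mentioned above usually comes down to a docker-registry secret linked to the pipeline's service account; a minimal sketch with hypothetical names and namespace:

```bash
# Create a pull/push secret from the Quay robot account credentials
oc create secret docker-registry quay-robot \
  --docker-server=quay.io \
  --docker-username='example+microsweeper_ci' \
  --docker-password="$ROBOT_TOKEN" \
  -n cicd

# Let the Tekton pipeline service account use it for pushing and pulling images
oc secrets link pipeline quay-robot --for=pull,mount -n cicd
```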
And here's my scoreboard; I actually changed the backend application as well. These are the old scores, stored in PostgreSQL by the Quarkus application, Quarkus being the cloud-native microservices Java framework invented at Red Hat. When you go to the application side in ACM, you have a topology similar to what you have in the OpenShift console. In the OpenShift consoles, if you have two clusters, you have two different consoles; with ACM, you can find everything in a single pane of glass. So go to the application in the dev console: you can find the similar topology of application, route, deployment, et cetera. Then go to the route, click the route URL, and you can find the same application here. I already played the game two times, so you can find the two scores in my PostgreSQL database. And when you go to the production application, you can find a different topology; go to the launch URL, and, let me make it a little bigger, this URL is based on the production cluster, while the previous one was based on the dev cluster. That's why the IDs are different in production; I tried it this morning, so you see different values here.

Now I'm going to show another interesting part, which is ACS. ACS is Advanced Cluster Security for Kubernetes. You have a bunch of applications deployed and running on Kubernetes, and there are multiple personas who need to secure those applications as well as the infrastructure. Luckily, OpenShift provides the ACS operator, which I already installed. Go back to my stackrox project, where I already deployed a bunch of things, then go to the admin console: here is the operator, it shows ACS, and I already deployed the Central services, which give me the ACS console. So let me try to access the ACS console: go to Networking, then Routes, and you can see it automatically created the route URL to access ACS. Here we go. I open the ACS route URL, and it shows me all the violations in the cluster; but in this case I haven't added a secured cluster at this moment, so it's just an empty dashboard, with no violations yet. To log in, I use admin; back in the console, there is a secret object that was automatically generated when you install the operator, and here is the central password: reveal the value, copy it, and paste the password back here. (A sketch of the Central resource behind this step follows.)
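For reference, "deploying the Central services" through the ACS operator amounts to creating a Central custom resource; a minimal sketch, with the namespace as an assumption:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  central:
    exposure:
      route:
        enabled: true   # have the operator create the console route shown in the demo
```

The generated admin password lands in the central-htpasswd secret, which is the secret the demo reads the login from.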
You can also add the secured-cluster services for ACS itself. As you can see, there's no violation scanning at this moment, so I'm going to add a new secured cluster. ACS, nicely, provides some integrations for generating the files you need when you add a new cluster to be secured. For example, there's a Helm chart; I'm going to create my secrets based on an init bundle. I just download the secret files and then, where am I, in my stackrox project, run oc apply -f against the files downloaded to my user directory, in the same stackrox namespace. Once I have created these secret files, ACS can access that cluster to scan for all kinds of violations. I created three secrets; you can go to the OpenShift console in the same namespace, go to Secrets, and you can find the three secrets, created just now, and just now, and just now. Okay, pretty cool. Then go back to the operator: I just need to add a new secured cluster. You can actually leave most of the defaults; the one thing is the central endpoint, which should point at your actual Central. In that case, let me access ACS first; okay, this is ACS, so I'm going to copy the endpoint and paste it here. This is the HTTPS protocol with TLS termination. I create the new resource, then go to Pods, and it automatically deploys a bunch of pods that secure the cluster and connect it to ACS. It takes some time to finish everything; I actually downloaded all the containers in advance, so it's almost done. I think it's done.

Okay, go back to my ACS dashboard, then go to Compliance, and scan once again. It takes some time, depending on how many applications you actually have deployed, and then you can see all the violations. Here is CIS, the cybersecurity benchmark for Docker, there's HIPAA, and a lot of the CVE standards you need to take care of whenever you deploy container applications onto Kubernetes in your production environment. Now some interesting stuff: I'm going to reload; it takes some time to refresh the dashboard because it aggregates all your applications before showing them. In the meantime, one last interesting thing: Log4j. This is a YAML file. You probably know that a couple of months ago we had a very critical CVE around Log4Shell, so this is an example exposing the Log4j CVE with a Java application. I'm going to deploy it: go to another namespace here, cve-test, and just import the YAML file. When you watch the full video demo, you can also find how to change your application, push it into the Git repository, and trigger the CI/CD pipeline, which is a pretty interesting part; I'm going to skip that today. So this is the last part, just a quick summary: you currently have ACS, ACM, and also the GitOps pipeline capability on OpenShift 4.10. So I'm going to hand it over to the other Daniel to talk about what's next in OpenShift and a whole bunch of capabilities.
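The "new secured cluster" step above corresponds to a SecuredCluster resource; a minimal sketch with a hypothetical central endpoint (the init-bundle secrets must already exist in the same namespace):

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  clusterName: production                                   # display name in the ACS console
  centralEndpoint: central-stackrox.apps.example.com:443    # the Central route copied in the demo
```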
I'm Daniel Messer, a product manager in the OpenShift group at Red Hat, and what you just saw are all current capabilities. All of the things that Dan showed you, multi-cluster visibility in ACM, multi-cluster security analysis and reporting with ACS, GitOps-driven deployment and pipelines with Tekton and Argo, all of this is possible already today. So what I, and later also Ramon, are going to talk about now is the future: our plans and visions for the remainder of this year and the future of OpenShift beyond that, at the platform level. We grouped this into three main areas.

The first one is multi-cluster. Multi-cluster is the main direction we are taking the whole platform in: moving away from a model where we have very few but extremely large clusters, with hundreds and thousands of namespaces shared by hundreds and hundreds of tenants, to a model where we are essentially looking at an architecture like this one. We want you to be able to bring tenants into their own clusters, to bring in clusters for specific purposes, with specific hardware, from specific cloud or infrastructure providers, and manage them at the fleet level. We are not just telling you, hey, run multiple clusters because it's now so easy with OpenShift; we want you to do this in a way that scales, where you aren't drowning yourself in work just to keep all of this operational.

So when we talk about multi-cluster, we need to have a couple of things in check. The first is the storage layer: one that, from a central pool of storage, allows your applications to get persistent storage from a central source for efficiency, and to reclaim it later when it's no longer needed, but also in a way that lets data move from one cluster to another. You don't want to be stuck in one cluster just because your data sits in that particular region or data center; you want the ability to move the data over to another cluster for failover purposes. A multi-cluster storage layer is required to actually do that. The clusters still have their own individual ingress points for network traffic, which is where the applications sit, but the applications can't be aware of, and dependent on, the fact that they are running across multiple clusters; it needs to be transparent to them. We don't want to rewrite our applications just for them to work multi-cluster; they should just work out of the box. So you need a multi-cluster networking solution: the ability to transparently route east-west traffic between the clusters, so that it is seamless and transparent to the application itself. This is what you need at the infrastructure level. And in order to do this effectively at scale, without doubling or tripling your team size, you need to have tooling.
You need the ability to have insight into what is running in all your clusters, but also, from a central point, to deploy applications across them, enforce policies across those clusters, figure out security violations in the various clusters, remediate them, and get alerted about them. And whenever you talk about having multiple clusters, you also need a central source of truth for all the containerized software you are running in these clusters, and this comes together with a container registry that sits in a sort of hub position. So these three main pillars, multi-cluster security, multi-cluster management, and the container registry, are the core pillars of our multi-cluster approach, where we basically standardize all this tooling regardless of how many clusters you end up running, and where.

The first is cluster management, and we are true to our word: we open source everything. So we open sourced what in the product space we call Advanced Cluster Management, as the open cluster management project, and we didn't just open source it and put it on GitHub, we actually donated it to the CNCF last year. The core parts of what it takes to do multi-cluster lifecycle, application deployment across multiple clusters, and policy enforcement are in the open cluster management project, which is a CNCF project now. These are the base building blocks, in the form of the APIs and the controllers that make this up, and as you can see, it spans various SIGs and working groups and already gets contributions and community momentum, also from our partners. So open cluster management is one part; because we donated it to the CNCF, it needed to be free of things that aren't in the CNCF, so we took out the components that are OKD- and OpenShift-specific and put them into a separate, sort of midstream, project that we call Stolostron; stolos is Greek for fleet and tron is tool, so it's kind of a fleet tool. These are all the extensions that make it capable of managing OpenShift and OKD clusters specifically, creating them and managing them over their lifecycle, as well as giving you a graphical console and a search capability. This is also where we do the integration with Hive for cluster provisioning, Submariner for east-west network traffic, and volume syncing for actually replicating storage across clusters using a shared storage system. So, just to avoid confusion: these are two open source projects that flow into the ACM product, and that's where all the innovation happens, upstream.

Within multi-cluster, we are focusing specifically on the networking part, because this is crucial to get right; if you don't have it, your multi-cluster deployment is going to be very complicated and very manual. So we are investing heavily in this multi-cluster networking layer based on the Submariner technology, to essentially allow pods running in different clusters to communicate through what they perceive as one flat network namespace. They don't perceive any boundaries of their own cluster; they don't even know that the pod they are talking to may actually sit in another cluster, very far away. It's completely the same as talking to pods and services in the same cluster, and this is the level of transparency and abstraction you need in order to do multi-cluster networking: the clusters themselves are connected east-west with IPsec tunnels, but from the pod perspective it's all one flat network namespace. (A small example of how a service gets exposed across the cluster set follows.)
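With Submariner's service discovery, making a service reachable from other clusters in the set is typically just a ServiceExport object; a minimal sketch with hypothetical names:

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: microsweeper     # name of an existing Service in this namespace
  namespace: demo
```

Pods in connected clusters can then reach it at microsweeper.demo.svc.clusterset.local, with no application changes.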
This makes it possible to just place your pods wherever there is free capacity, or the special hardware that you need, or, according to your fault-tolerance policy, in another independent region, so you are not looking at one cluster as a single point of failure. What we are also looking to do in ACM, and I say ACM synonymously with open cluster management and Stolostron, is the ability to import and manage OpenShift and OKD clusters on other compute architectures. x86 is what we do these days, also Power and System Z, but we are also going to support ARM this year; OpenShift and OKD already support ARM since version 4.10, so the cluster management layer will also start to support that and learn how to deploy clusters on these infrastructures.

For the storage part, we are betting on VolSync, formerly known as Scribe. This project is used to asynchronously replicate data from persistent volumes across clusters, in the background, for the purpose of making disaster recovery possible. You are able to take data out of one cluster and move it into another cluster, and you actually do this continuously in the background, which gives you the opportunity to fail a workload over to a completely different cluster if you have a catastrophic failure in a region or a data center. So this is done with the VolSync integration.

When we talk about this cluster management architecture, you will see that the cluster management stack itself, ACM on the open cluster management technology, runs on OpenShift itself. We call this a hub cluster, and it is an infrastructure-only cluster that doesn't run any workloads; it really just runs the infrastructure to do multi-cluster management. This is something you definitely want to be able to back up and restore in case it completely fails, so ACM will learn how to do that with the regional hub architecture, which will allow you to back up and restore that multi-cluster management stack completely if you have a catastrophic outage there.

It will also support deploying OpenShift in a slightly different way. Today we have this master/worker node model, where you have a control plane separate from the workers and you use specific nodes for the control plane. In the HyperShift project, which we are working on upstream, you will be able to containerize the control plane and run it on OpenShift itself, saving you from procuring and providing separate machines just for the control plane of a cluster. You will have a larger management cluster that does this in the form of container orchestration: it will run the API server and etcd as containers, while the actual worker nodes of the cluster remain external nodes. This is the HyperShift project, and ACM will learn how to provision clusters in that specific way in the future as well.

Another important aspect, and you will hear about this later today, is security enforced by content integrity and verification. Signing is an important topic in this world, and we specifically sponsored the Sigstore project, which concerns itself with signing cloud-native artifacts. When we talk about cloud-native artifacts, we usually mean container images, but you can also sign other artifacts, for instance manifests. And if you know a little bit about ACM, you know that all the policies, all the regulations and rules and applications that it manages, are expressed in the form of YAML manifests, and with Sigstore we can actually sign these YAML manifests and prove to you that what you get when you pull it from an external source is the exact same manifest.
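A minimal sketch of what that looks like with Sigstore's cosign CLI, using placeholder image and file names, for both an image and a YAML manifest:

```bash
cosign generate-key-pair                                    # creates cosign.key / cosign.pub

# Sign and verify a container image
cosign sign --key cosign.key quay.io/example/microsweeper:1.0.0
cosign verify --key cosign.pub quay.io/example/microsweeper:1.0.0

# Sign and verify an arbitrary artifact, e.g. an ACM policy manifest
cosign sign-blob --key cosign.key --output-signature policy.sig policy.yaml
cosign verify-blob --key cosign.pub --signature policy.sig policy.yaml
```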
So we are essentially providing integrity not just around the images, but also around these manifests. That is the basic cluster management layer, open cluster management. But what you also want is a console, a graphical UI experience, around it. We have an awesome console in each cluster with the OpenShift console, and what we are doing is elevating this experience up to the fleet level, so you have a console that's actually multi-cluster aware and, as you've seen previously, will start to integrate many of these multi-cluster management aspects into one common console framework. You'll have the OpenShift admin console next to the developer console, next to the ACM console, next to the ACS console as well, basically in one view, and you will be able to zoom into particular clusters in your fleet, but also zoom out to the fleet level, to have a fleet-wide overview of policy enforcement, running applications, and security posture. This is done with the unified cluster engine: there is a new operator we are introducing, called multicluster engine, which takes some of the basic cluster lifecycle functionality of ACM out into its own operator that's available to every OpenShift and OKD cluster, and this is what drives the unified console. It allows you to use things like fleet-wide authentication, so you don't have to log in to each and every cluster individually; there is fleet-wide single sign-on in place that logs you into all clusters simultaneously, if you have the permission to do so.

This is enabled by an extremely interesting project we are conducting in the console that we call dynamic plugins. All these new UI experiences that we are working into the OpenShift console are carried out with a plugin framework, and this is not just something we use to bring ACS and ACM into the console; it's something that our virtuous users, as Diane used to call them, and also our partners can use to build their own UI experiences right inside OpenShift. It's very straightforward: all you need to know is a little bit of JavaScript, and maybe have a bit of a hand for designing a UI. If you are a partner, an ISV, and you want your own UI in the console, directly integrated into OpenShift, you can do that very, very easily. Someone in our group actually built a dynamic plugin within two days, with nothing but a little bit of JavaScript and a little bit of YAML that you throw at the cluster, and your console extension appears. This is so easy that I think a lot of you can do it as well, and can use it to model unique workloads and specific things you want dedicated UI support for in your own clusters. This is an extremely exciting technology, and I definitely recommend you check out and try dynamic plugins in the OpenShift console; a small sketch of how a plugin is registered follows.
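Registering a plugin essentially points the console at a service that serves the plugin's JavaScript bundle; a minimal sketch with hypothetical names (the API was v1alpha1 around OpenShift 4.10 and has since evolved):

```yaml
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  name: fleet-widgets
spec:
  displayName: 'Fleet Widgets'
  service:
    name: fleet-widgets          # Service serving the built JavaScript bundle
    namespace: fleet-widgets-ns
    port: 9443
    basePath: '/'
```

The plugin is then enabled by adding its name to the plugins list in the console operator configuration.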
I talked before about the importance of storage, the base layer to actually store persistent data. This is an area where we are heavily investing, and we are starting from the ground up at the container storage interface level, where we will teach OpenShift to use CSI for resizing of volumes, provisioning of ephemeral volumes, and SELinux mounts, as well as bring in all the cloud provider plugins, again through the CSI framework. This is how we enable the individual clusters to work effectively with the infrastructure through a standard interface. Then, a layer higher, we are starting to bring in multi-cluster capabilities. We have the Multicloud Object Gateway, built with the NooBaa project, to create S3-compatible storage in your storage landscape. It's object storage by nature, but we are going to add a file-system persona to it, so we will be able to present what's actually an object storage bucket underneath as a file system. There are a lot of apps out there that rely on shared file systems, and a file-system-style object storage is a way to make that work across clusters in a read-write, active-active fashion.

But on the other end of the spectrum, we have smaller clusters, very small clusters, in the form of single-node OpenShift, SNO, where OpenShift and all of its technology runs on a single server. These systems are usually not connected to a larger shared storage network or a complicated storage system; they need to work with what they have in the local server, and we are exposing that, with the same management capabilities, through the logical volume management operator, which makes use of the LVM stack in the RHEL kernel to give you lightweight storage provisioning on a single node, but through the standard interfaces of ODF.

And then, one layer higher, we are introducing these multi-cluster shared networking and shared storage capabilities. What we will be able to do with ACM and OpenShift storage working together is facilitate and orchestrate disaster recovery failover from one cluster to another. The application is managed and deployed via ACM, but the storage is managed and provided by OpenShift Data Foundation, and these two technologies integrate, using the VolSync technology I mentioned before, to replicate data continuously in the background. If a cluster fails, you will be able to initiate a disaster recovery step that moves the entire application definition to the surviving cluster, where all the data is already present because it was continuously migrated. So this is orchestrated and available to you basically as a single action in ACM, rather than you going into the systems and redirecting storage, redeploying applications, and reactivating storage all manually; it gets you out of a disaster really, really quickly. (Below is a small sketch of the kind of replication resource that drives this.)
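That background replication is expressed per volume; a minimal VolSync sketch, with hypothetical PVC name, keys secret, and destination address (in the ODF/ACM integration these are generated for you):

```yaml
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: scores-db-replication
  namespace: demo
spec:
  sourcePVC: scores-db                              # PVC to replicate
  trigger:
    schedule: "*/10 * * * *"                        # replicate every 10 minutes
  rsync:
    sshKeys: scores-db-replication-keys             # Secret with SSH keys for the mover
    address: replication.dr-cluster.example.com     # rsync destination on the peer cluster
    copyMethod: Snapshot                            # copy from a point-in-time snapshot
```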
On the other hand, we hear you about requirements in OpenShift's data protection space, the ability to back up and restore. We get this so often, and we have so many partners that want to integrate backup and recovery with OpenShift, that we have provided additional APIs to integrate into the platform: the OpenShift data protection APIs will reach version 1.0 this year, and these are the integration points that backup vendors will use to integrate with OpenShift.

Speaking of storage, I mentioned before that whenever you have more than a couple of clusters running, you definitely want some central source of truth for all the images you are running in these clusters. This is what a central registry does, and once it serves more than one cluster, it had better be really, really highly available and really, really performant, because if the registry is down, you will notice it in your clusters within five minutes, I guarantee you. This is the idea the Project Quay registry has been designed around from day one. It's the same code base that powers the public Quay.io registry, and it's available both as an open source project, Project Quay, and as a product, Red Hat Quay. We are looking to integrate this product into the same unified console framework that you've seen earlier, and also bring the initial appearance of Quay.io into the fold where all the rest of the Red Hat managed services are available, at console.redhat.com. Security scanning remains an important aspect of an image registry, because it allows you to scan images before they actually hit the cluster. Quay has done that for a long time with the Clair security scanner, and it will be enhanced to scan even more content inside the container. We have already introduced support for programming languages like Python, where it reports vulnerabilities in the Python packages it finds in the container, and we are extending that to Java, which is actually in tech preview right now, and also to Golang and scripting languages like Node.js and Ruby. So you will see not just RPM and base-OS-level vulnerabilities in your image, but also language-level, package-manager vulnerabilities.

And then, finally, our security pillar. You will have a session about it later today, but the StackRox project, the community version of Red Hat Advanced Cluster Security, is the last component that Dan showed in his demo, and it is the central piece for actually having peace of mind and faith in your multi-cluster vision, so you're not opening yourself up to a lot of rogue workloads and tenants doing all kinds of insecure things in your clusters. You can still centrally control what security policies are in place, what kinds of workloads can run, and what your tolerance towards security findings is. ACS does multi-cluster security at runtime: versus the registry, which does it at rest, ACS will tell you what's going on in the cluster in the context of the running workload. One thing it will do here is integrate with the Sigstore project I mentioned earlier: it will be able to enforce policies where you only accept signed containers for execution, and prevent unsigned containers, or containers that fail signature verification, from being executed. This is how you ensure only trusted workloads run in your cluster. There will also be the ability to define network policies. A lot of users want to essentially compartmentalize and isolate their applications at the network level, which is an extremely good idea in the security space, but it's also very complicated, right? So ACS will provide a graphical editor for that. It already has insight into the cluster traffic, based on its eBPF-level packet visibility, and it will make use of this insight to recommend known traffic patterns and express them as network policies: this known pattern is now allowed, and everything outside it is not allowed anymore. This yields network policies that need to be applied to the cluster, and we already have a technology that applies policies to clusters at scale: ACM and open cluster management, with its Gatekeeper integration. So ACS will just hand its network policies to ACM and let that component apply them to the cluster.
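The kind of policy that falls out of this is a standard Kubernetes NetworkPolicy; a minimal sketch of what such a recommendation might look like for the demo app, with hypothetical labels and namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-only-app-to-db
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: scores-db            # the PostgreSQL pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: microsweeper     # only the application may talk to the database
    ports:
    - protocol: TCP
      port: 5432
```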
Compliance is an important aspect of corporate security, and if you haven't heard about the compliance operator, you will hear today from Kirsten how it helps you prove to auditors that you comply with certain compliance levels. The compliance operator will also get a graphical interface, and this interface will sit in ACS. For the next topic, Ramon will walk you through what we are doing with deployment flexibility, and I'll then come back to walk you through our standardization work.

So, you've seen a lot of details of what's going on with OpenShift. Sometimes we get the question of what's the difference between OpenShift and Kubernetes, and with what Daniel showed us before and what the other Daniel has just explained, you can see it's many pieces that we package together and make easy to consume; this is the value that OpenShift provides. I promise I'm going to try to be quick and just go through the fun stuff, but you're going to see many items written up there. Let me start with installing OpenShift, updating OpenShift, and integrating OpenShift with more providers. What are we doing here? In terms of new platforms, we have a few, some of them already there, some of them on the roadmap: Alibaba Cloud, IBM Cloud, and Nutanix as well. And it's not only about adding more platforms; we're also adding more regions to the existing ones, which, especially with the large public clouds, is a continuous effort we keep working on. In the middle, installation: if you are consuming OpenShift self-managed, well, installing OpenShift needs to be easy too, while covering all the use cases that I'm sure each of you has. We are working on that, and I'll give a brief update in a moment on the agent-based installer; we'll see what that is in a second.

Hosted control planes: have you heard of that? Imagine you want many clusters, different types of clusters, but you would like to have one shared control plane somewhere, say three nodes, or six, it doesn't matter, serving thousands of worker nodes in different clusters, all sharing the control plane. A great idea, and very practical indeed, so we're working on that; we're calling it HyperShift for now.

What else are we doing? Upgrades. Upgrades are always a challenge, aren't they? Many times customers say, well, you know, I will only upgrade between extended update support versions, say between 4.6 and 4.10, things like this, just to avoid the hassle of managing upgrades. We're working on this continuously as well; improvements to upgrades are always in our roadmap and in our pipeline.

Bare metal. By the way, I'm the product manager for everything bare metal, so this is a topic very close to me, and I could talk a lot about it, but I'm not going to. Only this: do you know the Metal³ project? It's written Metal3, but in the community we call it Metal Kubed. Essentially, with Metal³ you can manage physical servers as if they were virtual machines, or just instances in a public cloud, right? (The sketch below shows what that looks like.)
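A server managed this way shows up as a BareMetalHost resource; a minimal sketch with hypothetical BMC address, MAC, and credentials secret:

```yaml
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-3
  namespace: openshift-machine-api
spec:
  online: true                                          # power the server on
  bootMACAddress: "52:54:00:ab:cd:ef"                   # MAC of the provisioning NIC
  bmc:
    address: redfish://10.0.0.10/redfish/v1/Systems/1   # out-of-band management endpoint
    credentialsName: worker-3-bmc-secret                # Secret with BMC username/password
```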
That's incredible. We've been doing this for years; it's a pretty mature project. In fact, it leverages technologies that existed previously: Ironic, maybe you have heard of OpenStack Ironic; that's the engine underneath Metal³. Anyway, Metal³ is managing loads of servers, and it will manage many more with all the improvements we are making in it.

More on installation: how else can you deploy OpenShift? Well, we have a cloud installer, what we call the assisted installer. Essentially, you go to console.redhat.com with your Red Hat credentials, and you get direct access to an installer on the web. You don't need to download a client or anything; you just say, this is how I want my cluster to look, you get an ISO, and you just boot that ISO on the nodes of the cluster you want to build. Super cool. And in fact, we are working on an on-premise version of the assisted installer that will give you even more flexibility for disconnected environments, the things you need when you are on-prem that you perhaps can't do when you work from the SaaS. That's what we are calling the agent-based installer internally, for now.

What else are we working on? OpenShift Virtualization. Have you heard of KubeVirt? KubeVirt is a pretty impressive project as well that we've been working on for years at Red Hat, and in fact it's now pretty mature. It's been years of ramping up and adding features, making it so that you can do everything you would do with your traditional virtualization platforms, only from OpenShift. That's pretty cool, actually incredible, having achieved all of this, and the recognition of this maturity level is that it's now an incubating project, a level up in maturity in the CNCF. KubeVirt, pretty cool project.

Before, we were talking about the footprints where you can install OpenShift, and one of those footprints is the edge. Edge computing applies to telcos, but not only to telcos; anybody who needs to install OpenShift in a remote location will benefit from all the improvements we are adding for this kind of topology. For example, these places may not even have space for three servers, so we may need to install OpenShift on just one server. At the edge we may have connectivity, but perhaps not all the time, so we need a way to manage these clusters under those circumstances, many times from a central point of management, which is ACM, the Advanced Cluster Management that Daniel showed us before. Did you see how sophisticated we can get, to manage the Minesweeper? Those who are old enough have probably played Minesweeper with Microsoft in the past; now you can explore all of that while learning about ACS for security and Advanced Cluster Management for managing many distributed clusters. Well, this applies to the edge. And at the edge we also have very specific needs, for example around real-time workloads: we need to fine-tune our servers so that they can provide the performance these applications need, applications that process traffic in real time, things like these that require complete dedication of the machines you are running your workloads on.

And related to this, single-node OpenShift: as we were saying, you can install OpenShift on just one node. We've been working a lot on SNO, as we call it internally, to make it as good as any other cluster, but within the constraints of one node. That's not an easy task.
Imagine you have the master node and the worker node all together, everything competing for resources: the workloads compete for resources with the management of the cluster, the ingress operator, everything you would have in there. We made it happen; not an easy project, but we made it happen, and it's fully supported since last year. And now, look at this: we can run virtualized workloads in a single-node cluster. That's very impressive, if you ask me. We are also adding this support beyond bare metal, where it started. This takes a lot of resources, OpenShift in general, plus all the workloads on top, in one node, so we really need to be very careful about how we split the resources in that node. So it was mainly bare metal, even though in general we tried it on every platform; now we are adding support for vSphere as well, so you can have a single-node cluster on vSphere, pretty cool as well. Another thing the team behind SNO is working on: you may sometimes need more capacity; maybe you start with one SNO and then you need to scale because you have more workloads. So you are going to be able to add more workers, since you start with just one node, to increase your capacity. Another thing, just to mention it: OVN-Kubernetes. Do you know OVN? You can use OpenShift SDN for the network, or OVN; well, OVN is the one gaining adoption for managing the network, the CNI, in Kubernetes, and SNO is not going to be an exception.

And lastly, here we go: ARM. This architecture; how many of you have an M1 laptop right now, with the Apple silicon? It's pretty impressive; my fans almost never spin up, and now the summer is coming. And that's just an example in laptops; ARM is really present in many data centers, including the Amazon data centers with AWS. So OpenShift, again, is no exception: we've been supporting ARM already, and something pretty cool is coming as well. You are going to be able to spread, to distribute, your workloads in the same cluster between ARM and x86. You will actually tell the workload, when you define it, which architecture it is for, and it will be placed on the right servers, as this little graphic tries to show you (and as the small sketch below illustrates).
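Scheduling by architecture leans on the standard kubernetes.io/arch node label; a minimal sketch with hypothetical names and image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microsweeper-arm
spec:
  replicas: 2
  selector:
    matchLabels:
      app: microsweeper-arm
  template:
    metadata:
      labels:
        app: microsweeper-arm
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64        # land only on the ARM nodes of a mixed cluster
      containers:
      - name: microsweeper
        image: quay.io/example/microsweeper:1.0.0-arm64
```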
And with this, I'll pass it back to Dan to finish with standardization.

So, let's finish with standardization. I'm going to be respectful of the time, and of the break we have scheduled, so I'll move a little bit quicker on this one. The first thing we are going to standardize is the way you install OpenShift behind the firewall, without connectivity to redhat.com or registry.redhat.io. I know that many of you are doing this, and the process in the past has been somewhat fragmented and a little bit hard to automate; everybody needed to write their own automation to do this continuously. To install OpenShift without internet connections, you need to mirror all the images into your registry, and you need to have a registry first as well; and depending on what type of images you were mirroring, the tooling was different and the steps were different, and we didn't really give you any recommendation or guidance on how to do this over time to keep updates coming in. So we are standardizing this process now, and we have a new utility that wraps all the existing utilities in one single command, one single utility working off a single declarative configuration file. We call it oc-mirror. It's part of the oc client, actually as a plugin for the oc client; in the very same way kubectl has plugins, oc has plugins, because it's basically the kubectl client. oc-mirror is a tool that allows you to create a mirror for many managed clusters running different versions, and to keep this mirror up to date over time. It takes your declarative intent from the config file, where you state exactly which OpenShift versions you want to run in your data center, which operators you want to run, your custom images, Helm charts, all the stuff you need behind the firewall, and it downloads all this content from the various places and either generates what's called an image set, something you can put in a tarball and move across an air gap with a USB stick, or, if you have direct connectivity from the oc-mirror host to the registry, streams it into your own registry. And from there you can run all the clusters. (A sketch of such a config follows.) This tool has a lot of intelligence built in: it understands when new OpenShift releases have been published, it understands how the update graphs for the operators work and whether new versions are available, and, if you ask it to, it automatically mirrors those as well; you only need to execute the tool again. So really, the way to automate this and stay as up to date as if you were connected is to run oc-mirror in something as simple as a cron job on a system every night, and it will continuously mirror all your content into a registry or into an image set, keeping your disconnected clusters as current as if they were connected. It's a very seamless experience now, and we are going to move it to GA later this year.
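A minimal sketch of the declarative config and invocation; version numbers, registry host, and the exact apiVersion are assumptions that vary by release:

```yaml
# imageset-config.yaml
# run with: oc mirror --config imageset-config.yaml docker://registry.example.com
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  registry:
    imageURL: registry.example.com/mirror/oc-mirror-metadata   # where oc-mirror keeps its state
mirror:
  platform:
    channels:
    - name: stable-4.10                                        # OpenShift release channel to mirror
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
    packages:
    - name: openshift-gitops-operator                          # mirror only selected operators
  additionalImages:
  - name: registry.redhat.io/ubi8/ubi:latest                   # extra custom images
```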
At the compute level, where you execute your workloads, we are also driving standardization. We are going to standardize on cert-manager as the de facto way to provision TLS certificates, rotate them, provision them in the cluster, and attach them to workloads; this will be GA later this year, and it's already in tech preview. We are also going to pick up the pod security admission controller that comes with Kubernetes 1.24. It is not enabled by default there; we are going to make it enabled by default in OpenShift, and we will work on making it compatible with our security context constraints, so that the restricted policy is enforced automatically. We are also going to have, as you heard previously, mixed-architecture support, so you can run ARM and x86 together in one cluster, not just in separate clusters as it is now, which makes workload placement really effective. We are also going to improve logging: you will be able to see login attempts and login processes in the audit logs, and we are going to fine-tune the logging level of the API server itself, to reduce a little bit of the noise that usually comes with the API server being very busy, but also to cover scenarios we aren't covering today. For instance, when you have webhooks in your cluster and some of these webhooks are broken, this can actually destabilize your entire cluster; the API server will recognize the situation, warn you about it, and tell you exactly where the breakage is.

One thing that I am super excited about is OpenShift CoreOS layering, and this is huge, because we previously took the stance that the worker node image is immutable and is always the same: it comes from us, and you are not supposed to touch it; anything you need in addition to it has to be executed in a container on a worker node on top of OpenShift. So any kind of agent or additional software that you need running there for regulations and compliance requirements had to come from you, as a pod on top of OpenShift. And you all told us that this is not working, that it's too complicated, and that you need the ability to customize the worker node. With CoreOS layering, we are giving you that ability. Karina will talk a little bit more in detail about this later today; the short story is that CoreOS is now a container base image, like UBI, and in the same way you build your applications on top of UBI, you will be able to customize the worker node image by building a new container image on top of the CoreOS base image. In the process, you install your own RPMs, your own agents, additional RPMs from Red Hat channels, whatever you like, and this creates a new image, which is then saved in the cluster; the worker nodes know exactly what changed in those images and will roll the changes out onto the worker nodes. And this doesn't just persist across reboots, it actually persists across cluster updates. So we are giving you a golden-image build process for your own customized worker images that are supported to run OpenShift and will update with the cluster, always staying up to date with it. Really cool; a huge change in direction, finally giving you the ability to customize your worker nodes with CoreOS instead of having to use RHEL. (A sketch of what such a build might look like follows.)
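Based on how CoreOS layering has been previewed, the build is an ordinary Containerfile; a sketch, where the base image reference is hypothetical (in practice it comes from your cluster's release payload) and my-compliance-agent stands in for whatever RPM you need:

```dockerfile
# Containerfile: derive a custom worker image from a (hypothetical) CoreOS base image
FROM quay.io/example/rhel-coreos-base:4.x

# Install an extra agent from a configured RPM repository, then commit the layer
RUN rpm-ostree install my-compliance-agent && \
    rpm-ostree cleanup -m && \
    ostree container commit
```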
We are also standardizing how we manage applications. This already comes as a template from ACM, but, as you know, we are heavily invested in GitOps with the Argo CD project, and we are working to enable additional tenancy models. They go from a very central model, on the left-hand side here, where a single Argo CD instance pushes into all managed clusters; to an Argo CD instance per cluster, which pulls workload and application definitions from a central Git repository and pushes them into different namespaces; or, even more extreme, on the right-hand side, an Argo CD instance per tenant, as in per namespace on the cluster, which is responsible for just one application being deployed and continuously updated. GitOps and Argo CD will also be enhanced to support single sign-on with Keycloak and Red Hat SSO, will get refinements in Helm processing, and will also integrate with HashiCorp Vault later this year.

And before we wrap up, I want to give you a little bit of an outlook on a super interesting project we are working on that takes a completely different approach to multi-cluster. If you think about it, the way Kubernetes is built, the way the control plane works, with its goal-seeking approach, its eventual consistency, and its extensibility, is actually very useful even outside of containers. We already see this, because we have operators in the cluster that don't do anything on the cluster: what they do is communicate with an external service, like Microsoft Azure or AWS, provision resources there, and just report back status. These are sort of Kubernetes controllers, but they aren't doing anything with Kubernetes, except maybe dropping a config map. So you can see that this concept of the Kubernetes control plane is very generic, and we think it is so generic that it can actually automate the entire data center and the entire cloud. We make use of that: we are basically extracting the guts of the control plane, just the API server with very well-known concepts like namespaces and RBAC, but not a lot more than that, and making it really, really small, so small it can run on your laptop. We call that kcp, the Kubernetes-like control plane. And that control plane isn't attached to a single cluster; running on your laptop, it can be attached to multiple clusters that you bring in, by selecting your own clusters or getting clusters from ARO, ROSA, or OSD. The beauty of this is that it's something you can give directly to your tenants: each tenant gets their own little control plane, and this control plane is multi-cluster aware. It can be extended, like we install operators today, with controllers that know how to spread a deployment across three or five clusters evenly; there are controllers that understand that config maps and secrets have to be replicated into all the physical clusters underneath. From the user perspective, this looks and feels like a single cluster, only that the components you see as pods, deployments, and services are actually replicated and distributed in the background, completely independently of any changes in your application, to make multi-cluster a reality. So all your Helm charts, all your Kustomize templates, all your GitOps processes will likely work completely unchanged, but instead of throwing them into a cluster, you throw them at kcp, and kcp manages multiple physical clusters in the background, transparently. We are planning to roll out this service later this year; because every tenant gets their own kcp, and remember, it's extremely small, there's not a lot of overhead in executing it, we can give one to each tenant, and they can connect their existing clusters, or request clusters from ROSA, OSD, or ARO to provide compute capacity in the background. These clusters could sit anywhere in the world, geographically distributed, and kcp is the common front end for them. So this is an exciting project; it's not part of OpenShift yet, it's very early still. Follow it on GitHub to learn more, look at the demo, it's really nice, and keep tabs on it, because this is an interesting space to watch.
With that, we are at the end of our outlook and our insight into how we as product managers are thinking about OpenShift's future: what we plan with multi-cluster, deployment flexibility, and standardization, and even very exotic things like kcp. We hope we gave you an insight into what's coming, and we invite you to join us in the afternoon for the Ask Me Anything session, where we will be able to answer your questions and give you another look at topics we haven't covered today. It's a large platform; as you'll see, there is so much going on that there is no way to cover it all in a single hour. But any questions you have, also on things we haven't touched on, come see us, and we will be there to answer them.