Good morning. Hello, everyone. How are you doing? So we're here today to talk about OpenShift 4. As you know, this is a really special year for Red Hat. In the same year, we were able to release both RHEL 8 and OpenShift 4, with many new features and many new, interesting possibilities. What we're going to do today is go under the hood, show you all these features in detail, show you the future and the roadmap, and also demo it, so you're going to see all these new concepts in action. My name is Thiago, I'm here with Ali, and together we're going to walk you through all of these details. To start, just to give you a glimpse of the key themes around OpenShift 4. First thing here: we want to keep doing what we've done before, which is to provide an enterprise Kubernetes distribution. We're talking about security, reliability, being production-grade and production-ready, many of the characteristics that came from our experience with RHEL, our enterprise operating system, brought to Kubernetes. But on top of that, we're now bringing many new capabilities. I'm talking here about automated operations, which means that customers and users of OpenShift will now have the same cloud-like experience: you're going to use OpenShift and feel like you're using a cloud, with self-service and a catalog, all of this powered by operators, a concept we're going to talk about a lot today. We're also keeping all the features that are important for developers to keep using and creating new applications, and bringing many more capabilities on that front. And we keep bringing OpenShift as a whole solution that is ready for small companies, enterprise companies, and global companies to deploy on premises, in the cloud, anywhere, in a hybrid mode. The first thing to say about OpenShift 4 is that we're bringing a new paradigm here. 
When I started at Red Hat, back in 2013 now, we had OpenShift 2, and at that time the container orchestration technology was completely different. Then we started to develop OpenShift 3, based on Kubernetes. And when we released it in 2015, it was incredible, astonishing, because nobody at that time was using Kubernetes. Now it's easy: everyone knows Kubernetes; everyone, when they think about containers, knows about this technology. But at that time, nobody knew it. The community version was not even 1.0. Everything was very new. So we made a big bet at that time, and it was a good decision. And now, when you start to think, okay, so we have a team that's winning, should we keep it as it is? Well, we're making a big bet again. We're continuously moving, improving, and changing things. Not only are we continuing to bring Kubernetes, but now, with this concept of operators, we are able to manage not only the container platform but also the infrastructure below it. What I mean by this is the operating system and also, optionally, all the hardware, all the infrastructure components. With OpenShift 3, you would have two maintenance windows, one for OpenShift and the other for RHEL. So you had to, for instance, patch RHEL and then patch or upgrade OpenShift at two different moments. Now we're doing everything as one cohesive unit. Everything is managed by OpenShift, including the hardware itself if necessary. And we can do this because we modeled all the components based on what we call operators, and we're going to explain this in much more detail. All the components that make up OpenShift are based on operators. We had to write 42 operators that expose APIs so we can control all the infrastructure, all the machines, the operating system, and obviously all the container orchestration components on top of it. 
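As a rough illustration of what "modeling components as operators" looks like in practice (everything below is hypothetical, not an actual OpenShift API), each operator exposes a declarative object; a user or another operator writes the desired state, and the operator's reconcile loop drives the system toward it:

```yaml
# Hypothetical custom resource managed by an operator: you declare the
# desired state, and the operator reconciles the real system toward it.
apiVersion: example.com/v1alpha1
kind: DatabaseCluster
metadata:
  name: my-db
spec:
  replicas: 3            # desired number of database pods
  version: "4.0.3"       # desired software version; changing it triggers an upgrade
  backup:
    schedule: "0 2 * * *"   # day-2 operations the operator automates for you
```

The same pattern, applied to machines, the OS, and the control plane itself, is what lets the platform manage itself as one cohesive unit.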
To go into a bit more detail, especially about operators and new features, I'm passing to Ali, who's going to give you a glimpse of the future with OpenShift 4. Thank you, Thiago. Hello, everyone. My name is Ali Mobrum. I am a product manager on the OpenShift team at Red Hat. I'm very excited to be here to speak with you all today, so thank you for coming out. Like Thiago said, OpenShift 4 is a brand new paradigm for us. We rebuilt it from the ground up using this new concept, operators. For people who don't know what an operator is: an operator is a runtime that manages Kubernetes applications and services. What we do is take the knowledge from the support engineers and put those smarts into the operator. So when you need to install, when you need to upgrade, when you need to do some troubleshooting, we're building that logic in. As Thiago said as well, we're taking a holistic view. We're looking at OpenShift from the bottom up, from the OS all the way to the applications and everything running on there. In Kubernetes, when you're managing an application, you tell Kubernetes: I want the desired state for my application to be this. And if it ever deviates, Kubernetes' job is to get that application back to the desired state. But now, because we're doing this holistically and using operators, we can use Kubernetes to manage Kubernetes. We're actually pushing the boundaries here a lot with Kubernetes, and it's absolutely awesome; I don't know anyone else that's doing this yet. That's why OpenShift 4 was such a big jump for us from OpenShift 3. Another piece I want to talk about: at Red Hat we talk to our customers a lot, and we really appreciate the feedback that we get from you. Some of the feedback we got was that OpenShift 3 can be difficult to install sometimes, hard to upgrade, and hard to scale out the cluster. 
So we heard that loud and clear, and when we designed OpenShift 4, we wanted to address all those issues; that was a primary concern for us. Another thing we got a lot of feedback on when we talked to customers was the hybrid cloud. Hybrid cloud is very important for our customers. They want to be able to run their workloads on-prem, in various different ways, and on cloud providers, and they may want to move from one cloud provider to another. Our customers told us they're worried about vendor lock-in, especially with the cloud providers. So when we created OpenShift 4, one of our primary goals was to enable our customers to run their workloads wherever they want and to have the same type of interface everywhere. Now let's dig into the details here. Here I have the different types of installations. The first one you see is the full stack automation. This is what we call our one-click installer. You just give us your credentials to AWS or GCP or whatever, you click the button, and in a little over 20 minutes you get a brand new HA cluster right out of the box with all the best practices. All the hard work is done for you there. Again, we listened to our customers, and they gave us feedback and said: look, we have all this existing infrastructure. We already have DNS and VPCs set up, we have all the security stuff, and we need a little bit more flexibility. So we created the pre-existing infrastructure path, which is much more flexible. It's a little bit more complex, because we're giving you that flexibility, but it lets you put OpenShift into your environment so OpenShift will meet your needs. That was our primary goal there. Again, listening to customer feedback, and I'd love to talk to everybody today; if you get a chance, come find me after this and tell me your opinions on things. I would love to hear them. 
We also have a couple of other offerings. We have two hosted offerings: OpenShift Dedicated and OpenShift on Azure. They're pretty much identical, except that for the Azure offering we actually do joint support with Microsoft. So for OpenShift Dedicated you get Red Hat engineering support, but for Azure you get both. Now, I wanted to put this chart up for everybody as a comparison slide. It shows you the difference between the full stack automation and the pre-existing infrastructure: you see what the user's responsibility is, and you see what the installer's responsibility is in the full stack automation. The one big thing I want people to take away from this is that with pre-existing infrastructure, you have the ability to use RHEL 7 for your worker nodes. So if you need to use RHEL 7, you can. We're also planning to support RHEL 8 there in the near future. All right, so this is an exciting slide. This is our provider roadmap showing you all the platforms we're going to support. Currently, OpenShift 4.1 is out, and we support Amazon Web Services for full stack automation, and AWS, VMware, and bare metal for pre-existing infrastructure. For 4.2, we're adding support for Microsoft Azure, GCP, OpenStack Platform, bare metal, and Red Hat Virtualization for the full stack automation, and we're adding GCP for pre-existing infrastructure as well. And then in 4.3 you're going to get Alibaba Cloud and IBM Cloud, with virtualization support as well. Okay, so this next slide is important, because there's a new thing called CRI-O as a container runtime interface. In OpenShift 4, we do not use the Docker runtime; we're actually using CRI-O, a lightweight, Kubernetes-native runtime for containers. What this means, though, is that you can still run any Docker container. They both use the OCI standard, so they're interoperable. 
You don't have to worry about modifying any existing programs or applications you have; they'll work right out of the box. You also get a couple of cool command-line tools that come with it: Podman and Buildah. If you get a chance, take a look at those. All right, so for OpenShift 4, one of our biggest things, like I said before, is hybrid cloud. We need to be able to support our customers on any platform, on-prem or in the cloud, and allow them to move their workloads as needed. In order to support this, we created something called cloud.redhat.com. It's a nice portal that allows you to manage all your clusters, and it'll have all your subscription items there. And expect a lot of stuff here, because we're planning to really increase the functionality. I'll tell you why: because it's now so easy to create clusters, upgrade clusters, and scale out clusters, our customers are starting to create many more clusters instead of just having a few. So we want to provide you the tooling to manage the increased number of clusters you're going to be creating. At cloud.redhat.com we have the OpenShift Cluster Manager. Every time you create a cluster, it registers back to the cluster manager. This will be your single source of truth for all your clusters. We send back some data, a little bit of telemetry: the status of the cluster, how many CPUs, how much memory, so the utilization of the cluster. And like I said, in this area we're planning to add a lot more functionality. We're probably going to add the ability to use KubeFed, so when you want to install an application or an operator to many clusters, you'll be able to do it from here. So look for stuff like that in the future. It's a very exciting area. All right. So this is a big one. 
Operators all the way down. We talked about how we rebuilt OpenShift completely with operators. Everything is built on operators, and for a good reason: it lets us automate a lot of stuff. And the nice thing in 4 is that we actually surface that information to you. If you go to the admin console and go to cluster settings, you get the list of all your operators. You can see their current health, you can see their messages, and if you do an upgrade at any point, you can see the status of those items changing. Those operators actually send us back status so we can make sure that your cluster is in a healthy condition. Now, in OpenShift 4 we also have a global configuration area where you can configure clusters. In OpenShift 3 there were probably a bazillion different flags and configurations you could set, which allowed people to shoot themselves in the foot. With OpenShift 4, operators are very precise about which flags and configurations we expose, and all of those are exposed here on the global configuration page. This is actually something I want feedback on from people today at some point. We're being kind of rigid about this, because we want to be very particular about what we allow people to modify, but we want to make sure we're not too rigid. So when people start using OpenShift 4, I'd love to hear back whether we need to give you more flexibility and expose more configurations for the cluster. Okay. So the next thing we have is over-the-air updates. Because we're now using operators and RHEL CoreOS, which is immutable, we know the state of the cluster at any given time. And because we know the state of the cluster, we can say: hey, we need to take the cluster from state A to state B at any point. And we can now do that very easily. 
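That "state B" target is itself just a declarative object on the cluster. As a sketch, based on the OpenShift 4.1 API (field names may differ in later releases), the cluster-wide version object looks roughly like this:

```yaml
# Sketch of the cluster-wide version object (config.openshift.io/v1);
# setting desiredUpdate kicks off the over-the-air update.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.1      # update channel: stable, fast, or a pre-release channel
  desiredUpdate:
    version: 4.1.8         # the "state B" the operators drive the cluster toward
```

Editing the channel or the desired version in the console is what writes this object under the hood.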
We actually have something called the cluster version operator, which is a master operator that manages all the underlying infrastructure and core operators that make up Kubernetes and OpenShift and maintains their versioning. So if we ever want to go from, say, 4.1 to 4.2, or 4.1 to 4.1.3, whatever, we can now do that very easily for you. Okay. So we've talked about installation and how we solved that, and we've talked about how we do over-the-air updates, so upgrades are very easy now. The next thing I want to talk about is how we grow your cluster. There's a new thing out there called the machine API. The machine API allows us to manage your nodes and your machines via Kubernetes. There's a definition for a machine, and there's also a definition for a machine set. I want you to think of a machine set as the same as a replica set, but for machines. You can define different types of machines, and when you need to autoscale for a certain type of workload, you can do that. For example, if you have a workload that needs a lot of GPU, you can define a machine set that says: hey, use this type of machine with this OS, with lots of GPUs on it. And if your cluster doesn't have the existing capacity, it can autoscale up, run your workloads, and then scale the extra machines back down to the desired state you have set for your cluster. Really cool, very powerful, and it's now available for everybody to use. All right, so I talked a little bit about machine sets, and I want to go a little further into it. You could even do machine sets for infrastructure, right? 
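To make that concrete, here's a sketch of a machine set dedicated to infrastructure workloads, assuming the machine.openshift.io API on AWS (the providerSpec details are abbreviated and provider-specific):

```yaml
# Sketch of a MachineSet for infrastructure nodes; replicas work like a
# ReplicaSet, but each replica is a machine instead of a pod.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: infra-us-east-1a
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: infra-us-east-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: infra-us-east-1a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/infra: ""   # node label a nodeSelector can target
      providerSpec:
        value:
          instanceType: r5.2xlarge            # high-memory machines for logging/monitoring
```

A workload with a nodeSelector matching the `node-role.kubernetes.io/infra` label would then land only on machines created by this set.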
So you could have your Elasticsearch, you could have Prometheus for monitoring, your router, your registry; you could define these different machine sets and say, hey, I just want to run my infrastructure items on this. Even with metering and chargeback, you could define a specific type of machine set. And then with node selectors, you can drive the correct workloads to the infra machine sets you've created. I quickly wanted to show you a possible architecture diagram. This is for AWS. As you notice, we have a control plane with three masters, and then for logging and monitoring we have some r5.2xlarge instances. Those are high-performance machines, and we specifically put those there because of the throughput, memory, and CPU that those types of applications need. Then you'll notice that there's routing and workers. They're both m5.large, but they're separated into different machine sets. The reason we do that is that maybe the routing machines need higher security, because they're exposed directly to the internet, and they have different configurations. So even though you have the same type of machine, you have a different profile on those machines for your specific infrastructure or your workloads. Okay, cool. So something I wanted to talk about today as well is cluster monitoring. Cluster monitoring is now a core component of OpenShift: it has to be there, you cannot turn it off. It's based on Prometheus, and the reason it's a core piece is that, out of the box, we've added a ton of metrics in there. We talked to the support engineers, we know what the good boundaries are for your cluster, and we've built those in. And because all those metrics are there, you get horizontal pod autoscaling right out of the box. 
So you don't have to do the tedious setup for that anymore. Again, it's all about automating and simplifying your lives and giving you the most bang for the buck. In OpenShift 4, in order for us to give you the best service and support, we've added some telemetry. We send back data like how many nodes are in your cluster, how much utilization they have, the operator status, and also upgrade status: how did your upgrade go. We want to know if there's an issue with your cluster so we can proactively reach out to you and help you resolve it if there's anything there. Something also new for OpenShift 4 is metering and chargeback. We now have the ability to plug into cloud providers' APIs, and we can give you reports on how much your spend is and so forth. We already have a lot of reports out of the box for you, and you have the ability to create custom reports as you want as well. If you look at the bottom, there's a matrix there: CPU, memory, and storage; requests and usage; pod, namespace, and node. So you could do pod usage on a node, or pod usage on a cluster, or storage requests for a namespace. These types of reports are going to be there for you, and they should handle about 80% of the use cases. Again, we talked to a lot of customers and got feedback on what kinds of reports they wanted, and we have those out of the box for you. The next thing I'd like to talk about is extending the platform. So far we've talked about the infrastructure, running it, and the core pieces. Now, one of the pieces of feedback we got from customers was that they felt OpenShift was bloated. So what we did is really slim it down to a base install, and then we enable people to add functionality as they need it. In the base install, you're going to get the console and auth. You're going to get monitoring, which I just spoke about. 
You're going to get over-the-air updates, and you're going to get machine management, essentially the ability to scale out your cluster. Optional items are the service broker and optional OCP components; for example, logging is an optional OCP component, and metering is an optional OCP component. And the way you add those is you go to the OperatorHub. All these add-ons are in your OperatorHub catalog. So if you want to add additional services from us, from the community, or from third-party ISVs, you go to the OperatorHub and enable everything there. Here's an image of the OperatorHub, and what I want you to know is that there are two versions of OperatorHub. You're going to have a local OperatorHub on your cluster. As an admin, you can see it; your users will not be able to see it, only an admin can. So once an admin decides, you know what, I want to offer Mongo to my entire cluster, you can do that. If you want to say, hey, I want to offer Couchbase just to these namespaces or these projects, you can do that as well. A lot of operators let you decide whether it's a cluster-wide service or a specific service you offer for a project. There's another operator hub as well, which is OperatorHub.io. That is a community project; anybody can upload their operators there. We encourage people to create operators for the services they want to run on Kubernetes and to share them with the world, and that's a great spot to do it. So, like I talked about with the OperatorHub: once you enable a service, how do your users consume it? Users consume it via the developer catalog. Before, the developer catalog had service catalog items, templates, and the source-to-image build stuff in there, but now we're adding the operator services as well. So your developer catalog is a one-stop shop to grab everything you can consume. So this slide's pretty cool. 
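As an aside, that cluster-wide versus per-project choice Ali describes is controlled by an OperatorGroup object in OLM; a sketch (names here are illustrative):

```yaml
# Sketch: an OperatorGroup (operators.coreos.com/v1) scoping an operator
# to a single project; omitting targetNamespaces offers it cluster-wide.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: couchbase-group
  namespace: my-project
spec:
  targetNamespaces:
    - my-project      # the operator only watches this namespace
```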
Because of this operator framework, we now have the ability to make a cool console. Like I talked about, on the left side you have the dev catalog. When you enable an operator, the services show up automatically. For certain operators, you can add a link to the external application launcher; for example, if someone installed Service Mesh, they can put a link directly to the Kiali UI from there. Or, if you're interested in container-native virtualization, if you enable CNV, all of a sudden you get the ability to manage VMs within OpenShift right next to your containers. So you're going to see all that kind of good stuff. Now I'm going to hand this back to Thiago, and he's going to talk to you about the broad ecosystem of workloads available to you using this. Okay, great. And the slides. All right, so the idea here is that we showed the platform itself, but of course a platform is nothing without an ecosystem of applications, technologies, and software running on it. The first thing is that with operators, beyond the ones we built to manage the platform, third parties, yourselves, your companies can build operators to manage and take care of the applications that will be running on the cluster. You can take, for instance, an existing Helm chart and convert it to a Helm operator, but you can also use other technologies such as Ansible playbooks, and even programming languages like Go, to create operators that take care of your applications. We defined a maturity model. Very basic operators just do installation; that could be using Helm charts, which are very simple. But then you can go into much more detail, such as day-two operations like backup, restore, metrics, analytics, and logging, all the way to what we call autopilot, which means the operator takes care of everything: you don't need to do anything. 
An operator, if you think of a support engineer, is the knowledge they have about managing an application, built into software. That's an operator. So you can do many, many interesting things with operators. Operators are what we call first-class citizens, because they do all of this; in practice, an operator is software running as a pod in a container, a long-running process, taking care of your applications. But if you think about it: okay, operators take care of my application, but who takes care of the operators? That's the role of the Operator Lifecycle Manager. The idea is that it gives operators all the requirements, all the features of the cluster that the operator needs to do its job: deployments, roles, permissions, et cetera. That's what the Operator Lifecycle Manager does. And the best thing here is that with the Operator Lifecycle Manager, you bring a catalog; think of it as an app store. You have your operators in an app store, you download the catalog, and then you can attach what we call a subscription, which means you can create interesting rules, like: I want this operator to be as up to date as possible. The idea is to make sure the applications on your cluster are easily updated and always on the latest version, either automatically, or configured to require the approval of a cluster administrator. So that's what the Operator Lifecycle Manager does. Finally, another way to extend the platform is obviously by creating container-based applications. We are now offering a new possibility here. We just launched the Red Hat Universal Base Image, or UBI, which is a very small, lightweight, RHEL-based image for containers. It comes in different flavors, like for .NET, PHP, Node.js, and others. It has all the security and performance features of RHEL, but it's very small. And here's the most important part: it's freely distributable. 
So you can use UBI as long as you want, send it to your partners and your customers; ISVs will use it to create their applications. And it's free to use. The idea is that once you put UBI on top of RHEL, on top of OpenShift, then as a Red Hat customer you're obviously going to get much more value from our subscription, with many other capabilities, support, and all the help from Red Hat. So this is really interesting. And finally, we're going to talk about how we empower developers to create the applications of the future, the new cloud-native applications. We now have a new CLI. I think most of you know the kubectl and oc command-line tools. They're great, but they're modeled around Kubernetes objects. So when developers are doing their job, they always have to translate what a command means for their application. So we created the odo CLI, which is really focused on the operations the developer needs to do. And it's modeled in a way that's like Git: there's an odo create, an odo push, an odo watch, familiar for those who work as developers using Git. Another thing, and Ali talked about that, is that although we're still delivering Jenkins, and we'll continue to do so, I want to pause here and ask you a question. How many of you are using Jenkins in your organizations? Raise your hand. Okay, so a lot of you. And how many of you really love working with Jenkins? So I think you understand why we came to this and created these new Tekton pipelines. Again, we're still doing Jenkins; it's important to keep evolving Jenkins. However, this new model, Tekton, is based on cloud-native principles. Jenkins was very centralized: you have to configure everything in a central model, plugins and configurations. Tekton is distributed. Each team owns its pipeline and its configuration, and can do multiple types of pipelines, different pipelines for each team. 
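That distributed model means a pipeline is just another Kubernetes object a team keeps in its own namespace. A sketch, assuming the tekton.dev/v1alpha1 API of the time, with hypothetical task names:

```yaml
# Sketch of a team-owned Tekton pipeline; each team can keep a definition
# like this in its own namespace instead of a central Jenkins server.
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  resources:
    - name: app-source
      type: git
  tasks:
    - name: build
      taskRef:
        name: buildah-build          # hypothetical Task that builds the image
      resources:
        inputs:
          - name: source
            resource: app-source
    - name: deploy
      taskRef:
        name: deploy-to-openshift    # hypothetical Task that rolls out the app
      runAfter:
        - build
```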
So that's the idea with the Tekton pipelines we're going to ship with OpenShift now. Another really interesting thing here is Knative, an open-source project. The idea is to bring serverless capabilities to OpenShift. What I mean is that you can scale your containers down to zero. Imagine you have a container-based application and it's not running on OpenShift yet; the number of replicas is down to zero. Then a request comes in. The container starts automatically, does its thing, and scales to many more replicas if needed. And then, when there's nothing else to do, no more requests, it goes down to zero again. This is important for resource consumption: you'll make better use of the compute, memory, and storage resources on your cluster. So this is something really interesting that we're developing now. And also, every time I talk to developers, this is the main thing that everyone wants to know about: the Istio project. In a few weeks, three to four weeks from now, we're going to deliver OpenShift Service Mesh, which comes at no additional cost on top of OpenShift and is based on the Istio project. The idea is to create a network that connects and controls the traffic between different microservices. You get control, so you can define which microservice can talk to which other microservices and how the traffic will flow; visibility, so you can see which requests go to different microservices; and obviously more advanced deployment techniques such as canary and A/B, so you can have multiple versions of microservices running and manage the traffic flow between them. This was something that was possible to do before, but it required libraries specific to a language, or you had to modify your application to do it. 
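The canary routing Thiago mentions is expressed declaratively in Istio. A sketch with a VirtualService splitting traffic between two versions of a service (service and subset names are illustrative):

```yaml
# Sketch of a canary traffic split (networking.istio.io/v1alpha3):
# 90% of requests go to subset v1, 10% to the new v2.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Shifting the weights gradually toward v2 is what turns this into a canary rollout, with no change to the application itself.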
Now we're delivering this on top of OpenShift, so it's available for any kind of application. You don't need to change the application to start using this capability, so it's really interesting. Finally, CodeReady Workspaces. This is a web-based IDE, so developers can start programming at the click of a button: CodeReady creates a web IDE with everything set up and configured to start developing containers. And it's interesting because it allows a lot of collaboration. If a developer is coding something and gets stuck, they can send a link to another developer on their team, and this will instantiate an IDE for them, and they can work together. One developer can even see what the other is doing live in the IDE, and obviously it's all based on containers. So, well, we've talked a lot about OpenShift 4, but I know that many of you are using OpenShift 3. Obviously you can start installing OpenShift 4 right now; 4.1 is available. But for those who are thinking, I want to migrate, I have my production cluster, my test cluster, I want to migrate it to 4, how do I do this? Well, we are working on a new migration tool based on an open-source project called Velero. The idea is that you can select namespaces, persistent volumes, and other components like stateful sets, deployments, et cetera, define, oh, I'm on version 3.7, 3.9, or 3.10, click, and then it moves to OpenShift 4.2. It does the migration automatically; you can even stage it and test it before doing the migration. And it's important to explain: this is not an in-cluster upgrade. You're not upgrading an existing cluster. We're doing this in a migration fashion, and we chose this because, as we've told you here during the whole day, it's a completely different architecture. It would be too risky to upgrade from version 3 to version 4 directly in the same cluster. 
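Conceptually, the Velero project the tool builds on works from backup objects that select what to move. An illustrative sketch (the actual migration tool wraps this with its own resources and UI, so treat the names as assumptions):

```yaml
# Illustrative Velero Backup (velero.io/v1) selecting the namespaces and
# resource types to capture before restoring them into the 4.x cluster.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: migrate-my-apps
spec:
  includedNamespaces:
    - my-app-prod
  includedResources:
    - deployments
    - statefulsets
    - persistentvolumeclaims
```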
So we went with this migration strategy, and we believe it will be much better for you to migrate clusters. You can even, if you want, run both versions for a while: maybe production stays on 3 while you start developing on 4, and when you're comfortable with all the new concepts and features, you move to version 4 and migrate all the applications. So I'll pause here. Here's the roadmap. As you can see, there's a lot going on for the next six months. We already delivered version 4.1; in a few months we're going to deliver version 4.2, and at the beginning of next year, version 4.3. We're going to send these slides to you so you can get into more detail on each feature that is planned for each version. And everything we told you about, you can try now: go to try.openshift.com, and in a couple of minutes you can instantiate a new cluster and start playing with these new features. But now let's show you a real quick demo of OpenShift 4 in action. Ali will move over and show us the real thing. Awesome. Thank you, Thiago. All right. So on the screen you see an OpenShift 4.1 cluster. One of the things I wanted to show you is the new console. It's based off the Tectonic CoreOS console; we took that as a base and then enhanced it. One of the cool things I want to show you is cluster settings. This is where you come to do upgrades. As you see here, we have channels. On the channels you can say, hey, I want a nightly build, pre-release, or stable, depending on what type of cluster you have. You can set that. And then in the update status, you can click here to select what version you want to upgrade to, right? So let's just pick one of these and click update, and the update will start proceeding. Below here, you can also see the version history, right? 
So you can see I started with a 4.1 release candidate, then it went to the official 4.1, and then 4.1.2. And now we're upgrading to 4.1.8. You can see it's downloading the updates here. So that's it. That's how you upgrade an OpenShift 4.1 cluster. The next thing I want to show you is the OperatorHub, to give you an actual feel for the marketplace here. So let's say we want to install MongoDB. Actually, I already installed MongoDB, so let's install something else. Okay, this is Couchbase. You come in here, and sometimes it'll list prerequisites, so you get information about this operator and all the supported features. You go ahead and click install. Now this operator gives me options. It says, hey, do you want to install it to all namespaces in the cluster, or do you want to pick a specific namespace? Right? If the operator has more than one channel, preview, nightly, whatever, those would show up here as well. So we're giving third-party ISVs the opportunity to do upgrades for their applications running on OpenShift, just like we're doing for the cluster itself. And then you get the approval strategy: do you want this to upgrade automatically, or do you want the admin to come and manually approve the update? So let's just subscribe to this. And as you see, it starts the install process. Let's quickly go back to the cluster update. As you see, it says working towards 4.1.8, 13% complete. If we go to the cluster operators page, these are all the operators that we have here, and once the images are all downloaded, you'll see these getting updated live as well. Going back to OperatorHub, I want to show you the installed operators now. This is the list of installed operators you have, and you can scope it by namespace. If I go to all projects, you're going to see something interesting: Couchbase is copied to every one of the namespaces.
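Behind that subscribe form, Operator Lifecycle Manager records your choices (channel, target namespaces, approval strategy) in a `Subscription` object. A sketch, with the catalog source and channel names as illustrative assumptions:

```yaml
# Roughly what "Subscribe" creates: an OLM Subscription capturing the
# channel and approval strategy chosen in the console form.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: couchbase-enterprise
  namespace: openshift-operators   # the "all namespaces" install mode
spec:
  channel: preview                 # update channel chosen in the UI
  name: couchbase-enterprise       # package name in the catalog
  source: certified-operators      # catalog source (illustrative)
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic   # or Manual, for admin-approved upgrades
```

With `installPlanApproval: Manual`, OLM still detects new versions on the channel but waits for an admin to approve the resulting install plan, which is exactly the approval-strategy choice shown in the demo.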
Okay, so one last thing I wanted to show you is a brand new 4.2 cluster. We have a brand new dashboard here, with a lot of great new functionality, and I wanted to show it real quick. We have this new thing called the API Explorer. For every API or Kubernetes resource available, you can now select it, see some details about it, and see the schema. You can actually drill down into the schema to find out what the different values are, and then you can see all the instances of that type of object in your cluster. One of our goals for the OpenShift console is to educate admins and give them as much visibility into the cluster as possible. Every release, we're adding functionality and features to make it easier and more accessible for everybody. Awesome. Thank you so much. Yeah, thank you so much. I hope this was useful for understanding the future of OpenShift. Thank you for your time, and we'll continue with the next presenter.