My goal today is to give you a basic overview of a few features that have been released in 3.10, which is the latest release, and a small peek into the roadmap that's coming in the upcoming releases during this year and the next year. So let me start from the past. I found this slide in one of the marketing presentations and I quite liked it, because it actually shows that Red Hat and Google and CoreOS have been there with Kubernetes for a long time. Today everybody's on Kubernetes, right? Do you know somebody who's not on Kubernetes? Not really. So today everybody wants to be on Kubernetes, but only those three companies have been there since the beginning, really involved. And as you probably heard, Red Hat and CoreOS are one company now, so there are essentially two companies, Google and Red Hat, who have been contributing since the inception. And I will also have something to say about CoreOS and Red Hat, but first another slide from the same presentation: the goal of Red Hat, and what has always been the goal of Red Hat, is to be independent, to be agnostic to the infrastructure and the technology. If you are running on VMs, if you are running on IBM or something, we would like you to be able to run OpenShift on top of that, right? We don't want to lock you into one thing. Take OpenStack: if you have OpenStack, the integration is very nice, very smooth, but you don't have to have it to be able to use OpenShift. So as you can see, on the horizontal axis is the number of contributions, so we are one of the highest contributors, and we are pretty much the most independent, so you can run OpenShift wherever you want, and we don't lock you into one specific infrastructure or tooling. And this just complements Diane's presentation from the more business-y side. 
So you can see the number of customers growing, and the split between the different industries on the right side, just to peek into how OpenShift is faring in these areas. And a big overview for you of what OpenShift, or the OpenShift ecosystem, actually consists of: there are a lot of different Red Hat portfolio projects and products that run on OpenShift and integrate with it. There are a lot of third-party integrations that we keep adding to the ecosystem, so that's growing as well. And as before, there is Enterprise Linux and the agnostic infrastructure that you can run on. And as you probably heard, there is going to be something called OpenShift 4, which is going to be the marriage of the technology from CoreOS and the OpenShift technology that we had. There are not so many public details to speak about right now, but essentially we will be using integrators, sorry, not integrators, operators from the ground up a lot, and you will see that pattern way more in the future. And it will be way more automated, and the marketplace will be more prominent in the release. And have you heard about operators already? Okay. For those who did not, I have a few slides at the end, so don't worry, we will get to that. So that was the basic ecosystem and business side. Now let's get to something more technical, which is more to my liking than those slides. So in the 3.10 release, which was a few months back, I'm not sure exactly, something like that, we released a lot of features. One of the best ones that I liked was the support for Device Manager. As Diane mentioned already, we have a very active community in the machine learning and AI areas, and Kubernetes is also doing very well in these areas. And Device Manager allows you to manage your workloads in relation to the different hardware devices that you need for your computation. 
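As a concrete sketch of what that looks like (this is not from the talk's slides): a workload asks for a device through the extended resource that the node's device plugin advertises. The resource name nvidia.com/gpu below is the one NVIDIA's device plugin registers; the image name is hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-workload
spec:
  containers:
  - name: trainer
    image: example.com/training-image:latest   # hypothetical image
    resources:
      limits:
        nvidia.com/gpu: 1   # the scheduler only places this pod on a node with a free GPU
```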
So if you know that you need a graphics card for your computation, then OpenShift can schedule your pods in a way that the graphics cards will be utilized in the right way, et cetera. So it will not be just the basic stuff that we know, memory and CPUs, which have been there all the time with load balancing on top of that; you can also manage things like GPUs and utilize them in your containers, and it will also handle the security aspects and all the things you need if you do that in production. So that's already there. And for the future, we will be providing more documentation and information on how to use this stuff and how to utilize it in your applications. If you have any questions, feel free to ask during the presentation. That's probably simpler than waiting for the end. So if you have a question, just raise your hand and ask. Right? Sounds good? Okay. So the other thing is that we are moving forward with, not breaking Docker, but replacing some parts of it with other projects. We have been using Docker as the container technology since we released OpenShift 3, and I guess everybody here is familiar with Docker, right? Raise your hand if you are familiar with it. Okay. Good. And you are actively using Docker containers. Yes. So I think it's pretty cool if you do it on your desktop. It's a very smooth experience, very nice, very easy to do. However, if you start doing it at the scale of Kubernetes or OpenShift, if you do it on servers, it may be slightly more challenging because of the architecture of Docker: having a daemon without really good separation of roles and authorizations, these kinds of things can be quite complicated in production environments. So the idea was to break down Docker, reusing as much as possible of the code that has actually been in Docker, which has been contributed back into CNCF as the containerd project, right? 
It has been on the slide Diane had, and it is being taken up by, for example, CRI-O, which is the project that runs containers in Kubernetes. So instead of having Docker on the machine, you have CRI-O. CRI-O is stripped down to only being able to spin up a container, and to do it in a secure and reasonable way for production environments, right? And it's integrated directly into Kubernetes. So you don't have all the other features that Docker would provide, because you don't need them for running containers, which lowers the footprint of the daemon, but also lowers the possible attack vectors on your system. So that's for running containers in Kubernetes; CRI-O is a Kubernetes project. Buildah is the build part of Docker. So if you want to build a container, you would use Buildah as a command line tool. And then there is Podman, which is like CRI-O, but outside of Kubernetes. So in 3.10, CRI-O is there. I believe it's supported, but it's not the default. By default, you would still be using Docker, but you can switch during the installation and use CRI-O instead of Docker. So CRI-O will be there, utilizing all the different projects that have already been there, but you will not be running Docker, you'll be running CRI-O. There is a switch in the inventory file: when you are installing, just set openshift_use_crio=true, and that should not install Docker but should install CRI-O. Buildah has been available since RHEL 7.5, yes, and it now also supports multi-stage builds and provides much higher compatibility with the Dockerfiles that are out there for Docker. So pretty much all the daily tasks that you do today, you can do with Buildah instead of Docker, but it's a command line tool that doesn't have the nice GUI things, et cetera. So it depends on the use case you have. It's very nice for automation, and if you do things on the server in a script, it simplifies things. 
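To make that concrete, a hedged sketch of the daemonless equivalents of the usual Docker daily tasks; the image name myapp is hypothetical:

```shell
buildah bud -t myapp .    # "bud" = build-using-dockerfile, builds from an existing Dockerfile
podman run --rm myapp     # run the resulting image without talking to a Docker daemon
```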
And all of these projects, Buildah, CRI-O, Podman, share the same code; they are essentially just built in different ways, and they try to reuse as much as possible. Another thing that has been in 3.10 is the Helm operator. So who is using Helm, or who wants to use Helm? Everybody, no, almost no one. OK. So it was one of the big requests that we had: I would like to use Helm on OpenShift. We were saying, hey, it's not so easy, because there is Tiller, and Tiller is the component that you need to deploy Helm charts on the cluster, but it needed cluster admin access to everything. So there was a single component that had access to everything. With the 3.10 release, there is a Helm operator that doesn't use Tiller, so you should essentially be able to take a Helm chart and deploy it using the Helm operator instead of the normal Helm Tiller thing that has been created by the project, and it will follow all the RBAC, that means roles, and security aspects that you have configured in your cluster. So you don't have to allow one component to do anything in the cluster just to be able to use Helm. This is quite a nice feature. Also, the operator SDK has been released. The SDK is essentially a Go package that helps you build operators. I will get to operators at the end of the presentation, but going into the future, operators seem to be one of the main and most important patterns for designing applications for Kubernetes and OpenShift, and the ecosystem is growing every day. So getting familiar with that is pretty important, and you will see way, way more operators in the future with OpenShift. OK, so that was the 3.10 release. What's my time? Very well. I have so much time. So what's coming in the next releases? Who is using the service catalog? Almost no one. OK. So there are two things that will be coming for the service catalog. One is the automatic injection of secrets and information into the pods. 
So today, if you use the service catalog and you provision a service, it will create a secret, and then you have to bind it to some specific pod or create a binding manually. What will be possible in the future, when pod presets are released: essentially, you will have a pod preset, and the pod preset says, if there is a pod with these labels, inject this specific secret into it. So whenever you create a secret with some specific selector, it is possible to inject it into all the pods that have some specific labels. It will be automatic, and there will be no manual steps. So essentially, whenever you deploy a database, you can say, when my database is deployed, just bind it to this application pod, et cetera. So that's for the operations side, or the developer deployment side. Also, today, when you are deploying services, the security aspect of that is not so sophisticated, and the limitation of different roles, who can deploy what, is not as good as it could be. So in the future, there will be much more granular access to services in the catalog: only specific users will be able to deploy specific services and do specific things with specific services. Also for the catalog, we are redesigning the user interface, so it will be more straightforward, easier to navigate. I think it will be quite nice, and it will make the experience with the catalog much nicer. This one is something that Diane was most excited about in the presentation. So you know that today, there is the metrics system based on Cassandra and Hawkular that comes from our JBoss projects, and it's Java-based. But in the community, people have been very keen on using Prometheus. So we have been integrating Prometheus into OpenShift for some time. There already are the endpoints for monitoring, et cetera. But we will also be having a full metrics system based on Prometheus that will be complementary to the Hawkular one. 
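Going back to the pod preset idea for a second: as a hedged sketch, the alpha PodPreset API (settings.k8s.io/v1alpha1) expresses exactly that "pods with these labels get this secret" rule. The label and secret names below are made up for illustration.

```yaml
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: inject-db-credentials
spec:
  selector:
    matchLabels:
      role: frontend        # every pod carrying this label...
  envFrom:
  - secretRef:
      name: my-db-secret    # ...gets this secret injected as environment variables
```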
And that will also provide some other features, like chargebacks and calculations on top of that. So this is coming, and if you want to, you will be able to use Prometheus way more during your operations of the cluster than you are using it today. OpenShift Do, or odo, different names. So this is a tech preview of a tool. Well, tech preview is maybe a bit too strong, more like a proof of concept, for how to deploy applications on top of OpenShift. If you are a developer and you are using the oc client tool, it may be a bit overwhelming, with all the information, all the options, all the stuff that you can do with the cluster. So odo tries to simplify that. Essentially, you just use basic English sentences, and the developer doesn't have to be so familiar with Kubernetes: just saying, I want to create an application, I want to add some storage to my application, I want to push my source code, et cetera, et cetera, instead of doing oc build trigger something, something, et cetera. So it simplifies the lives and the experience of engineers. And now I'm moving to the last topic that I have in my presentation, which is operators. Somebody raised their hand before, but who has actually used operators or built operators already? Yes? 1, 2, 3, OK, mostly Red Hatters or partners. And who is using some kind of operators in production environments or non-production environments? Almost no one. OK, so what is an operator? Before, when you wanted to extend Kubernetes, you needed to change the code base. But as Kubernetes was evolving, its flexibility has also been evolving. So today we have something called custom resource definitions. As you know, we have a pod, we have a service, et cetera, which are the definitions in the YAML files that you can upload into Kube and OpenShift to do something. With custom resource definitions, you can define your own resources. So for example, you could define a resource called PostgreSQL, and it should create a PostgreSQL, right? 
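As an illustrative sketch of that idea (the group example.com and all the spec fields are invented for this example, not a real operator's schema):

```yaml
# Define the new resource type...
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: postgresqls.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: PostgreSQL
    plural: postgresqls
---
# ...and then create an instance of it.
apiVersion: example.com/v1alpha1
kind: PostgreSQL
metadata:
  name: my-database
spec:
  version: "10.4"   # the controller reads these fields and acts on them
  replicas: 1
  storage: 10Gi
```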
Then you need something that can read that YAML file and do something with the cluster: create a pod, create a deployment, create something, do the changes. That is called a controller. You already have controllers in your Kubernetes that are provided by the project, but you can create your own controllers. So if you take the CRD and the controller together, that's called an operator. An operator is essentially: this is how you define your resource, and this is the tool that needs to be running in the cluster to make it happen, to actually materialize the definition in the cluster. And for the future, the pattern says that an operator should be able to manage everything from the start of the service all the way to the undeployment of the service. So, I will stay with my PostgreSQL example. I have a PostgreSQL operator running in my cluster, and I create a custom resource that says PostgreSQL, replicas one, master one, storage this, something, something. I upload the YAML file; the controller sees that and deploys one deployment with a replication controller, maybe a StatefulSet, with a pod running my PostgreSQL server. Now I want to update my PostgreSQL server. So either a human does it manually, or I just change the version of the database server in the definition, and the controller should be able to pick up the change and do the update for me. Now I want to undeploy my cluster, so I delete the definition, and it will undeploy everything that was created. So the whole lifecycle of the application should be managed by changing the definition and the controller doing the work. Essentially, I usually call it the operations know-how written in code. This is what the controller essentially is. So the operator SDK provides tooling in Go that allows you to write the controllers more easily. And they also provide two other things. There is a metering aspect. 
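To make the controller side concrete before moving on: a minimal, framework-free sketch of that reconcile idea in Python. Real controllers are usually written in Go with the operator SDK; the dict shapes and action names here are invented purely for illustration.

```python
def reconcile(desired, observed):
    """Compare the desired state (from the custom resource) with the
    observed cluster state and return the actions needed to converge."""
    actions = []
    if desired is None:                       # the CR was deleted -> tear everything down
        if observed is not None:
            actions.append("delete-deployment")
        return actions
    if observed is None:                      # the CR exists but nothing is deployed yet
        actions.append("create-deployment")
        return actions
    if observed["version"] != desired["version"]:
        actions.append(f"upgrade-to-{desired['version']}")
    if observed["replicas"] != desired["replicas"]:
        actions.append(f"scale-to-{desired['replicas']}")
    return actions                            # empty list means the state already converged

# The controller would call reconcile() every time the CR or the
# deployment changes, then execute the returned actions.
print(reconcile({"version": "10.4", "replicas": 2},
                {"version": "10.3", "replicas": 1}))
# -> ['upgrade-to-10.4', 'scale-to-2']
```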
So if you want to do chargeback and such, then you need to monitor what's happening in the cluster. That's also based on the operator pattern. And the lifecycle management is, as I understand it, an operator for managing operators: something that can help you manage the versions of the operators that manage the applications you have. So in the wild, it's growing, and different companies are actually using the operator pattern to do stuff. We see the adoption happening all over the world, all over the ecosystem, in Kubernetes itself and in OpenShift as well. And OpenShift is adopting the pattern from the ground up, so the future releases will be way, way more based on operators as well. And there is something called the operator maturity model. So over here, you have the different things that an operator can do: it can deploy your stuff, it can upgrade your stuff, it can handle the whole lifecycle of the application, it can give you some insights, and it can manage all aspects of the application completely, from the ground up. So if you only want to do installation and upgrade, you can essentially use the Helm operator and just use Helm charts to do that stuff. If you want to add some more features and manage the whole lifecycle, then you will be able to use an Ansible operator. And if you also want to gather metrics, provide chargebacks and that kind of thing, then you will need the Go SDK and you'll actually write your code in there, and it has to be your application, essentially. And that's it. I still have 13 minutes, but I guess there could be some questions in the audience. So, are there any questions? I'd also like to add: if any of you are interested in creating an operator or using an operator, the operator SIG is really active. There's an operator SIG inside of Kubernetes, and OpenShift has one. 
And we meet the third Friday of every month, which is probably 9 AM my time, which is probably Saturday morning your time or something crazy like that. But everything that we do, we've done a number of presentations: people have walked through how to build an operator, what the OLM is, the Operator Lifecycle Manager. All of that information is on the YouTube channel, which is RH OpenShift on YouTube. So if you want to get started with operators, or you're interested at all, let me know on commons.openshift.org. There's a SIG sign-up page; go to that, and you will get added to the Google mailing list, and you'll get added to the event notifications when there are meetings. It really is the next wave of how we're going to automate operations and make all of these things like databases as a service. And a number of the first ones that are coming on are people like Postgres with Crunchy Data and Redis and Couchbase and all of those guys. So you'll see the usual suspects coming on first, because those are the necessities of life. But there is a lot of great work going on, and there's a great team of folks in the community, not just from Red Hat, who are working on this as well. So when you see a marketplace cropping up on Kubernetes anywhere, it is going to be basically using operators to populate that. So if you have a service, this is the place you should be studying and working and looking at pretty closely right now. You mean an operator on top of multiple clusters? I have no idea, actually. Technically, yes, because an operator is just an application that talks to your APIs, so you can have an operator that has access to multiple clusters and do it. Whether that's a best practice or a good practice already, or not, I am not sure. So technically, sure. Whether it's reasonable to do, I will not be answering that question. Might be. Maybe Ansible could be used for this kind of cross-cluster management of stuff. 
They're different tools, and it would depend on the use case you have. In the presentation that I stole some of my slides from, there is one slide on federation, so you can also look into that and take it to your customers. All right, there's a question here. Different regions and zones? So today, you can label your nodes with different labels and then use a node selector to specify the placement. Well, you can do almost anything in the web UI, pretty much. So you would need to label the nodes, and then either use a node selector, pod affinity, node affinity, or anti-affinity to actually place the pods on those specific nodes. This is, I think, not exposed in the GUI; I think you would need to edit the YAML to do that. But you can edit the YAML from the interface, for sure. You can go and edit it. If it's in the YAML, you can edit it, yeah. Or a config map. Let's see. Yeah, you can modify it from the web console. OK. So it's getting more exposed in the web interface, actually. That was my dream, that I could do everything from the web console. Nobody else? Just clicky-clicky, right? Well, let's discuss it after the presentation, and if you describe to me more what use case you actually have, maybe we can figure out what would be the best approach to that. And I already see another question over there. We'll get you up here sharing your case study and how they did that at some point. OK. Or we could do it as an OpenShift Commons briefing, and then everybody could hear. That would be great.
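As a footnote to that regions-and-zones question: the label-plus-node-selector approach mentioned above can be sketched like this. The label key and value and the image name are hypothetical; first you label the node (for example, oc label node node1.example.com region=east), and then the pod constrains itself to that label:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: region-pinned-pod
spec:
  nodeSelector:
    region: east            # only schedule onto nodes labelled region=east
  containers:
  - name: app
    image: example.com/app:latest
```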