I want to show you the concept of a custom resource definition. We're going to do that by using Kafka on our Kubernetes cluster, launching Kafka into OpenShift. So here's my OpenShift console; we'll come back to this in a second and show you some things. But first, let me take you out to a vanilla Kubernetes, which in my case will be Minikube. I just launched Minikube a few moments ago, and you can see it running right here. Let's do a kubectl get nodes as an example. This tells me how many master and worker nodes we have all together; you can see the role of master right there, and this is Kubernetes 1.18 for my Minikube. But if I do a kubectl get crds, for custom resource definitions, you'll see there are none. There are no custom resource definitions; this is a bare-bones Kubernetes. And if I say kubectl get pods --all-namespaces, you can see there's not actually much running in Minikube by default: there's CoreDNS, etcd, the API server. You've got to have etcd, you've got to have the API server, you've obviously got to have your controllers, and of course your scheduler. Those are the standard Kubernetes components, but that's Minikube's bare-bones implementation. Not much going on there.

Now let's take you into my OpenShift cluster. I go over here to my OpenShift cluster, hit Ctrl+C to clear that up, and run kubectl get nodes; in this case we're looking at the nodes of my real OpenShift cluster. You can see there are multiple masters and multiple workers, and it's running on Amazon for me right now. And if I say kubectl get crds, watch what happens: a long list of custom resource definitions comes out of the box. A bunch of these are out-of-the-box custom resource definitions that extend base Kubernetes with OpenShift capability, things like Ingresses, Routes, Thanos, Prometheus, Grafana, lots of cool things there. But the ones I'm interested in right now come from kubectl get crds | grep kafka. As I mentioned at the beginning, you can actually have Kafka brokers as a core part of your architecture; in other words, Kafka is a first-class citizen of the Kubernetes cluster.

How does that magic happen? Well, let's come over here to my console and build a new project, or namespace. I'm going to call it burr for now, just to give it a name. Okay, so there we have the burr project, and now I'll click on Operators and then Installed Operators. I already have a bunch of operators in this cluster, and you can see those cluster operators are being installed into this namespace as we sit here; they're rolling in. I have Elasticsearch, Jaeger, Kiali, and Service Mesh, which is your Istio capability. I also have Serverless, which is your Knative Eventing and Knative Serving capability. And then I have one called AMQ Streams, which is the Kafka capability, and that's the one I'm interested in right now. Again, Kafka and all these other custom resource definitions, implemented as operators with custom controllers, are now part of this namespace and part of my overall cluster.

So if I want a Kafka broker, I can just click on Kafka here and say Create Kafka. I can look at it in the YAML view or the form view; I'll stick with the form view for now.
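For reference, here's roughly what that form view produces under the hood: a minimal sketch of a Kafka custom resource following the Strimzi/AMQ Streams CRD. The exact apiVersion and defaults depend on the operator version you have installed, and the ephemeral storage shown here is a demo-only assumption, not what the console necessarily generates.

```yaml
# Sketch of a Kafka custom resource, roughly what the "Create Kafka"
# form generates (apiVersion and defaults vary by operator version).
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster          # the cluster name shown in the console
  namespace: burr
spec:
  kafka:
    replicas: 3             # three brokers, the production default
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral       # fine for a demo; persistent storage for production
  zookeeper:
    replicas: 3             # three ZooKeepers
    storage:
      type: ephemeral
  entityOperator:           # deploys the operators that manage topics and users
    topicOperator: {}
    userOperator: {}
```

Because Kafka is just another resource type here, applying a file like this with kubectl apply -f would work just as well as clicking through the console form.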
In the form you can see there are some very special parameters I can set for Kafka, like the number of brokers and the number of ZooKeepers: three and three, three ZooKeepers and three brokers, as a default for a production Kafka, and that's set up there. I'll just take all the defaults, there's nothing really to change, and hit Create. Okay, so that's creating, and you can see it's called my-cluster there. If I come over and say kubectl get pods -n burr, for the burr namespace, you can see my three ZooKeepers coming up right now. Let's go ahead and put a watch on it: you'll see my three ZooKeepers spin up, assuming there's enough resource availability on this cluster, and then I get my three brokers.

If I come back over here to my console, click on this cluster, and look at Resources, I can see it from that perspective as well. You can see there are some secrets and certs that had to be created. There's a StatefulSet, which represents the Kafka brokers; there's the Kafka bootstrap service, which of course is how you'll connect to it; and there are the ZooKeepers. All of those come to life right now. So there are my ZooKeepers and there are my Kafkas. You can see it's at 0/2; we have to wait for it to go to 2/2. Once those come online, you'll then see another operator installed, and that operator is responsible for managing things like topics and users against that Kafka broker.

Here's what's super cool about this. If I say kubectl get kafkas, right here, you'll see it's called my-cluster. So I can say get kafkas just like I do pods, just like I do deployments. I can say kubectl describe kafka my-cluster, just like I would if it were a deployment or anything else, and there it is, the describe of it. I can get its YAML, I can update it, I can create it, anything you would normally do with a normal Kubernetes or OpenShift component, you can now do with a Kafka broker.

So let's do this one more time: kubectl get pods -n burr. And there we go: there's the operator I mentioned that manages things like topics and users. If I say kubectl get crds | grep kafka, you'll see there's a KafkaTopic type. So now I can declare topics, and I can declare users; a declared topic looks like the sketch at the end of this section. And of course the bootstrap server is already here: if you're familiar with Kafka, run kubectl get services -n burr and you'll see it, the bootstrap server. So you can immediately start communicating with it from whatever Kubernetes client, or sorry, in this case Kafka client, you might have; too many K-words in this case. All of that magic is based on the custom resource definition and the out-of-the-box capabilities you can install from OperatorHub within your OpenShift cluster. Again, you just come into OperatorHub, type in Kafka, and you can see there's one from the Strimzi community as well as AMQ Streams, which is the supported version for OpenShift customers.

Thank you for your time, and I look forward to more videos from us about new key capabilities you'll see in the Kubernetes ecosystem.
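Here's the sketch of a declarative topic mentioned above, using the KafkaTopic type we saw in the CRD list. The topic name my-topic is purely illustrative, and again the apiVersion depends on your operator version; the strimzi.io/cluster label is what ties the topic to the my-cluster broker we created.

```yaml
# Sketch of a declarative Kafka topic (hypothetical name "my-topic").
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: burr
  labels:
    strimzi.io/cluster: my-cluster   # ties this topic to our Kafka cluster
spec:
  partitions: 3
  replicas: 3
```

Apply it with kubectl apply -f and the topic operator we saw in the pod list reconciles it into an actual topic on the brokers, which your Kafka clients can then reach through that bootstrap service.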