Hey everyone, my name is Jason Dobies. I'm a TMM in the Cloud Platforms Business Unit at Red Hat, and today we're going to take a look at using the Operator Framework to install applications into our OpenShift 4 cluster. OperatorHub is a new tab in the UI that gives us access to all of the operators that can be installed into our cluster. Once they're installed, we can use them to provision custom resources, depending on what the operator provides for us. For this example, the operator will run only inside a specific namespace, as opposed to being installed cluster-wide, so we're going to install it into a demo namespace we've created.

The AMQ Streams operator gives us access to the data streaming platform provided by Red Hat. It's based on the Apache Kafka project, and it provides a number of services and resources for managing not just the creation and running of a Kafka server, but all of the extra pieces surrounding it: configuration, supporting services, and the resources that run on the Kafka server itself.

When we switch over to our demo operator project, we'll see the operator has been installed successfully, so we'll take a look inside and see the variety of resources available to us. For the purposes of this demo, we're going to set up a basic Kafka cluster; this sets up all of the resources necessary to run it, in addition to standing up the services themselves. You'll see the basic OpenShift YAML that you're used to seeing in the past. We're going to leave the defaults, so our cluster will be named my-cluster, and it will go into our demo operator project. Once we hit Create, it begins the process of pulling down the images and installing them.
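As a rough sketch, the default YAML the operator presents looks something like the following. The exact `apiVersion`, field names, and defaults vary between AMQ Streams versions, so treat the details as illustrative rather than exact:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster          # the default cluster name left in place in this demo
spec:
  kafka:
    replicas: 3             # three Kafka brokers
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral       # demo-friendly; production would use persistent storage
  zookeeper:
    replicas: 3             # three ZooKeeper nodes
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}       # watches KafkaTopic custom resources
    userOperator: {}
```

Creating this one resource is what triggers the operator to generate all of the supporting secrets, config maps, and pods described below.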
Notice how the kind there is listed as Kafka, as opposed to something like a Pod or Deployment that we're typically used to in OpenShift. One of the benefits of operators is that they use custom resources to talk in terms of the business logic being deployed. Instead of having to think, "I have a pod with a particular image on it," I can tell OpenShift, "give me all of my Kafka installations," and work in terms of the actual object itself.

Now you'll notice it's kicked off the creation of a number of supporting resource types, like secrets and config maps. You'll also notice that the ZooKeeper nodes have begun to start. The operator waits for all three of those nodes to come up before it begins creating the Kafka brokers. Operators give us access to this level of advanced installation logic, where we can stand up one service before standing up another, or trade data back and forth as necessary. Hopping out to a shell very quickly, we can see that the majority of these services have started, so we'll hop back into the UI and take a look at the resources themselves.

If we jump back to the Installed Operators section, we're also going to create a Kafka topic inside our newly created cluster. Again, it prompts us with the traditional OpenShift YAML. We're going to simply rename the topic to hello-world. It's going to be deployed into my-cluster, which is the cluster we just created through the operator, and again, that lives in our demo operator namespace. Once we've created the topic, the operator receives an event that this new KafkaTopic object has been created, and it takes whatever steps are necessary to create it inside the Kafka cluster itself. We head back to the project itself and take a look at the services. Again, we'll see that the ZooKeeper cluster is running, as well as the Kafka cluster itself.
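The topic resource itself is small. A sketch of what the renamed YAML might look like (again, the `apiVersion` and defaults depend on the AMQ Streams version installed):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: hello-world              # the topic name we typed in place of the default
  labels:
    strimzi.io/cluster: my-cluster  # ties this topic to the cluster we created
spec:
  partitions: 3
  replicas: 3
```

The `strimzi.io/cluster` label is how the operator knows which Kafka cluster should receive the topic; when the resource is created, the operator's topic controller reacts to the event and creates the topic on the brokers.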
We go through and we select a single pod from this list. We can RSH in and take a look around to make sure our topic has been created. Once inside the cluster, we use the tools built into the image itself to take a look at all of our topics. Now, we use local hosts, but obviously we could have gone to the service level itself and had the load balancer pick one of these instances. But for simplicity, we're just going to say query the Kafka server itself or the Kafka instance itself and make sure that the topic that we created is there as expected. As we can see, the Hello World topic was not only created as a resource inside of OpenShift, but is also present inside the Kafka cluster. Hope you enjoyed this walkthrough operators and all the power they give you in OpenShift 4.
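For reference, the verification step above can be sketched as the following commands. These require a running cluster, and the broker pod name is an assumption based on the operator's usual naming convention:

```
# Open a remote shell into one of the Kafka broker pods
# (pod name assumed; list pods with `oc get pods` to find yours)
oc rsh my-cluster-kafka-0

# Inside the pod, list topics using the tooling shipped in the image.
# localhost works because we are on a broker itself; going through the
# service would let the load balancer pick an instance instead.
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
```

If the operator did its job, hello-world appears in the listing alongside any internal topics.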