This is one of my favorite demonstrations, showing the power of Knative plus Kafka on OpenShift, and we're going to walk through its key capabilities. Keep in mind, we do document this particular demonstration; you'll see that in the notes below, but for now we're just going to show the highlights of how the basics are set up. I have an OpenShift project here called kafka, so let's check that out. In it, of course, is my installed operator for AMQ Streams, which gives me the Kafka broker itself. Okay, so that's one thing that's already out there. Let's go ahead and click on Kafka here, and you can see my cluster, which has been created. Also, let me check out KafkaTopic, and there is the my-topic that's been created. You'll notice that the topic has 100 partitions. That's important for this particular demonstration: it's the concurrency factor. You want a lot of concurrency rolling through that Kafka topic, so 100 partitions matters there. And if I look at my pods, you'll see there's a bunch of things already running here that are going to help us with this demonstration. Let me now come into the developer console and show you what we can do. We have the Kafka broker up and running: my ZooKeepers, my Kafka brokers. There's also the service known as the bootstrap service; if I say kubectl get services, you'll see the bootstrap service. So there are a lot of components already in place. I'm not going to walk through all the details here; what I want to show you is what it means to add a new Knative component to this. This is standard Kafka on Kubernetes, standard Kafka with Strimzi on OpenShift, using OpenShift operators. But let me come in here and hit Add, Container Image. And now I have to remember the actual name of my image.
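The topic setup described above can be sketched as a Strimzi KafkaTopic manifest. This is a minimal sketch, assuming the demo's names (my-topic, my-cluster, the kafka namespace) and otherwise default settings:

```yaml
# Minimal sketch of the demo's topic; names follow the demo, the rest is assumed.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster   # binds the topic to the Kafka cluster resource
spec:
  partitions: 100   # the concurrency factor: up to 100 consumers working in parallel
  replicas: 1       # assumed; a production topic would typically use 3
```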
So I think it was... nope, not quite. Sink, I think, is what I called that image. Okay, fantastic. We're going to put this in an unassigned application; that's just a visual grouping on screen. My knative-sink here, that looks good from that perspective. We're going to call this a Knative service. Then let's click on the scaling section down here. I want a concurrency target of one and a concurrency limit of one. What this means is: for every one message, you get one pod. While a pod is processing one message, it can only process that one message, and therefore you need one pod per message. That becomes more interesting when we really hammer it with a bunch of messages. So let's hit Create. All right, there it is, my Knative service coming to life. If I come back to my command line (I do a lot of checking at the command line, so I like doing that): kubectl get ksvc, there's the service. And again, the Knative capability comes from the pods you see with kubectl get pods -n knative-serving. This is what happens when you install the OpenShift Serverless operator, and knative-eventing was part of it as well; those are the two namespaces that hold the Knative infrastructure. And of course, if you say kubectl get crds | grep knative, you'll see the extra types we're working with here, like PingSources and KafkaSources. The KafkaSource came because I installed an additional operator. Let's go back over here so I can show you: Operators, Installed Operators, and right here, the Knative Apache Kafka operator. That's what gave me the Knative event source for Kafka. Okay, that's how that came to be. A lot of words here, but let's go into the developer console one more time. So there's my Knative service. You'll notice it had a nice blue ring; that's because a pod came to life, and now it just went dark because the pod scaled down. Okay, so the pod scaled down based on the fact that no one's interacting with it.
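The concurrency target and limit set in the console map to a Knative Service manifest roughly like the following. This is a hedged sketch: the service name follows the demo, the image reference is a placeholder, and the annotation/field names are the standard Knative autoscaling ones:

```yaml
# Sketch of the Knative Service created in the console; image is a placeholder.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-sink
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "1"   # soft target: ~1 in-flight message per pod
    spec:
      containerConcurrency: 1                 # hard limit: one request per pod at a time
      containers:
        - image: quay.io/example/knative-sink # placeholder image reference
```

With both set to one, each pod handles exactly one message at a time, which is what makes the later burst of 100 messages fan out into so many pods.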
So let's go and do this. See this little blue arrow right here? Actually, let me move things around a little to make it easier to see; we don't need to worry about the Kafka broker. Make this a little bigger, move it over. There we go. I'm going to click on that little blue arrow, and if I do this right, I'm going to say "from event source". This is an event trigger for this Knative serving service. By default, a Knative serving service responds to HTTP events; I want to make it respond to Kafka events. So I'm going to say event source, and then I can pick a Kafka source here. When I do that, it wants me to pick my bootstrap server. And remember, for the bootstrap server we can basically say kubectl get services, and there it is, my-cluster-kafka-bootstrap right there. So let's put that in; it's in the kafka namespace. And it's going to be 8082. Let's go back over here: maybe it's not 8082, maybe it's 9092. How about we do it correctly with 9092? Nearly messed that one up. But there is the service; it's just like any other service in Kubernetes land. What topic do we want here? We're going with my-topic. Again, I can double-check that by saying kubectl get kafkatopics; there we go, my-topic right there. And I have Kafka itself: kubectl get kafkas. These are all custom resource definitions. So I've got those two things answered. Just call the consumer group my-group for now. And if I did that correctly, with my knative-sink service as the sink and a name for the Kafka source, let's hit Create here. And that component shows up right there. There it is. If I come over and say kubectl get kafkasource, we can double-check some things: kubectl describe kafkasource, the kafka-source with the dash in it. We can look in here and see: okay, knative-sink, that looks right. my-group is fine. And where is the... oh yes, my-cluster-kafka-bootstrap.kafka.
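The console form above corresponds to a KafkaSource resource along these lines. This is a sketch assuming the demo's names; the bootstrap address points at the Strimzi bootstrap service in the kafka namespace on port 9092:

```yaml
# Sketch of the KafkaSource wired up in the console; names follow the demo.
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  consumerGroup: my-group
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092   # the bootstrap service, plaintext port
  topics:
    - my-topic
  sink:
    ref:                                      # deliver events to the Knative service
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: knative-sink
```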
So if we've done everything correctly, when we start shoving messages into this Kafka broker, specifically into my-topic, it'll route those messages into the Knative service known as knative-sink. All right, let's see if we can make that magic happen. So we've got that working; it's auto-scaled to zero. Okay, that looks good. Let's watch this: move it over here, make it easier to see. All right, that component right there. That's the one we want to see auto-scale based on Kafka messages. Now, I have another component here I called the Kafka spammer. It's just another image I created that basically blasts messages into a particular topic; in this case, it's going to look for my-topic on my-cluster. And I'm just going to exec into it. Let's see, kafka-spammer. Okay, I know this form of kubectl exec is deprecated, but at some point I'll learn the new command. And I'm going to say curl localhost:8080, and I want one message to show up. So let's send in one message. And there it is, my knative-sink coming to life at this point, and actually two pods in this case. The algorithm is not exactly deterministic, if you will: if you send one message in, you might get two pods; if you send 20 messages in, you might get 24 pods, something of that nature. Basically, since we're auto-scaled to zero, this burst of message activity causes the upscaling from a Knative standpoint, and that algorithm looks at a couple of different factors. It's essentially trying to figure out: messages have come in, how do I get a pod to respond to them? In this case, there are two pods possibly responding to that one message. So if I come over here and say stern knative-sink, we're viewing the logs of those pods, and you'll see the message has shown up here. If I add -c user-container, we can see what those messages are.
And all those messages have gone through already. If I push another one in, we should see another message show up here, and that will keep the pod alive. But you'll notice that as Knative realizes it doesn't need these pods anymore, it starts downscaling: one is already terminating, and the next will terminate based on the fact that there's no additional message, no new event, to keep it alive. If we watch for about 60 seconds since our last transaction, you'll see it downscale automatically. In this case, I did make my knative-sink a Quarkus-based application. It is Java, but because it's compiled to a native binary, it boots up very fast and stays very small (supersonic, subatomic Java, that's kind of the point), and therefore it responds quickly. A lot of people think that for these use cases you have to have something like Go, C++, Node.js, or Python, but we're demonstrating that you can do this with Java-based applications too. So there it is; it's going to be downscaling here momentarily. We did touch it again by sending one more message in, so it gets a new 60-second lease on life. You can, of course, override that parameter and decide whether the lifespan of an idle pod is 60 seconds, or maybe 15, 20, or 30 seconds; you can change that interval. Right now I have it set to the default of 60. It is terminating now. If we look at our console, you'll see it's auto-scaled to zero. It's in terminating mode, which simply means that etcd and, of course, the Kubernetes cluster are basically communicating with each other: is that pod still alive, is that process still running on that worker node? Then etcd has to update its database, and of course the user interface and the API get updated, and now you can see it's gone. But let's do something really crazy. Let's really hammer it.
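That roughly 60-second idle period is the Knative autoscaler's stable window, and it can be tuned per revision with an annotation. A hedged sketch, assuming the demo's service name, of shrinking the window from the 60-second default:

```yaml
# Sketch: shortening how long an idle pod is kept before scale-to-zero kicks in.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-sink
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/window: "15s"  # stable window; default is 60s, minimum 6s
```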
Let's push in 50 messages. Oh, well, let's just go for it: let's push in 100 messages and see what happens. Let's see if we can really make this thing panic, freak out, and go into overdrive mode. So there you go: ContainerCreating, Pending, meaning it has to schedule them across my worker nodes. I bet my worker nodes are not even large enough to run all these pods. You can see it's trying to scale up to 143 pods right now. So that's four pods, six pods; these pods are coming online. If I wanted to, I could add MachineSets, or some other form of worker-node auto-scaler, to also scale up my worker nodes, making them bigger or adding more of them. But it's fun to watch what it's trying to do: scheduling the pods in question across the different worker nodes that I have. You can see 45 are running now, 46 are running. This is also pretty telling: that's essentially 50 application servers springing to life in response to all these messages, so we can start processing those messages as they show up in our system. That is really the power of what you see with Knative, Knative Eventing, and the Knative event source for Kafka, with Kafka, of course, being the message backplane in this case. If you want more information about this particular demonstration, check out our master course materials, our tutorials around Kafka and Knative; this particular use case is exposed and demonstrated there as well. You can see right now it's dynamically downscaling based on the fact that I'm not adding new load. Those 100 messages are already in the system; they've been consumed, it's now terminating all those pods, and it's scaled to zero again. So that's the beauty of Knative.
You only pay for what you use, meaning I'm only paying for that memory and CPU while there's actual traffic, in this case events, driving the system load and causing it to scale out, or, in this case, scale back down. Thank you for your time. We're going to have more videos on neat capabilities and components of Knative, Kafka, Istio, OpenShift, and Kubernetes in just a few more moments. Thank you.