All right, this is one of my favorite demonstrations, showing you the power of Knative plus Kafka on OpenShift, and we're gonna walk you through its key capabilities. Keep in mind, we do document this particular demonstration; you'll see that in the notes below. For now, we're just gonna show you the highlights of how the basics are set up. Okay, so I have an OpenShift project here called kafka. Let's check that out: kafka, right there. And in it, of course, is my installed operator for AMQ Streams, which gives me the Kafka broker itself. Okay, so that's one thing that's already out there. Let's go ahead and click on Kafka here, and you can see my cluster, which has been created. Also, let me check out KafkaTopic, and there is the my-topic that's been created. You'll notice that the topic has 100 partitions. That's kind of important for this particular demonstration: that's the concurrency factor, right? You want great concurrency rolling through that Kafka topic, so 100 partitions is important there. And if I look at my pods, you'll see there's a bunch of things already running here, things that are gonna help us do this particular demonstration. Let me now come into the developer console and show you what we can do. We have the Kafka broker up and running: my ZooKeepers, my Kafka brokers. And there's the service known as the bootstrap service; if I say kubectl get services, you'll see the bootstrap service. So there's a lot of components already in place. I'm not gonna walk you through all the details here, but what I wanna show you is what it means to add a new Knative component to this. This is standard Kafka on Kubernetes, standard Kafka with Strimzi on OpenShift using OpenShift operators. But let me come in here and hit "Add container image". And now I gotta remember the actual name of my image.
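As a rough sketch, that topic with its 100 partitions could have been declared with a Strimzi KafkaTopic custom resource along these lines (the names my-topic, my-cluster, and the kafka namespace come from the demo; the exact API version and labels depend on your Strimzi/AMQ Streams install):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster   # ties this topic to the Kafka cluster CR
spec:
  partitions: 100   # the concurrency factor: up to 100 consumers working in parallel
  replicas: 3       # assumed replication factor, not shown in the demo
```

The partition count matters because each partition can be consumed by at most one member of a consumer group, so 100 partitions is what lets the scale-out later in the demo fan work across many pods.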
So I think it was... 101? Nope, not quite. Sink, I think, is what I called that image. Okay, fantastic. We're gonna put this in an unassigned application; that's just a visual grouping on screen there. My Knative sink here, that looks good from that perspective. We're gonna call this a Knative service. And then let's click on the scaling section down here. I want a concurrency target of one and a concurrency limit of one. What this means is: for every one message, you have one pod. While it's processing one message, it can only process that one message, and therefore one pod. That becomes more interesting when we really hammer it with a bunch of messages. So let's hit create. All right, there it is, my Knative service coming to life. If I come back to my command line, and I do a lot of checking at the command line, so I like doing that, I can say kubectl get ksvc; Knative service, right there. And again, the Knative capability is there because of the pods in knative-serving. This is what happens when you install that OpenShift Serverless operator, and knative-eventing was part of it as well. Those are the two namespaces that have the Knative infrastructure in them. And of course, if you say kubectl get crds and grep for knative, you'll see these extra types that we're working with here: you can see PingSources, you'll see KafkaSources. And the KafkaSource came because I installed an additional operator. Let's go back over here to show you that additional operator. Let's go here: Operators, Installed Operators, and then right here, the Knative Apache Kafka Operator. That's what gave me the Knative event source for Kafka. That's how that came to be. A lot of words here, but let's go into the developer console one more time. So there's my Knative service. You notice it had a nice blue ring; that's because a pod came to life, and now it just went dark because the pod scaled down, based on the fact that no one's interacting with it. So let's go and do this.
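What the console built there is, roughly, a Knative Service with a concurrency target and a hard concurrency limit of one. A minimal sketch of that manifest might look like this (the service name and image reference are placeholders; the demo's actual image name isn't shown):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myknativesink            # hypothetical name for the demo's sink service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "1"   # concurrency target: aim for one in-flight request per pod
    spec:
      containerConcurrency: 1                 # concurrency limit: a pod handles at most one request at a time
      containers:
        - image: image-registry.example/sink  # hypothetical image reference
```

With target and limit both at one, every concurrent message effectively demands its own pod, which is exactly what makes the later burst of messages fan out into many pods.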
See this little blue arrow right here? Actually, let me move things around a little bit to make it easier to see; move that over here. We don't need to worry about the Kafka broker. Make this a little bit bigger, move it over, there we go. I'm gonna take this little blue arrow, and I'm gonna click on it. And if I do this right, I'm gonna say "from event source". So this is an event trigger for this Knative Serving service. By default, a Knative Serving service responds to HTTP events; I wanna make it respond to Kafka events. So I'm gonna say event source, and then I can pick a Kafka source here. When I do that, it wants me to pick my bootstrap server. And remember, for the bootstrap server, we can basically say kubectl get services, and there it is: my-cluster-kafka-bootstrap, right there. So let's put that in. It's actually in the kafka namespace, and it's gonna be... 8082? Let's go back over here. Hmm, maybe it's not 8082. Maybe it's 9092. How about we do it correctly with 9092; nearly messed that one up. But there is the service, so it's just like any other service in Knative, or sorry, in Kubernetes land. What topic do we want here? We're gonna go with my-topic. Again, I can double-check that by saying kubectl get kafkatopics; there we go, my-topic right there. And I have Kafka itself with kubectl get kafkas. Again, these are all custom resource definitions. Okay, so we've got those two things answered. Just call the consumer group my-group for now. And if I did that correctly, with my Knative sink as the sink and a name for the Kafka source, let's hit create here. And that component shows up right there. Okay, there it is. And if I come over and say kubectl get kafkasource, we can double-check some things with kubectl describe kafkasource. We can look in here and see: okay, the Knative sink, that looks right there; my Knative sink, that looks okay; my-group is fine.
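Under the covers, the form fills in a KafkaSource custom resource. Hedging on the API version, which has moved between Knative releases, it would look something like this (the sink service name is a placeholder):

```yaml
apiVersion: sources.knative.dev/v1beta1   # API group/version varies by Knative Kafka release
kind: KafkaSource
metadata:
  name: kafka-source
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092  # the Strimzi bootstrap service, kafka namespace, port 9092
  topics:
    - my-topic
  consumerGroup: my-group
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: myknativesink                    # hypothetical name for the demo's sink service
```

The sink reference is what wires Kafka messages into the Knative Serving service, so its scale-from-zero behavior is driven by topic traffic instead of HTTP requests.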
And where is the... oh yeah, my-cluster-kafka-bootstrap in the kafka namespace. So if we've done everything correctly, when we start shoving messages into this Kafka broker, specifically into my-topic, it'll route those messages into the Knative Serving service known as my Knative sink. All right, so let's see if we can make that magic happen here. So we've got that working; it's autoscaled, that looks good. Okay, let's watch this and move it over here; make it easier to see. Move this over here. Okay, right there. That guy right there, that's the guy we wanna see autoscale based on Kafka messages. Now, I have another component here I call the Kafka spammer. It's just another image that I have created that basically blasts messages into a particular topic; in this case, it's gonna look for my-topic on my cluster. And I'm just gonna exec into it. Let's see: kafka-spammer, okay. I know this command is deprecated, but at some point I'll learn the new one. And I'm gonna say curl localhost:8080, and I want one message to show up. So let's send in one message, and there it is: my Knative sink coming to life at this point. Actually, two pods in this case. So the algorithm is not exactly deterministic, if you will. If you send one message in, you might get two pods; if you send 20 messages in, you might get 24 pods, something of that nature. Basically, since we're autoscaled to zero, this burst of message activity causes the upscaling from a Knative standpoint, but that algorithm is looking at a couple of different factors. It's essentially trying to figure out: oh my God, messages have come in, how do I get a pod to respond? In this case, there are two pods possibly responding to that one message. So if I come over here and run stern against my Knative sink, we're viewing the logs of that pod, and you will see a message showing up here.
So if I look at the user container's logs, we can see what those messages are, and all those messages have gone through already. Okay, if I push another one in, we should see another message show up here, and that'll keep the pod alive. But you will notice that as Knative realizes it doesn't need these pods anymore, it'll start downscaling. So already one is terminating, and then we'll see the next one terminate based on the fact that there's no additional message, no new event, to keep it alive. If we watch it for about 60 seconds since our last transaction, you'll see it downscale automatically. In this case, I did make my Knative sink a Quarkus-based application. It is Java, but because it's compiled natively, it boots up real fast and stays very small; supersonic, subatomic Java, that's kind of the point. And therefore it responds quickly. A lot of people think that for these use cases you have to have something like Go or C++ or Node.js or Python, but we're kind of demonstrating that you can do this with Java-based applications also. So there it is; it's gonna be downscaling here momentarily. We did touch it again by sending one more message in, and therefore it has a new 60-second lease on life. You can of course override that parameter and determine whether the lifespan of a pod that's not really doing anything is 60 seconds, or maybe 15 seconds, 20 seconds, 30 seconds; you can change that interval. Right now I have it set to the default of 60. It is terminating now; if we look at our console, you'll see it's autoscaled to zero. So it's in terminating mode, which simply means that etcd and of course the Kubernetes cluster are basically communicating with each other, going: okay, is that pod still alive? Is that process still running on that worker node? Then etcd updates its database, and of course the user interface gets updated, the API gets updated, and now you can see it's gone.
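That roughly 60-second idle period is the Knative autoscaler's stable window, and it can be overridden per service with an annotation. A sketch, reusing the hypothetical service name from earlier (the minimum allowed window depends on your Knative version):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: myknativesink   # hypothetical name for the demo's sink service
spec:
  template:
    metadata:
      annotations:
        # how long the autoscaler watches zero traffic before scaling the pods away
        autoscaling.knative.dev/window: "15s"
```

A shorter window means idle pods disappear faster, at the cost of more cold starts if traffic is bursty.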
But let's do something really crazy. Let's really hammer it. Let's push in 50 messages... well, let's just go for it. Let's push in 100 messages and see what happens. Let's see if we can really make this thing panic, freak out, and go into overdrive mode. So there you go: it's container creating, pending, pending, meaning it's gotta schedule them across my worker nodes. I bet my worker nodes are not even large enough to run all these pods. Of course, you can see it's trying to scale up a whole lot of pods right now: 4 pods, 6 pods, these pods are coming online. If I wanted to, I could add MachineSets or some other form of autoscaler based on worker nodes, to make them either bigger or more numerous. But it's kind of fun to watch what it's trying to do: scheduling the pods in question across the different worker nodes that I have. You can see 45 are running, 46 are running. But this is also pretty telling: that's essentially 50 application servers springing to life in response to all these messages, so I can now start processing those messages as they show up in our system. That is really the power of what you see with Knative: Knative Eventing, a Knative event source based on Kafka, and of course Kafka being the messaging backplane in this case. So if you want more information about this particular demonstration, check out our master course materials, our tutorials around Kafka and Knative; this particular use case is also exposed and demonstrated there. You can see right now it's dynamically downscaling based on the fact that I'm not adding new load. Those hundred messages are already in the system; they've been consumed, and it's now terminating all those pods, and it's scaled to zero again. So that's the beauty of Knative.
You only pay for what you use, meaning I'm only paying for that memory and CPU while there's actual traffic, in this case events, driving the system load and causing it to scale out or, in this case, scale back down. Thank you for your time. We're gonna have more videos on neat capabilities and components of Knative, Kafka, Istio, OpenShift, and Kubernetes coming shortly. Thank you.