Hi everybody, welcome to the third episode of Crash Loop BackOff. Today is February 22nd. Can't believe it's almost March; it feels like the year is going so fast already. This live stream is a sibling to the similarly themed Crash Loop BackOff, an event that the CNCF puts on at KubeCon. The most recent one was at KubeCon in Chicago, and I think we'll see it in Paris as well. That one is kind of a fun competition: normally Jeefy hosts it, and he pits two community members against each other to accomplish some sort of technical challenge of his choosing. You go in with no real background. It's supposed to be laid back and fun for everybody, but it's definitely challenging.

If you've tuned in before, you might be wondering: where is Jeefy? Well, that's a fun story. My name is Jeremy Rickard and I'll be the host today. You might know me from SIG Release, from reviewing PRRs, or a few other things around the Kubernetes project, but you may have also seen me competing in Crash Loop BackOff in Chicago or its predecessor, Cloud Native Iron Chef, sometime during COVID. Jeefy asked me if I'd like to host an episode and experience the other side of this challenge, and he gave me a list of potential topics, and I thought it would be really fun. So today I'm excited to dive into the KEDA project, something I've never used before, and have you join me as I learn all about event-driven autoscaling in Kubernetes. I have not done any real in-depth research on this topic. I know what KEDA is, but I've never used it before. So just like Crash Loop BackOff, I'm coming in cold, and we're going to dive into the topic with no real background information and learn as I normally would off stream. We're going to look through some examples and some documentation and figure it out as we go. In doing so, I hope that we can all walk away having learned something new about KEDA.

A few housekeeping items before we get started. During the live stream, feel free to ask your questions in chat. I'll try to get through as many as possible, and Katie behind the scenes is going to help me stay on track with that. We are here to learn together, so if you know the answer to something or if I'm stuck, feel free to speak up and help us all learn. That said, this is an official live stream of the CNCF, and it is of course subject to the CNCF code of conduct. Please don't add anything to the chat or questions that would violate the code of conduct; basically, please respect everybody on the stream and all the other participants in the chat. This video will be put on YouTube afterwards, so folks that weren't able to join the live stream can follow along with us asynchronously.

Okay, before we dive into KEDA, let's check out some news from the broader CNCF world. If you've watched any of these before, it's kind of cool to see the topics that have popped up. First, the thing that stood out to me, going back to Crash Loop BackOff in Chicago, is that there's a new GitOps associate certification being launched by the CNCF, the Linux Foundation, and the Continuous Delivery Foundation. It's really intended to help folks understand core GitOps patterns and tools, and when and where to use them. And if you watched Crash Loop BackOff in Chicago, one of the challenges was to use Flux. I think that's a really cool topic.
There are lots of really great tools in that space in the CNCF, and those things can be super useful, so having this new certification seems like a cool step in helping folks demonstrate their knowledge and mastery of that. I also found a really cool blog post about Kyverno. Kyverno is a policy engine and an admission controller for Kubernetes, and the post was all about how to secure service meshes. I'm going to try to drop these links into chat so we can share them. Let me do that real quick; I haven't done that before, so I'll drop them to Katie and maybe she can share them. So here's the first one, about the GitOps certification. These are on the Kubernetes and CNCF blogs, so you should be able to find them too. The second one was about how to use Kyverno to secure service meshes, combining those two topics into a kind of unified blog post that I think addresses a bunch of cool things.

The third one I wanted to touch on is near and dear to my heart: we're halfway through the Kubernetes 1.30 release. Code freeze is coming up pretty quickly, on March 5th, so we're going to come up under that deadline super fast. With that in mind, we're going to get an alpha release next week; that'll be the third alpha, I think. If you're interested in trying out new features before the release lands, you can definitely check out the alpha releases and provide feedback to the rest of the project, and help us find some of those bugs early on too. That would be super cool. I dropped a link to Katie for the schedule, so you can find all the dates for the Kubernetes releases there and understand when 1.30 is going to be out, when you can expect betas, things like that. It's a pretty good time, I think.

Okay, so we've got that out of the way. Let's dive into KEDA and understand what we're going to be looking at. I'm going to share my screen and we'll take a look at the documentation. Let's do it. All right. So, KEDA is, let's look at the description, a Kubernetes-based event-driven autoscaler. With KEDA, you can drive the scaling of any container in Kubernetes based on the number of events needing to be processed. That's kind of a cool concept. If you've worked with Kubernetes in any kind of operational aspect, you've probably heard of the Vertical Pod Autoscaler or the Horizontal Pod Autoscaler; those are the usual topics around autoscaling. Let's take a look at both of them. So let's go over here, open a new tab, and look at the Vertical Pod Autoscaler. All right, cool. What does the Vertical Pod Autoscaler do? Well, it's a component that allows us to resize the limits and requests on our pods based off of metrics that are available from the metrics server. So we can scale things up and down based off of the workloads that they're experiencing; it's driven by the metrics about that workload, right?

The Horizontal Pod Autoscaler is similar. Let's find that one real quick. Horizontal pod autoscaling: similar kind of thing, right? But instead of scaling a single pod's resources up and down, this one is all about scaling the pods in and out to a number of replicas. Same concept: you're going to use a metrics server with the specific metrics that you're interested in.
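(For reference, a resource-based HPA of the kind we're describing looks roughly like the sketch below. This isn't from the stream; the Deployment name my-app is made up for illustration.)

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: my-app
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: my-app          # hypothetical target deployment
    minReplicas: 1
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU passes 70%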
And the autoscaler can help you respond to those scenarios. So maybe I've got a lot of requests coming in, reported through an external metrics server, and I want to scale based on the RPS of my service, right? I know that I'm going to hit a threshold and I want more pods to handle the requests coming in. That's kind of where the Vertical Pod Autoscaler and the Horizontal Pod Autoscaler fit into this picture. They're really looking at resource metrics about your workloads in the cluster, so CPU, memory, things like that, but maybe also things like requests per second, things that you're scraping with Prometheus and making available with a custom metrics API.

Okay, let's go back to KEDA. The important thing that sticks out to me is that we're really doing this on events here, the number of events that need to be processed. That seems a little different: it's not necessarily looking at the resources in the cluster, but at the things your application is working with, like queues or database records.

Okay, so let's dive in and see if we can find out a little more about what KEDA is. Let's look at the concepts first. How does KEDA work? KEDA has three key roles within Kubernetes. First, metrics: we mentioned the metrics server for the Horizontal and Vertical Pod Autoscalers, and it looks like KEDA acts as a metrics server that can provide the data you'll need to drive scaling. Second, admission webhooks to prevent misconfiguration and enforce best practices; that sounds pretty cool. And third, an agent which activates and deactivates Kubernetes Deployments to scale from zero to more based on events. Okay, so it seems like a pretty straightforward set of components.

There's a cool architecture diagram here that maybe highlights this a little more. We've got KEDA here: the metrics adapter, the admission webhooks, and a controller; I think that's probably the agent, and one of its primary roles is the operator. Okay, that seems pretty straightforward. We've got something called a ScaledObject, which is probably a CRD from KEDA; we'll look into that in a second. External trigger source: okay, that ties into the eventing. Any events from the external trigger source scale up the workloads.

Okay, so: event sources and scalers. KEDA has a wide range of scalers that can both detect if a deployment should be activated or deactivated and feed custom metrics for a specific event source. That's cool. So going back to the VPA and the HPA, those look at metrics produced mostly by your workloads; this definitely looks like it's watching things external to the cluster. I see a bunch of really common things here: ActiveMQ, RabbitMQ, Postgres, MySQL, MS SQL. Seems like there are a lot of different event sources available.

Okay, so up here we saw ScaledObject in the architecture diagram, and I see it down here under the custom resources as well. When you install KEDA, it creates four custom resources: ScaledObjects and ScaledJobs, so those are two different types, plus TriggerAuthentications; obviously all of these event sources are going to require you to authenticate to them in any real scenario, so that makes sense. And ClusterTriggerAuthentications.
That's probably just a cluster-scoped version instead of one per namespace. So I guess TriggerAuthentication is the namespaced resource and ClusterTriggerAuthentication is the cluster-scoped one. Cool. ScaledObjects, then. This documentation seems really great so far; given that I'm coming to this with no real knowledge, it's providing me a bunch of really useful information. So: ScaledObjects represent the desired mapping between an event source and a Kubernetes Deployment, StatefulSet, or any custom resource that defines a /scale subresource. That's cool; that covers a lot of ground. I see there's also ScaledJobs, and that's pretty cool. When I think about event-driven systems, jobs are one of the things that pop into my head pretty immediately: if I'm going to get a bunch of things in a queue, I probably want them processed by a Job. Maybe stateful applications or something else could consume from them as well, but that really feels like batch processing, a Job sort of thing. So it's really cool to see that as a standalone type in here. And then ScaledObjects and ScaledJobs can reference TriggerAuthentications or ClusterTriggerAuthentications, which contain the authentication configuration or secrets used to monitor those event sources. Okay, that makes a lot of sense.

I want to take a look at a couple of these scalers and get an understanding of what they look like before we get into deploying KEDA. That's the next thing in this documentation, but I really want to know a little more about these types first. It doesn't look like these are clickable, so maybe up here we can dive in a little more. Okay, it looks like this covers scaling Deployments, StatefulSets, and custom resources. We saw on the previous page that this is about mapping an event source to a thing we want to scale, so let's look at what the spec looks like and understand what's there. ScaledObject, yep. It's got some annotations that look optional, let me make this a little bigger so everybody can see it, and a couple more optional things. The first thing that looks required is the scaleTargetRef: apiVersion and kind, your typical Kubernetes fields for identifying what you're going to target, and then the name of the thing we want to scale. That looks mandatory. By default it works off of Deployments from the apps/v1 API group, so that's cool; it doesn't seem like we have to provide a whole lot there. We could probably start out with just a ScaledObject and a scaleTargetRef that points at the name of a deployment. Pretty cool.

Looks like you can do a lot of customization too. If you want to pick which container to scale on, that's there. There's some extra config: how often we want to poll for this, how long we want to wait for it to cool down, fallback stuff, a bunch of advanced configuration. Maybe we won't get into that since we don't have a ton of time to dive into things today. Okay, so in our case we'd be using apps/v1 Deployment, because that's the default, and then the name of the target. Cool. And then there's just a whole bunch more documentation on all these fields; maybe we'll come back to that in a bit, but it looks like there's a lot of configurability in this tool. It'll check the trigger source every 30 seconds by default.
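(To make that concrete: a minimal ScaledObject seems to boil down to something like this sketch. The names and the business-hours cron trigger are made up for illustration; the commented values are the documented defaults.)

  apiVersion: keda.sh/v1alpha1
  kind: ScaledObject
  metadata:
    name: my-scaledobject
  spec:
    scaleTargetRef:
      name: my-deployment      # defaults to targeting an apps/v1 Deployment
    pollingInterval: 30        # default: check the trigger source every 30 seconds
    cooldownPeriod: 300        # default: wait 5 minutes before scaling back to min
    minReplicaCount: 0         # default: scale to zero when the trigger is inactive
    triggers:
    - type: cron
      metadata:
        timezone: America/Denver
        start: 0 8 * * *       # scale out at 8am...
        end: 0 17 * * *        # ...and back in at 5pm
        desiredReplicas: "2"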
You can tune that polling interval up or down if you want. What else have we got? Cooldown period. Idle replica count, for how many we want when nothing's happening. I think one of the cool things about this, especially for jobs, is you can keep them at zero; and for a Deployment, you can scale it to zero until it needs to handle any kind of workload. That seems like a really cool built-in feature. The idle replica count must be less than the minimum replica count, and the minimum replica count defaults to zero. Okay, that's cool; by default it'll scale down to zero. That's a pretty cool feature to get by default. Then there's that advanced section, a whole bunch more advanced stuff.

Okay, and now triggers. I think this is the interesting part: the list of triggers to activate scaling of the target resource, the things that are actually going to kick this off. The type to use, some metadata, the name for the trigger, there's that authenticationRef thing we wanted to see, the metric type. It's really cool that a lot of these are optional; out of the box we probably don't need to provide a whole lot of information. That seems pretty useful. Oh, we can also pause autoscaling. That seems like a really, really beneficial feature to have as well. A bunch of modifiers. Okay, cool, there are a lot of really interesting configuration options in there.

Let's go back up and look at the job one and see how it differs. Scaling jobs: as an alternative to Deployments, you can also run and scale your code as Jobs. That seems pretty cool. The primary reason to consider this is to handle long-running executions: rather than processing multiple events within a deployment, you might want to spin up a Job and handle it there. Cool. This looks pretty similar: a different type, obviously, but we've got a jobTargetRef instead of the scaleTargetRef we had for Deployments, and its template looks like the job template you'd use for defining a Job. That seems pretty straightforward. Then the number we want to run, and then the same or similar parameters, a little different maybe for rollout stuff. And down here at the bottom, the triggers look kind of the same. Here's a really good example: a ScaledJob that reads from RabbitMQ, with the job template inline. So instead of having to define a Job in the cluster as a separate thing, we include it in the jobTargetRef. That seems pretty cool. And the trigger here is RabbitMQ. I'll sketch out a full ScaledJob like that below.

Okay, cool. So before we get started, let's go take a look again at the scalers that are available; I think that's going to guide what we look at next. So we've got, wow, 64 scalers available, and these are built-ins. Wow, that's pretty useful. Plus a couple of external ones. It seems pretty adaptable, and you can probably write your own to do a lot of this, but there are a lot of built-ins available for us here. Let's see if anything stands out that I can take advantage of during this session. Oh, Cron: schedule applications based on a cron schedule. That seems pretty cool as a feature for Deployments. CronJobs exist in Kubernetes as a primitive, but doing that with a Deployment seems like a really neat thing. So maybe we'll come back to that; I've got an NGINX deployment handy that I could use for it.
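(Here's that ScaledJob sketch. It's an approximation of the docs example from memory rather than a copy; the queue name, image, and env var are placeholders.)

  apiVersion: keda.sh/v1alpha1
  kind: ScaledJob
  metadata:
    name: rabbitmq-consumer
  spec:
    jobTargetRef:
      template:                # an ordinary Job pod template, inlined
        spec:
          containers:
          - name: consumer
            image: example.com/rabbitmq-consumer:latest
          restartPolicy: Never
      backoffLimit: 4
    pollingInterval: 30
    maxReplicaCount: 10
    triggers:
    - type: rabbitmq
      metadata:
        queueName: orders
        mode: QueueLength            # scale on queue depth
        value: "5"                   # roughly one job per 5 queued messages
        hostFromEnv: RABBITMQ_HOST   # AMQP connection string comes from this env var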
So trying cron with that NGINX deployment will be kind of cool. I also see Azure Storage Queue here. I have an application from just messing around with the Azure SDK, trying to figure out how Azure Storage Queues work, so we could probably use that as well. Let's take a look at what these look like and how we use them. Okay, so this is the trigger section that goes into that CRD, with type cron; I guess that's the built-in name. And then basic metadata: when it should start, when it should stop, how many replicas we want, and the time zone. That seems pretty simple, so I think we can try that with the NGINX deployment I've got handy.

The other one was the Storage Queue, and for that we've got the name of the queue, okay, and the queue length. "Queue length is the target value for queue length passed to the scaler. If one pod can handle 10 messages, set the queue length target to 10. If the actual number of messages in the queue is 30, the scaler will scale to three pods." Oh, that's really cool. So if we wanted to run multiple replicas, we could do that. That would be pretty awesome. Okay, activation queue length; this stuff seems optional. Connection from environment variable: that's probably what we need for how we're going to connect to it. It looks like a connection string is one way to connect, so I can get that info. That'll be cool. There's also pod identity or workload identity, which seems pretty useful, but we'll just use a connection string; I think that'll be the easiest thing for my kind cluster, which I'm going to use for everything today.

Okay, so I think we've identified two scalers we want to try out, the Storage Queue and cron. Let's give those a whirl and see what they look like. First, we need to deploy KEDA. It looks like there are a couple of ways we can do that: plain YAML files, Helm charts, and Operator Hub. Okay, let's give it a whirl real quick. I'm going to bring up a terminal window here and make it a little bigger for everybody to see. I don't think I have a kind cluster right now, so let's make one. Yep, kind create cluster, and let's give it a name; call it crash... crash-loop-backoff. Okay, so while that's going, let's follow along with the Helm instructions and do that while it's running. KEDA provides a GitHub repo for their charts, which seems pretty useful, so let's go grab that. Okay, kedacore has been added to our repos and we'll do helm repo update. Okay, cool, that's done.

Okay, I've got a kind cluster now. kubectl get nodes. All right, cool. Looks like it's running but not ready yet; we'll see if we can push forward or if we get stuck. Okay, the next thing we're going to do is helm install this into a namespace called keda and create the namespace. I think this is going to install the CRDs for us, so we'll check that out in a second. It looks like you can deploy them separately with a YAML file, but we're just going to do it all together. Let's do that. Okay, that seemed pretty fast, and it spits out a little information for us. Let's see if we have any scaled objects in our default namespace. Nope, okay, so we seem good to go. kubectl get pods in the keda namespace: we've got our admission webhooks running, that's cool, and the operator and the metrics API server are running. Let's look at the logs and see if there's anything interesting there. Let me specify the namespace.
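(For reference, the install steps we just ran through amount to roughly this; the kind cluster name is just what I picked, and the chart repo is the one KEDA documents.)

  kind create cluster --name crash-loop-backoff
  helm repo add kedacore https://kedacore.github.io/charts
  helm repo update
  # installs the operator, metrics API server, admission webhooks, and the CRDs
  helm install keda kedacore/keda --namespace keda --create-namespace
  kubectl get pods -n keda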
All right, so it looks like the controller handles ScaledJobs and ScaledObjects, which we saw before, and ClusterTriggerAuthentications and TriggerAuthentications, which we also saw. It looks like CloudEventSource is another one it watches; I didn't see that in the docs, so maybe we can find that in a little bit. And a cert rotator. Okay, that's pretty cool. So I think by default we've got this running; again, those pods. And I think now we're ready to try this out with the deployment that I have. I started out figuring I would need a deployment, so let me grab that. We've got an NGINX manifest here; let's open it up and take a look. Okay, so this is a really basic Kubernetes Deployment of NGINX, right? It's using, I think, the latest NGINX image, and we want two replicas of it to start with. So let's apply that. Okay, it looks like our deployment was created, typed correctly this time. kubectl get pods: okay, there are our two NGINX pods running.

So I think the next step is to create one of those ScaledObject instances. Let's go do that. I'll make a new file in here. Okay, there's our scaled object; let's go grab an example from the docs and start with that. Back to the docs. Okay, so here's the full set of fields; let's drop it in here. Okay, we're going to work through this and remove some things. I don't think I need any of these annotations; they all seem optional, so I'm going to remove them. The name is required, so let's give it one; we'll call it nginx-scaler. Okay, and then our scaleTargetRef: our target is absolutely a Deployment, so I think we can remove the apiVersion and kind lines; I don't think we need those since they're the defaults. Okay, and the target must be in the same namespace as the ScaledObject; we've got that, our deployment lives in the same one. It's called nginx-deployment, so let's put that in. Okay, this one's optional, so let's remove it too. All these things are optional; I'm going to remove them just to simplify things for us right now. The advanced stuff doesn't seem like it needs to be here either. I think this will be the minimum set of stuff we need.

Okay, so now triggers is the field we need to fill out. Let's go back to the docs, back to scalers, and find cron, since that's the one we're going to use. This looks really simple in terms of what we need to provide, so let's give it a try. Okay, type: cron; let's line that up. The metadata here is the time zone, so let's change this to mine: I'm in Denver, so America/Denver. I think that's a valid time zone; let's check real quick. Yep, America/Denver. Okay, so we're good with that one. It is currently 1:28 my time in America/Denver, so if we start this at minute 30 of the hour and shut it down at, let's say, minute 32, that should be a fun little experiment to try. Okay, this seems good. Let's clean that white space up a little and apply this thing: kubectl apply -f nginx-scaled-object.yaml. "ScaledObject in version cannot be handled as a ScaledObject: strict decoding error, unknown field failureThreshold." Oops, that didn't need to be here; it was underneath the fallback section. The fun of live demos. Okay, let's try it again. Okay, so our ScaledObject is created.
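(Here's roughly what we just applied, reconstructed from the stream. Note it still carries the desiredReplicas from the docs example, which comes up again in a minute.)

  apiVersion: keda.sh/v1alpha1
  kind: ScaledObject
  metadata:
    name: nginx-scaler
  spec:
    scaleTargetRef:
      name: nginx-deployment   # apps/v1 Deployment is the default, so no kind needed
    triggers:
    - type: cron
      metadata:
        timezone: America/Denver
        start: 30 * * * *      # scale out at minute 30 of every hour
        end: 32 * * * *        # window closes at minute 32
        desiredReplicas: "10"  # left over from the copied example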
And if we look at what pods we have in this namespace, okay, those are still running. Let's see if the controller logs have anything about what's happening: kubectl get pods -n keda, then kubectl logs -f on the operator. Oh, looks like I messed up. Ah, I think I messed up my metadata, and sure enough, I spelled the field "nd" instead of "end". So now let's apply that again. Okay, let's get the logs and see what's happening. Oh, okay, cool. It looks like it picked it up already: "successfully set the scale target to the min replica count, new replica count: zero." So I bet if I do kubectl get pods — yeah, my NGINX pods are gone. So if we wait just a second — it's 1:30, timed that pretty well — this should scale us back up shortly. I think the interval it's working off is 30 seconds, so maybe we've got to wait 30 seconds for this to go through. Yeah, oh, cool, looks like it's working. So now if I do kubectl get pods: yeah, one is running. That's really awesome.

I could envision using this for things I want to run during the day, especially pairing it with a cluster autoscaler that brings nodes up and down for me. I could probably let my cluster autoscaler remove the worker nodes, and then set this to start my application up during US business hours if I needed something running while people were in the office. It seems like a pretty cool use case, making sure it's not running all the time; maybe that's a good energy-saving thing when you're thinking about the environmental aspects.

So I think this is going to scale up again; let's see what's happening in the logs. All right, it scaled up, but I'm not sure why it started so many. Did I set that in the scaler? Oh, desiredReplicas: 10. That makes sense then; I didn't notice that I copied that from the example. So if we set this to a lower number, that would give us fewer of those. Let's change it to one and see what happens. Let's see if this thing runs again; we probably have to wait 30 seconds for that. It's still running; let's wait and see what happens. Looks like it just reconciled 30 seconds ago or so, so maybe we've got to wait just a little bit longer. Or maybe it's done, actually. No, it's 1:32 now, so maybe it's going to scale this down to zero. I don't know; let's go back and look at the cron docs and see what they say. "end: cron expression indicating the end of the cron schedule." "The cron scaler allows you to define a time range in which you want to scale your workloads out and in. When the time window starts, it will scale from the minimum number of replicas to the desired number. What the cron scaler does not do is scale your workloads based on a recurring schedule." Okay, so that's interesting. I kind of would have expected it to scale down from the number it originally scaled up to; I would have expected a new replica count, a new scale target. Let's see what our ScaledObject says. Yeah, the desired replica count is one, so it should be good. Nope, they're still running. I guess maybe that documentation isn't super clear; this is a thing I'd want to look into a little more. But it seems like a pretty useful thing to start with. Maybe it just doesn't scale down mid-window. Okay, well, I think that's useful enough. I don't want to spend too much time digging into that; maybe we can come back to it in a second.
Actually, let's check one thing. Let's edit that: set the start to minute 35, because that's almost what time it is, and set the end to 45 just to let it run for a bit. I want to set desiredReplicas to one and make sure it actually scales down. So let's kubectl apply that one more time. Ten are still running; let's see the logs. Yeah, so it just set a new replica count of one. So if we get the pods now — our desired replicas is still showing 10. I guess I would have expected that to scale to one instead of 10. That seems like an interesting one. Yeah: start at 35, end at 45, desiredReplicas is one. I'm not sure why that's not doing what I expect there. If anybody in the chat has any suggestions, we'd love to hear them; if you've used the cron trigger before, that'd be super useful. Oh — they're good. Like I said, I just got impatient and didn't wait long enough for things to reconcile. So the cron trigger did kick in here and did give me one running pod during that cron window. So that's pretty cool; it seems like it's genuinely working at this point.

Okay, so the next one I want to use is the Azure Storage Queue scaler. This one takes a queue name, the queue length we want, an activation queue length, and a connection from an environment variable. That one seems a little unusual: it's the name of an environment variable on your deployment to use. I wonder if you can use this with jobs or not; maybe not. I'll sketch out the full ScaledObject we're working toward below. So let's give this a try. I do have a really simple Go application here that uses the Azure SDK for Go, connects to a queue, and tries to dequeue a message and print it. Let me build it real quick. Okay, queue-reader. Okay, so it needs to have those environment variables and such set. So what I'm going to do is stop sharing my screen for one second, just so I can set a secret variable that we don't want to share on YouTube. Then I'm going to come back over here, set those, and we'll use the Azure CLI real quick to see if we can pull those things down.

Okay, thanks for bearing with me too. It was a little scary that I didn't see things scale up when I expected them to, but I guess that's the fun of trying to do things live when you haven't used them before and don't really know exactly what's happening behind the scenes. We'll take a look at the logs again; maybe we'll have a little time at the end. Okay. So I have now set this; I should be able to go back and share my screen in one second. Okay. I just created a new queue called clbo. Let me share my screen and go back to that. Okay, I'm running the az storage message peek command. It takes a second — oh, I spelled "message" wrong. Okay. So there are no messages in that queue right now. And if I run queue-reader — I built that thing — no messages to print out. Okay.

So it looks like we need to run this as a deployment. Let's make a new deployment for it; I didn't have one before. Let's copy our NGINX one over just to make it easy on ourselves: nginx-deployment.yaml, into VS Code; actually, I had it here. Okay, so here's our deployment YAML. We're going to change the names to queue-reader. Let's start with zero replicas, so we can watch it scale our thing up for us; I think that'd be the most interesting thing to see. We want to scale it up when there are actual messages in the queue that we want to read. Okay, and we'll call that queue-reader. And I already pushed this to Docker Hub, so there's my queue-reader image, tagged latest. Okay.
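(Here's the ScaledObject I'm working toward, as promised — a sketch, not something we got running on stream. The queue is the clbo one I just created; CONNECTION_STRING is a made-up name for the env var my app reads.)

  apiVersion: keda.sh/v1alpha1
  kind: ScaledObject
  metadata:
    name: queue-reader-scaler
  spec:
    scaleTargetRef:
      name: queue-reader
    minReplicaCount: 0                       # stay at zero until messages show up
    triggers:
    - type: azure-queue
      metadata:
        queueName: clbo
        queueLength: "10"                    # target messages per replica
        connectionFromEnv: CONNECTION_STRING # env var on the scale target's container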
That's cool. Let's create this deployment. Okay, so now I would expect to see no pods. Yep. Okay, one thing that I know I'm missing here: I'm going to need secrets for this thing to connect with. So let's go back to here. Okay, I made one of those while I was off screen too, so let's kubectl apply the connection-string secret. Okay. So now we have a secret called azure-connection-string. So let's go back here to our friend the clbo deployment, and what we need to do is add the environment variables that pull from it. I never remember the exact syntax, so let's go look up the docs for that real quick too: using secrets in a Kubernetes deployment, secrets as environment variables. Okay. So we want valueFrom, secretKeyRef, and the name of the secret it's going to come from. Okay, so let's give that a whirl. This app expects a connection string, and the key for that is going to be — what did I call it? I think I just called it connection-string; we'll give that a shot and see. And then we also need the queue name. Actually, the secret we want is azure-connection-string, so we've got to fix that. And this key, I think, is called queue-name.

Okay. Let's scale this up to one and see what happens before we try to autoscale anything on it. Okay, so let's apply that real quick. Oh, my YAML is bad: line 28, invalid value, env secretKeyRef. Okay, so what did I mess up here? Oh — indent, indent, indent. Perfect. Okay, so now it's creating — and an error. Why did it error? kubectl logs -f. Okay, so it pulled the image, but it's failing. So why is it failing? What's that command — kubectl logs, then the name of the pod. Okay. So kubectl get pods, then kubectl logs -f --previous. Oh, it's not running correctly: "queue-reader: no such file or directory." Okay. So let's take a look at what's in our image. This looks right; let me rebuild it real quick. Okay, yeah, it looks fine now. So let's run that again. Okay, that looks good now. So now, back in my Kubernetes cluster, let's delete the pod so we get out of CrashLoopBackOff; it seems pretty appropriate that I had a crash loop backoff during Crash Loop BackOff.

Okay, it errored again. Let's see why. Oh, okay: "unable to connect to the queue; the connection string is blank or malformed. The connection string should contain key-value pairs separated by semicolons." Okay, let's see if we can figure that out. So I would guess that something's wrong with my secret here, the connection string. Okay, I'm going to stop sharing for one second because I'm going to show another secret value; I only have one screen, so I can't do it off to the side. kubectl get secret azure-connection-string. Oh, yeah. Okay, so our base64-encoded connection string value looks like it's correct. Maybe it's not getting it correctly from the secret? It is called connection-string; that should be right. Wait — there is an extra character in my string. That would do it. Let's fix that real quick. We've only got a few minutes left, so maybe we won't get through this, but thanks for stumbling through it with me. I think it's been an interesting learning experience so far, and I can definitely see where this tool would be really, really useful for a lot of use cases. Okay, so we're going to recreate that secret; there's a quotation mark at the end of the value, I think. Okay, almost ready to try this again, I think. Create the secret. Let's do it. Okay. Almost ready. One more second. I'm making a mistake somewhere. Okay.
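(For the record, the env wiring I was fumbling toward looks like the snippet below; the secret and key names are the ones I think I used, so treat them as illustrative. The gotcha we hit is in the comment.)

  # inside the queue-reader container spec:
  env:
  - name: CONNECTION_STRING
    valueFrom:
      secretKeyRef:
        name: azure-connection-string
        key: connection-string
  - name: QUEUE_NAME
    valueFrom:
      secretKeyRef:
        name: azure-connection-string
        key: queue-name
  # A stray quotation mark pasted into the secret's value when creating it ends up
  # inside the secret verbatim, which produces exactly the "connection string is
  # blank or malformed" error we saw.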
So now I've got my secret again. Let's give it another shot. All right. Sweet — not so sweet: it failed again for the same reason. Okay, well, I'm not going to keep trying to run that thing, but I think we kind of know where it was going. That was my bad; sorry about that. So, same error, right? I recreated the secret, so something about how I'm creating it still isn't quite right; it looks correct, but something's missing from it.

But one thing I did want to show — okay, we're sharing again — is that, if you remember, we had those NGINX pods running, right? So we still have — kubectl get deployments — there's our nginx-deployment, and it scaled to zero. I think if we looked at the logs again, we fell past minute 45 of the ScaledObject config for that cron trigger, and it definitely scaled my thing back down to zero after running for a little while. So that's cool. It happened at minute 49, a little bit after the window ended, but I think that is really, really cool. Having the ability to scale a deployment up and down on a cron schedule seems like a really powerful thing to use, and I can definitely see the value in all these other scalers.

Let's go back to the KEDA docs. So instead of the usual workflow where you're only scaling things off of metrics coming from the metrics server, having the ability to use all of these different scalers is pretty cool; there's a really extensive set of things in here. I think that's really cool. Obviously, it's a little funny that it also provides things like CPU metrics, which you might get out of your existing metrics integration, but there's a really interesting set here, like scaling GitHub runners based on the number of queued jobs in GitHub Actions. That's really cool, especially if you're trying to run self-hosted runners in a Kubernetes cluster; that's a pretty cool way to do it, especially if you want those things to be ephemeral and go away when they're not needed. I didn't even see that when I was scrolling through before. I'm doing a bunch of GitHub Actions stuff in my day job right now, figuring out how to scale runners up and down, so I'm going to come back and take a look at that sometime later today.

Do we have any questions? Anybody want to ask anything, or share anything about using KEDA? I think that'd be really fun. If anybody has experience with KEDA, it would be really awesome to hear about it in the couple minutes we've got left. Ted said thanks for running the session, it was really awesome. Thank you for attending; I hope you learned something along with me today. Sorry I wasn't able to get the second example running, but hopefully the cron example shows you the power here. Let's go look at that config again real quick: there's not a lot of data we needed to put in here, really just the name of the deployment we wanted to scale, right here, and then the metadata for our trigger. You're pretty much good to go after that. That one was pretty easy; just waiting around for things to reconcile, I guess, was my failing. But I think that was really neat to see in action.
I think for the queue one, we would do something similar: a ScaledObject pointed at the other deployment, and really the only thing that would need to change is the trigger configuration down here. I got hung up on that connection string for the queue-reader, but other than that, it would have been pretty much the same; those concepts carry over to the other scalers pretty well.

All right. Well, thanks everybody again for joining us today. I'm going to stop sharing. I hope you learned something today. Come back and check out the next episode; I think there are a bunch of really cool topics we'll see going forward. If you have suggestions, reach out to Jeefy and suggest some projects you'd like to see on the stream going forward. Thanks for joining me, and I hope everybody has a really awesome day today.