Interesting. All right. Yeah, let's, okay, perfect. So are we kicking off right from the beginning? So we're having some technical difficulties here. Lovely, no worries. So let's kick it off from the beginning. Always challenging. Yeah, let's go.

So yeah, welcome to Cloud Native Live, where we dive deep into the code behind cloud native. I'm Annie Talastro, I'm a CNCF ambassador, and I will be your host tonight. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions. So you can join us every Wednesday to watch live. This week we have Andy here with us to talk about how to right-size your Kubernetes workloads, with a live coding demo. So very excited for that. And as always, this is an official live stream of the CNCF, and as such it is subject to the CNCF Code of Conduct. So please do not add anything to the chat, or ask any questions, that might be in violation of that Code of Conduct, and please be respectful of all of your fellow participants as well as the presenters. With that done, I'll hand it over to Andy to kick off today's presentation — and take number two, I guess.

All right, let's do this again. So I'm Andy, thanks for having me back on the show. I'm the CTO of Fairwinds. We are a Kubernetes-first company; we've been doing Kubernetes for six or seven years at this point. We run clusters for our customers, we operate those, and we also provide a lot of software in the Kubernetes space for a lot of different things. Today I'm gonna focus on setting your resource requests and limits. We've always told our customers: hey, set your resource requests and limits. This is what makes the scheduler work. This is how we're able to autoscale properly. This is how we introduce stability into our clusters. And it's kind of the first thing everybody needs to do with all of their workloads. And all of our customers would come back to us and say, that sounds great — what do we set them to? And I said, that's a great question, a very reasonable ask: what should we set them to? So we would go look at graphs and dig through their workloads and say, here's what we think you should set these workloads to for your resource requests and limits. But it's a bespoke, one-off process. It's not really something we can do continuously, and it's not something we can re-up on frequently. And so we said, there's gotta be a better way to do this.

So I went looking around at the software available at the time — this was probably three or four years ago at this point — and I really wanted to learn Go at the time. And I was like, all right, what can we do here? What can we write? And I found the Vertical Pod Autoscaler project. The Vertical Pod Autoscaler, if you're not familiar with it, introduces a CRD called a VerticalPodAutoscaler; you attach that to your workloads and it can automatically resize them based on how much CPU and memory they're actually using. I said, that sounds kind of cool, but I don't really like the idea of automatically doing that. I'm a big fan of infrastructure as code. It also doesn't work well if we're using HPAs, Horizontal Pod Autoscalers, with CPU and/or memory. And so, what else can we do here? And I looked at it and said, well, it's providing these recommendations, and these are really great. Why don't we just operationalize the Vertical Pod Autoscaler a little bit more?
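For reference, a minimal sketch of the CRD object being described, in the "Off" mode that Goldilocks relies on; the Deployment name here is a placeholder, not from the demo:

```yaml
# A minimal VerticalPodAutoscaler in "Off" mode: it only generates
# recommendations and never resizes anything. The target Deployment
# name "my-app" is a placeholder for illustration.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"
```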
And so instead of having to create a VerticalPodAutoscaler object for every deployment in my cluster and manage that separately, we will just do that for you. So we wrote a little controller that creates all of these VerticalPodAutoscaler objects, takes the results from them, and summarizes them, basically. How do we do that? Out of that came a project called Goldilocks, which is one of our open source projects, and it's what I'm gonna focus on today. So why don't we just go ahead and jump straight into the screen share here? Thank you.

So this is what Goldilocks looks like from the dashboard, but under the hood there's a lot more going on. I'm gonna start with the setup and talk about how you install Goldilocks to get it working. I have here a Kubernetes cluster; it's a kind cluster running on my machine, this one's 1.23. And I've installed a couple of things in here. The first thing we have to do is make sure we have metrics available. So let's look in our metrics-server namespace: we have a metrics-server running, and we can top pods and see the current CPU and memory usage for all the pods in our cluster. That's good, because we can't really make recommendations on usage without existing metrics in place.

The second thing that you have to install as a prerequisite for Goldilocks is the vertical pod autoscaler. I have done this already, and let me find the command that I used. I just installed the VPA Helm chart from the Fairwinds stable repository. So we have a vertical pod autoscaler recommender running. If we look in the VPA namespace, we have a recommender — this is the only component of the vertical pod autoscaler that's required. And then the CRDs for the VerticalPodAutoscaler and the VerticalPodAutoscalerCheckpoint have to be installed as well. So we'll kubectl get vpa, and it doesn't yell at us that the resource doesn't exist. So we have the vertical pod autoscaler installed.

And then the next thing that we'll do is install Goldilocks. That can also be done via Helm — a fairly straightforward command, a helm upgrade --install or a helm install. I've already installed it, so I'm using upgrade --install. It goes in the Goldilocks namespace, I'm gonna create the namespace, and then I'm going to set a flag called on-by-default. Here's a diagram of the controller. Typically, Goldilocks works via annotating or labeling namespaces to enable them for Goldilocks. So if you wanna just test it out in one or two namespaces, you can run it the default way and then label a namespace. The other way to do this is to set this on-by-default flag, and what that's gonna do is tell Goldilocks: look at all the namespaces, everywhere, all the time, no exceptions.

So Goldilocks is now installed. If we look in the Goldilocks namespace, we have two components: a dashboard and a controller. First we're gonna focus on the controller, so let's take a look real quick at the logs on that controller. And there's an audience comment — perhaps not quite a question, but a comment that we can talk through. So Visily says: VPA recommendations for memory requests/limits are risky. I totally agree. And that is a great way to talk about what Goldilocks is for and what Goldilocks is not for. Goldilocks is intended to give folks a starting point.
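For reference, the install steps described above look roughly like this. The chart repository is Fairwinds' real one, but the exact value name for the on-by-default flag is an assumption from the stream, so check the chart's documentation:

```shell
# Add the Fairwinds stable chart repository.
helm repo add fairwinds-stable https://charts.fairwinds.com/stable

# Prerequisite: the vertical pod autoscaler (the recommender is the
# only required component).
helm install vpa fairwinds-stable/vpa --namespace vpa --create-namespace

# Goldilocks itself, with every namespace opted in by default.
# NOTE: the value name for on-by-default is approximate; see the chart docs.
helm upgrade --install goldilocks fairwinds-stable/goldilocks \
  --namespace goldilocks --create-namespace \
  --set controller.flags.on-by-default=true
```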
When you're spinning up services in Kubernetes and you have no idea how much in resources you need to allocate to your different things — if you're not testing them locally to see how many resources your pod consumes, or you just need some place to start — Goldilocks is great for this. It's a great way to just see: hey, I'm roughly using this amount of memory and CPU. It's a great starting point. Is it 100% accurate? Is it the absolute truth of what your workload needs? Absolutely not. And I don't think you should just automatically copy the recommendations from Goldilocks, necessarily. But it is a great way to get started — always review over time, and always do a sanity check to make sure that you're setting things reasonably. So great comment, totally agree with you. The other thing you can do is connect the VPA to Prometheus to use a longer history of metrics to generate the recommendations, which does make them a little bit more accurate. So great comment, and keep those coming.

Yeah, and then there was a question regarding the logistics here. Mauricio missed the first minutes — will this be recorded so they can revisit? Yes, it is being recorded, and you can access the recording on the CNCF YouTube channel very soon after this. So if you missed a few minutes, no worries, you can always watch it later.

All right, so coming back to our controller here, we can see that Goldilocks has run a few of what we call reconciliations, and it has gone and created vertical pod autoscaler objects for all deployments in my cluster. This also works for other pod controllers like daemonsets. Technically it'll work for jobs and cron jobs if you enable the RBAC, but I haven't had great results with the vertical pod autoscaler and jobs and cron jobs — that's something we need to look into a little bit further, but we're not gonna talk about that today. So we have vertical pod autoscalers, and we see they're generating recommendations for all of our workloads in this cluster. And you'll see here that the VPA is in mode "off", which means it's not gonna automatically update anything, it's not gonna change anything in my cluster; it's just gonna sit here and generate recommendations, which is exactly what I want from it.

I'm gonna focus on the stress namespace for the moment, because there's actually load here. I'm running a stress container; it's attempting to consume higher CPU, and I've set the CPU limit to 500 millicores, so it is being throttled very heavily right now, I would assume. And so we can take a look at that VPA. All the VPAs created by Goldilocks will be prefixed with "goldilocks-" so that if you have existing VPAs, it'll play nicely with those and not interfere with them. We take a look here and we see that the VPA object has a status, and it has this recommendation object in it. This is the thing that Goldilocks is looking for: it's going to query all of these, it's going to pull these values, and it's also gonna take a look at the existing resource requests and limits for the pod. So here's the pod, and we take a look at the resources block, if we can find it... there's that resource request. All right, so we've got a limit of 500 millicores, and by default, because I only set a limit, we get a request of 500 millicores. So now we can talk about the dashboard.
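For reference, the status block being read here has roughly this shape; the VPA name is an assumption based on the prefixing described above, and the bound values are illustrative (only the 587m/105M target comes up in the demo shortly):

```yaml
# Roughly the shape of the status on a Goldilocks-created VPA, e.g.
#   kubectl get vpa goldilocks-stress -n stress -o yaml
# Bound values here are illustrative.
status:
  recommendation:
    containerRecommendations:
    - containerName: stress
      lowerBound:
        cpu: 468m
        memory: 105M
      target:
        cpu: 587m
        memory: 105M
      uncappedTarget:
        cpu: 587m
        memory: 105M
      upperBound:
        cpu: 989m
        memory: 105M
```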
So this is where I'm going to start moving a little bit towards the live coding that we're going to do today, because I have the Goldilocks repo open here, and I have a branch open for the live stream, and I'm going to go ahead and build this and run the dashboard locally. As long as you have a kubeconfig that has access to the cluster, you can run the dashboard and the controller locally and just point them at the cluster. I'm running the controller in the cluster just because I'm not going to mess with the controller code today, so I'll let that handle things rather than sit here running it in the background. So let's go ahead and run this: we're going to run the dashboard command, and we're going to pass on-by-default, which has the same behavior as on the controller — it ignores any sort of labeling and just turns on Goldilocks for all of the namespaces. So we start that up and we see it's running on port 8080. We'll go back over to our browser, give it a little refresh, and we'll see that we're running the dashboard locally. So that's great; I can make changes and hopefully they'll show up here.

If we go take a look at our stress namespace, we're going to see that we have a single deployment in this namespace with a single container, and we're going to see two different sets of recommendations — we'll talk a little bit about where these come from. Right now we're seeing: I don't have an explicitly set CPU request. It has been set implicitly by Kubernetes, but it's not been explicitly set in my deployment, and so I should probably do that. And then it's going to surface the VPA recommendation here. So here we're saying: this is if we want to use guaranteed QoS, which means setting our resource requests equal to our resource limits — there are some definitions down here. Then we're going to want to set our CPU request to 587 millicores according to the VPA, our CPU limit to 587 millicores, and our memory request and limit to 105M. That's the recommendation from the VPA, and we have some YAML here if you want to just copy-paste it in.

And then over here, we have the burstable QoS, and this is going to be the topic of the day. Well, let's talk about this a little bit. This is going to pull the VPA lower bound — let's go look at our VPA object again. This is where it gets a little bit confusing. We take a look here, and we have four different values that the VPA gives us: a lower bound, a target, an uncapped target, and an upper bound. For the guaranteed QoS recommendations, Goldilocks is going to pull this target for both values, for both the request and the limit. But for the burstable QoS, where we're setting our requests lower than our limits — where we're allowing the container to burst up from its requests — it's going to pull, and I have to double-check this in the code once we dive into it, but I believe it's going to pull the lower bound and the upper bound as those two values. And then the dashboard is going to check whether you're inside of that upper bound and lower bound. And this is where we introduce a little bit of confusing behavior in Goldilocks. This is a decision that I made years ago and regret, as we all do with code we've written, I imagine. So if we go take a look at a different namespace — let's just take a look at Goldilocks itself — we're going to see this here, where it says our CPU request of 25 millicores is equal to the burstable recommendation of 15 millicores.
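Putting that mapping side by side, using the same illustrative numbers as above:

```yaml
# How the dashboard maps the four VPA fields onto the two recommendations.
---
# Guaranteed QoS: requests == limits, both taken from `target`.
resources:
  requests:
    cpu: 587m
    memory: 105M
  limits:
    cpu: 587m
    memory: 105M
---
# Burstable QoS: request from `lowerBound`, limit from `upperBound`
# (the mapping that gets double-checked in the code below).
resources:
  requests:
    cpu: 468m
    memory: 105M
  limits:
    cpu: 989m
    memory: 105M
```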
Now you may be thinking: 25 is not equal to 15, Andy, that makes no sense. And I agree with you, it doesn't make any sense. We have an open issue with quite a lot of discussion on this, but really the way it's intended to work is: hey, we're saying your lower bound is 15 millicores, your upper bound is 253 millicores, you're currently set to 25, and that's within the range — it's in between those two. Again, probably not the most obvious behavior. So today, what we're going to do is change how we display the lower bound and upper bound, and how we tell you what you should set things to. And I have a couple of recommendations from some coworkers on this behavior as well. So if we don't have any questions — which I don't think we do — I will go ahead and start. Yeah, well, let's go ahead. All right, we will carry on.

So I have over here the Goldilocks code base. Let me close this, and we'll start with the tree here so I can describe what's going on. In here, we have a package dashboard, which controls the dashboard. We have a gorilla/mux router, which handles the dashboard routes, and then we have a bunch of templates that render the dashboard. Yes, I know gorilla/mux is deprecated; we'll be moving off it eventually, at some point, but we haven't had the time. So we will take a look. I know for a fact that the container recommendation here, this blue box, is rendered by container.gohtml. So we'll take a look at our Go template and see.

All right, so we've got a bunch of variables being defined at the top here. We're pulling in the CPU request, CPU limit, memory request, and memory limit for that container — these would be the existing values that you've set. We've got the lower bound and upper bound for both memory and CPU; those are the VPA recommendation values that we're gonna pass in. And then we have the CPU target and the memory target, which is what we're gonna use for the guaranteed QoS. And then some other stuff that is not super important. So let's take a look: we've got current values for the guaranteed QoS class, but we wanna go down to the burstable QoS class. And here is where we start to see the recommendations. So we've got — oh, if the burstable... that's cost; we're not gonna worry about the cost stuff today. Let's look at the recommendations. Current values, CPU request, CPU lower bound, and where we pull in our icon. Let's do this: let's find the ID of the equals sign here. HTML is not my strong suit, so I'm definitely reaching a little bit today. That's what we're looking for: comp icon. There it is, okay. So here's our icon, and we're calling a template function, getStatusRange. We're passing it the current CPU request, the lower bound, and the upper bound. So this getStatusRange is, I assume, the function that we're looking for. Let's go back to our router, take a look, and dive through the dashboard namespace function. So we're here, and this is sort of the dashboard. Oh, I remember now: in templates here, we have a set of template functions somewhere. Where did those go? Where are they? Hmm, well, we'll keep digging. Files, cost stuff we're not paying attention to today. And then we render: we create the data and then we write templates... and where are our template functions? They're in here somewhere. All right, we're gonna cheat. We need getStatusRange, so let's search for getStatusRange.
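For reference, a minimal sketch of how helpers like this get wired into Go templates via a FuncMap, mirroring the "anywhere in the range counts as equal" behavior just described. The names and return strings are illustrative, not the actual Goldilocks code:

```go
package main

import (
	"html/template"
	"os"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Register custom helpers on the template via a FuncMap. This toy
	// version reproduces the old behavior: anything inside the range
	// reads as "within range".
	funcs := template.FuncMap{
		"getStatusRange": func(existing, lower, upper resource.Quantity) string {
			if existing.IsZero() {
				return "not set"
			}
			if existing.Cmp(lower) >= 0 && existing.Cmp(upper) <= 0 {
				return "within range" // 25m vs [15m, 253m] lands here
			}
			return "out of range"
		},
	}
	tmpl := template.Must(template.New("demo").Funcs(funcs).Parse(
		`CPU request is {{ getStatusRange .Existing .Lower .Upper }}`))
	data := map[string]resource.Quantity{
		"Existing": resource.MustParse("25m"),
		"Lower":    resource.MustParse("15m"),
		"Upper":    resource.MustParse("253m"),
	}
	tmpl.Execute(os.Stdout, data)
}
```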
It is in templates.go, and — I'm gonna scroll right past it — here we go: getStatusRange is a template function that we're passing into the templates, and it lives in helpers.go as getStatusRange. Now we have found the source of our problems. So we're passing in an existing value, a lower value, and an upper value, and those are all resource quantities. This is where things get a little bit fun. These are resource quantities from Kubernetes — I believe it's in apimachinery. We can compare those with each other; where they get fun is in displaying them, because they are not a normal number — they can be specified in different units and things like that. That's something else we need to tackle in this dashboard, because you may notice we typically specify mebibytes, and I believe we're getting megabytes back here. So maybe a problem for another day; we'll see if we get to it.

So we're gonna take a look: if the existing value is zero, it's not set, so we're gonna return this Font Awesome icon called exclamation. And then we also have a concept of both text and icons, because we wanna include a text version for anybody using a screen reader instead of a Font Awesome icon. So we have two different things that we return: an icon and a text. That's if it's not set. All right, then we do a lower comparison, which compares the existing value to the lower bound, and we compare the upper bound to the existing value. So we have our two comparisons, and these come back as integers: the comparison function returns negative one if one value is less than the other, zero if they're equal, and positive one if it's greater. And so if the upper comparison is less than or equal to zero, or if the upper comparison is lower and the lower comparison is greater, then we return the equals icon. This is where our issue exists: if it's anywhere in that range, we return equal.

Yeah, and we have a new audience comment, from Visily again. Amazing, thank you. So they say: for the VPA in a non-prod environment to be useful, LTs should be run constantly; in a prod environment, there will be resistance from app teams to deploy/use it. Yep, great point. I'm assuming LTs means load tests — I can't think of anything else it would stand for at the moment. So yes, if you're running this in your non-production environment, you need to be generating load to get accurate recommendations. The VPA is looking at existing utilization, so obviously in your non-prod environment, unless you're running load tests, the numbers will be off. And then in prod, there will be resistance from app teams to deploy or use it. I would argue that Goldilocks is not the responsibility of app teams to deploy and use. I think operators — cluster administrators — can run Goldilocks in the cluster and provide the results back. And because we're running the VPA in off mode, where it never updates anything, it's perfectly safe to run across your entire cluster to provide these recommendations in production. So I would say Goldilocks should be run in production, because it's perfectly safe to do so, and by cluster operators. So thank you for the comments.

All right, so now we need to decide what to do with our comparison here. We know that the existing value is less than the upper bound and greater than the lower bound, but that's way too large a range for us to be saying "this is equal". So I think what we wanna do is take a look at the size of the difference — how different is it?
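A small runnable sketch of the apimachinery Quantity behavior just described — Cmp returns -1, 0, or 1, and values compare correctly even when written in different units:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	a := resource.MustParse("0.5")  // half a CPU core
	b := resource.MustParse("500m") // also half a core, in millicores
	fmt.Println(a.Cmp(b)) // 0: equal in value despite different notation

	mem := resource.MustParse("105M")  // 105,000,000 bytes (decimal megabytes)
	mib := resource.MustParse("100Mi") // 104,857,600 bytes (binary mebibytes)
	fmt.Println(mem.Cmp(mib)) // 1: the decimal-megabyte value is larger
}
```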
How much bigger or smaller is it than the lower recommendation? So let's just start throwing something together here. Let's figure out the percentage difference from the lower bound. And — I never remember this actual formula, so we're gonna have to work it out. I signed up to do math live today; probably a terrible choice, but we'll see how it goes. So we have the existing quantity and the lower quantity; existing is the current one. Lower, upper — we need to be doing this for... so here we're presenting the — I need to look at this — the CPU request, and then we're passing the lower and the upper bound into this function. Yeah, this may be messier than I thought. We have no concept in the getStatusRange function of whether we're being asked about the resource request or the resource limit, to pass back to our template. So we need to know that: lower versus upper. So in this case, do we want to remove the equal sign entirely, or change this logic completely?

And now there's a question from Zechariah: how can we set the CPU limit for a pod to avoid CPU throttling? That is a great question and a very large, meaty topic that I don't necessarily have time to go into today. CPU throttling is a very common issue, a very contentious issue. There have also been several Linux kernel bugs around it in the past, so it's gotten a lot of noise. And it's complicated because — there was actually a really great talk that somebody sent me from KubeCon, I believe the last KubeCon North America — that talks about why we have trouble communicating about CPU: we specify it in fractions of cores, and CPU is actually allocated in time. And so we're doing this weird translation of quantities that makes reasoning about CPU requests and limits a little bit funky. Essentially, to avoid CPU throttling, turn up the CPU limit or turn it off. There are a lot of advocates out there for not setting CPU limits at all. I haven't quite jumped over to that side at this point in time, but it is in some cases a valid way to do things. Generally, though, just increase your CPU limit. And actually, I intentionally set this demo up with the stress container being throttled, to show that the VPA will detect that you're at the high end of CPU usage — but I don't believe it actually takes CPU throttling into account. So if you are seeing a lot of CPU throttling, this is one of those cases where the VPA may not give the best recommendation. Go ahead and turn that CPU limit up if throttling is affecting your workloads; there's no harm in doing that in most cases, assuming you have the space to schedule that pod and all of those things. So definitely a big topic and something we should all learn more about. I definitely need to do a little more research there.

So cool, let's go back here to our code. I think what we want to do is just... and there's another question and comment as well, which is amazing, by the way — thank you so much, everyone, for engaging. So Visily asks: let's say for a microservice the VPA recommends changing the CPU from 2 to 1.3 — how can you assure app teams that there will be no performance degradation? I can't. That's why we test. So I think testing these changes in staging is the right way to go. As the same person mentioned earlier, load testing in staging is a very valuable tool. Very valuable tool.
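A sketch of the percentage-difference math being reached for here; percentDiff is a hypothetical helper for illustration, not something from the Goldilocks code:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// percentDiff: how far the existing value sits from a bound, as a
// percentage of the bound. AsApproximateFloat64 escapes Quantity
// arithmetic so we can do plain float math.
func percentDiff(existing, bound resource.Quantity) float64 {
	b := bound.AsApproximateFloat64()
	if b == 0 {
		return 0 // avoid dividing by zero when the bound is empty
	}
	e := existing.AsApproximateFloat64()
	return (e - b) / b * 100
}

func main() {
	existing := resource.MustParse("25m")
	lower := resource.MustParse("15m")
	// 25m is ~66.7% above the 15m lower bound.
	fmt.Printf("%.1f%%\n", percentDiff(existing, lower))
}
```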
And so if you're not doing that, or if it's a highly critical workload and things like that, then perhaps not turning down your CPU requests would be the right way to go. I always recommend best judgment. Everybody's workload is different, everyone's workload has different requirements, and we have to take all of those things into consideration as operators when making these changes. It's called a recommendation for a reason; it is not an absolute.

All right, so back to this. I think what we need to do is just remove this particular if statement — or maybe we just remove it and see. I'm curious. Yeah, let's give that a shot. Let's restart our process here. So I just removed the statement that says "if it's in between the two, show an equal sign", which I believe means we're never going to get an equal sign, so that concerns me a little bit. Let's take a look at the effect that this has had on our dashboard. Let's go take a look. What was that? Goldilocks, goldilocks — yep, yep, there's the problem. Now we never get an equal sign, or anything at all, which is not quite right. So if we are in between... but we need a not-equal. So this is getStatusRange. What are we using? We must be using a different function for these; let me make sure we can still generate an equal sign. That was the controller — so let's edit the controller. I'm going to edit the resource requests on the controller to match the current recommendation and refresh. So now we are getting an equal sign on the left, on the guaranteed QoS, so we haven't broken that. That's good.

And so over here in getStatusRange, let's go ahead and say: if greater, return greater-than; if less, return less-than; and if we get this far, let's go ahead and return equal. See, do we have any success? Here we go — fa-exclamation is the error icon we've been using. All right, so now for anything else we're just going to return a "not". Oh, we need to restart. There we go. All right — oh, now we need to check actual equality, because now we're saying 15 is not equal to 15. So, live coding exercises — always dangerous. All right: if the lower comparison equals zero, return fa-equals, success. Not there. Right, let's get our Boolean operators correct — and yes, I want to restart the process too. All right, see, now this is where we run into the problem I was anticipating: we don't know whether we're looking at the lower or the upper. So in getStatusRange we need to know whether we're looking at a resource request or a resource limit — let's call it the resource type. So we're going to ask for a resource type, and then — oh, we're going to completely change the logic of this function. That's going to be exciting. All right, for the equals case we'll add another switch on resource type: a case for request and a case for limit, and then the comparison. So if it's a request, we want to compare against the lower bound, and if it's a limit, we want to look at the upper bound.

Cool, so we've got that; let's go back here. Now we're going to have to call this function differently. We know this one is for the request, so we will say "request" here, and "limit" for the limits. So we're just going to pass whether it's a resource request or limit into our function. And hopefully, once the compiler stops yelling at me... oh, we're getting a template data error. All right, we broke it: "request" not defined.
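Roughly where the reworked helper seems to have landed, as a hedged sketch — the icon strings and resourceType values are approximations of what's described on the stream, not the exact Goldilocks code:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// getStatusRange, reworked: exact comparison against the relevant bound,
// instead of "anywhere inside the range counts as equal".
func getStatusRange(existing, lower, upper resource.Quantity, resourceType string) (icon, text string) {
	if existing.IsZero() {
		return "fa-exclamation", "not set"
	}
	// Requests are judged against the lower bound, limits against the upper.
	bound := lower
	if resourceType == "limit" {
		bound = upper
	}
	switch existing.Cmp(bound) {
	case 0:
		return "fa-equals", "equal to the recommendation"
	case 1:
		return "fa-greater-than", "greater than the recommendation"
	default:
		return "fa-less-than", "less than the recommendation"
	}
}

func main() {
	req := resource.MustParse("25m")
	lower := resource.MustParse("15m")
	upper := resource.MustParse("253m")
	icon, text := getStatusRange(req, lower, upper, "request")
	fmt.Println(icon, text) // fa-greater-than: 25m no longer reads as "equal"
}
```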
I probably need to... you know what? Now I know why we did it this way. All right. So there are some variables here just to make escaping quotes and things easier; it seems like an awful way to do it, but I'm just going to follow the pattern that's in place. It's amazing, when you revisit code that you wrote so many years ago, the number of changes and things you want to make — but I don't have time for that today. Although it does look like I might get to some of it; we will see. "Template data: function 'limit' not defined." Why does it think I'm calling another function? getStatusRange — oh, we've got to do this again. We need request, request, limit, limit — and getStatusRange, we need to do it here as well. All right, I was trying to avoid having to add another parameter to that function, but such is the way it goes. Now hopefully — we'll see, it could work — all right. So now we're seeing: if we're exactly equal to the lower bound, we get the equal sign, and same for exactly equal to the upper. If we're off, we get a "not" icon — it should probably be a not-equal — but that's great. So let's go ahead and commit that change, because that was annoying enough as it was: "Replace equals logic for burstable QoS to be more intuitive." So that's the base of the issue; that fixes the beginning of it.

But we have 15 minutes left, and there's more to do, because this recommendation is going to fluctuate as you generate load. If we go look at our stress container — I'm going to guess, well, okay, stress is pretty consistent, but this will probably move around 587, up and down a little bit. And we don't necessarily want to always suggest changing by those small increments. So I have two recommendations from coworkers, which I think are really good: either round the recommendation, or only flag it as not equal if it's more than 10% off, something like that. I think what we should do first is round the recommendation. So let's go digging for where we might do that. Oh, we should fix our tests. You know, testing is boring; I'm not going to do that here today. I will fix them before I open the PR, but let's keep the live stream focused on features, and I can do the testing work later. Testing is very important, though — I'm not saying you shouldn't have tests.

I think what we're actually going to need to do is modify the summary package. The summary package is what actually goes and gets all of the data from the VPAs. So if we run the summary command here, we'll actually get a big old gnarly JSON object that has all the data that feeds the dashboard. It's sort of the API side of things that's getting that information. So let's see where we go collect all of our VPA objects. NewSummarizer, GetSummary — where is the function where we get the summary? The aptly named function. If we're filtering by namespaces, we look in a cache that we keep locally, just to speed things up a little bit, and then we get-or-create the namespace summary. There we go. All right, so the workload summary: the first thing we do is get the actual settings from the workload — we don't need to worry about that part here — but now, here we are: if the VPA status is nil, here's what we do with that; if the length is less than or equal to zero; get the excluded containers. Ah yes, I'm vaguely remembering this now. This is one big ugly loop, looking at it now. You have to go through all of the containers in each pod and separate them out, because you can have multiple containers in a single pod.
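For reference, the summary command as described — assuming the subcommand is named as it was run on the stream; jq just makes the big JSON object readable:

```shell
# Dump the raw recommendation data that feeds the dashboard.
goldilocks summary | jq .
```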
So we have this big loop here, but we're not gonna worry about that too much either, because that's excluding containers and pulling the workload spec. Upper bound... it calls this formatResources helper. Ah, wonderful — we already have a helper function here that's formatting the recommendation, which means we can go modify the format function to do our rounding. All right, so — oop, this might also be a place where we can fix the type of recommendation that we're giving. Okay, we're not gonna worry about that right now. So this is just formatting memory: there's a max allowable string length of five, and if the length of the memory string is greater than the max allowable length... Oh, we are rounding. "RoundUp updates the quantity to the provided scale, ensuring that the value is at least one. False is returned if the rounding operation resulted in a loss of precision." Hmm — but I don't actually care if I lose a little bit of precision. resource.Scale... hmm, not quite what we want.

Yeah, and there's an audience question as well, from Tahir: recommendations are based on historic metrics from Prometheus, so if the Prometheus pod is, let's say, restarted, do we then get incorrect recommendations, and in what ways can we trust the recommendations? Great question, great question. So recommendations can be based on historical Prometheus metrics. And really, all of these questions about the quality of the recommendations and what the recommendations are actually doing have nothing to do with Goldilocks — I just wanna be clear here. Now, I'm not saying that I'm not responsible to the folks who are using Goldilocks, but if you do have deeper questions about how the vertical pod autoscaler functions, I definitely recommend going to that repository and maybe contacting that community. But you can hook up the vertical pod autoscaler to Prometheus so that it will take historical utilization into account. And I do believe it actually takes out-of-memory events into account specifically — I have not dug through this code in a little while, and it has changed since then, but I do believe that when it sees historical out-of-memory events, it will increase the memory recommendation. I can't promise that, though, so I definitely recommend checking with the vertical pod autoscaler folks. It's in the Kubernetes autoscaler repository, under the vertical-pod-autoscaler folder, where all the code for this lives. And they've actually done a lot of updates that we'll be incorporating into Goldilocks soon, probably within the next few months or so; we may get some enhanced behavior from that, because I know there's been a decent amount of work done on that repository since our last update. But great questions, keep them coming.

Okay, I'm not certain this is where I want to round my recommendation. So let's go back to our summary package. We're getting the upper bound right here, and this is a resource list. Right. Yeah, and another question immediately here, which is great: can it be integrated with the Thanos gateway, which may persist this data for a longer period, to get higher continuity? I have no idea — quite possibly, but I really don't know. From the VPA's perspective, if we go back to that repository, take a look at running the recommender package, and look at the flags on the recommender, all we can give it is a Prometheus address, a Prometheus job name, and then a history length.
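A small runnable sketch of the RoundUp behavior just read out of the docs — it mutates the quantity to the given decimal scale and reports whether any precision was lost:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	cpu := resource.MustParse("587m")
	exact := cpu.RoundUp(resource.Scale(0)) // round up to whole cores
	fmt.Println(cpu.String(), exact)        // "1" false: precision was lost

	mem := resource.MustParse("104857601") // bytes
	mem.RoundUp(resource.Mega)             // round up to whole decimal megabytes
	fmt.Println(mem.String())              // "105M"
}
```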
So assuming you have a Prometheus endpoint available that it can query for the amount of time you've told it to, in theory that should work. But I am not at all familiar with the Thanos gateway and how it functions, so I can't answer that question directly. Okay. Well, we only have a few minutes left, so I'm not certain I'm gonna have time to round the numbers, but I will be working on that. Keep an eye out for a PR on Goldilocks within the next few days to implement these changes, and hopefully clarify the very confusing behavior of the equal sign in previous versions of Goldilocks telling you that large numbers are equal to small numbers — because I think we can all do that math very quickly.

So do we have any more questions? We do not, not at the moment. But while we see if there are any questions, maybe you could let us know if there are any learn-more resources we can check out after the session, or anything like that. Yes. So we do have documentation for Goldilocks, with plain information about how it functions; there's a whole FAQ on how to use it. If you wanna use Goldilocks, check that out — that is at goldilocks.docs.fairwinds.com. If you want to take a look at any existing issues, or file an issue, please go to GitHub: github.com/FairwindsOps/goldilocks, fairly easy to find. In our FairwindsOps repositories we have a whole lot of other open source projects, so please check those out: we have Pluto for checking for deprecated API versions, Polaris for policy, and then Nova for checking for out-of-date versions of things — because we all know keeping the many things we run in Kubernetes up to date is a nightmare. So lots of great open source resources from Fairwinds there. I think that's it for resources.

Great. And I think we're getting close to the final call for questions, because we only have a few minutes left. But is there anything else you wanted to finish us off with from your side, while anyone who's typing a question can still submit it? Set your resource requests and limits, folks. That is kind of my thing; I talk about it all the time. I work with Goldilocks a lot, and so many problems can be mitigated by just setting them, setting them properly, reviewing them over time, and load testing in your non-production environments if possible. So I highly recommend that.

Perfect. And I don't see any new questions as of now, so I think we can start wrapping it up. This has been really great, and I loved all the questions from the audience as well. Definitely. Thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about how to right-size Kubernetes workloads today, and we really loved all the interaction and questions from the audience, as I mentioned before. As always, we bring you the latest cloud native code every Wednesday, and in the coming weeks we have more great sessions coming up, so tune in then as well. Thanks for joining us today, and see you next week. Thank you.