Hello, everyone, welcome to Cloud Native Live, where we dive into the code behind Cloud Native. I am Annie Telastro, and I am a CNCF ambassador and lead marketing at Vision as well. And I will be your host tonight. So every week, we bring a new set of presenters to showcase how to work with Cloud Native technologies. They will build things, they will break things, and they will answer all of your questions. You can join us every Wednesday to watch live. So this week, we have Andy and Stevie here with us to talk about Cloud Cost Monitoring. Very excited for this. And as always, this is an official live stream of the CNCF and, as such, it is subject to the CNCF Code of Conduct. So please do not add anything to the chat or the questions that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow participants as well as presenters. So with that done, I'll hand it over to Andy and Stevie to kick off today's presentation. Thank you very much, it's great to be back here. So as the title says, we're talking about Kubernetes Cost Management. The title is a little bit of a misnomer though, because we're actually here to talk about the thing that I care most about, which is resource requests and limits in Kubernetes. If you've listened to anything that I've done, you've probably heard me say this, but I'm also joined by Stevie today. So I'll do a quick intro of myself. I'm Andy, I'm the CTO at Fairwinds. I'm an author and maintainer of a lot of our open source at Fairwinds, as well as a long-time Kubernetes practitioner. And then I will hand it over to Stevie, who's going to do the majority of our demos today, to introduce herself. Hi, so my name is Stevie. It's very weird because you all are on a screen to the right, so I'm looking at my camera and I'm not seeing you, so it's very weird to be just speaking into the ether. But my name is Stevie. I'm an SRE technical lead at Fairwinds.
I've been in the field for a number of years and have had many different roles. And I'm currently at Fairwinds, helping customers with Kubernetes clusters and working on some of our open source stuff. Awesome. Is that me? Is it my turn? All right, I'm kicking it off. Although before I kick it off, I did have a quick question. Oh, oh. Yeah. This is a standing joke of Andy's and mine. Andy, why should you not use beef stew as a password? Why should I not use beef stew as a password? I mean, other than the fact that it's too short. I got nothing. Why? It's not stroganoff. Oh, that's terrible. That was great. I love it. So yeah. So we're here today to talk about — so it says cost optimization. And as Andy said, really what we're gonna talk about is his favorite topic, which is resource requests and limits, because that is a big part really of cost optimization. So it's already difficult. Like if you just have a plain app running on like an EC2 instance or wherever, right? You've already got the challenge that every time you change your app, every time you add a feature to it or something, that changes your app's profile in terms of the resources that it needs to use to do its job, right? And then you add containerization and orchestration and like the whole Kubernetes thing on top of that, and you have a bigger challenge in trying to understand the cost of your app. How much of the cost of your overall infrastructure is due to that app, right? You've got things like the fact that if you're running a Kubernetes cluster, you likely have multiple teams deploying to it at any time, right? So you don't know who's throwing what into your cluster, how they're setting things — or not setting things, as the case may be. You get someone who's just like, I'm gonna toss this thing in here with two gigs of memory, because that's what I think it needs. So let's just go. My MacBook has 32 gigs of memory, right? So that's how much my app needs.
Exactly, right? I'm running it on my MacBook, so that must be the thing, right? Or, this is what it took to run it in Heroku. And then you have things like potentially running in multi-cloud, right? You could be running in GKE, AKS, EKS. You know, so you have things running all over the place; how do you keep track of the costs when it's spread across clouds? And then the most major thing is the fact that workloads are dynamic in an environment like Kubernetes, right? Just in the general, everyday operations of the cluster, your workloads will move around, they'll get rescheduled, right? To different nodes or whatever. And if you've got any kind of auto-scaling in place — if you've got a horizontal pod autoscaler or vertical pod autoscaling, you know, or cluster autoscaler — then you've got these things potentially moving around a lot, right? They're just always — you never know where they are. So it's really hard to track them. They're like ninjas. It's really hard to track them across your cluster and figure out: what exactly do you need? How much are you costing me at any given time? Layers of abstraction that you've gotta dig through. Yeah, exactly. Like it becomes very difficult to take a basic thing like a node and slice it up and say, oh, this application is costing this and this and that. So, you know, that's where we come in. Like, how do you get to the point where you can start understanding what the costs of your workloads in a Kubernetes cluster might be? So, you know, Andy again was talking about resource requests, because that is one of the things the kube-scheduler is using to make decisions about where to schedule your workloads in the cluster, right? It's got a pretty complex algorithm actually, but there are multiple factors that it takes into consideration, including what you tell it you need for your application.
It uses that to determine how well and how efficiently it can put your workloads together on a node in your cluster, right? So what you tell it is very important to how it determines, you know, how much you need in your cluster, which ultimately affects how much you're spending for your application inside of your cluster, right? So you might be thinking, okay, well, great. So now I know I might be overspending in my cluster. My cluster may not be efficient in that way. How do I even get to know that? Like, how do I get that information? So, one of the first things that — am I sharing my screen? Am I on my terminal now? Great, all right. So one of the first things that you might wanna do is just get an overall look at what your cluster looks like in terms of utilization, or in terms of what you're requesting. We like to use an open source tool called kube-capacity. It was actually written by Rob Scott, who used to be a Fairwinds employee. So, you know, we're trying to keep it in the family. And kube-capacity is really neat because it essentially munges kubectl top and kubectl describe together and gives you some good information. So I've already, you know, obviously downloaded it. So if I just run kube-capacity, what it's going to show me is a sort of high-level view of the resources in my cluster. The top line is going to show me how much of the CPU and memory my cluster has that it can give me. This is not like allocatable or anything, it's just like in general, right? How much of that am I asking for? So the total there is basically the amount available on that node in total. At the top, it's across the whole cluster. Across the whole cluster, but then each node. And then each node, yep, yep. So you can see across the whole cluster, we are requesting, with all the things that we have in there, roughly about 84% of the CPU that that cluster has, right? That seems good, right?
That seems, yeah. I mean, that seems like what you want. You definitely want to get as close as possible while leaving a little bit of headroom, right? For things like spikes and stuff like that. And so anyone looking at this would be like, oh yeah, that looks about right, right? But if you add — and this is the useful part here — this little flag, --util, it will add another two columns that will show you what you're actually using, right? So you can request a certain amount, right? And so the kube-scheduler will say, okay, this is the amount this person is telling me they need at a minimum to start up their workload. So I'm gonna make sure I put them on a node that has that amount. And if there is no node that has that amount, I'm gonna put this thing in Pending, and the cluster autoscaler is gonna see that and is gonna pop up another node to accommodate it, right? But if you're not actually using the amount that you're requesting, then you're essentially over-provisioning your cluster, right? Reserving that space, but not actually utilizing it. Not actually utilizing it, right? And nothing else — like the kube-scheduler isn't gonna put anything else on that node that it thinks would infringe upon that. It's like, mine. Yeah, I need that. So, this is a sandbox cluster that we're running here at Fairwinds. And so you can see, just in general, from that top line across our entire cluster, we're requesting 84%, but we're actually only using, at this point in time, 4%. This is a point-in-time, you know, representation, right? So it's clear that there's some room here for improvement. Memory requests are also a little off, not as wildly skewed as the CPU though, right? So we're like, okay, we're gonna concentrate on the CPU requests.
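For reference, the kube-capacity workflow described here looks roughly like the following — the flag that adds the utilization columns is `--util`, and installing via the krew plugin manager (where the plugin is named resource-capacity) is one option among several; treat the exact invocations as a sketch based on the project's README:

```shell
# One install option: kube-capacity as a kubectl plugin via krew
# (standalone binaries are also published on the project's releases page).
kubectl krew install resource-capacity

# High-level view: CPU/memory requests and limits versus total capacity,
# cluster-wide on the top line and then broken down per node.
kubectl resource-capacity

# Adding --util also shows actual utilization next to requests, which is
# what exposes the "requesting 84%, using 4%" gap from the demo.
kubectl resource-capacity --util
```

If you downloaded the standalone binary instead, the same flags apply to the `kube-capacity` command directly.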
So, what do I have to do now? Do I have to go and look at some graphs — pull up Grafana or something and look at a bunch of graphs and try and figure out an average on my own and things like that? This is where Goldilocks comes in. So Goldilocks is an open-source tool that Fairwinds created that uses another tool that is also open source, the vertical pod autoscaler, right? The vertical pod autoscaler lives in your cluster and essentially makes recommendations, and can also manually or dynamically change your workloads. We don't do that with Goldilocks; we just use the recommender. So I am gonna go over to Goldilocks and just show you — so we have really great documentation. I'm very proud of our documentation. And so here, Goldilocks — the installation is very simple. As you can see, you need the vertical pod autoscaler; you can install that separately, or, if you install Goldilocks using our chart, which we recommend you do, you can enable a sub-chart, which we also maintain, for VPA. And so you can install Goldilocks in your cluster, and it will help you visualize, and give you recommendations using data from the VPA recommender, to help you set your resource requests and limits in a way that is a little closer to what you're actually doing. Which will then have a domino effect of allowing you to schedule more pods on fewer nodes, which will help your costs, right? So let's look at Goldilocks in action here. So let's see. Did you have anything you wanted to put in — I've been just running off at the mouth, Andy, is there anything that you wanna put in before I keep going? No, I have nothing to add, this is all great. All right. So the way that we've installed it — installed it, yes. The way we've installed Goldilocks for our demo: we use Argo CD in our environment.
So we do this thing where we use Reckoner, which is another open source tool that Fairwinds maintains that helps you manage multiple Helm charts in one file, right? So we have this thing called the course.yaml, where we point to a bunch of Helm charts and include some values in there. And so this is what it looks like, the way we've done it with Argo, right? So we've got the Goldilocks chart, we've enabled the VPA — which again, if you did not enable this, you would need to install it separately. And we've also pointed the VPA to an existing Prometheus installation that we have in our cluster, using the Prometheus stack chart. Is it possible to zoom in a bit? Oh, let's see if I can, yeah. How's that? Better, yeah, thanks. Yeah, so we have Prometheus running in the cluster, which means that right off the bat, the VPA will have access to historical data that Prometheus has, depending on what you've set for retention and things like that in Prometheus. You can see we are also setting resource requests here. And we've enabled on-by-default in the controller and the dashboard. If you did not do this, essentially you would want to go in and tell Goldilocks which workloads you want it to do the monitoring for. And I'll show you that when we look at the Helm command. So anyway, this is the rest of that. And this is how we deployed that into the cluster that we're working with here, right? We just template it, yeah. Sorry — so what you're showing here is basically the equivalent of a values file that you would pass to Helm, right? Underneath the values heading here. Just in case our listeners aren't familiar with Reckoner, which I imagine they're not, because it's not a super popular tool. So. It should be, right? But probably, you know, a lot of you use Helm, and this is the equivalent of essentially that, right? So, let me actually do this. So you see at the top, right?
So this is the equivalent of that. This is a helm upgrade --install goldilocks. We're gonna put it in its own namespace and pass in the flag to create the namespace if it doesn't exist already. We're pointing to our Goldilocks chart in the Fairwinds stable chart repo. And then I'm pointing it to a values file, which you may have seen on the side here, right? And this is exactly the same thing as what you saw in the course.yaml. This is the values file that you pass in directly. Again, the same enabling of VPA — so we're gonna use our sub-chart. We're gonna disable the updater, because we don't want VPA making actual changes to the workloads, right? We are pointing it to an in-cluster Prometheus. We're enabling the dashboard, you know? So pretty much that same deal, just a different way. So these are just two different ways that you can install Goldilocks. They're both using Helm under the hood, just different approaches. So you install Goldilocks. If you install it this way, you notice the on-by-default flags are not in this values.yaml file. So if you install it this way and don't specify those flags, what you wind up having to do is label the namespaces that you want Goldilocks to watch — the ones containing the workloads that you want Goldilocks to watch. So you would just do a — let's see if I have this. Yeah, so for example, here, this is an example command for enabling Goldilocks on the karpenter namespace, meaning that Goldilocks will then create VPAs for the workloads that are in that namespace, and then we'll use that VPA information to give you recommendations — or to show you recommendations — for how to optimize your resource requests and limits for those workloads. Yeah. When you say workloads, what kind of workloads does Goldilocks support? Which kinds will it work with? I believe, if I understand what you're asking, it will work with most top-level standard controllers. So like DaemonSets, StatefulSets.
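If you're not using Reckoner, the plain-Helm version of what's being described looks roughly like this — the chart repo URL and the namespace label key come from the Goldilocks README, while the exact values flags are a sketch of the setup from the demo:

```shell
# Add the Fairwinds stable chart repo and install Goldilocks with the
# bundled VPA sub-chart (Goldilocks only uses the VPA recommender; the
# updater is left disabled so nothing is changed in-place).
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm upgrade --install goldilocks fairwinds-stable/goldilocks \
  --namespace goldilocks --create-namespace \
  --set vpa.enabled=true

# Without on-by-default, you opt namespaces in one at a time: this label
# tells Goldilocks to create VPAs for the workloads in that namespace.
kubectl label namespace karpenter goldilocks.fairwinds.com/enabled=true
```

Pointing the recommender at an existing Prometheus, as in the demo, is additional values configuration on top of this.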
DaemonSets, StatefulSets, Deployments — kind of your standard. Okay. Yeah, yeah. So, as you saw, I've already installed Goldilocks, and let's just go take a quick look at what it looks like installed here. All right, so we've got the controller, we've got the dashboard, and we've got the VPA recommender. And again, we've set it to on-by-default for all the namespaces. So there should be VPAs for like everything in this cluster at this point. So yeah, as you see, there's VPAs for everything. Mode "Off" means, again, that it's not actually going to be doing any updating of the workloads — not going to patch the resource requests and limits dynamically. So you've got Goldilocks running in your cluster, it set up VPAs for everything. And now, let's see. And it's probably important to note just briefly that, by default, when you enable VPA via Goldilocks, we don't even install the other components of the VPA. Yeah. We only install the recommender. So it's safe. So this is what our dashboard looks like, right? So we set up Goldilocks, we set an ingress in front of it, we're hitting it here. And these are all the namespaces that are in our cluster. And so these are all the namespaces that Goldilocks has, as we saw, created the VPAs for the workloads that live in them, right? So we can click on any of these. I click on karpenter, because, as you saw, I'd actually manually labeled karpenter in a previous run-through. Here it shows you the details about Karpenter. So the namespace, the top-level controller — a Deployment in this case — and here's the container. And it shows you two different recommendations for how to set your requests and limits. There's a guaranteed quality of service and a burstable quality of service. And we handily define these for you below. But the TL;DR is that guaranteed QoS is generally where you set both your requests and limits to the same thing.
And that affects how your pods get evicted from a node — or don't get evicted, in the case of the guaranteed QoS. So it helps set a hierarchy for what happens when pods need to be evicted because of some resource contention. Burstable QoS is exactly what it sounds like, right? It just means that you set the two differently, so that you're able to burst for a short amount of time — also depending, of course, on what else is running on that node — but you're able to burst above to handle short spikes of traffic and things like that. So yeah. Oh, I was just gonna say, the other important thing to note is that although the limit is there, the request is what the scheduler uses, and it's also what an HPA would use if you're scaling on CPU or memory. Yeah, that's a good point to make. So yeah. So if you click here, we provide a nice little YAML block for you that you can use to update your workload in whichever fashion you typically update your workloads. In our case, again, because we're using GitOps, we would take this and put it into our course.yaml file and run a reckoner plot on it and change it. How are you doing on time? We got plenty, go ahead. Okay. All right. So now, let's see some GitOps. She's magic. Where am I, actually? My notes said, before you did this, to go into the right directory, so you wouldn't have to do this. And I ignored my notes. All right. So here's our course.yaml. Actually, I think this is my course.yaml here anyway. All right, so we were looking for Karpenter. All right. Let's look at Karpenter. And so here are — wait a second. You know what I mean? Well, while I try to figure out where my screens are. All right, so we're gonna change this to 35 for CPU and 226 for memory. According to this, I can actually just do that. That was exactly the thing I said I could do. Let me go here and copy this. Now here's the question. Yeah. I'm sure there's something I could turn off to make it not do that. It's weirdly ignoring my indentation.
So I think requests, blah, blah, blah. Yes. All right, so now we have set the requests and limits for Karpenter to be the same. So, right? So we have this up here. Let's go over there. And I like to switch everywhere that I can possibly do it. Now we're going to update resource requests and limits. Ta-da. Now we're going to — oh, guess what I did that I wasn't supposed to do. Mm-hmm. What did I do? Do you know? I did. Did you forget to pull? I forgot to even create a branch. I'm on master. Whoopsies. Don't do that. You are an admin, and I would probably forgive you for that. No, but I'm trying not to show the wrong behavior here. All right. I mean, that is sort of the point of GitOps, that we can review our code before it gets deployed to our cluster, right? That is true. And then I have done something very goofy here. Undo last commit, and let me undo my staged change. I hate this. You did say in the beginning that we're going to break things and other things. So we're just following the instructions from the beginning of the live stream. Perfect. Good instruction-following here. Actually, what we should do before we do that — which is what I was doing before — is a reckoner template. And on Goldilocks — we're templating Karpenter. Am I in the right place? Okay. So here we're templating out the actual manifest from the Helm chart into a directory, so that our diff, when we make the pull request, actually reflects the full change set that's going into our cluster. Whereas if we had just specified a Helm release going into the cluster, if you change the version of a chart, you might not know the underlying changes happening. So this is why we have this process. Yes, and this is why we have to undo this last one. Reckoner. Remember, I had to manually download the newer version of Reckoner.
So I have to run this command again, but this time I have to do it using my Reckoner, which is in my downloads folder. Okay, live coding. All right, that ought to do it. Let's see if we have the modified Karpenter deployment. Oh, right, because we only did that one. Okay, great. All right, so now we've got that. You would appreciate this, Stevie. Somebody on LinkedIn said, I reckon it's all going fine. I love it, I love it. All right, so we're going to do that. We're going to do a git commit. I'm going to say, update resource reqs — not rex — requests and limits. Yes, a demo branch updating the Karpenter resources, and we're going to create a PR. Yes, fine, and I'm going to submit it. So I just submitted the PR for this. Andy is going to go and hopefully approve it, not close it without merging. Okay, right, yay, there it is. Andy approves. Thanks, man, you're the best. No problem. All right. It's really all we do all day, is just approve each other's PRs, right? Back and forth, back and forth. Our whole job. What we should see in this cluster once Argo has reconciled — should I go to Argo, or do you want me to just stay where I am? Up to you. I'll stay where I am. You let me know when it's reconciled. Let's go into the Karpenter namespace. That's presumably updating our stuff. I'm still waiting for it to pick up the git change. Yep, all right. So yeah, so we're waiting for Argo to pick up the change and redeploy with our new resource settings. And then hopefully what we'll see in Goldilocks is that it'll probably still show the burstable QoS, but it should be good on our guaranteed QoS, yeah? Argo CD just updated the deployment. Oh, yep, ready, one of two. Delicious, right? So if we look at the deployment for Karpenter over here, we can see we changed our resource requests and limits there. And now let's go over to Goldilocks, and there we go. All right.
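The change that just went through the PR boils down to a resources block like this on the Karpenter container — request and limit set to the same values, which is what gets you the guaranteed QoS class (the units mirror the recommendation numbers from the demo):

```yaml
resources:
  requests:
    cpu: 35m        # 35 millicores, per the Goldilocks recommendation
    memory: 226Mi
  limits:
    cpu: 35m        # limits match requests, so the pod is Guaranteed QoS
    memory: 226Mi
```

A burstable recommendation would instead set limits higher than requests, leaving room for short spikes.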
So we did what it told us to do, we did what our computer overlords demanded, and they are pleased. So yeah, so this is probably a bit of a slower workflow, but an idea of a workflow that you could have, right? You check your cluster first to get an overall view of your efficiency, in terms of cost, in terms of utilization. And then use a tool like Goldilocks to get an idea of how you can adjust your resources to make the kube-scheduler — or help the kube-scheduler, I should say — schedule your workloads in a more cost-efficient way. But I'm going to turn it over to Andy now. He's going to share his screen, because he's going to talk to you about the Goldilocks cost feature, the direct Goldilocks cost feature. And before we go over there, we had a comment — or a bit of a question that's almost a comment — from the audience. So there's Mark saying, no Argo CD webhook used? I'm in shock. Good comment. I mean, it would have picked it up without me hitting refresh — it was just a little too slow for me. But we never did get around to enabling the webhook. It's a sandbox, so it doesn't get as much love as we might hope it would. So yeah, Stevie, thank you for showing us Goldilocks and how it can help you find your "just right." I'm actually looking at the Goldilocks screen just like you were, so, seamless transition here. So yeah, you may be asking, hey, I came here to learn about cost, and you've just been talking about CPU requests and limits the whole time. And I always love to reiterate to folks — we get a lot of questions from our customers that are like, can you recommend what node size to use? Can you recommend what node size to use? Or, can you recommend tools to help us save costs? And really, the thing that drives all of the scaling and all of the bin packing and scheduling in Kubernetes is this setting right here. It's resource requests.
So I will probably be saying this for another five years. I've been saying it for the last five years, but I think this is probably the most important thing that we can all do to enable stability and control in our clusters. So that being said, the tie between these numbers right here, these 35 millicores and this 226 megabytes of memory to cost is a little bit obscure, right? Like it is a portion of a node running in a cloud provider that's billing me by the hour for that node. That node has a certain amount of CPU and memory available. There's a certain amount of overhead that's taken up by Kubernetes on each node. So how do we really understand what this is costing us? And so in our commercial product, we have a ton of functionality around this, but we wanted to bring some of it back to the open source. And so here we wanted to take the data that we have from the cloud providers around the cost of various bits of infrastructure and expose that here in Goldilocks. And so you may have seen this banner up at the top for Goldilocks. And basically what this enables you to do is get some cost information right here in the Goldilocks dashboard. So I'm gonna put my email address in and it's gonna send me an email with an API key. It's totally free API key. And we're gonna put that API key in and hit submit here. And a bunch of numbers are gonna start showing up. Well, actually, sorry, there's one more step here. We have to tell it what our infrastructure is. So we have AWS and GCP cost data in here. We also have the ability to say other. So if you're running on-prem or you're running in a different cloud provider or you just wanna put in your own numbers because you don't trust us, totally fine. You can hit other and put in the dollars per CPU hour and dollars per gigabyte hour. I'm gonna click on AWS because I know this cluster is running in AWS. 
And I'm gonna find our node size here, which is most likely — I don't actually remember, but it's most likely an m5.large, because that's kind of typically where we start with demo clusters. And so we have a rough estimate here. Well, we have an actual number from AWS as to the on-demand cost of a CPU hour and the on-demand cost of a gigabyte hour for this node type. So I'm gonna hit save. And then we're gonna start to see some numbers show up on the dashboard that weren't there before. So now I have an idea of how much this container is costing me per hour to run, based on the current settings. So obviously I could go punch all these numbers into an Excel spreadsheet and do all this myself; this seems a little bit more convenient to me. And then if we go look at, say, a workload that is over-provisioned, we can take a look here at this demo app and see the recommendation to lower our CPU and memory requests, because it is over-provisioned. And we can see roughly how much this is going to save us in our cluster. And we can also do this across all namespaces. If I go to the detail-all-namespaces section — perhaps, perhaps not, we shall see. Well, live demos again — but we'll be able to look through all of our recommendations all at once and see sort of the cost of applying those recommendations. So we can start to target the things that we think will save us the most money. And so I've actually tuned a lot of these already, but we shall see if we get some better numbers here. Stevie, were you gonna say something? No, I was just cursing the demo gods on your behalf. Oh, yes, yes. Yeah, so these numbers are all really small, because frankly, this cluster, as a sandbox, is not doing a ton. But we could look through all of these and see, okay, this is probably our most expensive workload, let's go ahead and see if we can reduce the cost of that. And so we're starting to enable some of that functionality here in Goldilocks — lots of opportunity for improvement.
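The per-container numbers on the dashboard come from simple arithmetic: each request multiplied by a per-unit hourly price, summed across CPU and memory. A rough sketch of that math — the prices below are made-up placeholders for illustration, not actual AWS rates:

```shell
cpu_vcpu=0.035       # 35m CPU request, expressed as a fraction of a vCPU
mem_gib=0.221        # 226Mi memory request, roughly 0.221 GiB
price_cpu=0.024      # assumed $/vCPU-hour (placeholder, not an AWS quote)
price_mem=0.0053     # assumed $/GiB-hour (placeholder, not an AWS quote)

# hourly cost = cpu_request * cpu_price + memory_request * memory_price
awk -v c="$cpu_vcpu" -v m="$mem_gib" -v pc="$price_cpu" -v pm="$price_mem" \
  'BEGIN { printf "$%.6f per hour\n", c*pc + m*pm }'
# prints "$0.002011 per hour"
```

Multiply by roughly 730 hours in a month to see why small per-hour differences add up across hundreds of pods.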
We're happy to accept enhancement requests on the repo, but that's what I'm here to share. So what else did I not cover? So when we look at this — and this is, you know, frankly, a new feature for me, so this is cool to be seeing in action — what we're talking about here is that if you change your workload to match the Goldilocks recommendation, we will be able to see, almost in real time, I guess, how much you'll save if you follow the recommendations. So just like when we changed to the recommended settings for resources and we came back and Goldilocks was like, yay — we would hope that we'd see the numbers next to the QoS and at the top under the container match. Like, we'd hope to see that container number decrease to what Goldilocks said it could get to, right? Yes, yeah. So if we had looked at Karpenter beforehand and had costs enabled, it would have said something slightly higher than, you know, 18 cents an hour — I don't think I'm getting that exactly right — but something greater than that cost, and then we reduced it. Now, it's important to note that this is, A, a recommendation. So these numbers are not set in stone. They're recommended by the VPA. They're dependent upon you having actual usage. They're, you know, averages across time. We are hooking it up to Prometheus, so we're getting a little bit more accurate data, but it is still a starting point. So, you know, think about the needs of your application when you're going to apply these. And then it's also important to note that you may see increases, because sometimes the VPA is going to recommend, hey, you're always right at the top of your CPU limit, we think you need to bump this up. And that will increase cost. You know, it's as much about efficiency as it is about cost.
Higher levels of efficiency get you, ideally, less cost, but that may not be the case in all environments. I will say it's the case in like 90% of the environments I've looked at, but, you know, it's much easier to over-provision Kubernetes than it is to under-provision, I think. Yeah, yeah, I agree. And I think of Goldilocks, you know, the same way that I think of Google Maps, in a way, right? Like, Google Maps will tell you where to go, but you know a bit more about what's on the ground. And so very clearly, if you see a barrier in front of you, like a police blockade, and Google Maps is like, continue straight — you know not to continue straight. And I feel like using Goldilocks has some of the same things, right? Like, if it's like, yeah, take this workload and decrease it, and you're like, okay, but I know that my workload has some ridiculous spike that maybe got evened out over the aggregation in Prometheus or something like that. Like, you know, it's a little common sense along with the tooling, because you know your app, right? So it's a combination of those things. Yes, strongly agree. And I think that's a great metaphor there, Google Maps. You know, maybe you know that road's closed and they haven't figured it out yet. Right. Don't drive off that cliff. There's no road there. But Google told me! Do we have any — I don't see any questions coming in. People, please drop them in. Oh, there it is. There it is. When you ask for questions, they will come. So I think now is the perfect time to ask questions. And we can see Mark has already kicked it off. So they ask: regarding costs for burstable configurations, does Goldilocks estimate cost according to the average use of the workload, or based on the resource limits? I should know the answer to that. And I don't.
I believe we would actually calculate based on the request, because it's definitely not based on usage. I can tell you that because Goldilocks itself doesn't really have historical information about actual usage; that's mostly piped into the VPA, and then the VPA provides the recommendation to Goldilocks. So it wouldn't be based on that. My guess is that it would be the request, but frankly, I didn't actually write this piece of functionality, which is unusual, so I don't know exactly in this context. I'd have to go diving through the code to find that. But I can tell you it's definitely not usage. If anyone wants to reach out to you with this answer or similar questions later on, is there a place where they should reach out, like a Slack or social handle somewhere? Filing an issue on the Goldilocks repo is always a great place. We have a community Slack for all of our open source projects; that's a great place to get ahold of us. You can find a link to that in any one of our READMEs or in any one of our documentation pages, which incorporate the README as well, so there's always a join-the-Slack button there. And then I am personally in the Kubernetes and CNCF Slack as sudermanjr, and happy to respond there as well. So lots of places to get ahold of us. Perfect. And the audience, please ask your questions now. Now it's the Q&A moment, perfect. And while we see if anyone else is going to send anything in, I have a few questions. So if someone's super excited about these topics right now, are there any really good learn-more resources that you could share with our audience? Ooh, that's a good one. I mean, our documentation has a ton of information, that's an obvious one. We have a ton of content on the fairwinds.com website about cost and about our open source, lots of webinars and blog articles there that are authored by our engineers, and Stevie and I do a lot of webinars about this topic.
So you can find a ton of content there. Perfect. Everyone can head over there to learn more. And then we have a question from Milos. They ask: we use about six to ten instance types per cluster; from the demo, it seems that you can only set one? You're correct. Yeah, this is early, early-stage functionality for Goldilocks, so we only allow you to set one instance type for the cost data in Goldilocks. Definitely a potential enhancement there to expose more of that information in the dashboard. And then we also have the free tier of our product that allows you to actually incorporate your AWS billing data directly, so we get not only more instance types, but also your bill itself. So there are options there, but definitely open to suggestions on future functionality. But you are correct: only one in the current setup. Perfect. This actually connects well to my next question. You kind of teased a bit about future stuff, and this is, as you said, early in the feature's life cycle, but do you have a bit more information? What's next? What are you thinking for the future? That's a good question. There are a lot of possibilities and a lot of options going forward. There are a lot of different concerns that we have to balance on the open source side, being that we are also supported by a commercial entity, so we have to keep in mind the tie-ins between those two things. There are some potential changes around how we incorporate Prometheus metrics and how we use the VPA; those are more under-the-hood type things. As far as large feature enhancements for the dashboard and other things, we don't have anything planned at the moment, but we are always, always excited to accept community contributions as well as suggestions. And we'll take those into account as we find more time and resources to dedicate. That's always the trouble with open source. So. Great.
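For readers curious what a request-based, single-instance-type cost estimate might look like, here is a rough sketch. This is not Goldilocks' actual implementation; the pricing model, the instance figures, and the function name are all made up for illustration:

```python
# Hedged sketch: estimating a pod's hourly cost from its resource
# requests, assuming one node instance type (mirroring the current
# single-instance-type limitation discussed above). All prices and
# sizes here are hypothetical.

INSTANCE_CPU_CORES = 4        # e.g. a 4-vCPU node
INSTANCE_MEM_GIB = 16         # with 16 GiB of memory
INSTANCE_HOURLY_USD = 0.18    # hypothetical on-demand price

def pod_hourly_cost(cpu_request_cores: float, mem_request_gib: float) -> float:
    """Bill the pod for the larger fraction of the node it reserves.

    This is a request-based (not usage-based) model: the pod "costs"
    whatever slice of the instance its requests carve out, whether or
    not it actually uses it.
    """
    cpu_fraction = cpu_request_cores / INSTANCE_CPU_CORES
    mem_fraction = mem_request_gib / INSTANCE_MEM_GIB
    return max(cpu_fraction, mem_fraction) * INSTANCE_HOURLY_USD

# A pod requesting 1 core and 2 GiB reserves 25% of the CPU but only
# 12.5% of the memory, so it is billed for the 25% CPU slice.
print(round(pod_hourly_cost(1.0, 2.0), 4))  # 0.045
```

A model like this also makes it obvious why right-sizing requests (rather than chasing usage graphs) is what actually moves the bill.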
Now, if there are no new audience questions coming in, I'm going to sound the last call for questions. So if anyone is typing away and about to send a question in, send that question in and we'll get to it. But before we see if there's anything coming in, Andy or Stevie, do you have any final words, any reminders for people? Andy, I'm going to ask you to explain, because I'm sure there are people on this call who are curious or thought about it but figured it wasn't important to ask: our stuff is named with space themes. Polaris, Nova... Goldilocks. How does that fit into our space theme? Ah, great question, great question. So there is a space-related term called the Goldilocks zone, which is the distance from the star in a solar system at which a planet can be habitable for humans. The Earth is in the Goldilocks zone, and when we're looking for planets that might sustain life, that's the term used to describe the area around the star that is not so hot that the planet burns up and not so far out that it's too cold. So, yep. Cool, yeah. Perfect. And while that good question was asked and answered, we had one more question from the audience. If you have any more, keep sending them in. Does Goldilocks provide cost for PVs or PVCs also? Not at the moment, no. We're focused mostly on efficiency of workloads, which was the initial goal of Goldilocks, and we're continuing on that theme at the moment. So. Great, sounds good. Thanks for the questions, folks. Keep them coming if you have them. Yeah, final call for questions right now. Yeah, anything else that you wanted to finish with before we... No, I think I have said my personal mantra about 10 times today, so I don't need to repeat that. And we really appreciate you having us on the show again. It's always a pleasure, always a lot of fun. But yeah, so then let's start wrapping up.
And if anyone has any questions later, we shared earlier how you can reach out to Andy and Stevie, so you can go ahead and ask more questions there. But thank you everyone for joining the latest episode of Cloud Native Live. It was great to have a session about cloud cost monitoring today, and we really loved the interaction and questions from the audience. We bring you the code behind Cloud Native every Wednesday, and in the coming weeks we have more great sessions coming up. So tune in then. Thank you for joining us today and see you all next week.