Hello, and welcome to Kubernetes misconfigurations: what they are and how to identify them with Fairwinds. If you're here for a different talk, this isn't it. This is what we're doing, and I'm excited to have you. Ivan, do you want to say your greeting? Sure. We were joking about this when we were getting started: good morning, good afternoon, good evening, wherever you are. Thank you for joining this webinar. Thanks so much. Okay, let's dive in. Before we begin, let's introduce ourselves. Robert, why don't you start? Yeah. My name is Robert Brennan. I'm VP of product development here at Fairwinds. I head up all of our engineering efforts on Fairwinds Insights, which is our platform for detecting Kubernetes misconfigurations from a security perspective, a reliability perspective, and an efficiency perspective. Great. And you also have your hands in the open source projects, which we're going to be talking a lot about today. Ivan, do you want to introduce yourself? Yeah. My name is Ivan Fetch. I'm a software developer here at Fairwinds working on the team behind Insights. All right. And my name is Kendall Miller. I'm a technical evangelist here at Fairwinds and excited to be with y'all. We're going to be diving into a number of different things related to misconfigurations, but first, let me tell you why we're the ones talking about it. This is not a hard Fairwinds pitch, but to give you some background: Fairwinds has been in the Kubernetes space for a long time. We've been in business over seven years, seven and a half-ish, and we've been working with Kubernetes almost the entirety of that time, helping organizations use Kubernetes correctly and get over the hurdles of what is, for most people, a new and complicated paradigm. Today we develop open source software as well as a SaaS product that specifically addresses misconfigurations. Organizations out there are worried: when I move to Kubernetes, am I going to mess everything up? How do I do this correctly? It's a new paradigm; I know how to think about the old ones, so how do I get used to this one? That's what Fairwinds does. We build open source in that space, which we'll talk a lot about today, and we also build a SaaS platform that addresses this. Misconfigurations are literally the water we swim in, it's what we do all day long at Fairwinds, and that's why we're addressing this topic today. So proper configuration counts, and let's dive into some of the why. Before we get into what's different, let's talk about some of the common misconfigurations that we see. One of the first things here: only 35% of organizations have correctly configured liveness and readiness probes on 90% of their workloads. We can dig into this specifically in just a second, but I want to begin with the fact that huge percentages of organizations leave these things off, partly because Kubernetes is just a fundamentally different paradigm. Liveness and readiness probes in Kubernetes are a different way of thinking about things than before, when there was a time in your career when you could tell a machine went down because it was in your closet in the server room in the back. You might have had the equivalent of liveness and readiness probes for that kind of thing too, but this is a whole new paradigm.
So let's start with what's fundamentally different about Kubernetes, and then let's get into the specifics of this particular common misconfiguration. Ivan, you want to kick us off? Why is Kubernetes so fundamentally different? I think what relates to this is that with Kubernetes, things are typically a lot more ephemeral than in your server-room, back-of-the-closet, under-the-desk scenario. Even though we've always had some kind of monitoring, some kind of thing that in the Kubernetes world we're calling liveness and readiness probes, the appreciable difference here is that containers are coming and going and scaling and moving around a lot more than applications used to. That exacerbates, makes much more important, the proper configuration of all of these things. And in the Kubernetes world, aren't we declaring the state that we want, in a way that's different from previous models where we actually wrote code to go create the state we wanted? Isn't one of the big fundamental shifts that we just define what the end state is and let Kubernetes figure out how to get us there, rather than having to say, hey, go out and build me a box, build it this way, scan it this way, grow it that way, scale it when you hit this threshold? With Kubernetes, we set some parameters and it does everything for us, right? We are, and while it often happens very quickly and feels very similar to that scripted or configuration-managed, more imperative way of implementing something, it is a declarative control loop in Kubernetes that keeps things as close to the desired state as it's possible for Kubernetes to do. That's what helps things heal when something breaks in your app or you lose a Kubernetes node, et cetera. Kubernetes sees that there's now a difference between desired and actual state and works to fix it. Great. With that in mind, I like to use the analogy that moving to Kubernetes is like moving from Windows to Linux. It's not that Linux is really, really hard to use; it's that if you've never used Linux, it's a whole new paradigm. Once you get used to it, it will start to feel normal. But if all you've ever used is the other thing, this new thing feels very different. That's part of why people are so afraid of misconfigurations: they just know, I'm going to mess things up, so how do I avoid that? So now, Ivan, you touched on this almost in passing, but let's talk specifically about liveness and readiness probes. How do they relate to what you were just describing in the new world of Kubernetes? Yeah, so liveness and readiness probes are two kinds of connections that Kubernetes can make to your application. It uses them to restart containers or pods that have hung or stopped responding, and in the case of readiness probes, Kubernetes will also stop sending traffic to a pod that is determined not to be healthy. That helps avoid things like bad gateway, HTTP 502, errors that might come from an upstream load balancer or an ingress in Kubernetes routing to application pods that aren't healthy. When we're defining these, we also need something for Kubernetes to talk to on the application, and if your application doesn't listen on an HTTP port, for example, you can also exec a command in the container.
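To make that concrete, here is a minimal sketch of both probes on a Deployment. The /healthz and /ready endpoints, port 8080, and the timing values are illustrative assumptions, not anything the speakers specified; use whatever lightweight endpoints and intervals fit your application.

```yaml
# Hypothetical deployment snippet showing both probe types.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: example.com/example-api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:           # restart the container if this starts failing
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:          # stop routing traffic to the pod while this fails
            httpGet:
              path: /ready
              port: 8080
            periodSeconds: 5
```

The liveness probe tells the kubelet when a container should be restarted; the readiness probe tells the Service whether the pod should receive traffic.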
One important aspect of this is that whatever you are querying for these probes should be relatively efficient. They get queried pretty often, potentially once a second for a single pod, and you're probably running, hopefully running, more than one pod for your application. So if your probe does a bunch of analysis inside your application, or a database query, to find out if the application is healthy, you don't want the probe itself to cause an outage because it's being queried so often. Yeah. So this falls into the bucket of new-paradigm configurations that people get wrong. Correctly configured, you are going to have liveness and readiness probes on more than 90% of your workloads. There might be a situation where you don't care, but correctly configured, by our definition here, means the vast majority of your workloads should probably have liveness and readiness probes. If the workload matters, you want to know that it's healthy. Anything I'm missing there, Robert, before I move on? No, I would just say the big thing to keep in mind is that the reason a lot of organizations have not configured these is that they are optional. Your development teams will be able to successfully, for some value of successful, deploy a workload without these things set. But that workload is likely to experience downtime, and it's likely to have issues that don't get caught. So you really do need some kind of proactive approach to check for these things and make sure every individual team is doing it. Yeah. We have a couple more slides like this and we're going to get into more specifics in a little bit, but the reason we have these statistics in here is to show you that organizations struggle with this. Only 42% of organizations today have managed to lock down most of their workloads, and 54% are leaving over half of their workloads open to privilege escalation, and thus security holes. These statistics show us that people struggle with configuration in Kubernetes because it's so different, and it's vital for your organization, if you're using Kubernetes, to get it right. That sounds obvious, but it's the same way I've seen people move to the cloud and never implement things like auto-scaling, when the whole promise of the cloud is that you can scale up and scale down. If you're using the cloud and you're not using something as simple as auto-scaling, okay, simple is hand-wavy, it can be complicated, but if you're not using one of the greatest promises of the cloud, you're probably getting it wrong. Similarly, Kubernetes is different because it's cloud native at scale in whatever cloud you're running it in, and it's very easy to mess up these common things. Kubernetes is a different world, and you can get it right; we're going to show you tools to help. Okay, I think I've belabored that point long enough. Are we ready to dive into some specifics around security, reliability, and cost? Yeah, let's do it. Totally. Okay, here we go: security. There are a number of common misconfigurations that we see in security.
Let's talk about over-permissioned containers, and then we'll dig into the other broader things we see. We'll go deep on one specific issue in each of these areas and then talk about the broader common kinds of things. Robert, you want to start with over-permissioned containers? What is an over-permissioned container, what's the problem with it, and go from there? Yeah, so again, the defaults in Kubernetes are not always the most secure way to run a container. There are a lot of things that Kubernetes will allow you to do by default that you don't necessarily need to do. For instance, running a container as root: Kubernetes by default will allow a container to run as root, but you can specify in your configuration, I never want this container to run as root. That's a great way to tighten the security of that workload, because for the most part it probably doesn't need to run as root unless it's doing something very specific or is designed in a very specific way to need root access. Most likely you can run your application perfectly fine without root. The same goes for several other configuration options that are available in the Kubernetes security context: whether that container runs as privileged, what capabilities have been added to that container. These are all things that a workload that is misbehaving, or that gets compromised by an attacker, could use to escalate, to get access to the underlying node, to get extra permissions on that node, and potentially spread the attack throughout the cluster instead of being restricted to that one single container. So it's super important to tighten the security of a workload as much as possible, to make sure it's adhering to the principle of least privilege and doesn't have permissions to do things it doesn't need to do. Yeah, and I've said before in other webinars, I feel like the average response from a person who's not tuned into this is to think, nobody's going to break out of a container and get access to other things, right? It sounds far-fetched, except in the world we live in, we know of lots of people who make a career out of breaking out of containers. People say, hey, I escaped a container in this situation, I escaped a container in that situation. One of the ones I saw was, I think, escaping a container running on a cluster on a mainframe or something, which was more for street cred, because that's not something we're going to run into a whole lot in regular life. Ivan, anything to add specifically on over-permissioned containers? Just a bit of an underscore. Robert covered it all, but this is harder, I think, to put the time and effort into implementing correctly than, for example, the thing we just talked about, readiness and liveness probes. If you want to limit the Linux capabilities, which are essentially the kernel calls your container is allowed to make, it takes effort to minimize those and then run your application through its typical QA testing to make sure everything your app needs to do is still possible while it's running. So this is an easy one to ignore, because it's hard to do and you have to set that time aside.
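As a rough illustration of the tightening Robert and Ivan describe, here is a hedged sketch of a container-level securityContext. The pod and image names are placeholders, and dropping ALL capabilities is a starting point you would only relax after QA confirms the application still works.

```yaml
# Hypothetical pod with a tightened securityContext; adjust to what your app actually needs.
apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  containers:
    - name: example-app
      image: example.com/example-app:1.0.0
      securityContext:
        runAsNonRoot: true               # refuse to start if the image would run as root
        allowPrivilegeEscalation: false  # block setuid-style privilege escalation
        privileged: false                # never grant full access to the host
        readOnlyRootFilesystem: true     # make the container filesystem read-only
        capabilities:
          drop:
            - ALL                        # drop every Linux capability; add back only what QA proves is needed
```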
So this is a bigger gap for sure that we see, and when you take the time to do the first few, like don't run as root, have your file system be read-only in your container, those types of things, that's an awesome start. Please start there, and then move on to the other stuff when you can make time for it. Well, let's spend a minute talking about other security issues that we commonly see made in Kubernetes. This one isn't unique to Kubernetes, but it's something people forget about. There's some amount of, I can deploy a workload with a known vulnerability, it's in a container, it's going to be fine, it's not going to have access to anything. Think of the famous Log4j example from recently. First of all, you want to stop known CVEs running in your containers from being deployed into a production environment, period. And if that does happen but you've at least kept your container pretty locked down, that limits some of the blast radius it can have. So when we think about security, there are the big-picture things, and you work down to the granular level. Everything in security and operations is a trade-off, but there are huge mistakes to be made, and this may be more complicated to do, but it really limits the blast radius of an attack if you do it well. Let's just broadly talk about some of the common security issues that we see. Over-permissioned containers we've mentioned, and I just mentioned deploying known CVEs or vulnerabilities into your cluster. What are some other common misconfigurations that we see regularly? Still broadly in this space: allowing access to things like hostPath mounts from your container. Now your container is mounting a directory that's on the node itself, and depending on what you allow access to, that can be a risk. Similarly, access to the host network: it's pretty common that we see containers being allowed to access the host network instead of the isolated network namespace in the container. That means everything the host can see on the network, traffic-wise, that container can see, which can be handy for certain things that need to run that way, but it's also a risk. Similarly, host IPC, having access to that inter-process communication, or host PID, means the container can see processes that are running on the host, not just the processes running in the container. So even if the container is not running as root, that gives it some visibility into what else is running on the host, which is intel for an attacker, for example. Yeah, Ivan brings up a really good point there. The reason these security options are available is that sometimes they are necessary. If you're running a workload that does, say, network telemetry, it probably does need access to that host network to be able to do its job. That's an example where you would want to create an exemption for some of these rules and say, okay, this particular workload does get access to this particular security feature. The issue is that the vast majority of workloads, especially the ones you're building internally at your company, don't need access to these things. They're probably just API servers or serving a website, something like that.
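For contrast, here is roughly what one of those rare, intentionally privileged workloads, say a node-level telemetry agent, might look like. The agent name and image are hypothetical; the point is that these are the fields worth flagging in review, since they all default to off and most application workloads should never set them.

```yaml
# Hypothetical node agent using host-level access; most workloads should never need these.
apiVersion: v1
kind: Pod
metadata:
  name: node-agent
spec:
  hostNetwork: true        # shares the node's network namespace (sees all node traffic)
  hostPID: true            # can see every process running on the node
  hostIPC: true            # shares the node's inter-process communication namespace
  containers:
    - name: node-agent
      image: example.com/node-agent:1.0.0
      volumeMounts:
        - name: host-root
          mountPath: /host
          readOnly: true
  volumes:
    - name: host-root
      hostPath:            # mounts a directory from the node itself into the container
        path: /
```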
Most of them don't need deep telemetry into the network operations going on inside the cluster. So I think it's important to note that sometimes these options are appropriate, but only in rare and isolated cases. Yeah. So, anything else to add in the broad bucket of security misconfigurations we see commonly, Robert? What are the other ones that come to mind? The other broad category that we haven't really mentioned is the control plane level. That includes things like, if you're managing your own control plane and you're not on EKS or GKE, making sure it's not a publicly available control plane so that anybody on the internet can just log in and start messing with your Kubernetes cluster, making sure that it's using SSL, that etcd is encrypted. There's a whole bunch of stuff you need to do to have a really solid, secure control plane environment. The easiest thing is to go with a managed service like EKS or GKE, where they manage the control plane for you, and they do it well. So go with one of those providers if you can, but if you're managing it yourself, there's a lot you need to do to make sure you're doing it right. Also, analogous to the control plane, a lot comes up around role-based access control: making sure you're using the principle of least privilege so that different personas at your company have the right level of access to your cluster. Maybe the SREs need to be able to delete and modify things on the fly, but the developers only need read-only access, that kind of thing. And the individual workloads in your cluster, the service accounts that are doing automated operations, those too should adhere to the principle of least privilege. They should only get permissions to do the things they need to do in order to get their job done. It's worth making a plug there for one of our open source projects called RBAC Manager. RBAC, the role-based access control Robert just mentioned; RBAC Manager is for managing it, and part of the reason that project exists is we see people struggle with it more than we would like. So we've tried to ease some of that process. There are a lot of ways to manage RBAC, but one of them is our open source project, so check out RBAC Manager if that's something you're not feeling confident in today. It will make it a little bit easier. Ivan, anything to add before I move on? No, that's awesome. On to the next one. So I want to wrap up security with, how do I explain this? Reliability costs you something immediately. Oh, this is slow, this isn't up, a user tried to get on my website and it cost me. Think of how much money it has to cost Amazon if they have two minutes of downtime; that's why you basically never see Amazon's website down, because it's probably millions of dollars a minute, if not more. Your website is similarly affected by reliability misconfigurations, which we're going to get into in just a second. But security misconfigurations you can get away with, until you can't. It's free until it's so expensive it puts you out of business. It can do damage to your brand, to your user base, to everything. So having a correct security posture, and anybody who's been in this space knows this, but it's worth reiterating: it matters to get this right.
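To make the least-privilege point concrete, here is a minimal sketch of read-only access for a developers group in a single namespace, written as plain Kubernetes RBAC; RBAC Manager offers a higher-level way to express the same thing. The namespace and group names are assumptions.

```yaml
# Hypothetical read-only Role for developers in the "team-a" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-only
  namespace: team-a
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "pods/log", "deployments", "replicasets", "jobs", "services", "configmaps"]
    verbs: ["get", "list", "watch"]      # no create/update/delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-read-only
  namespace: team-a
subjects:
  - kind: Group
    name: developers                     # assumed group name from your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: read-only
  apiGroup: rbac.authorization.k8s.io
```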
In fact, security is one of the number one reasons we see people interact with our software, both open source and SaaS: they want a security posture in Kubernetes that they have confidence in. So it's not a minimal thing. Okay, now we're going to do something similar for reliability. Let's talk about health probes. Ivan, you can start: what are health probes, what problem are they trying to solve, and what's the impact of getting them wrong? Then we'll go broader on some of the other reliability issues that we see people struggle with. Yeah, so I won't re-explain the probes, because we talked about them earlier in the overall overview, but I'll say a little more about what happens if you get it wrong. If you don't have readiness and liveness probes defined for your application, then Kubernetes doesn't have any way of knowing what the health of your pods' containers is. The impact of that, and I apologize for the fire engine in the background, is that if you have a container that hangs or becomes unhealthy or stalls in some way, Kubernetes won't know that it can, or should, restart that container. Without a probe, Kubernetes' definition of a healthy container is that the process you ran to start the container is still running. And as we know, that process could be quote-unquote running but deadlocked for some reason. As for readiness probes: without those, Kubernetes won't know whether it should send traffic to that pod. So if you have a service defined in Kubernetes, whether it's used internally by other applications talking to yours in a microservice architecture, or leveraged by your ingress controller, or directly attached to a load balancer, you're getting traffic into your application through that service. Without a readiness probe, if you have a similarly unhealthy or unresponsive pod, Kubernetes will continue to send traffic to it. Now your users and customers are hitting an application pod that isn't able to do work, which means hung connections or HTTP errors and those kinds of things. Those are the ramifications of not setting the probes we talked about earlier in the webinar. Yeah, making sure that things are working the way they should be. Go ahead, Robert. A good symptom that you've got an issue here, either the probes haven't been set or they're misconfigured, is if you see a big spike in 500 errors, or some other type of error in your logs, every time you deploy. There's a good chance you're seeing that kind of downtime as a new pod spins up and starts getting traffic before it's actually ready for it. So keep an eye out for those spikes in errors every time you deploy. Yeah, I'm trying to think of an analogy, but it's basically a new person coming into your organization: on day one you hand them a mountain of paperwork and ask them to file your company's taxes. There's no way they have the context to do that. They're not going to do it well, and they're going to stare at you with a very blank face, the same way a workload is going to respond with not yet, not yet, not yet, except it's not smart enough to say not yet.
So it just says no. Anyway, what are some other common reliability issues that we see? Yes, I need to build my cluster so it's secure, we just covered that, but what about building a cluster so that it's reliable? What do people often get wrong? Another one might be the presence of a horizontal pod autoscaler or a pod disruption budget for each of your workloads. With the horizontal pod autoscaler, for instance, you can tell Kubernetes when to scale your workload up, when to scale it down, what the maximum level of concurrency is, as in how many pods you want at once, and what the minimum number of pods is. You probably always want at least two running at any given time for any kind of high-availability scenario. So making sure that HPA, that horizontal pod autoscaler definition, is present is a huge part of reliability. Yeah, to add to that very simply, there's a big difference between how much traffic you serve as an e-commerce website on a Tuesday in February and what you're serving on Black Friday. If you don't have the ability to scale, if you're trying to serve 10,000 times the customers with the same amount of computing power, you're not going to have a reliable experience that is good for your users. Ivan, anything to add to that? No. Other reliability issues that come to mind right off the top of your head? Some of these bleed into one another, and we're going to talk about cost in a second: some ways to get cost right, the effects of getting cost and the settings around cost wrong, and how some of that affects reliability. Yeah, let's move on to cost, because I think there's some good natural overlap coming up here. Okay, cost: inappropriate resource requests and limits. Ivan, I know you were worried that we had these separated and you weren't going to be able to talk about them together, so now I'm going to let you talk about resource requests and limits and how they affect both your cost and your reliability. Dive in. Yeah, not really worried, but definitely a lot of these categories relate to each other, like you said. Ivan, you were worried, I remember the voice. I'm just going to keep going. Well, like you said about efficiency and reliability and how they overlap with security, so does this. As far as cost goes, we've got two big key things here: resource requests and resource limits. And the cost, no pun intended, of getting these incorrect is that you end up with a noisy neighbor problem, among other things, in your Kubernetes cluster. If you've over-provisioned your nodes, so you've got nodes that are too large or too many nodes, now you've got cost overruns. If you've under-provisioned, the other side of that coin, now you've got instability. Instability is its own thing, that's the overlap, but there's also a cost to instability, which is downtime and impact to your business and your data and your customers. So developers, or whoever is deploying your apps, need to be specifying CPU and memory requests and limits. Requests are essentially the resources you think your app is going to need, a baseline, and limits are how much your application should use as a maximum, a cap of sorts.
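Here is a hedged sketch of the pieces just discussed on one hypothetical workload: requests and limits on the container, a horizontal pod autoscaler that scales on CPU relative to those requests, and a pod disruption budget. Every number is a placeholder; real values should come from observing the workload in production (or from a tool like Goldilocks, which comes up later).

```yaml
# Hypothetical deployment with requests (the scheduler's baseline) and limits (the hard cap).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: example.com/example-api:1.0.0
          resources:
            requests:
              cpu: 250m        # what the scheduler reserves for this container on a node
              memory: 256Mi
            limits:
              cpu: 500m        # CPU is throttled above this
              memory: 512Mi    # the container is OOM-killed above this
---
# HPA that scales the deployment based on CPU utilization relative to requests.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-api
  minReplicas: 2               # keep at least two pods for high availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # scale up when average CPU passes 75% of requests
---
# PDB so voluntary disruptions (node drains, upgrades) never take every pod down at once.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-api
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: example-api
```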
There's a lot of technical detail we'll avoid for now about what happens when you reach those limits, but this relates to all kinds of other functions in Kubernetes: depending on how you have these requests and limits set, they get used for scaling new nodes into your cluster and putting the workloads on the correct nodes. Yeah, so these do bleed together. The reason we have cost problems around inappropriate resource requests and limits is tied to reliability. I'm an engineer deploying my workload; it works on my machine. I literally click the Apple menu in the top corner, see how big my machine is, and make sure I provision a container that's the same size. That's probably overkill for what I'm doing in that container, probably in huge quotes; you never know, I can write a really inefficient workload if I want to, with memory leaks everywhere. But as a developer, I just want that thing to work. One thing I definitely don't want, if there's a culture of service ownership, is for my app to page me in the middle of the night because I under-provisioned it. So I'm likely to wildly over-provision it so that's never a problem. And that's where the business use case ends up in conflict with the individual's use case. The business wants to make sure that workload is up, but also that it's not costing a fortune, and even understanding quality-of-service levels is really important here. It's easy to mess those things up. Robert, anything to add? No, just that, like you said, there's a natural tension here between the folks who are charged with making sure this application is up and running all the time, namely the developers, who are going to want to over-provision and say give me all the CPUs, give me all the memory, versus finance and ops, who are ultimately responsible for that AWS budget that's getting apportioned to these containers, sometimes incorrectly. So it's super important to have those discussions and make sure you're making those trade-offs correctly, and that they're data-fueled conversations. You need to be watching these workloads in production so you can come to the table and say, this workload has never used more than half a CPU at any given time and we've got four CPUs provisioned; we definitely need to take that down to at least one CPU, which would still be way over-provisioned but would save us 75% of our bill. Having data for those discussions is super, super helpful. And it's one thing when your organization is very small and you have one or two pods running. It's another thing when your organization is very large and you have thousands of pods running and all of them are even a little bit over-provisioned; that spirals out of control really quickly. Okay, so we've given you lots of examples of ways to mess things up, and now we want to talk a little bit about open source tooling to actually identify those misconfigurations. We play a bunch in this space. We're going to talk about some Fairwinds open source tools, and then a few other open source tools that are not from Fairwinds, and go from there. So let's dive in first with Polaris, and a backstory for Polaris. Today, Fairwinds is a software company.
All we do is build software for organizations to help them succeed with Kubernetes. But our genesis as an organization was in services, where we were building and maintaining Kubernetes infrastructure for customers. At some point we realized it doesn't matter how great an infrastructure we build for people if everything that's deployed into it is fundamentally broken. And we saw the same mistakes being made over and over and over again. So we went and built Polaris. Polaris exists because people mess the same things up; everybody makes the same mistakes. I imagine some of it is related to the fact that if you come from Windows over to Linux, you're looking for the Start button, right? That's the paradigm you know, so you're going to click around hoping you find something that looks familiar, and every single person making that transition is going to struggle with some of the same things. That's part of why we see these same issues. But Robert, give a little more of an overview of Polaris than I'm giving right now. Yeah, so Polaris checks for pretty much everything we've talked about today, all the misconfigurations from missing CPU requests and limits to security context issues to liveness and readiness probes. We have built-in checks for all of that. I think the really important thing to note about Polaris is that it's not just going to look inside your cluster and tell you, here are all the things you're doing wrong. It can be implemented not just as a dashboard looking at what's inside your cluster already, but also as an admission controller, so it can block things from getting into your cluster if they don't meet a certain level of configuration. And it can also run in CI/CD, so it can look at your infrastructure-as-code changes as somebody's making a PR and say, hey, you added this new deployment that doesn't have a liveness probe specified; I'm going to block you from merging this PR until you specify that liveness probe, or until you add CPU requests and limits, things like that. The fact that it can run the same checks in all three contexts, infrastructure as code, admission control, and inside a live cluster, makes it a really powerful tool. You can also write custom checks using a JSON Schema, which is important because there are other tools out there that do custom policy enforcement, and we're going to talk about OPA in a second, but some people struggle with Rego, the language you need for OPA. If using a JSON Schema you're already familiar with is an easier way to approach that, Polaris makes that easy to implement. And this is what the Polaris dashboard looks like: it gives you an overview of the cluster, gives you a grade, a health score, and shows everything it's checking, what's passing and what's failing. This is great to deploy across one or two clusters. It's really difficult to deploy across an organization and check everything and make sure it's all implemented the right way across lots and lots of clusters, so if you want to operationalize any of these tools at scale, check out Fairwinds Insights, our SaaS platform. I'll probably mention that a few more times.
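For reference, a Polaris custom check is a JSON Schema embedded in Polaris's YAML configuration. The example below, which flags images pulled from a hypothetical disallowed registry, is a sketch modeled on the custom-check format in the Polaris docs; treat the exact keys as an assumption and verify them against the Polaris version you run.

```yaml
# Sketch of a Polaris custom check (verify key names against your Polaris version).
checks:
  noBadRegistry: danger              # enable the custom check at "danger" severity
customChecks:
  noBadRegistry:
    successMessage: Image comes from an approved registry
    failureMessage: Image is pulled from a disallowed registry
    category: Security
    target: Container                # apply the schema to each container spec
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          not:
            pattern: ^registry.example.com/   # hypothetical registry to block
```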
What Insights adds on top of the open source tools is a few proprietary things, above and beyond, to add value, but it also makes it really easy to operationalize at scale: write the policy once, enforce it across all the clusters in your organization. So if you want something like that, check it out. Next, Goldilocks. Ivan, do you want to give a high-level overview of Goldilocks? Sure. The theme of Goldilocks is getting your resource requests and limits just right. What it does is watch your workloads running in your cluster and then make a recommendation for what you should be setting those to. We do have some extension beyond that in our other products as well, which helps for scenarios where you have very spiky workloads at certain times, like Kendall's Black Friday example earlier. But Goldilocks is awesome because it helps you avoid having to dig into a monitoring dashboard, look for the peaks in the graph, and do the guesswork of, what should I be setting my requests to? Now that I know they're important and what the cost of not setting them is, how do I figure out what those numbers should be? Goldilocks helps you do that. Yeah, it's called Goldilocks so you can get it just right. And just to add on, because we do have to speed up toward the wrap-up, everyone struggles with resource requests and limits. You tell an engineer, get that right, and they don't have any clue how. Goldilocks makes that easy. Finally, GoNoGo. Robert, do you want to give a quick pitch on this? Yeah, so this is actually our newest project. It's a really cool way to validate whether or not you're ready to upgrade to a new version of a Helm chart. Often Helm charts, like cert-manager for instance, which is a very popular one for managing certificates, will make breaking changes from one version to another. You'll have to update some CRDs and the way you're doing things in order to be compliant with the new version. GoNoGo will look inside your cluster and tell you whether you've implemented all the changes you need to make in order to safely upgrade to the new version. Great. And let's talk about a few third-party tools here: Trivy, OPA, kube-bench. Do you want to give the quick rundown, Robert? Yeah, so Trivy is a great tool for container scanning. It can look inside of containers and understand if there are any known vulnerabilities in them by cross-checking against a very large database of known vulnerabilities. So, a very powerful tool for container scanning. OPA is the next one in line. OPA allows you to implement custom checks, similar to Polaris but even a little bit more powerful. It's really a not-quite-Turing-complete but full-fledged programming language for doing these kinds of checks. This is great if you have very specific custom needs, like making sure that every workload has a particular label set, maybe a cost-center code label on everything; things that are very specific to your organization can be implemented as OPA checks. And then last, we have kube-bench, which will look inside a cluster and help you understand how well it conforms to the CIS Benchmark for Kubernetes, which is a set of guidelines for how to configure, particularly, the control plane of a Kubernetes cluster.
So if you're managing your own control plane, kube-bench is a great way to understand how secure that configuration is and what you might need to do to really tighten up that security profile. Great. And finally, we do just want to give a quick plug. I mentioned Fairwinds Insights in passing, but this is for Kubernetes governance: putting guardrails around the way people deploy things into Kubernetes, from CI/CD through to production, writing policy once and enforcing it everywhere. We cover security, cost optimization, policy, and guardrails. It includes Polaris, Goldilocks, Trivy, kube-bench, OPA, and a few more as well. So if you need an all-in-one solution that's going to make it easy to operationalize policy enforcement in Kubernetes across your organization, check out Fairwinds Insights. And finally, go check out our white paper on common Kubernetes misconfigurations, where we cover these topics: Kubernetes, the good, the bad, and the misconfigured. It's probably linked where this webinar is published. Thanks so much for being with us. We're going to wrap up to hit that 40-minute mark, and we will hopefully see you another time. Thanks.