Kubernetes open source tools you need in 2023. And we're here to discuss tools that we've developed here at Fairwinds and that we use to manage Kubernetes clusters, because it's hard to do. And so whatever tooling you can find out there to help manage policies, to help create governance policies that make your clusters more stable and more secure and more efficient and all that great stuff, the more you can use, the better your lives will be as folks who operate Kubernetes clusters. So I'll do a quick intro here. A question: so we're not allowed to start using these until 2023, is that what I'm getting from the title? I think so. Okay, cool. So everybody wait a month, take the next month off, and then 2023. So it's your New Year's present, right? Yeah. Pretty cool. Snuzzle tag if anyone watches Gumball. No one watches Gumball, Steve. All right, anyway, so I'm Stevie Caldwell. I'm an SRE technical lead here at Fairwinds. I've been working in tech for longer than I will ever admit to anyone. But I've been through lots of stuff. I've been a sysadmin and a network engineer and done DevOps, and I've been working with Kubernetes for a few years now, and also starting with some open source development here at Fairwinds with some of the tools that you see here today. And I'm Andy. I'm the CTO here at Fairwinds. I've been using Kubernetes for, I think, seven years now. I've been with Fairwinds for four and a half. Like Stevie, I have also been a sysadmin, a reformed sysadmin as I like to call myself, for many, many years. And I'm also an author and maintainer of a lot of our open source. So, awesome. What do we want to say in the mission statement? Oh, I know you love reading the mission. Because everyone loves a slide that gets read out loud to them.
So Fairwinds is a trusted partner for Kubernetes security, policy, and governance. With Fairwinds, customers ship cloud native applications faster, more cost effectively, and with less risk. We provide a unified view between dev, sec, and ops, removing friction between those teams with software that simplifies complexity. Wonderful. Have you memorized that yet? Not yet. No, I should, though. It's my radio voice. All right. I think we're going to kick off a polling question here for everybody. Just a little interactive piece. So what is your greatest opportunity to improve your Kubernetes environment? Is it A, getting help with the basics? B, a general best practices assessment? C, improving the security posture of your clusters? D, saving money? Or E, improving the reliability of apps running in Kubernetes? I have to dock Zoom points here. Zoom doesn't let me vote in the polls. Some of the other platforms do let you vote as a presenter. So I mean, I would definitely say A, I need help with the basics. I do, all the time. Just like a CTO, right? All right. We'll leave the poll up, there's just five more seconds. I like how you're just, like, the secret voice of Oz. Exactly. So it looks like we've got 14% looking for help with the basics. So maybe Andy did slip in his answer there. We've got 38% wanting a best practices assessment, 13% improving the security posture of their clusters, and the remaining 38% improving the reliability of apps running in Kubernetes. All right. Well, I like that B and E, general best practices and improving reliability, are completely tied together, because if you follow the best practices, hopefully you have a reliable cluster. And I think that leads perfectly into the first tool that we're going to talk about today, which is Polaris. So I'm going to figure out how to kill these slides, because I'm using a new browser these days and it is sometimes baffling. All right. So let's talk about the setup here.
I have a Kubernetes cluster. So if I kubectl get nodes, and I believe Stevie and I will be using the same cluster today, from different perspectives, with the different tools. So we have a cluster here. It looks like it is working on scaling down some nodes. That's interesting. And then I have two demo applications running in here. So I have a relatively simple application that runs as a single deployment. It's got an ingress and a horizontal pod autoscaler. And then a more complex application that has a database and a cache and a front end and a back end. It's a multi-tiered web app. And these are running in the cluster. They're available at public endpoints. So we have this one here. It just pings the pods over and over again. And then we have this one here. This is the multi-tiered web app. It lets you vote for where you want to go to lunch, even though I think all of these places would be more than a two-hour drive for me. So I don't know that it's going to happen, but we can vote on where to go to lunch. And we are generating some traffic against these. So this vote count is just going to keep going up and up. Hopefully, it may have finished generating load. But we have somewhat of a realistic situation here. We have a multi-tiered web app. We've got another app. They're in different namespaces. And we want to figure out if we're following best practices in how we've deployed these. And so the third thing that I've installed in this cluster is Polaris. And if you go take a look at our documentation page for Polaris, which is at polaris.docs.fairwinds.com, you'll find instructions on how to install it. It is a relatively straightforward Helm installation. I have customized it with a bunch of values that I won't necessarily get into in too much detail just yet. But what it has given us is a dashboard. You can also run Polaris as a CLI tool. So you can run this exact same set of checks against your cluster just from the CLI. I like the dashboard.
I'm an executive, so I love shiny dashboards. And so we can scroll down here and see, right now I believe we're filtered by namespace, but if I drop the namespace filter, we should see all of our namespaces pop up here. And we'll get a score that is related to the number of passing versus failing checks. And we've got some errors, some things we consider dangerous, some warnings, and some passing. So if we scroll down, we will start to see some of the things that we care about. I'm going to scroll past all this RBAC stuff for the moment. And I'm going to look at the different namespaces. And we'll see different checks failing or passing in some of these places. So I'm going to look first at this deployment. And we see that we have some passing checks. These are custom ones. We're not using hostPath volumes, because that's a big security no-no. And then we have this check here that says I should have a pod disruption budget for my deployment. Well, what does that mean? So I'm going to click this question mark, and that's going to take me back to that documentation page again. And I'm going to see here that missing pod disruption budget is the only one that has no description. Nice. Didn't mean to do that, but we should have descriptions for a lot of these. But essentially what we need to do is add a pod disruption budget that refers to that deployment. And so we have a whole bunch of these checks built in. They're documented here under reliability, efficiency, and security. And these are all best practices that we have learned, followed, and suggest to all of our customers as we've built and run Kubernetes clusters over the years. And so that's the very basics of what Polaris is. And you saw all the built-in checks. And now you may want to add custom checks to Polaris. So maybe there are some things that are specific to your environment or some things that you want to enforce. And so this is where we'll jump into the Polaris values file that I have used here.
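To make that concrete: satisfying the missing pod disruption budget check just means adding a PodDisruptionBudget whose selector matches the deployment's pods. A minimal sketch, where the names, namespace, and label are hypothetical rather than taken from the demo cluster:

```yaml
# Hypothetical PDB for a deployment whose pods are labeled
# app: basic-demo in the demo namespace.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: basic-demo-pdb
  namespace: demo
spec:
  # Keep at least one pod running during voluntary disruptions
  # (node drains, cluster upgrades, and so on).
  minAvailable: 1
  selector:
    matchLabels:
      app: basic-demo
```

Once an object like this exists alongside the deployment, Polaris sees the selector match and the check passes.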
And we'll scroll down into the config section. And we see here the list of checks and their level of... what would the right word for that be? Their severity level. That's the word I'm looking for. So these are the different severity levels: ignore, warning, and danger. The ignore ones just don't even show up. And the warning ones showed up on our dashboard as warnings. And then we have danger. The danger ones have the added ability of being able to be blocked by our validating admission webhook, which is another piece of Polaris that you can add. So once you've gone through and you're happy with the state of your cluster and you've fixed all these things and you've put in your exemptions and whatnot, you probably want to start blocking things from getting into your cluster that violate some of these rules. So you can mark them as danger and block them at the admission request level. All right. So I was going to talk about custom policy. So if we come down here, we'll see that I've added a few custom policies. I've named them image registry, resource limits, and host path mount. And I've got some Karpenter stuff in here that I've been tinkering with. But we can look at this custom check for image registry. So perhaps you're only pushing your images to a specific registry at your company, or you are... Can folks see this? I see my little preview here and I'm concerned. Oh, no, never mind. We're good. I saw something else. But perhaps we want to say that all of our images have to come from a specific list of registries. We can write this policy here that says just that. It's got a couple of messages to go with it and a category assigned to it. It's targeting the container specification. And then we write our policies in JSON Schema. If you go into the Polaris repository, you'll see that all of the default checks are in the checks folder and they are listed there in YAML files. And they look exactly like this. They're all JSON Schema checks.
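As a sketch of what that part of a Polaris values file can look like, here is a severity section plus a custom image registry check, following the format in the Polaris documentation. The check names shown are built-in Polaris checks, but the severities and the registry patterns are example values, not the ones from the demo:

```yaml
checks:
  # Built-in checks mapped to severity levels:
  # ignore (hidden), warning (shown), danger (webhook-blockable).
  cpuRequestsMissing: warning
  memoryRequestsMissing: warning
  runAsRootAllowed: danger
  missingPodDisruptionBudget: ignore

customChecks:
  # Custom check: images must come from an approved registry list.
  imageRegistry:
    successMessage: Image comes from an approved registry
    failureMessage: Image should come from an approved registry
    category: Security
    target: Container
    schema:
      '$schema': http://json-schema.org/draft-07/schema
      type: object
      properties:
        image:
          type: string
          anyOf:
            # Example registries; swap in your own.
            - pattern: ^quay.io/.*$
            - pattern: ^ghcr.io/.*$
```

The `schema` block is plain JSON Schema expressed as YAML, which is why the built-in checks in the Polaris repo look exactly like this.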
And so we're looking at the image property of the container. And we're going to say that it has to match any of these string patterns. So this image should be coming from any one of these registries. And any one that doesn't match will pop up as a warning in our Polaris dashboard, because we have that currently set to warning. So that's how we add custom checks. And then the last thing I'll talk about real fast is exemptions. There are probably some things in here that have to run this way. For example, that pod disruption budget for cert-manager that I mentioned. cert-manager is a single-pod controller in this cluster. There's no need for it to run more than one replica, because it runs as a reconciliation loop. And if it goes down or disappears for a little bit, it comes back, and it's going to take care of its business when it comes back. That's not a big deal. And so what I want to do is add an exemption for cert-manager. So I'm going to say controller names, and I'm going to go back here and see that the controller name is going to be cert-manager, and so move that here. And then I'm going to say rules, and it is going to be exempt from, what's the exact name of it, the missing pod disruption budget rule. All right. So we'll add that in there as an exemption. So ideally we're going to save this. We're going to rerun the helm install that references this values file. We're going to update Polaris in that namespace. And I am going to wait for that to finish and go back to my dashboard. And hopefully that's gone away. And then some of those other settings, to set things to warning, have been enabled. Any questions, thoughts? I'm trying to keep an eye on questions here. Feel free to use the Q&A tab to ask questions of us, and we will try to get them answered. All right. I'm going to get pods in the Polaris namespace and just see if we've got new ones yet. And as soon as they're up, I will go check the dashboard. There we go. All right. We'll refresh this.
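The exemption being typed out here ends up looking roughly like this in the values file, following the exemption format from the Polaris docs:

```yaml
exemptions:
  # cert-manager is a single-replica reconciliation loop, so a
  # missing PodDisruptionBudget is acceptable for it.
  - controllerNames:
      - cert-manager
    rules:
      - missingPodDisruptionBudget
```

After re-running the Helm install with this values file, Polaris stops reporting that check for the cert-manager controller while still enforcing it everywhere else.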
I'll go ahead and filter down to just the cert-manager namespace. All right. And we see here the deployment cert-manager just has this new check for priority class, which I believe is a custom check, actually. So I'm not going to worry too much about that. But, let's see, the PDB one isn't showing as failing anymore. So we should be good. I don't see it failing, so that's a good sign. But that's custom policy, exemptions, and how we deploy Polaris into our clusters. And then once we have, like I said, once we have all of our policies green, and we have the specific ones green that we care about a lot, I'm going to go enable that validating admission webhook and start blocking things from entering the cluster that don't meet these requirements, that don't pass these policies. Anything else about Polaris that I should cover and that I didn't? No, I think that's good. All right. Thank you very much. Well, then I think I will go ahead and hand it off to you for the second one. One of the things that Polaris will tell you to do all the time is to set your resource requests and limits on your deployments. And if those aren't set, you may be asking how to set them. So that's where Steve comes in. Hello. So I'm going to see if I can share my screen. I always have difficulty sharing the right desktop, so let's see. Let's try this one. See if that works. All right. Can you all see a terminal here? Yes. Yes. I see a terminal. You see a terminal over there? I do. Yes. Excellent. Yeah. So Goldilocks, let me see if I can do this thing and show you... yay. So this is our documentation page for Goldilocks. It might be a touch small on the text size, just a heads up. Oh, for real? All right. Let me see if I can make it bigger. How about that? That's a little better. Yeah. A little better. You want it even bigger?
I mean, it's hard to tell, but I'm sure it's fine. All right. Is it also small on the terminal? Is that good? It's fine for me. All right. Then we'll keep going unless somebody pops into the chat and says, yo, that's not good. All right. So yeah. Goldilocks, what does it do? Andy was talking about how you'll get reports from Polaris about setting resource requests and limits. And I'm sure it's been hammered into you: yes, you should, best practice, set resource requests and limits for CPU and memory on your workloads. And that's always something that people struggle to do, because it requires you to do some thinking about it, right? You need metrics. You need a good span of time for those metrics. You need to capture traffic at your high loads and your low loads and all that other stuff. And that's a lot to do manually if you're pulling up graphs somewhere in Grafana or something and trying to do that math. Goldilocks does that for you using some already existing Kubernetes projects, because it's always great to build on trusted and tested projects and open source software. And so that's what Goldilocks does. So Goldilocks can be installed... well, actually, first, prerequisites for Goldilocks. We were talking about those tried and true projects. Goldilocks requires you to install metrics-server and the vertical pod autoscaler. Both of those are pretty standard. I feel like most people have metrics-server installed in their clusters already, just so you can do a kubectl top or something like that, right? So this is the cluster Andy was working on. It looks a little different because we're accessing it a little differently. And all of my stuff's just not as cool looking as his. I don't know. It's just a thing I aspire to. But anyway, if we look... and I'm going to go ahead and disclaimer that my typing gets really bad when I am in front of folks.
And so there's going to be a lot of copy pasta, if I can help it, so that I don't have to worry about my typing. So as you can see, we already have metrics-server installed in this cluster. It's all running and good, because we can do the good old k top pods and find stuff in there. And so the other thing that we need running in here, like I said, is the vertical pod autoscaler. The vertical pod autoscaler has three components to it, and the component that Goldilocks needs is the recommender, which we have installed here. We also have the updater, but that's not strictly necessary; you could get away with just the recommender. The third portion of the VPA is the admission controller. We don't install that by default. So you can install metrics-server using their Helm chart, and for the vertical pod autoscaler, you can go to their repo and install it from there. I think they have a script in their hack folder or something like that. We also have a Helm chart for the VPA, because as far as we know, there isn't an official one yet. Our chart is opinionated and will by default just install the recommender and the updater. And the recommender and updater pretty much do exactly what it sounds like. The recommender gets metrics from metrics-server and then uses those metrics to write recommendations to the VPA objects that are created for the deployments you've attached to the VPA. And then Goldilocks essentially uses those values to make recommendations, or to surface them for you, based on the VPA. So like I said, this has already been installed in the cluster, and Goldilocks has been installed as well, which is also installable via a Helm chart. So we've installed the controller and the dashboard. And those are pretty much all you need to have Goldilocks running and giving you information for setting your resource requests and limits. So how do you actually get Goldilocks attached to your workloads?
How do you get it to create VPA objects for your workloads? That comes through labeling namespaces with a particular label. So we actually already have, in this cluster, because of the way we've set it up, a bunch of namespaces that have the Goldilocks label on them. But we did create, as Andy said, two new namespaces, this Yelp one here and this demo one here for the demo app. So we created those namespaces specifically to show you Goldilocks in action. So we're going to work within those. But just to show you, if you say k get vpa, and k is just my shortcut for kubectl, you can see all the VPAs that were created in this cluster because of Goldilocks. And I'll show you how those probably got created in a little bit. But let's start off with just labeling. Well, let's take a look at the Goldilocks dashboard first, right? I'm going to run this in another terminal. I'm not a tmux aficionado like my buddy here. So I'm going to port-forward the Goldilocks dashboard here. And then we're going to go over to the browser. And look, I even pulled up localhost already, because I am soups lazy. So we're going to pull up localhost:8080, and here's the Goldilocks dashboard. So normally, if you've just set up Goldilocks in your cluster, all this stuff isn't going to be here. These namespaces won't be here, because you typically haven't labeled your namespaces yet. These namespaces are already labeled, or they've already been set to have VPAs, so they show up. If you were starting with a pure vanilla installation, there would be a nice little block of text up there that tells you exactly the command you need to run to manually label your namespace. And that's the command I'm going to show you now. Let's start with our demo app. So we're going to get our demo app set up with the Goldilocks label. And again, copy pasting like it's my job, and in some cases it sometimes is. So we're doing a kubectl label namespace demo with the goldilocks.fairwinds.com/enabled label.
Set that to true, and we run it. And now we see that namespace is labeled. And if we go back now over to our dashboard, which is not there. Steven, it's right here. No, it's not right there either. It's right here. There we go. And we refresh that guy. And we see the demo namespace right here. So that's how easily you can add workloads to Goldilocks. Everything that's now deployed in that demo namespace will be picked up and will have a VPA associated with it. So if we do a get vpa in the namespace demo, we can see there's the goldilocks basic-demo VPA. And you'll notice, well, let's actually just look into it a little bit. So a couple of things you'll notice in here. The update policy is set to off. So with the vertical pod autoscaler, if you have the updater installed, the updater can actually vertically scale the resources on your pods for you, based on this information down here from the recommender. We have these set to off, so it doesn't do that automatically for you. And I don't know if you noticed, but when I did get the labels on all the namespaces in the cluster, there were some that had a label with the update mode set to auto. So you can pass another label to your namespace that will set the update mode to auto, and then it will automatically scale the resources on your pods as needed. But our default mode is set to off. And then these recommendations here that come from the recommender, they play a part in how we recommend resource limits and requests in Goldilocks. So if we go back here and we look at the demo app, I'm going to just go ahead and collapse that load. So you see, it's a lot like Polaris in the sense that there's a namespace. It shows you the workload, shows you the container, and it shows you what Goldilocks is recommending you set for your resource requests and limits.
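The VPA object that Goldilocks creates for a workload looks roughly like this; the names here are illustrative rather than copied from the demo cluster:

```yaml
# Roughly the shape of a Goldilocks-created VPA targeting a
# deployment; metadata names here are hypothetical.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: goldilocks-basic-demo
  namespace: demo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: basic-demo
  updatePolicy:
    updateMode: "Off"   # recommend only; never resize pods automatically
```

With `updateMode: "Off"`, the recommender still writes its suggestions into the VPA's status, which is where Goldilocks reads them from, but the updater never acts on them.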
And so this is the current setting, and this is what they recommend for guaranteed quality of service and for burstable quality of service. And for each of these it prints out some YAML that you can then copy-paste into your Helm charts. Or I guess you could live edit it in the cluster, but I wouldn't recommend that if you're doing CI/CD. Also, we have a little glossary down here that explains the difference between the guaranteed quality of service and the burstable quality of service, which is essentially about how Kubernetes treats your workloads when the cluster is under resource pressure. So in Kubernetes, the kube-scheduler uses your resource requests when trying to pack your pods onto nodes. The HPA, the horizontal pod autoscaler, also uses your requests. And so that's important to have set. But if you have both your requests and your limits set to the same values, then if you have some sort of resource contention, your pods have a guarantee of, this is what I'm going to use on this node, right? Burstable means that your pod could go up or down for short spikes of time. Except for CPU, I guess, because then you get throttled if you do that. But for memory, you get a little burstable headroom there. And so that makes it a little more flexible. But I also think it means it's more easily evicted if you have some resource contention. So these are the things that Goldilocks will recommend. These are starting points, right? That's important to keep in mind. Ultimately, you know your workloads and your business patterns better than we do, better than Goldilocks does. So this is a good starting point to test out and tweak and see how those things will work for you in terms of your traffic patterns. There is a way to label all of your namespaces. So what I showed you was going in and manually adding a label to your namespace.
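In manifest terms, the difference between the two recommendations comes down to whether requests equal limits. A sketch with made-up numbers (the two fragments are shown side by side for comparison, not as one valid spec):

```yaml
# Guaranteed QoS: requests and limits identical. The scheduler
# reserves exactly this much, and the pod is evicted last under
# node resource pressure.
resources:
  requests:
    cpu: 100m        # illustrative values only
    memory: 128Mi
  limits:
    cpu: 100m
    memory: 128Mi
---
# Burstable QoS: limits above requests, so the pod can spike
# past its request (memory especially; CPU just gets throttled
# at the limit), at the cost of being evicted earlier.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 200m
    memory: 256Mi
```

Note that a pod only gets the Guaranteed class if every container in it has requests equal to limits for both CPU and memory.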
But you can actually just go ahead and enable all the namespaces in your cluster. Or, it's not actually labeling the namespaces, but it's tagging them for being used with Goldilocks, because if you do a show labels, you'll find that it doesn't actually create a label on the namespace. But it does create a VPA for every namespace in your cluster. Which means that any new namespace will also automatically get VPAs added for any workloads running in those namespaces, right? So for example, let's see if this will work. Actually, I'm not going to do that here, because we didn't install Goldilocks with Helm in this cluster, so there is no Helm chart that I can upgrade. But potentially, in my cluster, let me just check it out here for a second. I do have a Goldilocks chart. So this is my client cluster. This is what I meant when I said I was going to be using a bunch of different clusters, because I think I already ran this command. So with all this copy pasta, you can see what it is, but I won't necessarily run it here. But this command, there's a flag that you can set in Goldilocks, the on-by-default flag. So if you set on-by-default to true in both the controller and dashboard sections of the Goldilocks chart, what that'll do is automatically add a VPA for whatever new namespace you create in your cluster, which is pretty cool, and also create the VPAs for all your existing stuff in the cluster. And the last thing that I'll mention about Goldilocks, I think, is that Goldilocks uses the recommender, but there's a limited amount of history there. It only goes back so far. I don't actually know how far back it goes, but not super far. But you can actually install Prometheus in your cluster and then hook Prometheus up to the vertical pod autoscaler as a backend, like as storage.
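Based on that description, the Helm values for turning this on would look something like the following. The exact key names here are an assumption on my part, so verify them against the Goldilocks chart's values file before using:

```yaml
# Assumed shape of the Goldilocks chart values for on-by-default
# mode; key names are not verified, check the chart's values.yaml.
controller:
  flags:
    on-by-default: "true"
dashboard:
  flags:
    on-by-default: "true"
```

Setting the flag in both sections matters because the controller decides which namespaces get VPAs, while the dashboard decides which namespaces it displays.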
And then you'll have however long you decide to keep Prometheus data for, worth of history to reference in terms of making those kinds of recommendations for CPU and memory requests and limits. That's all I have. Any questions, Andy? Don't ask me any questions, Andy. I don't have any questions. That was a great overview of Goldilocks. All right, so I'm going to stop sharing. Actually, I was thinking, do you want to just do the Nova one, and then I can go, that way we don't have to flip screen shares four times? Sure, sure. Okay, all right. So we're going to go right into Nova. Tool number three. Covering a lot of ground today. We are. We are. All right, so Nova. Let's take you to the Nova page. Bye-bye, localhost. Nova documentation. All right, so Nova. Again, this is a tool that we developed in-house when we found a need for it based on all the clusters that we manage. And it scans your cluster for updates to Helm charts, and container images if you're not using Helm. So it's useful for keeping track of add-ons, you know, your cert-managers, your nginx ingress, your external-dns, all those things. You want to keep as up to date as possible with those, because obviously there are all kinds of security patches that'll come through, stability patches, and keeping those things up to date helps maintain the security and stability of your cluster, right? So, Nova is really simple to use. I'm actually going to, so I can do both, right? I'm going to run it in the cluster we're using. And you're going to find... oh, I typed "find" right as I said "find," and that was pretty dope. You're going to find that there's not a lot in this cluster, because, again, this cluster doesn't have a lot of Helm charts installed in it. But let me run through the command here.
So, nova find is the base command. By default, Nova will output stuff in JSON, so I pass the format table flag, and it'll give you this cute little table. And dash dash wide just shows you more information; if you did this without it, you'd get just a little bit less, you'd get the four columns. You were about to say something, Andy? Oh, I was going to ask if you were going to cover the dash dash containers, but it looks like you are. So, yes, cool. And so this is how it finds old and deprecated versions, or determines whether your Helm charts are old or deprecated. Quickly going through these fields: release name, chart name, namespace, the Helm version. Installed is obviously the chart version that's in your cluster. Latest is the latest chart version that Nova knows of. Old is a Boolean that says, yeah, your version is either old or not. And then deprecated is a flag, because sometimes inside actual Helm charts they can be marked as deprecated, so that you know that the Helm chart should not be used in the future and you should start moving off of it. We saw this happen in a big way when all those charts started moving off of the stable charts repo and into their separate repos, right? So that's why that flag is there. And let's see. So where Nova gets this information about the different versions of charts, and whether or not they're deprecated, is that we poll Artifact Hub to get that information. And there's one caveat that has been mentioned in previous presentations that I'll just carry on the tradition of mentioning, which is that charts can be forked. And that's cool. And sometimes those forked charts wind up in Artifact Hub.
And the Chart.yaml is sometimes the same between the forked chart and the original chart, which means that Nova doesn't have a way to see which upstream chart is actually the one that your release came from. So that is important to just keep in mind. We do some matching and scoring to try to mitigate that, and we do a pretty good job of it. But that is just something to keep in mind, because that is where it gets that information from. So what Andy was going to ask about is the container scanning. So Nova also scans container images. I'm just going to go ahead and copy pasta that command from over here. So let's see if we can find some containers. And this is a big cluster, so it might take a moment. So you can see we've got a similar output here. So these are the containers running in this cluster. Here's the current version. And here's another Boolean that tells you whether or not it's old. It tells you the latest container version. But it also tells you, in three different ways, how out of date you are. So it scopes it by the latest major version, the latest minor version, and the latest patch version. And that just gives you more info on what to update. So for example, you might not have any concerns about updating your add-ons to the latest patch version, or even the latest minor version. If it's like, hey, I'm on 0.60.1 and the latest minor version is 0.68.1 or something, you might be like, yeah, I can go ahead and patch that, I don't need to worry about any breaking changes and stuff like that. So it just gives you more information so you can make really granular decisions about how you want to handle patching your add-ons and doing those upgrades. Another thing about Nova is that there are some configurable options to it. So there's a command you can run, and I'm just going to go ahead and do it here.
So nova generate config will actually... oh, did I not do that right? Yeah, generate-config has a dash in it. There we go. And so that will give you a, boa, nova config.yaml file. Let me actually put this at the top here. I'd rather have a boa than another YAML file. So this is the config that Nova is using when it runs. Essentially, these map to command line arguments that you can pass into Nova on the fly. So for example, format JSON, as you can see, that's the default, and we pass in format table from the command line. But we could easily change this to table. The difference is that you'll then have to point Nova to your config file when you run it, so that it uses your changed config. So I guess we can actually even go ahead and do a quick change there. Change it to table. And then I think if I go back up to nova find with the config flag pointing at the nova config.yaml... so I won't say containers, sorry. Wait, I said format table in the config, so I want to see if that works without the flag. How about that? So many containers. There it is. See, that's magical. All right. So yeah. Because we changed that in the config, now you've got table as your default, which is great, because why did you ever really want that to be in JSON? There are reasons you'd want it in JSON, but for this purpose, you don't. A couple of other things that are interesting in this file that you can look into: you can set the desired version. That is a map that allows you to specify the desired version of your Helm chart or container. So for example, if you have some dependency in your cluster where you know, I can't use anything above this version of a Helm chart, then you can put this in here and set that version constraint, essentially, and then Nova will ignore it; it'll drop it from the output. Also, there's a URL list down here.
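Putting those config pieces together, an illustrative nova config file might look like this. The key names here are my best reading of the talk rather than verified flags, so treat them as assumptions and compare against what `nova generate-config` actually emits for your version:

```yaml
# Illustrative nova config; key names and values are assumptions,
# compare with the output of `nova generate-config`.
format: table                  # default output is JSON; table is friendlier
wide: true                     # show the extra columns
poll-artifacthub: true         # keep polling Artifact Hub...
url:                           # ...and also poll private chart repos
  - https://charts.example.com # hypothetical private repo URL
desired-versions:              # pin a chart so Nova stops flagging it
  cert-manager: v1.9.1         # hypothetical version constraint
```

Because the file maps to command line arguments, anything set here can still be overridden per-run with the corresponding flag.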
So I was telling you that Nova pulls from Artifact Hub. If you have other repos that you know your Helm charts are sourced from, like maybe some private repos, you can add those to this URL list. And it actually uses both: you can have poll-artifacthub set to true alongside your own private URLs, and it'll scan all of those to give you a report about your deprecated and out-of-date, old and dusty, containers and Helm charts. We have a question. It's very important. Hold on. Where's the question? I'm going to read you the question. All right. Can we have your elite bash prompt config string? Can you have my elite bash prompt? Actually, you know who I got this elite bash prompt config string from? You ready? That guy right there, my CTO. I grabbed it from him because he has a whole really cool setup using Starship. Starship is what I use to configure my bash prompt, so I don't know if that's public or not. It is. Yes. My dotfiles are public. I don't know that I can chat with everyone, but I did answer the question through Zoom. Yeah, Starship is really neat. Sometimes it's a lot of information, it gets a little messy depending on what I'm doing, but I find it very helpful. So shout out to my boss. Did I miss anything? I don't think so. That was great. I think the only thing, oh no, that's for Pluto. No, I think that's great. Thank you. Super. I'm going to stop sharing my screen. I want you to go back to yours. All right. We have one more tool, and then we've got just a couple of things to talk about. So let me get the Zoom bar out of the way so I can actually see what I'm doing. I'm going to go back to the cluster here, and the last tool is yet another tool written as we ran into problems. We were doing the Kubernetes 1.16 upgrades, and if anybody remembers that particular upgrade, all of the old extensions/v1beta1 deployment API versions were removed in 1.16.
And so if you had a whole lot of old YAML lying around, or you were deploying old Helm charts that had deprecated API versions in them, you were potentially scrambling to update those. And we needed a way, across all of our customers, to tell them: hey, this is where you're using deprecated or removed API versions, particularly removed ones, but deprecated as well. So we wrote this tool called Pluto, and what we realized while we were trying to write it was that the Kubernetes API server is a lot smarter than we are, and it automatically translates API versions from one to the next. So if I was looking at a cluster that had a deprecated API version, and I said kubectl get deployment -o yaml, it would give me back the apps/v1 version, regardless of how I put it into the cluster, because it said, hey, you gave me an extensions/v1beta1 deployment, but I know how to translate that into apps/v1, so I'm just going to do that. Which is great, and it makes it so that we can upgrade in place, but then it breaks our ability to deploy to the cluster after we've done that upgrade. And so we wanted to prepare our customers beforehand, rather than just blocking them from updating their clusters. There are a couple of different strategies by which we do this. First, we gave folks the ability to just scan YAML files. That's the most obvious thing: I've got a bunch of YAML here, I want to run Pluto against it and ask, are any of these API versions removed or deprecated? Then we thought, okay, if we can do that with local files, we also want to be able to template out Helm charts and feed those into it. And so Pluto has lots of different options. If we take a look at Pluto, another CLI tool like Nova, we can run the help command and see that there are several detect commands here that detect different things. With detect-files, you just pass it a directory of files and it'll look through all of those.
That's relatively straightforward. And then we have the straight-up detect command, and this one's pretty interesting. So I'm going to template out a very, very old Helm chart. This is the cert-manager 0.7 Helm chart, which is probably two and a half years old at this point. If we template this out and scroll back up here, we'll see we've got a ValidatingWebhookConfiguration at the admissionregistration.k8s.io/v1beta1 version, which was removed in 1.22. We've got this certmanager.k8s.io/v1alpha1 that's also been deprecated and removed. So there are a whole lot of versions in here that we would not want to try to apply to a cluster today. And obviously we wouldn't be applying this chart, because it's two and a half years old, but it's a good example. So then we're going to pipe that into pluto detect. We're going to give it the old dash shortcut for standard in that is so popular, and by default it's going to give us a table output. I'm going to zoom out just a little bit here so we can see more of this output. We're going to see a list of objects: what kind they are, what API version they use, what API version replaces it, and then whether it's been removed and whether it's been deprecated. All of these have been both deprecated and removed. If you're not familiar with the way Kubernetes APIs are deprecated and removed: they are first marked as deprecated, and then, multiple Kubernetes versions later, they are removed. In the Pluto FAQ on our documentation site, there's a link to the policy that describes how this is done in the Kubernetes code base; there are specific rules about how it happens. The important thing to note is that once a version has been removed, you can't do anything with it. You can't query it, and you can't apply YAML that uses it.
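The check Pluto is doing on that templated output can be sketched in a few lines. This is a deliberately minimal stand-in, not Pluto's code; Pluto's real version list is far more complete and versioned per Kubernetes release, and the two entries below are just examples:

```python
# Minimal illustration of scanning rendered manifests (e.g. the output
# of `helm template`) for apiVersions removed in Kubernetes 1.22.
import re

# Tiny example mapping of removed apiVersion -> replacement.
REMOVED_IN_1_22 = {
    "admissionregistration.k8s.io/v1beta1": "admissionregistration.k8s.io/v1",
    "certmanager.k8s.io/v1alpha1": "cert-manager.io/v1",
}

def detect(manifest: str) -> list[tuple[str, str]]:
    """Return (removed apiVersion, replacement) pairs found in manifest."""
    hits = []
    for match in re.finditer(r"^apiVersion:\s*(\S+)", manifest, re.MULTILINE):
        version = match.group(1)
        if version in REMOVED_IN_1_22:
            hits.append((version, REMOVED_IN_1_22[version]))
    return hits
```

Feeding this the templated cert-manager 0.7 chart would flag both of the versions shown on screen; the real tool also tracks the Kubernetes version in which each deprecation and removal happened.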
It basically seems like it doesn't exist anymore; we have completely erased it off the face of the earth. So we don't want to be applying any removed API versions. Andy, real quick, it looks like maybe the last column after replacement is off screen. There we go. Yes. Yeah, it was actually wrapping, which is super annoying, but we have the removed and deprecated columns over here. I guess I could have used less or something like that. But anyway, that's how deprecations and removals work. So if the Kubernetes API server can't tell us about these versions, then what can? This is the question we asked ourselves. Well, the first thing is Helm releases. When you apply a Helm chart, it creates a release object, and inside that release object is all of the YAML that was applied as part of that chart, in its raw form; it hasn't been translated through the API server yet. So we can look for those and ask about those. We have the pluto detect-helm command, which gives you the ability to do that. Now, as Stevie was pointing out, I don't have any Helm charts in this cluster that are deprecated, but if I had applied any, they would show up here with their name and namespace. And then the very last thing, since we are running relatively short on time: a recently added capability, thank you to a community contributor for that, is the pluto detect-api-resources command. What that does is look for an annotation that Kubernetes adds to some resources, depending on how you applied them to the cluster: the last-applied-configuration annotation. What that contains is essentially a string copy of the last applied YAML. So if you kubectl apply your YAML, that annotation gets set, and we can look through it and see if there's anything.
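What detect-api-resources reads can be illustrated like this. This is a hypothetical sketch, not Pluto's code, and the helper name is made up; the annotation key itself is the real one kubectl uses:

```python
# Sketch of inspecting the annotation that `kubectl apply` sets: a JSON
# copy of the object as it was last applied, i.e. before the API server
# translated its apiVersion to the current one.
import json

ANNOTATION = "kubectl.kubernetes.io/last-applied-configuration"

def applied_api_version(metadata: dict):
    """Extract the originally-applied apiVersion, if the annotation is set."""
    raw = metadata.get("annotations", {}).get(ANNOTATION)
    if raw is None:
        return None
    return json.loads(raw).get("apiVersion")

# Example: an object applied long ago with a now-removed version.
meta = {"annotations": {ANNOTATION: json.dumps(
    {"apiVersion": "extensions/v1beta1", "kind": "Deployment"})}}
print(applied_api_version(meta))  # extensions/v1beta1
```

Because the annotation preserves the original apiVersion verbatim, it survives the API server's translation and gives a tool like Pluto something to report on.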
So actually, I found two things here using that functionality. The fun thing is that these are both provided by EKS, so they're not things I actually control in this cluster, but Pluto has found the last-applied-configuration annotation and said, hey, this is something that should not exist. So those are the different ways to use Pluto to detect your deprecated API versions. And since it's a CLI tool, and we control the exit codes based on whether we find resources that have been removed or deprecated, you can use it in your CI to block or fail builds if people are trying to deploy things that are deprecated or removed, however you want to set that up. So that's the last tool you should use. We've talked about Polaris, Goldilocks, Nova, and Pluto today. I believe we also have content covering each of those tools individually in more depth, so take a look at our past webinars if you're curious. Altogether, if you're running all four of these and fixing everything they find, you should be in a much better place in your Kubernetes journey: catching bad configurations, keeping things up to date, not deploying deprecated API versions, and setting your resource requests and limits properly. So the last piece, because I can't leave without talking about it at least briefly: what if I don't want to install all these tools and run them myself, write my own CI/CD code, and deploy them into the hundred clusters I'm running everywhere? What if I really just want to install a single little agent that reports all this information back to a single dashboard? Can I do that? Can I? Yes, you can. Okay. So this is where we get to Fairwinds Insights, our commercial SaaS platform.
It allows you to hook up all of the tools we've talked about today, along with several others, and adds a lot of additional functionality on top of them. We normalize the results from all of these tools into what we call action items, which are reported into this dashboard. From here, you can route them to different places: you can write automation rules to send them to Slack, to generate Jira tickets, or to generate GitHub issues. And then we add in the ability to look at the cost and efficiency of your clusters as well. So we've got this entire section on cost that lets you see what your workloads are costing you and how much your clusters are costing you, across multiple clusters. So if you like all these tools and you want to operationalize them at scale across a lot of clusters, give us a shout and we can talk to you about Fairwinds Insights. I don't see any additional questions, so I want to thank you, Stevie, for presenting with me today. Always a good time. Thank you to everyone who joined today and gave us your time. I hope you all have a great rest of your week. Thanks.