Hello everyone. Hi and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Itay Shakury, I'm director of open source at Aqua Security, I'm also a Cloud Native Ambassador, and I'll be hosting today's show. Cloud Native Live is where we bring a new set of presenters every week to showcase how to work with cloud native technologies. They build things and break things, and they answer your questions every week on Wednesdays. This week we have Andy from Fairwinds to talk to us about Polaris and Goldilocks. Before we get to that, just a quick reminder that KubeCon is coming up. It's going to be both an in-person and virtual experience, so make sure to register in time, and now is the time. One more disclaimer: this is an official live stream of the CNCF and as such is subject to the CNCF Code of Conduct. So please don't add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of your fellow participants and presenters. So, hi Andy. Why don't you introduce yourself? Hi, I'm Andy. I'm director of R&D and Technology at Fairwinds. Fairwinds got its start as a managed service provider for Kubernetes, and we took the learnings from those years of managing lots and lots of clusters and built a lot of open source along the way to solve the problems we ran into. Then on top of that we built some SaaS. So we do all Kubernetes, all the time, and I get to tinker with all the fun stuff. Cool. So you've open sourced some of the tools that you built along the way, I guess. Yeah. The two tools I'm going to show today were really built out of a need for some additional tooling to help us in our journey running Kubernetes for all of our different customers. Polaris focuses on best-practice configuration of your application workloads, and Goldilocks came out of a need to help our clients set resource requests and limits properly on all of the deployments in their clusters. Yeah, both of them sound very helpful. How would you like to start? Would you like to begin with one of them, or how did you plan it? So what I've done today is set up a cluster, a pretty bare-bones managed Kubernetes cluster, and installed an app on it. It's a demo app that's out there, a multi-tiered app called Yelb; I'm not actually sure how it's supposed to be pronounced, but essentially it's just a basic voting app. So I wrote a little loop over here in the console to vote randomly. This app is running in my Kubernetes cluster. I got some YAML for deploying it from the repository that Yelb comes from, and it's a very, very bare-bones deployment. We have just a few different pods running here: a database, a Redis server, a front end, and a back end. What I'd like to do today is start with Polaris. I'm going to install Polaris in the cluster and look at the findings it has for this particular application, modeling what would happen if I had just deployed a brand new application into my cluster and how Polaris could help me improve its security posture and configuration. Sounds good. Why don't we do that? I just want to remind everyone that you can type questions in the chat. If you have anything, type it as you think of it, and I'll pick it up at some point and bring it up. Great. All right, so in order to install Polaris, I'm just going to use Helm.
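(For reference, a rough sketch of the install and port-forward steps walked through below; the chart repository and dashboard service name follow the Polaris docs and assume the Helm release is named polaris.)

```shell
# Add the Fairwinds stable Helm repository (one-time setup)
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm repo update

# Install (or upgrade) Polaris into its own namespace,
# creating the namespace if it doesn't already exist
helm upgrade --install polaris fairwinds-stable/polaris \
  --namespace polaris --create-namespace

# Port-forward to the dashboard service (K9s can do the same interactively)
kubectl port-forward --namespace polaris svc/polaris-dashboard 8080:80
```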
So I've actually already installed it, but I'm going to show how to install it. I'm going to install Polaris in the namespace polaris from the Fairwinds stable Helm repository. If I were doing this from scratch, I might also want to add the --create-namespace flag just so the namespace gets created. I've been doing too many other things today... helm upgrade --install. So this should install Polaris into the polaris namespace within our cluster. I'm going to pop over here and use K9s, because I really like the way it does port forwarding. So I'm in the polaris namespace here, and I'm going to look for services in this namespace. I see that we have a dashboard, so I'm going to port forward to it using K9s, just a port forward to the dashboard on localhost 8080. Then I'm going to go up here into my browser and we'll take a look at what that looks like. Maybe. Let's make sure it's running; shall we take a look at the pods in the polaris namespace? There's the problem: I can't spell localhost. Dangers of live demos. All right, localhost 8080. There we go. So we see the Polaris dashboard here. It gives us a rough score for our cluster, some numbers on how many checks are passing, how many are warnings, how many we consider dangerous, and then some basic information about the cluster. I'm really going to focus on the app namespace; I've deployed the Yelb app into the yelb namespace. You can look through all of the different findings in Polaris, but it will give you results for the entire cluster. When we filter down to just the namespace for our app, we see we're doing really poorly. We have an F. That's not a great score. Let's take a look at the different findings. Like I said, we have different deployments for the UI, the database, the app server, and Redis. Let's focus for now on the UI, because that's our front-facing portion. I see we have some dangerous things going on here and some warnings. The top one here, I'm going to tackle that first. Let's take a look at what it means. It says privilege escalation should not be allowed. If we click on the question mark here, I get a link to our docs where we see "privilege escalation allowed, danger: securityContext.allowPrivilegeEscalation is true." That is the default setting. If we go into the YAML that we used to deploy the app, which I have here in this folder, we can take a look at the UI YAML file, and we have just a bare-bones deployment here, I mean just the bare minimum we need to get this running. I can make things a little bit bigger. I see the question there. Yeah, no problem. Let me just let this take over for a moment; I'll have to switch back and forth, but that's all right. So we know we need to set securityContext.allowPrivilegeEscalation to false. Really we should also have a link back to the Kubernetes docs here. But maybe I don't know what this means, maybe I'm unclear on it, so I'm going to look for the docs. I'm going to find the Kubernetes documentation on configuring the securityContext. I'm going to make this a little bit bigger too. I tend to just scroll through until I find the YAML I'm looking for. This is probably it... hey, look at that: allowPrivilegeEscalation.
So this is going to prevent anything inside of our pod, or inside of our container specifically, because this is the container securityContext, from escalating its own privileges. Let's add that to our YAML file here. So we've got our deployment, spec, container, securityContext, allowPrivilegeEscalation: false. And I'm going to fix my YAML, because I just broke it. Did I put it in the wrong place? It goes here instead. There we go. All right, so we've got our container securityContext with allowPrivilegeEscalation: false. I'm going to save this, and then I'm going to apply it to the cluster. And hopefully we can then go into our Polaris dashboard, refresh it, and take a look at our UI. We see that our red X there has gone away, and we've just improved the security of our cluster a little bit by fixing some of the default configuration in that app. So I'm going to keep going with a couple of these. If we look, we'll probably see the same thing on all of them, so I'm going to take a quick look at my YAML here and just apply that to all of them. My goal today is to get a higher score in the Polaris dashboard for this namespace. So we started by installing Polaris just from Helm, we saw some issues, we fixed an issue, and we immediately saw the updated results. So Polaris is constantly watching for changes in the Kubernetes API and is always up to date, right? That is correct. So it's basically like an operator, a Kubernetes operator that enforces these security configurations. Is that an accurate way to put it? That is correct. Initially what it does is scan; Polaris also has an admission webhook that you can install to enforce these checks when workloads are applied to the cluster. And then it also has the ability to add custom checks. So we have all these built-in checks that we see here, but we're also able to add additional ones. How does one add those? I don't want to interrupt our flow, but could you just say in a few words what language I would use to specify checks of my own? Let's see. I actually haven't written a custom check in a little while, so let me go here. We can go to the Polaris documentation at polaris.docs.fairwinds.com and go to the custom checks area. We essentially write them in YAML, but I believe we're using JSON Schema under the hood for these checks. Cool. Thank you. Yeah, no problem. All right. So we've got our security context; we are no longer allowing privilege escalation, hopefully, everywhere. We've gotten rid of all of our dangerous checks, and we just went from an F to a D minus. That's great, I suppose. "Ds get degrees," I believe, was the saying when I was in college. And we can see that our security score actually has gone up a little bit, so that's great. That's some of the security things that you'll see.
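(For reference, a rough sketch of the container-level change applied above; the deployment layout and image tag are illustrative of the Yelb UI manifest rather than copied from the demo.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-ui
  namespace: yelb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yelb-ui
  template:
    metadata:
      labels:
        app: yelb-ui
    spec:
      containers:
        - name: yelb-ui
          image: mreferre/yelb-ui:0.7   # illustrative image and tag
          ports:
            - containerPort: 80
          securityContext:
            # Block the process (and anything it execs) from gaining more
            # privileges than its parent, e.g. via setuid binaries
            allowPrivilegeEscalation: false
```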
If we keep looking, we may see some more. Not being allowed to run as root is another common one. That's also set in the security context, but at the pod-level security context, and we probably want to disable running as root. In fact, for the CVE that was released last week, not having any containers run as root would not have mitigated it entirely, but it would have reduced the blast radius of that CVE we all had to deal with. So we can go ahead and add that in as well. That's in the... oh, we need runAsUser and runAsGroup for the pod, that's right. So we're going to modify the pod-level security context. That's in the pod spec, not underneath containers, and we add our security context there. This one is a little trickier to modify, because you can't just stop running as root: some containers, depending on how they were built, don't necessarily play nicely. So we're going to try this and see if it works. For the UI I'm not horribly worried about it; the database container and the Redis server might have a few more problems. First things first, let's make sure our app's still running. Looks like our database restarted and took all of its data with it, so our number of total votes is way down. Feel free to throw some load at this thing while we're on here; see if you can get one of these to win. The URL is yell.kepler.hillghost.com, and it's HTTP only. I wasn't able to get TLS working with this particular app while I was prepping for it. So let's go back to our dashboard and take a quick look at our UI. All right, so we've dropped our ability to run as root; notice that check has gone green for us. That's great. Let's see what time we have: 10:19. So I'm going to do one more of these before I jump into the efficiency side of things. Let's do one more security one: capabilities. This is an interesting one. So again, insecure capabilities; this links specifically to our internal list of insecure capabilities. These are the Linux kernel capabilities that your container has, and they're also covered in the Kubernetes documentation here. Capabilities... oh, that's not the link I wanted, sorry. Capabilities. There we go. All right, so in the container-level security context, for each container that we're running, we have a list of capabilities that we can use. In theory, for our UI container specifically, I don't think we'll need any; I haven't looked deeply into this app. But we have the ability to add capabilities. Primarily what we want is the ability to... let's take a look at this again. Does it show the drop here? It doesn't. I believe we can use the same list here: we can drop all, and then add back the ones we want. Let's try dropping all of them and apply that. Assuming I got my YAML correct... we did. And we'll take a look at our pods, and we have a CrashLoopBackOff. Not super surprised there. So capabilities is kind of like changing the user and group you're running as: depending on how your container is built and what it needs to do, you probably need some level of capabilities. My guess is we're going to need something related to networking so that we can actually run whatever it is we're running here. So let's take a look at the logs. Yep. We're running an nginx container, I see. So the Yelb UI container is obviously built on an nginx container and then serving up some files out of that, and nginx is going to need some of those capabilities to run. So let's take a look, and we're going to add... actually, I haven't done this in a little while, so let me look. Let's close these and take a look at the list of capabilities. I'm fairly certain we're going to need... well, let's just guess here. Feel free to throw it into the questions if you know exactly which capabilities nginx needs. That doesn't work, so we can move on from that.
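(A rough sketch of the pod-level securityContext plus the capabilities change being attempted here; the UID/GID values and the commented-out add list are illustrative assumptions, not values from the demo.)

```yaml
# Pod-level securityContext sits under spec.template.spec, alongside (not
# inside) the containers list; capabilities stay at the container level.
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000      # any non-zero UID the image can actually run as
        runAsGroup: 1000
      containers:
        - name: yelb-ui
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL        # drop everything, then add back only what's needed
              # add:
              #   - NET_BIND_SERVICE   # e.g. if the process must bind to port 80
```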
I like to run through this from a super clean perspective, because say I'm an ops person working on a team with developers that built a container and I need to help change the security configuration here. There are some things that are easy, low-hanging fruit, and then there are a lot of things that are more complex to change and more complex to update. Polaris does a great job of alerting you to potential issues there, but it still takes some effort to get these things working. Still crashing. I think it sounds like it's related to the user change, maybe not the capabilities change, do you think? I was actually just wondering that myself. I think you might be right, because it does say the user directive only makes sense if the process is running with superuser privileges, which it's not. Now we're getting an operation not permitted; it's trying to chown something. So I'm not going to spend a horrible amount of time attempting to get this working. I just wanted to show the level of complexity it can take to get some of these configurations locked down. It's not just a matter of "go set the thing, change your YAML." You really have to understand what's running inside your container and what capabilities it needs, and also build your container in such a way that it doesn't require root-level access, if you can. So let's put this back and get this running again. So that's a decent overview of some of the security checks we have. There are probably additional ones I haven't talked about, but those should be the most straightforward to get configured with your various applications. All right, let's just make sure we've got everything running here. That looks much better. All right, let's give this a little refresh. Unfortunately, if we refresh this page, we're going to see we've got our privilege escalation finding back, because I commented that back out. So let's talk about some of the reliability stuff next, because this is really one of my favorite areas to jump into: reliability and efficiency. We'll see a couple of issues here, specifically around memory limits, CPU limits, CPU requests, memory requests, and liveness and readiness probes. Again, it will link out to the documentation if you click on the question mark here. But these things, the liveness and readiness probes, the CPU requests, the memory requests, are really kind of the bare minimum for good reliability inside your Kubernetes cluster, especially if you're running multiple apps that need different amounts of resources. So this is really where Goldilocks comes into play. CPU and memory requests are great; I can say, set them. But the question then is, okay, what do I set them to? Maybe you can profile your app, run it for a little while. I could go in here and get an idea of how much CPU and memory each piece of my application is using right at this moment. Maybe I have some sort of monitoring hooked up and I can go look at historical graphs. But it can be a frustrating experience, especially across many apps, trying to figure out what to set these to. And so we set out a long while ago to try to make this at least a little bit easier, just move the needle a little, give people a tool that would make it possible to set those memory requests and limits in an easier way. And what that resulted in is a project we have called Goldilocks.
All of these projects are on GitHub in our FairwindsOps org. Goldilocks is a controller that manages vertical pod autoscaler objects in recommendation mode, and then aggregates the recommendations from those vertical pod autoscaler objects into a dashboard. The way this works is we install Goldilocks in our cluster. We'd do the same type of helm install that we did for Polaris: install Goldilocks in the goldilocks namespace from the Fairwinds stable repository, with create-namespace. I'm not going to run this, I already have it. If we take a look in the goldilocks namespace, we have two components: a controller and a dashboard. One of the prerequisites of installing Goldilocks is that we also have the vertical pod autoscaler installed. So if we look in our VPA namespace, I've already installed the vertical pod autoscaler; we have a chart for that, and it can be installed as a subchart of Goldilocks. Essentially I really only need the recommender portion. I'm not going to run the vertical pod autoscaler in automatic mode, where it changes your requests and limits; I'm just going to run it in recommendation mode. And then the last thing we have to do is label our namespaces. So if we take a look at our app namespace, yelb, we have this label, goldilocks.fairwinds.com/enabled=true. What this does is it tells Goldilocks to go and create a vertical pod autoscaler object for each of the deployments in that namespace. So we see these have been here for a few days; I built this a few days ago, so they've been collecting information over that time. The vertical pod autoscaler watches the resource usage of each container in your pods and creates a recommendation. So if we look at, say, the UI, we can look at the VPA object, and we see that in the status block there's a set of recommendations: a lower bound, a target, an uncapped target, and an upper bound. Currently these all look the same, because they're using the minimum that the vertical pod autoscaler is set to. It has a minimum target, but over time, if we had load on our application, we would see these numbers start to change. And so we can take a look at the Goldilocks dashboard and see those recommendations across all of our deployments and all of our containers in a nice dashboard, so let me pull that up. I'm going to pull up the Goldilocks dashboard; I've port-forwarded to it. We can list all of our namespaces, and we see all of the ones that are labeled and have VPA objects in them. We saw that the yelb namespace had that label on it, so I'm going to click into this namespace and see (it's a little bit too big, maybe) the various deployments within that namespace, and then each container within that deployment. So if we had multiple containers, we'd see those here. And we see Goldilocks is flagging the same issue that Polaris was, which is that our requests and limits aren't set, and it gives us a recommendation on how to set them. So if we install Goldilocks alongside our applications, we can get recommendations over time on how to set those. Another nice thing: you can hook the vertical pod autoscaler up to Prometheus to get more historical data incorporated into your recommendations.
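(A rough sketch of the two pieces just described: the namespace label that tells Goldilocks to create recommendation-only VPAs, and the shape of a VPA's status block. The numbers are made up for illustration.)

```yaml
# Label the app namespace so the Goldilocks controller creates a VPA
# (recommendation mode only) for each workload in it:
#   kubectl label namespace yelb goldilocks.fairwinds.com/enabled=true
#
# Roughly what the resulting VPA status block looks like once the
# recommender has observed some usage:
status:
  recommendation:
    containerRecommendations:
      - containerName: yelb-ui
        lowerBound:
          cpu: 15m
          memory: 64Mi
        target:
          cpu: 25m
          memory: 105Mi
        uncappedTarget:
          cpu: 25m
          memory: 105Mi
        upperBound:
          cpu: 100m
          memory: 256Mi
```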
So I'm going to go ahead and apply the recommendations that Goldilocks is making to my UI container. Let me take a look at the questions here; is there anything? Let's take a look. Let's go set our port forward back up... that's Redis; it was the UI we applied it to. So now we see that Goldilocks sees we have our resource requests and limits set to exactly what it recommends. So I'm going to talk a little bit about QoS now. If you're not familiar with quality of service classes, your QoS class is determined by the relationship between your requests and your limits. If your limits and requests are equal, you're in what's called the guaranteed QoS class, because you're guaranteeing that amount of resources to your container. So we show both burstable and guaranteed. Burstable is when your requests are lower than your limits, so that your workload can burst up to the limit. Those are actually defined down here, and we link out to the Kubernetes documentation where it talks about them. We use the lower bound and upper bound from the VPA recommendation to build the burstable QoS recommendation, and we use the target for both the request and the limit for the guaranteed QoS class.
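(To recap the rule just described, a sketch of the two recommendation styles as they would be pasted into a container's resources block; the numbers are illustrative.)

```yaml
# Guaranteed QoS: requests equal limits, both taken from the VPA target.
resources:
  requests:
    cpu: 25m
    memory: 105Mi
  limits:
    cpu: 25m
    memory: 105Mi

# Burstable QoS: requests from the lower bound, limits from the upper bound,
# so the container can burst above its request up to the limit.
# resources:
#   requests:
#     cpu: 15m
#     memory: 64Mi
#   limits:
#     cpu: 100m
#     memory: 256Mi
```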
So what I'd like to do is rerun my loop here, but I think I lost it; I could generate some more load on this. Let me grab the code for that real fast. I'm not a JavaScript expert, so I asked one of our lovely app developers to give me this. Are there any questions that I can answer for people? So there was one question that I'll maybe try to generalize, about Polaris specifically, I think; I'm not sure how it applies to Goldilocks. Does it follow some kind of standard for the specific set of rules that you chose to enforce? Or, to generalize further, how do you choose which rules and tests get into Polaris, how do you update them, and does it relate to any kind of compliance framework or standard? A great question. The current set of checks built into Polaris is not based on any particular standard. They're really a collection of things that we've seen as common best practices over the years. So we don't currently have anything that maps specifically to standards. We are talking about what we can do in that area; we just achieved our SOC 2 certification, so we're working on things related to standards like that. The other thing we do, in our commercial product, is that we started using kube-bench, which gives you the CIS benchmark, which is a nice thing to have. And then there's definitely potential for building custom checks that would align with a standard like that. But currently it's just best practices that we've developed over time. Okay. Another kind of random question about which terminal you use... I use Alacritty. It's a Rust terminal emulator. Cool. Yeah, it's out there, it's open source. So I'm trying to get some load running on these different pods so that we can see different recommendations in Goldilocks. The other thing we can do is tweak these a little bit: say we set our requests and limits way higher than we need. I'm not going to go that high, because I'm not sure how big these nodes are, but the original recommendation was much lower than this. Let's take a look. Goldilocks will not only tell you if you haven't set your requests and limits; it will also tell you if you have over-provisioned them. So we can go back to the dashboard and see that, hey, we've over-provisioned these. Maybe we allocated too many resources. Maybe we have an opportunity to save a little bit of money here and reduce the number of nodes that we're using. The other thing I want to look at is the DB, because it looks like it was under some load, and we also don't have requests and limits set there, so I'm going to go ahead and set them for the database as well. That's the wrong button and it's in my way, so I'm going to put it back and do this again. That nice copy-paste there makes it relatively easy if we want to set these everywhere. I did the DB and Redis; let's do the app server. There's a nice feature idea in the questions: a button that applies the recommendation for you. That would be cool, although in an infrastructure-as-code world it's not my favorite solution. One thing that we've been talking about doing, actually, is adding the ability to do Dependabot-style pull requests: you have issues in Polaris, and it goes and creates a pull request on your infrastructure as code, or your Helm chart, or whatever you have, to apply these settings. That would be super cool. Our issues are open on all these open source projects, so feel free to go make that request there and we'll see what we can do. Let's refresh our dashboard here and see what we've got. Great: green, green, green. Oh, well, I changed one to lower than what they recommend, so let's just get it all green, because green is a good color. So that's Goldilocks. Let's go generate a little bit more load on this. But that's all right, they're still healthy. Let's do every 10 milliseconds; let's see how hard we can hit this thing. Load generation is one of the most difficult things. And, nice thing: if we refresh our Polaris dashboard here, because we've set our resource requests and limits on all of our pods, we jump from a D minus to a C plus. Hooray for us. Yeah. And notice our efficiency score just went from zero, or whatever it was before, up to 100%, because really the number one thing about efficiency is getting your resource requests and limits set properly. If you've watched any of the stuff I've done recently, or you've talked to me, you probably know that I tend to harp on that a lot. It's one of the things I jump back to frequently, but I have noticed, running clusters for so many different clients, that so many problems can be solved by really knowing your resource requests and limits, setting those properly, and utilizing the horizontal pod autoscaler along with your cluster autoscaler. Just those few things can increase the stability of your Kubernetes deployments considerably. So we've done security, we've done efficiency. Let's talk about reliability. That's a security one, that's a security one... let's talk about liveness and readiness probes. Right now, you may have noticed that in my YAML files here I have no liveness or readiness probes whatsoever. Liveness and readiness probes are super important to reliability, because they essentially let you avoid routing traffic to your app when it's not ready, and they allow your pod to be terminated and brought back up when it's doing things it shouldn't. I don't know if you've noticed, but whenever I restart the pods I get a bunch of errors in the console here. I think they're further up; this is a different error here. But that's because we're still routing traffic to a pod that's shutting down, because there is no readiness probe configured, so traffic is always being routed to my pod.
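(Roughly the probes that get added to the UI container in the next step; the path, port, and timing values here are illustrative assumptions for an nginx-based UI serving on port 80.)

```yaml
# Container-level probes for yelb-ui: the kubelet restarts the container when
# the liveness probe fails, and the pod only receives Service traffic while
# the readiness probe is passing.
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
```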
So if we take a look at the Polaris check where it says a liveness probe should be configured, I'm going to hit the question mark here. Again, it takes me to readiness probe missing and liveness probe missing; it should probably link out to the Kubernetes docs here, so maybe I'll make a pull request for that later. Well, we can search the Kubernetes documentation for liveness probes, and we've got a nice document here on configuring liveness, readiness, and startup probes. Let's find an example that's HTTP-based, because we're running an HTTP server here. So in our container we want to configure a liveness probe like this, and then we'll modify it a little bit for this application. We're in the UI; we know it's listening on port 8080... sorry, port 80, based on the container port here. We don't have a health path that I'm aware of, and we don't need to send any custom headers. So there's our liveness probe: it's just going to be an HTTP GET on port 80. If we get a 200 back, it passes; if we don't get a 200 back, it doesn't. It's fairly straightforward. And then for the readiness probe, I think we can just copy and paste the same probe in. Let's try that out. Back to K9s and we'll get pods, and we'll see we have a container creating. So now it's running, but we're not routing traffic just yet, because the readiness probe hasn't passed. Then the readiness probe passes, we have the ready pod, and now we're ready to terminate the old one. So hopefully we've had time for any connections to move to the new pod once it was actually ready to start accepting connections. And we'll go back to Polaris. Hey, look at that, we're getting a B now. Good times, good times. We'll copy this probe and do the same for our app server. We'll have to change the port here, because it's listening on a different port. We'll apply that YAML and watch our pods here. There we go, waiting for it to become ready. It's possible this app doesn't respond on the root path, so we may have to change our probe slightly. Were you about to say something? Let's see, let's get back into it. And yeah, it doesn't respond directly on that path, so our liveness and readiness probe on that port is not going to work. But that's another one of those things, and another good example that building this stuff into your application from the beginning is much easier than trying to add it to an application that you aren't super familiar with up front. To answer the top question there: Goldilocks does not provide any security-related recommendations; that is entirely up to Polaris. And then another good question there that I didn't cover, and that I think is really worth covering, is exceptions, or exemptions, in Polaris. The question is: is it possible to configure Polaris to disable the run-as-root security context check for some pods? So essentially we want to say, this pod is supposed to run as root, or this pod has to run as root. We can add exemptions. You can exempt an entire deployment from all checks, but you can also exempt it from specific checks. So let's, for example, annotate our UI... or actually the app server deployment. We weren't able to configure its liveness and readiness probes; perhaps there's some reason we want that to be the case. We'll add an annotation to our app server deployment to exempt it from that check.
So under annotations, it's polaris.fairwinds.com/ plus the check name, livenessProbeMissing, for the exemption. So we'll do that, and let's also do the readiness probe one. Oh, this value is going to need to be a string. There we go. So we're going to take a look at our dashboard again. Oh, I lost my port forward. Of course we could put this behind an ingress, maybe front it with an OAuth proxy. But if we take a look at our app server, we see that the liveness probe finding is still here. The readiness one is not. Did I misspell something? That can happen. Perhaps a bug? I may have discovered a bug live on a CNCF live stream today. Let's look at that later. Well, we did drop the readiness probe issue from the list, so that is how you would do exemptions. Obviously, if we wanted our score to go straight to an A plus, we could just turn on all the exemptions and we'd get that. Which brings us to the top question there, which is how the score is calculated. I believe, and we may have changed this recently, that the score is essentially the percentage of passing checks versus failing checks. So that's how we get the 81%, and then we just assign a typical letter grade based on that percentage. There's a question there about any way to configure some sort of notification. That is a good question. I don't remember if we have that in the open source. We definitely have it in our SaaS product: we take the data from Polaris, send it to our SaaS product, and we can do notifications there. I don't think we do notifications from the open source project. All right. An additional feature of Polaris is that there is a CLI, and you can run it in CI/CD as well if you want. So if we had our YAML files here, we could run the Polaris CLI and do a Polaris audit. I believe by default it tries to connect to your cluster, but we can run polaris audit --audit-path and audit our YAML in place right here. So if you wanted to put a CI/CD check in place, you could use Polaris to audit your manifests and then write some automation to send a notification based on that. It would be relatively straightforward. We just output this nice JSON object here that you can parse to see all of the different failing checks in the different namespaces and what's going on with them. So we see, for example, that my ingress, as I mentioned earlier, doesn't have TLS configured; we see that as a result in our JSON object. So it would be relatively straightforward to build a pipeline notification for that using the CLI. Cool, that's cool.
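(A rough sketch of the exemption annotations and the CLI invocation described above; the annotation naming pattern, the check IDs, and the audit path are recalled from the Polaris docs and should be treated as assumptions.)

```yaml
# Exempt one workload from specific checks. Annotation keys follow the
# pattern polaris.fairwinds.com/<checkID>-exempt, and the value must be
# the string "true".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-appserver
  namespace: yelb
  annotations:
    polaris.fairwinds.com/livenessProbeMissing-exempt: "true"
    polaris.fairwinds.com/readinessProbeMissing-exempt: "true"

# CLI equivalent for CI/CD, auditing local YAML instead of the cluster
# (the path here is hypothetical):
#   polaris audit --audit-path ./yelb-manifests/
```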
I think now would be a good time to start summarizing as we near the end. Yeah, that'd be great. So a quick summary of what we did today: we took an app deployed with some very basic YAML files, with almost no overrides for the Kubernetes defaults, and we used Polaris to identify some of the security issues with those default deployment YAMLs. Then we used the recommendations from Polaris to fix some of those security settings: not running as root, not allowing privilege escalation, and looking at kernel capabilities. Then we used Goldilocks to look at resource recommendations and set the resource requests and limits for the deployments we had in the cluster, in order to work on their efficiency. And we also talked a little bit about reliability with liveness probes and readiness probes, and how Polaris can identify where those are missing in your applications as well. Yeah, and where can people go to get started or ask questions after this show? Oh yeah, that's a good question. So the GitHub repositories for all of our open source are under our GitHub organization, FairwindsOps, and then /polaris or /goldilocks. Feel free to file an issue or take a look at PRs on any of those. In addition to that, if you go to any of our open source repos, there's a link to our community Slack. You can click on that to get an invite and talk about these projects; there's a channel for each one of our open source projects. We also have an open source user group that we've been building recently that meets every so often, and there's a link to join that there as well. So feel free to reach out through any of those mediums. I'm also in the Kubernetes Slack, so if you want to hit me up there, I'm always available as well. All right, great. So with that, Andy, thank you so much. This was a really great introduction to Polaris and Goldilocks. And everyone else, thank you as well for joining, and see you next Wednesday on Cloud Native Live. Thank you. Thank you.