Hi, everyone. I'm Vishwa, and I'm a software engineer at StackRox. Thank you for coming to our talk. If you haven't heard of StackRox, we're a startup headquartered in Mountain View, California, and our mission is to make it possible for you to deploy your applications into Kubernetes clusters securely. To help you do that, we've built an industry-first Kubernetes-native security platform. But today, I'll be talking to you about KubeLinter, an open-source tool that we recently released. Now, KubeLinter was born out of a realization we had over the journey of building out our product: namely, that Kubernetes YAML files are hard. They have so many knobs and dials, and it can be hard just to get your app running and your containers talking to each other. But even once you get there, that doesn't mean you're done. Kubernetes defaults are geared towards making it easy to get up and running, not towards security or production readiness. While this makes sense from the point of view of lowering the barrier to adoption, it does mean that if you're not careful, you could be placing your sensitive systems at risk. So, just because you got your app running doesn't mean it's ready to run in production. Enter KubeLinter, our answer to this problem. KubeLinter is an open-source tool that runs static checks against Kubernetes YAML files and Helm charts. So, what does that mean? How does KubeLinter fit into the Kubernetes security picture? Well, let's say you have a cluster with a bunch of applications running on it. Our commercial product would run on this cluster alongside your applications and do security checks for you: it could audit your configurations and tell you, for example, that a certain app was running as privileged when it shouldn't be, or that another app was missing a required label.
But the key insight that we're trying to leverage with KubeLinter is that the applications actually deployed into many people's Kubernetes clusters, especially as organizations mature in their Kubernetes journey, are checked into a source control system like GitHub, Bitbucket, or GitLab, and then deployed using a CI/CD system. This means that the Helm charts and YAML files stored in an organization's GitHub repository represent what will later become the applications they see in their Kubernetes cluster. It also means that we can run many of the same checks that we run on live applications in the cluster on these application configurations before they're even deployed, thereby shifting security left. And this is where KubeLinter comes into the picture. That's a quick overview of what KubeLinter does. Now I'm going to switch tracks to a demo, but before I do, I just wanted to give you a high-level overview of what you're going to be seeing: first and foremost, how to run KubeLinter; how to use it to remediate violations that it finds; how to ignore specific violations, which you may need to do in some cases; how to configure custom checks; how to lint Helm charts, not just raw Kubernetes YAML files; and how to integrate KubeLinter into a CI system. All right, so let's get started with our demo. Just so we have a setup that we can test on, I have three YAML files here which I'm going to try KubeLinter out on. They're all open in the window on my right, so you can see there's a main-deployment.yaml which contains a deployment, there's a main-service.yaml which contains a service, and then there's an unmaintained-deployment.yaml which contains another deployment. Now I want to run KubeLinter on these files. How do I do it?
Well, I have KubeLinter installed already, so the way I invoke it is to run kube-linter lint and then pass in a list of files or a list of directories. In this case, I just want to lint the current directory, so I'm going to run kube-linter lint . and hit enter. And there it is. You can see: yikes, we found nine lint errors. And I should mention here that we haven't passed in any specific config. As you can see, we've just called kube-linter lint without any other arguments, so what's being run are the default checks that KubeLinter runs if you don't specify anything. For each error that it finds, it prints out a line, and that line tells you which file the error is in and which object the error is in. In this case, we know it's a deployment in namespace production, and the deployment's name is main. It tells you an error message: here, it's telling you that the container nginx in this deployment does not have a read-only root filesystem. And most importantly, it also tells you how to remediate. It says: to remediate, set readOnlyRootFilesystem to true in your container's securityContext. So I'm going to try doing that. Let me copy that key, paste it here, set it to true, and save the file. And now when I run kube-linter lint again, you can see that we only found eight lint errors this time, and the read-only root filesystem violation is gone. Now, the next violation here is "container nginx is privileged". Let's say in this case that I actually do want to run this container as privileged, so I don't know what to do about this right now, but I'll come back to it. Let me work down the list. We see a few more violations here: still in main-deployment, the container doesn't have a CPU request or a CPU limit set. In both cases they're unset, and it tells you to set your container's CPU requests and limits. And so we're going to do that.
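The remediation described in the demo amounts to adding a securityContext to the container spec. A minimal sketch of the fixed deployment (the container name and namespace are taken from the demo output; the rest of the spec is elided):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: main
  namespace: production
spec:
  template:
    spec:
      containers:
        - name: nginx
          securityContext:
            readOnlyRootFilesystem: true  # clears the read-only-root-filesystem check
```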
So here we've already set the memory limits, but we're also going to set the CPU limits, and we're going to set the CPU request to, let's say for now, 100 millicores. The next violation that it's found is in the service in main-service.yaml, and it's saying that no pods were found that match the service's labels. We can see here that the remediation is to make sure your service's selector correctly matches the labels on one of your deployments. And you can see that the selector says app: main with an 'e' at the end, whereas the deployment's pods are labeled app: main without an 'e'. So this is probably just a typo, and we have flagged it ahead of time, which is great. Now I fix it and save the file. And now that I've fixed a bunch of lint errors, let me rerun kube-linter to see if I've made progress. And yes, I have: now we only have five lint errors left, all the ones in main-service are gone, and everything in main-deployment is gone except the privileged check. Now we see in the unmaintained deployment that there's an environment variable with an AWS secret key, and KubeLinter tells you: don't use raw secrets in an environment variable; instead, either mount it as a file or use a secretKeyRef. But let's say that this is an unmaintained deployment, and fixing this is more involved than you currently have the time for. Since the deployment is not maintained anyway, you just want to ignore all checks for this deployment. Well, KubeLinter has a way to let you do that: you add an annotation to your object with a key of the form kube-linter.io/ignore-all, and the value is an explanation of why you're ignoring KubeLinter checks for this deployment. In this case, you would probably just say "this deployment is not maintained". And now when I run kube-linter again, you can see that all the violations that we were finding for the unmaintained deployment have disappeared.
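On the object, the ignore-all annotation described above would look roughly like this (a sketch based on the spoken demo; verify the exact annotation key against the KubeLinter documentation):

```yaml
metadata:
  annotations:
    # key as spoken in the demo; double-check against the KubeLinter docs
    kube-linter.io/ignore-all: "This deployment is not maintained"
```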
Now we're left with this last one, which I promised we'd come back to, which is that this container is running as privileged. In this case, we don't want to ignore all violations, because this is a deployment that I actually care about, and I want to make sure that the other checks are still run against it. But I do want to ignore just the privileged check. The way you do that in KubeLinter is to add an annotation very similar to the other one, where the key is kube-linter.io/ignore- followed by the name of the check, which in this case is privileged-container. And again, the value should be an explanation of why this container needs to run as privileged. So in this case, I'm going to say "this deployment needs privileges because it accesses the kernel", so that people know why you've ignored this rule. And now when I run kube-linter: no lint errors found. That's exactly what you want to see. It does feel a bit anticlimactic, though, so I'm going to give this the celebration that it deserves. Now the next thing I want to talk to you about is how you would configure custom checks. So far we've looked at using KubeLinter with only the built-in checks. Well, the way you configure custom checks is to pass in a config file, and here is our example config file, which lets you configure any number of custom checks. Let's say that the check I want to configure is that all objects I create must have an annotation called team, which tells you which team owns that specific object, so that if something goes wrong, we know who is to blame, or who needs to be contacted. The way I figure out how to do that is to go to KubeLinter's documentation and look over the list of check templates, and we have a check template for required annotations. What this template does is flag objects which don't carry at least one annotation matching the provided patterns, and it takes in two parameters: the key and the value of the required annotation.
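The per-check ignore annotation would then look something like this (again a sketch from the spoken demo; the exact key format may differ, so check the KubeLinter docs on ignoring violations):

```yaml
metadata:
  annotations:
    # key as spoken in the demo; verify the exact format in the docs
    kube-linter.io/ignore-privileged-container: "This deployment needs privileges because it accesses the kernel"
```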
And you can use this template to create your custom checks. So let's give it a name: in this case, it's going to be required-annotation-team. We need to pass it a template key, and in this case we know the template is required-annotation. Then you pass in the parameters. The only thing I care about here is the key of the annotation, and the parameter is called key, as you can see here, so I'm going to set it to team. And I'm going to add a remediation to my check, say, "Add the team annotation." Now when I run kube-linter, I want to pass it this config file, so here you go. Let me open the YAML files as well. And I run it, and, well, sad: no applause anymore. In this case, we see that main-service.yaml is failing our new check because it's missing this required annotation, and to remediate, we can add a team annotation. So let's do that: I go here and add annotations: team, and since I know this is owned by the same team, I just say main. And now I run kube-linter again, and we did it. So here we saw an example of how to configure a custom check with KubeLinter. KubeLinter also comes with a bunch of built-in checks, so you don't always need to write your own custom checks. Not all of the built-in checks are enabled by default, but you can find the full list in our documentation and add whichever ones you want to your config, and we're adding to these all the time. The next thing I wanted to demonstrate to you is how KubeLinter works with Helm charts. If I look at the repository of all the Helm charts in the github.com/helm/charts repo, I can take a random one: let's say I want to run KubeLinter on telegraf. Just to prove that it's a Helm chart, you can see that it has a Chart.yaml, a values.yaml, and a templates directory. And now when I run kube-linter lint and point it at telegraf, you can see that it renders the Helm templates on its own and finds violations, just like it did on the raw Kubernetes YAML files.
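Putting those pieces together, the config file for the custom check might look something like this (a sketch reconstructed from the demo; the field names follow KubeLinter's custom-check config format, so double-check them against the documentation):

```yaml
# config.yaml — hypothetical file name for the KubeLinter config
customChecks:
  - name: required-annotation-team
    template: required-annotation
    params:
      key: team   # the only parameter we care about; value is left unconstrained
    remediation: Add the team annotation.
```

You would then run something like kube-linter lint --config config.yaml . to apply it alongside the default checks.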
The next thing I wanted to demonstrate is how KubeLinter can be integrated into a CI system. Here I have a pull request in a repository that I own, where I've checked in the same YAML files that you were just looking at, and I have a GitHub Action that runs KubeLinter against this repo. You can see that it's failing, and you're able to see the exact same output that you saw when running it locally. Because in this case it found violations, 13 lint errors, it exited with exit code 1 and failed the build, so you know on your PR that you need to do something about your YAMLs before you can merge them. So I hope that gave you an overview of how KubeLinter works. If you're interested in trying it out, like I said, it's free and open source, and you can find it on our GitHub. Please do try it out, let us know what you think, and if it works well for you, do consider giving it a star on GitHub. We look forward to hearing from you, and if you have more questions or want to chat with us and you're at KubeCon, visit the StackRox booth in the Platinum Hall to learn more about KubeLinter and about StackRox. Thank you so much for listening, and have a wonderful day.
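A CI integration like the one in the demo can be sketched as a GitHub Actions workflow; since kube-linter exits non-zero when it finds violations, the step below is enough to fail the build on a bad PR. The action name, inputs, and paths here are assumptions rather than something shown in the talk, so verify them against the KubeLinter docs:

```yaml
# .github/workflows/kube-linter.yml — hypothetical workflow file
name: kube-linter
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Assumed name of StackRox's kube-linter GitHub Action; check the docs.
      - uses: stackrox/kube-linter-action@v1
        with:
          directory: yamls   # hypothetical path to the checked-in YAML files
```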