All right, I guess we're live now. So hello, everyone, and thank you for joining. Welcome to Cloud Native Live, where we dive into the code behind the cloud native. I'm Itay Shakury, director of open source at Aqua Security. I'm also a Cloud Native Ambassador, and I'm joining as co-host for today's show, so I'm happy to be here. Cloud Native Live is our weekly show: every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. This happens every Wednesday at eight Pacific, and today we have our friends from StackRox here to talk about KubeLinter. Just a quick reminder to join us at KubeCon + CloudNativeCon Europe early next month to hear the latest from the cloud native community. And a quick admin note before we hand it over: this is an official live stream of the CNCF and as such is subject to the CNCF code of conduct. Please don't add anything to the chat or questions that would be in violation of that code of conduct. In simple words, just be respectful to everyone, fellow participants and presenters alike. All right, so with that I'll hand it over to Vishwa.

Hey everyone, my name is Vishwa, and thanks for coming, whether it's early in the morning or later in the day where you are, to attend this talk. I'm an engineer at StackRox, or what was formerly StackRox; it feels funny to say that, because about two months ago StackRox was acquired by Red Hat, and now our product and our team are joining the Red Hat portfolio of products and companies. I've been an engineer at StackRox for over three years, and fairly recently I've been working on an open source project called KubeLinter, which is what I'll be talking about today.
So today's talk is going to be a very hands-on demo of KubeLinter. I'm going to show you how you can use it to find errors in your files and fix them, go through that whole process, and also go over where you can find more resources, how you can contribute to KubeLinter if you're interested, and how to get in touch if you'd like to find out more. I'm definitely going to keep it informal, so please feel free to ask your questions in the chat at any time; we'll be monitoring them, and we'll have dedicated time for questions at the end, but if something doesn't make sense, you don't have to wait for the end either. Before I jump into the demo itself, a very quick overview. KubeLinter is, as the name probably suggests, a linter for Kubernetes, specifically for Kubernetes configurations, which are typically in the form of YAML files. The real problem we're trying to solve, and it's one that we had internally at StackRox, is that Kubernetes configurations are, first, complicated: there are a lot of fields you can fill in in the YAML file, and most of them are optional, so you don't know what's required or what you can get away without. And second, a lot of the defaults are not secure. There are a couple of reasons for that. One is that Kubernetes initially did not have a lot of security features, so when they were added, they had to default to off, because otherwise they would break backward compatibility. Another reason is that Kubernetes is complicated: part of the reason for its success is the level of flexibility it offers, and flexibility always comes with some complexity. The challenge the Kubernetes maintainers had is that they want to make it possible for people to actually get something up and running with the tool, and that means some of the defaults are geared towards that and not necessarily towards security.
Because, as we all know, there's generally a trade-off between something that's more secure and something that's quicker to get up and running. So those are the two big problems we tried to solve with KubeLinter, and the way we tried to do that is to make it so that if you run KubeLinter with its defaults, it will complain if your configuration is set up in a way that's not secure. Then, when you run KubeLinter against your YAML files, you can see the errors it prints out and update your configurations to be more secure. That's a very high-level overview of the tool itself. You can see the link to the tool in the banner on your screen, and feel free to check it out, but I'll definitely be diving into the repo later on as well. So with that preamble out of the way, let me get started with the demo. Just give me a second to share my screen and make sure I'm sharing the right one.

Good. I'm Paolo, and I'm co-hosting here today with Itay. Just a few questions before you start your demo. I looked at the project; it's open source, right, and the idea is to check the configuration. It's an amazing project, very good, and maybe you can explain a little bit for us: it was written in Go, right? I read the project's site, and it's a little bit similar to kubectl. So what was the reasoning behind the architecture you chose to create this kind of client tool?

Yeah, are you referring to the decision to build it in Go, as a client similar to kubectl? Is that the question? Okay. So basically, the idea there is that it was honestly just what was easiest for us. Kubernetes itself is written in Go, so by writing in Go we get to take advantage of the libraries that Kubernetes offers; the biggest heavy lifting in the KubeLinter code comes from the Kubernetes code itself, which we use as a library.
And the second thing is that by building a similar kind of tool, we also get a very similar-feeling client, one that's very familiar to people who are in this ecosystem and are used to using tools like kubectl. So it gives people that familiar feeling: it's just a static binary, they download it, it works, and they don't have to install anything in their environment. That's the idea here.

Great, great. So we can say we have one more tool for the SRE and developer toolbox, right? Like kubectl, you have KubeLinter to use during your daily job, checking the code, checking the compliance of the code, et cetera. So it's one more tool inside the toolbox for us, right?

That's right, yeah.

Great, great, thank you.

Cool, thanks Paolo. Any more questions, or should I get started with the demo?

Yeah, let's see the demo. And we have a comment in the chat that some people are actually following along with your steps, so they're requesting that you go slowly, or at least at a reasonable pace, so that people can try it at home.

Yeah, awesome, happy to hear it, and we'll definitely keep that in mind. Cool, so I am going to share my screen, and Itay and Paolo, in case I'm sharing the wrong screen or not sharing correctly, please let me know; otherwise I will assume it's working as intended.

Cool, so here we are, and like I said, this is going to be a demo of the capabilities of KubeLinter. What I have here is a YAML file that contains specifications for a few different Kubernetes objects. We have a deployment, which is called non-compliant in this case because, as you'll see, this deployment does not comply with a lot of the security best practices that KubeLinter checks by default. There's another deployment defined in this file which is compliant, so as you'll see, it will have fewer issues. And then there's a service account.
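For those following along at home, a file with the shape just described might look roughly like this. This is a hedged sketch, not the exact file from the demo: the image, namespace, and key names are assumptions, and the compliant deployment is omitted for brevity.

```yaml
# myapp.yaml -- a sketch of the demo file: a deployment that trips
# several of KubeLinter's default checks, plus a service account
apiVersion: apps/v1
kind: Deployment
metadata:
  name: non-compliant
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: non-compliant
  template:
    metadata:
      labels:
        app: non-compliant
    spec:
      serviceAccountName: non-existent    # stale reference, flagged later in the demo
      containers:
        - name: app
          image: nginx
          env:
            - name: AWS_SECRET_ACCESS_KEY # secret inlined in an env var: flagged
              value: "not-a-real-secret"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: my-namespace
```

Running `kube-linter lint myapp.yaml` against a file like this would surface errors similar to the ones walked through below.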
So KubeLinter is going to lint this YAML file. Now, how does KubeLinter itself work? I'll go over installation and such later, but it's extremely straightforward, and I obviously have KubeLinter already on this system. You can see that I'm able to run it, and it has a few subcommands and things like that. The most important KubeLinter command that you're going to use day-to-day is kube-linter lint, which lints Kubernetes YAML files and Helm charts. So in this directory I have this myapp.yaml file, which is the same one you can see on the right side of my screen, and when I run kube-linter lint and pass it the name of this file, it runs a bunch of checks against it and outputs the results. When I run it, you can see it's found nine lint errors, and there's a list of the errors it found. What I'm going to do now is go over these errors. The scenario here is: you're a developer trying to configure your deployment YAML files, and typically (this is how I do it, and I'm guessing this is how most people do it) you just copy something that you know works and change whatever you need until you have your specific deployment. Then, when you run KubeLinter, it can tell you if there are things you're doing that are not security best practices. So for example, here we see it's telling you that this object, a deployment in my namespace called non-compliant, is mounting an environment variable as a secret. You can see that here (this is obviously not a real secret key, I just put a random string in there) we've used an AWS secret key and put it directly in the container's env. KubeLinter's recommendation is to not do that, but to mount the secret as a file instead.
And the reason for that is that environment variables stored this way are much more exposed: anyone who can see your deployment configurations, which are generally not very locked down, will be able to see this key, whereas the configurations for Secrets in Kubernetes are much more locked down. Obviously this one is harder to fix, because the ideal fix would be to create a Secret and then reference that Secret here; in the interest of time I'm not going to do that right now, I'm just going to comment it out, but that would be the right way to fix this if this were not just a demo. Another error we're seeing is that the container does not have a read-only root filesystem, and it says the fix is to set readOnlyRootFilesystem to true in your container's securityContext. That's definitely something we can do, and the way to fix it is to add a securityContext and set readOnlyRootFilesystem to true. So now I've looked at the first two errors and made changes in my YAML file so they're no longer errors, and when I rerun kube-linter you can see it now only finds seven lint errors; the first two are gone. That's the high-level idea of the way we expect this tool to work. Now I'll go over a couple more of these, because I think it might be interesting.

Maybe we can take a quick pause and address some of the questions in the chat, do you mind? Just so we don't accumulate too many questions. One question was about whether we can run kube-linter against a directory and not a single file.

Yeah, absolutely. If I had run kube-linter lint . instead of kube-linter lint with a file name, it would have run on the directory, and it works recursively as well: if you pass it a directory, it will go through all the subdirectories and fetch the files.
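To recap the two fixes from just before the Q&A break, they might look like this in the container spec. This is a sketch under assumed names: `aws-creds` and the key name are hypothetical, and referencing a real Secret is the proper fix rather than commenting the env var out as was done in the demo.

```yaml
containers:
  - name: app
    image: nginx
    env:
      - name: AWS_SECRET_ACCESS_KEY
        valueFrom:
          secretKeyRef:           # reference a real Secret object instead of inlining the value
            name: aws-creds       # assumed Secret name
            key: secret-access-key
    securityContext:
      readOnlyRootFilesystem: true   # the second fix from the demo
```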
Great. Another question was about integration in the pipeline, I'm assuming a CI/CD pipeline, so that you don't need to manually execute the CLI and it lints automatically.

Yeah, exactly, that's definitely supported. The way the tool works, it returns a non-zero exit code if there are any errors, so when you run it in CI, your CI build will fail. That's something I'm going to touch on in the demo once I get through the first part.

All right, so we'll get to see that even. Another question came in about disabling some rules.

Yep, absolutely. There are a couple of different ways to do that, and I'm going to go over them in the rest of the talk, but yeah, that's absolutely something we support.

Great. Actually, there's one that just came in: does KubeLinter authenticate to the API server using the same kubeconfig as kubectl?

No, it does not. Right now KubeLinter is entirely static, so we don't need to talk to the Kubernetes API: we just look at the files, and we don't care about the actual cluster. That's how we're architected right now. Having said that, we do have plans for this in the future. There are some things we could do to help the user know, for example, whether their YAML files work with the version of Kubernetes they're running, and for that we could infer the Kubernetes version from the kubeconfig, but we haven't implemented that yet.

All right, that's actually pretty useful. Sorry for interrupting; I think we can carry on with the demo, and I'll pile up the questions and maybe do another break in a few more minutes.

Sounds good, yeah. Please keep your questions coming, I definitely want to be able to get to them. So let me go back to sharing my screen.
All right, so assuming you can see my screen: like I was saying, the error we're getting here is that the service account non-existent is not found. This is an interesting one, and it has definitely happened to me: typically in a deployment you reference a service account, but maybe you rename your service account, and so the reference in your deployment is stale. In real life, you'd only find out when you actually try to deploy, and see that it isn't working because the service account doesn't exist, so you're not able to do the API access you want. But KubeLinter, since it has this global view of all your files and all the objects, sees that there's no service account called non-existent and asks: are you sure this is correct? And sure enough, our service account here is called my-service-account, so if I change this to my-service-account and rerun KubeLinter, you can see there are now only six errors where there were seven, and that first error is gone. So again, we've managed to work through that one as well.

So I have a question, because we just said that KubeLinter doesn't interact with the API server, and now we're saying it can validate whether a referenced service account exists. That's only against the YAML files that were linted, not the actual cluster, right?

That's right, yeah. And I'll go over this a little more later, but basically the recommendations and defaults of KubeLinter assume that you're pointing it at a self-contained, comprehensive set of YAML files, and that you keep your deployment, service account, and so on together, say in the same directory or the same Helm chart. It then crawls all of them and builds a global view of the set of related objects you're deploying.
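The service account fix described above is just a one-line change in the deployment's pod spec; this sketch omits the rest of the deployment for brevity:

```yaml
# Point the pod spec at a ServiceAccount that actually exists
# somewhere in the linted files
spec:
  template:
    spec:
      serviceAccountName: my-service-account   # was: non-existent
```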
Having said that, some of our users just use it one file at a time, and they may want to disable checks like the service account one, because maybe the service account is already in the cluster and they know that. But to go back to your point, this is another place where, if we had kubeconfig access, maybe we could make this more seamless as well; right now, as I said, we don't do that.

All right, thanks.

Cool. And then there are a few other errors here which I'll go over very quickly, because I don't want to use up all the time; there's a bunch here. Here we're seeing that this object is not set to run as non-root, which is another good security practice. Again, I should mention that with settings like runAsNonRoot and readOnlyRootFilesystem, you cannot change them blindly; you need to be sure your app doesn't depend on them. Most of the time, apps don't really depend on running as root, so that one should be safe. With readOnlyRootFilesystem, you need to make sure your code takes it into account, for example by only writing files to a well-known directory like /tmp and then making it so only /tmp is writable. I just want to throw out there that these are security settings that constrain what your container can do. That's a good thing for security, but you need to make sure your code is compliant with them as well, or you lose functionality. The other errors here are about unset CPU requirements and memory requirements: the practice we encourage by default is to set your containers' CPU and memory requests and limits. So I'm going to fix those real quick, and lucky for me, I have a compliant container that already has all of this, so I'm just going to copy what it has in its configuration.
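Copied into the non-compliant container, the settings just discussed might look like this. The resource numbers are placeholders, and, as noted below, real values need measurement:

```yaml
containers:
  - name: app
    image: nginx
    securityContext:
      runAsNonRoot: true      # safe for most apps, but verify yours doesn't need root
    resources:
      requests:               # placeholder values; measure your actual workload
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
```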
And again, these are things that in practice are going to take longer, because you don't actually know offhand what the CPU requests and limits should be; that may require you to do some measurement, or have some understanding of how your container's workloads are expected to behave, and in our experience that takes its own time as well. So now that I've done that, if I run KubeLinter, you can see all the other errors are gone, because I fixed all of them, and now we have just one error left. This one is an interesting one. It's in the compliant deployment, and the check is basically saying that the container nginx is privileged. The reason this is interesting is that I actually want to run this container as privileged; like I was saying, maybe some containers need privileges. So let's assume that, for whatever reason, I want to run this as privileged. Now, how do you do that in KubeLinter? You have two options. One is to disable the check itself, which means you don't care about running things as privileged at all, and we support that via an exclude flag: you can exclude checks by name, and you can obviously exclude multiple checks. This is a check that runs by default, so you need to explicitly exclude it. (Sorry, I seem to be struggling to spell when the word wraps around.) When I run with this check excluded, and you get the name of the check from the output, you can see it says no lint errors found. And that's because I've excluded the check, so it's not even checking whether anything is privileged.
But the other way to exclude, and this may be what's more relevant for a check like privileged, is when you generally want to run the check but want some way of denoting exceptions. You want to say: I generally don't want to run things as privileged, but this specific deployment needs privileges, and I'm okay with just this one needing them. The way we do that in KubeLinter is via an annotation on the deployment itself. The annotation is structured like this: you have a key, which is ignore-check.kube-linter.io/ followed by the name of the check, which again I'll copy from the output, and then for the value you just say true. (Oh, sorry, I think I missed a dash.) When you do that, you can see the error goes away, and that's because, although we are still running the privileged check (we haven't excluded it), it's ignored just for this deployment. So that's how you would deal with exceptions in cases like that. You can ignore any number of checks, and there's also a way to ignore all checks for a specific deployment, in case you want to declare bankruptcy on a specific object but still run KubeLinter on everything else. The next thing I want to go over is the configuration file. KubeLinter supports a config file with a list of things you can configure. You can see we support things like include and exclude in the configuration file itself, so you can include some checks and exclude others. You can also set doNotAutoAddDefaults, and if you set that to true, the default checks are not added automatically. As you saw, we try to make a lot of checks run by default when you just run kube-linter lint with no arguments, so that, again, it's secure by default.
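Before moving on to the config file, the annotation-based exception described above might be sketched like this. The check name `privileged-container` is an assumption, and per the kube-linter docs the annotation value is typically a free-form reason (the demo simply used "true"):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: compliant
  annotations:
    # the check stays enabled globally but is ignored for this one object
    ignore-check.kube-linter.io/privileged-container: "nginx needs privileged mode here"
```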
But if you don't want that behavior and you just want to explicitly include the checks you care about, you can do it here. We've made it so that everything in the config file under checks can also be configured directly as command line arguments: you can do --exclude like we did, you can do --do-not-auto-add-defaults, and all of that works. And if you specify something both in a config file and as a CLI flag, we give precedence to what you pass on the command line over what's in the config file. So we definitely try to be flexible with configuration there. The other thing I wanted to go over is custom checks, which KubeLinter does support. Here's an example of a custom check: you need to pass the name of a template, and each template is basically a piece of Go code that does something. The template in this case is called required-annotation. The way you'd figure out the list of templates is to go to the KubeLinter documentation, where you can see the list of templates we support, and this specific one is called required-annotation. Basically, what it does is flag objects that are not carrying at least one annotation matching the provided patterns. So you're saying that all my objects are required to have this annotation. The way it works is that you specify parameters to the check: a key, and optionally a value as well. What we've done here is define a required-annotation check whose key parameter is team, so in practice we're saying that all configured objects need to have a team annotation. Maybe that's because, when I'm actually running this, I want to know which team owns which object.
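Putting the pieces just described together, a config file might look roughly like this. It's a hedged sketch: the check names and the custom check's name are assumptions based on the kube-linter documentation, not the exact file from the demo.

```yaml
# .kube-linter.yaml -- a sketch of the config file discussed above
checks:
  doNotAutoAddDefaults: false   # set true to opt out of the default check set
  include:
    - unset-cpu-requirements    # explicitly include a check
  exclude:
    - privileged-container      # disable a default check entirely
customChecks:
  - name: team-annotation-required
    template: required-annotation   # built-in template implemented in Go
    params:
      key: team                     # every object must carry a "team" annotation
```

The equivalent of `include`, `exclude`, and `doNotAutoAddDefaults` can also be passed as command line flags, which take precedence over the config file.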
The way I would run KubeLinter with this is to pass --config, and you can see I'm just pointing at the file I just opened. And you can see this is an error now: for the non-compliant app, it says no annotation found matching team=<any>, meaning the key is team and the value can be anything, and it doesn't have that annotation. Whereas here we do have that annotation, and for the service account we have it as well. And if I copy this annotation and stick it into this deployment: no lint errors found. So that's how you can do your custom checks. This is a particularly simple example, but as you can see, we support a lot in templates, and you can go over the documentation to figure out what's supported; we're adding more of these all the time.

If I'm understanding correctly, if I'm adding my own rule, the business logic I want to check is in the template, and I can reference that template from the config file and maybe pass some parameters to it in order to start enforcing it.

Correct.

So is it possible to also add my own templates?

Yeah, adding your own templates requires a code update: you need to send a PR for that, and basically update the KubeLinter version. The templates right now are Go code bundled into KubeLinter, and what we found is that the flexibility this gives covers maybe 80 to 90 percent of the use cases, so that's where we went, for the simplicity it enables. What we're exploring is how to cover the remaining 10 percent of cases without requiring users to manually update the code, and we're investigating options there. Maybe one of our templates could just support OPA, right?
We could allow people to write Rego policies that can be arbitrary, and make that a template; then, for advanced users who have a very specific thing they want, they can just use Rego, but if it's something more common and supported by KubeLinter, they can use our built-in templates. That's the approach we're thinking of taking, but as of now we haven't implemented it yet. Does that make sense?

Yeah, it makes complete sense, actually. As you were explaining, I was wondering what your take on OPA is, because a lot of similar approaches to shift-left infrastructure-as-code scanning are very prevalent with Rego and OPA technologies. So yeah, it rings a bell, and it makes complete sense if you're going to support, or planning to support, OPA as well.

Yeah, I think what we found, at least in our internal use, is that there's a lot of value in simple configuration, and that's what we focused on here. But tools like OPA are extremely useful because they're so expressive: if you have a very specific thing you want, you can do it in OPA, no matter what it is. So we think the right way is to give users a simple configuration mechanism for the 80 percent of cases, so you can do those quickly, and then offer a way to use something more expressive for the more complicated cases.

Yeah, there's definitely a learning curve for writing Rego policies; I hear that myself as well. And just a comment: the separation between the templates and their instantiations, or their usage, also rings a bell. From my experience in the OPA world, something similar exists in Conftest and Gatekeeper: Gatekeeper has the ConstraintTemplate.
So maybe some of the listeners are familiar with that, and to me it sounds like exactly the same concept, which is a great concept: you define the template once and then reuse it multiple times. It helped me understand what you're after here, so maybe it will help the listeners as well.

Yeah, thanks. I actually haven't used OPA, Gatekeeper, or Conftest in that much detail beyond just trying them out, but now that you mention it, I think that makes sense. Yeah, thanks.

It's now a good time to also review some questions in the chat. We have a question about supporting CRDs, custom resource definitions.

Yeah, that's something we don't do right now; again, it would need code changes in our current architecture. But there's an open issue about it on our GitHub that had quite a few thumbs-ups the last time I checked, and it's something we're trying to figure out. The challenge is keeping the simplicity of configuration we have while also giving people a good way to plug in their CRDs, and we're still working that out, but right now, no, you cannot. We do support OpenShift, though; not every CRD in OpenShift, but we support anything that's an OpenShift type, like a DeploymentConfig in OpenShift, for example, even though that's not the built-in core Deployment. But arbitrary CRDs, not right now.

Thanks. We also had another question about integrating in the pipeline, but you already mentioned that you're going to address that in the demo. We had an earlier question, a more general one: now that Red Hat has acquired StackRox, what are the plans regarding the open source nature of the project, or its governance?

Yeah, so KubeLinter is going to continue to stand on its own. It's already open source under Apache 2.0, and that's not going to change.
And the additional thing that's going to happen after the Red Hat acquisition, and this is something Red Hat touched on in the press release announcing it, is that they also intend to open source the rest of our product at StackRox. Our commercial product was proprietary and closed source, but Red Hat wants to open source that as well. It's not open source yet, but that's an intent they'll be working towards.

Great news. So if anyone was worried about how the acquisition would affect things: it will affect them for the better, open sourcing more of the product. There was another question about scanning RBAC (role-based access control) resources. Is that something you currently do or plan to do?

No, it's not something we currently do, but it's something we're planning. One thing we're struggling with here is that RBAC is a more complicated kind of configuration; as anyone who has worked with Kubernetes RBAC probably knows, there are all these links you have to keep track of. So we're trying to see whether it makes sense to have it in a tool like KubeLinter, or whether it requires its own tool with a more first-class understanding, and there are a few of those that I've seen. That's why we've hesitated to go too deep into RBAC. Similar things apply to network policies: technically it's just a YAML file, but it's its own beast in terms of configuration surface, and we haven't gone there either. Those are things we're thinking about, but I think they'll probably be lower priority than some of the earlier things we discussed, like CRDs, where we're seeing more user demand.

Yeah, another question that I think you touched on, but maybe you can clarify: can I specify one container spec as privileged: true and another as false?
Can it be checked at the container-spec level? So maybe reiterate the part about selectively applying the checks.

Yeah, so right now the selective application is at the level of the top-level object, so at the level of the deployment. This is a very good question: if you ignore the check, that ignore propagates to all the container specs in the deployment, and not just the one that you want to be privileged. So that is a valid question, and a gap in our current approach. One solution we've been thinking of, which we've been hesitant to implement to avoid having too many ways of doing the same thing, is a comment-based ignore, which we've seen in a lot of linters: in a code linter, you can just add a comment above the line saying to ignore this check. With a comment-based ignore mechanism, maybe you could ignore just that line, which would keep the ignore as localized as possible, but we haven't implemented that.

All right, thanks. I think we can resume the demo.

Cool, thanks. Let me go back to sharing my screen. Assuming you can see my screen?

Not yet. Just a second. Yes, now we can.

Okay, so that was all I wanted to show in terms of the hands-on demo. Now, in the next few minutes, first I'm just going to go over use in CI/CD, since I know there were questions about that too, and then I'll go over our GitHub and docs and talk about how you can get involved or learn more. So basically, using KubeLinter in CI is very simple, because it's a command line tool that users download and it works out of the box: it's a statically linked Go binary with no other dependencies.
You just need to make sure you get the right binary for your operating system. And of course, if you have the Go toolchain, it's a simple go install, so you don't even need to download the binary yourself. Then you just run it on your files. So here I have an example PR where I have a few YAML files in a directory, and I'm running KubeLinter against them in CI. You can see that it just runs `kube-linter lint .`, where the dot is the root of the Git repo. I think someone had a question earlier about directories: by default, if you pass a directory, we just crawl it for YAML files, group them together based on the directory they're in, and lint them. And you can see that the build has failed. I ran this last night, in advance of the demo, so that it was ready for you all. You can see the output, the same output you were seeing on my terminal earlier (maybe I should zoom in, actually), and at the bottom it tells you how many errors were found. Over here it shows an exit code of one, which means your build is going to fail, and when you look at the PR itself, it's going to be red, so you know you have to do something. So that's how you do it in a GitHub Action, and there's nothing special about GitHub here: you can do the same thing in Jenkins, CircleCI, Travis, whatever your tool of choice is for CI. For GitHub Actions specifically, though, we do have a native GitHub Action called stackrox/kube-linter-action. This is also a public repo, and you can see here an example of how to use it.
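The manual install-and-lint flow just described can be sketched as a CI job. This is a hedged example in GitHub Actions syntax (the Go module path is the one the KubeLinter README documents, but verify it against the current docs; the two `run` commands work the same way in Jenkins, CircleCI, or Travis):

```yaml
# Sketch of a manual-install CI job; action versions are illustrative.
name: lint-manifests
on: [pull_request]
jobs:
  kube-linter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
      - name: Install kube-linter
        run: go install golang.stackrox.io/kube-linter/cmd/kube-linter@latest
      - name: Lint all YAML under the repo root
        # A non-zero exit code on any finding marks the build red, as in the demo.
        run: kube-linter lint .
```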
So if you're using GitHub Actions, you can literally just copy this into your .github/workflows directory, and then you have KubeLinter running. And of course you can specify your own config file, which you may want to, as we discussed, but generally speaking it will just work and be very easy to drop in. So that's how you use it in CI/CD. The last thing I want to go over is our GitHub repo. I think the link has been on the stream, so I'm sure you've all seen it, but it has all the information you need to get started: a lot of the things I covered are written down here, how to install it, and so on. One thing I wanted to call out is that we have a Slack workspace, and you can see the link if you scroll down to the Community section of the README. That's a great way to get in touch with us, as well as the KubeLinter community, so feel free to join and send us a message; we generally try to be pretty responsive there. We also have a docs site, linked from here, which I think I showed you earlier. We know from experience that with open source projects, documentation is one of the biggest barriers for users, so we've tried to invest in making our documentation as comprehensive as possible. That doesn't mean there are no gaps or nothing to improve, there definitely are, but hopefully the documentation has enough for you to figure out how to use the tool and configure it to your needs. So I definitely encourage you to check it out at docs.kubelinter.io. And in case you have issues, you find a bug, or you have a feature request, definitely feel free to file a GitHub issue; we're very encouraging of those.
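For reference, a hedged sketch of the drop-in workflow mentioned above. The input names `directory` and `config` are assumptions based on the kube-linter-action README, so double-check them, and the version you pin, against the action's repo:

```yaml
# .github/workflows/kube-linter.yml — sketch using the native action.
name: kube-linter
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: stackrox/kube-linter-action@v1
        with:
          directory: deploy            # where your YAML manifests live
          config: .kube-linter.yaml    # optional custom config file
```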
And there's a template that will pop up when you click New Issue; just fill it out and we'll be sure to get back to you as soon as we can. We also maintain our roadmap itself on GitHub. If you go to the project's roadmap, you can see the things we have to do, and all of them are things that, if you're ever interested in contributing, you can pick up: find something you're interested in doing, maybe comment on it and ask whether your approach makes sense, and if you send a PR, we're more than happy to take it. So that's how you can contribute, and of course, if you want to discuss in more detail, you can join the Slack workspace as well. And that's all I had on how to find things and how to get in touch with us. So feel free to file issues and PRs, and get in touch, and if you want to give us a star on GitHub, we definitely wouldn't complain, as that helps increase the visibility of the project. I think that's all I had in terms of how to navigate our repo and how you can get involved, so I'm looking forward to hearing from you all, and I think the rest of the time will be just for questions.

Yeah, great demo and presentation, Vishwa, thank you. Let's see if we have something interesting in the chat. Actually, something I thought about while we were speaking: we talked about linting Kubernetes YAML files manually using the CLI, which would be at development time, before I push changes, and then we also talked about doing the same thing during the pipeline. Have you thought about shifting it right a little bit, into the cluster itself, and linting YAML or workload definitions from the cluster, or doing that continuously? Is that something you have considered?
That's not something we do; the way to do that right now would be to run something like kubectl get with -o yaml and pipe it to KubeLinter. That's been our answer so far. I would say it's something we've thought about, but not something we're planning to do in the immediate future, at least. We're keeping the focus on the YAML files, because I think there's a fair bit of complexity that comes with trying to have something on the cluster monitoring what's running. We know that because that's a lot of what we do in our commercial product, and maybe once our commercial product is open source, there'll be an easier way for us to leverage some of the work we've done there in KubeLinter as well. But right now we're focused on the linting and CI use case. I do know of some users who have taken the approach of running kubectl get all -o yaml and piping that to KubeLinter, which effectively gives you this, but it's not as nice as what you would get by running in the cluster, like an admission controller and things like that.

All right. There was another question about generating an HTML report, or, I'm assuming we can generalize the question to be about a UI of some sort. Yeah, so we don't have a UI right now; this is the only output format. We do have a PR out, one of the open PRs is actually for JSON output, and our hope is to output JSON in the SARIF standard, which is a standard format for static analysis reporting. Our hope is that once we do that, there's a whole ecosystem of tools that will understand that output and be able to work with it; for example, IDEs typically have a very easy way to consume those. So the hope is that we do JSON output and then enable anyone to write a tool that parses that JSON and renders it however the user prefers. All right.
And perhaps one last thing: could you share what checks or tests you currently run? We've seen an example of one or two; is there a way to see the list of checks you perform? Yeah, there is. I only went over the kube-linter lint command, but there's a kube-linter checks command that gives you the list of checks, and the same list is also in our documentation. You can then use those names with include and exclude to customize which checks run.

Vishwa, I have a question here, my last question, please. I like the interaction of KubeLinter with Kubernetes annotations; I think that's a good approach. When you have a team working with a common configuration, someone can still have a specific use case, and so on. So would you say it's a good practice to combine the team's common configuration with annotations, so that the specific cases where someone needs to skip a check, or add a check that differs from the rest of the team, are controlled through annotations? Yeah, yeah, exactly, I think that was the thought behind putting it in annotations. Great, I think that's a very good interaction with Kubernetes annotations. Thank you so much. Yeah, thanks for the question, Paolo.

All right, I think we're close to the end. Great demo, great presentation, and good interaction from the audience. You already mentioned how to get started, your GitHub repo, your Slack channels; I would also mention that we have the Cloud Native Live channel on the CNCF Slack, so if people want to stay engaged once this stream is over, that's another option. And Paolo... Yes, please, I'm on that as well. Yeah, great. So if there's anything you would like to wrap up with, go ahead; otherwise we can wrap it up.
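As a reference for the include/exclude mechanism Vishwa described, here is a hedged sketch of a KubeLinter config file. The key names follow the documented config format, but the check names are examples; take real ones from `kube-linter checks list` or the docs:

```yaml
# Sketch of a .kube-linter.yaml customizing which checks run.
checks:
  # Turn on checks that are not enabled by default:
  include:
    - "privileged-container"
    - "run-as-non-root"
  # Turn off checks that don't fit this team's setup:
  exclude:
    - "no-read-only-root-fs"
```

A config like this would then be passed as, for example, `kube-linter lint --config .kube-linter.yaml .`.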
No, that's all I had. So thank you everyone for listening, thank you for all your questions, and I hope some of you are able to take advantage of this tool and that it makes your lives easier. Yeah, it was great to have you, Vishwa, and to talk about KubeLinter. Just a reminder: this is a weekly show, and we meet every Wednesday at 8 a.m. Pacific. By the way, the time has changed from previously, so make a note; it used to be at 3 p.m. Eastern. So see you again next week on Cloud Native Live. Thank you everyone. Yeah, thanks, Itai and Paolo, and thanks everyone. Thank you so much, Vishwa; take care, everyone.