Hello everybody. Happy Monday and welcome to OpenShift Commons. Today we're going to do another AMA briefing on a wonderful project, KubeLinter, brought to us by some of the folks at StackRox. We have Michael Foster here; he's going to introduce himself and his colleague Vishwa. We'll get an introduction to the project, where it's at, the roadmap, and have time for Q&A at the end. So without further ado, Michael, please tell us all about KubeLinter, the community, and where it's going. Awesome, yeah. Thanks for the intro. Again, I'm Mike Foster. I'm a cloud native advocate at StackRox, and I'm joined by Vishwa. Hopefully, Vishwa, give a hello. Hello, everyone. Hey. And Vishwa is the lead maintainer for the KubeLinter project, so he's here to correct me if I get off course about anything. Vishwa's going to be my go-to guy as I walk through KubeLinter, talk to you about the community, demo its functionality, and hopefully have you joining the community after. So let's get started. Again, here's what I'm going to cover. I'll go through an introduction: why we decided to create KubeLinter and where it fits into the ecosystem. I'll show you how to get started, how to download it and integrate it into your pipelines, and the general workflow. So I'm going to show you how to use KubeLinter and the kube-linter lint functionality, plus some of the more advanced features, like configuring the policies and setting flags so that we can ignore checks and enable checks. The last thing I'll get into is integration, to show you where the real power of KubeLinter takes over as part of your CI pipeline. And we'll talk to Vishwa about what's coming next, some open issues, and what the project roadmap looks like for KubeLinter. And again, how you can all join in. We've already introduced ourselves, but again, that's Vishwa, I'm Michael, and I'll share some links at the end.
So if you want to contact us or learn more about KubeLinter, we'll be happy to talk to you. So, what is KubeLinter? It's a CLI: a command-line interface for linting Kubernetes objects. The Kubernetes YAML files can be in the form of Helm charts, or they can be just regular YAML files. KubeLinter uses the default policies baked into the CLI to let you know if there's a configuration mismatch, if you have elevated privileges, and things of that sort. And because we have all these policies, we can configure certain policies, enable them and disable them, so we allow for fine-tuned policy enforcement. And because it's a Go-based CLI, we get simplicity in design, something easy for you to integrate. It's a binary: just download it, install it, add it to your CI pipeline. Very easy to work with. And why KubeLinter? There are all these different policy options out there right now, and we at StackRox saw that DevSecOps really wasn't catching on, and there's a reason for it. You can have all these tools, but that doesn't necessarily mean your developers buy in, and it doesn't necessarily mean they're easy to use. There's a knowledge gap: for some of these tools, maybe you have to learn another language. So we wanted to keep the project simple, with a relatively low knowledge gap in terms of being able to implement it. It is a Kubernetes-focused project. Some of the other policy applications are a little wider in scope; this is for Kubernetes, so you know that's what we're going to support, and that's what we're moving forward with in terms of policies and all our integrations. And really, KubeLinter is just there to identify misconfigurations as close to the source as possible. We want to enable developers and CI pipelines to catch issues as close to the source as possible, in a declarative way.
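To make the "just download it and run it" point concrete, here's a minimal sketch, assuming a Mac with Homebrew installed; the manifest paths are placeholders for illustration, not actual files from the talk:

```shell
# Install the binary (release binaries and building from source also work).
brew install kube-linter

# Lint a single YAML file, a Helm chart directory, or a whole folder of manifests.
kube-linter lint deployment.yaml
kube-linter lint ./manifests/
```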
The checks are also there to give users knowledge for further growth. So not only are we linting the files, but we're also showing you why the policy was created, with documentation. If you're unaware of why a policy is there, you can go into the Kubernetes documentation and find out more. The end goal is that you can build operational policy within your organization or teams, something that can be implemented at a personal level, across a team, or across an organization, and I'll show you how to do that. Okay, let's get started. I'm going to take a quick journey over to the KubeLinter project. KubeLinter can be found on GitHub in the StackRox organization, at stackrox/kube-linter, where you'll find everything you need to get started. Vishwa and the team have done a great job creating a docs site, which I personally like using a lot more. No dark mode for me, but yeah. So we have everything here: installing KubeLinter, building from source, the usage, how to get involved in the community, and the license as well. Again, because it's a Go binary, you can build it using Go, you can build from source, or you can use a Docker container to run KubeLinter as well. And there are a bunch of other options in here. Let's say you wanted to implement KubeLinter in GitHub as a GitHub Action; there are options for that as well, and a couple of demos and examples that I'll showcase moving forward. Now, one of the big things I want to show you before I go to the command line and showcase KubeLinter are the default checks. KubeLinter has 19 built-in checks, and not all of them are enabled; there are 13 enabled by default. And Vishwa, correct me if I'm wrong, but really the reason for having 13 was that we didn't want to overwhelm users. Not all of these checks are necessary.
And so we wanted people to be able to go use KubeLinter and see the core checks without being overwhelmed by all the errors. So, focus on those core six to eight checks that tend to pop up, and then, as you start to get more comfortable, you enable all the checks, and you'll get things like unset CPU requirements and unset memory requirements, which aren't strictly necessary but should be enforced. Yeah, just to add to what you were saying: there are some checks where we're opinionated and think everybody should follow them, and there are some where we know not everyone is going to be able to follow them, but they're good practices. So we wanted those checks, but we wanted them to be more opt-in. Yeah. And the description and remediation in the checks also allow you to run all the checks without necessarily needing to take action on them, right? It will showcase the issues and help you move through them. Now, this will come up a little later too, as we get into customizing checks, but KubeLinter is built off templates that you can actually change and alter depending on your needs. For example, there's a required-annotation check that basically says you should have an email of some sort so that somebody can contact you about this Kubernetes object. You don't necessarily need that, but you can also alter it. Let's say your team requires an annotation of, say, "testing", or maybe if somebody's running an object, they need to put their name in as an annotation. You can set all of that and configure the policies as you need them. So there's flexibility in what you can configure. Now, that's all I really have to show you in the documentation, because the real stuff is in the command line.
For this walkthrough, I've actually created a little demo repo with all the examples I'm going to go through today, and we'll post the link at the end as well in case you get lost. The repo, mfosterrock/kubelinter-walkthrough, has all the manifests I'll be going through today, and I'm going to be doing this in Visual Studio Code. In that repo, we have a list of manifests. Now, I've split it up in a weird way; people might ask why there are so many folders here. I did it to showcase how KubeLinter works, and works effectively. You could realistically put all these demo files in a single folder and lint through them in one go, but we wouldn't really want to do that. It doesn't really make sense, and it can't scale, right? What I find really useful is that kube-linter lint lints everything underneath a folder. So we break it up and apply the policies to each folder respectively. I just wanted to answer that question because it comes up fairly often. KubeLinter is a simple install; I used brew to install it because I'm on a Mac right now. The CLI feels very similar to kubectl. Running kube-linter shows the available commands: there are the checks, help, lint, templates, and version commands. Help is always there; --help or -h will always help you and give you a description of what the commands are. Version shows you the version that you downloaded, obviously. And checks shows you all of those default checks that I showed you previously. So, like I mentioned before, unset-cpu-requirements gives you a description, a remediation, and a link as well if you want more documentation and details. The other aspect here is the templates command. Just like the checks, we have a templates command here.
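As a quick reference, the subcommands just mentioned look like this on the command line (output elided):

```shell
kube-linter checks list      # describe every built-in check, with remediation and doc links
kube-linter templates list   # describe the templates that checks are built from
kube-linter version          # print the installed version
kube-linter lint --help      # list the flags available to the lint command
```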
Now, the difference is that the checks are built off the templates, but the templates allow you to change some of the keys and values in the parameters of a check, which I'll showcase more when we get to the config file. Just to show the differentiation between what a check is and what a template is: the templates are there to help you build custom checks, and the checks are the default ones built into KubeLinter. Moving on to the good stuff. We have kube-linter lint, and we can technically lint the whole repo and it will take all the YAML files underneath that folder. Because I have the YAML files split up into all these different folders, you can see I have 54 lint errors. Now, that's way too much to really go through and check, so we probably want to be a little more specific about what we're linting. I have a couple of compliance YAMLs in here, and I have one that is non-compliant, and there I found eight lint errors. Here's what this YAML looks like. It's a non-compliant app. We have, for example, an AWS secret key that's in the YAML file and should not be there. We have a service account that doesn't match up in terms of labels. And what we're forgetting is there's also a lot of information missing here: privileged access isn't set, we don't have any drop capabilities or Linux capabilities set up, and we don't have a UID or GID set here. So there's a bunch of other information that's missing, which we'll be able to see if we go into the console. Like I said, I ran the checks, and these are the default 13 checks, right? And we found eight lint errors. This is not the full set of 19. You can see we have an env-var-secret error: don't use raw secrets in an environment variable; instead, either mount the secret as a file or use a secretKeyRef. And then we showcase, if you want more documentation, here's how you would solve this problem, right?
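As a rough sketch of the kind of manifest being described (the names and values here are illustrative, not the actual demo file), something like this trips the env-var-secret check and several of the others mentioned:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: non-compliant-app        # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: non-compliant-app
  template:
    metadata:
      labels:
        app: non-compliant-app
    spec:
      containers:
        - name: app
          image: example.com/app:latest
          env:
            - name: AWS_SECRET_ACCESS_KEY   # raw secret in an env var: env-var-secret fires
              value: "not-a-real-secret"
          # No securityContext here, so runAsNonRoot, readOnlyRootFilesystem,
          # and dropped capabilities are all unset as well.
```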
Same thing here: there's no read-only root filesystem. The remediation is to set readOnlyRootFilesystem to true in your container's securityContext and go from there, right? It's pretty straightforward for that example. Now, if I use the help flag, I can see the list of flags available to me with kube-linter lint. I have a bunch of flags here: I have --add-all-built-in, I have --config, I have --do-not-auto-add-defaults, and I have --include. Now, --add-all-built-in does exactly what it says it's going to do: it's going to add all 19 checks. So originally we had 13; if we want 19, we do an --add-all-built-in, and you can do this directly from the command line. We run with --add-all-built-in, and now we have 12 errors, right? So we've run six more checks against it, and we've realized that it violates four of those additional checks. And these are going to be: there's no memory request or limit set, there's no CPU request or limit set, runAsNonRoot is not set, for example. One of the ways we can go through this is to just tick them off one by one in here until we fix the issues. Something as simple as: hey, I got rid of this environment variable, run --add-all-built-in again, and I have 11 lint errors, right? That's one way to go about it. The other way: you can do a --do-not-auto-add-defaults, and you're going to get a warning, "no checks enabled". Now, one question that's been brought up a couple of times is why we even added that. Why add a --do-not-auto-add-defaults? You're running the command; what's the point? Well, the reason is, think about it: we have 19 checks. We can either add all of them and disable them as we go down, or we can disable all of them and include them as we go up, right?
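Sketching the two directions side by side (the folder path is illustrative):

```shell
# Opt up: run all 19 built-in checks instead of the default 13.
kube-linter lint --add-all-built-in ./manifests/

# Opt down: start from zero checks and include only the ones you care about.
kube-linter lint --do-not-auto-add-defaults --include run-as-non-root ./manifests/
```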
So, say we really only care about running as non-root. Maybe we just want to make sure that our containers are not running as root, and if they are, we're flagging them for an exception, right? What we can do is a --do-not-auto-add-defaults, and then an --include with the run-as-non-root check. And now I found one lint error. Really useful for flexibility. Now, the next question that comes up is: why would I be doing this on my command line? You've just seen me scroll up and down, copy and paste. It doesn't really make sense. It's not fast, it's not scalable, right? And that's where config files come in. Instead of running something like this, I can use a config file to run the checks; let me show you, for example, what that looks like. You saw the flag for add-all-built-in; I can create a config file that sets basically the same thing to true, and then I can also list all of my include and exclude arguments as well. For this config file, normally you'd have an exclude here; I've just commented it out. I have all these checks, they're all set to true, so I should get that 12-lint-error message I had before. Let me just make sure I'm in the right place, though. Looks like I got eight, which should be 12, unless I'm skipping something. Maybe something's being skipped. But you can do the exact same thing with doNotAutoAddDefaults set to true as well. So just like on the command line, there's an add-all and a do-not-add, and I can run the exact same check against it. Ah, my formatting was off here. Yeah, I'm just trying to figure it out; it's probably something like that. Yeah, I think it's probably a tab or something like that that's off. So just as an example, we'll say that I ran all the checks. I can exclude specific checks.
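A config-file equivalent of those flags might look like the following; the file name is illustrative, and the `checks` stanza follows KubeLinter's config schema:

```yaml
# .kube-linter.yaml -- pass it with: kube-linter lint --config .kube-linter.yaml ./manifests/
checks:
  addAllBuiltIn: true
  # doNotAutoAddDefaults: true   # the opt-in alternative, paired with an include list
  # include:
  #   - run-as-non-root
  exclude:
    - unset-cpu-requirements     # example exclusion; comment in why you disabled it
```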
Going back to --add-all-built-in, I have an unset-memory-requirements check there, and run-as-non-root. What I can do is tell the config to exclude, and maybe we want to get rid of the run-as-non-root check. I can do that. And then, again, everything's declarative: we want to comment things when we make changes. So we exclude the specific run-as-non-root check, and we have seven errors. If we go through, we'll see that the run-as-non-root check is not there now. One of the reasons the config file is set up that way is the declarative nature of YAML files and Kubernetes objects. We want to use that declarative nature so we can pass information on to other teams. For example, say a team was using KubeLinter, and you found that the microservice they were working on had some specific requirements in terms of checks. Well, we can set up a policy around that baseline enforcement of checks, and we just add comments to say: hey, this specific operator or container or pod requires root access on the host. We can make a comment in the YAML file, and in the config file as well, so the next person coming in understands why that decision was made. One of the interesting things, because it's a CLI, is that you can apply it across all the different folders. Not only can I run it against the non-compliant YAML, but you can basically string all the commands together. What I found really useful is to create these default config files. You can see I have a couple of examples here; maybe I have specific checks that I want for the carts database down here. Well, I create a config file, point the CLI at that config file and that folder, and I do it as a couple of steps in the CI process, with a GitHub Action or a GitLab runner or something like that.
You can't always rely on config files, though, and there are always specific exceptions. So instead of having everything in the config file, we also wanted to give you the ability to annotate things in the YAML file as well. To showcase this, we have an ignore-check annotation that allows you to ignore specific checks and document why you're doing it. I'm kind of beating up on the run-as-non-root example here. For this annotation example, I'm going to take it out of the config first and disable that. Hopefully you didn't get lost in that back and forth. So we're back to eight lint errors, right, because I commented out the run-as-non-root exclusion. Now, if I save this file, I should get seven lint errors, because I'm adding an annotation telling KubeLinter to ignore this check, and I want it to ignore the run-as-non-root check. You don't necessarily need to write a description, but it's highly recommended that you do. You can leave it as an empty string, but the whole purpose was to allow users to document why they were ignoring checks, or why they were disabling or enabling checks, directly in the YAML file, right? It's one way we can say: hey, we are bypassing this security check; we're aware that this is, let's say, not the best practice for securing our container, but it's required, here's why, and get in contact with this person if you have any other questions. To give you another example: a developer would probably put this one in just to not have any limits, but we can get rid of that, and it got rid of two lint errors. The reason is that the memory requirements check covers both a limit and a request. So we got rid of two errors there, and you can continue down and do the same for CPU.
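For reference, the annotation being demonstrated takes this shape; the reason text is illustrative, while the annotation key follows KubeLinter's ignore-check convention:

```yaml
metadata:
  annotations:
    ignore-check.kube-linter.io/run-as-non-root: >-
      This workload needs root for a legacy agent; approved as an exception.
      Contact the platform team before changing this.
```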
So I find a really good workflow, when looking at applications, is to have a config file with all the defaults enabled, and then you just go through, pick checks off, and document exactly why you're making these decisions. Do they need it? Is there something I forgot to mention, right? Just simple descriptions to help you out. Now, I've got a question real quick. Sure. Is there a "no excuses" mode, where you can't use any of those annotations? So that has actually been brought up. There is not a "you can't disable this" mode. One of the things I brought up to Vishwa is that KubeLinter exits with a non-zero exit code when a check trips, right? So as part of a CI pipeline, you could technically annotate your way out of all the checks. Your developer could push something, annotate their way out of it, and you wouldn't be able to see it. So one of the things in the pipeline is that even if there's an annotation disabling a check, it still prints an output saying: this is the check that was disabled in this YAML file. So even if somebody did annotate it and somebody else was reviewing the work, it would still trip a wire. But no, there is not currently a mode where you can't disable any of the checks. Don't take my word for it, though. Vishwa, is there an option for that? No, but yeah, I totally agree that we should add it. Yeah, adding it to the roadmap. That's a great recommendation. I was going to say, go and open up an issue. I can open an issue. I'll open an issue. Do it. We'll have it before the end of this session. Okay. All right. So, moving on. We've basically just looked at all of the default policies up to this point. The custom checks allow you, and this is one of the things Vishwa has worked on, some flexibility in the different objects you can apply them to. Right now, a lot of the objects are deployments.
There are a couple that are services, and as the project moves forward, it'd be interesting to get feedback from the community on which objects and checks they would like to see. Among the custom checks, for example, I find required-annotation really useful. If I show that template, we'll see the options, the parameters, that we can change in these checks. So here's the required-annotation check, and we can see it flags objects not carrying at least one annotation matching the provided pattern. So basically, we can set the parameter and the key to, say, kubelinter/demo, for example. Then in this non-compliant app, if there were no kubelinter/demo annotation here, it would flag, right? Now, I can set my own remediation that will come through with the check, and you can add whatever guidance you want in there. What I find really useful about this is that admins can now enforce annotation best practices for other YAML files. No longer can somebody push something without their email attached to it, or without adding the correct labels. So let's say you're in GitHub and you have different repositories for different teams or something like that: you have a different config file for each team, all enforcing the same annotations per team, so that you can keep that, let's say, neatness across teams and then into your cluster. The other thing is that some of the parameters allow you to change the object kind. Here's an example where I'm specifying labels, but I want to apply the required label to the Service object kind, so it'll check the services. If I did it for carts, for example, it would check the service and say: hey, is the right label here? No, it's not. We need to make sure that label is there.
And, you know, it'll give you some sort of... oh, where did I just go? There you go. It'll give you some sort of remediation: please add an environment label to the service, one of dev, test, staging, production, just as an example. Now, here's what this looks like. I have this config excluding basically everything except for run-as-non-root, because that's what I'm picking on today. It's a tough Monday for run-as-non-root. I'm going to use the config for the cart, because that's the one we're looking at, and run it against it. You can see we have the checks, we have addAllBuiltIn set to true, and we have this exclude, right? So we're excluding basically all of the checks except for run-as-non-root. And then we're implementing custom checks: a required-label release deployment check. We give it a specific, unique name, we say the template is the required-label template, and we specify the parameter and the key that's changing: there has to be an environment label there. And then "please add environment label", so you'd do environment, colon, and then which environment it is. You can see there's no label matching right there. And this is, I think, an example of both: they're looking at the service, and we have a required-label release service check; it's like a duplicate here. But yeah, either way, you can see both checks coming out, and if I wanted to get rid of that check, for example, it allows you a lot of flexibility to play around and configure your policies as you want them. One of the biggest issues that we've come across, especially as a security company like StackRox, is that you hear this term "shift left" all the time, and nobody really talks about what shift left is.
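Pulling those pieces together, a config along these lines matches what's being described; the check names and remediation text here are illustrative, not the walkthrough repo's actual file:

```yaml
checks:
  addAllBuiltIn: true
  exclude:
    - unset-cpu-requirements      # ...and so on, excluding everything except run-as-non-root
customChecks:
  - name: required-label-release-deployment   # unique name for this custom check
    template: required-label                  # the built-in template it's built from
    params:
      key: environment
    remediation: >-
      Please add an environment label, e.g. environment: dev|test|staging|production.
```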
And it doesn't really matter if your developers see security as a big wall in front of them; they're not really going to work with security. So we wanted to create a tool that is in the developers' hands, that can be applied in an automated way as part of their pipeline, and that also gives you a lot of flexibility. So let's say admins want to push best practices onto their development team. Well, you can implement this as part of your CI pipeline without actually triggering a build failure, just to give them educational feedback. And I'll show you what that looks like. In this repo, I have a GitHub workflow, a config file for a GitHub Action. Now, like I said, if kube-linter lint trips an error, you'll get a non-zero exit code, which would technically, excuse me, cause your build to fail. But you don't necessarily have to fail it; GitHub Actions, for example, allows you to continue on error, right? So depending on which branch you're working with, maybe your devs are starting off and they're new to Kubernetes, and we don't necessarily need 54 lint errors and build failures all the time. So what we do is run it with all the policies enabled, and maybe we set continue-on-error to true, right? Then you have a meeting on Monday and you say, okay, I'm going to open up tickets, because there are 54 lint errors and I want you to fix these eight before Friday, right? It doesn't, again, need to fail a build. But next Monday, when it comes back: did we fix those main security issues? Yes. Okay, well, maybe we can fix two more, maybe five more, right? So we're not actually hampering developers; we're making those little changes in a way that allows teams to communicate and collaborate. In this example, I have basically two jobs: a test job and a staging job, and the staging job fails when you run kube-linter lint.
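A sketch of what such a two-job workflow could look like; the action version, inputs, and paths below are assumptions modeled on the published kube-linter GitHub Action, not the walkthrough repo's actual file:

```yaml
# .github/workflows/kube-linter.yml -- illustrative sketch
name: kube-linter
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    continue-on-error: true          # report findings without failing the build
    steps:
      - uses: actions/checkout@v4
      - uses: stackrox/kube-linter-action@v1
        with:
          directory: manifests
  staging:
    runs-on: ubuntu-latest           # no continue-on-error: any finding fails this job
    steps:
      - uses: actions/checkout@v4
      - uses: stackrox/kube-linter-action@v1
        with:
          directory: manifests
          config: configs/staging.yaml   # hypothetical stricter config
```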
Here's what that looks like: I go into GitHub Actions, and I can see that the test phase worked. One of the reasons it's useful, especially when starting off, to have kube-linter not fail the build is that a lot of the time when you're building these pipelines, if you get a non-zero exit code and a step fails, you actually don't get to continue the pipeline. In this example, I have two commands: a lint and then another lint, because I want to apply different config files, right? With different config files, if the first command fails, the pipeline just stops; it doesn't continue through. So I found it really useful to create a pipeline that gives me all of the output, right? Going into GitHub Actions, I can come in here and see all of the output from both commands with their different config files. I can then look at them and go have a conversation about what I think are maybe the top two things we need to fix. Run-as-non-root is probably the top one, if root actually isn't needed. So we can prioritize. We can say: hey, I don't really care that much about your service not being labeled; it's in development, we'll fix it, but you really need to change this run-as-non-root. And then you can push those specific policies down. And to do that, well, you can set it so that it passes on test, but if they want to move it to the staging branch, or move it up, it's going to fail. So: hey, by the way, here's your feedback; if you want to keep going, we're going to fail the build. At some point, you need to bring the hammer down as an admin and say: hey, this is not what we're doing. And one of the great things about implementing policy this way is there's sort of a, I guess, sunshine effect to it, where we're just saying: hey, these are the policies that are getting tripped; you figure out what to do with them.
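When you want to triage findings programmatically rather than eyeball the CI logs, kube-linter lint also supports machine-readable output (--format json). Here's a small Python sketch that counts findings per check so a team can prioritize; the sample payload and its field names ("Reports", "Check", "Diagnostic") are assumptions modeled on that output and may differ between versions:

```python
import json
from collections import Counter

# Illustrative stand-in for `kube-linter lint --format json` output.
# Treat the field names here as assumptions, not a schema guarantee.
sample_output = json.dumps({
    "Reports": [
        {"Check": "run-as-non-root",
         "Diagnostic": {"Message": "container does not set runAsNonRoot"},
         "Object": {"Metadata": {"FilePath": "manifests/app.yaml"}}},
        {"Check": "unset-memory-requirements",
         "Diagnostic": {"Message": "container has no memory request"},
         "Object": {"Metadata": {"FilePath": "manifests/app.yaml"}}},
        {"Check": "run-as-non-root",
         "Diagnostic": {"Message": "container does not set runAsNonRoot"},
         "Object": {"Metadata": {"FilePath": "manifests/db.yaml"}}},
    ]
})

def summarize(raw: str) -> Counter:
    """Count lint findings per check name to drive a fix-these-first discussion."""
    reports = json.loads(raw).get("Reports") or []
    return Counter(r["Check"] for r in reports)

# Print the checks in descending order of how often they fired.
for check, count in summarize(sample_output).most_common():
    print(f"{check}: {count}")
```

This is the kind of glue you might run as a non-failing CI step to turn raw lint output into a short prioritized list for the Monday meeting.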
It allows you to build realistic security policies across your organization. You have five teams, eight teams, 20 teams; they all have different needs, right? And imposing a one-size-fits-all security policy across those teams is just a hindrance. It's not going to help your developers; they're going to end up not liking your security team. So if you're going to be security focused, you need to also go and ask what developers need. We'll give you flexibility in how you configure this, but then we also expect a little bit of following best practices, and we'll give you feedback on which errors you're tripping and what we think you should focus on, right? That was the goal of KubeLinter. One of the main reasons we did it as a CLI is that StackRox has roxctl, which is an image scanner and more, and gives developers a lot of functionality around images. So we have those paired CLIs, and we thought that if we're going to be Kubernetes focused, here's your image scanning, and here's the configuration of the images in the cluster; it's a good whole-lifecycle approach. And then obviously, when you get into StackRox, the sensor, the collector, and more of the user interface, you get whole-lifecycle Kubernetes security management. So again, KubeLinter is that first-stop developer checkpoint for figuring out where your security vulnerabilities lie with Kubernetes, and hopefully it gives you enough flexibility to adopt it and grow from there. Now, I think that's it. Vishwa, did I miss anything of major importance? Any functionality? Nope, you got everything. Thank you. So I'll hand it off to you; you have the roadmap, which, it sounds like, has a new issue coming in, so be sure to add that to the roadmap. Yes, I just saw that. Cool. Cool. So, hey, everyone.
Again, I'm Vishwa, and I'm one of the engineers working on KubeLinter. Thanks, Foster, for going over that demo. I'm going to give you a high-level overview of the things on our near-to-medium-term roadmap. Obviously, one big component of what we're adding is more checks, and there are a few different things here. One is more checks along the lines of what we've already written. Another is a feature request we've heard from some people: to have checks that map specifically to certain compliance standards, and not just add those checks, but allow some way of tagging checks with metadata, as in: these checks will help you, these checks are required for you to be in compliance with this standard, and things like that. We also want to consider additional resources that we don't yet consider in our checks, like network policies and pod security policies, and checks based on those. A simple check could be: does your deployment have a network policy? And if it doesn't, it should, because otherwise it can talk to anything. More complex checks could allow people to write more fine-grained rules on network policies, like: don't allow network policies that are overly broad, or whatever they want. So that's something we're thinking of. Another big one that we've heard is custom resources. Right now, all our checks basically work only on native Kubernetes objects: deployments, services, daemon sets, and things like that. But with the latest versions of Kubernetes, custom resources are becoming a bigger and bigger thing, and we're trying to figure out a good way to support them. And in addition to custom resources, there are other things: OpenShift, for example, has its own resources, like security context constraints and deployment configs.
And that's another thing we would want to support as well. Another is better customizability. As you saw in the demo, we currently allow you to specify checks through templates that we provide and let you parameterize. But we are thinking of ways to allow more flexible specifications of checks, because right now, if you want to write a totally new kind of check, we need to update Go code and release a new version of KubeLinter, which is not ideal. We're still figuring out how to design this. The simplest thing on the spectrum is something like JSONPath: allowing people to specify JSONPaths, or something to say, this field in the YAML should be equal to this. A more complex thing along the spectrum would be allowing people to put in Rego policies, so they can use what they use with OPA inside KubeLinter. Another few big things we want to do are usability improvements. There's a bunch of things around this on our roadmap, but one we're thinking of is automated rewriting in cases where it's possible. Obviously it's not going to be possible everywhere, because in many cases we can just tell you something's missing, and only you know what should be added. But there are some cases where we can fix things on our own, similar to what many linters do, and that's something we would want to support. And then more convenience command-line flags. There have been a few requests on this, and some of the command-line flags you saw Michael use were added in response to user feedback. People have asked us for things like, you know, allow us to ignore certain paths easily, and things like that. So there's a bunch of those usability-related things that we need to do.
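[Editor's note: the template parameterization Vishwa describes is what KubeLinter's config file does today. A sketch of a `.kube-linter.yaml` along those lines, following the structure in the KubeLinter docs; the specific check names chosen here are just examples.]

```shell
# Write a KubeLinter config that tunes the default check set and
# instantiates one custom check from a built-in template.
cat > .kube-linter.yaml <<'EOF'
checks:
  # Keep defaults, but turn one check off and one extra on.
  exclude:
    - "unset-cpu-requirements"
  include:
    - "no-read-only-root-fs"

customChecks:
  # Parameterize the built-in required-label template:
  # every object must carry a "team" label.
  - name: team-label-required
    template: required-label
    params:
      key: team
EOF

# Then point the linter at it:
#   kube-linter lint --config .kube-linter.yaml <directory>
```

The limitation discussed above is visible here: you can only instantiate templates that ship in the binary, which is why JSONPath-style or Rego-based checks are on the roadmap.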
And then finally, and I did see there was a question about this, we really want to do native integrations. There are two big buckets. One is IDEs. VS Code is, I think, the one we've gotten the most requests for. We would also probably do IntelliJ and the JetBrains family, as well as Lens, which is Mirantis's open source Kubernetes IDE. We think that's a good fit for us to integrate with as well, and we have been in conversations with them, because they very recently released an API that makes it easy to integrate with Lens. And then CI systems. You know, we do have a GitHub Action that we publish, but we want to make it easier to just drop KubeLinter into CircleCI or Jenkins and things like that. Right now, it's not terribly hard to use, because it's just a binary: you download it and run it. But in my experience, the more pluggable you make these things, the more friction you remove, and the more likely people are to use them. Yeah, you and I have talked about creating a repository with a bunch of, basically, best practices, right? Like, I have six config files in that repository. I think it'd be very useful to showcase, hey, here are some default config setups that you can use along with different actions, whether it be GitLab, Jenkins, or GitHub as well. So yeah. Yeah, absolutely. We should definitely add that to the roadmap. Yeah, yeah, absolutely. Thanks, Michael. I totally agree. No problem. Yeah, so that's a high-level overview of the roadmap. And our roadmap is managed entirely on our GitHub. We use GitHub Projects, which allows us to use GitHub issues as tickets and then put them in the context of the roadmap. Yeah, so I guess Michael is sharing it. Yeah, you can see.
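[Editor's note: the "just a binary" CI story Vishwa mentions can be sketched as a portable shell step, which works in Jenkins, CircleCI, GitLab, or anywhere else. The download URL pattern is an assumption; check the KubeLinter releases page for the real artifact names and pin a version.]

```shell
#!/bin/sh
set -eu
# In a real CI job you would fetch a pinned release first, e.g.:
#   curl -sSL -o kube-linter.tar.gz \
#     "https://github.com/stackrox/kube-linter/releases/download/<version>/<artifact>"
#   tar -xzf kube-linter.tar.gz
# (URL pattern illustrative -- verify against the releases page.)

# Stand in for the repo's manifests directory with a trivial Service.
mkdir -p manifests
cat > manifests/service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector: {app: demo}
  ports:
    - port: 80
EOF

# The lint step itself: in CI, leave it unguarded so a non-zero exit
# (lint failures) breaks the build.
if command -v kube-linter >/dev/null 2>&1; then
  kube-linter lint manifests/ || echo "lint found problems (CI would fail here)"
else
  echo "kube-linter not on PATH; skipping lint step"
fi
```

The GitHub Action they publish wraps essentially this download-and-lint contract for you.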
And, you know, as people file issues, we typically add them to the roadmap, or deduplicate them and update another ticket. That's what we're using to manage it. Look at Paul. Yeah, thanks, Paul. No problem, guys. He snuck right in there. Yeah. That was a very quick issue open. Got that number 128, too. Bam. Yeah. Nice round number. Very pleasing. Yeah. All right. Anything else, Vishwa, that you wanted to chat about? Just, yeah, I think the last slide was just about helpful links, and we already went over those, but maybe just forward one more. Yeah, definitely. Well, yeah, so this is the link to our GitHub, which I think some people have posted in the chat already, linking to Paul's issue. We also have a docs website that Michael showed you. You can communicate with us by filing issues, for sure, and sending PRs, but you can also join KubeLinter on Slack, and the link to join the Slack is on the GitHub. So if you just go to the GitHub page and search for Slack, we have an invite link that you can use, and it'll take you right into our Slack workspace. All of us are on it, and, you know, we try to be pretty responsive. So if you just want to talk to us about anything, that's probably the lowest-friction way to reach us. And yeah, try the KubeLinter walkthrough that Michael has put together. So, you know, in closing, this is probably, not the very first, but among the first open source projects that we've done at StackRox. We are learning a lot as we go through this specific project, and we've been happy with the response we've received so far. We're looking forward to engaging more with users and trying to figure out how we can make this a better project that's more useful to our community. So definitely looking forward to hearing from you.
One thing we're specifically interested in is, you know, how are people using it? We've found that people do file issues when they run into bugs or when they have feature requests, but it can sometimes be hard to figure out: are these just the tip of the iceberg? Are there many people who use it but don't actually get around to filing issues? So, I guess what I'm trying to say is, any input on how you're using it is welcome, even if you don't have a concrete suggestion or request, just because that's interesting for us to hear about. But I think that's the end of our presentation. So thank you for listening, and I guess we'll take questions now. Awesome. Thanks. One thing: do you have any open community meetings set up yet, or anything like that? Or are you still mostly connecting with your end users and folks through the mailing list and issues? Yeah, so we're just getting started on doing that. We actually have an AMA that's coming up, I believe on the 16th. Is that right, Michael? I think so. Don't quote me on that. I'll look it up right now as you're going on. Yeah, that's going to be our first foray into this. I can just pull up the link and post it in the chat. It's just after this. But yeah, that's going to be our first foray. Until now, it's just been Slack and GitHub. Okay. And so for the most part, it's StackRox engineers working on this? Yeah, we've received a few contributions from users, but most of the contributions have been internal so far. And, you know, the PR approvals and merging is currently just StackRox engineers. Well, I was talking with Michael a little bit before everybody got on today and we hit the record button, about how this seems like a nice complementary thing to what you have for products as well.
The products are more based on examining images. So, you know, I would think that this fulfills a need for this piece of the work. And I was just wondering how this relates to operators, because we talk a lot about using operators versus Helm charts and series of YAML files. Is there any work going on, Vishwa, around the Operator Framework, or examining that? Not specifically. We haven't done anything specific to operators here. Again, since we're analyzing the YAML files, to some degree, if you're able to render the operator into its final YAML files, the tool will work. But what we focused on, at least at the beginning, has been better support for Helm charts. With Helm charts, we don't require users to render them; we use the Helm libraries under the hood, and if we detect that it's a Helm chart, we just automatically render the templates and run our checks on them. Another thing we've had requests for is Kustomize as well, which some people use to manage their configs. But yeah, that's a good suggestion and something for us to look into. Yeah, it would be interesting to see how that works. There have been a couple of things that have come up in the chat. Noel has a list of issues that he's seen with other customers from his perspective. And I love the acronym "over my dead body." I don't know, Noel, if you want to share that list with them and walk through it a little bit. If not, you probably got that other question covered with the VS Code item in your roadmap talk. And the other question, I think, was around comparing what you're doing with KubeLinter with what goes on in OPA, the Open Policy Agent. Can you talk a little bit about that? Yeah, so that's a really good question. And it's something we've thought about as well.
We ended up building this because, for us internally, it actually came out of internal pain points that we had. That was the initial spark, and later we decided this could be something we open source. With OPA and Conftest, you can achieve many of the same results. But we found that what we're building here is more focused on Kubernetes, and that allows us to do some things which are more Kubernetes-natural. For example, the way we ignore checks by annotations. We've been able to build that into the tool because we have this assumption that we are working on Kubernetes YAML files. The other one is how we are building native support for Helm charts: if we see a Helm chart, we just render it. Our tool is more Kubernetes-aware, and I think that Kubernetes awareness is one big thing we felt we could do in a custom tool. The other thing is we're also opinionated about certain default checks that we wanted to enforce internally, and we figured it would be nice to have those built into the tool. With Conftest and OPA, there are rule sets available, but we wanted something where we also guide users: these are the checks we think should be enabled by default, and they're actually built right into the binary. So those are the couple of reasons we decided to build something different. And I do still think that the OPA language is very powerful, much more powerful than the kind of language we've built. With that comes complexity, but we're also thinking of ways to use the OPA language in our policy configuration. So, like I mentioned in the roadmap, we're thinking of ways to make more flexible policies, and one of the ways we could do that might be to allow people to specify Rego files.
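[Editor's note: the annotation-based ignore Vishwa mentions uses an `ignore-check.kube-linter.io/<check-name>` annotation on the object's metadata, with the value serving as a human-readable justification. The annotation key follows the KubeLinter docs; the check name and reason below are examples.]

```shell
# A Deployment that suppresses one specific check for this one object.
cat > ignored-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
  annotations:
    # Tell KubeLinter to skip run-as-non-root here, with the reason
    # recorded right next to the exemption.
    ignore-check.kube-linter.io/run-as-non-root: >-
      This legacy container must run as root until v2 ships.
spec:
  selector:
    matchLabels: {app: legacy-app}
  template:
    metadata:
      labels: {app: legacy-app}
    spec:
      containers:
        - name: legacy-app
          image: legacy/app:1.0
EOF

# kube-linter lint ignored-deployment.yaml
# (run-as-non-root findings would be suppressed for this object only)
```

This is the "Kubernetes-natural" point: the exemption travels with the manifest and is scoped to one object, rather than living in a separate rule file.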
And so, essentially, for people who want that added flexibility, they get it right in KubeLinter, but they also get the convenience of all our other Kube-specific features. So I hope that answers the question. Answered mine. I hope it did for everybody in the chat as well. And Noel, I think you've got your microphone fixed. Can you hear me now? I can hear you now. There you go. Perfect. So I put that list of things together over the last while, because what we're seeing with very large customers of OpenShift is that it's not necessarily the developers that are struggling with these things. It's large operations teams, or even small operations teams, that have a large number of either third-party vendor applications or in-house development teams, where they spend the majority of their time getting stuff onto the cluster and then spend the rest of their time trying to figure out what's gone wrong. So things like missing readiness probes and all that type of stuff. And especially in the FSI space, we've seen things around, you know, when you're working in a service mesh: is mTLS enabled on the service mesh or not? So it's more granular things, but it was basically around how we enable those customers to set up standards that have to be enforced. Do you know what I mean? To get the application in there. So if you think of something like SonarQube for code, this would be the equivalent for Kubernetes or OpenShift resources. That's the kind of stuff we've seen so far. Yeah, this is a very thorough list of policies, too. Yeah, and a lot of these checks are actually implemented in KubeLinter. The other thing, too, is obviously StackRox is a more complete, you know, security tool that basically looks at all these checks and allows you to enforce them, either as part of the pipeline or in the cluster, right?
So, I don't know if that's a subtle way of me just saying, hey, StackRox basically does a lot of this stuff. Yeah, it does, yeah, absolutely. So, yeah, no, it's good, and it's actually great to hear that the developers aren't the ones having an issue. I've always found, when working with customers, especially in a consulting role, the issue is fine-tuning policy across different teams, right? It's the admin aspect of Kubernetes, because when you get multi-tenant setups, it just becomes a little bit more complicated. So a lot of our tools are focused around users, but users at scale, right? Because security only really works when people buy in. And so we want to make sure the tools are set up so that we're not just kicking people out of the cluster. It's more of a security observability tool with the ability to enforce policies, right? That's the goal. But yeah, this is a really cool repo. Thanks for sharing this. Yeah, I second that, Noel. It's really useful, and we're definitely going to be looking at that to steal ideas for KubeLinter. So thanks for sharing it. You've given us issues and given us ideas, so what's next? Noel, you want to come join the team? It sounds like a game plan here. This is great. You talked, Mike, a little bit about having some best-practices config files as well, so setting up sort of recipes and cookbooks for people to use as best practices in their own organizations. And I think maybe some of what Noel is listing here are best practices for different platforms that we're deploying to. So there's actually some really cool work to go through and collaborate with you guys on. Yeah, we're almost to the end of the hour. I'm not seeing any more questions in here, so I'll ask the one out there: KubeLinter, are there any plans for sandboxing it into CNCF? What are the next steps for you guys as a project?
Yeah, we actually don't have a great answer about our long-term plans, because we have just been focused on the short and medium term so far. The project is only three-ish months old at this point since we released it, and so we've just been focused on understanding how people are using it and trying to make improvements. I think when we released it, we didn't know what to expect, whether nobody would care or whether people would use it, and we've been happy. It exceeded our expectations in terms of people who have been interested and have been using it. And I think now is a good time to think about things like that. Again, this is something we don't have a ton of experience with at StackRox, because this is our biggest open source project, and so it's just something we need to figure out. But yeah, to answer your question, we haven't given that a lot of thought so far, to be honest. Well, it looks like, from the interest in it today and the work that you've done, you're on the right path. So I think we're going to follow you guys actively, see how this goes, and have you back again with the next release. This is awesome. So thanks very much for coming today. We totally appreciate the overview and the deep dive, and we look forward to working with you guys more on this. So take care, everybody. I will post this video and the slides shortly to the OpenShift YouTube channel, along with Noel's list of suggestions. So yeah, thanks. Awesome. Thank you so much for having us. It was great. Thanks so much for having us, Dianne. It was great. And thanks for listening, everybody, and for your engagement and your questions.