Welcome once again to StackRox office hours. I'm your host, Michael Foster. And today, as always, we're going to be discussing everything you need to know about the StackRox project, StackRox open source. Joining me is my co-host, technically co-host for the day, and fellow StackRox community manager, Matthias. Matthias, thanks for coming on the show. It was snowing yesterday where I am, so this is kind of my break for the week. I'm super excited to be on here with you. Tell us a little bit about yourself for all the listeners out there. What do you do? Thanks for having me. So I am a software engineer currently working on Team Maple, doing mostly, or exclusively, core engineering for our product. I joined the project in January last year, so I've been with the project for a little bit. Currently I'm working out of Germany, and what I've done so far, amongst other things and with other people, is provide a lot of engineering support for the open sourcing efforts of StackRox, or ACS. And besides that, as you've already mentioned, I'm a fellow community manager, and I'm part of the Code of Conduct Committee. So, yeah, that's about me. That's awesome, yeah, thanks for taking the time to come on. And for everybody watching, we have engineering meetings the second Tuesday of every month at 12 p.m. Eastern. Those are posted on YouTube if you missed them, and then the third Tuesday, which is today, is when we hold office hours, and we welcome any questions. We'll do demos and walkthroughs. So today the big focus is on getting started with StackRox, specifically because, now that we are open sourced as of March 31st, it is freely available for everybody to use. So anybody that's watching, throw questions in the chat. If you're unaware of the links, there's stackrox.io, and we are more than happy to take you through it all. So you actually mentioned the community charter.
I think a good place to kick it off would be stackrox.io, just to walk through a little bit of the community site. Any chance you want to take us through it and showcase maybe some of the links and blogs, articles, everything like that? Sure, we can start with our community hub, where we have basically all the links to get started. Ideally. Awesome, let's do it. Bring it in. So this is stackrox.io, our main website, where you can see upcoming events, where you can see our calendars, as well as most of the community links to get started. For example, you can join our Slack. We are on the CNCF Slack and we have the channel #stackrox. So this is one of the quickest ways to reach us. This is where quite a lot of people are active, not only from the StackRox team, but also collaborators and the general community. And yeah, basically most of the community. We do have our engineering blog, where we go into technical deep dives. And as you can see, the highlighted post is definitely the StackRox community and open source announcement. Yeah. The other thing about that Slack channel, which is awesome, is there are some people who have, let's say, departed since StackRox was acquired by Red Hat, and they're still very active in the channel and using the product. So it's awesome to see people who are staying a part of the community. In fact, I think we're probably going to mention Andy Clemenko in a little bit and all of the help that he's been giving the community. So again, that Slack channel is an awesome way to get in touch with us if you have extra questions that aren't answered here.
And again, if you want to talk or write a blog about getting started with StackRox or anything like that, you can email us at community@stackrox.com, and you can subscribe to the blog as well; there's an RSS feed. And then on top of the blog, there's the community page, which kind of gives you the links for everything. So, you know, finding the project on GitHub, there's the open source KubeLinter project that was also released at the end of 2020. Again, office hours are the third Tuesday of every month, like you're watching right now, and of course the Slack channel and Twitter. And you did mention the community charter, so there's the code of conduct at the bottom of the page there, Matthias. For anybody who's joining the community, I recommend you check it out. There are three individuals who are part of the code of conduct team. If there are any issues, feel free to contact them; there are the individuals as well as a general email. Yeah, awesome. Anything else catching your eye on the webpage that we should highlight? Of course, especially since GitHub recently introduced the feature of following organizations, feel free to follow the StackRox organization on GitHub. And of course, leave a star on the repos that we're mainly using, especially stackrox/stackrox, KubeLinter, and maybe the Helm charts if you're interested. Yeah, for sure. There's a ton of public repositories on that GitHub account. And stackrox/stackrox, the main repository, is where you can get the deploy script that we'll be showcasing a little bit later in the episode. For documentation, there is the docs link at the top there. If you can click it, that'd be awesome. There you go. So, new to StackRox: there's installation with Helm, installation with roxctl, and the OpenShift Operator.
Now, if you click some of the documents, it takes you to the RHACS paid product, which is funny because, just for everyone watching at home, they're the same product. The only thing that really changes is the container image that you're pulling; the open source images are in Quay.io. But when it comes to setting up and using the application and all of the how-tos, they are the exact same. And we're looking for some solutions for the documentation to try to make this clear. And again, any questions, feel free to ping us on the Slack channel. Ah, Matthias, that was a mouthful. Did I miss anything? No, absolutely not. Maybe to add to that: currently we're linking to the commercial documentation, but one of the challenges that we're still ironing out is the fact that we don't provide release images yet. That is something that we need to sort out in our build process, and we are currently working on it. I actually have an open PR to start the work and lay the foundation for that. And as soon as that gets merged, we are planning on making some progress on, for example, providing release versions, publishing them, having an open source flavor. So really elevating the open source build from the development deploy that we currently have to a full stable release that also has its own release tags, basically following suit with the commercial product, but with open source release tags. This is, again, a work in progress; that's something that we need to do right now, and it will change in the near future. Just as a preface to all the deployment options that we're talking through today. Awesome. What's next on the list? I think we've got to showcase the GitHub repository, right? And all the public repos that everybody can check out. That's actually something that we could do, yeah.
So let me think, let's see. I could showcase this one if it's a little bit easier, especially because I have the, let's say, public flavor. So if you click on the StackRox repository, this is github.com/stackrox. You will come across all of these public repositories, and I believe there are something like 40-plus public repositories in this GitHub account. So, like Matthias said, you can follow the StackRox organization on GitHub and you'll see a plethora of resources. stackrox/stackrox is the main one to get started with the application, there's also KubeLinter, and of course the Helm charts, the StackRox collector, and a ton more goodies for you to check out. And look at that, KubeLinter has almost 2,000 stars down there. We've got to get StackRox up there, breaking 500. But for those wanting to get started, stackrox/stackrox is the place to go, and the README will have everything you need to get going. Matthias, anything you want to highlight in this repository? So maybe not only in this repository, but we do have the dev-docs repository, which contains more in-depth guides on how to get started, and also covers common tasks that you might encounter. We're still building and extending this, but in general, if anyone in the community is interested in deep-dive guides, or would like to see more or different guides, please feel free to drop us an issue, or even a PR if you already have an idea or a rough draft of things that you would like to see in the documentation. That's awesome. All right. So, did we miss anything? I feel like we've highlighted most of the core places for people to get started. Should we just get into it and do our first install? Yeah, sure. So we should maybe differentiate between doing development deployments and production deployments. The code base is the same, although obviously it has some slight changes.
The development process is already done; that's something we're happy with as is. And again, the production deployment of the OSS flavor is something we're currently working on. We encountered some smaller things that we would like to fix, so that will be coming soon. For illustration purposes, maybe the easiest way to get started is to just run the deploy script. So, how does the development workflow work? If you would like, I could show off a test deployment with a local Docker Desktop. So as you can see, this is my Docker Desktop, running with Kubernetes enabled. And this is one of the possible options that you have. Of course, you could also deploy into a remote Kubernetes instance; basically anything that is happy with kubectl and Helm charts is something you can install into. I definitely know that everything will work with Docker Desktop. minikube is also working. I believe k3s needs some smaller tweaks, but I think Andy Clemenko is one of the people who figured those out, and I think he documented them in a gist that we could link if anyone is interested. But for now, to get started, let's say we are already at the step of following the, yeah. I was going to say real quick, do you mind making the text a little bit bigger for the viewers at home? That'd be great. Very colorful terminal though, very cool. So as you might already see, I'm not running on master, I'm running on my own development branch, which I'm currently working on. So this might be a little bit more unstable than our current master. The idea is, if you follow the quick start, you will end up at a point where you basically do a make image, which builds your local images.
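The quick-start steps he's describing look roughly like this; this is a hedged sketch based on the conversation, so check the stackrox/stackrox README for the authoritative commands, and note the build needs a working Docker setup:

```shell
# Sketch of the dev quick start (paths and targets assume the
# stackrox/stackrox repo layout; see its README for specifics)
git clone https://github.com/stackrox/stackrox.git
cd stackrox

# Build the local container images; as noted, this takes roughly
# 10-15 minutes depending on your hardware
make image
```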
For the sake of time, I will skip this now because it takes roughly 10 to 15 minutes depending on the hardware you're running locally, and I think we don't need to show the Docker build process. It's like those cooking shows, right? We did the first couple of steps, and now the next step is already pre-built. So with some time magic, we are now skipping the next 10 minutes. What you do next is go into the deploy folder. This is the folder that contains all the different deploy scripts, and as you can see, we have, for example, Kubernetes. And for Kubernetes you have two options: deploy.sh, which is for remote deployments, so something that is not running on your own machine; and deploy-local.sh, which is especially tailored towards a locally running Docker Desktop, for example. Before I do that, I just remembered, I should make sure that I'm running on the correct Docker context, and I am. Just making sure that I'm not deploying into one of the GCP clusters that I'm currently running for development purposes. So the development workflow is: you make a change or build a feature, you execute make image, and then you can deploy that to your local Docker and Kubernetes setup. And the neat thing is, this does everything for you. While our documentation distinguishes between installing Central and the secured cluster services, our deployment script does both for you. So it's a one-stop shop: you just run it, it deploys Central, it adds a secured cluster to Central, and it gets you up and running with a single command. Very cool. Yeah, there was a ton of script output there too. I think it outputted the password as well as how to log in at the beginning there. Yes, I'm not entirely sure if it still does; it should. So I am... Leave it right there, right?
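The local flow just walked through can be sketched as follows; the context names are examples from a typical Docker Desktop setup, not guaranteed values, and the script path follows the repo layout described above:

```shell
# Verify you are pointed at the local cluster, not a remote one
# (context names vary; docker-desktop is the Docker Desktop default)
docker context show
kubectl config current-context

# Deploy Central plus a secured cluster in one shot
./deploy/k8s/deploy-local.sh
```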
So just underneath that are the deploy paths. When you're deploying, the password is output to a specific text file in your repository, and you'll need it to log in. If you go through the README, it is there; it's just not necessarily standing out with a "hey, here's the password." So it's worth pointing out. Yeah, sorry. Rockhound asks in the chat, real quick: is it possible, or intended, to be used in an air-gapped environment? Yes, that's the architecture. So along with being able to deploy in a Kubernetes cloud environment, if you are on-prem and you want to be in an air-gapped environment, StackRox is designed to run there as well. So what I'm using here is a workflow script. We have the stackrox/workflow repository, which contains multiple quality-of-life tools. For example, a teardown script that will tear down all traces of StackRox being installed in your cluster, which is especially handy if you're doing development and testing work, as well as logmein, which is a command that launches the browser and logs you in, so you don't have to do the copy-paste password dance. So this is also kind of nice and heavily recommended, especially if you're doing development where you tear down and redeploy these clusters a lot. Very cool. So as you can see, this is the result of running deploy-local.sh: we have an up-and-running StackRox Central with the dev build from my custom branch. And now if we look at Platform Configuration, let me maybe zoom in a little bit. If we look at Platform Configuration, and then Clusters, you can see we already have a secured cluster, which is also running on your local machine. And you are basically ready to go. So you can just start scanning your local environment and it will generate all the data that StackRox is usually able to, yeah, able to scan, or able to, no, I forgot my words.
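The quality-of-life helpers from stackrox/workflow mentioned above might be used roughly like this; the command names follow what's said in the conversation, and that repo's README explains how the helpers get onto your PATH, so treat this as a sketch:

```shell
# Clone the helper scripts; follow the repo's README to source them
git clone https://github.com/stackrox/workflow.git

# Remove every trace of a StackRox install from the current cluster,
# handy between development iterations
teardown

# Open the Central UI in a browser, already logged in,
# skipping the copy-paste password dance
logmein
```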
It's all right, it's getting late for you, right? I think it's way past dinner time where you are. So it's all good; this looks great. So we have one question: "We're running several OpenShift clusters, each in a disconnected environment with a complete GitOps approach. I'd like to see more YAML/deployment examples in the documentation. Does the deployment work with OpenShift?" So I guess the question is about the RHACS documentation. That's something we'd be happy to help with; please feel free to reach out in the chat and we can put together something better for a disconnected setup, especially for your use case. But OpenShift, and obviously RHACS, has an operator. In fact, it's my favorite way to deploy the application; the operator is awesome on OpenShift. Definitely worth checking out. So what's typically your workflow if you're doing development, making some changes, and going into the application? What are some basic checks you go through to make sure that everything's up and running correctly? So usually my most important step, and it doesn't matter whether you're deploying through deploy.sh, roxctl, or even Helm charts, is to check your clusters: you will see all the clusters that you have added to Central in the Platform Configuration, Clusters area. There you can take a look, and usually the easiest thing to check is, of course, is everything healthy? So the internal health checks of the pods themselves, and the cluster-to-Central communication. What I usually do as well is run a compliance scan, because that's one of the components that fails loudly, or at least fails visibly. What I would generally recommend as well is to take the stackrox namespace, do a get pods, and see if everything is fine or if we have some restarts.
So one or two restarts can happen. Collector is sometimes prone to restarts, especially depending on your Docker Desktop version; there were some versions where Collector was missing kernel drivers for specific kernels in Docker Desktop. That's something to know, but usually it's not a big problem. The important parts are Central, Scanner, Scanner DB, and Sensor. Of course, Collector is also a vital part of the StackRox platform itself, but Collector, as the name might imply, collects metrics and runtime information from inside containers. So it is one part of the platform, but the platform would still be able to work and run without it. Sort of similar to a Fluentd architecture: it's collecting and shipping to Central and the database, but it can restart and then resume collecting when it comes back up. Awesome. And that came up extremely quickly. Obviously it had been previously built on your localhost, but we were up and running in what, two, three minutes after you started the containers? So that's great. So usually what takes the longest is waiting for the initial Central to come up, and then the rest of the cluster is usually deployed quite quickly, as you can see in the age column. Yeah, there you go. No kidding. If we're deploying remotely, so you mentioned there's the deploy local and the deploy remote, are there any specific things in terms of tags or variables that are worth calling out for people to be wary of? Let me think about that. So... Yeah, it's a little bit of a tough question. Generally, it's a lot. Right. Yeah, there is so much that you might think about. So when we're talking about development deployments, if you don't make changes to the UI, I would recommend exporting the skip, how's it called?
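The pod-level health check described here, sketched with kubectl; the component names match what's listed above, though the actual pod names carry generated suffixes:

```shell
# List the StackRox pods and watch the RESTARTS column;
# expect central, scanner, scanner-db, sensor, and collector
# (collector runs as a DaemonSet, one pod per node)
kubectl -n stackrox get pods

# Narrow to the core components if the namespace is busy
kubectl -n stackrox get pods | grep -E 'central|scanner|sensor|collector'
```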
SKIP_UI_BUILD, which, as the name might imply, lets our build process skip the UI build. In day-to-day operations, if you don't make changes to the front end and only make changes to the backend or backend services, you usually don't need to rebuild the UI, and the UI build takes quite a lot of time. So that's one of the recommendations. Let me think. The deployment itself changes a little bit, especially if you don't make use of the deployment scripts. Obviously, you then need to create a cluster init bundle, and you need to download and provide that YAML to, for example, the Helm templates. But I don't think I have any other recommended settings or environment variables for that. There is one more thing, though, so if the community is interested, please let me know, because there is one final thing that I definitely enjoy and that makes my life a little bit easier. If I'm, for example, working on Central, which happens quite often, you don't need to rebuild the whole Central image every time. There is a way to hot-mount your local Central binary into the remote cluster that you're working on. So that's almost like a live reload, with the added step that you need to compile a Go binary. That's something that is quite nice, and it shortens your average build time from 15 minutes to under five minutes, hopefully, depending on your hardware, of course. So if the community is interested in that, please let me know. I am happy to write a dev-docs article or even a blog post about it. Definitely, and we can always come and quiz you at next month's second-Tuesday engineering meeting. So that'd be great. Now, to get users started, do you have a remote Kubernetes instance prepared? I do have a cluster prepared to actually deploy to. So I'm feeling a little bit adventurous and trying my luck on the live stream today. Okay, yeah. Well, one of us is going to be, right?
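The UI-skipping build recommended above would look something like this; the variable name follows the conversation, but the exact value the Makefile expects is an assumption, so check the repo's build docs:

```shell
# Skip the (slow) UI build when only backend code changed,
# then rebuild and redeploy as usual
export SKIP_UI_BUILD=1
make image
./deploy/k8s/deploy-local.sh
```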
Because I have the same thing set up. So we'll see how you do, and then we can just debug as we go. So that's great. I mean, if you want to share, you can obviously do that as well. I'll let you take the lead, that's fine. But I will comment on a couple of things: the first time I went through the open source deployment, there were a few little hiccups, although honestly it's extremely smooth now. The only thing I have trouble with is just finding the password sometimes. But by design. Yeah, so you obviously had two clusters: you had your local setup, and now you're switching context to your GKE cluster. Right. So this is a naked cluster that is not running anything. And just to make sure, I will quit my Docker Desktop. So I have no cards up my sleeve. Very nice. Let me think. So we actually have multiple options, or multiple routes to go. There is the interactive installer, or, ideally, what I would recommend, as you already mentioned, the OpenShift Operator, which is the most comfortable and stable way to deploy the platform. Unfortunately, we don't have that for the open source flavor yet. I'm actually not entirely sure if we are planning on doing that, so if the community would be interested in an OpenShift Operator for the open source product, please let us know so we can look into it. The next best thing you can do is Helm charts, which are very great. Helm charts basically give you the option of doing rollbacks and targeted installs and uninstalls of bigger deployments. For the people that are not aware of them, have a look at Helm charts; they're great. And the final thing, or the next best thing we have, is roxctl, which is our local command-line option. It can also generate deployments, and it can also generate Helm charts.
So I have actually just implemented something to generate Helm charts that point to the default Quay.io open source repository. So let's see if that works. If it does, that might even be a nice thing to have that would soon hit master. Awesome. So let's see. Yeah, generally, Gantar, correct: if you can go for Helm charts, always go for Helm charts. It's heavily recommended. So let's see. Yeah, and as a big Argo CD user, you should probably join the Christian Hernandez streams. I'm sure he'll be posting about StackRox and Argo CD soon. So let's see, what we're doing right now is telling roxctl: please generate us a Central instance, or rather the information to deploy a Central. I should have tested this beforehand; if not, I can show the default deploy script as well. Yeah, sure. So what we have right now is a central bundle, and the central bundle itself has Helm charts. We can now have a look at the README, and I think, if I remember correctly, you can just do a helm install. Nope. Yeah, that's unfortunately not what I can do right now, but... Well, while you type away, I'm just going to answer Rockhound. It is funny. For a lot of people at Red Hat, obviously, OpenShift is just, let's say, an enterprise version of vanilla K8s, right, with some guardrails and some extra security features and things like that. And we want people who are coming in and can just fire up a Kubernetes cluster to be able to use the application. So hey, if you want OpenShift-only demos, I'm more than happy to oblige, but I find most people who are watching are familiar with vanilla K8s, so why not focus on that? So to give everyone an idea, what we've done right now is generate Helm charts, which can then be deployed with the Helm command. And I'm just checking, let's see, that's correct. Oh, I just remembered.
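For reference, the roxctl-to-Helm route being demonstrated looks roughly like this; the subcommands are from the roxctl CLI, but the output directory name and release name here are illustrative assumptions:

```shell
# Generate a Central deployment bundle interactively
# (roxctl prompts for the settings it needs)
roxctl central generate interactive

# Alternatively, render the Central services Helm chart and install it
roxctl helm output central-services --output-dir ./central-services-chart
helm install -n stackrox --create-namespace \
  stackrox-central-services ./central-services-chart
```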
Actually, we can't do that, because although we are pushing dev builds, we are not pushing builds of feature branches to Quay.io. So the image that I would currently deploy to GKE isn't publicly available. Shucks. Sorry, no worries. Helm chart version two coming up next month, after these releases. So, I mean, I could showcase the deploy script that would be on GitHub, the one most people would see; I already kind of deployed with it. If we're interested, let me just go over to stackrox/stackrox. And I've actually been meaning to push this README change, because there are a couple of things I've been meaning to update. But yeah, the Helm charts, like you said, roxctl uses Helm charts underneath to create and generate the manifests, and even the deploy script, I believe, uses Helm charts, correct? If I remember correctly, the deploy scripts also deploy via Helm. Yeah, while you're showing the Helm installation, let me boot my local cluster back up and check, because it should list it. All right, but for people who are following along at home who want to deploy to their vanilla K8s cluster, whether it be GKE, Azure, AWS, whatever it is: if you're in the main StackRox repository, towards the bottom of the README, which I'll be moving to the top, you'll see these steps for orchestrator-specific deployments, whether that's Kubernetes or OpenShift. Something easy, if you've downloaded the GitHub repository and you want to try it out: run the deploy script and you will be, where is it? You'll basically be running the deploy script and setting an image tag. So you set the environment variable MAIN_IMAGE_TAG, which will be latest. Again, I'm going to update the README so that this copied line works every time. And what you'll get, if I can swap over to my other screen, is, whoop, that's not it. Let's go back over. So over on my other screen, you'll see this.
So, MAIN_IMAGE_TAG=latest, and we're running the deploy script, and you can see all of the variables that are set by default: the in-cluster Central endpoint, scanner support, the collection method (eBPF), the stackrox namespace. By default, we want to deploy to that stackrox namespace; everything will just work so much more smoothly, because we obviously use Kubernetes-native networking underneath the hood. And when you go down, you'll see this part, right? It's deploying Central, it's deploying Scanner. And for the people who want to log in, just note that for the administrator login it says to log in with username and password: the default username is admin, and the password file is going to be located, I believe, under deploy/k8s/central-deploy/password in the repository. The README gives the exact location, but just an FYI that you have to go and find the password. Do not try to reset it in the Kubernetes secrets either, because it is a bcrypt-hashed password, so a plain-text edit will not be accepted. Am I correct here? Yes, that is something that I've seen as well. I'm not entirely sure if we even have, no, I don't think you should do that from the outside. I think there is a way to reset just the admin password, but I'm not entirely sure, because honestly that's something that I as a developer never ran into: if I lose my admin password, I just do a teardown and redeploy, because it's so fast. And realistically, if you're setting this up for multiple users, you're going to want to set up OAuth or some other authentication on top of that. It's kind of an anti-pattern to just sit there with an admin password. But it is good for first use, so you can get access to it. Rockhound asks: if I start looking for a security solution, why should I go for StackRox over Aqua, NeuVector, or Sysdig? Is there a main differentiator I didn't get yet? The main differentiator: it's Kubernetes-specific.
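The vanilla-K8s path just described, as a sketch; the password path follows what's said in the conversation, so double-check it against the README of the commit you're on:

```shell
# Deploy the latest open source dev build to whatever cluster
# your current kubectl context points at
MAIN_IMAGE_TAG=latest ./deploy/k8s/deploy.sh

# Retrieve the generated admin password (username is "admin");
# do not edit the Kubernetes secret instead, it holds a bcrypt hash
cat deploy/k8s/central-deploy/password
```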
So with a lot of the other platforms, you'll see a lot of container-specific information focused across cloud workloads, kind of pushing the CSPM methodology. StackRox is very specifically Kubernetes-native. So as we get into the dashboard, you'll see a lot of Kubernetes vernacular and ways of managing risk that you won't see in other platforms; stay tuned and you'll see a little bit more. But yeah, as we go through, you'll see this whole list of all the YAMLs that are being generated. I'm not going to read all of those out; I think you'll see everything being deployed. After all that, what I did was: I got a couple of services, created a load balancer, also just set it up for localhost, and then I deployed the Sock Shop demo. So if anybody wants to check it out, I can make it public. But at the end of it, I was able to get access to the dashboard. Let me swap over. This thing is not the best for swapping between screens, but I'll make it work. And we ended up with this. So, like Matthias said, let's go to Platform Configuration and then System Health. Everything's looking good. And really, that was just: run a script, and it worked perfectly with GKE. So now let's go and make sure we click to scan the environment, and this makes sense, right? We don't want to install it and then just have our application going and scanning automatically; we want to make sure that we're not incurring excess CPU load unless we actually want to click it, because I have seen people do thousands of container scans and blow up their memory. So again, a very solid process. I think one of the things I like best about this is Risk, because risk is something that everybody has their own definition of. And in Kubernetes, it is slightly different, right? It's not the VMs of the past.
We need to look at things like how deployments are set up, the deployment details, the namespace, the port configurations, what ports are exposed, things like that that are often missed. And even things like security contexts, secrets, volumes, the specific image names; everything's in there. And of course, we have your typical compliance, vulnerability management, and violations. One of my favorites is the Network Graph. Let's check out, I really want to check out the Sock Shop that I set up. So you can see the Sock Shop application and how it is exposed. Obviously we have some things exposed to external entities, and we can create network policies and generate them as well. But I don't want to get too far into this; I'm kind of saving a whole deep dive for next month. That was the plan: this was about getting started, getting into the application, and looking for feedback from all of you who are watching, hopefully at next month's engineering meeting. And any questions in the chat, we'd love to hear from you. We have a couple more minutes left, but Rockhound, thanks for joining and chatting with us. "On OCP, I have different RBAC rules: view, admin, self-defined rules like Argo." On OCP, there are different default rules, right? Instead of keeping the default service accounts, OCP likes to go and change those, especially per namespace. That is actually a security feature, because you don't want a default service account mounted into every single pod. StackRox will actually alert on something like that if you're using the default service account. So it is definitely different in terms of just being able to go and deploy, but in general, most applications can deploy freely. And, not to get too preachy, but that is kind of the big difference between vanilla K8s and something that's an enterprise version of Kubernetes, right? It's a little bit more security guardrails.
Yeah, it could be annoying sometimes if you're just looking to deploy some random application, but in general, it's a lot better than deploying some crazy vulnerability with a default service account into your clusters. Yeah, overall, that's one of my favorite ways to deploy it. If you're just looking to check it out on a vanilla K8s cluster, use that deploy script and pull the password. You can run a port forward as well. You don't have to expose it publicly using a load balancer or anything like that, although it is fairly easy with a simple command. Anything else I missed, Mathias? I also did not want to just run the deploy script live. I didn't trust it, but sometimes finding that password can take a little bit long. Yeah, so let me think. But honestly, I like the deploy script, especially for the dev builds. I would always say the deploy script for development is a great idea, because you get so much quality of life. It just is a one-stop shop. You run it and you basically don't need to worry about anything else. Besides that, we are currently ironing out some smaller wrinkles around the whole open source. Yeah, we're calling it flavor. So the whole open source. Community edition. Yeah, the community edition is still a work in progress. We're actively chipping away at that in our current development sprints. So there is more to come. We're not stopping here. We're actually planning a lot of things that will improve the whole usability and onboarding experience, or deployment experience, for the open source and community editions. Besides that, let me think. But I would say, yeah. I think one of the other biggest things is, like we showed on the GitHub repository, there is an issues list. So if you see something, we would love for you to say something.
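[On the default-service-account point from above: the fix StackRox nudges you toward is giving each workload its own service account and not auto-mounting its API token. A minimal, hypothetical sketch; the names and the Sock Shop image tag here are illustrative assumptions, not something from the demo:]

```yaml
# Dedicated, least-privilege service account instead of "default".
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sock-shop-frontend   # hypothetical name
  namespace: sock-shop
automountServiceAccountToken: false
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-end
  namespace: sock-shop
spec:
  replicas: 1
  selector:
    matchLabels:
      name: front-end
  template:
    metadata:
      labels:
        name: front-end
    spec:
      serviceAccountName: sock-shop-frontend  # not the default SA
      automountServiceAccountToken: false     # no API token in the pod
      containers:
        - name: front-end
          image: weaveworksdemos/front-end:0.3.12  # assumed demo image
          ports:
            - containerPort: 8079
```

[With a spec like this, the "deployment uses the default service account" style of alert mentioned earlier simply doesn't fire, and a compromised pod has no cluster credentials to steal.]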
It's one thing to post it in Slack and say, hey, I don't know how to do this, but if you really want a lot of eyes on it and some actionable intel, I would recommend you go and open an issue and say, hey, Michael, Mathias, this thing is not as simple as you make it out to be. Please fix this, or at least elaborate and make it clear, and we'll be happy to help you. Issues are definitely the way to get eyes on problems. So generally, maybe for everyone to know, if you open an issue with the StackRox project, what usually happens is someone will stop by, have a look at the issue, and then give you a preliminary evaluation: whether we need some more information, or what will happen with the issue. And as soon as we're okay with the issue and we have all the information that we need, we will take the issue and basically relay it to internal engineering and discuss it with them. And then, I guess, yeah, we'll come back in the next community meeting and basically be open for discussion. We will come prepared. If you have any questions or would like to discuss your issue, stop by in the community meetings. That's what they're for. And to get up to date on the calendar, if you're in Google, Outlook, whatever calendar you're using, you can subscribe to community@stackrox.com. I need to add a little blurb that pops up in this so everybody can see it, yeah. If you're a Red Hatter, there's of course another way to log issues. That's true. If you're using RHACS, there is the internal site. This is specifically for the community edition. And of course, if you want to tune in to any of the meetings: community@stackrox.com, email, subscribe to the calendar. All the events are public, with the Zoom information and the YouTube and Twitch links for you to join. That's all I'm thinking about. Oh yeah, and if you join the engineering meetings, we do have monthly Rock Stars. So there will be swag that gets handed out.
The more issues and the more you contribute, the more likely you are to become a Rock Star. So we appreciate everybody that helps out. We understand that you are also volunteering your time when you open up issues, and complaining to us is actually extremely valuable. We recommend it, and we love to hear from you. So, Mathias, is there anything I missed? Anything you want to cover before we head out? I guess not. If anyone in the community wants to reach out, feel free to do so, through GitHub issues or through the Slack channel, for example. If there is another way of communication that you would like to have, let us know. We're always happy to adapt, I guess. That's for sure. Until then, check in with us in the Slack channel, or we'll see you the second Tuesday next month at 12 p.m. Eastern on Zoom. And I'll be posting in the Slack channel and on LinkedIn as well to make sure you guys get the link. Thanks, everyone, for all the questions and comments. And we hope to see you next month. The plan is to walk through the dashboard and really show you all the tips and tricks on how to use StackRox. So look forward to seeing you in a month. Awesome. Take care, everyone. Bye.