Thank you, Candice. And hello, and welcome everyone to today's webinar. My name is Martin, and today I will be talking about cloud-native application security and its relation to open source, alongside Tim Miller, a technical marketing engineer at Outshift by Cisco. First, let us introduce ourselves in just a few sentences. My name is Martin, and currently I'm a product manager at Outshift by Cisco, where we are focusing on cloud-native application security. I have a strong engineering background, and I was lucky enough to work on a lot of different open source projects in my past, especially with my previous company before we joined Cisco, where I helped launch some successful open source projects that have also been continued within Outshift. Two of those, the logging operator and Bank-Vaults, are now on track to become CNCF sandbox projects. I think the logging operator has already been accepted, and hopefully it will happen with Bank-Vaults soon as well. Nowadays, I'm mostly working on OpenClarity alongside a few other things, but that's enough about myself. Tim, do you want to introduce yourself? Sure thing. Thanks, Martin. My name is Tim Miller. I've been a longtime Linux user. You'll see in the little bio there, I jokingly say I've been there and done that and got many t-shirts. So a long-time Linux user from the days of Red Hat Linux, before the Enterprise days, and I was a system administrator for quite a number of years, which evolved into the rest of all those things that we build into data centers. I've run networks, I've run high-performance computing environments, and automated all of that. I'm a big programmability and automation fan and have run the gamut in that space. For those dinosaurs like me out there, CFEngine might ring a bell; then Puppet, Ansible, and of course nowadays Terraform.
So a long-time open source user. I've not been much of a contributor, but as my background shows, I'm a huge fan of the space, a huge supporter, and a believer. So I'm happy to be part of this conversation and to show you what we're lining up here with the OpenClarity project. Cool. Thank you, Tim. OK, so let me start by talking a bit about the current state of application security in general within the cloud-native space, because we talk a lot about how cloud native changed the landscape, how we were introduced to microservices, containers, and Kubernetes, and how it improves scalability and resilience and gives us easier upgrades, and a lot of other great things. But from an application security standpoint, it also introduces a lot of challenges. First, because, well, of course, we are running a lot more services than before, and that means that we have a broader attack surface. We are running a lot of cloud resources, from serverless applications to containers to different kinds of Kubernetes installations. It complicates things a bit, and it means that some new attack paths are appearing that were not there before. These environments are also dynamic by their nature, and that means that it's really hard to maintain a constant security posture, because you have resources that come and go. Maybe you ran a security scan a few minutes ago, and now there are some new resources coming up; how do you know whether those are secure or not? So things are changing a lot more frequently than before, and that is harder from a security standpoint. We also have a lot of secrets and credentials flowing around in our cloud-native environments. These are distributed across our VMs and across our Kubernetes deployments. Somehow we need to manage those, and that's also a bit harder than before.
And along with the technical changes of Kubernetes came the whole DevOps approach, where the frequency of deployments is now much higher. We are doing much faster release cycles, which means that we are introducing new features, and effectively new code, into our production environments more frequently than before. That means that we need to shift our security efforts left, earlier in the release cycle. So we need to introduce new security checks within our CI/CD pipelines, within how we build applications, how we build containers, how we distribute those containers, and actually within how we write code. Security starts with writing code, actually. If you're writing down a few lines of code, the best way to prevent some kind of security risk from happening is to let the developer know right away that they are writing something they shouldn't. So things are changing from the security standpoint in cloud native, and organizations recognize the importance of this. They are constantly looking for solutions that can help them overcome these new challenges. But it's not easy, because on one hand, traditional tools often fall short, just because they focus on perimeter-level security; they focus on how things work with monoliths and traditional environments. Of course, there are a lot of new tools appearing in the cloud-native space, but as with all other things in cloud native, it is a really fragmented space. There are a lot of different solutions out there. If you look at the CNCF landscape and just look at the security section, you will see lots and lots of companies and projects that are trying to break into this market. On one hand, that's good, because you have a lot to choose from, but on the other hand, it makes things difficult, because you can end up in a kind of security limbo, where you don't know what to focus on and what to prioritize.
Depending on the company or the project, they are solving different challenges, so it's hard to figure out what to do and what tools to use. And of course, as I've mentioned with shifting left, it means that we are now working more closely with DevOps people. It's not like before, where security was often a silo. No one knew about the security guys; they were there somewhere in the building, taking care of application security. Now it's becoming different. Now DevOps people need to talk to developers, who need to talk to security people, who need to talk to SecOps people. So this is also a cultural change, not only a technological change. But how does it relate to open source? Of course, we have lots and lots of proprietary tools, but how does it look from an open source standpoint? Well, just like with everything else, we have a lot of good things and a few bad things in terms of open source security. The good is, of course, that we have a lot of tools to choose from. If you, again, look at the CNCF landscape, you will find a lot of tools that are open source and a lot of tools that are mature enough for you to use even in a production setup. The tools that we've highlighted here, Syft and Grype, are developed mainly by Anchore, and they are tools for generating SBOMs and for generating the list of known vulnerabilities from such an SBOM or from a container image. They can work hand in hand. Trivy is somewhat similar; it solves similar problems around SBOMs and known vulnerabilities. These projects have thousands of stars on GitHub. They have vibrant, active communities. People are contributing, and people are using them day to day. So this is good. Of course, I've just highlighted three of these projects, and these are closely related and are solving more or less the same problems.
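To make the "hand in hand" part concrete, here is a minimal sketch of the Syft-to-Grype pipeline. It assumes both CLIs are installed, and `nginx:1.25` is just a stand-in for whatever image you care about:

```shell
# Stand-in image; replace with your own.
IMAGE="nginx:1.25"

if command -v syft >/dev/null 2>&1 && command -v grype >/dev/null 2>&1; then
  # Syft generates the SBOM from the image...
  syft "$IMAGE" -o cyclonedx-json > sbom.json
  # ...and Grype matches that SBOM against its vulnerability database.
  grype sbom:./sbom.json
else
  echo "syft and/or grype not installed; see the Anchore repositories on GitHub"
fi
```

Grype can also scan the image directly (`grype nginx:1.25`), but going through the SBOM lets you archive it and rescan later without pulling the image again.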
But there are other tools in the open source space: to name a few, Falco, which is quite popular nowadays and is a runtime security tool for Linux for real-time threat detection; or the Open Policy Agent, which can be considered a security tool as well; or Kubescape. So we have a broad set of these tools, and these tools are getting more and more mature. There are good communities around them, and in general, the space is improving. So if you have all these tools that you can use, what is it that is missing from this landscape? Let me talk a bit about that as well, and let me start with a simple question, or a simple example. Let's say that I want to cut the scope a bit, so I'm not talking about cloud security posture and things like that. I'm talking about running a virtual machine in the cloud. How do I know whether this virtual machine is secure or not? And here comes the question: what does it mean that it is secure? I'm running a VM in the cloud, on AWS, for example. What does it mean that it is secure? Well, it can mean a lot of different things. It can mean that I'm running container images on the system; are those images free of known vulnerabilities or not? Do I have any exposed secrets on my VM? Is there any malware installed on the system? Can I conduct a security audit? How can I ensure compliance? What about rootkits? So deciding whether a VM is secure or not involves answering a lot of different questions. I can use open source tools to figure out the answers to these questions, but what would that look like? I can run all these tools; just give me a terminal and I can type in the commands. I can use Grype, I can use Trivy, I can use Gitleaks and chkrootkit. What I will get is a lot of different outputs from a lot of different tools. But how can I be sure that what I'm seeing gives me a comprehensive picture of my VM's security?
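As a deliberately naive illustration of the correlation problem, imagine boiling each tool's output down to `CVE package` lines and merging them. The CVE IDs below are made up, and real scanners emit much richer (and mutually incompatible) JSON, which is exactly why this is hard in practice:

```shell
# Simulated, simplified output of two scanners that partially overlap.
GRYPE_OUT="CVE-2023-0001 openssl
CVE-2023-0002 zlib"
TRIVY_OUT="CVE-2023-0002 zlib
CVE-2023-0003 curl"

# Naive correlation: concatenate, sort, and drop exact duplicates.
MERGED=$(printf '%s\n%s\n' "$GRYPE_OUT" "$TRIVY_OUT" | sort -u)
echo "$MERGED"
COUNT=$(echo "$MERGED" | wc -l | tr -d ' ')
echo "unique findings: $COUNT"
```

Even this toy version only works because both "tools" agree on an output format; in reality each one names packages, versions, and severities differently, and that normalization is the real work.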
How can I be sure that these outputs are giving me a complete answer? Well, it's a hard question, especially because I can have different tools for very similar tasks, just as I mentioned with Trivy and Grype. Those tools basically answer the same question: do I have any vulnerabilities in my SBOM? I can run both. Or maybe is it enough to just run one of them? Will I get all the answers? Maybe I will need to run both to get a good picture. But if I run both, then how should I compare the results? Maybe they report completely different outputs. I also have different tools for different tasks; not only SBOM vulnerabilities, but, as I mentioned, detecting malware or exposed secrets. I can run those other tools as well, but they will generate different outputs. What I would want to achieve is to get one page of all the vulnerabilities, all the risks on my VM, that anyone can find. It's not impossible to do with all these open source tools, but it's not easy. And another question is: OK, I just ran all these commands from a terminal, but should I run them periodically? Should I just run them once? Once a week, once a day, even more frequently? Where should I integrate these tools? Should I integrate them into my CI/CD? Should I just create my own dashboard? What we figured out is that it is a hard task. Integrating all these tools, correlating the results, and getting to one complete report is hard. It probably requires a dedicated team that someone will set up in a company. They will start working on these things, integrating all these tools into their toolchain, and probably start building a platform of their own that will generate reports and answer all these questions. But it is a significant effort. And basically, that's how we arrived at our question of what we are trying to solve here. I will let Tim talk about the next part. All right, thanks Martin.
It looks like if we want to switch the share, you have to give up the share; it's not going to be taken away. OK. We thought that might be the case. All right, with that: what are we trying to solve? To sum up the challenges we just heard from Martin, it really comes down to four guiding themes, as we'll call them. We need to build a common platform that takes all of those different outputs, normalizes them, stores them, and, more importantly, correlates them for a given particular asset, as we use the term. That asset can be a container, it can be a VM, it can be whatever artifact we're looking at from the application perspective. We need to correlate all of that normalized information so that, for a specific asset, we really understand what's going on with it and what risks are associated with it. And ideally, as I mentioned in my background, being a big champion of open source, it's clear that the open source community is the right place to build and develop this platform. So we want to establish this project to provide a foundational level of security, bringing together all those risks, providing that visibility, and to use it as a starting point for understanding the risks in your environment. Now, we didn't get to this point without having already begun the journey and learned some lessons from the bumps and the successes along the way. We started this journey by looking at Kubernetes and the workload security around it. Our first effort was about three years ago, I believe, if you've followed Outshift and all the names we've had before that, and it essentially culminated in the KubeClarity project.
There we looked at supply-chain-related risks: container, SBOM, and CVE analysis, bringing together the Syft and Grype results Martin mentioned, and Trivy; it supports all three of those tools. And it does that in a modular way, and that's the important part. When we spun up KubeClarity, and in what we're looking toward going forward, we don't want to prescribe a specific set of tools. The landscape is huge, everybody has their favorites, and as the space evolves, new favorites will come up and old ones will fade away. So we want to make sure that this is a modular approach, and that's what we started with in KubeClarity, being able to support Syft, Grype, and Trivy at the same time. And so the project brought it all together. One of the problems in all of this space is visibility, right? We tend to know most of what's going on, but there can be things that happen, things that get deployed and then get forgotten. So we provide comprehensive discovery of all the resources in the Kubernetes environment and then calculate those vulnerabilities. And most importantly, the last bullet point: provide a stable contact point, meaning an API. One of the nice things about KubeClarity is that we've provided a published API to the platform for Kubernetes security that you can consistently query against. So no matter which modular scanner you put into place, the results and findings are gathered in a consistent way. That allows you the stability to leverage that information and yet move with the times, or with the changing needs of your project and your application.
And so here is what it looks like; if we've got time, we'll see it in the demo at the end. This is the KubeClarity project, and at a glance we provide the top risks, the summary dashboard of all the things that we have found, and of course you can drill down into those particular findings. The next evolution in our journey in the Clarity suite was looking at APIs. It became very clear over the past couple of years that API security is the top vector for many attacks now, and it has challenges similar to Kubernetes security. There are APIs that get deployed all the time because developers are rapidly innovating. The term is feature velocity; we're driving a lot of feature velocity and just putting out a lot of capabilities to help our business grow. So we need to find them all, identify what traffic is going between them, East-West, as well as what external APIs we're consuming. Because at the end of the day, we're moving so fast that a lot of the challenges in this space come from using third-party sources. I'm a Python person, so I'm not going to write the module to do REST API calls; I'm just going to use the requests module. And I'm not going to build my own API framework; I'm going to use something like FastAPI or Flask. So I'm going to use all these third-party capabilities, and similarly in the API space, there are capabilities out there that I'm just going to leverage. My favorite example is if I want a little weather dashlet in my web interface, I'm not going to write that myself; I'm just going to query data from weather.com. But I have to know about that. What are my developers using? That's the first and most important part. And then, if there's anything we can do to help secure it, that's the next part. Those third-party APIs have a published OpenAPI specification.
If our developers have the time and the discipline, they're writing those specifications for their own APIs too, so we can do a specification analysis and look for best practices. But the nice thing about the project is that if we're brownfielding into this, if we rushed out and deployed our APIs and now have to come back and try to do some security around them, we can actually reconstruct the specification with APIClarity from the live traffic. That was a very cool benefit of the project. And once you know the spec, you can track its ongoing evolution. As Martin mentioned, today I scan it and it's fine, but what about tomorrow? The developer pushed something tomorrow, or three months from now that API is still published but nobody's using it, which probably means nobody's actively maintaining it. That zombie API could come back to bite me with a security vulnerability. And at the end of the day, we're really driving toward what the industry is looking for: the OWASP API Security Top 10. With the APIClarity project, we focus on broken authentication, broken authorization, and even BFLA, broken function level authorization. We do some modeling of the traffic and then look for broken function level authorization in it, so that's a great capability that comes with the trace analysis. Very quickly, here's the dashboard. We can see the inventory of the APIs, how they've been discovered, some status information, and then the findings. We're not going to cover APIClarity today because we're going to focus on some of the other things, but the project is out there; at the end you'll see some links about how you can engage with it. And there are a couple of other projects, but for the sake of time, we're not going to go into them too deeply. So to sum it all up, let's get back to the journey and what we learned.
We had some very solid successes with these projects. We got some traction and interest from colleagues in the industry around API security and around KubeClarity. We did a good job of solving those domain-specific issues with domain-specific backends, and that proved to be quite successful as a foundation for something broader. I work on a product that Cisco develops, and these tools were the backend assessment engines of that broader product that we're developing as part of the Cisco organization. So they were very good foundations, but the problem is that when we started this journey, we were focusing on those individual domains, and we weren't thinking at that point about the bigger picture. The industry, both with the commercial products we're driving in Outshift as well as the open source ones, is driving toward more and more consolidation and correlation. As Martin pointed out, at the end of the day a security engineer or SecOps person has to have all of this correlated, and the teams are just too small and too constrained to be able to build those platforms that Martin was talking about. Unfortunately, the different projects we built were designed very fast and very specifically for those domains, so we couldn't really extend the existing projects. That leads us to the new effort that we're launching: OpenClarity is going to be the new platform approach for bringing all these tools together across the different domains. So with that, those four driving principles we had before about building a platform boil down to the three particular things that we're going to focus on at the beginning.
Which is providing that platform with a comprehensive view. When we think cloud-native security, we're frequently thinking about containers and Kubernetes, but the reality is that virtual machines are part of these environments too, because lift and shift does happen, and there's a lot of consolidation still going on to get to a cloud platform. So virtual machines play a part. There are also serverless functions out there as well; some developers have shed virtual machines, containers, and Kubernetes and just gone straight to serverless. At the end of the day, we have to provide visibility across them all. But the key thing with any successful platform is that it has to be easy to consume. We use the term frictionless all the time here: we have to be able to onboard quickly, get results quickly, and then use that information to help build skill sets, understand what's going on in your environment, and start getting awareness of the things you need to learn and remediate. And so, with all those lessons learned and all those guiding principles, here's where we're going. We built a new project to start that foundation as well as to investigate a new space: that is the VMClarity project that you see on the screen, which we're going to demo here. It has all the bits and pieces of the foundation for OpenClarity. We've built it out and we've proven to ourselves that we can build a scalable backend. We've started with new capabilities because, as Martin indicated earlier, when looking at VMs and what secure means for a VM, you saw all the categories, from vulnerabilities to malware to misconfigurations.
So that was the perfect problem set to really prove to ourselves that we can bring multiple different tools into a common platform. That is why you see VMClarity as the starting point. All right, and with that, I'm actually going to jump into a demo and show it to you. So we will go to VMClarity; this is what it looks like. Let me move the video out of the way so I can see. There we go. All right, so like with the other Clarity projects, we give you a summary of the information up front: top risky assets, top findings, and their impact. Again, we're bringing together multiple tools and their information, so the findings are broken down by category: vulnerabilities, exploits, misconfigurations, secrets, malware, and rootkits. Those are the types of risk findings that we present when we talk about impact. We also have packaging information; the point of the packaging information is that it's the SBOM-like information for a VM. What are all the different packages, software libraries, and dependencies that go into those particular VMs? We pull that out and summarize those findings here. So these are the findings that were discovered, and if we want to drill down into them, we go straight to the findings page. Hopefully my port forwarding is working; if not, give me a second, let me pause the share, fix my port forwarding, and resume sharing. There we go, yes, my port forward had timed out. Apologies for that. All right, so we went from the dashboard directly to the detailed findings list.
You can see that we found, in the vulnerabilities view, which is the default, 5,800 vulnerabilities in this cloud account that has just a few VMs, including of course some deliberately unhealthy VMs, just to show some results. So we've got multiple pages of vulnerabilities we can look through. By default, they're sorted by the most recent; we can certainly sort by severity instead, so now we have the most critical ones first. The filtering capabilities are very powerful, so I can filter on things like "found on", basically limiting my results to the most recent scan. So now here are the most critical things from the most recent scan of the machines. We do maintain a history; you'll notice that the number of vulnerabilities dropped. The previous sets of scans are there, with a timeout, I believe, of 30 days, so findings can age out, but we do maintain that history. And that's CVE vulnerabilities. There are also exploits, the ability to exploit the VMs in my environment, and the misconfigurations that are present. Just to drill down into that super quick: I'm not checking for password strength, so that gets flagged as a misconfiguration, because that's not best practice. Then secrets, malware, rootkits, and packages. For the sake of time, I'll just show the kind of packaging information we present; again, that's the SBOM-like material of what package was found, what licenses it has, and what assets it was attached to, in this case a VM with that very cryptic AWS name. And if we really care about the scan details, this basically tells us: when did I see this last, and which particular on-demand or scheduled scan found this particular package? All right, so that's a detailed view of all the findings, everything we've discovered, and the way that we drive that is with scans.
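The severity filtering just shown can be mimicked on a findings export; the data below is invented, and a real VMClarity export has a different shape, so this only illustrates the idea:

```shell
# Hypothetical findings export: severity, CVE, asset.
FINDINGS="CRITICAL CVE-2023-1111 vm-frontend
HIGH CVE-2023-2222 vm-frontend
CRITICAL CVE-2023-3333 vm-db
LOW CVE-2023-4444 vm-db"

# Keep only the critical findings, like the severity filter in the UI.
CRITICAL=$(echo "$FINDINGS" | grep '^CRITICAL')
echo "$CRITICAL"
N=$(echo "$CRITICAL" | wc -l | tr -d ' ')
echo "critical findings: $N"
```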
So we can do on-demand scans here, and we can configure them in a couple of different ways. These are the ones that were one-time scans, as you can see here. In fact, why don't I just edit the existing scan? This is the scan definition, and I can define everything. I can set a particular scope, which in the context of clouds means regions and specific VPCs, so I can really target these scans at particular destinations. I can scan everything, whether it's running or not. I can set filters, as you can see here, based on a name or a label, and so forth. When I go to the scan types, this is where I can bring in all the different scanning technologies, based on the backends. And then, when do I want to run this? Do I want it on a schedule? Do I want to set it up to run once at midnight tonight, when I know there's a low period? Or do I just want to run it now, one time? All these capabilities exist from an orchestration perspective. And if I've got a very large environment and I want the scan to complete in a reasonable time, I can increase the number of scanners that operate in parallel. Since these run in your cloud environment, that will, of course, incur some cloud costs, so you want to balance this number against your budget. To help with that, you can also use spot instances to reduce the cost of the scanning. And there are two benefits to doing this in your own cloud environment. One is that you don't have to worry about provisioning anything yourself; the orchestration in VMClarity does that for you. But also, all of your data stays in your cloud environment. We're not pulling it down somewhere else, and we're not putting any demands on where that information, where that VM, lies. OK, so that's how you would set up a scan job.
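Summing up the knobs just walked through, a scan configuration roughly captures scope, scanner selection, schedule, and cost controls. The JSON below is an illustrative sketch only; the field names are invented and are not VMClarity's actual schema:

```shell
# Illustrative scan configuration; these field names are NOT VMClarity's
# real schema, they just mirror the concepts from the demo.
CONFIG=$(cat <<'EOF'
{
  "scope": { "region": "eu-west-1", "labelFilter": { "env": "prod" } },
  "scanTypes": ["vulnerabilities", "secrets", "malware", "rootkits"],
  "schedule": { "cron": "0 0 * * *" },
  "maxParallelScanners": 4,
  "useSpotInstances": true
}
EOF
)
echo "$CONFIG"
```

The `maxParallelScanners` and `useSpotInstances` knobs correspond to the cost/speed trade-off mentioned above: more parallel scanners finish sooner but cost more, and spot instances claw some of that cost back.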
You can see, from reviewing this, that if I wanted to kick this off again, I'd hit the play button. So even if it's a one-time scan, I can repeat it: push the button again, start another scan, and it will go off and run and generate more findings. And then there's the asset perspective. Again, the point of the platform is to bring all these findings together and relate them to a given asset, and this is that view. This is where I have all the assets that I've scanned in my environment and their related findings, and I can drill down into each and see them in a summary view. All right, with that, I'm going to quickly show you the additional information that we're looking to bring in from KubeClarity. Just a quick recap before I do that: this is the launch of the backend and the vision for OpenClarity. We've proven out the backend; it's modular and scalable. And now we're looking to bring in the Kubernetes-related information, where there's a great deal of overlap when it comes to CVE vulnerabilities, secret detection, and so on. So it's natural to bring in KubeClarity. Here is that dashboard, and some of the types of information that would come into OpenClarity would be, for example, the Docker CIS benchmark for those containers. If I go down into that: was this Docker container built in a healthy way? Does it match best practices? Then there's the application side, which looks at things through the lens of how they were deployed in the Kubernetes cluster. From a Kubernetes perspective, we start with a Deployment, a DaemonSet, a StatefulSet, some sort of generic thing that we call a workload, and from that we get a ReplicaSet and a container. So how was that deployment defined?
What container did it use? What settings did it use? All of that information can get rolled back up into OpenClarity to enhance that asset information. And then there's the runtime scan; in this project, similarly, you can do a one-time or recurring scan, and the runtime scan is just that project's way of doing it. This is the way that we get to look at vulnerabilities. One last thing KubeClarity brought, though there's no time to demo it: the ability to run those vulnerability assessments against containers, against software, against serverless functions in the pipeline. There's a CLI tool in the KubeClarity project that will get ported, which will allow you to do pipeline scanning and feed that information into the backend, to then correlate with the assets that we've seen thus far. And with that, I believe I hand it back to Martin. Okay, let me reshare. Okay. Thank you, Tim, for the demo. Before we finish, let me say a few words about how you can get started with this project, because it's one thing that we develop these open source tools inside Outshift, but it would be even better to have a vibrant community around these projects, and we encourage everyone to participate and just get started with them. Try them out. If you find any issues, feel free to report them. Maybe even pick some small things and submit a PR on GitHub. But really, how would you get started? Of course, we have installation instructions available on GitHub; you can follow this link. But we also wanted to have a really easy way of getting started with VMClarity, so let me share what that looks like on AWS. For AWS, we've created a CloudFormation template that is able to set up the whole environment on AWS.
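The AWS bring-up can be sketched as a couple of commands. The stack, template, and host names here are placeholders, and the block is gated behind a RUN_DEPLOY flag so nothing runs by accident; check the VMClarity README for the real commands:

```shell
STACK="vmclarity"
TEMPLATE="VmClarity.cfn"        # placeholder filename; download from GitHub
SERVER_IP="203.0.113.10"        # placeholder; use the deployed stack's output IP

if [ -n "${RUN_DEPLOY:-}" ] && command -v aws >/dev/null 2>&1; then
  # Deploy the whole VMClarity environment in one shot.
  aws cloudformation deploy \
    --stack-name "$STACK" \
    --template-file "$TEMPLATE" \
    --capabilities CAPABILITY_IAM
  # Tunnel the UI to localhost instead of exposing it publicly.
  ssh -N -L 8888:localhost:80 "ubuntu@$SERVER_IP" &
  echo "open http://localhost:8888 in your browser"
else
  echo "dry run: set RUN_DEPLOY=1 (with the aws CLI configured) to deploy"
fi
```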
So you just download the CloudFormation template from GitHub, deploy it through the CLI, or you can use the wizard, and it will get the whole VMClarity environment up and running in a few minutes. Once that's running, you can open an SSH tunnel to the server by copying the command from the readme, and once that SSH tunnel is live, you can just access the VMClarity UI from your browser. That's good, but here I have a small diagram that makes it a bit easier to understand what's happening behind the scenes, because you can deploy a CloudFormation template, but what does that CloudFormation template really do, and what kind of resources does it contain? Well, VMClarity has its own isolated environment, so it will create a separate private network within your selected AWS region that will, of course, have an Internet gateway, and that's how it will be able to access the Internet. It will have a public subnet inside that contains the VMClarity server, which is actually running the backend that is able to schedule the scans, and that's how you can interact with VMClarity; that's how you reach the UI. But it also has a private subnet inside that same VPC, and that private subnet is actually used to scan the snapshots of the different AWS VMs that VMClarity would scan. As Tim showed you in the demo, you can configure VMClarity to, for example, scan a specific security group. If that security group contains 8 or 10 VMs, those VMs won't be affected by the scan. Yeah, we really like the word frictionless, but it really is frictionless and also agentless, which means there is no agent that VMClarity deploys on your VMs to run the scans and so on. What happens instead is that we will create a snapshot of your existing VM. We will put it in this isolated environment. Your production environment will continue running without being affected. We will scan the snapshot in an isolated scanner VM. We will report the vulnerabilities.
We will connect them back to the original VM, and that's how things happen. It's really important to understand, because this is actually one of the main benefits of VMClarity: you won't be at risk of doing something to your production VMs while the scans are running. Those scans can fail, those can produce errors, and you probably don't want that happening in production. Agentless is something that is more and more popular nowadays; no one really wants agents on their VMs, and that's how VMClarity works. Of course, once you have your VMClarity environment up and running, you can get started with your first scan config. I won't go into details here, as it was walked through in the demo: that's how you define the scan config scope, how you select scanners, and how you configure the scan schedule. If you're not using VMClarity but you're interested in KubeClarity, the installation process is, of course, a bit different there. It is just like how you would usually do it on Kubernetes: you add the Helm repository, you install the Helm chart that contains KubeClarity, and you port-forward the KubeClarity UI to your localhost. That's how you get started. Again, feel free to join us on GitHub or any of our other social channels. We have a Slack channel where you can join the conversation. Feel free to do that if you happen to try out VMClarity, KubeClarity, or APIClarity and have some issues. You're free to submit an issue on GitHub, but you can also join Slack and ask us directly; someone will always be there and try to answer the questions that come up. Of course, KubeCon North America is coming. We will be there. We will have a nice booth, and we will have a lot of interesting things, even a coding challenge powered by Cisco DevNet. Come visit us, test yourself, and learn more about our open-source projects. We have other things outside of security as well, as mentioned, with the logging operator and Bank-Vaults, which are just going through the CNCF Sandbox process.
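For reference, the three KubeClarity installation steps Martin lists (add the Helm repo, install the chart, port-forward the UI) could be sketched as the commands below. The repo URL, release name, namespace, service name, and ports are assumptions based on the chart's typical defaults, so verify them against the KubeClarity README.

```shell
# Add the KubeClarity Helm repository and refresh the index
# (repo URL is an assumption; check the project README).
helm repo add kubeclarity https://openclarity.github.io/kubeclarity
helm repo update

# Install the KubeClarity chart into its own namespace.
helm install kubeclarity kubeclarity/kubeclarity \
  --namespace kubeclarity --create-namespace

# Port-forward the UI service to localhost, then open http://localhost:9999
# (service name and ports are placeholders for the chart's defaults).
kubectl port-forward --namespace kubeclarity svc/kubeclarity-kubeclarity 9999:8080
```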
Come visit us and join the community. We are always happy to have more people on board with these open-source projects. And thank you. We are at time and we are finished. If you have any questions, feel free to type them in the Q&A. We will do our best to answer them. If there are no questions, then thank you for attending. Yes, thank you very much for attending. Thank you so much, Tim and Martin, for your time today. And thank you, everyone, for joining us. Oh, it looks like maybe we got a question. Oh, just one thing. Thank you. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars, and have a wonderful day.