Thank you so much. My name is Shauli. I'm the CEO of ARMO, and we are the company behind the open-source Kubescape. It was mentioned in one of the earlier talks today, around tools you can use to scan your Kubernetes cluster against different frameworks and misconfigurations. It has been used quite frequently in the last year or so, and we've scanned over 20,000 clusters to date. When we reached 10,000 clusters, we wanted to do an analysis of what we've learned and what we are seeing in the industry, and I wanted to share that with you.

So first of all, what have we looked at? We've looked at a data set of 10,000 clusters. 48% of the users were in North America, 33% were in Europe, and the rest were in EMEA and APAC. In terms of cluster sizes, we see that almost 70% of clusters had up to 10 nodes, and about 6% had over 50 nodes. The trend we see is clusters getting larger and larger. What is not shown here, but which we've also seen, is that the number of clusters per user is growing too. About 5% of the users had more than 25 clusters, which I personally think is way too many. But I do think there is a trade-off between the size of a cluster, the number of clusters, and how you manage them, and we see that in the data quite often.

In terms of job titles, we see that 57% of the users are DevOps engineers, which might be surprising to some, but I hope it's not. Kubescape is a security tool, and many people have the misconception that DevOps don't care about security. In fact, in a recent Gartner survey, about 30% of organizations, when asked who is in charge of Kubernetes security, said that DevOps is. We can see that in our data. 24% of the users are security engineers and security architects: security people who are hands-on and deep in the material.
And 5% are DevSecOps, which is something we see more and more, of course.

We usually scan across different security frameworks: the MITRE ATT&CK framework, which was developed by the MITRE organization and adapted to Kubernetes by Microsoft; the NSA and CISA guidance for Kubernetes security; and the ARMO Best Practices, which is basically a framework where we took the most important parts of those two and put them together. Then there is DevOps Best Practices, which is basically an enhanced version of those security frameworks that also adds non-security checks. For example, if you are running without a liveness probe, it's probably a bad practice in DevOps terms, but it's not really a security issue. These are the types of controls in there.

What we've seen is that there is, of course, a very large correlation between your scores on all of them. It's not that the NSA and MITRE frameworks are very different in what they assess: if you are bad in one, you're bad in all. That's basically what we've seen. And another best practice is to keep your risk score below 30. You need to keep fixing things, and when you get below 30, that's usually a place where you are in a good position. If you are below 10, you are in our best 10%, and if you are above 60, you are in the worst 5% of clusters we've seen.

The top five misconfigurations we see out in the market are: one, privileged containers; two, cluster-admin bindings; three, missing resource policies (we'll talk about that in a minute); four, not using immutable container filesystems, which is genuinely hard to do, and I understand why many people don't; and five, not blocking the ingress and egress of a microservice that is not supposed to be open to the internet. Now, 100% of clusters had at least one issue, and that's not surprising, because you know how complicated Kubernetes configuration can be.
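The ingress/egress point above is typically addressed with a Kubernetes NetworkPolicy. A minimal default-deny sketch, with a hypothetical namespace and label (note that a CNI plugin that enforces NetworkPolicy, such as Calico or Cilium, is required for this to take effect):

```yaml
# Illustrative default-deny policy: blocks all ingress and egress
# for pods labeled app: internal-service in the demo namespace.
# Namespace and label names are placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: internal-service
  policyTypes:
    - Ingress
    - Egress
```

With no ingress or egress rules listed, all traffic to and from the selected pods is denied; specific allowed peers would then be whitelisted with additional rules.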
But it is also because some misconfigurations are OK. You can live with them, and fixing them causes you more pain than leaving them be. And that's OK; you don't have to be perfect. What is more concerning is that at least 65% of clusters had at least one high-severity misconfiguration, which is something like running in privileged mode, allowing privilege escalation within the container, or having application credentials in your file system or container configuration.

So now let me go into a little bit of detail. First, not having proper limits: 63% of the clusters did not have workloads limited in terms of CPU and memory. And that's a bad practice, both in terms of resource utilization and security-wise: coin miners and similar applications are going to utilize CPU and memory, and you want to limit them as much as you can.

The second part is secrets in configuration files. It's not one of the top misconfigurations (the top ones were all at 60% of clusters or more), but still, 37% of clusters had application credentials or misplaced secrets in configuration files. Usually what we see is AWS access keys and S3 bucket keys. I understand why developers sometimes do that, it's the easiest place to put them, but of course it is very problematic security-wise.

Next, risky capabilities. What we've seen is that many, many workloads, well, at least relatively, 23% and 35% of workloads, are running with either insecure or dangerous capabilities. We see here on the right the different capabilities that we look at. The most problematic ones, marked with red triangles, are NET_ADMIN, NET_RAW, and SYS_ADMIN. And it's not negligible: a significant number of clusters are running workloads with these capabilities that they don't need.

Finally, vulnerabilities. Misconfigurations always go together with vulnerabilities.
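Several of the issues above (missing CPU/memory limits, privileged mode, privilege escalation, risky capabilities, mutable root filesystems) map to just a few fields in a pod spec. A hedged sketch, with hypothetical names and a placeholder image:

```yaml
# Illustrative hardened pod spec; name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0
      resources:
        requests:               # scheduling guarantees
          cpu: 100m
          memory: 128Mi
        limits:                 # caps abuse by coin miners etc.
          cpu: 500m
          memory: 256Mi
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true   # immutable container filesystem
        capabilities:
          drop: ["ALL"]         # removes NET_ADMIN, NET_RAW, SYS_ADMIN, ...
```

Starting from `drop: ["ALL"]` and adding back only the capabilities a workload actually needs is generally easier to audit than dropping the dangerous ones individually.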
And what we've seen is, well, 44% of the vulnerabilities were medium, 35% were high, and 21% were critical. In terms of critical vulnerabilities, 35% of clusters had at least one critical vulnerability in one of their workloads, and 6% had more than six. Of course, the critical vulnerabilities are the things we are most concerned about, but we always need to think about them in conjunction with misconfigurations. Consider a critical vulnerability that made a lot of noise in 2021, maybe early 2022: it was about being able to escalate and penetrate Kubernetes clusters via a vulnerability in the container runtime. Finding that vulnerability is great. But even if you are vulnerable, if you put the right controls in place, if you didn't allow privileged containers to run, if you didn't allow privilege escalation in that container's configuration, then you are not really exploitable; it's very hard for an attacker to exploit. And that's why we think it is very, very important to take the two together. And since we are here and talking about roadmaps, that's exactly the next thing on ours: cross-referencing your misconfigurations and vulnerabilities, and actually understanding whether a vulnerability is relevant, whether it is exploitable right now in your current system.

So this is what we do, and this is how we do it. Yes, do you have a question? One minute. So let's see if in one minute I can show you how to get a scan running very, very quickly. Let's go here. Don't look at my Gmail for a minute. No, I'm not going to do it in one minute. But OK, all you need to do is go to Kubescape on GitHub. You use the one-liner there; you just copy it into any machine where you have kubectl access to your cluster. You run it, and less than three minutes later you get your results.
The first report tells you right there in the standard output where you pass and fail on each one of those configuration tests. And at the end, you get a link to a nice UI that you can log into to also see vulnerabilities, and from there you can do many, many things. So thank you so much. It was 10 minutes, but I hope I got it across. Thank you so much.
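For reference, the quick-start flow described above looks roughly like this. These commands need a machine with kubectl access to a live cluster; the install URL and subcommands are taken from the Kubescape GitHub README at the time of writing, so check the repository for the current versions before running:

```shell
# Install Kubescape (one-liner from the project's GitHub README):
curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash

# Scan the whole cluster; pass/fail results print to standard output:
kubescape scan

# Or scan against a specific framework, e.g. the NSA/CISA guidance:
kubescape scan framework nsa
```

The scan runs against whatever cluster your current kubeconfig context points to, so double-check the context before scanning.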