Thank you so much, Candice. Pleasure to be here. First of all, I'd like to introduce ourselves, and I think it makes sense for Ben to go first. Oh, thank you very much. Hi, guys. It's great to be here. I'm Ben, CTO and co-founder at ARMO, and one of the maintainers of the Kubescape project, an important activity in the CNCF. As for my previous experience, I was a white-hat hacker for many years and went into cloud native later. And my name is Alex. I work as director of Kubernetes at Canonical, the company that brings you Ubuntu, OpenStack, and other technologies. I also work on the OpenFeature project, which is a CNCF sandbox project. I enjoy collaborating with lots of communities and folks in the cloud native ecosystem, and I guess that's also why I'm here today: my interaction with Ben and the Kubescape folks grew out of a shared interest in hardening Kubernetes and in security, so I suppose it was a natural overlap of intersecting interests. So, just to get started on why we're here. Ben, do you want to take us through this? This is a powerful statement that I picked out from a recent Gartner report. What do you feel about it? Yeah, honestly, it's a very strange statement, because when I first read it, I said this cannot be. It sounds very strange and futuristic. It was so unreal that it actually makes you think: OK, what is going on here? Then I looked at where the industry is today and where it is going. Today, we really are after the cloud revolution. We are in a place where most of the things we do — as application vendors, operators, and people working in this industry providing computational services — are defined by code.
And all these infrastructure components — the actual computers and the systems that run them — are provided by services which we love, and all these great cloud vendors enable us to define all the systems, all the things we need, as code. And thinking about this statement again: if we assume that the cloud vendor does its work right — I don't want to name brands, but we all know them, and we know they invest a lot of effort into securing the infrastructure and have a lot of experience doing it — then the place where things can go wrong is where we define what we need from this infrastructure and what we put on top of it. And those definitions mostly come from configurations. This is why, after I thought it through, the statement made sense. I'm not going down into the numbers, but misconfigurations and definitions really are going to be the core sources of security breaches in the future. I think that's a really interesting point. I approach this from a product perspective. I mentioned that I direct a Kubernetes engineering team. We build Kubernetes distributions; that's our bread and butter. And the more customers we see onboard to Kubernetes and start to work with container orchestration, the more apparent it becomes that you get folks from all over the spectrum, right? You get people who are super into low-level components and understand them inside and out, and you get teams who are simply being mandated to move into containerization. This might seem like a no-brainer, but it introduces a whole new set of attack surfaces, and not just at the Kubernetes level, because of everything Kubernetes is connected to. OCI images, for example, have a whole set of attack surfaces in their build pipeline and chain of provenance.
So it can be really overwhelming for folks to even start thinking about this. And I love this diagram that we've put together, because I think it illustrates two things. It shows a modern CI/CD pipeline — I'll talk about it in a moment — but it also shows a GitOps approach, which is trending towards the favored way of doing reconciliation between the configurations you mentioned and reality. If you read it left to right, dev is the dream and release is the reality. It's almost the idea of reconciliation: desired state versus actual state. When I'm coding, that's my dream of what I want to create, and that's what my configuration defines. Then, as it goes through this machinery, it comes out into our cluster, and that's the reality. And I think this is the new attack surface, because there are so many joints in it, so many moving parts. It's almost the analogy of building a rocket with the cheapest components possible: you've got to make sure that even the cheapest component is still going to get you into space. Equally here, this might look secure if you use all the right practices, but there are many ways in which these components weren't designed to work together. For example, DDoSing a container registry: if that registry goes down, many setups strategically fall back to a secondary registry, and an attacker who can intercept that route can manipulate the in-cluster credentials or have credentials fetched from the fallback registry. Just that simple example gives you an idea, I hope, of how many different ways we need to start thinking about securing this new kind of CI/CD pipeline. I mentioned that it starts with the development environment as this dream of what you're building, right?
But this really is where you start to introduce everything from semantic misconfigurations all the way up to not having the correct kind of testing, or testing only in an isolated environment. For example, maybe I'm testing against a SQLite database locally, but in the cloud we're using a distributed SQL database. It might work in a very similar way; however, the credentials, the connection strings, the encoding, and the security practices around it may differ somewhat. So I've already introduced a difference between the desired state I'm creating and the actual state. And specifically — you can see some of the icons on the Kubernetes side of things — around Helm configurations and YAML configurations: I'm opening ports for my application to work locally, and I might not think about what that does remotely. A simple example is setting host networking to true. That's something people who work in Kubernetes might have done to obtain an IP address or to talk to the host system, but it introduces a vulnerability into that system. And even once you've gone from your local environment and started pushing up into your remote repository, compilation doesn't necessarily mean you're detecting vulnerabilities. The build being successful — whether it's a Python package build or a Golang build — doesn't mean you're free of vulnerabilities. Equally, many scanners — you might have SonarQube or another scanner set up to scan for vulnerabilities — use static configuration files for what they scan for, and they won't actually be looking for CVE vulnerabilities; they'll be looking more for code compromises, or things around misconfiguration in the code, or optimizations.
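To make the host-networking example above concrete, here is a hedged sketch of the kind of manifest fragment that introduces it — the names and image are illustrative, not taken from the webinar:

```yaml
# Illustrative pod spec (hypothetical names). Setting hostNetwork: true puts
# the pod in the node's network namespace: handy for grabbing the host IP
# while developing locally, but remotely it exposes the host's interfaces
# and is flagged by most misconfiguration scanners. Omitting the field
# (default: false) is the least-privilege choice.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app              # hypothetical name
spec:
  hostNetwork: true           # works locally, becomes a vulnerability remotely
  containers:
    - name: app
      image: demo-app:latest  # hypothetical image
      ports:
        - containerPort: 8080
```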
And then, when you do get to employ detection tools at a larger organization, many of them will have soft warnings, because if you're running 200 teams on centralized tools, the amount of tickets these tools create inevitably makes the humans operating them turn those down to warnings — we just don't have the people power to deal with it all. The final part of this is that most base images you just fetch, right? From Fedora, from RHEL, et cetera. We don't really think about where those base images are coming from. Now, I'll do a little shout-out here to projects like Sigstore, which are thinking about provenance, but there's a long way to go between thinking about it and having it rolled out to the majority of people in the cloud native ecosystem. And the final part, as I've mentioned multiple times, is that desired state is not actual state. If you've worked with Kubernetes enough, you'll realize that drift occurs in clusters once you have users playing around and different deployment mechanisms operating in the same cluster — perhaps GitOps plus client-side kubectl plus Helm as well. Those differences, those drifts, can be vulnerabilities; they can be turned into exploits. Converting a ClusterIP into a LoadBalancer can easily give somebody ingress into a cluster on port 80, an unsecured HTTP port. So we have to be really sure that the actual cluster is continuously being reconciled and that we understand what's going on in it. And that makes you ask the question, whether you're running a small organization or a large one: can we really afford not to improve these practices? What is the cost of building our CI/CD pipeline while just ignoring that stuff? It might be nothing.
However, if you are an organization that has to comply with SOC 2 or HIPAA, or you claim FIPS or CIS conformance and you're not actually doing it, it could cost multiple millions of dollars. It could be the loss of a contract, a master service agreement torn up. Not doing these things can mean the loss of business and eventually the loss of your job. It spirals very quickly, depending on the context of the question. But what I would say is that we need tooling to make this easy, and to do that, we really need to consolidate it in a way that is digestible, because it's so much. It's such a big topology, with so many moving parts. Where do we even start? And I thought this would be a good opportunity, Ben, to get your take on this idea of security gates. Yeah, so I think at this first stage we've really succeeded in frightening the whole audience about how many problems we have. But now let's talk about the good parts: we are safe, there is a way. And not just that there is a way — I think the way is much better than what we had before cloud native, before CI/CD pipelines and so on. It really starts with all the phases you mentioned. When we're talking about the development phase, we can hook into the everyday experience of the actual users of these tools — the developers, DevOps engineers, and SREs who are writing Helm charts and YAML files and implementing applications and scripts. What we can give them is tooling already at this phase, which means that when they start to write something, they get immediate feedback. That's awesome, and it loops back to what you mentioned about the costs: the cost of delivering something wrong to production — and I'm not even talking about the cost of actual security breaches.
I'm talking about the case where something was wrong from a security perspective and we need to fix it before there is a breach. The cost is obviously much higher the further down the road it's identified, and the closer we get to the developer, where the issue is introduced, the cheaper the fix. You mentioned connecting pods to the host network. If that's done without understanding what it means — which is fine, because not everyone needs to be a security expert and not everyone needs to understand everything; that's why we build tooling — then, if we can explain these things while the code is being written, the fix can be very fast. And it's not just hooking into the development environment, the IDE, VS Code or whatever. Even if we hook into the pre-commit hooks of the Git repository, so that when someone tries to commit their work we're already checking it, we've made a big leap forward. We've saved a lot of time and work, and we can deliver more secure things. The second thing is the policy, because — and you'll see this throughout the presentation — everything we're doing is in order to save ourselves time and money. We try to notify early, but somewhere we need to enforce these rules. Say we've gone through the first step: a developer implementing a new Helm chart has committed it to their local repository and is now pushing it to GitHub and creating a pull request for the maintainer of the project. At this stage, we can do two things — in fact, we need to do two things.
One is that in the pull request, just as any unit test would run, we have to check whether the new code meets the security standards of what we want to accept. So we're adding another gate, a hard gate: accepting only secure code into the repository. In the case of GitHub Actions, it's really an action doing a pre-check on the PR, and we don't accept the PR while the security issues are unsolved, or until we decide that they are acceptable. So Ben, to interrupt you there: would that mean that if I opened a PR on a repository and put something like hostNetwork: true in a YAML file, that would be a check that auto-fails, because that's a control you have on that repo, right? Yeah, absolutely. And obviously this is something an organization has to decide for itself: what is the security level, what is acceptable. There has to be some — I'm not sure leniency is the right word — we have to be a little bit adaptable, because otherwise we create problems for ourselves which aren't really worth it. But yeah, I agree with you. This is really the step where you have to decide how you want to handle these events. And the next step: after you have committed, for example, your YAML definitions, you have to look at the images you are using in your Kubernetes deployments. They obviously come from a container registry, and you can check container images for vulnerabilities ahead of time, even during the build phase. But there is a slight problem with that. First of all, it's good practice.
And it's also a security gate. But it is also important, day to day, to rescan the images in your container registry. Why? Because every day, new security vulnerabilities are found in different projects, and those projects might be part of your container images. If you scanned your container even a week ago and didn't find any new critical vulnerabilities, you might say, OK, it's fine. But what if a new critical vulnerability was found and published two days ago? If I rescan the same image now, I will indeed find that specific vulnerability. This is why scanning cannot be a single event in your processes. You have to rescan your containers every time a new list of vulnerabilities is released, in order to understand what you have. And then the last step: after you have deployed from your code repository and your images have gone into the cluster, you still need to be able to recheck your clusters and your workloads, always. Because you've already gone through three gates, but what if someone was able to bypass one of them? And as you talked about — the different tooling and the different ways users might get into a cluster, even legitimately — the same thing applies with new vulnerabilities being found every day and new issues being raised. So even after your deployment, you still need to stay on top of all of these things. And the good thing is that the tooling is there. It's interesting, because the emphasis on gating is sort of like locks on a dam, or on a river, right?
You need to go through them, and only once you've gone through every one and they've all passed can you continue to production. It sounds really simple, that idea, but it's an ideology, a philosophy that engineering teams can get behind, right? They can understand: these are the checks that have to be green; this build has to be green. Once you start speaking the language of product engineering teams who are just looking to deliver their workloads on top of Kubernetes, you build them a pit of success. It's like what I was talking about at the beginning of this webinar: it's about making it so easy that the alternative is unthinkable. We don't have to do anything, right? These gates are here. When they go red, we know something's gone wrong. Otherwise, it's straightforward, and that's our path to success. At least that's how I see it. Yeah, completely. And it's also about empowering the developer to know about these things and save themselves, and everyone else, as much time as possible. It's also an interesting one because, when we start to think about security tooling, we're looking at this complex, distributed, multi-substrate environment of CI/CD — you could be building on one thing and deploying on another. It really comes to mind that there needs to be some way of aggregating this, the ability to start bringing it all together. And I know you and I have talked about why Kubernetes is becoming so popular: it seems like the POSIX of cloud native, an interoperable layer that people can build on. But it also means it's becoming the greatest attack surface we've seen in recent years.
So I guess that, with all of this focus on K8s, building a single pane of glass to monitor all the ingress and egress, the continuous reconciliation, and all those other steps we've described seems to be a necessity. And what I wanted to get you to talk a little bit more about on this slide is those characteristics you think are really important in these panes of glass, to really help people on this journey. Right. So you can look at a single pane of glass as something that combines different subjects under one hood, but you can also think of having a single pane of glass of the same thing across multiple stages. Beforehand we talked about tooling in the different stages; now we're getting into a single pane of glass across different subjects and themes. And what's really important is to bring all these things under one hood, because security, at the end, is one question: is the attacker able to penetrate — is there a problem or not? I can tell you that I'm using twelve different tools and I worked really hard to implement them, but at the end of the day I'm still managing a dozen or more tools, and different things can fall between the cracks. So having a single solution which brings you through the whole journey — from image vulnerabilities, which are closer to what we're running inside the Kubernetes cluster, to workload configuration, to API server access, and their interconnection — really matters, because these things affect each other. I can give you a very simple example.
If a workload is public-facing — you talked about load balancers and ingresses — if it's a workload which takes traffic from the public internet, obviously we look at its security in a different way than a workload which fetches CPU metrics once a day behind the scenes, just for your information. So having this interconnected information is also very important, and it's very important for users to have one tool, or one platform, instead of twelve, twenty, or more different platforms. And I think you've reminded me of two things. First, having a single platform gives you the ability to educate — a lot of security issues are around education, around learning what is a good practice versus one that leaves you vulnerable. The other thing I think is very interesting is that a single pane of glass enables you to perform a level of forensic analysis. You can look across your cluster at the vulnerabilities over time; you can start to perform deeper introspection and evaluation of potential issues and of the controls that are being flagged. And you're right: when you have a bunch of different tools, you can still do it, it's just a much more cumbersome experience. And what happens if you're working with five other professionals and you need to reproduce that effort, or say, hey, look at this? I'm sure everybody is familiar with projects like Grafana, where you can simply send a link to a dashboard and ask, are you seeing what I'm seeing here? There's a level of collaboration there which I think is super interesting. Right. And so I guess this is really where we wanted — or you especially wanted — to talk about how Kubescape effectively covers some of this.
And so I wanted to do a bit of a demonstration in a moment, but do you want to run us through, at a high level, how an open source tool like Kubescape covers your bases across these security gates? So, Kubescape is an open source, community security platform. It started when we went to our customers and wanted to understand their problems. Most of our customers were new to Kubernetes, and they said: well, functionally I was able to build my first system in Kubernetes, but I don't know whether it's secure or not. This is where the project started, a little more than a year ago: understanding the different Kubernetes objects and the configurations inside them. After the first few months — and I have to say it again, we got huge love from the community, which was awesome and really made us continue down this road — we had created a kernel for understanding Kubernetes objects, creating policies around them using the Open Policy Agent (OPA) language, and detecting misconfiguration issues around Kubernetes objects. And we saw very quickly that it's not just about looking into what you have in your Kubernetes cluster — what the deployments are. People, and we ourselves, want to look at the same objects on the left side of the screen, in the development and pre-deployment phases. And this is where we created, in a way, a kernel that is portable. We can take this kernel, this knowledge, and distribute it along the different phases we've been talking about.
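One hedged sketch of wiring that same kernel into the development phase is a Git pre-commit hook. The fragment below uses the pre-commit framework and assumes the kubescape CLI is installed locally; the hook id and entry are illustrative, so check the project's documentation for its officially supported integrations:

```yaml
# Hypothetical .pre-commit-config.yaml: scan staged manifests before each
# commit, so misconfigurations are caught at the cheapest possible stage.
repos:
  - repo: local
    hooks:
      - id: kubescape-scan        # illustrative hook id
        name: Kubescape manifest scan
        entry: kubescape scan     # assumes the CLI is on PATH
        language: system
        files: \.ya?ml$           # only run against YAML files
```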
We can add it in the development phase and in the integration phase, and later on, when we added image scanning capabilities, we added image scanning both inside the cluster and in the container registry — which was our third gate, if we recall from before. So this is how we built Kubescape: portable by design, able to be plugged into those different phases for the user, because we understand that they need this knowledge in all of these phases. One of the things that stood out to me in this diagram is that Kubescape also sits on the Grafana icon. Is observability an important thing for you? Yeah. First of all, Prometheus and Grafana are such great projects, and they really enable you to create a security dashboard for yourself, for your company. And observability is really important: security must have observability. And any maintainer of any kind of live system knows that it goes both ways — there's no real observability without security. So it was a given from the beginning that this data has to go into the Prometheus and Grafana space as well. That makes a lot of sense. And I've had a chance to play around with Kubescape for a while — I actually run it on several of my clusters. What I wanted to do in this segment was run you through some of the things I found interesting and, I guess, pick Ben's brain during this webinar about the motivations behind them. So I'm going to share my screen here — please do mention in the chat if there are any issues — and you should be seeing a VS Code screen with some example code. I've got a very simple piece of YAML, and those of you in the Kubernetes space will know that this is a deployment of a simple pod for NGINX.
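For reference, a hedged sketch of roughly what such a demo manifest looks like — a minimal NGINX Deployment with illustrative names and tags, not the exact file from the screen share:

```yaml
# Minimal NGINX Deployment of the kind shown in the demo (illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo          # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # illustrative tag
          ports:
            - containerPort: 80
```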
This is typical of what you might find in a Git repository. Some organizations centralize their YAML files, but I think the majority of people tend to package their code and their YAML in the same project. That's definitely a matter of opinion. Now, what's interesting here for me is that when you're in VS Code, you get linting errors. What I really liked is that when you do something like we described earlier — you set something like host networking — you instantly get this warning that comes up. And what's quite interesting about it is that the warning says: hey, setting this up is not a great idea, because effectively you're allowing this pod to have access to the host network. And I like as well that you can then go on and find some more information about it. As you've been saying throughout the course of this conversation, education is the first step, right? Having some education around: oh, OK, I didn't know that — I didn't know whether that was a good or a bad idea. So for me, when I'm onboarding new engineers who have to, for example in my world, build an operator, and they're writing YAML — a service or a deployment — just getting these little tips of "this might be something you ought to think about rather than just do" is super useful. I mean, hands up if we've copy-pasted a YAML deployment before, right? Everyone does it. Sometimes you copy something from Stack Overflow or wherever, and there's a thing in there you don't recognize, and you look at it and think: well, I'm just running that CPU ticker you mentioned; maybe that setting isn't the best thing to keep. So removing it — and I always think it should be least privilege, right?
You need the least amount of configuration: that should be the philosophy, and it's a really important approach to this. So I'm going to switch back to my browser now and show you something else that I like. That was the development journey, which I really like — it's super easy to use. Now I'm going to change gear, change tab, and pull up Cats. Cats is a simple repository which shows you pictures of cute cats. It's a lot like your typical Golang-based Kubernetes repository: you've got your package with your code in here, and you've also got templates that form your Helm chart. Helm is a popular tool; there are lots of other popular tools — Kustomize, et cetera. Now, I wanted a way of actually passing my repository, my configuration, through a scan to see what happened. So what I did was use the Kubescape workflow, which is — what — roughly 18 lines; super simple. By introducing this, I get a scan whose results are surfaced through JUnit to give me meaningful, actionable output. And what's quite cool is that I can choose to make this either pass or fail, but what you'll see is that I do get the annotations coming back as errors from JUnit saying: hey, by the way, in your deployment here you've got some problems — for example, privilege escalation, missing resource limits — a whole bunch of interesting stuff. And if I were not in a demo environment, I would most certainly have this set as a fail. Like we described: you pick your controls, you pick your exceptions. And I think that's a really interesting one. Out of those two things I've explored so far, do you think there's a prevalence for one or the other? Are you seeing people use them in conjunction — the local implementation in VS Code plus the GitHub Action — or is it more of a one-or-the-other approach?
So I think the honest answer is: a little bit of everything. I've met guys who said, well, I just want to hook this up in my PR action; if something needs solving, it gets solved there — that's where I'm coming from. And I've met guys who said, well, VS Code is everything: if I have it in VS Code, I'm fine. And I met a team leader from a company who said they're using both. I think the answer to what is right really depends on who you are and what you're trying to achieve. In general, a maintainer of a project will add this to their PR actions — point taken, yeah — because this is how they gate, how they protect their stuff. Developers who are very security-savvy and want to work ahead of time will say: I know that what I'm going to do is right, because I'm the king of the castle; I'm going to use VS Code and fix everything inside it. And then there are those who run an organization and look at all of these phases as a single pipeline rather than separate phases, and they'll say: for sure, I'm going to put this into the repository as a gate, but I will also tell my team: guys, use this plugin, because it will save us time. That makes a lot of sense. And actually you've reminded me of something — a quick segue before I show the last part of my demo. I've been using Kubescape as part of the OpenFeature project. There's a tool called CLOMonitor.io, and CNCF projects get monitored by it.
And what's interesting is that I was the author of flagd and the OpenFeature operator, so the blame lies at my feet here somewhat, in that we have a 69 out of a hundred score. And what I'm going to do off the back of working with Kubescape is fix a lot of the stuff that is being flagged around our YAML and our dependencies, the kind of stuff we saw in VS Code. So for me, there's a real incentive to add this in, because this is a health check for our project. So as we showed in this action, maintainers do have a stake in this, right? Maintainers want to keep the hygiene of their projects really high, especially if they're working under the umbrella of the Linux Foundation or one of its sister organizations. Right, for sure. So the last bit I wanted to show is the bit I was holding back, because it's the bit that's still giving me the wow moment: I love using the Kubescape SaaS solution, and I'll tell you why. There are a few reasons, but one of them is that I have a bunch of different clusters that I monitor at any one time. And as we've described so far, it's kind of been a single-track journey, right? It's always been, like, a PR against a repo, or a single cluster. Now, of course, if you want to go multi-cluster, you need to think about that, right? I was really interested as well, especially with you on the webinar, Ben, to get your thoughts around what Kubescape Cloud means to you and to your users. We spoke about this single pane of glass, but what was your incentive to really build this out? Because it feels like it's something that brings together what we were talking about earlier on. Yeah, so really, the Kubescape Cloud project, which is part of our solution today, is a way to bring you all of this information we're collecting in the different phases.
It enables you not just to see the results in a graphical way and gain a faster understanding of your security issues, but also, later on, to interconnect these features. Kubescape is really a very young project, but we are already building things where we are trying to connect all these information sources together, and this is going to be the place where we can do it. This is where we can bring information from your production cluster, from the Kube API, together with your vulnerability information, using your cloud API to read some of the cluster definitions from there, showing these things together and also prioritizing them. This is something you can use today; it's a free service you can access. As we go forward, we will bring more interconnections between these data sources, and we are really looking at this as a single pane of glass, giving you a single security solution for Kubernetes. And, as I told you before, not just a single pane of glass across different aspects of Kubernetes security, but also across the different phases of your security: scanning your repositories, scanning your container registries, scanning your clusters, and so on. And I think this is the way we want to work. If we need to, we build solutions from different parts, but when you're talking about security, my philosophy is that if there are too many pieces, there will be things which fall between the cracks, and we need something that covers you from left to right; Kubescape Cloud is meant to be that. I've tried to do some justice here by putting some takeaways together.
What I, as an end user and as a product engineer who builds things in this grand cloud ecosystem, am taking away is that CI/CD security is particularly challenging, right? There's a bunch of stuff you ought to consider across all these moving parts, and safeguarding that system is critical, not only for commercial needs but also, as you've described, if you are a maintainer of a project, for your credibility and your ecosystem; you're actually protecting your end users as well. And I really like this kind of "it's too easy not to" approach. I call it a pit of success, and I believe I've taken that from somewhere; I'll have to remember where that quote was from. But I love that idea: it's so easy, there's no reason not to, and I think that's one of the things I've really liked about it. In particular, when we're building operators, whether it's for OpenFeature or for my day-to-day work, I like the idea of engineers who say, oh yeah, I caught this thing, this is something that's come up, let's go fix that, instead of this sentiment of dreading pushing something to the build phase and watching it go through; actually becoming more familiar and more comfortable with it. So I think that's really nice. And I'm just really happy that I got to talk to you about it today. So I don't know if you want to give any closing thoughts or anything you want to share before we finish up. Yeah, I think you've summarized it so well; really, my takeaway for you guys is to see this as an opportunity to improve CI/CD. Although it opens up a lot of security questions, it can also make those security questions disappear if we work a little more methodically and use the right tooling and the right approach.
I think it's a great opportunity for all of us. And I would just like to echo the sentiment that I'm excited, as I mentioned at the beginning of the call, that this is now a maturity point in the cloud native ecosystem, that we're seeing these tools starting to address some of the gaps in our tapestry of cloud native projects. And it's great to see that they're becoming so easy to use, that we're lifting each other up and becoming more secure by design. So again, thank you for indulging me on my questions, Ben. And with that, I will hand back to Candice at the Linux Foundation. Thank you so much, Ben and Alex, for your time today. And thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.