Hello, greetings, folks. I'll be leading this one today; the other co-chairs are not on. If you can, drop any agenda items into the meeting notes, which I posted in Zoom. They're also in the Slack channel. Welcome, folks. If anyone else joined, please add your name and any agenda items to the meeting notes, which I posted again. I see that you added one, Jim. Welcome, and thank you. How much time do you need for that, normally?

Depending on the other items, about 10 minutes should be good for a quick demo. If we want to go deeper, we can certainly showcase more use cases and other items.

All right. Are you familiar with the Telecom User Group?

I had a quick discussion with Victor as we were discussing different use cases, and I have looked at some of the documents and other information. Not deeply familiar, but at a high level, yes.

Typically for longer demos we suggest putting them on that user group's agenda, but a 10-minute demo sounds good, and then discussion for sure; that's the main focus here. Is this your first time in this group?

It is.

All right. Have you gone through the CNF Working Group repo?

No, no. I just invited him, because it's a great tool and it would be nice to be aware of the benefits and all the efforts they are doing there. So yeah, it's his first time here. I guess he can present and share a few things, and eventually maybe we can do a deeper demo in the future. For now Jim has about 10 minutes to show the minimal, basic ideas, which I guess is good enough. What do you think?

Yeah, sounds good. I'm thinking about the focus, so just a quick overview for you, Jim. The CNF Working Group (CNF means cloud native network function) focuses on identifying best practices, and the use cases around them, for networking applications, so that we can see how those applications can best run in a Kubernetes environment. Specifically, we look for things that might already be obvious in enterprise applications and ask how they get applied here. Running as a non-root user, the pod security part of your demo, is relevant to a whole area on the security side that we've been looking at, specifically the principle of least privilege. One item under that is running your processes as non-root, and that's actually one of the things we're going to be talking about today; that's the goal. So if there are use cases that you know of that illustrate why you should run as non-root, or why you should do these other security things, those would be useful. That's a key thing, and those make good contributions to the working group. Also valuable are areas that are problematic: here's an area that they're thinking over in the Network Plumbing Working Group, or SIG Testing, or wherever; they're trying to work on it, but they're having some problems. Those are the gaps. And then the end goal is to share the best practices that we're trying to get everyone to adopt for the platform and applications.
Another related initiative, and I don't know enough about the project you're going to demo to say whether it applies, is the CNF Test Suite. It's focused on creating tests that check practices: how things are deployed, how they're running, and how they work at runtime. So it covers deployment and onboarding of new applications as well as day-2, ongoing lifecycle-management items; it goes across the board. That initiative actually uses various tools, like Falco, if you're familiar with that, and OPA, and other things as part of the testing. So this might be something that could even be used there; we could talk about that.

Perfect.

Okay. Does anyone have anything else to add to the agenda? I'm having some problems; this just came up. Bill, are you available if my screen dies? Or Lucina, somebody that could help if I can't load something. Essentially, right now I'm okay.

So, this best practice. We've been working on a whole set of use cases, and we have a bunch of write-ups around this. I'm not going to go back through them, but if folks want to look, the notes have links to documents around least privilege and other things. We've had this best practice for a while, and we've been going through and resolving issues. We're now down to a point where everything has been resolved. This piece is pulled in just because it's from the main branch, so these are typos and such; we can ignore that. Here's the main best practice. This is for non-root, with the recommendation that all processes should, by default, run as a non-root user. It's part of a defense-in-depth strategy against compromises: if something gets through, then what do we do? That's the idea. And here's the set of user stories; writing these up was one of the last remaining items. There's a whole set of user stories that can be used around, at a minimum, the least-privilege items, and probably other security things too. Whenever something does break through, which is likely at some point, how can we stop it? The non-root user is part of that. Take compromised updates: maybe there's a central registry, maybe updates come directly from a vendor, maybe there's some other centralized registry where everybody pushes updates, and the registry itself was compromised, so we actually pull down compromised code. Or maybe someone has done more of a lift-and-shift, and the application is not deploying new images for updates; instead it updates within itself. That wouldn't be a good practice in and of itself, but if it was doing that and wasn't using root, it could limit some of those things. And there are several other user stories. I'm not going to try to pull all of these up; let me see if it loads. No, I don't think it's going to load. Y'all can click through these if you want to check them out. We have a bunch of user stories written out, and a bunch of references to different places talking about why this is a good idea, including the CNCF TAG Security whitepaper and papers from various other folks. And I think that's about it. We talk about some ways to check for these things, and the CNF Test Suite can also check for this one. So that's it.
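To make the recommendation concrete, here is a minimal sketch of a pod spec that follows it; the name, image, and UID are placeholders, not something from the discussion:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-example            # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true             # kubelet refuses containers that would run as UID 0
    runAsUser: 10001               # any non-zero UID the image supports
  containers:
  - name: app
    image: registry.example.com/app:1.2.3   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false       # complements non-root under least privilege
```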
Just one thing about that one. I noticed that you rebased in the latest changes, so at least for me it was really hard to distinguish what was different from the previous version. Right now it's reflecting 70 changes, and most of them are coming from other commits. I don't know if you could summarize it, or whether we just need to focus on the document that you have created.

I'm going to try to pull it back up; it's having a hard time. If someone else wants to pull it up and share, you can do that and I can walk through it. We were actually at a point where everything was resolved and ready to go except for the user stories. Everything else that's been updated has been typos and spelling. I don't know what's happening with my DNS. Somehow, I think either in a rebase or a merge, we pulled stuff in from main, and that is causing all of those commits to show. But those are on other files, like the spell-check file; the README got updated for spelling. So, there we go. There's the workflow; I don't know why it's showing here as pulled over, but it should be a no-op when it goes onto main. It shouldn't do anything when we merge these in. The .gitignore, ignoring the dictionary: this is probably already on main and came in with the rebase. The spell-check file, so that also shouldn't matter. This README, that's a spelling update that someone did. This one is also a grammar fix on the existing process document. I'm going to skip this one for a minute; we can go look at some of the others. Again, spelling on the main README. The glossary: this one is updated on main, so this is bringing this branch in line with what's on main. It added some new entries, but we already have those on the main branch. This is one of the GitHub Actions files, I think. This is more spelling on an existing use case. Spelling on another use case. This is the newest, well, actually one of the older use cases, the onboarding use case. It's just a full add, and it's already been added to main. So again, this is somewhere the PR view is misleading, because it's not actually going to be an update; it's already there. Supply-chain attacks: this is new. These are the user stories. It looks like we have some spelling issues. There it is, I see it. Victor, would you mind doing a commit suggestion for those, where it says "conatiner" instead of "container"?

Yeah, I can do that right now.

So this is the user stories section: defense in depth against supply-chain attacks, talking about what those are. Whether it's bugs or an actual malicious actor trying to get something in, there are a lot of places, from development all the way through production, where a problem can happen. You can have a bug that makes it all the way to production; it wasn't intentional, but it could cause a security issue, and attackers can get in that way. These are the actual stories about the different ways this can happen. And that's the main thing: this section right here, adding the user story section. The rest of the pieces in here were either spelling or specific changes requested by folks in the comments. You probably can't see it here, but if we look at the conversation...
There were some suggestions from reviewers, and those were accepted changes. Another reviewer suggested several things; those have been accepted. Pankaj made suggestions, and those were included. There were very minor things on the central-registry story, and actually that user story got deleted, so it doesn't matter. And notice here's a question that shouldn't be there.

Yeah, that was a comment that somehow made it into the text, so we deleted it. Those were the minor changes. Most of this was all done back in July and August, and the user stories were what we were waiting on. And if you go down, you will see my... yeah, the "container" fix.

Let me see if I can load this. Oh, it loaded. All right. So I think that's it, Victor; I just refreshed, and you can do a new review. It was pointed out that it's harder to review because of the rebase. There's not a whole lot we could have done on this one, because it's been open for so long, but maybe we can figure something out next time. Any other comments before we merge this? We do have enough approvals.

Yeah, apparently there are a few words which are not in the dictionary, but I can just add those later.

All right. I'm going to squash and merge. Huh. I'm going to keep that one because it's funny; it's a commit message that says "What should we test?" That happened. Let me clean up this commit list. I think this one was already covered; it's tagged. Okay, several additional terms. Bill, you've got two different co-authored-by lines. Let's remove that one. No, it's okay, you can just keep one. Okay, let's get rid of that. Wow, you can see there's a whole lot here. Let's see: typo and spelling fixes, glossary updates, alignment; there's a whole lot going on here. Another from Bill, another from Victor. Everybody was on this. These were deleted. In favor of saving time, I'll just finish cleaning this up. Jim, go ahead, and I'll finish while you start.

All right, that sounds good. Thank you. Let me just do a very quick introduction. In fact, I'll share a few slides from a presentation I did at the Open Source Summit just last week on Kyverno. A quick introduction to myself first, because this is my first time in the working group, and thank you for having me and being open to hearing about Kyverno and what we're building in the project. I'm one of the creators as well as a maintainer on the Kyverno project. I also act as a co-chair in the Kubernetes Policy Working Group and a track lead in the Multi-tenancy Working Group, and of course I participate in various other forums like TAG Security. And I'm a co-founder and the CEO at Nirmata.

So, just a few things on Kyverno, and I'll jump around. I'm not going to go through the whole architecture, because this was an hour-long presentation; there should be a recording up in a few weeks, and if you're interested I can share that with the team. But let me talk about the motivation for Kyverno and what we're trying to solve with this project. First off, in Kubernetes, policies are becoming critical as the complexity of Kubernetes, and not just Kubernetes but the extensions being built on it, continues to grow.
So what we're trying to do with Kyverno is bring a very Kubernetes-native approach to policy management. And given that our tagline is "Kubernetes-native policy management," it begs the question: what does that even mean, and why does it matter? There are several tools that talk about being Kubernetes-native. What we mean by it is being fairly deeply plugged into the control plane: being able to not just talk to the Kubernetes API server, but also understand the Kubernetes API schema, understand custom resources, and work with Kubernetes patterns and idioms like pod controllers, knowing what pod admission control means and how we complement and extend it to provide better security and automation tooling. That's what we're trying to solve with Kyverno: make it really simple, and very native to Kubernetes, in how policies are written, how policies are managed, and even how policy reports are visible in Kubernetes itself. We'll see that in the quick demo I'll do.

But just to quickly explain, and I'll skip past some of this (feel free to let me know if there are questions and we can go back): Kyverno works as an admission-control-time webhook. It's very simple to bring up on test clusters or production clusters, and it supports full HA mode if you have larger clusters; I'll go through a quick install. It plugs in as mutating as well as validating webhooks. It then starts receiving admission requests, and based on your configured set of policies, which are just Kubernetes resources themselves, it will either validate and block, or mutate, and it can also generate new resources on the fly. That opens up quite a lot of interesting use cases. For example, when you deploy a new workload, you might want to mutate the pod; it doesn't have to be just for pod security, but you could automatically inject or override a security context. It could be injecting sidecars. It could even be changing things like network settings on the fly, or creating completely new resources: for example, if you're running a service mesh, every time a Service is created you might want to generate a corresponding Istio resource. Those types of things are common use cases we're starting to see in the community.
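As an illustration of the mutate use case just described, here is a minimal sketch of a Kyverno mutate rule that injects a default security context when one isn't already set. The policy name is hypothetical; the `+()` marker is Kyverno's add-if-not-present anchor:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-securitycontext   # hypothetical name
spec:
  rules:
  - name: set-runasnonroot
    match:
      resources:
        kinds:
        - Pod
    mutate:
      patchStrategicMerge:
        spec:
          securityContext:
            +(runAsNonRoot): true     # added only if the field is not already present
```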
Now, to explain how a policy works: there are different kinds of rules in Kyverno. Every policy has a set of rules. Each rule has a match or exclude block, which lets you apply fine-grained logic to which resources to match, which namespaces; you can even match by user roles, labels, things like that. Then, once you've matched a set of resources, you can run rules to mutate, or to verify images and image signatures. Earlier, Taylor, you mentioned supply-chain security; that's something that requires admission controls to complete the end-to-end security posture, and there's some related work going on with other communities like Sigstore. We're integrating Cosign with Kyverno to be able to verify image signatures from any OCI-compliant registry. And then you can, of course, validate, which can either block, so if something's non-compliant you can stop it in production clusters, or report and audit in dev/test clusters. And you can have a mix of these modes based on policies, or even based on namespaces, as you wish.
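A sketch of the image-verification rule type mentioned here, in the shape Kyverno's documentation used around this period; the registry pattern and key are placeholders, and the exact fields may differ between Kyverno releases:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures            # hypothetical name
spec:
  rules:
  - name: check-signature
    match:
      resources:
        kinds:
        - Pod
    verifyImages:
    - image: "registry.example.com/org/*"  # placeholder registry pattern
      key: |-                              # Cosign public key (placeholder)
        -----BEGIN PUBLIC KEY-----
        ...
        -----END PUBLIC KEY-----
```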
And then, like I mentioned, another powerful use case is generating resources. This allows you to automate a lot of things that previously required custom admission controllers, and we're seeing more and more use cases, even simple things like registry secrets, or certificates, which you can generate and manage on the fly and make available to every workload or every namespace.

So with that, let me dive into the demo. On the kyverno.io website we have a whole bunch of sample policies. Today we'll just look at the pod security policies, but there are several other best practices, for example using immutable image tags rather than something like latest. It seems harmless to use latest, and of course it gets used quite a bit in dev and test; I do that, everyone does that. But if you're running in production, you want to use versioned software. You also want to do things like replacing your image tags with digests. All of these are best practices, and there are 80-plus policies driven by the community; that number keeps growing with every release. So there are certainly several to look through, but let's just focus on pod security. We talked about running as non-root as one of the policies, but there are several others, and all of these follow the definition of the Pod Security Standards in Kubernetes. If you're not familiar with that document, it's a very key one, and it's driving things: PSPs were one implementation of the Pod Security Standards, but now there are other implementations like Kyverno and OPA Gatekeeper, as well as the upcoming pod admission controller, which will do label-based settings at namespace-level granularity; I believe that's targeted for version 1.25. If you're using a policy engine like Kyverno, though, you just get a lot more flexibility in how you manage these profiles and how you apply them across your workloads and namespaces. You can also, of course, apply security beyond pods and enforce other best practices: running a read-only root filesystem, for instance, is not one of the PSP policies, but it's also considered a best practice and a good security standard to apply.

So anyway, the first thing I'm going to do is install Kyverno, and I want to show how easy it is to get started. We'll jump to the installation page in the documentation. There are several ways to install Kyverno; I'm just going to use the command-line option through YAMLs, which will pull down a set of YAMLs and run Kyverno. Just to show where we're starting from: I just brought up a new test cluster. Let's say get namespace: I created a test namespace, which is running an nginx pod, but that's all I have on this cluster.

So I'll pull down these YAMLs and install Kyverno. It comes with a set of custom resources that allow you to define policies, and there's also a policy-report resource, which, by the way, is now being used by Falco, by kube-bench, and a few other projects. More and more projects are generating policy reports in the same format, which allows for some standardization and reuse. All right, if I do get namespace, now we should see Kyverno running. If we do -n kyverno and get pods (I don't know why I keep saying "get test" here), we have a pod that is up and running, so Kyverno should be ready at this point. If we want to make sure, we can check the logs; we can just do it based on the deployment. Okay, everything's good: it says it configured its own webhooks, which is what it needs to start receiving admission requests and policies. But if I now do get clusterpolicy, I have no policies installed at the moment. This is the resource with which we will now install some of these pod security policies, but right now there are no policies running on this cluster.

So let's go back to the policy repo. Here's a command that will apply all of the pod security policies, and it applies a kustomization. What the kustomization does is set these pod security policies to enforce, instead of audit, which is the default mode they ship in. That means that if I now try to create an insecure pod, it should get blocked by default. To test this, one site I use for insecure pods is from a company called Bishop Fox, in the security space. They have a site called Bad Pods, which gives you pods running with the host namespaces, running as root, with several other things not configured correctly in the pod. You can grab a deployment or a daemonset or the like; I'll go with the deployment in this case and grab the raw YAML. By the way, if we run that same get clusterpolicy command again, we should see several policies configured at this point, and I'll show what one of these policies looks like in a second. But now let's try to run this pod. If I do kubectl create -f and give it the YAML, I see a bunch of errors that came out right away, saying I can't use the host namespaces. And this is the one we were interested in: making sure it's checking not just the pod, but the containers, and also the init containers.
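A condensed sketch of the command sequence shown in the demo. The manifest URLs are illustrative and change between releases, so check the Kyverno and Bad Pods repositories for the current paths:

```sh
# Install Kyverno from its release manifests (URL varies by version)
kubectl create -f https://raw.githubusercontent.com/kyverno/kyverno/main/config/release/install.yaml
kubectl -n kyverno get pods      # wait for the Kyverno pod to be Running

# No policies installed yet
kubectl get clusterpolicy

# Apply the pod-security sample policies, kustomized to enforce mode (illustrative path)
kubectl apply -k https://github.com/kyverno/policies/pod-security

# Try an intentionally insecure workload from Bishop Fox's Bad Pods (illustrative path)
kubectl create -f https://raw.githubusercontent.com/BishopFox/badPods/main/manifests/everything-allowed/deploy/everything-allowed-deploy.yaml
# -> the admission webhook denies the request, listing each violated rule
```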
One thing to notice is that when I installed this policy (let me actually show what the policy looks like in Kyverno), the policy itself is written just against the Pod resource. But Kyverno automatically knows, again since it's designed for Kubernetes, how to apply these policies to Deployments, DaemonSets, any pod controller you run. Even if it's something custom, like an Argo deployment, which is a custom pod controller, it will recognize it and apply the policy correctly. And this is what the policy looks like: we're matching on the resource Pod, and then we're checking the security context and requiring run-as-non-root. The declaration means that if a security context is configured, runAsNonRoot should be true. Similarly, we're checking init containers, and we're doing the checks both in the pod spec and in the container spec. So that's really how simple it becomes to configure and run these policies.
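For reference, a minimal sketch of a validate rule along the lines described, mirroring the community's run-as-non-root sample; the `=()` anchors make a check conditional on the field being present:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: enforce   # or "audit" to report without blocking
  rules:
  - name: check-containers
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Running as root is not allowed. Set runAsNonRoot to true."
      pattern:
        spec:
          =(securityContext):        # if a pod-level securityContext exists...
            =(runAsNonRoot): true    # ...runAsNonRoot must be true
          =(initContainers):         # same conditional check for init containers
          - =(securityContext):
              =(runAsNonRoot): true
          containers:                # and for regular containers
          - =(securityContext):
              =(runAsNonRoot): true
```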
One other thing I can quickly show: if we do get policyreport with -A, I can see that for the existing pod I was already running, it has now generated a policy report. And if we look at that with -o yaml (actually this is in the namespace test, so I need to add that), I see all the details of what passed and what failed: every workload, and every rule that was applied, which ones passed and which ones failed. Of course, all of this can be collected. There are other open-source tools, and there are ways to get this into Prometheus, so there are several ways to report this information. In fact, I have some slides on that here showing the default dashboard, and there's also a policy-reporter project that can show this information graphically. So, lots of interesting things, but let me pause there and see if there are any questions; otherwise we can keep the demo short for today, and I'm certainly happy to follow up with more details.

I have a question. Very interesting stuff, and of course I'm always supportive of anything that brings us toward policy-oriented orchestration. I really think that is the future; we'll keep seeing more and more policies used in lots of areas. I'm thinking of the topology operator for Kubernetes as well, with policies for placement. But my question is: maybe you could talk more about how this relates directly to CNFs, and where you see it being especially important in telco?

Right, yes. One quick thought is just making it easy to automate: as you're doing testing and validation, this can certainly be integrated into your CI/CD pipelines. It's a simple set of policies that can be managed through GitOps or any other solution you wish. The other thing, and by the way this often comes up, is how this compares to OPA Gatekeeper, which performs a similar role. The main difference is how you author policies; but the other powerful use case Kyverno enables, which OPA Gatekeeper doesn't, is being able to generate resources. That's another area where, if you want to create policies for workloads, you can in fact have policies to generate policies, policies to distribute common elements and set up different things. It really helps in creating a separation of concerns: decoupling what the developers have to do from what the operators have to do, which is a fundamental problem right now in Kubernetes and in scaling Kubernetes. In fact, I have a slide here where Richard talks about using policies as a contract, helping decouple what developers care about, what security cares about, and what operations cares about. That's where I think Kyverno can help quite a bit.

So one area where you could probably help in this scenario: we have policy on networking and similar, and I know you can help with that if the networking goes through a Kubernetes-based, Kubernetes-aware CNI. But one of the things we see in the networking and telecom service-provider space is that there are secondary networks that may not have the same set of policies, or may be unaware of Kubernetes, but end up as secondary interfaces within pods. If you have a way to help there, where the policies that exist can be rendered into the appropriate SDNs so that the policy persists regardless of which direction the information is coming from, or to help with the control of that, to say which systems should or should not be able to connect to each other based on a set of rules, that could be very valuable.

Yeah, in fact Victor and I were discussing that use case. From my understanding, it seems like what's necessary is this: based on the cluster configuration, the developer, or the author of the CNF, may not know how the cluster is configured to operate. So you probably want to inject some of those settings and add admission controls, and it could vary based on where the CNF actually ends up running. Victor, not sure if you want to add anything else to that?

No, you're very correct. Yeah, one use case we were talking about was the usage of NSM. But, for example, Multus and DANM could also include a few mutations or validations at creation time. I like this particular project because it's part of the CNCF, and I think it's proposing another cloud-native, more Kubernetes-like way to do these things. And you mentioned there are a few default policies we could also take advantage of?

Yeah, so there's the policy library. There are the pod security policies, but there are also policies for generate, mutate, and validate. For example, with every namespace, it's always good to have a default network policy. And these can be customized based on the deployment or whatever needs to be done, like we were talking about injecting a secondary network and configuring the pod for that. Let me see if I can find that one; it's kind of similar to injecting a sidecar in some ways. It's a fairly elaborate policy: in this case we're checking for certain things and creating a new container, as well as an init container, based on the policy setting itself.
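A sketch of the kind of generate rule that could create the default network policy mentioned above for every new namespace, following the shape of the community's add-network-policy sample; the deny-all spec shown is one common choice, not the only one:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-networkpolicy    # modeled on the community sample
spec:
  rules:
  - name: default-deny
    match:
      resources:
        kinds:
        - Namespace
    generate:
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      name: default-deny
      namespace: "{{request.object.metadata.name}}"   # the namespace just created
      data:
        spec:
          podSelector: {}    # select all pods in the namespace
          policyTypes:       # deny all ingress and egress until another policy allows it
          - Ingress
          - Egress
```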
But yeah, in terms of defaults, I would highly recommend starting with pod security, and I know the team has already been looking at running as non-root. There are several other controls that are part of the Pod Security Standards, and they're certainly worth enforcing, because typically most pods don't need to run with higher privileges. Most pods shouldn't be using non-default volume types, shouldn't be using hostPath, shouldn't be privileged or require escalated privileges or host namespaces; all of that can be blocked by default. So starting with this set of policies is always a good best practice: audit for them in your CI/CD pipeline, report, and then of course enforce in production.

And also, Freddy, for the use case you were mentioning about Multus: maybe another possibility could be adding a validation which ensures that you have defined the additional network in Multus. So if someone is trying to use a non-existent network... At the end it's just a single annotation, so I'm pretty sure Kyverno can catch all these things and do some logic to ensure that the network exists.
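A sketch of what such a check could look like. The annotation key is Multus's standard network-selection annotation, but the policy name and allowed network names are hypothetical, and fully validating against the cluster's NetworkAttachmentDefinitions would need additional logic beyond this pattern:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-secondary-networks   # hypothetical name
spec:
  validationFailureAction: audit      # report only; switch to enforce to block
  rules:
  - name: allow-known-networks-only
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Pods may only attach to approved secondary networks."
      pattern:
        metadata:
          =(annotations):
            # Multus network-selection annotation; values are placeholder network names
            =(k8s.v1.cni.cncf.io/networks): "macvlan-conf | sriov-net-?*"
```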
Yeah, and it's an issue not only in Multus. You land an interface, and I could even use Network Service Mesh as an example: you land an interface here, and both of them have some level of control over who's allowed to put the interface there, so there's a portion that could be bound against. There's a little more flexibility in Network Service Mesh in terms of how the policy can get injected and enforced, but neither of these has a built-in component for how you actually program the SDN itself. Maybe there are certain rules that need to be in the SDN when something is set up; how do you ensure those rules have been rendered into the SDN itself? Those kinds of things would be useful in both the NSM and Multus solutions, because if you could define what those rules look like here, then you could render them into each environment and ensure they're applied consistently across the board.

Are those rules expressed as a Kubernetes resource, or through a custom resource or a config map or something like that?

They're not expressed at all; that's the point. There was some literature, I don't know if it ended up in the CNTT path, saying that if you are adding a secondary interface, you have to make a decision: are you respecting the Kubernetes policy contract or not? In other words, are you exposing a faster, Kubernetes-compliant path, or a non-compliant path? That distinction was made necessary because if it's non-compliant, then you have to rely on the SDN and additional configuration, and you want to make it explicit to the person configuring it that they have to pay attention to this. If it's compliant with Kubernetes, you're just providing a faster path. Maybe I have a web application that needs faster access to a storage system; that storage system is exposed in Kubernetes, and you're basically providing a faster Kubernetes path. Then what you're saying is that the SDN has awareness of the cluster, is able to monitor the policies, and is able to render those policies regardless of whether you're taking the slower path or the accelerated path. That distinction was added at that level, but there's still the question of what rules you want to apply to the secondary networks that are non-compliant with Kubernetes policy: being able to say "I want these particular types of connections," and to set something up where you could eventually interpret that into the appropriate SDN. I wouldn't expect this to provide the full path to get there, but it could be the first initial step, the first half of the problem, where we could at least express the rules. How we render that into the SDN is still an exercise that needs to be done, but we'd be a step closer.

Okay. Yeah, so I'm happy to help explore and write out some of these policies. We're fairly active within the Kyverno community, helping users with different use cases. For example, here you see policies for cert-manager; there are domain-specific and other policies. Flux, one of the GitOps controllers, is also using Kyverno for multi-tenancy. OpenEBS is using Kyverno as well, for pod security. So several projects are starting to adopt it, and we'd love to work on CNF-specific policies, explore some use cases, and help advertise what Kyverno can do.

All right, thank you. Any other comments, questions, or other topics? I guess we only have two minutes, so probably not another topic. Thank you, Jim.

You're welcome. Thanks, everyone.

Yeah, please reach out. I'd like to talk with you about how you can get more engaged in writing up some of those best practices you were showing, in the baseline maybe. We could talk about use cases that would make it relevant to the folks in the networking and communication-service-provider space, and I can talk with you about the test suite.

Sounds great. You'll see my email in the Google doc. Contact me that way, and we're on Slack.

Thanks, everyone. See y'all next week. Thanks.