Hello, everyone, and a very good day or evening depending on where you're tuning in from. I'm Divya Mohan, and today I'm going to be speaking about Kubewarden, a project that aims to simplify policy as code while also improving its usability and the developer experience. Before I go ahead, I'd like to take a quick couple of minutes to introduce myself. I work as a Senior Technical Evangelist at SUSE. I'm also one of the documentation maintainers on the Kubernetes project and a CNCF ambassador, and I co-created the KCNA exam, which helps people getting into the cloud native ecosystem test their knowledge of all the basics in the cloud native world. Coming to the agenda for today, I foresee a very packed schedule. First up, we're going to look at Kubewarden: what exactly it is and what its architecture looks like, delving a little deep. Then we'll look at why it came into existence despite all the admission control and security mechanisms built into Kubernetes, and we'll try demoing that. We'll also see some of the cool new features released as part of Kubewarden version 1.3, which is our latest and last major release for this year, again with a demo, and I hope all the demo gods comply today. If you want to follow along with the presentation, or want a reference to check out later, you can visit the link that's visible on your screens right now. Although the screen capture doesn't show it, the slide deck is included there as well, so you'll have all the code I'm executing on my machine, the environment I'm using, and the slides for ready reference later on. With that PSA out of the way, let's look at what Kubewarden is.
Now, I'd like to start off with the textbook definition: Kubewarden is a policy engine for Kubernetes that simplifies the policy-as-code process. It's also a CNCF project, and we've been in the CNCF Sandbox since June of this year, if I'm not mistaken. If you want to find us on the CNCF landscape, I'm afraid that given the size of the landscape I wasn't able to give you a like-for-like representation here, but I've circled it in black: you'll find Kubewarden under the Security and Compliance section, and this is the logo you need to be looking out for. You can check it out at landscape.cncf.io for more details about the project in terms of stars and so on, or you can visit our GitHub organization at https://github.com/kubewarden. Now, this is all great, but what exactly is so special about Kubewarden, and why was it required despite all the built-in admission control and security that we have within Kubernetes? What's the secret sauce? We'll address this in two parts; let me state that at the outset. When we started off with Kubernetes, we had Pod Security Policies to enforce security for access control based on permissions. As we shall see in the next couple of slides, there were some drawbacks with this approach, which is why it was first deprecated, then removed, and an alternative was provided in the form of Pod Security Admission. Pod Security Admission is a great replacement, but it might not give you the power to enforce the level of granularity that you'd like. That's why we see a lot of tools recommended as supplements, and Kubewarden aims to be one of them. But let's see how it stands out, because there are other cloud native tools in this space, right? There's OPA, there's Gatekeeper. Why not choose them?
They are more mature in terms of the number of years they've been in existence. But anyone who has actually used OPA or Gatekeeper knows the steep learning curve that comes along with those tools: you have to learn a separate language, Rego, from scratch to actually start writing and enforcing policies. So what if I told you that you could write your Kubernetes policies in a programming language of your choice? Doesn't that sound cool? That's one of the aims of Kubewarden: to allow developers to write policies in a language they choose. Now, there's a caveat. Of course there's a caveat, because all good things come with caveats. The language you use needs to compile to WebAssembly. WebAssembly is a landscape that's growing pretty quickly, so we have a lot of languages fulfilling this requirement currently, but not all of them are supported by Kubewarden because, like I said, we're relatively young in the landscape compared to some of the other tools, and keeping up is difficult, although we're trying our best to bridge that gap and improve your experience. So which languages do we support currently? When I first said WebAssembly, I'm sure you thought Rust; the two are very synonymous and go hand in hand in everyone's mind, including my own. So it was a no-brainer for us at the Kubewarden project to empower you to write policies in Rust, should you choose to. We've created a Rust-based SDK that leverages the official Rust compiler, so that policies can be generated as WebAssembly modules and then subsequently evaluated against incoming requests. But Rust is not everyone's cup of tea. It's considered niche.
I don't know how true that is, but it's considered niche. So what are the other options we offer? The second option is Go. Go is pretty popular in the cloud native ecosystem, but it is not a first-class citizen in the WebAssembly one. That's reflected in the fact that Go's official compiler doesn't support the WebAssembly modules we need, which is why we use the TinyGo compiler to generate WebAssembly modules from Go code. And we have a policy template for you to get started with, in case you want to begin writing policies right after this presentation. All of that is linked in the resources section, and there are templates for all of the supported languages, by the way, if I didn't say that before. Next up is Swift. Now, I don't know how widely used it is, so pardon me, I don't come from a development background, but if Swift is your poison of choice, we have an SDK for that too, leveraging not the official Swift compiler, because it doesn't support WebAssembly yet, but the SwiftWasm project. So those are broadly the three languages we support. I know I started off with the ambitious claim that users can write Kubernetes policies in their favorite programming languages, and we do have plans to be inclusive of front-end development folks as well by incorporating AssemblyScript, which is a strict variant of TypeScript. So we haven't forgotten you; but like I said, the WebAssembly landscape is one that's quickly growing.
And as it grows, and as more languages have their official compilers support the WebAssembly spec, we'll be able to provide specific SDKs for them as well. But currently, those are the only three. Now, I've spoken a lot about usability as well; in fact, I think I've mentioned it twice by now. Why did I bring it up, and what's the deal with it? If you've been in this space, you're probably using OPA or Gatekeeper, and for all the things I said about the steep learning curve, people have actually adopted them; they're there for a reason. So what if you want to switch to Kubewarden, or at least try it out? Do you have to learn any of the programming languages I mentioned on the previous slide? You probably could if you wanted to, but you can also reuse almost all of your existing Rego policies. I say almost because policies use built-in functions, and some of these functions have to be provided by the host, which means Kubewarden has to implement them, as opposed to them being compiled into the WebAssembly module itself. We've made it a point to cater to the majority of Rego users by offering support for almost all of the built-in functions, but we realize that doesn't cover all the bases, and we're tracking this in a GitHub issue. If this is something of interest to you, you should chime in on that thread — I've included it in the resources section of the slides — and help prioritize the work being done in this regard. And once you've written these policies, how do you share them? Where do you store them, and what do you store them as? You could keep them on your local machine, but what if you want to help others use the same thing? What if you've written a damn good policy?
You could use a web server, I'm not saying no, but isn't it kind of tedious to do that? So the Kubewarden project allows you to publish your policies as OCI artifacts and, obviously, store them as OCI artifacts in an OCI-compliant registry. Docker Hub is currently not supported because it's not OCI-artifact-compliant, but I hear that's in the works too, in the sense that it will become compliant in the coming few months. So when you ask me what the secret sauce is, I'd summarize it as Kubernetes dynamic admission control plus WebAssembly. Now that we've looked a bit at the WebAssembly side of it, let's look at the dynamic admission control part. We'll be moving into the architecture section, where I'm going to first speak from the architecture diagram and describe all the components, and then we'll see how they interlink with the help of a sample request flow. Sounds good? Right. So this is the architecture diagram, and I know it looks intimidating, but I promise you it's not. We've been talking about policies all this while. It's a policy engine; we're talking about policy as code and the evaluation of policies. So the policy is kind of a big deal in this presentation, and it's also the star. In the context of Kubewarden, policies are WebAssembly modules, as you've probably guessed by now, or at least heard in the previous couple of slides. Once you have these policies, you need something to actually enforce them and evaluate requests against them, right? That's what the Kubewarden policy server is for, which is our next component. And when we talk about Kubewarden, we're obviously going to have some custom resources that are specific to Kubewarden.
They're known as the Kubewarden custom resources, very similar to Kubernetes custom resources, and they help us effectively manage the process of evaluating policies. Last but not least is the thread that holds all of it together: the Kubewarden controller. The controller is kind of like the eldest child in the family; it's been given a lot of responsibilities, pretty much everything that happens in the background. First up, it's responsible for the creation of some of the components of the stack. It's also responsible for ensuring that Kubernetes understands Kubewarden: like I said, there are constructs in Kubewarden and there are constructs in Kubernetes, and somebody has to make sure Kubewarden is able to communicate with Kubernetes so that it can evaluate resources. The controller does the job of translating Kubewarden constructs into Kubernetes natives. It also handles the whole reconciliation of policies; what that is and how it happens is something we shall look at in the next section, about the request flow. Once Kubewarden is installed, Kubernetes needs to know that it exists, right? Sure, you're installing it onto Kubernetes, but each needs to know of the other for things to work seamlessly, and the Kubewarden controller does that as well. How does it achieve this? Via the concept of dynamic admission control: Kubewarden essentially functions as an admission webhook, with its endpoint being the Kubewarden policy server. How does this whole process happen? Again, through the sample request flow, and that's how Kubewarden ties into the whole Kubernetes installation.
If this sounds like mumbo jumbo, or a bunch of rambling, just bear with me, please, because we're now going into the request flow, where we look at how Kubewarden actually gets to the point of evaluating incoming requests by integrating with the Kubernetes API server. First up, when Kubewarden is freshly installed, only two components are created: the Kubewarden controller deployment and a PolicyServer custom resource named default. When the controller notices that there is a PolicyServer custom resource, it obviously spins up a deployment for it. Now, I told you a while back that the policy server is the webhook endpoint as well, and by design, webhook endpoints within Kubernetes are required to have some sort of security enforced in the form of TLS. So the Kubewarden controller also takes on the onus of securing this endpoint: it generates a self-signed CA certificate, ensures the associated TLS certificate and keys are in order, and then exposes the policy server to the network via a ClusterIP service. Now we write our very first policy: we use the ClusterAdmissionPolicy resource, and we can write in as many policies as we want. But I also said there's a whole reconciliation process; what's that about? When the Kubewarden controller notices that there is a ClusterAdmissionPolicy resource, a reconciliation loop is triggered in which the generation of a ConfigMap is initiated. What's a ConfigMap? It's a Kubernetes API object that stores environment-specific configuration so that your container image doesn't have to, which keeps it portable. Once this ConfigMap is created, the controller uses it to start up your policy server.
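To make the ClusterAdmissionPolicy resource a bit more concrete, here's a rough sketch of what one can look like. The module URL, tag, and rule values below are illustrative placeholders, not the exact manifest used in the talk:

```yaml
# Illustrative ClusterAdmissionPolicy; the module reference and rules
# are examples, not the manifest shown in this presentation.
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  policyServer: default            # evaluated by the "default" policy server
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  rules:                           # which requests this policy intercepts
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
  mutating: false                  # validate only; don't rewrite the request
```

The `module` field points at the policy's WebAssembly module stored as an OCI artifact, which is how the OCI registry support mentioned earlier ties into the request flow.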
The ConfigMap holds all the policies you've listed in your ClusterAdmissionPolicy resources. Hopefully you've configured them properly, because if you have, your policy server will start up seamlessly. Once that's done, it spawns worker threads to evaluate incoming requests, and it also has to listen for incoming requests, because you have to listen to something before you can start evaluating it. How does it listen? By starting up an HTTPS web server. I've also said that the Kubewarden policy server is an admission webhook endpoint, and the Kubernetes API server needs to be made aware of this; the Kubewarden controller is what makes that happen, by creating a mutating or validating webhook configuration. But how does it know the exact moment when it has to create that configuration? The policy server pods have a readiness probe, and when their status changes to ready, that's when the endpoint is registered as a mutating or validating one. Once all this plumbing is in place, you're sorted, because the Kubernetes API server then sends the relevant requests to the Kubewarden policy server, and based on whether the endpoint is mutating or validating, the policy server evaluates them and sends back the response. So that's how it stands, folks. And you can extrapolate this to multiple policies and a multiple-policy-server setup. Where would you use that? A mission-critical setup where resiliency is key is one thing that comes to mind, and maybe a multi-tenant setup as well, where you require a dedicated policy evaluation handler to avoid the noise generated by other tenants. So multiple policy servers evaluating multiple policies is certainly doable.
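For that multi-tenant or mission-critical case, dedicated policy servers are declared as additional PolicyServer resources. The sketch below is an assumption about what such a manifest might look like (the image tag and replica count are placeholders, not values from the talk):

```yaml
# Hypothetical extra PolicyServer for a tenant that wants isolated
# policy evaluation; image tag and replicas are illustrative.
apiVersion: policies.kubewarden.io/v1
kind: PolicyServer
metadata:
  name: tenant-a
spec:
  image: ghcr.io/kubewarden/policy-server:v1.3.0
  replicas: 2                      # extra replicas for resiliency
```

A ClusterAdmissionPolicy then opts into this server by setting `spec.policyServer: tenant-a`, keeping that tenant's evaluations off the default server.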
And with that, we come to our next section: why PSP and PSA just didn't cut it when it came to enforcing security in Kubernetes. To start off, we'll look at PSP, the Pod Security Policy, which has been removed in the latest Kubernetes version and replaced by Pod Security Admission. So what is PSP exactly? It's a framework to ensure that your pods are running with the proper privileges and are able to access only the specified objects: no unwanted access is given, and no extra access is given. In fact, the concept of least privilege is expected to be enforced, and Kubernetes RBAC essentially links PSPs to users or services via the roles that they assume. Let's quickly look at it in action. I mentioned that users or services are given roles; in a typical setup, users and software are assigned to accounts, which are then bound to roles that carry certain privileges. PSP checks the permissions corresponding to each of those privileges for each incoming request and accordingly blocks or schedules the action. But there is a problem, isn't there, because it did get removed. What was the problem with PSP? Firstly, configuring it is extremely tedious. If you assign broader, more permissive privileges, you're going to land yourself in trouble, and if you go too restrictive, you might have to reconsider designing your policies all over again, because certain pods might not get created, and the deployments or higher-level objects that own them will never reconcile. That is problematic, because doing this for one, two, or three deployments makes sense — it does make sense for a single installation — but it does not when you have a huge infrastructure to manage.
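For reference, this is roughly what a restrictive PSP looked like before its removal in Kubernetes 1.25; the exact rules here are a minimal example for historical context, not a manifest from the talk:

```yaml
# A minimal PodSecurityPolicy of the kind described above
# (API removed in Kubernetes 1.25; shown for historical context).
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot        # pods may not run as UID 0
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes: ["configMap", "secret", "emptyDir"]
```

The RBAC linkage mentioned above worked by granting a role the `use` verb on a specific PSP, which is exactly the tedious per-role plumbing that made it hard to manage at scale.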
So yes, PSP was problematic, and that's why we have the replacement in the form of Pod Security Standards and Pod Security Admission, which offer a more level-based control and allow you to ensure that policies are set in a clear and consistent fashion. We're not going to go into the depths of checking how it works, sorry about that, because we have a demo to get to. But let's look at the problem with Pod Security Admission. As of version 1.25, it does not have any mutation capabilities; that's one drawback. Another drawback is that higher-level objects are only evaluated when the audit or warn modes are enabled. That's essentially why tools like Kubewarden are important as supplements — or can I say complements — because they empower you as developers to integrate with your cloud native infrastructure and enforce granularity at the levels you want to enforce it in. We did originally conceptualize the project as a replacement for PSPs, but we now recommend using it to complement Pod Security Admission: that is, integrating Kubewarden with a Pod Security Admission profile to mitigate the limitations we just spoke of. And now we come to the demo part, after which you'll hopefully have a clear picture of how this integration happens. I'm just going to quickly stop sharing this screen and start sharing my demo screen. Right. Just getting the font size up to the mark here so that you're able to see everything. I have downloaded all of these onto my machine, but like I said in the initial slides, you'll be able to download all of the manifests that I'm applying, so please don't worry that you're losing out on something. Now I'm going to share my screen all over again. Right. Let me walk you through what we're going to try doing. Okay.
I say try because I'm praying to the demo gods, hoping that it all works out. The very first thing we're going to do is create a namespace with extremely restrictive policies: we're not going to allow pods — or rather, the applications within pods — to run as the root user, which sounds reasonable. But then I'm going to try to create a hello-world app that needs to run as root, by actually specifying that in the YAML. It will obviously not be allowed, because I've created a namespace that doesn't permit it. So what to do? Should I remove the root user? I don't think so. Can something magically allow a user to be created and assigned, so that the application runs as another user with specific permission levels? Let's see, because that's the demo. First up, like I said, we create the namespace. Okay, so we've created the namespace — I actually created it beforehand, so I'm sorry, but I will show you the YAML. This is the namespace I've created. Right. Now we'll first try to run the hello-world app, and I'll show you its YAML here as well so that you're able to understand what I'm saying. We want to run as user zero, which corresponds to root. These two settings conflict with each other, and then the pod isn't created at all. If I go ahead and apply this right now and create a deployment in my namespace, it shows as created. But let's have a quick look at whether that's really the case. Am I seeing any logs? Just a second, I'm sorry. No, I'm not able to see any logs, because the pod never ran; it's just a simple hello-world application, nothing sophisticated. Let's check the status of the deployment, because I'm sure it's not available: nothing's ready, and the replica set's pods haven't come up. Right.
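The talk doesn't show the exact manifests on screen here, so the following is a hypothetical reconstruction, assuming the namespace uses Pod Security Admission labels to enforce the restricted profile; the names, image, and command are my placeholders:

```yaml
# Hypothetical reconstruction of the demo namespace, assuming PSA
# labels enforce the "restricted" profile (names are illustrative).
apiVersion: v1
kind: Namespace
metadata:
  name: psa-demo
  labels:
    pod-security.kubernetes.io/enforce: restricted
---
# The conflicting securityContext from the hello-world deployment:
# runAsNonRoot demands a non-root UID, while runAsUser: 0 is root.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: psa-demo
spec:
  replicas: 1
  selector:
    matchLabels: { app: hello-world }
  template:
    metadata:
      labels: { app: hello-world }
    spec:
      containers:
        - name: hello
          image: busybox            # illustrative image
          command: ["echo", "Hello World"]
          securityContext:
            runAsNonRoot: true
            runAsUser: 0            # conflicts: UID 0 is root
```

With this pair of settings, the kubelet refuses to start the container, which is why the deployment never becomes available even though `kubectl apply` reports it as created.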
We saw why that is, and now what I'm going to do is enforce the policy. I should have named it better, because I named it PSP, but it's a Kubewarden policy. Basically, what it does is allow the hello-world application to get created even when no root user is specified, by assigning a non-root user for it to run as. So if you check this YAML, I do not have a root user specified here. If I applied it right now without the policy enabled, the pod would get created, but it would run into a CreateContainerConfigError, which is not what we want. Once you enforce the policy and then apply this hello-world application without the root user, it will be allowed through. So, that being said, let's just apply our user-psp policy. It takes a while to come into effect; if you look at it, it will be in pending status for at least a couple of minutes. Like I said, it's in pending status. Hopefully, when this comes into effect, you'll be able to see that the pod does get created. Let's see if it's active. Okay, still not active. Once it gets active, I'll be able to show you the actual enforcement of the policy and the subsequent creation of the pod. Yes, now it's active. So now I'm going to apply psa-kubewarden.yaml. Now, if I show you this — like I said, it's eventually going to go into CrashLoopBackOff, because it's just a simple message that prints and exits. But if you actually look... ah, I forgot to delete the previous deployment, so let me redo it instead of confusing y'all. Sorry about that. I've deleted the app now, and even then, if you apply — there, you've created the hello-world app back up. It runs and goes into the Completed state, and your replica sets are available.
Your deployment will not show as available, because your pod has completed. Let's just check the logs to see that everything went normally, because logs are the best indication. There, in my namespace. As you can see, this is just a hello-world application, nothing too fancy, but it does show that the app started and completed successfully, which is why the pod is in the Completed status: the deployment is not available, the replica set is created, and the pod is not ready because, again, it completed. So that's it for this demo. I hope you understood a bit about how we can integrate Kubewarden with a Pod Security Admission profile. To summarize, what we essentially did is create a restricted namespace where we did not allow any application to run as root. We then tried deploying an application that runs as root via the runAsUser field, giving it the value zero, even though just above that we had a separate field saying runAsNonRoot: true. When we did that, the pod did not get created and the deployment never came into available status. Then we applied the user-psp Kubewarden policy, which allows a pod to be created when no user is specified. Now, we could have simply omitted runAsUser in the earlier case, but that would have run into a config error, because no user would have been specified at all; by enforcing the user-psp policy, we allow that to happen in this namespace. Then we recreated the hello-world app, first deleting the previous deployment and then creating a deployment where we do not specify which user to run as. It was allowed to run, as we saw from the logs, and it completed.
After which the replica set is created, the deployment is created, and the pod is created, which was not the case with the very first YAML we applied. So I hope it's a little clearer now how the integration works. Now, coming back to our presentation, because unfortunately we have to go back. Right, so we were here, and that's done. Now that we've looked at all of this integration, let's check out some cool new features in version 1.3. First up is something we're all extremely proud of: joining CLOMonitor, a CNCF initiative that checks the health of project repositories against a set of conditions. We're currently rated A with a score of 97%, which is pretty good; we started off in the lower 90s, so we've clearly gotten better with time, if I can say that. We've also listened to the community and reduced the startup time of the policy server, which is another major win in our bag, according to us. And we've improved our Sigstore integration, all thanks to the fantastic work done by members of the Kubewarden project on the Rust Sigstore SDK: we're now able to handle signatures produced by PKCS#11 tokens, which in plain speak means HSMs and smart cards. So that's a bit of the stuff we've introduced and are a little proud of. But we also have new policies, because it's a policy engine, right, and we want to simplify policy as code. There's a bunch of new policies that are, as always, backward compatible, like most of our other policies, except one, if I'm not mistaken. We have one that's responsible for scanning and enforcing compliance of environment variables.
We have the policy for securing volume mounts, and one that helps you keep up with API deprecations as well. So we have these new policies, and we've also expanded the scope of some of the existing ones. Most of the policies were targeting pods earlier on; we've expanded their scope to include higher-level objects like replica sets, daemon sets, jobs, cron jobs, and so on. There are trade-offs with each, and you can use either or both; there's no strict rule that you can't use one or the other, it all depends on your use case. With this expansion of scope, what we wanted as a project was to give you, the administrator, control over the level of granularity you get. Because you should be able to do that, right? You should be allowed to enforce the level of granularity you deserve, customized to your infrastructure. That's what we hope to do with this expansion of scope. And all theory with no practice is again very boring, so I'm going to move on to the second demo. Like with the previous demo, I'll have to start sharing my command line again, so I'm going to stop sharing this screen. Right, just a second, I'm going to open a fresh new window for you all. All right, hopefully... yes, this is great. I'm hoping you're able to see this. Now let me navigate to the second demo's manifest files. First off, I'm going to give you a summary, and we'll summarize again at the end, because that's how I work. What we're going to look at with this demo is the environment variable compliance policy. What it essentially does is enforce that if a resource is created with any of a set of specified environment variables, the pod is not allowed to be created.
You'll get an error message saying something very similar to what I just said; that's the objective. Once we see that, we'll also edit resource-1.yaml inline and try to create something that only partially fulfils the rule — some of it complies and some of it doesn't — and we'll see whether the environment variables can comply with the policy. So first up, no preconceived notions: I'm going to apply the minimum required manifest for the environment variable policy with this file. So this is created; it takes a little while. Right, so while we're waiting, let me show you the manifests in VS Code so that they're a little more visible. Let me just share my screen again. What we're essentially going to try to do is create this resource: an nginx deployment with one of the disallowed environment variables. Now, we have explicitly specified in our policy that we do not want this to be created, so it should block the creation of this resource. Coming back to our command line, let's see if the policy has become active. Yep, now it's active. So, kubectl apply... oh, okay, sorry, sorry about that, I misspelled it. Just going to clear this out, one second. So if you see the error message here: the resource cannot define any of the environment variables from the rule. Okay, the policy worked. But now we want to edit things a slight bit and try creating a resource that doesn't fully comply with this.
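The resource-1.yaml file isn't shown in full here, so this is a hypothetical reconstruction of what a deployment that trips the policy could look like; the variable name `foo` and value `bar` are my placeholders for whatever the policy's rule disallows:

```yaml
# Hypothetical reconstruction of resource-1.yaml: an nginx deployment
# carrying an environment variable the policy is assumed to disallow.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels: { app: nginx }
  template:
    metadata:
      labels: { app: nginx }
    spec:
      containers:
        - name: nginx
          image: nginx
          env:
            - name: foo           # assumed to match the policy's deny rule
              value: bar
```

Because the policy expansion in 1.3 covers higher-level objects, the deployment itself is rejected at admission time rather than only its pods.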
All right, so let me switch the share to VS Code; I can edit it here, but I'd rather do it in VS Code. Here in the minimum-required manifest, we've specified these environment variables; I've just used very simple names and values so they're easy to remember. Now we go back to the command line — actually, I'll just share my entire screen, because the switching is getting a little too tedious for me. So if I apply resource-1.yaml now and press enter... the nginx deployment was created. Yay! That means the policy works, right? There are several criteria within the policy settings that you can use; "any" is one, and there are several others. We've written a whole blog post about it that you can find on our website, kubewarden.io. So we've seen that you can scan for and enforce the compliance of environment variables with this policy. That's what this demo was all about, and now we're going back to our presentation, which is on its last leg. I'm going to leave you with these resources, to be honest: these are some of the websites and docs that I really found helpful while preparing this presentation, and I think you will too if you want to read up or keep them for reference. The Kubewarden website is here; there's also Artifact Hub, where we publish all our policies; and we have the official crate documentation, which provides more details about the Kubewarden SDK for Rust. Then there's the TinyGo project website, which I've linked because we use their compiler for Go, the different policy project templates, and the GitHub issue for the Rego built-in functions — I did not forget that. So, just a second... oh, okay. I'm really sorry about that. So yes, that's pretty much all I had. And this is the Swift policy template, if you all wanted to see that.
But that's pretty much all I had for today. I also want to give a huge shout-out to where you can find us, because this conversation should not stop here. If you want to chat with us, you can find us on the Kubernetes Slack in the Kubewarden channel. We're also on Twitter, for as long as it's up, and we're on YouTube too: we frequently make appearances on the Rancher YouTube channel. And that's it. Thank you once again. If you missed the supporting material that I linked in the first couple of slides, here it is again; I hope you make a note of it. That's it from me. Thank you so much for listening in today, and I hope you reach out to us and let us know if there's anything we can help you with. Thank you.