Hi and welcome everyone. Good morning to those of you in the US, and good afternoon or evening to the rest of you. Today I'm going to talk about multi-tenant cluster solutions using open source, and in this talk I will share our experience at Velocity going through the journey of setting up a multi-tenant cluster. It was quite a long learning process, and I hope I'll manage to provide you with some guidelines and tips on where to start looking. So let's begin.

A little bit about myself. My name is Errol Maman. I've been a software engineer for over 10 years. My main areas of passion in recent years have been software architecture, distributed systems, and improving reliability and developer experience. These interests are what brought me to Velocity, where we aim to simplify development environments and make them accessible to developers. I'm also a father to a three-year-old and a country music addict. I'm located in Israel, and feel free to follow me on Twitter.

So the first thing we may think when we hear multi-tenancy, at least the first thing that I thought, is that it's complex, and it may be a bit scary. What I want to emphasize in this talk is that multi-tenancy can be easier than you may think, and that there are tools and methods that can help you achieve it without it being too complex and painful.

Before we talk about solutions, let's talk about use cases. Why would you need multi-tenancy? The first group of use cases is about ways to empower teams in your organization. When we talk about teams, we are actually talking about internal multi-tenancy, which means that the tenants are located inside your organization. One example of such a use case is creating ephemeral environments: environments that are created temporarily, for a specific purpose and a limited period of time. Developers in organizations want development and testing environments, and they ask for them to help with their development and testing processes. These developers may want to test or showcase their features in a rapid way, without being blocked by a CI/CD pipeline or other blockers. Another example is a developer who wants to debug a tough or rare bug in a specific environment, for example a resource or memory leak that can take a while to reproduce.

Another use case related to internal multi-tenancy is end-to-end environments. You may have different teams in your company, each team owns a set of services, and each set of services communicates with the services of other teams; together it forms a complete system. Since each team owns their services, you'd prefer that they don't step on each other's toes and perhaps cause harm to other teams. That's why it may be beneficial to see each team as a tenant, scoped to their own space.

When dealing with teams, it's important to understand the balance of granting autonomy. Developers obviously want to move fast. Their KPIs, what they're measured on, are mainly focused on delivering features and fixing bugs, and they want to do that as fast as possible. Platform teams, on the other hand, will probably advocate and push for reliability and maintainability. So it's: feel free to move fast, but be careful not to break stuff while you're doing it. And platform teams often find themselves as the guardrail for reliability and maintainability.
But on the other hand, letting the platform team exclusively deal with the toil of managing environments while the developers wait for them really defeats the purpose of autonomy. So enabling self-service for developers and other stakeholders, while still managing to keep the guardrails in place, has the potential to keep the balance healthy.

Let's jump into another category of multi-tenancy, and that is third-party multi-tenancy. Here we mainly deal with external multi-tenancy: the tenants can be your customers, or maybe other third parties whose workloads you run on your side. A popular example is when you develop a SaaS product and you want to run workloads per customer. For example, maybe you run code that the customer provides you, or maybe you run your entire backend app per customer. In this scenario, since you run the workloads per tenant, you probably have isolation or regulatory requirements, or maybe even specific performance requirements and SLOs, so that no tenant can somehow badly affect other tenants. In this case, usually your application has direct access to the cluster while your customers don't.

Another use case is Kubernetes as a service. In this use case, you manage a cluster on behalf of your customers and you want to allow these customers' applications to run their own workloads on it, by giving them direct access to the cluster. I believe this one is a kind of a niche use case; however, we at Velocity had this use case, and I'm going to tell you more about it later on. But the SaaS use case is probably the most popular use case around.

If we attempt to roughly categorize multi-tenancy into two sections, those would be soft multi-tenancy and hard multi-tenancy. In soft multi-tenancy, you have a single cluster that the tenants share. In hard multi-tenancy, you create a cluster per tenant. Even though we have these soft and hard definitions, they are essentially the edges of a spectrum, and there are hybrid solutions in between that you can implement.

As we've said, choosing a solution is really a matter of use case. So I'm going to try to compare these two edges, soft and hard multi-tenancy, at a high level based on four categories. The first category is cost: how much will the solution cost me per tenant? This can be very important once you have a lot of tenants. The second category is speed: how fast is creating or destroying a tenant? Isolation: how hard is it for a tenant to affect other tenants? And maintenance: how difficult is it to maintain the solution? Each of these categories is somewhat subjective and use-case dependent; you may care less about one and more about another.

So let's try to compare these edges. The scale here is that red is the worst, green is the best, and yellow is somewhere in the middle. This is an opinionated comparison, just bear that in mind. In terms of cost, in hard multi-tenancy you have a cluster per tenant, and a cluster per tenant is expensive, because you pay for an entire cluster, which means control plane nodes and supporting infrastructure. Also, you can't really share resources like node pools between clusters. This may be negligible when you have a few consistent tenants, but once you grow in scale, this can be a serious problem.
In soft multi-tenancy, you make use of an existing cluster, and you basically pay only for the added workloads. You don't pay for an additional control plane, and you have the ability to share node pools and supporting infrastructure.

In terms of speed, in hard multi-tenancy, creating and destroying clusters takes a lot of time, on the scale of minutes, sometimes even more than half an hour. And if you're using a managed solution to bring up your clusters, this timing doesn't always depend on you. In soft multi-tenancy, creating and destroying tenants inside a shared cluster can be a matter of seconds. It really depends on how you choose to implement it, but in many cases, creating a tenant means just creating some Kubernetes objects for that tenant, and that should take a couple of seconds. So if you have workflows or APIs that depend on the cluster being created or destroyed, this could be a serious consideration for you.

In terms of isolation, in hard multi-tenancy, since these clusters have a separate control plane and potentially even reside in a separate cloud provider project or account, they have a high level of isolation out of the box. In soft multi-tenancy, you get a lower level of isolation out of the box, but you can reach high isolation; it just requires some effort from you. If you go with the simplest level of isolation in a shared cluster, which is a namespace per tenant, you get a quite low level of isolation. If you want to increase the isolation, you have to invest effort in maintaining various restrictions and controls at the Kubernetes level. Most likely you would want to do that, and we'll talk later about ways to do it.

In terms of maintenance, in hard multi-tenancy, managing many clusters is just hard. It can lead to a situation often described as cluster sprawl. You have to take care of the lifecycle of each cluster, upgrade each cluster's version, and monitor each cluster. As I said, if you're confident you'll stay in the ballpark of a few consistent clusters, this may be manageable for you. In soft multi-tenancy, in terms of infrastructure you have a single cluster to take care of, but you still have to manage the different tenants' configurations and restrictions. Later on, we'll talk about open source projects that can help make this a lot easier. In terms of maintenance, in both cases you can actually reduce the operational toil on your platform team by taking advantage of self-service.

So after exposing the use cases, let's talk about how to do it. I'm not going to focus today on hard multi-tenancy, mainly for two reasons. The first reason is that it wasn't a good fit for our use case, so we didn't explore the solutions around it very much. The second reason is that I don't think there is currently an easy way to do it; hard multi-tenancy is just hard. If you do consider it, do take a look at the Cluster API project, it's an interesting one.

So let's talk about how to do soft multi-tenancy in Kubernetes. The first thing that comes to mind when you want to separate resources in Kubernetes is using namespaces. If we go to the Kubernetes documentation, the definition really says it all: in Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. But we can quickly realize that namespace per tenant doesn't really mean isolation.
We can get to this understanding by asking ourselves the following questions. What prevents a tenant from creating or deleting workloads in other namespaces? By default, a tenant is not blocked from doing that: by default, a cluster user is allowed to access and modify all namespaces, including other tenants'. What prevents a tenant's pod from communicating with pods in other namespaces? Kubernetes does not prevent pods from talking to pods in other namespaces, and the DNS in Kubernetes can actually resolve services in other namespaces, so your pod can communicate with other namespaces. And what prevents a tenant's pod from abusing the entire cluster and starving other tenants? As a tenant, you can bring up a workload that is not resource-restricted, and this workload can eat up all of the CPU and memory on the node, starving other tenants' workloads and affecting them.

So after we understand that namespaces don't mean isolation, we have to think about how we can isolate namespaces. Kubernetes offers us a few ways to do it natively, which let you apply restrictions on a namespace basis. Let's go through each of them in a nutshell.

First, you have RBAC, Role-Based Access Control. RBAC allows you to restrict which users or groups can access which resources. For example, you can define that tenant A can only access its own namespace and not other namespaces. Anyone who has ever worked with RBAC knows that it can be very complex to manage and, actually, to understand.

Next up is network policies. Network policies allow you to restrict what kind of network traffic can go into and out of your pods. For example, you may want to prevent a tenant's namespaces from talking to other tenants' namespaces, or from directly communicating with the Kubernetes API.

Next up is Pod Security Admission. It allows you to prevent workloads from having special capabilities, security-wise. For example, a pod can ask, as part of its manifest, to run as root, and that can be used, for example, to spawn a reverse shell and then compromise the Kubernetes worker node. Pod Security Admission allows you to prevent that by enforcement.

Next up is limit ranges. Limit ranges allow you to set a range or default values for memory and CPU resources for a pod or container. For example, you may want to prevent memory overcommit or CPU starvation, situations that can really affect workloads in other namespaces and affect other tenants.

Next up is resource quotas. Resource quotas allow you to set a limit on how much CPU and memory can be consumed at the namespace level.

And the last one here is node selectors. If you want to reach a higher level of isolation, you may want, for example, to run the workloads of different namespaces on different nodes.

So if you look at all of these, and there are a bunch of other ways to restrict and isolate namespaces, think about having to apply them for every namespace and having to manage all the different configurations. Seems very complex, right? Just to make it concrete, here is what a few of these guardrails can look like for a single namespace.
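This is a minimal, hedged sketch for a hypothetical namespace called tenant-a; the namespace, user name, and resource values are assumptions made up for the example, but the object kinds are the standard Kubernetes ones mentioned above.

```yaml
# RBAC: let a (hypothetical) tenant user edit resources, but only
# inside this one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-edit
  namespace: tenant-a
subjects:
- kind: User
  name: alice                       # hypothetical tenant user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                        # built-in role, bound only in this namespace
  apiGroup: rbac.authorization.k8s.io
---
# Resource quota: cap what the whole namespace may consume.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Limit range: default requests/limits for containers that don't set any,
# so a single pod can't silently eat up a node.
apiVersion: v1
kind: LimitRange
metadata:
  name: tenant-a-defaults
  namespace: tenant-a
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: 500m
      memory: 512Mi
---
# Network policy: only allow ingress traffic from pods in this same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-a-same-namespace-only
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector: {}
```

And keep in mind this is for one namespace only; you'd have to replicate it, and keep it in sync, for every tenant namespace, which is exactly the kind of toil we're talking about.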
So that was about isolating namespaces, not tenants. Can we really assume a tenant means a namespace? What if a tenant needs a few namespaces? At that point, restrictions really become complex, because as we saw just before, restrictions are namespace-based, so you need to figure out how to apply them for a tenant that uses a bunch of namespaces. And visibility is hard, because there's no single place where you can see which tenant owns which namespaces. You need to do the digging or build something on your own.

Okay, so letting the platform team do all of this setup work and restrictions will essentially bottleneck developers waiting for IT tickets, and it's also a boring kind of toil. We want to aim for developers and other stakeholders to act autonomously within their tenants. So by shifting left to developers, we should aim for self-service alongside proper automation.

We'll start by talking about open source projects that aim to automate the namespace-per-tenant approach. The first one is Kiosk. It's an open source project by Loft. Judging by its GitHub, it doesn't seem to be actively maintained at the moment, so I'm not going to elaborate on it. The second one is Capsule, an open source project by Clastix. It seems to tackle similar problems as Kiosk, includes more features, and I think it's a bit more flexible.

So let's talk about Capsule. Capsule extends Kubernetes to let you implement multi-tenant environments, and it enables a few interesting things. First, Capsule introduces a new entity into your Kubernetes ecosystem called a tenant. You focus your effort on this tenant entity and not on namespaces, which, if you think about it, is much closer to reality. And you can configure restrictions on a tenant. So if you want to have different kinds of tenants, for example a free tier and a premium tier, you can have a free tenant and a premium tenant, each with different restrictions, and you configure that on the tenant object. It also allows a tenant to own several namespaces. As we saw before, restrictions are namespace-based, so they need to be created per namespace, and Capsule automates this for you, so you don't have to think about namespaces, just about tenants and self-service. This way you can really remove the platform team from being the main bottleneck of the multi-tenancy setup. We need to provide autonomy for other individuals to manage their tenant and their workloads, without the ability to cause harm to other tenants or to the cluster.

To understand how Capsule aims to enable these things, let's explore how it works. The main thing about Capsule is that it defines a new CRD of kind Tenant. You can look at a quite simple example (a sketch of such a tenant follows below): you give the tenant a name, and you define who the owners of this tenant are. So the cluster admin persona, let's say someone working on the platform team, defines a tenant of this CRD kind, and defines who the users, the tenant owners, are. From the point the cluster admin applies this tenant, the self-service part basically begins. The tenant owners defined on the tenant can work on it without involving the cluster admin. They can create namespaces, and they can create other workloads on their own, and all of the resources they create will be tied to this tenant. If you run, for example, kubectl describe on this tenant, it will show you all of the namespaces that are tied to it. And tenant owners are really scoped to the tenant's limitations: for example, if a tenant tries to access another tenant's resources, or maybe system resources, they will get an access denied error.
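As a rough illustration, this is roughly what a Capsule Tenant manifest can look like. Treat it as a hedged sketch: the exact API version and field names depend on the Capsule release you run, and the tenant name, owner, node pool label, and quota values are assumptions made up for this example.

```yaml
# Sketch of a Capsule Tenant: a name, its owners, and a couple of
# tenant-level restrictions that Capsule replicates into every
# namespace the tenant creates. Field names vary between Capsule versions.
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: team-a                      # hypothetical tenant
spec:
  owners:
  - kind: User
    name: alice                     # hypothetical tenant owner
  namespaceOptions:
    quota: 5                        # the tenant may create up to 5 namespaces
  nodeSelector:
    pool: tenants                   # hypothetical label of a shared tenant node pool
  resourceQuotas:
    items:
    - hard:
        requests.cpu: "4"
        requests.memory: 8Gi
  limitRanges:
    items:
    - limits:
      - type: Container
        default:
          cpu: 500m
          memory: 512Mi
```

From this point on, the owner can create namespaces on their own, and Capsule stamps the quotas, limit ranges, node selector, and so on into each of them.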
So as we've seen, with Capsule you define restrictions on the tenant object, on top of the basic configuration, and Capsule will automatically apply them to all of the namespaces of that tenant. All of the restrictions that we saw before, the resource quotas, the node selectors, the network policies, et cetera, all of these can be defined on the tenant object.

After we've seen that, you can ask yourself: what if namespaces aren't enough for us? What if we have a tenant, or a bunch of tenants, that wants cluster-level access? Just for a few examples: a tenant that wants to install a cluster-scoped controller, maybe a CRD, maybe wants to install a Helm chart that requires admin roles, or, for any other good reason, a tenant wants complete control plane isolation and the ability to apply things at the cluster level. For example, if the tenants are developers in a company that develops controllers, this kind of use case may be very important for you.

To tackle these kinds of use cases, there comes virtual cluster, which is a unique approach, and it really means a control plane per tenant. Virtual cluster is an interesting approach where the tenants actually share the same cluster, but each tenant runs inside a sandboxed virtual cluster. It's like an embedded Kubernetes inside Kubernetes. If you look at the diagram, each vCluster actually has a separate control plane: that includes the Kubernetes API, DNS, the datastore (which is usually etcd), the controller manager, and optionally a scheduler. So every tenant working with a vCluster has the notion of running inside a separate cluster. By default, a tenant won't be able to access namespaces and resources of other tenants.

So what do we get with virtual clusters, at least with the vCluster implementation by Loft? We get a kind of out-of-the-box isolation. We get control plane isolation, which means the Kubernetes API and the data store are isolated per tenant. It allows you, as a tenant, to install cluster-scoped resources such as CRDs and webhooks. It also allows namespace names to be reused across different tenants. You also get DNS isolation, which means a workload cannot access workloads in other vClusters, or in the host cluster, via DNS.

However, you don't get complete isolation out of the box. If you have a privileged pod inside a vCluster, you can still escalate to the host node. There are no default resource restrictions, so a pod in a vCluster that uses a lot of CPU or a lot of memory can cause starvation on the node for other tenants. And there's no complete network isolation: we've seen that DNS isolation exists, but you can still access workloads in other vClusters, for example by IP address. vCluster by Loft actually offers an alpha feature for enabling stronger isolation, or you can do it yourself by applying restrictions to the namespace where the vCluster runs; a sketch of what that can look like follows below.
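Just to give a feel for the shape of this, here's a hedged sketch of enabling that stronger-isolation mode when you deploy a vCluster per tenant through its Helm chart. This is an assumption-heavy example: the exact value keys and their behavior depend on the vCluster chart version you install, and the release and namespace names are made up for illustration.

```yaml
# values.yaml for one tenant's vCluster, installed as one Helm release
# per tenant in its own host namespace, e.g. (assuming the chart layout
# at the time we looked at it):
#   helm upgrade --install tenant-a vcluster --repo https://charts.loft.sh \
#     --namespace vcluster-tenant-a --create-namespace -f values.yaml
isolation:
  enabled: true   # asks the chart to add baseline pod security, resource
                  # quotas, limit ranges and network policies around this vCluster
```

The do-it-yourself alternative is to apply the same kind of namespace guardrails we sketched earlier (a ResourceQuota, LimitRange, and NetworkPolicy) to the host namespace that the vCluster and its synced workloads live in.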
So if we try to compare Capsule and vCluster, it's a bit like comparing apples and oranges. But I think if you put them into the categories we saw before, it can help you get a better understanding of what's best for you to look at.

In terms of cost, if we look at Capsule, a tenant really still resides in a bunch of Kubernetes namespaces alongside additional restrictions, and from that perspective, the costs are pretty much like soft multi-tenancy, which means they are low. In vCluster, there is the additional workload of running a control plane per tenant, and this can translate to a bit more cost. You can choose the control plane to be a minified layer of Kubernetes, and that may lower the cost.

In terms of speed, in Capsule, creating and destroying a tenant is very quick, a matter of a few seconds, if you think of it as just automation around a namespace and some Kubernetes restrictions, most of which are pretty rapid to create. In vCluster, for each tenant you create, new pods are launched for the control plane. It's much quicker than hard multi-tenancy, because you already have node pools and a cluster, but it's a bit slower than Capsule; in my tests it usually took more than one minute.

In terms of isolation, in Capsule, the isolation really depends entirely on how much you invest in configuration. You can use all of the Kubernetes restrictions at your disposal and achieve a rather high level of isolation, but in the end, different tenants still share the same Kubernetes API and DNS. In vCluster, you get some restrictions built in out of the box, like we just saw, but if you want to reach a high level of isolation, you still need to invest in restrictions. However, because you get a separate control plane, Kubernetes API, and DNS per tenant, in my opinion this puts vCluster higher in terms of isolation. And there are some use cases, as we saw, that require cluster-scoped resources, like installing a controller and CRDs. In that case, I think vCluster clearly wins, as these will be isolated inside each vCluster.

In terms of maintenance, it kind of depends. In both cases, you have additional moving parts, and when you add moving parts, you add a bit of complexity, and that added complexity means more maintenance. In my opinion, vCluster may be a bit harder to maintain than Capsule, because adding control planes seems more brittle than automating Kubernetes' existing mechanisms. But I also think both are easier to maintain than do-it-yourself hard or soft multi-tenancy. So to sum it up, it's a trade-off, like everything. If you need the extra control plane isolation, and you're okay with a bit of added cost and a bit lower speed, start looking at vCluster. Otherwise, Capsule may be a simpler choice for you to explore.

I will now share the story of how we implemented multi-tenancy at Velocity as part of our freemium solution. A little bit about Velocity: Velocity allows developers to spin up on-demand, self-serve environments for testing, integration, and collaboration. You can create ephemeral environments that are based on your production artifacts and are as close as possible to your production environment. You can develop and debug locally while the rest of your environment is still running in the cloud. You can work directly in your IDE or with your existing toolbox. You can configure your environments to be destroyed after a period of time, and you can actually use real cloud resources instead of using mocks.

So when you think about the motivation, we had a kind of interesting use case at Velocity: we wanted non-paying users to be able to give our product a try. And we aimed for these freemium users to need minimal effort; we wanted the cluster to already exist, with our application installed.
And we wanted this experience to be very quick: when they register, they get a working cluster in a matter of seconds.

So first we gathered some requirements. The first one is that third-party apps will communicate with our Kubernetes cluster; as we saw at the beginning of the talk, we are here in the Kubernetes-as-a-service use case. Another one is to support a large number of users: we wanted to open this to a large audience, in the hundreds. We also wanted a ballpark estimation and a limit on how much it would cost us; we are a startup and we don't want this to be a major spend for us, and since it's a free offering, we can't pass any of the costs on to the customers. We want the ability to upgrade these free users to a premium kind of tier, because the free tier is actually quite limited in terms of resources and features, and if a customer wants to upgrade and pay to lower the restrictions, we should allow that easily, without having these customers start from scratch. Observability is important, like knowing which tenant owns what resources, and this is mainly for our internal troubleshooting purposes and internal visibility. As for isolation: we don't expect these freemium customers to run their production systems there, so we don't seek extreme isolation, but since we do care about security and privacy, we wanted a rather high level of isolation. And lastly, we didn't want this to be a big overhead for us, because we're a startup and we don't have large teams dedicated to maintaining this platform. We also need to move fast as we explore our market, and we can't invest a lot of time in implementing and bringing this up.

When we took our first steps, we quickly understood that hard multi-tenancy was not the way to go. It was just not a feasible solution for a free-user scenario, because of the cost, the speed, and the need to scale to a lot of users.

So we started off by exploring vCluster. It seemed like a very good trade-off between hard and soft multi-tenancy, and we definitely needed a high level of isolation. But we experienced some technical limitations, and this is quite specific to our use case. Our app relied on, for example, dynamically injected Kubernetes resources: we have a cluster component that copies secrets into namespaces, and we want to do that on behalf of the tenant. Since each tenant is encapsulated in a virtual cluster, we needed these components to connect to each of these clusters. Not impossible, but it required significant work and complexity for us. Second, our app relied on tenants sharing a resource in a namespace located in the host cluster, and the per-tenant vCluster DNS won't allow you to access services located on the host cluster. There were a few more general limitations. It seemed like a lot of moving parts for us, because there's a Kubernetes control plane per tenant, which we would have to manage to some extent, and that's added complexity for us. Also, it wasn't quick enough for us: it mostly took over one minute. For many use cases that is totally fine, but for our use case, we actually expected this to be a bit more rapid, in order not to affect the user experience. And we had to implement additional isolation: we didn't get all the isolation built in out of the box, so we still had to invest some effort in configuration.
So next, we started exploring Capsule. The first thing: when we read the Capsule documentation, it seems mainly aimed at sharing a cluster internally in your organization, with internal employees or other people in the organization. But our use case was a bit different: we wanted to share the cluster externally. And, somewhat surprisingly, we found it a good fit for sharing the cluster externally too, where it's used by apps rather than people.

There are a few things we initially liked about Capsule. It's mainly automating existing Kubernetes controls, which makes it a bit simpler in terms of logic and makes our troubleshooting a bit less complex than the control-plane approach. It's really fast: when we create a tenant, it usually takes a few seconds, almost as fast as just creating a namespace. And it meets the requirements of the multi-tenancy benchmark, which is a cool benchmark by Kubernetes SIGs. It includes a list of tests that validate whether your cluster is really ready for multi-tenancy.

We wanted the tenant lifecycle to be automated and self-serve; we couldn't rely on a human in the loop creating and destroying these tenants. So we wrote a Helm chart for setting up the tenants and the access to the cluster. This Helm chart includes two things. The first is a service account, which is the way for our app to access the cluster as the tenant owner. The other is a Tenant object, which defines the tenant. Helm really gave us the power of templating, so all of our tenants share the same definition, with some variance in the restrictions thanks to the templates. Helm also gave us the ability to use versioning, which may help us in the future with upgrades.

Here's an example based on our usage of the Tenant CRD (a sketch follows below). We define the tenant, and the tenant owner is a service account and not a user. And you can see that we actually define resource quotas on this tenant, so we limit the CPU and memory for the entire tenant. If you remember from a bit earlier, resource quotas are usually at the namespace level, while this is at the tenant level, which makes a lot of sense when you're dealing with tenants. This is really cool in my opinion, because as a cluster admin I don't have to think in terms of namespaces, I can really deal with tenants. And this is also where the self-service part really becomes clear: from this point, when you create a tenant, Capsule will automatically impose the Kubernetes limitations when the tenant creates its namespaces.
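Here is roughly what such a tenant looks like, as a hedged sketch rather than our exact manifest: the API version, field names, tenant name, service account, and quota numbers are illustrative and depend on your Capsule version and your own chart values.

```yaml
# Sketch of a per-customer Tenant as templated by a Helm chart. The owner
# is a ServiceAccount used by an external app, not a human user.
apiVersion: capsule.clastix.io/v1beta1
kind: Tenant
metadata:
  name: customer-1234                 # hypothetical tenant name
spec:
  owners:
  - kind: ServiceAccount
    # hypothetical service account created by the same Helm release
    name: system:serviceaccount:tenants-system:customer-1234
  resourceQuotas:
    items:
    - hard:
        requests.cpu: "2"
        requests.memory: 4Gi
        limits.cpu: "4"
        limits.memory: 8Gi
```

And because the chart is templated, upgrading a freemium tenant to a paid tier can be as simple as rendering the same chart with higher quota values.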
So here's a general explanation of our application flow. On the left, we have the flow of creating new tenants. When a user registers in our system, we have a flow that creates a new tenant, which means we install this tenant Helm chart, which, as we saw, installs a Tenant object and a service account. Using this service account, the external app, which is now a tenant owner, can access the cluster as the tenant. And on the right, we have the opposite flow, where we destroy the tenant: we uninstall the Helm chart, revoking the app's access from that point on.

This is another view of the flow. When the app authenticates towards our backend, it authenticates as a tenant, let's say tenant A, and then the backend creates a Kubernetes token that impersonates the Kubernetes service account we created in the Helm chart, and this service account owns tenant A. After it creates this token, it returns it back to the application. And once the application has the token, it can access the cluster directly with the token, and it can create namespaces, which will be owned by this tenant, by tenant A. So for example, if it created a namespace and then tried to access a namespace that doesn't belong to its tenant, like tenant B, it would get a permission denied error.

So after we've done all this, we came to the conclusion that we really had a very good experience with Capsule, and it made us realize that multi-tenancy doesn't have to be that difficult. Just to be clear, we did have some specific issues with Capsule, which I think are mainly related to our use case. As I said, there is no control plane isolation, so you can't really create cluster-scoped resources with Capsule. And we had a few occasions where we modified the Tenant object with a syntax error, but we didn't get a clear error or any failure for that. We only discovered it later, either when the tenant tried to run some commands, or when we looked at the controller logs and saw the error, which was a bit too late. And sometimes, when trying to delete a tenant, it got stuck because the tenant had some hanging resources. If you bring up a resource that gets stuck being deleted, that prevents deleting the namespaces and the tenant. This is generally a Kubernetes thing, not really related to Capsule. But in vCluster, for example, you could just delete the entire vCluster and be done with it. It's not always a good practice, but it makes the experience a bit smoother.

To conclude, we put together a bunch of tips from our experience and from our journey. First, isolation isn't really all about security, as you may imagine; it's also about reliability. For example, if you don't set proper resource quotas and limit ranges, you may cause problems for other tenants because you may starve them. You should really consider your use cases and trade-offs. Don't head straight for the hardest solution: when we think about security, we always want to implement the most secure solution, but if we look at hard multi-tenancy, maintaining many clusters is kind of a nightmare, and personally I would try to avoid it as much as possible. Especially if your cluster will serve internal employees in your company, you probably don't need the strongest isolation. You should really use the kubectl-mtb benchmark tool. It's a very cool tool; it will give you a validation of whether your cluster is properly configured in just a few seconds, so it's worth giving it a try. You should always assume that someone will be able to abuse your cluster to some extent, and you should apply some limits. We personally use cloud provider budget alerts to make sure that our costs aren't going way beyond what we expected. If you expose a cluster externally, you should probably consider pentesting. Many of us don't deal with security on a daily basis, and an experienced pentester can really benchmark your isolation level. And if you want to reach a really high level of isolation, do have a look at the NSA Kubernetes Hardening Guide. It's a really good document with good references for higher isolation.

So that's it. I really want to thank you for your time.
And I really hope you gain some insights about multi-tenancy that will help you start off your journey. We do plan to write some blog posts about enforcing security policies on multi-tenant clusters in our blog at Velocity. So do give us a visit. Link is right here on the slides. And I really want to thank you all and have a good one. Thank you so much, Jarell, for your time today. And thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you join us for future webinars and have a wonderful day.