Hi, and welcome to the future of multi-tenancy in Kubernetes. Today we have myself, Tasha Drew. I am the co-chair of the Multi-Tenancy Working Group, and I work in the Office of the CTO in the Advanced Technology Group at VMware, and Adrian.

Hi, my name is Adrian Ludwin. I am the lead developer of the Hierarchical Namespace Controller, a project of the Multi-Tenancy Working Group, and I work on GKE at Google.

This is Faye from Alibaba. I work on the Alibaba Cloud Container Service team, and I've been leading the virtual cluster project in this working group.

Hey, folks. This is Jim Bugwadia, co-founder at Nirmata. I contribute to the Multi-Tenancy Working Group, I'm a co-chair of the Policy Working Group, and I'm also a maintainer of Kyverno.

Cool. A quick overview of what we do in the Kubernetes Multi-Tenancy Working Group. We are working on defining the models of multi-tenancy that Kubernetes will support, discussing and executing upon work that needs to be done to support these models, and creating conformance tests that prove the models can be built and used in production environments. I'm the chair along with Sanjeev Rampal, and the projects that we've been incubating (and we've actually graduated two of these) are the VirtualCluster project, the Hierarchical Namespace Controller, and the multi-tenancy benchmarks project. You can see all of those in our GitHub repo under kubernetes-sigs. Some of them have graduated to their own repos, but you can find quick links to those in our main GitHub. If you want to talk with any of us about ideas or questions you have around multi-tenancy, or projects you may be working on yourselves that you'd like to talk about potentially incubating or partnering with us on, we're very active in Slack. If you want to attend our meetings and see our mailing list, the only thing you need to do is join the Google group that is linked right here. We have meetings every two weeks, Tuesday at 11 AM.
Once you join the Google group, you'll have access to our agenda document, where you can add agenda items to our meetings that we'll address as time permits based on how many topics we have, and we really encourage people to join, chat with us, and check out our code. So moving into the main content of today's chat, what we're going to be doing is a round table with myself, Adrian, Faye, and Jim around the future of multi-tenancy in Kubernetes. We're going to kick off by talking about why multi-tenancy is important and what patterns we see. Adrian, would you like to kick off?

Sure. As I see it, there are two main advantages to multi-tenancy. One is simply the cost savings. It is a lot cheaper, if you have infrastructure, to share it among lots of different tenants, whether those tenants are different teams at a single company (which is to say, people who are directly using Kubernetes) or whether those tenants are SaaS consumers (which is to say, people who don't know that they're using Kubernetes; they're just using some app, and it makes no difference to them what it's running on, but the SaaS producer, the vendor, has chosen to use Kubernetes). So multi-tenancy gives you cost savings, because things like the control plane and the individual nodes can get shared between different tenants, and it can also, in some cases, give you management cost savings as well, because it's easier to manage everything together. Now, that can be a bit of a double-edged sword: if your goal is to isolate tenants from each other as much as possible, sometimes that can be harder to do within one cluster. I think we'll probably get to that, but in my mind, those are the two key use cases, multi-team and SaaS, and the benefits are cost savings and, maybe more marginally, administrative savings.

Awesome. So I think that's a pretty complete description.
So we'll move into talking about what support exists for multi-tenancy in the Kubernetes ecosystem. Faye, what support do you see?

Yeah, I think that we have quite a few existing solutions for multi-tenancy. In the blog posts we published on the Kubernetes blog, we defined namespaces as a service as the first model, and we have the classic clusters as a service. We also have a newer one, control planes as a service. So we have different models to support multi-tenancy, and beyond these three concepts, the community is actually developing different solutions following the same ideas. For example, Loft has brought out vcluster, which is conceptually very close to the virtual cluster project we are proposing in this working group, and there is another effort called Capsule, which is close to the hierarchical namespace concept we are developing in our working group. So in summary, I think we have quite a lot of solutions built around the concepts that we already described in our blog posts. Maybe Jim or Adrian, you want to bring more details on those projects if you have more to add.

Yeah, certainly. In addition to some of those projects, there are some different patterns that we also see. There was a recent blog post put out in the community where folks were talking about using policy engines or admission controllers to inject node selectors, et cetera, to isolate workloads or to get somewhat stronger isolation for different tenants. But going back to the two major use cases that Adrian talked about, where you have internal teams either sharing entire clusters through namespaces as a service, or getting their own control planes, or just doing clusters as a service through cloud providers and cloud provider integrations: those seem to be well-established patterns, and there are tools and solutions for those.
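To make that node-selector-injection pattern concrete, a mutating admission policy can stamp a per-tenant node selector onto every pod created in a tenant's namespace, so each tenant's workloads land on dedicated nodes. The sketch below uses Kyverno, which is mentioned in this session; the `tenant-a` namespace and the `tenant: a` node label are illustrative assumptions, not something from the talk, and the exact policy schema can vary between Kyverno versions.

```yaml
# Hypothetical sketch: pin all pods created in namespace "tenant-a"
# onto nodes labeled tenant=a, so tenants do not share nodes.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: tenant-a-node-isolation
spec:
  rules:
    - name: add-tenant-node-selector
      match:
        any:
          - resources:
              kinds: ["Pod"]
              namespaces: ["tenant-a"]
      mutate:
        # Merge a nodeSelector into every matching pod at admission time.
        patchStrategicMerge:
          spec:
            nodeSelector:
              tenant: "a"
```

The same effect can be achieved with any mutating admission controller; the point is that node-level isolation can be layered on without any upstream Kubernetes changes.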
And of course, if you want something additional, you would have to look at securing your data plane, which is also an important topic.

What I'd add to that is, it's not as though Kubernetes has no features for multi-tenancy. Even the basics, such as namespaces, RBAC, network policies, and quotas, are all building blocks in upstream Kubernetes that were designed to make it easier to share a cluster among tenants, whether those tenants are teams, or SaaS consumers, or even different workloads run by the same team. But yes, there is this kind of explosion of options in the community. One of them is the project that I started, hierarchical namespaces, which, as the name implies, is trying to take a fairly well-established construct from upstream Kubernetes, namely namespaces, and just add a couple of features to make them more usable in more contexts. And we're getting some pretty good adoption there. Mercari actually just announced that they were using us in prod, and we've got some other large companies on Slack who have been openly contributing. I haven't gotten their permission, so I probably shouldn't say their names, but you'll see them if you join us on Slack. So it's clear that these additions are meeting a need. But if you look back at the first question, what are the patterns, we haven't seen a lot of new patterns show up. Within the two established use cases we've seen people with different needs at the margins, but they've kind of settled, as Faye said, into these three classic solutions: one based on namespaces, one based on multi-cluster, and the last one based on virtual clusters, which is basically where you run multiple control planes within one overarching cluster.
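The upstream building blocks mentioned above compose naturally into a namespace-as-a-service setup. Here is a minimal, hedged sketch for a single tenant namespace; the `team-a` name, group, and resource limits are illustrative assumptions.

```yaml
# Hypothetical sketch: one tenant namespace built from upstream primitives.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
# Cap the tenant's aggregate resource consumption.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    pods: "50"
---
# Deny cross-namespace traffic: only pods inside team-a may reach each other.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: team-a-default-deny
  namespace: team-a
spec:
  podSelector: {}
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector: {}
---
# RBAC: grant the tenant's group admin rights scoped to this namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admins
  namespace: team-a
subjects:
  - kind: Group
    name: team-a
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin
  apiGroup: rbac.authorization.k8s.io
```

Projects like the Hierarchical Namespace Controller then add features on top of this, such as letting a team self-service create sub-namespaces that inherit these policies.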
And so I would say at this point, even though there are a lot of different solutions, they've all kind of coalesced into one of those three groupings, which is nice because it shows the maturation of multi-tenancy in the ecosystem.

What do you think about multi-tenancy at the data plane? Is that something that this team should be addressing?

Yeah, I think it is. To be honest, it's a pretty difficult problem compared to isolating the control plane, because it's far from trivial and because it directly impacts production: in the worst case, you do something wrong and you get catastrophic results. I think, as a best practice, if you want a multi-tenancy solution to support internal teams, that's probably okay to support long term, but if you go beyond that, we would highly recommend using a sandboxed runtime to prevent any tenant from getting root access to the node. I think Kata Containers is one of the solutions on the market. Adrian, maybe you can describe gVisor and how to use it.

Yeah, certainly. The control plane is hard to isolate, and the data plane in some ways is even harder, because at least with the control plane the attack surface is the Kubernetes API server, whereas with the data plane the attack surface is all of Linux, or all of the underlying OS, which is obviously significantly bigger. And so, as Faye said, if you were trying to do multi-tenancy within, let's say, one company, and the teams pretty much trust each other and aren't expected to be actively malicious towards one another, then securing the data plane might be important but not critical. You might get away fine with network policies or an Istio mesh to control the communication, because you're not really worried about one team trying to break out of their container and attack others. If you are a little bit worried about that, then yes, you can use tools such as gVisor or Kata Containers.
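Selecting a sandbox like gVisor is done through the upstream RuntimeClass mechanism. Below is a hedged sketch: the `runsc` handler name is the common convention for gVisor, but the actual handler depends on how the nodes' container runtime (e.g. containerd) has been configured, which is an assumption here.

```yaml
# Hypothetical sketch: a RuntimeClass pointing at a gVisor (runsc) handler,
# plus a pod opting into the sandbox. Assumes the nodes' container runtime
# has been configured with a "runsc" handler.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor   # the only change needed; the rest of the pod is unmodified
  containers:
    - name: app
      image: nginx
```

This is what makes the "you just set a new runtime class for your pod" workflow described next possible: the pod spec barely changes, and the kernel-level isolation happens underneath.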
So gVisor is a sandbox that's designed to work well on Kubernetes, and as much as possible, you just set a new runtime class for your pod and you get the sandbox, and as long as you don't use an unsupported feature, like certain system calls or GPUs, everything should just work, maybe at a slight performance penalty. However, if you really are worried about tenants attacking each other maliciously, that's when you should at least consider simply using different clusters for everybody. The amount of work it takes to harden Kubernetes is not impossible (people do it), but you have to be very clear on what you're getting for that: is the cost savings really worth it, is the management savings really worth it? If not, especially if you're using a cloud provider, it might be worth your while to just give everybody their own cluster. If you're managing the hardware yourself, on the other hand, it'll look quite different, and you might want to start looking at sandboxing technologies, but that's a more specialized use case, and one that's hard to give general advice on, other than: protect your traffic flows and use a sandbox.

One other sort of intermediate solution would be to share the control plane, using the best practices and techniques we've discussed, but then isolate each tenant at the node level. Again, this goes back to the case where you're a SaaS provider with multiple customers: each tenant could get their own nodes, so you're not sharing the nodes, you're isolating tenants on nodes, and you're taking advantage of things like the VM layer, the node layer, or the OS layer itself for that isolation, as opposed to gVisor or Kata Containers or other solutions. So yeah, certainly all three options would be interesting to look at in that SaaS-provider type of use case.

Yeah, I guess another possibility is just using a serverless runtime, if people like. It's funny, we don't have anybody from the EKS team here, but (I'm not sure if Azure has something similar) you can use Fargate as a backend if you're on EKS, and then, at least for your runtime security, you're relying on Fargate to manage your multi-tenancy and not your own VM setup. Now, if that's what you're doing, you'd better really understand what the multi-tenancy implications of Fargate are (I don't work on it, I'm not an expert), because that now becomes part of your security story. But I think in some cases, using some sort of serverless solution could be another potential path, depending on your vendor and your comfort with different security risks.

So as you look at both what the working group has accomplished and the different data plane and control plane concerns, do you see anything that needs to be added upstream? I thought Faye might have some interesting thoughts around this.

Yeah, to be honest, when we think about how to solve the multi-tenancy problem, changing upstream Kubernetes to support it is the first option that comes to mind, and it's one my company considered, but we haven't tried it, and when we talked to different people, so many things would need to be re-architected that people are reluctant to do it in the community. But luckily, there have been some attempts in the community. There is a project called Arktos, and in that project the developers actually introduced a new API concept called Tenant, built on top of the namespace: basically, if you look at the full name of an entity or object, there is a piece of the name that is the tenant. That model nicely resolves many of our use cases, in terms of isolation, in terms of self-service namespace creation, even CRD support. But there is one big problem: almost all the API paths have to be changed, so it has a very big impact on the ecosystem. So based on my understanding of the impact on adoption, I think people finally decided not to do that; they will do
something more similar to an as-a-service type of solution for their multi-tenancy use cases. That's at least one trend that I have seen.

One thing that I've definitely seen within the Multi-Tenancy Working Group is that we often start collaborating with research universities who've begun looking into hardening multi-tenancy within Kubernetes as an academic project, just as a research effort, and one thing we've had some interesting conversations around is that a big part of the value in consuming Kubernetes is being able to stay up to date with current releases, so that you're staying in lockstep with the vendors that are providing services on top of Kubernetes and with all of the capabilities and ecosystem around Kubernetes. That's actually a pretty big endeavor. So if you're looking at any given release of Kubernetes today and saying, hey, I'm just going to take this and harden it, keep in mind that if you can't continue to consume updates and can't stay current with the latest releases, what you have ends up looking like a hardened fork that isn't compatible with the rest of the ecosystem. Keeping upgrades in mind is a lot of work, but it ends up being a pretty crucial component of any solution you might add here.

One thing that comes up consistently when we're talking about this space is: should we change the Kubernetes API to be secure, hardened multi-tenancy by default? That's a huge lift and a big architectural change on the back end, but it is a question that has come up in a lot of our meetings. I'll pass this one to Jim. Jim, do you think that there's appetite among the users of Kubernetes for that kind of massive API change?

So I think the value of that change is understood, but the challenge, like Faye was also mentioning, especially with the project that tried adding tenants, or tenant information, to the APIs, is what it
would do to the compatibility of other add-ons and other solutions that are built on Kubernetes. So at this point, it seems like it would have to be a breaking change, and since the core Kubernetes APIs are GA and have to maintain compatibility, there's no easy way to introduce that change into the Kubernetes APIs themselves. However, there is one other area, speaking of add-ons. If you're implementing or using Kubernetes, obviously there's a control plane, but there are several other components, several other add-ons, that need to be run to get Kubernetes clusters operationalized within an enterprise. From that point of view, look at DNS: every Kubernetes cluster runs DNS, requires DNS, and DNS is not tenant-aware. Those sorts of changes, like making something like CoreDNS tenant-aware, are in my opinion certainly feasible, and it seems like that could be done, or other DNS projects could offer that to Kubernetes, to add better tenancy constructs and isolation and segmentation, even with namespaces.

Adrian, if we were going to do something to make multi-tenancy easier to use, what should it be?

Well, easier to use. Yeah, I'll agree with what Jim and Faye said: I don't think there's a lot of value in making large changes to upstream at this point, and certainly nothing breaking, because the fact that we have been able to implement things like HNC and VirtualCluster shows that there's a reasonable path forward that is probably good enough, given the traction that the current Kubernetes API has, and it's not worthwhile to rip that all up. Now, as I mentioned, hierarchical namespaces are getting good adoption, and by design they've been built to be additive on top of current namespaces. So let's say that three or five years from now we're seeing
a large percentage of people using it: we could add that to upstream, and that would improve usability because it's one less component to add on. But on the other hand, most people are going to install lots of components anyway, whether it's a policy component such as Kyverno or Gatekeeper, or CoreDNS, or a network plugin. So you already have an overall usability story that you have to answer for Kubernetes, and multi-tenancy is just one part of that story. And I think that's really the way to look at this as we go forward: to make it easier to use, we really need to start shifting away from the technical solutions, because we have either plugged those holes with the projects that we incubated, or we are plugging them, or at least we know where the holes are and we can point people to them, like, yeah, let's go work on CoreDNS for a bit, let's go figure out how you get across different network partitions using kube-proxy, or something like that. Once you have that, think of somebody coming in who doesn't know Kubernetes and needs to look at this universe (I think we've all seen the chart of CNCF projects that looks like an eye chart from hell): what they really need is a guide, and what you need is best practices documentation, possibly, well, for the vendors, you can actually have wizards and whatnot, but in any case something that can just get people onto the right path. So it really does become a usability and documentation problem, and I can see that being, by and large, the future of multi-tenancy in Kubernetes: not new technologies, but new ways to help people use them.

Thanks for that bridge. So just to wrap up and really answer the question that we posed in this session: what is the future of multi-tenancy? Jim, what do you see here?

Yeah, so certainly, as Adrian was just pointing out, there's still work to be done in some areas, but the core building blocks seem to be in place, and there are enough usable tools,
and we are also seeing some of the tools we mentioned, like Loft and Capsule and others, emerging from both vendors in the ecosystem and open source projects. So at this point, the work that seems to be left is to go address some of the smaller remaining issues we talked about, whether it's with DNS, or perhaps with kube-proxy, and a few other things that could be made tenant-aware without changing the guts of Kubernetes itself, and then evangelism, awareness, and better classification of how users and enterprises can map their use cases back to what needs to be done in multi-tenancy. And within the Kubernetes community itself, there are of course several groups: there's SIG Usability, there's networking, there's the Policy Working Group. So those could be areas of collaboration, with some outreach into those streams to see whether these projects, as they graduate and mature, fit in well there, and to carry forward some of the work that we've started here.

Thanks. Faye, what are your thoughts around the future of multi-tenancy?

Yeah, I do agree with Jim's points; I don't have too much to add, I guess. As a working group, we certainly welcome anyone: if you come with a new idea, first point out the challenges you are facing, point out the problem areas where you think Kubernetes is not multi-tenancy ready, and let's see if we can build something new, or maybe we'll change our minds. As an open community, we certainly welcome anything; if you have some idea, if you want to do an integration, we fully support your approach. So yeah, that's all.

Awesome. Any closing thoughts, Adrian?
Yeah. As we said, well, I wasn't actually here when the Multi-Tenancy Working Group started, but my understanding is that we set out to find out what patterns need to be supported and what we need to build or add, to upstream or to the community, in order to enable those, and I think that we've largely succeeded in that task. So does the working group need to continue in its current form? Maybe it's a nice landing place for people to come to, or maybe we can sit under SIG Usability, since SIGs of course are longer-lived than working groups, and have that be the landing zone. I think we can discuss what the organization will be, but I'm feeling pretty good about what we've accomplished in the couple of years since I joined this group.

Awesome. Yeah, well, thanks everybody for sharing your thoughts on the future of multi-tenancy, and to the audience listening: if you have joined KubeCon live, we will be taking audience questions both within the platform and in the CNCF chat, or you can come and join the Kubernetes chat channel for multi-tenancy, which we're in outside of events too. So thanks, looking forward to hearing your thoughts, and looking forward to more multi-tenancy solutions. Woo! Thanks, everyone. Thank you.