So, hello everyone. Today is the last day of the conference and this is the after-lunch talk, so if you're still here, you must be real geeks. A little disclaimer about me: this is my first attempt at being a conference speaker, so I might fumble in between; just bear with me a little.

Before jumping into the talk, a little bit about me and my company. My name is Pratik, and I'm a software engineer at Loft Labs. From my accent, most of you have probably guessed I'm from India; the company is based in the USA and I work remotely from India. If you'd like to talk about the technology I work on, or just have a casual conversation, you can ping me on my social handles shown up here.

At Loft Labs we work on a bunch of technologies. Loft is our commercial product; it enables large companies to provide self-service, isolated namespaces to large numbers of teams or developers. We also work on a bunch of open source projects. Our oldest project is DevSpace, which lets you develop an application directly inside Kubernetes; it's essentially a replacement for Docker Compose. Then there's Kiosk, a multi-tenancy extension for Kubernetes; jsPolicy, a policy engine that lets you write Kubernetes policies in JavaScript and TypeScript; and then there's vcluster, which is what we're here to discuss today.

Before I start with vcluster, I'll try to establish some context. I assume everyone attending here knows Kubernetes. If not, maybe you can have your post-lunch siesta. Just kidding, don't sleep here. For those who are new to Kubernetes or just getting started, the official definition will make little sense. When I started working on Kubernetes, it didn't make any sense to me either. Someone I was working with told me Kubernetes is just a container manager, which controls and governs the overall working and lifecycle of containers, and as I worked more and more with Kubernetes, that definition started making sense to me little by little.

Nowadays, one of the hot topics around Kubernetes is multi-tenancy. I'll try to explain what it is and why we should care about it as simply as I can. As the name suggests, multi-tenancy is something related to more than one tenant: basically, it means sharing a cluster between multiple teams or multiple customers. Why should we care about it? Creating single-tenant clusters is quite expensive, so using one multi-tenant cluster saves cost. It also simplifies administration. Just imagine someone creating 1,000 single-tenant clusters: the overhead of managing those 1,000 clusters is very high, and if you also have to maintain the ingress controller, cert-manager, Prometheus, and other controllers in all of those clusters, that's a lot of work.

This diagram, from a Kubernetes talk, shows what multi-team and multi-customer tenancy architectures look like. Achieving a stable multi-tenant cluster also poses some challenges. One of the major challenges is security: defining what type of access to give to which tenants. Isolation is a big challenge as well, and as we go deeper into this area we'll see many others. By default, Kubernetes gives us namespaces to handle multi-tenancy. We can set up network policies, limit ranges, resource quotas, and other things to lock down and isolate the tenants; a rough sketch of that follows below.
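As an illustration of that namespace-based approach, and not something from the talk itself, locking down a tenant namespace might start like this; the tenant name and quota numbers here are made up:

    # one namespace per tenant
    kubectl create namespace tenant-a

    # cap what the tenant can consume in that namespace
    kubectl create quota tenant-a-quota \
      --hard=pods=20,requests.cpu=4,requests.memory=8Gi \
      --namespace tenant-a

    # network policies and limit ranges have no imperative shortcut,
    # so those come from manifests you write yourself
    kubectl apply --namespace tenant-a -f limit-range.yaml
    kubectl apply --namespace tenant-a -f network-policy.yaml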
But what if some tenant needs to access cluster-wide resources? If someone needs to work with CRDs, you'd have to give them admin access. And what if a tenant with admin access breaks the cluster? Then you'd have to spin up a new one, and that could cause issues for the other users as well. So you can see there's a lot of work and overhead with namespace-based multi-tenancy.

Is there a pragmatic, middle-ground solution? If you look at the diagram, the pros and cons of the two approaches are mirror images: isolation is very weak with namespaces, while a separate cluster is very expensive. With vcluster, this middle ground is exactly what we're trying to achieve. It tries to give you the benefits of a separate cluster at the cost of a namespace. If you look at the diagram again, vcluster sits in the middle: isolation in a vcluster is stronger than a namespace, while a vcluster is much cheaper than a separate cluster.

So what is vcluster? A vcluster, or virtual cluster, is a Kubernetes distribution that runs on top of another Kubernetes cluster. It has its own control plane, so no two tenants interact with the same API server, or with the host API server. It is lightweight, and compared to a full-fledged cluster it takes very little time to provision. Like virtual machines partition a host machine, virtual clusters partition a host cluster into multiple logical clusters.

I have a small demo here at the start; the command sequence is sketched just below. vcluster ships as a small CLI, which I've already downloaded. The list command shows you the vclusters present in your cluster. To create a vcluster you just run vcluster create with the virtual cluster name; I'm passing connect equal to false because I don't want to connect to the vcluster right away. Now, if you get the namespaces in the host cluster, it has created a vcluster-vc1 namespace, and in that namespace our virtual cluster is up and running. Next we connect to that vcluster: we can see it is running, and we use the connect command to access the virtual cluster. It's a bit slow, sorry; it's essentially stopping the Docker proxy and starting a new proxy container in the background. Now when I do kubectl get namespace, you can see the fresh namespaces there. The age is just 40 seconds, which means they were just created, and the namespaces on the right-hand side and the left-hand side are different, so you can see you're in a separate cluster. And when you do kubectl get pod in kube-system: as the admin of the virtual cluster you can list the pods in that namespace, whereas in the host cluster you may not be admin, so there you wouldn't be able to.

So how does this work exactly? This is what you're here for: how Kubernetes inside Kubernetes works. Basically, a virtual cluster is a control plane running inside another Kubernetes cluster. As we know, a control plane is an API server, a data store, a controller manager, and a scheduler, which governs the workloads running in the namespaces, and there can be multiple namespaces. As users, we only ever interact with the API server, so what vcluster does is spin up an API server and some other components in pods in a namespace. We run the control plane as a pod, and the user connects to the API server running inside that pod.
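For reference, the demo roughly boils down to this command sequence; flag spellings have shifted a little between vcluster CLI versions, so treat it as a sketch:

    vcluster list                         # show the vclusters in the current cluster
    vcluster create vc1 --connect=false   # create vc1 without connecting right away
    kubectl get namespaces                # host view: a vcluster-vc1 namespace appears
    vcluster connect vc1                  # proxy into vc1 and point kubectl at it
    kubectl get namespaces                # vcluster view: fresh namespaces, ~40s old
    kubectl get pods -n kube-system       # works here even without host admin rights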
So what does this mean from an architecture perspective? vcluster mainly has two components: a vcluster StatefulSet and a vcluster service. The StatefulSet creates the pod the virtual cluster lives in, and that pod has two containers. The first is the control plane, which holds the API server plus the controller manager and the data store; the other is the syncer. The control plane is a certified Kubernetes distribution: it has passed all the conformance tests and works exactly like any regular Kubernetes cluster. Whenever a user interacts with the API server, they go through the vcluster service, which is created in the host namespace. It's a regular Kubernetes service, so you can expose it using an ingress or a load balancer.

Let's look at how we can create a workload in the virtual cluster. We have the host namespace, our vcluster is running in it, we have the vcluster StatefulSet and everything is up. Now when we do kubectl create namespace ns1, it creates the namespace in the vcluster. A namespace is just an entry in the data store; it doesn't have any physical entity. The data store could be anything: we use SQLite by default because it is very light, but it's configurable, so you can use etcd or anything else if you like.

Now let's try to create a deployment in it. When we do kubectl create deployment with an nginx image, in a similar way it first creates the entry in the data store and then creates the pod in that namespace. But if you noticed, we do not have any scheduler in the vcluster; we just have an API server, a controller manager, and a data store, but no scheduler. So how are the pods getting scheduled? This is where that strange other component, the syncer, comes in. What the syncer essentially does is copy the pods created in the vcluster's data store down to the host cluster namespace, and once the copy is done, the host cluster's scheduler schedules those pods onto nodes.

A virtual cluster is essentially a Kubernetes cluster, so we can have multiple namespaces, and we can have multiple workloads with the same name in them. Doesn't this cause naming conflicts? Both pods have the same name, just in different namespaces, and as we said, vcluster creates all the workloads in one host namespace, so it should cause a naming conflict. What happens is that the syncer rewrites the names before syncing a resource down, following a certain pattern. If you noticed, the synced pod has a strange, weird name: the first part is the name of the resource itself, then the namespace name, which is ns1, and then vc1, which is the vcluster name. The vcluster name is appended because you can create any number of virtual clusters inside the same namespace. Since the syncer follows this pattern, there is little to no chance of a naming conflict in the host namespace; an example of the renaming is sketched below.

Does the syncer sync everything down, Deployments and ReplicaSets included? If it did, what would be the point of a separate virtual cluster? It does not. By default, the syncer only syncs the resources the pod needs to run: mounted ConfigMaps and Secrets, PVs and PVCs, Services, Endpoints. If those things aren't there, the pod won't be scheduled onto any node. It does not sync the higher-level resources like Deployments, StatefulSets, DaemonSets, and CRDs. The syncer also syncs the status back from the host cluster object to the respective
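To make the renaming concrete, here's what the same pod might look like from both sides; the hash suffix is invented, and the separator pattern shown is the one the syncer generated in the demo:

    # inside the vcluster
    kubectl get pods -n ns1
    # nginx-deploy-7d4f8b65c9-abcde                1/1   Running

    # same pod on the host, renamed by the syncer:
    # <pod name>-x-<namespace>-x-<vcluster name>
    kubectl get pods -n vcluster-vc1
    # nginx-deploy-7d4f8b65c9-abcde-x-ns1-x-vc1    1/1   Running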
virtual cluster object, to keep it in the loop. There are many other resources that can be synced down by the virtual cluster, and those can be enabled or disabled with the sync flag or via the vcluster configuration file, which is just a values.yaml file if you're familiar with Helm; it's just a Helm values file.

Another demo: I'll try to create some workloads here. I'm listing the virtual clusters; it's the same vcluster we created in the last demo. When I get the pods in that namespace, I see two pods: coredns and vc1. vc1 is essentially our virtual cluster, with its two containers, syncer and control plane, and coredns is just a CoreDNS pod to manage DNS inside the virtual cluster. When I connect to vc1 and do kubectl get ns, I see the fresh namespaces, and now I'll create a deployment here: kubectl create deployment, nginx-deploy is the name and the image is nginx. The deployment is created; if I get the deployments, I can see it's already ready and available, so the pod is in the Running state. Now I'll disconnect from the vcluster and check what kind of pod was created in the host cluster. When I list the pods in the host cluster namespace, you can see the pod name, then the namespace name, then the virtual cluster name: this is how the pod is synced down to the host cluster.

vcluster comes with lots of features, and all of them can be enabled using the values.yaml file or the respective flags; see the sketch after this part. Some of the key ones: Isolation mode creates a resource quota, a limit range, and network policies, and applies the Pod Security Standards, so the workloads are strictly isolated. Rootless mode runs the virtual cluster containers, such as the syncer and the control plane, as non-root. A vcluster can be paused and resumed at any time: when it's paused, it scales down the vcluster StatefulSet or Deployment and deletes the workloads created by the vcluster, so no resources besides PVs are used in that state. The API server can be exposed via an ingress or a load balancer. And you can choose the distribution: by default, creating a vcluster gives you a k3s cluster, but you can use any of the following distros: k3s, k0s, k8s, and EKS. This one is a little nasty, but you can also install a vcluster inside a vcluster, and so on, making a nested chain.

So what are the use cases for putting Kubernetes inside Kubernetes? One, which we discussed, is multi-tenancy. A few others: ephemeral CI/CD environments, since you can create and delete a vcluster at any time and it only takes a few seconds to create, so it's a good option for running CI/CD; integration tests and instant preview environment deployments; remote development environments; experimentation, such as trying different versions of Kubernetes, where you write one application and try to run it on three or four different versions of vanilla Kubernetes or EKS or something; cluster simulations you can test on; multi-tenant clusters in production; and it's also good for demo and training purposes, because it's very lightweight and very easy to create and destroy.

So how should you get started? It's easy: just go to vcluster.com, download the binary, and run vcluster create with your cluster name. The
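As a sketch of how those features are toggled, assuming the flag names of the vcluster CLI around the v0.x releases; check the docs for your version:

    vcluster pause vc1                   # scale down the control plane, delete synced workloads
    vcluster resume vc1                  # bring it back; PVs survived the pause

    vcluster create vc2 --distro k0s     # pick a distro other than the default k3s
    vcluster create vc3 --isolate        # enable isolation mode at creation time
    vcluster create vc4 -f values.yaml   # or drive everything through a Helm values file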
documentation is very well written and easy to understand. You can find vcluster on GitHub, and it's open source, so you can also contribute to it. Test it, try to break it, and please do break it, so we can improve it. That's all; if anyone has a question, please come to the mic.

Q: The resulting pods all end up in a single namespace, so there's no isolation at all? If they need access to their secrets, they have access to the whole environment?

A: No. When you create the virtual cluster and hand it to your user, what you're giving them is kubeconfig access, right? Every tenant has their own kubeconfig, so they won't be talking to the host cluster's API server, and they won't be able to get at those resources. If a tenant needs more isolation, they can have their own vcluster, and there is the isolation mode I mentioned, which restricts all of those accesses.

Q: Is there anything that is not possible with a vcluster that you can do if you're running on a physical cluster? Any limitations?

A: I suppose there would be some things, but I'm not sure what they are, because it behaves just like your regular cluster, and I haven't come across such a use case yet, so I don't have an answer to that. If I find one, I'll write to you. Also, we are working on a CRD plugin right now; we have a plugin system, so you can write your own custom plugin and share it across your vclusters, and one of those plugins syncs your CRDs to the host cluster. We're trying to cover everything.

Q: Hi. Any chance to have it not as a CLI but as an operator, basically with a CRD to create a vcluster?

A: We install it via Helm charts, and the CLI is just a very lightweight wrapper, but I'm not sure why you'd want it to be an operator.

Q: Basically so people can define virtual clusters declaratively, in a CRD definition, and use GitOps, something like that, in order to deploy, for example, in CI/CD. I get the idea of using the CLI, which is probably fine, but if you have a more complicated authorization or authentication structure, it might be nice to have something like the syncer watching CRDs in the host and creating virtual clusters from them. It's just another interface.

A: If you want to customize the installation of the vcluster, there are many options, and I'm fairly sure the options for your use case are also configurable through the values file. If you have any more questions, you can write to me at my email address. Thank you.
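On the Helm point from that last exchange: since the chart is public, a declarative install, which GitOps tooling can own instead of the CLI, might look roughly like this; the release and namespace names are placeholders:

    helm upgrade --install vc1 vcluster \
      --repo https://charts.loft.sh \
      --namespace vcluster-vc1 \
      --create-namespace \
      --values values.yaml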