Welcome, everyone, to my session, Choose Your Own Kubernetes for Local Development. I'm Karthikeyan Govindaraj. I'm an open source enthusiast, I write blogs on Medium, and I speak at conferences. At work I'm a developer evangelist at BlackRock, and you can learn more about me through my bio links under the keyword gKarthik. I'm very active on Twitter, so if you can't ask your questions here, you can follow me and ask there; it doesn't need to stop with this session.

Before diving into the different local Kubernetes clusters, I'd like to introduce a few concepts. At a high level, a cluster is a group of two or more computers, or nodes, that work together to achieve a common goal. By that definition, a single-node cluster sounds like an irony: a single node cannot form a cluster. But in Kubernetes terms it makes sense. Kubernetes has multiple components attached to it, such as the API server, etcd, the scheduler, and the controllers. A single machine that runs all of these services together, providing a complete Kubernetes service, is called a single-node cluster.

Next is the multi-node cluster: at a high level, a cluster that has multiple worker nodes available for deploying different workloads. For example, if you deploy three instances of a web application, they can be scheduled onto three different worker nodes. If there is an infrastructure failure and one worker node goes down, the remaining two nodes still have your web application and continue serving requests, so you get high availability for your workload. But this high availability does not extend to the control plane, because the cluster still has only one control-plane node.
So the next one is the highly available control-plane cluster. Here you will have, for example, three worker nodes and three control-plane nodes. The reason: almost all CNCF-certified Kubernetes distributions use etcd, and etcd runs on the control-plane (master) nodes. That is where all your data, all the state of your resources, is stored. etcd uses the Raft protocol, and Raft needs an odd number of instances to elect a leader and reach consensus among the followers. Because of that, we run an odd number of control-plane nodes and any number of worker nodes. With this we get control-plane high availability, which is for the Kubernetes cluster itself, plus high availability for the deployed services through multiple worker nodes. So when you look at a Kubernetes cluster, check whether the control plane is always available, whether the cluster has more than one node for workloads, whether workloads are spread across evenly, and so on.

With that, the first local Kubernetes cluster we're going to see is MicroK8s. MicroK8s is from the Canonical folks. It is a simple binary: you download it from their website, and you can create a single-node or multi-node cluster with it. An additional advantage of using MicroK8s as the local cluster is an add-on called registry, a built-in container registry for your container images. To start using Kubernetes with MicroK8s, all you have to do is download the MicroK8s binary and execute microk8s install. That one command creates the Kubernetes cluster and starts it under the hood.
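To make that concrete, the install step looks roughly like this; a sketch, where the snap command and the macOS/Windows launcher path are assumptions based on the usual MicroK8s setup:

```shell
# On Linux, MicroK8s ships as a snap (channel pinning is optional).
sudo snap install microk8s --classic

# On macOS or Windows, the standalone launcher exposes the install
# sub-command mentioned above; it provisions a small VM under the hood.
microk8s install
```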
When you execute the status sub-command with the --wait-ready flag, microk8s status --wait-ready shows whether the cluster is running, whether it is highly available, which add-ons are enabled, and which additional add-ons are available but currently disabled. MicroK8s also has a sub-command called kubectl, with the exact functionality of the regular kubectl CLI. So when you want to list pods, or get all resources across all namespaces, you execute microk8s kubectl get all --all-namespaces. That gets clumsy, because every time you have to type microk8s kubectl. So I alias kubectl to microk8s kubectl and just run kubectl get all, and it behaves exactly like the local kubectl command. That is a single-node Kubernetes cluster with MicroK8s.

Now, how do we create a highly available MicroK8s cluster? As we discussed earlier, high availability needs multiple machines or nodes. Since we are running on a single development machine, a laptop or desktop, we are going to create multiple virtual machines instead. You can use any tool to create and manage virtual machines: VirtualBox, Multipass, Vagrant, and so on. I use Multipass here. Multipass is another tool from the Canonical folks; it uses HyperKit as the local hypervisor driver, or VirtualBox if that is available and you enable it on your machine. All you have to do is run multipass launch with the name of the virtual machine you want, plus the memory and disk space to allocate. Those flags are not mandatory, but I'm providing them for the sake of setting the memory explicitly.
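The status check and the alias trick can be sketched like this; the alias line is just my shorthand, not an official MicroK8s feature:

```shell
# Block until the cluster's core services are up, then print the report:
# running state, HA state, enabled add-ons, and available add-ons.
microk8s status --wait-ready

# MicroK8s bundles kubectl as a sub-command.
microk8s kubectl get pods --all-namespaces

# Alias it away so the cluster feels like any other kubectl target.
alias kubectl='microk8s kubectl'
kubectl get all --all-namespaces
```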
With that, I'm creating a couple more machines, microk8s-01 through microk8s-03, with the same memory and disk space. Once they are launched, executing multipass list shows the name of each machine, its state, the IP address attached to it, and the image used to create that virtual machine.

So what are we going to do next? We have the virtual machines in place, but there is no MicroK8s installed in them yet. So the first thing to do is install MicroK8s into each machine. Designate one of the three as the control plane, then exec into it with multipass shell followed by the VM name, which drops your terminal into the machine, and install MicroK8s there using the snap package manager. Once installed, the next step is to create the token for joining the cluster. MicroK8s has another sub-command called add-node. The output of add-node is itself a command, something like microk8s join followed by an IP address, a port number, and a token. The moment you execute microk8s add-node, the machine you run it on is designated as the control plane and high availability is enabled there. It also creates a join token; if you don't provide a token yourself, one is generated automatically. When you run that join command on another machine, that machine is added to the cluster as a worker node. That is how you create a highly available cluster.

Now that you have the highly available cluster, you might want the Kubernetes dashboard for visualization. You can do so by enabling it with microk8s enable dashboard,
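Putting the VM provisioning and the join flow together, a sketch; the VM names, sizes, and the join output are illustrative, and Multipass flag spellings vary slightly between versions:

```shell
# Create three identical VMs for the cluster nodes.
for i in 1 2 3; do
  multipass launch --name microk8s-0$i --memory 4G --disk 10G
done
multipass list   # shows name, state, IP address, and base image

# Install MicroK8s inside the VM designated as control plane.
multipass shell microk8s-01
sudo snap install microk8s --classic

# Generate the join command; run its output inside each other VM.
microk8s add-node
#   e.g. prints: microk8s join 192.168.64.2:25000/<token>

# Optional: the dashboard add-on for visualization.
microk8s enable dashboard
```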
similar to how we do it in minikube. So now we have the MicroK8s highly available cluster running in virtual machines. Earlier, when the cluster ran on our local machine, it was easy to just type microk8s kubectl, but that is no longer the case. What we have to do now is fetch the kubeconfig of the cluster running in the virtual machine. To do so, exec into the control-plane machine and run microk8s config, which prints the kubeconfig of the cluster. Use that as the kubeconfig for your kubectl CLI, and you can interact with the cluster directly from your local machine. You can even run kubectl proxy using the kubeconfig obtained from the MicroK8s cluster.

With that, our observations on MicroK8s: we don't need Docker running locally, and we can run it on any operating system, Windows, Mac, or Linux; there is no operating-system constraint. It also has a built-in image registry, which is helpful for pushing and pulling local images into the cluster immediately. And since it is a single binary, it is easy to create a single-node cluster in CI/CD environments for integration testing and the like. MicroK8s is a CNCF-conformant Kubernetes distribution. You can even attach a Windows machine to the cluster as a worker node, or run a Windows machine as a single-node cluster, and deploy Windows containers as well; MicroK8s supports Windows containers.

With that, the next local Kubernetes cluster is kind. kind follows the concept of Kubernetes in Docker. It is maintained by the upstream Kubernetes folks, developed as a Kubernetes SIG project. You can create a single-node or multi-node cluster with kind as well.
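Back on the MicroK8s HA setup, the kubeconfig hand-off described above looks roughly like this; the file paths are examples:

```shell
# Dump the cluster's kubeconfig from the control-plane VM to the host.
multipass exec microk8s-01 -- sudo microk8s config > ~/.kube/microk8s.yaml

# Point the local kubectl at it and talk to the VM cluster directly.
export KUBECONFIG=~/.kube/microk8s.yaml
kubectl get nodes

# Or expose the API locally, for example for the dashboard.
kubectl proxy
```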
To start a kind cluster, you download the kind binary and execute kind create cluster. Of course, you need the Docker daemon running on your host machine, because kind creates the Kubernetes cluster inside a Docker container. Upon executing that, kind creates a full-blown Kubernetes cluster in a single Docker container, with all the necessary Kubernetes components, and gives you the cluster.

Another feature of kind is that you can create a custom build of Kubernetes. For example, if you're working on a Kubernetes patch or enhancement and you want to test whether that patch works with a Kubernetes cluster and behaves as expected, you can use kind to build the Kubernetes node image and then run the built image locally as a kind cluster. To do so, you just run kind build node-image. If your Kubernetes source code is in the proper Go path, kind automatically detects it and builds the node image from that source. The image you built can then be passed as the value of the --image flag to the kind create cluster command. And kind runs across all operating systems, Linux, macOS, and Windows, provided the Docker daemon is running on the host. There are additional flags as well; I forgot to mention there is an arch flag, which you use for building the Kubernetes node image for machines of a different architecture, and so on.

The other advantage of kind is that you can easily load your locally built Docker images into the Kubernetes cluster. This lets you develop, deploy, and test in a Kubernetes cluster immediately.
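As a sketch of that workflow; the image tag shown is kind's default output tag, and the build command assumes a Kubernetes checkout in the usual Go path:

```shell
# A plain single-node cluster; needs a running Docker daemon.
kind create cluster

# Build a node image from a local Kubernetes source checkout; kind
# looks for the source under the standard Go path if none is given.
kind build node-image

# Boot a cluster from the freshly built image.
kind create cluster --image kindest/node:latest
```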
For example, if you're developing a cloud-native application or service and you want to test it immediately, you can build it locally, load the image into the kind cluster, deploy it there, and validate that it behaves as expected. That improves the SDLC timing considerably.

Next, kind also supports configuration files, the very famous YAML configuration files. Yes, it is YAML. With this YAML file I can create more complex Kubernetes clusters as well. For example, in the configuration on the left side, I'm saying there are three control planes and I need three workers, and upon applying that configuration file, I get three control-plane nodes and three worker nodes as my Kubernetes cluster. Of course, all of these run in Docker containers, but kind takes care of clustering everything together by itself; there is no manual intervention needed to form the cluster.

So what is our observation on kind? kind needs either Docker or Podman running on the host machine. It runs across all operating systems, again provided the Docker daemon or Podman is running on the machine. You need an additional deployment if you want a container registry in your local cluster, but I don't think you really need one, because you can just build images locally and load them into the cluster directly. I'm very much a fan of kind for CI environments, because you can use kind for integration testing in your CI environment and then clean up by executing a single command, kind delete cluster. It's fast, because everything is within containers, and you're not polluting anything in your CI pipeline.
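The multi-node configuration and the image-loading flow above can be sketched like this; the image name my-app:dev is a made-up example:

```shell
# cluster.yaml: three control-plane nodes and three workers, as on the slide.
cat > cluster.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
EOF
kind create cluster --config cluster.yaml

# Build locally and load straight into the cluster; no registry needed.
docker build -t my-app:dev .
kind load docker-image my-app:dev
```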
Though kind is a CNCF-conformant Kubernetes distribution, it does have a few known limitations, and one major limitation to note is that it doesn't support Windows containers yet. That is something of a drawback, but then kind is definitely not a production-grade orchestration tool; it is meant for development environments.

With that, the next one we're going to see is K3s. K3s is from the Rancher folks. It is a stripped-down version of the upstream Kubernetes source code. Its biggest disadvantage is that it only works on Linux machines. For Linux we can use virtual machines locally, or the alternate method is K3d: K3d is a wrapper around K3s that runs it in a Docker container. For virtual machines, again, you can use different VM tools; I'm using Multipass here, the same one I used for MicroK8s. I created a Multipass virtual machine, and all you have to do is execute the curl command from the K3s website, which fetches get.k3s.io and pipes it to the shell. Once you execute that, it automatically detects your architecture, OS version, and everything else, then installs the Kubernetes cluster and starts it as well. That gives you a single-node Kubernetes cluster, and by default K3s installs Traefik too.

So now you have the K3s Kubernetes cluster running in a virtual machine. To interact with it from your local terminal, all you have to do is grab the k3s.yaml file, which is the kubeconfig, change the IP in the kubeconfig from the loopback address to the actual IP address of the virtual machine, and feed that as the kubeconfig to your kubectl. When you execute kubectl with that kubeconfig, you're interacting directly from your local machine with the K3s cluster running inside the Multipass virtual machine.
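The kubeconfig rewrite is plain text processing, so we can sketch it with a stand-in file; the server line and IP address are examples, and the real file lives at /etc/rancher/k3s/k3s.yaml inside the VM:

```shell
# Stand-in for the kubeconfig that `curl -sfL https://get.k3s.io | sh -`
# leaves behind on the K3s server.
printf 'server: https://127.0.0.1:6443\n' > k3s.yaml

# Swap the loopback address for the VM's address from `multipass list`.
VM_IP=192.168.64.5
sed "s/127.0.0.1/$VM_IP/" k3s.yaml > k3s-remote.yaml
cat k3s-remote.yaml   # server: https://192.168.64.5:6443
```

Export KUBECONFIG pointing at the rewritten file and kubectl talks straight to the VM.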
With that, we saw a single-node K3s Kubernetes cluster. K3s also supports high availability. For that, we need multiple virtual machines. Once you create them, designate one of the virtual machines as the control-plane machine and take the node token, which is available at the path /var/lib/rancher/k3s/server/node-token shown on the screen; you also need the control-plane virtual machine's IP address. You can use K3S_-prefixed environment variables at boot time, or pass them as pipeline arguments; I used both. I put the node token into the K3S_TOKEN environment variable, and I pass the control plane's IP address through the K3S_URL variable piped into the same curl command I used for the previous machine. When I execute this command on the new virtual machine, it installs K3s, uses the token, and gets itself added under the control plane as a worker node. Repeating this with multiple worker nodes gives you a cluster with a single control plane and multiple worker nodes.

So with that, we have created the cluster with virtual machines. The alternate option is to run K3d. K3d is an alternative to the VM installation of K3s. This is also from the Rancher folks, and according to them it is a lightweight wrapper to run K3s in Docker. Every single configuration works exactly as it does in K3s. There is one additional advantage, like we saw earlier with kind: K3d also does auto-clustering. When I specify the number of machines for my cluster, say servers set to three,
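The agent join described above can be sketched as follows; the placeholders in angle brackets stand for values from your own control-plane VM:

```shell
# On the control-plane VM: read the cluster join token.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each worker VM: re-run the installer, pointing it at the server
# and passing the token via the K3S_-prefixed environment variables.
export K3S_TOKEN='<token-from-above>'
curl -sfL https://get.k3s.io | K3S_URL='https://<server-ip>:6443' sh -
```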
K3d will automatically create three different servers, automatically connect them as a cluster, and give us a full-blown Kubernetes cluster. If you look at the resulting output, you actually see three different servers for my Kubernetes cluster, carrying the control-plane, etcd, and master roles. An additional advantage of K3d is that you don't need to deploy a container registry yourself: by adding the registry-create flag, it automatically creates a local registry for the cluster as well. That is an advantage of K3d compared to K3s.

So with that, the observations for K3s: it works only with Linux distributions, and it needs a virtual machine management tool like Multipass or Vagrant on local desktop machines to manage the virtual machines. The alternate option is to run K3d, but with K3d you need Docker on your host, because K3d runs K3s inside a Docker container. And if you need a container registry for your local cluster, you have to deploy it yourself when using native K3s, or create it with the registry-create flag in K3d. K3s is also a CNCF-conformant, certified Kubernetes distribution.

So those are the local Kubernetes clusters. The next one is the virtual cluster. Before diving into virtual clusters, I want to define a few terminologies: virtual cluster and host cluster. A virtual cluster is a complete Kubernetes cluster that is spun up inside an existing Kubernetes cluster. So you have an existing Kubernetes cluster maintained by your infra people, and you have a namespace that you own.
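The K3d flow can be sketched as follows; the cluster and registry names are examples, and the registry flag takes a name argument in k3d v5:

```shell
# Three server nodes, auto-clustered, plus a built-in local registry.
k3d cluster create demo --servers 3 --registry-create demo-registry

# k3d merges the kubeconfig automatically; inspect the result.
kubectl get nodes   # shows the control-plane/etcd/master roles
```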
What you do is deploy the virtual Kubernetes cluster into that namespace you own, and that gives you a full-blown, complete Kubernetes cluster. And what is a host cluster? The cluster where you deploy your virtual Kubernetes cluster is called your host cluster. A self-service Kubernetes cluster is nothing but this: as a developer, you can log in and create your own Kubernetes cluster on demand, whenever it is needed. That is the generic terminology.

With that, we're going to look into the concept of vcluster. Unlike the previous local Kubernetes clusters, it doesn't create any cluster on your local machine; you will not have any Kubernetes cluster running locally. These clusters are created inside a running remote cluster that is already available to you. It follows the concept of Kubernetes inside Kubernetes. Developers work with the remote cluster via a thin-client CLI, and you can make developers the admins of their virtual clusters so they can test out all the features available in that particular virtual cluster. Since this comes as a CLI, it works with almost all operating systems; there is no OS constraint, it works on Mac, Linux, or Windows. And since these virtual clusters deploy pure Kubernetes by themselves, if you need a container registry, you have to deploy your own; most of the time you would not need one, because you would use the remote container registry as is. Next, and most importantly, when you're working with a virtual cluster, all the namespaces and resources you create within that virtual cluster are encapsulated within the host cluster's namespace where the virtual cluster is running.
So there is no leakage into the host cluster. This is how vcluster provides an impartial cloud-native developer experience across all developers, whatever operating systems and machines they use.

Next: do you need admin permission to create a vcluster? Actually, no. You might need admin permission for creating CRDs and cluster role bindings when deploying the virtual cluster machinery, but not for creating a virtual cluster itself. From the host cluster's perspective, there is a virtual cluster running within a namespace, and from the Kubernetes component perspective everything is present except the scheduler. Instead of a scheduler, the virtual cluster has a component called the syncer. The syncer receives the pod-scheduling requests and passes them on to the corresponding host cluster's scheduler, attaching a few constraints, for example that this particular pod must be placed within this namespace, and so on.

With that, our observations on virtual clusters: you do need a host cluster to deploy them, and they provide a remote development environment, a remote cluster experience, for developers. You have to deploy your own registry if you need one, because this just deploys Kubernetes and nothing more. It is a certified, CNCF-conformant Kubernetes distribution as well. The main takeaway of virtual clusters is that they provide an impartial cloud-native developer experience for developers on different OSes, Mac, Windows, and Linux, and on different machines as well, including VDIs, for example.
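A minimal vcluster session might look like this; the cluster and namespace names are examples, and it assumes you have the vcluster CLI and access to a host cluster:

```shell
# Create a virtual cluster inside the namespace you own on the host.
vcluster create dev-vc --namespace team-a

# Connect: the CLI points your kubeconfig at the virtual cluster.
vcluster connect dev-vc --namespace team-a

# Everything you see now is scoped to the virtual cluster's own view.
kubectl get namespaces
```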
With VDIs, it is hard, I mean impossible, to create a hypervisor or run virtualization, and if you're using a company-provided laptop, it can be impossible to enable the hypervisor in Windows. In those cases, virtual clusters are your lifesaver.

That's all I had today for Choose Your Own Kubernetes for Local Development. With that, thank you so much. This is Karthikeyan Govindaraj again; you can follow me on Twitter at gkarthiks. If you have any questions, I'll be on Slack, and you can ask me on Twitter as well. Thank you so much for the session, and thank you for watching.