All right, it's three past the hour, so let's go ahead and get started. I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, Navigating the Sea of Local Kubernetes Clusters. I'm Jeffrey Sica, a senior software engineer at Red Hat and a CNCF ambassador, and I'll be moderating today's webinar. We would like to welcome our presenter today, Otto Pulido, a developer advocate at Datadog.

A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page, at a link that we'll put in the chat, but it's cncf.io/webinars. With that, I'll hand it over to Otto to kick off today's presentation.

Thanks, Jeffrey. So yeah, thanks for joining. Today we are going to try to help you navigate the sea of local Kubernetes clusters. You may be new to Kubernetes, or fairly new. You're still learning how to deploy your applications on Kubernetes, you would like to get a local cluster running on your laptop or workstation to help you deploy and test those applications, and you may be wondering what you should use. That's the decision we are going to try to help you make today.

So, Datadog is a monitoring and analytics platform that helps companies improve observability of their infrastructure and applications, including Kubernetes: the cluster itself, of course, but also the applications running inside your cluster.
This is not a talk about Datadog itself, but just so you know where I work. I'm a developer advocate there at Datadog. I've been working on Kubernetes projects for the past three years. I'm a Certified Kubernetes Administrator, and I was also part of the team that created the curriculum and exam for the Certified Kubernetes Application Developer. Both are certifications by the CNCF and the Linux Foundation. And those are two ways of contacting me: you can DM me or ping me on Twitter, that's my handle, and that is also my email. So if you have any questions or you want to follow up afterwards, feel free to reach out on either of those two mediums.

But this is a talk about Kubernetes. What is Kubernetes? Kubernetes is an orchestration platform for your containerized applications. Basically, it's going to help you run your applications as containers in production. It's completely open source, it's a graduated CNCF project, and it's a super successful open source project: it has had 19 major releases since it was first released in 2015, with more than 90,000 commits on its repo from more than 2,000 contributors. So, a hugely successful open source project, but it's not only successful as a project itself; it's also very popular, and its popularity keeps increasing. From a user point of view, this is the Google Trends data for Kubernetes searches from 2015 until January this year. As you can see, it's a trend that keeps on growing, and it will probably continue that way for a while.

Why is it so popular? Why are companies choosing Kubernetes for their containerized applications? First of all, it's super extensible and flexible, because everything is API driven. Everything in Kubernetes is an API object that you interact with, and that API is also extensible. So if you want to create a new object to solve an issue that is particular to your space, you can do so.
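To make that extensibility point a bit more concrete, here is a sketch of what a minimal custom API object definition (a CustomResourceDefinition) looks like. The `crontabs.example.com` name and its fields are purely illustrative, not from the talk:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
```

Once a definition like this is applied, `kubectl get crontabs` works just like it does for the built-in objects.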
The second reason, obviously, is its large community. As we can see, there are a lot of companies putting a lot of development effort into Kubernetes, so new features, bug fixes, et cetera, land very quickly. Something to take into account. And another reason is that it helps with your multi-cloud strategy. If you want to run your application on several clouds, having this extra abstraction layer with Kubernetes between your application and the cloud may help you migrate those applications between the different clouds.

So this is fantastic, it's a great production environment, but what about the developer experience? How does that match that great production environment? This is how your production cluster may look. Maybe this is way too abstract, but you get the idea. Maybe this is less abstract: in your production cluster or your staging cluster, you will have several nodes, each of those is going to be running different workloads, and those nodes are going to be connected to each other through the network. All in all, what we are trying to say is that your production cluster is a distributed system, and as a distributed system it has its own needs that we have to take into account.

But then your workstation looks like this. These workstations that we have as developers are actually very powerful: they have several cores, they have a lot of memory, and they're getting better all the time. So with those powerful workstations, are we able to kind of mimic what we have in production on our own laptop? Unfortunately, the answer is no. Production is never going to be the same as your local cluster. As we said, it's a distributed system; it might be running on bare metal, or it might be running on a cloud with its own particularities. So it's never going to be the same experience, and that's something we obviously need to acknowledge and take into account.
So now that we know that it's never going to be the same, is it still worth running a Kubernetes cluster on your workstation? I think the answer is yes, because if not, I obviously wouldn't be giving this talk. Why do I think it's useful? Many things. First of all, it's a great learning tool. If you want to start deploying your applications on Kubernetes, you need a way to learn how to work with Kubernetes: how to create a Deployment object, how to create a pod, how to create a service, what the different types of services are. All of that you will need to learn, and having a local cluster may help you learn all these concepts very easily. The second is that it gives you a very quick feedback loop: if you have an application that you want to run quickly on a Kubernetes cluster, you can try it on your own workstation with one of these solutions. And it's also very useful for CI/CD workflows. If you have an application that you want to test on a Kubernetes cluster before shipping it, we will see that some of these solutions are very good at spinning up a cluster very quickly, tearing it down, and trying several combinations of Kubernetes versions, for example. So it's very useful for your CI pipelines as well.

Okay, so before we dive into this sea of local clusters: we just said that basically what we're doing here is talking about containers. And when we talk about containers, if we are not specifying the type of container, what we mean is Linux containers. Obviously there are other types of containers, like Windows containers, but in general, when we talk about containers, we are talking about Linux containers. And that changes everything, because if you're not running Linux natively on your workstation, everything that we are going to see here today is going to be virtualized. And that's okay.
Some of the examples that we are going to see are going to be very transparent about how they manage that virtualization, and some of them are going to be less transparent and more explicit about it. But from a technical point of view, if you're not running Linux, all of this is going to be virtualized.

Okay, so let's try to navigate that sea of local Kubernetes clusters. The goal of this webinar is not to tell you what you should use. The goal is to give you an idea of the differences between these tools, from a technical point of view and from a user experience point of view, so you can make your own decision about what is best for your use case, or try several of them depending on the scenarios that you have. These are the ones we are going to see, the five (or six) that we are going to cover today: Minikube, kind, MicroK8s, K3s and k3d (those two are very connected, which is why I put them together), and Firekube, which is a little bit different, but I thought it was interesting to include in the list as well.

Okay, but before we go into each of those, I think it's important to mention kubeadm, which is a project inside Kubernetes to help you create Kubernetes clusters in an easier way. Everything that you have to do from an admin point of view, adding nodes, managing certificates, all of that, kubeadm helps you with. From a developer point of view you don't have to know kubeadm in depth, but many of these tools are based on kubeadm; that is, they use kubeadm to actually create the cluster. So I thought it was interesting to mention it, so in case you see something related to kubeadm in some of the output, you know what it is.

Okay, so let's start diving in. The first one that we are going to see is Minikube, and the reason why we are starting with Minikube is because it's the most popular one.
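As a rough idea of what kubeadm does underneath these tools, the basic flow boils down to two commands. This is only a sketch; the address, token and hash below are placeholders, not real values:

```shell
# On the machine that will become the control plane:
sudo kubeadm init

# kubeadm init prints a join command containing a token and a CA cert hash;
# you run that command on each machine that should join as a worker node
# (the values below are placeholders):
sudo kubeadm join 192.168.0.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

The local-cluster tools that build on kubeadm run steps like these for you, so you normally never type them yourself.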
When you start with Kubernetes, probably the first thing you do is install Minikube. Why is it so popular? One of the reasons is that it has been around for quite a while: the project started in 2016, just a year after the first Kubernetes release. It provides you with a single-node cluster, and it's also cross-platform on Linux, macOS and Windows. And on those three, the user experience is exactly the same, because on all three it's going to be virtualized, and we will see in a little while how. Another good feature is that it follows Kubernetes releases very closely: as soon as there is a new Kubernetes release, a day or a couple of days after, you're going to see a new Minikube release that supports that new version. So it's really straightforward to be running the latest Kubernetes release. It also supports a lot of the nice features that you want to try when you're learning Kubernetes, like LoadBalancer and NodePort services, ingress, and different container runtimes. So it has a lot of those nice features that can help you test and learn those concepts. And it also has an add-on system where you can add more stuff to your cluster, things like the metrics server or ingress; you can add those by enabling the add-ons.

Okay, so from an architecture point of view, this is how it's going to look. You're going to have your hardware layer, your workstation, running your OS; then you're going to have a hypervisor, and then you're going to run your node as a Linux VM. That node is going to act both as the control plane of Kubernetes and as the worker node. If we look at the three different OSes, we can see that it looks exactly the same.
The only thing that changes is the hypervisor: HyperKit on macOS by default, KVM on Linux, and Hyper-V on Windows. The rest, as you can see, is exactly the same.

Okay, so let's have a look at the demo. All the demos are pre-recorded on this same laptop I'm giving the presentation on, because I needed to speed up some of the steps so it doesn't run for hours. So, this is how it looks from a user point of view. You can check the version; it's a simple CLI tool. You can check the list of clusters that you have on your workstation. We don't have any, so let's create one. Super simple: when I do minikube start, it's going to create the VM for me, so I don't have to create a VM beforehand. In this case it's using HyperKit, since I'm running macOS, and it's going to start a cluster directly, a 1.18 cluster in this case. One of the great things it does is that as soon as it finishes creating the cluster, it configures kubectl to point to it. kubectl is the client tool that you use to interact with a Kubernetes cluster, and it's configured for you automatically. So as soon as this finishes, you can do kubectl get nodes, and it's going to point directly to our cluster.

Okay, so how does it look inside? We have deployed Datadog just to check. It has a node with the role master running 1.18, and we have all the logs that it's producing. If we check the list of processes running inside this node, we can see that it's a systemd-based Linux system, it's running dockerd as the container runtime, and it's running the kubelet. Those are running as OS processes, but the rest of the components that we need for Kubernetes to run, like the controller manager, the API server and kube-proxy, are all running as containers. If we go to the container list, we can see that very easily. So we are running the API server.
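To recap the Minikube demo so far as commands, here is a sketch; the version flag is optional, and v1.18.0 is just an example value:

```shell
minikube version                              # simple CLI tool
minikube profile list                         # list clusters on this workstation
minikube start --kubernetes-version=v1.18.0   # creates the VM and the cluster for you
kubectl get nodes                             # kubectl already points at the new cluster
```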
Obviously we are also running the Datadog agent to see all this information, but we have all these components as containers: the API server, etcd, the controller manager, CoreDNS, et cetera. We can drill down to see with what command the kube-apiserver is running, but the point is that it runs as a container, and that's important to know.

Okay, so how do we access our application? One of the things that we obviously want to do is access and test our application. We were saying that one of the great things Minikube has is that it implements NodePort and LoadBalancer services very easily. To give an example, this application has several services, and one of them is exposed as a NodePort. In order to access that application, the only thing we need to do is run minikube service, the name of the service, dash dash URL. And if we remove the dash dash URL, it's going to open our default browser for us and load our application. So, a super simple way to deploy an app, expose it as a service, and see that it's running correctly.

Okay, let's check the second one, kind. kind obviously has the cutest name for a tool. kind stands for Kubernetes in Docker, and that already gives us an idea of what we are going to see here. Kubernetes in Docker is based on Docker in Docker, which means that you're able to run a container runtime inside a container. So what we are going to do here is run a Kubernetes node as a container, and the container runtime of our node is going to run inside that container. We are going to see a diagram to make this a little bit easier to understand. kind is a project that started in 2019, so it's fairly new. It's great because it works anywhere Docker works, and we are going to see the trick here, because obviously it has to be virtualized. But one of the things that Docker does is make the user experience on macOS and Windows very transparent.
Everything is virtualized, but they make you feel like you're running Docker natively, and we will see how that looks in the demo. kind was first designed for automated testing of Kubernetes itself: the Kubernetes project obviously needed to be able to spin clusters up and down and test everything, and kind was created for that. That tells us something: if Kubernetes itself is using it for testing, it's very good to include in our own CI pipelines as well. As I said, the nodes are going to be containers instead of VMs, but they use a container image that is a little bit special, because it tries to look like a VM: it has systemd in it, it has its own container runtime, and it runs all the Kubernetes components. So it looks a little bit like a VM, but it's not. And it supports multi-node clusters: you're allowed to create several nodes for your cluster, because they're super lightweight.

Okay, so this is how it looks from an architectural point of view. You have the hardware, you have your host OS running the Docker runtime, and then each node of your cluster is going to be an actual container that gets another container runtime inside. This is the Docker in Docker experience. Again, we know that this is not exactly what is happening on some of the OSes: on macOS and Windows, everything here is virtualized. But as I said, Docker makes it feel native. Well, they don't want to lie to you, but they do want to make the experience very good, so you feel like you're running Docker natively. What is really happening is that they create a VM for you based on a distro called LinuxKit, and that Linux machine is the one that runs the Docker runtime.
And then inside that one, you have the containers as nodes, running a second runtime. So, a few layers of abstraction, but we are going to see in the demo that it's very transparent for the user. Another benefit kind has is that you can define all your configuration in YAML files instead of passing it on the command line. So it's very good to have all this on, for example, your Git repo, to help with the CI aspect of it. In this case, for example, we are defining a cluster that has two nodes, one as control plane and one as worker, both running Kubernetes 1.18.

Okay, let's have a look at the demo. Again, we start with kind version, and we can get the list of clusters. We are going to pass this configuration, the same one that we've seen on the slides, to the creation command: we just run kind create cluster with that configuration. And this is what happens: it starts creating the nodes, it installs a CNI, the networking plugin, for you (in this case it uses kindnet, which is specific to kind), it adds a storage class for you as well, which is great, and similar to Minikube, it makes kubectl point to the right context. So after this, we can just do kubectl get nodes and it will point to the right cluster.

But we've said that these are actual containers, and this is where you see how Docker makes it very transparent, because I can get the list of containers on my machine, and there are the two nodes, both containers. And although this is actually inside a VM, I didn't have to SSH into that VM; Docker does all that heavy lifting for me. So it's super easy to use containers outside Linux.

Okay, let's have a look again in Datadog at how this looks from a process and container point of view. If we check, for example, the control plane: in this case, we have two hosts.
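The kind configuration file described in this section might look like the following. This is a sketch: the apiVersion and the node image tag depend on the kind release you're running, so treat both as assumptions:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.18.2
- role: worker
  image: kindest/node:v1.18.2
```

You would pass it to the creation command with `kind create cluster --config cluster.yaml`.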
We will see that this looks a little bit like a VM, because it has systemd, it has the containerd runtime, a bunch of containers running, and it also has the kubelet. The kubelet and containerd are the only two processes running as OS processes. If we check the list of containers, the rest of the components needed for Kubernetes are running as containers: we have the API server (and obviously the Datadog agent to get the data), the controller manager, etcd, CoreDNS, and kindnet, which is the networking plugin. And we can dig in to see the process information for, say, the API server, in case we want to understand how kind is executing it and all the arguments it uses.

Okay, so how do we access our application? You may think that with all these layers of abstraction (we have the VM, then a container, then another container for our application) it's going to be tricky to access our app, but again, it's very transparent. For example, if you want to access this pod using port forwarding, it's very straightforward: we just do kubectl port-forward, and everything is done transparently. We can access the application that we saw before. So, a pretty great, super lightweight way of creating clusters.

Okay, the third one that we're going to see is MicroK8s, which is a little bit different. Minikube and kind are maybe the most popular ones, but definitely not the only ones, so let's have a look at some others. MicroK8s is a Kubernetes distribution by Canonical for developers and IoT. So it's not only meant as your local cluster, it's also meant to be deployed on IoT devices. It's packaged as a snap, which is a packaging system created for Ubuntu. There are other Linux distros that support snaps, but really, when you're talking about snaps, chances are you're talking about Ubuntu.
It uses Flannel as the CNI. So instead of using something specific to MicroK8s, it uses a simple CNI plugin called Flannel. It allows for multi-node clusters, and same as Minikube, you can add several add-ons to your cluster.

This is how it looks from an architectural point of view. You have your hardware, you have Ubuntu running on that hardware, and that's going to be your Kubernetes node. You have snapd, which is a daemon that you need in order to run snaps, and then you have all the components that you need for your node as OS processes, not as containers as we saw with Minikube and kind. So you have kube-proxy, containerd, the API server, Flannel, the scheduler, the kubelet, et cetera, all as processes on your node.

Again, if you're not running Linux, this is going to be virtualized, and this is where we start having to be more explicit about creating the VMs. If you go to the instructions on how to create a MicroK8s cluster on macOS or Windows, they recommend using Multipass, which is basically a VM management tool that Canonical created. It's very tailored towards Ubuntu. You can really use whatever you want: Vagrant, VirtualBox, anything else. Basically, what you need is an Ubuntu VM if you're running macOS or Windows. But because they recommend Multipass in the instructions, that's what I did in the demo, to keep things simpler.

So the first thing we need to do is launch an Ubuntu machine. They have this command, multipass launch, and it launches the latest Ubuntu LTS release by default. I recorded this before 20.04 was released, so this is going to run 18.04. Once we launch it, we can SSH into it. They call it shell, but it's basically SSH into the VM in a more transparent way. So you can see that this is a normal Ubuntu 18.04, directly from the cloud images.
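The Multipass steps just described, condensed into commands. A sketch; the instance name `microk8s-vm` is just an illustrative name, not anything Multipass requires:

```shell
multipass launch --name microk8s-vm   # latest Ubuntu LTS by default
multipass shell microk8s-vm           # "shell" is basically SSH into the VM
```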
So let's install the snap. The way we install it is with snap install; that's how you install packages with this packaging system. It's going to take a little while (many of the boring parts of the demo are sped up, so if you do this yourself, you may notice it takes a bit longer), and once you install it, you have to wait a little bit for it to be ready. Once it's ready, you can do microk8s kubectl get nodes (you use a special command, microk8s kubectl) and then you're ready to go.

This list that we see here is the list of add-ons I was mentioning, things that you can add to your cluster in a very easy way; they are already pre-tested to work on MicroK8s. Let's use this to enable DNS, because DNS is something that in general you always need on your cluster.

And now we are going to have a look at how it looks internally. We have only two containers, which is already different from what we've seen before: we only have the Datadog agent, which we deployed outside the recording, and CoreDNS, which we just enabled. Where are the rest of the components? If you go to the list of OS processes, you can see them there: the API server, the controller manager, the kubelet, containerd, the kube-scheduler, all running as OS processes, directly as systemd units. Same as before, you can easily check the defaults it uses for each of those components if you navigate into the actual process. And yeah, that's a little bit different from what we've seen so far.

Another thing that is different from what we've seen so far is the user experience. With Minikube and kind, I was able to access my cluster without leaving my workstation. In this case, I had to SSH into the machine, and I'm still on the machine, which just doesn't seem ideal. You want to be on your workstation without having to access any VM directly.
So the user experience is not as good as the other two, but you can overcome that quickly. They have a command to export your kubeconfig into a file, so we are going to do that now. Then we go back to our workstation and copy that file into our local file system; there is a command called transfer in Multipass, which is really SCP. Once we do that, we can point at that kubeconfig and access the same cluster using kubectl. So it can be done; it's not as straightforward, but it definitely can be done.

We said this was multi-node, so how do we add a node? Let's SSH into the control plane. They have a command called add-node, and what it does, basically, is give us a token that we are going to use on a second node. Obviously, these are VMs, so if you want a second node, you're going to need a second VM running MicroK8s. I super sped this up because we've already seen all these steps. Once we have it up and running, the only thing we need to do is run that join command on our worker node. Once we do that, we can quickly check that the processes running here look a little bit different, because it's a worker node: it only has the components that a worker node needs, like containerd, Flannel, kube-proxy and the kubelet, but we don't have the API server, we don't have the controller manager; all of that is on our other node. And if we go back to our workstation and do kubectl get nodes again, we can see that both nodes are up and running. So, a fairly easy way to get a multi-node cluster, because everything needed to secure the connection between the nodes is already done through that token. Very straightforward to do.

Cool. So, numbers four and five on our list are K3s and k3d. Let's start with the original project, which is K3s, and then we will explain how k3d is related.
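Putting the MicroK8s workstation-access and multi-node steps together as a command sketch; the VM name, the join address and the token are all placeholders, and the exact subcommand names may differ between MicroK8s releases:

```shell
# From the workstation: dump the cluster's kubeconfig out of the VM
multipass exec microk8s-vm -- microk8s config > microk8s.kubeconfig
export KUBECONFIG=$PWD/microk8s.kubeconfig
kubectl get nodes

# Adding a node: generate a join token on the control plane...
multipass exec microk8s-vm -- microk8s add-node
# ...which prints a join command along the lines of:
#   microk8s join 10.0.0.5:25000/<token>
# Run that inside the second VM and the new node shows up.
```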
Again, similar to MicroK8s, it's a lightweight Kubernetes distribution for devs and IoT devices. It was originally created by Rancher, and the same way that MicroK8s packages Kubernetes as a single snap, K3s goes a step further and packages everything as a single binary. So you only have to download a binary and you're ready to go. Same as MicroK8s, it uses Flannel for the CNI, and again, it supports multi-node clusters.

So how does it work? We have this binary that we run on our Linux machine, and we can run it in two different ways. The first way is to run it to get a control plane node: we run the command k3s server, and it creates just two processes, the containerd process and the k3s server process. All the components that are needed to run Kubernetes, as we've been saying all along (the API server, the controller manager, et cetera), are embedded inside that single process. So, very out-of-the-box thinking when designing this, embedding everything in a single process. You can see here that it has SQLite, which is a little bit weird to have in a Kubernetes cluster. Usually the state is managed in etcd, but they wanted to make it very, very lightweight, so they replaced etcd with SQLite. The K3s team is preparing a release of K3s with etcd as an option, but it's still not available.

So that's the control plane. What about if I want to add a worker node? I use exactly the same binary; the only thing I do differently is run a different command, k3s agent, and it creates again two processes: the container runtime, and a process that embeds all the components needed for a worker node, like the kubelet, Flannel and kube-proxy. So, a very interesting cluster.
Let's have a look at how it works from a user point of view. This requires Linux, and in this case it's not like Minikube, which creates the VMs for you; you have to create them yourself if you're not running Linux natively. You can use whatever you want; I've used Vagrant for this example, since they don't recommend anything in particular. So with Vagrant I'm creating two identical VMs running Ubuntu, which I'm going to call node and control-plane, and they're connected through a network. I do vagrant up to start them, and then I SSH into the control plane.

Once I'm on the machine, it's just a fresh installation. The only thing I need to do is download a single binary, which I can get from the list of releases. When I download it, I just give it execution permissions and maybe put it on the PATH, but nothing else. So how do I run my control plane? If I run k3s, the binary, I can see that I have two commands, server and agent. It's the control plane that we are trying to build, so I'm going to run k3s server. You can see that there are many arguments here, and some of them are not related to K3s itself: for example, you have things like kubelet arguments and kube-proxy arguments, because all of these components are embedded in this process, so if you want to pass different arguments to those components, you have to do it through K3s. In our case, the only one I'm going to use is the Flannel interface, basically to tell the networking plugin what interface it should use to talk to the rest of the nodes.

So I run this, leave it running in a terminal, and then SSH again onto the same machine. And now I can do a special command, k3s kubectl get nodes, and I can see my node is up and running, which is just great.
So I only had to download a binary and run it; that was it. How does it look from a process and containers point of view? If we check the host, it looks like just a normal VM. If we check the containers, we have some containers that are not really part of the core Kubernetes components, because K3s is a little bit opinionated about what it thinks you will need anyway. Remember that with MicroK8s we needed to enable an add-on to get CoreDNS; here it's there by default. It has things like Traefik for your ingress needs, it has the metrics server, which is very interesting to get the CPU and memory that each pod is using, CoreDNS, and also a storage class. So all of that is done for you. But where are the processes for the core components? We can see that there are only two of them: there is no separate API server process, there is no kube-proxy. The only ones we see are k3s server and containerd, and the k3s server process is the one that contains all those components. I think it's important to know how this works, in case you need to debug something and need to understand where all those things are.

Cool, let's do a second node. We had this second VM that we had called node, we have downloaded the same K3s binary and given it permissions, and we are going to paste a token. We can get this token from the control plane, following the instructions. Then we only have to run the agent command with the same binary, pointing at our server and passing the token so we can communicate with the server securely. And it does the same thing; it runs that command. If I go back to the control plane and do kubectl get nodes, I can see the second node over there. So again, very straightforward to add that second node. And that was great.
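The whole two-node K3s flow from the last few minutes, condensed into a command sketch; the server address and token are placeholders, the interface name is specific to my Vagrant setup, and flags may vary between releases:

```shell
# VM 1 (control plane): single binary, server mode
chmod +x k3s
sudo ./k3s server --flannel-iface=eth1 &
sudo ./k3s kubectl get nodes

# The server generates a join token on disk:
sudo cat /var/lib/rancher/k3s/server/node-token

# VM 2 (worker): same binary, agent mode, pointing at the server
sudo ./k3s agent --server https://192.168.0.10:6443 \
    --token <node-token> --flannel-iface=eth1
```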
But we saw that those are not super transparent developer experiences, because we had to create those VMs, SSH into them, download the binary, et cetera. So it doesn't seem like a great user experience, but a lot of people loved K3s for development, so they decided to create this second project called k3d, which is basically a wrapper to launch a Kubernetes cluster, a K3s cluster, sorry, in Docker. This looks a lot like kind; it's the same concept of running containers inside containers. Instead of VMs, our nodes are going to be containers. It's very easy to install: you only have to pipe an installation script to bash. The only thing the script does, if you check it, so it's safe to run, is detect your operating system and download the right binary. If you want to skip curl-piping things from the internet into your shell, you can just go to the releases page and get the right binary yourself. And it's also multi-node. So let's have a look from a user point of view. We run that command to get the right binary; again, that's all it does, putting the right binary in the right path. Once we have that binary, we can just run the create command and quickly create a cluster, simple as that. This is going to be very similar to what we've seen with kind: it runs a special container image that looks a little bit like a VM. If you run docker ps, again, this is an actual container, and we can see that it's running the k3s server command that we were using when we ran K3s directly on our VM. Once you export the right kubeconfig, which they have a command for, you can access the cluster through kubectl on your workstation. Pretty straightforward. Okay, let's check how to add a node. The way we add a node is super simple; we don't have to create the second container ourselves and then join it, as we did with K3s.
We just run k3d's add-node command, pointing it at the control plane node, and it's going to just work. If we now do kubectl get nodes (sorry about the alias), we get both nodes. One of them is not ready yet because it takes a while. If we run docker ps to check the containers on our machine, we can see the second node running k3s agent instead. If we check the list of processes, it's the very same thing as K3s: we only have k3s server and k3s agent on each of those nodes, and the rest is embedded. And if we check the list of containers, we see the same containers we saw with K3s: the local-path provisioner, Traefik, CoreDNS. So the same things you get with K3s, you're going to have with k3d, but running in containers and with a great developer experience. Okay, the final one we are going to see is completely different. It's not really meant to be a local cluster, but it's very easy to deploy and it's a little bit special, so I thought it was interesting to include here, if nothing else as a curiosity. Firekube is a GitOps-ready cluster. So what is GitOps? GitOps is a way of working with infrastructure and applications where the source of truth for everything is a Git repo. All your infrastructure code, your app code, your app configuration, everything is in Git. Developers and admins do not interact with Kubernetes directly, only through the Git repo, and the Kubernetes cluster is going to be reading that configuration and applying the changes. It's similar to another concept called infrastructure as code, but it goes a step further: not only your infrastructure, but also your apps and your app config, even your secrets in a special way; they have ways to add secrets as well, encrypted. So Firekube is an easy way to get a GitOps-managed Kubernetes cluster.
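Putting the k3d workflow together, the whole flow looks roughly like this. The flag spellings follow the current k3d CLI (v4/v5), which differ slightly from the older v1 commands, so treat exact subcommand names as version-dependent:

```shell
# Install: the script only detects your OS/arch and fetches the right binary
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Create a cluster; each "node" is a container running the k3s binary
k3d cluster create demo

# Point kubectl at the new cluster
export KUBECONFIG=$(k3d kubeconfig write demo)
kubectl get nodes

# Add a second node (an agent container) to the same cluster
k3d node create extra-agent --cluster demo --role agent
kubectl get nodes

# Under the hood, the nodes are just containers
docker ps
```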
So if you want to try GitOps and see what it is, it's a good way to get up and running very quickly. It's going to create your cluster using Firecracker via Ignite, which is a Weaveworks project; Firekube itself was originally developed by Weaveworks. And if you're not on Linux, it falls back to Docker in Docker, so in my case, since I'm not running Linux, it's going to create those container nodes, and it's multi-node as well. So let's see the demo, and this demo starts differently from the rest. It doesn't start in a terminal; it starts on GitHub. Why? Because if this is GitOps-ready, the first thing we need is a Git repo. These are the instructions they give you: the first thing you need to do is fork the repo. So we fork the repo, and then we clone it. This new repo that we have forked is going to contain all the information about our cluster, our infrastructure. We just run the setup script, which downloads some tools; it's basically a wrapper for different tools that they have. And this is the interesting bit: before creating my cluster, it produces some configuration here and pushes it to Git. So instead of creating the cluster and maybe pushing to Git afterwards, it pushes to Git first, because that configuration is how my cluster is going to look, and then a tool reads that information and creates the cluster. Creating the cluster is going to take a while. It uses kubeadm, as many of these tools do; that's why you see some kubeadm output happening here. And once you export the kubeconfig, you can get the nodes; we have two different containers. And what if I want to deploy my application? Should I do kubectl apply? That doesn't seem like the case, because I want to do everything through Git.
So basically what I'm going to do is copy the manifests I have for my application into my Git repo and push them. First I commit my changes, and then I push to my remote. Once I do that, I'm not using kubectl at all here. And if I get the pods and watch for a couple of seconds, you're going to see those containers appearing magically. How did that happen? It happened because Firekube comes by default with a component called Flux. What this component does is watch all the time for changes on my Git repo, and once it sees any changes, it applies them directly. So it's an interesting way of working. If you want to learn about GitOps, you can use Firekube to get up and running. Maybe don't use it as your usual local cluster, but if you want to try this concept, it's a good way. Okay, so we're finishing up with some takeaways. Local clusters are not production; they're not meant to be production, but they're still very, very useful: for CI, for testing, for learning tools. Knowing what they are, I think they can be very useful. Also, I think it's important to know a little bit about the architecture of your local cluster: whether your nodes are VMs or containers, and whether the Kubernetes components are running as processes or as containers. All these things are going to help you debug in case something goes wrong. And my final takeaway is that Docker in Docker is actually a good compromise. Running your nodes as containers is fast and lightweight; you can have lots of clusters running on your machine. That's why many of these tools, like kind and, as we've seen, k3d, are using Docker for your nodes. And that's it, thank you very much. I hope you learned a little bit about how to work with these local clusters.
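The GitOps flow just demonstrated looks roughly like this. The repo name is the Weaveworks quickstart used in demos of this kind, and the script and file names are assumptions, so check the upstream repo for the exact instructions:

```shell
# Fork https://github.com/weaveworks/wks-quickstart-firekube on GitHub first,
# then clone YOUR fork: the fork is the source of truth for the cluster
git clone git@github.com:<your-user>/wks-quickstart-firekube.git
cd wks-quickstart-firekube

# Generates the cluster config, pushes it to your fork, then creates the
# cluster (Ignite/Firecracker VMs on Linux, Docker-in-Docker elsewhere)
./setup.sh

# Deploying an app: no kubectl apply, just commit manifests and push;
# Flux watches the repo and applies the change for you
cp /path/to/my-app.yaml .
git add my-app.yaml
git commit -m "Deploy my app"
git push

# A few seconds later the pods appear
kubectl get pods --watch
```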
Feel free to reach out anytime, and visit datadoghq.com if you want to see how you can monitor your Kubernetes cluster as well. Thank you. Awesome. Thanks, Otto, for a great presentation. I think we have time for one or two questions, so let's get right into that. Can I install minikube and MicroK8s side by side in my Ubuntu VM, or will kubectl get confused? That's a good question. You definitely can; you can have both. What is going to happen? The way this works when you have access to several clusters is that each cluster gets what is called a context. So you're going to have several contexts, and kubectl will point to each of those clusters depending on the context you switch to. What you will need to do is maintain those two contexts and switch between them, depending on which cluster you want to talk to. Awesome. Next question. On Windows with minikube, which driver is good, Hyper-V or VirtualBox? Both are good. If you can run Hyper-V, I would probably recommend it, but both should definitely work. Thank you. The next one: which flavor of local Kubernetes cluster is best suited for a team of developers using workstations that run Windows? There is no single right answer to that; again, they're all different. Anything that runs on Docker is going to use Hyper-V by default; minikube has more choices. So depending on the hypervisor you want to use, anything that is Docker-based is good. I would say k3d or kind should work, and minikube should work as well. Awesome. Unfortunately, there are a few more questions left, but we're running out of time, so I'm going to cut us off. Thanks again, Otto, for a great presentation. That's all the questions we have time for. Thanks for joining us today. The webinar recording and slides will be online later today. We're looking forward to seeing you at future CNCF webinars. Have a great day. Thank you. Thanks, Jeffrey.
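The context switching described in the first Q&A answer works like this. The context names shown are typical defaults, but the exact names depend on which tools created them on your machine:

```shell
# List every context kubectl knows about; each local cluster tool
# (minikube, kind, k3d, and so on) registers its own entry
kubectl config get-contexts

# Switch kubectl to talk to a specific cluster from then on
kubectl config use-context minikube

# Or target a context explicitly for a one-off command
kubectl --context minikube get nodes
```

Note that MicroK8s ships its own wrapped client (microk8s kubectl) by default, so to use a shared context you would first export its kubeconfig into your own config file.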