Hi all, welcome to this Kubernetes workshop, where we'll be learning the basics of Kubernetes spread across two sessions. We have designed the content so that there are a number of lab sessions, and these labs have also been uploaded to the GitHub repository for you to refer to, practice with and learn from at your own convenience. Now let's move on to the topics covered in the first session. First we are going to see what container orchestration is and its benefits, then we are going to dig deep into the Kubernetes architecture, and we are also going to look at how Kubernetes clusters can be deployed both as cloud solutions and as on-premise solutions. Then we'll have a couple of lab sessions: in the first lab we set up our first Kubernetes cluster using K3D, which can be done on a local laptop, and run some basic kubectl commands on top of that cluster. Finally, we'll have a Q&A where we answer the questions that have been posted. As I mentioned, this content is already uploaded to the GitHub repository, and all the lab steps are documented there so you can practice at a later time. Okay, now on to the first slide. Before I begin: in most of the slides we will be referring to the Kubernetes documentation across various topics. The reason is that the Kubernetes documentation is considered something of a holy book for Kubernetes; every topic it covers is detailed in a step-by-step manner. Even for someone aspiring to take the Certified Kubernetes Administrator or Certified Kubernetes Application Developer exam, those are open-book exams where you can refer to the Kubernetes documentation while completing them. That is why we have shared the documentation links wherever possible. Okay, now let's move on to the first topic, which is container orchestration and its benefits. What are containers and why do we need them? Before we go into container orchestration, I would like to explain what containers are and how they evolved, and for this I'm going to refer to the Kubernetes documentation. As you can see in this diagram, in a traditional deployment we have physical hardware, an operating system running on top of it, and multiple applications running on that operating system. The problem with this approach is that there is no resource boundary: one application could consume a large amount of resources and thereby slow down another application, so this is not a viable approach. One alternative is to dedicate one physical server to each application, but that is not cost effective. That is how traditional deployment evolved into virtualized deployment. In a virtualized deployment we have the same physical hardware and operating system, but on top of the operating system we introduce a new component called a hypervisor, and on top of the hypervisor we can run multiple virtual machines.
These virtual machines each have their own operating system, binaries and so on. The advantage of this approach is that we can bind an application to a specific set of resources, and we also get application isolation. But there is one drawback: we are introducing a number of intermediate layers. For example, we have an operating system, then a hypervisor, then another operating system, then the binaries, and only then the application. These intermediaries consume a lot of resources. So what is the next approach? That is where container deployment comes in. Here, instead of a hypervisor, we introduce a new component called a container runtime, and on top of that runtime we convert our applications into containers and run them. What does the container runtime do? It is a component installed on the operating system that mounts the containers, interacts with the kernel, and runs the containers efficiently on the physical hardware. And what is a container, what does it actually contain? I'd like to explain this with an example. Let's say I have developed an application that runs on Python 2.7. I developed and tested it locally on my laptop, and now I need to deploy it, so I commit the code and the application gets deployed across multiple staging environments. When it is deployed across those environments, I can't guarantee that every environment runs the same Python version; there could be a test environment, a pre-prod or a prod environment, each running a different Python version. How do we address this? The container is the solution: it lets us reliably deploy an application across all staging environments without worrying about third-party packages (3PPs) or OS differences. As shown in this diagram, in a container we bundle the minimal third-party packages that are required along with the application. In my example, I would bundle Python 2.7 inside the container as well, which ensures my application runs without issues across all staging environments. That is the advantage when we use containers.
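As a minimal sketch of this bundling idea (the file names, base image and build commands here are illustrative assumptions, not taken from the workshop material), a Dockerfile for the Python 2.7 example could look roughly like this:

```bash
# Sketch only: bundle a Python 2.7 app and its dependencies into one image.
# "app.py" and "requirements.txt" are hypothetical file names.
cat > Dockerfile <<'EOF'
FROM python:2.7-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
EOF

# Build the image once, locally
docker build -t my-python-app:1.0 .

# The same image then runs unchanged in test, pre-prod and prod
docker run --rm my-python-app:1.0
```

The point of the sketch is that the interpreter and the third-party packages travel inside the image, so the staging environments no longer need to agree on a Python version.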
Now let's assume I have developed my application on Python 2.7 and deployed it to a production environment. What if I need to scale it up? Say my application is getting a lot of traffic and is used by many users; how do I scale it? What if I introduce a new feature and want to roll out the new version of my application as a container? And there could also be a disaster scenario where my application gets killed and needs to be restarted. All of this needs to happen without any manual intervention, and that is when we need container orchestration. Container orchestration helps in managing the containers that are deployed in a deployment environment. Since everything is automated and no manual effort is required, it increases productivity and gives faster deployment of our applications. Considering the smaller amount of resources a container consumes and the minimal effort required, it also reduces the cost of deployment. We also get stronger security: the application is isolated, and container orchestration lets us isolate containers and provides RBAC rules with which we can reduce the attack surface of a given container. We can easily scale an application up or down based on the traffic it gets, and we get faster error recovery: whenever the application fails, the orchestrator can restart the container very quickly. So we have seen the benefits of container orchestration; why Kubernetes? Kubernetes is the de facto container orchestrator used globally across organizations. It is an open-source project hosted by the Cloud Native Computing Foundation. Along with this, Kubernetes provides various other advantages. It helps with load balancing: if I scale up an application, it handles load balancing and traffic routing to the specific containers we have deployed. If a container requires storage, Kubernetes can automatically provision it. Automated rollouts and rollbacks are provided out of the box, and there is the self-healing we just discussed: whenever an application fails, Kubernetes can automatically restart it. Then there is secret and configuration management. For example, any application we deploy may require credentials to log in to something; rather than storing these credentials directly or baking them into the container, Kubernetes offers the option to store them as a resource (a Secret) that the application can reference externally, so we don't have to rebuild or restart the application whenever the credentials change. That's it about container orchestration, and we have also seen what Kubernetes is at a basic level. Now we go on to the next slide, which shows the Kubernetes architecture at a high level. The major components are the control plane and the worker nodes. The control plane controls all the components deployed on the Kubernetes cluster, and the worker nodes are where the actual containers run. The control plane has several components, such as the API server, the scheduler, the controller manager and etcd, and the worker nodes also have components that need to run on them. Now we are going to see the functionality of each of these components. To explain the architecture, I'm going to use a document that explains it using the analogy of ships. Let me open the document.
As seen here, the control plane components are represented on a master node, with four components, and the worker node has three components: the kubelet, kube-proxy and the container runtime engine. This will be explained with a ship analogy: the master node, or control plane, is pictured as the control ship, and the worker nodes as cargo ships. Let's start with the first component, the etcd cluster. What does etcd do? Any control ship has a number of containers coming in, and these containers are loaded onto various cargo ships, so we need to maintain some kind of address book recording the incoming containers, where each one was placed and what its status is. The etcd cluster can be compared to that address book. etcd is a key-value store which holds all the information about the pods deployed onto the various worker nodes, their status, when they were started and so on. In other words, etcd holds the metadata for all the resources deployed on a given Kubernetes cluster. Moving on to the next component, the kube-scheduler. What does the kube-scheduler do? It can be compared to the cranes on the control ship: the cranes are responsible for moving containers onto the various cargo ships. Similarly, whenever a new container is introduced into the cluster, the kube-scheduler is responsible for scheduling it onto one of the worker nodes. This scheduling considers many factors; for example, it takes into account the resource requirements of the specific container and the resource availability on each worker node. All these factors are weighed before a container is actually placed on a worker node. Moving on to the next component, the controller manager. The controller manager is like the different officers that exist on the control ship, who are engaged in maintenance activities such as traffic navigation, ship traffic control and damage control. Similarly, the controller manager in the control plane has various sub-components, such as the node controller, the replication controller and others. The node controller, for example, is responsible for node management in the cluster: whenever a new worker node is introduced into the cluster, it takes that node into account, so containers can be scheduled onto it and the load stays balanced across all the worker nodes. Likewise, whenever a node goes down, the node controller notices it and ensures that the pods running on that node are moved to other available worker nodes.
So the controller manager is the maintenance controller within Kubernetes, with various sub-components. We have now seen three control plane components: etcd, the controller manager and the scheduler. Now we move on to the final component, the API server. What does the API server do? The API server is the centralized component that enables communication between all these components, both internally and externally. It exposes the API calls through which we can manage or communicate with the cluster from outside, and the same API is used by the components internally to communicate with each other. So it is the central component used by all the sub-components within Kubernetes. Now that we have seen all the control plane components, we move on to the components at the worker node level. The first component is the container runtime engine, which we also saw in the previous slide. This is a default service that needs to run on every worker node. Its purpose is to mount the containers and interact with the kernel to enable starting and managing the pods on a given worker node. The most popular runtimes are Docker and containerd. The next component is the kubelet. What is the purpose of the kubelet? It can be compared to the captain of a ship: the captain is responsible for the containers loaded on his ship and also reports their status back to the control ship. Similarly, the kubelet is responsible for all the containers deployed on its worker node, and it periodically reports the status of those containers and of the node itself to the control plane. It acts like the captain of the ship on each node. The next component is kube-proxy. Say one container deployed on one worker node needs to communicate with an application deployed on another worker node; how does that communication take place? That is where kube-proxy comes into the picture. kube-proxy enables communication between containers internally by setting up the network configuration and traffic rules between the worker nodes, so all this internal communication happens with the help of kube-proxy. We have now seen all the components shown in this architecture diagram: in the control plane, etcd acts as the metadata store of the cluster, the controller manager runs the various controllers that handle maintenance activities, and the scheduler schedules containers onto the worker nodes.
The API server is the centralized component that exposes the API for both internal and external communication. That's it for the control plane components. On the worker node side we saw the container runtime engine, which handles mounting and running the pods on a given node, the kubelet, which acts as the captain of the ship managing all the containers, and finally kube-proxy, which enables communication between the containers running in the cluster. That is the architecture. Throughout these slides we have been talking about containers, but in this particular diagram you can see a new term: pod. What is a pod? A pod is the minimal unit that can be deployed in the cluster. A pod wraps one or more containers; it is a thin wrapper on top of the container, and the pod is the object that gets deployed onto a node. We will see how a pod is deployed, how it runs and how containers are wrapped inside a pod in the upcoming slides. Okay, I'm done with container orchestration. Next we have the lab, and I'm going to hand over to Karthik. Karthik, over to you. Yeah, thank you. Let me share my screen as well; I hope you can all see it. We talked about the Kubernetes cluster, what it is and the architecture behind it, how it is built. When we talk about Kubernetes clusters, there are two primary classifications: managed Kubernetes clusters, and unmanaged, on-premise or self-hosted clusters. Be aware that we are not going to deploy a cloud-managed cluster today; what we are going to deploy is a locally installed Kubernetes cluster. For managed clusters, the options available today are EKS (Elastic Kubernetes Service) from Amazon, AKS (Azure Kubernetes Service) from Microsoft, and GKE (Google Kubernetes Engine) from Google. These services are offered by the cloud vendors and the control plane is largely managed by them, so you will not get access to the full control plane components; they manage the control plane and we are responsible for deploying the applications and managing the services within the cluster. That is the basic idea behind cloud-managed Kubernetes clusters. For self-hosted or on-premise clusters, we create the cluster ourselves from one or more machines, physical servers or virtual machines, joining them together to form a cluster using one of the utilities listed here, such as kubeadm, kops, Minikube, K3D or kind. These utilities help you create clusters, and you will need some infrastructure resources, whether it's a virtual machine, a physical server, or even a container. What we are going to use today is a utility called K3D, which comes from the Rancher community, to create our cluster.
Enough theory; let's jump to how to deploy this cluster on your local laptop or desktop. To set up the cluster, there are a few prerequisites that need to be completed. The cluster can be set up on your local laptop or desktop, and the requirement to get started is an operating system: Windows 10 (primarily version 2004 or later), or you can follow this workshop on Linux or on a Mac as well. The same steps apply on all operating systems, skipping a few of the steps listed below. If you are using Windows, we start by installing the WSL environment. WSL is the Windows Subsystem for Linux; it comes with Windows 10 and ships with a Linux kernel. To activate it, you run the command wsl --install. Let me start the command prompt and install WSL first. If it is already installed, the command simply prints the usage help, which tells you it is already there. Once installed, reboot the machine and come back to this workshop page. The next command installs the Linux environment itself, an Ubuntu variant of Linux that we are going to run inside WSL. I'm going to copy and paste for most of the activity, so let's bring up the command prompt again. Before that, let me show you that I don't have any environment other than the default WSL session installed on my laptop. I'm going to create a new Ubuntu environment with the version Ubuntu-20.04. While this is getting installed, let's talk about an overview of what we are going to install today in this live environment. The plan is: the host machine is my Windows laptop, wrapped with WSL; inside that I'm going to install a Docker environment; and inside Docker I'm going to install a K3D cluster, which I'll call the dev cluster. When you start the Linux environment for the first time, you create a user account for that Linux operating system. Now the WSL setup is done, so let's go to the next step in the workshop, which is the installation of Docker. For this you can follow the official Docker documentation to install the Docker engine, or you can copy and paste the commands available on this page. This is going to take some time, so let's get back to the diagram we were preparing: we have the host machine, we install Docker within that WSL environment, and then we create a K3D dev cluster. This cluster is going to spin up a few components: one server component, which is the control plane of the cluster, and three worker nodes. Together this forms our K3D cluster, which is nothing but our Kubernetes cluster.
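To recap the WSL commands used at the beginning of this step (run from a Windows command prompt or PowerShell; the distribution name matches the one used here, and the list command is just an extra verification step):

```bash
# Enable WSL; if it is already installed, this prints the usage help instead
wsl --install

# After a reboot, install the Ubuntu 20.04 distribution inside WSL
wsl --install -d Ubuntu-20.04

# Verify which distributions are installed
wsl --list --verbose
```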
So why do we need Docker in the first place? As we discussed in the earlier slides, we need a container runtime to host the Kubernetes cluster; without a container runtime, a Kubernetes cluster cannot run, and K3D uses the Docker container engine. What we are doing here is following an inception model of installation: the cluster has its own container runtime, and we wrap the cluster itself up and put it inside containers. It is like a virtual machine inside a virtual machine; that is how it functions. So far we have installed the WSL wrapper, created the machine, and we are in the process of installing Docker. Let's wait for the installation to complete. As you can see, the container runtime is being installed; how long it takes depends on your network speed and the bandwidth of the server sending you the packages, so the installation time can vary. We do have other ways of building self-hosted clusters using the utilities mentioned earlier; one of them is Minikube. The primary difference between these utilities is how the backend is handled. For example, K3D uses Docker to create the cluster; Minikube runs the cluster as system services or inside a VM; kubeadm or kops use the server, the primary node itself, to create the cluster; and kind also makes use of Docker containers to create the cluster. It is purely up to the preference of the individual creating the cluster; you can use any of these utilities to create new clusters. The container runtime is installed; next, the Docker command-line utility is being installed. Once this is done we'll have the Docker runtime available, and then we can install K3D, the utility we are going to use to create the cluster. Each worker node is going to host pods; a worker node can host as many pods as the control plane schedules onto it. The control plane decides which pod runs on which worker node based on the available metrics and capacity: suppose worker one has more CPU and memory available, it will try to allocate the pod to worker one, and if it finds there is not enough capacity, it will place the next pod on the next worker node. That is how the control plane decides how pods are allocated and on which worker node a new deployment's workload lands. Docker is almost installed now. By the way, if you have Docker Desktop or Rancher Desktop installed on your Windows machine, you can create a Kubernetes cluster right away, but that is not recommended here as it has some limitations compared to a real cluster: we would not be able to use our own ingress or load balancer with the ready-made cluster that Docker Desktop provides. That's why we are creating the cluster in a proper environment using K3D. Once Docker is installed, we add a startup script so that the Docker daemon starts every time the WSL machine boots. For most of the workshop session I will be copying and pasting the commands, so you don't have to type them manually into the terminal.
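The Docker installation and the post-install configuration walked through here and in the next step look roughly like this. This is a sketch assuming Docker's convenience install script and a simple .bashrc auto-start hook; the workshop's lab page has the exact commands, which may differ:

```bash
# Install the Docker engine inside the WSL Ubuntu environment
sudo apt-get update
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Allow the normal user to use Docker and start the daemon without a password prompt
sudo usermod -aG docker "$USER"
echo "$USER ALL=(ALL) NOPASSWD: /usr/sbin/service docker start" | sudo tee /etc/sudoers.d/docker-start

# Auto-start the Docker daemon whenever a WSL shell starts (classic WSL has no systemd)
cat >> ~/.bashrc <<'EOF'
if ! service docker status > /dev/null 2>&1; then
    sudo service docker start > /dev/null 2>&1
fi
EOF

# From Windows (outside the WSL terminal): restart the distribution so the changes apply
#   wsl --terminate Ubuntu-20.04
#   wsl -d Ubuntu-20.04

# Verify
docker --version
docker run --rm hello-world
```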
Okay, that is added, and we also add a sudoers rule so that the normal user has the privilege to start the Docker daemon, and then we add our own user to the docker group. These are all part of the Docker installation; we haven't yet reached the point of deploying the Kubernetes cluster. The Docker installation is now completed and configured to auto-start, so let's reboot the WSL machine we created. This has to be done outside the WSL terminal, so let's go back to the command prompt, terminate the new machine and boot it up again. Okay, the first step is completed; let's move on to the next one. We need a browser installed, because whenever we deploy an application in the cluster we have to verify it in a UI; basically you need a browser to verify web applications. By default the Ubuntu WSL environment doesn't have a browser installed: if you type google-chrome, it will say the command is not found. So we'll install a basic browser; once the application package is fetched, we can install it directly. To summarize what we have installed so far: the WSL Linux environment (Ubuntu 20.04), the Docker runtime configured to auto-start, and now we are in the process of installing the Chrome browser. Remember, if you're using a native Linux operating system without Windows, you can skip this first part of the workshop (installing WSL and Linux), and similarly you can skip the GWSL part that we will be installing next; you can install Docker and Chrome directly on your Linux machine or Mac and get started with the Kubernetes cluster installation. These are just preparations; we have not yet reached the actual cluster installation. Since it's taking some time, let me note that once the package is downloaded you can install it with this command. Let's wait for the download to complete; it's at almost 70%, so give it a couple of minutes. You can install any browser; it's not mandatory to use Chrome, Firefox works as well. Since Chrome follows most web standards, I'm installing it, but it's really up to the choice of the individual who wants to validate the web applications. There, the package is downloaded, so let's go back to the workshop material. We have kick-started the installation of the Chrome browser; while that happens, let's go on to the next step of installing GWSL. Why is this component required? Because you need a UI to verify your application, as I mentioned in the previous step. You need a browser, but the browser will not know where to render itself, since it is installed inside a virtual machine; this whole WSL environment is effectively a virtual machine, and it does not have a display by default. In order to get a UI, you need an X server component. This component is built into Windows 11, but Windows 10 does not have it.
So we have to install it manually: head to this URL, download the GWSL component and install it. Chrome is being installed on the Linux side, and the GWSL component is being installed on the Windows host. This optional GWSL component is not required if you're using Linux or Mac; it's only needed in a Windows environment, and only if you want to verify GUI applications such as web applications or native graphical applications developed in the Linux environment. This is the last preparatory step we need to complete before starting the Kubernetes cluster installation. GWSL is installed, so let's finish up. What it does is redirect the display to your Windows machine: an X display service runs within the GWSL application on Windows, and it can receive the forwarded display from the virtual machine. The configuration is very simple; once the Chrome installation is complete I'll show you how to configure it. Basically there are two modes of configuration. One is the automatic configuration of GWSL, where you just click the option to auto-export display and audio, which activates the display feature for your virtual Linux. The second method is to set it manually inside the Linux WSL environment; a couple of steps have to be completed if you configure it manually, but let's follow the automatic method. Chrome is installed now, but if you run the Chrome browser from the terminal it will not work, because we don't have a display port configured and the Linux machine doesn't know where to render the browser. That is exactly what GWSL is for. So let's open GWSL, go to GWSL distro tools, select the Ubuntu-20.04 machine we installed and click the option to auto-export display and audio. It will prompt you to restart the machine; click yes. Okay, it has restarted the machine, so let's boot the Ubuntu machine again. Now let's verify the Docker service once; yes, Docker is running. And now if you run google-chrome, it opens the browser rendered on your Windows machine, through the GWSL wrapper. Let's verify. Okay, now we come to our section on the installation of the cluster. We'll install the utility first, the K3D utility, with which we will spin up the cluster. The installation of K3D is very simple; it's just a single-line command. I've copied that, so I don't need this terminal anymore, and let's close the browser for now; I'll be using it in the next session, where we deploy the application inside the cluster. Let me paste the command. It is installed, and you can verify by running k3d version. It has installed the latest version of K3D, which is 5.4.
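The single-line installer and the version check look roughly like this; the install script URL is the one published in the K3D documentation, so treat the exact path and the printed version as assumptions if the project has since moved or released newer versions:

```bash
# Install the k3d binary using the published install script
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Verify the installation (the workshop shows k3d v5.4.x)
k3d version
```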
The next step is to create the cluster itself; this is the primary step, and this command is going to create a new cluster in the WSL environment we just set up. The command is pretty much self-explanatory. There is a parameter called agents, which is nothing but the number of worker nodes; I set it to three here, so we spin up three worker nodes and one server. We haven't specified the server count, but you can pass it as a parameter to set the number of servers, that is the number of control planes, you want. I have disabled some built-in features that K3D would otherwise create along with the cluster: one is the Traefik ingress and the other is the load balancer. I don't want these two components; in fact, I'll be installing both of them from a different vendor. I'm also creating a registry for container images with the registry-create option, giving the registry a name and a specific port number. Why do we need a container registry? I'll come to that point on the next slide. Let's copy and paste this command and watch the cluster creation going on in the background. It is going to spin up several containers, since K3D creates multiple containers and each container represents a node. Let me open another terminal window so we can watch. As you can see, a few containers have already spun up. You can see dev-registry, which represents a node here; since we do not have dedicated machines, every machine is represented as a Docker container. The dev-registry is a node, but it actually runs as a container. The server node, which is the control plane of our cluster, also runs as a container with its own container ID, and the three worker nodes I asked for likewise run as containers. If you look at the memory requirements of this cluster, they are bare minimal; let me show you docker stats so we can see the memory consumption. It is very low. Compared to a real, full-size Kubernetes cluster on physical or virtual machines, where the footprint would be in the order of gigabytes, what we see here is in megabytes, which means it is very lightweight. This cluster can run on your local laptop, even on a machine with a very modest configuration; even a 4 GB system is capable of running this kind of Kubernetes cluster. It is spun up and the cluster is completely ready; you can see it has printed a success message. Now let's see whether the cluster is up and running. There are a few commands that the k3d utility ships with to verify and see the details of the cluster; one of them is k3d cluster list, and note that we can create multiple clusters. You don't have to stick with only one cluster; you can create several using the k3d utility. That is the cluster administration you can do with the k3d command.
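A sketch of the cluster-create command being described; the flag values are assumptions reconstructed from the description (three agents, the built-in Traefik ingress and service load balancer disabled, plus a local image registry), so check the lab page for the exact invocation:

```bash
# Create the "dev" cluster: 1 server, 3 agents, no built-in ingress/LB, plus a registry
k3d cluster create dev \
  --agents 3 \
  --k3s-arg "--disable=traefik@server:0" \
  --k3s-arg "--disable=servicelb@server:0" \
  --registry-create dev-registry:0.0.0.0:5000

# Every "node" (server, agents, registry) runs as a Docker container
docker ps
docker stats --no-stream

# k3d's own view of the clusters it manages
k3d cluster list
```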
But if you want insight into the cluster, like what is running inside it, then you need a command called kubectl. The kubectl binary is a CLI utility provided by the Kubernetes project; whereas k3d itself does not talk to the Kubernetes API server, kubectl does, retrieving information from the cluster. So let's go and install kubectl next. Again, it's just a couple of commands; it's a single binary, you pull it from the URL and move it into a local bin directory, that's it. This is the Linux version of kubectl; if you want to install it for Mac or native Windows, go to this URL and download the one for your operating system. What we have here is purely the Linux variant. It is downloaded and installed, which you can verify now. You can then run a few commands to check whether your Kubernetes cluster is functional. You can see that the nodes command lists all the nodes of the cluster: we have one master (server) node and three worker nodes, and this is the version of the Kubernetes cluster we have installed, 1.22.7. If you want to get all the resources running in the cluster, you can run kubectl get all -A. That lists the resources running inside the cluster, starting with the namespaces; by default we have two namespaces, kube-system and the default one, and within the namespaces you'll see pods, services, deployments and replica sets. These are the default resources that come with the cluster, and we will be installing a few more on top of what is already running. Remember, I disabled the ingress controller and load balancer when I provisioned the cluster, so now I will be installing them myself. And regarding the registry we created: it is a private registry which we'll be using for our internal image build process. In the upcoming session we'll build container images for the application we create and push them to this private registry. To talk to the registry, we need to add one more entry to the /etc/hosts file; otherwise, if I simply ping the registry name from the terminal, it says it is not found. So we need to create a local DNS entry for the registry; let's do that.
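The kubectl download commands below follow the upstream kubernetes.io instructions for Linux; the registry hostname and port in the last step are assumptions matching the registry created earlier, so substitute whatever name your cluster actually created:

```bash
# Install kubectl (Linux amd64), following the official documentation
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client

# Sanity checks against the new cluster
kubectl get nodes     # one server (control plane) and three agents
kubectl get all -A    # all resources, across all namespaces

# Make the private registry name resolvable locally (name/port are assumptions)
echo "127.0.0.1 dev-registry" | sudo tee -a /etc/hosts
```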
Then, as I mentioned, we are going to bring back the ingress and the load balancer. These two commands install the load balancer, which is called MetalLB. You will see a few messages like this, which are just the resources being created for the MetalLB load balancer. I'm installing this load balancer service because if you run your cluster as a cloud-managed cluster, a load balancer is provided out of the box by the cloud vendor, but it is not included in our local setup, so we are going to simulate the same behavior on our local cluster with this third-party load balancer; we install it first and configure it in the next steps. I hope you are all aware of what a load balancer is and what its function is. The load balancer takes traffic from the external world, typically from a user who invokes an application running on a remote server or cluster. When you want to access an application, the request is usually directed through a load balancer IP. A load balancer IP typically requires a public IP, and since we do not have a public IP on our local machine, we are going to simulate a few public IPs taken from our local network. That is what we do now: there are a few variables here that pick an IP range from the network of the locally installed cluster. This is the network of the cluster we created; we take its subnet, and from that subnet we fetch a few IPs and allocate them to this range variable. Let me show you that variable's output; this is the address range we are going to use for our load balancer network. Then we create a ConfigMap. A ConfigMap is a resource type within the cluster that stores configuration information for an application, any application; in this case the ConfigMap holds the address information for our load balancer. The ConfigMap is created, so the load balancer is now configured. Next, let's install the ingress controller. An Ingress is a Kubernetes resource type that is used when we want to map domain names to applications running inside the cluster. When we host multiple applications within the Kubernetes cluster, each application can be accessed using a domain name, and that domain name has to be routed to the right service inside the cluster; for that we need Services and an Ingress, so it is always good to have an ingress controller installed. Now that the ingress controller is installed as well, you will see additional resources that were not there before we installed the load balancer and ingress: a new namespace was created with a few pods running under it, the ingress-nginx controller has been created and is running its own set of pods, and you can see that a load balancer IP from the range we talked about earlier has been assigned to the ingress controller, through which we will be able to access our applications. In the next section we'll install applications and see how to access them from the outside world: how to build an application image, push it to the container registry, run it in a container and access it from a web browser. That is what we are going to cover in the next session. For now, our cluster is pretty much up and running with all the components installed; you can see everything is in Running status. We have installed the load balancer, the ingress controller and the basic cluster, so that's pretty much it for this part of the lab.
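A sketch of the load balancer and ingress setup just described. The Docker network name, the address slice, and the manifest versions below are assumptions (they follow the older ConfigMap-based MetalLB configuration that matches this description); the lab page has the exact values:

```bash
# Derive a small "public" address range from the cluster's Docker network
SUBNET=$(docker network inspect k3d-dev -f '{{(index .IPAM.Config 0).Subnet}}')
echo "$SUBNET"   # e.g. 172.20.0.0/16 -> pick a slice such as 172.20.255.1-172.20.255.254

# Install MetalLB (v0.12.x still supports the ConfigMap-based configuration used here)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml

# Hand MetalLB its address pool through a ConfigMap
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.20.255.1-172.20.255.254
EOF

# Install the ingress-nginx controller; its Service should pick up one of the MetalLB IPs
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
kubectl get svc -n ingress-nginx
```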
I will now hand it over to Arun to go through a few basic commands using the kubectl utility. Thank you. Let me share my screen; I hope you can all see it. Can you see my screen? Yes. So I have set up my cluster here using K3D; it is a local cluster created with the name local-cluster, and we are going to execute some basic kubectl commands on top of it. As Karthik explained, kubectl is a command-line utility provided by Kubernetes to interact with a given cluster; there could be multiple clusters, and we need to interact with them to manage or deploy our resources. kubectl is one way to communicate with the Kubernetes API that we saw earlier: the API server is there, and if we need to interact with it, one option is kubectl. As Karthik said, kubectl can be installed on multiple platforms; these are the step-by-step instructions he showed for installing it on Linux, but kubectl can also be installed on macOS, Windows and so on, and the steps mentioned here are straightforward. On my local machine I have already installed kubectl, and now I will connect to the cluster. I have kubectl locally, and there may be three or four clusters that I have created, so how does kubectl know which cluster it needs to connect to? That is where the kubeconfig comes in. The kubeconfig holds the cluster information, such as the endpoint details and the username, password or token, through which kubectl knows which cluster it needs to communicate with. What I am going to do now is execute this command: with k3d, I get the kubeconfig using k3d kubeconfig get local-cluster, where local-cluster is the cluster name, and redirect it to a file. This file is a YAML file, and I'll open it for your reference: it has all the details required for kubectl to connect to the cluster, the cluster details, the user, the tokens and so on. For any cluster you create, you just run k3d kubeconfig get with the cluster name and you get a YAML manifest you can use to connect to it. Now that I have saved this file, what is the next step? I need to define an environment variable, KUBECONFIG, pointing to the location of this kubeconfig file, so I export KUBECONFIG now. Having exported it, I should be able to connect to the cluster; I'll run kubectl get nodes. You can see I have deployed a three-node cluster: a control plane, as in the architecture diagram, and two worker nodes. What is the first thing I do after connecting to the cluster? I check the version of the cluster with kubectl version. This gives you two versions: the client version, which is the version of the kubectl binary I installed, and the server version, which is the version of the cluster itself; the cluster I created runs 1.21.5. So we can get the version details using kubectl version.
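A compact sketch of this connection step; the kubeconfig file path is an arbitrary choice for illustration, while the cluster name matches the demo:

```bash
# Fetch the kubeconfig for the demo cluster and point kubectl at it
k3d kubeconfig get local-cluster > ~/.kube/local-cluster.yaml
export KUBECONFIG=~/.kube/local-cluster.yaml

# Basic checks against the cluster
kubectl get nodes    # one control plane (server) and two agents in this demo
kubectl version      # client version of the kubectl binary and server version (1.21.x here)
```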
The next command I am going to run is kubectl api-versions. As we mentioned, all communication with the Kubernetes cluster happens through the API server, so kubectl needs to know which API to contact to manage or do something. This command lists the API groups and versions that are available, and the list depends on the Kubernetes version: this is the list for this specific version, whereas the next version, say 1.22, would have a different list. In that list, some of the APIs could have been deprecated, or new APIs could have been added. So whenever we need to execute something against the cluster, it is best to check which APIs are available and then interact with it. That is about API versions. Now I am going to list resources. Karthik already showed how to list resources, but let me show it again: you can see kubectl get nodes, which I executed previously; this shows the nodes that are running. One more concept we are going to see here is the namespace. These are the available namespaces. A namespace is nothing but a logical partition in a Kubernetes cluster. As we move through various staging environments, one cluster could be used or shared by multiple application teams, so how do we segregate them logically? That is where namespaces help. Once we create a namespace, we can permit one application team to use that specific namespace alone; that way they have their own space where they can run their pods and everything else. Let me create a namespace, for example app1; app1 is created, and I'll create another namespace called app2. Now if I list the namespaces, I have these two namespaces. Each namespace can be allocated to one application team, which gets exclusive access to deploy its resources there. Now let's see what components are available in the app1 namespace: I run kubectl with -n app1, where -n specifies the namespace, and get pods, and you can see nothing is deployed in this namespace yet. By default, if you don't give a namespace name, pods and other resources go to the default namespace; if I just run kubectl get pods, you see no resources found in the default namespace, because without -n the command runs against the default namespace. Since this is a newly created cluster, we have no workloads running yet. Now we are going to deploy our first pod on this cluster: I am going to use NGINX, which as you know is a web server, using the nginx image, which will be downloaded from Docker Hub, a public repository that hosts these images. Karthik also explained about creating a local registry earlier.
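The namespace and first-pod commands from this demo, reconstructed as a short sketch:

```bash
# Logical partitions for two application teams
kubectl create namespace app1
kubectl create namespace app2
kubectl get namespaces

# No workloads yet, either in app1 or in the default namespace
kubectl -n app1 get pods
kubectl get pods

# First pod: a plain NGINX web server pulled from Docker Hub
kubectl run nginx --image=nginx
kubectl get pods
```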
Docker Hub is a public registry that can be used by anyone, and any open-source software can upload its images there. So I'll run this command and it should create the pod; let's see what happens. You can see it is in the container-creating state. Suppose I want to know what exactly is happening: I can run kubectl describe pod with the pod name, which here is nginx. You can see the events: the pod is being scheduled on agent one, a worker node, it pulls the image from the public NGINX repository, the image is pulled successfully, the container is created and then started. Now if I run kubectl get pods, let me just scroll this to the top, you can see the nginx pod is in Running status. So the describe command shows you what exactly happened. If you need to know what is happening inside the pod, there is one more command, kubectl logs, followed by the pod name. If I run it, it prints what is happening inside the pod: you can see the process has started and the pod is up and running. So this is how you check the logs of a deployed pod, while the describe option gives you the details of what was deployed. As shown in the diagram, a pod is a combination of one or more containers; in this case we have deployed only one container, nginx, which is why only one entry appears under the containers section. Since a pod can have one or more containers, it is listed under a containers heading with the name nginx. The container image was downloaded from Docker Hub, and these are the various environment settings and configurations applied when creating it. If I had one more container, there would be another entry here with that container's name. Those are the commands I wanted to show you. Just as describe shows what is inside a pod, we can also describe at the node level, so I am going to run the describe node command and pick agent one. Here I can see all the details: the labels, the available CPU capacity, and the pods running on this machine. Earlier we saw the nginx pod was allocated here, and indeed you can see it listed. We can also see the events of this specific node, such as how the node is synced and how the kubelet, kube-proxy and other agents have started. This gives you all the details at the node level. With regards to scaling and deploying applications, we will go through those steps when we deploy an application in the second session.
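Before the Q&A, a quick recap of the inspection commands used in this demo; the node name shown is how k3d typically names the first agent of a cluster called local-cluster, so adjust it to whatever kubectl get nodes shows on your machine:

```bash
# Inspect the pod: scheduling, image pull and container start appear under Events
kubectl describe pod nginx

# Print what the process inside the pod is writing to its output
kubectl logs nginx

# Inspect a node: labels, allocatable CPU/memory, the pods placed on it, and its events
kubectl get nodes
kubectl describe node k3d-local-cluster-agent-0
```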
So that's all we have for this first session, and we are open to questions. I already see a question posted: is K3D ready for production? The primary purpose of K3D is local development. It is a mature tool and it can take production workloads, but it is not really recommended for real production environments, partly because it runs on top of Docker. Docker as a Kubernetes container runtime is on its way out: it is still used in some commercial projects and organizations, but going forward it is being deprecated and people are moving towards containerd and other runtimes. The primary objective of K3D is to provide a local development environment, so that a developer who wants to test their applications in a cluster does not have to spend money with a cloud vendor, spinning up a cloud-based Kubernetes cluster just to test; that is an expensive way of testing. So it is really meant for local development. Going forward it is also a good fit for edge and IoT nodes and similar projects, because it is very small; the K3D binary itself is around 50 MB, so it is well suited to resource-constrained environments where we can still deploy a cluster and make use of it. The next question is about containerd. Docker uses its own daemon, dockerd; containerd is a cloud-native container runtime, and it is CRI-compliant (CRI is the Container Runtime Interface). Alongside it we have CNI, the Container Network Interface, and CSI, the Container Storage Interface, among other such specifications; these are standards that the Cloud Native Computing Foundation has specified and formulated, so anyone can implement those standards and develop against them. containerd is one such standardized runtime for running container workloads. It is similar to dockerd, except that dockerd is specific to Docker, whereas containerd is an open-source runtime that any tool can talk to. Thank you. I think there is one more question, on namespaces, so let me share my screen and explain it again; I have opened a diagram. The question is whether one namespace can span multiple clusters. Namespaces are logical partitions within a single cluster: in this Kubernetes cluster you can see, for example, a default namespace, a dev namespace and a qa namespace. Similarly, if I run kubectl get ns (ns is short for namespace), you can see I have created multiple partitions here. Some of these are default ones: the default namespace is created automatically, and kube-system hosts all the system-related pods required for the cluster to run. On top of those we created app1 and app2. To explain it again: say there are two applications and I am going to deploy both of them into Kubernetes. If I deploy them both into the default namespace itself, we would have nginx running there and another application, say Tomcat, running in the same namespace. That could get confusing; we would not easily know who owns which pod or how to maintain them. That is why
We could also use namespaces for another scenario: what if we are running a cluster which hosts both the pre-prod and the test environments? nginx could be the name of the application, but it needs to be deployed twice, once for the testing environment and once for pre-production. In that case we can have two namespaces: I can create a namespace pre-prod and a namespace test, and deploy the pod into both. One will be for testing, given to the testing team to work on that pod, and the other will be the pre-prod one used by end users to see how the application behaves. I hope that is clear now; a namespace is segregation only within the cluster.

That was very informative. I thank Mr. Karthik and Mr. Arun for this enlightening workshop on Kubernetes. Audience, parts three and four of this workshop are at 2:15 p.m., so please stay tuned, and for any other questions please post them in our Slack channel; I think Mr. Karthik and Mr. Arun would be happy to answer them. Last but not least, I request the participants to please switch over to the common track, where we now have a keynote; the next session in this beginner track will begin by 1:45. So once again, thank you all for attending, thank you Karthik and Arun, thank you all.

So welcome back again. I hope you liked the topics we covered in the first session; now on to the second session. In this second session we are going to cover the following topics: how to deploy and scale an application, then how we expose the application using services, and so on. Then we are going to have a lab session where we deploy a cloud native application on a Kubernetes cluster, followed by observability; the observability part will include monitoring, metrics collection, logging and basic troubleshooting, followed finally by Q&A. As in the previous session, all this content is readily available for this workshop; the content is also available in the GitHub repository.

Now we will go on to the first slide, which is about how we deploy and scale an application. Here is an overall flow of how an application is deployed into Kubernetes. As you can see, a developer writes the code on a local laptop; once the application code is written, they write a Dockerfile. So what is a Dockerfile? Let me show you one; we can take an online example here. This could be a basic Dockerfile written by the application developer. What would it contain? In the first session we saw that a container should have a minimal OS, the 3PP binaries and then the application. In order to build that container we need a Dockerfile, and the Dockerfile will have commands like these: here I am using a base Ubuntu image, and then I am installing utilities like htop and the components that are required for my application to run. So this is the kind of basic Dockerfile that an application developer needs to write.
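The exact Dockerfile shown on screen isn't captured here, but a minimal example along the lines described (an Ubuntu base image plus a utility install) would look roughly like this; the start command is an assumption:

```bash
# Write a minimal, illustrative Dockerfile (contents are an assumption)
cat > Dockerfile <<'EOF'
# Minimal base OS for the container
FROM ubuntu:22.04

# Install only the utilities / 3PPs the application needs (htop, as in the demo)
RUN apt-get update && \
    apt-get install -y --no-install-recommends htop && \
    rm -rf /var/lib/apt/lists/*

# Bundle the application code into the image
COPY . /app
WORKDIR /app

# Hypothetical entrypoint; replace with the real start command of your app
CMD ["./start.sh"]
EOF
```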
Once this Dockerfile is written, the application is built into a container image using it, so now we have a container image ready to be deployed into Kubernetes. We have two components here which define how the application is deployed. The first component helps us define the application itself: we can define the resource usage, that is the minimum and maximum resources required for the application to run; assuming my application needs high availability, I can define the number of replicas it requires; I can define whether it needs to be scaled up or scaled down; and I can also define the upgrade strategy, that is whether the upgrade requires downtime or should be phased out sequentially. All of these things we define in the Kubernetes deployment, or workload resources, section.

The next thing: let's assume we have defined everything and deployed our application. How do we expose it? The application could be running on a Kubernetes cluster, but we need some endpoint or service to actually access it. That is why we have Kubernetes Services. Here we define what type of exposure the application requires: whether it only needs internal exposure, or whether it requires external communication and needs to be exposed to the web.

Now we will look into the details of how this is done in Kubernetes. The first component we need here is called a pod controller, so I am going to open this link. The pod controllers in Kubernetes control how pods are deployed, and these are the various controllers that are available: Deployment, ReplicaSet, StatefulSet, DaemonSet and so on. The widely used ones are Deployment and StatefulSet, and for this example I am going to show how a Deployment works.

In the previous session we deployed an NGINX pod, and that pod is still running. Let us assume a scenario where this pod gets deleted for some reason; I am going to delete it now. Let us wait for the deletion to finish. The pod is deleted, so we no longer have that NGINX pod running. What if I need some kind of controller that spins the pod back up even if someone accidentally deletes it, or if the node on which the pod was deployed has an issue? That is why we have a pod controller called Deployment. In this lab I am going to create a Deployment; instead of the earlier NGINX image I am going to use an equivalent NGINX image from a different image repository. So here I am creating a Deployment of the same type, that is NGINX; the image comes from a different registry but is also an NGINX image; and the number of replicas I am defining here is 2. I am going to deploy this now.
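A sketch of what that step looks like on the command line; the exact registry and image used in the demo aren't captured in the transcript, so a plain nginx image stands in for them:

```bash
# Create a deployment named nginx with 2 replicas
# (the demo pulls an equivalent nginx image from a different registry)
kubectl create deployment nginx --image=nginx:latest --replicas=2
```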
Now let us see: kubectl get deploy, or kubectl get deployment. This shows the nginx deployment is created; I mentioned 2 replicas, so it shows 2/2. Now kubectl get pods: you can see the two NGINX pods running here, which is why READY shows 2 out of 2, UP-TO-DATE is 2 and AVAILABLE is 2, and they have just started. What I am going to do next is give kubectl get pods -o wide, and here you can see the node on which each pod is deployed, along with the node IPs. Now I am going to delete one of the pods and we will see how the Deployment spins up another pod automatically. I am deleting the second pod that is running here; let us wait for it to be recreated. You can see it has automatically created another pod with a different name suffix from the one we deleted. The Kubernetes Deployment keeps monitoring whether it has 2 replicas at any given time, and if a replica is not available it automatically spins up another pod. Even the pod IP changed: the old pod IP was 192.168.16.78 and the new one is different, and it is deployed on another node, not the node where it was previously running. That is the feature of a Deployment. Using a Deployment we can also do an upgrade: for example, if I need to move this application to the next available version, I can run a command to upgrade it in a rolling fashion, where only one pod is upgraded first and then the next one follows, which ensures the high availability of the application.

Moving back to the slide, here you can see the controller options; I have pasted the two widely used pod controllers, Deployment and StatefulSet. The major difference between them is that a Deployment is used for stateless applications: applications that do not care about session affinity or transaction state are deployed as a Deployment. As you can see here, I have three replicas, so there will be three pods, but all three pods use the same storage layer. In the case of a StatefulSet, as the name suggests, it is predominantly used for stateful applications; here too we have three replicas, but each replica has its own storage. Ideally this is used for applications that require session affinity or need to keep transaction state. The typical StatefulSet example is a database: when databases are deployed in Kubernetes they should ideally be deployed as a StatefulSet, because each pod should have its own storage to write and read transactions. That is the difference, and that is why we need these pod controllers.
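The rolling-upgrade command mentioned above isn't shown verbatim in the session, but with a Deployment it would look roughly like this; the container name and target version are assumptions:

```bash
# Upgrade the deployment image; pods are replaced one at a time (rolling update)
kubectl set image deployment/nginx nginx=nginx:1.25   # container name "nginx" assumed

# Watch the rollout progress and confirm availability was maintained
kubectl rollout status deployment/nginx

# If the new version misbehaves, roll back to the previous revision
kubectl rollout undo deployment/nginx
```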
Now the next step: we have defined our workload using the pod controllers, so how do we expose the application? For exposing a Kubernetes application we introduce a concept called Services. What is a Service? A Service is a layer of abstraction on top of the pods. As we have seen, there are two pods running here; if I want to access my application, how do I know which pod to talk to? That is why we introduce this layer on top, and it also ensures the load is spread equally across all the replicas of a given application. We have three main types: ClusterIP, LoadBalancer and NodePort.

What is ClusterIP? ClusterIP is used for internal communication: for example, I deploy an application A which needs to communicate with an application B that resides on the same cluster. This is internal communication between two pods, so I create a Service, and that Service is used as the reference through which the other application communicates with this one. ClusterIP is predominantly for internal communication.

The next one is LoadBalancer. What does a LoadBalancer service do? This service type interacts with the cloud load balancers, or any external load balancer, and exposes the application; it is predominantly used for external communication.

The next service type is NodePort. NodePort is predominantly used for non-web applications, like a database. Let's say we have Postgres deployed in our cluster and it needs to be exposed outside the cluster; the Postgres default port is 5432, so if I need to expose that port I define the service type as NodePort. Once I do that, the assigned port becomes accessible on all the nodes in the cluster; in this example you can see the node port is open on every node, so you can pick and choose the node through which you want to connect locally to that DB. These are the service types.
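A rough sketch of what a NodePort service looks like as a manifest, using the Postgres example just described; the name, labels and nodePort value are assumptions:

```bash
# Illustrative NodePort service for the Postgres example (names/ports assumed)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort          # ClusterIP (internal) and LoadBalancer (external) are the other types
  selector:
    app: postgres         # matches the labels on the Postgres pods
  ports:
    - port: 5432          # service port inside the cluster
      targetPort: 5432    # container port on the pod
      nodePort: 30432     # opened on every node; must be in the 30000-32767 range
EOF
```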
Before we go on to Ingress, I will show how we can expose the Service using this command; I am just going to paste it and show you what exactly is happening. Here I am going to create a Service that exposes the deployment we created: the deployment name is nginx, so I am giving deployment/nginx; the name of the service is going to be nginx-service; and the type is going to be ClusterIP, so it is predominantly for internal communication. Then the port on which this is going to be exposed is 80, and the target port, the port where the container itself is exposed, is also 80 because this is a web server, so I am choosing 80 for both. Now I am going to create the service. If I need to see the running services I give kubectl get service, and we can see nginx-service is created, it has a cluster IP and it is exposed on port 80. Notice the difference: the pod IPs we listed earlier are dynamic and keep changing, whereas once created, the service IP remains constant. That is why other applications can use this nginx-service name and port to communicate internally; the service takes care of referring traffic to the pods, whose IPs change dynamically. That is how we create a service.

The next thing we are going to see is the ingress controller. What is an ingress controller? The ingress controller is essentially a layer on top of the services. On any production cluster we could have several applications exposed to the outside world, and the ingress controller allows us to define the rules through which clients reach each service. The client sits out here, and the ingress sits on top of the services; we define the various rules, and based on those rules the ingress routes the traffic to the respective service. For example, you can see foo.mydomain.com: whenever someone enters this URL, the ingress directs it to service one, and if it is a different DNS name it is routed to service two. The ingress holds all these rules internally to route the traffic. So the ingress is on top of the services, and a service is the layer that routes traffic to the respective pods.

Now we have seen services and their types, and ingress and the ingress controller. What is port forwarding? Port forwarding is for when I have deployed an application into Kubernetes and I want to check how it behaves locally on my laptop. Using port forwarding I can check the application behaviour only in the current environment where I forward the port. For example, in this lab I am going to run the port-forward command: you can see I have given kubectl port-forward and the service name, nginx-service, that we created previously; 9595 is the port on my local laptop where this service is going to be exposed, and 80 is the nginx-service port. You can see the port in the earlier kubectl get service output; the service is accessible on port 80, so I have given 80 here. Now I can copy-paste this address and access it locally: you can see the application that I deployed, and I can visualise it locally on my laptop. That is the advantage; it is used to locally check how the application behaves. So these are the two components we mainly use before we deploy a pod into a Kubernetes cluster.
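A sketch of the two commands being demonstrated; the service name and local port match what was described, everything else is standard kubectl:

```bash
# Expose the existing deployment as a ClusterIP service on port 80
kubectl expose deployment/nginx --name=nginx-service --type=ClusterIP \
  --port=80 --target-port=80

kubectl get service nginx-service    # the assigned cluster IP stays constant

# Forward local port 9595 to port 80 of the service, then browse http://localhost:9595
kubectl port-forward service/nginx-service 9595:80
```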
While running the commands, can you please increase the font? There are some reflections on screen.

Yeah, sure. Since we now have a lab session, I will meanwhile work on increasing the font for the remaining sessions. Now we will hand it over to Karthik for deploying a cloud native application in a Kubernetes cluster. Over to you, Karthik.

Thank you, thank you Arun. In the last session we went through how to build a Kubernetes cluster locally on your laptop; I still have a copy of that session open. This is basically an AWS machine prompt, and we have that cluster up and running. What we are going to do now is deploy a cloud native application on this Kubernetes cluster. So what is a cloud native application? There are a few standards it has to conform to: the application should be capable of running anywhere, in any sort of environment, which is what the cloud native specification says; it should also be scalable, resilient and accessible; and it should coordinate seamlessly with other applications that run in front of it and behind it. What we are going to deploy today is such an application. It is built on ReactJS, and we are going to deploy it on our Kubernetes cluster, for which I am going to take a copy of the application code. Sorry, let me share my screen. I hope you can see it now. Yeah, we can.

Let's talk about the application first. As I mentioned, it is a simple React application built on top of a JavaScript framework, and it mimics the interface of how Netflix looks. The only exception is that it does not have a backend that can talk to storage and serve movies or videos playable in your browser; what we will be seeing is just the interface. Without talking much more: the application code has already been written and is available in the GitHub repo, so I am going to copy that and paste it onto the terminal. The font should be visible at least. Let's have a quick glance at the code, because before we deploy this application we need to build a container out of it. Here is the application; let me open it again. This is my source code folder, it has a bunch of JavaScript files in it; I am not going to go through all of the source code. What we are more concerned about is the Dockerfile: this is the file that helps us build a container image out of the full application source code. We are going to build a Docker image now, and I think we already created a registry, if you remember, when we deployed the K3D cluster: a registry called dev-registry, which is also visible in the docker command. This is our registry server, dev-registry, exposed on port number 3500, and we are going to create an image and push it to this registry. For that, let's go back to the workshop material; this is the command that is going to help us create the image. I am already inside the folder; this is the docker build command with my application name, the version of the application, and then my Dockerfile. This is going to take some time, so meanwhile let's look at the Dockerfile: it has two stages.
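The Dockerfile itself isn't reproduced in the transcript; a typical sketch of that two-stage pattern for a React app (a Node build stage, then an NGINX image serving the build output on port 80) might look like this, with all names assumed:

```bash
cat > Dockerfile <<'EOF'
# --- Stage 1: build the React artifacts ---
FROM node:18 AS build
WORKDIR /src
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build            # produces static files in /src/build

# --- Stage 2: bundle the artifacts into an NGINX image exposed on port 80 ---
FROM nginx:alpine
COPY --from=build /src/build /usr/share/nginx/html
EXPOSE 80
EOF
```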
The first is the build stage, where it builds the artifacts from the source code; then in the second stage we create the deployable image out of those artifacts, bundling them into an NGINX image that is exposed on port 80. That's why it is a fairly simple Dockerfile. We also have a folder called manifests, with a couple of resources written in it; this is the standard way of deploying your application. The deployment has the usual metadata and the spec for the application: which registry the container image is going to be pulled from, the port number, the image name and the version, all of which are encoded in the image tag; these are all things we already discussed. So this is a simple deployment file, and to access the application we also require a service component, which has the standard fields and the container port on which the application is exposed. I am going to access the application using this service, which I will discuss in the later part of the session.

We have already discussed how this cluster is built and the different components we had for it: the host machine runs the Docker-based cluster, inside which we have a server node and three worker nodes, and in each worker node we have multiple pods running. Arun spoke about these components as well, the pods, services and deployments, and we are going to have multiple pods within the worker nodes. Now, how are we going to access those pods? There is a service in front of them. As mentioned, there are three types of services, ClusterIP, NodePort or LoadBalancer, and in our demonstration, for our application, it is a NodePort service. Now, when we have the service created, is it typically accessible from outside? No: by default it has only a cluster IP, it communicates between the cluster resources, between the pods, and it does not have a public IP associated with it. We need a public IP in front of it so that we can access the application running inside one of the pods through the service. So we are going to spin up one more component called an ingress, and this ingress is going to reach the service; the communication goes from the ingress to the service and then from the service to the pods. But again, the ingress does not have direct accessibility from outside either; it cannot be directly associated with a public IP. So we are going to have a load balancer component installed, which we already installed earlier, if you remember. This load balancer is accessible from the external world, from anywhere, even directly from your laptop; for our lab we are going to make use of a private IP, but in the real world it would be a public IP. The load balancer has the IP through which we access our application: the load balancer talks to the ingress, the ingress talks to the service, and from the service we talk directly to the application and get the response delivered to the browser. Let's go back and see the application build; it is still building. So basically, if you want to access an application by URL, you need to have a reachable IP in front of it.
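The manifests folder described above isn't shown in the transcript; a minimal sketch of the two resources it is said to contain (a Deployment pulling from the local registry, plus a NodePort Service) might look like this, with the image path, labels and ports all assumed:

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netflix-clone                # application name assumed
spec:
  replicas: 2
  selector:
    matchLabels: { app: netflix-clone }
  template:
    metadata:
      labels: { app: netflix-clone }
    spec:
      containers:
        - name: netflix-clone
          image: dev-registry:3500/netflix-clone:1.0   # local registry from the earlier lab
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: netflix-clone-service
spec:
  type: NodePort
  selector: { app: netflix-clone }
  ports:
    - port: 80
      targetPort: 80
EOF
```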
But let's see what services are currently available in the cluster. Yeah, Karthik, can you please increase the font for this as well? Thank you. So we have a few services already running in the cluster, and one of them is the ingress controller we already talked about, the ingress-nginx controller; through this ingress we are going to access our application. As you can see, all the services have a cluster IP, but none of them has an external IP associated, except the ingress controller, which does have one. Through this IP we can access the applications running inside the cluster, and we can verify that using the browser. This browser is running inside the WSL environment; as I mentioned earlier, you can check with whatsmyos.com, which returns Linux, meaning you are inside the WSL machine. Sorry, since the machine is busy with the build process it is running a bit slow. You will see a response like this, which means we are able to reach our ingress controller, so the configuration we have created so far is working.

Then, back to the container build: our application build has already started; in fact the build is complete, and now I think it is creating the artifacts. The next phase will be the deployment stage, where it creates a container image out of the artifacts that were built. This is just a frontend application; it will not have anything like REST API services, storage or a database associated with it, just a simple application that provides the UI of the interface. Compilation is completed. This should be much faster on most systems; this laptop is quite old, so you will find it a bit sluggish, but ideally it should be faster, so don't worry about the slowness you see here. Yeah, it's almost done. The application is built and we have a container created with the name netflix-clone, version 1.0.

Let's go back and continue with the session. The next step is to tag this container, which is currently stored locally on the hard disk, and then send it to the container registry. These two commands do that: in the first command I am tagging the image with the registry, and in the second command I am pushing that image onto our local registry, which is a private registry. It has been pushed successfully, so the image is now available to the cluster; we can pull it and get a pod created out of it. That part is complete, so we are on to the deployment now. As I mentioned at the start, from the manifest files we are going to create these two resources on the cluster, and we create them directly with the apply command: kubectl apply -f in the folder where the manifests are written. Let me change into the application folder and run it; the deployment as well as the service are created. It says 0 of 2 ready at first, but shortly we have both pods of the application up and running; you can see the two pods, each probably running on a different worker node. If you want more details about the pods you can run the wide output; it is not strictly required, but you can see on which node each pod is running and similar details.
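The tag-and-push commands and the apply step, roughly as run in the demo; the registry address follows the dev-registry on port 3500 created earlier, and the image name and folder are assumptions:

```bash
# Tag the locally built image with the private registry address, then push it
docker tag netflix-clone:1.0 dev-registry:3500/netflix-clone:1.0
docker push dev-registry:3500/netflix-clone:1.0

# Deploy the application using the manifests in the repo (folder name assumed)
cd manifests/
kubectl apply -f .

kubectl get pods -o wide     # two pods, likely scheduled on different worker nodes
```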
Now we have the application completely deployed and we want to access it. We have a service, we have the ingress controller and we have the load balancer, but how are we going to map the ingress to the service? We do not yet have a mapping between the ingress and the service, and that is what we are going to create now. Before that, as I mentioned, ideally we need a real DNS name, but unfortunately we do not have one, so we can fake the DNS entry using these two commands. Basically we are going to get the load balancer IP; if you check the variable value, it just prints the load balancer IP. Now I am going to map this load balancer IP to a dummy domain; my dummy domain is netflix-clone.kcdchennai.in, and that entry has to be added to the /etc/hosts file. This way you avoid needing a real DNS entry or a valid domain name for your testing: you can create your own domain, map it to the load balancer IP, and then access the application. If you try to access the application right now it will still not be available, because we need to create the ingress mapping.

So next we create the ingress object; this is the step to create it, and it says the ingress object has been created. The ingress object is just another Kubernetes manifest; you can see its details by running this command. It is not given in the workshop material, but if you want you can check what goes inside the ingress object. As you can see, all Kubernetes resources have the standard attributes, apiVersion, kind, metadata and spec; only the attributes specific to each resource change. This is the domain name we are going to use for our application, and it points to my service, the netflix-clone service, which we created as part of the deployment. Now the application should be accessible right from the browser; it is opening now, and you can see a page similar to what the real Netflix page looks like. It is mimicking the behaviour, but it does not have the actual catalogue of web series or anything of that kind. Since my system is running very slow it renders slower than it normally would. And that is the benefit of your own Kubernetes cluster: this cluster is zero cost, and you can run it and dispose of it at your own wish. So that is pretty much the application deployment: we have seen the deployment of the cluster, then how to create a Deployment on it, how to create a Service, and how we did the ingress mapping.
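A sketch of the fake-DNS entry and the ingress object just described; the domain, service name, namespace and ingress class are assumptions based on what was said on screen:

```bash
# Grab the external IP of the ingress controller's LoadBalancer service and
# map a dummy domain to it in /etc/hosts (avoids needing a real DNS record)
LB_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller \
          -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$LB_IP netflix-clone.kcdchennai.in" | sudo tee -a /etc/hosts

# Create the ingress rule that routes that host to the application's service
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: netflix-clone
spec:
  ingressClassName: nginx
  rules:
    - host: netflix-clone.kcdchennai.in
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: netflix-clone-service
                port:
                  number: 80
EOF
```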
Now, as a developer, if you wish to update your code, you basically update the source code: suppose I modify some text in App.js, replace it and save it. If I want to put this application onto the Kubernetes cluster again, I have to rebuild the image with a different tag, so you go back to the first step in the application deployment stage, build it as version 2.0, and re-run those commands once again. In a GitOps environment the cluster would automatically pick this up and roll out the newer version; in your development environment what you have to do in addition is update the deployment and service and re-apply them, and that is the only extra step whenever you want to change your code and re-deploy. The manifests all remain the same, you just update the image version: this deployment was initially using 1.0, so if you want to move the application to version 2.0, change the tag to 2.0 and run kubectl apply -f on the manifests again, and that will re-spin your application with the newer version. So whenever you change the code, rebuild the image with the new version, change the image version in the deployment, and re-apply. This is how a cloud native application is usually built and deployed into a Kubernetes cluster environment. Yeah, that's pretty much it; back to you, Arun.

Your voice is not very clear. Can you see my screen? Can you hear me clearly now? It's very low. Yeah, we can see, but the voice is a bit low. Better? Let me increase my volume.

Okay, so we have deployed our cluster and we have also started running our application, and now we come to the interesting part, which is observability. Observability here consists of metrics collection, logging and basic troubleshooting. In this session, first we will see how metrics are collected. Kubernetes itself provides an optional component called the metrics server, which can be enabled on a Kubernetes cluster. It provides basic metrics such as the CPU and memory usage of a given pod or node. For example, on the cluster we have deployed I am going to give kubectl top node, and you can see the CPU usage and memory usage of each individual node. Similarly, if I need to see the pod usage I give kubectl top pod; here you can see the NGINX we deployed, the amount of CPU cores and the memory this particular pod is using. Though this metrics server is available, the metrics it provides are quite limited, and on a distributed cluster it is not really what we expect.
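The metrics-server commands just mentioned, for reference (assuming the metrics-server addon is enabled on the cluster):

```bash
# Node-level CPU and memory usage (requires the metrics-server addon)
kubectl top node

# Pod-level CPU and memory usage, e.g. for the nginx pods we deployed
kubectl top pod
kubectl top pod -n kube-system     # usage of the system pods, for comparison
```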
That is why we move on to more advanced metrics collection tools, Prometheus and Grafana; let me move to that part of the workshop. Just a quick overview: with traditional logging and metrics tools the IP addresses and environments remain stable, whereas for a cloud native application the IP or the node where a specific pod runs is not constant and can change over time. We saw that in this example: the pod IP changed from one IP to another when the pod was deleted and brought back. For these kinds of dynamic changes, which happen all the time in cloud native environments, we need a more advanced monitoring and metrics collection mechanism, and that is why we have Prometheus. Prometheus stores its metrics as time series data and collects the information periodically from all the sources it can scrape. Let me open the Prometheus documentation; this is again a CNCF project, and a graduated one. As you can see here, Prometheus joined the CNCF in 2016 as the second hosted project after Kubernetes, so it is a very widely used metrics collection tool.

What we are going to do now is go through the architecture at a high level, but since the installation takes a few minutes we will start it first and I will explain the architecture while it runs. As I mentioned earlier, a namespace is a logical partition within a cluster, so first I am going to create a namespace called monitoring. Let me clear the screen and create it; the monitoring namespace is created. Now I am going to deploy the Prometheus stack, and before deploying it I would like to quickly give an overview of Helm, because we are going to use Helm here. I think we had a beginner session previously where the Helm concepts were explained, so just a quick look: Helm is an open source package manager for Kubernetes, similar to a package manager on Windows or APT on Ubuntu. With Helm we can bundle an entire application together with all its resources; in the earlier example we created a deployment, a service and so on individually, whereas with Helm we can bundle all of these together as one package and deploy it in one go. In order to install this stack we need to add its repo, so here we are adding the prometheus-community repo, and this is the GitHub location where these Helm charts live. I have already added this chart repo, as you can see, so I am going to directly install the Prometheus stack with helm install. This will take some time, so while it runs we will look at the high-level components involved. At the top we have Prometheus itself, which pulls the metrics from the various targets; in this case the targets could be at the node level, since we have multiple worker nodes, so Prometheus collects metrics from each of these nodes, and it also interacts with the Kubernetes APIs to collect metrics data. Once this data is collected, the stack has another component called Alertmanager, where we can create rules so that whenever a threshold is hit the end users are alerted, whether by email or by any third-party alerting application. So we have Prometheus and Alertmanager; the next component is Grafana. Grafana is an additional component used for visualisation, because once we have all these metrics we need a way to visualise and understand what they actually mean, and with Grafana we can build attractive dashboards that help us see at a glance what exactly is happening at a given point in time. I think the install is still taking time, so I'll just check whether the pods are getting created; they are still being created, so let's wait for some time. As part of this Helm packaging, since I said this is a Prometheus stack, it does not package Prometheus alone.
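The Helm steps being run here, roughly; the release name and chart (kube-prometheus-stack from the prometheus-community repo) match what the stack's contents suggest, but treat the exact names as assumptions:

```bash
# Create the logical partition for all monitoring components
kubectl create namespace monitoring

# Add the prometheus-community chart repository and refresh the index
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install the bundled stack (Prometheus + Alertmanager + Grafana + exporters)
helm install prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace monitoring

# Watch the pods come up
kubectl get pods -n monitoring
```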
This stack actually includes all three of those components, Prometheus, Alertmanager and Grafana, bundled together, along with a few other components required to interact with the node-level pieces. Now we can see the Prometheus stack is deployed on the cluster, so let me get the pods; you can see the various components deployed here. The first component we are going to look at is the node exporter. As I mentioned, Prometheus collects metrics at the node level, and the node exporter is deployed on each node in the cluster; it runs on every node and helps collect the node-level metrics. The next one is the operator: the operator is a centralised component that manages all the other components deployed here, it checks and validates how they behave, and it basically takes care of the operation and availability of the rest of the stack. These next ones are all the node exporters; then we have Grafana, which as I mentioned is for visualisation. Next is kube-state-metrics, which collects the metrics available at the API level; it interacts with the Kubernetes API to get metrics about the Kubernetes objects themselves. The next one is Alertmanager, used for alerting and triggering notifications to end users. And this last one is the Prometheus pod itself, the actual Prometheus server that the operator manages. Everything here is up and running.

The next step is to expose the GUI using port-forward. First I am going to expose the Prometheus GUI; as before, I will run this command, but let me first give a get service here. You can see the various services: as I said, with Helm everything is bundled together, so if I list the deployments and the services you can see they were all created as part of the chart, and that is the advantage of using Helm. Now we have the service; let me clear the screen and move this to the top, and I will paste the port-forward command. This is kubectl with the namespace monitoring, executing port-forward on the Prometheus service of the stack, which will be exposed locally on port 1990. I am adding an ampersand so that it runs in the background. Before exposing Grafana, let me first show you the Prometheus GUI, which is now available on port 1990. This is the Prometheus GUI, and by default it already has various ready-made metrics available for us to consume. For example, remember the node exporter running on each node individually; let me see what node-level metrics are available. Say we are interested in memory-related metrics: previously, with kubectl top node, we could only see two values, whereas here we have many metrics readily available. For example, I am going to pick the node memory active bytes metric and execute the query, and I get that node-level metric for every node where the exporter is installed.
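A sketch of the port-forward and a couple of example queries; the service name below is what kube-prometheus-stack typically creates for a release called prometheus-stack, so verify it with kubectl get svc first:

```bash
kubectl get svc -n monitoring    # confirm the exact Prometheus service name

# Forward the Prometheus UI to local port 1990 and keep it in the background
kubectl port-forward -n monitoring svc/prometheus-stack-kube-prom-prometheus 1990:9090 &

# Example queries to paste into the UI at http://localhost:1990
#   node_memory_Active_bytes    -> active memory per node (from node-exporter)
#   kube_deployment_created     -> creation time of each deployment (from kube-state-metrics)
```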
Suppose I want to visualise this metric as a graph: I can just click the graph option here and see the metric plotted. Since this stack was installed only five minutes ago, the metrics have been flowing for just the last five minutes. So that is Prometheus for node-level data. Apart from this, I mentioned the kube-related components, that is kube-state-metrics, which we have deployed here; let me check whether there are any kube-related metrics available. These are the API-level pieces of information it can pull; for example, I am going to query the deployment-created metric and see whether I get anything, and it lists all the deployments that have been created on this cluster. So this is how Prometheus works.

Now we have the metrics collected in Prometheus; let's see how the visualisation looks in Grafana. Let me port-forward Grafana as well: similar to the earlier port-forward, we give the namespace monitoring, the port-forward command and the service name, which is the Grafana service of the Prometheus stack, and I am going to put it on port 1991. It is exposed now, and it asks for credentials; when the Prometheus stack is installed with the defaults it comes with a default admin user and password, which we could customise, but we are not going to get into that, so we will use the default credentials. I have logged in to Grafana. Similar to Prometheus, there are many readily available dashboards here which we can quickly open and use. For example, I am going to click the node exporter nodes dashboard to see the node-level metrics; here I have the various nodes and I can check them with the click of a button. I can even monitor Prometheus itself from a ready-made dashboard: there is a Prometheus dashboard available, and if I open it I get all the Prometheus-level metrics. So it helps with dashboarding; as I said, we only have a few minutes of data since we deployed this just now. That is what the ready-made dashboards give you. Suppose you want to customise things or create new dashboards: you can click create and build your own, for which you need Prometheus queries like the metrics we displayed earlier, and you use those queries to build the dashboards. Apart from building new dashboards, we can also introduce new data sources: by default the Prometheus we installed here is configured as the default data source for this Grafana installation, but if you want to add other data sources, you can click data sources and see that Grafana integrates with many monitoring tools and even databases from which it can bring in metrics; all those integrations are available. So this is the default visualisation layer we use whenever Prometheus-based metrics collection is done. That is Prometheus and Grafana.
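A sketch of the Grafana steps; the service and secret names again follow the usual kube-prometheus-stack naming for a release called prometheus-stack, so double-check them on your own install:

```bash
# Forward the Grafana UI to local port 1991
kubectl port-forward -n monitoring svc/prometheus-stack-grafana 1991:80 &

# The default admin password is stored in a secret created by the chart
kubectl get secret -n monitoring prometheus-stack-grafana \
  -o jsonpath='{.data.admin-password}' | base64 -d; echo

# Log in at http://localhost:1991 as "admin" with the password printed above
```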
Next we are going to look into the Elastic stack. What is the Elastic stack? It is basically a log collection mechanism, and it comprises three to four components. As mentioned here, it is a combination of open source projects; let me open this link to go through the architecture. It consists predominantly of four components. The first component is Beats: Beats is similar to the node exporter we saw earlier, a component that needs to be installed on each node in the cluster; it tails the logs it is configured to forward and ships them to Logstash. Logstash is a log aggregator: it consumes the logs from the various sources and transforms or modifies them as required. Once the logs pass through Logstash they are forwarded on to Elasticsearch. Elasticsearch is basically a search and analytics engine: it can hold a very large amount of data, crunch it and bring you the desired results; when you search for something it returns the necessary result very quickly, even though the log collection is going to be huge. That is the functionality of Elasticsearch. The last one is Kibana: Kibana is again a visualisation layer, because we have all these logs stored and we need a way to visualise them, and that is what Kibana is for.

Similar to what we did before, I have provided the instructions to install this. In this case I have already installed the ELK stack following those instructions; the reason I did not want to do it live in the workshop is that it again takes some time and I did not want to wait for the installation to complete. The installation is done on this cluster; let me show you. I am going to use kubectl with the namespace we created, which is logging. Similar to the Prometheus case, the repository for this chart is available at this location; I have added the repository and I am installing each component one by one: the first component is Elasticsearch, then Filebeat, and then Kibana. Let me switch to the logging namespace and give get pods to see the components installed here. For example Filebeat, which as I mentioned is installed on each node individually; it collects the log-level information from each of the pods deployed on that node. Then we have Elasticsearch, the search and analytics engine, and then Kibana, which is used for visualisation. If I run get service I can see the services created here as well, and with get deploy I can see that Kibana is deployed as a Deployment, whereas Elasticsearch is a StatefulSet; you can see the three replicas, numbered 0, 1 and 2.

Now what we are going to do is port-forward Elasticsearch and Kibana to see how they look. First I am going to port-forward Elasticsearch: as before, I give the namespace logging, the port-forward command and the service, which is the elasticsearch-master service, on port 9200. It is exposed now, so let me check it in the browser.
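The install instructions referenced above are not reproduced in the transcript; a typical sketch of installing the three components from the Elastic Helm charts into a logging namespace, plus the port-forward shown, would be roughly as follows (chart versions and values are omitted and would need tuning for a small lab cluster):

```bash
# Namespace for all logging components
kubectl create namespace logging

# Add the Elastic Helm chart repository
helm repo add elastic https://helm.elastic.co
helm repo update

# Install the three components one by one
helm install elasticsearch elastic/elasticsearch --namespace logging
helm install filebeat      elastic/filebeat      --namespace logging
helm install kibana        elastic/kibana        --namespace logging

kubectl get pods -n logging        # filebeat runs per node; elasticsearch is a StatefulSet

# Expose Elasticsearch locally and check that it responds
kubectl port-forward -n logging svc/elasticsearch-master 9200:9200 &
curl -s http://localhost:9200      # cluster/build details (newer chart versions may require auth)
```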
The response actually means that Elasticsearch is working as expected: you can see the Elasticsearch master name, the build details and so on, which confirms it is healthy. Now we are going to visualise the logs, so next I am going to port-forward Kibana, again running it as a background process; this will be exposed on 5601. Now we have the Kibana GUI. Let's see how the logs are ingested: I am going to go to stack management and create an index pattern, because in Elasticsearch we need an index pattern to actually visualise the logs. I am going to enter the Filebeat index pattern, select the timestamp field, and create it. The index pattern is created; now let's see whether the logs are coming in. I go to Analytics, then Discover, and you can see all the logs are now available in Kibana for us to explore. Similar to Prometheus, there is a query language here with which we can filter out a specific log and see what is happening. So by following these lab instructions we have now deployed both metrics collection and logging.

Now we move on to troubleshooting. For any kind of troubleshooting, the first thing we look at is the metrics: what the metrics say and how the pod is behaving at a given point in time, for which we can use Prometheus to see the current status of the pod. The next step is to see whether the logs show any errors at the pod or container level; we have configured logging too, so we can leverage it to troubleshoot an application. Troubleshooting in Kubernetes is basically split into two areas: troubleshooting at the application level and troubleshooting at the cluster level. I have referred to the Kubernetes documentation link here because it has troubleshooting steps for each component or service that exists, for example specific instructions on how to debug a pod or how to debug a service. Let's go into debugging pods. Usually the first thing we do is run a describe pod command, which shows the events that took place. For example, I am going to give kubectl get pods, where we have the NGINX pods, and then describe one of them. Here we can see the events: I think I already showed this, how the pod is successfully assigned to a node, how the image is pulled and how the container is created and started. Whenever a pod fails, the event messages state clearly what exactly is happening, so the first step while debugging is to look at the describe pod output. We also need to check the status of the pod: whether it is in a Ready state, a Pending state, or showing something like an image pull error. After checking those, the next step, obviously, is to check the logs.
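The basic application-level debugging loop being demonstrated, as commands; the pod name is a placeholder:

```bash
# 1. Check the pod status first (Running / Pending / ImagePullBackOff / ...)
kubectl get pods

# 2. Describe the pod: the Events section explains scheduling, image pull
#    and container start problems in plain language
kubectl describe pod <pod-name>

# 3. Then read the container logs for application-level warnings or errors
kubectl logs <pod-name>
```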
For example, in this case I am going to give kubectl logs on this pod, and you can see the logs do not show any warnings or fatal errors, because this pod is running fine; that is why there is nothing of interest in the logs. That is how we typically troubleshoot. Apart from this, we can also leverage Prometheus and Kibana to see whether there is any CPU or memory spike; that is how debugging happens at the application level. Now let's see what happens at the cluster level. Here we can list the nodes with kubectl get nodes, and if we need to describe a node we give describe node, similar to what we do for pods; there too we have events and details to see exactly what is happening. We also have another command, kubectl cluster-info dump, which gives detailed information about the overall health of the cluster; we can collect this dump and use it when we need to inspect the cluster at a detailed level. So that is how troubleshooting works at the two levels, the app level and the cluster level. That is observability, and with that we are open to questions.

Hi, it was an awesome session. I have started looking at the questions, so we are waiting for them to come in. Meanwhile, how long have you been working with Kubernetes? You are on mute, I guess. Sorry; we have been working with Kubernetes for the last four to five years. Oh, that's a long time; I started my journey a couple of months ago, so I am still learning, and it was a very awesome event, I understood services well. Karthik here has almost the same level of experience as I have; before that I was into cloud and virtualization with VMs, not containers. I would like to know what you use for your cluster testing and development; which cluster do you like the most? For me, I use AKS, Azure Kubernetes Service, because I get credits as a student ambassador. Okay, then you have the luxury to afford that; but this workshop is mainly targeted at people who can't afford the cloud-based clusters, AKS or GKE or EKS, who don't have the luxury or affordability to spend a large amount of money to test and develop their own clusters. K3D is an absolutely no-cost cluster: you can run it wherever you want, install it on a VM, directly on a Windows machine, a Linux machine or even a Mac, and it is disposable as well; you can create it, export the cluster and move it to a different machine, all of which is possible with K3D. These are some of its features, and many cluster creation tools like kind or minikube don't offer the functionality of having your own private registry, which is another feature very specific to K3D; and it's a community-driven project, so you can develop on it as far as you want. So you folks use that for most of your local testing environments, right?
And it's optimised for the latest technologies: edge, IoT and devices like that. If you want to run your containerised applications on such devices, and you want to do it in a Kubernetes cluster based environment, K3D is the easiest way to do it because it is very light; it is the lightest cluster currently in the market, I would say. There is also k0s coming up, which is based on its own minimal image and is even lighter, but K3D is exceptional. What about k3s? K3D uses the same image: k3s is the base image that K3D runs. There is also another project called vcluster, for which a separate track is going on; vcluster uses the same image as well. Both vcluster and K3D use the same backend, and the base is k3s, which is how they achieve that lightweight footprint. I have used k3s with Vagrant to deploy nodes and work that way, but those VMs are quite heavy because of the virtualization involved, whereas containers are lightweight. Yeah, I think we don't have any questions for now. Anyway, the participants can always try the workshop from the URL we provided; they can try it anywhere, and if they have questions they can post them on the Slack channel. Yes, just join our Slack and you can post questions there. Thank you for the awesome event and so much knowledge; I will stop the broadcast now. Thank you.