Again, welcome, everyone, to this wonderful event, Kubernetes Community Days Chennai. We are super excited to be here. My name is Yunati Meshral, and I'm joined by Akshath Khanna. We'll be talking about open source and Kubernetes for students. We are working as MTS interns at VMware and are final-year students pursuing a BTech in computer science from SRM University, Chennai. These are the important topics we'll discuss in this session. We cannot cover everything about open source and Kubernetes in the limited time, but we'll try our best to cover every important concept here. So let's begin. First of all, what is open source? Open source is a term that originally referred to open source software. Open source software is code that is designed to be publicly accessible: anyone can see, modify, and distribute the code as they see fit. Open source software is developed in a decentralized and collaborative way, relying on peer review and community production. Now comes the most exciting part: how to get started with open source? These are a few simple steps for getting started with contributing to open source. First, choose a programming language of your choice, and then search GitHub for beginner-friendly projects. For example, suppose I want to contribute in JavaScript; then I'll search for a project that uses JavaScript. Next, understand the project by looking at the docs. There is a contributing guide, or a README file, in which all the steps to get started with that project are listed. After reading that, try to run the project on your local machine or a VM. Then find an issue to work on from the Issues tab of that project; because you are a beginner, focus on issues labeled "good first issue". Then suppose you find a bug, or you want to make an enhancement to the project.
So you can make that change. To do so, you fork the repo, implement the feature or fix on your local machine, and push it to your fork. Then you raise a pull request, or PR, and wait for the maintainers to review it. If your change is valid and accepted by the maintainers, they'll merge your pull request, and you have made your first contribution to open source. Now let's look at open source communities and events. These are open source communities present worldwide where you can connect, contribute, and learn a lot about open source projects. These communities are super active, and their events are conducted every year at a specific time. For example, DigitalOcean runs Hacktoberfest every year in October, where you can contribute to open source. There is Google Summer of Code, in which many open source organizations participate and you can contribute to their projects. There is Major League Hacking, which conducts a lot of hackathons every now and then. There's Outreachy, GirlScript Summer of Code, and the Linux Kernel Mentorship Program. These are some of the open source communities and events through which you can get started, contribute to open source, and meet a lot of people who contribute to open source. Now that you have learned about open source communities and events, we'll see how to get involved in the Kubernetes community and SIGs. So what is a SIG? SIG stands for Special Interest Group. It is a group of contributors who maintain and publish the Kubernetes components and the website. Getting involved with a SIG is a great way of contributing to Kubernetes and having a large impact on Kubernetes projects. So head over to kubernetes.io/community to learn more about the Kubernetes community. Now we'll look at the contributor guide. First, the prerequisites.
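Before diving into the Kubernetes-specific guide, the fork-and-pull-request flow from earlier can be sketched as a handful of git commands. This is a minimal sketch: a local bare repository stands in for your GitHub fork so the commands run anywhere, and the repository and branch names are made up for illustration. On GitHub, the fork and the pull request themselves happen through the web UI.

```shell
# Sketch of the contribution flow: fork, clone, branch, commit, push, PR.
# "fork.git" is a local stand-in for your GitHub fork; names are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init --bare fork.git              # stand-in for your fork on GitHub
git clone fork.git work && cd work    # clone the fork to your machine
git config user.email "you@example.com" && git config user.name "You"
git checkout -b fix-readme-typo       # one branch per change
echo "A small docs fix" > README.md   # implement the fix or feature
git add README.md
git commit -m "Fix typo in README"
git push -u origin fix-readme-typo    # push the branch to your fork
# Next: open a pull request on GitHub and wait for maintainer review.
git branch -r                         # lists origin/fix-readme-typo
```

After the branch is pushed, GitHub shows a banner offering to open a pull request from it against the upstream project.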
You have to create a GitHub account and then sign the CLA. Signing the CLA is an automated step presented by a GitHub bot to every new contributor. Then read the code of conduct, because there are many important terms mentioned there. Then set up your developer environment so you can start contributing. Now, your first contribution. Find something to work on related to your interests. Suppose in Kubernetes I want to contribute to containerd or the CNI plugins; then I can go to that repo and start looking at the "good first issue" label. A good first issue can be an enhancement to the documentation or a small feature that you want to implement. So look for the "good first issue" label. As shown in the picture, the "good first issue" label is applied to many of the issues. You can assign any issue to yourself: go to that issue and, in the comments section, type /assign. The GitHub bot will assign that issue to you so that you can start working on it. Then you can contribute to a Special Interest Group, that is, a SIG. Under SIGs there are many projects, such as SIG Apps, CLI, Multicluster, Storage, and Windows, and there are many subprojects under those, so you can contribute there as well. So this was all about how to get involved in the Kubernetes community and SIGs. Now Akshay will take you forward to learn what Kubernetes is and what the Kubernetes components are. Hello, everyone. This is Akshay. I hope you are enjoying the session today. After learning about open source, now it's time to get into Kubernetes. Kubernetes is a platform and a container orchestration tool used for automating the deployment, scaling, and operation of your application containers. It is also portable, extensible, and open source, and it has a community of great contributors around the world.
It has a rapidly growing ecosystem, and Kubernetes services and support tools are widely available these days. Kubernetes is a container orchestration tool, as I said, so we can automate the deployment, scaling, and operation of applications in the cluster. It was open sourced by Google in 2014; before that, it was used internally by Google for many years. It supports various platforms like vSphere, Azure, Google Cloud, AWS, and so on. The next question is: what are a pod and a node? A pod is the smallest, most basic deployable object in a Kubernetes cluster. A pod represents a single instance of a running process in the cluster. A pod can have one or multiple containers, but in the common case it is just one Docker container. Then comes the node. A node is a worker machine in Kubernetes; it may be a virtual or a physical machine, depending on your cluster. Each node is managed by the control plane. A node can have multiple pods, and the Kubernetes control plane automatically handles the scheduling and self-healing of the pods across the cluster. Here comes the diagram, and this diagram speaks a thousand words. First we have a worker node. Inside the worker node we have two components, the kubelet and kube-proxy. These are the basic components in a Kubernetes cluster. As I said, in the worker node we have the kubelet and kube-proxy, and also containerd, which is the default container runtime of Kubernetes, running inside each worker node. In this diagram we have three worker nodes, each with its respective kubelet and kube-proxy, and all the requests are handled by the control plane, which is depicted here. All the requests coming from the nodes are received by the API server. The API server receives all the requests and sends responses back to the nodes. So we have the API server, which interacts with all of the nodes in the Kubernetes cluster.
Then we have the scheduler, which is responsible for scheduling pods onto nodes and keeping track of the nodes that are running. We also have etcd, a key-value store which stores the current state of the Kubernetes cluster and the nodes that are running. Then we have the controller manager, which runs the controllers that manage the state of the Kubernetes cluster. We also have a cloud controller manager, which integrates with your cloud provider through the cloud provider's APIs and manages the cloud-specific parts of the cluster. So we have two parts: the control plane and the worker nodes. Now it's time for a quick demo, so let's get started. We've learned about Kubernetes concepts and the Kubernetes components and how these components interact with each other, so now it's time to get our hands dirty with a Kubernetes cluster. I'll walk you through a step-by-step guide to creating your own Kubernetes cluster on your local machine and deploying a simple application in that cluster. Setting up local Kubernetes clusters is incredibly simple these days, thanks to tools like Minikube and kind, and even Docker Desktop provides Kubernetes nowadays. In this tutorial we'll be using kind, because it is the fastest and has minimal dependencies for setting up a Kubernetes cluster on a local system. We don't require any cloud like GCP, Azure, or AWS; we just need an operating system, and we can have our Kubernetes cluster up and running on our local machine. Before moving forward, there are some prerequisites that should be installed on our system, so let's go ahead and see them. First, you need Docker Desktop, which you can download from the official Docker website.
If you are using a Darwin-based operating system, you can download Docker Desktop for Mac; similarly, there is Docker Desktop for Windows, and if you are a Linux user, you can download Docker for Linux. You can get Docker from the official website. The next tool we require is kubectl. This is the command line tool that we use to send commands to our Kubernetes cluster. You can simply search "install kubectl" on Google and follow the steps shown in the Kubernetes documentation; again, based on your operating system, go ahead and install the kubectl command line tool on your system. The third dependency we need is kind. For all three of these, if you are using a Mac, you can install them directly using brew: simply brew install docker, brew install kubectl, and brew install kind. If you are a Windows user, you can install kind using the Chocolatey package manager: just write choco install kind. These are the three prerequisites we require before creating our own Kubernetes cluster on our local system. Please ensure that Docker Desktop is running on your system, so let's check that. To check whether Docker is running or not, we run the command docker ps, and we get an empty list, which means no Docker container is currently running but the daemon is reachable. Once all these components are installed, we are ready to deploy a local Kubernetes cluster, and we have also ensured that Docker is running on our system. After this, we need a simple configuration file, kind-config.yaml. This is a configuration file in which we'll specify a few small things to create our Kubernetes cluster. Here is the configuration file: in it, we specify the kind as Cluster.
So this is a cluster configuration, and we specify the API version to follow, kind.x-k8s.io/v1alpha4. This is the version of the kind configuration API that we are using. Under nodes, we specify the role as control-plane, and we have added a few extra port mappings: we are using container port 3080 and host port 80. I have also provided the listening address, but this is an optional step; you can omit it, and by default it will take 0.0.0.0. The protocol we are following is TCP. You can save this configuration file as kind-config.yaml. Then, to start your kind cluster, run the command kind create cluster, followed by a name; the name will be kcd-cluster. Then we pass the configuration with the --config flag, followed by the name of the file, kind-config.yaml. I'll hit enter to run this command, and we can see that kind is creating a node: preparing nodes, writing the configuration, starting the control plane. We can see the status, right? We are getting the output that the cluster is being created, so we need to wait for some time. Okay, now we get the response that the cluster has been created successfully, and with this we have finally fired up our Kubernetes cluster. Now we have a Kubernetes cluster running on our system, and you can check that by running kubectl get all. We can see the default Kubernetes service running here, which means we have successfully created our Kubernetes cluster. Now that the cluster is up and running, we can deploy applications on it. We'll be deploying a simple web server on our Kubernetes cluster. Kubernetes describes all its workloads through a simple YAML file called a manifest.
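Reconstructed from the description above, the kind configuration file would look roughly like this. The apiVersion string is the actual kind config API; the port numbers follow the talk, except that the container port is written as 30080 on the assumption that the spoken "3080" dropped a digit, since the NodePort used later must fall in the 30000-32767 range.

```yaml
# kind-config.yaml, reconstructed from the talk. containerPort 30080 is
# an assumption (the talk says "3080"); listenAddress is optional and
# defaults to 0.0.0.0.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    listenAddress: "0.0.0.0"
    protocol: TCP
```

With this file saved, kind create cluster --name kcd-cluster --config kind-config.yaml creates the single-node cluster described above.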
In that manifest, the YAML file, we have all the configuration needed to run a deployment in a Kubernetes cluster. So we'll quickly create that file; it is named application_deploy.yaml, and here's the file. In it we have defined the apiVersion as apps/v1, according to the API version of the resource we are deploying, and the kind is Deployment. We can also give it a name; the name will be kcd-app. In the spec, we have to specify a few things, like replicas: how many replicas do we want? It can be one, or as many as you want. There is also the selector, where we have put the matchLabels, and a template with all the metadata and so on. The important part here is the pod spec: here we define the container that we want to deploy. We are deploying an nginx image inside our Kubernetes cluster. We can save this configuration, go back to our terminal, and write kubectl apply -f application_deploy.yaml. Okay, our deployment is created; it gives me the output that the deployment is created. This is an nginx Docker container running as a process on the cluster, and we can confirm it by running kubectl get pods. This is the pod that is running; this is our kcd-app container. We can check the status. Okay, it is still not ready; we can see the status ContainerCreating. Let's run kubectl get pods again and check. Yeah, now you can see it is ready, and the status is Running. So what actually happened? When you create a deployment in Kubernetes, you also specify the number of replicas, which we saw in the configuration file; this is the number of replicas we want to create, and we have already written the manifest file, which is this one. Each replica runs a copy of the container defined in the spec.
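A sketch of the application_deploy.yaml described above, assuming the usual apps/v1 Deployment shape; the label app: kcd-app is an illustrative choice for wiring the selector to the pod template.

```yaml
# application_deploy.yaml: one nginx replica managed by a Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kcd-app
spec:
  replicas: 1              # how many pod copies to run
  selector:
    matchLabels:
      app: kcd-app         # must match the pod template labels below
  template:
    metadata:
      labels:
        app: kcd-app
    spec:
      containers:
      - name: nginx
        image: nginx       # the image deployed into the cluster
```

kubectl apply -f application_deploy.yaml creates it, and kubectl get pods shows the resulting pod.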
So we have already defined the container, nginx, and this running instance is called a pod. A pod can have one or more containers running as a logical group, and once the containers start, a process runs in each pod. In this instance, we are running a single nginx of our own inside the Kubernetes cluster; one is enough, so we don't need more of them. You can write kubectl logs and paste the pod name to check the logs. Here we get the logs from our pod; these are a few of the logs, and the process is running now. The next question that comes to mind is how a user can visit the page, how we can reach the web server that is deployed and running. Kubernetes offers a powerful resource called a Service, which will route your connections to the containers running your server. With a service, we can pass requests directly to the pods and into the container where the server is running. Before creating it, we have to make a change in the deployment configuration file; this is called exposing the container port. The service will then be accessible through a port on the local system. So we need to make a change here: we'll specify ports, with the container port as 80 and the name as nginx. Let's save it and run it again: kubectl apply -f followed by the file name. The deployment is configured. Let's check the pods with kubectl get pods, and we can see the pod is running and it has been updated; you can see the age here, which means that the deployment has been updated with the new configuration. Now that we have made the changes in the configuration of our app, we have to create a service, which will run in the Kubernetes cluster and through which we can expose our port so a user can interact with the web server. For that, we have to create a service.
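The change just described, declaring the port nginx listens on, would look like this inside the deployment's pod spec (the port name follows the talk):

```yaml
# Added under the deployment's pod spec in application_deploy.yaml:
# declare the port the nginx container listens on.
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: nginx
```

Re-applying the file with kubectl apply -f application_deploy.yaml rolls the deployment to the new pod template, which is why the pod's age resets.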
In order to create a service, we again create a configuration file, application_service.yaml. This is the service YAML file, in which we have given the apiVersion as v1 and the kind as Service. The name can be kcd-service; because our app is kcd-app, we have followed the naming convention and called it kcd-service. Then, in the spec, we give the type as NodePort and specify a few more things: the selector app should be kcd-app, so the service routes to our pods. The protocol to be followed is TCP, and the target port is 8080, because that is what we exposed in the app configuration file, along with the port the container is serving on. We can save this configuration file and apply it again: kubectl apply -f application_service.yaml. Sorry, not deploy, it's service.yaml. When I run this, the kcd-service is created. We can check that by running kubectl get services. This is the kcd-service we have just created, which is of type NodePort, and the other entry is the default Kubernetes service. Now let's check in our browser. We'll go to the browser, open localhost, and we can see "Welcome to nginx!". Our nginx server is running. Now, at localhost, we can see that we have an nginx deployment up and running. But now what? We want a custom page of our own design. For that, we create a ConfigMap resource. These resources are useful for passing config files to processes running inside the pod. For our instance, we have to get our index.html file inside our nginx container. In nginx, the static files live under /usr/share/nginx/html; all the HTML files are there.
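Reconstructed from the description, application_service.yaml would look roughly like this. Two values are hedged assumptions: the talk mentions target port 8080, but the container port declared earlier in the deployment is 80, so 80 is used here, and the nodePort value is chosen to line up with the kind extra port mapping so that localhost:80 on the host reaches the service.

```yaml
# application_service.yaml: expose the kcd-app pods via a NodePort.
apiVersion: v1
kind: Service
metadata:
  name: kcd-service
spec:
  type: NodePort
  selector:
    app: kcd-app           # routes to pods carrying this label
  ports:
  - protocol: TCP
    port: 80               # the service's own port
    targetPort: 80         # the nginx container port declared earlier
    nodePort: 30080        # assumed; must match the kind extraPortMapping
```

After kubectl apply -f application_service.yaml, the kind port mapping forwards host port 80 to this NodePort, which is why the nginx welcome page appears at localhost in the browser.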
So let's check that. Here we have an index.html file, and this is the sample HTML file that we'll be serving from our web server. Now I'll run kubectl create configmap index.html --from-file index.html, and we can see the ConfigMap is created; we got the response. Let's check with kubectl get configmaps, and I have my HTML file listed here as index.html. Once we have created the ConfigMap, we have to make some changes in application_deploy.yaml. In a previous step, we specified the containers here; after the container spec, we add volumeMounts. In volumeMounts, we give a name, here html-content, and the mountPath. This mount path is the path inside our pod, or the container, where exactly our static files will be; this is the path nginx reads files from. And readOnly is true. We also have to provide the volume: we give the volume the name html-content and point it at the ConfigMap we created in the previous step, index.html. We can save this configuration, go back to our terminal, and run kubectl apply -f followed by the configuration file. Let's wait for some time, and we see the output that the deployment is configured. Now let's check kubectl get pods, and we can see the status Running with an age of 13 seconds, which means our application has been updated. Now let's go back and load localhost. We get our awesome output on the page: a beautiful-looking HTML page for Kubernetes Community Days, Chennai. Thank you. I hope you learned something new today. Thank you for attending the session. Thank you, bye-bye.
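For reference, the ConfigMap wiring from the final steps would sit in the deployment's pod spec like this. The volume name html-content follows the talk; /usr/share/nginx/html is nginx's standard static content path, and index.html is the ConfigMap name created above with kubectl create configmap.

```yaml
# Pod spec additions in application_deploy.yaml: mount the ConfigMap
# created via `kubectl create configmap index.html --from-file index.html`
# over nginx's document root.
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: html-content
          mountPath: /usr/share/nginx/html   # nginx serves files from here
          readOnly: true
      volumes:
      - name: html-content
        configMap:
          name: index.html                   # the ConfigMap created above
```

Re-applying the deployment rolls out a new pod whose nginx serves the custom index.html at localhost.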