Hi, my name is Prabha Agrawal, and I welcome you all today to Kubernetes Community Days Chennai. I am thankful and very excited to be a part of this event, which is happening for the first time. I believe you are all familiar with the kube-scheduler, and I have an interesting and fairly new tool called the Kube Scheduler Simulator to present to you. Before we move forward, a little bit about me: I work as a DevOps engineer at IBM ISL on the IBM Kubernetes Service, and those are my handles to reach out to me on Twitter, GitHub, and Slack.

For today's agenda we have: what the Kube Scheduler Simulator is, its different components and its architecture, how it works, and a little demo to show how to get started with it.

The Kubernetes scheduler is one of the main control plane components, responsible for scheduling pods onto the worker nodes. The scheduler may look like a single piece, but it is made of small different parts working like clockwork to keep everything in place. To cluster admins, the scheduler may always look like a single component running either as a pod or a service on the master nodes, but when it comes to visualizing how the scheduler actually works, it is pretty hard to do so. Now we have the Kube Scheduler Simulator, which can easily help us visualize the creation, editing, and deletion of pods, nodes, and volumes by simulating them inside a cluster.

The Kube Scheduler Simulator is a Kubernetes SIGs project which is maintained under SIG Scheduling. It is an open-source project written in Go, and it runs very well locally using Docker or Docker Compose. It mainly focuses on visualizing resource creation from the kube-scheduler's perspective. We can simulate the creation of different resources like nodes, pods, storage classes, and volumes. The node assignment of pods by the scheduler is done on the basis of score and filter plugins, and in the Kube Scheduler Simulator we can use custom filter or score plugins in place of the default plugins. We can also configure a custom scheduler configuration by passing a configuration file, or we can simply use the default scheduler.

The Kube Scheduler Simulator has a web UI which can be used to try out the behavior of the scheduler and check the working of the plugins as well. Using the web interface you can create a new resource like a node, pod, priority class, or storage class, and you can see the result of the scheduling in this section. We will see the UI in detail once we move forward. In today's presentation we will mainly take a look at how the scheduler simulator works and how to get started with it by installing it locally.

Let's take a look at the different components involved in the Kube Scheduler Simulator. Once the Kube Scheduler Simulator is bootstrapped using Docker Compose, it creates three containers: one is the etcd server, another is the simulator server (the actual backend server), and the third is the frontend server. The backend server, which is also called the simulator server, executes the kube-apiserver first, since the kube-apiserver is the brain of any Kubernetes cluster. That is followed by the persistent volume controller, which helps maintain the persistent volumes across the cluster. Then it also starts the scheduler, which is the default scheduler that comes with the Kube Scheduler Simulator, and then the simulator server itself is initiated, which helps us actually visualize how the different resources are scheduled inside the Kubernetes cluster.
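To make that three-container layout concrete, here is a minimal sketch of what the Docker Compose setup could look like. The service names, build paths, etcd image tag, and backend port here are assumptions for illustration (only etcd v3.4 and the frontend port 3000 are confirmed later in the demo); the actual compose file lives in the project repository.

```yaml
# Illustrative docker-compose layout for the simulator (names and images assumed).
version: "3"
services:
  simulator-etcd:            # default data store for the simulated cluster
    image: quay.io/coreos/etcd:v3.4.13
    command: >
      etcd --listen-client-urls http://0.0.0.0:2379
           --advertise-client-urls http://0.0.0.0:2379
  simulator-server:          # backend: runs kube-apiserver, PV controller, scheduler
    build: .
    ports:
      - "1212:1212"          # simulator API endpoints (port assumed)
    depends_on:
      - simulator-etcd
  simulator-frontend:        # web UI
    build: ./web
    ports:
      - "3000:3000"          # UI is accessible at http://localhost:3000
    depends_on:
      - simulator-server
```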
We also have the web UI, just like we saw in the previous slide, and the etcd server, which acts as the default data store for the whole project. And this is the architecture diagram of the scheduler simulator. The main component is the simulator server: as soon as we execute docker compose up, it executes the kube-apiserver first. The API server then triggers the persistent volume controller, which is followed by the scheduler (the default scheduler inside the cluster), and then the simulator server. These are the different API endpoints which are registered once the simulator server and the respective components have been created and executed. We can directly get the metadata or other information by querying these endpoints running inside the scheduler simulator.

For the installation, after cloning the project locally, we can simply run the following command, which we will see in the demo: make docker_build_and_up. It will start building all three containers, which are etcd, the scheduler simulator, and the frontend.

Now we will see the demo. We will first see the installation of the scheduler simulator on my local machine, and then I will run Docker Compose so that we can execute the scheduler simulator in real time. I will follow the guide as mentioned in the scheduler simulator's GitHub repo. We basically need to clone the project and then run the command as mentioned here, either make docker_build_and_up or make start. make docker_build_and_up triggers the Docker Compose file and creates all the containers mentioned in it: the simulator server, the simulator frontend, and etcd. In the interest of time I already have the project cloned locally, so I will just run the Docker Compose command. We will wait while the containers are being bootstrapped... and it is done. You can see that three new containers have been added just now: the first one is etcd running at version 3.4, then the simulator server, which is the backend server for our project, and the simulator frontend, which runs the simulator UI. The UI is accessible at port 3000.

What I will do first is reset it: I will delete the existing resources, and then we will create fresh resources from scratch. You can see on this UI that we can create a new storage class, a new priority class, a new persistent volume claim, a new node, or a new pod. We will start by creating new nodes. Once you click on the new node option, it gives you an editor where you can put in the memory or CPU specification you want your node to be configured with. I will not change it, and we will schedule it with the default values. If you click on the node, you can see all the relevant information for the newly created node; it says that the phase is Running for the node. Now I will create another new node with the same configuration, and node 2 will also have the same status.

Now I will create a new pod. By default, the pod which will be created is based on the simulator's default container image. We will just trim the requests down to something like 2 CPUs and 1, or maybe 2, GiB of memory, and you can see all the other default fields mentioned, such as the termination message path, the image pull policy, and the DNS policy. And then I will apply.
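For reference, the pod being applied in the demo corresponds roughly to a manifest like the following. The image and exact request values are illustrative assumptions based on what the simulator's editor pre-fills; the fields mirror the defaults mentioned above.

```yaml
# Roughly the pod applied in the demo (image and values illustrative).
apiVersion: v1
kind: Pod
metadata:
  generateName: pod-
spec:
  containers:
    - name: container
      image: registry.k8s.io/pause:3.5      # assumed default image
      resources:
        requests:
          cpu: "2"                           # the trimmed-down CPU request
          memory: 2Gi                        # the trimmed-down memory request
      terminationMessagePath: /dev/termination-log
      imagePullPolicy: IfNotPresent
  dnsPolicy: ClusterFirst
```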
You can see that the pod has been scheduled on node 1. Now I will create another pod with a small change in the configuration: we will keep it at 5 CPUs and 5 GiB of memory, and apart from the resources it will have the same configuration as the other pod. Now we will go and look at the first pod which was just created. You can see that a few fields have been added apart from the resource definition fields.

One is the filter result: by default, the filters run for both of the nodes are plugins like AzureDiskLimits, InterPodAffinity, and NodeAffinity. These are the default filters defined as part of the basic kube-scheduler configuration. Then if we check the score result, it says that the scores are calculated on certain factors, which are ImageLocality, InterPodAffinity, NodeAffinity, NodeResourcesBalancedAllocation, NodeResourcesFit, PodTopologySpread, and TaintToleration. In our case, since we did not define any inter-pod affinity, node affinity, or pod topology spread, we can see that those values come out as zero, whereas for NodeResourcesBalancedAllocation and NodeResourcesFit we have a value of 96 as the score. At the end we have the sum of the finalized, that is normalized and weighted, plugin scores. Here we can see again that balanced allocation and resources fit are 96, but PodTopologySpread and TaintToleration come out as 100 and 200 for each node.

To understand the plugin filter and score allocations which we just saw in the Kube Scheduler Simulator UI for each node and every pod, we need to understand how the scheduling of pods onto the different nodes works. Scheduling is managed in two phases: one is the scheduling cycle and the other is the binding cycle. The scheduling cycle selects a node for the pod, and the binding cycle applies that decision to the cluster. We also have a few extension points like PreFilter, Filter, PostFilter, PreScore, Score, and NormalizeScore. All these plugins are used for various purposes: some for filtering out the nodes that cannot run the pod, some for pre-scoring work, which generates a shared state for other plugins to use, and some to rank the nodes that have passed the filtering phase. The scheduler eventually calls each scoring plugin before it binds the pods onto the respective nodes, and the allocation is done based on the best score value calculated. A configuration sketch at the end of this section shows where the plugin weights behind these scores come from.

Before we move on to the next part of the presentation, I will bring down the existing setup which I have locally. To do that, I will call the docker_down target from the Makefile, and as you can see it has started stopping all three containers, which are etcd, the frontend server, and the backend server, and it has been cleaned up now.

Moving on to the next part, we have a few other implementations being done for the scheduler simulator recently. The project was earlier using GitHub Actions to run its CI checks, which were mostly triggered as PR- and push-based checks, and there has been work done to migrate to Prow, which is a Kubernetes-community-maintained tool, to have the CI streamlined with the common tooling used throughout the org.
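As promised above, here is a minimal sketch of a KubeSchedulerConfiguration showing where per-plugin score weights come from; this is the kind of file you can pass to the simulator when trying out a custom scheduler configuration instead of the default. The weights below follow the upstream kube-scheduler defaults around the v1beta2 API, and the exact numbers may differ between Kubernetes versions, so treat them as version-dependent assumptions.

```yaml
# Minimal sketch of a custom scheduler configuration (weights are upstream
# v1beta2-era defaults and may differ between Kubernetes versions).
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        disabled:
          - name: "*"        # drop defaults so the list below fully defines scoring
        enabled:
          - name: ImageLocality
            weight: 1
          - name: NodeResourcesBalancedAllocation
            weight: 1
          - name: NodeResourcesFit
            weight: 1
          - name: PodTopologySpread
            weight: 2
          - name: TaintToleration
            weight: 3
```

The final score a plugin contributes for a node is its normalized score (0 to 100) multiplied by its weight, which is why values above 100 can appear in the simulator's score table.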
So currently the CI part of the scheduler simulator is handled using Kubernetes Prow, and the end-to-end tests for the scheduler simulator are currently in progress. These will focus on testing all the different components of the scheduler simulator, like the kube-apiserver (or rather the responses returned from the kube-apiserver) and the kube-scheduler, and whether a node or pod was created properly or not.

If you are interested in knowing more about the scheduler simulator, you can always follow the project link. There is a whole lot of documentation available which will be helpful for beginners getting started. There is a guide in the README at the root of the project which will help with getting started, how to use the scheduler simulator, and how to bootstrap it locally. We also have a docs section which explains how the scheduler simulator works and the different configurations you can use for the kube-apiserver, and if you are interested in checking the responses of the different APIs exposed, you can check that section as well. A few links to follow: the project link is there, and the documentation link is also there. You can always reach out to the maintainers of the scheduler simulator on the SIG Scheduling channels on the Kubernetes Slack.

And that is all for the presentation. I hope you enjoyed learning a few things about the scheduler simulator. It is a really good tool if you are looking for some help in understanding and visualizing how the scheduler and the different components around it work. The community is always looking for new contributors who can come, pick up issues, and start working on them. It would be great to have different folks working on it. Once again, I would like to extend my thanks for giving me the chance to present.

Hey Prabha, thanks for the insightful session. It was really helpful, and at the end you said you are welcoming contributors. One question that I have is around the skill set you are looking for: say I want to be a contributor, what is the minimum requirement you would expect for us to contribute in your space?

So, to contribute to any project under Kubernetes, you can basically start fresh. Even if you don't know Go that much, or you are not that aware of Kubernetes, you can start by helping with the documentation, or maybe the bash scripts which are used for end-to-end testing or the CI/CD part. There is always a way up the ladder from there, and if you are really interested in Go development, you can pick up a few issues around the core, helping with the Go part. Otherwise, in any project under Kubernetes you can find your way in. For the scheduler simulator, similarly, you can always help with the documentation part. There is the frontend part, which you must have seen just now in the presentation; if you are good with JavaScript, or have a keen interest in it, you can pick up issues around there. Then we have the APIs written in Go; you can pick up issues around there as well. And there is a lot of work going on with respect to end-to-end testing, which I mentioned; if you would like to build up knowledge around bash scripts, you can pick your way up from there as well.

Sure, Prabha, that answers my question. Thanks a lot. So, participants, do you have any questions? Prabha is around and available; he will be more than happy to help you. One thing I would mention is: please take part in any of the community meetings for SIG Scheduling.
Those happen every other Thursday, around 10:30 p.m. Indian time, and there is one which happens for APAC, the Asia-Pacific region, on the first Thursday of every month. You can reach out to me on Slack and I will share all the details, or to any other community member who is available on Slack; you can always reach out for help. Please don't hesitate to ask any questions; the community is very friendly, as you must be aware.

Sure, Prabha. One of the participants is already interested in contributing. You have already answered that you are reachable on Slack. Is there any other way community members can reach out to you, or is it only Slack? Are you active on LinkedIn?

Yeah, I am active on LinkedIn. My Gmail address is there, or I can have them reach me via you. The other way would be Twitter; I am active on Twitter as well these days. So, all these three or four channels.

I have a personal question; I will wrap up with this last question. I like this idea of a simulator. What was the starting point, the spark, to come up with a simulator? Because we all tend to work in the CLI, and we generally don't have the requirement for a simulator as such. How did you come up with a simulator, and where did this idea come from?

I may not be the best person to answer this because I am not the one who created the project, but it basically came out as a GSoC project, and the main idea was to have a kind of showcase which can help people really visualize how the different plugins and the different scoring work when the scheduler is trying to assign pods to a particular node, or distribute them across different nodes. We can always read in the documentation, and we can see in a cluster, that a pod goes to node A or node B, but what does it actually look like while it is happening? That was the main intention behind it. The project is still growing; we are adding different new components and support for different custom plugins. You may be able to write your own simulator one day. So yeah, it is growing from there.

Thanks a lot, Prabha. You have answered all the questions, and the session was very insightful and helpful. I hope the participants benefited as well. Thanks a lot. Have a good day. Bye.

Yeah, you too. Enjoy the rest of the session.