introduce ourselves. Paco, please go ahead. I'm Paco Xu from China and I work for DaoCloud. I have been contributing to the Kubernetes community for several years, and my focus is on kubeadm and also SIG Node. My name is Rohit and I'm a technical lead at NEC. I have mainly worked on SIG Cluster Lifecycle and kubeadm since 2017.

Let's get started. Today's topic is an introduction to kubeadm, some recent updates, and how to contribute to kubeadm. kubeadm is a node bootstrapper: someone or something else has to provide the machine, and kubeadm creates a Kubernetes node on that machine, so kubeadm is not going to provision a machine for you. You need to provision the machine yourself, and beforehand you need to install the kubelet and a container runtime on it; you also need to install the CRI and CNI on the machines. kubeadm is agnostic to the infrastructure, CRI, CNI, and CSI.

What is kubeadm good for? If you want to try Kubernetes, possibly for the first time, you can use kubeadm to install a Kubernetes cluster. It's a way for existing users to automate setting up a cluster and testing their applications, and it's a building block for other ecosystem or installer tools with a larger scope. kubeadm is part of SIG Cluster Lifecycle, and the objective of SIG Cluster Lifecycle is to simplify the creation, configuration, upgrade, downgrade, and tear-down of Kubernetes clusters and their components. We would like to thank all the contributors who have worked on kubeadm over the recent years; kubeadm is GA now, and without their effort it would not have been possible.

Now, the basic kubeadm workflow. It starts with initializing the cluster and the first control plane node, and you only need a few commands. In the beginning you run kubeadm init, which creates the control plane node, and the join command printed in the output of kubeadm init is what you paste on the worker nodes to attach them to your cluster. After installing kubeadm and Kubernetes you still need to apply a CNI, and you can apply any CNI, like Calico, Weave, Flannel, etc. You also need the CNI for CoreDNS to work; CoreDNS will not become ready until you apply the CNI.

So what does kubeadm deploy? On the control plane nodes it deploys kube-proxy, the kube-apiserver, the kube-scheduler, the kube-controller-manager, etcd, and CoreDNS, and as I said, a CNI plugin still needs to be installed. On the worker nodes it deploys kube-proxy, and again the CNI plugin needs to be installed.

Coming to the kubeadm upgrade workflow: first you check for available upgrades. You run kubeadm upgrade plan, and it fetches the version you can upgrade to. After fetching the version, you upgrade the first control plane node of the cluster by running kubeadm upgrade apply with the version you want to upgrade to, and the third step is upgrading the rest of the nodes using the kubeadm upgrade node command.
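Putting those steps together, here is a minimal sketch of the init/join/upgrade workflow; the pod network CIDR, the Calico manifest URL, and the version numbers are illustrative placeholders, not the only supported values.

```
# On the first control plane node: initialize the cluster.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# Apply a CNI plugin (Calico here, purely as an example);
# CoreDNS stays Pending until a pod network is installed.
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml

# On each worker node: paste the join command printed by 'kubeadm init'.
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# Later, to upgrade: plan, apply on the first control plane node,
# then upgrade the remaining nodes one by one.
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.27.1   # the version reported by 'upgrade plan'
sudo kubeadm upgrade node            # on every other node
```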
The kubeadm scope is simple and extensible. kubeadm is divided into various phases: you can run the preflight phase, the certificate generation phase, the control-plane phase, and so on, so it's a highly extensible solution. Who uses kubeadm? There are a bunch of users; some of them are minikube, kind, and Cluster API, which also uses kubeadm as its node bootstrapper.

kubeadm is also part of a composable solution. What we mean by a composable solution is that SIG Cluster Lifecycle is designed in such a way that it manages the full life cycle of the cluster. We have a bunch of projects in SIG Cluster Lifecycle, 20-plus, and a few of them are the following. etcdadm is a project which creates etcd instances; it works the same way as kubeadm init and kubeadm join, so you can run etcdadm init and etcdadm join to create etcd clusters. On top of that we have kubeadm, and we have the cluster add-ons which are critical for running Kubernetes clusters, like CoreDNS, kube-proxy, etc. On top of that we have a declarative approach to creating Kubernetes clusters using Cluster API, and we have cluster provisioners as well, like the Cluster API providers for AWS, GCP, and Azure, and other installers like kOps and kubespray. We have component configs to manage the flags: if you want to specify any setting on the components, you can manage the flags using component configs. And we have Image Builder, which is used for building VM images. Now Paco will talk about the kubeadm highlights. Over to you, Paco.

Here I will talk about the kubeadm highlights. This project is not that active, because we want to keep it simple and keep it stable, so it is not updated frequently. Here are the KEPs and features we have worked on since 2022, so over a bit more than a year.

The first one is about the kubelet-config ConfigMap. At first we used a versioned kubelet-config ConfigMap, and later we found a problem: when you upgrade you can get an old version of the kubelet config, and after you upgrade several times there are many kubelet-config ConfigMaps, some of which are never cleaned up. Now we have changed the strategy and use an unversioned kubelet-config ConfigMap, which is much simpler.

The next is about the master node role, because we have renamed it to control-plane. This was a long process: we started the deprecation in version 1.20, at first we handled both the master and the control-plane labels, and later we removed the taint from the nodes. One release after we removed the taint, we started to remove the toleration for the master taint. In the latest version there is no master node role at all. Other applications should follow the same process to adapt to this.

The next is about security. Currently we can run the control plane as non-root, but this is not the default; it's an alpha feature. The reason is that SIG Node is working on user namespace support for the kubelet, so we are waiting for that feature, and I will elaborate more in the following slides.

The next is about kubeadm customization with patches. There are some recent updates here: we added support for patching the kubelet configuration, so you can specify settings node by node, since every node may need a slightly different configuration. The last one was just merged in the last release: we use the etcd learner mode in 1.27. It is currently an alpha feature, so you can enable it if you want, and joining a new control plane node is safer with it.

The first topic in detail is security, which is very important. This feature was added in 1.22: we use the pod security context (runAsUser) to make all the control plane pods run as non-root. Here is the list; we run everything as non-root. Besides that, in SIG Node there is work on running the kubelet rootless: we have been able to run the kubelet in a user namespace since 1.14, which is just running the kubelet binary as non-root inside a user namespace. The other part is running all pods in user namespaces; this is also alpha and is being actively worked on. We need more feedback from the SIG Node side. After that graduates to beta, I think it can be enabled by default in the kubelet, and then the non-root control plane pods feature will not be needed; that's why we kept it alpha. If you want non-root now, you can use it; if you want rootless in the future, I think the SIG Node work on user namespaces is the better choice.
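For reference, a rough sketch of how the non-root control plane alpha feature can be turned on today via the RootlessControlPlane feature gate; the Kubernetes version and file name are illustrative, and since the gate is alpha it is off by default.

```
# Illustrative kubeadm config enabling the alpha non-root control plane feature.
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.27.1
featureGates:
  RootlessControlPlane: true   # control plane static pods run as non-root users
EOF

sudo kubeadm init --config kubeadm-config.yaml
```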
Next is about etcd. etcd announced a new feature in version 3.4: a learner mode. An etcd member can join as a learner and later be promoted to a voting member. While it is a learner it does not vote; it just catches up with the leader's log, and once it has caught up it can be promoted to a voting member. That's what we added in the last release cycle. Currently we support it, and it is safer: if your etcd has a lot of data and the network or the disk is not that fast, there can be some risk when a member joins directly as a voter, as we did before.

The next is patches. This is about the extensibility of kubeadm. Since 1.22 we have supported patches for etcd, the kube-apiserver, the kube-controller-manager, and the kube-scheduler, so you can patch those components. Here's an example that patches etcd with a JSON patch to add an annotation; you can patch your static pods like this. Since version 1.25 we also support patching the kubelet configuration. The kubelet configuration is YAML, so you can patch things locally per node. The way it works is that you specify a directory when you run init or join and put the patch files in that directory, and kubeadm will respect the patches.
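A small sketch of that patches mechanism, assuming the documented patch file naming convention of target+patchtype.extension; the directory path, the annotation, and the maxPods value are illustrative.

```
mkdir -p /etc/kubeadm/patches

# JSON patch for the etcd static pod: add an annotation.
cat > /etc/kubeadm/patches/etcd+json.json <<'EOF'
[{"op": "add", "path": "/metadata/annotations/example-annotation", "value": "demo"}]
EOF

# Strategic merge patch for the generated KubeletConfiguration
# (a supported patch target since v1.25); tunes maxPods on this node only.
cat > /etc/kubeadm/patches/kubeletconfiguration+strategic.yaml <<'EOF'
maxPods: 120
EOF

# Point init (or join) at the directory; the same path can also be set via the
# 'patches.directory' field in the kubeadm configuration.
sudo kubeadm init --patches /etc/kubeadm/patches
```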
Here is something about the kubeadm configuration: there have been many updates since 1.10, and this is the history of the kubeadm configuration API. Currently there is only v1beta3. Work is in progress on a reset configuration and an upgrade configuration, which are candidates for the next version, v1beta4. As we mentioned on the roadmap, most things are very stable and simple, but the configuration is a big part and we will have some updates there. As I said, the UpgradeConfiguration and ResetConfiguration API types may be added, but this is still under discussion: users may upgrade with some flags or reset with some flags, and those flags cannot be saved in ConfigMaps today, so some users want to be able to do it with a config file.

The next is about the kubeadm operator. This is an initial idea; we had a prototype in the kubeadm project at first, but we removed it because there was no feedback at the time. The discussion is still open and we need more feedback on this; later I will show a personal demo project.

First, let's talk about the kubeadm configuration candidates for new versions that users have raised. The first one is about skipping add-on image pulls: some users use the skip-phases flag to skip installing the add-ons, for example because they don't want CoreDNS or they don't want kube-proxy, so they want to skip pulling those images as well. Currently kubeadm doesn't support that, but it can be done later. The upgrade and reset configurations I have already mentioned. The next is about customized environments for the control plane: we already support extra arguments and extra volumes for the API server, for example, and later we may support custom environment variables, for tuning or other needs.

The next is about controlling the timeouts. Many users run on edge or other slow devices, so the timeouts they need are different and they want to change them, but currently that is not easy to do. The last one is about allowing a flag to be passed multiple times. This is a special case: the API server and the controller manager accept some flags multiple times, but we currently save extra arguments in a key-value map, so it is hard to express that (see the configuration sketch below). We would have to change the API, and this is still under discussion because it only affects a few special cases. Those are the current candidates; if you have any needs, you can comment on the tracking issue to let us know which feature requests we should work on. v1beta4 may be introduced in the next release cycle, and we will discuss this in the upcoming kubeadm meetings as well.
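To illustrate that key-value limitation: in the current v1beta3 API, extra component flags are a plain string map, so a flag that the component itself accepts several times can only appear once. The field names below are from v1beta3; the flag values are made up for the example.

```
# v1beta3 stores extra flags as a map, so each flag name can occur only once.
cat > cluster-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    audit-log-path: /var/log/kubernetes/audit.log   # fine: one value per flag
    # kube-apiserver accepts --service-account-key-file several times,
    # but a map key cannot repeat, hence the v1beta4 discussion.
    service-account-key-file: /etc/kubernetes/pki/sa.pub
EOF
```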
Now, this is about the kubeadm operator. The link here is a personal project of mine, and I have tested it. I think one amazing feature is that it lets you upgrade across versions: Kubernetes only supports upgrading from one version to the next, you cannot skip versions, and this operator helps you with that. It still just upgrades the cluster to the next version, but when that is done it can trigger another job to upgrade to the version after that, so the whole process becomes automatic. The kubeadm operator focuses on day-2 operations for kubeadm clusters, so its main features are cluster upgrades, reconfiguration, renewals, and similar operations. Dry run is also a big feature: before you upgrade, you can run it to get the plan from the operator side, that is, the dry-run result for every node. I think this will help.

Here's a comparison of the current kubeadm-related tools. kubeadm is focused on the node; it is like a node operator, and it is very simple and extensible. It is a binary, so you can use it to manage the cluster on the node. The next is kubespray. It is more like an OS-level operator: it uses Ansible, and it supports bare metal and most clouds. Because it uses Ansible it is free to control those nodes, including the binaries, and that is why some users choose kubespray on bare metal. Cluster API is a cluster operator, I think: it uses Kubernetes-style declarative APIs and patterns to automate cluster lifecycle management. Cluster API is very popular right now because you can manage multiple clusters with it, and the way it upgrades is to delete the old-version node and join a new node with the new version, so it doesn't need to upgrade anything in place on the node, because it can touch the infrastructure.

Next, sorry, now I can come back to the kubeadm operator. It is focused on day 2 of kubeadm, and it runs in pods; it is an operator, so it is hard for it to touch the systemd part. When you want to do an upgrade there are some limitations here, and I think that is why the kubeadm operator is not always the most appropriate choice. If you can touch the infrastructure, there may be other good choices; if you just do it manually, you can use kubeadm directly; and if you have Ansible, you can do it with Ansible. The kubeadm operator, like other operators, just does its work in pods, so there are limitations; it can do some of this, but not well enough yet, I think. So this needs some feedback: if maintaining clusters with kubeadm directly is painful for you, I think it will help.

Next, I am also involved in kOps and kubean. kOps is "Kubernetes operations", and it can generate Terraform, which many people find attractive. kubean is based on kubespray; it is like an operator for kubespray, so you don't have to run Ansible yourself.

So, how many of you have contributed to kubeadm in the past? You can raise your hand. Yeah, quite a few, no problem. We love new contributors, and as you can see, kubeadm is GA now, and we also have some beta and alpha projects. So we love new contributors. There is a kubeadm contributor guideline by Lubomir; you can watch the video on YouTube. You can navigate to our community page and look for "good first issue" or "help wanted" issues, and the maintainers will help you in the issues. You can also help with non-code contributions like docs and testing. You can attend our Zoom meetings and ask questions, introduce yourself on the Kubernetes Slack, attend or watch the new contributor sessions from SIG ContribEx, and chop wood and carry water, be kind. Everyone has a place at the table. Here are a bunch of help-wanted issues; you can pick any of them, or a good first issue. Thank you so much, everyone, for joining the talk.

Here are some frequently asked questions. The most recent update is that k8s.gcr.io redirects to registry.k8s.io, so traffic to the older k8s.gcr.io registry is redirected to registry.k8s.io, with the eventual goal of sunsetting k8s.gcr.io. Thank you everyone for joining this talk. If anyone has any questions... So, we have a bunch of issues in the kubeadm project: the issues are tracked in a separate repository, but the kubeadm code itself is integrated into the kubernetes/kubernetes (k/k) code base, so it's part of k/k. Yes, there are also the kubeadm office hours and the SIG meeting; you can attend those if you have any questions.
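Related to the registry FAQ above, a quick way to see which registry and images your kubeadm release will use (recent releases default to registry.k8s.io); pre-pulling is optional, and the imageRepository override mentioned in the comment is only needed if you want to pin it explicitly.

```
# List the images kubeadm will use for the target Kubernetes version,
# including the registry they come from.
kubeadm config images list

# Optionally pre-pull them on a node, e.g. before an init or upgrade.
sudo kubeadm config images pull

# The repository can also be pinned explicitly with
# 'imageRepository: registry.k8s.io' in ClusterConfiguration.
```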