Today, we will be giving you a short update from Kubernetes SIG UI and also a short introduction to the project. I'm Marcin Maciaszczyk; I work at Kubermatic. And I'm Sebastian Florek; I also work at Kubermatic. Unfortunately, Shu Muto, who works at NEC, couldn't join us today, but he is also one of the SIG UI chairs. His part of the presentation is pre-recorded, so we will play it, and then we will continue with our part.

I'm Shu Muto from NEC. I was preparing as much as I could to see you there, and I'm very sorry that I couldn't come to Valencia after all. So I'd like to give the introduction section with this video recording.

Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself. The project is managed in the GitHub repository shown here. Releases follow the project's own timing, and the current version is 2.5.1, which supports Kubernetes 1.23.6. We aim to support the latest version of Kubernetes, and I've already created a pull request for supporting Kubernetes 1.24, so maybe a new version of the dashboard has been released already.

Kubernetes Dashboard is packaged in one binary, but it consists of a front end and a back end. Also, to collect and visualize cluster metrics, the metrics server and Dashboard Metrics Scraper are needed. The front end is written in TypeScript, runs in the browser, and calls the back end to get information about your Kubernetes cluster. The back end is written in Go, runs in your Kubernetes cluster, and accesses the Kubernetes API server using the Kubernetes Go modules, and the metrics server via Dashboard Metrics Scraper. Dashboard Metrics Scraper is also managed by SIG UI, so please ask us about it; for the metrics server, please ask SIG Instrumentation.

There are several ways to try out the dashboard. The simplest way is to apply the manifest in the dashboard repository.
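As a concrete illustration of that install step, a minimal sketch (v2.5.1 is the release mentioned in the talk; substitute a newer tag as appropriate):

```shell
# Deploy Kubernetes Dashboard from the recommended manifest in the
# kubernetes/dashboard repository (requires access to a running cluster).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

# Verify that the dashboard pods come up in their dedicated namespace.
kubectl get pods -n kubernetes-dashboard
```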
We also provide a Helm chart. External access to the dashboard is not allowed by default, to prevent easy security incidents, so use kubectl proxy or kubectl port-forward to access the dashboard. This manifest grants little authority by default, so you need to add more permissions to view and operate on the various Kubernetes resources. You can use the sample user described in our documentation for a dashboard trial, but it has a lot of authority, so be careful when using it. Dashboard users should familiarize themselves with Kubernetes RBAC and set appropriate permissions to protect their resources.

We are also focusing on internationalization to improve usability for end users. The dashboard supports English by default; French, German, Japanese, Korean, and Chinese are supported as well, and Spanish is newly supported this year. Your language is selected automatically based on your browser settings, so to see the dashboard in your language, set your locale in your browser. You can also change the language manually from the settings view in the dashboard.

We also welcome support for new languages. To add support for a new language, first organize your translation team. The dashboard maintainers transfer authority over translations to each translation team, because we cannot review every language, so each translation team can proceed with the translation work independently. The teams translating Kubernetes documentation such as kubernetes.io may work with you, so try to contact the Kubernetes docs channel for your language on Slack. Then add the settings while following our documentation, and run the npm command in your development environment to create your translation file. The translation file will be created in the internationalization folder; please add your translations to this file. For your translation work, you can also use the development container that the scripts in our repository create. Finally, create a pull request to the Kubernetes dashboard repository.
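To make the access and sample-user steps concrete, here is a sketch following the sample in the dashboard documentation. Note the warning above: this `admin-user` is bound to `cluster-admin`, so it has full authority and should only be used for trials.

```shell
# Reach the dashboard without exposing it externally: kubectl proxy tunnels
# through the API server to the service in the kubernetes-dashboard namespace.
kubectl proxy
# Then open:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

# Sample trial user from the dashboard documentation, bound to cluster-admin.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# On Kubernetes 1.24 or newer, request a token to paste into the login screen
# (on older clusters, read the ServiceAccount's token Secret instead):
kubectl -n kubernetes-dashboard create token admin-user
```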
When changes are added to the front end, your translation file will change as well, so keep watching our pull requests labeled for your language, and please update your translation file after such pull requests are merged. That's all from me. I hope to see you at the next KubeCon. Thank you. Bye-bye.

Okay, so the next part of our talk is the roadmap. We have recently reached the state where we support most of the Kubernetes resources, so the most important ones are displayed and you can get information about them using Kubernetes Dashboard. But the problem we are having right now is that scalability, and also performance in the biggest clusters, is not that good. So the next steps that we are going to take will be connected to solving that problem. The first one is decoupling the container that we have right now, because the current container holds both the API and the UI, all in one. Then we also want to do styling improvements and implement a new API architecture with a refactored auth layer. We will go through all these steps to show what we actually want to achieve.

As I mentioned already, the step we are working on at the moment is splitting the API and UI of Kubernetes Dashboard into two containers. The first container will be the Kubernetes Dashboard UI with its own web server; it will be very light, mostly just serving the UI to the browser. And there will be a second container with all the backend extracted into it. Thanks to that we can scale it, it will be easier for us to manage, and it will be easier for everyone to install and handle their Kubernetes Dashboard deployments. We have started a soft code freeze: at the moment we are working mostly on splitting and changing the project architecture, but of course bug fixes, security fixes, and things like translations and updates to documentation are still allowed, and we are doing them.
So the next step, after we finish what we are working on at the moment, hopefully within a month or two, is to work on styling improvements. Right now it is possible to apply white labeling, your own styles, to the dashboard, but it is not yet possible to mount custom themes into the container, so everything has to be pre-compiled. We want to solve that problem by mounting custom themes into the separated UI container.

And then there is the biggest change that we want to do, which I think Sebastian will talk about. The next part, after what Marcin mentioned, would be implementing a brand-new API architecture. The current one is a big monolith, so scaling is more complex, and handling new features from contributors is also more complex for us because of that. So we wanted to change things a bit and introduce quite a new approach to the API. After we split out the dashboard UI, we will slowly start working on the new API architecture. The important part is that we will have an API gateway that will stitch together and collect all the parts, so we will also be able to migrate to the new architecture gradually by disabling old endpoints and connecting to the new endpoints that we will start introducing.

Another new thing that we would like to use is a GraphQL API, because we would like to offer a better way to handle live updates in the dashboard. Right now it is mostly polling in intervals; with a GraphQL API we want to be able to deliver real live updates without those kinds of workarounds, let's say. We also want to introduce a new auth layer, which means we want a single microservice to be the single source of truth for us and to support logging in through OIDC providers, for example GitHub, Google, and so on. It also, of course, depends on how your Kubernetes API server is configured, but we want to support all of that.
And we are splitting our services and resource support into smaller microservices that will connect to each other, probably through gRPC. We are still figuring that out and trying to decide completely upon the whole architecture, but I think it will not change that much from this picture. So we can go to the next slide. Yes, this is basically the plan, but the final decisions will be taken when we start implementing it.

As I mentioned, I think the last thing is that we want our microservices to use informers to connect to the Kubernetes API, since informers already have a cache layer implemented in them. Right now we basically have no caching, which puts bigger pressure on the API server when bigger clusters are involved; thanks to informers, it should put a little less stress on the API server. And the last thing, which I think will also be part of the migration of the API architecture, is refactoring our authentication and authorization layer. It might also be connected to a slight refactoring of the dashboard UI, because we are thinking about changes such as hiding things that users do not have access to instead of showing error notifications; but this is a user-experience question that we are still trying to figure out.

So as you can see, these are mostly changes toward better performance and better scalability. While performing these changes, we also want to improve the tooling that we are using and make the project better for all the contributors, and perhaps more people will be willing to come in and help us with some issues if it involves interesting technologies. So we are looking for contributors, and of course we have issues labeled good first issue and help wanted. You can always comment on those and we will try to help you get started. We also have a SIG UI channel on Slack where you can reach out to us. I think that's it for now.
If there are any questions, please go ahead, whether they are connected to our presentation or not.

This is the first time I'm actually hearing about SIG UI, so I was wondering: what is in the API layer? Why don't you just connect to the API server? What is this intermediate server offering?

Basically, the reason we have to create our own proxy to the API server is that the Kubernetes API server does not offer things like filtering, pagination, and sorting (filtering includes search) at the same time. If you want to do those things at the same time, you can't really do that with the pure API server; you have to build something on top of it. The same goes for more complex features like finding resources that are connected to other resources: for example, on a pod's details view you want to get its owners, the events connected to that pod, and the related storage classes and volumes. To gather all the information connected to your resource, you need your own API, your own logic, to find those resources and return them to the UI so it can use them. The second thing is that otherwise you would have to fetch all the data into the UI, into the browser, so all the logic would happen in the browser and the resource consumption would be on the front-end side, which is not the best. We want to move that to the back end to be able to scale it and to split it into containers, so that is why it should happen on the back-end side.

Hello. One question regarding the OIDC feature you're implementing. In the current dashboard, I've seen that at the moment there is no way of implementing a logout link. Will you include this in the new version?

Yes, for sure. We want to have full support for those OIDC features. We were always hoping that Kubernetes would offer a way to auto-discover how the API server is configured, but since this is not really happening, we will have to offer all those, let's say, dedicated parameters.
Sometimes you can configure the dashboard the same way that the API server authenticates; right now you can also use a reverse proxy in front of the dashboard to handle authentication. But that is definitely not the best way and probably not that user-friendly. So we want to have full support for logout, login, and handling the whole authorization flow. Okay, thank you.

Hello. You said scalability is a topic for you, and I wanted to ask if you have any numbers on how many namespaces or how many deployments make scalability or performance an issue.

No, we don't have exact numbers. There were some issues created by users some time ago pointing out that for clusters with, for example, 10,000 pods, or 100 or more nodes, it might become harder to use the dashboard, especially the workloads view, which has to download basically almost the whole cluster into this single pod, and that can take a lot of memory, of course. That's why we want to split things into separate containers and services and have some caching, so that you don't have to re-fetch everything over and over again. But we on our own do not have any kind of metrics touching user clusters, so we can only rely on the feedback of users. The dashboard itself doesn't do any kind of tracking, analytics, etc.

Okay, so I guess that's it. Thank you. Thank you very much for coming.
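The first audience question above (why the dashboard needs its own API layer) can be made concrete: the API server offers chunked listing and simple selectors but no server-side sorting, so combining filtering, sorting, and pagination must happen in the client, which is the work the dashboard moves into its backend. A minimal sketch of that combination over mock pod data (names and timestamps are invented for illustration, not real kubectl output):

```shell
# Mock "pod list" (name / phase / creationTimestamp); real data would come
# from the API server, which cannot filter+sort+paginate in one query.
pods='web-1 Running 2022-05-03
db-0 Pending 2022-05-01
web-0 Running 2022-05-02
cache-0 Running 2022-05-04'

# Filter (phase=Running), sort (by creationTimestamp), paginate (page 1,
# page size 2) -- the combination the dashboard's API layer performs:
echo "$pods" | grep ' Running ' | sort -k3 | head -n 2
# prints "web-0 Running 2022-05-02" then "web-1 Running 2022-05-03"
```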