Hello everyone, today we will talk about the Deckhouse Kubernetes platform and what makes it special. First, a couple of words about myself. My name is Maxim Nabokikh, and there is a link to my GitHub account. I currently work for a company called Flant. You may know this company from our tech blog with the illustrations of little ants. I am the architect of the Deckhouse Kubernetes platform. In my free time, I am a maintainer of the popular CNCF project called Dex, which is an identity provider. I attend SIG Auth (Special Interest Group Authentication) meetings and try to contribute to Kubernetes a little, and I also contribute to other great open source products from the Kubernetes ecosystem.

Kubernetes is a complicated thing. At the top of the iceberg you can find some familiar stuff like Docker and Pods; you can run some workloads using the kubectl run command, which is actually pretty cool. There are also resources that are easy to understand, like Secrets, ConfigMaps, and Jobs. But if you want to run a proper production Kubernetes cluster, you need to go deeper. That's where you find challenges like how to run stateful applications in your cluster, how to write your own custom resource definitions, how to monitor your cluster, what horizontal and vertical Pod autoscaling are, what multi-tenancy is about, authentication, authorization, service mesh, node-local DNS, and what CSI, CNI, and CRI are. And I promise, if you decide to walk this path, you will find yourself one day in the middle of the night patching some Kubernetes control plane components, writing patches for Kubernetes, or directly interacting with data in your etcd cluster. It will take you so much time to become an expert in everything. And yet, running Kubernetes in production is fun and interesting, and that's why we like it; it's great software. But what do you need to deploy a fully functional Kubernetes? I think you can do it in just two steps.
The first step is to deploy Kubernetes somehow, using your favorite tool: maybe kubeadm, maybe Kubespray, or maybe you can follow the Kubernetes The Hard Way guide. It doesn't matter. For the second step, you just need to deploy the rest of your platform services. That's it. And of course, it's a joke. We all understand that there is a gap between the first step and the second step, and that's why we need a platform.

Deckhouse is a Kubernetes platform that is capable of doing many things. For example, Deckhouse can provision cloud infrastructure and bootstrap operating systems so you can run Kubernetes. On top of that, Deckhouse can actually deploy Kubernetes, and it can also deploy essential add-ons to your cluster for monitoring, for logging, for authentication, and other things. Deckhouse also automatically manages and updates these add-ons, as well as Kubernetes itself and the cloud infrastructure, so you do not need to care about this stuff anymore.

What makes Deckhouse really special? There are five points we will discuss today. The first one is that Deckhouse is a NoOps Kubernetes platform. What does that mean? We imply that Deckhouse manages all software on nodes and in system namespaces and provisions it automatically. For example, system software on nodes, Linux kernels, the container runtime, and kubelets are automatically managed. Imagine that there is a node. It's not a real Kubernetes node yet; right now it's just a server or a virtual machine, and there is a Deckhouse agent installed on this node. It's not a containerized agent; it's more like a systemd unit. It starts to provision the node by deploying all the necessary software on it, including the container runtime and the kubelet. The kubelet makes this server a proper Kubernetes node, and with a bootstrap config, we automatically connect this node to the cluster. The most interesting thing about it is that the Deckhouse agent is also connected to the control plane.
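To give a flavor of how worker nodes are ordered declaratively, here is a rough sketch of a node group custom resource. The kind exists in Deckhouse, but treat the exact fields here as illustrative and check the documentation for the real NodeGroup schema:

```yaml
# Illustrative sketch of a Deckhouse NodeGroup resource;
# field names may differ from the actual schema.
apiVersion: deckhouse.io/v1
kind: NodeGroup
metadata:
  name: worker
spec:
  nodeType: CloudEphemeral   # nodes are provisioned by the cloud provider
  cloudInstances:
    minPerZone: 1
    maxPerZone: 3            # allows autoscaling within this range
```

Applying a resource like this is all it takes: the platform orders the machines, runs its agent on them, and joins them to the cluster.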
So there is a dedicated extension API server for all Deckhouse agents that spreads configuration among them. The Deckhouse agent knows the versions of all software components on the node, and if there is a new version, Deckhouse downloads all the necessary artifacts from the Kubernetes API and then applies them to the node.

Kubernetes core software, such as the control plane, etcd, and certificates, is also automatically managed. Let's pretend there are three servers for the control plane, numbered 0, 1, and 2. To deploy the control plane components, Deckhouse deploys a DaemonSet called control-plane-manager with a node selector pointing to the control plane nodes. Inside the Pods of this DaemonSet is a familiar tool called kubeadm. It's slightly adjusted to be able to run inside a container, but it's still a familiar thing. And we all know that kubeadm deploys the static manifests of etcd and the control plane components. As for the readiness check, the control-plane-manager ensures that the deployed static Pods are running on a node; then the DaemonSet controller will deploy a Pod to the next node, and then to the last node. It works so well that even if there is a power outage, we can just add a new node and label it properly, and the control-plane-manager will do its job. That's pretty much it.

Another great thing about Deckhouse is that it runs anywhere. There are lots of options for running Deckhouse. The first group of options is popular clouds like Google Cloud Platform, Amazon Web Services, and Microsoft Azure. You can run Deckhouse on top of these clouds, and Deckhouse is fully integrated with their APIs: it can order load balancers, disks, and machines. And if your company is a government company that doesn't want to accidentally spill the beans and only trusts private cloud solutions, that's also okay.
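The rollout pattern described above can be sketched as an ordinary DaemonSet pinned to control plane nodes. This is a simplified illustration under assumed labels and image names, not Deckhouse's actual manifest:

```yaml
# Simplified illustration of a control-plane-manager DaemonSet;
# the image name is hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: control-plane-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: control-plane-manager
  template:
    metadata:
      labels:
        app: control-plane-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""   # run only on control plane nodes
      tolerations:
        - operator: Exists                          # tolerate control plane taints
      containers:
        - name: manager
          image: example.registry/control-plane-manager:latest
```

Because the DaemonSet controller rolls Pods out node by node and waits for readiness, a broken control plane change stops before it reaches every node.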
Deckhouse supports private clouds: OpenStack, VMware vSphere, and bare-metal installations, except that bare-metal installations have no autoscaling feature. Deckhouse can also be deployed on top of a managed Kubernetes solution. In this case, Deckhouse doesn't manage the control plane, but it can still deploy useful add-ons to the cluster. And if you're curious what this Deckhouse thing is, you can also install it on top of your current platform, on top of OpenShift or Rancher. And if you are just testing, if you just want to see the interface of Deckhouse, you can run it on top of kind clusters. That's also a possibility. As for the operating system, Deckhouse can run on top of Ubuntu, Debian, CentOS, and Red Hat Enterprise Linux, and on top of other Debian- or CentOS-based Linux distributions, for example, Rocky Linux.

What is more important, clusters created with Deckhouse are entirely identical, no matter which underlying infrastructure is used. You can use hybrid infrastructures and deploy some clusters to Google Cloud and some clusters to your private cloud, for example, to your private OpenStack, and this will work perfectly. The interface of these clusters will be the same.

By following the NoOps approach, with everything undergoing careful testing prior to release, the main asset of our platform is reliability. There is a module to measure SLA called upmeter, and upmeter performs periodic checks to be sure that every system in the cluster is operational, not degraded. There are two interesting things about upmeter. The first one is that upmeter can send metrics from the cluster via remote write to some long-term storage, so that you can see the SLA for all of your clusters in one place, which is good. Upmeter also deploys agents for smoke testing that migrate from node to node, so you can be sure that every node in your cluster works properly. And this is the web interface of the upmeter module.
In this picture, you can see that there are groups of probes. The first group is opened, and we can see that the basic functionality probe for the control plane is failing. By clicking on a pie, we can see for how many seconds this probe is up and for how many seconds it is down. We can also see a percentage. And if you don't want to bother yourself with historical data, pies, or percentages, there is a simplified status page, which just shows you whether the systems of the cluster are operational or degraded at the current time.

Moving further, Deckhouse is an open source project, and it is built on popular open source tools. For CNI, we have Flannel or Cilium. For monitoring, we have the Prometheus stack with the Prometheus operator, Grafana for dashboards, Trickster for query caching, and Vector to collect logs. As our security offerings, there is Dex as our authentication provider, cert-manager to issue certificates, and Open Policy Agent as our policy engine. For networking, as the ingress controller we use the most popular solution on the market, the ingress-nginx controller created by the Kubernetes team. And for the same reason we use Istio, because it's the most popular service mesh. We also have some little things to make the life of users more convenient: MetalLB for bare-metal clusters, to make them able to load-balance traffic, and also an OpenVPN server that runs natively on top of Kubernetes and provides developers with access to service networks and Pod networks.

For storage, there are two options. The first option is LINSTOR, managed by the Piraeus operator, so it can manage your PVCs out of the box. For Ceph, we only have a CSI driver, so you need an existing Ceph cluster to connect to. As for CI/CD solutions, there are Helm and Terraform, which are used internally by Deckhouse to provision infrastructure and deploy modules.
For users, there is a combination of Argo CD and werf. All our platform images are based on the Alpine distribution because it's robust, it's lightweight, and bugs are frequently fixed: there is a mechanism of security updates for the Alpine distribution, which is actually pretty cool. You can find more info on this page if you want, because this page is auto-generated, and if we add something new to Deckhouse, you will see it on this page.

Deckhouse itself is also open source software, and our code is hosted on GitHub, so everyone can go and see what we are doing. We are also a Certified Kubernetes platform by the Cloud Native Computing Foundation, so we are on the landscape. You can find more info about Deckhouse by following this link to our GitHub profile.

And the last thing about Deckhouse, which is really cool and which I admire the most, is that all Deckhouse services are connected together. For example, if you want to deploy cert-manager to your cluster, you need to deploy a Helm chart; this is okay. If you want to deploy Prometheus, you can also deploy it via a Helm chart. But if you want Prometheus to use certificates issued by cert-manager, you need to adjust the Prometheus chart configuration a little. For a single chart it's okay, but if you have from seven to ten charts, it's not so convenient, right? That's where Deckhouse takes its place. In Deckhouse, there is logic for how to configure all modules globally. For example, if cert-manager is enabled in Deckhouse, we also need to check whether this is a private environment or not. If cert-manager is enabled and it is a private environment, we try to use self-signed certificates for all Deckhouse modules. If the environment is not private, we will try to use Let's Encrypt certificates. If cert-manager is disabled, that's not a problem: we can disable HTTPS, so only HTTP will be available in the cluster. Another great example is authentication.
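As an illustration, the decision tree above can collapse into a single global setting that every module inherits, instead of per-chart TLS tweaks. The snippet below is a sketch only; the exact configuration keys are assumptions, not the verbatim Deckhouse schema:

```yaml
# Sketch of a global HTTPS setting inherited by all modules;
# keys are illustrative, check the Deckhouse docs for the real schema.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: global
spec:
  settings:
    modules:
      https:
        mode: CertManager        # or Disabled to serve plain HTTP only
        certManager:
          clusterIssuerName: letsencrypt   # a self-signed issuer would be used
                                           # in private environments
```

One setting in one place, and every module's Ingress picks it up; that is the point of configuring modules globally.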
If Dex is enabled in the cluster, we can deploy an OAuth2 proxy and configure all Ingress resources of our modules to use this proxy, with the auth request mechanism, for authentication. If Dex is disabled, we can just generate some basic authentication passwords, for development purposes. This is also good.

And it's not only connected services that make Deckhouse great; there are also managed services, for example, Grafana. Grafana is not so cloud native: it needs an SQL database like MySQL, and to run it on top of Kubernetes you need to create a persistent volume and a persistent volume claim, which is also not so convenient. So our Grafana is managed, and it is controlled by our own set of custom resources: a Grafana dashboard resource for dashboards; to add a data source to Grafana, you can deploy a Grafana additional data source resource; and to connect Grafana to Alertmanager, you can deploy a Grafana alerts channel resource. The same goes for Dex. Dex is not capable of being configured by custom resources, so we created our own set of custom resources, and Deckhouse can configure Dex with this set of custom resources as well.

We have discussed how great Deckhouse is, but how do you install one? To install Deckhouse, you need a personal computer or a dedicated installation server, and access to the cloud API. The installation is based on two configuration files. The first file is called config.yml, and in this file we describe the desired state of our Kubernetes cluster: the Kubernetes version, the state of Deckhouse modules, and some provider-specific settings, like whether we want to deploy the cluster to an existing VPC; maybe we want to add some security groups or change the flavor of the master nodes, and so on. The second file is not an ordinary config file: in this file we put the Kubernetes resources that we want to be deployed right after the cluster is ready for use.
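For example, adding a data source to the managed Grafana could look roughly like this. The kind mirrors Deckhouse's custom resources, but the specific fields and the Loki endpoint here are illustrative assumptions:

```yaml
# Illustrative custom resource for adding a Grafana data source;
# the data source type and URL are hypothetical examples.
apiVersion: deckhouse.io/v1
kind: GrafanaAdditionalDatasource
metadata:
  name: loki
spec:
  type: loki
  access: Proxy
  url: http://loki.monitoring.svc:3100
```

No Grafana admin UI, no manual clicking: you declare the data source, and the managed Grafana reconciles it.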
And this is great, because everything in Deckhouse is a custom resource: to deploy additional nodes, you deploy a custom resource; to deploy an ingress-nginx controller, you deploy a custom resource. So we will use these two things to provide access to the cluster right after it is deployed. We give these two configs to our CLI tool called dhctl (Deckhouse control), and this tool uses Terraform under the hood: it first deploys the basic infrastructure to the cloud, like networks, security groups, and SSH keys, and then it uses a second Terraform module to deploy the first control plane instance. Then dhctl connects to this instance by SSH and installs Kubernetes onto this first master node. After this, it installs the Deckhouse controller into this Kubernetes cluster, and now the Deckhouse controller becomes responsible for configuring the cluster. It starts installing modules; it starts creating nodes, because we ordered them by creating a node group in the previous step; and it also creates an ingress-nginx controller to provide access to the cluster, because we also ordered this controller. And the cluster is ready to work. Great job!

However, it is not enough to just deploy Kubernetes. We want to update it, we want to live with this Kubernetes cluster, we want to deliver security patches to it, and that's why we need to update Deckhouse. I will show you how it works. There is a Kubernetes cluster created in the previous step with Deckhouse on board, and there is registry.deckhouse.io somewhere in the cloud. In this registry, there are container images with newer versions of Deckhouse, and there is also a special image with the tag "stable". This is not an ordinary image: it is an OCI-formatted image, and there is no operating system or binaries in it, only a single YAML file with information about the upcoming release.
So Deckhouse pulls this container image, and if there is a new release, Deckhouse creates a custom resource in the cluster called DeckhouseRelease. If the tag exists in the registry, Deckhouse will be updated by patching its own deployment to the newer version, and the same goes for the next upcoming release, 1.36.4. And Deckhouse is updated.

There are some great things about Deckhouse releases, and the first one is release channels. Remember that previously we pulled our updates from the image with the tag "stable". This tag corresponds to one of the release channels, from less stable to more stable: Alpha, Beta, Early Access, Stable, and Rock Solid. For example, on the Alpha channel you will receive the freshest, yet less tested, updates. You can also declare maintenance windows in your Deckhouse configuration. For example, maybe you want to receive Deckhouse updates only on Friday evenings. That's also allowed; we cannot judge you for this. And if you want to have full control over your Deckhouse updates, you can set the update mode to manual, and then you will receive notifications about upcoming releases waiting for your approval, via webhook notifications or alerts. If you receive such a message, you just need to go to the cluster and run the following command; the command will be hinted to you in the message.

As we said previously, DeckhouseRelease is a custom resource, and in this custom resource you can see the changelog for the release, a changelog link, the version, and its transition time, that is, when your Deckhouse was updated to this version. It's pretty convenient, because you have a history of releases lying in your cluster. We also have a dedicated dashboard called flow.deckhouse.io for all releases on every release channel, and we also publish the dates of upcoming releases on this site.
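Putting the channel, maintenance windows, and manual mode together, the update settings could be declared along these lines. The keys below are a sketch of the idea rather than the guaranteed exact schema:

```yaml
# Sketch of Deckhouse update settings: release channel, maintenance
# windows, and manual approval. Keys are illustrative.
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: deckhouse
spec:
  settings:
    releaseChannel: Stable
    update:
      mode: Manual               # releases wait for explicit approval
      windows:
        - from: "18:00"
          to: "22:00"
          days: [Fri]            # only apply updates on Friday evenings
```

With a declaration like this, a new DeckhouseRelease object simply sits in the cluster in a pending state until the window opens and you approve it.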
And we created this thing because we want to make the process of updating Deckhouse as transparent as possible. The final conclusion is that Kubernetes is a great ecosystem, yet it's an iceberg, and you do not want to touch an iceberg with your bare hands, because it's sharp and it's cold. You want some kind of a box to ship this iceberg in, and you want this box to be wrapped with a ribbon, and this is what Deckhouse gives you. This is the final slide of the presentation. Thank you so much for coming. If you're willing to install Deckhouse, there are links below, and it would also be great if you could go to our GitHub account and click the star button, because that's how we know that you liked our project. Bye-bye, I hope to see you next time.