Hello everybody, welcome to this webinar hosted by the CNCF together with its partner Kubermatic. My name is Michal Mancho and I will be guiding you through the webinar, whose main topic is spinning up Kubernetes infrastructure using GitOps tooling. So have fun and let's get started right away. Just briefly about myself: I live in the Czech Republic and I work as a consultant and cloud architect at Kubermatic, helping customers with their cloud native journey. Feel free to connect with me on LinkedIn if you are interested, and if you like this webinar I will be happy to respond. Today I will very briefly introduce the Kubermatic company, but the main focus will be on a project that is composed of a set of CNCF tools, called start.kubermatic. I will explain the motivation for creating such a tool and which tools from the CNCF landscape we are using. I will also explain a bit more about the GitOps concepts and on which levels we can utilize them, and last but not least I would like to give some focus to security, which is an important piece of the puzzle in this setup. Hopefully we will also have a live demo, so I hope we will fit nicely into the given time. So let me start with the introduction of Kubermatic. It is a European-based company with employees all over the world, everybody working fully remotely, and it is one of the top Kubernetes employers in Europe. There are tools like KubeOne and the Kubermatic Kubernetes Platform, and those are the tools that will be involved in today's webinar as well, so I will briefly explain them too. The overall mission and vision of the company is power through automation.
We are heavily focused on automation and on simplifying the operations connected with running workloads and applications on a cloud native, Kubernetes-based stack. First of all, I have already mentioned the KubeOne and KKP tools. We won't be that focused on KubeCarrier today, so we can skip that for now. Let me start with KubeOne. KubeOne is a tool used for the automation of a single cluster. It is completely vendor neutral, so you can use it to deploy vanilla Kubernetes clusters on all well-known public cloud providers as well as on environments like on-premise, OpenStack, or vSphere, and providers like DigitalOcean, Hetzner, Packet, and so on. It is a CLI tool, from developers to developers, so there is no UI: just a CLI tool with a YAML definition with which you control the provisioning of your Kubernetes cluster. The result after you execute KubeOne is usually an HA cluster with a load balancer in front of the Kubernetes API, provisioned control plane nodes, and provisioned worker nodes, the latter managed again in a declarative way using a resource called MachineDeployment. We will talk about this a bit more later and we will also see it during the demo. So that is one part, KubeOne. The second one is abbreviated KKP, and I will be using that abbreviation during the webinar. This tool is effectively a platform that gives you a single UI for the management of Kubernetes clusters in a multi-cloud environment.
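To make this concrete, here is a minimal sketch of what such a KubeOne YAML definition can look like. The exact fields depend on your KubeOne version and cloud provider; the name and version below are purely illustrative:

```yaml
# kubeone.yaml -- illustrative KubeOneCluster definition for AWS
apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
name: demo-cluster
versions:
  kubernetes: "1.24.8"
cloudProvider:
  aws: {}
  external: true        # use the external cloud controller manager
containerRuntime:
  containerd: {}
```

Worker nodes are then described by MachineDeployment resources, which the KubeOne Terraform integration can generate for you.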
Effectively that gives you a one-stop shop for all of your Kubernetes clusters across different environments, including public cloud providers, your on-premise environments, or potentially bare-metal clusters. Newly with KKP 2.19 you can also seamlessly integrate your managed Kubernetes clusters from AWS, Google, or Azure, so services like GKE, AKS, or EKS are supported as well, and you will have full control over these clusters in a single place. I will briefly explain some of the major concepts, because they will be handy for understanding the next steps I will be describing. The usual setup looks like this: first you provision one, let's say, management Kubernetes cluster with the KubeOne tool, and on top of this management cluster you install the KKP platform, which consists of a couple of core components like the API, the operator, the dashboard, and so on. The cluster where these components are deployed is usually called the master cluster. Next to that, either on a separate cluster or on the same one, we provision a so-called seed cluster. The seed cluster is again a set of operators and controllers, but mainly it is where we provision the containerized control plane components of your user clusters. The user cluster is the cluster that is actually used by the end users, so by user cluster you can imagine a Kubernetes cluster on AWS, Google, and so on. Effectively it is represented by the workers provisioned in the given environment, while all of its control plane components run as containers on the seed cluster. I will keep this level of detail for now; if you are interested I recommend drilling into the documentation for more details, or just reach out to me and I will be happy to share more.
But for the context of this presentation, this should be enough. So now let's get started with the idea of the start.kubermatic project. Very high level, it is a wizard: a web application running in the browser. It has a UI layer and some backend logic, an API that takes care of preparing content. The content is a pre-configured setup of the Kubernetes platform, where everything is prepared based on a couple of inputs that you provide in the wizard. The overall idea is to go through the wizard, which is composed of six steps, and based on your selections the content will be generated and downloaded. You can take this as a starting point to spin up your Kubernetes infrastructure in a very easy way within a couple of minutes, instead of spending days or weeks trying to do the same by following the documentation. Let me briefly describe the available steps. In the first step of the wizard you choose the Git provider hosting the repository that will represent your infrastructure as code, because we automate everything, so we want you to have a declarative definition of all the configuration files. Here you pick one of the most popular Git providers. I would like to highlight that this week we actually added Bitbucket support, so if you are an active Atlassian user with your repositories on Bitbucket, we are more than happy to hear your feedback if you try that out. After selecting the Git provider, you select the cloud provider. For now we have added support for five common cloud providers: the public clouds AWS, Google Cloud, and Azure, and from the on-premise environments we have picked VMware vSphere and OpenStack.
I would like to mention one more disclaimer: the purpose of this project is to help you learn and quickly bootstrap KKP. You can of course customize and extend it later, or just use it for learning and onboarding purposes, and based on that build your own structure in a similar way. The next step is the cluster configuration. Here we ask for specific details like which Kubernetes version you would like to use for the main cluster, which container runtime to use, and so on. The step after that covers the details of KKP: here we ask for the domain that will be used to expose KKP through the Ingress resource, plus an email address and the user that will be used for authentication. Then, in the bootstrap section, we also demonstrate how you can manage resources inside the KKP platform in a declarative way. You will be asked to provide the name of the project that will be used and the configuration of the seed cluster, so here you configure, let's say, in which location the seed cluster will be running and how it will be configured. You can also optionally provide the credentials used for provisioning user clusters; in KKP that is a resource called a Preset. The last step is the summary, where you just click generate and you get the content. The content may look like this, for example. You can see the structure of the archive. First of all there are some README files; I will talk about them a bit later, but in general these are the instructions for making this work in your repository or on your local machine.
If you choose, for example, GitHub as the Git provider, you will receive a GitHub workflow definition; if you choose GitLab, you will have the GitLab CI YAML; and if you choose Bitbucket, you will have the pipelines YAML instead. So it is very dynamic and all of the content is generated based on your selections. One thing I didn't mention yet: as the GitOps tool we have chosen Flux version 2. That is why we have the flux directory here, and you can see there is a structure like flux/clusters/master. Effectively these are all the resources that are delivered to the master cluster. Then we have an additional directory called sops; I will talk about this a bit later, but its purpose is delivering encrypted values directly to your Kubernetes cluster. Then there are some more top-level directories. The kubeone directory includes the declarative definition of the KubeOne cluster; we will see the example later on. For the installation of KKP we also need two configuration files: one is values.yaml, effectively the set of values for the Helm charts that are installed as part of the installation, and the second one is the custom resource called KubermaticConfiguration, which includes high-level configuration about authentication, which features will be enabled on KKP, and so on. Then there are two more files. One is called secrets; I will talk about this later, but effectively here we generate some secrets for you, for example an encryption key pair or the user password, and this is the file where we provide these generated values to you. And then there is the terraform directory that is used together with the KubeOne tool: first you provision some cloud resources with Terraform, and KubeOne has a native integration with the Terraform output.
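For illustration, here is a trimmed-down sketch of what the KubermaticConfiguration custom resource can look like. Field names follow the KKP 2.19-era kubermatic.k8c.io/v1 API; the domain and feature flag below are placeholders, not the exact generated content:

```yaml
# kubermatic.yaml -- illustrative KubermaticConfiguration
apiVersion: kubermatic.k8c.io/v1
kind: KubermaticConfiguration
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  ingress:
    domain: kkp.example.com      # the domain chosen in the wizard
  featureGates:
    OIDCKubeCfgEndpoint: true    # example of a feature toggle
```

The companion values.yaml then carries the matching settings for the Helm charts the installer deploys.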
So it will read the values from the Terraform output and provision the Kubernetes cluster on top of the created resources. Again, I will demonstrate this later. Just to recap: you went through the wizard and downloaded a zip archive, which may look like this example. The next step is the actual delivery. There is a clear separation of responsibilities between what is delivered by the automated pipeline and the rest, because we are not able to start everything from scratch: it is the common chicken-and-egg problem. First of all, we provision the master cluster and install KKP using the automated pipeline. Here is a schema of how the pipeline looks: there are stages like validation of the Terraform configuration, application of the Terraform itself, then we apply KubeOne, install KKP, and finally we initialize Flux, or the GitOps tool in general. This is all managed by the pipeline. Then there is the second part of the responsibilities, all of the resources that can be defined declaratively, and that is fully managed by Flux. Anytime you update, add, or delete files under the flux directory, it is the responsibility of Flux to reconcile the state of these resources on your target cluster. We also provide two ways to spin everything up. First, you can utilize the automated pipeline of your Git provider, like GitLab pipelines, GitHub Actions, or Bitbucket Pipelines. Alternatively, and this can be very handy for understanding the product and all of the steps yourself, we provide README files with step-by-step instructions, again pre-configured based on your inputs, that you can follow to install and provision everything by yourself. With that you will also initialize Flux.
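The pipeline stages described above can be sketched in GitLab CI form roughly like this. The stage and job names are illustrative, not the exact generated ones:

```yaml
# .gitlab-ci.yml -- illustrative outline of the bootstrap pipeline
stages:
  - terraform-validate
  - terraform-apply
  - kubeone-apply
  - kkp-install
  - flux-bootstrap

terraform-validate:
  stage: terraform-validate
  script:
    - terraform -chdir=terraform/aws init
    - terraform -chdir=terraform/aws validate
```

The remaining jobs follow the same pattern, each invoking the corresponding CLI tool, so the whole bootstrap stays reproducible from the repository alone.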
So you will still utilize all of the main concepts and have the infrastructure as code in your Git repository, but the provisioning of the main cluster won't happen through the automated pipeline. Just to recap the motivations: as you may imagine, we are doing these kinds of installations in various environments for various customers, and with this project we wanted to simplify the bootstrapping and onboarding of customers so that we can start very quickly. Customers can also try this by themselves and based on that decide whether they like the platform or not. The generated content and the documentation are very detailed at this moment, so it should be easy to follow; if not, we always welcome feedback from the community, as all of the stuff we have is open source. The next motivation is full automation. We truly try to avoid any manual steps at all costs. You only do a couple of preparation steps and you have a fully automated pipeline that does everything for you, and on top of that we also set up the Flux tool, which is used for managing the resources on your Kubernetes cluster in the GitOps way. Last but not least, we wanted this to be secure, so that you can really keep all of your configuration files in the Git repository. We wanted to avoid situations where you are limited in what you can commit to the repository, and of course it is always a bad practice to put any plain-text secrets there. To avoid this we decided to use the Mozilla SOPS tool, which by the way has native support in Flux version 2, so you can integrate with it directly by using the decryption provider in Flux. We will see later how we utilize that.
Another reason we have chosen SOPS is that we are not only delivering Secret resources in Kubernetes; for those alone you could use tools like Bitnami's Sealed Secrets. There are also other files that include sensitive configuration, like the values file, the Preset, and others. In general we use SOPS to encrypt all of the sensitive values in the repository, and these are either picked up by the pipeline or handled by the direct Flux integration. This way you can commit an encrypted but still human-readable file to the repository, and it will be safely delivered to your Kubernetes cluster. Last but not least, this is built for the purpose of onboarding, as I have mentioned, so you can take it, extend it, or simply customize it to your needs, and you should have a very sustainable base for your Kubernetes infrastructure. I also wanted to briefly mention the CNCF landscape. I believe you have all seen it; it is massive, and it categorizes projects at various levels of acceptance within the CNCF. I wanted to see which projects are involved in what we are building, and this is the output: I found about 20 projects that are connected and actively used as part of the start.kubermatic project. We are not only delivering a plain Kubernetes cluster; together with the KKP platform we directly provide support for, for example, an observability stack with monitoring, logging, and other things, all delivered out of the box. We utilize, for example, cert-manager to provide the certificates and the NGINX ingress controller to expose the applications, and in both KubeOne and KKP there is new support for the Cilium CNI, which was recently added to the landscape as well. So this is just a very brief overview.
That is which projects and tools are involved in the start.kubermatic project that you can try on your own and use very quickly. Moving on, I also wanted to briefly explain how it actually works under the hood. Now I will try to be a bit more technical and briefly walk through the specific steps that happen, either performed by you if you follow the local README files, or inside the pipeline. With that you will hopefully understand which components are involved and how we automate the whole thing. First of all, we start by provisioning the cloud resources using Terraform; for each provider we have a Terraform example that can be used. The output from Terraform is then used by KubeOne to provision the HA Kubernetes cluster. As mentioned before, the usual result is a load balancer, a set of control plane machines (virtual machines or whatever is available on the provider), and a set of workers that will run the workload of the master cluster; the actual installation of KKP will utilize these workers. So let's imagine that this is the empty, so-called master cluster in the terminology we use. There is also a concept of add-ons: out of the box we provision, let's say, a storage class, and we also set up the cluster autoscaler, so you don't have to care that much about scaling the machines up and down based on the current utilization; it is all managed automatically, and you just configure the maximum and minimum number of nodes in use. So at this point we have an empty, vanilla Kubernetes cluster, and now we use the KKP installer. KKP officially ships an archive that includes a binary called kubermatic-installer.
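The Terraform-to-KubeOne hand-off described here can be sketched with a few commands; the directory paths are illustrative, not necessarily the exact layout of the generated archive:

```shell
# Provision the cloud resources first
cd terraform/aws
terraform init
terraform apply

# Export the Terraform output and let KubeOne consume it natively
terraform output -json > ../../kubeone/output.json
cd ../../kubeone
kubeone apply --manifest kubeone.yaml --tfjson output.json
```

KubeOne reads the machine and load balancer details from the Terraform JSON output, so the same values never have to be duplicated by hand in the cluster manifest.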
After running this installer and providing the values file and the KubermaticConfiguration, four namespaces are created. The first one is kubermatic, where the already mentioned dashboard, API, and operator components and controllers run. Next to that, as the identity provider we use Dex, which is also a CNCF project; it is used for authentication. For exposing the applications we deploy the NGINX ingress controller, and for provisioning the certificates cert-manager is installed. These are, let's say, the core components installed by the installer. But we continue, because we would like to demonstrate how to automate the other stuff as well. Okay, there is one more preliminary step: to be able to access the UI on a specific domain, you have to register the DNS endpoint. This is very much cloud-provider specific; again, there are instructions on how to deal with it in the documentation, and for AWS, for example, we also provide an automated Terraform module that can take care of this step for you. After the domain is registered, the certificates are provisioned automatically and so on, so at this moment you can start using KKP in your browser. The next step is the installation of Flux. For version 2 there is a CLI tool called flux that has a bootstrap command, which effectively bootstraps Flux into your Git repository. It also creates commits with the definitions of the components: all of the Deployments, all of the ServiceAccounts, and so on.
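The bootstrap step itself is a single CLI call; for GitLab it looks roughly like this, where the owner, repository name, and path are placeholders for your own setup:

```shell
flux bootstrap gitlab \
  --owner=my-group \
  --repository=my-infra-repo \
  --branch=main \
  --path=flux/clusters/master
```

After this, Flux watches the given path in the repository and reconciles everything under it onto the cluster.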
It is all created declaratively in your repository, and it also creates a so-called Flux Kustomization. Please do not confuse that with Kustomize, the Kubernetes-native templating tool; this is the Flux-specific Kustomization, an API resource used to say: from this repository, from this path, deliver the resources to my Kubernetes cluster. So that is just a very brief description. Let's look at what happens next, because as you remember we have some preconfigured Flux files: effectively a bunch of Helm charts and, next to those, some other resources. At this step components like monitoring, logging, the OAuth2 proxy, and MinIO are installed by Flux in an automated way. These are examples of the Helm charts delivered from the Kubermatic chart repository. Next to that we also deliver a couple of KKP API resources, based on what you provided in the wizard. With this step we configure the Seed resource, which effectively defines where the control plane components of your user clusters will run. Next to that we define a project, configure the user that will be an admin out of the box, and provide some additional KKP settings, like which features should be visible in the UI, some custom links, and so on. And sorry, I skipped one step; this is actually what is delivered first: we also deliver another Kustomization, which has the already mentioned SOPS decryption provider defined. It means that here we are saying: okay, we have another directory in the Git repository that may include encrypted YAML resources, and from this path please deliver the resources to my cluster as well. This is how we deliver, for example, the Preset or the so-called cluster templates.
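That second Kustomization with the SOPS decryption provider can be sketched like this; the names and path are illustrative, but the decryption block is the native Flux v2 mechanism mentioned above:

```yaml
# Flux Kustomization that decrypts SOPS-encrypted resources before applying
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: sops-resources
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  path: ./sops/clusters/master
  sourceRef:
    kind: GitRepository
    name: flux-system
  decryption:
    provider: sops
    secretRef:
      name: sops-age        # Secret holding the private decryption key
```

With this in place, encrypted manifests committed under the given path are transparently decrypted in-cluster, so nothing sensitive ever lands in Git in plain text.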
So these are some additional resources that are managed by this Kustomization. This is, let's say, the complete picture of what you get as the result. You can see a lot of logos and a lot of well-known tools that will be preconfigured automatically for you, and you can start provisioning the Kubernetes clusters on which you will run your actual applications and workloads. I also wanted to mention that this gives you huge power over what you can do with GitOps, and to demonstrate a bit how we manage everything in the declarative way. For example, here on the left side is an example of how you bootstrap the Flux tool, and here at the bottom is effectively the result: it creates commits containing a couple of files, and inside these files you will find the Kustomization and all of the definitions for Flux itself. So Flux is also managed declaratively: if you decide to upgrade the tool itself, you can do it by updating these files in the repository, or you can upgrade Flux locally and perform the bootstrap again, which will update the components and potentially the synchronization files as well. For the context of KKP, on the right side I also mention more examples of what you can manage declaratively. First of all, the already mentioned concept of cluster templates, which can be used to really simplify the bootstrapping of Kubernetes clusters for your end users. If you are, for example, offering a Kubernetes platform as a service to other customers, you can use the multi-tenancy features, set up roles and permissions for each customer in a different project, and declaratively define different cluster template options for each customer.
There is also a concept of add-ons that you can use, again fully declarative, plus native support for OPA policies and a bunch of other KKP resources. There is one exception: the user cluster itself is not yet supported as a resource that you can manage through YAML or kubectl. It can only be managed through the KKP API directly, but this is about to change in one of the early upcoming KKP releases. Now I would like to mention a kind of inception that you can do with this platform: first we created, in an automated way, the management master cluster with KKP, and from this KKP you can create other Kubernetes clusters and install the GitOps tool on them again, which will again automatically deliver the resources or your applications to these user clusters. In the community repository we have an example of a Flux KKP addon as well as one for Argo CD, so if you have a preference for Argo CD, you can of course use it in this way as well. Next to that, if you are a programmer, you may not like going through the wizard in the browser over and over again if you do this multiple times. Everything can of course be managed and controlled through the API: the wizard itself running on start.kubermatic.com has an API. Here I added a little disclaimer: I will be happy when we provide an OpenAPI definition for this API, so that you can easily get an example of how the structure of the payload should look. But then you simply run a single API call and receive the zip file with everything preconfigured, so this is also possible with the project. Now, I believe you are interested in the security aspect. As I already mentioned, one of the main goals was to really put 100% of all of the
sensitive configuration and everything in the repository, and to avoid the mixture where we manage 80% declaratively, in the GitOps way, and have to do the remaining 20% manually; we wanted to truly avoid that. For this we have chosen SOPS, which can be used with different encryption backends. Out of the box we are using age, a Go binary used for encryption and decryption with a key pair of a secret key and a public key, but you can also configure SOPS to use Vault or PGP keys and so on. A very nice feature of SOPS, as you can see from this GIF example taken from the public repository, is that the files are still human readable and only the sensitive values are encrypted. If you imagine some JSON or YAML file, you encrypt only specific parts of the file and the rest stays human readable. You can use the SOPS tool locally to update the values, to see the actual values, and to decrypt or encrypt the file; we also provide some cheat-sheet documentation on how to use it properly. And this is an example of a file that we generate as part of the archive you download; it is called secrets. At the same time we put this file in .gitignore directly, because we do not want you to commit it to the repository; it is only meant to be used by the person doing the setup. Here we provide the information about the secret key that can be used for decryption of the sensitive values, and we also generate a random password that is used for accessing the KKP dashboard. Again, this should not be exposed anywhere in the repository; it is available purely for the purpose of the repository preparation, and you should never ever commit this kind of file to a repository. Here is another example.
As I mentioned, only the person who downloads the bundle will have access to the age key, effectively the secret key that is used. But the key has to be used by the pipeline as well, so we guide you through the steps to set up your Git repository to make this secret available; in case you are doing this locally, it is never exposed anywhere and the secret key stays only on your machine. The second place that needs it is the Kubernetes cluster where Flux is configured. Here you can see the example of the SOPS Kustomization that we deliver; this definition is created automatically, and it says that for this path there is a decryption provider, sops, pointing to a Secret with a given name. In the steps and instructions that we deliver, we give you guidance on how to properly create this Secret so that the secret key is available to Flux on the Kubernetes cluster. On the right side you can see an example of the Preset. As I mentioned before, the Preset is usually a set of pre-configured cloud credentials used for provisioning the user cluster. This is an example of an AWS Preset, which accepts the access key ID and secret access key. Next to that there is also the VPC ID that is used for the deployment; you can see that the VPC ID is a plain-text value, as there is nothing secret about it, but in case it is secret for you, you can of course configure it to be encrypted as well. The two specific values are encrypted, and only the public key is mentioned here. You can also see some additional SOPS metadata: there are a couple of ways to configure the backends, but as already mentioned, we are using age.
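The behaviour described here — encrypting only specific fields with an age backend — is typically driven by a .sops.yaml file in the repository root. A sketch might look like this; the field regex and the public key are placeholders:

```yaml
# .sops.yaml -- illustrative SOPS creation rules
creation_rules:
  - path_regex: .*\.yaml$
    # only fields matching this regex are encrypted;
    # everything else stays human readable
    encrypted_regex: ^(accessKeyID|secretAccessKey|password)$
    age: age1examplepublickeyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```

Running `sops --encrypt` against a matching file then rewrites just those fields in place, which is why the Preset stays reviewable in Git while its credentials remain protected.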
And there is, for example, also the regex that was used for the encryption of this file, so that you can control which fields are encrypted and which are not. So now it is demo time, and I will be happy to demonstrate live how it looks. I have prepared a GitLab scenario: I have a live repository that I prepared in advance, and I have also run the pipeline already, because it takes about 15 to 20 minutes to spin everything up and I wanted to avoid that during the live demo. But anyway, I will still show you the wizard live. Here you can see the six steps I was talking about, so now we will try to go through them. First of all, you select a Git provider. You can notice that if I choose GitHub, I have only AWS and Google Cloud available. This is because of the limitation that we need to store the Terraform state somewhere, and for these providers we directly utilize an S3 or Google Cloud Storage bucket. But if you pick GitLab, we directly utilize the GitLab feature that supports storing the Terraform state in your project or repository, so with GitLab all of the currently supported cloud providers are available. So let's pick Google, or maybe AWS; it doesn't really matter for the demo. Here you can see that you can define your own cluster name, which is used as the prefix for most of the cloud resources being created. You can select the version that will be used for provisioning, select the container runtime, and enable or disable the autoscaler add-on on the cluster. Here we are saying that we will let the autoscaler go from one up to 10 workers. Here we define the region, and which instance type is used for the workers. And there is also one optional step.
I've already mentioned that for AWS we also have a module to automatically register the DNS, but let's skip that in the demo for now. The next step is the setup of KKP. Here you can choose which KKP version you would like to use, and here you define the endpoint; it can be something like this, a kind of common domain where we can easily go for a new subdomain. The next one is the issuer, because out of the box we are creating a cluster issuer for the certificates, and for that we need some email which gets the notifications about potential expirations and so on. We are also using OAuth2 Proxy for the authentication, and we can, for example, control that only users with a given domain will be able to access monitoring services like Grafana, Alertmanager and so on. But for sure there is a very large set of options that can be used; this is just a very small example of how it can be configured.

Right now I'm in the step of the KKP bootstrap. So let's name the project, pick a name for the datacenter behind it, let's use a sample value, and give it the region. Here I won't be providing real values right now, but this is where you would generate credentials; I would for sure recommend having a separate IAM user in AWS and generating the access keys for it that will be used for provisioning the user cluster. You just enter the values, and out of the box they will be encrypted inside the preset resource in your repository.

Right now we are in the last step, which is effectively the summary and recap of all of the inputs you have provided. As you can imagine, based on what you have seen, there are a lot of options, some conditional steps and so on, so here you should really see all of the options and inputs that you provided.
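The encrypted preset ends up in the repository looking roughly like the sketch below. Field names follow the KKP Preset CRD, but all values, the preset name, and the regex are placeholders for illustration; the `ENC[...]` strings and the `sops` metadata block are what SOPS leaves behind after encrypting only the fields matched by the regex:

```yaml
apiVersion: kubermatic.k8c.io/v1
kind: Preset
metadata:
  name: aws-preset                # illustrative name
spec:
  aws:
    accessKeyID: ENC[AES256_GCM,data:...,type:str]      # encrypted by SOPS
    secretAccessKey: ENC[AES256_GCM,data:...,type:str]  # encrypted by SOPS
    vpcID: vpc-0123456789abcdef0                        # plain text; not matched by the regex
sops:                             # SOPS metadata (truncated)
  age:
    - recipient: age1...          # public key only; the private key never enters the repo
  encrypted_regex: ^(accessKeyID|secretAccessKey)$
```

Anyone with the repository can read the VPC ID, but decrypting the two credential fields requires the age private key held by the pipeline (or your machine).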
As soon as you click Generate, it downloads the archive, and you effectively start becoming the automation superhero. The next step is to just unzip the archive; what's inside, I will demonstrate directly on the live repository.

So here, as the next step, I have the repository prepared. For this demo I decided to use GitLab in combination with Google Cloud, but there is already a huge matrix of combinations, so you can make your own and try it with your favorite Git provider and cloud provider. Here I just wanted to show you, live, what you get out of the box. First of all, there is the GitLab CI definition. It's not that complex, it's only about 200 lines, and it's split into five stages. I can show you the real pipeline that I executed a couple of hours back. These are the stages, and they match the image and the flow I was describing at the beginning: the cloud resources are created with Terraform, the Kubernetes cluster is provisioned with KubeOne and KKP is installed, and then Flux is bootstrapped. That's it, nothing else; that is the responsibility of the pipeline.

But we are also providing all of these configuration files pre-configured. Here is, for example, the KubeOne configuration, where you can see the Kubernetes configuration. All of the values that we provided in the wizard are defined somewhere in the specific configuration files, and you can see that the values that are in some way sensitive are encrypted using SOPS. It is very, very important that we do not expose anything in plain text in the repository. Here is another example, the Flux directory structure: this corresponds to the namespaces where the resources are delivered, and here we have some examples, for example the Seed definition. Again, these were some of the values I was providing in the wizard.
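The pipeline's flow can be sketched as a `.gitlab-ci.yml` along these lines. The stage and job names here are illustrative, not the exact ones generated, but they follow the five-stage flow just described:

```yaml
stages:
  - terraform          # create the cloud resources (VMs, load balancer, network)
  - kubeone            # provision the HA Kubernetes cluster with KubeOne
  - kkp-install        # install the Kubermatic Kubernetes Platform
  - flux-bootstrap     # bootstrap Flux so GitOps takes over from here
  - cleanup            # optional teardown jobs

provision-infra:
  stage: terraform
  script:
    # GitLab's built-in Terraform state backend keeps the state in the project
    - terraform -chdir=terraform init
    - terraform -chdir=terraform apply -auto-approve
```

Once the last stage finishes, the pipeline's job is done; every subsequent change flows through Flux reconciling the repository, not through re-running the pipeline.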
Then we generate the project, do some binding with the user, and so on. So effectively, the result is, let me switch to the other tab, that after you set up the DNS record you have the KKP platform up and running. You are the admin user, so you can go through all of the settings that are available in KKP, but mainly you can start creating clusters out of the box.

Because we wanted to keep it simple, while at the same time you can extend it, and because I picked Google, my seed is also configured for the Google provider. So right now I can set up a new cluster on Google: I can pick my favorite network plugin, I can generate a name and pick a version, I can decide which users will have SSH access, and I can define labels and set up some additional features on the cluster. So, it generated a nice name. And here I have the selection of the KKP preset, or I can provide the credentials directly; for Google, to be able to provision anything, you need to create a service account first, so here you effectively provide the service account, which is the base64-encoded value of the JSON key. After that you are able to create the cluster.

It takes some time, so I already created a cluster in advance. This is an example of the user cluster that was created. Just to recap the concept: the control plane components are running on the seed cluster, and you can separate this; you can decide to have a seed cluster per region or per cloud provider, so that, for example, all of your AWS clusters run their control plane components on a single seed cluster. This is an example of a user cluster that has one machine deployment, which effectively represents a Kubernetes cluster that has one worker node of a specific type, which, let me find it quickly. Okay, here it is.
This is the machine type: n1-standard-2. Anyway, on this cluster you can now decide to scale the replicas of this machine deployment, so effectively you get more workers. You can also set up the autoscaler add-on for the user cluster, or you can set up the add-on for GitOps, so that your applications will be delivered here in an automated way.

But let me do another quick demo. I have prepared a pull request on top of my repository, and inside this pull request I have some commits. The first commit creates a cluster template; as you can see, right now I don't have any cluster template, and I'm only able to create clusters on Google Cloud. In my pull request I'm also adding support for AWS, defining more datacenters under the Seed resource and enabling deployment in the US and also in France. Next to that, I'm also adding some more regions for Google, which should enable Google in Finland and India. I just wanted to demonstrate the concept of GitOps, so let's consider that somebody did a review, and I will just blindly merge it for now; for sure, this should follow whatever regular review process you are used to. So I'm going to click Merge.

In this terminal, I have already configured access to the Kubernetes API, and I can have a look at the Kustomizations that are on this cluster. Let me check what the commit actually is. Okay, so the reconciliation already happened: as you can see, this commit already matches the merge commit I did just now, and it's already reconciled. It means the change was already applied to my cluster by Flux. Here at the top is just another example: these are the three machine deployments that were created automatically, and because I have the autoscaler configured, it has already done some work and scaled the nodes across the regions.
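A machine deployment of the kind shown above can be sketched like this. The API group and the autoscaler annotations follow what machine-controller and the cluster-autoscaler integration use, but the names, zone, and bounds here are illustrative placeholders:

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: workers-europe-west3      # illustrative name
  namespace: kube-system
  annotations:
    # bounds the autoscaler respects for this worker pool
    cluster.k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.k8s.io/cluster-api-autoscaler-node-group-max-size: "10"
spec:
  replicas: 1                     # bump this (or let the autoscaler) to get more workers
  template:
    spec:
      providerSpec:
        value:
          cloudProvider: gce
          cloudProviderSpec:
            machineType: n1-standard-2
            zone: europe-west3-a
```

Because this is just another declarative resource, scaling a worker pool is a one-line change in Git that Flux then reconciles, which is exactly the GitOps loop being demonstrated.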
So effectively, right now I have seven workers available in my cluster to run the workload. And I can try to see where my changes were applied. I was creating the cluster template, and you can see that the cluster template is now available over here, so I can quickly provision a cluster out of this template. You can see the details of the configuration: which options I have enabled and how the cluster should look. We can try it out, so right now another cluster will be started, but it will take some time; maybe it will be super quick, but we don't have to wait for it. Anyway, the second change I made was the enablement of the AWS provider by updating the Seed resource, and that was applied by Flux as well. So right now you can see that I'm able to set up an AWS cluster in Paris and in the US, and for Google I'm also able to use the Finland and India regions.

So that's about it; that's the very quick example I wanted to demonstrate: how you can keep everything up to date and in sync by using the automated pipeline and tools like Flux on your repository, and with that really manage all of the resources and all of the configuration of the KKP platform.

I will provide all of the links at the end, so you can really go through it; the repositories I created are public, so you can just look around. But I also wanted to mention the very detailed documentation we have created. If you go to the Kubermatic documentation and then to KKP, there is a quick link called "Start with KKP" that will guide you to the documentation pages where we explain all of the concepts and how it works, with very detailed documentation of the wizard and also the steps for setting up your repository to be able to run it. Next to that, there are also some troubleshooting cheat sheets: how to get access to the cluster, how to validate readiness, how to work with SOPS, and so on.
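The Seed change merged in the pull request above can be sketched roughly like this. The structure follows the KKP Seed CRD, but the datacenter keys, names, and regions are illustrative:

```yaml
apiVersion: kubermatic.k8c.io/v1
kind: Seed
metadata:
  name: kubermatic
  namespace: kubermatic
spec:
  datacenters:
    aws-eu-west-3:              # new AWS datacenter: Paris
      country: FR
      location: Paris
      spec:
        aws:
          region: eu-west-3
    gce-europe-north1:          # new Google datacenter: Finland
      country: FI
      location: Finland
      spec:
        gcp:
          region: europe-north1
```

Merging a file like this is all it takes: Flux reconciles the Seed, and the new providers and regions show up in the cluster creation wizard without anyone touching the cluster by hand.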
So with that, let me switch back to the presentation. Here comes the supporting slide: really, try it out. If you are interested in what you have just seen, try it on your own. The wizard at start.kubermatic.com is a public endpoint that you can use, and with it you can try to build the very same infrastructure using the bunch of CNCF tools that were mentioned. In case you have any questions, there is a community Slack channel called start-kubermatic where you can ask them, or feel free to reach us through the general contact form, or reach out to me directly, or to anybody from Kubermatic you have in your network; they will for sure be very happy to help you with your questions as well.

This is one of the last slides, where I just wanted to give kudos to the people who participated in the project. It started as my idea, and from Sasha I received the support; we then got a bunch of experienced engineers from the company who participated in the development, both on the API and the UI part. So I would like to thank Marco, Sasha, Martin and Sebastian, and also the ladies who are helping us with a couple of UI and UX things.

We are at the very end of the webinar. I hope you have enjoyed it; it was a pleasure talking to you, at least this way, virtually. Here you can see a couple of links: to the project itself, to the demo repositories on GitLab and GitHub, and to the documentation. Thank you very much for watching until the very end, and have a great day. See you, bye bye.