Okay, so welcome to this lightning talk, "Platform Engineering Done Right." My name is Juozas, I'm a DX engineer at Weaveworks. I was hoping to present in person, but I couldn't make it at the last moment, so I made a video demo; I hope you can see it. The title is a bit misleading, because there isn't really a single fixed way of doing platform engineering right. It depends on your team, your organization, and your project's needs, and I can only cover a very small fraction of this topic. So I'll just show you how to use kind and vCluster to set up a multi-cluster management cluster, and how to set it up so that developers have self-service capabilities. Self-service means they can make pull requests that will create clusters for them, and then additional integrations will give them ways to log in and so on, which we can also cover. Right now I'm just installing the prerequisites to set up vCluster on kind, and you can see there are no vClusters yet. So what we're going to do is create a vCluster, at first using the clusterctl command with some parameters. The whole structure of the demo is: first I do something manually, then I automate it, then I put it into a GitOps way of automating it, so we end up with a full GitOps structure. If you run clusterctl generate by itself, it will not create the cluster, but it will export the manifests, and these manifests will create the cluster. That's where we want to start. Here's our cluster that will run on kind as a namespace. It is restricted to a namespace, and when you're connected to the vCluster, you cannot see anybody else or their clusters. There's a problem with the generator: it produces some wrong values, so I replaced them with empty strings, which seems to work. Still no vClusters. We will apply the manifests and we will see some clusters come up. Okay, now it's going to take a minute to load.
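The manual step described above looks roughly like this. This is a sketch based on the Cluster API vcluster provider quickstart; the cluster name is a placeholder, and the empty `HELM_VALUES` is the workaround for the generator's wrong values mentioned in the talk:

```sh
# Assumed sketch: export a vCluster manifest with clusterctl, then apply it.
export CLUSTER_NAME=cluster-00          # placeholder name
export CLUSTER_NAMESPACE=cluster-00
export KUBERNETES_VERSION=1.26.0
export HELM_VALUES=""                   # empty string works around bad generated values

kubectl create namespace ${CLUSTER_NAMESPACE}
# generate does NOT create the cluster -- it only exports the manifests
clusterctl generate cluster ${CLUSTER_NAME} \
  --infrastructure vcluster \
  --kubernetes-version ${KUBERNETES_VERSION} \
  --target-namespace ${CLUSTER_NAMESPACE} > cluster-00.yaml

# applying the exported manifest is what actually creates the vCluster
kubectl apply -f cluster-00.yaml
```

Because the vCluster lives inside a namespace of the kind cluster, deleting that namespace tears the whole tenant cluster down again.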
So now we have a way of turning manifests into clusters, but obviously we want to automate it so we don't have to run these commands every time; we want Flux to manage this. When the cluster is created, you can see that you can connect to it, and you are restricted entirely to your namespace and to your resources in that vCluster. The approach should work for EKS or for any Kubernetes system that supports Cluster API: if there is a way to declare clusters, then this is a good way to manage them. Okay, so we're going to create a Kustomization, which will be watching our cluster YAMLs and applying them to the cluster. First, I'll just install Flux, because we want to start very simple and show that Flux can create clusters. Now we're going to create some Flux resources: a Git source that will watch this repository, called flux-multi-demo, and a Kustomization that applies the cluster YAMLs. I'm using a VS Code extension, which is one of our offerings, and we can visualize clusters, Kustomizations, and all the resources. Later we'll be working with Helm as well. There's a problem though: there's no such file, because we forgot to commit. I will add the file, push it to Git, and reconcile; Flux will see it, and then we'll have Flux creating clusters from manifests in the repo. Reconcile, reconcile. Okay, now there's a cluster again, and this time Flux is in charge of it. This is still not the right way of doing it, but it shows you the building blocks of Flux. We want to bootstrap. What bootstrap will do is make Flux manage itself. We also want to apply the same pattern to other manifests: we want a single sync Kustomization that will manage this whole system for managing clusters. So I'll just copy the GitOps Toolkit sync from Flux and apply that to the infrastructure.
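The Flux resources mentioned above, a Git source plus a Kustomization watching the cluster YAMLs, might look like this. This is a hedged sketch: the repository URL, branch, and `./clusters` path are assumptions, not taken from the demo repo:

```yaml
# Assumed sketch: Flux watches the repo and applies everything under ./clusters.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-multi-demo
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/flux-multi-demo   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: clusters
  namespace: flux-system
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository
    name: flux-multi-demo
  path: ./clusters
  prune: true   # deleting a cluster YAML from Git deletes the cluster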
In this case, "infrastructure" refers to the code and the architecture for managing multiple clusters from this control plane cluster. It doesn't refer to all the clusters as infrastructure, just this particular task. So yeah, just sync. Okay, so now we will not have to create Kustomizations anymore; everything will be managed by Flux. This will prove useful later, because it makes it very easy to delete and rebuild the whole cluster. It's especially fast on kind, but on another provider it's also valuable to be able to rebuild not just the tenant clusters, but the control cluster and everything. One thing I didn't get to talk about is repo structure. I tried to do some research, watch different talks, and read what people are doing, and there are multiple approaches to organizing your control plane repositories and your tenant repositories. In this case we're not even talking about tenant repositories, it's all control plane, but there are still several ways of doing it. I can't promise this is the best way, but it makes sense and it's based on practices I found that seemed reasonable. Okay, so now we have a second thing watching, which is clusters.yaml, and this particular Kustomization is what will be applying our cluster manifests to the control cluster. Forgive my video editing, it's not very snappy. But now each thing is independently synced: you can try to delete things and so on, but it should all come back. Here's our infrastructure. I am referring to the same repository twice, once as the flux-system repository and once as flux-multi-demo. The reason for that practice is I found that for debugging, sometimes it's better to have multiple resources that refer to the same thing, because if there are errors you can trace which of the different things being done with the same repository caused them.
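The bootstrap step that makes Flux manage itself can be sketched like this, assuming GitHub and the flux-multi-demo repository; the owner and path are placeholders:

```sh
# Assumed sketch: Flux installs itself, commits its own manifests to Git,
# and from then on syncs the control cluster from the repository.
flux bootstrap github \
  --owner=<github-user-or-org> \
  --repository=flux-multi-demo \
  --branch=main \
  --path=clusters/management \
  --personal
```

After this, rebuilding the whole control plane is just `kind delete cluster`, `kind create cluster`, and running the same bootstrap command again, which is why the fully GitOps setup pays off.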
Later on, that also makes it much easier if you want to separate just the cluster definitions from the logic of managing clusters: you can put them into separate repos, because there are already two GitRepository objects. Okay, so far so good. I think I skipped over something: there was supposed to be an example of creating a pull request to add a cluster. But the point of that example is that it was pretty complicated: you had to copy the whole definition, a whole manifest, make sure you make no mistakes, not forget the namespace, and so on. That is not good self-service; we don't want somebody to have to understand all that stuff just to get a cluster. So what we'll do instead is take Helm and create a Helm chart that will be used as a template. We will provide values to the Helm chart, and from these values it will generate the cluster manifests. As for the Helm repository, because it's a simple Helm chart, we'll just keep it in our Git repository under infrastructure. Under clusters, we will delete the individual cluster definitions and replace them with a HelmRelease that provides all the settings to the Helm chart for creating all the clusters. There were suggestions to use a ConfigMap for that instead of a HelmRelease, or to have a HelmRelease and then another layer of indirection with a ConfigMap, but I tried that and found issues with nested values. I'm not very good at Helm, so maybe there's a better way of doing it, but the simple, cheap way is just to make a HelmRelease. And now, if I'm a self-service user, these are the only lines I need to add or edit: when I add another cluster, I make a pull request adding cluster-04, and that's all I need to know. Later on we can put other settings in here; for example, since it's multi-cluster and multi-tenant, we can add a list of tenants for each cluster in this one place. And the approval process is simple.
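The HelmRelease described above could look roughly like this. This is a sketch under assumptions: the chart path, names, and values schema are invented for illustration, and the per-cluster list is the part a self-service user edits:

```yaml
# Assumed sketch: one HelmRelease feeds values into a local chart
# that templates out a Cluster API manifest per entry.
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: clusters
  namespace: flux-system
spec:
  interval: 5m
  chart:
    spec:
      chart: ./infrastructure/charts/cluster-template   # chart kept in the same Git repo
      sourceRef:
        kind: GitRepository
        name: flux-multi-demo
  values:
    clusters:
      - name: cluster-01
      - name: cluster-02
      - name: cluster-03
      - name: cluster-04   # a self-service PR adds only this line
```

Inside the chart, a `{{- range .Values.clusters }}` loop over this list would emit one namespaced Cluster/VCluster manifest per entry, so the user never touches the full Cluster API definition.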
There's a platform team that has full administrator rights, and someone makes a very simple pull request, and you can say, okay, this works fine. So now we have a HelmRelease for the clusters. Unfortunately, though, it does not see all the stuff that we gave it before, because it's not tracking it; only the new clusters are tracked by the HelmRelease. We could try to fix that and adopt all of it, but because everything is GitOps, we're just going to delete everything and rebuild, and then it should work. Delete kind, create kind, and then we enable the clusters again. Helm is not necessarily the best templating language to use in this situation either; there are other ways of doing it, and maybe no single perfect one, but it demonstrates the principle that you can have a template set up and provide values that are easy for people to add. Okay, so now we have just these four clusters, and we have values that provide each cluster; it should say cluster-00. But what we're seeing here, when we go to our list, is that now we have all the clusters right there, all of them managed by this Kustomization, which manages this HelmRelease. So that's just a fraction of what was promised in the abstract. You can imagine that if you want to have tenants, you do the same thing: you have a folder for tenants, you start out putting manifests in there that directly create the tenants, and then you iterate to make it more GitOps. Later on you can take your tenants and your clusters and use another templating mechanism to associate tenants with clusters. You'll have to worry about security, RBAC, authentication, and very many things that I could not possibly cover, not in 15 minutes anyway; maybe in a few days. So, Juozas is live. If anyone has any questions, we can answer questions. Kingdon can also answer questions. I am also on the spot for questions. Any questions? I don't hear any questions. You all just can't hear us. All right, thank you everyone.
Thank you.