Hello everyone, welcome to today's CNCF webinar, "K8s Cluster Vending Machine". I'm really excited about this one, because I worked on the K8s cluster vending machine back at the beginning of 2022, with the CNCF projects that were available at that time. Everything was still very new, and the tools did not fit well with each other, because they were still under development and needed to figure out how to work together. Now I'm very happy to present a new version of my K8s cluster vending machine, and I'm really excited to share this with you folks and see what you think about it.

Just one thing: I have a new setup, so there's the light here, and I'm using my smartphone for recording. Hopefully the quality is a little bit better, but some small technical issues could happen, so I'm sorry in advance. I want to do this in one take, I'm not going to cut anything, so there may be some issues during the demonstration, but let's hope the demo gods are happy with me and everything works fine. Let me share my screen, remove the stuff here, and share the slides. Just a second. Okay.

So, as I said, "K8s Cluster Vending Machine" is the title, and what I mean by a K8s cluster vending machine I will explain in more detail. Maybe you visited the first talk on this a couple of years ago and can already guess what's going on here, but if not, stay tuned, I'm going to explain it.

Shortly about me: my name is Engin Diri, I'm working at Pulumi as a Customer Experience Architect. I love cloud transformation and cloud enablement, I've been doing this stuff for many, many years, working before in big enterprises here in Germany, as you can guess from my accent, and I really love to keep going. CI/CD is my thing; I started out using CruiseControl, so you can guess where I started from. If you look at where the tools are now, everything is so much better, but there's still room for
improvements. Underneath you see my socials, feel free to follow me if you want. There's also my GitHub account if you want to see what kind of projects I'm working on or which customer issues I'm trying to solve.

What you see now is a typical situation when you work with different cloud providers. You have AWS, you have Azure, you have Google, and everybody has big, big service catalogs. You sit there and think: okay, what should I use now, what is the best service? And even if I know what the best service is, how do I connect them? Somewhere you have to create virtual networks, subnets, firewalls, and so on. The number of combinations is big, and the confusion is also big, as you can guess. So what could be a possible solution? We're not talking about the solution yet, but the confusion is big, and I want to show this awesome GIF. That's how I feel, and that's how everybody working with different cloud providers sometimes feels: why is it not working? I mean, it worked, then you change something and it's not working again. There's a new update of a specific service and you need to find out what's going on when you want to update or create new infrastructure.

So you probably guessed it, and you've probably heard this, because platform engineering is now a topic. If you're on social media or going to conferences, you'll see that platform engineering is the big thing everybody is talking about. The maturity level is already at a point where we have plenty of platform offerings you can just install, and you get the best of breed, the best selection of tools out there, already assembled for you. So platform engineering is a very big topic, and this year at KubeCon Paris, platform engineering again gets its own track and a whole dedicated day on the 19th. I'm going to speak there, I'll have a lightning talk, so look out for me and feel free to join that one. But enough about me, let's
talk about platform engineering, to put you into context and say where we are with this talk and what I'm going to cover. If we look at the platform capabilities and the platform interface map here: on top we have documentation, project templates, graphical user interface, API, and CLI. That's one part. Then of course the capabilities themselves, and underneath the infrastructure providers and the platform capability providers.

A good platform of course does what the customer needs, what the users are asking for, because at the end of the day that's why we build it, that's why we provide it as platform engineers. You should discuss all the offerings and everything you want to create with your users. These are conditions that are really independent of the actual solution you choose, and they're very important: do what the customers want, and talk with the customers, work together. Everybody's talking about this: we need collaboration in platform engineering, and treating your infrastructure as a service is a good way to interact and discuss with your users, the internal teams for example.

So where are we with this talk? I'm going to cover project templates, a graphical user interface, and of course documentation. We'll also talk about the way to provide environments and infrastructure resources for our customers, our internal users. That's how we set the scene for this talk, so let's dive into it.

Before I go further: there is a reference implementation I'm currently working on, so stay tuned. There's going to be a talk soon called "Gain Platform Superpowers with the Kebab Stack". I'm very happy about the acronym I created here, so stay tuned for more; at the moment I'm not going to spoil it. But it's a good thing: if you've already followed me for a while and watched some of the other webinars I gave, or some of
the talks I gave, you can guess it: some of the pieces are now working together, and this talk, together with another one, is the foundation for it. Everything keeps evolving, and it's going to be a nice reference implementation. For now I'm going to keep my secret, but in April at GitOpsCon in Seattle I'm going to talk about the kebab stack.

As I mentioned in the introduction: in January 2022, so really nearly two years ago, I gave this talk in an older version. Here it is again: "Kubernetes Cluster Vending Machine with vCluster". I already emphasized one of the tools in the title. At that point the focus for me was really on providing a Kubernetes cluster, without the connection down to some cloud infrastructure. I just wanted to show how we can provide Kubernetes clusters, virtual Kubernetes clusters with vCluster, to the customers. I'm not diving into vCluster itself here; there are already many talks covering how vCluster works. There are already new distributions: you now have an EKS distribution, a k0s distribution, the k3s distribution, and the vanilla k8s distribution. This was all not available at the beginning; when I started, I think it was only k3s and vanilla k8s. Again, the solutions are much better now, and I think the people from Loft also offer a Pro version where you get much more functionality.

So that was the initial talk; let me put this away. There is the QR code where you can check the documentation. I wrote a blog article about it, and there's a YouTube video, I put the link here too, so feel free to watch it. And don't be surprised: it was 2022 and everything was at the beginning, so it was not really one of my best talks, but still, it's the content
which is important, so feel free to follow up on it if you haven't. And of course a shout-out here as well; feel free to follow him on Twitter, he's really doing some nice content all around Kubernetes and cloud, and he's very engaged in platform engineering too.

Okay, the quintessence of the first version of the vending machine: I used Argo CD as the central orchestration system, as the delivery mechanism. If I ordered something, say I needed a new virtual cluster for my different teams, I went into my Git repository, created a new resource of type Application, and opened a pull request. Then, depending again on your maturity level and the way you want to work, the platform team, the owners of the platform, could review and approve the pull request. Argo CD detected the change, deployed it, and created dedicated virtual Kubernetes clusters for the different teams, whatever topology you have, for them to deploy their applications on. That was the quintessence: very basic, but it was interesting, and it really worked fine.

But now we have evolved. We understand better how things work and how we can increase the automation, and Argo CD itself has really gained new functionality. Just to mention one thing: ApplicationSets, for example, are now really mature and built into Argo CD. Before, it was its own controller you had to install separately if you wanted to use ApplicationSets; now everything works really fine. So we now have the tools to create an even better experience for our users.

And another thing joined: Backstage. Yes, Backstage was known back then too, but Backstage has become even more important if you want to provide a user interface with a big framework behind it, where I can just create user stories, where I can
create software templates for my users to consume. I can create a catalog for all the projects I'm working in, so discoverability is built into Backstage, as well as the way to self-serve your infrastructure. Backstage really changed much of the K8s cluster vending machine approach I had two years ago, where I used, for example, the Argo CD UI and GitHub, or your Git repository, as the user interface. As you can guess, not everybody knows how to use GitHub, not everybody knows how to use the Argo CD UI, and maybe you don't want to give people access to the Argo CD UI at all. There are many reasons why the first version had its flaws and why having Backstage as a solution really helps here: it creates this interface without me having to build the whole engine, because Backstage, created by the good folks at Spotify, already contains everything. I can just adapt it and create my software templates, or my golden paths, as we call them.

If you want to know more: as I mentioned, I already gave a talk about this called "Enabling Self-Service Infrastructure on Any Cloud with Backstage and Pulumi". There I discussed how to configure Pulumi as the infrastructure-as-code solution with Backstage and then execute your software templates, so Pulumi can roll out your infrastructure. You see, now we made the jump to infrastructure: before, we were only able to create virtual clusters against the Kubernetes API, and now we're also able to define and deploy infrastructure on different cloud providers, which is really cool. That's one talk.

The second talk is "How to Start Building a Self-Service Infrastructure Platform on Kubernetes". There a new piece entered the stack: in that talk I used Flux as the GitOps engine, while here I'm using Argo CD again. But now we could combine Backstage,
Pulumi, and Kubernetes using Flux, which was also very awesome, so go watch that talk if you want to see that solution. And now, as I mentioned, you see the different pieces working together: we discovered one way, we discovered Pulumi plus Backstage, and we discovered how to add the GitOps engine back in, which was also very interesting. And as a last point: yeah, go watch those, it's going to be awesome.

Now a little introduction to Pulumi. You may have heard about Pulumi or not, and I think it's very important for understanding the solution I created here, because I use Pulumi as the infrastructure-as-code tool not only to provide a self-service user story for the customer, but also to bootstrap everything for me. The architecture I created here also needs to be deployed on a cloud provider; for example, the GitOps Kubernetes cluster needs to be provisioned, and all of that is done via Pulumi.

So what is Pulumi? Pulumi is a modern infrastructure-as-code tool where you can use general-purpose programming languages, which is a very cool capability. Pulumi currently supports Go, TypeScript, JavaScript, .NET, Java, and Python, so every common general-purpose programming language is supported. If you're already used to building your application projects or your web services in, for example, Python or TypeScript, you can just continue working and define your infrastructure in Python too. And, as you can guess, you can use all the mechanisms of your programming language: creating abstractions, loops, conditions, and then deploy your infrastructure. Pulumi is a declarative approach at the end: you use an imperative programming language to create a declarative state, the state gets stored, and that's really awesome.
It works with any major cloud provider, and, which is also very important, it's completely open source, so you can use Pulumi with no credit card needed. If you want to host your own state file, you can do that: create an S3 bucket and upload your state files there. Or you can use the free tier of our managed service; it's a generous free tier where you can store plenty of resources, and we take care of it for you.

Let me continue. These are the three columns I divide Pulumi into. First, the build part, where we support the different programming languages. One language is missing on this slide: YAML. YAML is also supported, next to every other language I just mentioned. It all embeds fine into your IDE: you don't need to change your IDE, you keep what you're used to, and you can consume Pulumi packages with the package system of your programming language, or share Pulumi programs using that package system, for example by creating an npm package and sharing it. That's very awesome.

In the middle part we have our ecosystem: you can use plenty of different cloud providers, which gives you the flexibility and power to configure and create bigger stories, for example if your next journey is to provision a multi-cloud or multi-service setup. That's also cool.

The right part, and this is the last thing I'm going to explain, is integrations: CI systems, for example, everything where you can integrate Pulumi. Pulumi is not dictating what you have to use; you can just continue using what you're already using. It's really up to you; it's a building block you can use in your existing journey. By the way, Codefresh and Octopus Deploy are both still on this slide; as we know, they have now merged, so we should update the slide.
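Since YAML just came up as a supported language: a minimal, complete Pulumi YAML program for an S3 bucket, the same example shown in the other languages on the following slides, looks roughly like this (the project name is illustrative):

```yaml
# Pulumi.yaml — an entire Pulumi program in one file
name: s3-demo          # illustrative project name
runtime: yaml
resources:
  myBucket:
    type: aws:s3:Bucket    # same resource as in the Go/TypeScript/Python examples
outputs:
  bucketName: ${myBucket.id}   # export the bucket ID, like the TypeScript example does
```

Running `pulumi up` in a folder with this file would provision the bucket, with no loops or abstractions needed for the simple case.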
Okay, how does Pulumi code look in a very vanilla form? I created three examples; I didn't want to cover every language. Here is creating an S3 bucket in Go; this is the code you need. No conditions, no loops or anything like that inside, everything is just written down: you have your main function, you import your AWS module, and then you create your S3 bucket with s3.NewBucket. The same code, the same functionality, in TypeScript: again, importing your npm package for AWS, creating the S3 bucket, and then exporting, for example, the bucket ID. And last but not least, as you can see from the wiggly lines underneath, I don't use Python every day in IntelliJ, so I did not configure the interpreter, but this is the way you would do it in Python. Very similar. I like this very much; yes, I may be a little bit biased, but that's awesome.

Okay, let's get to the action, because I don't want to bore you too much with the theoretical part. Let's head over, and I'll give you an introduction to the code. Let me take over here. I have my Excalidraw; before we head over to the actual code, let me draw what we created here.

Everything runs on AWS here. This shape shows AWS as the place where my vending machine lives. You can change this too: you can say, okay, I'm not going to use AWS, I'm going to use Azure. Not a problem. So I have my AWS here, and what did I create on top of it? Let's see. We have our EKS cluster, so I created an EKS cluster, and I like to call it our GitOps infrastructure. This is where all the magic happens.
So we have our EKS cluster, called the GitOps infrastructure, and on this GitOps infrastructure we now install things. Let me grab another shape. We use Pulumi to define this infrastructure; for this case I defined it in the project called 00-infra, and that one deploys the GitOps infrastructure.

Next part: I want to deploy Backstage, but here comes a plot twist. Let's take a container shape. What do we do in our Backstage program? Let me write this down. We create a container image from the Backstage code, because Backstage at the end, as I said, is just a TypeScript project using yarn to build everything, and there's a Dockerfile inside. I'm using Pulumi to create the Docker image, so I'm not calling docker build myself; everything happens inside Pulumi. So: using Pulumi to create a container image from Backstage, and this image gets pushed into the Elastic Container Registry. That's one part.

And what I did not mention yet is this part here: I also deploy the initial Argo CD. Let me put this here: Argo CD. Inside Pulumi, using the Helm Release resource, I deploy Argo CD, and, here comes the key point, with a link to the initial repository, because we would like to handle everything, including Argo CD itself, the GitOps way. So we're just kick-starting things: provisioning some resources we may need, for example for secrets, inside our GitOps infrastructure, deploying Argo CD, and telling Argo CD: now please look into this repository. For this we're using the so-called app-of-apps pattern.

And there we also have the definition of any additional tools. So what happens now? Let me change the color. Via the app-of-apps pattern we deploy additional tools, and now comes the whole stack. We're going to install Kyverno,
and, let me see what I'm missing: Argo CD again, yes, because we want to manage Argo CD itself this way, then Backstage, the metrics server, and the Pulumi Operator. And last but not least, External Secrets. These are all the tools; the one I almost missed is of course vCluster. Okay, those are the tools we're going to install via the app-of-apps pattern.

So, just to give the context: we start with the 00-infra deployment, deploying the EKS cluster and deploying Argo CD pointed at a GitHub repository, where we then install the additional Argo CD applications, including Argo CD itself. In the future we just need to update Argo CD here and keep everything the GitOps way. And when this is finished, because Backstage is not deployed at this point yet, we also build the Backstage image and push it into the container registry.

And here comes the key question: how does the Backstage deployment know about the image and which version to use? Let me take this here: we also deploy the Argo CD Image Updater. Let me get this one: plus the Image Updater. And the Image Updater is using pod identity, a very new feature. So additionally we deploy a pod identity association here, because in the pod identity association we define the namespace of Argo CD and the service account of the Argo CD Image Updater. The Image Updater is then able to log into the Elastic Container Registry and check for new images. The moment we push something, the Image Updater detects that a new image is available and annotates the Application. I did not configure a Git write-back here, just the write-back inside Argo CD, because the image reference is already held there; when I destroy everything and need to redeploy, it will automatically get the latest one. So there is no Git write-back, but of course you could also do a Git write-back to be completely on the safe side.
If something really happens and you need to recreate exactly this specific image version, you could do that too. But in this case, the write-back just goes to Argo CD, annotates the Application with the new tag, and modifies it there. So that's the whole stack.

So how does it look? Before I show that, let's go into the code. As I mentioned, we have different folders here; I created a monorepo, just out of laziness, it's a demonstration. You could split it up more if you want; for example, the Backstage templates and the GitOps repository could each be their own repository, but for demo purposes this is very handy.

So here is my infrastructure, as mentioned. Let's dive into the code. Creating our VPC, I'll just scroll over this a bit, you can read the code, I will share the link to the code. Creating the VPC, creating the Kubernetes cluster with some properties: that's our GitOps infra cluster. And now you see I'm installing an add-on here, the EKS add-on for the pod identity agent, which is very cool. So that one is created. And as I mentioned, I would like a specific service account to be able to connect to the container registry. So I create here the role whose trust policy allows the sts:AssumeRole and sts:TagSession actions for the EKS pods service principal. Yes, the policy here is a little bit broad, everything is a wildcard; you could tighten this and separate the different actions from each other, but again, it's a demonstration. So: create a policy, create a role, attach the policy to the role. And now comes the pod identity association. I say: hey, my EKS cluster, connect this role, and this is the important part, to this namespace and this service account. I don't need to set any annotations on some Helm deployment anymore, as with the older IRSA approach, where I would have to set the specific role ARN on the service account.
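A sketch of that role and its pod identity association in Pulumi YAML, assuming the AWS provider's `aws:eks:PodIdentityAssociation` resource; the resource names and cluster reference are illustrative:

```yaml
resources:
  imageUpdaterRole:
    type: aws:iam:Role
    properties:
      # Trust policy for EKS Pod Identity: the pods service principal
      # may assume this role and tag the session.
      assumeRolePolicy: |
        {
          "Version": "2012-10-17",
          "Statement": [{
            "Effect": "Allow",
            "Principal": { "Service": "pods.eks.amazonaws.com" },
            "Action": ["sts:AssumeRole", "sts:TagSession"]
          }]
        }
  imageUpdaterPodIdentity:
    type: aws:eks:PodIdentityAssociation
    properties:
      clusterName: ${cluster.name}          # illustrative reference to the EKS cluster
      namespace: argocd                     # namespace the Image Updater runs in
      serviceAccount: argocd-image-updater  # service account that receives the role
      roleArn: ${imageUpdaterRole.arn}
```

No annotations on the workload are needed; the association itself binds the role to the namespace and service account.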
And with this approach I can just create the pod identity association, which is really awesome. Then I create a Kubernetes provider programmatically from the kubeconfig JSON. And as you can see, here comes the deployment of Argo CD. I created an abstraction called ArgoCD, which gets some initial objects, as I said: the repository we're pointing at. After this I also provide my Pulumi access token for External Secrets, because I'm going to use Pulumi ESC for secrets management via External Secrets, and that needs an access token. It's a chicken-and-egg situation: somewhere you have to provide a token for the initial setup, and then everything else is handled this way.

So that was the infrastructure deployment. When we look at how I create the Argo CD abstraction: as I said, people don't need to know that you have to create a Helm Release here. And here I create the Argo CD apps, where I read in the initial objects to deploy, and also create the in-cluster secret. That's all.

Now let's head over to our Backstage deployment. This one is even quicker than the other. As I mentioned, I create an ECR repository, do some authentication stuff here, and then build the image. I'm using the Pulumi Command provider's local command here, which just calls yarn install and builds the backend, and then the Docker image resource. That's all, and the image gets pushed automatically to the ECR instance. So this is also done.

Last but not least: everything is installed, Argo CD is running and pointing at the Git repository, where it's now looking into the GitOps folder. When we look at what we install as the initial application: it's called GitOps management, I just called it that, and it says: please deploy my app of apps from inside here.
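Such an app-of-apps root Application looks roughly like this; the repository URL, paths, and exclude pattern are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-management
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-vending-machine.git  # illustrative
    targetRevision: main
    path: gitops/management          # folder holding the child Application YAMLs
    directory:
      recurse: true                  # walk the folder recursively
      exclude: 'excluded/*'          # skip manifests that should not be deployed
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

Every Application manifest that lands in that folder, for example via a merged pull request, then gets deployed automatically.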
Everything you find inside this management folder, please deploy: go recursively through it, and maybe exclude some of the stuff I don't want deployed. So this is the deployment, as you can see here. What is it doing? It goes into the management folder and looks up all the YAML files, so here it picks up the app of apps and deploys additional stuff for us, which is very cool. So we go through GitOps management and deploy the additional tools here. Where is it? There it is again. That's this part.

I also created a folder for apps: I told it, please go into gitops/teams and into the apps folder, there you will find Applications. The same goes for the clusters folder here, and the last one is the vclusters folder. There are already some applications running in it: you can see there are some vClusters, and there are already some applications deployed. Everything that gets pull-requested from now on will land in one of these folders, and the system will pick it up. The same goes for the applications. So this just configures Kyverno, Backstage, and everything else; it's relatively straightforward.

Again, to give you insight into the Image Updater: this is the annotation I set, pointing at my ECR instance, and the update strategy is digest, meaning every time the digest changes, please update the image. And the Helm value it needs to update is backstage.image.tag; as you can see: backstage, image, and then tag. I did not override this here, but this is how it works inside the Helm chart: when there's a new image, the Image Updater updates this value via a parameter override. Okay, that's very awesome.

So this is all done. We've set this up, and we come to the last point: the templates for Backstage. I created three templates, one for vCluster, creating a templatized Argo CD Application.
As you can see, it takes the values here and also creates a catalog-info. The same goes for microservices, for a Helm deployment, if I want to deploy a Helm chart. And if I really want to create a physical cluster, not a virtual one, if I want to go to a cloud provider, I created a template for DigitalOcean here, a DO cluster. As you can see, in the Application I'm pointing at a Git repository folder inside gitops/teams/clusters, with the name of the cluster, and inside we have our Pulumi YAML program. That's also great: I created a Pulumi YAML program, and here you see it using the Pulumi resources. I could also write this in TypeScript and point to a TypeScript repository, but here I wanted to templatize this YAML with Backstage too, so everything is templatized. As you can see, it creates the Kubernetes cluster and takes care that an Argo CD service account is connected, because I want to connect every cluster in a hub-and-spoke topology to my main cluster. So everything goes in here, and at the end of the day I create a specific Kubernetes secret in the host GitOps cluster, the GitOps infra, so that Argo CD can detect it and connect the cluster. Creating a specific secret with the Argo CD cluster label is the way to go.

Okay, and that's maybe something else to mention here: if I create a cluster using vCluster, I created, where is it, management, Kyverno: here is a specific Kyverno policy. vCluster creates a secret with the kubeconfig, and when that secret is detected by Kyverno through the admission controller, it reacts to it and, with a generate rule, creates a secret again, of type cluster, and adds all the information there. These are all the parts we need to automatically build our hub and spoke.
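A sketch of such a Kyverno generate rule, assuming the vCluster kubeconfig secrets carry a `vc-` name prefix; names and the copied fields are illustrative, and the actual policy would extract the server URL and TLS data from the kubeconfig:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: vcluster-to-argocd
spec:
  rules:
    - name: generate-argocd-cluster-secret
      match:
        any:
          - resources:
              kinds: [Secret]
              names: ["vc-*"]        # vCluster kubeconfig secrets (assumed prefix)
      generate:
        apiVersion: v1
        kind: Secret
        name: "cluster-{{request.object.metadata.name}}"
        namespace: argocd
        synchronize: true
        data:
          metadata:
            labels:
              argocd.argoproj.io/secret-type: cluster  # makes Argo CD register it
          stringData:
            name: "{{request.object.metadata.name}}"
            # server and config fields would be derived from the
            # vCluster kubeconfig here; omitted in this sketch
```

The generated secret's label is all Argo CD needs to pick the vCluster up as a new spoke.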
So if we go back here: when everything is created, our user comes along, and, let me see, we take this one. This is Backstage. The user uses Backstage, Backstage then creates a pull request in GitHub with the definition of the new application, and that gets deployed on the cluster using Argo CD. And at the end, where is our Kubernetes icon, depending on what it is, a virtual cluster or a cloud one, Pulumi creates the Kubernetes cluster and attaches, oops, let's do this here, attaches this new cluster in the hub-and-spoke approach to our main Argo CD. I haven't drawn every arrow again here, otherwise the diagram gets too crowded as the clusters become too many, but every new cluster gets attached to it.

Okay, so how does it look? Let's have a look at the implementation; things are running. Let me display everything. These are all the applications deployed. As you can see, we have the app-of-apps pattern here, and these are all the applications I'm deploying: Argo CD, Backstage, the Backstage config, Kyverno, everything is in here. Then also some AppProjects, because we want separation, which is very important: I created a project for every team, the infrastructure has its own project, and you can then separate everything. And in the Argo CD Backstage application, here is the manifest, you can see that it detected the new image tag and updated backstage.image.tag. So this is working too. That's everything for Backstage in Argo CD. And when we look at the clusters: yes, our clusters are also present here, very cool.

So now let's create a new cluster. Let's create a DigitalOcean cluster and a vCluster to see that everything is working. Let's open Backstage. Okay, this is Backstage.
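A minimal sketch of what one of these Backstage software templates could look like, assuming the standard scaffolder actions `fetch:template` and `publish:github:pull-request`; all names, parameters, and the repository URL are illustrative:

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: vcluster-template
  title: Create a vCluster
spec:
  owner: platform-team
  type: service
  parameters:
    - title: Cluster settings
      required: [name, stage, owner]
      properties:
        name:
          type: string
        stage:
          type: string
          enum: [dev, test, prod]
        owner:
          type: string
  steps:
    - id: render
      name: Render Argo CD Application and catalog-info
      action: fetch:template
      input:
        url: ./skeleton        # folder with the templated YAML files
        values:
          name: ${{ parameters.name }}
          stage: ${{ parameters.stage }}
          owner: ${{ parameters.owner }}
    - id: pr
      name: Open pull request
      action: publish:github:pull-request
      input:
        repoUrl: github.com?owner=example&repo=cluster-vending-machine  # illustrative
        branchName: add-vcluster-${{ parameters.name }}
        title: Add vCluster ${{ parameters.name }}
        description: Created from the Backstage vCluster template
```

The pull request step is what keeps the human review in the loop; swapping it for a direct publish action would fully automate the flow.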
We see our templates here, what I just mentioned. We see the teams; I created the teams because we want to use them in our templating. The teams here are also the projects you see in Argo CD, so there's a relation, to have the separation. Now I click on Create, I see my different stories, and I would like to create a second vCluster. I give it a name, my name plus "backend" for the backend team, give it a stage, and set the owner to the development team. Then I can review. Okay, we call it backend, and say review. Okay.

What we're doing now, you could also fully automate if you don't want a pull request, but I create a pull request, and I can see which resources get created: as mentioned, the catalog-info and, the most important part, the Application pointing at the vCluster Helm chart, with some values for testing purposes. I can check this and say: yep, that looks fine, I agree, squash and merge. Go back to Argo CD, and because we don't want to wait, we just hit refresh. And we see the vCluster backend deployment getting deployed. Let's have a look: we can see the backend vCluster starting to run, completely fine.

Now let's look at the secrets. And we can see: from the secret vCluster created, Kyverno automatically created a secret for Argo CD, so that Argo CD detects it as a new spoke. When we go to Settings and then Clusters, we can see the backend cluster is there. Let's see if the API is really responding: yes, successful, version 1.29. It's completely fine that no status is shown, because there's no application deployed yet; that's why the status was not checked. So that is working fine.

Now, with the time and everything, I said I wasn't going to provision a DigitalOcean one, but let's do it, let's create a DigitalOcean one as well.
So we go to the DigitalOcean template and name it "my-do" for the backend. We say it's a development one, the owners are development team B, preview, and Create again. This takes a bit more time — of course, it's a cloud provider, and depending on the provider it can take a while. What we see now is, again, the Application — completely fine, because here we point to the folder where our Pulumi program lives, since we need to deploy that as well. You could also use a Helm chart here; it's absolutely up to you. You could create a generic Helm chart with some parameters as an abstraction, call that chart here, and just override the values as you wish. Everything is filled out — looks good. We are also creating the Argo CD cluster secret inside our GitOps infra cluster, where the operator is running, because we need that to access the new cluster as a spoke. And via External Secrets we pass the DigitalOcean token in. That's also fine, so I can squash and merge this. Okay, that's running. Let's go to app.pulumi.com, because I'm using Pulumi Cloud to store my state. We should hopefully see — wait, we forgot to roll out the stack. Let's go back; it takes a couple of seconds to roll out. Now we should see it in the clusters. Yes — it rolled out development B, and the program and stack get rolled out. So we expect the deployment to start now. We can see "my-do" for the backend has kicked off, and in the details it should now create a cluster. Let's have a look at DigitalOcean: we go to Kubernetes, and yes — the DO cluster is being created. It should hopefully be there soon. Yep, that's the DO cluster. This can take a little while, but while it's doing that, let's go ahead and deploy an application onto it.
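Since the Pulumi Kubernetes Operator runs in the GitOps infra cluster, the DigitalOcean cluster can be described by a Stack custom resource that Argo CD syncs like any other manifest. A sketch, assuming the operator's v1 `pulumi.com` API and illustrative repo, folder, and secret names:

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: my-do-backend
  namespace: pulumi-operator
spec:
  stack: example-org/do-cluster/dev                 # illustrative Pulumi Cloud stack name
  projectRepo: https://github.com/example-org/gitops-infra.git
  repoDir: infrastructure/do-cluster                # folder containing the Pulumi program
  branch: main
  envRefs:
    PULUMI_ACCESS_TOKEN:                            # Pulumi Cloud backend token
      type: Secret
      secret:
        name: pulumi-api-secret
        key: accessToken
    DIGITALOCEAN_TOKEN:                             # provisioned via External Secrets, as in the demo
      type: Secret
      secret:
        name: do-token
        key: token
```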
I also created a generic "deploy Helm chart" template here. This is just a random example, so let's say podinfo — podinfo-backend. The owner is development team B. I don't need to give it a stage, because we are using an ApplicationSet with the matrix generator here; I can just disable that field and it won't bother us — it will deploy podinfo to every stage. This is really about fanning out: I'm no longer thinking in stages, and I don't need to know how many clusters I have per product or whatever. If I want to deploy this application to all clusters that are labeled with a specific owner — for example, development team B — I can just roll it out. We are no longer at this low-level, single-application view; now we are really operating at scale, using the ApplicationSet controller. So how does it look? When we go back to the GitOps repo, into teams, and look at the apps, I just want to show you: this is the ApplicationSet, and this is the selector, as you can see here. It says: everybody in development team A gets podinfo, independent of which clusters they are using. I don't care which stage or which clusters they are on. I just know this Argo CD is hub and spoke, it's connected, and the clusters carry labels. So I know I need to deploy this to the clusters owned by the team, and I just roll it out. This uses the cluster generator; there are many more generators. Let's give Git a little time and see if this is working. That should be nearly finished now. Okay — the default cluster is already there, so that should normally be done any second; it's just waiting for the node provisioning. Come on, give it a second. If you're watching this on replay, please go ahead and fast-forward this part where we're just waiting for DigitalOcean. Now it's done — thank God.
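An ApplicationSet using the cluster generator with a label selector works roughly like this — a sketch with illustrative names (the demo additionally combines it with a matrix generator for stages):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: podinfo-backend
  namespace: argocd
spec:
  generators:
    - clusters:                        # cluster generator: iterates over registered spoke clusters
        selector:
          matchLabels:
            owner: development-team-b  # only clusters owned by this team, whatever stage they are in
  template:
    metadata:
      name: 'podinfo-{{name}}'         # one Application is stamped out per matching cluster
    spec:
      project: development-team-b
      source:
        repoURL: https://stefanprodan.github.io/podinfo
        chart: podinfo
        targetRevision: 6.x
      destination:
        server: '{{server}}'           # filled in from the matching cluster secret
        namespace: podinfo
      syncPolicy:
        automated: {}
```

Adding a new cluster with the matching `owner` label is then enough to get podinfo rolled out to it; no Application manifest has to be written per cluster.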
So, stop, stop — if you're fast-forwarding, it's already over. This one is great: as you can see, role bindings, secrets, everything gets created. So now I'm expecting my DigitalOcean cluster under Settings → Clusters as well — and there it is. Here's the DigitalOcean cluster, also registered. And let's check the labels: yes, it belongs to development team B, the environment is the dev environment. Yep, that's what we expect. There's also the cluster type label, in case you want to deploy stuff only to vclusters, for example — you could select on cluster type as well, and you can add many more labels. Save this; we see it's also connecting — that's working perfectly. So now let's deploy our podinfo. We go with podinfo, take version 6 — that's fine — and review. This will also create a pull request; everything works a little bit gated here, though of course you could just push to main if you feel completely YOLO. So let's see what changed. Yes, we get our ApplicationSet, and we want to hit all the clusters labeled with development team B. This is all we need — it will create an Application with the name and so on. That's fine. Okay, let's accept this one: squash and merge. I don't have any webhook notification set up, so I need to trigger this by hand in the real cluster this time. In the apps, I say: please refresh — because now I have the development-B podinfo-backend I want to deploy. And now I should see — yes, let's filter for development team B. We can see the vcluster is deployed for development B, and the DigitalOcean one is deployed. But where is the application? Where is the podinfo ApplicationSet? Just a second — what happened to my other cluster? Okay, let me just redo the port-forwarding.
Sometimes the vcluster here has a little issue, so this isn't working — let me just quickly redo the port-forwarding. Okay, so let's check what happened to this cluster. The cluster is here, and the environment is — okay, the environment label says "B". So that's the reason: the string splitting in the template went wrong — that was unfortunate. My fault on this one; the team name was a little bit too long. But as we can see, the application gets deployed to our DO cluster, so that's completely fine, and the other application is also deployed here. When we look at the DO cluster's application — here — podinfo is deployed as well. Perfect. We can also watch the logs, and it's running. Okay, so far so good. Just to give a recap: this is what we created. A GitOps engine with Argo CD taking care of everything; we use pull requests to deploy; we have hub and spoke; the applications are there. And any business application a development team wants to deploy — they can just go through Backstage, use for example the Helm deployment template, or something fancier that also creates a Git repository, and point to the GitHub repository. Argo CD will pick this up when the pull request gets approved. And because we are using ApplicationSets with the cluster generator and labels, we can roll things out without knowing which clusters get hit — we don't need to know anymore. We can create unlimited clusters and simply say: roll this out. I can also create more sophisticated rules; it's completely up to me how I want this. Let's go back to the slides. We saw the action part — it's working, completely fine. Some tweaking is needed on one of the application deployments, because the string splitting was not really nice: the cluster was detected as a spoke, but the labeling was not perfect.
So it didn't deploy the podinfo deployment there. Okay. If you want to talk about this — if you're interested, you made it to the end of this webinar, you want to see it in action again, or you want to tell me it's complete rubbish because I forgot some parts — feel free: I will be at booth K13 at KubeCon Paris in two weeks. I'm looking forward to it. If you want to get more involved with us earlier, join our community Slack — I'm there too, and we can talk there as well. If you want to play around, please sign up and try Pulumi AI; there are also plenty of talks about it. And there are upcoming workshops on our side — these are the workshops over the next couple of days, feel free to join them. If you want more resources, there are plenty of Kubernetes workshops as well. With this, the workshop is done, and I hope you enjoyed it. So let me stop sharing. I hope you enjoyed this: the K8s cluster vending machine 2.0, now completely on steroids, completely pimped. It's much, much better: the ordering is much better, the hub-and-spoke approach is really nice, and I can now deploy applications at scale without thinking about which cluster I'm actually deploying to — I just target capabilities and deploy applications based on those capabilities, aka labels. Okay, thanks a lot. Leave a comment to let me know how you liked this, and see you next time. Bye.