And I'm getting the green light to begin. So if you're going to be following along with the workshop on your own, this is a good time to get your laptop open, get GitHub open, and get connected to the Wi-Fi if you haven't done that yet; the password is "cloud native". The giant QR code is going to take you to the docs. Everything I'm going to do is also in the docs, all the explanation too, so you don't have to focus on understanding every little bit today. Let's just enjoy the fun of standing up some application sets and some Argo CD instances, and see if we can do this without breaking anything. Because, in case you haven't noticed, this is about a 90-minute live demo that I'm going to be doing, so let's pray that this goes well. I'm optimistic. If you haven't gotten the link, it's also at docs.akuity.io, under the cluster add-ons tutorial in that section. I'll give everyone a chance. Anybody still scanning the QR code? No? Sweet.

All right, let's go over a brief agenda. I swear I have like three slides, and then for the rest of it we're actually doing stuff, so bear with me. We're going to do a quick introduction, with a brief backstory on why you even need a solution like this. We're going to create your workshop environment: we're taking advantage of dev containers, plus the Akuity Platform for hosting the Argo CD instance. Your dev container is going to have minikube in it, it's going to create a couple of local clusters, and then we're going to connect those to our Argo CD instance. We're going to bootstrap Argo CD; if you're not already familiar with the app-of-apps pattern, we'll cover that really quickly, just to demonstrate what it takes to get our application sets into Argo CD using GitOps principles. And we're going to take two learning breaks as we go through deploying the cluster add-ons with labels. Maybe I skipped over that part: the theme of this workshop is that we're going to use application sets, plus labels on our cluster configuration, to determine when we should template and deploy an application to a cluster. If you happened to attend my talk this morning with Carlos Santana (not the one that does music), we demonstrated this more extensively by connecting it to Terraform. Today, in the workshop, we're going to focus specifically on labels as the selector for when an application should get deployed, not on transferring metadata from Terraform into our applications. And then we're actually going to deploy some applications, the apps that round out the tutorial.

Some context to go into this with: who here is using Argo CD right now? Okay, that's a good sign, because I don't really go over all of the basics of Argo CD; I'm assuming most people have that context at this point. You're likely already familiar with the open source architecture, where you can have Argo CD in each cluster, or you can have Argo CD in a management cluster that connects out to multiple other clusters. In that model, the control plane has to connect directly to the API server of those external clusters, and it has to hold cluster-admin credentials for every one of them to be able to manage resources in those managed clusters.
For this workshop, we're going to be demonstrating the agent-based hybrid architecture of the Akuity Platform, which is a balance between the in-cluster model and the hub-and-spoke model of the open source architecture. We're going to have a central Argo CD control plane and a central Argo CD UI, the same one you're familiar with from open source, but then we're going to deploy an agent into each of our clusters that contains the application controller and the repo server, and that agent is responsible only for the resources relevant to its cluster. The advantage of this approach is that the agent running in the cluster connects out to the Argo CD control plane, meaning your control plane doesn't need direct access to the API server of your managed clusters. In addition, the agent runs with a service account that exists in the cluster it's responsible for, so the control plane no longer needs admin credentials for every single connected cluster. Instead (and this is actually on topic, if you were here for the last talk), each agent is its own security boundary, in that it's only responsible for its own cluster. The control plane is no longer a big central point of failure holding all the cluster-admin credentials: if the control plane gets compromised, it doesn't affect your managed clusters, because each agent runs with a service account in its cluster, talking directly to the API server of the cluster it manages. So just keep in mind as we go through the workshop that we're working with a slightly different, agent-based architecture here. The control plane can host all of the Argo CD configuration, so applications, application sets, and app projects, but every other resource, say the ones generated by your applications, gets deployed to connected clusters, not to where the Argo CD control plane is running.

So let's jump out of the slides and into the documentation, the workshop tutorial. Is it better in dark mode or light mode? Show of hands, maybe. Dark mode? Light mode? All right, light mode, I'll take it. So the basic premise is that managing clusters at scale requires standardization and automa... wow, it's the end of the day, I've done a lot of talking today, bear with me: automation, there we go. Tasks that look trivial with a handful of clusters become a burden when managing even 10, and frankly organizations eventually get to hundreds of clusters, and at that point copying stuff around is not great. So I'm going to tell a little backstory so that everybody has the same context for why we need a solution like this. The backstory is told from the perspective of a cluster administrator named Nick. That's me, but still. Scrolling through Slack one morning, Nick (me) got a ping that one of the product teams wanted to do performance testing on their application, because of a new feature launch that ties together many different services. They're worried that the influx of traffic might affect the entire environment, and given that fear, I decided to provision an environment dedicated to performance testing, basing that perf environment on the stage environment. Right now they're using a cluster for each environment.
So we've got dev, stage, and prod, each environment has its own cluster, and those clusters each have a folder in our GitOps repo containing the applications for the add-ons: a standard set of Kubernetes resources, used by the rest of the applications in the cluster, that are expected to be in every cluster. Just to reiterate: we have a folder for each cluster, and that folder contains the applications that represent the add-ons for that cluster. These are things like cert-manager, Kyverno, external secrets, the things all the other applications rely on. And we're telling this from the perspective of a cluster administrator, which is effectively what you're going to be responsible for in the context of this workshop.

When I add this new environment with a new cluster, it requires a lot of copying and pasting of Argo CD applications from the existing cluster configuration into a new folder. We're taking our stage applications, creating a new folder, copying them all over, and then editing the configuration in those applications, changing simple things like the cluster name, or maybe the subdomain that some of those applications have in their values. That isn't a huge burden at the scale of three to four clusters, but it is easy to forget to change something. So I go ahead and update the Terraform configuration, create my new perf environment and the new cluster, and add that cluster to our central Argo CD instance. I create the app of apps to deploy the applications from the clusters folder in the GitOps repo. I confirm that all the applications are synced and healthy. My job's done; I can hand it off to the product team to deploy their stuff. But after I hand it over, I get a ping on Slack. Does anybody know what that means? If we're getting a ping, something's probably wrong. They deployed their application to the perf environment, but it's not working the way it did in stage. And I'm going to go into the weeds for just a moment here. After some investigation, I found that in the transition to the new cluster, the cert-manager application was still pointing at the values file of the stage cluster. That's a problem, because that values file contained the wrong subdomain, causing the DNS challenge to fail when the dev team's application requested a new cert. Granted, this is a fairly minor oversight: it's a new pull request and a quick sed replace on those files. But it's frustrating, and I think to myself, there must be a better way, a life without toil.

So with that backstory in mind, in this tutorial presented by Akuity, you're going to learn how to leverage application sets, a powerful feature of Argo CD, to streamline the deployment of a standard set of add-ons (remember, things like sealed secrets and cert-manager) to each cluster. We'll also address the dynamic nature of clusters by demonstrating how they can opt in to specific tooling and configurations, allowing for flexibility without really sacrificing standardization. Manually creating applications is error-prone and tedious, as the story demonstrated, especially when they're nearly identical except for a few values. If all we're changing is which values files we point to, and the cluster name in the name of the application, it's dead simple to move over to application sets.
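To make the failure from the story concrete, here's the kind of copied application that caused it. This is an illustrative sketch, not the workshop's actual files; the repo URL, paths, and names are hypothetical:

```yaml
# An Application copied from the stage folder into the new perf folder,
# renamed, but still pointing at the stage values file (the bug).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: perf-cert-manager
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/gitops-repo.git   # hypothetical repo
    targetRevision: HEAD
    path: charts/addons/cert-manager
    helm:
      valueFiles:
        # oops: should be ../../../clusters/perf/addons/cert-manager.yaml,
        # so the perf cluster inherits stage's subdomain and DNS challenges fail
        - ../../../clusters/stage/addons/cert-manager.yaml
  destination:
    name: perf
    namespace: cert-manager
```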
So application sets, if you're not aware, template applications and populate them using generators. In probably 15 to 20 minutes we're going to get real deep into generators and how we can use them for some really interesting functionality, so bear with me. But the core premise: an application set takes a template of an application and populates it based on generators that pull metadata from other places, like the cluster configuration, from Git, from pull requests, from lots of different places. If you have the docs open, you may have already found the control plane repo. At a high level, the control plane repo has a bootstrap folder that contains two application sets: one for the cluster add-ons (the cluster-add-ons ApplicationSet YAML) and one for the cluster apps. This distinction between add-ons and apps is the delineation between cluster admin and developer. Your developers get all the access to manage their applications within the RBAC they're provided, and you as a cluster administrator are responsible for the cluster add-ons: the cert-managers, the external secrets, the Kyvernos of the cluster that everything else relies on. And, you know, a little graphic here; I like to add visual elements, the wall-of-text thing doesn't really work for me.

So, the prerequisites. I hope everybody had a chance to read these before they got in here, especially if you plan on following along, but they're pretty simple. You basically need a GitHub account, full stop. We are going to use a dev container, and if you don't want to use your free Codespaces hours, you can also use the VS Code Dev Containers extension with Docker on your local machine to spin up the dev container that holds the minikube clusters. And you're going to need internet access; I always debate whether to put that in, but just in case it wasn't implied.

So let's start by creating our workshop environment. Everyone here should be able to follow along with the steps I'm taking, especially if you've already got the docs open, and you're each going to get your own environment to work out of. Personally, I'm going to use a GitHub Codespace to host my dev container and my clusters. This is frankly very easy because we're on conference Wi-Fi, and the minikube Docker image is about a gig; pulling that over conference Wi-Fi, not great. Pulling it directly from Azure's infrastructure onto the Codespace, pretty sweet. So if you just click the button right there, you'll get loaded into a new Codespace, and I'm going to create a new one so that I don't cheat for this workshop. That all looks good. I do recommend keeping the dev container config set to request the four-core machine, because we're going to have two minikube clusters, and each of them requests about two vCPUs. So, highly recommended, and we'll go ahead and create our Codespace. A little bit of context: Codespaces run on VMs and then create containers within them. They take the dev container spec of the repository you're creating them from to determine what that container should look like and which tools you want in it. It's a great way to get ephemeral, consistent development environments for your own work, and frankly it's great for giving to teams so you can standardize your tooling. While that happens in the background, I'm going to move on to creating my Argo CD instance.
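A quick aside while the Codespace builds, for anyone reading rather than clicking: the "template plus generator" premise from a minute ago, as an actual manifest. This is a minimal sketch with hypothetical names, not one of the workshop's files:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: guestbook-per-cluster
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - list:                       # simplest generator: a hardcoded list
        elements:
          - cluster: dev
          - cluster: prod
  template:                       # stamped out once per generator result
    metadata:
      name: 'guestbook-{{ .cluster }}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/control-plane.git
        targetRevision: HEAD
        path: guestbook
      destination:
        name: '{{ .cluster }}'    # each result fills in the placeholders
        namespace: guestbook
```

Swap the list generator for a git or clusters generator and you get the dynamic behavior the rest of the workshop builds on.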
So we're going to be using the Akuity Platform, because this workshop demonstrates deploying resources to clusters connected to Argo CD, not to the cluster it's running in. Simply enough, if you've got the docs open you can follow the link, or go to training.akuity.cloud, and just to demonstrate, you'll land on a page that looks like this. I recommend continuing with GitHub, because that was one of the prerequisites. You'll get logged in, probably to a page that looks more like this. Once you've logged in to the Akuity Platform, we need to create a new organization. And I know I'm doing click-ops; it's great for demos, not great for reality. If you want exposure to the CLI, I have a sub-document here that lists out the commands for the various steps, and you can go play with that. The cool part of the dev container is that the Akuity CLI is already inside that environment, so you don't have to go download it or anything. But I'll be going through the UI, just because it's a better visual aid as we go through the workshop.

So, backtracking for a moment: we're creating our Argo CD instance, and we're on the Akuity Platform. We're going to create a new organization, probably named after the GitHub account I'm using here, and then I need to create an Argo CD instance within that organization. So I click over to Argo CD here and create my first instance. I'm going to name it cluster-addons, just to make it apt for this workshop; really, any arbitrary name would work. We'll stick with the latest version, and to get Argo CD, that's all you need to do. You set a name, click a button, and you get an Argo CD instance that scales, has a URL accessible from anywhere, and can be managed either declaratively using the Akuity CLI or by clicking through the UI. This is going to take a moment to spin up; just like Argo CD anywhere else, there are pods in the background spinning up on the Akuity Platform to create your Argo CD control plane.

While that's happening, I'll go check on my dev container. Looks like my Codespace spun up just fine. As a sanity check, I'm just going to confirm the dev cluster is running with a k get nodes. All right, and same thing for the other one. If you're following along with the dev container, I added two quick bash aliases as part of the configuration, cluster-dev and cluster-prod, and they just switch the kubectl context. If you're interested in seeing how those are created, you can poke over to the post-create script here and it describes how we got to this point, but it's just a bash alias, nothing too fancy. Let's see. All right: I've got my dev container, I've got my two Kubernetes clusters, and I've got my Argo CD instance spinning up here.

Next, we're going to provision our agents for the clusters. I talked about the distinction between the open source architecture and the architecture we're working with on the Akuity Platform: we deploy an agent to the clusters, so we have to go provision one alongside our Argo CD instance. If I go to clusters and click create cluster: when you set the cluster name here, it corresponds to what you'd set in the destination.name of your Argo CD application. It's the same as if you were creating a cluster secret with open source Argo CD.
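To make that concrete, here's the shape of the destination block in an application targeting one of these clusters. This is an illustrative fragment, not a complete manifest:

```yaml
# Applications target agent-connected clusters by name, exactly like a
# named cluster secret in open source Argo CD.
destination:
  name: dev               # must match the cluster/agent name (dev or prod here)
  namespace: cert-manager
```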
That name is how we tell Argo CD applications where to deploy to, so it's important that you name it the same as what's defined in the docs, which is dev and prod. I'm going to create my dev cluster agent here, and then I get presented with a CLI command I can run, which pulls the manifests for the agent and runs kubectl apply to get those resources into the cluster. So I copy the command presented to me, pop back over to my Codespace here, and again, I like sanity checks: k get nodes, I'm on the dev cluster, I'm deploying the dev cluster manifests. We do that, I hit enter, and all of the resources related to the Akuity agent are now getting deployed into our dev cluster, into the minikube cluster on the Codespace. If I do k get pods in the akuity namespace, and throw a -w in there just to watch it happen, you can see that it created the Akuity agent, the application controller, the Redis and repo server instances, and, yeah, all the pieces we need to run that agent within the cluster, so that it can connect back to the Argo CD control plane on the Akuity Platform and manage the actual resources relevant to that cluster. That looks like it's coming along nicely.

And like I said, we're working with two clusters today, so I'm going to go back and repeat those steps for the prod cluster. I name it prod, I click connect cluster, and I get presented with another agent install command. Back here, I switch over to the prod cluster using that bash alias, do a k get nodes, and we see I'm connected to the prod cluster. I paste the command, run it, and once again it deploys all of the resources for the agent for that specific cluster. At this point we should have a healthy Argo CD instance, meaning the platform is reporting that all the related pods and workloads on the Akuity Platform are up and running, so your Argo CD control plane is happy. Now we've got our two clusters connected: the pods for the dev agent are already up and running, and we can see that production is still progressing, probably pulling some images for that cluster; we're just waiting on one agent workload to finish.

While that happens in the background, we can move on to the next step: accessing our Argo CD instance. With open source, you'd probably go to the Argo CD config map, enable the admin user, and either take the initial password or, better yet, generate something with bcrypt, put that into the Argo CD secret, and apply it to the cluster. Since we're on the Akuity Platform, I'm just going to scroll down to system accounts under settings, click the admin account toggle, and confirm that I do want to enable it. Under set password, we'll generate a new password, similar to how the initial admin password works, and I'll copy whatever it generated for me. You can see that when I did that, the instance started progressing again, because it's applying that configuration to the Argo CD workloads on the Akuity Platform. And if we get lucky here... yeah, see: login is disabled right now because we just enabled the admin user and that configuration is still applying.
Once we see that the progression has stopped, we'll get access to log in to our Argo CD instance. I log in with that admin user and that password, it says healthy, so it should be ready, and voila. So yes, we're using the Akuity Platform to host our Argo CD instance, but everything after this is just normal Argo CD. It's the same CLI, the same API, and the same UI that you're already familiar with, that all of your users are probably already familiar with. It should feel like home after this. We've got our two clusters in our Codespace, our dev container, both running on minikube, and we've got our Argo CD instance up and running. If we check settings here, we should see that our clusters are connected: we can see dev and prod, and we can also see the in-cluster destination, which in this context is the Akuity Platform. Thinking back to the architecture I mentioned earlier, the in-cluster destination is only going to support the deployment of Argo CD configuration, because it's just an Argo CD control plane; it's not an arbitrary cluster you can deploy anything to. But we are going to use that destination to deploy our two application sets, using a bootstrap application.

Speaking of which, our next step is to create that initial bootstrap application. We create our first application manually because it's the initial bootstrap; you have to tell it how to get to your GitOps repo somehow. In case I skipped ahead: we click new app in the Argo CD UI, and we click edit as YAML. If we go back to the control plane repo for the workshop, you'll see a bootstrap app at the root of the repo that tells Argo CD how to get to the control plane repo and points it at the bootstrap folder containing our two application sets, the cluster-add-ons and cluster-apps sets. I take that YAML, bring it over here, paste it into the Argo CD UI, and click save; that translates it into the fields in the wizard. And if I click create, we get our initial bootstrap application. Synced and healthy before I even have a chance to click into it, but let's make this a little more friendly. We see that it created those two application sets, and those application sets currently live on the Akuity Platform, on the in-cluster destination, as specified by our application. But you'll notice they haven't created any applications yet, because our clusters don't have any labels indicating which add-ons they want, or whether they're ready to receive apps. We're taking an opt-in approach for this workshop: I want to explicitly set which add-ons should get deployed to my clusters. And to do that, we have to edit the cluster configuration.

Now I'm going to take a brief break to explain how this app set works. If you're following along, you can open up the cluster add-ons application set and look at its desired manifest. As I do my next spiel about how these generators work, I'd be staring at this section of the application set spec, the generators portion. But I'll go back here so that I have all of the resources I need to describe how this works. Can I get bigger? Nice.
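One quick reference before the generator spiel: the bootstrap application we pasted a minute ago looks roughly like this. The repo URL is illustrative, not the workshop's literal file:

```yaml
# App-of-apps bootstrap: deploys the two ApplicationSets in the bootstrap
# folder to the control plane itself (the in-cluster destination).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: bootstrap
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/control-plane.git  # hypothetical repo
    targetRevision: HEAD
    path: bootstrap               # contains the two ApplicationSet manifests
  destination:
    name: in-cluster              # the Argo CD control plane itself
    namespace: argocd
  syncPolicy:
    automated: {}                 # matches the "synced before I clicked" behavior
```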
All right, so the add-ons are located in the charts/addons folder of the control plane repo. I'm using Helm for demonstration during this workshop, but frankly the concept would work just the same with Kustomize; frankly, it could just be a folder of arbitrary resources, and you could let Argo CD infer which templating tool to use, Kustomize or Helm, based on the contents of the folder. But in this case I'm working with Helm charts, because we're going to do some stuff with values later. So the charts/addons folder contains Helm umbrella charts. If you're not familiar, a Helm umbrella chart is essentially just a local chart that uses dependencies to declare which version of a chart you want to pull, and you can add your own arbitrary templates to the umbrella chart for additional resources, or set your own default values. Let's go back here. Okay, so we've got that charts/addons folder containing all of the add-ons we want to be available to deploy to clusters, and these Helm umbrella charts contain the default values that any cluster opting into an add-on will use. Just to poke back here, that's this file. I'm going to use cert-manager as the example for most of this, so it's a good place to look: we're saying that any cluster that takes the cert-manager add-on chart should get the CRDs installed, so we're setting installCRDs to true. This is all just Helm stuff; nothing specific to application sets yet. Each cluster can then override the default values of the Helm umbrella chart with a file in the folder clusters/<cluster name>/addons, and the file in that folder should be named after the add-on. Following the dev cluster and cert-manager add-on example, we'd have clusters/dev/addons/cert-manager.yaml as the values file that overrides that add-on's defaults for that specific cluster. At a high level, it essentially looks like this in our repository.
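Sketched out, with chart versions and the override value purely illustrative:

```yaml
# charts/addons/cert-manager/Chart.yaml: the umbrella chart. The dependency
# pins the upstream chart; you can add your own templates alongside it.
apiVersion: v2
name: cert-manager
version: 1.0.0
dependencies:
  - name: cert-manager
    version: v1.13.2                     # illustrative upstream version
    repository: https://charts.jetstack.io
---
# charts/addons/cert-manager/values.yaml: defaults every cluster inherits.
# Values for a dependency chart are nested under its name.
cert-manager:
  installCRDs: true
---
# clusters/dev/addons/cert-manager.yaml: optional per-cluster override,
# only created if the dev cluster needs something non-default.
cert-manager:
  ingressShim:
    defaultIssuerName: letsencrypt-staging   # hypothetical dev-only override
```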
Now we're going to get to the fun part. Who here is currently using application sets? Okay, 50, maybe 30%. Who here is currently using the matrix generator with your application sets? Okay, yeah, so we're going to learn some new things today. The cluster add-ons application set is a complex example of using generators: it contains two different generators, the git and the clusters generators, and on top of that we use the matrix generator, a meta-generator, to bring the results from those two together. I'd have this open if you want to follow along as I describe it, because you're probably going to want to reference it; there's a lot of YAML there, and yeah, the first time I looked at it, it was kind of scary. We're doing better now.

We'll start with the simple one in this example, the git generator; it's fairly straightforward. We've got a repoURL, we've got a revision, and we've got a directories attribute, and that's basically telling the generator where to get its attributes from, where the values produced by the generator come from. For every directory that exists under our charts/addons folder, which, remember, contains all of the add-ons we want to make available to clusters, it's going to produce a path attribute containing both the basename and the full path of that directory in Git.

Then we get to the cluster generator. So we've got our git generator looking in Git, seeing all of these add-ons and producing results for them; the cluster generator does a similar thing, but for all of the clusters registered to Argo CD. For each registered cluster, it produces the name, the server, and the metadata labels and annotations from that cluster's configuration. If we look in Argo CD here, taking the dev cluster for example, it's basically all of this information: it takes the labels, the name, and the server. Wow, we don't display the annotations, interesting; that metadata is also available on the cluster configuration. Maybe a better example is the Akuity Platform, where you can configure the labels and annotations on the cluster configuration here. So, going back to the explanation: for every cluster registered to Argo CD, we get those attributes available in our application set. And what I just explained is just the clusters level: you could stop there, get every cluster registered to Argo CD and all its metadata, and use that in your applications.

Now we're going to filter those results with a post-selector and some expressions. The first expression in the selector looks for the akuity.io/argo-cd-cluster-name label on the cluster configuration and checks for the value in-cluster. This is a label added automatically by the Akuity Platform when you configure a cluster, and we're essentially telling it: don't deploy any add-ons to the in-cluster destination, the Argo CD control plane, because it's not a managed cluster; we don't need to deploy arbitrary add-ons to it. The second expression, I think, is a little more interesting. We're looking for enable_, maybe I'll highlight it to make it clearer: enable_ followed by the application set templating syntax, referencing path.basename piped through the snake_case filter. And path.basename, if you remember, is one of the attributes produced by the git generator. So we're actually using the results from the git generator to filter the results of the cluster generator, and that's the special sauce that brings this all together: we dynamically pick up any add-on in our GitOps repo and make it available to deploy to clusters, but a cluster doesn't receive an add-on until it has the appropriate label on its cluster configuration. Taking cert-manager again as the example, we're looking for the enable_cert_manager: true label on the cluster configuration: the git generator returned path.basename cert-manager, so the cluster generator will only return clusters that carry the label for that add-on.

And now we can get to the matrix generator. We've got git checking for our add-ons and returning every folder in there as a potential attribute, and we've got the cluster generator checking every cluster configured in Argo CD and filtering those results based on the labels in the cluster configuration. The matrix generator combines the results from the git and cluster generators and produces an application for each resulting combination.
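Putting that together, the generators block looks approximately like this. The repo URL and label keys are illustrative, and I'm assuming goTemplate: true on the application set, so sprig's snakecase filter is available in the template expression:

```yaml
generators:
  - matrix:
      generators:
        - git:
            repoURL: https://github.com/example-org/control-plane.git
            revision: HEAD
            directories:
              - path: charts/addons/*     # one result per add-on folder
        - clusters:
            selector:
              matchExpressions:
                # skip the control plane itself (label set by the platform)
                - key: akuity.io/argo-cd-cluster-name
                  operator: NotIn
                  values:
                    - in-cluster
                # the opt-in gate: parameters from the git generator above are
                # usable here, so each add-on checks its own enable_<addon> label
                - key: 'enable_{{ .path.basename | snakecase }}'
                  operator: In
                  values:
                    - 'true'
```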
Let's go through a specific example, in case it's not clear. Say we have a dev cluster whose configuration has the enable_cert_manager: true label, and the charts/addons folder contains two folders in this example, cert-manager and external-secrets. The git generator will return two results, path.basename cert-manager and path.basename external-secrets, and the cluster generator will return name dev. The matrix generator then combines the results into one instance: the combination of dev and path.basename cert-manager. The combination of dev and external-secrets is omitted, because the dev cluster configuration doesn't contain the label enable_external_secrets; it only contains enable_cert_manager. That's how we filter and opt in: I want my dev cluster to have the cert-manager add-on, but I don't want it to have external-secrets, so I just don't add that label to the cluster configuration.

Once all of these conditions are met (we've got the appropriate label, we've got clusters configured, we've got a folder with our add-ons in it), the application set produces an application sourcing that Helm umbrella chart, using the path attribute supplied by the git generator. So the source in our application set's template looks something like this: we point at the control plane repo, take the path returned by the git generator to say, okay, this is the add-on you want to use for this cluster, and target the HEAD revision, the tip of the main branch. The application uses the add-on name, the path.basename attribute supplied by the git generator, as the release name for the Helm chart; when Argo CD runs helm template, it will have cert-manager as the release name, following our example. It also uses the combination of the cluster name and the add-on name to determine which values files to supply to the chart. This is how a particular cluster can set values specific to the add-on it's receiving, overriding the defaults from the Helm umbrella chart, the defaults for any cluster that takes that add-on. We also set the ignoreMissingValueFiles flag, because if a cluster doesn't have anything specific to it, it can just use the default, standard set of values for that add-on; we don't even have to create the file, it's simply used if it exists. Finally, the cluster name is used for the destination cluster in the application template, and the add-on name is used for the destination namespace. So, following our earlier example, our cert-manager add-on will get deployed to the cert-manager namespace on the dev cluster.
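And the template half, approximately, under the same assumptions (illustrative repo URL, goTemplate parameter names):

```yaml
template:
  metadata:
    name: 'addon-{{ .name }}-{{ .path.basename }}'   # e.g. addon-dev-cert-manager
  spec:
    project: default
    source:
      repoURL: https://github.com/example-org/control-plane.git
      targetRevision: HEAD                     # tip of the default branch
      path: '{{ .path.path }}'                 # the add-on folder from git
      helm:
        releaseName: '{{ .path.basename }}'    # add-on name as the release name
        ignoreMissingValueFiles: true          # per-cluster values are optional
        valueFiles:
          - '../../../clusters/{{ .name }}/addons/{{ .path.basename }}.yaml'
    destination:
      name: '{{ .name }}'                      # cluster name from the generator
      namespace: '{{ .path.basename }}'        # add-on name as the namespace
```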
All right, so that's the explanation of how the next three buttons we click are going to work: ten minutes of explanation, thirty seconds of clicking to get our add-on deployed. Once we understand the concept, actually using it is very straightforward, and it removes a lot of toil from our lives. Just to recap: we have our bootstrap application (I can do better... there we go) that deploys our cluster-add-ons and cluster-apps application sets to the Argo CD control plane, and they're watching for labels on the cluster configuration and for folders that exist in our Git repo.

So when I go to the Akuity Platform here, go to the clusters, and edit the configuration for the dev cluster, I add the label enable_cert_manager: true. I may have skipped over this inadvertently, but the cert-manager folder name is cert-manager, with a hyphen, and our application set uses the snake_case filter to turn that hyphen into an underscore, to standardize the presentation of those labels; that's why I'm writing enable_cert_manager with the underscore here. As soon as I click update cluster... maybe I'll get this ready so we can see it happen live. Cluster add-ons: nothing currently deployed, right? I go to my cluster configuration, I add enable_cert_manager: true, I click update, and I bet that by the time I click back here... voila! We've deployed an add-on to our dev cluster with a label. We didn't have to do anything else; it's that easy. Once you get it set up, that's the cool part: you have all of these add-ons with default, standard configurations, and somebody comes along and says, man, I really need external-dns in this cluster for X, Y, and Z. All right, let me go add that label. And frankly, I'm doing click-ops for the example here, but we can manage all of these configurations in Git as well (there's a sketch of that after this bit). It could be as simple as: I submit a PR, everybody agrees we should deploy that add-on, boop, it gets synced, and voila, we have an add-on on our cluster thanks to our application set. And the important point, thinking back to the story I told at the beginning: I didn't have to go through an application, copying and replacing the name of the cluster, to deploy this add-on to that cluster. I didn't have to go somewhere else, find the right file, copy it over... oops, I forgot to change the cluster name, great, I just installed cert-manager twice and broke the initial installation. No: we just add a label to the cluster configuration.

And I can repeat the same process for, say, prod here: for prod I want enable_cert_manager: true, and let's see what else we have. If we go to the charts/addons folder in our control plane repo, you can see I've also got external-secrets and Kyverno as options, so I'll add those two, enable_kyverno: true and enable_external_secrets: true, and we'll update the cluster. Going back, we see the application set has produced new applications for addon-prod-cert-manager and addon-prod-kyverno. I bet I spelled external-secrets wrong; if I go back, we can see it didn't get deployed. External seek-rits, plural: it doesn't just deploy one secret, apparently. Update the cluster, go back, and... a simple mistake, but voila: I didn't have to create a new commit or actually break anything in an application, and now it's seen it and deployed those to our cluster. So that's how the add-ons work.
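On managing those labels in Git: with open source Argo CD, the cluster configuration is literally a secret, so the opt-in is one label in a manifest you already version. An illustrative sketch (the API server URL is hypothetical; on the Akuity Platform, the CLI manages the equivalent configuration):

```yaml
# Declarative cluster registration in open source Argo CD. The cluster
# generator reads these metadata labels, so opting into an add-on is a
# one-line pull request.
apiVersion: v1
kind: Secret
metadata:
  name: dev
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster   # marks this secret as a cluster
    enable_cert_manager: 'true'               # the label the ApplicationSet selects on
type: Opaque
stringData:
  name: dev
  server: https://dev.example.com:6443        # hypothetical API server URL
```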
There's one more example here, which I think will help the concept of application sets sink in a little further. We've got our add-ons deployed to our cluster, and we're happy with the state of that cluster, but you'll notice we also had that other application set, the cluster-apps application set, which hasn't produced anything yet. That's because I chose to add a manual gate: if my cluster doesn't contain a certain label, don't deploy any of those applications. To explain that a little bit: the cluster-apps application set also uses the cluster generator, and it's a much simpler example of using generators with application sets. Essentially, the apps for every cluster could be unique; they might not be standardized enough that we can just automate them away with application sets. So we have a folder for each cluster called apps, and in there you can drop basically any arbitrary Argo CD configuration; mainly, it's expected to contain applications. For every cluster that exists in Argo CD that is (calling back to earlier) not the in-cluster destination, and that contains the label ready: true, go and template an application. Using the same selector technique as the cluster add-ons, we're gating the deployment of these applications on the ready: true label. Once the cluster contains that label, the application set generates the apps-<cluster name> application, which points at that cluster's clusters/<cluster name>/apps folder, using the name attribute supplied by the cluster generator. You can see that it sets the name of the application and then sets the path to look for the applications under. So, a much simpler example of using application sets, but it's kind of like: instead of app of apps, it's an application set of apps to create your app of apps. Application set app of apps, sure. The generated application is configured to take really any plain Kubernetes manifest in that folder and deploy it to the Argo CD control plane, in this case the in-cluster destination with the argocd namespace, and therefore it's expected to contain Argo CD manifests: an application, an application set, or an app project. So it's a very simple example of an Argo CD application.
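Pulled together, the cluster-apps application set is small enough to sketch whole (same illustrative repo URL and label keys as before):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: cluster-apps
  namespace: argocd
spec:
  goTemplate: true
  generators:
    - clusters:
        selector:
          matchExpressions:
            - key: akuity.io/argo-cd-cluster-name
              operator: NotIn
              values:
                - in-cluster
            - key: ready                    # the manual gate
              operator: In
              values:
                - 'true'
  template:
    metadata:
      name: 'apps-{{ .name }}'              # e.g. apps-prod
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/control-plane.git
        targetRevision: HEAD
        path: 'clusters/{{ .name }}/apps'   # that cluster's own apps folder
      destination:
        name: in-cluster                    # Argo CD manifests land on the control plane
        namespace: argocd
```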
So to demonstrate this functionality, I'll go and mark my production cluster as ready, because it contains all three add-ons I expected it to. Same process as before: I go to the cluster configuration, I edit it, and I add a fourth label that says ready: true. I update the cluster configuration, and almost as fast as I can click back here, we'll get an application deployed to my cluster. There's a reconciliation loop here that I'm waiting on... hey, there it is, yeah, okay. So we created our apps-prod application, and if we poke into it, it's deploying a basic guestbook-prod example, because that's what our clusters/prod/apps folder contains: a guestbook-prod application. So: we deployed our add-ons with application sets, then we determined that our cluster was ready and marked it as such with a label on the cluster configuration, and then we deployed our applications to that cluster. It ultimately ends up being fairly simple. It's basically cluster management through labels, and those labels can be managed through GitOps practices as well; we're doing a demo here, so I went the click-ops route. But I hope that helps demonstrate how you can use application sets to standardize the deployment of cluster add-ons to your clusters, without the toil of copying applications around and potentially making small mistakes that just create more pain for yourself. We can automate all of that away and have it determined by our cluster configuration. So this is essentially the end of the workshop, the end of the official workshop we slotted here. The docs are going to be available as long as you want, so if you want to go try this out on your own, if you chose to just listen today, you're welcome to try it outside the context of this workshop.

We do have a special treat for anybody who has heard of the Kargo project. Yeah, damn, that's more people than are using application sets. Where's Wojtek? Good job. Okay, so we're also going to do a demo of that, because frankly we think it's pretty cool and deserves showing off. Before I bring up the CTO of Akuity, one of the founders of the Argo project, Jesse Suen, to demonstrate it, I wanted to ask if there are any Q&A questions I can field on the application-sets-for-add-ons workshop. There's a mic up at the front if you don't mind sprinting for it; we'll probably spend 15 minutes or so on Q&A and then switch over.

So, in the demonstration you had the one application set for many of your add-ons. Is there any benefit in having one application set for all add-ons, versus categorizing your add-ons so it's one application set per add-on type, so to speak? Multiple application sets versus the one application set: yes, ultimately it depends on what you're doing with that application set. If you saw the talk Carlos and I gave this morning, that's the pattern we used there, and we chose it because the applications being generated for those add-ons weren't completely similar; they weren't identical apart from a few attributes. We were pulling more cloud metadata from the cluster configuration and using it in the Helm values for the add-on charts we were deploying, so we had to have separate application sets, because the actual templated application is different for each add-on. Here, granted, I'm using a really simple example just to get the point across; in reality, you might have an application set for every add-on, because the values differ, or maybe you want to auto-sync some but not others. So, great question, yes. Thank you.

Hi, great talk. I have two questions, actually. One is following up on that latest label, the ready one. So you've got your cluster in a ready state; let's imagine you want to do it all over again. What would you do? You set ready to false, or whatever, and it re-evaluates, but that will actually delete the guestbook you have deployed in prod. Yeah, so it depends (I hate answering questions with "it depends") on what you're trying to accomplish. If you want to clean-slate your cluster in terms of applications, you could set it to false, and it'll just delete all of the applications and give you a clean slate. But in reality, you probably want to use the application set's applications sync policy to say: I only want to create and update applications, never delete them. Then you could change the label to false, and that prevents it from creating any new applications added to that apps folder, but the ones that already exist will continue to operate as necessary. So if you're saying, I don't want to deploy any new arbitrary dev apps to that cluster: set the label to false with the policy set to create-update, and the applications that already exist will keep existing, but nothing new gets added until you change that label back to true. Okay, thanks, that explains it. The other question is: we have tons of clusters and tons of applications controlled by one Argo CD instance, and we're using application sets, and we have basically dev, TDS, and prod.
And for some applications, for example cert-manager, if we want to change the version, we actually go through all the clusters, and then, if none fail, we go to the next stage, TDS and then prod. Is there any recommendation, instead of this painful process, that we can adopt? Do you see Carlos over there rubbing his hands together? We had a whole talk on this this morning; I should have done this the other way around, workshop first and then that talk. So I'll very quickly jump to the GitOps Bridge to show you what that would look like, but yeah, that's a totally valid use case. Let's go to our KubeCon demo here, and if we load up gitops-bridge, bootstrap, control plane, add-ons, AWS, let's go here. You could use the merge generator. Taking the gentleman's question from earlier, where you have an application set for each add-on instead of one application set that does everything (because, granted, one application set that does everything doesn't happen in production), you can have your main clusters generator say: okay, for this ingress-nginx chart at this version, for every cluster that has that label on it, deploy the ingress-nginx application to that cluster and use this chart version by default. Then, using selector labels, if I put, say, environment: staging or environment: prod on a cluster, I can say: for any staging cluster, deploy this version of the chart, and for any production cluster, deploy that version of the chart. That way, anything that's not stage or prod deploys the default version of ingress-nginx, but for staging I can try upgrading to the next version, now that I've established the current one for the others. So, long story short: you'd use the merge generator, you'd use labels to select clusters based on which environment they are, and then you can use an arbitrary value to say, this is the version of the chart for that cluster; there's a sketch of the idea right after this answer. For anybody interested in learning more about that part of the pattern, go to the gitops-bridge-dev GitHub org and you'll see examples in there of how to play with it, and I'm sure in a week we'll have the talk recording up so you can see the full explanation of how to use it.
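A sketch of that merge-generator idea, with label keys and chart versions purely illustrative: the first child generator is the base result set, and later generators override values for the clusters they match, joined on the cluster name:

```yaml
generators:
  - merge:
      mergeKeys:
        - name                        # join results on the cluster name
      generators:
        - clusters:                   # base set: every opted-in cluster
            selector:
              matchLabels:
                enable_ingress_nginx: 'true'
            values:
              chartVersion: 4.8.0     # default version for any matching cluster
        - clusters:                   # override set: staging canaries the next version
            selector:
              matchLabels:
                environment: staging
            values:
              chartVersion: 4.9.0
# ...and the template consumes it, e.g. targetRevision: '{{ .values.chartVersion }}'
```

Flip the staging override once you're happy, and the rollout is again just a label-and-value change, not a copy/paste across cluster folders.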
Let's see, timing... yeah, we've still got more time, go. So, what is the recommended GitOps pattern to add, modify, and delete those labels? Look at him, he's so happy about it. So, recommended GitOps pattern: this is kind of what the GitOps Bridge talk was getting into. You could almost consider those labels to be cluster configuration rather than Kubernetes configuration; they're attributes of the cluster, not necessarily GitOps configuration. In our talk, we covered using infrastructure as code: when you create the cluster, and you create the cluster secret that connects it to Argo CD, you manage that cluster secret with your infrastructure as code, and that's where you set the labels for the cluster. You don't really need an active reconciliation loop for those labels, because they're not just going to disappear; you add or delete them, you run Terraform, that changes the cluster configuration, and then you let the application sets react to it and deploy the applications into the cluster. So, stepping back: it depends on how you connect your clusters to Argo CD. If you're using Terraform to create the cluster secret that connects the cluster to Argo CD, then that's where you define those labels. So really, the question boils down to how you manage connecting clusters to Argo CD, and whatever solution you use for that should be the source of truth for those labels. Thank you, appreciate it. Of course.

This one's probably a little bit weird: we've got app sets, but they're doing the original, terrible dev/stage/prod folder thing, just as app sets. Is there any good way of getting those converted without having to, like, uninstall everything from your production cluster? Yes. So you currently have those applications... you're before the application set, right? Yeah, I'm the before, but it is an app set. There's a dev folder that's got, I don't remember exactly how it's set up, but there's a set of labels in there that tell it: install Kyverno and cert-manager in prod, but not in dev. Right, right. So all the ownership is already tagged that way, and if I just implement the after that you just did the entire workshop on, I'm probably going to start having some conflicts and potentially delete some stuff. Yeah, I think you can get the application set to take ownership of those existing applications. You just have to make sure that your application set template lines up with what you expect to exist in the cluster, and probably set it to not auto-sync by default; test that it's rendering the correct manifests, and if those line up correctly with your existing ones, then you can enable it, it'll take ownership of them, and it'll start deploying. But, asterisk: I haven't tried that, so test it. Yeah. There is a way, though. Use the microphone, if you don't mind, and show it if you can. There's a field in the application set, at the top, if you pick one of the application sets: it's a setting that tells the application set, when I remove this application set, please do not delete the apps that are in the cluster. So then you can delete the file, or kubectl delete the application set, and the apps will stay in the cluster. Then, when you apply the new applicationset.yaml, the one that you want, it will find its children and marry them. But you have to do it in that sequence. You can get with us and we can explain that field; I can't remember the name. Preserve resources on deletion. So that's one of the two knobs; the other knob is the applications sync policy for application sets. You could say: I only want to create applications, I don't want to update them. So as you're going through, you could test it, like, okay, maybe create, but don't make any changes yet, because I'm not confident. Fantastic. All right.
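For reference, the two knobs from that exchange, as they appear on the ApplicationSet spec (these are the upstream field names, to the best of my knowledge):

```yaml
spec:
  syncPolicy:
    # knob 1: deleting the ApplicationSet (or a generated result disappearing)
    # leaves the child applications in place instead of cascading the delete
    preserveResourcesOnDeletion: true
    # knob 2: the controller may create and update applications, never delete
    # them (create-only is the even more cautious option mentioned above)
    applicationsSync: create-update
```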
Yeah, we've got time. All right, so I just have two quick questions. One: from the example we just went through with the application sets, can we still use it with custom Argo plugins, like, for example, the Argo CD Vault plugin for Helm? Because there are some things, like ignoreMissingValueFiles or the Helm values, that I believe don't work there. Yeah, you would have to build the ignore-missing-values-files behavior into your CMP, your custom config management plugin. But ultimately, we're just templating an Argo CD application, so there's nothing special to Helm or Kustomize or anything in here; I'm just templating an application. You just have to make sure that the template aligns with what you want your applications to look like. Okay, cool. And the other question was with the labels again: is it recommended to use them to pass in any values, like when bootstrapping clusters, I don't know, AWS things, like our subnet IDs and everything else? That's exactly what our talk this morning was on. Yeah, so you would put that into the annotations instead of the labels: labels should be used for things like selecting, but annotations should be used to contain arbitrary data. I'll scroll down into the example we showed earlier... actually, let me go back to the cert-manager one, and then I can 100% answer your question. I want AWS cert-manager, yeah. So you have an IAM role that Terraform manages, you put that into the cluster configuration, and then you use the app set to pull it out, using the cluster generator, and put it into the values for cert-manager, or what have you; there's a sketch of the idea right after this answer. So did I answer it? Yeah? Oh yeah, sweet, that's exactly what we need, okay. Okay, fantastic. So go to the gitops-bridge-dev GitHub org, and there are tons of examples in there for exactly what you're trying to do.
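A sketch of that annotation-passing idea; the annotation key and Helm parameter path here are hypothetical, not the GitOps Bridge's literal ones. Terraform writes the role ARN onto the cluster configuration as an annotation, and the cluster generator exposes it to the template:

```yaml
# Excerpt of an ApplicationSet template (assumes goTemplate: true and that
# the cluster secret carries a cert_manager_iam_role_arn annotation).
template:
  spec:
    source:
      helm:
        parameters:
          # escaped dots keep the annotation key literal in Helm's --set syntax
          - name: 'cert-manager.serviceAccount.annotations.eks\.amazonaws\.com/role-arn'
            value: '{{ .metadata.annotations.cert_manager_iam_role_arn }}'
```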
Yes, sir. All right, I kind of have two questions. First is related to this: if the values property changes across your various apps, you can't use the example we learned today, right, in this demo? Sorry, if which value changes? The dynamic Helm values. Yeah: if each of our add-ons has a slightly different requirement for the Helm values, we can't dynamically pass that in somehow, right? Yeah, and that's why the gentleman earlier mentioned having a different application set for each add-on: because the values files end up being different. I had a nice, clean example here, but the reality is in the GitOps Bridge. Yep. So the second part was: with our add-ons, we kind of have an order of dependency. Contrived example, a CNI: you need your CNI first, then you deploy other stuff. So is progressive sync something we can use with the example today? Is there any reason we couldn't, I guess? I couldn't see one. Yeah, I don't think it would work, because, following your example, if you have different application sets for each of the add-ons, then you can't orchestrate deploying this add-on before that one; they're in different application sets. Honestly, I would probably just rely on eventual consistency: get the CNI in there, Argo will just retry, then deploy the next one, maybe add some wait time; there's probably some optimization there. Am I missing anything? Oh, I'm sorry, I thought it was going to be that easy. Test, test, test. Okay, never mind, sorry. Coming soon, TM. Yeah. So, yeah: the way I deal with retry is, by default Argo will fail and that's it, but there's a section where you can put backoff, and limit, and retry, and then we're playing eventual consistency, right? So yeah. Yeah, fantastic. Next up; and after this question, it'll be you.

Hello, I have a simple, maybe silly question. I'd just like to know what happens if one of the member clusters is in trouble: do the application sets still work, is there some failover thingy, or does the whole Argo CD just fail and retry, any mechanism? Sorry, do you mind starting at the beginning there? Oh, just: one of the member clusters is in trouble. No, the whole application set would be fine; just the application relevant to that cluster would end up showing as possibly degraded or unknown, because that cluster has maybe gone away, or is broken, or what have you. So you're saying that the application set controller is going to detect that? Well, yeah... I guess it depends on how that cluster is in trouble, right? If the connectivity is gone but the cluster is still configured in Argo CD, it's still going to try to produce applications for that cluster; just the actual applications produced won't be able to deploy anything and will show as unknown. Just throwing an error message, something like that? Yeah, it might show something like a connection timeout to the cluster; the application set would be fine and healthy, and the actual applications would show a relevant error, like can't deploy, or can't reach, something to that effect. So it depends on the failure mode, unfortunately, how it responds. Cool, all right, thank you. Yeah, we can do a hallway track after. And we're Akuity, at booth 010 in the solution showcase; if any of you have Argo CD questions, we'll be around to answer them for the rest of KubeCon, so we're happy to talk to you. Next up, we have Jesse Suen, the CTO and one of the co-founders of the Argo project, to demonstrate our new open source project, Kargo.

All right, so we actually didn't plan to do this until about an hour ago, but we found that we had extra time, and there seemed to be some interest in actually seeing Kargo in action. So I'd be surprised if nothing goes wrong, especially since this is actually a nightly build, and the cluster we're running on is a bit unstable. So, all right, let's cross our fingers. What I wanted to do here is give a better explanation of what we're trying to do with Kargo, explain some of the concepts, take questions, and show what's going on. So what we have here is a Kargo project, and this Kargo project has three stages. We're modeling our three environments, dev, staging, and prod, as interconnected environments. Each of these stages has a backing Argo CD app behind it: if I open this and click this, it takes me to the guestbook dev app, and likewise the staging one takes me to the staging app. Okay, so at the top... Kargo introduces some concepts, and the first is the stage. Stages are how you model your deployment pipeline. Let me show you the dev stage: this is the simple example, and it's under my personal GitHub, under kargo-simple. Let's look at what dev looks like; let me make this bigger. What this is saying is: I have a stage, and it subscribes to what we call a warehouse. A warehouse is basically a producer of things I want to deploy, a producer of another term I'll introduce in a second, called freight. So this is saying: I'm subscribed to this warehouse called guestbook, which I'll bring up shortly, and when I see new freight produced by this warehouse, I want to promote it. The way it promotes is by writing to this Git repository, on the master branch, using Kustomize; after it does that kustomize edit set image, it syncs the Argo CD app called guestbook-simple-dev. So that's how we define this stage: subscribe to a warehouse called guestbook, and when new freight comes in, promote it using GitOps against this repo and branch using Kustomize, then sync this app.
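Very roughly, the dev stage being shown. Kargo was pre-1.0 at the time and its API has evolved considerably since, so treat the field names as approximate and the repo, image, and app names as illustrative:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: dev
  namespace: kargo-simple
spec:
  subscriptions:
    warehouse: guestbook            # new freight comes from this warehouse
  promotionMechanisms:
    gitRepoUpdates:
      - repoURL: https://github.com/example/kargo-simple.git  # hypothetical repo
        writeBranch: main
        kustomize:
          images:
            - image: ghcr.io/example/guestbook   # hypothetical image
              path: stages/dev      # where kustomize edit set image runs
    argoCDAppUpdates:
      - appName: guestbook-simple-dev   # then sync this Argo CD app
        appNamespace: argocd
```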
This is probably stuff you're doing in your CI pipeline if you're practicing GitOps and you're trying to do this pipeline of progressive syncs. But what we're trying to do is formalize it and standardize it into tooling that knows how to do all this for you, so that not everyone is repeating this same pattern in different, broken ways in GitHub Actions or Jenkins, among other things. This is the same story that we see with our users. And then if we take a look at a downstream stage, it looks very similar. So pretty much everything down here is the same, except you'll notice that where it wants to write the kustomize edit set image is under a different folder, and the app that it wants to sync is a different app. But what this is subscribed to is an upstream stage, which we call dev, which is the thing that I just showed you. So what you're noticing here is that Kargo has you define and model your environments from the bottom up, as opposed to the top down. What I mean by that is, what you might be doing today is you have maybe this big Jenkinsfile. It has full knowledge of all your environments, and it has this long-lived job that is basically repeating kustomize edit set image, argocd app sync, argocd app wait, then run some tests, and then repeat that again for staging and then prod. But in reality, that thing I just described is unwieldy. It gets to be too much when you start to get to more complex multi-stage pipelines that are more than, like, three deep. Like the example that we like to show is something like this, right? This is maybe an example where I have dev, staging, but then I wanna canary something. I have an A group and a B group and I'm doing A/B testing, and then I'm gonna choose a winner, like, okay, B tested better, so that's the one I wanna promote. But when I promote, I wanna promote it to three different prod environments at the same time. So this is more of what we think might be a real-world, you know, more complex deployment pipeline than the textbook dev, staging, and prod example. This is what we wanna enable. We wanna enable more complex or sophisticated deployment patterns. But I'll start with the simple example before we get to the advanced example and just kind of explain what's going on. So, oh, I forgot to show the warehouse. So just to remind you, dev was subscribed to a warehouse called guestbook. And as I mentioned before, a warehouse is a producer of freight. Freight are things that you promote. In our case, this guestbook warehouse is subscribed to an image repository called guestbook.
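And a matching sketch of that warehouse, again on the early v1alpha1 API with placeholder names; the image repoURL and tag constraint are assumptions:

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Warehouse
metadata:
  name: guestbook
  namespace: kargo-simple
spec:
  subscriptions:
  - image:
      repoURL: ghcr.io/example/guestbook   # each new tag here produces new freight
      semverConstraint: ^0.0.0             # illustrative tag filter
```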
So let's look back at this UI. I'm gonna make this a little bigger. So at the top is what we call our freight line, and it's ordered from the newest stuff that is incoming on the left side to the oldest stuff on the right. So this is something that I maybe pushed, I don't know, like an hour ago or maybe yesterday. And these colors are indicating what things are running and where. So dev is currently at 0.0.25, staging is at 22, and prod is at 19. So to promote something, you can either promote into a stage or you can promote out of a stage. I'll be just showing incoming. So what I do is click on the left side of this stage, because it's into it. And then you'll notice that I have some options up here. So to promote something like this, I can select this freight and I can say, yes, promote this. So this will kick off a promotion job which, as you recall, in the case of dev is a Git commit to the Git repository branch. So now let's look at what happened. So there was just a commit made by Kargo. And if I look at that commit, you'll see that it did a kustomize edit set image from 25 to 26. And so that's what a promotion means for dev.
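The commit itself is plain Kustomize. Illustratively, the promoted env/dev/kustomization.yaml ends up looking something like this (the path and image name are placeholders):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
images:
- name: ghcr.io/example/guestbook
  newTag: 0.0.26      # bumped from 0.0.25 by `kustomize edit set image`
```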
So now let's actually show promoting, going down the chain to something like staging. Actually, prod might be a better example, because you'll notice that my options for prod are actually fewer than my options for staging. And there's a good reason for this. Because our stages are subscribed to each other, we only allow promoting what is an output of the previous upstream stage. So prod can only promote what comes out of the staging stage. Staging can only promote what came out of the dev stage. So that's why you see more options here than you do over here. And dev is always gonna have all the options, because it's looking at all the incoming freight from the warehouse. So today, promotion means a Git commit. Tomorrow, we want promotions to be other things, like, instead of making a commit directly, opening a pull request. And your promotion might be a long-lived promotion that takes as long as it takes for someone to approve or deny that pull request. But the idea is that it should be flexible. Sometimes you're using Kustomize and it's writing a commit to Git. Other times you might use Helm, and you need to update something like a values.yaml file under some JSON path for the image tag. We leverage a lot of the upstream tools, like the Image Updater code, although we're kind of heavily modifying it and working through a bunch of bugs. We make use of Argo CD config management. And so sometime in the future we would support config management plugins as part of the rendering step. And actually, I forgot to mention, that's also a feature of Kargo itself. So I gave you the simple example. The simple example is images, image tags. But one thing you'll notice about this freight line, when compared to this freight line: this one was just images, right? Kargo actually can subscribe to Git repositories and image repositories, and it will even mix and match them and permute the results. So if you stare at this a little closer, you'll see that there's a reference to a Git commit as well as a reference to an image tag. And I can click and see, okay, what is this thing I'm about to promote? I can click in here, and this is an environment variable that I just made about 19 minutes ago. I added this to basically this base kustomization, and it's something I can promote into my environment. I can choose this thing. And what I just promoted was a combination of the 0.0.26 image tag plus this environment variable that I made roughly 20 minutes ago. So I think that's kind of what I wanted to cover, just introducing some of these new concepts in Kargo. We have this kind of nifty history view of what recently got deployed, in this gradient color. Generally speaking, your prod environments are always lagging behind your upstream, so it makes sense that there is kind of this triangle shape of history. But yeah, okay, any questions? Yeah, yeah. So if this is something that looks like it would interest you, if you're looking for an environment promotion tool that is fully GitOps-based, we have a community that we're creating around this and the rest of the Akuity products, to discuss and get design partners early in the process, to get your feedback on how this should change. Because this is, I don't know, MVP, maybe a little past MVP now that it has the UI. So there's still time to influence this. I guess I wanted to step back. If you're interested in joining the community, you can go to akuity.community, and that'll get you the Discord link, and you can come jump in and share your feedback about what you hate, and hopefully what you love about it. But I wanted to ask quickly, what do you currently do for environment promotion? Maybe, I'm guessing you had a question ready, but maybe I can override it quickly with: sir, what do you do for environment promotion? Okay. It depends. For tenants, we basically have Jenkins jobs, and we, unfortunately, we're still on legacy Flux, so we're trying to, that's just for the tenants though. For the Argo stuff, it's basically just pull requests with basically what I call a bill of materials of all the different tooling versions; all of our platform tools are there. So basically we're using Renovate to detect the tags when our different repositories are updated for each of our tools, and then we kind of promote that up together as basically one big glob of changes. Okay, so using, I guess not technically CI, but CI-like tools to manage checking when these things change, and then grouping that into a pull request, and then you apply that to your GitOps repo. Yeah, apply that, and then it syncs those, so yeah. Right, right. So you, yeah, yeah, okay. Maybe I'll field your question now. Okay. All right, so is there a difference between promoting into a stage versus out of a stage, if it's just a one-to-one of, like, as you see here, dev to staging? Is there really any difference in that scenario between in and out? Does that question make sense? Yeah, good question. So when it's one-to-one, it's basically the same thing. I think the resulting promotion objects will be identical to each other. Where you'd wanna promote out is on this right side. Basically I have prod US here, and this is actually what we call a control flow stage. There's actually no Argo CD app behind this one, but we wanted to show stages being flexible enough to use similar to, like, a join point, and then, as control flow, to promote to all three things downstream at the same time. So rather than going here and clicking promote, clicking there, promote, clicking there, promote, I can click this and say, okay, promote to all my dependents at the same time. That's what this does.
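Sketching that in the same hedged v1alpha1 terms: a downstream stage subscribes to upstream stages rather than to the warehouse, and, as I understand it, a control flow stage like prod US is simply one with upstream subscriptions but no promotion mechanisms or Argo CD app of its own (all names here are placeholders):

```yaml
apiVersion: kargo.akuity.io/v1alpha1
kind: Stage
metadata:
  name: prod-us                # a control-flow-style stage
  namespace: kargo-simple
spec:
  subscriptions:
    upstreamStages:
    - name: staging            # only freight that came through staging is eligible
  # no promotionMechanisms: nothing to commit or sync here; it just fans
  # freight out to the stages that subscribe to prod-us downstream
```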
The other question was around the subscriptions to prior environments. You said, basically, prod had fewer options than staging because it needed to go to staging first. If you needed to skip it for some reason, I'm not saying that's a normal occurrence, would you have to subscribe to both staging and dev, or is there a way you would maybe inherit the subscriptions of staging if you wanted to be able to skip? Another good question. So yeah, there are just some times where you're gonna need to go to prod and just bypass all this stuff. Let's be real. So what I'll say about that is, take this one, prod US: in the example, it's subscribed to these two upstream stages, but I could write it to subscribe directly to the image repository. So this thing could promote directly, you could go basically direct to prod, if you constructed a stage to subscribe to the image. At this moment it would just give you all the options, same as up here in the freight line, when you're clicking. So we would have to figure out intuitive ways to, I don't know, indicate the provenance of this freight, to know, like, hey, this one came from this warehouse over here rather than that warehouse over there, to make it clear that it's maybe something dangerous. Okay, cool, thank you. All right, yeah, if you're interested in more, we have a Discord server, and/or just go to our GitHub repo, try it out and open discussions, you know, typical open source stuff. Cool, thanks. Yeah, and I just kind of wanted to reiterate akuity.community for the Discord instance, right? So I'm Christian, the head of community at Akuity. So we have a Discord instance there. If you want to continue the discussion: today, after Jesse's keynote, we got a slew of people, there's already conversations going on, there's already feedback going on. So if you really want to get involved and try to help us build this and make it useful for everyone else, come join us there. I guess one last thing and I'll let you guys go, I know there's a happy hour somewhere. The CFPs for KubeCon and ArgoCon EU are open. And the reason they're open so soon is because ArgoCon and KubeCon EU are happening in March. So if you guys are interested, if you want to submit CFPs to talk there, please go to the CNCF website, there will be a link there. And you know, just getting the word out that the CFPs are open now, and actually they're closing pretty soon, in a few weeks. It caught me by surprise that it was so early this year. So anyways, that's another thing. And I guess with that, that's it. If you have any questions, you can grab us in the hallway. If you see us, we'll be at KubeCon again, booth zero one zero, no, O one zero. I thought it was zero one zero. No, it's O10 at KubeCon. Come by, we'll talk Argo, we'll talk Argo CD as well. Anything else? So thank you. November 26th, so even sooner than I thought. That's, like, yeah. So yeah, all right, thank you everyone. Okay, so I'm very happy to be here today and on stage for the next five minutes. And because I don't have a lot of time, I'm going to jump directly into the answer. And the answer is yes. Yes, and the question to that answer is: can Backstage deliver value beyond engineering teams, across the organization, to other people? My name is Olivier Liechti. I'm founder and CEO of Avalia Systems. And one problem space that we explore with our clients is: how can we better understand and explain what is happening in the black box of software development? And can we communicate the business value, the business impact, of technology? There are two reasons why we decided to use Backstage to work on this problem. And this is because, in the solution space, Backstage gives us two important capabilities. The first one is the generic data model that we can use to create a model of the entire software ecosystem. The second capability is this very generic application development framework that we can use.
And so with these two capabilities, we are equipped to implement very different use cases for different people in the organization. Instead of talking about developer portals, we tend to speak about digital portals. And in the projects that we do, we focus a lot on content, insights, and stories. And we create personalized experiences by selecting the type of information and the way the information is presented to the users. During these projects, we have identified three themes that we find important and that we keep revisiting. These themes are: revisit the UX, embrace model extensions, and be smart about content. Revisit the UX. I think we can all agree that even if the UI of Backstage is very logical and well-structured, it drives us to the design of portals that really feel like developer tools. And if you consider non-technical people, this can be overwhelming, if not scary. I think we can also agree that the table view in the catalog is very effective for searching for information, but it's not very good for understanding the overall structure of the ecosystem, its state, and the relationships between the entities. The good news is that, in the end, a Backstage instance is a React application. So with some work, it's possible to turn the UI inside out. Instead of adding custom panels and custom tabs into Backstage, we prefer to look at it the other way: we look at a broader collaborative web application, and we inject Backstage user interface elements into that broader application. This is a technique that we use to create a first experience with the portal that is more appealing for non-technical people. Embrace model extensions. I mentioned before that the system model is really one of the features that drew us to Backstage in the first place. And at the same time, if we read the documentation, the documentation makes us cautious about modifications and extensions of the system model. Is it a good idea to change entity kinds? Is it a good idea to define new relations? So we are aware of the trade-offs, but because for us the modeling part is so important, we tend to favor these extensions. In addition to the taxonomies, an important part is the metadata that you can associate, that you can link, to entities in the catalog. And here, I would like to make an analogy to a GIS, a geographic information system. A GIS gives you different perspectives on the world by allowing you to use different layers on the map. And the idea is that Backstage can become the GIS of your software ecosystem. Because the user interface of Backstage is fully customizable, it's possible to create these interactive data visualizations, these interactive maps. In this example, we are looking at the organizational structure, where squads are grouped into tribes. And when you have these interactive visualizations rendered into Backstage, you can annotate them to represent the state of the ecosystem. Here, it's the happiness of the teams, but you can do similar things with the quality of the systems or the business value generated in the domains.
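As a sketch of how those extra layers can ride on ordinary catalog entities, here is a stock Backstage Component carrying hypothetical, non-engineering metadata in its annotations; Backstage only prescribes the entity envelope, and the annotation keys here are invented for illustration:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service
  description: Handles card payments for the webshop
  annotations:
    example.com/team-happiness-score: "4.2"     # rendered as one map layer
    example.com/business-value-tier: strategic  # another layer, e.g. for a CFO view
spec:
  type: service
  lifecycle: production
  owner: group:payments-squad
```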
Last but not least, be smart about content. Creating a digital portal is not trivial, but when you have released the first version of your portal, you have only done one part of the job. If the goal is to generate a stream of insights, content, and stories, how are you going to keep this stream lively and relevant for the people? Who is going to do that work, and how much time is it going to take? There are, of course, different strategies that you need to combine, but one very interesting use case that we see is to combine software analytics with generative AI. And one thing that we have done is to implement data pipelines where we combine these tools. I don't have the time to go into a lot of details, but just to give you an idea: it's possible to extract data from systems, for example issues captured in GitHub, to give this data structure to ChatGPT and to ask what are interesting questions that you could ask about this data set, to then ask ChatGPT to write the code that, once executed, can answer the question proposed by ChatGPT, and in the end, to ask ChatGPT to write a story about the answers in a way that makes sense to an engineer, a CTO, or a CFO, thereby creating content that will be understandable and make sense for the different personas. I hope that this has piqued your curiosity. I will be at the conference for the whole week, and I would love to chat and show you a demo of these portals. Thank you.