So let's get started, because I think we're a little behind on time, so we'll go fast here. If you're here, it's because you want to listen to this track. We're going to be talking about marrying Terraform and Argo CD and seeing what child comes out of it. My name is Carlos Santana. I don't play the guitar, but I play Kubernetes. I'm a senior specialist SA at AWS, and I'm also a CNCF ambassador. And Nick? I'm Nicholas Morey, developer advocate and recovering platform engineer from Akuity. Great.

So let's get started with the storytelling. I work with a lot of end users in the enterprise business, and Kubernetes has a lot of tools. You're at ArgoCon, so we'll be talking about Argo CD, which is one of the continuous delivery tools. There are also the build and integration tools, the CI tools. But there's one we don't talk a lot about: infrastructure as code, the tools you use to configure your cloud infrastructure resources to work with Kubernetes. So we have this trifecta of CI, CD, and infrastructure as code, and everybody tries to do it efficiently, in an innovative way, solving issues and putting it in open source. The pattern we'll show today is applicable to any of the infrastructure as code tools — Crossplane, CDK, Ansible — but today we're going to concentrate on how to make Terraform work with Argo CD in a better way when we talk about Kubernetes.

So, who here has deployed Kubernetes with Terraform? OK, a bunch of you. Good. Who here has deployed apps with Argo CD? OK, good. Pass. Third one: who has problems with Terraform and GitOps? OK, so you are in the right talk.

When we work with Terraform, it's usually to create the cloud resources we need for our Kubernetes cluster. For example — I work at AWS — a lot of folks use Terraform to create the EKS cluster, mainly the control plane. But since they want to automate everything, the platform team goes ahead and installs the add-ons, the Helm charts that are needed for that cluster, and the next cluster, and the next cluster. They install Helm charts and add-ons such as external-dns, the ingress controller, cert-manager, the load balancer controller. I have end users installing almost 40 Helm charts from Terraform, and I think we can do better than that.

Managing Helm from Terraform is a little bit difficult, because you have your TF state that lives outside the Kubernetes cluster, and then you have the state that is reconciling inside Kubernetes. So the idea is that we want to move the Helm install and the Helm management — which version of the Helm chart or add-on gets installed, which values YAML gets applied — into Git. You want to do that in YAML, and you want to organize it by clusters. But the problem is that there's metadata these add-ons need. Metadata such as workload identity — IAM roles, in our case with IRSA — SQS URLs for Karpenter, domain names for cert-manager and external-dns, or IP addresses, which Nick is going to talk about for NGINX, like external IPs. So how do we get that metadata into Argo CD when we install the Helm charts we want to put there? We want to stop pushing that information into Kubernetes by having Terraform install the Helm charts, but we need something in the middle. And usually, people have different patterns.
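To make that metadata problem concrete — this snippet isn't from the talk's repo, just a typical case — the external-dns Helm chart usually needs an IRSA role ARN in its service account annotations plus the domain it manages, and both of those values only exist on the Terraform side (the ARN and domain below are made up):

```yaml
# Typical values for the external-dns Helm chart.
# The role ARN and domain are illustrative; normally they come out of Terraform.
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/external-dns-irsa
domainFilters:
  - example.com   # the hosted zone created by Terraform
```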
And some folks have actually solved this — from the research we've done, they've done it, but they haven't put it in open source or made it a repeatable pattern. So what we did was create a GitHub org, and Nick and I started putting patterns in there: how do we do infrastructure as code, how do we pass this metadata? We created an org called the GitOps Bridge. Basically, what the GitOps Bridge does is take that metadata and put it into the Argo CD cluster secret — the cluster secret that the previous talk was talking about, which is the representation of a remote cluster in Argo CD. We put that metadata into the annotations of that secret, and we also put labels, which represent things about the cluster: what is the environment, staging versus prod; what is the region; which add-ons do I enable in this cluster?

There are different ways of getting that information from Terraform into the cluster secret. One of them, which works for some organizations, is to have Terraform use the Git provider, or the GitHub provider, and put the files in Git. Another one — you saw the previous talk — is using the Argo CD API to create that secret, and we're going to show that example. Another is External Secrets: when you have hub and spoke and you have 50 or 100 remote clusters that don't have Argo CD, the only thing you have to create is a secret, and the External Secrets Operator can help create that secret if you follow that pattern. And the last one, which Nick is going to demo, is that Akuity can register clusters in their platform, and as it registers a cluster it creates the labels and annotations on that cluster in Akuity.

The next thing is what we're using: ApplicationSets. We're using the power of ApplicationSets, and Nick is going to show how we use them. But who installs the ApplicationSets? It's kind of the chicken and the egg — we couldn't give a talk about Terraform and Argo CD without the chicken and the egg, so here it is. You use the app of apps pattern; in this case it's an app that deploys ApplicationSets. Large organizations have problems managing so many Application YAMLs, and they realize the best way to generate those apps is with ApplicationSets in Argo CD. So we need the bootstrap, and we go with the same concept: this bootstrap app is going to create the ApplicationSets. And the same thing applies — you can use Git, you can use the Argo CD API, or you can go directly to the Kubernetes API. The world is not perfect, but at least you're not installing the Helm charts from Terraform.

And then the other pattern is: who manages Argo CD? If I need to tweak Argo CD, do I do it from Terraform, or do I do it from Git? Argo CD itself is an add-on that you can manage from Git.

So let's take a look at the first demo. In the first demo I have an EKS cluster sitting in AWS. I just want to deploy the cluster with Terraform, install a seed Argo CD, and install all the Helm charts — all the add-ons — through the GitOps Bridge. So with that, let me show you a little bit of Terraform. Do not run out the door; I'm only going to show a little Terraform. The GitOps Bridge — this is an example that we have. We have multiple patterns; we have Terraform and GitOps.
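Here is roughly what that cluster secret ends up looking like. The `argocd.argoproj.io/secret-type: cluster` label and the `name`/`server`/`config` fields are the standard Argo CD declarative cluster format; the specific label and annotation keys below are illustrative, not copied from the repo:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-cluster-1
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster     # marks this Secret as a cluster for Argo CD
    environment: prod
    aws_region: us-east-1
    enable_cert_manager: "true"                 # which add-ons this cluster should get
    enable_external_dns: "true"
  annotations:
    aws_cluster_name: prod-cluster-1            # cloud metadata only Terraform knows
    external_dns_iam_role_arn: arn:aws:iam::111122223333:role/external-dns-irsa
    karpenter_sqs_queue_url: https://sqs.us-east-1.amazonaws.com/111122223333/karpenter
type: Opaque
stringData:
  name: prod-cluster-1
  server: https://kubernetes.default.svc        # in-cluster in the simple demo
  config: |
    {
      "tlsClientConfig": { "insecure": false }
    }
```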
And we're building the GitOps Bridge. Under Terraform we have three examples: EKS with Akuity, EKS with Argo CD, and Nick is going to show the other ones for the other cloud provider. I'm going to show the simplest one, which is Argo CD sitting inside the cluster instead of in Akuity. In here, as you can see, you typically create a vpc.tf — that's normal — and then the EKS cluster, which uses this module that is very popular for people doing EKS. It's an open-source module, and you create your cluster like that.

Then we go into the variables. To keep this example simple, we have a variable called addons, and you select in here how many add-ons you want in this cluster — you want 10, 12, 13. Well, you need, right? It's not that you want; as the platform team you're in charge of these add-ons. Maybe the application team deploys the app, but you, as the platform team, are in charge of the cluster and the add-ons on those clusters. So you enable things like cert-manager — you need that metadata — CloudWatch metrics, external-dns. In this case, I'm installing all of them. You can install Karpenter, or, in this case, I'm installing the AWS ingress controller. What else? metrics-server, and so on — those are the open-source ones that you can choose not to put in here. But all of these need some cloud metadata, and for all of them you just set enabled to true or false. And then at the bottom I have a few examples of open-source ones, like Argo Events — who uses Argo Events? Argo Workflows — who uses Argo Workflows? A couple of you. So you install them for the app team to use. GPU operator, if you're doing AI.

Then let me go to main.tf. In main.tf we use a project my team has, the EKS Blueprints; let me show you. I'm using the EKS Blueprints. Anyone using the EKS Blueprints? OK, a couple of you. Recently we added a new variable, which I wanted to call stop-terraform-from-installing-helm-charts-using-the-helm-provider, but that was too long, so we made it create_kubernetes_resources = false. Basically it says: do not do the Helm installation, but do create the cloud resources. So it will create the resources and give me back the metadata — the IAM role, the SQS URLs. Usually it's for IRSA. And then you enable all of them here; we have a bunch of them.

The output I put into the cluster metadata — those become the annotations. And then the labels: on and off for each add-on, or maybe the environment, whether this cluster is dev versus staging, or the version of Kubernetes. Sometimes you need the version of Kubernetes in the Helm chart; for example, with the cluster autoscaler you need a specific image based on the version of Kubernetes, and that's kind of tricky to do in Helm, so you put it here. So: labels that help you group clusters, plus any other information.

And then this module that we created — a very simple one called the GitOps Bridge bootstrap — installs Argo CD and creates the secret. If we go to that one, it's sitting in GitHub here; the organization is gitops-bridge-dev. We're trying to get gitops-bridge, but somebody is parked on it. This is the Terraform module that we plan to put in the Terraform Registry. It uses Helm — so here's the weakness: we use Helm, but it's only to seed-install Argo CD, if that's your use case. If you use Akuity, you don't need this.
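A rough sketch of how that wiring can look is below. The module sources, the `gitops_metadata` output name, and the bootstrap module's input shape are illustrative here rather than copied from the repo; the point is the `create_kubernetes_resources = false` flag plus handing the resulting metadata and enable flags to the bootstrap module:

```hcl
module "eks_blueprints_addons" {
  source = "aws-ia/eks-blueprints-addons/aws"

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  oidc_provider_arn = module.eks.oidc_provider_arn

  # Create the cloud-side resources (IAM roles for IRSA, SQS queues, ...)
  # but do NOT install the Helm charts from Terraform.
  create_kubernetes_resources = false

  enable_cert_manager                 = true
  enable_external_dns                 = true
  enable_aws_load_balancer_controller = true
  enable_karpenter                    = true
}

module "gitops_bridge_bootstrap" {
  source = "github.com/gitops-bridge-dev/terraform-helm-gitops-bridge" # illustrative path

  cluster = {
    name     = module.eks.cluster_name
    # Add-on metadata from the cloud resources -> cluster secret annotations.
    metadata = module.eks_blueprints_addons.gitops_metadata
    # Flags and grouping info -> cluster secret labels.
    addons = {
      environment          = "dev"
      kubernetes_version   = module.eks.cluster_version
      enable_cert_manager  = true
      enable_external_dns  = true
    }
  }
}
```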
And then it installs Argo CD, and then you can use the Argo CD Terraform provider, or you can put it in the secret. So in the secret down here — where is the Kubernetes secret? — I'm talking to the Kubernetes API to create the secret, and I put in the annotations, which are the metadata, and the labels. With those two things in the cluster secret, we can use the ApplicationSet cluster generator. Anyone familiar with the Argo CD ApplicationSet cluster generator? A couple of you. That's what Nick is going to show.

So basically you terraform apply this folder, you get all the add-ons installed from Argo CD, and the only thing Terraform is in charge of is talking to the AWS API — plus this small bootstrap, which is one of the patterns. And as you can see, we have Argo CD here with all the add-ons installed, the 13 add-ons, out of the box, in a very easy way. So with that, I'll give it to you, Nick, to show the second part of this demo. Let me show you. We are here. Go ahead, Nick.

So Carlos demonstrated deploying open source Argo CD into the same cluster that Argo CD is going to be managing. The next demo I'm going to show is having Argo CD outside of the clusters that you want to manage. This changes the deployment pattern slightly, but the premise itself stays the same. Essentially we've got our GKE cluster — so this pattern is cloud agnostic: you've got a Kubernetes cluster and you've got cloud metadata that you want to pass to your applications. Then we have Akuity managing our Argo CD instance and connecting the managed clusters that we're creating with Terraform to Argo CD, as well as putting that metadata into the labels and annotations of that cluster configuration, to make it available to the cluster generator in the ApplicationSet. Then we ultimately use the Argo CD Terraform provider to create that initial bootstrap application. And this is always the fun part with bootstrapping: there's always that one application you have to create at the start so that you can manage the rest of the applications from Git with Argo CD. So we give that responsibility to Terraform — create that initial bootstrap application — and everything after that becomes GitOps.

So that's the high level. Let's go into what we're actually working with. We're talking about GKE, but the premise is the same. We've got a cluster created with Terraform — I can zoom in a bit here — nothing special there. We've got a VPC created, and we're creating a static compute address that we ultimately want to pass as an annotation to ingress-nginx. So this is that cluster metadata, or cloud metadata, that Terraform is aware of and Argo CD needs. Terraform becomes the source of truth for that metadata, and Argo CD becomes the consumer of it. That's all the same.

What comes next, and is different in this one, is that we're using the Akuity platform to create our Argo CD instance. This is similar to having a management cluster that runs your Argo CD instance — it's outside of the managed clusters that you want to actually deploy resources into. And then we've got our cluster configuration, which is just like the cluster secret when you're managing it with open source. We've got a cluster resource that we define from that variable that Carlos showed earlier, with the labels and annotations we want to put onto that cluster configuration.
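For the in-cluster variant Carlos showed, the "talk to the Kubernetes API to create the secret" step can be sketched directly in Terraform like this — a minimal, illustrative sketch (the annotation key and the `gitops_metadata` lookup are assumptions, not the repo's exact code):

```hcl
# Create the Argo CD cluster secret straight from Terraform, carrying the
# Terraform-known metadata as labels and annotations.
resource "kubernetes_secret_v1" "argocd_cluster" {
  metadata {
    name      = "in-cluster"
    namespace = "argocd"

    labels = {
      "argocd.argoproj.io/secret-type" = "cluster"
      environment                      = "dev"
      enable_external_dns              = "true"
    }

    annotations = {
      # Cloud metadata only Terraform knows, e.g. the IRSA role ARN.
      external_dns_iam_role_arn = module.eks_blueprints_addons.gitops_metadata["external_dns_iam_role_arn"]
    }
  }

  data = {
    name   = "in-cluster"
    server = "https://kubernetes.default.svc"
    config = jsonencode({ tlsClientConfig = { insecure = false } })
  }
}
```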
And this resource also deploys the Akuity agent into the clusters so they can connect back to Argo CD, but it's all the same: we're getting that cluster metadata from Terraform and putting it somewhere Argo CD can get to it, which is the cluster configuration. Then I mentioned that bootstrap application. This is the Argo CD Terraform provider, which is currently maintained by a member of the community, and we're using it to talk to the Argo CD API to deploy that initial bootstrap application. You could just as easily do this with REST calls to the API, or with a local-exec provisioner calling the Argo CD CLI. So that's the Terraform side of it: we've got our cluster metadata, we're putting it in our cluster configuration, and we're connecting that cluster to Argo CD.

Let's see what that looks like from the ApplicationSet side. Here is that ApplicationSet we've been talking so much about, where we're using the clusters generator — we'll zoom and enhance, right? So we've got our cluster generator, which has values. These are arbitrary values we define for that generator, where we're saying: this is the name of the add-on chart we want to deploy, this is the version of that chart, and this is the repository to pull that chart from. So all of the chart information is still in Git — it's still GitOps, just like you would hope. Then we use selectors to choose which clusters get which add-on. You saw all of the enable_<add-on> examples in the Terraform configuration; this is where the selector checks for that label to determine whether we should deploy that add-on to this particular cluster.

The next piece of special sauce is that we actually have three cluster generators in this example, with a top-level merge generator that brings them all together. The advantage of doing this is that we can take the label that defines what environment the cluster is in and use it to mutate which chart version — which add-on version — we deploy to that particular cluster. So in this example, the default chart version is 1.13.1, and that gets deployed to any cluster whose environment label isn't staging or prod — typically dev. Then we've got two other cluster generators: one that's searching for the environment staging and changing which version of the add-on we deploy to that set of clusters — a slightly older version — and then prod's running a really old version. Classic story. But this is how you can determine which version of an add-on gets deployed to which set of clusters, using the labels we put on that cluster configuration.

Then we set the name and the sources deterministically, but the next special part is that we set the values files we want to pass to that add-on chart, and we do it in a very particular way, to add layers to these values. First off, we say ignore missing values files: if we haven't set a specific values file that overrides the defaults, don't worry about it, just continue on. But the important part here is that we have three layers of values files. We have the base, which is the default: any cluster that's going to get that add-on gets these values for that chart. Then we layer on top of that the values file specific to the environment it's running in, and we're pulling that from the — can I move over?
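The generator half of that ApplicationSet can be sketched like this. The merge-over-clusters structure and the selector/values fields are standard ApplicationSet syntax; the add-on (cert-manager), chart versions, and label keys are illustrative stand-ins for what the demo uses:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: addon-cert-manager
  namespace: argocd
spec:
  generators:
    - merge:
        mergeKeys: [server]          # later generators override earlier ones per cluster
        generators:
          # Default: every cluster that enables this add-on gets the current version.
          - clusters:
              selector:
                matchLabels:
                  enable_cert_manager: "true"
              values:
                addonChart: cert-manager
                addonChartVersion: v1.13.1
                addonChartRepository: https://charts.jetstack.io
          # Staging clusters pin a slightly older version.
          - clusters:
              selector:
                matchLabels:
                  environment: staging
              values:
                addonChartVersion: v1.12.5
          # Prod clusters pin an even older version. Classic story.
          - clusters:
              selector:
                matchLabels:
                  environment: prod
              values:
                addonChartVersion: v1.11.4
  template:
    metadata:
      name: 'addon-{{name}}-cert-manager'
    spec:
      project: default
      source:
        repoURL: '{{values.addonChartRepository}}'
        chart: '{{values.addonChart}}'
        targetRevision: '{{values.addonChartVersion}}'
      destination:
        server: '{{server}}'
        namespace: cert-manager
      syncPolicy:
        automated: {}
        syncOptions: [CreateNamespace=true]
```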
We're pulling that from the metadata in the cluster configuration here. So we're saying: from the metadata.labels of the cluster configuration, determine which environment it's in, and use that values file for that add-on. And then finally we say: if we have a cluster that's a little special — it's a unicorn, it's not like the rest of them in that environment — you can add overrides for that add-on using a values file specific to that cluster. If there's nothing specific to that cluster, just don't create that values file, and it'll use what's determined from the base and the environment that cluster is in.

And then finally — I was talking about getting that cluster metadata that's determined by Terraform and putting it into the values; we've finally gotten to that point. That annotation that Terraform adds to the cluster configuration, with the IAM role that we want to pass to the values of this Helm chart — we're adding it here from that cluster configuration. So the ApplicationSet cluster generator pulls that information from the cluster configuration, and we can use it in templating the applications; there's a sketch of that below, after the takeaways.

So we'll step back and see what that actually looks like. I've got my Argo CD instance here. If I go to my GitOps Bridge and look at my cluster configuration, we can see that this cluster has all the labels and annotations determined by Terraform on it, which includes things like which add-ons we want to deploy to it, the cluster name — anything we want to use in the applications. And then ultimately that looks like this, where we've got our bootstrap workload here — that one application — our bootstrap add-ons. So we've got our bootstrap add-ons application, which is the application deployed by the Argo CD provider in Terraform, and then it creates all the ApplicationSets. There's something like 30 or 50 in here, but you'll notice only one of them is actually producing an application, and that's because it's the only one where the cluster has a label saying "I want this one deployed to this cluster." And that application is taking advantage of the metadata we put into the cluster configuration, to take something known only to Terraform and use it in the values for that Helm chart. I think that's the whole demo.

And now — that's the whole premise. If you come out of this talk with anything, these are really the four things I wanted to distill it down to. First, don't use Terraform to manage Kubernetes resources. They have totally different reconciliation loops. Terraform — I don't know, I maybe run it every couple of weeks, maybe when somebody wants something different. Kubernetes is constantly reconciling the live state toward the desired state. So you shouldn't manage Kubernetes resources with Terraform, because they're more likely to change — that's why we have GitOps. Second, stepping back: Terraform is the source of truth for certain cloud metadata. The cluster name, the IAM role, the load balancer IP — those aren't known to Kubernetes inherently, so Terraform is the source of truth for that information. Third, you should leverage ApplicationSets to extract metadata from the cluster configuration: Terraform knows this cloud metadata and puts it into the cluster configuration, and ApplicationSets extract it and use it in the applications. And finally, it's good to group your clusters by label, so that you can say everything in this environment should get these add-ons with these values.
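Here's the template half referenced above — the layered valueFiles plus the Terraform-known annotation flowing into the Helm values. The `ignoreMissingValueFiles`, `$values` ref, and `{{metadata.labels.*}}` / `{{metadata.annotations.*}}` templating are standard ApplicationSet/Application features; the repo URL, directory layout, and annotation key are illustrative:

```yaml
  template:
    metadata:
      name: 'addon-{{name}}-external-dns'
    spec:
      project: default
      sources:
        # Git repo holding the layered values files, referenced below as $values.
        - repoURL: https://github.com/example-org/addons-config   # illustrative
          targetRevision: main
          ref: values
        # The add-on Helm chart itself.
        - repoURL: '{{values.addonChartRepository}}'
          chart: '{{values.addonChart}}'
          targetRevision: '{{values.addonChartVersion}}'
          helm:
            releaseName: external-dns
            ignoreMissingValueFiles: true      # missing layers are simply skipped
            valueFiles:
              # Layer 1: base defaults for every cluster that gets this add-on.
              - $values/environments/default/addons/external-dns/values.yaml
              # Layer 2: overrides for the environment this cluster is labeled with.
              - $values/environments/{{metadata.labels.environment}}/addons/external-dns/values.yaml
              # Layer 3: overrides for this specific (unicorn) cluster, if the file exists.
              - $values/clusters/{{name}}/addons/external-dns/values.yaml
            values: |
              serviceAccount:
                annotations:
                  # Cloud metadata known only to Terraform, read from the cluster secret annotation.
                  eks.amazonaws.com/role-arn: {{metadata.annotations.external_dns_iam_role_arn}}
      destination:
        server: '{{server}}'
        namespace: external-dns
```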
Did I miss anything? No? So the call to action is to come try it out. We've got this repo that we were poking through for this demo; it's available on the GitOps Bridge GitHub organization, and we highly recommend you try it out. The goal is to establish a set of patterns that is cloud and provider agnostic, so that ideally your infrastructure as code hands off the metadata that your GitOps needs. Come try it out. We've got Terraform and Argo CD examples, but we're looking for Pulumi and Crossplane, and frankly, Flux could come in and use these patterns too. And one more thing here, very important: I brought my dad, and now he's going to take a picture of this. Oh, I can take a picture. Well, he's very proud of me. Thank you, Dad. And then, sorry, finally — yes, we have a workshop at the end of the day today, at 3:50, that goes over these patterns in a hands-on way. So if you want to come and try this out for yourself, you get to spin up your own Argo CD instance and get some clusters. We're going to use dev containers — they're really cool if you haven't tried them yet — and you can get hands-on experience with this. Yeah, try it out and get hands-on experience today with this new project. And please rate this talk; it gives us good feedback, and we're looking to improve. And please add me on LinkedIn — I love connecting with people. Thank you.