Hello and thank you for joining the session. In this session, I will describe how to manage SaaS applications using infrastructure as code and a GitOps workflow. I will describe the solution and also do a live demo. By the end of the session, you'll be able to take some of these principles and implement them on your own systems. Let's start. My name is Eran Bibi and I'm the co-founder and chief product officer at Firefly. We are a cloud asset management solution for DevOps and cloud engineers, and we automatically transform cloud resources into infrastructure as code. Before that, I was the head of DevOps at Aqua Security, and in general, I've been doing DevOps for more than 10 years. I really like to learn new technologies and then explain them to others. I'm going to describe two principles. One is GitOps, and the other is using infrastructure as code to manage SaaS applications. The most common use of infrastructure as code is deploying cloud infrastructure. We have tools like Terraform, and I'm going to use Terraform in the examples in this discussion. With Terraform, you basically describe your resources and then use the Terraform binary to deploy them into the cloud itself. You enjoy all the benefits of writing code: using a Git flow with pull requests and peer reviews, embedding scanning tools in your CI/CD, and basically doing a shift left for infrastructure provisioning. One of the nice things with Terraform is that there are SaaS extensions, SaaS providers, that let you manage those tools with Terraform. You can see here in this slide that using Terraform, you can manage Akamai, PagerDuty, New Relic, GitHub, Datadog, Cloudflare, and much more. We basically take the concept of describing cloud resources and apply it to the other kinds of resources and configuration that you have in your SaaS tools. On the left-hand side, we have GitOps.
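To make the idea of a SaaS provider concrete, here is a minimal sketch of a Terraform configuration that wires up the Datadog provider; the version constraint and the variable names for the credentials are illustrative choices, not something shown in the session:

```hcl
terraform {
  required_providers {
    datadog = {
      # Official Datadog provider from the Terraform registry
      source  = "DataDog/datadog"
      version = "~> 3.0"
    }
  }
}

# Credentials are passed in as variables rather than hard-coded in Git.
provider "datadog" {
  api_key = var.datadog_api_key
  app_key = var.datadog_app_key
}
```

With this in place, any Datadog resource (monitors, dashboards, users, and so on) can be declared in HCL and applied exactly like cloud infrastructure.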
With GitOps, you basically have a workflow where once you provision something into Git, into one of the branches — it can be the main branch, it can be a different branch — you have a component that reconciles that manifest into the real, actual state. And I will explain it in one minute. So first of all, look at the number of downloads that we have for SaaS providers for Terraform. You can see this list on the Terraform registry; you can browse the providers and see whether one of the SaaS tools that you are using has a provider. Again, these are open source, the community contributes to the providers, and there is a very nice download rate. For example, the Datadog provider has 32 million downloads, meaning there are a lot of teams managing their Datadog configuration using Terraform. I will also use Datadog in the demo in this session. So on the other side we have GitOps. In its natural form, GitOps is when you push your manifests into the main branch, or a different branch, and the GitOps operator reconciles them into the Kubernetes workloads. So the main use case of GitOps right now is provisioning changes to your Kubernetes workloads. But what we are doing in this discussion is taking this concept and these tools and combining them in order to use that reconciliation for SaaS management as well. The main tools in the GitOps ecosystem are Argo CD and Flux, very cool open source tools. Argo has a very nice UI, and both are essentially GitOps operators: components that listen to your Git repository, looking for changes. And once you introduce a change, meaning a developer pushes a change into Git, they check whether it correlates with the real state of the cluster and reconcile the manifests that you have in Git into the real, actual running configuration. So you have a continuous delivery workflow which is fully automatic.
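In Argo CD terms, the reconciliation loop just described can be sketched as an Application manifest; the repository URL and names here are illustrative, and the `selfHeal` flag is what makes the operator override manual drift with the Git state:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: datadog-config-sync      # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/datadog-gitops   # illustrative repo
    targetRevision: main
    path: infra
  destination:
    server: https://kubernetes.default.svc
    namespace: datadog
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes back to the Git state
```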
The developer just needs to push stuff into Git, and eventually the real state of the cluster is going to change. And if somebody makes a manual change to the cluster — in the Kubernetes example, if somebody runs a kubectl command and edits one of the deployments or one of the other components, like config maps — the GitOps operator will reconcile and override the manual change with the source of truth, which is the Git main branch. So I'm going back to the demo that I'm going to present. The traditional way right now for managing Datadog is going through the Datadog UI, which is very easy to use and very nice, I will say. And this is what we call ClickOps: when you do an operational task using a UI and dashboards rather than using code. So if a developer, a DevOps engineer, or an SRE — whoever is managing the monitoring in the organization — would like to introduce new configuration into Datadog, they will basically go into the dashboard and click add monitor. And then the configuration will be on the system. The downside of the ClickOps approach is that you don't have versioning or change tracking for the configuration. And it's not immutable; you cannot replicate it to different instances. So if you have, for example, a Datadog tenant per region — say, one for the US and one for Europe — and you would like to replicate everything and keep it consistent, the better way to do it is using infrastructure as code, because then you are tracking the changes, you have immutable configuration, and you have the option to deploy the same manifests many times, even for disaster recovery purposes. So if I take the infrastructure as code management for Datadog and also combine it with GitOps, I get a workflow that looks like this. I have a developer that pushes the new monitor — the change they would like to make in Datadog — into the main branch, going through all the CI/CD and pull request kinds of flows.
But once it is merged into the main branch, I have the reconciliation components. In my case, I'm going to use Argo CD, the Flux subsystem, and the TF controller. Those components work together in order to reconcile the Terraform manifests that I have in my main branch into Datadog. So if someone makes a manual change in Datadog, Argo CD will automatically override it and align it with what I have in my Git. In this way, I will be able to enjoy all of the benefits of infrastructure as code for managing SaaS applications. So I'm talking about Datadog, but it's applicable to any SaaS that has a Terraform provider. Another example is Grafana Cloud, but it doesn't have to be a monitoring system; it can be any other SaaS provider that you are using. Here I put some examples: you can use it for managing Okta, Auth0, even GitHub itself, and much more. So let's go to the demo. What I have here is my Datadog dashboard. This is the user interface of Datadog. And even if you are not familiar with it, are not using it, and have never seen it before, it's very intuitive. In my example, I would like to add new monitoring, basically adding a new alert to my system. And the traditional way, as I mentioned, is to go to this screen, click new monitor, and then go through the very nice UI that I have here, choosing exactly what I would like to do. And once I complete this wizard, I will have another line item here describing the monitor I just introduced, and that's it. If I would like to use Terraform to do the same activity, I will go to the Terraform registry site, to the Datadog provider, and look for the datadog_monitor resource. So this is HCL code, the Terraform language syntax, that basically describes how to introduce a monitor into Datadog. I have the name of the resource, the type of the resource — datadog_monitor — and some fields, including mandatory fields, that I need to fill in.
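A sketch of that resource, adapted from the example in the Datadog provider documentation — the query, thresholds, message, and tags below are illustrative values, not the exact ones from the demo:

```hcl
# A Datadog monitor declared in HCL; applying this creates the alert
# in the Datadog tenant configured on the provider.
resource "datadog_monitor" "gitops_demo" {
  name    = "AAA - this monitor was created using GitOps"
  type    = "metric alert"
  message = "Monitor triggered. Notify: @hipchat-channel"

  # Alert when average EC2 CPU over the last hour crosses the threshold
  query = "avg(last_1h):avg:aws.ec2.cpu{environment:foo,host:foo} by {host} > 4"

  monitor_thresholds {
    warning  = 2
    critical = 4
  }

  tags = ["team:devops", "env:demo"]
}
```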
Basically, this is the configuration. This piece of code is the Terraform configuration that I can use in order to provision new monitoring in Datadog. But for this topic, I would like to do it the GitOps way: I want to push it into my Git repository, have the magic happen, and have it automatically land in my Datadog configuration. As I mentioned, I'm going to use Argo CD. I'm not going to cover the installation of Argo CD in this session, but it's very easy. In this case, I just spun up a kind cluster, which is a Kubernetes cluster that I'm running locally on my computer, and I used the Argo Helm chart to deploy it; within a few minutes, I had this local Argo CD instance ready for me. I use a version with FSA, which is the Flux Subsystem for Argo. This will allow me to deploy the TF controller. This component, the TF controller, is the component that reconciles Terraform configuration. So in order to use Argo and provision Terraform manifests the GitOps way, you need to have this FSA embedded in your Argo CD — it's very easy to get it there — and you also need the TF controller, which I'm now going to deploy in this demo. So let's create a new application. I will call it tf-controller, in the default project. Again, everything here is local; for production you would need additional configuration, but just for the demo, I'm going to use the defaults. I'm going to ask it to use the Flux subsystem — this is needed for the TF controller to work — and to auto-create Flux resources and auto-create namespaces. This is in case I don't have a namespace; it will automatically create it for me. Then apply out of sync only, and the repository — I'm going to use the Helm chart. The TF controller is an open source product maintained by Weaveworks. So this is the address; I'm going to use the Helm chart, and the name of the Helm chart, of course, is tf-controller, very nice.
And let's see here, okay, tf-controller, and I'm going to use the latest version, 0.5. The destination is my local cluster, and I'm going to deploy it in the namespace that was already created by the FSA, which is called flux-system. So again, this is an infrastructure component; it's not the application itself that I'm going to use to sync the Datadog configuration, but I just want to demonstrate how easy it is to set up the TF controller. It took a few seconds; I click create, and I don't have permission, sorry for that. Let me quickly log into the system using my credentials. So now I have it running with administrator permissions, and the TF controller is in place. I click sync to deploy the Helm chart. It takes a few seconds, and you can see here all the components of the TF controller; this is what will allow my application to be reconciled using Terraform inside Argo CD. Very nice. What I'm going to do next is deploy the application that will sync my Datadog configuration. I will call it datadog-config-sync, in the default project: use Flux, auto-create namespaces, apply out of sync, and automatically create Flux resources — these are the mandatory settings that I need for that. And I'm going to put in the repository that I'm going to use in this demo, and of course I will share all of the links from this presentation. So let's go to my GitHub, okay. Let's copy this one, very nice. So, the main branch, and in this case I would like to point at the infra configuration, which is a subdirectory I have. Let's have a quick look at how the configuration is organized in my repository. I have two folders. The infra folder basically describes the reconciliation: which Git repository I would like to reconcile, whether I would like only to do a dry run or to actually apply the configuration, and of course the path to the Terraform configuration that I would like to reconcile.
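The reconciliation piece in the infra folder boils down to a Terraform custom resource for the TF controller; this is a sketch with illustrative metadata and source names, and `approvePlan: auto` is what switches from a plan-only dry run to actually applying:

```yaml
# Terraform object consumed by the TF controller: it watches a Git
# source, plans the HCL under spec.path, and applies it when approved.
apiVersion: infra.contrib.fluxcd.io/v1alpha1
kind: Terraform
metadata:
  name: datadog-config-sync     # illustrative name
  namespace: flux-system
spec:
  interval: 1m                  # how often to re-check the source
  approvePlan: auto             # apply automatically; omit for dry-run only
  path: ./terraform             # subdirectory holding the HCL
  sourceRef:
    kind: GitRepository
    name: datadog-config-repo   # illustrative GitRepository name
    namespace: flux-system
```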
So this is a Flux configuration; it is inside the infra subdirectory, and the code for the new monitor is inside the terraform directory. There I have three files, which is a typical structure for Terraform. I have the main file; in my main file I have the configuration that I would like to introduce. This is a Datadog monitor — the same thing, just copy-pasted from the example in the Terraform provider documentation. I named it "AAA - this monitor was created using GitOps". The reason I put the "AAA" prefix is just so it appears first in the list of monitors, because they are sorted alphabetically. And I have the provider file, which describes the provider and the credentials I'm going to use in order to integrate with Datadog from Terraform, and the variables file, which I'm using for the Datadog API key and application key. Again, these are secrets that I'm storing in Kubernetes, and I highly recommend doing that for any kind of sensitive data. So this is the structure: one part describing the reconciliation, one describing the code itself of the monitoring. And I'm going back to my Argo screen. So I will ask Argo to listen to the infra path, where I have the Flux configuration in place. The destination is my local cluster, and I will put it in a new namespace. So let's see if everything is all right. This is the application name, datadog-config-sync, manual sync, everything here is there. This is my repository; it's basically going to listen to this repository. And okay, seems okay. Let's create it, very nice. What I'm going to do here is just click sync in order for everything to be synchronized, and we will wait a few seconds to see if everything goes well. Okay, I just clicked refresh. I have my repository, and then it picks up that I need to deploy something using Terraform, so I have these components as well, including a Terraform service account.
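The variables file mentioned above can be sketched like this; the variable names are illustrative, and the point is that only the declarations live in Git while the actual values are injected at runtime from a Kubernetes secret:

```hcl
# variables.tf - credential inputs for the Datadog provider.
# Values are supplied at runtime (e.g. from a Kubernetes secret via the
# TF controller), never committed to the repository.
variable "datadog_api_key" {
  type        = string
  sensitive   = true   # redacted from plan output and logs
  description = "Datadog API key"
}

variable "datadog_app_key" {
  type        = string
  sensitive   = true
  description = "Datadog application key"
}
```

Marking the variables `sensitive` keeps the keys out of Terraform's plan output, which matters here because the plans surface in the GitOps tooling.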
This is the TF controller kind of object. Let's wait a few more seconds, and I should be able to see the new monitor appear in my monitoring state here. Let's just refresh it. Okay, not yet; it might take a few seconds. Let's go back into Argo CD and see if everything is okay here. Okay, now I see that everything is in sync. Let's go back, refresh, let me see. Great. As you can see, I now have "AAA - this monitor was created using GitOps" appearing in the Datadog dashboard. This was created by Terraform using the GitOps workflow. So what I'm going to do right now — because the default setting was manual sync, meaning the operator needs to go to the Argo CD UI and click synchronize to get the changes — is make it fully automated. I go into the details and click enable automatic synchronization. The meaning of this is that once I push a change into the master or main branch, Argo will automatically apply the change. Once that's done, I'm going to create a new monitor. I go to my Terraform configuration — this is the monitor that is already deployed — and replicate it. I will give it a different name; I keep the "AAA" prefix and call it "OSS Latin America is awesome". And just for the sake of the change, let's also change the tags here: team open-source and, I don't know, demo. I will keep the rest of the configuration the same, because this is just for demonstration. So what I expect is that once I approve this change in my Git workflow, it will automatically appear in my Datadog dashboard. Let's see if it's working. This is my terminal: git status. I see that I modified the main.tf file. Let's git add and git commit: "new monitoring for Datadog". Assuming it's a protected branch, I would have to open a pull request, have somebody approve it, and of course run the CI/CD.
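The duplicated resource can be sketched like this, assuming the same illustrative query and thresholds as the first monitor; only the resource name, display name, and tags differ:

```hcl
# Second monitor, copied from the first with a new name and new tags.
# Everything except name and tags is unchanged, just for demonstration.
resource "datadog_monitor" "oss_latam" {
  name    = "AAA - OSS Latin America is awesome"
  type    = "metric alert"
  message = "Monitor triggered. Notify: @hipchat-channel"

  query = "avg(last_1h):avg:aws.ec2.cpu{environment:foo,host:foo} by {host} > 4"

  monitor_thresholds {
    warning  = 2
    critical = 4
  }

  tags = ["team:open-source", "demo"]
}
```

Because each `resource` block gets its own address in Terraform state, committing this block alongside the original creates a second, independent monitor rather than modifying the first.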
But right now I'm going to skip that part and push the change directly to the main branch, which is of course not recommended, just to keep things short. I git push and put this change directly into the main branch of my repository. Now what I'm expecting is that Argo CD will identify that there is a new commit and will reconcile the change, through the TF controller, into Datadog. So let's go directly into Datadog and maybe wait a few seconds to see whether I have a second monitor in place with the new name and the new tags. I guess it will take a few more seconds for Argo to pick up the change. Maybe I'll just click refresh here to see whether there is a new commit ID — not yet. Let's go here; everything is in sync, everything seems normal. Let's wait a few more seconds. Amazing. Now I see I have the new monitor that I just introduced. Only by pushing the code into master, I changed the configuration. This is the team open-source tag and the other tag, demo. And again, this is the same kind of monitor, just with a different name and tags, but I think you get the point. So let's conclude what we did here. I have a developer pushing into the branch. I have a CI/CD pipeline, peer review, and other kinds of gates to make sure that everything going into the main branch is production ready and qualified, using manual and automatic tests, of course. And once it's in the main branch, I have this technology stack — a Kubernetes cluster, Argo CD, the Flux subsystem, and the TF controller — reconciling my Terraform configuration that manages the SaaS and applying the changes into the system. And it takes something like two minutes from the time a developer pushes to the main branch until the runtime configuration is ready. So I have here all of the material, the stuff that I used for the demo, and that's it. I hope you enjoyed this session.
You can reach out to me via LinkedIn, email, or Twitter, and feel free to ask any questions. And of course, I would recommend checking out Firefly — we have a free tier. Take care and thank you very much.