Hey everybody, thanks for waiting with us here. It looks like a few people are joining in, so we'll go ahead and get started. My name is Robbie, and I'm really excited to be talking with everybody today about automating your source of truth. What does that actually mean? Well, we're going to be talking about GitOps, and in GitOps your source of truth is source control. But what do day one and day two look like when your source of truth is source control? How do you go about automating what's happening in your source of truth? Nick, next slide, please. I'm going to introduce a few folks today, but first a quick piece of housekeeping: this webinar will be recorded, FYI. We have Nick, who's a developer advocate here at Harness, and Mark, who's a product manager at Harness. Nick, a quick second about yourself?

Yeah, hey everyone, my name's Nick. As Robbie said, I'm a developer advocate here. My specialty and experience is in the software-defined infrastructure space and CI/CD. I'll be talking through a lot of concepts, and we'll have a demo as well as we go through this presentation. Mark?

I'm Mark. I've been doing infrastructure as code and declarative operations for well over a decade, and GitOps for about as long as the term has existed. Before coming to work at Harness, I worked at Canonical on their cloud infrastructure-as-code solution, and then at Weaveworks as product manager on Weave GitOps and various products. So I've been around the GitOps industry for quite a while. I've heard all the questions, I think. Just kidding, I always hear new ones.

All right, folks, this is our agenda for today. The way we're going to approach this presentation is as a combination of the philosophical, the strategic, and the very practical. The goal is to leave you with both strategies and tactics, systems and tools, that you can use as you set up a well-architected GitOps workflow. It's going to be a little bit meta as well, meta in the sense of: how do we automate that which we're automating? That's the title of the webinar. We're going to develop a strategy for using GitOps to manage our GitOps, and we'll define all of these terms along the way. So, our agenda as shown here: our first topic is the paradox of automating your automation, where we'll describe the problem statement. Then we'll move to something more specific, the automation of GitOps platforms: what does the typical toolchain look like, and what are some best practices? And then we'll get even more specific with our tools, integrating GitOps, Terraform, and Harness, and seeing how they can work together to very quickly manage your infrastructure in a declarative way, and more than that, to declaratively manage that which you use to manage your infrastructure. That'll be the focus here.

Yeah, thanks Nick. It also looks like several more people have joined the webinar, so a little more housekeeping: if you have any questions, hit the Q&A in Zoom and ask them there; I'll be happy to answer. We might save some of the questions for the end, so feel free to post them as they come up and I'll try to answer them or pass them to the panelists. So yeah, please continue.
Yeah, definitely. All right, let's start off. Again, we're going to get a little philosophical at first and begin with the problem statement. This first topic is the paradox of automating your automation. And why do we have a paradox here? Well, the root of it is the fact that these days we use software to define everything. If we can describe something in information terms, we can use software to manage it. Twelve years ago, Marc Andreessen of Andreessen Horowitz wrote a famous article, published in the Wall Street Journal among other places, where he made what was a bold statement at the time: software is eating the world. The point was that even traditional industries, manufacturing, the service economy, were ultimately being described and run by computers and software. And very quickly in our industry, infrastructure became one of the very first things to become software-defined after applications themselves. We started with the virtualization movement in the early 2000s, which moved to cloud services, and now we want to make programmable basically anything we possibly can. What that resulted in was a decision we needed to make, and also an increase in complexity. We realized that if we have infrastructure that hosts and maintains our software, we also need to use software to maintain the systems we rely on to maintain our other systems. Are you noticing a bit of a chicken-and-egg problem here? How do we automate the tools that we're using to automate our deployments and our applications?

Now, this is actually a problem we've encountered before. There's a term that's about 55 years old now: the software crisis. If you search for it, the term originates from 1968, at a conference conducted by the North Atlantic Treaty Organization (NATO). As teams started to use computers to define and manage defense systems, and honestly everything else, we realized that as systems get more complex, there's a tendency, if you're not careful, for them to become brittle and expensive, which just means hard to maintain. Especially when we're used to imperative runbooks; we'll talk about imperative versus declarative in a moment. The idea is that if we're used to a sequence of steps, the more steps and dependencies we add, the greater the possibility that something breaks and takes down the whole chain of steps. As a result, in more recent years a whole cottage industry has grown up around managing that complexity in tools, job roles, platforms, and systems. You might be familiar with these: if you go to O'Reilly.com, or look at any of the main DevOps knowledge centers, you find site reliability engineering, declarative state management, DevOps itself. It's no longer just how you build a feature and deploy it; it's how you manage the whole end-to-end process, also using software. So that's a little bit of setting the stage for where we're at. And where things become really relevant to us is the rise of desired state development.
And really all that means is this: when you first learn programming, or when you develop applications and write code in general, one of the ways you can write that code is imperatively. It's the base level of computational execution: you do a sequence of steps. You run this step, then you run this step; there might be an if statement that takes a certain path depending on conditions; and ultimately you have tons and tons of gated logical flows you can follow. When you have a very good understanding of all the possible conditions and all the possible dependencies you need to manage, imperative can work, because you have a very clear decision tree of the things you need to do. However, over time, as we've added layers of software automation, declarative has become much more relevant. If we distill these words: imperative means do these things. I want these things to happen, in this order, given these dependencies. Declarative says: give me this outcome. I don't necessarily know, or even care about, the exact linear sequence of automation steps you take; I just want this end result. It's like saying I want the birthday cake to look like this; I don't necessarily care about the order in which you add the flour, the sugar, and the eggs. It matters somewhere, but it doesn't matter to me, as long as the result meets the spec. So yeah, go ahead, Mark.

And this is really important if you're doing things at scale. If there's a one-in-a-thousand chance that something is going to go wrong and you do it a million times, it's going to go wrong a lot of times. And if you have to figure out how to handle and solve every one of those things yourself, you can't do it. This is why industrial automation, robots in factories, almost always works on the principle of torque this nut to this specification, not turn this nut 34 times, because 34 turns may not be correct in this case. You just have to think differently if you want to achieve scale, and declarative operations is a big part of that.

Yeah, exactly. Thanks Mark. And for that reason, in a lot of areas, not every one, declarative has won out as the desired model when we have the choice. A whole toolchain has developed around it, where we can focus on things like business logic and governance, who has access to our stuff and in what role, and abstract away from the imperative set of steps that we let computers do for us. It allows us to focus more on the end result that drives the business, instead of focusing every time on the internal plumbing of deployments. What it really comes down to is that automated deployment tools end up doing the heavy lifting. You're probably familiar with some of these. Some are proprietary and attached to a single service: AWS CloudFormation is a proprietary declarative state tool for managing AWS resources. Tools like Terraform and Pulumi have become the standard for managing infrastructure in general. And Kubernetes, as a container orchestration platform, has a declarative model at its core: when you write a manifest, you say I want these resources to be created, and when you apply the manifest, Kubernetes and its control plane do the work of creating those resources.
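To make the imperative-versus-declarative contrast concrete, here's a minimal Terraform sketch, not from the webinar demo, just an illustration. Imperatively you'd script the steps yourself: check whether a namespace exists, create it if not, handle the race if another script runs at the same time. Declaratively, you state the outcome and let the tool converge on it:

```hcl
# Minimal declarative sketch (illustrative). The namespace name and
# kubeconfig path are assumptions, not values from the webinar.
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config" # assumes local kubeconfig access
}

resource "kubernetes_namespace" "guestbook" {
  metadata {
    name = "guestbook" # the desired end state; Terraform works out the steps
  }
}
```

Running `terraform apply` twice is safe here: the second run sees the namespace already matches the declared state and does nothing, which is exactly the property an imperative script has to earn by hand.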
So it's a model that's tried and true, and as we'll see, it can be applied to all sorts of other areas of management as well. Now, we said there's a paradox; the title of this section is automating the automation paradox. So what is the paradox? Well, the point of automation is to do things automatically; it's supposed to reduce our manual workload. At the same time, when you write a computer system to automate a computer system, that's another system you need to set up and maintain. So if we're not careful, we arrive at a contradiction where the very tool we're using to maintain our other systems itself needs to be maintained. In addition, a goal of automation is to maintain consistency and reliability in our infrastructure. As Mark mentioned, imperative workflows that rely on humans to repeat them have a lot of potential failure points. Well, the tools we choose to automate our workflows themselves need to be reliable: if we're using a broken or inconsistent automation tool, that defeats the purpose of automating in the first place. The automation itself has to be reliable and consistent. And again, automation allows for a declarative source of truth so we can focus on the end result, like we mentioned, but we also need to know what that desired final state is. The computer will only do what we tell it. Even with a great system of automation in place, whatever we tell the computer to create is exactly what it will create, so we need to make sure our final state is well designed too.

Okay, so this is where we can introduce a core strategy that was created and adopted in response to this problem statement, from the need for declarative methodologies for automating our infrastructure and our workloads: the concept of GitOps. If you're familiar with the term DevOps, it's about a culture and a set of tools; to put it really simply, it's having operations follow a development model. There's a whole industry around how to explain and describe DevOps, but ultimately it's managing operations under a development culture. GitOps is very similar conceptually. It's a term coined by Weaveworks as early as 2017, and the Wikipedia-quoted description is: an operational model that uses Git as a single source of truth for declarative infrastructure and applications. That really means: store what you want on the infrastructure side in source control, and think like a developer when managing not only your applications but also the workloads, environments, services, and infrastructure they run on. So the core GitOps principles are, first, that it's declarative: we describe what we want. Second, it's versioned and immutable: in the case of Git we have commits, so if needed we can roll back to a previous state, for example to undo a change. And third, it's pulled automatically, and there's a lot packed into that statement. One of the reasons is purely a networking and infrastructure one: it's generally a lot easier to call out from your internal infrastructure than it is to accept access as ingress from the outside.
Or maybe I'm using that term backwards, but the point is: it's harder to push onto your infrastructure, because of firewalls, than it is to pull something when you're already inside it. So the idea is that we can pull state from a Git repository and update our infra accordingly. And it's continuously reconciled: when we make a change in the source code management system, our infrastructure, or whatever system we use to manage it, detects the change and updates our resources accordingly.

There's a lot going on in this chart, and I'll talk through a little of it; we'll have another slide that explains some of these steps as well. The lesson I really want you to take away is that, just like DevOps, you can get tied up in the minutiae of tools, but GitOps is a combination of culture plus tools. You have the tools you need to make it work, but it's also a culture, a way of doing things, so understand that there are a lot of ways you can build this. If we look at this flow: the left-hand side shows that all our configurations, infrastructure and applications, are stored in a Git repository under source control. Now let's go over to the right for a second. There's a whole set of tools, like Argo CD and Flux and other GitOps tools, that live inside your infrastructure, or at least inside your network. They detect the changes that have taken place in your Git repository and then reconcile your infrastructure, shown here as one or more Kubernetes clusters, or deployments, or resources in general, based on the changes made to those configurations as defined in your Git repo. And the reason we have the Harness logo here is that there are a lot more decision points you run into once you have this basic workflow down. As we'll see in a moment: how do you manage things like role-based access control, who can access your stuff and make changes? How do you manage advanced deployments, beyond just a single rollout? How do you manage governance and secrets and all those important aspects you need at scale, even once you have the core workflow down? We'll look at a similar graphic again in just a moment, and that's where we want to begin moving into the practical automation steps. So Mark, would you be willing to talk through this other graphic?

Sure thing. This central graphic is a fairly standard depiction of the GitOps loop. In the center is this bright orange line, the immutability firewall. That is a commit in Git that will be applied. And on the right, you have the whole series of things you need to do: develop new features, integrate new libraries, integrate new versions of upstream components, validate that everything works, at which point you commit to Git: I want to change the production repo. Well, GitOps does not really give you anything to solve problems on that side of the line. It says: once you've committed to Git, I'm going to make sure that's what's running, keep it running, and tell you if something changes. So what people who use GitOps still need is a promotion pipeline where they can say: I have an app in dev, I've run all the tests, I want to promote it to the test environment and have all the tests run there. If that succeeds, I want a PR to be issued that will update the production repo.
Maybe I want to manually push sync on that, so I can control the exact timing of when production gets updated. So what we have here, across the bottom, is a pipeline, a Harness continuous delivery pipeline, that is running an application with GitOps, but you still have all of the traditional software delivery pipeline concerns around that commit to Git. And here we have a rollback step: a script runs and validates whether the deployment has worked out. If it has not, it triggers a rollback, which reverts to the previous commit, or reapplies the previous commit on top of that branch, and issues a pull request. So you can get automatic promotion, automatic rollback, and integrate with any of the other pipeline tools, validations, approvals, et cetera, whatever other systems you have, and still integrate with your GitOps endpoint delivery. That's the key thing we bring on top of GitOps in terms of managing this at scale: how do you manage promotions from one environment to the next. And we can do that for all your GitOps entities: Argo CD with Kustomize and Helm, or Flux using Kustomize and Helm, et cetera. We can do Crossplane and Terraform resources, and we can integrate with controllers that help you manage those, if that's what you need.

Yeah, exactly. There's a lot of opportunity here, because again, one of the key lessons is that we can start to think more like software engineers to manage everything. But at the same time, that creates a lot of decision points where we have to ask: how do we set up this workflow in a way that's reliable and consistent? And that's where the tools discussion starts to become relevant. If you're an engineer, you think in terms of systems and concepts, but you also think in terms of tools, because that's what you're using day to day. So you probably have a mental toolchain you're used to for each step of the software development process. You might use Git for source control; here's the Semgrep logo for static analysis in CI; Selenium for automated testing; the Flux and Argo logos for continuous delivery; Terraform for declarative infrastructure management; and maybe something else for configuration management. Teams can and have pieced together toolchains to manage the entire GitOps flow, or the software development flow in general, like I was just describing. This is obviously non-exhaustive; it doesn't show all the tools you might need. And what you arrive at is another decision point: how much do you rely on a toolchain of specialties, versus starting to incorporate platforms that can act as a management layer as well? Again, all of these tools are great, and they're the standard; I think all of the ones on this slide are open source. But ultimately you need to manage the complexity, and even with your entire toolchain you still need to decide: what is actually your single pane of glass, if you have one, for managing changes? When you're pressing go on the changes you make to your application or your infrastructure or both, how do you handle synchronization?
We mentioned earlier that part of the goal of GitOps is that you make a change to a Git repository and those changes are reconciled into your infrastructure environment as a new or updated deployment, or some infrastructure definition change. How does that synchronization take place, and what mix of manual versus automated change happens? And what about, again, all the other important things: governance, auditing, role-based access control, secrets? What is used to manage all of those? And finally, once you want to get advanced: okay, we have a deployment running using our toolchain. What if we're rolling out a new feature and don't want it available to all users at first? Do we want a canary or incremental rollout? What if we have a blue-green approach with a cutover point? How do we declare and specify that using a GitOps model? With a purely open-source toolchain, there's no off-the-shelf approach for managing these kinds of more advanced situations you'll encounter. So that's really where platforms like Harness come into play. They work alongside whatever toolchain you might have set up. Even as you go through the GitOps loop, with open-source SCM and CI and your config management and infrastructure management tools in place, Harness also provides a platform that adds that management layer, the pipeline approach Mark mentioned, giving you a very specific, customized, but also systematized way to manage changes to your deployments. And it includes those advanced features mentioned here: canary rollouts, blue-green rollouts, incremental rollouts, but also any custom workflow you might choose.

Okay, so what we're going to go into now is the actual setup. Remember, the goal is automating the automation: how can we set up the entities we need so we can have an end-to-end deployment? In other words, we need to set up the systems that perform and manage our GitOps workflow. That means our SCM, Argo CD for example, and then, because we're including the Harness platform, the platform entities that manage the overarching process in that management layer. And luckily for us, tools like Terraform are very well suited to that process. So ultimately we can use infrastructure as code as our core management layer for unifying the automation. If you work in DevOps or SRE or infrastructure, or even as a developer, you're probably familiar with infrastructure as code, at least as a concept. Many teams already use infrastructure-as-code tools to automate their infrastructure, even if it's just VM and network provisioning and management. You can do the same with GitOps. The point of Terraform is that it's a declarative infrastructure management tool. I have this funny little definition of terraforming on the slide: if we think of terraforming in the original sense, altering some far-off planet to make it human-habitable, then applied to software, we're altering our infrastructure so it can support our applications and users. We're provisioning what we need so our application can be properly hosted and our users can access it, in a predictable, declarative way.
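As a reminder of what that looks like at the plain VM-and-network layer, here's a minimal hedged sketch; the project, region, and image values are placeholders, not anything from the demo:

```hcl
# Classic infrastructure-as-code at the VM/network layer (illustrative).
# Project and region values are placeholders.
provider "google" {
  project = "my-demo-project"
  region  = "us-central1"
}

resource "google_compute_network" "demo" {
  name = "demo-network"
}

resource "google_compute_instance" "demo_vm" {
  name         = "demo-vm"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    # Attach the VM to the declared network; Terraform orders creation
    # automatically from this reference.
    network = google_compute_network.demo.name
  }
}
```

The same declarative pattern, describe the end state and let the tool reconcile, is what we're about to apply one layer up, to the GitOps tooling itself.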
What's nice is that Terraform has the providers we need to provision all of this. Aside from the typical compute, storage, and networking resources Terraform can hook into, we also have a Terraform provider for provisioning Harness entities. Terraform has a GitHub provider, if you want to map, or even create, the associated GitOps repository that holds your declarative state. Terraform has Kubernetes integrations and Kubernetes provisioning as well. Now, Kubernetes itself is very declarative, so teams will often mix and match how much of Kubernetes they manage through Terraform, but that option is there too. And again, comparing and contrasting with the original definition of terraforming: the goal is that we are constructing, provisioning, and maintaining the environment that hosts our users, plus all the infrastructure needed to host our applications and deployments, at whatever level of complexity we might require.

Okay. So based on what we've learned about GitOps so far, where does Terraform come into the mix? This slide talks through Terraform as the infrastructure as code for managing our GitOps entities. This is where automating the automation comes in: we're using Terraform as our software-defined infrastructure to set up our GitOps infrastructure, because again, we're trying to further abstract away whatever we would otherwise have to manage imperatively. So we have layers here: we have our GitOps layer that does the work of deploying our application based on changes to the application state, and we can also use GitOps plus Terraform to manage the GitOps tools themselves. We're automating the automation. We can use Terraform, maybe a Terraform module that we wrote, to create all of the resources, Harness entities, Argo CD resources, needed so that we can then run the GitOps workflow on our cluster deployments themselves. What that allows us to do is apply software-style practices to as many layers of our management as possible. Our base-level infrastructure can then undergo code review. You can apply your choice of Git flow, your branching and tagging model for developing software, to managing your infrastructure as well. You can choose your repo architecture: do you want everything in one single large repository, or do you want to break it up into a polyrepo approach? How do you tag releases, whether for changes to your application, your infrastructure, or both? And on the branch side, how do you manage your branching model? That's ultimately up to you, but it becomes possible with this model. And from the governance side, you also get to manage permissions and code ownership, choices that are well established in software development, applied to operations management as well.

Okay, so to make this practical, we're going to do a demo. To set the stage, we'll set up a Harness GitOps workflow that does a few different things. It will use Terraform to provision the Harness entities, the entities in Harness that we need, and I'll compare and contrast that with the UI and the result. We'll also set up a repo relationship that has the application details we want to deploy.
And then Terraform will also do the deployment using GitOps. Once we set up our GitOps entities in Harness, we'll kick off the application deployment, and Harness will handle the automatic synchronization: from then on, if we decide to update our SCM, whatever changes we make to our application or infrastructure will automatically be reflected in our environment.

So I'm going to do a little bit of a changeover here. I'm going to exit full screen and first introduce the resources we're going to use. First of all, am I connected here? Let me reconnect. So I'm using a shell environment. This is for convenience's sake on my side; it's just a Linux VM running in Google Cloud, but it could be any arbitrary Linux environment. It gives me access to a cluster that I have provisioned, and I'll authorize my access to it here. So I have a Kubernetes cluster. It's just one node, which doesn't really matter, but I have namespaces I can deploy to. The point is: I have a machine that can access a Kubernetes cluster that I can deploy things to. That's step one. Step two is Harness. This is harness.com, just a free account I have, and this is going to be my management layer. I don't have anything deployed yet, but this is where I'll create the entities referencing my source code repository and my cluster, and then my application deployment. And once we have this set up, it gives us a lot of options for creatively maintaining and expanding on our deployment model. So I have Harness, I have a cluster, and I have a couple of other resources as well. One of these is the actual application I'll be deploying: a GitHub project that I've forked into my own GitHub namespace, containing a sample deployable web application we call guestbook. It's just something we can deploy, then manage, and then see the end result of. And then, to make this easier, I have a Terraform module, and the Terraform module is what we're going to apply to create all the different Harness entities. I'm going to clone down this Terraform module in a moment, walk through it a little, and then basically just run a terraform apply. That will spin everything up in Harness, which in turn will deploy our resources to the cluster, and we'll have that connection to Harness as well.

All right, let's start off here in my Linux environment. I'm first going to clone down the repository that has the Terraform module I'll be working with, and I'll explain what's going on in this repo as well. So I'll git clone that, and if I then cd into the directory, I see there are several files. Now, this is all one Terraform module; I just have the different resources I'm creating broken up into separate files. The first core file we need to worry about is the providers file. It shows that we can hook into the Harness Terraform provider to create Harness resources; this is where that's mainly defined. We can also access the default providers Terraform gives us, for deploying to a Kubernetes cluster and creating other resources. And then the actual resources we're creating with Terraform are defined in three files, which I'll walk through next.
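Before that walkthrough, here's roughly the shape of that providers file, as a minimal sketch. The registry addresses are as I remember them; check the Terraform registry for exact versions and authentication arguments:

```hcl
terraform {
  required_providers {
    # The Harness provider, used to create the Harness entities.
    harness = {
      source = "harness/harness"
    }
    # A standard provider the module also leans on for cluster actions.
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

# Account ID and API token are supplied via variables or environment
# variables rather than hardcoded in the module (see below).
provider "harness" {}

provider "kubernetes" {
  config_path = "~/.kube/config" # assumes local kubeconfig access
}
```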
So, we have a file called agent.tf, and we haven't talked about this yet, but ultimately the agent is a resource that runs in your cluster and talks to Harness. It's the messaging component that allows Harness to communicate with the cluster so it can deploy the resources you need. Remember, there are often a lot of networking considerations where we need to make sure entities have permission to communicate with a cluster. The agent is one of the core pieces ensuring that the things we create and reference in Harness can actually be deployed to our cluster when we ask Harness to manage a deployment. Next is resources.tf. It defines the things we're creating here, like the repository entity that references the example app we're deploying, a reference to our cluster entity, and the service and environment entities created in Harness, which again represent the collective way we manage our deployment. And finally, the deployment itself is referenced in a file called app.tf. If I open this up, it deploys that guestbook application, referencing the service and environment entities we're creating back in that resources file. We're setting up automatic synchronization, and I also have the repository referenced here: it's going to deploy whatever is specified in my Git repo. That means if we update that Git repo, the deployment updates automatically.

All right, so we're almost ready to apply it; we have a couple of other things to set up. We have a file here called variables.tf, which contains all of our defined variables. Most of these have defaults, but a couple of things we need to specify include our Harness account ID and, down here at the bottom (I'll scroll down, sorry, I could have just skipped to the end), an API token. I'm just going to use an API token that will live only for the life of this demo, so we can authenticate to Harness for the resources we're creating. The actual values for most of these variables are in, sorry, terraform.tfvars. If I open that up, most of them have default values that I'm just going to stick with. I am going to make one small change and reference my fork of the sample app we're using. But I'm not going to put my Harness account ID and personal access token directly in this file. I'm going to compromise a little here: I don't have a dedicated secrets manager to work with, so what I'll do is export my account ID as an environment variable that Terraform can pick up, and I'll do the same with this personal access token, which I'm going to delete as soon as we're done; it's a very temporary thing, just for the life of this demo. We called the variable harness_api_token, and now those two values are available as well. And then honestly, at that point we're off to the races. We're going to run terraform init; if you've used Terraform before, you should be well acquainted with this workflow. It imports the providers we specify, including Harness. Then I'll run terraform plan; running terraform apply will also run plan, but I'll run plan first just to make sure I didn't accidentally foobar something.
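To make those files concrete while plan runs, here's a trimmed sketch of the kind of resources a module like this declares. The resource and data source types are from the Harness Terraform provider as I recall them, and the attribute sets are illustrative and incomplete, so treat this as a shape to check against the provider docs, not a drop-in module. It also shows the one imperative escape hatch that will show up in the plan: a null_resource that applies the generated agent manifest with kubectl.

```hcl
# Illustrative sketch only: resource and data source names are from the
# Harness provider as best I recall; attributes are trimmed and several
# values are assumptions to verify against the provider documentation.

variable "harness_account_id" {
  type = string # supplied via TF_VAR_harness_account_id in the demo
}

variable "harness_api_token" {
  type      = string
  sensitive = true # supplied via TF_VAR_harness_api_token; never committed
}

variable "sample_app_repo_url" {
  type = string # your fork of the guestbook sample app
}

# agent.tf: the GitOps agent entity in Harness. Creating it yields an
# install manifest that still has to be applied to the cluster.
resource "harness_platform_gitops_agent" "test_agent" {
  identifier = "testagent"
  account_id = var.harness_account_id
  name       = "test-agent"
  type       = "MANAGED_ARGO_PROVIDER" # value as I recall it; verify
  metadata {
    namespace = "argocd"
  }
}

# resources.tf: repository entity pointing at the forked guestbook app.
resource "harness_platform_gitops_repository" "guestbook" {
  identifier = "guestbook"
  account_id = var.harness_account_id
  agent_id   = harness_platform_gitops_agent.test_agent.identifier
  repo {
    repo            = var.sample_app_repo_url
    name            = "guestbook"
    connection_type = "HTTPS" # assumption; check the provider schema
  }
}

# The small imperative bit: fetch the agent's install YAML and apply it
# with kubectl. The deploy-YAML data source name is an assumption.
data "harness_platform_gitops_agent_deploy_yaml" "agent" {
  identifier = harness_platform_gitops_agent.test_agent.identifier
  account_id = var.harness_account_id
  namespace  = "argocd"
}

resource "local_file" "agent_manifest" {
  content  = data.harness_platform_gitops_agent_deploy_yaml.agent.yaml
  filename = "${path.module}/agent.yaml"
}

resource "null_resource" "apply_agent" {
  triggers = {
    # Re-run the apply only when the generated manifest changes.
    manifest_sha = sha1(local_file.agent_manifest.content)
  }
  provisioner "local-exec" {
    command = "kubectl apply -f ${local_file.agent_manifest.filename}"
  }
}
```

And if you'd rather consume the packaged module than write the resources yourself, usage would look roughly like this; the source path and input names are placeholders for whatever the harness-community module's README documents:

```hcl
# Hypothetical consumption of the packaged module (placeholder names).
module "harness_gitops_demo" {
  source = "./terraform-harness-gitops" # e.g. a local clone of the module

  harness_account_id  = var.harness_account_id
  harness_api_token   = var.harness_api_token
  sample_app_repo_url = "https://github.com/<your-user>/gitops-guestbook"
}
```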
And it says we're creating nine resources. That includes all the Harness entities we're creating, plus all the Kubernetes actions we're doing and the resources we're deploying to our cluster. And then finally, crossing our fingers and praying to the demo gods, I'll run terraform apply. It runs plan again, and we're off. This is going to take a minute, so a quick note while it runs: you see this null resource definition; this setup is not 100% declarative. It's mostly declarative. The reason there's a tiny bit of imperative dependency is that when we create the GitOps agent entity in Harness, that creates a manifest file we then need to apply to our cluster. So there is a small ordering dependency: we need to wait for that YAML to be generated before we can apply it to the cluster. Call it 96% declarative; it's mostly there, and it still works out.

So in about 30 seconds or so we'll see this stuff created. We can actually see some of it already. For example, if I go to Harness and go to settings, I should now see that an agent was created. It should come up as healthy in just a moment, once it's fully deployed to the cluster, but the entity reference was just created. And if I open a quick new tab here, I should see: hey, look at all these resources that were just made. We see some Argo CD entities, and we also see our agent, which we're calling test agent, that was just deployed. And it looks like the apply just finished, and it appears to have worked. Let's see. We can check this out in a couple of different ways. Let's start in Harness: if I refresh the page, the agent looks healthy. Again, this is the reference in Harness to something we actually deployed to our cluster to manage all of our cluster deployments. If I go back to settings, actually back to the overview, we see a few other entities were created. There's a cluster reference here in Harness, literally the cluster I deployed to, and you can see it's able to connect. We see a repo connection; notice it's my forked sample app that lives in GitHub. And then we see the actual deployment: the application entity, the guestbook deployment. We can see it's deploying based off the master branch of my forked application. If I go to the resource view, what's nice is we also see the latest commit, which references the current state of that application. We see that ultimately what was created was a deployment and an accompanying service, and it has a replica count of three, so we actually have three pods. And we see that reflected locally in our cluster as well: if I run kubectl get pods, we have, aside from the Argo infrastructure and our agent, three deployment pods, just as shown in Harness. If I run kubectl get service, we see our guestbook UI service, and I can actually access the application. And then, oh sorry, my connection dropped; let me reconnect. There we go. If you were in a localhost environment, you'd just go to localhost:8080. I'm in a cloud-hosted shell, and Google Cloud Shell has a preview that's basically the equivalent of running this on localhost. So here's the guestbook app: a basic little web app that pops up.
It's not super interactive, there's not a whole lot to do in it immediately, but it shows a successful deployment. If you were to dig into the code base of that sample app, it's deploying a sample Argo CD application from a Docker image that Kubernetes pulls. So yeah, go ahead.

Pretty impressive. As someone who's put these steps together manually, tying all of this stuff together, that was very, very clean. It really addresses a paradoxical question: if you're, say, a DevOps or platform engineer, you're a purveyor of developer experience, but what is your own experience like as a platform engineer? I liked this a lot; it was a great explanation, all the entities got created, and every one of those entities in Harness is something you'd have to wire up somewhere by hand without this automation. So we have automation taking care of that. We have a few questions coming in; mind if we take a few of them really quickly? Yeah, for sure. Cool. So I'm going to paraphrase some of these questions. Let's see. The first one is, well, I guess first a compliment to Nick. Are these examples in the Terraform registry, so people could build their own?

Yeah, so there are a couple of pieces here. The first is what I didn't directly show: the Harness Terraform provider. I just said it was imported, but if you want to look at it in more detail, let me pull it up in a different tab. We have our provider in the official Terraform registry; this is how I'm creating those Harness entities, the ones you see in the Harness UI, using Terraform. Now, that's just the reference. What we've also done is compiled this into an overall module, which is what I was ultimately running here. We created a module that creates these Terraform resources, and you're more than welcome to clone it down, edit it, and use it for your own purposes. It's in the Harness community repository. Again, the key area is going to be that terraform.tfvars file, where you can substitute whatever values you'd like to create your GitOps resources. So yeah, those are the two pieces, and that sample app is also publicly available.

Awesome, thank you for that. I have another question here; this one might actually be for Mark. Why even use Harness over Argo CD?

Well, we made a decision at Harness to integrate with Argo CD and Flux rather than to try to replace them. What we provide is the pattern around how do I promote from one environment to another. That's something neither Flux nor Argo has specifically taken on. They are very much focused on: there's a commit in Git, let me make sure the world matches that commit in Git, which is good; that's good software design. They do one thing and do it well. But there's a need to manage promotion, to manage the full pipeline, to integrate with scanning and all the other CI/CD steps you might have that add value and validate your system before you go into production. You need automation around rollbacks. And in this example we had one agent, but when you have multiple agents, multiple Argo CD instances across multiple clusters, you get multiple UIs.
With Harness, we have that agent, so we can funnel that data together and provide a central dashboard across Argo CD applications, and now, in beta, Flux applications as well. So whatever you're doing with GitOps, what we're here to do is help you manage promotions. It's not tear out Flux and Argo and replace them with Harness; it's how do we augment them, how do we give you a central dashboard, and how do we help you manage the full pipeline?

Yeah, that makes sense; a very complementary approach. And you're exactly right: with GitOps agents, the hammer drops immediately. The second your declarative state changes, it's as if someone says, we have to go match that state right now. It's very instant; it's designed to be...

Well, an Argo application is designed as one instance of some software running in one environment. And if you have multiple instances that you want to update in some sort of pattern, you have to have some tool that helps you do that. There are various options, but I think the Harness pipeline has the deepest set of integrations and is the most flexible and easy to use.

Yeah, it makes sense. And we have a quick bit of time left, one or two more minutes, so I'll take one more question and then we'll wrap up. I might want to reach out to the person who asked this one afterwards; it's a little vague, but it says: why GitOps?

Not sure exactly what's meant, but I'll just start with how GitOps came to be. I had been doing declarative operations at Canonical; we had a product called Juju. Kubernetes is declarative operations, and I was doing declarative operations long before Kubernetes. When you have declarative operations, there's this obvious question: I have a goal state; how do I manage changes to the goal state? And the insight of GitOps is that managing changes to your goal state, like your deployment YAML in Kubernetes, is managing changes to a set of files. We have a tool, Git, that is already well designed for developer workflows: flexible, useful, with the properties of idempotence, and distributed, so you can keep a copy of it in the cluster and not have to reach out of the cluster every time you check the current state. All of those things were already there in Git, so combining declarative YAML files and Git just made sense as a way to manage change in the system, and as an alternative to the big, complex software change management systems that have existed in the past. This is very much a developer-driven approach to managing change, applied to the problem of managing change in your infrastructure definitions and your application definitions, not change in your application source code, but it's the same patterns, and everybody knows them. So I think that's the main reason for GitOps: manage your change over time, track it, understand what has happened. It helps reduce errors, pull requests help reduce errors, and it helps you find and fix errors much more quickly, because you can see what changed, when, and why.

Yeah, git log and the commit history. And don't forget git blame. Yes, git blame is very useful; I use git blame all the time.
And looking through the commit message to see the reasoning behind a change. There are a lot of small, useful quality-of-life improvements like that in Git for managing change over time. Yeah, perfect. I know we're right at time. Thank you to the presenters, and to the audience for joining us today. Sounds like we had some great conversation; we should do another one just about GitOps strategy. But with that, on behalf of Nick and Mark, thank you, everybody. This concludes our webinar. All right, have a good day. Good day, everyone. Cheers, everybody.