Hi, everyone. So this is the talk GitOps for Helm Users. I'm Scott Rigby. I'm on the Developer Experience team at Weaveworks. I'm involved in various CNCF things, including co-chairing the GitOps Working Group, and I also co-maintain CNCF projects, including Helm, Flux, and OpenGitOps. So that's why I'm giving this talk on using Flux to implement GitOps for Helm users today. Here's my Twitter handle, @r6by. I'm also Scott Rigby on all of the Slacks, and on GitHub. So feel free to reach out to me in any of those ways about open source projects or other things related to CNCF, GitOps, Flux, or Helm. Quickly, what to expect. I'm going to clarify the intended audience for this talk, just so you know you're in the right place. I'll level set for everyone by briefly introducing what Helm, GitOps, and Flux are for newcomers; that'll be very short. Then I'll explain some of the benefits of Flux specifically for Helm users. I'll give a few notes on comfort levels around migrating from the Helm CLI, whether in event-driven CI or used manually, to GitOps with continuous and progressive delivery. Then finally we'll end with a demo showing you how Flux makes it really easy to do just that. So you may be using the Helm CLI by hand and/or in CI automation, or you may be just getting started with Helm. However long you've been using Helm, whether for years in very complex or simple situations, or whether you just downloaded it today and started playing with the commands, this talk is definitely for you. I've met a lot of you over the years in the Helm community: on the Kubernetes #helm and #charts Slack channels, in weekly Helm dev meetings, IRL at conferences, through GitHub, and in the Twitterverse generally, et cetera. So I'm really looking forward to following up on your questions after this talk, and I meant what I said about reaching out on these topics.
So, the brief context for new users: you're probably watching this because you have some interest, or at least curiosity, in these topics. But I'm not going to assume any specific knowledge about GitOps or Flux, or Helm in fact, though that'll be the quickest one. So we'll start with a short intro to all of these, to be welcoming to all Helm users and to level set so everyone can follow along. And then we'll get right into the meat of what more seasoned Helm users are looking for. So what is Helm? It is a widely adopted package manager for finding, sharing, and deploying apps on Kubernetes. In the context of Helm, you're going to hear terms like charts, releases, and revisions. If you're not sure what those are, here's a super quick intro. Similar to apt or yum for Linux, Helm manages packages for Kubernetes called charts. Charts are the actual packages that this package manager for Kubernetes manages. Charts contain a set of related Kubernetes resource definitions for applications. These application definitions can be deployed to a cluster as a set, along with user-specified configs: things you specify for how the application is supposed to behave in Kubernetes, or just behave on its own. The deployed Kubernetes objects and those user-defined configuration values together are called a release. Releases also have revisions, and that's important to know because when we talk about rollbacks, users can roll back to previous revisions if needed. And if you don't know, well, you probably do, but it's worth saying: when I call Helm a widely adopted package manager, it's wildly popular, and it's a big way that many people get introduced to Kubernetes, because it helps simplify some of the complexity around Kubernetes resources. And people are often not only interested in infrastructure, they're interested in what they can do with it, and that usually includes apps.
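For anyone new to these terms, the chart, release, and revision concepts map onto CLI commands like the following. This is a sketch of my own, using the podinfo chart that appears later in the demo; the `replicaCount` value is just an illustration:

```shell
# A chart repository holds charts; installing a chart creates a named release
helm repo add podinfo https://stefanprodan.github.io/podinfo
helm install my-release podinfo/podinfo

# A release's history is a numbered list of revisions
helm history my-release

# Upgrading creates revision 2; rolling back to revision 1 creates revision 3
helm upgrade my-release podinfo/podinfo --set replicaCount=2
helm rollback my-release 1
```

These commands need a running Kubernetes cluster, such as the kind cluster used in the demo later on.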
So Helm charts exist for most applications that can be run on Kubernetes. There are nearly 7,000 charts to choose from on the CNCF Artifact Hub, so go check that out at artifacthub.io. In order to guide you through the benefits that Flux brings to Helm, I'm going to briefly note some things that are in and out of scope for Helm by design. This is important to note so that you know which tool does what, and why you would even need something to extend Helm. So, Helm is a client and SDK only. That was by design for Helm 3, which has been around for several years now and is the current, and only supported, version of Helm. Helm provides the CLI directly, and the SDK is meant to provide the internal functionality of Helm so that other tools extending it don't have to be constricted by the limits of the CLI: its exact, specific output is meant for CLI use only, but the internal functions that power the CLI can be used in other ways. So here are some of the things that are designed to be out of scope for Helm. One of the big ones is CRD upgrades. This is a long asked-for feature that is explicitly outside the scope of Helm, and there's a dedicated page on this in the Helm docs. It's not just because we don't want to do it; there are very good reasons why you wouldn't want this inside the scope of a CLI tool. So please read that if you're interested. I don't want to spend too much time on it here, but it is something that users of charts that update CRDs care about. Other things, briefly listed, that are out of scope: you can read these on the slide, but I'll note them real quick. Managing or structuring multiple environments: most people use scripting of some kind for that, or some kind of tool to help do it. Helmfile has been a popular one over the years, and there are now others, including Flux.
Any kind of control loop or retry logic, beyond the under-the-hood short functional retries, is outside the scope of the client, because it is a CLI: it's meant for attended operations. You can automate it and script it, but you have to write logic around the CLI to catch any failures and retry if you need that kind of functionality; any other kind of control loop functionality has to come from outside that scope. The same goes for any kind of automated responses beyond automated rollback, which is an amazing feature of Helm, by the way. And automated drift detection is definitely outside the scope. Imperatively you can do some of this with the helm-diff plugin, and that is a wonderful tool, but anything else around automated detection of differences is outside the scope of Helm. So, a brief intro to GitOps, then a brief intro to Flux, and then we'll get to the benefits and the demo. GitOps is a set of principles for operating and managing software systems. These principles are derived from modern software operations, but they're also rooted in pre-existing and widely adopted best practices. So this is not a brand new thing at all; it's a current iteration of knowledge that's been around for a long time, just put together in a way that is now becoming very popular. The desired state of a GitOps-managed system must be declarative, versioned and immutable, pulled automatically, and continuously reconciled. Those are the four principles. A quick note about what we mean by declarative in principle one, because you may not know this: declarative and imperative are two different things. With declarative management, you declare to the system what you want the end state to look like.
The system then works to make this a reality, and usually reports on the status and progress of making that declared state happen. Over time, the way the system makes the declared state a reality can change, without the need for your declaration to change in any way. That is: you say "I want this," and the system does it for you on your behalf. Imperative management is telling the system what to do step by step. Instead of declaring what you want at the end, you tell the system each step you want it to take to achieve that end goal. The Helm CLI, for example, is a tool that offers imperative commands, just to put that in perspective. So, the GitOps principles in short: principle one is very similar to infrastructure as code, except that it applies to apps as well as infrastructure. Principle two is where the "Git" in GitOps comes from, although any system that fits the criteria defined by this principle can be used: any other version control or storage system that meets those criteria. Principle three is what differentiates GitOps from event-driven CI jobs triggered by changes in Git. Here the desired state is pulled by the system whenever the system needs it, for any reason, without needing to be triggered by some kind of action on the source repository. And principle four is where software agents are always assessing the actual system state and working to bring it closer to the desired state that you declared in your version control. Note that I didn't mention Kubernetes at all: the GitOps principles are agnostic about your system. But we're going to be using this in the context of Kubernetes, to explain how Flux can help you manage your Helm releases with GitOps. So that's the brief intro to GitOps.
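To make the declarative/imperative contrast concrete, here is a minimal Kubernetes sketch of my own (not from the slides). The imperative path issues step-by-step commands; the declarative path records only the desired end state for a control loop to reconcile:

```yaml
# Imperative: you perform each step yourself, e.g.
#   kubectl create deployment podinfo --image=ghcr.io/stefanprodan/podinfo
#   kubectl scale deployment podinfo --replicas=2
#
# Declarative: you record only the end state; a controller makes it so
# and keeps it so, even if someone later changes the cluster by hand.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: ghcr.io/stefanprodan/podinfo
```

The same Deployment declaration works whether it is applied by hand with `kubectl apply` or pulled from Git by a GitOps agent; that is what makes it declarative.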
The brief intro to Flux is this: it's a CNCF incubating project. We call it the Flux family of projects because it's a set of continuous and progressive delivery solutions from Git to Kubernetes. This includes the Flux CLI; the GitOps Toolkit, which is a set of controllers and Go packages that I'll show you a brief diagram of next; and Flagger, a progressive delivery tool that can be used on its own at this point, but is also heavily built with GitOps in mind and built to work with GitOps. So remember those things I said Helm doesn't do by design: Flux picks up where that leaves off. Where Helm is the packaging and release tool, Flux allows for collaborative, declarative, and automated management of complex environments. Among the benefits: Flux is the only CD (continuous delivery) tool that I know of, anyway, and that may change, that purely uses Helm's SDK. There's no shelling out to a binary, and it doesn't fork Helm to do it, either. This allows Flux to be very flexible and powerful, doing many things while maintaining a solid architecture and a low memory footprint in your cluster. And because we're big users of the Helm SDK, we contribute to Helm upstream quite a bit, so everyone gets the benefits, not just Flux. You know, benefits of open source. Some of the other really important things to note: Flux's Helm controller manages CRD upgrades. When I said that was out of scope for Helm, I focused on it because it's not out of scope for the Flux Helm controller, and that is huge news. While the Helm CLI has support for initial installation of CRDs that are required by the other resources in your chart, it doesn't handle upgrades, and that's by design, as I said. Normally this is a manual step for charts that install apps which update their CRDs periodically: think cert-manager, for example, and there are other very popular charts that do this.
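For reference, the Helm controller exposes this CRD handling as a per-release policy on the HelmRelease resource. Here's a sketch assuming a cert-manager release from a hypothetical `jetstack` HelmRepository; the `crds: CreateReplace` settings tell Flux to apply CRD changes both at install time and on upgrades:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: cert-manager
  namespace: cert-manager
spec:
  interval: 10m
  chart:
    spec:
      chart: cert-manager
      sourceRef:
        kind: HelmRepository
        name: jetstack
  install:
    crds: CreateReplace   # create CRDs on install, replacing any that exist
  upgrade:
    crds: CreateReplace   # also upgrade CRDs when the chart is upgraded
```

The default behavior mirrors the Helm CLI (CRDs created on install, skipped on upgrade), so this opt-in policy is what lifts the out-of-scope limitation discussed above.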
So the Helm controller allows you to do this out of the box, in an unattended way, and it's got options to let you fine-tune this automated process as you need: timeouts and other things. That's a huge benefit. The Helm controller also includes a dependsOn feature that lets you manage a tree of chart dependencies, complex or simple, without having to make a large umbrella chart, which is one of the main solutions when you're working with the Helm CLI only. This also saves a bit of memory, because umbrella charts keep all of that information in memory in order to do what they do. So that dependsOn feature is very, very helpful. Flux also makes it easy to manage multiple environments for Helm. It's built on controller-runtime, Kubernetes' core controller support, so it includes control loop and other retry logic out of the box, which can be extended as well. It gives feedback on how automating your Helm releases is going, through the Flux notification controller, which I'll show you in a second. And it includes automated drift detection, one of its big features: when the resources in the cluster that are defined by your chart diverge from the desired state you specified in your chart and your configs, it will notify you on divergence, or whenever anything goes wrong. So that's a slightly detailed introduction to Flux and the Helm controller. Here are some of the things Flux does, in short. I'm not going to go over them now; please take a look at the slide, and these are also on the homepage of the Flux website, fluxcd.io. Just note things like: Flux supports multi-tenancy, multi-cluster, and multiple Git repos, and it integrates with existing tools, almost all popular tools for working with Kubernetes. Take a look at some of these points; there's a lot of information on the Flux website about other reasons why Flux is useful and why you may want it anyway.
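The dependsOn feature mentioned above looks like this in practice. A minimal sketch with two hypothetical releases, where an app chart waits for its database chart to be ready instead of both being bundled into one umbrella chart:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app
  namespace: default
spec:
  interval: 10m
  dependsOn:
    - name: my-database   # reconcile only after this HelmRelease is Ready
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-repo
```

Each release stays independently upgradable and rollback-able, which is the part an umbrella chart can't give you.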
So in short, Flux is a set of specialized controllers that rely on each other, each doing a specific job. Here's just a brief slide, not an architectural diagram, showing what the different controllers do. I'll list them real fast. The source controller watches your defined sources, where you specify your desired state. So when you put your Helm charts in Git, and your other YAML files in Git, the source controller brings any changes you make into the cluster so that the other controllers can act on them. It does that very well. The desired state can be in a variety of popular formats: plain YAML, Kustomize overlays, or Helm charts. There's even a newer Flux Terraform controller that lets you store your desired state in that format, and I expect more controllers supporting other formats to come as needed by the community. The Kustomize controller, next on the list, gets your changes from the source controller, which has already brought them into your cluster, right? And it applies the plain YAML, along with any optional Kustomize overlays, to the Kube API. It's basically taking that information from your source and actually doing the reconciling of it against the Kube API. Those YAML files include Kubernetes resources, but they also include custom resources that other controllers, such as the Helm controller, rely on to do their job. That's really important to note, because some people who install Flux primarily for Helm wonder why they're getting the Kustomize controller as well, when they don't use Kustomize as a tool in their workflows. You don't specifically have to use Kustomize as a tool in your workflows.
Kustomize is just built into kubectl, and so it's built into the Go libraries as well. So that controller was made to handle both plain YAML and any additional Kustomize overlays you might be using on top of it. Once the HelmRepository and HelmRelease custom resources are applied to the Kube API by the Kustomize controller, or however you decide to get them in there if you want to do it your own way, the Helm controller uses that information, those custom resources in the Kube API, to automate your Helm releases for you. I'll tell you a little more about this as we go through the demo, but the notification controller keeps you posted about how things are going, and notifies you of any problem or divergence, as I mentioned; the notification controller is what handles that side for Flux. It can post to Slack, and to many different team tools that you need for notifications. And finally, the image automation controllers handle writing exact image versions back to your Git source, when you declare what you want for your images in your desired state as a semver range or a semver constraint. That's really helpful if, say, you want any minor or patch versions of a specific image to be updated automatically, because you trust that source, but you want an automatic PR opened whenever there's a new major version, or something like that. You can specify your own rules and your own constraints, but this is essentially version pinning for GitOps. So, about the Helm controller, in the middle there. I'm an experienced Helm user, contributor, and maintainer myself, and I wanted to contribute to Flux a few years ago because I was very interested in GitOps. After researching what was available, I found the Helm controller to be, for me anyway, the most stable, powerful GitOps tool for the wider Helm ecosystem. So I wanted to put my efforts there.
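The semver-constraint image automation described above is configured with an ImagePolicy resource. A minimal sketch, with an illustrative repository name, that auto-selects new patch and minor releases within a 5.x range while leaving major bumps to a human:

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: my-app
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: my-app        # an ImageRepository object that scans the registry
  policy:
    semver:
      range: 5.x        # pick the latest tag matching 5.x automatically
```

The image automation controller then commits the selected tag back to marked fields in your Git manifests, which is the "version pinning for GitOps" idea.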
There are other tools, and there are many good ones, and I'm hoping we'll continue to work toward more interoperability between them in the future. But in short, the Helm controller is built on Kubernetes' core controller-runtime, like I said before. That means anyone with Kubernetes knowledge can contribute. You may be here listening to this talk as a user, but you may be interested in contributing, so just keep in mind that it's made not to be a special snowflake, but to follow best practices and be very open to contributions. So let's take a very quick look at the architecture diagram of how these controllers work together to manage Helm releases, and then we'll move right on. Here's something you'll see on the Flux CD website: if you go to the Helm controller's components page, you'll see this diagram. A very short description: running flux bootstrap, which is optional but is the easiest way, sets up a Git repo for you if one doesn't already exist, installs the Flux components in your cluster, and mirrors those manifests into your Git repo for you, so you can see exactly what's going on. It imperatively adds custom resources telling the source controller to watch your Git repo, and it mirrors those manifests in your Git repo as well. From this point on, the Kustomize controller, as I mentioned before, periodically works to reconcile all those manifests with the resources running in your cluster. So while you do have to bootstrap Flux for it to actually do its GitOps magic, once those controllers are running, the Kustomize controller will be reading your manifests, including the manifests for Flux itself. From that point on you can really be doing GitOps, and you can use the bootstrap command to update your Flux installation, or any of your other tooling, including the Flux components themselves.
So that's one of the features of its design that I think is really excellent: you're doing GitOps right out of the gate. The Helm controller, as I mentioned, syncs the HelmRepository and HelmRelease custom resources, which I'll show you in a moment, and uses the Helm SDK to do all the Helm things you're already used to. That's kind of a summary of this slide for Helm users. So, okay. The benefits in short: Flux introduces an additional layer of reliability, consistency, observability, and auditability on top of the benefits of using the Helm CLI, whether manually or in CI. On comfort levels, and what you need to do to move from the Helm CLI to the Helm controller, right before I show you a demo of doing exactly that: it all starts with using Helm declaratively. You may want to look at the CNCF blog post just published by Tamao Nakahara, the VP of DX at Weaveworks, on how Flux lets you use Helm declaratively. There's a link to it in the slide deck that we'll provide to you later; please check it out, it's a very good write-up of how that works. You just need to make sure, one way or another, whatever tools you're using, that in order to do GitOps with Helm or any other tool, it's declarative first. Flux has a format for you to do that for Helm. For Helm CI automation users, think of this mainly as a process of decoupling your CI and your CD. And for any Helm users, whether you already have CI in place or whether you're just starting from scratch, Helm releases through Flux are properly separated into continuous delivery for you already, just by using the tool. So, a quick note on comfort: change can be scary for various people, but I assume you're here because you want that change and you know it's necessary.
So here's a list of tips to help convince those you need, to share ownership and ultimately to share the risk of making that change. There are a lot of good resources on the Flux CD website about this, about how to convince whoever needs to be convinced, and just know that there are mature organizations and risk-averse ones, large, small, old, and even new, that are adopting GitOps and using Flux. That's also fuel to help you make your case. And if you need to defer to additional experts to help with that, there are many people in the community you can find on Slack, as I said before, in the CNCF #flux channel and in other channels. And if you're in a business that cannot talk publicly about what you need, there are vendors, people who can help with a paid product in a paid way. If you go to the Flux CD website, the support page has a section called "My employer needs additional help," and it lists people who provide paid support. And if you're in a company that does provide paid support and you're not on that list, let us know and we will add you, or just make a pull request and add yourself. So, all right, it's demo time. I'm now going to stop sharing these slides and instead share a link to the Flux CD website, plus a gist that I made, and a terminal, and we'll go through this really quickly so you can see how easy it is to move from Helm releases managed with the CLI to releases managed with GitOps. So here's my terminal on the left, and here's the gist that I made on the right; the link to that gist is in these slides. Okay, so I've gone ahead and started this process so you don't have to wait for it to happen, but otherwise I'm starting from scratch here.
So there's nothing special, except that I've saved 55 seconds by creating a kind cluster already. There are instructions for users on Mac or Linux to use Homebrew if you want to; you can install Flux and kind any other way. You can also do this demo on a cluster that's not a kind cluster; I just recommend kind because anyone can do it without a cloud account, or paying for a Kubernetes instance to demo on, and it's cross-system compatible. So this should work out fine regardless of the system you're on. The gist is in shell, so it should be POSIX compliant. I'm using bash; if you find a bug in it in another shell, let me know, but this should get you on your way and at least help you there. Just make sure you have the most updated versions, which I do, and make sure you have a personal access token exported in your shell. The personal access token has to have repo scope. I have that. And just know that this is handled properly from a security point of view. Don't be scared when you hear that, even though a token like that gives any tool access to all of your repos in GitHub, if you're using GitHub like I am in this demo. That token is not passed in any way to any of the automated controllers, or to your cluster, at all. It's only used within the context of your terminal session, and then Flux takes care of security for access to your GitHub properly, by creating a deploy token in GitHub, loading what's necessary into the cluster, et cetera. So I'm just going to show myself, and you, that I have my GitHub token set, and that it's the length I expect for a GitHub token. I've created a kind cluster, and now I'm going to run a flux bootstrap command. I'll just paste it here, so it's starting while I'm explaining it. The output itself explains this.
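The setup just described looks roughly like this. A sketch, not the exact gist: the repository name and path are illustrative placeholders, and `--interval=10s` is the demo-only setting explained next:

```shell
# Prerequisites (Homebrew shown; any install method works)
brew install fluxcd/tap/flux kind

# A GitHub personal access token with repo scope,
# used only by this CLI session, never sent to the cluster
export GITHUB_TOKEN=<your-token>

# A local throwaway cluster for the demo
kind create cluster

# Bootstrap Flux into the cluster, backed by a personal GitHub repo
flux bootstrap github \
  --owner=<your-github-username> \
  --repository=<your-repo-name> \
  --personal \
  --path=<your-cluster-path> \
  --interval=10s
```

Bootstrap creates the repo if it doesn't exist, installs the Flux components, commits their manifests under the given path, and sets up a deploy key so the controllers never need the personal access token.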
But just to be clear on some of the settings I used: I wanted this to be in my personal GitHub, so it's under my scottrigby GitHub username. I set the interval to 10 seconds, which is unusual for a Flux installation that's not a demo. I'm doing that so that for demo purposes we'll never have to wait more than 10 seconds, and half the time we'll wait less than that. I think the default is one minute, and generally you may want to set this to something like five or ten minutes depending on your setup; that can be, and is, covered in separate demos. Also know that the path I gave creates the structure for where Flux writes those manifests in your Git repo, as I mentioned before. There's no gold standard for that, no specific way it needs to be done; Flux is completely unopinionated about it and is made to be very flexible in that way. That said, there are some good practices covered in other talks, and for now I'm just using a very basic path, where you might also have folders like clusters/staging, clusters/production, et cetera. So that's already done, and all components are healthy within the Flux system, so we're good to go. I'm going to go to where I keep my code. Notice that this all happened so far without writing any code, for bootstrap convenience. And to help make this process much easier on you, the flux bootstrap command does write this Git repo, as you can see. Well, let me show you now. Yes, it's there; I created it two minutes ago. You do not have to use the bootstrap command to install Flux. You can do it in various ways, and there are instructions in the Flux docs on how, but this is by far the easiest way, and it sets up a lot of best practices for you.
So I'm going to show you now that I've cloned the repo it made, and cd'd into it, and show you the files it wrote. You can inspect them whenever you want. The gotk-components.yaml shows you literally everything Flux needs to run; if you had these files and just did a kubectl apply, it would similarly install Flux, you would just then need to set up the deploy keys and other things yourself that the bootstrap command does for you in a handy way. Let's go ahead now, quickly, and I'm going to simulate a Helm release. You probably have one, or dozens, or possibly hundreds in your cluster that you made using the Helm CLI, either manually or through automation. I'm just going to use the podinfo chart, which installs the podinfo app written by Stefan Prodan, as an example, because it loads quickly and it's good for demos. And in order to show you something slightly more complex, I'm going to add custom values too, and show you how the Flux CLI lets you migrate those very easily from your existing Helm installations into a declaration for your Helm release. You can see I'm just going to use a few of them: replicaCount, logLevel, and, since I wanted to show you a nested one, ui.color. Those are here in the normal Helm way. Again, this is doing nothing with Flux yet; I just have Flux installed, waiting for us. I'll do a Helm CLI installation using the helm upgrade --install (or -i) command, making a release called my-release of the podinfo chart. And you can see you can use normal Helm tools, like helm list, to see that it's there, at revision one. So now we're going to quickly convert these to declarative custom resources that Flux understands.
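The CLI release just described looks roughly like this. A sketch: the value settings shown are my illustrations (red matches the demo's starting UI color, the others are plausible values), not the exact flags from the gist:

```shell
# Add the chart repo and install a release with a few custom values,
# including a nested one (ui.color)
helm repo add podinfo https://stefanprodan.github.io/podinfo
helm upgrade --install my-release podinfo/podinfo \
  --set replicaCount=2 \
  --set logLevel=debug \
  --set ui.color="#ff0000"

# Normal Helm tooling still works; the release is at revision 1
helm list
```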
So I'm going to create a custom resource locally for the Helm repository that maps to essentially the same thing we did with helm repo add: the name of the repo and the URL. It's very similar; instead we run flux create source helm, which creates a Helm repo definition for the cluster. We give it the URL, and I'll tell it the namespace; I'm using the default namespace just for now. And I'm going to export it into a file, as opposed to loading it directly, imperatively, into the cluster, because we want to do this declaratively, and I'm showing you how. So I'll write this into a source YAML file for podinfo, and I'll show you what that looks like. Sorry, I cleared my screen without trying to... oh, I see, I went to the wrong tab. Okay, thanks for your patience there. So let me show you what that looks like, and this is it: it ultimately shows the same thing, really just the information about the Helm repository. Now, there are several different types of sources for a Helm chart with Flux. This is just the most popular one, a Helm repo. There's also a Git source itself, a folder in Git, so you can have a custom Helm chart or something like that inside of your Git repo, and that will work. You can also use a storage bucket: any S3-compatible bucket, whether MinIO or any cloud provider's version of a storage bucket. And coming very soon, you'll be able to use OCI as a source. Now that that is a full feature in Helm, which is pretty big news, and congratulations to the rest of the Helm team on pushing that through too, we have now unblocked SDK users, including Flux, from doing that as well and providing that support.
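As I understand the commands on screen, that export step produces a HelmRepository manifest along these lines (the output filename and the `interval` value are illustrative):

```yaml
# Produced by something like:
#   flux create source helm podinfo \
#     --url=https://stefanprodan.github.io/podinfo \
#     --namespace=default --export > <file>.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 1m
  url: https://stefanprodan.github.io/podinfo
```

This is the declarative counterpart of `helm repo add podinfo https://stefanprodan.github.io/podinfo`: the same name-plus-URL information, but stored as a resource the source controller reconciles.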
So there's a design document in progress, and there's actually going to be a demo of the source controller working with OCI as a source at the next Flux community meeting, next Thursday. So please join that, or if you watch this recording afterwards, you can go back and look at the recording of that demo at the community meeting; it's pretty exciting. So next we're going to create a HelmRelease custom resource locally. The reason the repo and the release are separated is because you may have one Helm repo with many charts in it, or you may have many releases that use the same chart, or some of the same charts that sit in the same repo with other charts. So you don't want to have to give the information about your Helm repo as a source over and over, especially if it's a private repo and you need auth and other things for your source. You do that once, and then we reference it when we create the HelmRelease custom resource. Just to show you the example with values, I'm going to use the Helm CLI as you normally would to get those values into a file, and show you how easy it is to migrate them into your HelmRelease when you create your custom resource. So I'll do helm get values, and show you what that looks like in the my-values file I just wrote. That's what I said we had set when we deployed the chart. And now the Flux CLI has a command that makes it super easy to create this HelmRelease resource. I'm calling it my-release, because that's the release name we used. The source is the Helm repository I just mentioned; it's a pointer, not all the information about it, because that object will be there on its own. The chart is the podinfo chart. We're giving a semver constraint for the version: it's just got to be above 4.0.
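Pulled together, those two steps look roughly like this. A sketch: the filenames are illustrative, and the flags match the settings just described:

```shell
# Capture the release's user-supplied values from the cluster
helm get values my-release > my-values.yaml

# Declare the release, referencing the HelmRepository source by name
# and importing the captured values, exported to a file for Git
flux create helmrelease my-release \
  --source=HelmRepository/podinfo \
  --chart=podinfo \
  --chart-version=">4.0.0" \
  --namespace=default \
  --values=my-values.yaml \
  --export > helmrelease-my-release.yaml
```

The `--source` flag is the pointer to the HelmRepository object created earlier, which is why the repo's URL and credentials never need repeating per release.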
We're telling it the default namespace, and we're importing the values automatically from that file. This could be loaded directly into the Kube API, but this command exports it into a file so we can do this declaratively. I'm going to do that, and I'm going to show you that file just so you see what this looks like. And there we have it. You can see that the values for the chart are their own map, and that works for any chart out there. So we no longer need this temporary values file, and I'm going to remove it. You don't necessarily have to, but I'll do it to stay tidy, so I can `git add` everything, because now we're going to deploy this purely by pushing to Git. And that will work because the Flux components, specifically the source controller, are watching our source, and when we push there, it will be automatically applied to our cluster. So we can do a `git status`, and you can see these two new files. We do a `git commit`; for now I'm just going to keep the message simple, but ultimately you're configuring the chart. Okay, and now I'm going to do a `git push`. So let's go ahead and check out the magic now. If you had a longer timeout, you could demo this by using the `flux reconcile` command. We don't need to do that because we set the interval to 10 seconds. So Flux should have already taken over this Helm release that you had earlier made with the CLI. I'll show you this by first doing a `helm list` again. It's now at revision 2, and that's because Flux has added labels to those resources and updated to the new revision there. So I'm going to go ahead and show you the Deployment object managed by that chart.
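The Git side of that flow can be sketched as a self-contained dry run in a throwaway local repo. The filenames and commit message are illustrative stand-ins for the two files exported by `flux create ... --export`:

```shell
# Dry run of the declarative flow: commit the two exported manifests
# to the repo that Flux watches. Uses a temp repo so it runs anywhere.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Stand-ins for the manifests exported by `flux create ... --export`
printf 'kind: HelmRepository\n' > source-podinfo.yaml
printf 'kind: HelmRelease\n' > helmrelease-my-release.yaml

git status --short   # shows the two new, untracked files
git add .
git commit -q -m "Add HelmRepository and HelmRelease for podinfo"
git log --oneline    # one commit; in the real demo, `git push` follows
```

From here, pushing to the remote that Flux is watching is all it takes; the controllers pick up the change on their next sync.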
Here, I'll just go ahead and show you the YAML itself so you can see it in context. If you look under the labels, the Flux Helm controller adds labels to denote that it's controlling this. But, you know, don't take my word for it; let's prove it here. I'm going to update the HelmRelease custom resource that Flux understands and push it, just through Git, and that should update our Helm release as well. So I'm going to add my change; I'm just changing the UI from red to blue. Let's go ahead and commit it. But before I push it, I want to show you what it looked like before and after. So let me get a visual here: I'm going to port-forward so you can see that when we first launched the Helm chart, we colored it red. I'm going to go ahead and do a `git push`. And once that revision updates, it's in between revisions now, there we go. Once that revision updates, I should be able to port-forward again and show you that just by pushing to Git, it automatically changed that Helm release. Okay, so we did that; we showed you that it worked. We get to the visual: it's blue. Okay, so that is really great. It shows you how the Helm release object, and the concept of a Helm release itself, is retained in the cluster using this tool for GitOps. That is not the case with all tools. There are different strategies, but we think there's high value in still being able to use the Helm CLI when you need it, for example during incident management. So if you wanted to use `helm rollback`, here is an example of seeing that happen real fast. And we're nearing the end of this demo, so in order to do a `helm rollback`, I'm gonna show you pause and resume and how that works with the resources.
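Before walking through it, the mechanics in a nutshell: suspending with the Flux CLI is just a convenience for toggling one field on the HelmRelease spec, which you could also patch by hand. A sketch:

```yaml
# `flux suspend helmrelease my-release` effectively sets:
spec:
  suspend: true
# ...and `flux resume helmrelease my-release` sets it back to false,
# after which reconciliation picks up again on the next interval.
```

While `suspend` is true, the Helm controller leaves the release alone entirely, so manual Helm CLI operations won't be reverted.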
Whether it's for incident management or for some other purpose, you as a human operator still have the ability to control what's happening with this automated process. So don't let the idea of continuous delivery scare you. You can pull the emergency brake anytime you want, and you don't have to do it for all of your processes; you can do it on a per-Helm-release basis. So I'll go ahead and suspend this Helm release. There's a handy command to do that; you can also do it by editing the resource directly, so you don't need to use the Flux CLI at all, but this is just very convenient. And then let's go ahead and roll back to the last revision, back from blue to red. And to be clear, the reason we suspended this is because I have the interval set very short, at 10 seconds. So if I had just done a `helm rollback`, 10 seconds later, or on average 5 seconds later, Flux would automatically heal that and change my manual Helm rollback back to the desired state I have specified in Git. So we paused it in order to show you that this can happen. I'll port-forward again so you can see that we're back to red. And then, say we're finished with our incident management or whatever we wanted to do, we'll go ahead and resume automatic reconciliation. And now I'll port-forward again just to show you that yes, indeed, that did happen, and now we're back to blue. So now we can clean up, and we can do a `kind delete cluster` to do that. But don't go away yet, because there's a big gotcha here. While this is a cleanup, I'm gonna use it to show you one last thing. You could delete your Git repo now too, since that's a demo Git repo. But wait, there's more.
Now that we deleted this cluster, I'm gonna show you how Flux handles your Helm release in a disaster recovery scenario. I'm gonna create the cluster again; we deleted it entirely. And all we should need to do now is bootstrap the Flux components back into the cluster, so it can do the automatic reconciliation with what you already have specified in Git. You don't have to do anything else. So I'm gonna go back to the bootstrap command. This should take only a few more seconds; in the meantime, I'll describe why this is valuable. It may be obvious just from what I've shown you so far, but this is not something most Helm or Kubernetes users have the ability to do normally: when your cluster is deleted, you have to rerun a lot of things to make that happen. So I'm gonna bootstrap this again. This time it'll go much faster because it doesn't have to create the repository. This command is idempotent, so you can run it over and over without fear of messing something up. It sees that there's a cluster in place, it finds the deploy key, it generates the source secret, and it puts the proper keys in place for access. And as soon as the Flux components are healthy and running, it will automatically reconcile the Helm release you have specified in Git. If you had more than one, if you had hundreds of them, it would do the same thing. And now it's done. Let's do a `helm list`; let's wait a second for it to reconcile. And there we go: we now have your app, and you can see it running right now. So there we have it. Thanks very much. That's basically a wrap. Just as a quick review of what we did, so that you can walk away and remember this without necessarily having to watch every step of the video again:
On a local kind cluster, we simulated an existing Helm release by just using the Helm CLI that you're already familiar with and know how to use; if you're a brand-new user, this is essentially `helm install`. We used the Flux CLI to bootstrap the Flux components into the cluster, simultaneously defining and creating a Git repo for you if it didn't already exist, and including Flux's own manifests in that Git repo, so that if the cluster died right then, you could get Flux back up and running just from Git. It also gives you a lot of visibility into exactly what's on there and what it's doing; there's really no magic behind the scenes that you should be unaware of. It's all very accessible, which is good for security folks. We used the Flux CLI to easily create custom resources for the Helm repo, which multiple releases can reference, and for the Helm release, along with our existing Helm release's custom values that we wrote when we first used the Helm CLI to make it. That was pretty cool: for those of you with more complex charts or lots of values, maybe values from different environments, you can immediately see the benefit of that convenience function. We pushed the files to Git and showed how the Flux labels are there, which means it has taken ownership of managing that existing Helm release you had earlier deployed with the Helm CLI. And we proved that it actually is managed by making changes in Git only, and we watched Flux magically update your Helm release from Git. We showed you how to pause and resume, which you might use during in-cluster development or incident management. And we simulated disaster recovery of your Helm release by deleting the entire cluster; all we had to do was bootstrap Flux again onto a new cluster, and we got your application up and running again.
So I hope that helps give you an idea of the power of Flux for Helm users: why you as a Helm user would want to do Helm declaratively, why you'd want Flux to do that, and how, by using Flux, you additionally get all of the benefits of GitOps as well. So thanks a lot, and we'll see you in the Q&A.