Hi, everyone. Welcome to Helm Users: What can Flux 2 do for you, with Scott Rigby and Kingdon Barrett. I'm Scott Rigby. I'm a Helm and Flux maintainer working on the developer experience team at Weaveworks. And I'm Kingdon Barrett. I'm an OSS support engineer, also working on the developer experience team at Weaveworks. So, first of all, welcome. The assumption for people coming to this talk, watching this recording, or coming to the Q&A is that you have some familiarity with Helm. There is no assumption about familiarity with Flux yet, but we can guess there's a range of experience. Some of you may have tried the current version of Flux. Some of you may have tried the older version of Flux. Some may just be looking to get into GitOps from your existing Helm installations and releases. I think Flux's Helm controller is the best way to do Helm following GitOps principles, and our team, and me in particular, is dedicated to doing whatever we can to help make sure you feel that way too. So please, in the Q&A and outside of it, keep in touch about how you feel about it and how it works for you. First, I'm going to explain for a moment what Flux adds on top of what you already get when you use Helm. As of Helm 3, Helm is designed with a client and an SDK on which the client is built, but no running software agents of any kind. This architecture intends anything outside the client's scope to be addressed by other tools in the wider ecosystem, which can then make use of Helm's SDK just as the Helm client does. Why am I even mentioning that? It's because Flux's Helm controller, built on Kubernetes controller-runtime, is an example of a mature software agent that uses Helm's SDK to full effect. Every non-experimental feature of Helm is available in the Flux Helm controller, purely using Helm's SDK.
However, there are some things added on top of this; otherwise, why would you bother using it? The main thing that's added is the GitOps side of things. Flux's biggest addition is a structured layer for your releases that automatically gets reconciled to your cluster based on the rules you configure in Git, or in version control generally. The way I like to think about it is: Helm client commands let you imperatively do things. You say helm repo add to add a repo, helm repo update to update that repo, helm install, helm upgrade, etc. Flux's Helm controller's custom resources let you declare what you want, and the Helm SDK will do that for you automatically. That's the main benefit it offers. But there are a couple of additional benefits worth noting. I won't get into every detail, but just at a high level as we're diving in: managing and structuring multiple environments is outside the scope of the Helm client, and you can do that with the Helm controller very well. There's also a control loop with configurable retry logic. There's automated drift detection between the desired state you've declared in version control and the actual state of what's running in the cluster, and then there are automatic responses to that drift. That includes trying to reconcile those two things, notifications for you when that's not happening the way you intended it to happen, unified logging, and other really valuable things that come with Flux's version of GitOps. I hope that gives you some sense of the what, and some of the why. So, let's go ahead and show you something. Just to introduce what Kingdon is going to say, here's the truth of it. I wanted a blog, and I thought, I really need a blog, and I'm doing all this Kubernetes stuff. So why don't I run this blog on Kubernetes? Because why not?
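As a rough illustration of that declarative model, the imperative pair of helm repo add and helm install might look something like the following as Flux custom resources. This is a sketch only: the names, chart version, and repository URL are placeholder assumptions, not what we use in the demo, and the API versions shown are the v1beta1/v2beta1 ones current at the time of this talk.

```yaml
# Declarative equivalent of:
#   helm repo add bitnami https://charts.bitnami.com/bitnami
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: bitnami
  namespace: flux-system
spec:
  interval: 10m
  url: https://charts.bitnami.com/bitnami
---
# Declarative equivalent of:
#   helm install my-blog bitnami/wordpress --version 12.x -f values.yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-blog
  namespace: default
spec:
  interval: 10m
  chart:
    spec:
      chart: wordpress
      version: "12.x"
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: flux-system
  values:
    service:
      type: LoadBalancer
```

Once these are committed and reconciled, the Helm controller keeps re-checking them on the configured interval instead of you re-running helm upgrade by hand.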
What could go wrong? And it should be easy, right? But Kingdon, how do we do that with GitOps? Yes, that's right. We're running our blog on Kubernetes because we make bad life decisions and we like to use Kubernetes for everything. I'm going to start the screen share here. Actually, it's a great decision, and GitOps makes this really easy. So, to back up just a little bit and talk about why we chose to present Helm itself. Let me get this full screen here and orient myself. Helm itself is great at corralling configuration. What I mean by that is we talk about configuration in Kubernetes a lot. Usually it's in YAML files. Sometimes those are the full Deployment resource, Service resource, custom resource definition, all different kinds of resources that we may install, defined in their full glory as YAML. And sometimes what we mean is, no, just parameters, just what we actually want you to configure. That's what Helm does with values.yaml. Now, values.yaml is not a first-class object; maybe it is if you apply a values.schema.json to it. But as you work with Helm releases, you find that Helm enables you to install a lot more things. And what we're going to find on our cluster is that we actually need more than just a blog to have a blog. Our blog uses persistent volumes. Our blog uses WordPress. There could be vulnerabilities in WordPress. And we're not just going to use this cluster for one thing. We want other things on the cluster. We want to keep those things protected and safe from WordPress. We want to keep all of them isolated and protected from each other. So, we're going to start with live activity. We have a Flux cluster that we've already bootstrapped, and I've suspended a bunch of things so that we can simulate a bootstrap and you can see what comes up. We were going to do a live bootstrap, but it caused too many problems. When we get there, you might see some of the problems that haven't been resolved still.
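To make the "just parameters" point concrete: a chart's values.yaml exposes only the knobs the chart author wants you to turn, rather than the full rendered resources. A hypothetical WordPress-style chart might expose something like this (these keys are purely illustrative, not taken from any specific chart):

```yaml
# Hypothetical values.yaml: a handful of parameters, not full manifests.
# The chart's templates expand these into the Deployment, Service,
# PersistentVolumeClaim, and so on.
wordpressUsername: admin
persistence:
  enabled: true
  size: 10Gi
service:
  type: LoadBalancer
```

The same file, dropped into a HelmRelease's spec.values, is how you pass these parameters declaratively instead of with helm install -f.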
If we're lucky, we'll get a chance to debug something. And we're going to install some other things alongside Flux: Kyverno, a webhook, WordPress itself. There's lots more. We wanted to show you Jenkins, and a way to get a terminal in a web browser. Linkerd is a service mesh. Flagger is a progressive delivery tool that's also under the Flux CD org. We're going to get to as much of this as we can, and we have a little timer robot to keep us on pace. You might see at the bottom of the screen that the tortoise and the hare animation will help us with timekeeping. Okay. So, starting with flux bootstrap. Let's see if I can remember, how did I get that? Narrow and expand. I have to remember the keystrokes here to expand the window in my presentation. I'm using Linux to present here because I think the presentation tools are nicer; I've been a desktop Linux user for about 20 years. That's a little joke. Okay. I apologize in advance for some aliases that you will see me use throughout this. If you're not familiar with k as an alias for kubectl — probably most of you are. There's a namespace called kube-system, if you're not familiar with it, where lots of things are deployed by default on most clusters. So, you can see what those things are. I want to see all of them, and we won't see them again for the rest of the demo; I'm going to filter them out with a grep. So, lots of things in kube-system. These will not be controlled by GitOps today; we provisioned our cluster out of band. Moving on. I'm going to try to avoid using abbreviations until I've explained them. We see here we have one Kustomization, and its state is suspended. So, is it flux suspend? No? Resume. Yeah. Okay. Here we go. One of the things you want to know when you're setting up a demo for Flux is make sure you've got your tab completion hooked up so you don't have to fumble around with kubectl explain.
I just pushed tab there and it let me know what verbs I had available. If your tab completion is not set up, you're going to miss that. That's not explained here, but it is in the manual that we'll show you later. And what this is going to do is basically spring everything to life: all of the things that we've put into our Git repo that we haven't shown you yet. This is our cluster. Well, while Kingdon's doing that, it's probably worth me mentioning: this is a talk on Flux and Helm, and Kingdon's resuming a suspended Kustomization. Why are we doing this? It's important to note for the moment that Flux's Kustomize controller is used not only by people who are doing more advanced Kustomize overlay things and the other benefits Kustomize can give you if you choose to use it, but the Kustomize controller is also what's used to ingest plain YAML and reconcile that into your cluster. So in this case, right Kingdon, we're using the Kustomize controller to actually bring in the manifests for Helm. That's right. We're installing our HelmRelease manifests using the Kustomize controller. So, from our one Kustomization, flux-system, we're going to look again and we're going to see there are a lot more now. Hopefully everything worked. Okay, great. We got a flagger Kustomization, a Kustomization for secrets. We'll get to that in a little bit. And moreover, we have Flux Git sources and some Helm releases that have been installed, or are being installed. One of the points of the demo here is to show you can install a bunch of stuff with just a couple of keystrokes. Granted, that's a little bit of an exaggeration. I've done a bunch of work to prepare this in advance, but it's all in Git. So, there's nothing undocumented here or surprising. There should be nothing surprising if you've read the Git repo itself, which we can start looking at as we move along.
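A Flux Kustomization that ingests a directory of plain YAML from the bootstrapped Git repository might look roughly like this. The path and the Kustomization name are placeholder assumptions for illustration; the suspend field at the bottom is the one the flux suspend and flux resume commands toggle.

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  # Directory of plain YAML (or a kustomization.yaml overlay) in the repo.
  path: ./clusters/my-cluster/apps
  prune: true          # delete cluster objects removed from Git
  sourceRef:
    kind: GitRepository
    name: flux-system
  suspend: true        # set false (or remove) to resume reconciliation
```

Setting suspend back to false, or running flux resume kustomization apps, is what makes everything "spring to life" as described above.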
But we'll also give you a gist at the end so that you can do this at home. That's right. So, we have these examples prepared in order to talk about them. First, I'm going to show how we can install Kyverno, which I've already prepared, before I talk about why or what it's for. In the clusters folder here, this is where flux bootstrap does its work. I'll show you the flux bootstrap command I used here. We give it a path; you tell it to sync a cluster directory. This is a familiar form that you'll find if you've worked through the Flux Getting Started guide. We tell it we want it to be a personal repo, as opposed to a repo in some org that we belong to. The owner is me, the repository is for this demo, and we're installing some extra components. We haven't talked about these; we'll explain what they're for in a moment. The image automation controller and the image reflector controller are the extras. So, we're going to install Kyverno. I wanted to show you the tree of things that we have here. It's grown a little bit since the last time we recorded a take of this demo. I've expanded some things in the Kyverno namespace, and for the most part, these were meant to be individual carbon copies of each other. But what you find as you try to prepare a demo like this is that some tools are not really meant to be installed with Helm. And what Flux shows is that you can still use those things, even if your strategy is all GitOps. The thing that Helm tends to struggle with is CRDs, because everybody has a different approach to installing CRDs, and if CRDs get upgraded, the lifecycle gets complicated. Suffice it to say that if you're using CRDs in your Helm chart, you'll probably find another way that's been provided in the documentation for that tool. So, for Kyverno in particular, we've commented something out here. And Kingdon, just to be clear, in Flux world, that's taken care of by Kustomize.
And you can use Helm and Kustomize together in that way. Yeah, and that's what we're doing here. We have a Kustomization that is strictly to install a Helm release. So, if we go to this path, which we've excluded with another file, kustomization.yaml — we've excluded it by referring to all the other files here — you can see what I'm talking about. We want to install the CRDs separately from the Helm release because if we don't, it will cause problems when we do upgrades. This is not exactly the best way to do it; in the toolkit guides there are stronger demonstrations, but anyway. So, the Kyverno HelmRelease, we'll see that file. Okay. This is another Flux custom resource. We're defining a HelmRelease, which, if you've used Helm, is roughly equivalent in Git terms to an actual Helm release. It's the declarative form of a Helm release. It says, I would like this Helm release to be installed in the cluster. That's right. The imperative version would be helm install. I actually could close the window, but in any case, those other values — the name of the chart, the version, etc., all of those parameters that you can generally pass imperatively to Helm — are expressed within the HelmRelease custom resource defined by Flux. Oh, no. We've got a merge from Flux. Blow past it. Okay. So, flux get ks is an abbreviation for flux get kustomizations. I'm not sure there's an equivalent one for Helm releases. Yeah, I think it's hr. Yeah, there is. Okay, great. Thanks. What you notice here is that nothing happened when we did our git push, and that's because nothing has reconciled our flux-system where that change was just applied. So, we're going to reconcile it manually here. And then, just to be clear, what is the configured interval for the flux-system to automatically reconcile right now?
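One way to keep a CRD directory out of what a Flux Kustomization applies, as described above, is a kustomization.yaml that lists only the other files explicitly: anything not listed is not applied. This is a sketch with hypothetical file names, not the demo repo's actual layout.

```yaml
# kustomization.yaml: only the resources listed here are applied,
# so a crds/ directory sitting alongside is effectively excluded
# and can be managed by a separate Kustomization with its own lifecycle.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - helm-repository.yaml
  - helm-release.yaml
```

Splitting CRDs out this way avoids the upgrade problems mentioned, since Helm's handling of CRDs on upgrade is limited by design.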
Yeah, the default interval here is one minute for the Git repository and ten minutes for the Kustomization itself. So, if we wait a little while, this will take care of itself. But we're doing a demo and we'd like this to proceed rapidly. So, we're going to synchronize it, and we're going to use this --with-source flag to reach through. This is probably unnecessary at this point because it's already been longer than one minute, so the Git source is already synchronized, or it'll synchronize now. We'll see status output that indicates what revision it's reached, and then it's going to reconcile our Kustomization, which should in turn reconcile our HelmRelease. Oh, it failed. Well, here we go. What went wrong? It's fine now. Validation error: unknown field releaseName in the Kustomization spec. Well, that's because that's not a field in the Kustomization spec. What were you thinking? This wasn't tested, apparently. Is this the right file here? This is good to show. Yeah. Okay. So, I put this here because — we'll see why in a second. I'm going to take it out; that doesn't belong there. And we're going to do the same reconcile. We're going to take this away in the next step of the demo. I just want to do a time check here and see how we're doing. Yeah, where's our tortoise here? We're doing great. We're doing great. We have lots of time. Okay. It says that was successfully applied this time. The -A is for all namespaces. Okay. So, we don't see any new Helm release yet, and we have to see if we can catch it in time. We're still waiting for the Kyverno HelmRelease to reconcile. We've set it up to wait for flux-system to finish. So, we're just going to poke it, because otherwise it'll wait another ten minutes the way we have this configured. Okay. flux get helmreleases. All right. We see the Helm controller's reconciliation in progress. I'm going to switch to the default namespace here so I can use the Helm client.
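Those intervals live on the Flux objects themselves. In a bootstrapped flux-system, the two resources carry settings roughly like these (trimmed to the relevant fields; the repo URL is a placeholder, and the API versions are the v1beta1 ones of this era):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m0s     # how often source-controller fetches the repo
  url: ssh://git@github.com/example/fleet-repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s    # how often kustomize-controller re-applies
  path: ./clusters/my-cluster
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```

Running flux reconcile kustomization flux-system --with-source skips both waits: it tells the source to fetch now, then applies the result immediately.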
And you can see that Helm itself is aware of these releases, and in a moment it will become aware of Kyverno when it succeeds, hopefully. Looks like it did. Nice. There's Kyverno. And that's the reason why I put that releaseName there: I've been trying to clean this up. We had a couple of these duplicate things; Flux will name your Helm release for you. Anyway, long story short, I should have put that releaseName field into the HelmRelease spec. Next time. Yeah. It was good to see the debugging. That's good. That's right. Okay. So, back up here. We've got Kyverno installed now. Let's get back to the slides. The reason that we installed Kyverno is because you need it to make Flux safe for multi-tenancy. Follow the link to find out more; this is one of the most important guides. Now we're going to enable the webhook. I have a feeling the slides are not really keeping pace, but let's keep moving here. It looks like it is moving now, but it seems like it hasn't gone far enough. So, I've prepared this in a directory called flux-resources: the flux-system receiver. This is our webhook. Let me just strip out the comment here so that it can be applied. It goes with this notification controller load balancer that we've also provisioned already. And so, maybe I can explain this real quick while Kingdon's enacting it. We explained why Kingdon was manually issuing these flux reconcile commands: mainly just so that you don't have to wait ten minutes, and our demo can proceed along like a cooking show. It's something that you can also do on your own Flux system if you want to issue a reconciliation faster than the normal interval that you've configured, right? One common use case is that you want that to happen every time you push to Git, or your version control, or your buckets, whatever they are. You can do that with a webhook. That's a really common feature for Git workflows.
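The receiver being applied here is a notification-controller custom resource. A minimal sketch, with placeholder names for the secret and provider type, might look like this:

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Receiver
metadata:
  name: flux-system
  namespace: flux-system
spec:
  type: github          # which provider's webhook payloads to accept
  secretRef:
    name: webhook-token # shared secret; the hook URL path is derived from it
  resources:
    - kind: GitRepository
      name: flux-system # source to reconcile when the hook fires
```

The Receiver's status reports a generated /hook/<digest> path; you point the Git host's webhook at that path on the notification controller's exposed endpoint, which is what the load balancer mentioned above provides.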
I just want to be really clear that, while it's acceptable to extend what you're doing with GitOps this way, that in itself is not GitOps. That is CIOps. The distinguishing feature of GitOps, not just for Flux but for other tools in the wider GitOps ecosystem, is that you've got automatic reconciliation in a closed loop based on your own configured interval and rules. So if we didn't have a webhook, it would eventually reconcile itself; we just want to speed it along. Hope that makes sense. So we're going to look at our Git source, and the revision is caught up. If we look at the Kustomization, it has also caught up. I'm not sure if that's because we got lucky or because our webhook is working. It is called a Receiver; you can check on the status here. It looks like it should be working, because this hook is the same as the last one I saw. I think that's a hash based on the configured secret that's passed in. So from now on, we don't have to run flux reconcile. If you follow the webhook receivers guide, you'll get the same behavior. We looked at these files, the flux-system receiver and the webhook load balancer. We're going a little quickly because we have a lot more examples to get through. Moving on, we have the WordPress example. Yeah, where's my blog? Yeah. So we've also skipped over some things. There's, yeah, there's an ingress controller, and we have a WordPress installation here, as you can see. So you should be able to see some pods. Looks like there's a MariaDB database and WordPress itself. So how can I prove that this is working? Well, if I haven't managed to change the load balancer accidentally since I ran this yesterday, you should see WordPress is actually responding there. I'm using a local DNS name to skip the DNS step altogether. I just want this to work for the purpose of showing it for ten seconds. We're not really going to use WordPress for anything for very long. I'm not using HTTPS.
Hey, it's WordPress. Oh, thank you. I have it set up. Okay. Yeah. Go to your WordPress admin and log in and do all the things that you want to do. We're not going to do any of that, because we have more examples to get to. Yeah. So what else can I do with this cluster now that I have a blog on it? Well, you can do all kinds of things, and that's what we're using Jenkins for: all kinds of things. Lord only knows what we're using this for. Maybe you know, if you're in the audience. Everyone knows Jenkins. It's a big mysterious boondoggle. Who's using it for what? We don't know. Everyone. Nobody. All at the same time. We have Jenkins installed here too. We're going to skip that for expedience because we're almost out of time. Actually, the tortoise is really not keeping up. But we have another application installed here that is a terminal. We can show this one real quick; it's a little bit easier. So we've been careful to configure this so that it's not publicly exposed. We want this to run locally. I'm going to run it on port zero. Yeah, and zero, if you haven't used that trick, it's kind of nice: it dynamically assigns a port from the ephemeral range. Yeah. We're just using that to make sure that if we manage to have port 80 open some other way, we're not going to get a conflict. It's a terminal. Nice. What? I have root. I can do anything I want here. This is great. Okay. Moving on. Linkerd. All right. This is in the list because it's a prerequisite for Flagger, the way that we have it configured. We don't really have much time to talk about Linkerd, but Linkerd is a service mesh that lets you use technologies like Flagger that depend on request inspection and other interesting behaviors that you can find. I recommend you check this stuff out heavily. Unfortunately, we will not have enough time to demonstrate Flagger right now. We can't do everything in one demo, but there are demos out there. There are lots of demos in the Flux CD org.
We hope that you will check them out. For example, the Kustomize plus Helm example, which gives more detail about how to use Kustomize and Helm together, including how to avoid duplication of resources in your manifest repository when you're working with multiple environments and want the same software installed in each of them, perhaps with different configurations. This example goes into detail about that. We're winding down. There is a lot more that we wanted to show you; there are some examples that will be included in the gist at the end. Secrets management with Mozilla SOPS is a core feature of Flux 2 and the Kustomize controller. We recommend you check that out so you can handle secrets safely. It uses GPG, or newer encryption types like age. We actually might have time for a quick demonstration of some automation. Let's do it. Let's try. In the Jenkins HelmRelease, we've pinned a particular version here, 3.3.0. We can insert some range operators here. I'm not exactly sure of all of the operators that will work. You can, for example, say greater than or equal to 3.3.0 and less than 4.0. Yeah, so to be clear, Flux 2, and the Flux Helm controller specifically, allows semver ranges. So if we add this, what we should see, if we look at our Helm releases, is that we have Jenkins 3.3.0 and it will upgrade to the latest in 3.x. The tilde means anything within the same minor number, varying the patch number; the caret means anything within the same major number, varying the minor number. You can see the rules for that in the Masterminds semver project; all of the range rules are there. That's linkable from the Helm docs on semver ranges, but we'll add that in the gist at the end as well. So as you can see, the Helm controller is doing something. That's promising. Looks like our Jenkins release has temporarily disappeared. Well, it's reconciling. Go find out what it's waiting for. Ah, it's an init container. Well, it's doing work. It is upgrading.
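As a sketch of the version-range edit being described, here is roughly what the relevant part of the Jenkins HelmRelease looks like. The chart version range follows Masterminds semver syntax; the source name is a placeholder assumption.

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: jenkins
spec:
  interval: 10m
  chart:
    spec:
      chart: jenkins
      # A semver range instead of a pinned version:
      #   "3.3.0"            exactly 3.3.0
      #   ">=3.3.0 <4.0.0"   any release at or above 3.3.0, below 4.0.0
      #   "~3.3.0"           3.3.x patch releases only
      #   "^3.3.0"           any 3.y.z within the same major version
      version: ">=3.3.0 <4.0.0"
      sourceRef:
        kind: HelmRepository
        name: jenkins
```

With a range in place, each reconcile checks the HelmRepository index for the newest matching chart version and upgrades automatically, which is what triggers the upgrade seen in the demo.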
Would you mind doing a helm ls real fast while you're here? Yeah, it looks like our Helm release has vanished temporarily. The release record that describes our Helm release is managed by the Helm controller, so we should expect it to put it back as soon as that finishes. Our release is not gone; just the information about it, while it's being updated. I'm not sure what the init container does, but it takes a minute. It also had to connect the persistent volume. There's a lot of stuff in here. arkade — we used this to get some of the defaults in order. Rather than curating our own opinions, sometimes we borrow those of others for how your package should be configured to work with other packages. cert-manager is installed; we haven't done anything with it. Multi-tenancy is a very complicated area, and there are many tools out there. We recommend you take a look at kiosk for some interesting perspective on how to manage multi-tenancy. Ingress is another subject we had to skip over. But all of this we were able to configure using Helm releases and Flux in less than 30 minutes. This presentation tool is called Revit. And here is the bit.ly link, where the instructions will be ready by the time you read this. So, anything else? Go ahead. Other than to say thank you to our audience for paying close attention. We hope you join us at the Flux Pavilion. Yes. Thank you to the CNCF and everyone else. We'll be at the Flux Pavilion after this talk, so please come and see us. Yes, we'll see you at the Q&A too. Bye.