 And we're alive. Hello, everyone, and welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Itay Shakury, and I work on open source at Aqua Security. I'm also a Cloud Native Ambassador, and I'll be hosting this show today. So this is Cloud Native Live, a weekly show. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. So this week, we have Leigh Capili from Weaveworks to talk about Flux. And before we get into the session, I just want to mention that KubeCon EU is upcoming next week, so join us there to hear the latest from the cloud native community. And just a reminder that this is an official livestream of the CNCF and is as such subject to the CNCF Code of Conduct. Please don't say anything in the chat or questions that would be in violation of that Code of Conduct. Basically, just be respectful of your fellow participants and presenters. So with that, Leigh, the floor is yours. Hey, friends, yeah. We're going to have a pretty interactive session today. I'm going to be talking about migrating from Flux version 1 to the newly developed Flux version 2. We've got a great project, a really solid foundation for you to move over to something that's a much better experience. And we've got a lot of folks in the community who are already totally in love with it and using Flux, sometimes hundreds of Flux deployments across lots of clusters, or even just one Flux deployment in your one cluster, out in production, lots of members and users. And so it's super important to the Flux community and the teams that contribute to and work on Flux that we have a great migration story, awesome migration docs, and a strong community of people who have done this migration. So I'm excited to be here with Itay today, and go ahead and light up that chat or jump into the cloud-native-live Slack channel in the CNCF Slack.
And we can have a little conversation as I hopefully do everything successfully in my terminal today. How's that sounding, Itay? It's a good plan. Awesome, yeah. Yeah, nice. So today, I mentioned, yeah, we're going to be doing migration stuff, right? And before we get into a terminal, I just want to provide a little bit of context. So I'll keep the slides light, but let's go ahead and just look at a few diagrams and some documentation before we get into the meat of what we're doing, so everyone has a good orientation, right? I have a little presentation about considerations that you might want to keep in mind. I'm going to keep the scope pretty limited on migration and then point to some very detailed documentation as well, so hopefully we can keep a clear head on exactly what's happening in our demo today. So for the considerations, the first thing you should know is that Flux 2 is a huge evolution from where we started with Flux 1. With Flux 1, the deployment is really simple. The options are command line flags. It works on a single repository. You put it in your cluster, it runs as a daemon, it goes and checks out the thing. You can manage it as a single-namespace user of Kubernetes, all of that. But there were a lot of pain points that people started to run into as they really pushed a single-Flux, single-repository deployment style to its limits. The GitOps community has matured, and we've developed very diverse practices around what it means to do GitOps properly and how you can fit GitOps into your organization, since GitOps is not just a technology but also a way of working with each other, a culture thing, similar to DevOps, right? Folks have learned to build structures and assemble platforms with Flux. And with Flux 2, what we wanna do with the project is provide a solid base that helps guide you along that path without forcing you into particular opinions.
And so you still have those great things like using Flux with Helm Operator. Flux 2 has helm-controller built in, and you can do declarative Helm releases. It's such an important part of the ecosystem. But we haven't just kept feature parity with Helm Operator, we've improved upon things. So there are health checks, there's dependency management between multiple Helm releases, which is a huge ask from people who want to deploy infrastructure first and then do applications. I keep talking about how Flux 1 is a single daemon to a single repo. With Flux 2, you install it once and you can do multiple-repository composition. That means that I can take multiple paths from a single repo, or one path from lots of repos, or whatever I wanna do. I can build a GitOps platform from multiple sources really easily with Flux 2, while still getting all of that constant reconciliation. Something that we've improved is that everything is way more observable. We've kept in mind that multi-tenancy is super important to Flux users, and we even have additional proposals now that we're working on to make that story even better than it already is. Hard multi-tenancy, with some other cloud native components, is already something that's very possible to implement, and we have a great example. There are clear boundaries now between developers, or users of the cluster, and the platform administrators. It's really easy to build platforms with a single Flux installation so that you can kind of delineate and delegate who owns what sources and what configuration. The platform team owns this part of the config, developers own this part of the config, and that comes from the multi-repo composition. And then there's good auditing, right? So now we have RBAC control of individual objects. We've split up fetching sources, reconciling them into the cluster, and making alert policies.
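To give a flavor of the declarative Helm support with dependency ordering described above, here's a minimal sketch of a Flux 2 HelmRelease. The release names, namespaces, and chart references are hypothetical, not from the demo:

```yaml
# Hypothetical example: helm-controller waits for the "my-infra" release
# to be ready before installing "my-app" (infrastructure first, then apps).
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app
  namespace: apps
spec:
  interval: 5m
  dependsOn:
    - name: my-infra        # cross-release dependency
      namespace: infra
  chart:
    spec:
      chart: my-app
      sourceRef:
        kind: HelmRepository
        name: my-charts
        namespace: flux-system
```

The `dependsOn` list is what enables the "deploy infrastructure first, then applications" ordering mentioned here.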
You can do different release engineering policies that weren't possible before, since now we support things like following tags on repos, and you can have more control over which image repos you are polling and how long the polling periods are, as well as webhooks and things. So there's just way more control now over every single part, the granular control that Flux users have been asking for for a long time, as well as a much better experience. Of course, any time you do a migration, right, you wanna be considering the benefits, why you are going through the work necessary for a major version upgrade. Now there's also the support period, right? So Flux 1 is starting to fall off the support period, but we do have several months of that left for CVE patches and bug fixes and stuff like that, right? We wanna take care of the community, but we also wanna recognize all of the amazing work that the community has done to really pioneer what it means to do GitOps on Kubernetes with Flux, take that, and innovate and build a new project that serves the modern needs of where we've ended up. So Flux 2. Flux 2 is not some experimental beta version or something like that. This is the current stable and recommended version that users of Flux 1 should migrate to as soon as possible, right? I would say, yeah, you should really be considering migrating now. I wouldn't say that it's urgent, since we're still in the support period for Flux 1, but you should be developing a plan, especially if you are a big Flux 1 user with lots of Flux installations in your cluster, or multiple clusters. And we are on the road to general availability, a full version release of Flux 2. Most of the APIs right now are beta, and some of our more accelerated users in the community are already using Flux 2 at scale in production. This is very solid software with an upgrade path and everything. It's kind of the classic cloud native response, right?
Is that like, the new software is kind of edge, and the old software was new, but now it's old. So. Yeah, I mean, we were using Kubernetes in the early days when every API was alpha or beta, and people used those in production. So it is, in a cloud native way. Yeah, certainly, our community is moving so fast, right? And that's an incredible testament to the quality of the software that the entire ecosystem is producing and the speed at which we improve it. So, getting to better patterns. I hope that it was clear, right, that as I'm talking about all of these benefits of Flux 2, Flux is a community project. It's driven by community needs. We've got maintainers from multiple companies. And so when we build features and change them and refactor them, or, as with Flux 2 from Flux 1, an entire rebuild of every system, it's because the community has decided that there's a better way to do things. So I'm excited to help make it clear how to start moving, right? So the first thing that we should be concerned about is that the deployment model from Flux 1 to Flux 2, the default path that most people are gonna wanna follow, changes a bit. And the way that we configure Flux is much different now. Instead of using flags, we use custom resources. It's a very common pattern; lots of projects are building APIs using Kubernetes extensions. And in the Flux v1 world, there's flag-based config, right? So you have the branch and the URL of the repository that you're interested in fetching, and then you have the path inside of that branch, inside of that repo, that you're interested in reconciling to the cluster. And then you deploy the daemon to your cluster, and it can reach out to GitHub and it can reach out to your Docker registry, whether that's a container registry supported by your cloud or Docker Hub.
And it looks for image updates, and it looks for config in your repo, and it makes sure that the up-to-date images get written back to your repo, and it just applies your config constantly, right? And so you configure these three things and Flux does its sync loops, awesome. And then, since Flux is just a one-to-one daemon-to-repo mapping, what people started doing, right, is they're like, well, I've got multiple teams. I have different repositories. I can use Flux from my platform team's control repo to install other Fluxes, or I can have two organizations and install two Fluxes into the cluster, each doing their own thing, right? And so you start to get deployment topologies like this where, for each Flux, right, Flux in team A's namespace, Flux in team B's namespace, Flux in team C's, you configure it to point to your repository, and it syncs either into your namespace or into the cluster. And the access control here is based off of the service account that the fluxd daemon inside the deployment is running under. So with the version 2 config, what we've done is we've realized that there is really a separation of concerns between what sources you wanna fetch into a cluster and how you wanna apply bits and pieces of those sources. So again: sources and reconcilers. One fetches things like Git repos and Helm repositories, and we've added buckets. And then with reconcilers, we can take the contents of those sources and then, on a different interval, or with different events, different policies, different dependencies, health checks, garbage collection, all of the reconciler stuff, that's handled by a different object, right? So we can set up auth and fetch things from one namespace and then get our Git repo cached into the cluster. We can ignore folders, we can do all of the source-y type things that advanced Flux users have come to do, while we still have a good experience if you only supply certain fields.
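For reference, the Flux 1 flag-based config being described looks roughly like this on the fluxd Deployment; the repository URL, branch, and path here are placeholders, not the demo's real values:

```yaml
# Sketch of a Flux 1 deployment's container spec: all config lives in flags.
spec:
  containers:
    - name: flux
      image: fluxcd/flux:1.22.0
      args:
        - --git-url=git@github.com:example/team-a-config   # which repo
        - --git-branch=master                              # which branch
        - --git-path=staging                               # which path to sync
        - --git-poll-interval=5m
        - --sync-garbage-collection
```

Changing any of these means editing the Deployment and restarting the pod, which is the limitation Flux 2's custom resources remove.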
And then with the reconcilers, we can pick a path from that repository and sync it. So that means pruning old objects and health-checking things and doing dependencies. And now that we have a split of these two things, you can actually have one repository be fetched but then reconcile multiple paths from it separately. So that's a really big idea. And the mapping from the Flux v1 flags: some of those are source configurations, right? And some of them are reconciler configs. And here I've only pointed out these three flags. I don't wanna feign simplicity, but if you go to the Flux v1 documentation and then you go into the reference for the daemon config, I mean, there are a lot of flags that you might have been concerned with in your Flux deployment. You can turn on features and change intervals and turn on garbage collection and change where memcached is for Docker image registries and all that stuff, right? This configuration could be simple, but it also could get quite complex, and that's all just happening in a deployment flag list, right? So now that we've moved to custom resources, we can separate some of these things and have multiple instances of the things that make sense to have multiple instances of. Does that make sense? Yeah, I think so. And in addition to maybe making the configuration process simpler, because you don't need to configure multiple flags, you just need to configure a file, does it also make it more agile, so I can update the configuration after deployment more easily? Yeah, that's a great point as well. It's a different way of viewing the advantage that you get from turning these flags that are on a deployment into actual APIs in the cluster that are monitored by the Flux controllers, right? So there is a source controller, there's a kustomize controller, there's a helm controller, and when you change the configuration objects inside of the cluster, that's done through custom resources, right?
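The source/reconciler split described here maps the old flags onto two objects. A minimal sketch, with illustrative names, URL, and intervals (not the demo's real repos):

```yaml
# The source: what to fetch, from where, how often.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: team-a
  namespace: team-a
spec:
  interval: 1m                 # how often to fetch from upstream
  url: https://github.com/example/team-a-config
  ref:
    branch: master
---
# The reconciler: which path of that source to apply, and how.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: team-a-staging
  namespace: team-a
spec:
  interval: 10m                # how often to reconcile into the cluster
  path: ./staging
  prune: true                  # garbage-collect removed objects
  sourceRef:
    kind: GitRepository
    name: team-a
```

A second Kustomization pointing at a different `path` of the same GitRepository is how one fetched repo gets reconciled along multiple paths separately.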
For those API groups, the controllers can see that immediately, whereas before, when you had a fluxd deployment, it would have to restart your pod, since it was flag-based config, right? Now you don't even have to redeploy the workload that synchronizes your repo. As soon as you update your configuration, the controller picks it up and starts changing the reconciliation behavior. That's a really good point. I hadn't thought of it that way, but it's a huge difference. And now that we have custom resources... before, people would manage child Flux deployments inside of their repository, so you could still do a Git commit and update it. It's GitOps friendly, but it's a workload experience. Now we have these Kubernetes API resources, right? Kustomizations, Git repositories, you can do webhook receivers, alerts, buckets, Helm releases, all of these objects. You just put them in your repo, and you can update the config, and it changes the reconciliation behavior. That's a really good point. So yeah, let's get a clear picture of what we can do in this demo. So if we install Flux, right, the Flux system inside of a namespace, lots of controllers, CRDs, RBAC, network policy, all that stuff to keep your deployment secure and performant, we put it into a namespace. And then we put in our source and our reconciler. This could be anything, but in this demo, we'll use Git repositories and Kustomizations. It's worth noting that Kustomization is also the thing you use for plain manifests. That confuses a lot of people. I'm an advocate of changing the name, actually. I think we should call it something else. But yeah, Git repos, Kustomizations, and then we have our Flux installation of all the controllers. So if you wanted to, say, do the multi-source example, like we were talking about before with Flux 1, we can install Flux once, right?
With administrator privileges, and then we have a multi-tenancy model for all of the reconcilers to drop privileges. So then we can put sources into different namespaces, fetch them into the cluster, and we can have multiple reconcilers for each of those sources, or just one if you want, right? Here we can see an example of pulling different sources, some from GitHub, some from GitLab, or maybe it's Bitbucket. So the picture then is, if you're a platform administrator and you've got a bunch of Fluxes, ultimately you kind of want to move to something that looks a bit like this, where instead of all those fluxd instances, you would simply install Flux once in your cluster and then write the configuration that each Flux deployment would represent. Now, we do support a namespaced installation of the Flux controllers. Of course, CRDs themselves cannot be namespaced, but we do have a namespace scope for the Flux install if you still need that kind of hard separation or it makes more sense for you. What we find is that the namespaced deployment of Flux 2 is very uncommon, and that people do want that platform-centric, cluster-wide operation of the controllers, since our APIs are already good enough to be multi-tenant. So let's get into some demos. Here we go. So I set up two repositories, one called team A and one called team B. I've used these repos before in some previous demos. They're actually old repos that I used to demo Flux 1, and, if my internet's working, yep. So we can see these deploy keys; I just uploaded them today and set up a cluster. And what I have is a k3d cluster called tunnel, and it's just a single node, although that's really not an important detail. And then if you look at the deploys across all of the namespaces, then we can see that we have a team A Flux and a team B Flux. So these two deployments inside of the team A and the team B namespaces are Flux 1, right?
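The "drop privileges" part of that multi-tenancy model can be sketched with the Kustomization's `serviceAccountName` field; the tenant names here are illustrative:

```yaml
# Hypothetical tenant reconciler: instead of running with the controllers'
# cluster-admin rights, it impersonates an RBAC-limited service account
# that only has permissions inside team-b's namespace.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: team-b-apps
  namespace: team-b
spec:
  interval: 10m
  path: ./apps
  prune: true
  serviceAccountName: team-b     # the privilege boundary for this tenant
  sourceRef:
    kind: GitRepository
    name: team-b
```

This is what lets one cluster-wide Flux installation serve many teams while keeping the platform/tenant boundary that separate Flux 1 daemons used to provide.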
If I describe, and k is just an alias for kubectl here, I'll make the text a little bit bigger so this is easier to see on the stream. If I describe those deployments, say the team A deployment for Flux, you can see here's that flag config that I was talking about, for the fluxcd/flux image that runs fluxd, right? And this deployment is hooked up to this repository, on this branch, with this path, and then there's other config. And then if we look over at the team B namespace, I can see, oh, okay, they're hooked up to a different repository than team A, still using the master branch since these are old repos, they're not called main. And then the Git path here is staging, okay? So that's a difference, right? So my role here as a platform administrator or a cluster administrator is to go in and inspect the state of the Flux deployments inside of my cluster. That's going to tell me, say, if there are a lot of teams in my organization, who I need to talk to to start migrating workloads, right? Or who I might notify if something goes wrong, if I'm just handling the migration as a platform admin myself. So I want to understand what configuration that team needs, since the deployment model between Flux 1 and Flux 2 is so different. I'm gonna take those values and then convert them into something that works for Flux 2. The other thing that I'm gonna do is for two reasons. One, just to show how awesome the Flux 2 bootstrap experience is. But two, to give the platform teams really good visibility and control over this cluster. Right now, if you look in here, there are these two Flux installations, but there's no cluster-level Flux that's managing them, right? If I had a cluster-level Flux managing them, I would be modifying these things inside of the control repo. But in this case, it's just two teams who have happened to install Flux.
So I'm going to use Flux 2 to bootstrap a control repo for this cluster, right? And I usually have a bootstrap command saved that should be here... good. Maybe we'll say cluster zero, I will use the main branch. We will make a private control repo here, and I can call this cluster zero Flux 2. I'm not gonna use GitLab today, I'll be using GitHub, and my GitHub user is just stealthybox. We'll also add token auth here, just in case we end up doing the webhook installation. If you could make your text a little bit larger, please. Yeah, sure. I can do that. Thank you. Do you think maybe that would be better? Just like that. Even one more if you could. Okay, cool. Thank you. So what I've done here is I just have my GitHub token. This comes from a demo token that I keep in my GitHub config folder. And now that I have a token exported, oh, I might have to rewrite that bootstrap command: GitHub, user, repo flux2-control, main, personal, token auth, cluster zero. What are the other flags here? We don't need to install extra components right now. Cool. So what this is going to do is, I'm using the flux command line tool. This is the command line tool that's used for Flux 2. The Flux version 1 command line tool is called fluxctl, so it's just a subtle difference there. We're using this command line tool much more often, so the shorter name is kind of nice. What it's going to do is ensure that I have a repository. So this repo, it doesn't exist, I'm pretty sure, in my GitHub org. So it went and created a repo for me. Now, this command is idempotent, so if the repo is already there, it's just going to fetch the repo. And then it clones the branch down into a temporary directory and generates a bunch of component manifests for Flux 2. This is done with Kustomize and some Go code. But if you want to do a declarative experience for this, like if there's already a Flux system directory inside of your repository, then that's fine. Lots of people do that.
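The bootstrap command being typed here would look roughly like this; the token is your own, and the owner and repository name follow what's said in the demo:

```shell
# Requires a GitHub personal access token with repo permissions.
export GITHUB_TOKEN=<personal-access-token>

flux bootstrap github \
  --owner=stealthybox \
  --repository=flux2-control \
  --branch=main \
  --personal \
  --token-auth \
  --path=clusters/cluster-0
```

Running it again is safe: as noted below, the command is idempotent, so an existing repo and existing manifests are reused rather than recreated.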
We also have a Terraform module for this whole process. So if you want to use Terraform to do this with your cluster instead, that's definitely a good recommendation as well. And then it makes sure that the manifests are synced into the cluster. It installs the flux-system namespace. And then it's talking about secrets. So what the flux bootstrap command does is it actually ensures that a repo is there. And since our flux CLI knows specifically how to talk to GitHub and GitLab, like we have provider support for this, it's actually able to generate the PKI for me, the SSH deploy keys, and configure my control repo. So that's pretty sick. Like if I go to this repository, and see here in settings, this is a brand new repo, right? And I go to deploy keys. Then, oh no, I used token auth, never mind. So it put my GitHub token into that secret. That's how that's working. And then here in the clusters, cluster zero directory, which is the path that I specified for my control repo, I have a new flux-system folder that was generated by the command line. It has this Kustomize build of the GitOps Toolkit components that make up Flux 2, as well as this sync YAML. So that other file has all of the CRDs and the controllers, and this sync YAML has my source configuration and my kustomize controller configuration. So this is what tells the cluster: hey, you should be using this repository, and you should be syncing this path every 10 minutes, unless otherwise notified, right? Like by a webhook or something. And then this Kustomization references a source. So you can see here, that's a GVK-and-name kind of reference. The source ref can point to Git repositories or a bucket or whatever you would like. And you can supply a name, and also a namespace here if your source is from a different namespace, like from the platform team's namespace or something.
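A sketch of what that generated sync YAML contains, assuming the repo and path from this demo (the exact URL scheme and secret contents depend on whether deploy keys or token auth were chosen at bootstrap):

```yaml
# The control repo as a source...
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  url: https://github.com/stealthybox/flux2-control
  ref:
    branch: main
  secretRef:
    name: flux-system          # holds the deploy key or token
---
# ...and the Kustomization that syncs the cluster's path from it.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m                # the "every 10 minutes" mentioned here
  path: ./clusters/cluster-0
  prune: true
  sourceRef:                   # the GVK-and-name reference
    kind: GitRepository
    name: flux-system
```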
We're looking at adding RBAC here, but it's possible to just restrict what source refs can be supplied by using a policy controller like Gatekeeper or Kyverno, and we have examples of that with our current multi-tenancy approach. So here, the flux CLI just makes sure that everything's healthy and that everything is synchronized. If we do a flux get sources git, if I can type properly, flux get sources git, right? We can see here that we have a source called flux-system that has fetched a revision, right? And that repository is brand new, but if I do a hub clone, this is just a command line tool that authenticates to GitHub and does a few shortcuts for you, if you don't know about Hub. I will clone down my brand new repository. What was it called? flux2-control. Yep. Now we have a control repo that can manage the entire cluster, right? So if I add anything to this directory, clusters, cluster zero, then those things will show up in my cluster, just like how we were using Flux 1, right? Now I have a GitOps repository that's bootstrapped to my cluster. If we look at all of the pods inside of the cluster, you can see there's a new flux-system namespace, and we have all of the controller installs, right? So here's source-controller, kustomize-controller, helm-controller, and notification-controller. If I also wanted to play with some image update functionality, we have a preview of image-automation-controller and image-reflector-controller. So you can add those components to your Flux installation using a flag when you generate your Flux manifests, or using the Terraform provider, or using our Kustomize base, et cetera. Could you maybe explain what these controllers do and what's the difference? Yeah, yeah. So source controller, remember how we were talking about the split of configuration, right? Now we have sources like Git repositories and buckets.
Source controller is responsible for looking at, say, the Git repositories in the cluster, the bucket type. Here I'm looking at all of the source.toolkit.fluxcd.io sources, the GitRepository kinds across all namespaces, and we can see that inside of the flux-system namespace, we have a GitRepository kind, right? An API object in Kubernetes called flux-system, and it has a URL to the repo that it wants to fetch, and then it has a status, right? Similarly, if I get a little bit of a better view, like if I describe it, you can see the Kubernetes events that are related to that thing. You can see that some controller is watching this. So these are all of the intricacies where, if you're an advanced Kubernetes user, you know to look at things like status conditions to understand if a resource has been reconciled, right? So here's the specification for what I expect to happen. I know as a Kubernetes user, right, as somebody who's familiar with clusters, that there is probably some controller that's responsible for reconciling this GitRepository specification. And in this case that would be the source controller. What source controller will do is fetch that Git repository into a temporary directory and make it available on an HTTP server that's only accessible inside of the flux-system namespace, restricted by network policy, so that other controllers can then start doing things with that source. There are a couple of benefits to this. One is that it's a caching layer, right? So you can actually reconcile from the source much faster than you fetch it from the upstream. If you want to fetch your repo every hour but reconcile it every minute, that's possible. Similarly, if you wanna go the other way and pull your source every minute but only reconcile for configuration drift every hour, and otherwise only reconcile when the source changes, like if there's a reference update, then you can do that.
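Because fetch and reconcile are separate objects, their intervals are independent. A sketch of the "fetch hourly, reconcile every minute" case, using the public podinfo repo as an illustrative source:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 1h       # pull from upstream once an hour
  url: https://github.com/stefanprodan/podinfo
  ref:
    branch: master
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 1m       # check the cached artifact for drift every minute
  path: ./kustomize
  prune: true
  sourceRef:
    kind: GitRepository
    name: podinfo
```

Flipping the two interval values gives the other pattern described here: poll the source frequently but only correct drift occasionally.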
So that separation of the fetching and the reconciliation of those sources is important. Then there are these other controllers like kustomize controller and helm controller, right? I guess I can be looking at the deploys instead of the pods. So kustomize controller is capable of syncing a directory of plain manifests, or patching and using a Kustomize build, and templating like environment substitutions, doing all of these advanced Kustomize features for any directory in any source in your cluster. Helm controller is able to look at a Helm repository that's pulled from source controller, right? And source controller also manages HelmChart resources. Notification controller can then serve as an event bus for all of these kinds of things, right? So the architecture of Flux is that we've got individual controllers that operate on the thing that they are concerned with. And then we have notification controller, which is able to manage events from all of those things and hook them up with webhooks, as well as alert to external systems like Slack or Discord or Opsgenie or something like that. Is that making sense, Itay? Yeah, and this is, what is it, four controllers? It will always be these four controllers regardless of how many resources are created to operate Flux, right? Yes, yeah, and so that's a huge benefit, right? Sometimes people have very large clusters. We've worked with users who have like 700 namespaces inside of a massive cluster, many of those namespaces each with their own Flux installation. And the observability was a real pain point for platform admins. But here, these controllers, you can scale them up depending on the environment and that kind of thing, and they're instrumented with Prometheus. We have a sick Grafana dashboard that you can use to monitor the time that reconciliation takes on average, or whatever the long pole in the tent is for reconciliation latency. This is a really empowering deployment model for platform admins.
And of course, you know, you can still deploy multiple instances of Flux if you would like. That's still a possible thing if you need it, yeah. Yeah, so why don't we take the opportunity to review the chat for a second, if you don't mind? Yeah, sure. There is a question about a graphical user interface for Flux. Graphical, does Flux provide a GUI? Yeah, we are working on a Flux-specific GUI. One thing that is very important about Flux is that it is purely configured through custom resources. So if you are interested in what is happening inside the system, right, like say, if I describe the Kustomization inside of the flux-system namespace for the bootstrap repository, I can see what is happening inside of the Kubernetes events. I can look at the objects that the Kustomization is tracking, right? And I can see revisions and all of this kind of thing. So you don't necessarily need a purpose-built UI if you use general tools; here I'm just using kubectl. If you're using k9s, you get a great text UI experience. If you're using Octant or Kontena Lens, or Kinvolk's, what is it? Lighthouse, I think, something like that. Yeah, I think it is called Lighthouse, right? Yeah? Yeah, I think so. Those general purpose Kubernetes UIs already provide you a good experience. Similarly, the VS Code plugin for Kubernetes works pretty well with the Flux custom resources. And that's a place where we're interested in doing some more plugin development, because VS Code and editors are in a much different position than a web interface. For a GitOps workflow, you actually want to probably be modifying the files and then committing them back to the repo. And that's much harder to do from a web interface with a web architecture than it is for a plugin client that's inside of a developer's editor. So there are cool little differences there, right? Like tying references together and doing code search and stuff like that, making sure that you can control-click through to a Git repository name.
These are some of the ideas that we're working on with the VS Code plugin. But... That sounds awesome. Yeah, but we are working on a general purpose Flux UI. Jordan Pellizzari from the Weaveworks team and Bianca Castanza from the Weaveworks team have been pairing on that in the open. There is a Flux UI repo. The goal of that is to really show you, okay, what's happening in your cluster? What's managed by Flux and what's not? So the answer is: not yet, specifically, for that question. But also there are tons of other user interfaces and things like that, since Flux is Kubernetes native, where you can just use those tools and see exactly what's happening already. Yeah, the quality of the events and status objects here... like right now I'm looking at a reconciliation-type object, right? This is a Kustomization. But if I look at the Git repository instead, right, I can see, oh, okay, at this time we fetched this particular revision from this branch, and this has succeeded, and the source is ready for other things to use it, right? So if you're ever confused now, like, what's the state of my repository, source controller always updates the status whenever it operates on it. And previously, the experience here was to read the fluxd logs and do parsing. We even had platform teams, people building products on top of Flux, with controllers that were parsing fluxd logs. So this is a much better machine-readable, structured experience that you can also use a UI for. Thanks. There's another question about: can we manage multiple clusters from a single repo? Ooh, I'm excited about this. Yeah, for sure. This is not particularly migration related, unless you're interested in taking other fluxd configs and then moving them into some kind of management cluster. Technically it's possible in Flux 1 to actually load a kubeconfig and have it be hosted in one cluster but applying it somewhere else.
Now, flux one again is like a very build-your-own-platform kind of thing, right? So with flux two, this was an important use case. And if you look in toolkit components at kustomize-controller, we're actually, I think we, I'm just trying to remember where I wrote the docs for this. Kustomization, and then we have, yeah, remote clusters and Cluster API, right? So if you go to the flux two website, on the sidebar under toolkit components, kustomize-controller, Kustomization, we have remote clusters and Cluster API, a section about applying resources from a source in one cluster, using a kubeconfig secret, to another cluster. And this composes really well with Cluster API, but it's actually a general mechanism. So basically, say you had Cluster API, you have this cluster definition. It outputs a secret with the same cluster name that contains an administrator kubeconfig into the same namespace. And then inside of the Kustomization, let me make sure to increase the font size here, where you would normally say, hey, from this source inside of my cluster, at this path, on these intervals, pruning resources, and you can do garbage collection and health check configurations, all that stuff. I would like to actually reconcile all of those resources into a cluster that's accessed with a kubeconfig from a secret in the same namespace. And this is safe to do because we don't allow this to be a cross-namespace reference. That's pretty cool. So the Git repo is being watched by flux in one cluster, and that flux in cluster one will apply to cluster two, right? Yeah, specifically, if we look at the deployments again, get deploy. So yeah, you would have these configuration resources say this is the management cluster, source controller will pull the repo, and then kustomize-controller sees the kubeconfig secret and knows how to apply those resources to a different cluster. Yeah. Cool.
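As a rough sketch of the pattern Leigh describes, a Kustomization targeting a remote cluster could look something like this (all of the names here are hypothetical; the referenced secret is the kind of admin kubeconfig a Cluster API `Cluster` writes into the same namespace):

```yaml
# Sketch of cross-cluster reconciliation (hypothetical names).
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: workload-addons
  namespace: clusters
spec:
  interval: 5m
  path: ./addons
  prune: true
  sourceRef:
    kind: GitRepository
    name: fleet-repo
  # Apply the rendered resources into the remote cluster whose admin
  # kubeconfig is stored in this secret. The secret must live in the
  # same namespace; cross-namespace references are not allowed.
  kubeConfig:
    secretRef:
      name: workload-cluster-kubeconfig
```

Without the `kubeConfig` field, the same Kustomization would simply apply to the cluster it runs in, which is why this composes as a general mechanism rather than something Cluster API specific.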
So I would understand that question as maybe a simpler use case. If we want to manage multiple clusters from the same repo, I would imagine, and maybe I'm asking this and not the user, but can flux just watch, can multiple clusters just watch the same Git repo, and somewhere we can say, you should look at this directory and you should look at this directory, kind of like within one repo. Yeah, I guess I got a little bit distracted and excited about remote cluster management from a central management cluster, but you're totally right. That question is really about, can multiple flux instances watch the same repository? And the answer is totally yes, yep. They could watch the same path, they could watch the same ref, right? Or you can point flux at different tags. And this is a common thing. We actually have a video in our YouTube series, it's called The Power of GitOps part three, where I talk a little bit about release engineering. And usually the best way to kind of keep track of things is to whiteboard a little bit. Sorry for anybody who's got a bright television, Microsoft Paint defaults to the white background, right? But yeah, like you were saying, Itai, you can have two clusters, each with their own flux installation, and then you can have sources in different namespaces. Maybe this one just has one, and then each one of those has reconcilers. But those sources can point to the same repository if you want, right? So say we've got a GitHub repository over here, how do we draw a cat? I don't know, but I just wanna say your Paint skills are just amazing. Doing this with a mouse, really impressive. Yes, yeah. I'm thinking of getting a tablet. That would be excellent, right? Yeah, so I don't know. Let's create multiple Git repos, and then let's make one more of those, but then instead of GitHub, it's like GitLab, which is more triangular. Yeah, there you go, right? So the diagram here, right?
You've got two Kubernetes clusters, and then the flux installation, all of those controllers, and each of these resources can be configured independently. All right, so this one can point to this repo, this one can point to this repo at a different path, right? Since the path configuration is here, this one can be on like A and then this one can be on B. So those are the Kustomizations or Helm releases or whatever you want, yep. And then similarly, this one could point to that GitLab repo instead, right? And then you could create other configs, you know, that point to another repo, but these two clusters then can get the config from wherever it makes sense for you, right? So again, with this pattern of building the most powerful GitOps tools that are flexible, so that you can build the platform that you're looking for, yep. All right, and maybe just one final question from the chat before we move on. There was a question about clarifying whether the V2 Helm controller would do something more than the existing Helm release object that exists in V1, so maybe just reiterate the benefits there around Helm for V2. Yeah, that's a good question. I would almost say that the Helm operator migration is even higher priority. Like, you can install Flux 2 already and then start moving your Helm releases over. We have an entire section of the documentation about migration. So if you go to the Flux 2 docs and then you go to the migration section here, I need to zoom out a little so it's more legible. So we've got stuff about the support timetables. We have a general one about migrating from Flux V1. Here's the image automation stuff, and then here's migrating from Helm operator to Helm controller. And you can see that there are a lot of very nuanced considerations here, so I don't wanna say that the migration is simple, but the Helm release API is largely unchanged.
So most of the things that you're doing in a Helm release will port over very well from Helm operator to Helm controller, and the great thing about GitOps is you already have all of those things inside of your Git repository. So as far as migrating from Helm operator to Helm controller, you have feature parity, and then like you mentioned earlier, there's additional benefits. You get better statuses, like separation of fetch versus reconcile. So one major difference is that now your specification for which Helm repository you're using and which chart will go into an object that's managed by source controller instead. So you will move your Helm chart definition to either a Helm repository or a Git repository. And then your Helm release will reference that thing. And that's a great question. Hopefully that covers it at a high level, but I would highly suggest just starting to read through this migration doc if you are using lots of Helm releases, or you could just look at the API reference and try it out. So if we go to the toolkit components, and then under Helm controller, this is the API reference here. It just shows you all the fields. If you compare this to the Helm operator API reference, it's very similar. And then if you go through the guide for managing Helm releases, then you can see how to set up a Helm repo, right? So here's the source controller API object. You just say, hey, I need to fetch this repo, and then you could use a Git repo instead, which is kind of a cool feature. There's the cloud storage example, and then defining a Helm release looks very similar to before, with one difference, which is that you would reference that source, and it's inside of a chart template. Right. Yeah, I think we can continue with the demo flow, because we are nearing the top of the hour, so I don't want to take too much time. Let's get team A migrated over from flux one into something that's managed by our flux two control repo. That would be awesome.
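A sketch of the split just described: the chart source becomes a HelmRepository object owned by source-controller, and the HelmRelease references it through a chart template. The names and chart here (podinfo) are illustrative examples, not the demo's actual workloads:

```yaml
# Chart source, managed by source-controller.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 10m
  url: https://stefanprodan.github.io/podinfo
---
# The release, managed by helm-controller, references the source
# via the chart template instead of embedding repo details.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 5m
  chart:
    spec:
      chart: podinfo
      version: ">=5.0.0"
      sourceRef:
        kind: HelmRepository
        name: podinfo
```

The `spec.values` and most other HelmRelease fields carry over from the Helm operator API largely unchanged, which is why the migration is mostly mechanical.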
So if we look at the deploys, or team A's flux, we can see right here that this is their repo and this is their config. I can almost just step forward from here. So if I do a flux create source git, inside of the team A namespace, and then I call this like team A app repo, then I just need to provide a URL. You'll notice that this is a fully formed SSH URL. It's not the short form here. That is important. And then I think it's branch master. And then we have a path here, right? But a path isn't necessary to clone a repository, right? So that's not a necessary part of the source configuration, right? Remember that path is actually part of reconciliation. I have a different object that tells kustomize-controller, hey, go look at that source and reconcile this path, and you can do it for multiple paths without cloning multiple times. I think this is basically what I want here. Yep, so you'll notice something, right? Like every time we wanna clone a repo, it's good to have an independent credential for that. And if I use the flux command line tool to create this object, instead of just using kubectl apply, we actually have an experience here that generates a new private key for you. Certainly we could try to reuse the previous private key. There's a few differences, because we don't, sorry, one second. Kind of struggling to read the output here. I think this is the public key that I want. Okay, so I'll need to go into the team A repo and then add that deploy key. We're generating a new credential here because in flux one, it wasn't required to have the known_hosts inside of your secret. And now that's an explicit configuration. So it makes your infrastructure deployment of things a little easier to manage, now that it's not an artifact of the flux deployment. And for here, we can just make this read only. It's not gonna be super necessary. There's the deploy key that we've added. So it says, hey, have you added the deploy key to your repo? Yep.
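The source object created in this step would look roughly like the following. The URL and names below are placeholders standing in for team A's actual repo, and the secret holds the SSH key pair and known_hosts that the CLI just generated:

```yaml
# Sketch of the GitRepository that `flux create source git` produces
# (placeholder URL and names; note the fully formed SSH URL, no path).
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: team-a-app
  namespace: team-a
spec:
  interval: 1m
  url: ssh://git@github.com/example/team-a-app
  ref:
    branch: master
  secretRef:
    name: team-a-app   # SSH private key plus known_hosts
```

Notice there is no path field here: the source only defines what to fetch, and the path belongs to the Kustomization that reconciles it.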
So here it's applying the secret and the repo credentials to the cluster. It's configuring the repository source in the cluster. And then it is making sure that that thing reconciles for me, so that I know that it's configured properly and that that repo is fetched, right? Similarly, if I do a flux get sources git inside of all namespaces, then inside of the team A namespace, you can see I have that resource, and it's fetched to the master branch at this commit. And then as a person with good GitOps hygiene, I would want to then export that source, write it to a file inside of my control repo, right? So team A app, git-repo.yaml, and since I modified the cluster, right? You could also do this the other way. It's just you wouldn't have the credential flow that does the PKI for you. You would have to do that with your own systems, right? So I could have just added this file to my repo. This Git repository would have shown up in the cluster, but then secrets management would have to happen some other way. But using the flux command line tool as a platform administrator, you have this really easy access to make sure that a repo gets bootstrapped into the cluster. And there's probably some room for improvement on the create source tool. We are working on a refactor of bootstrap. And I think that having it configure that auth for you, so that you don't have to go to the web UI, would be a logical step, since we already have code for that. Well, I added that to the repo. We're just gonna say configure team A repo, right? If you've never seen the password prompt for a commit before, that's because I'm signing my commit with GPG. So that's just only configuring the repository, right? But then if we remember how flux is configured, right? We've fetched the repository into the cluster, we've got credentials for that, that's awesome. But it's not reconciling yet.
So before I actually configure the reconciler, I want to scale down the flux inside of that namespace, because it's actively pulling the repo and then applying it into the cluster, right? I want to scale it down so that it's not doing anything. And if this flux was managed by the control repo, instead of modifying the cluster, we would manage that in the control repo, and just remove it at that point or something. So now nothing is reconciling team A's repo, but I can quickly correct that by creating a Kustomization. The source is called team A app. It's in the team A namespace. And yeah, I suppose we could probably just export it already. And we'll call this team A app. All right, so I just used the flux command line tool here to generate this, and I'm using the export flag. So just showing a different way. If I take the export flag out, it will apply it to the cluster. But if I just put this into my repo, it will also apply it to the cluster. I'm also noticing I'm making an error here, right? Because the path is supposed to be different. So with that, a path. So there, config, I think that that was right, right? The path is config. We'll just do this like everything else. You might as well set prune to true as well. There we are. Cool, so go ahead and write that to a file in the repo. We'll call this commit, for that file, configure team A Kustomization. So let's use the flux command line tool to just ask the repo to reconcile, since I don't have a webhook set up. In reality, it's probably reconciled already, but let's just make sure for the purpose of the demo that we're up to date. We can see that the control repo is at this commit. This source got reconciled into the cluster. Similarly, you could get the sources for all namespaces. You see that we're here on this repo and here on this repo. And we can also get the Kustomizations in the cluster.
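The exported Kustomization from this step would look roughly like this sketch, with the path pointing at team A's config directory and pruning enabled (names assumed to match the source created earlier):

```yaml
# Sketch of the exported Kustomization for team A (assumed names).
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: team-a-app
  namespace: team-a
spec:
  interval: 10m
  # Reconcile only this path within the fetched source; the same source
  # can be reconciled at multiple paths without cloning multiple times.
  path: ./config
  prune: true   # garbage-collect resources removed from Git
  sourceRef:
    kind: GitRepository
    name: team-a-app
```

Committing this file to the control repo has the same effect as applying it directly, which is the point of the export flag: the cluster change and the Git record stay in sync.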
And we can see now that we have a team A app Kustomization that was applied to the repo using GitOps, and it has applied this revision. So at this point, team A with their flux one deployment has been migrated, because their repo was really simple. It just had plain manifests, right? We didn't have to do any API changes. We didn't have to move their image update functionality. We didn't have to do manifest generation. So there are some nuances in that kind of thing, and we've got guides for all of that. Like, the more complex features that a team or platform administrator is using, like if you're using manifest generation to do customization, you may need to restructure some things, it just depends on what you're doing. But the migration story is pretty good. And what you can also see, right, is that we have team A reconciling from flux two, and it's managed by a flux two control repo. But then when you look at all of the deploys in the cluster, the team B stuff is still managed by flux in their own namespace on flux version one. So it's possible to do this as a staged rollout, right? Where you have lots of teams in the cluster, some of them running on flux one and some of them running flux two, not simultaneously per team, but within the cluster these things can cohabitate, which is a really important part of the migration path. So we hope, yeah, that you'll go and try this out in your environment and get involved with the community if you're having trouble, because we need to make sure that this is a good story for the countless flux users out there. All right, thank you. So I see that there were a few other questions in the chat, but unfortunately we're running out of time. So I just wanna point out that you can ask those questions in the CNCF Slack under the cloud-native-live channel. And Leigh, is there any other way to reach out to you or the team to ask further questions?
Yeah, also in the CNCF Slack, which is where we've been suggesting to ask questions for this webinar, there is a #flux channel. That is the main Slack channel for the community if you are trying to have ad hoc conversations. If you want something that is searchable on Google, so that when you have a conversation there about a problem that your team had, or you think that a solution to your problem would be best documented where someone can find it forever, not Slack: GitHub discussions on the Flux 2 repository. I might as well show that, since I'm sharing my screen, right? You go to the Flux 2 repo and you go to discussions. This forum feature of GitHub, if you haven't used it before, is awesome. You can see we have tons of people in here asking questions. Maybe your question has already been asked and you can find it. If not, feel free to please contribute to the discourse that we have here in discussions. And hopefully we can get a good answer to you. You can see all of our cool contributors over here are pretty active with getting back to people. So yeah, discussions and our Slack channel there in the CNCF Slack are some of the best ways to get to somebody, unless you have an actual issue that you want to file. All right, thank you. So thanks again. This has been really interesting. This has been Cloud Native Live. We are here every week on Wednesdays. I'm Itai Shakuri. Together with me today was Leigh Capili from Weaveworks. Join us again next time. Thank you. Thanks, Itai. Bye, friends.