Hey friends, Stefan and I are here to talk about Flux becoming an incubating project and to give you a view of the road ahead. I'm super excited to be joining Stefan.

Hello everyone, I'm Stefan Prodan. I'm a developer experience engineer at Weaveworks and a long-time Flagger and Flux maintainer, and I'm very happy to be here and speak about the future of Flux and where we are going with it.

Me as well. I'm Leigh Capili, also on the developer experience team at Weaveworks. Stefan and I have been in the cloud native space for a while. If you contribute in the Kubernetes community, you may have noticed me over there; I used to work on kubeadm quite a bit, and now we're doing cluster add-ons and looking at other kinds of deployment artifacts for Flux. I'm also really interested in security and multi-tenancy, and I maintain a project called Ignite, which runs micro VMs in a Docker-like way. Both Stefan and I really like helping people, which is why Flux is a great project for us.

Flux is super awesome, and I'd like to tell you a little more about why we think so. It's one of the more mature technologies developing in the cloud native space right now. Here you can see Flux alongside a bunch of other amazing projects, and Flux alongside Helm as reported by the CNCF technology radar in mid-June last year, which said: these are things you should really look at adopting into your production workflows. Many users have chosen to be on that journey with us, and the Flux community has done so much to learn and pioneer what it means to do GitOps. We have a diverse set of users who operate Flux and do GitOps at different scales and in different formats and varieties, so we find GitOps is working for people whether they are managing their own services or offering Flux as part of their product.

As the Flux project has developed, we've started to formalize what a good opinion for doing GitOps can be. There are a couple of principles we use to guide people toward good social and technical solutions.

First, declarative systems. If you can describe your system declaratively, that's a prerequisite you need to be able to meet. If you're working with systems that have imperative APIs, thinking about how to express those things declaratively can help you on the road to GitOps.

Second, you put those declarations in configuration stored in source control. When you have a source control system like Git or Subversion, or even something like Google Sheets that can act as a versioned declarative store for information, you're meeting that second tenet.

Third, you use software agents that take those machine-readable configurations and reconcile them toward your infrastructure, your process, or your policy, whatever you're trying to manage with GitOps, to either ensure correctness or alert on drift when it occurs. You really do need some sort of continuous reconciliation here; that's what we've found to be a good recommendation. It's not just eventing, although events can be a powerful part of a GitOps platform and help you integrate with other systems like CI.

When you follow these principles you get some pretty clear benefits. The first one maybe seems obvious, but it's a little more nuanced: why do we get collaboration on infrastructure?
We know that devs know Git. Some would even say they love it; others may not. But developers have all adopted Git, which means you likely already have a solution for managing Git in your organization. So regardless of the complexity of your team, you have some of that organizational complexity encoded in how you store and version your code. If you do the same with your declarative config, you get the benefits of that existing platform: access control, an auditable history, and, on top of that historically controlled declaration of how you want things configured, drift correction and clear boundaries of access between the actual infrastructure and the place where your dev teams collaborate. There's a security boundary here, and you get the benefits of how your organization already works.

All of these concerns are things Kubernetes is not really scoped to take full responsibility for. There's no version control meant to be accessed by humans inside the Kubernetes API, and in the same way, collaboration on comments and missing fields, why things were done, in what order, who was involved in making decisions, that's not Kubernetes' responsibility. But it can be the responsibility of your Git platform, and that's how we get GitOps.

So why is Flux a good fit, and what's the scope of the project when we talk about GitOps? We talk about Flux helping you provide complete continuous delivery capabilities on top of Kubernetes specifically, and supporting Kubernetes best practices by tying in the best-in-class cloud native tools that are emerging: Kustomize, Helm, metrics with Prometheus, and so on. We've broken up the architecture to be Kubernetes-native, very extensible, and open-community friendly.

I mentioned that we already have a large Flux user base, and that points to a multi-year journey. As we've come to learn and make GitOps more mature, we've also learned that we need to make the software more mature. So for the better part of a year now, the maintainer team involved with Flux has grown, and we're working on Flux v2.

The main difference we want to point out is that Flux v1 was built as a targeted but monolithic piece of software, responsible for syncing a single Git repository and applying it to a local cluster, plus the image automation feature that people grew to really use and love. In Flux v2 we've identified how to compose the different pieces of GitOps more accurately. If you need to implement GitOps to meet your particular organization's needs, we've split up the APIs so you can do exactly what you want, and we've implemented the reconciliation of those GitOps-related configurations, how you actually assemble your platform, with Kubernetes-native APIs and microservices, rewritten from scratch. It's now possible to sync multiple Git repositories and apply them at different times.
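As a rough illustration of that split, here is a hedged sketch of the two core objects involved, a GitRepository source plus a Kustomization that applies a path from it (API versions are as I understand the Flux v2 APIs; the URL, names, and path are placeholders):

```yaml
# A source: source-controller fetches this repo on an interval.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: my-app                # placeholder name
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/my-app-config   # placeholder URL
  ref:
    branch: main
---
# A reconciler: kustomize-controller applies a path from that source.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-app
  path: ./deploy/production   # placeholder path
  prune: true                 # garbage-collect resources removed from Git
```

You can define as many of these pairs as you need, each pointing at a different repository, branch, or path.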
You get a really rich feature set: you can apply them to local clusters or remote clusters, and much more. The project structure has changed, too. If you're interested in getting involved and contributing, or even just keeping a heartbeat on what issues, bugs, and releases look like, we now have multiple repos comprising the several controllers that make up the Flux v2 effort. Also, Stefan happens to be the original creator and maintainer of Flagger, and we're happy to note that Flagger is now part of the fluxcd org as well, so we have that repo too.

I want to talk about what makes Flux v2 so awesome. Flux v1 was a really great first step, and if you're using Flux v1, I want to help you understand what's getting better and what you gain from Flux v2, the things that have traditionally been challenging. We've really thought about them, and the project is moving in a super exciting direction.

Flux gives you flexible tools to implement GitOps for your team's specific needs. If you want to do declarative Helm, you can take those imperative, local-laptop workflows where you're iterating on a cluster and modifying it, and move them somewhere it's possible to collaborate with GitOps; Flux lets you do declarative Helm. If you want to represent each piece of your infrastructure's configuration as separate bits, separate packages, components, or folders, you can configure health checks and create dependency ordering between those components through a DAG, a directed acyclic graph. You can create a dependency tree if you'd like.

That makes for a super awesome bootstrap story, and we've also done extra work to support external platforms like GitHub, GitLab, and Bitbucket. When you put all of these pieces together, you get a story where you can create a brand new cluster, point it at a declarative repository, run a flux bootstrap, and never have to modify it manually with any kind of imperative workflow, even for things that need to be deployed in a specific order. That's an amazing feature set, and it brings organizations a lot of success and peace of mind with regard to disaster recovery, scaling out new deployments for new customers, or whatever it may be. These are some of the immediate benefits you get from doing GitOps in a declarative way, and Flux can now handle dependencies as part of that.

Flux can also do very sophisticated actuations of continuous delivery. If you want your Git artifacts, your Helm artifacts, your config to be tagged or versioned in a specific way, you can ask Flux to stay up to date with that tagging policy. If you want your Git repo to only release config when you push a tag that is a semver patch bump, you can tell Flux to do that.
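A hedged sketch of what such a tag-based policy can look like on the GitRepository source (the URL and semver range are invented for illustration):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: app-config
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example-org/app-config   # placeholder URL
  ref:
    # Follow tags instead of a branch: the newest tag matching this
    # semver range is what gets reconciled to the cluster.
    semver: ">=1.0.0 <2.0.0"
```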
The same is true if you're holding Helm charts inside a Helm repository, and we also support storing your Helm charts directly in Git, with some caveats. So there are lots of very powerful feature sets there for automating the mechanisms needed to do continuous delivery, so that you have less to do when you're actually just trying to release software. Along with that, we have maturing APIs and controller mechanisms for doing image tag updates that compose really well with these tagging strategies. There are lots of mechanisms available for you in Flux to do awesome GitOps the way that you need to.

Something I'm particularly excited about is the ability to compose multiple repositories, folders, branches, refs, whatever you can imagine. If you want to store your configuration in that place, or this way, or in a bucket, it's possible with Flux. This also gives us really nice synergy with our multi-tenancy story: each of these pieces that you reconcile to a cluster, for a particular tagging policy or whatever you need, can also be restricted via RBAC through our support for service accounts. That leads into our support for kubeconfigs: for any of the artifacts you're synchronizing to your cluster using the Flux APIs, you can also say you want it synced to a remote cluster. That gives you the ability to do central fleet management, or multi-cluster management from a dedicated management cluster, all kinds of fun things. You could even use this in a B2B relationship where you're offering a service: you could use a management cluster to remotely apply things from your GitOps control repo into a customer's cluster.

Lastly, because we've built everything with cloud native tools in mind and in a Kubernetes-native way, Flux is more observable than ever. Previously, in Flux v1, folks would frequently struggle to read the log messages or understand why syncs were not occurring. Because things have now been broken out into individual pieces, represented declaratively in your Git repository and subsequently in the Kubernetes API, you can tell if something is failing to fetch on the GitRepository object, you can tell if a validation issue is occurring with your configuration on the Kustomization object, and if Helm is failing to upgrade a particular thing, you can check the HelmRelease object. All of that status is exposed in kstatus-compliant custom resources, along with events and Prometheus metrics that you can instrument on. You'll see later in Stefan's slides that we have an awesome dashboard for this. Additionally, our notification controller is super generic and very powerful.

Across the board, Flux is built and factored so that you can control the individual pieces and also extend them, which is an expectation I think we should hold as a high bar in the cloud native ecosystem, and one that Flux gladly meets.

Probably the most important feature of Flux is the strength of the community. I've tried hard to emphasize and highlight that where we are with GitOps today is because practitioners in the field have taken a bet on the Flux project and on the ideology of GitOps and moved it forward. We've seen more of that maturity as we've come to both constrain and expand the definition of what GitOps is, with things like GitOps Days.
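Before the community numbers, a hedged sketch of the Helm side of those version policies, a HelmRepository source plus a HelmRelease tracking a semver range of chart versions (the chart, URL, and range are placeholders):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  url: https://stefanprodan.github.io/podinfo   # placeholder chart repository
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      version: "5.x"      # follow the newest chart release in this semver range
      sourceRef:
        kind: HelmRepository
        name: podinfo
```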
If you just look at the statistics of the project, we're at 40,000 contributions and counting; 26,000 of those have been since sandbox, so from sandbox to today, which is incubation, there has been a lot of activity. 1,888 contributors is a pretty incredible stat. 16,000-plus commits. We have 14 maintainers now from five different companies, and there were six maintainers from three companies at sandbox time, which really points to sustainable growth in the leadership and maintenance of the project. We are building something sustainable and a true community effort here. 12,000-plus GitHub stars, so definitely lots of growth there. And it's just clear when you get involved with the community that we're building something super healthy, people helping each other and lots of love, as well as an excellent code of conduct.

If you're looking to get started with Flux, say you're new here and just learning about it, our website has a great path for you, so go ahead and check out our getting started guides. We've also done our best to show, transparently, throughout the Flux v2 development process what these new features look like and how they're meant to be used. We have a call every other Monday that you can join; check out the meetup page, and the recordings get posted to a YouTube playlist where you can see, very often me, but also other Flux community members, either flail through a demo or do something really fun and cool. Also, a shout-out to Viktor Farcic, who did a great deep dive into and demo of how Flux v2 works; go check out those YouTube videos if you learn well in that format.

Now, this would be incomplete if we didn't also talk about Flux v1 users and their road to maturing with Flux as we move to v2. We have an excellent migration story that we're continuing to improve, and already tons of docs that cover every caveat you can think of, every consideration or change in behavior we'd like to encourage as you move to Flux v2. Go check out the migration section in the sidebar of our documentation site; you can get there by going to fluxcd.io or the links below.

So, Flux v2 and its community, I can't stress this enough. I'm so excited about what we're doing with Flux v2 to produce flexible tooling that lets you do things the way you need to, to build the proper GitOps approach for your work, which includes thinking about not just technical solutions but also social ones. Flux v2 is going in a really good direction, and that's why we have Stefan here today, to give you some context and a deep dive into how Flux really works. So with that, I'll hand it over.

Thank you, Leigh. I'm going to try something different today: I'm going to talk about Flux and its features from a user perspective. We've split the Flux personas into three categories. We have cluster operators, the nice people who create clusters for us, maintain them, do provisioning, upgrades, and so on. We have platform engineers, those who build continuous delivery pipelines and help developer teams get more velocity, and who also do engineering work: they can extend Flux in ways we haven't figured out yet, or trim it down and use only the components they need in their workflows.
And finally, app developers. Of course they rely on continuous delivery to get their code onto production systems, but it's not only about the production system; it's the journey: you commit something to your source code, it goes through different stages, CI, CD, environments, promotions, and you get feedback on what's going on with your app before it reaches its final state on the production cluster. So I'll try to explain Flux from these three perspectives.

Let's start with cluster operators. What does a cluster operator have to do? First, work on a cluster definition. That can be, let's say, an eksctl config where you set up your IAM roles, your VPCs, your node groups, and so on, or it can be a Terraform project where you use some cloud provider, or you target your on-prem clusters or even bare-metal systems. So that's step one: you figure out how to define your cluster and how to create that cluster inside your infrastructure.

Step two is what you want to provide within that cluster: the cluster add-ons. What CNI are you going to use, what ingress controller, are you going to use a service mesh, and so on. There are so many differences between how clusters get composed that it's hard to find two clusters alike; whenever you look at a Kubernetes cluster it's probably running some add-on you've never heard about, because there are so many add-ons out there.

Third, as a cluster operator, you want to onboard tenants. What is a tenant? A tenant can be a dev team, or a tenant can be a whole organization if you provide this as a service to others. The idea of a tenant is that you have to isolate and put some boundaries around what a tenant can do to your infrastructure. Can a tenant delete nodes? Can a tenant wipe out your ingress controllers? Maybe not. So you have to set some boundaries for these tenants.

And lastly, you have to maintain these clusters. You have to upgrade them, deal with CVEs. Kubernetes used to be on a very fast release track; it's not quite that fast now, but you still have to keep up with the latest version, not only of the cluster but also of the add-ons.

So how do you bring structure to this, and how do you make every change traceable? You store everything in a single repo, your infrastructure or fleet management repo, where you put together all these definitions. That's one consideration, but once you have the cluster up and running with all the add-ons and all the tenants, they can subscribe to the same GitOps principles as delivering apps. The main difference between delivering apps and delivering cluster add-ons is that you don't control the CI system or the build system of the add-on; you're a consumer of it, and most of the time you're also a consumer of the add-on's configuration. You may be using some Helm chart to install, let's say, an ingress controller; you won't be developing that from scratch. You want to reuse what's already there and make some small changes to it. So you change things in your infrastructure repo, and then some continuous delivery system applies those changes to your fleet of clusters. That's the GitOps principle. I'm calling these different clusters environments.
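For instance, a fleet repo like that might be laid out with one directory per cluster environment; this is just one possible convention for illustration, not something Flux requires:

```
fleet-repo/
├── infrastructure/        # shared add-on definitions (ingress, cert-manager, ...)
├── tenants/               # per-team or per-customer definitions
└── clusters/
    ├── dev/               # overlay + Flux config for the dev cluster
    ├── staging/
    └── production/
```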
You can have a dev environment, a staging environment, and so on. You'll have what we call overlays, like Kustomize overlays, where you take an add-on and make small modifications to it depending on which environment it needs to land in. For example, maybe you want a database with ephemeral storage on your dev cluster, but on your production cluster you want to change how the storage is managed and switch to, I don't know, PVCs and StatefulSets. So there are challenges in building these overlays and feeding them into your environments.

Flux version 2 comes with tooling to help you bootstrap clusters and have a consistent way of creating new clusters, modifying them, upgrading them, and so on. In terms of bootstrapping, we offer two things. One is the Flux CLI, which has a bootstrap command that works with GitHub and GitLab for now; we're also looking at extending it to Bitbucket, and we're working as we speak on an SSH agent implementation. The idea is you tell Flux: hey, I want to create a repository on my Git provider where I'll store all the infrastructure items, not only for one cluster but for my whole fleet of clusters. You run this flux bootstrap command, you give it your organization name and your repository name, and you tell Flux which cluster to target. Flux will use your kubeconfig, so whatever your default kubectl context points at, the bootstrap will address that cluster.

Maybe our Git provider implementations aren't enough for you, which is why we've also reused much of the Flux bootstrap code to create a Flux Terraform provider. You can use the Terraform provider and target your own Git host, your own clusters, and so on. So there are two ways of setting up Flux on your cluster and configuring deploy keys so that Flux has access to your repo; you can also configure what access it has to that particular repo. Flux bootstrap is also how you upgrade Flux on clusters. It's idempotent: you can run it as many times as you want, and if something is new, if you specify a new version or use the version embedded in the Flux CLI, it will detect that it needs to upgrade, push that change to your Git repo, and Flux will update itself. So Flux version 2 manages itself through Git.

Let's see how this looks. You have the fleet repo where you store all your definitions and an overlay for each cluster; the Flux controllers run on each cluster and pull changes from the fleet repo. Any change to an infrastructure item, or say you add a tenant, or you change the access policy of a particular app, you change some RBAC and so on, Flux will detect that change and apply it on one cluster or on many clusters, depending on how you want to do multi-cluster management.

Version 2 comes with many features. One important feature we've built in is dependency management for infrastructure and apps, and I'll give you an example of what that means. Let's say you want to install a controller that comes with its own custom resource definitions. Say you want to dynamically provision certificates from Let's Encrypt for your apps; the best way to do that is with cert-manager, which is a CNCF project. The thing is, inside the same repo you'll have the cert-manager definitions.
Maybe you refer to the upstream Helm chart for cert-manager and want to install cert-manager from that chart, and maybe you want to get the cert-manager custom resource definitions from their GitHub release page. Flux allows you to combine plain YAMLs that come from URLs with YAMLs that come from Git repos and with configuration that comes from Helm repositories, so you can bind them together and create a definition that reconciles cert-manager on your cluster and keeps it up to date.

Now, let's say you want to create certificates using the cert-manager custom resources. If you apply the certificate definitions along with the cert-manager deployment at the same time, in Flux version 2 they will fail. Why? Because we've enabled Kubernetes API server-side validation by default. We did that because we want every commit you make, every change to your infrastructure, to be applied as a transaction on your Kubernetes cluster. If you change, let's say, ten manifests and one of them isn't acceptable, maybe something that Gatekeeper will reject, or even the Kubernetes API will reject because there's a typo, instead of type LoadBalancer you mistyped balancer, stuff like that, then the Kubernetes API rejects the change.

So how can we compose our infrastructure items in a way that works with Kubernetes and still enforces validation? We have a dependsOn field inside our custom resources. You can say: I want cert-manager and its custom resource definitions installed, and I also declare a health check here that looks at the cert-manager deployment. So: install cert-manager, make sure cert-manager is up and running, and only then apply the certificate manifests. This applies to many things, not only cert-manager. Maybe you want to enforce policies right at cluster bootstrap; maybe you want Kyverno or OPA Gatekeeper to be the first thing deployed on your cluster, and only then apply other configuration you don't control. Maybe those other configurations come from repositories managed by your dev teams or by your clients, and you want the policy applied right at bootstrap. You can do that by building a dependency graph out of your infrastructure and apps, and you can define dependencies between Helm releases, between plain YAMLs and other Kustomizations, between Kustomize overlays and so on. The dependsOn graph takes into account all of the Flux syncs, so to say.

Another feature we've added in version 2 is the ability to impersonate a Kubernetes service account when doing a reconciliation. Let's say you have a Helm chart and you don't control what's in it, but you know for sure that when that particular app gets installed it shouldn't create, for example, an Ingress definition, or a cluster role binding, or make itself cluster admin. How can you prevent that? You can prevent it by telling Flux: when you install this particular app, use this service account. For that service account you can set up restrictions using a role binding, for example, and say: from this repository, or from this Helm chart, everything applied to the cluster cannot modify anything other than objects inside this particular namespace.
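A hedged sketch that combines both ideas on a single Kustomization: it waits for cert-manager to be healthy before applying certificate manifests, and it reconciles under a restricted service account (every name, namespace, and path here is invented for illustration):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: certificates
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: fleet-repo                  # placeholder source
  path: ./infrastructure/certificates # placeholder path
  prune: true
  # Do not apply the Certificate manifests until the cert-manager
  # Kustomization has been reconciled...
  dependsOn:
    - name: cert-manager
  # ...and its deployment reports healthy.
  healthChecks:
    - apiVersion: apps/v1
      kind: Deployment
      name: cert-manager
      namespace: cert-manager
  # Impersonate a restricted service account: RBAC on this account
  # bounds what this Kustomization is allowed to create or change.
  serviceAccountName: certs-reconciler # placeholder account
```

The same dependsOn/healthChecks pattern is what you would use to make a policy engine like Kyverno or Gatekeeper land before everything else at bootstrap.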
So we have namespace encapsulation, but it's not only about namespaces. One app can spread itself across multiple namespaces inside the cluster, right? Maybe an app is composed of microservices and you have a database namespace and a front-end namespace and so on. What you can do is create a cluster role binding for that particular service account and grant it access only to those namespaces. So it's not only about one app, one namespace; it's about what that app needs to do inside the cluster. You allow it to do that, but only that, nothing more.

This is what I call the soft multi-tenancy that Kubernetes offers through namespaces and RBAC, but that doesn't mean it's the right approach for everybody. In many cases you may want hard multi-tenancy, where you dedicate a cluster or a set of clusters to each tenant. For that, Flux integrates with Kubernetes Cluster API. In your fleet repo you can place the cluster definitions, and you install Flux on your management cluster. The management cluster runs Flux and it also runs your Cluster API provider. When you add, for example, a new cluster definition, you can tell Flux to apply that cluster definition and wait for the cluster to be created, and then you can tell Flux: on that particular cluster, please reconcile that particular repository, which is your tenant repository. This way you can isolate tenants at the cluster level and have a hard multi-tenancy approach to dealing with tenants.

Another thing we have in Flux is the integration with Mozilla SOPS. If you haven't heard of it, SOPS is a CLI tool that lets you encrypt fields inside a file, so you can safely place a Kubernetes Secret manifest in a public Git repo and no one will be able to see what's in there. The manifest will be encrypted, and only Flux, only the cluster where Flux runs, has the private key. And you don't need to run yet another controller; Flux does this by default. For example, you tell Flux: use this PGP key to decrypt the secrets in my Git repo, or connect to this cloud KMS implementation, be it AWS KMS, Azure Key Vault, Google Cloud KMS, and so on. SOPS integrates with many backends, even HashiCorp Vault, I think. Flux uses Mozilla SOPS as a library and can connect to all these providers, pull the private key from there, and decrypt the secret before applying it on the cluster. So this way you have secret management out of the box, with SOPS on the client side and Flux on the server side in your clusters.

You can also use GPG, not to authenticate, but to verify that the person who made a particular change in the Git repo was allowed to do that. The way to enforce this kind of validation is by making your teams sign their commits with GPG. Then you collect the public keys of all the team members who are approved to make a change, let's say on your production cluster, and you tell Flux: only these people are allowed to make changes. So let's say someone hacks your GitHub account; they'll have access to the repo, the history, everything in there, and they can change something, and that something will be deployed on production.
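As a hedged sketch, that allow-list can be expressed as a verification block on the GitRepository source, assuming a secret that holds the trusted GPG public keys (the names and URL are placeholders):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: fleet-repo
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@github.com/example-org/fleet-repo   # placeholder URL
  ref:
    branch: main
  secretRef:
    name: fleet-repo-ssh       # placeholder: SSH credentials for the repo
  verify:
    mode: head                 # verify the GPG signature of the HEAD commit
    secretRef:
      name: gpg-public-keys    # placeholder: secret with trusted public keys
```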
But if that person doesn't also steal your private key, your, I don't know, your YubiKey and so on, then even if they commit in your name, because they have your GitHub authentication, that change will not be applied on the cluster. Flux will verify the signature; it won't match, or there will be no signature at all, and Flux will reject any change to the cluster from that moment on. What Flux will do is send an alert, a Kubernetes event, and you can route it through Slack or other messaging platforms, to let you know: hey, someone has committed something, but they are not on the approved list, so you can act on it and figure out what's going on. The important part is that unauthorized changes will never be applied on the cluster if you use commit signing.

Moving on: Leigh already mentioned Flux's observability features. We now expose things in the custom resource status subresource. What that means is you can do a kubectl get or kubectl describe and see if something went wrong, when Flux last applied a commit, which Git SHA is applied on the cluster, and so on. We also allow you to create health checks for workloads and for custom resources. For example, you could create a check for an OpenFaaS Function or something like that; it's not native Kubernetes, but if it has a Ready condition or something kstatus-compatible, Flux will be able to look at it and give you the end result. When you define a health check, Flux waits for that health check to resolve. So if you push ten commits, Flux will apply the first one and won't apply any other changes to the system until the health check has a resolution, either it works or it fails. This way you can, yet again, ensure a transactional mode for how things are applied on the cluster.

We also emit Kubernetes events for everything that's happening, so if you have a tool that collects Kubernetes events and stores them in your Elasticsearch and so on, you can build your own notification system based only on those events. The Flux controllers also use structured logging in JSON format, so if you use a cloud service for storing logs, you can easily create alerts based on the error type, or filter by custom resource, and so on. We also ship Grafana dashboards; you have an example here. Everything defined in Flux exposes Prometheus metrics, so you could use, say, Prometheus Alertmanager to build your alerting. You have so many options now for how you want to look into what Flux is doing.

I also want to mention the commit status update feature, which was implemented by Philip, one of the Flux maintainers. It's like CI: if you go to GitHub and so on, you'll see that when a CI job runs, the result of that job is posted back on your commit in Git, and you can see if that commit was built successfully or failed. In the same way, Flux can reflect what's happening with your changes inside the cluster. If you configure Flux to write back to the GitHub commit status, you should create a token that allows Flux to do only that. For example, if a health check fails, Flux will write back to your Git on that commit and tell you, let's say, which deployment is failing, or that validation failed, and so on.
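A hedged sketch of how that write-back might be wired up with the notification controller: a Provider of type github pointing at the config repo, plus an Alert that forwards Kustomization events to it (the repo URL, token secret, and names are placeholders):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: github-status
  namespace: flux-system
spec:
  type: github
  address: https://github.com/example-org/fleet-repo   # placeholder repo
  secretRef:
    name: github-token       # placeholder: token scoped to commit statuses
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: fleet-status
  namespace: flux-system
spec:
  providerRef:
    name: github-status
  eventSeverity: info
  eventSources:
    - kind: Kustomization
      name: '*'              # forward events from all Kustomizations
```

Swapping the Provider type and address is roughly how the same events would go to Slack or another messaging platform instead.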
So if you want instant feedback in your Git platform, without going to Slack or Discord or other tools, you can have that status posted right there. It also works with Azure DevOps and Bitbucket, and we're working on expanding this commit status feature to other platforms.

Okay, let's talk about platform engineers. Say you're a platform engineer and you want to build something that Flux doesn't do. You can use our toolkit. In order to develop Flux version 2, we first built what is, if you think about it, like an SDK: the GitOps Toolkit. The GitOps Toolkit is composed of APIs, represented as Kubernetes custom resource definitions, controllers, and Go packages. If you put all of these together you can build CD pipelines, but maybe you don't want to build continuous delivery things; maybe you want to build continuous integration things. You can use some components from Flux to achieve that. Flux will not build your source code or do anything with it, but you can use the toolkit, build your own controllers, and extend it that way if that's what you're looking for.

At the core of the GitOps Toolkit is a controller and an API called source-controller. What source-controller does is pull artifacts from external sources: Git repos, S3 buckets, MinIO, any kind of S3-compatible storage will work, also Helm repositories and so on. And you can build your own consumer that reacts to source changes. Let's say you make a Git commit: source-controller pulls that commit inside the cluster, then lets your consumer know, hey, there's a new version, do you want to do something with it, and your consumer can take that new version and act on it.

This workflow is how we've built Flux version 2. We've developed specialized reconcilers that use the source APIs and the artifacts source-controller creates. There's kustomize-controller, which knows how to apply a Kustomize overlay or even plain YAMLs from a repo or from an S3 bucket. We have helm-controller, which is specialized for Helm operations: it knows how to install a Helm chart, how to upgrade it, how to run tests for it, how to roll back that particular version if the tests fail, and so on. And we also have controllers built for automation, like the image-reflector-controller and image-automation-controller; I'll talk about those in a bit.

If you want to get started with the GitOps Toolkit and write your own controller, we have a guide published in our docs where you create a source watcher. It's a controller built with kubebuilder and controller-runtime that collaborates with source-controller: it detects that something has changed, pulls the artifact from source-controller, and from there you can make your own decisions about what to do with those changes. So please check out source-watcher if you're into building your own pipelines.

Okay, we finally got to app developers as our use case. As an app developer, if you want to deliver your app on some cluster, you'll have to take several steps. You make a change to your source code, you build that change into a container image, you push that container image to a registry, and you update your deployment manifest with the image tag you just pushed.
Then you do a kubectl apply of your deployment manifest on a particular cluster, or on several clusters. That's the journey from source code to a cluster, manually, with CLI tools. Now, if we add automation to this workflow, CI can get you to an immutable artifact: a container image pushed to your registry that you never change. You can do that by using your Git SHA and a timestamp, or semver, or anything unique; you have to tag your image with something unique, and the important part is that you never overwrite that image tag. That's the only way to have consistent versioning of your system.

The continuous delivery part then deals with: something changed, update the deployment manifest, and deploy on one cluster or many clusters depending on the environments. Now, who does the update of the deployment manifest itself? You can have your CI system write to the Git repo where the manifests are stored; maybe that's the same repo as the source code. But that means CI has to deal with YAML, has to do the replacement, and so on. Flux comes with its own automation solution for that. Instead of having CI replace image tags, you push the image to your container registry, and from there Flux decides, based on a policy you have defined, whether it should be updated or not.

How does that work? We have two controllers collaborating on this feature. One is the image-reflector-controller. What it does is scan container registries based on a policy you've defined and based on an image repository configuration where you specify, for example, how the reflector should authenticate to that repository. Inside the policy, you specify which tags should be taken into account and how Flux should order them. For example, you could use semver ranges and determine the latest version based on a semver expression, or you can use a regex and order by timestamps or by a numerical order; say you have an incremental build ID that you add to your tags, and you tell Flux to order by that build ID to determine the latest version.

And finally, you configure Flux to write that change back to the Git repo; that's the image-automation-controller. What the image-automation-controller does is, when the reflector detects a new tag, it takes that tag, uses Kustomize's kyaml libraries, clones your repo, finds where it needs to replace that particular tag, replaces it in your YAML file, and commits the YAML file back to the repo. When that happens, source-controller detects, oh, there's a new change in the cluster config repo, or in your fleet management repo, and it does what it's supposed to do: it pulls the change, and then kustomize-controller, or whatever reconciler you have, applies that new image on your cluster.

So the way Flux does image automation is not by changing the cluster state directly; it's by always reflecting changes from other systems into your Git repo. What that means is that you can see the commit made by Flux. During an incident, you can pause the image update automation, you can revert that commit, you can go there and disable the automation for, I don't know, the weekend. Maybe you don't want to deploy on Fridays, right? So you can disable only that object, which is the image update automation.
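A hedged sketch of those two policy objects, an ImageRepository that scans a registry and an ImagePolicy that picks the newest tag in a semver range (the API version, image, and range are illustrative and may differ between Flux releases):

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  image: ghcr.io/example-org/podinfo   # placeholder image
  interval: 5m                         # how often the registry is scanned
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: podinfo
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: podinfo
  policy:
    semver:
      range: ">=5.0.0"    # the newest tag in this range wins
```

An ImageUpdateAutomation object (not shown here) is what then commits the resulting change back to the Git repo, as described above.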
That doesn't mean your cluster state stops being reconciled or that drift isn't being corrected; you're just pausing one particular automation in your continuous delivery system. And we've added features so you can feel comfortable during an incident and not have to fight Flux. In version 1 you had to scale Flux to zero, because if you wanted to edit something, Flux would override it. In version 2 we allow you to suspend a particular reconciliation. Let's say you have an incident for a particular app and that app comes from a HelmRelease. While you're investigating, you can just say: I want to suspend the reconciliation of that HelmRelease from this moment on; it doesn't matter what changes, I don't want the HelmRelease to be upgraded. I want to go in there, use, I don't know, the Helm CLI, and do my own thing to fix the issue. Then, after you determine what needs to be fixed, those changes should end up in Git as a fix, right? Once you've committed the fix in Git, you can resume Flux operations and Flux will pull the latest version; maybe it's the same as what you already did on the cluster and nothing will change. But the idea is that you now have fine-grained control over what's happening and when it's happening inside your cluster. It's not an all-or-nothing thing where you reconcile everything or nothing. Those are some good steps we've made in that direction.

Leigh, do you want to mention something here about update automation? Yeah, this is a cool thing to point out. Because we built Flux in such a composable way, image update automation works really well alongside two other features. One is the tagging policies you can use with Git repositories and Helm releases: you can rely on Flux to, like you said, reflect those changes from an image registry into your repository, but that doesn't necessarily have to release to every environment you have, because you can still keep manual control, or use other systems, to decide when the staging tag gets applied to the commit that Flux made to your repo, or when the production tag gets promoted to that point in the repo. I think that's really cool. The other point is that the way we've done image updates composes super well with more mature manifest workflows. I'm really excited that the comment marker on any structured file in the repo can work with the manifest-generation use cases from Flux v1: you can get those image automation updates into some config that then produces your manifests, without worrying about being coupled to a specific type of deployment resource. It's a way more flexible design, and it composes super well.

Yeah, right. In the past, Flux version 1 was only capable of patching image tags inside the native Kubernetes workload definitions, meaning Deployments, StatefulSets, DaemonSets, and CronJobs. Now, in version 2, because we use the kyaml library, we're able to patch any kind of Kubernetes custom resource; we don't have to know about it in advance.
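The marker being described here is a comment placed next to the field you want rewritten, naming the ImagePolicy that drives it. A hedged sketch on a Deployment (the policy reference "flux-system:podinfo" and the image are placeholders that would need to match your own objects):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          # The marker names the ImagePolicy ("namespace:name") that is
          # allowed to rewrite this field when a newer tag is detected.
          image: ghcr.io/example-org/podinfo:5.0.0 # {"$imagepolicy": "flux-system:podinfo"}
```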
If it's a custom resource, if it follows the Kubernetes API conventions, then based on the marker you've added there, we're able to patch it. For example, we can now patch a Kustomization configuration, we can patch a HelmRelease file, we can patch a Tekton Task that builds your images. Maybe you have a Tekton task that builds with Go, and you want to always use the latest Go patch version where CVEs are fixed, so you want to keep your system up to date; you can just use these two controllers and update all your CI pipelines so they're always using the latest.

It doesn't even need to be an object that lives in Kubernetes; it can be any YAML file with some Kubernetes structure, like the Kustomization YAML you mentioned. And that opens the door to communicating your image updates not just to Kubernetes but to systems that produce Kubernetes manifests, or other things that want to know about your image updates, like a CI system that does security vulnerability scanning. Lots of cool things you could do.

Yeah, and one last thing about image automation: what we're currently working on is being able to push the change to a different branch. The way this works, you say: hey, I want to use image automation, but I don't want it to commit back to the same branch so that it gets deployed automatically; I want it to commit to a new branch. Then maybe you use a GitHub Action or a GitLab CI helper that opens a pull request with all the changes Flux has determined you need on your cluster, so someone from your team can review it and merge it, and only then are those updates applied on the cluster. That will be released in the next couple of weeks.

I'm running out of time, so I'm going to go really fast here. This is what Flux version 2 looks like, built on top of the toolkit. In a way it's a continuous loop: you push changes, CI pushes artifacts, Flux pushes patches, and so on, and everything ends up as an event back to you as a notification. And of course Flux can now control more than one cluster and reconcile from more than one source.

If you have questions, if you have proposals, if you want to talk to us, we are happy users of GitHub Discussions; that's where all the API changes and all the proposals go. You can of course reach us on the CNCF Slack, but for feature requests and so on, please use Discussions; it's a great environment to debate new things. And that was it. Thank you very much. Bye, friends. Check us out at fluxcd.io.