Today we're going to talk about the roadmap for Flux v2 through to general availability, or GA, and peek beyond that. I'm going to talk about the roadmap, weighted somewhat towards future developments; then Hidde will give you a demo of what you can do with Flux v2 today. First of all, who am I? We're just some engineers; let's move on.

To summarize the history of Flux: what's now called Flux v1 was created at Weaveworks to deploy new versions of services to a software-as-a-service product called Weave Cloud. Prior to version 1.0 it concentrated mainly on upgrading container images. But then we made a big change in June 2017, which turned it all around so that it applied everything from Git and made updates by committing to Git. This was the big bang event for GitOps. Flux was inducted into the CNCF as a sandbox project in August 2019. But by then it was creaking; it needed to be modernized. Things like custom resource definitions weren't around when it was created. In early 2020, what became Flux v2 was started, with the same scope but using modern tooling like controller-runtime, and supporting multiplexing, i.e. running more than one sync. The Flux project was classified as "adopt" on the CNCF technology radar in 2020, and promoted to incubation status in 2021.

Here we are, roughly 18 months after the inception of Flux v2, looking forward to it being generally available. What does general availability mean? Usually it's taken to mean that a piece of software is considered ready to run in production. With open source, people have different appetites for risk, so many will have already adopted a product before its GA release. But here is what it means for Flux v2. First of all, it means covering the bases that Flux v1 covered: roughly speaking, syncing from Git to a cluster; updating image refs in YAML files and committing that to Git; sending notifications, e.g. to Slack, when things have been done; and declarative installation of Helm charts. Most of these things have been improved and generalized in Flux v2, and in particular, as I said, everything is now multiplexed: you can define as many sources, syncs, updates and notifications as you need (a sketch of one source-and-sync pair follows at the end of this passage). After a GA release, the rule is usually backward compatibility. Public APIs must be stable from this point on, so it's a point of no return, in a way.

There are a few areas that still need development before we're happy to make it generally available. To have confidence that Flux does what it says, and that the bug escape rate is under control, we need better coverage from end-to-end tests. The controllers in Flux have varying unit test coverage, because some things are just hard to test, like notifications to external services, but it's mostly okay. The command line tool has end-to-end tests, but these are closer to smoke tests for the whole set of controllers together, and coverage is not that great. So one initiative in progress is to rewrite these so that it's possible to permute the inputs and thereby cover a much wider variety of scenarios.

Another area still in development is standardization, and this comes in two parts: the standard for Flux types and controllers, and libraries to help controllers stick to that standard.
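Here is the source-and-sync sketch promised above: one GitRepository watched by the source-controller, and one Kustomization applying a path from it. Each such pair is reconciled independently, and you can define as many as you need. Names, URL and paths are illustrative, and the API versions are those documented at the time of writing.

```yaml
# One source: a Git repository to watch and fetch (illustrative name and URL).
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: app-a
  namespace: flux-system
spec:
  interval: 1m               # how often to check for new commits
  url: https://github.com/example/app-a
  ref:
    branch: main
---
# One sync: apply a path from that source to the cluster.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: app-a
  namespace: flux-system
spec:
  interval: 10m              # how often to re-apply, even without new commits
  path: ./deploy
  prune: true                # delete cluster objects that were removed from Git
  sourceRef:
    kind: GitRepository
    name: app-a
```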
The API for Flux goes beyond just the CRDs: there are protocols for how to interoperate with the various objects. For example, there's a protocol for requesting that a resource be processed outside of its specified schedule, and that's used by the command line tool to synchronize with operations it has initiated. There's a protocol for discovering and fetching a bundle from a source object, like a Git repository or a bucket, which other controllers need to understand. Flux uses kstatus for determining whether dependencies are ready, and it's important that its own types are compatible with kstatus for that reason. And there are standard metrics that Flux dashboards expect controllers to export. So it's important to have these standards stabilized and implemented consistently. That will mean that people building on Flux can be sure that their software will interoperate correctly, and continue to do so.

A big upcoming addition to Flux is a new model for restricting syncs with Kubernetes role-based access control, or RBAC. Flux is different from most controllers because, by the nature of syncing, it can affect almost anything in the system, including the rules for what it can affect, so you have to be very careful. The proposed new security model gives platform operators more power to restrict what Flux can do per namespace or tenant using RBAC. This model also enables a scenario which often comes up, where you want to separate concerns so that, for example, a platform operator can create a secret giving access to a Git repository, and an application team can sync using that Git repository without learning the secret. The picture on the slide shows the general situation: the source, the application, and the target for the application can all have different owners or namespaces, but you still want RBAC to be in effect.

If you know Flux well, you might ask: isn't this already possible? Yes it is, but it's not well enforced. At present, you can give a service account for Flux to use when syncing (see the sketch at the end of this passage), but the default is to not use a specific service account, which usually means the sync inherits cluster-admin privileges. This is convenient for clusters in which everyone is trusted, but otherwise it takes work and extra infrastructure to undo. In the new model, it'll be possible to default to an unprivileged user, making this aspect secure by default.

Another initiative underway is to orient the documentation around the different reasons people have for looking at documentation. Sometimes you just want to get something up and running; sometimes you want to learn how it all works; sometimes you need a recipe for some specific task, like setting up on a particular cloud platform. And sometimes you're tweaking a working system, and you need to dive into a reference to find the name of the right option. The aim of reorganizing the documentation, then, is to better serve these different entry points, and to get people directly to the material that's most helpful to them. Another change coming up for documentation is having specialized sections for the cloud platforms you might use: GCP, AWS and so on. It's useful to collect docs together this way because typically you're using one of them at a time, so the sections will serve as a one-stop shop for all the relevant information. These changes to documentation are not all prerequisites for GA, but I wanted to mention them here because they're nonetheless part of Flux's growth and maturity as a product.
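And here is the service-account sketch promised above: a minimal example of confining one tenant's syncs, using the serviceAccountName field that the Kustomization API provides today. The account and object names are illustrative, and the Role and RoleBinding that actually grant the account its permissions are omitted.

```yaml
# A service account representing the tenant; it holds only the RBAC
# the platform operator grants it (Role/RoleBinding not shown here).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-a
  namespace: team-a
---
# The tenant's sync runs as that account instead of inheriting the
# controller's own (typically cluster-admin) privileges.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: team-a-app
  namespace: team-a
spec:
  interval: 10m
  path: ./deploy
  prune: true
  serviceAccountName: team-a   # the key field: RBAC now applies to the sync
  sourceRef:
    kind: GitRepository
    name: team-a-app
```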
GA is merely the end of the beginning; there are more plans beyond it, in no particular order.

Flux v1 and Flux v2 have always included kubectl in their container images as the means of applying configurations to the Kubernetes API, and that's because kubectl contains a lot of logic that's difficult to reuse or reproduce, for example doing three-way merges of resource definitions. It's now possible to rely on the Kubernetes API server to do the tricky bits, like merging definitions, and avoid the need for shelling out to kubectl. The field management mechanism also lets you see fine-grained conflicts, which means Flux can be more sophisticated in how it applies configs, and keep out of the way of other actors like autoscalers.

One goal of Flux is to provide primitives for people to build their own GitOps-flavored continuous delivery, and there are different varieties of that: hooking your own system up to the inputs and outputs of the controllers in Flux, which I would call integration; and making your own types and controllers that can be used alongside those in Flux, which I would call extension. For the purpose of extension, the API is not as open as it could be. By open, I mean that you shouldn't need Flux itself to be changed for you to be able to add to it. Right now, it is possible, as shown in the picture, to write a controller that consumes sources (the sketch at the end of this passage shows the consumer side of that API). It is not possible to write a controller that provides a new kind of source, because the source types are hard-coded. There is a design in progress, illustrated here, for removing that limitation by using a type representing the output of a source. This would open up the API to new kinds of source, like images and other artifacts from OCI repositories, for example.

Next, Flagger. Flagger is part of the Flux project, but it doesn't yet use the Flux conventions that the other controllers do; it's its own standalone thing. It's also something of a monolith, which does all of: creating shadow workload objects; controlling the versioning and the routing of traffic to the different versions of those objects; and running various kinds of gate during a rollout, like running a load test and sampling metrics, to determine when a rollout can proceed. Some of these are pretty close to being standalone, generic functions in themselves, which could be moved into their own API types and controllers. The benefit of doing this would be extensibility, again: it would be easier to build a Flagger analog that caters to a scenario not in the scope of Flagger. For example, Flagger deals with workloads, meaning Deployments and DaemonSets, but you might want to roll out another kind of object that doesn't follow the same rules. You could use the metrics, routing and gating parts of Flagger, and write your own shadowing logic, to have your own form of progressive delivery.

One last thing I want to mention regarding future plans is other kinds of automation. Flux v2 includes an automation controller which fulfills the same purpose as the image update feature of Flux v1. That controller was developed to give some continuity to people migrating from Flux v1, but people have also found it to be useful in its own right. The picture on the slide shows how the bits relate: an image repository object scans for an image's tags, an image policy gives a rule for choosing a tag from those, and the image update automation updates YAML files according to the policies in its namespace. Put together, the pattern can be used for other kinds of automation. For example, one kind that's been requested is updating the chart version used in a HelmRelease; so that's a possibility.
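Coming back to the extension point mentioned a moment ago, here is the promised sketch of the consumer side of the source API: two syncs consuming two different kinds of source through the same polymorphic sourceRef, which is what a third-party controller can already hook into. Names are illustrative, and only the kinds built into Flux at the time of writing are valid here, which is exactly the limitation the talk describes.

```yaml
# Consuming a Git repository...
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: from-git
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy
  prune: true
  sourceRef:
    kind: GitRepository      # one built-in source kind...
    name: app-repo
---
# ...or a bucket, through the very same reference mechanism.
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: from-bucket
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy
  prune: true
  sourceRef:
    kind: Bucket             # ...or another; the set of kinds is fixed today
    name: app-bucket
```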
I haven't talked much about what Flux v2 does as of today, so I'm now going to hand over to Hidde, who will fill that gap with a demonstration. Thank you.

Thank you, Michael. Hello folks, I'm just another engineer, and I'm going to demo the most recent part of Flux to reach a stable API: image update automation. If this is not a topic of interest to you, you can find other demos and how-tos on the Flux website. To run the demo, I'm going to share my screen, which means you will have to miss out on my face for a little while.

With my screen shared, we are ready to dive into the automation of image updates with Flux. Flux comes with a CLI, which allows you to generate Flux API objects and control their lifecycle, but also provides access to bootstrapping features. For this demo, we are going to bootstrap against GitHub; bootstrapping against GitLab or a plain Git server is possible as well. I'm now going to type out the bootstrap command and explain some of the flags. Flux is built out of components, and not all of them are included by default. To allow it to keep track of image tags and automate updates to Git, we need to install the image components using the `--components-extra` flag. In addition, we need to enable a read-write deploy key, with `--read-write-key`, so that later on we can write back to Git. Now let's run the command.

As you can see, the bootstrap command showed that everything was actually already set up, and the controllers are already running in the cluster. This is because I cheated and ran the command before the demo to save us some time, and to show you it's idempotent and will only make changes if things aren't up to date. Let's quickly clone our bootstrap repository and navigate into it.

To showcase the image update, we first need something to update running in the cluster. Because deploying isn't the main topic of this demo, I'm going to deploy Stefan's podinfo application to a namespace with the same name, without providing too many further details. As shown on the screen, I created a simple deployment which makes use of podinfo 5.0.0, in the podinfo namespace. After a git commit and a quick on-demand reconcile request using the Flux CLI, the podinfo deployment should now show up on the cluster. Let's see if this is true. As you can see, the deployment has been applied by the kustomize-controller and is now running on the cluster.

Now, as you may have noticed, I pulled the template for the deployment from my own storage. Stefan tends to be a busy bee, and I can be quite sure the version I deployed isn't actually the latest version. Let's see what Flux has to offer to automate this. In the diagram on the right side of my screen, you can see the controllers we just bootstrapped, and that we just need a couple of custom resources to actually set up the automation. I will set them up one by one and explain what they do, starting with the image repository. The image repository object I just created, once applied, tells the image-reflector-controller to start maintaining an index of the available tags for that image. But this is just a bag of references, and doesn't make Flux understand what's the latest version you want to have running on your cluster. This is where image policies come into play, which define rules for selecting the latest image from the bag of references.
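Those two objects look roughly like this. The object names and the image path are assumptions, the API version is the one documented at the time of writing, and the semver range anticipates the policy described next.

```yaml
# Scan the podinfo image's tags and maintain an index of them.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageRepository
metadata:
  name: podinfo              # illustrative name
  namespace: flux-system
spec:
  image: ghcr.io/stefanprodan/podinfo   # assumed image path
  interval: 1m
---
# Select the newest tag within the major version 5 semver range.
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: podinfo-5.x          # illustrative name
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: podinfo
  policy:
    semver:
      range: 5.x
```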
As podinfo is released with proper semver versioning, I created a policy to select the latest version within the major version 5 semver range. Now, with these two objects, Flux knows where to look for image tags, and what the latest tag is based on the defined policy. But it doesn't do anything with this information yet. To make it start looking for outdated references in a Git repository, we need to create an image update automation object that tells Flux where it should look, and how it should write the new images back to Git. Let's go ahead and create this object.

Now, because the command arguments are quite verbose, I'm going to give you a quick overview of what is included in the generated YAML. The source ref at the bottom of the YAML document defines a reference to a git repository object, which contains the information to connect to the Git repository. While other Flux controllers use artifacts that are generated from the git repository object to apply resources to the cluster, the image-automation-controller performs a git clone by itself, to ensure it's looking at the latest information. The git object at the top defines what Git reference should be checked out and inspected for outdated images, what the commit message should look like, and where the commit should be pushed to. With the current configuration options, changes are pushed directly to the branch that is synchronized to the cluster, and updates are fully automatic. The commit template ensures that when multiple images are updated in a single commit, they are all listed in the commit message. The update object defines the scope of the update, limiting it to a path within the repository. The strategy is something you can forget about for now, as there is only a single option available at the moment. Now let's commit these files to Git and see what happens.

I run `flux get images all`, which gives you an overview of all the image-related objects in the cluster. While you can see it has applied the image update automation object, it also mentions that no updates have been made to the Git repository. This is because our deployment object lacks a small but important item: markers. Flux doesn't really understand the structure of objects; it doesn't know where the image is in a deployment or any other resource. You have to mark the image values in your resources so that it can identify and compare them to the value of the policy. The markers come in three flavors: you can mark a full image, an image tag, or an image name, which together should be sufficient to mark almost any type of resource. Let's add the marker to our resource, referring to the podinfo 5.x image policy I created earlier on. Let's commit the file to Git and see if we finally have our automation running.

We've now reconciled the last changes. While the image automation object doesn't show that it made a commit yet, I'm quite sure this is a glitch in the distributed matrix, so let's go ahead and see if actually nothing has changed in Git. It happened to be a glitch, and a commit was made. Let's inspect the podinfo deployment in the cluster and see what version it is running. As you can see, the kustomize-controller has processed the Git commit by now, and applied the change made by the image-automation-controller to the cluster. Podinfo is now happily running version 5.2.1. Congratulations, you have witnessed image update automation driven by Flux.
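To recap the moving parts in YAML: below is a sketch of an image update automation object like the one generated in the demo, followed by a deployment carrying a marker of the "full image" flavor. Field names follow the image-automation-controller documentation at the time of writing; the names, paths, author details and policy reference are assumptions.

```yaml
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImageUpdateAutomation
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  sourceRef:
    kind: GitRepository      # connection details live in this source object
    name: flux-system
  git:
    checkout:
      ref:
        branch: main         # branch to clone and inspect for outdated images
    commit:
      author:
        name: fluxcdbot      # assumed author details
        email: fluxcdbot@users.noreply.github.com
      messageTemplate: |
        Automated image update

        {{ range .Updated.Images -}}
        - {{ . }}
        {{ end -}}
    push:
      branch: main           # push back to the synced branch: fully automatic
  update:
    path: ./clusters/my-cluster   # limit updates to this path (assumed)
    strategy: Setters        # the single strategy available at the moment
---
# The marker makes the image field discoverable, referring to the policy
# as "<namespace>:<name>". The tag-only and name-only flavors append
# ":tag" or ":name" to the policy name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: ghcr.io/stefanprodan/podinfo:5.0.0 # {"$imagepolicy": "flux-system:podinfo-5.x"}
```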
Now, before I call this the end of the demo, I want to highlight the existence of two docs pages that will probably help you further shape a mental model of how image automation works with Flux. The first page is the one shown earlier in the demo, which lists the markers. Remember this page, and come back to it if you ever happen to be stuck. Also remember that the markers can be put in any resource; the sky is the limit. Second, if you create an image update automation object that pushes to another branch than it reads from, you can create a gated automation approach, which forces the updates through a pull request (a sketch of this follows below). The page shown explains how to do this for GitHub, but the same technique can be used with many other Git providers. Having made those last remarks, this concludes my demo, and I wish you all a happy day. If you have any questions, don't be afraid to ask them during the Q&A, or find us on the CNCF Slack in the Flux channel.
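For reference, the branch-gated variant mentioned just now only changes the git section of the automation object sketched earlier: check out the branch the cluster syncs from, push updates elsewhere, and let a pull request bring them back. Branch names are illustrative.

```yaml
# Fragment of an ImageUpdateAutomation spec: read from one branch,
# push to another, and gate the merge behind a pull request.
spec:
  git:
    checkout:
      ref:
        branch: main            # branch synchronized to the cluster
    push:
      branch: image-updates     # automation commits land here instead
```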