I'm Stefan Prodan. I'm here with Hidde. We are both maintainers of Flux, and we both work at Weaveworks. Today we are going to talk about the Flux project, what it's composed of, how we develop it, and about the new direction where we are taking Flux: OCI, container registries, OCI artifacts, and so on. So let's get started. First, an overview of the Flux project. Flux is split into multiple controllers. We have this architecture where you can pick and choose Flux components based on what you want to do. For example, if you want to deploy Helm releases, you'll need helm-controller. If you want to do image automation, you deploy the image automation controllers. If you want to do progressive delivery — shift traffic from one version to another and have a safer way of deploying user-facing apps — you could use Flagger, and so on. So Flux is not one thing; it's made out of many controllers. And we have an architecture in place where others can extend Flux without modifying its source code. If you want to add new functionality to Flux, if you want Flux to do something else, you can use our Go SDK and build a controller according to our documentation and specification; that's how Flux gets extended. So Flux is different from other solutions where you have this concept of plugins, or where you change something in how the main execution happens. In Flux we don't do that. All these controllers are very specialized. We don't touch the disk, we work only in memory, and we build them with a security-first mindset. That's why you can't just drop a bash script or a Python script somewhere and extend Flux like that. We also have a Terraform provider for those who are, for example, provisioning clusters with Terraform: after you create your cluster, you can also set up Flux in a GitOps way using our Terraform provider.
And of course we have the Flux CLI, which can do everything: you can use it to install Flux, bootstrap Flux, monitor it, debug it, and so on. Next, I want to share some ecosystem news for the Flux project. We are very happy to welcome GitLab and Orange to our ecosystem. GitLab joins Azure, AWS, and others who are offering Flux to their end users. Currently Flux is in beta in GitLab, in all the GitLab editions, and we are working closely with the GitLab team to add great features to Flux and offer a great experience for doing GitOps inside GitLab. Orange, Deutsche Telekom, and other mobile carriers are relying on Flux to do 5G deployments. Last week there was a nice talk by the Orange team, and you can find a lot of information on the internet about what Deutsche Telekom does with Flux; they have their own solution built on top of it. There are a lot of integrations and extensions. For a very long time people said: we find Flux hard because we don't have a web UI, we can't see things, we can't click buttons. The Flux project still does not have a web UI; we are not offering you buttons. But Weaveworks has an open source edition called Weave GitOps. It's basically a Helm release: you add that to your cluster and you get a full-featured web UI for Flux. You can see all things Flux in there, monitor Flux, debug it, and so on. Also, one of our colleagues at Weaveworks has been working for over a year now, with others too, on the Terraform controller. The idea behind the Terraform controller is that you can use Flux, the GitOps principles, and everything that Flux does to manage resources outside of Kubernetes. And AWS today launched a CloudFormation controller, which follows the same pattern as the Terraform controller but of course is specialized for AWS cloud resources. So basically you deploy Flux on EKS, you add the CloudFormation controller for Flux, you put your CloudFormation templates in Git, and things happen.
Okay, back to our initial discussion about the Open Container Initiative. If you are not aware, there is a governance structure under the Linux Foundation called OCI, and the people there are doing a great job defining specifications and standards for what an OCI artifact looks like, how you can distribute it, what runtimes are supported to run container images, and so on. And I think you've heard this idea before — we've been talking about it for a couple of years now — that container registries are moving towards universal storage. It's not only about container images. The OCI artifact specification allows you to store other things besides the actual container, and in our case, what we are doing with Flux is storing your app configuration there. What's important to note here is that OCI artifacts are supported today on all container registries out there; everybody supports them. Major package managers have moved to OCI as their default storage. Helm version three, for example, did this switch — we also support it in Flux — and recently Homebrew. And if you are building a package manager today, instead of building your own distribution system, please look into OCI: you'll find a great community, great solutions, great SDKs; it's all built in. You can use OCI to distribute whatever you want to distribute; it doesn't have to be a container image. It can be other types of binaries, it can be configuration, it can be documentation, you name it. It's a great way to unify package management and artifact distribution. Okay, so let's explain a little bit what Flux does today and how the GitOps workflow looks. There is this thing called desired state: you want to describe how your clusters or your fleets look, what apps should be deployed there, what policies, and so on.
And how this works with Flux: you either install Flux on each cluster in your fleet, or you install Flux on a management cluster, and you point that Flux instance to one or more Git repos where the desired state is. Which is great — this is GitOps — but there is a little catch here. There is also a container registry. You have to have a container registry to even run Kubernetes. So the desired state is not only in Git; it's in Git and in the container registry. In Git you have the Kubernetes Deployment, inside the Kubernetes Deployment you have a reference to a container image URL, and that container gets pulled from the registry. So in order to have the whole desired state defined, you need two types of storage: one is Git and one is the container registry. What we have been doing with Flux in the last year or so is trying to offer this model. GitOps is okay, we are doing Flux GA, Git will be supported forever, but this is a new way of thinking about desired state that we want to offer to people who are open to such a thing. What we are thinking is: let's have the container registry hold the whole desired state — configuration, signatures, SBOMs, and containers. All of those can be stored in the same place, signed in the same way by the same identities. So it's in a way easier to manage: you have a single storage. If you are an organization that runs on-prem, you definitely have the container registry sorted out. If you run in a cloud, you are probably using a cloud container registry. So it's there; you can take advantage of it and unify all the things under it. The major change here is that now you have to use Flux in your CI pipelines as well as the Flux controllers on your clusters, because Flux no longer goes back to Git. When you modify something in Git, you have to use the Flux CLI — it has the same command as Docker, it's called push — and you push the configuration, whatever is there.
Kustomize overlays, Terraform scripts, you name it — those things that Flux understands, you can push them to the container registry as an OCI artifact. And how can you do that today with Flux? Flux has a bunch of APIs under the source-controller where you define the sources that Flux should look at. The most popular thing, what everybody's doing right now, is defining a GitRepository, which has a URL — it can be an HTTPS URL, it can be an SSH URL — and you give it a secret, maybe a token, maybe an SSH key, and so on. If you want to verify the authenticity of what you are applying on a cluster — for example, you want to protect your production cluster and say only these people with these keys are authorized to make changes — with Git today in Flux you'll be using OpenPGP. Everybody has to sign their commits, and on the server side Flux has the public keys of the people allowed to do that. So this is Git. Now, we are offering an interchangeable API called OCIRepository, where instead of defining the source as a Git repository, you define the source as a container repository. And instead of following a Git tag or a Git branch, you can tell Flux to follow an OCI artifact tag. You can use SemVer, you can use digests — you have plenty of options here. The same way you can pin, for example, the GitRepository to a particular commit SHA, in the OCIRepository you can pin the artifact to a particular OCI digest. So it's a mirror of what you can do with a GitRepository today; the same things you can do with the OCIRepository. The major changes are in terms of verification of the integrity of the artifact. With Git you are forced into OpenPGP and you have to use that for signing, and most organizations are not into OpenPGP — it's quite hard to adopt. For OCIRepository, you can choose to integrate with Cosign.
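As a rough sketch of that mirroring (the URL, names, and API version here are illustrative — check the Flux documentation for your version), an OCIRepository looks much like a GitRepository:

```yaml
# Hypothetical OCIRepository source; the registry URL and names are made up.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 5m
  url: oci://ghcr.io/example-org/app-manifests
  ref:
    # Pick one, just like tag / branch / commit on a GitRepository:
    tag: latest
    # semver: ">=1.0.0"
    # digest: "sha256:<artifact digest>"
```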
And basically you can sign artifacts with Cosign private keys, but you can also use Cosign keyless. As you can see, there aren't many changes, so you can quickly switch between them. And now I'm going to ask Hidde to walk us through the various API options and how we can use OCIRepositories in Flux to do things on clusters. So, first and foremost, you probably want to install Kubernetes configurations on your clusters. The Flux CLI now has a push artifact command, which basically takes a path, packs it into a compressed tarball, and pushes it as an OCI image, which you can then sign and refer to. The provider here is generic, and you can see there is actually no secret reference, which means it's pulled from a public source. It's then used by a Kustomization — the same way a GitRepository would be used — and applied to the cluster. The same can be done for Terraform modules. Stefan mentioned it earlier: there's the Terraform controller, and it can consume the artifact in the same way and basically roll out your whole configuration. Then, if you want to push changes from your CI safely, Flux can react to the pushes. On the left you see a GitHub Actions workflow, which pushes the artifact on a push event and adds source and revision information. The receiver then sees the package-created event from GitHub and triggers a reconciliation of the OCIRepository, which in turn fetches the new artifact. Wait a second — I want to go back here and say that this is the way you do push-based GitOps with Flux. It's still pull, but it's instant: once you push the artifact to GitHub Container Registry, the GitHub Container Registry notifies Flux and Flux pulls the changes immediately.
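A minimal Kustomization consuming such a source might look like this (assuming the hypothetical `app-manifests` source name; the API version is from the Flux v0.3x era):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: OCIRepository   # instead of kind: GitRepository
    name: app-manifests
  path: ./
  prune: true             # garbage-collect objects removed from the artifact
```

The only change from a Git-based setup is the `sourceRef.kind`; the apply semantics are the same.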
So even if there is no such thing as push-based GitOps in Flux, you can get the same speed by setting up a receiver and configuring, I don't know, GitHub or Jenkins or whatever to call that receiver and tell Flux: hey, the artifact is there now, go and fetch it. Yeah, and if the notification-controller happens to be down and doesn't see the event, the change will eventually be picked up because of the interval that's still set. So, I mentioned this before: when we push the artifact, we attach a revision reference and a source reference. These are pushed with the image, and you can then see, for example, what commit the OCI image originates from. And this is used in flux trace. Yes — flux trace looks for that information and displays it, so that if you are dealing with an OCIRepository and you want to see where a change actually came from, you can still see the reference to the Git repository and go change it there. One thing here: when you switch from Git to OCI, you may feel like you are losing the assurances of Git — how do you know where you are? Because if you look at an OCI digest or an OCI tag, it doesn't tell you much. So we make you add these annotations: when you do a flux push, you have to give Flux the source and the revision of the source. Then we reflect this information inside the cluster, and when you run flux trace on some particular pod, the Flux CLI goes from the pod to the ReplicaSet to the Deployment, finds the Deployment, and then tells you: hey, this Deployment comes from this particular OCI artifact, and this OCI artifact was created from this Git repo and this SHA. So you don't lose the traceability you had when you used only Git. There's also support for Helm charts. This is different from an OCIRepository, because a Helm chart has its own concept of packaging.
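A sketch of such a webhook receiver (the `github` type and `package` event follow the notification-controller API of that era; the resource names and token secret are made up):

```yaml
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Receiver
metadata:
  name: ghcr-receiver
  namespace: flux-system
spec:
  type: github
  events:
    - ping
    - package            # fired when a new artifact lands in GHCR
  secretRef:
    name: receiver-token # shared secret used to validate the webhook payload
  resources:
    - kind: OCIRepository
      name: app-manifests  # reconciled as soon as the event arrives
```

If the webhook is missed, the OCIRepository's own `interval` still guarantees eventual pickup.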
So you don't deal with an OCIRepository object, you deal with a HelmRepository object, which is actually kind of static, in that it doesn't do anything by itself except provide the configuration — credentials, for example. Then, if a HelmChart object is created for a HelmRelease, it will look up that information, do the whole Helm logic of resolving the latest version against, say, a SemVer reference — one-point-something in this case — and then pull the latest thing. Which is different from the whole flow we talked about before, and it also makes use of Helm's own APIs. Then, the benefits of OCI compared to Git. You have your images, your configurations, your signatures — everything in one place. You only have one thing you need to authenticate with; you only have one endpoint, which has all your stuff. Registries often have higher availability — I don't know if any of you is using GitHub... yeah, you get the point. OCI registries are API-based, while Git only has a few APIs, like the git ls-remote command, which you can use to get the latest references for tags, for example, but it has no other way of going through the data stored in Git. Regional traffic also saves you money: if you have a Git host somewhere on the other side of the world, it's probably expensive to pull from it all the time. Then you can have password-less authentication, which I will get into later, and also keyless integrity verification, due to how Cosign works. So, contextual authentication towards registries. With Git, you can only have an SSH key or a basic authentication token. With OCI, there are many options. You can have Kubernetes workload identity, which is attached to the controller's service account and piggybacks on the, I don't know, AWS role you have set, or whatever.
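The OCI Helm flow described above can be sketched like this (the chart name, registry URL, and version range are illustrative; API versions are from the Flux v0.3x era):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: charts
  namespace: flux-system
spec:
  type: oci                      # marks this as an OCI-backed Helm repository
  interval: 10m
  url: oci://ghcr.io/example-org/charts
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      version: "1.x"             # SemVer range resolved by Helm's own logic
      sourceRef:
        kind: HelmRepository
        name: charts
```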
You can have an image pull secret attached to a referenced service account — so some other role that's specific not to the controller but to the service account you're working with — and Flux will use those credentials. Or you can have the most classic version: a secret reference with a Docker config, and it will use the credentials from there. Then, the integrity verification of OCI sources. For Git, there is currently only OpenPGP — no SSH signatures yet; we're working on it, but it's taking quite some time to get into go-git. For OCI, there is the Sigstore project, which actually gives you two options. You have the keyless version, which signs based on some other identity you have and trust, or you can have your own private key, which you sign with and which you then give to the object that's being reconciled, so the artifact is verified with that. We are also working with some people to get Notation, from the CNCF, into the project, and I expect that to take probably two months, something like that. Okay, some scenarios where OCI may be a better fit. We are not saying that if you are deploying Flux from Git today and you are happy with it, you should just drop Git, do the whole CI stuff, and switch to OCI. That's not what we are saying. What we are saying is that we are offering an alternative, which in some cases may be better than using Git. Edge is a great example, where Git is kind of expensive: when you do a Git clone, it carries the Git history, and you have to do checksums for every single file. If instead you pull an OCI artifact — a tarball with only the YAML manifests inside, or only the code you really care about: Terraform, Pulumi, whatever it is — that's much cheaper in terms of transfer, and also in terms of how much CPU Flux uses on your cluster to verify the integrity of that thing.
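Sketching those authentication and verification options on an OCIRepository (the `provider`, `secretRef`, and `verify` fields follow the source-controller API; the ECR URL and secret names are made up):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: app-manifests
  namespace: flux-system
spec:
  interval: 5m
  url: oci://123456789012.dkr.ecr.us-east-1.amazonaws.com/app-manifests
  provider: aws            # workload identity; also: gcp, azure, generic
  # secretRef:
  #   name: regcred        # or: a classic dockerconfigjson pull secret
  ref:
    tag: latest
  verify:
    provider: cosign       # keyless verification by default
    # secretRef:
    #   name: cosign-pub   # or verify against your own Cosign public key
```

With a cloud `provider` set, no long-lived credentials are stored in the cluster at all.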
Another thing that OCI opens up for Flux: I mean, who here likes writing YAML all day? Okay, right. There are so many solutions out there to avoid writing YAML, to generate YAML. Flux needs YAML, but you can think of that as the assembly language of Flux. You can use many other frameworks to generate the YAML and then package that YAML into an OCI artifact. And this is what we are trying to offer with OCI: instead of creating a Jsonnet controller in Flux, a CUE controller, whatever other controller, you can generate the YAML with any language you like, with any SDK you like, and at the end you do a flux push of the result. You feed that to Flux through a Kustomization, and then you can change namespaces, you can even patch stuff on the server side using a Flux Kustomization if you want. That was one of the reasons we started looking at this: we can't just add 101 controllers, one for every single SDK out there that can generate configuration. Other things, like local development environments: if you are using Git and you want to test something on your local cluster, you have to push the change to Git, and then your local cluster has to synchronize the change before you can see how Flux will react to it. Will it upgrade the Helm chart? Will it do whatever is supposed to happen? With OCI, you can run a container registry locally in Docker — it's just a container, it's the CNCF Distribution registry — and instead of going through Git, you push all your local changes there. It takes under a millisecond, and Flux can synchronize locally, so you don't have to go through Git to test things on your local machine. And this is what I will try to show you today, if the internet here works. I started the cluster creation before the talk, so hopefully by now everything is set up on my machine.
Here are some resources — we are going to share this presentation afterwards on the KubeCon website, so you can download it. We have the OCIRepository page in our documentation, which contains the whole specification: it explains what the fields are, what you can do with it, how you can configure all the stuff we are showing you — and the same for HelmRepository. We also have an OCI cheatsheet where we show you how to build CI pipelines with flux push and then reconcile, like we've shown before. It's really easy; you can copy-paste and get started with it. And two experimental projects, which I'm trying to show you today: Flux Local Dev, where you spin up Flux with OCI, and an experimental distribution that I wrote in CUE for Flux, where Flux updates itself. So you customize Flux, you make your own Flux with CUE, you generate the YAML, that YAML gets pushed to GitHub Container Registry, and from there all your clusters are synchronized from that artifact — without Git. Yeah! Okay, going to try the demo. Let me show you real fast what happened here. I have a repo with a bunch of Makefiles and bash scripts and everything, but in the end you run a single command that creates a Kubernetes kind cluster, creates a Docker registry using the CNCF Distribution — the open source one — and sets up some controllers so you can do something with the cluster, like exposing your demo apps and having monitoring. It also comes with the Flux UI and so on. Going to try to run the command and see if it will really work. The command is called make up. What this command does is look at the files in my repo and package some directories: it packages all the YAMLs under my apps directory into an app OCI artifact that contains my app definition, while the cluster addons — cert-manager, the ingress, the Prometheus operator, everything — go into their own dedicated artifact, so if I want to update only the infrastructure, I'll only push changes to that artifact, and so on.
And the idea here is that you can have this monolithic Git repo with everything inside, but then create layers out of it by deciding which things you push to which OCI artifact. So it's quite flexible; you can make your own infrastructure layers. For example, one layer could be: you have to install all the controllers first, and only then can you apply the custom resources of those controllers — because if you do it the other way around, Prometheus will say "I don't know this custom resource" — and only then can you put your applications in the cluster, at cluster bootstrap. And this is what Flux is trying to do here. I'm not sure if it's working... Looks like everything is running. Well, let's see if I can access my demo app locally. What I'll try to do is uninstall Flux, so you can see if the internet can pull all the images again. While we are here, let me explain a little bit about flux uninstall. So you install Flux, and Flux installs all the other things in your cluster. When you do flux uninstall, nothing that runs on your cluster is touched. This is how we designed flux uninstall: it only removes the custom resource definitions and the controllers, but everything else you have on your cluster is still there. So if you have an issue with upgrading Flux or whatever, you can just wipe Flux out of the cluster and the cluster will still be there. Yeah, I wanted to demo something else, not uninstall, but let's see if we have more luck this time. We still have three minutes — any questions while this runs? Yes: is this still GitOps? And I want to explain why it is. Using OCI as an intermediary, as a unified storage, doesn't mean you are not doing GitOps. I think Git has a very important role: that's where people collaborate. You are not going to make changes with your team inside a tarball, inside an OCI artifact.
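The layering described above maps onto Kustomization dependencies; a sketch with made-up layer and artifact names:

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: infra
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: OCIRepository
    name: infra-addons     # controllers: cert-manager, ingress, Prometheus...
  path: ./
  prune: true
  wait: true               # block until the controllers are actually ready
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  dependsOn:
    - name: infra          # custom resources apply only after infra is up
  interval: 10m
  sourceRef:
    kind: OCIRepository
    name: app-manifests
  path: ./
  prune: true
```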
You'll make these changes in pull requests, you'll have your teams review everything in Git, and once you merge, that trace gets injected into the artifact and into the cluster. So it's still GitOps, but the thing that Flux looks at is no longer Git. That's the difference. For your teams, it doesn't change much. For new teams that are adopting GitOps, it may even be easier to move to GitOps this way. Look at what you have right now: let's say you don't do GitOps and you deploy from your CI. You have docker push to push the image, you take the tag, you do, I don't know, a sed replace in your YAML, then you do a kubectl apply of that YAML. You have that consistency, everything is push-based, and so on. How does that workflow change when you want to adopt Flux through OCI? Instead of kubectl apply, you do a flux push, and it has the same result. But think about the difference. With kubectl apply, CI connects directly to the cluster and holds all the secrets in the CI runner — so if someone hacks your CI, they can do whatever they want on the cluster. With flux push, using the same credentials Docker used to push the container image, your cluster no longer has to be exposed outside of your perimeter, no longer has to be exposed to CI. So I think in the future this will simplify the adoption of GitOps, because you replace one command with another. Another great thing about it: with Git, when Flux has to look at Git, you have to manage your SSH keys. You have to generate SSH keys for Flux, each cluster should have its own unique key, and so on. Then you have to set up those keys in each repository, and you have to rotate them, and so on.
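A rough sketch of that CI step as a GitHub Actions workflow — the action references, flags, and revision format are assumptions based on the Flux docs of the time, so double-check against the current CLI:

```yaml
name: push-manifests
on:
  push:
    branches: [main]
jobs:
  push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write                    # allow pushing to GHCR
    steps:
      - uses: actions/checkout@v3
      - uses: fluxcd/flux2/action@main   # installs the flux CLI
      - name: Login to GHCR
        run: |
          docker login ghcr.io -u ${{ github.actor }} -p ${{ secrets.GITHUB_TOKEN }}
      - name: Push manifests as an OCI artifact
        run: |
          flux push artifact oci://ghcr.io/${{ github.repository }}/manifests:${{ github.ref_name }} \
            --path=./deploy \
            --source=${{ github.event.repository.html_url }} \
            --revision="${{ github.ref_name }}/${{ github.sha }}"
```

Note that CI only needs registry credentials here, never cluster credentials.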
So pushing to a container registry — using OIDC to authenticate with it, and then Flux on the server side also using something like IAM role bindings or workload identity — simplifies the security aspect, and in my opinion makes it stronger, because you no longer deal with long-lived keys and SSH. Right, yeah, I'll repeat the question so everybody can hear: some organizations are not allowed to reach outside to github.com. So today you are forced into running a Git server next to your cluster just to do GitOps. But you already have a container registry, so it's more natural to use that instead of running a second Git server just for that. Okay, let me see if we have more luck this time. Okay. So it's finally working. Oh, no, no, let me show... let's see if it's actually working. This is cluster bootstrap; I have deployed my app; everything comes from the container registry, which is installed locally. And now I'm going to change some YAML, and instead of doing a git push, I will do a flux push. Go in here, run make sync. Right — no Git. I didn't push to Git; this file is changed locally here. And part of the idea behind it is that we allow you to use untrusted and unsecured registries in the OCIRepository. There is a field you can set to tell Flux: this is a local environment, use a container registry with no authentication, no TLS, nothing. But please don't use that field outside of your local machine; it's just there to speed things up locally. Any other questions? Yes, yeah. So the question is: now you use a Flux receiver, and that receiver gets the event that a new image was pushed to the registry — what happens to Git? Because Flux knows how to write the sync status back to Git, so every time you do a commit, you get a green check there: yes, I have synchronized this commit, or no, I haven't, it failed, and here's why it failed.
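That local-dev escape hatch is the `insecure` field on the OCIRepository (the registry address and names here are illustrative):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: local-app
  namespace: flux-system
spec:
  interval: 30s            # a tight loop is fine against a local registry
  url: oci://registry.local:5000/app-manifests
  insecure: true           # plain HTTP, no auth -- local development ONLY
  ref:
    tag: latest
```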
So today, if you switch to an OCIRepository, you lose this facility of commit status updates. But because we have made that metadata a requirement when you push the artifact — the Git SHA is there, the URL is there — soon you'll have the possibility of updating Git as well, because we know the SHA. We know that this artifact contains that commit SHA, so we can go back to your Git repo and put the green check on the commit and say it was synchronized, even if it wasn't synchronized from Git; OCI is just the intermediary. So, good point — it's not there yet, but we have the tools to do it. Yes? Any thoughts or recommendations on integrating with Helm, since now we can probably use the same artifact registry to store both the OCI artifacts and the Helm charts — maybe even embed the Helm chart within the artifact generated by Flux? Yeah, that's a great question. There is this idea inside OCI, and I think a lot of people are working towards a proposal on this: how you can have a single artifact that defines the container images that are part of your app, the configuration for that app, the SBOM for everything, the signature for everything — one single artifact that represents everything. It could also embed the chart inside. Currently, the OCI spec has the index specification, but inside the index you can only refer to multi-arch images, not metadata like SBOMs — though if you look at what Docker did in the latest release, those layers are of type unknown. So we are not there yet; we don't have a specification for wrapping everything into one OCI artifact and saying: here is my app — containers, configuration, Helm chart, signature. But hopefully at the next KubeCon we can talk about that. I'm not sure it will happen that fast, but it's on the horizon — let's see. Done. Thank you very much, everyone. Thank you. Thank you.