So as I told you earlier, this session is a talk shortened from a three-hour workshop, so we have compressed things a bit.

When it comes to deploying on Kubernetes the traditional way, ops teams use kubectl or Helm to deploy, and that comes with several issues — that's why GitOps exists. We don't have traceability at multi-cluster scale. We don't keep track of the history of our changes. And most of all, we've got security issues: we have to keep secrets and transmit them to our CI/CD tool, we have to open connections between the CI runner and the Kubernetes cluster, and so on. So it's quite a bad practice.

And then comes GitOps, as you may know. What is GitOps? First of all, GitOps is a way to deploy by keeping a single source of truth in Git. And since it is built on Git, it comes with every collaboration pattern we have in Git: PRs, branches, tagging, blaming, and so on.

What we love about Flux is that it's a very keep-it-simple tool. It relies mostly on the reconciliation loop of Kubernetes: the tool just feeds Kubernetes with the desired state, and then Kubernetes does what it masters, which is deploying resources. It simplifies the management of clusters, since one single source of truth can feed the configuration of many clusters, and it keeps track of every change just as a version control system does.

Flux is just one tool that does GitOps really well. It provides GitOps for any kind of configuration and any kind of resource in the Kubernetes ecosystem. You just push to Git, Flux watches that configuration and feeds it to Kubernetes, and Kubernetes does the deployment. It works with all your existing tools, including your text editor. For example, I gave this talk at another conference in France, and the network went down during the demo.
So I did the whole demo using just the Flux CLI and by displaying the configuration files, since almost all the magic of Flux relies on them.

Flux relies mostly on Kustomize. Half of what we show today is Kustomize tricks to override some configuration, to aggregate some configuration, and so on. So Kustomize works very closely with Flux.

Of course, it's designed with security in mind: Flux only needs read-only access rights to your Git sources for this to work. And of course, Flux must be installed on your Kubernetes cluster. What we will show you is a multi-tenant example of what we can do with Flux, and it includes alerts and notifications to tell the operations people — or, eventually, the dev teams — that deployments have been performed.

To sum up, this is the diagram of the Flux components on the cluster side. In addition to these components, there is a single CLI, which is used mostly to bootstrap files, as we will see later during this talk. We've got three main operators. The source controller watches several kinds of sources: mostly Git services like GitHub, Bitbucket, GitLab, and so on, but also a single Git repo, local or remote; it is also able to watch chart repositories in the Helm ecosystem. The kustomize controller takes the configuration files retrieved from the Git sources and aggregates them into a single Kustomization resource tree, to send to Kubernetes as the desired state. And we've got a Helm controller, which does the same from the Helm perspective. All the configuration of Flux and of the desired state is stored in CRDs, so it's all very natural Kubernetes technology.

So in our use case, we've got three personas, Laurent. Yes, three personas: developer teams, operation teams, and security teams.
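To make the source-controller/kustomize-controller split concrete, here is a minimal sketch of the two core CRDs involved — a GitRepository watched by the source controller and a Kustomization applied by the kustomize controller. The names, URL, and paths are placeholders, not taken from the talk:

```yaml
# Source watched by the source-controller: where the desired state lives.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: podinfo            # hypothetical source name
  namespace: flux-system
spec:
  interval: 1m             # how often to poll the repository
  url: https://github.com/example/podinfo   # placeholder URL
  ref:
    branch: main
---
# Desired state applied by the kustomize-controller.
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: podinfo
  path: ./deploy           # folder containing a kustomization.yaml
  prune: true              # garbage-collect resources removed from Git
```

The Helm controller follows the same pattern with HelmRepository and HelmRelease resources instead.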
Every team has its own focus. The developers want to ship features faster, fix their bugs, and work autonomously and in isolation. But they don't have operations or run concerns in mind, and of course not security either. On the other side, we have the operation teams, who of course know how to run applications and how to handle monitoring, alerting, scalability, and resiliency. They manage hundreds of clusters across the whole company. And the last one is security. The security team wants a centralized way to deploy policies, authorization, and authentication. They want to isolate each team in its own sandbox, and they want to enforce policies: setting up service accounts, putting policies around them, applying ingress and network policies. So you see that each team has its own focus and its own mindset.

So how do they interact with each other? First, we have the developers, who push code into the repo; continuous integration builds create images in the container registry; and then we have to deploy those images on the cluster. Then we have the ops, who create best practices and shared configuration, to be able to scale to hundreds of clusters without friction. And of course they create the application config, because developers don't set requests or limits on Kubernetes, and they don't know how to set up monitoring, logging, or alerting. So the ops team creates the best practices in one repo and the application config in another repo — one per environment, because as you know, running in non-production or production means different clusters, different environments, and of course different configuration. And the last persona, the security team, will create some policies, like network policies. We will talk about Kyverno and Open Policy Agent.
And we will try to create some security policies on top of that. So the security team has its own repo, in which they create policies and just apply them with Flux. This is the workflow we are trying to build and promote.

To do that, first we need to install Flux in our cluster. Everything will be done using the Flux CLI, and as Ludovic said, it only creates some YAML files on your laptop or workstation. You create them locally and then push them to GitHub or GitLab. For this example, we will create the YAML files on GitHub. And that is the only moment where Flux talks to the GitHub API: to set maintainer roles on the repo. For example, for the Dev1 and Dev2 teams, we will create maintainer roles, so we can have a developer workflow with pull requests, reviews, and all of that.

On the right, you can see that it creates the YAML files to install Flux, plus one file, tenants.yaml, that we will look at later on. So we bootstrap Flux in our cluster, and these are the YAML files created by Flux. We have gotk-sync.yaml, which creates a GitRepository — the source of truth of the Flux installation itself. You can see that in the same file there is a Kustomization, and the Kustomization follows a convention, exactly: it searches for a kustomization.yaml file in the repo. That kustomization.yaml is here, and it simply applies gotk-components.yaml and gotk-sync.yaml to the cluster. So if you want to deploy a new version, update some parameters, or bump the version of Flux itself, you just edit the YAML file and push it to Git, and Flux will reconcile this file into your cluster.
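As a rough illustration, this is the shape gotk-sync.yaml typically has after a `flux bootstrap github` run — Flux watching its own installation repo. The organization, repo name, and cluster path below are placeholders:

```yaml
# gotk-sync.yaml (sketch) — Flux reconciling its own installation.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@github.com/example-org/fleet-infra   # placeholder repo
  ref:
    branch: main
  secretRef:
    name: flux-system        # read-only deploy key created at bootstrap
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/staging    # placeholder cluster folder
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```

Editing this file in Git (for instance changing the branch or the path) is exactly the "just push and let Flux reconcile" workflow described above.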
Then, as I already told you, in this context we will create one namespace per application, because we will have one cluster for production and one cluster for non-production. We have seen that setup in lots of companies in France. We don't want to manage hundreds of clusters, one cluster per application, because that's too expensive. So we have one production cluster and one staging cluster, and every application will live in its own namespace with its own RBAC model. So every application will be isolated from the others: the Dev1 team must not be able to deploy its application into the Dev2 namespace, and vice versa.

So for setting up the tenants, we have the tenants.yaml file created with Flux. It just adds some lines inside tenants.yaml to create, for this example, the staging cluster, and it says: you will watch the Git repository called flux-system, which is our single source of truth, based on Git. On top of that, we have to physically create the tenants, so we create the RBAC files. You can see that it creates the namespace dev1-ns, the service account dev1, and some RBAC on top of that, to make sure that only the dev1 service account has access to the namespace, so applications cannot conflict with each other. That is the content of the tenants.yaml file. But of course, we will have to add some policies around that. So we add a cluster role. The cluster role bound for dev1-ns is a broad one by default, but you can customize it with a more restricted cluster role — for example rules on deployments, with verbs like get, list, watch, create, update, and so on. And now we have our namespace, our service account, and our cluster role ready to be deployed. It's time to onboard the developers with Kustomize.
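A hedged sketch of what such a tenant definition can look like — similar in spirit to what the `flux create tenant` command emits. The tenant and namespace names are illustrative:

```yaml
# Tenant namespace, identity, and RBAC binding (sketch).
apiVersion: v1
kind: Namespace
metadata:
  name: dev1-ns
  labels:
    toolkit.fluxcd.io/tenant: dev1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dev1
  namespace: dev1-ns
---
# Bind the tenant's service account to a role inside its namespace only:
# a RoleBinding (not a ClusterRoleBinding) scopes the rights to dev1-ns.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dev1
  namespace: dev1-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin     # broad default; swap in a restricted ClusterRole
subjects:
  - kind: ServiceAccount
    name: dev1
    namespace: dev1-ns
```

Because the binding lives in dev1-ns, the dev1 service account gets no rights in any other tenant's namespace.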
So to sum up, with Flux and only Git pushes, we have created on the cluster the namespaces, the service accounts, and the Git source. Now we have to create a new source repo pointing at the developer team's repository. So let's do that: we are onboarding the dev team, and we are creating a new YAML file in the tenants folder.

I'm not sure if you are familiar with Kustomize, but Flux relies heavily on it. Kustomize has this notion of bases and overlays, which lets you override parameters. So you create the base in Kustomize, just saying: here is my application, it lives in the dev1 application repository on github.com. That creates a new GitRepository source, and you can just push it to Git. Of course, Flux will pull the new YAML file and reconcile it on the cluster.

Then you need a Kustomization in Flux. So you create a new Kustomization and you say: this Kustomization will only deploy the application into dev1-ns. And you set a service account name to be able to deploy into that namespace, because only the dev1 service account — the dev1 cluster role, sorry, the dev1 service account — has access to dev1-ns. So for example, if another team wants to deploy an application into this particular namespace, dev1, it will not be able to, because it has access neither to the dev1 service account nor to this namespace.

All of that is done by the ops team. The developer team just says: here is my repository. The ops team creates the parameters in Flux to pull the new application and deploy it on top of Kubernetes. And because it relies on Kustomize, you can add overlays on top. Here, you can create an overlay that deploys a patch — we have created a patch that basically does nothing, just to show the mechanism.
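The onboarding step just described can be sketched as a tenant-scoped GitRepository plus a Kustomization that impersonates the tenant's service account. The repo URL and resource names are placeholders:

```yaml
# Tenant source plus a Kustomization applied with the tenant's RBAC (sketch).
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: dev1-app
  namespace: dev1-ns
spec:
  interval: 5m
  url: https://github.com/example-org/dev1-app   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: dev1-app
  namespace: dev1-ns
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: dev1-app
  path: ./deploy
  prune: true
  targetNamespace: dev1-ns
  serviceAccountName: dev1   # applied with the tenant's rights, not Flux's
```

The `serviceAccountName` field is what enforces the isolation: the apply is performed under the tenant's RBAC, so it cannot touch another team's namespace.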
And then we tell Kustomize to apply that patch, so we get the base deployment and, on top of it, the overlay. This is useful if you want, for example, to add requests or limits on top of the application, or to add secrets — because developers don't have access to secrets, for databases for example; only operations have access to those. So operations can override parameters of the application just by creating YAML files that are applied by Flux, and you have a clear separation between developers and operations. I know that with DevOps we try to get the two teams to talk to each other, but here we have a tool, relying on Flux and Git, that enables collaboration between developers and operations without the need to talk directly.

OK, so here is our final example — I don't know if you want to present it. What we show you very quickly is how to configure the Flux components, then how to configure and maintain a cluster, or even a group of clusters, with different configurations derived from the same configuration base. And on top of that, we've got the configuration to deploy a single application for the Dev1 team — but this could be replicated for many applications.

So we've got the first configuration, which is the configuration of the Flux components, in the flux-system folder here, as we have seen before. This configuration targets the tenants.yaml file, which is the description of the tenant that will be the base of the configuration for our cluster. Our cluster is configured by this couple of files here, in the staging/dev1 tenant folder. This configuration is an override of a base configuration, which sits above it in the base folder. And here we've got the configuration of the Dev1 tenant, with the namespace isolation, the service account, the base cluster role, and so on.
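The ops-owned overlay mechanism can be sketched like this: an overlay kustomization.yaml that pulls in the developers' base and patches resource requests and limits onto it. The folder layout and deployment name are assumptions for illustration:

```yaml
# overlays/staging/kustomization.yaml (sketch) — ops patch over the dev base.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base            # the developers' deployment manifests
patches:
  - target:
      kind: Deployment
      name: dev1-app      # placeholder deployment name
    patch: |
      # JSON 6902 patch adding resources the developers did not set.
      - op: add
        path: /spec/template/spec/containers/0/resources
        value:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
```

Since the overlay lives in an ops-controlled repo and the base in a dev-controlled one, each team edits only its own files and Flux assembles the result.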
And in this configuration we've got the configuration of a source, which is the dev1 app repository, and here we've got the couple of files that will feed Kubernetes with the deployment instructions for the dev1 application. So by relying heavily on the Kustomize aggregation feature, we are able to have a single configuration built upon multiple sources. Do you want to take over?

OK, so now we are able to deploy our applications. We have shown an example based on Kustomize, but you know that Flux can also deploy Helm charts. What I really love about Flux is that you can have one source of truth, just one repo, have multiple clusters connected to this repo, and deploy installations of third-party components like Istio, Prometheus, Vault, whatever you want, all pointing at that repo. Of course, you can also have a canary strategy if you want to test a new version of Istio or Prometheus: you can have, for example, service-blue and service-green, point to the right folder, and apply some Kustomize patches to do a blue-green deployment.

To demonstrate that, we will install the Prometheus Helm chart. So here we create a new Helm source pointing to the prometheus-community chart repository, and next to it we have a values.yaml to configure the chart. Then we create a sync.yaml file, and Flux will reconcile it: here we are creating a new HelmRelease with the values we created before, to install Prometheus and to configure it more easily, all based on the Git repo. You can see that the values file points to the kube-prometheus-stack values folder in the repo.

And here we can also work with the security team to install, as I said, Istio or Prometheus. Here, I install Kyverno. Kyverno is like Open Policy Agent.
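The Prometheus installation just described can be sketched with a HelmRepository source and a HelmRelease. The namespace and chart version below are illustrative choices, not taken from the talk:

```yaml
# Installing kube-prometheus-stack through Flux's helm-controller (sketch).
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: prometheus-community
  namespace: monitoring
spec:
  interval: 1h
  url: https://prometheus-community.github.io/helm-charts
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: kube-prometheus-stack
  namespace: monitoring
spec:
  interval: 10m
  chart:
    spec:
      chart: kube-prometheus-stack
      version: "58.x"      # illustrative version range
      sourceRef:
        kind: HelmRepository
        name: prometheus-community
  values:                   # inline values; a values file in Git works too
    grafana:
      enabled: true
```

Pointing many clusters at the same repo folder gives every cluster the same chart version; changing the version pin in Git upgrades them all through reconciliation.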
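As a taste of what such a Kyverno policy looks like, here is a hedged sketch of a validation rule rejecting Pods that don't require a non-root security context. The policy name and exact pattern are illustrative, not the precise policy from the talk:

```yaml
# Kyverno ClusterPolicy sketch: enforce a non-root securityContext.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root   # illustrative policy name
spec:
  validationFailureAction: Enforce   # reject violations, don't just audit
  rules:
    - name: check-run-as-non-root
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Pods must set securityContext.runAsNonRoot to true."
        pattern:
          spec:
            securityContext:
              runAsNonRoot: true
```

Because the policy is just another YAML manifest, it can be shipped through the same Flux Kustomization flow as everything else.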
You can create security policies with it. So like before, you install Kyverno, then you create a new policy — here, one that enforces, on every deployment or Helm chart, that a security context is set and required. So neither applications nor operations can deploy workloads without a security context, which enforces the policy. And you can create policies based on network security, labels, or whatever you want. Then you just apply the policy with a Kustomization — the same mechanism as before: you create some YAML files, you push them to Git, and of course they get deployed to the cluster.

But you also benefit from the developer workflow: the security team can create a YAML file, open a pull request on GitHub with it, and say: OK, can you review my security policy? Can you apply it on your cluster? And the ops team, or whichever team they want, just reviews and accepts the pull request, and the policy is applied to multiple clusters at once. It's the same for the developer team: they have maintainer access, so they can easily update the version of their application with a pull request and have it applied to the cluster.

Just to sum up and to conclude this talk: if you want access to the repository of the workshop we deliver, you can just go to dtap.com slash one communities slash workshop. The slides are on the SCAD site, so you can get them there. We have step-by-step instructions so you can replicate all of this on your own cluster. And if you are French or speak French, you can go to github.training, where we will have the slides we deliver for the workshops. Thank you very much for your attention, and thank you to github.com for having invited us. And we can have a break now, if you want. Yeah. Do we have time for any questions? We do have time. We started a few minutes late, so it would be nice if there's time for questions.
Does anyone have any questions about this setup, or about Flux?