Hello, everyone. Thank you for joining this talk. Today I'm going to talk about how we can combine infrastructure as code with GitOps to manage multi-cloud infrastructure. Along the way I will introduce a new concept and two new tools. Shortly about myself: my name is Fasen Salahi, and I work as a Cloud Engineer and Cloud Consultant at Liquid Reply. Shortly about my company: we are based in Germany and highly focused on cloud native development. We help our customers design and build container orchestration platforms, and we accompany them on their way to the cloud native journey, in topics like cloud native development and especially FinOps. What are my focuses? Like my company, I like to design and build cloud and cloud native infrastructures. I'm highly focused on automation; I like to automate the boring stuff I deal with daily. I also like to keep myself busy with cloud platforms: I have an OpenStack background, and now I'm highly focused on container orchestration platforms. What I do besides my job: I've been part of the Kubernetes Release Team since release 1.22, and I really love it. Next, let's take a look at where we stand when we talk about multi-cloud infrastructure management. What does it look like these days? Let's start with infrastructure as code. It's one of the DevOps best practices, focused on avoiding environment drift: every time I deploy the same infrastructure, I should not run into runtime issues or missing dependencies. So infrastructure as code rests on the pillars of DevOps. It uses Git as the single source of truth, where our code lives, and we use GitLab CI/CD pipelines to automate the deployment and provisioning of this infrastructure and to maintain the lifecycle of these cloud resources. I'm sure most of you are more or less familiar with this picture.
We start managing multi-cloud infrastructure, or even just one of these providers. Some of the tools we now have in our landscape are neutral, like Terraform and Pulumi, and some of them are highly opinionated, because they were developed for a specific cloud provider. I'm sure you have reached this point: you are using Terraform to provision an infrastructure, and you realize that Terraform cannot handle all the requirements or tackle all the dependencies you want. In my case, I end up writing a really long, terrible, dirty shell script inside Terraform, calling the AWS CLI to achieve what I want. But this is really painful. This is the heterogeneous setup that I think most of you know from your own companies; I have seen it at lots of customers. At some point, it's really painful. Let's look at the challenges this brings us. First, as I said, one tool is not a silver bullet to tackle all the requirements. It cannot be, because some of the cloud providers' APIs are highly opinionated; they are developed for the providers' own tooling. And our infrastructure-as-code tools are not always talking to the same API, because these APIs change all the time. The other challenge I've personally dealt with at lots of customers: someone goes and deletes a VM in the AWS console, and my tool is not reconciling. There is no reconciliation at work. So if someone deletes a VM or an S3 bucket, the next time I run terraform plan and apply, it breaks, because it says: I'm sorry, a VM is missing, I don't know what happened. Someone deleted it by mistake. This is a pain. And such a setup, as we saw, brings complexity for the platform or SRE team. It puts them on the back foot, because they are limited in terms of resources to deal with such a complex setup. And if a developer comes with a specific request, like "I want this and that specific cloud resource, please do it for me,"
they cannot handle it. And last but not least, I know from my career that developers are not interested in managing cloud infrastructure, because, from my point of view, they are our customers. They don't want to deal with any tooling; they just want to get the resource and the benefit of it. They don't want to be involved in any part of creating it or managing its lifecycle. I would first like to take you to a definition of GitOps. Let's recap what GitOps is about. GitOps was born in cloud native; it's the declarative way to get to continuous delivery and continuous deployment. Basically, it was born in Kubernetes, so in the end it means nothing but Kubernetes manifests. The core part of this concept is Git as the single source of truth: all our manifests, all our configurations live there. The other integral part is the GitOps operator, based on the Kubernetes operator concept: a tool running on a target cluster and reconciling with Git all the time. It's the way you describe your system, and as soon as there is a push to the Git repo, or a pull request is accepted, the tool detects it and updates all the resources in the target Kubernetes cluster. Let's take a look at the tools we have in the landscape. Flux and Argo CD are the most well-known tools, both CNCF projects, widely used by everyone. But today I'm going to talk about Fleet. Why Fleet? Why have I chosen Fleet? Fleet is the GitOps engine from SUSE Rancher. Their mentality is that Rancher is the central platform not only for managing the workload on top of a Kubernetes cluster, but also for managing many clusters at scale. This makes it a really nice tool to give to an SRE team to manage a huge multi-cloud setup.
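To give a concrete flavor of how Fleet is driven, everything starts from a GitRepo custom resource registered in the management cluster. A minimal sketch, where the repo URL, branch, and paths are placeholders I made up, not values from the talk:

```yaml
# GitRepo — tells Fleet which repository (and which paths inside it)
# to watch and reconcile; URL and paths below are illustrative only.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: demo-infra
  # fleet-local targets the local cluster; downstream clusters
  # are typically targeted via the fleet-default namespace.
  namespace: fleet-local
spec:
  repo: https://github.com/example/demo-infra
  branch: main
  paths:
    - apps/backend
    - apps/frontend
```

Once this resource is applied, the Fleet controller clones the repo, turns each path into a bundle, and keeps the target clusters in sync with every new commit.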
Shortly about Fleet: like the other GitOps operators, it has a controller running in a Kubernetes cluster, and this controller is reconciling with Git all the time, checking the state of the Git repo. And as in our classical CI part, as soon as a developer makes changes to a Helm chart or a container image, it will be built by our CI/CD pipeline and pushed to the artifact registry. One of the features that just came out in Fleet is that Fleet is also able to check the state of the artifact registry to see which images are lying there. If there's an update, it can detect it, but it won't be applied until the user explicitly allows it. On the left side, we have our downstream clusters. In terms of Fleet, downstream clusters are our target clusters. They can be bundled into different types of groups, stage-based or provider-based, and that's how you can deploy Kubernetes applications at scale. At the end of the day, there is one Fleet agent that runs on each target cluster, and that agent is always in contact with the Fleet controller, asking: give me a job to do, give me a job to do. And Fleet says: okay, there are changes in the Git repo, so update the target cluster. Shortly about the Fleet bundle: a Fleet bundle is nothing but a Kubernetes manifest, a Helm chart, or a Kustomize configuration. I'd just like to point to this tweet from Kelsey Hightower from 2019, where he envisioned not only deploying Kubernetes workloads using GitOps on Kubernetes, but also managing infrastructure using the same concept. How can we achieve this? How can we achieve managing infrastructure using GitOps? We have to look at the principles of GitOps, which are declarative semantics: we have to be able to define cloud resources with the same semantics. And that's where we benefit from the Kubernetes operator pattern, which is the approach to implement this concept, based on nothing but extending the Kubernetes API using CRDs.
One of the tools to achieve this goal is Crossplane. It's based on the same concept and uses the same extension of the API, and it enables us to define managed resources with CRDs and create highly opinionated controllers for different cloud providers. Having such a tool, which lets us define our infrastructure using custom resources, creates an opportunity. We are not always interested in creating Kubernetes clusters; sometimes our applications need to talk to other services, and why not take advantage of managed services from the cloud providers, like an S3 bucket, a message queue, pub/sub, or a database like Aurora or Postgres? So this tool enables us to create a self-service layer: it lets developers use the same API to talk to cloud providers and define their dependencies. And recently they introduced a new project called Terrajet. Terrajet enables us to use the existing Terraform providers to build Crossplane providers, so we benefit from the existing underlying providers instead of writing new ones. Let's take a look at the Crossplane building blocks. At the heart of this concept is the Composite Resource. It's nothing but the way Crossplane lets us create the schema of our cloud resources; when we want to create managed resources, it defines how those managed resources look. The other important component is the Composite Resource Definition, similar to a CRD but a bit different: it's how we define the type and the schema of the Composite Resource. It is enforced every time we create a claim (I will talk about claims later); every time we want to create a managed resource, the Composite Resource Definition enforces the semantics and the type of the resource. A Composition is nothing but a one-to-one mapping between the managed resources, in Crossplane terms, and the real cloud resources.
The Composition is the component that knows: okay, when I want to create an RDS instance, I need to talk to exactly this real cloud resource. And the Provider is similar to a Terraform provider: it's how you talk to a specific cloud provider. There's a controller running, and your Kubernetes cluster is able to talk to your target provider. So imagine that such a concept enables us to create a self-service platform. On one side, we have the dev team; they don't really care what happens in the background. Let me give you the example of persistent volume claims and persistent volumes. Persistent volumes, in terms of storage classes and backend storage, are mostly managed by the SRE or platform team, and the developer team mostly has no idea, or doesn't want to be involved at all, in what happens in the background. They just want to create a claim, like a persistent volume claim, and Crossplane takes care of creating the real cloud resources. This is a great opportunity to put both of these teams under the same concept and the same platform: it lets the platform or SRE team focus on the background, taking care of the definition of resources, creating configurations, or bundling specific cloud resources, while on the other side the dev people can easily use and create resources. What does one of these custom resources look like? This is an example of a Cloud SQL instance from GCP; it's nothing but a Postgres. This is the claim, and in the end it's going to be a Cloud SQL database in GCP. So now, Fleet and Crossplane: let's put them together. Remember the first picture I showed you: we had a heterogeneous infrastructure management platform with different tools for different cloud providers, some of them, as I said, highly opinionated. Now we have one central Kubernetes cluster where Fleet is running and, at the same time, Crossplane is deployed.
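The Cloud SQL example from the slide looks roughly like the following managed resource. This is a sketch based on the classic Crossplane GCP provider; the instance name, region, tier, disk size, and secret reference are placeholder values of mine, not taken from the slide:

```yaml
# A Crossplane managed resource for a GCP Cloud SQL Postgres instance.
# Crossplane reconciles this manifest into a real Cloud SQL database;
# all concrete values here are illustrative.
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
  name: demo-postgres
spec:
  forProvider:
    databaseVersion: POSTGRES_13
    region: europe-west3
    settings:
      tier: db-custom-1-3840
      dataDiskSizeGb: 20
  # Connection details (host, user, password) are written into a
  # Kubernetes secret that applications can mount or reference.
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: demo-postgres-conn
```

The nice part for the dev team is the last block: they never touch the GCP console; the credentials land in an ordinary Kubernetes secret next to their workload.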
Fleet takes care of deploying Crossplane, so if you want to change something in the Crossplane version or configuration, it uses the same concept: the configuration lives in Git, and as soon as you change something in the Crossplane configuration, Fleet detects it and updates Crossplane. This enables us to talk to one single, universal API, which is Kubernetes, in a declarative way, with the YAML manifests we know. In the same way, we can create cloud resources. This saves us from the hassle of using different languages and different tools: we just talk to one API and get the benefit of the declarative manner. I will show a short demo, but before jumping into it, let me explain what I will show you. My central cluster is running in AKS, with Fleet and Crossplane deployed, and I will show how we can create a GKE cluster; then there's going to be a Fleet agent running there, and at the same time a Cloud SQL Postgres will be created. Shortly, I will show what the Git repository looks like. As I said, I'm using Fleet to deploy Crossplane, and this is how it's defined, using Fleet's custom resource definition concept. I'm just pointing this at the Helm repository and saying: okay, I want this chart with this version, and please deploy these specific cloud provider packages when you deploy Crossplane. I will also show what these claims, or custom resources, look like for GCP. This is a really simple hello-world example of how you can create a GKE cluster using Crossplane. It's simple and straightforward. I will also show the Cloud SQL one; yeah, we saw it in the slides. I will just zoom in a bit. Can everyone see it? Okay. Let's take a look at Crossplane. Crossplane is running, and as I said, I've deployed two providers, AWS and GCP. As you can see, they're running, and we can check the state of the providers.
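The configuration that tells Fleet to deploy Crossplane from a Helm repository looks roughly like this fleet.yaml. A minimal sketch: the chart version and provider package tags are my assumptions, not the values used in the demo:

```yaml
# fleet.yaml — Fleet bundle definition that installs Crossplane via Helm.
# Version numbers and package tags below are illustrative placeholders.
defaultNamespace: crossplane-system
helm:
  repo: https://charts.crossplane.io/stable
  chart: crossplane
  version: "1.9.0"
  values:
    # Provider packages Crossplane should install on startup,
    # exposed as values by the Crossplane Helm chart.
    provider:
      packages:
        - crossplane/provider-aws:v0.29.0
        - crossplane/provider-gcp:v0.21.0
```

Bumping the chart version or adding a provider package is then just a pull request: Fleet notices the commit and rolls the change out, exactly the GitOps loop described above.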
Just quickly: k stands for kubectl, it's just an alias. As you can see, both providers are deployed and healthy. Next, I'd like to create a secret to be able to talk to GCP. It's a standard JSON credential, just wrapped inside a Kubernetes secret. Let's take a look: setup-provider. I just have a shell script that deploys the secret, nothing fancy. It's already there, nothing changed. And I will show you another Git repo, which is just for the sake of this demo. This is one of the custom resources from Fleet: I'm defining another GitRepo. With Fleet you can always specify specific paths, and I'm pointing this one at the GKE cluster and the Cloud SQL. I have another shell script which registers this repo; I'm just going to use it. So it has been created. Let's take a look at what's happening there. Okay, shortly about what's happening right now: this bootstrap one we can ignore, because it's for the initial setup. But the second one, you can see, is the new GitRepo I just deployed. The hash stands for the commit hash; that's coming directly from the Git repo. And what's happening, you see, it says it's not ready, because it's now talking to the cloud provider, trying to create two resources. As you can see, it's trying to create them now. Most probably I will just jump into the console, but right now it's trying to create a node pool for the Kubernetes cluster and also trying to create the Cloud SQL. It won't be satisfied until it has managed both resources, and then you will see it just go idle. In the meantime, we can take a look at the real clusters. What's happening there? It's in the provisioning state: it's synced, but it's not ready. We can also take a look at the Cloud SQL. Same: it's synced, but still trying to create. I can jump into the GCP console; I just need to refresh. Yeah, it's spinning up the cluster.
Most probably, yeah, it's deploying. And if you take a look at the nodes, it's starting to provision them. It will take a while, so let's take a look at the SQL part. Yeah, it's already there and running. This state of reconciling will continue until both resources are healthy and running, and then, as I said, it will be satisfied until you make the next change in the Git repo. Let's jump back to the slides. I would like to summarize my talk. As we saw, this creates a great opportunity to have a common API, a common language, and the same workflow for both the dev and the SRE or platform teams. They can use the same Git repo, everything is gated through pull requests, and it's really transparent for both sides which resources we have deployed and what their state is, because everything is declarative and lives in the Git repo. And the principle of least privilege: from my own experience with my customers, don't give kubectl or any kind of CLI to the dev team, because that only ends in chaos. Git is the best place to direct them: please use Git, create your resources, we're going to see them, gate them, and review them through pull requests, and keep their fingers away from any kind of command line. As we saw, this also helps us avoid having different setups with different configurations, because everything is declarative and managed through Git, so we can always see the state, the GitOps operator is always reconciling with Git, and the state of the system stays up to date. This is also a good opportunity for day-two operations: enforcing policies with OPA or Kyverno, network policies, and any monitoring or observability tools can be deployed with the same concept. At the end of the talk, I would just like to say that we have two more talks at KubeCon from three of my great colleagues.
Please join our other talks and come to our booth; let's connect, it would be great to see you there. Thank you. Are you willing to take questions? We have a few minutes. Any questions, anybody? Hey, hi, this is Anil. Cluster API versus Fleet: I know about Crossplane, I got it, but Cluster API versus Fleet, any insights you can give? To be honest, I took a look, but I think there's not that much movement regarding Cluster API. But I can see a good opportunity there; it could be integrated. Anybody else? Any questions about Terrajet? Terrajet? Did you have a try with OpenStack? Because it's actually one of the providers that is missing for Crossplane. To that end, yep, I was checking exactly this a couple of days ago. I saw examples for the main cloud providers, but I was looking for KVM or libvirt or OpenStack. Unfortunately, they are still at the beginning, so I hope there's going to be more movement there. But yeah, I think it would be great, and I think it's coming. I sense a theme. It's almost like GitOps is new or something. Anybody else? All right, awesome. Thank you so much for your talk. Thank you.