Hello, everyone. My name is Hasan Türken. I'm a platform engineer at Upbound, and today I'm going to talk about how we are using Crossplane to provision and manage our own infrastructure, and about why Crossplane is a great tool for constructing building blocks for cloud deployments.

Upbound Cloud offers a hosted solution for Crossplane. When you go to Upbound Cloud and log into the console, you get a screen where you can create and manage control planes. A control plane is actually a dedicated Crossplane instance running under the hood, and what happens when you hit the Create Control Plane button is closely related to what I will talk about today, which is why I wanted to start with this slide.

Here you can see a high-level overview of our deployment model for hosted Crossplanes. A hosted Crossplane contains a dedicated Kubernetes API server backed by etcd. Crossplane and the Crossplane providers are configured against this API server so that they can operate on the API resources living in it. We also configure this API server against Vault so that it can use Vault as a KMS provider; this way, we can encrypt Kubernetes secrets before writing them into etcd instead of storing them as plain text.

Here you can see we are running multiple hosted Crossplane instances on each host cluster, but we also have a defined capacity per host cluster. When we reach that capacity, we dynamically provision new host clusters. This is the responsibility of the scheduling operator: it needs to trigger the creation of new host clusters when we reach capacity.

So, as mentioned, for encryption at rest we need a production-grade Vault deployment. By production-grade we mean it needs to be highly available and backed by a bucket as a storage backend. Since we are using Google Cloud for our infrastructure right now, this needs to be a GCS bucket.
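To make the "Vault as a KMS provider" idea concrete, here is a minimal sketch of what the API server's encryption configuration could look like, assuming a KMS plugin bridges to Vault over a Unix socket (the provider name and socket path are hypothetical; the talk does not name the exact plugin used):

```yaml
# EncryptionConfiguration sketch: secrets are envelope-encrypted via a KMS
# plugin (backed here by Vault) before they are written to etcd.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # A KMS plugin that forwards encrypt/decrypt calls to Vault.
      - kms:
          name: vault-kms          # hypothetical provider name
          endpoint: unix:///var/run/kms/vault.sock
          cachesize: 1000
          timeout: 3s
      # Fallback so previously unencrypted secrets can still be read.
      - identity: {}
```

The ordering matters: the first provider is used for writes, while reads try each provider in turn, which allows a rolling migration from plain-text secrets.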
Since we don't want to require any manual intervention while operating these host clusters or that Vault instance, we want to configure auto-unseal with Cloud KMS, so that we don't need to unseal Vault manually when a pod gets restarted for some reason. And since this Vault instance will need to consume Google Cloud services like buckets and encryption keys, it needs to authenticate to Google Cloud. Google recommends using GCP Workload Identity to authenticate workloads running in Google Kubernetes Engine, so we also want to enable and use GCP Workload Identity here. And finally, of course, we want all communication to be encrypted, so we want to deploy Vault with TLS enabled.

So remember: our host cluster has a Vault instance, and we want to dynamically provision host clusters, so we need to automate this deployment. Since our scheduling operator is responsible for dynamically provisioning new host clusters when we reach the capacity of the existing ones, we need an API for the scheduling operator so that it can simply request a new host cluster with Vault. As mentioned, our scheduling operator needs an API, and it will just say: give me a production-ready Vault.

Here I want to go through the provisioning flow for such a deployment. First of all, we need to interact with the Google Cloud APIs and create a GCS bucket for storage. We need to create a KMS key ring, and a crypto key in that key ring, for auto-unseal. We need to create a service account for the Vault servers, and then grant that service account access to the previously created resources. Finally, we need to create a GKE cluster so that we can run our Vault servers on it. Also note that we need to configure this GKE cluster with Workload Identity enabled so that we can use GCP Workload Identity. The final step of this deployment is deploying Vault itself.
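The imperative version of the flow above could be sketched with the gcloud/gsutil CLIs roughly as follows. This is illustrative only: all resource names, the project, and the region are placeholders, and it will of course not run without GCP credentials and a real project.

```shell
#!/usr/bin/env bash
set -euo pipefail
PROJECT=my-project        # placeholder
REGION=us-central1        # placeholder
SA="vault-server@${PROJECT}.iam.gserviceaccount.com"

# 1. GCS bucket for the Vault storage backend
gsutil mb -p "$PROJECT" -l "$REGION" gs://vault-storage-demo

# 2. KMS key ring and crypto key for auto-unseal
gcloud kms keyrings create vault-unseal --location "$REGION" --project "$PROJECT"
gcloud kms keys create vault-key --keyring vault-unseal \
  --location "$REGION" --purpose encryption --project "$PROJECT"

# 3. Service account for the Vault servers
gcloud iam service-accounts create vault-server --project "$PROJECT"

# 4. Grant the service account access to the bucket and the crypto key
gsutil iam ch "serviceAccount:${SA}:roles/storage.objectAdmin" gs://vault-storage-demo
gcloud kms keys add-iam-policy-binding vault-key \
  --keyring vault-unseal --location "$REGION" --project "$PROJECT" \
  --member "serviceAccount:${SA}" \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter

# 5. GKE cluster with Workload Identity enabled, to run the Vault servers
gcloud container clusters create vault-host --region "$REGION" \
  --project "$PROJECT" --workload-pool "${PROJECT}.svc.id.goog"
```

This corresponds to the "first phase" repository mentioned later in the talk: each step is a manual CLI call, with no API in front of it.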
While deploying Vault, we are using the Vault Helm chart, and we need to provide the names of the cloud resources that we have created as Helm parameters. You can see we are passing the bucket name, the key ring name, and the crypto key name, and we are also passing the service account as an annotation so that Vault can authenticate with Workload Identity. Also note that there is a tight coupling between the infrastructure and the application: we need to consider this as a whole rather than as separate infrastructure and application pieces.

I would like to talk a bit about the evolution of cloud deployments so far. In the beginning, when the cloud providers started providing services, we went to their UIs and created cloud resources by clicking a couple of buttons. They also provided CLIs interacting with their cloud APIs, but beyond plain scripting there was no meaningful or feasible way to automate. With the emergence of infrastructure-as-code tools like Terraform, Ansible, and Chef, we got better ways to automate, but we still lacked an API that we could use to offer this automation to consumers. And here comes Crossplane: Crossplane not only enables automation of the deployment and management of cloud resources, but also provides an API through which consumers can create and even manage these resources themselves.

Coming back to our original example, which is deploying a production-ready Vault on GKE, I would like to show a couple of repositories. This one roughly corresponds to the first phase of cloud deployments: making the deployment with the CLI, step by step. The next one follows the same deployment model as the previous repository, but automates it with Terraform. And now I would like to introduce this one, which makes the same deployment using Crossplane.
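A sketch of the Helm values in question, using the official Vault chart's `server.ha.config` HCL block, might look like this (all resource names are placeholders; in the real setup they are patched in by Crossplane rather than hardcoded):

```yaml
# values.yaml sketch for the official Vault Helm chart
server:
  serviceAccount:
    annotations:
      # Bind the Kubernetes service account to the GCP service account
      # via Workload Identity (placeholder name).
      iam.gke.io/gcp-service-account: vault-server@my-project.iam.gserviceaccount.com
  ha:
    enabled: true
    config: |
      listener "tcp" {
        address       = "[::]:8200"
        tls_cert_file = "/vault/tls/tls.crt"
        tls_key_file  = "/vault/tls/tls.key"
      }
      storage "gcs" {
        bucket     = "vault-storage-demo"   # created GCS bucket
        ha_enabled = "true"
      }
      seal "gcpckms" {
        project    = "my-project"
        region     = "us-central1"
        key_ring   = "vault-unseal"         # created KMS key ring
        crypto_key = "vault-key"            # created crypto key
      }
```

This makes the tight coupling visible: the storage, seal, and service-account stanzas all reference infrastructure that must already exist before the chart is installed.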
This repository contains the necessary Crossplane compositions to achieve such a deployment. Once we deploy this configuration into a Kubernetes cluster that has Crossplane running, we get an API resource which we can simply create and then expect the whole deployment to be up and running.

Coming back to our deployment model for hosted Crossplanes: as you might remember, we have a scheduling operator there, and this scheduling operator needs to dynamically provision host clusters with a production-ready Vault. Having a Crossplane API, which is powered by the Kubernetes API, enables exactly this clean separation of concerns. Our scheduling operator can just request a host cluster, and then, by polling that API or watching that resource, it gets notified when the resource is ready. Another advantage of using the Kubernetes API here is that we can use the existing tooling and machinery around it. We could, for example, use kubectl to interact with these resources, or build an operator with controller-runtime to manage them.

So let's have a closer look at what happens when we create a Crossplane resource. You might remember the right-hand side, which is the provisioning flow for our example application. On the left-hand side, we have a composition which contains the required managed resources. In this case, these managed resources are GCP resources, plus the Vault Helm release, which is also a managed resource. All of these resources are composed in a composition called VaultCluster. When we create a VaultCluster custom resource, the Crossplane composition controller goes and creates the corresponding managed resources, and then the corresponding providers act on those and create and manage the actual resources in the cloud or on the cluster.
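A heavily trimmed sketch of such a Composition is shown below. The composite type, API groups, and field names are illustrative (exact group/version strings depend on the provider-gcp release in use); the real repository contains many more composed resources and patches:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: vault.example.org            # hypothetical name
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1  # hypothetical XR type
    kind: XVault
  resources:
    # GCS bucket for the Vault storage backend
    - name: bucket
      base:
        apiVersion: storage.gcp.crossplane.io/v1alpha3
        kind: Bucket
        spec:
          location: US
    # KMS key ring for auto-unseal
    - name: keyring
      base:
        apiVersion: kms.gcp.crossplane.io/v1alpha1
        kind: KeyRing
        spec:
          forProvider:
            location: global
```

Creating one `XVault` resource then causes the composition controller to create a `Bucket` and a `KeyRing` managed resource, which provider-gcp reconciles against the Google Cloud APIs.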
Having the control plane and the resources living in the same API enables building new resources on top of existing ones. I think most of us are familiar with the relationship between Pods, ReplicaSets, and Deployments: ReplicaSets are built on top of Pods, and Deployments are built on top of ReplicaSets. With Crossplane, we can represent everything as a Kubernetes resource, and just like with any other Kubernetes resources, we can build new resources on top of existing ones. For example, in the figure you can see that the MyCluster resource contains other existing resources: a network, a subnetwork, a GKE cluster, and a couple of node pools. There is another resource called MyVault, which contains the required cloud resources as well as a Helm release, which is also a managed resource. And again, we can continue building new resources by combining these newly constructed building blocks. So we can say that with Crossplane, each and every cloud resource is just a block, and we can build new blocks by composing resources, thanks to Crossplane Compositions.

OK, it's time for the demo. In the demo, I will create a custom resource defined by a composite resource definition, and we will expect all of the required cloud resources to be created by the Crossplane providers, and a production-ready Vault to be deployed on the newly created GKE cluster.

OK, let's start with the demo. I have a local kind cluster, and I have Crossplane installed before the demo. I will now start by deploying the configuration package, which is built from the repository that I showed in the presentation. So let's start installing it, and check how it goes. Here you can see that the Vault-on-GKE configuration package is installed and ready. This configuration package contains the compositions and composite resource definitions that we need for the Vault deployment.
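This "blocks built from blocks" idea can itself be expressed as a Composition whose composed resources are other composite resources. Crossplane supports such nested XRs; the sketch below is hypothetical (type names mirror the MyCluster/MyVault figure, not a real repository):

```yaml
# A higher-level block composed of two existing composite resource types.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: vaultcluster.example.org     # hypothetical
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: XVaultCluster
  resources:
    # Reuse the cluster building block (itself a composite resource
    # that composes network, subnetwork, GKE cluster, and node pools).
    - name: cluster
      base:
        apiVersion: example.org/v1alpha1
        kind: XMyCluster
    # Reuse the Vault building block (cloud resources plus Helm release).
    - name: vault
      base:
        apiVersion: example.org/v1alpha1
        kind: XMyVault
```

Just like a Deployment does not re-implement Pods, `XVaultCluster` does not re-declare buckets or node pools; it delegates to the existing composite types.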
Now the next step is deploying the resource. Since we will use provider-gcp, we need to create a ProviderConfig, which refers to a Kubernetes secret named gcp-creds. I created that secret prior to the demo, so let's create this ProviderConfig as well. OK, and the next step is actually creating a network composite resource. And now we can create the VaultCluster composite resource that I showed at the end of the presentation. So now let's see what we have: kubectl get vaultclusters shows that our custom resource is created. And let's check which managed resources we have for this composite resource. Here you can see a bunch of managed resources created: the GKE cluster, node pool, service accounts, key ring, crypto key, crypto key policy, buckets, policies for the buckets, et cetera. Let's also check the cloud console. As you can see, a Kubernetes cluster is being provisioned right now.

Since this will take a while, I would like to go to the repository, and we can use this time to go over the configuration there. So we have multiple compositions here; let's start with the GKE composition. This composition contains a GKE cluster resource provided by provider-gcp, and a node pool resource, again provided by provider-gcp. It also contains one last resource, a ProviderConfig, which is going to be used by provider-helm so that it can make the deployment to the newly provisioned GKE cluster. So when this composite resource is ready, we will have a ProviderConfig pointing at that cluster.

The next composition is the one named Vault. In this composition, we are creating a lot of GCP resources: a service account, key ring, crypto key, crypto key policy, service account policy, bucket, and bucket policy members. And we have two Helm releases here. The first Helm release just deploys the TLS secrets for the Vault servers.
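A provider-gcp ProviderConfig of the kind described here typically looks like the following sketch (the project ID is a placeholder; the secret name matches the `gcp-creds` secret mentioned above, and the namespace and key are assumptions):

```yaml
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: my-project              # placeholder GCP project
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system   # assumed namespace
      name: gcp-creds                # secret created before the demo
      key: credentials               # assumed key holding the SA JSON
```

Naming it `default` means managed resources that do not set an explicit `providerConfigRef` will pick it up automatically.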
We are using Helm's templating capabilities to create the certificates; this chart contains nothing but a TLS secret for Vault. But this base Helm chart could be used further: if, for example, you want to enable network policies, you can put your network policies in this chart, or if you want to run some pre-install configuration script or job, you can put that manifest in this Helm release. This is quite convenient.

And finally, we have the actual Helm release which deploys Vault itself. Here you can see we are using the official Vault Helm chart, and we are providing the chart values here. Since we need to use the names of dynamically created resources, we cannot just hardcode, for example, the GCS bucket name here; rather, we need to patch it in from the composite resource. So let's look at an example. I think the best one is this: here you can see that we need to pass, for example, the bucket name, and we need to pass the project name, the key ring, and the crypto key name. By the way, we are using a single key ring but creating multiple crypto keys under it, so the dynamically created crypto key name is fed in here. We are doing this by using Crossplane composition patches. I would also like to mention that, at the time of this recording, there was an open PR enabling patches with multiple input fields in compositions; I'm making the demo from that open PR, and it is that PR that enables such patches.

So those are the two compositions we had a look at. But as I mentioned in the presentation, the cool thing is that once you define your compositions and composite resources at the Kubernetes API, you can go further and create more compositions by combining them. This composition actually defines a new type called VaultCluster, and it combines the previously defined compositions.
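For illustration, composition patches of the two kinds described here could look like this sketch. The field paths are hypothetical; `CombineFromComposite` is the multiple-input patch type that landed in Crossplane after the PR era the talk refers to:

```yaml
# Patches on the Helm release composed resource (field paths are illustrative).
patches:
  # Single-input patch: copy the dynamically generated bucket name
  # from the composite resource into the Helm chart values.
  - type: FromCompositeFieldPath
    fromFieldPath: status.bucketName
    toFieldPath: spec.forProvider.values.gcs.bucketName
  # Multiple-input patch: build the fully qualified crypto key name
  # from several fields of the composite resource.
  - type: CombineFromComposite
    combine:
      strategy: string
      variables:
        - fromFieldPath: spec.parameters.region
        - fromFieldPath: metadata.name
      string:
        fmt: "projects/my-project/locations/%s/keyRings/vault-unseal/cryptoKeys/%s"
    toFieldPath: spec.forProvider.values.gcpckms.cryptoKey
```

The combine patch is what makes "one key ring, many dynamically named crypto keys" expressible without hardcoding anything in the chart values.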
So this is a composition of the previous two compositions: it composes the GKE composition, which contains the GKE cluster, the node pools, and the ProviderConfig for Helm, and it also contains the Vault composition.

Yeah, so let's have a quick look at how our deployment is going. It looks great: as you can see, all of the resources seem to be ready and synced, which means that our composite resource should also be ready. Yes, our composite resource is also ready. Let's also check our Helm releases. Here you can see that the actual Helm releases are deployed on the GKE cluster.

OK, let's connect to that Kubernetes cluster and check the Vault namespace. As you can see, all the Vault pods are up and running, and they report as ready, which means that they are unsealed. Maybe we can also check the Vault configuration from one of them. Here you can see the Vault server configuration: it is configured to use this KMS crypto key and this key ring, and HA is enabled with GCS storage. And if I scroll down, you can see that Vault is unsealed.

OK, so that's the end of my demo. I'm happy to answer any questions if you have them. Thanks for joining my session.