This is a tutorial, so we're going to develop a Crossplane provider hands-on — a set of Kubernetes controllers with their own CRDs. Before starting: hands up if you're already using Crossplane with other providers? Nice. How many of you have heard of Crossplane but aren't using it yet, or are at the consideration phase? Nice. And how many of you actually started developing a provider and got stuck somewhere, and are here to see how it's done? Yeah, cool. All right. So, Crossplane providers: they allow you to bring any external resource — any external API — into your cluster as CRDs. That by itself is not a novel idea, but with the runtime and code-generation tooling we built, we now have a framework optimized for interacting with CRUD-style external resource APIs. The main pattern in a Kubernetes controller is apply logic: update if it already exists, create only if it doesn't, and so on. All of that is handled and optimized for the assumption that the external API is designed in a CRUD style, not a declarative one. So essentially the framework — a runtime plus code-generation tooling — lets you take an imperative API, expose it as a declarative API, and use Crossplane primitives, abstractions, and Compositions on top of it. A little bit about us: I'm Muvaffak, I work at Upbound, and I've been a Crossplane maintainer since v0.3, which is about three years now. Before that I worked at SAP, again developing Kubernetes controllers.
Yeah, I'm Hasan, and just like Muvaffak I work at Upbound; I'm the newest maintainer of Crossplane. With the Crossplane 1.7 release we shipped the External Secret Stores feature, and after that I was honored to become a Crossplane maintainer. There are also lots of provider maintainers — on the order of ten — so after this talk, feel free to go ahead, contribute, and become a maintainer. Cool, so we can start. What is a Crossplane provider? As I just mentioned: just like you create Pods and Deployment resources, Crossplane providers bring your cluster the ability to create external resources such as buckets. For example, this is from provider-aws — s3.aws.crossplane.io — and you can create a Bucket with all the configuration that's available via the AWS CLI or the API. In essence, that's what a provider adds to your cluster. Technically, what makes up a Crossplane provider is, first, a CustomResourceDefinition for the type — the schema. One thing that's different from generic Kubernetes controllers is the concept of XRM, the Crossplane Resource Model, which we'll go into in a minute. Then there's the implementation of the Kubernetes controller reconciling that CRD, and a ProviderConfig type, which is also a Crossplane-specific term: for every resource, you specify how it will authenticate to the target API. For AWS, for example, the ProviderConfig points to a Secret with an AWS access key. So per resource, you say: this is the ProviderConfig to use. And finally, the package metadata: providers are packaged as OCI images, and you can use the Crossplane package manager to install them.
Yeah, the Crossplane Resource Model. This is, I think, the crux of Crossplane providers — the reason you'd develop a Crossplane provider rather than a standard Kubernetes controller. It's based on the Kubernetes Resource Model, KRM, which you can take a look at after the session: a body of API conventions covering references, optionality, and everything else. On top of that, as a superset of KRM, we have many conventions that make up the Crossplane Resource Model. The first and most important one is high fidelity. On the right you see DBInstance, which is RDS from AWS. High fidelity says you should be able to do everything with that CRD that the API allows. Only a couple of fields are shown here to make it fit, but an RDS instance has close to 100 fields. Because this is the lowest-level primitive in your cluster — the thing we build on top of — it has to expose everything. It's not an abstraction but a representation of the external resource, so it has to show all the knobs and toggles. And in the status, it should show everything you can't configure: the health of the DB instance, for example, or the URL generated by the cloud provider. Then there are other conventions, like forProvider and atProvider. When you're targeting arbitrary external resources, it's really hard to come up with common denominators — APIs have different names, different structs, everything. So the convention says the resource-specific specification goes under the forProvider struct, while deletion policy and the other top-level fields are Crossplane-specific.
So in every Crossplane provider you can expect to see deletionPolicy, and you can expect to see providerConfigRef, for example; what's inside forProvider is resource-specific. Then under status you have atProvider: status.atProvider is what that specific provider returns from the API. And similar to deletionPolicy, you can find a Ready condition, which is signaled by the cloud provider — so you have a standard there: if a resource reports Ready, it's definitely ready to be used, across all cloud providers and all APIs. I don't want to go through all of them in detail, because I know everyone is ready to start coding, but at a high level: external name and tagging give us standard identification for all APIs. You don't have to find the VPC ID, the RDS ID, and so on — it's always the external name, and we handle the binding to the actual identifier in code. No sensitive information: as you can see here, there cannot be a masterUserPassword field on the CR; you reference a Secret instead, and the controller takes the password from there. And references — I think this is one of the best features of Crossplane providers, and we'll go into how we handle it. It lets you reference other resources using only their custom resource name. Take the VPC ID: when you create a VPC, its ID is assigned later. But you can give its name — metadata.name, the CR name — so you can kubectl apply a bunch of infrastructure, and the dependent resources will wait for the VPC ID to appear and then use it. It's eventually consistent. You don't have to create the VPC, take the ID, and copy it into the other resource before it can start creating.
Connection secrets are also a standard: writeConnectionSecretToRef. When I create this RDS DBInstance, once it's ready it publishes a connection secret that you can mount into your Pods — which we will do as part of the demo, connecting a PlanetScale database to WordPress via a connection secret. Then the deletion policy: if you never want the DB instance to be deleted, you can set it to Orphan, and even if you delete the Kubernetes resource, it will still exist in AWS. And there are others — readiness conditions, CRD categories — which we can look at later; there's a link to the whole XRM spec. As I said at the beginning, for all these XRM features we have a framework composed of a runtime, code generation, and some scaffolding, so that you only have to write the cloud-provider-specific parts. For example, this is a typical GCP CloudSQLInstance CRD. Here you see we inline the xpv1.ResourceSpec that comes from crossplane-runtime, and it carries the standard fields. Under forProvider, you then specify the parameters a CloudSQL instance requires. This is a scaffold file, which we'll create with a command and then populate with the Parameters and Observation structs, because those are the API-specific parts; the rest is Crossplane-generic. As an example, the ResourceSpec struct from crossplane-runtime contains writeConnectionSecretToRef, providerConfigRef, and deletionPolicy. So you can be sure that in every Crossplane provider and CRD you'll find these fields, and you can build your assumptions on top of them. And then the controller — I think this is one of the key parts.
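As a rough sketch of that spec layout: the structs below are simplified local stand-ins for the real crossplane-runtime types (xpv1.ResourceSpec and friends), just to show how the generic fields are inlined while the API-specific parameters live under forProvider.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type SecretReference struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace"`
}

// ResourceSpec mirrors the xpv1.ResourceSpec fields mentioned in the talk;
// it is a simplified stand-in, not the real crossplane-runtime type.
type ResourceSpec struct {
	WriteConnectionSecretToRef *SecretReference `json:"writeConnectionSecretToRef,omitempty"`
	ProviderConfigRef          *string          `json:"providerConfigRef,omitempty"`
	DeletionPolicy             string           `json:"deletionPolicy,omitempty"`
}

// DatabaseParameters is the API-specific part you write yourself.
type DatabaseParameters struct {
	Organization string  `json:"organization"`
	Notes        *string `json:"notes,omitempty"` // optional fields are pointers
	Region       *string `json:"region,omitempty"`
}

// DatabaseSpec embeds the generic spec and nests the API-specific
// parameters under forProvider, per the XRM convention.
type DatabaseSpec struct {
	ResourceSpec
	ForProvider DatabaseParameters `json:"forProvider"`
}

func marshalSpec() string {
	spec := DatabaseSpec{
		ResourceSpec: ResourceSpec{DeletionPolicy: "Orphan"},
		ForProvider:  DatabaseParameters{Organization: "acme"},
	}
	out, _ := json.Marshal(spec)
	return string(out)
}

func main() { fmt.Println(marshalSpec()) }
```

The names "acme" and the exact JSON tags are illustrative; the point is that the generic fields serialize at the top level of spec while everything provider-specific sits under forProvider.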
When you develop a Crossplane provider, you don't actually write the whole controller logic yourself. By controller logic I mean this: if you've developed Kubernetes controllers before, this function signature should be really familiar, because it's what upstream controller-runtime requires. In crossplane-runtime, we implement that Reconcile for you and ask you only for the CRUD methods of the API. You don't have to think about whether a function is idempotent, what happens if two controllers call Create at the same time, or how the deletion calls are made. All the basic logic that you'd otherwise have to write for every API is handled in this controller. For example, here we get the resource via newManaged — I'll show you the interface; it works against an interface, and I'll show how it's generated. And then the logic starts: if the resource is deleted and its policy is Orphan, don't touch it; if it's not Orphan, do this and that — the whole flow is implemented here. For this controller to work, what you need to provide are the CRUD functions in your actual implementation, because that's the point we can't generalize across all APIs: we can't know the Observe implementation of a CloudSQL instance while writing crossplane-runtime. That's where you come in and add these functions — Observe, Create, Update, and Delete — which we'll do for two resources in a minute.
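The contract just described can be sketched like this — with simplified local types standing in for crossplane-runtime's managed.ExternalClient and its observation/creation result structs, and a toy decide function that mimics the core branching of the generic reconciler:

```go
package main

import (
	"context"
	"fmt"
)

// Simplified stand-ins for crossplane-runtime's managed.ExternalObservation,
// ExternalCreation, and ExternalUpdate result types.
type ExternalObservation struct {
	ResourceExists   bool
	ResourceUpToDate bool
}
type ExternalCreation struct{}
type ExternalUpdate struct{}

// ExternalClient is the shape of the contract: you supply only these CRUD
// methods; the managed reconciler in crossplane-runtime drives them.
type ExternalClient interface {
	Observe(ctx context.Context, mg interface{}) (ExternalObservation, error)
	Create(ctx context.Context, mg interface{}) (ExternalCreation, error)
	Update(ctx context.Context, mg interface{}) (ExternalUpdate, error)
	Delete(ctx context.Context, mg interface{}) error
}

// decide mimics the core of the generic reconcile loop: observe first,
// then orphan, delete, create, or update based on what came back.
func decide(o ExternalObservation, deleted bool, orphan bool) string {
	switch {
	case deleted && orphan:
		return "orphan" // leave the external resource alone
	case deleted:
		return "delete"
	case !o.ResourceExists:
		return "create"
	case !o.ResourceUpToDate:
		return "update"
	default:
		return "noop"
	}
}

func main() {
	// A fresh resource that does not exist externally yet gets created.
	fmt.Println(decide(ExternalObservation{}, false, false))
}
```

The real reconciler does much more (conditions, connection secrets, requeueing), but the branching above is the part your four methods feed into.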
And then — you remember there was a newManaged call — there's another interface that has to be satisfied by your CRD, and that's how we impose these standards at a technical level. It's not only a convention; wherever we can, we enforce it as a technical constraint. For example, the type has to implement the ProviderConfig-reference interface; it has to implement the conditions interface — Ready condition, Synced condition; and it has to implement the connection-secret writer interface. Because all of these are generic across resources, we have code-generation tools that help: you scaffold, make your changes, run make generate, and everything is filled in and ready to use. This is an example of code generated by our angryjet tool. As you can see — CloudSQLInstance's GetCondition, GetDeletionPolicy — your code is analyzed statically, and if it sees ResourceSpec inlined, it generates all of these automatically, so you don't have to think about which extra methods you need to implement to satisfy the interface. We want to keep the hand-written part as small as possible so you can focus on your API bindings — the CRUD methods I showed earlier. The next topic is the referencing I mentioned in the XRM. This is one of the key differences from other infrastructure-as-code tools: it works just like Kubernetes references. In Kubernetes, you refer with label selectors — from Services to Pods, for example — and that's how we do the same thing with infrastructure.
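A minimal sketch of what such generated accessors look like — the types here are local stand-ins (the real Condition comes from crossplane-runtime, and angryjet writes these methods into a zz_generated.managed.go file):

```go
package main

import "fmt"

// Local stand-in for xpv1.Condition.
type Condition struct {
	Type   string
	Status string
}

type CloudSQLInstance struct {
	Conditions     []Condition
	DeletionPolicy string
}

// Generated-style accessors: angryjet emits methods like these for every
// managed resource so the generic reconciler can work with any CRD through
// the same interface. This sketch appends conditions; the real generated
// code also de-duplicates by condition type.
func (i *CloudSQLInstance) SetConditions(c ...Condition) {
	i.Conditions = append(i.Conditions, c...)
}

func (i *CloudSQLInstance) GetCondition(t string) Condition {
	for _, c := range i.Conditions {
		if c.Type == t {
			return c
		}
	}
	return Condition{}
}

func (i *CloudSQLInstance) GetDeletionPolicy() string { return i.DeletionPolicy }

func main() {
	db := &CloudSQLInstance{DeletionPolicy: "Delete"}
	db.SetConditions(Condition{Type: "Ready", Status: "True"})
	fmt.Println(db.GetCondition("Ready").Status)
}
```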
For example, you can refer from an RDS instance to subnets, VPCs, and security groups, all in a way similar to Kubernetes. And this is how we implement it technically: you add reference and selector fields — the same pattern as the Kubernetes structs — and then annotate the field so that the actual reference-resolving function is generated. In other tools, whenever you need to reference another object, you have to give the field path — go take this information from spec.forProvider.vpcID — and then maybe normalize it, concatenate it, and so on. Here, all of that is built into the provider code, so you only ever give the custom resource name. And with eventual consistency, with one kubectl apply you can bring up a whole world of cloud resources. This is the actual generated code for reference resolving, which we'll also see in a minute — I'm just giving you little peeks at the code we'll be working with. You see ResolveReferences, and then we use the fields and report when something is not resolved. It's all generated code. And now we'll start the implementation; Hasan will be driving. The plan: a PlanetScale provider. We will implement a Database CRD, and since the password API is a different API, we'll implement a separate CRD for that. Then we'll write a Crossplane Composition that has one database, one password, and one WordPress installation via Helm. With the Composition we'll have a new API, defined in YAML via an XRD — we'll go into the details after the implementation. And then, with one namespaced claim YAML, you'll have all three provisioned immediately.
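The reference pattern can be sketched as follows. Reference and Selector are stand-ins for the crossplane-runtime types, the field names follow the ref/selector convention described above, and the resolve function mimics what the generated ResolveReferences does (the marker comment shown is illustrative of the angryjet annotation style):

```go
package main

import "fmt"

// Stand-ins for xpv1.Reference and xpv1.Selector.
type Reference struct{ Name string }
type Selector struct{ MatchLabels map[string]string }

type RouteTableParameters struct {
	// The resolved value the cloud API actually needs.
	VPCID *string `json:"vpcId,omitempty"`
	// Reference by custom resource name; a marker comment such as
	// +crossplane:generate:reference:type=VPC drives the code generation.
	VPCIDRef      *Reference `json:"vpcIdRef,omitempty"`
	VPCIDSelector *Selector  `json:"vpcIdSelector,omitempty"`
}

// resolve mimics the generated ResolveReferences: if the referenced VPC's
// ID is not assigned yet, resolution fails and the reconciler retries
// later — this is what makes one kubectl apply eventually consistent.
func resolve(p *RouteTableParameters, idByName map[string]string) error {
	if p.VPCID != nil || p.VPCIDRef == nil {
		return nil // already resolved, or nothing to resolve
	}
	id, ok := idByName[p.VPCIDRef.Name]
	if !ok || id == "" {
		return fmt.Errorf("referenced VPC %q has no ID yet", p.VPCIDRef.Name)
	}
	p.VPCID = &id
	return nil
}

func main() {
	p := &RouteTableParameters{VPCIDRef: &Reference{Name: "my-vpc"}}
	// First pass: the VPC exists but its ID is not assigned yet.
	fmt.Println(resolve(p, map[string]string{"my-vpc": ""}) != nil)
	// Later pass: the ID appeared; resolution succeeds.
	_ = resolve(p, map[string]string{"my-vpc": "vpc-123"})
	fmt.Println(*p.VPCID)
}
```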
And with that, I'll let Hasan take over and start the implementation. — Thank you. Maybe I should quickly explain why we chose PlanetScale for the implementation. PlanetScale is a managed database service with cool features, built on top of the open-source Vitess project. The first point is that it doesn't have a Crossplane provider yet, so we'll implement one here, together, from scratch. Another thing is that it allows one database on a free account, so you can try this out just by creating an account and getting an access token. That's why we chose PlanetScale. So let me start. I believe some of you will try to follow along, so I'll go as slowly as possible. First of all, we go to crossplane/provider-template. This is a GitHub template repository, and it has all the scaffolding and functionality — CI, build, Makefiles, et cetera — so we can make a quick start from here, and it will be easier to build our provider. The first thing is to hit the "Use this template" button and select an organization. I'll create this under crossplane-contrib, but you can select your own GitHub org: provider-planetscale. Now I'm hitting "Create repository". Next, I'm cloning it into my demo directory — is this text size good for everyone? Okay. cd provider-planetscale, and let me open GoLand. Right now we only have the template inside the repository: you can see "provider template" in the README, "template" in the image names, a sample API, and a sample controller. In this state you could already build and run the controller, and it would have the sample API.
You could play with it, but we won't do that — we'll immediately start building our own provider. The first step, as you'd expect, is to replace the name "template" with "planetscale" so we have the correct name for our provider. We recently added a helper utility for this — a make target, I'd say. Let me open my notes. First, after cloning, we need to run make submodules to fetch the build submodule. It's a common submodule under the Upbound organization, and it contains really useful utilities for building and pushing images, Helm charts, et cetera. With that in place, the make target I mentioned is make provider.prepare, and we give it the provider name: planetscale. Let's check what happened. As you can see, all the "template" names were replaced with "planetscale", and we got rid of the sample API — we don't have any API in this controller yet. Now let's use the other helper make target to generate the Database type. For any type, you need to define a group and a kind, and also an API version. I won't give an API version, so it will default to v1alpha1, but I do need to choose a group and a kind. Typically, for AWS services you might have ec2 as the group name and Instance as the kind. In the case of PlanetScale, we have databases; we could of course use database as the group and Instance as the kind, but I chose database as the group and Database as the kind. So now I'm running the provider.addtype command with the provider name, group database, and kind Database. (Let me see if I can make this bigger... okay, it doesn't work well — anyway.)
Here you can see the Database API was created with some placeholder content. This part is common — we don't have to do anything specific to our resource in this section — but note that the spec has a DatabaseSpec, and DatabaseSpec contains crossplane-runtime's ResourceSpec, which already has the built-in XRM fields: writeConnectionSecretToRef, publishConnectionDetails, providerConfigRef, deletionPolicy. We don't have to do anything for those; they come for free. We only need to change the DatabaseParameters section — you can see it's a placeholder ConfigurableField right now, and we'll change it in a minute. But before that, let's also generate the other type, the branch password. In PlanetScale, after you create a database, you need to create a password to use that database. That's a separate step: in the UI you can click and get the password, but behind the scenes it makes another API call, creating a password for the branch you selected. So this time the group is branch and the kind is Password. And the Password type is also added. There are two steps that aren't automated by these make targets yet, but they're relatively simple. First, we need to register our APIs in the apis/planetscale.go file, and then we also need to register our controllers. Let's go there. Here you can see we got rid of the sample and now have database — this should be database/v1alpha1 — and we also add the branch group. Okay, so we've registered our APIs under the apis directory. The next thing is to register our controllers: you can see we have the database and password types and no longer the sample one, so let's change it to database.Setup and password.Setup.
Okay, so we have the types and controllers generated as scaffolds, and we've registered those APIs and controllers. Now, if we check here, the SchemeBuilder registration fails with an error that some methods are missing — DeepCopyObject, et cetera. At this point we use the code-generation tools. controller-tools will generate the DeepCopy methods — the methods any Kubernetes object needs, just as if you were building a normal Kubernetes operator — and the crossplane-tools will do the same for the methods required by the Crossplane XRM. So let's run make generate. Let's see what was generated: you can see zz_generated.deepcopy, which is produced by controller-tools, and also zz_generated.managed. "Managed" is a Crossplane concept: we call the representations of external resources managed resources. Here we see the generated methods that satisfy the Crossplane XRM. I think we're in a good state, so it's good to commit these changes so we don't lose them — I'll add everything with the message "generate and register types". Now we have our controller with types, but there's no business logic in it, so let's do that now. First of all, since controllers are implemented in Go, we need to find a Go client for the API we'll be interacting with, so I'll search for "planetscale go client". Let's see what we have for databases. Here you can see CreateDatabaseRequest and its fields. Our spec will need the same fields, so I'll just copy them over to the Database type under apis and put them into DatabaseParameters, with some minor modifications. First, I need to add a JSON tag for this one as well — it's organization.
We don't need the name field, because we'll use the metadata.name of our custom resource — or rather the external name — so I'm getting rid of it. One important point: notes and region have the omitempty tag, which indicates they're optional fields. You can verify this by creating a database in the UI and seeing that they're optional. Since they're optional, we change their type from string to pointer-to-string. When it's a string pointer, we can check whether it's nil or not; if it's nil, we just don't pass anything. That's how we model optional fields in Crossplane. Let's also add the kubebuilder tag marking them optional. I think we're good to continue. The Observation section will be the output — the status part. Let's see what we have. We can always add more along the way, but let's say createdAt and state would be interesting; at the end of the day we'd add all of them, but let's start with just state here, as a string for the sake of simplicity. Okay, so we've edited our Parameters and Observation fields, and now we can continue with the controller part. Here's the controller. It has the CRUD methods that Muvaffak showed, along with a good amount of initialization code. We'll implement Observe, Create, Update, and Delete, and for that, we first need to implement the Connect method. In Connect, we need to initialize a client for the external service — the PlanetScale service — and for that, of course, we need to extract credentials from somewhere. In Crossplane this is done by reading the ProviderConfig; typically these ProviderConfigs refer to a Kubernetes Secret, and that Secret contains the actual API credential.
The placeholder code extracts the secret we need and passes it to a new-service function, which takes the credentials as input and returns an interface. We need to define this interface — we no longer need the no-op service, so I'll replace it with a PlanetScale one. Before that, let me go get the planetscale-go client so that GoLand can help me with auto-completion. Now, in this PlanetScaleService struct we'll have a pcli field of type *planetscale.Client, and we need to initialize that client from the credentials and return it. There are two ways to authenticate to the PlanetScale API: service tokens and access tokens. Somehow I couldn't make it work with service tokens, but it works with an access token, so I already have my access token configured. If anyone is interested, I can show you how to get it, because it's not straightforward — you can't just grab it from the UI; I had to initialize a client and read it from a local directory. The point is, we now assume the credentials are the access token, and we initialize our client with planetscale.NewClient and planetscale.WithAccessToken, where the access token comes from the credentials as a string. This also returns an error, and — actually, I don't even need the extra check; just return the client and the error. Okay, so our new-PlanetScale-service function initializes and returns a client for us; it should return a pointer to PlanetScaleService. The signature has changed, so we also need to change it where the service is wired in. I think we're good to go: we have our client initialized, and now we need to implement those CRUD methods.
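The shape of that factory function can be sketched like this — with a stand-in client type so it runs without the real module; the actual calls in the demo are planetscale.NewClient and planetscale.WithAccessToken from planetscale-go:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Stand-in for *planetscale.Client; the real one comes from the
// planetscale-go module.
type Client struct{ token string }

// newClient mimics planetscale.NewClient(planetscale.WithAccessToken(token)).
func newClient(token string) (*Client, error) {
	if token == "" {
		return nil, errors.New("empty access token")
	}
	return &Client{token: token}, nil
}

// PlanetScaleService is what Connect hands to the CRUD methods.
type PlanetScaleService struct{ pcli *Client }

// NewPlanetScaleService plays the role of the new-service function the
// template's Connect method calls with the raw credential bytes taken
// from the Secret the ProviderConfig points at.
func NewPlanetScaleService(creds []byte) (*PlanetScaleService, error) {
	pcli, err := newClient(strings.TrimSpace(string(creds)))
	if err != nil {
		return nil, err
	}
	return &PlanetScaleService{pcli: pcli}, nil
}

func main() {
	svc, err := NewPlanetScaleService([]byte("pscale-token\n"))
	fmt.Println(err == nil, svc.pcli.token)
}
```

Trimming the credential bytes is a small practical detail: Secret values often carry a trailing newline.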
And here, of course, we need to cast the service to our PlanetScaleService, and then let's go implement Observe. Actually, we can start with Create to make it more exciting. So what do we have? The external struct has our service with the PlanetScale client, so I can call Databases.Create with the context and a create-database request — let's fill all the fields; it takes a pointer. We need an organization; in the UI my organization is turkenh — actually, we'll just take it from the spec: cr.Spec.ForProvider.Organization. For the name, we use the external name: meta.GetExternalName(cr). The others are optional fields, so let's check whether they're really set: if cr.Spec.ForProvider.Notes is not nil, dereference and pass it, and do the same for Region. This returns a db and an error, and now we can record the state of the db: cr.Status.AtProvider.State = string(db.State) — converting it, because it's an enum. We don't get any connection details here; I can show you what's returned from the API: notes, region, state, the HTML URL, name, createdAt, and updatedAt. As we said at the beginning, we'll need to implement another resource for the credentials. Okay, I think we're good to give it a try, just to make it a bit more exciting. I'll run make generate once. Done. I have a brand-new kind cluster, so first I apply the CRDs that were generated for me — you can see Password, Database, ProviderConfig, and the others — and now I can run my controller with make run.
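The Create body we just dictated can be sketched as follows — with stand-in types (the field names follow planetscale-go's CreateDatabaseRequest as described in the talk, and the external name would come from crossplane-runtime's meta.GetExternalName):

```go
package main

import "fmt"

// Stand-ins for the spec type defined earlier and the planetscale-go request.
type DatabaseParameters struct {
	Organization string
	Notes        *string
	Region       *string
}

type CreateDatabaseRequest struct {
	Organization string
	Name         string
	Notes        string
	Region       string
}

// buildCreateRequest mirrors the body of the controller's Create method:
// required fields come straight from the spec, the name is the external
// name of the custom resource, and optional pointers are only copied
// when they are non-nil.
func buildCreateRequest(p DatabaseParameters, externalName string) *CreateDatabaseRequest {
	req := &CreateDatabaseRequest{
		Organization: p.Organization,
		Name:         externalName,
	}
	if p.Notes != nil {
		req.Notes = *p.Notes
	}
	if p.Region != nil {
		req.Region = *p.Region
	}
	return req
}

func main() {
	notes := "Hello from Crossplane"
	req := buildCreateRequest(DatabaseParameters{Organization: "turkenh", Notes: &notes}, "example")
	fmt.Println(req.Name, req.Notes, req.Region == "")
}
```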
While that's running, let's find an example here — I can just refactor-rename the sample to database. So we have database as the group under planetscale, kind Database, and under forProvider I have my organization, and let's also add a note: "Hello from Crossplane". I also need to create a ProviderConfig — let's make it default. There's an example ProviderConfig; I have my token secret, so I'm deleting the placeholder and using mine. So: kubectl apply -f the pscale token secret, and then I create my ProviderConfig. All right, now we can create our database. Let's have a final look to check everything: it will be named example, live under the turkenh organization, and have this note. kubectl create -f examples/database/database.yaml, then kubectl get database — let's watch it. Okay, it says it is Synced, but since we didn't set any Ready condition yet, it doesn't report Ready. We usually set that in the Observe method: we observe, verify that the resource really exists and its state is ready, and after that we mark it Ready. Let's see what we have in PlanetScale — nothing. And it logged "external resource is up to date", so we should implement Observe as well. Yeah, we couldn't make it that quick after all. Anyway, I'll stop it. Create isn't called because Observe should first report that the resource doesn't exist — so we also need to implement Observe. Let's do it quickly: c.service.pcli.Databases.Get with the context and a get-database request, filling the fields — Organization from cr.Spec.ForProvider.Organization and the name from meta.GetExternalName(cr). That gives us db and err, and now we need to check: we tried to get the database, so is this a not-found error or something else? We do that by checking whether the error is a PlanetScale error and its code is "not found".
These CRUD methods return a managed external-observation result. If the database does not exist, we just return ResourceExists: false. If it does exist — and if db.State equals the ready state — then cr.Status.SetConditions(xpv1.Available()). In the PlanetScale database API there's no field in the spec that can be updated, so if it exists, we simply report it as existing and up to date; that's it. Now we can just run it again — we didn't make any API changes, so we don't need to regenerate anything. Oops, what's this? The error check wasn't asserting the PlanetScale error type — fixed. Okay, now it says Ready: true. Let's check: as you can see, our database is created, and our note shows up here. We're almost done with the Database resource; the last method we need to implement is Delete. At this point, if there are any questions, we can take them. Any questions so far? Okay. So the last method is Delete: c.service.pcli.Databases.Delete with the context and a delete-database request, filling all the fields — Organization from cr.Spec.ForProvider.Organization and, for the database name, meta.GetExternalName(cr). This only returns an error, so we can just return it from Delete. All right, let's run it again and verify delete as well. Here you can see the database is gone, and we have completed the implementation of the Database resource. Let's commit: "implement database". Now the next resource we need to implement is Password. I'll try to be as fast as possible because most of the parts will be duplicates, so I'm just copying this part over — including the new-service function. Ideally we shouldn't repeat ourselves, but just to be fast, I'm copying.
So, we don't have to do anything here — scale, service, okay. In Observe, we call c.service.pcli.Passwords.Get(ctx, &planetscale.GetDatabaseBranchPasswordRequest{...}) — oh, we need to fill in the API types first, so we can't be that fast. We need the password type here, and we again have organization, database, branch, and display name; I will just take them from the database and password docs. Yes: organization; there is the Create; database, branch. We also have a display name — we can use the metadata name as the display name, so I'm deleting that field as well. And the password ID. Okay, back to the implementation. For the password: cr.Spec.ForProvider.Organization, cr.Spec.ForProvider.Database, cr.Spec.ForProvider.Branch, cr.Name, and meta.GetExternalName(cr). That gives us p and err, and we return the error here. All right, let's also implement Create: c.service.pcli.Passwords.Create(ctx, &planetscale.DatabaseBranchPasswordRequest{...}), filling all the fields: cr.Spec.ForProvider.Organization, cr.Spec.ForProvider.Database, and cr.Spec.ForProvider.Branch — this one is an optional field, we can implement it later — and the display name is just the metadata name, cr.Name. Okay, so this is Create, and the error. And now cr.Status.AtProvider.ID gets the password's ID. That's Create; we don't have any Update. And finally we need the Delete method: c.service.pcli.Passwords.Delete(ctx, ...) with the PlanetScale delete request — not this one, these three fields. Okay, so the password should also be available now, and let's also create an example for it: password.yaml. Okay, it is under the branch group, its name is password, it has a database. So our password implementation is also ready — let's try it now. Yeah, before that, we also need to publish the connection details in Create. So: host — we need to find this somewhere, I'm just starting with it — username, password, and database. For host, I remember it was somewhere in the response... yeah, this one.
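The external-name notion used throughout (meta.GetExternalName / meta.SetExternalName) is just a well-known annotation on the managed resource, `crossplane.io/external-name`, which crossplane-runtime's `meta` package reads and writes. A minimal stand-in sketch of that mechanism:

```go
package main

import "fmt"

// externalNameKey is the real annotation key crossplane-runtime uses.
const externalNameKey = "crossplane.io/external-name"

// Object is a minimal stand-in for a managed resource's object metadata.
type Object struct{ Annotations map[string]string }

// GetExternalName returns the name of the resource on the provider side
// (e.g. the PlanetScale database name), which may differ from the CR name.
func GetExternalName(o *Object) string { return o.Annotations[externalNameKey] }

// SetExternalName records the provider-side name on the managed resource.
func SetExternalName(o *Object, name string) {
	if o.Annotations == nil {
		o.Annotations = map[string]string{}
	}
	o.Annotations[externalNameKey] = name
}

func main() {
	o := &Object{}
	SetExternalName(o, "example")
	fmt.Println(GetExternalName(o)) // example
}
```

Keeping the external name in an annotation, rather than reusing the CR name, is what makes importing pre-existing resources possible — a point that comes up again in the Q&A.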
And this should be a byte array, so let's convert it: []byte(p.PlainText). And finally — do we have it here? — cr.Spec.ForProvider.Database. All right, this should be good to go. Since we have API changes in the password type, I need to regenerate and re-apply the new types: kubectl apply -f package/crds. Okay, so make run. Which one? Password. Now we don't have any managed resources, so let's create the database again, so that we can create a password for it. And the password — fingers crossed. Did we have a writeConnectionSecretToRef field in the example? I guess not. Okay, so we also need to set that to see the connection secret — I'm deleting this one. Okay, our database is ready. I will create the password now, but first I'd like to check that we are in a clean state, because we stopped in the middle. So, yeah, that one — let me clean it up. Okay, now I will create the password again, with writeConnectionSecretToRef this time, and kubectl get password. Yeah, our password is created too. We didn't set the available condition, so it's not reporting ready, but we will fix that. Before that, let's see if our connection secret is also available: kubectl get secrets. Yeah, here you can see db-conn, and if we check its content — yeah, we have all the fields filled. Let's see what we have as host, just as an example. Yeah, this is the host PlanetScale generated for us. So, let's do the final touches, and then we will continue with using it from the application. Final touch number one — that's number two; number one is to mark the resource as ready after observing. We observed, it is available, and now we can call cr.Status.SetConditions(xpv1.Available()). That's the first one. The next one: as you might have noticed, we have the database name here, and the database is actually another managed resource. Instead of hardcoding its name, we can also create references.
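Publishing connection details boils down to returning a map of byte slices — hence the `[]byte(p.PlainText)` conversion above. A minimal mock of what the Create method assembles; in the real provider this is `managed.ConnectionDetails` from crossplane-runtime, and the response field names (`PlainText`, the host field) are assumptions about the PlanetScale SDK:

```go
package main

import "fmt"

// Password is a stand-in for the SDK's branch-password response; the
// field names here are assumptions for illustration.
type Password struct {
	PlainText string
	Username  string
	Host      string
}

// connectionDetails mirrors managed.ConnectionDetails: every value must be
// a byte slice, because the reconciler writes the map straight into the
// Kubernetes Secret named by writeConnectionSecretToRef.
func connectionDetails(p Password, database string) map[string][]byte {
	return map[string][]byte{
		"host":     []byte(p.Host),
		"username": []byte(p.Username),
		"password": []byte(p.PlainText),
		"database": []byte(database),
	}
}

func main() {
	cd := connectionDetails(Password{
		PlainText: "s3cret",
		Username:  "u1",
		Host:      "aws.connect.psdb.cloud",
	}, "example")
	fmt.Println(len(cd), string(cd["password"]))
}
```

Because the plain-text password is only returned once by the API, Create is the only place this implementation can publish it — a limitation discussed later in the Q&A.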
And this is especially important in the case of composition, because usually you want to wire the resources in the same composition to each other, so that you can refer, for example, to the database in the same composition. So we will also need to implement references — actually, this is mostly auto-generated, so let's continue. In the password type we will have databaseRef. Do you remember the exact type? Reference, okay. With the JSON tag databaseRef, and let me make it omitempty. We also have a selector. Now we need to say which resource we are referring to — that previous Database resource. Let me check quickly: crossplane-tools, it says how to define resolvers. Okay, so we need to point at the referenced type: github.com/crossplane-contrib/provider-planetscale/apis — right, Muvaffak? — and database... .Database — is this enough or should I go further down? This one. Is it okay? Yes, look at the example: we want to refer to that one, not this one — .Database. Okay. One thing: what is our Go module name? It is this one, so I need to use that instead of the repo path. Okay, let's run make generate. What happened is that this new method was generated, in this new file, and here you can see the ResolveReferences function resolving the reference from the password to the database. So this is also good. And kubectl get managed — I think one last try... Okay, since the CRD has changed, I need to apply it again and make run. And after that, seeing it as true-true... why is that? There's no problem with the database itself. You want to debug it live, in front of everyone. So what's wrong here? I'm guessing we hit some intermediate state during development. kubectl get managed. Okay, I think we are good to go except this one. We set that, right? In the password Observe — okay, we need to set the external name here: meta.SetExternalName(cr, ...). That was what was missing.
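What the generated ResolveReferences does is essentially this: if spec.forProvider.database is empty and databaseRef is set, look up that managed Database and copy its external name into the field. A stand-in sketch of that logic (the real resolver is generated by crossplane-tools and works against the Kubernetes API, not a map):

```go
package main

import (
	"errors"
	"fmt"
)

// databases stands in for the cluster's Database managed resources,
// keyed by Kubernetes object name; the value is the external-name
// annotation (the name on the PlanetScale side).
var databases = map[string]string{"example": "example"}

// resolveDatabaseRef fills in the database field from a reference,
// mimicking the resolver crossplane-tools generates from the reference
// struct tags: an already-set field wins, and a dangling ref is an error.
func resolveDatabaseRef(database string, databaseRef *string) (string, error) {
	if database != "" || databaseRef == nil {
		return database, nil // already set, or nothing to resolve
	}
	ext, ok := databases[*databaseRef]
	if !ok {
		return "", errors.New("referenced Database not found: " + *databaseRef)
	}
	return ext, nil
}

func main() {
	ref := "example"
	got, err := resolveDatabaseRef("", &ref)
	fmt.Println(got, err == nil)
}
```

This is why the missing external name broke resolution: the resolver copies the referenced resource's external name, so it must be set during Observe/Create.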
It will be the same again — now it will tell us what that name is. Okay, let's see how it goes; let's also clean this up. Okay, I think something got messed up during development, but we hope that with the composition, on a clean installation, it will work. Yeah. So now we will use these resources in a composition and consume them from WordPress, so that WordPress works with a PlanetScale database. Maybe a bit about the composition first. Here is the composition we are going to use. The main point of a composition is that you don't have to create the database, the password, and everything else every time; you expose an API that is specific to your use case, and then let Crossplane create those resources for you with the configuration you give. For example, in this case we are defining a new API with the CompositeResourceDefinition type from core Crossplane, and it has only one parameter: blogName, right? And then we are saying that for every instance of that API, go ahead and create one PlanetScale database, one password, and one Helm release that installs WordPress into the existing cluster — in-cluster, the kind cluster we are using right now. And there are some details; one example is here. The user gives this small claim — let me show you. They only create this one object, and in the composition we take spec.blogName and say, okay, use this as the blog name here, by adding a patch. So we don't write a controller for aggregating all these resources; we only instruct Crossplane via YAML. Without writing code, you can compose everything like that. And similarly, we can build similar APIs for clusters, for example.
You can have, in one composition, just like the database and password here, a VPC, three subnets, a security group, an internet gateway. If you go to crossplane.io you can see more complex examples like that. So, what we are going to do now is first create our abstraction, the API, which will be the KubeConWordPress type. Okay, so kubectl apply -f xrd.yaml. Once I create this, there will be two CRDs created by Crossplane as a result. Oh — I need to install Crossplane first; Crossplane is not installed, so: helm install. Yeah, for XRDs and compositions you need Crossplane itself. So far we haven't needed it, because with local development we don't need to package the whole provider — we just run make run and it runs locally. So now Crossplane is installed; kubectl get pods. Okay, so now I'm going to kubectl apply my API definition, the XRD, and then I will apply my composition, which, as you can see, creates one database, one password, and a Helm release; by default it targets the same in-cluster cluster. For that, I need to install the Helm provider, so I'm going to use the Crossplane CLI to install it in a minute — because all the providers used in a composition should be installed and ready before you create your claim. So now I'm going to create my composition. Now I have the API definition, and I have told Crossplane what needs to happen when an instance of that API — a custom resource — is created. First I'll check whether all the providers are in place in crossplane-system. Yeah, provider-helm is up now, and I'm running the PlanetScale provider locally. So what I will do first is create the provider configs. The PlanetScale ProviderConfig is already there, so I will create the ProviderConfig for the Helm provider.
Because I am going to use the Helm provider's service account to deploy into the same cluster, I need to give it more permissions; by default it doesn't have permission to deploy into its own cluster — you would normally give it a kubeconfig to target another cluster. So now, with a small tweak, I'll grant permissions with a ClusterRoleBinding. Cool, so now I can create the ProviderConfig for Helm saying, hey, use this same cluster. kubectl get managed — let's check it, just to make sure there is nothing in the cluster. And because we are running the PlanetScale provider locally, Crossplane itself does not have the permissions for its CRDs, so we need to grant that permission to Crossplane too. Normally you would install it through the package manager, just like provider-helm, and it would get the permissions automatically. So I'm going to do the same thing with the Crossplane cluster role as well, just for the sake of the demo — service account, yeah, okay. Cool, now Crossplane has the permissions. As I said, when a provider is installed through the package manager you don't need to do that. So now I'm going to create this claim, the KubeConWordPress, with a blog name: kubectl apply the claim into my namespace, which is default. Now let's watch it: kubectl get managed. See, all three objects are created by Crossplane with the configuration you gave in the composition — I did not have to go and create each one of them. I think we have five minutes left, so maybe we can take questions while it's getting ready. Yeah, we need to wrap up, so let me get back to the slides in the meantime. So yeah, this is how you develop a provider. We developed two managed resources so that we could have a connection secret with the host and password details and mount it into WordPress, which is also done in that composition.
So, composition allows you to make all sorts of relations between resources from different providers. Right now, for example, my cluster has a KubeConWordPress API available for my developers; whenever they need it, they can just create that CR and get a database, a password, and WordPress. You can go to crossplane.io for more compositions — go multi-cloud or multi-tier, use PlanetScale, Azure, and AWS together. Composition is not opinionated about which providers you use, but it is opinionated that they need to be Crossplane providers. And packages: there are a lot more details about packages — you can package providers, and also configurations, which is an XRD and composition together, and publish them, much like Terraform modules in a registry. And lastly, Terrajet. Terrajet is our latest code-generation framework: it generates everything we did just a minute ago, using its own generic controller to call Terraform providers under the hood, so you don't have to write the cloud-vendor-specific parts. So yeah, if you have any questions, we're ready to answer — and I think the WordPress might be ready now. Okay, yeah, it's probably something small that got messed up. Shortly after this demo we will commit this provider-planetscale to the crossplane-contrib organization, so you can check the code there — the code we implemented live here will be available for you, and we will also share the composition we showed.
And there are other compositions — for example, right now we use the existing cluster, but you could have a GKE cluster from provider-gcp and connect the Helm release object to it, so that once you create a KubeConWordPress it will create a cluster, create the database and password, and then install the Helm release of WordPress — everything — and you get a URL saying WordPress is ready to be accessed. Yeah, go ahead... Yeah, exactly. That is on by default, and you can't really turn it off: for every event it checks the custom resource and runs the whole reconciliation logic, and even if there is no event — no change in your Kubernetes cluster — it still runs the reconciliation every minute and checks the external API. For example, if I went to PlanetScale and changed the notes field, it would see that and correct it. So the source of truth is the custom resource, and it continuously reconciles — in AWS, for example, if you change a parameter, it will fix that as well. It continuously fixes, just like a Deployment and its Pods: here the database custom resource is the Deployment, and the resource in the cloud is the Pod. If you delete a Pod, the Deployment recreates it immediately, and it's the same with Crossplane providers: if you delete the database right now, it will create a new one with the same name. Yeah, exactly — that is one part we kind of skipped. You see there is a field here, ResourceUpToDate. What you do in the logic is: you have the observed p here, right, and the cr, so you say, hey, if p's — let's say — notes field is not equal to cr's notes field, comparing both sides, you report that, and then the reconciler sees it and calls
the Update method, so that it fixes the changes. And that goes both ways: it fixes drift, and for every event on the CR it also calls Update if necessary. Actually, the two resources we implemented today are a bit special in that they don't have updateable fields — you cannot update anything on the password, of course, and the database has no updateable parameters either. But if it were an AWS RDS database, for example, there would be a storage size: you specify the storage size in the desired state, and if you go to the AWS console and change it to something else, the Crossplane controller will go and actively reconcile it back to the desired state. I can repeat the question — here is the microphone. "Thank you for a nice talk, really impressive work. Could you maybe also touch on importing already-existing resources, and how the Create method gets a little more complicated in that sense?" So, if you noticed, there is this external-name notion here, right? We don't actually use the custom resource name, because there are certain external names that are not valid custom resource names, and you may not know the name beforehand. So we use the external-name annotation, and, if the provider allows it, we also use it in Create — for example, for the database you are able to give its name in the Create call, so we use GetExternalName there. So if you create the custom resource with an existing external-name annotation, what happens is this: Observe runs first for every reconcile, and it will not report that the resource doesn't exist — Create is called only if you return false there. So it will run the Get call and just continue as if it were the one who created the resource. Crossplane does not care whether it created the resource or not: if there's an external name, it hits the API, asks "does this exist?", and if it exists, it starts reconciling it
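For a resource that does have updateable fields — like the RDS storage-size example above — the up-to-date check in Observe is a field-by-field comparison of the desired spec against the observed state. A minimal sketch with hypothetical fields (the real check lives in Observe and feeds `ResourceUpToDate`):

```go
package main

import "fmt"

// Spec holds two hypothetical updateable parameters, loosely modeled
// on the RDS example from the talk (storage size plus a notes field).
type Spec struct {
	StorageGB int
	Notes     string
}

// isUpToDate compares every updateable field one by one; any drift makes
// the reconciler call Update to bring the external resource back in line
// with the desired state. Fields the update API cannot change are skipped.
func isUpToDate(desired, observed Spec) bool {
	if desired.StorageGB != observed.StorageGB {
		return false
	}
	if desired.Notes != observed.Notes {
		return false
	}
	return true
}

func main() {
	fmt.Println(isUpToDate(Spec{100, "a"}, Spec{100, "a"}))
	fmt.Println(isUpToDate(Spec{100, "a"}, Spec{200, "a"})) // drift: Update will run
}
```

As the Q&A below notes, this check has to enumerate every updateable field explicitly — for something like RDS with dozens of parameters, that is one comparison per field.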
just as if it were the one who created it. "Is it okay if I ask another question? So, you specified a password for this database, right? Let's say my password is compromised somehow — is there a way to do key rollover, so I could roll over my password and my app would get it automatically? Do you have a way of changing it, like Terraform?" So, in this specific example, PlanetScale does not let you set a password in plain text; we request the password, it is returned once, and we save it into the connection secret. In this implementation it's only published in Create, but in an implementation we would merge upstream, it would also be published in Observe, if PlanetScale returned it there. A lot of the time, though, APIs either let you send the secret in Create, or return it only as the result of Create and not via Get, because it's available one time only. For the refresh case, the EKS cluster is a great example: it refreshes its token every 15 minutes, so at every Observe we actually fetch the EKS cluster and publish the connection details, so that the new token is published, your application gets the refreshed token, and everything keeps working. So it depends on the API — if the API allows it, we do that. But one main difference with Terraform: in this case, changing the password would require deleting the Password resource. We don't delete it for you — to get a fresh password, you have to kubectl delete it. In Terraform, if you change such a field, it would delete and recreate the resource; there is no such logic in Crossplane — you always have to delete the resource so that a new one can be created. Which is actually a great use case for composition: it's all deployed everywhere, and okay, I want to refresh the
password — you go and delete the managed Password resource, and the composition will create a new one automatically, because that's how you configured it in your composition YAML. So in this specific case, yes: changing it means deleting the managed resource and getting a new one. "Thank you." You're welcome. Any other... yeah, last one — I think we are a little over time. "Hi, sorry, I had two questions, but we'll see how much time we have. If I'm making my own custom resource — what if there's a configuration change down the line that I didn't monitor for? Is Crossplane able to detect that as a change, or only the things I'm monitoring?" So, if I understand correctly, you're asking: do I have to write the comparison logic I mentioned for every field? Actually, yes. For example, RDS has, let's say, 25 fields. What you do is look at the Update API to see what can be updated, check every one of those fields with if conditions and so on, and report back: hey, we need to update the resource — and then the update starts. So every updateable field needs to be checked one by one. "Okay, that makes sense, thank you. And the second part: what happens when this cluster goes down?" Yeah, so this is a control plane, not a data plane. If I do kind delete, the database will still be there and the password will still be there — unless you kubectl delete a resource and the provider sees it and calls the Delete implementation, nothing touches it. So it's not a data plane where the data flow would stop; it's a control plane: it provisions, deletes, and makes sure everything is correct. Actually, one thing
is that, for the sake of simplicity in the demo, we used the control plane to deploy our application as well — this is not the typical use of Crossplane. With Crossplane, you don't run the application workloads on the control plane, so as Muvaffak said, if the control plane goes down you only lose active reconciliation; if you recover it, it just automatically starts reconciling again, and in the meantime your resources keep running. Usually you have one control plane dedicated to Crossplane and don't run WordPress on it, for example. You would have an existing cluster, say, and in your composition you can point to it — or in your claim API you can say, okay, give me the name of a cluster that is already there, so I can use that. I mean, there are different deployment architectures — this is one of them — but more and more we see people using one control plane dedicated to Crossplane and managing all their deployments and infrastructure through it with compositions. "Okay, thank you." Right, cool — I think that's it. You can find me and Hasan at either the Crossplane booth or the Upbound booth. Thank you for listening.