Well, welcome to our session today. I'm Stephen Borrelli from Mastercard, and I'm joined by Daniel Mangum from Upbound. Today we're going to be talking about a project that I think is really cool: building your own enterprise control plane and basing it on Kubernetes. We hope to show you that this is actually really powerful and really easy to do. Those are our goals for this talk. First, we want to talk about one of the fundamental concepts of Kubernetes: controllers, which are really one of the best ways to manage your internal infrastructure. We want to show just how easy it is, given all the tooling and libraries around Kubernetes right now. So just really quick, here's our agenda for today. This is an 85-minute session. I'm going to do a few minutes of introduction, basically talking about what infrastructure is and how people are doing it today. Then we'll really get into a lot of coding, where we'll be talking about actually developing your own controller. During that, we're going to walk through all the parts of doing that: defining your custom resource definitions, talking to the remote API that you want to manage, and how you get into the guts of a controller and do CRUD operations. Then we're going to do some more advanced things, like packaging your controller up so you can deliver it to clusters. And finally, there are some really cool higher-level abstractions, where you can take other people's controllers and other people's Kubernetes objects and combine them to make more complex infrastructure. So these are really powerful things, and we hope to show you that this is actually fairly easy to do and that you can build a very powerful platform on top of it. So first, when we talk about infrastructure, what do we mean?
If you look at what a lot of companies are provisioning now, what's in the scope of infrastructure teams, it's usually things like storage, virtual machines, firewall rules, networks, and DNS records. Whenever an application team wants to deploy something, they're usually asking for a lot of these things. And if you look at how infrastructure teams operate, they used to do these things manually, and slowly they've been getting into automation. So what you see in most enterprises is companies writing these things as scripts, doing some Chef, or Python and Ruby, and a lot of the newer tools are migrating to Go. What folks usually do is build custom scripting on top of DevOps tools like Chef; Ansible is very popular, and probably one of the most popular provisioning tools recently has been Terraform, which is really good for managing remote APIs. And then the runtime for these platforms is usually a CI system. So what happens is somebody commits something into Git, the CI system sees it and fires off these scripts. So this infrastructure you're building on top of your custom scripting, using some DevOps tools, is usually fired off by the CI pipeline. And this is basically what it looks like logically. Usually what happens is you'll define your own spec, and I've been at demo days where every single team defines their own spec, so we'll have five different spec files introduced during one demo day. We generate a lot of these. Next, in your pipeline you usually have some code, especially in systems like Jenkins that let you write things like Groovy, where you'll start seeing shell scripts running and some advanced logic happening.
And then finally there's the stuff you actually want to do: you have to connect to a remote API, compare what you have to what's there, and then do something about it. So that's usually what these infrastructure pipelines look like today. Now let's talk about some of the problems we're hoping to solve with the controller approach, the kinds of issues we see with these CI-based deployments. The first is that when everyone has their own spec file, validation is usually not done. It might be done at the JSON level, but it's usually not schema-level validation. Next, a lot of these things are very command-line driven, and it's very hard to expose them as an API to developers; the developer experience usually involves opening a ticket or committing something to Git. Tooling is very basic: you might be able to edit the files in JSON, but there aren't a lot of other dev tools that support them. The CI pipeline is usually the main thing running this, and probably the most important point is that these fire off on changes in Git, not on changes in desired state. It doesn't care whether you're adding a comment or making large changes, it's still going to fire off. And usually it runs only once: once you deploy, it doesn't go back and check whether anything has drifted. Finally, especially with homegrown tooling, there are a lot of issues around reconciliation and state management, and operational support usually amounts to little more than sending someone an email if a job fails. So if you look at the vast majority of internal tools right now, they're pretty much like this.
So since we're at a Kubernetes event, we want to talk about how to do this in a way that is Kubernetes-centric and takes advantage of the Kubernetes platform. The first thing is the controller approach: what if we could take all those things we want to provision and model them as Kubernetes objects? So we'll have a SQL database kind, we'll have a storage bucket kind, and for this demo we'll have a GitHub team kind. As part of that, we can have a spec defined, we can have metadata, we can have annotations, all the good things about Kubernetes. We can have this in our files, and we don't have to think about the spec file format anymore because it's handled by somebody else. Then we can put Kubernetes in the middle, and one of the great things about Kubernetes is that it has an API that supports a lot of different tooling. It can watch for events for you, it can look at changes and notify whatever backend you want. That's a powerful pattern. And then finally, for each of these things we want to manage, we'll have a controller running, and it will constantly try to reconcile whatever desired state you put into the system by talking to the external API. So for things like observability and failover, there's a whole bunch of features you get from a Kubernetes controller that is basically spending all its time trying to get to the state you've asked for. It's a very powerful pattern in that way. What we're going to talk about in this tutorial today is how we build up all the layers of this controller.
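To make the "model infrastructure as Kubernetes objects" idea concrete, a manifest in this style might look something like the following. This is a hypothetical sketch: the kind, group, and field names are illustrative, not from an actual provider.

```yaml
# Hypothetical managed resource -- kind, group, and fields are illustrative.
apiVersion: example.org/v1alpha1
kind: SQLDatabase
metadata:
  name: orders-db
  annotations:
    owner: team-payments   # ordinary Kubernetes metadata works as usual
spec:
  forProvider:             # desired state, supplied by the user
    engine: postgres
    sizeGB: 20
status:
  atProvider:              # observed state, written back by the controller
    state: creating
```

The spec is what the user asks for; the status is what the controller observes from the external API, which is exactly the split discussed later in the talk.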
At the very bottom level we have Kubernetes itself, which runs our controllers as Docker containers and handles things like role-based access control, and whether you want to put a firewall or a load balancer in front of all this. Next, if you haven't done any Kubernetes programming before, there's this entire concept of API machinery; this is how all the APIs are composed. There are a lot of great things here, but probably the most important one to emerge recently is the idea of custom resource definitions, or CRDs. You're going to hear about these a lot, and this is how you can make custom objects look like native Kubernetes. If you wanted to define your own type, like a virtual machine type, you could just apply it to Kubernetes and it would look like anything else. On top of that is a library called controller-runtime, which has become one of the main libraries that almost everyone who writes Kubernetes controllers uses to some extent. It takes care of things like what happens when someone applies a change to one of your Kubernetes objects: it watches, and it helps you do reconciliation against Kubernetes. It makes it very easy to have controllers that respond to changes, so when your customers ask for something different in your infrastructure, whatever's watching automatically gets notified. And finally, probably one of the most interesting things in this talk is crossplane-runtime itself. Crossplane makes it really easy to manage external APIs. Most of the other Kubernetes libraries make it easy to manage Kubernetes objects, but crossplane makes it really easy to take basically any API out there, do these operations, and treat it like Kubernetes-managed infrastructure.
So this is really one of the most exciting things, and it makes this a great base for building your own software, because the benefit of this approach is that you only have to focus on the logic in your controller, and you get extremely full-featured control planes just by building on the rest of the stack: things that are constantly reconciling, things that are validated via OpenAPI at the client level. It's a very powerful platform, and you can spend most of your time focusing on your own logic. And finally, it has to be said that when you have native Kubernetes controllers for your infrastructure, suddenly you can use all kinds of great tools. I use Kustomize a lot personally. The AWS CDK is really interesting too. I'm a big fan of Argo CD for deploying things to any cluster and doing GitOps. Velero could back it up. There are crossplane demos using Open Policy Agent. So there are a lot of exciting possibilities. Finally, in summary: there's massive ecosystem support, CRDs let you expose infrastructure as Kubernetes objects, and there are libraries that let you build really full-featured, powerful software. This is really an ideal platform for managing and extending infrastructure. All right, so before we get too far into the tutorial, I just want to walk through what crossplane is and how we can get it installed and start extending its functionality. I'm here on the crossplane website, and you'll see the main thing we propose doing with crossplane is managing your infrastructure from Kubernetes. If we go over to our documentation, we have a helpful getting started guide, which we're going to run through shortly. So I'll start off by spinning up a kind cluster. If you're not familiar with kind, it's a great way to run a local Kubernetes cluster; it actually starts a cluster running in Docker for us.
So while that's coming up, let's take a look at what crossplane proposes to do for you. The first thing is provisioning and managing your infrastructure, and that's the primary thing we're going to be looking at today. At the end of our tutorial, we'll talk a little bit about how we can package that up and install it into different crossplane clusters to extend that functionality. But the first thing we want to do, since crossplane is a Kubernetes add-on, is install crossplane into our Kubernetes cluster. You'll see here we have the alpha and master channels, alpha being the latest release, which is 0.13 as of this recording. So we can go ahead and create a namespace, and I've already added the crossplane alpha repo here, so it's just one command to helm install crossplane. Let's see if our cluster has come up; it looks like it has. So I'll create that namespace, crossplane-system, which is where we like to install crossplane, and then we can go ahead and do the Helm installation. After a moment, you'll see that it gives us a little bit of information as well as a chart description, and you'll see we're using chart version 0.13. If we look at the pods now running in our cluster, we should see the crossplane pod, which is the core functionality of crossplane, and then the RBAC manager. Having these two separate processes allows us to really lock down security, because managing RBAC is completely dedicated to the RBAC manager, which allows crossplane to run with lesser permissions than cluster admin. If you'd like to manage those permissions yourself instead of having this automated workflow, you're welcome to deploy just the crossplane pod and handle the RBAC on your own. All right, so now we have all of these pods running. And if we look at our CRDs here, we still haven't installed very much in terms of new API types.
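The install flow narrated above comes down to a handful of commands, roughly like the following. Cluster name, chart repo URL, and versions reflect the era of the recording and should be treated as assumptions; check the current crossplane docs for up-to-date values.

```shell
# Spin up a local cluster in Docker (kind = Kubernetes in Docker)
kind create cluster --name crossplane-demo

# Install crossplane from the alpha channel into its own namespace
kubectl create namespace crossplane-system
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install crossplane --namespace crossplane-system crossplane-alpha/crossplane

# Verify the core crossplane pod and the RBAC manager are running
kubectl get pods -n crossplane-system
```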
You'll see some meta CRD types, like the packaging types for configurations and providers, which we're going to look at momentarily, but you don't see any external cloud infrastructure yet. That's because crossplane comes out of the box without support for any individual provider. It differs in that regard from other solutions, which may be proprietary to a single cloud provider. We have a provider model, which is what we're going to be designing today, and we also have a number of providers for common cloud infrastructure that the crossplane community maintains. So we can go into the documentation here, and the first thing we do after installing crossplane and checking its status is install the crossplane CLI. This is a kubectl plugin, so you'll be able to run kubectl crossplane with a variety of commands, and this makes it really easy to add new functionality. We have two types of packages in crossplane, and we're particularly looking at providers today. You'll see here we want to install a provider, provider-aws. This is actually packaged up in an OCI image, and we're going to use the latest stable version of provider-aws, which is 0.12. So I'll go ahead and copy this over into my cluster. Once again we see that we have these CRDs and just the crossplane pods running, and we're going to go ahead and install the provider. What's happening behind the scenes is that crossplane is unpacking the contents of this provider package, and it's going to install CRDs and also start controllers to watch those CRDs and take action based on events that take place in the cluster. So let's take a look at the CRDs, and you'll see we have far more now. If we scroll up, you'll see that there are lots of AWS-specific ones, such as repositories for ECR, EC2 security groups, IAM roles, DynamoDB tables, RDS instances, et cetera.
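The provider install being described is a single CLI command; the image name and tag below match the version mentioned in the talk but should be treated as assumptions for that era.

```shell
# Install the AWS provider package (an OCI image) via the kubectl plugin
kubectl crossplane install provider crossplane/provider-aws:v0.12.0

# Watch the new CRDs arrive as crossplane unpacks the package
kubectl get crds | grep aws
```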
So we've just extended the functionality of our Kubernetes cluster quite a bit: alongside our deployments and other native Kubernetes types, we can now create external infrastructure, manage it on those cloud platforms, and connect it to our crossplane-managed workloads. Once we have provider-aws installed, we can see what CRDs it brought with it, and we can also see that there's a controller running for provider-aws that's going to be reconciling those resources. All right, the next thing we want to do, to be able to actually provision resources on AWS, is create a secret with our account information, and we have a helpful command here that uses the AWS CLI to do that. Then we're going to create a ProviderConfig, which basically informs crossplane how to reach out to AWS, and here we're going to use the Secret credential source to do that. So I'll run these commands real quick and we can make sure we get something installed. All right, so we created our credentials file, and then we create a generic secret from it in our Kubernetes cluster. The last thing we want to do is apply this ProviderConfig, and we have a helpful command here where you can apply it from a remote URL. I would normally advise you to check the contents of something before you apply it against your cluster, but since we have a local cluster here and I put this link here, we'll go ahead and do it. All right, so we have our default ProviderConfig, and the next step is to actually provision some resources. Let's go on to the next section and look at provisioning infrastructure. You'll see that what we're going to provision is an RDS instance on AWS, and we're going to tell it where to write our connection secret. That's another big part of crossplane, which we're going to see later when we're designing our own provider: credential information, so you can connect this to your workloads.
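The ProviderConfig being applied from the docs looks roughly like this sketch; the secret name and key are assumptions, though the `Secret` credential source and the group/version reflect provider-aws of that era.

```yaml
# Sketch: tell provider-aws to authenticate with credentials from a secret.
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default          # "default" is picked up automatically by resources
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: aws-creds    # created earlier from the AWS CLI credentials file
      key: creds
```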
You can have a deployment that talks to the database. We're going to show a simple example here, so we'll just provision the database and see what happens. All right, we'll run this command, and it looks like our RDS Postgres instance was created. We have a couple of shortcuts that'll help you get resources more quickly, so you can do something like kubectl get aws to list all of your AWS resources, though that output frequently has large gaps in it. So let's do something more specific and just get the RDS instances. All right, so we have the RDS instance we just created, and we'll see some information about it. Importantly, we have our SYNCED value, which answers: are our crossplane controller and the spec we've specified for this resource consistent with what's happening on the external infrastructure? And READY is indicating false, which is just to let us know that this resource is not available for consumption yet. Since we specified that the connection information should be written to a secret, when the resource actually becomes ready, we should be able to get secrets, in this case in the crossplane-system namespace, and see it there. And we'll see here that... actually, the secret is already present; that was given to us by the initial create operation. When this is finished provisioning, we're going to see more connection information in there, and you can look in the documentation to see all of the connection details an RDS instance publishes to its secret. Then you can specify, via environment variables or something of that nature, how to get the secret into your deployment so your application can talk to the database. So that's an overview of what you can do at the basic level with crossplane, and you can write whatever provider you'd like to fit into this ecosystem.
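The RDS resource being provisioned looks roughly like this; the field values are illustrative rather than the exact getting-started manifest, but `writeConnectionSecretToRef` is the mechanism described above for publishing connection details.

```yaml
# Sketch of an RDS managed resource with a connection secret target.
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: rdspostgresql
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t2.small
    masterUsername: masteruser
    allocatedStorage: 20
    engine: postgres
  writeConnectionSecretToRef:      # where endpoint/username/password land
    namespace: crossplane-system
    name: rdspostgresql-conn
```

A deployment can then pull, say, the endpoint and password out of that secret via `env.valueFrom.secretKeyRef` entries.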
And crossplane will actually manage installing it, upgrading it, and managing all of its different resources. We're going to extend that today, but you can also do things like packaging up your infrastructure abstractions, which is another layer on top of these primitives, the cloud provider resources we've installed into the cluster. At the end of our tutorial, I'm going to circle back and show you how we can compose some of these primitives and package them up. All right, so now that we've gotten a look at how crossplane works and how you can extend its functionality with providers, let's take a look at the provider we're going to work on today, provider-github; we're calling it the KubeCon provider-github here. To motivate the discussion and the work we're going to do today, and hopefully inspire you to go write your own providers, let's take a look at what this can do. If you look at the CRDs in our cluster, you'll see that we have our ProviderConfig and ProviderConfigUsage, which are kind of the plumbing; we'll talk about those in a little bit. But the primary thing we're going to work on today is this Team resource here. This Team represents a GitHub team, and we want to be able to create and manage teams on GitHub from our Kubernetes cluster, just like you would go to AWS or GCP and create a database. Today we're going to look at creating cloud resources in a GitHub organization. So I'll go ahead and start off. I actually have an example prepared for us, so let's look at its contents. This is a pretty simple resource: you can see it's just a kind Team in the org group, and we're going to use the name, which will represent the name of the team in GitHub. Then we have some fields to configure it, specifically the org, the description, and the privacy.
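The example manifest being described probably looks something like this; the API group and version here are guesses based on provider-template conventions, so treat them as hypothetical.

```yaml
# Hypothetical Team manifest -- group/version are assumptions.
apiVersion: org.github.kubecon.crossplane.io/v1alpha1
kind: Team
metadata:
  name: crossplane          # becomes the team name on GitHub
spec:
  forProvider:
    org: my-org             # the GitHub organization to create the team in
    description: "Team managed from Kubernetes"
    privacy: secret
```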
We'll look at how those get set and how they can be modified later on, but for now we just want to show that we can create one of these. So I'll go ahead and apply that resource, and you'll see over here that we've got some log messages coming from our controller, which is running against a local kind cluster. The important thing is that we want to see that our external resource is up to date. Essentially that's saying we first created the resource and then checked to make sure it matches the configuration we specified. Since we just created it, that likely happens on the first reconcile, but this controller is going to keep making sure it stays in sync. If we look at our team, we can see that we have a synced team here, and if we look at the status, we can get a little more detail: there's some information about when it was last reconciled, and we can customize this to give more information if we want. I also want to show that this team actually does exist in our GitHub account. If you look at the configuration we had, we named it crossplane, we have our description, and you can also see that it's a secret team; we'll show how you can modify that later. But before we jump into that too much, we've seen how these resources and different APIs get added to a cluster. So Stephen, I wanted you to give us a little bit of insight into what a CRD is, how they're structured, and how a controller looks at the two main parts of a CRD to drive its action. Yeah, thank you, Dan. So this is one of the really interesting things about this pattern, and you're going to hear this term a lot if you haven't heard it before: a custom resource definition. This is basically taking things that are outside of Kubernetes and making them look like Kubernetes, and what we're going to show you right now is the code for doing that.
One of the most important things here, and this is the pattern not only for crossplane but for almost any controller written for Kubernetes, is that you'll have this directory called apis, and that's usually where we put the definitions of our CRDs. So we're looking at a file here called types.go, and there are a few important fields to point out first: parameters and observation. If you look at the example Dan showed before, we had the name of the organization, the description, and the privacy settings. When you create a CRD, the way it works with Kubebuilder is that it looks at your Go code, and then the Makefile generates the CRD YAML, which you then apply to your cluster. This is how everything gets mapped, and this is how you create your own custom Kubernetes objects that you can apply to a cluster and have controllers manage. So you can see here we have org, description, and privacy. The next thing is observation. What's an observation? An observation is what your controller finds when it goes and talks to the remote API and observes its state; based on that, it decides what it needs to do in terms of reconciliation. So those are the key things you store: your settings and the observed values. And there are another two important fields you'll see, present in pretty much all Kubebuilder-style controllers: a spec and a status. The spec is the desired state; that's what you provide to the cluster to say, this is what I want things to look like, and it's the thing the user provides. The status is the thing the controller comes back with: it goes and talks to the remote resource and reports its state back.
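As a self-contained sketch of the shape being described: the real types.go lives in the provider's apis directory and embeds crossplane-runtime's ResourceSpec/ResourceStatus structs, which are omitted here (noted in comments) so this compiles on its own with only the standard library.

```go
package main

import "fmt"

// TeamParameters mirrors the configurable, provider-specific fields of a
// GitHub team. In a real provider these carry kubebuilder markers.
type TeamParameters struct {
	Org         string `json:"org"`
	Description string `json:"description,omitempty"`
	Privacy     string `json:"privacy,omitempty"`
}

// TeamObservation holds what the controller sees when it talks to the
// remote API (here, a hypothetical team ID assigned by GitHub).
type TeamObservation struct {
	ID int64 `json:"id,omitempty"`
}

// TeamSpec is the desired state the user provides.
type TeamSpec struct {
	// The real type embeds crossplane-runtime's ResourceSpec here
	// (provider config reference, connection secret ref, etc.).
	ForProvider TeamParameters `json:"forProvider"`
}

// TeamStatus is what the controller reports back after observing.
type TeamStatus struct {
	// The real type embeds crossplane-runtime's ResourceStatus here
	// (conditions such as Synced and Ready).
	AtProvider TeamObservation `json:"atProvider,omitempty"`
}

// Team ties spec (desired) and status (observed) together; the real type
// also embeds metav1.TypeMeta and metav1.ObjectMeta.
type Team struct {
	Spec   TeamSpec
	Status TeamStatus
}

func main() {
	t := Team{Spec: TeamSpec{ForProvider: TeamParameters{Org: "my-org", Privacy: "secret"}}}
	fmt.Println(t.Spec.ForProvider.Org)
}
```

The parameters/observation pair maps the external API; the spec/status pair maps the Kubernetes convention of desired versus observed state.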
So these are two of the important concepts in building out a Kubernetes CRD API. Absolutely. And one of the things Stephen pointed out there is that we have the parameters and observation, and we have the spec and status. You can think of the parameters and observation as the provider-specific fields, and they roll up into the spec and status. So we have our parameters for the provider-specific configuration, in this case GitHub, rolling up into our spec here. Then in the status we have the atProvider field, and this is just a crossplane pattern we follow. Within the crossplane ecosystem we also have these embedded structs that we put in our spec and status, and they provide uniform fields across all of our resource types. They're used to do things like connect to the external provider via the ProviderConfig, which is another type we can look at quickly here. If you saw earlier when I created the team, we had a list of CRDs including ProviderConfig and ProviderConfigUsage. Once again, we're seeing things Stephen just mentioned. This is a special resource in the crossplane ecosystem: it tells you how to connect to the external provider, so it has a credentials method in it, and once again you can see we have an embedded struct. These embedded types, these abstractions presented for us, come from crossplane-runtime, which we'll take a look at in a minute. The ProviderConfig spec is usually going to reference a Kubernetes secret, though you can also provide different authentication methods, and the status basically just shows that it's able to connect. So if we look at an example of what a ProviderConfig looks like, here we're using a secret to authenticate: we're creating a secret in the crossplane-system namespace, and you'll see that we have base64-encoded credentials.
In this case, for GitHub, you provide an API access token. So we create the secret and then reference it in the ProviderConfig. And if we go back to that first type, the Team type Stephen was talking about, and look at this embedded type, you'll see that we have a provider config reference, which builds in the fact that every managed resource in a crossplane provider is going to have to reach out to an external credential source. One of the nice things about the ProviderConfig is that you can have multiple of them: you could be managing multiple backends, and even if they're the same platform, they can have different credentials. So you could take the same resources and apply them using two different accounts if you want some kind of DR setup or similar. What ProviderConfigs do is abstract out the connections, and that gives you a lot of flexibility to separate your resources from the backend provider. It's a very nice pattern. Absolutely. And some of the other providers, for platforms that actually have Kubernetes clusters themselves, will offer things like using IAM roles for authentication, which can be really useful for the separation of concerns Stephen was mentioning. With GitHub, you may have two different access tokens that you want used for different organizations, and that protects you from creating something in the wrong organization. But if we look at an actual example of our Team resource, the one we created just moments ago, we actually don't have a reference to a ProviderConfig, and that's because crossplane also provides the ability, if you create a ProviderConfig named default, for the provider to use that one whenever none is specified.
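Put together, the secret plus default ProviderConfig described here might look like the following sketch; the API group, secret name, and key are assumptions, not the provider's actual values.

```yaml
# Sketch: GitHub token secret plus a default ProviderConfig.
apiVersion: v1
kind: Secret
metadata:
  name: github-creds
  namespace: crossplane-system
type: Opaque
data:
  credentials: BASE64_ENCODED_ACCESS_TOKEN   # a GitHub API access token
---
apiVersion: github.kubecon.crossplane.io/v1alpha1   # hypothetical group
kind: ProviderConfig
metadata:
  name: default    # used automatically when a resource names no config
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: github-creds
      key: credentials
```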
So there are little goodies like that sprinkled throughout the crossplane-runtime ecosystem that make things a little bit easier for you. All right, we've gotten a good look at these different API types, but you may be wondering how those translate into actual CRDs existing in our Kubernetes cluster. A CRD is a resource in itself, a custom resource definition, and there's a controller running in Kubernetes that watches for custom resource definitions; basically you're dynamically adding new API types. So we have to create those custom resource definitions to then be able to create something like a Team. We have them in the crds directory in our provider-github here, and you'll see that it's generated and looks like any other custom resource definition you would see. You'll see that those different fields in the Go struct were actually used to create this schema here, which gives us validation when we create instances of our Team resource; we'll look a little more at that down the road. But Stephen, did you want to talk a little bit about what some of these annotations, specifically the kubebuilder ones, do for us in that generation process? Yeah, if you look back at the code, and this is a feature of Kubebuilder, you can see that there are kubebuilder validation markers, print-column markers, and object markers. So we're building on a lot of the Kubebuilder project here: you add these annotations, and when you build out your CRDs, they instruct exactly how to represent them. And as we said, recent versions of Kubernetes impose OpenAPI validation on CRDs, which means that once you build this out, you suddenly have validation not only at the server level; newer versions of Kubernetes actually have the client doing validation now too.
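An abbreviated sketch of what the generated CRD in that crds directory might look like, with the Go struct fields turned into an OpenAPI schema; names, the enum, and required fields here are illustrative, not the provider's actual output.

```yaml
# Abbreviated, illustrative sketch of a generated CRD.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: teams.org.github.kubecon.crossplane.io   # hypothetical group
spec:
  group: org.github.kubecon.crossplane.io
  names:
    kind: Team
    plural: teams
  scope: Cluster
  versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:       # generated from the Go struct + markers
        type: object
        properties:
          spec:
            type: object
            properties:
              forProvider:
                type: object
                properties:
                  org:
                    type: string
                  description:
                    type: string
                  privacy:
                    type: string
                    enum: [secret, closed]   # from a kubebuilder marker
                required: [org]
```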
So basically all that work is handled for you, and all the tooling can automatically validate from these CRDs. That's something very powerful. And that's about it. I think Dan could show us some of the code for how, when you run make in a lot of these repositories, there are hints about how generation works. We'll look right here at controller-gen and angryjet, and maybe we can show how to make and generate the CRDs. Yeah, absolutely. As Stephen pointed out, there are some commands built in. This is probably an opportune time to mention that this entire repository right here, and you can still see my git diffs here, is generated from a project we have in crossplane called the provider template. I'll go over to that real quickly. It basically has a mock provider that you can refactor for your own purposes, and it's a template repository. So to actually create the repository we were looking at for provider-github, I just clicked to use this template, created a new repository in my account, and then refactored some of the provider template code into what we wanted for GitHub. You'll see that instead of having a Team resource, it has this sample MyType resource; it just gives you a boilerplate to get started. One of the things we provide in it is this generation code, which you'll see just has go:generate statements that run when you execute go generate. This is a convenient way to pin the external binaries you need for generation. So controller-gen is the Kubebuilder-flavored generator: it creates these deep-copy methods for us as well as generating the CRDs according to those directives we were just pointing out, and you'll see that we pass some options to controller-gen to tell it where we'd like those CRDs to go. Angryjet is our crossplane flavor of that.
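The generation file being described typically looks something like this; the exact flags and output paths are assumptions based on the provider template's conventions, but both tools named here are real.

```go
// +build generate

package apis

// Generate deepcopy methods and CRD manifests with controller-gen
// (the Kubebuilder-flavored generator); flags/paths are illustrative.
//go:generate go run sigs.k8s.io/controller-tools/cmd/controller-gen object:headerFile=../hack/boilerplate.go.txt paths=./... crd:crdVersions=v1 output:artifacts:config=../package/crds

// Generate crossplane-runtime managed-resource method sets with angryjet.
//go:generate go run github.com/crossplane/crossplane-tools/cmd/angryjet generate-methodsets ./...
```

Pinning the generators via `go run` against module dependencies means everyone who runs `go generate` uses the same tool versions.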
And that does things like generate our managed.go here, our pc.go, which are basically helper methods that let us interact with these different classes of resource, commonly called duck types in the Kubernetes ecosystem: resources that are different, but that follow the same patterns, which we can treat similarly to how we treat Go interfaces. So this is basically just generating methods to satisfy those interfaces. Once we have these API types defined, the next thing to do is to define how we're actually gonna talk to the external provider. Frequently there's going to be an SDK for whatever provider you're talking to, and you can just look at the Go types there and how to authenticate, and follow that pattern in your controllers. In Crossplane, we generally like to separate our clients and controllers into two separate directories. It's a little bit trivial in this case because our client is very small for GitHub, but you'll see in many other providers that we'll have all of our methods for translating between the Go structs that we define and the API types of the external provider in that client directory as well. So to write this, I basically just looked at the go-github repo. We'll go ahead and open that up — here's the doc for it — and it basically gives you a convenient way to authenticate. You'll see that it just uses a standard HTTP client to authenticate, and you just provide your access token. So we've created this NewClient method here, which essentially just accepts an access token and gives us back a GitHub client. So after we've defined how to connect to the external API, we actually wanna use that in a controller. Importantly, we saw earlier — and I believe it's still running over here — that our controller basically continuously runs and outputs some information. And what it's doing is watching these API types that we've defined.
And so we have a single entry point, as you generally will for a Go binary. So let's take a minute to look at what we're doing here. An important thing to realize is that we're heavily relying on controller-runtime, which basically gives you a general framework for running a bunch of controllers together in what they call a manager. There are some different options that you can supply here to the manager; we're just setting the sync period, so we're not configuring it too much. The important thing to note is that we have the controller manager, and then we're adding our APIs to the scheme — right now that would just be our ProviderConfig and our Team resource. Then we're doing our controller setup and starting our manager, which is basically gonna run all of these controllers — anything we provide in the setup here — keep them running and watching the resources, and it'll be notified by the Kubernetes API server when events happen on those resources. This setup method is just a wrapper around some other setup methods, and what it's doing is actually setting up these controllers and registering them with the manager. We're not gonna look too much at the config setup today, which basically just watches ProviderConfigs and takes some actions based on that — essentially confirming that they're not in use before they're cleaned up, because if you lost your connection while you were still trying to reconcile a managed resource type like the Team, that would obviously be a bad experience. We're primarily gonna look at our org setup method here, which is for our Team, and this is gonna take us into our controller directory. In the setup method, you'll frequently see in a kubebuilder project that someone has defined a reconciler, and they're basically populating it with whatever fields they need, then doing this NewControllerManagedBy and specifying the resource they wanna watch.
Instead here, we have a call to this managed.NewReconciler method. So when I talked earlier about these different methods that were generated for our managed resource types — which are just all of our types that represent an external API — having those methods allows us to use a generic managed reconciler, which basically takes care of the Kubernetes side of your controller. So obviously when a resource gets created, you wanna take action based on that, and you may wanna call different external methods based on the status — or really the specification — of your resource in your Kubernetes cluster. And the managed reconciler takes care of a lot of that for you. So Stephen, I know you have a lot of experience with writing Kubernetes controllers. We'll look especially into the implementations of the different methods that are part of the managed reconciler that we define in crossplane-runtime. But do you wanna talk in general a little bit more about the difference between a generic Kubernetes reconciler and a managed resource reconciler in Crossplane? Yeah, actually, one of the things that really attracted me to Crossplane in the beginning was that if you've ever written a controller yourself, there's a lot of manual work in terms of reconciliation, logging, creating events. Having written a lot of infrastructure software, you kind of repeat the same patterns over and over again. And actually, as we walk through this, we'll find that there's a very structured way that Crossplane layers this on top of Kubernetes controllers that makes the logic very easy to understand. So I think that's good. And the most important thing to understand is that this controller is basically a piece of software that's gonna run in your cluster, and when we look at that setup — when you bring it up, it registers and says, I wanna listen to anything that defines a Team.
Right, that's what the setup does right here. That's the most important thing, because when you first look at this, there are like 15 different concepts that you have to understand. If you look at line 55, you'll see this group version kind here. This is pretty common in Kubernetes — you'll also see it as GVK in some of the source code. Basically, the kind would be Team; the group would be the API group for the provider, something like github.crossplane.io; and the version is the API version. Yeah, so you'll see these terms a lot in Kubernetes. I remember the first time, when I was playing with Kustomize, it kept giving me GVK errors and I had no idea what those were. So to save you some pain, this is where it comes from. Yeah, so that's basically the core of this setup — there are a lot of things that have to be called here. But if you look at it, it says we're going to reconcile Teams with an external connector, which means we're going to connect to the GitHub API. And then you say with logger and recorder, and these are nice fields because this sets up all your logging and events. Yeah, absolutely. So there's a lot here, but that's basically what this part is doing: you're saying, I wanna subscribe to these events and I'm gonna use this external API to manage them. And I like these different With methods that we have here, which are basically saying, I'd like to provide this kind of customization option for this thing. But if we actually look over to the NewReconciler code here, there are a lot of sane defaults that we provide for you, one of those being things like defaulting to the ProviderConfig that has the default name if one is not specified.
So we have a lot of different hooks to customize this behavior. And if we scroll down here a bit, this is actually the reconciler that you'd see a little more traditionally in a kubebuilder controller or something like that. You'll see we have some default fields here, things like setting the finalizer, which you don't have to worry about at all when you're writing a Crossplane controller, because we'll take care of that — though you can obviously provide your own API finalizer. Another important one that I wanted to point out, which isn't going to be relevant for us today because there isn't really connection information for a GitHub team: if you're provisioning a database or a Kubernetes cluster or something like that, there's connection information that you want to get back into your cluster so that you can reach out and communicate with the resource you've provisioned. So we have a default API secret publisher, which is going to publish those secrets into the Kubernetes API, and then you can run a deployment that consumes them either through an environment variable or by getting that resource directly. Or you could say, I'd like to provide my own secret publisher, which is going to send these secrets to Vault or something like that. So you can override any of this behavior, but you're gonna get a really good working controller just by using the defaults here, and then you can iterate on that. It's kind of zero to working controller as fast as possible. Yeah, and the secret propagation is really a great feature because it's very underrated. When you deploy a new system and you get some credentials back, how do you actually get those back to the user who requested it? You give them a database and they're like, well, now I'd like to have the credentials — and then usually somebody has to go log in somewhere and generate them or hook up some external system. So it's really a nice feature to have. Absolutely.
And you'll see here, this is the NewReconciler method that we're running, and we're passing in that controller manager and then our managed kind. So that just maps directly to what we were looking at here: our controller manager and our managed kind. And an important thing about the reconciler here — Dan referred to it slightly — is that if you look at a classic Kubernetes controller, it usually has a pretty light reconciler and then expects you to put all the logic in there. There are a lot of corner cases and logic that you need to think about when writing your own reconciler, especially for infrastructure tasks. So this actually makes it a lot easier to go through the common patterns of managing and provisioning infrastructure. The reconciler is actually very powerful. Definitely. And if you wanna dive into some of the actual work that's happening here in the reconcile loop, which we're gonna get into a little bit more in a moment: this is the generic reconcile loop that the controller manager is going to invoke. It says, okay, this thing has registered for events that happen on Teams; every time I send an event, I need to run this reconcile loop. So that's what's actually getting called, and we're just basically configuring this reconcile loop in our controller. Stephen alluded to this a bit earlier, but there are two main types that you need to satisfy to be able to run a managed reconciler: the first being the connector and the second being the external. The connector has a single method, Connect, and this tells us how to get the external struct, which has our CRUD methods on it that are gonna be called by our managed reconciler. And the external, obviously, is what's returned by Connect and has all those methods defined on it. So before we get into the external, which is kind of the meat of how we talk to the external provider, let's look a little bit at the Connect method.
So we've already alluded, when looking at the API types, to the fact that we use this ProviderConfig to get credentials to talk to an external API. You'll see that we have our ProviderConfig type here — this is referencing our APIs directory that Stephen mentioned earlier — and we're basically using our Kubernetes client to get that ProviderConfig. As I mentioned, every managed resource has a providerConfigRef, and we're using that here. In the example we were showing, that providerConfigRef is being set to the default for us, so this would use the name default, and I had already created a ProviderConfig in that last example. Once it gets this ProviderConfig, it's going to look at the secret reference on it — if I go back to an example here, it's basically looking at this part of the struct — and it's then going to get that Secret from the Kubernetes API. Our Secret has our API token on it, and this is a byte slice that we're converting to a string. We're passing that to the NewClient method that we defined earlier, which takes the string and gives us a GitHub client in return. One of the benefits of moving some of this client logic — which, like I said, is admittedly very small here — out of our controller body is that when you have many, many controllers, you don't have to duplicate all of that code. So basically we're just going to return from our Connect method our external struct, populated with our GitHub client, which is now authenticated. So now for the real meat of what we're doing when we talk to the external API, and that's our CRUD methods. We have Observe, Create, Update and Delete — so I guess it's not really CRUD, because we're using Observe instead of Read — but they basically get called in the fairly sequential order that you'd imagine.
And so when we return this external struct, the managed reconciler is going to say, okay, I see that you connected successfully; now I have this external type, and I'm going to call these in a logical order. The first thing we're going to do is observe the resource, and based on the result we get back from that, we're going to take further action. If the resource was deleted in the Kubernetes API, then we're going to call the Delete method on the external that was returned — we'll skip over some of this other logic for a moment. If the resource does not exist — if we observed and said, hey, I couldn't find this in the external API — we're going to call the external Create method and do some operations. And the last thing we're going to look at is: is the resource up to date? If the observation says, hey, this resource exists, but it doesn't match the spec that we were talking about earlier — our desired state of the resource — then we need to issue an update, and it's going to call that Update method that you supplied. So you saw that we skipped over quite a lot of the other things that are happening throughout this reconciler, and that's kind of the benefit, right? You don't have to worry about those until you get later down the line and maybe want some custom behavior. They're doing things like publishing the connection details, cleaning up the connection secret, initializing the resource and populating fields that are set by the external provider, or updating the status of the resource — like showing that it's synced, which we saw earlier when we were demoing. All right, so what do we actually put in these different methods? The first is our Observe method here, and we've defined our GitHub client, which has a Teams service with all of the different methods that we're going to want to call.
So within our Observe method, since we've embedded this GitHub client in our external struct, we're able to call any of these methods that we want. You'll always want to make sure that the information necessary to make all of the API operations you're going to perform is present on the API type that you've defined. So here we're going to get our team from the API — you'll see that we're providing the org and the name of the resource, which we also want to represent the name of the team on GitHub. And if we get an error back from the GitHub API, we're going to go ahead and return our ExternalObservation, which is the struct type that the managed reconciler is going to check, and you'll see it has these three fields: resource exists, resource up to date, and connection details. They do pretty much exactly what you'd think. If resource exists is false, then it's going to call the Create method. If resource up to date is false, it's going to call the Update method. If connection details are populated, then it's going to create a connection secret with those details. So here we're saying it doesn't exist, so we need to actually create it. Let's go ahead and follow this branch a little more before we continue down the Observe method. Eventually the managed reconciler is going to get to the Create method and say, I need to call this because the resource doesn't exist yet. You'll see that we're once again using this Teams service from the GitHub client to create our team with the configuration that we've defined in our API type. And based on the result we get back from that, we return an ExternalCreation, which once again allows us to supply additional connection details if needed, and we also return an error. So if the managed reconciler gets an error back, it's going to say, okay, I see that I tried to create this resource, but it didn't work — I'm going to try again in a little bit.
And some of this timing, the delays that happen between reconciles — those are also defaults that we set that seem to be commonly accepted patterns for the managed resources that Crossplane provides. But you can also override those, just like any of the others, by providing your own wait and that sort of thing. And I want to comment on this pattern here, because usually when you write a lot of infrastructure software, you have if-then statements when you're comparing things. One of the nice things about this is that it returns immediately and goes back into the reconcile loop. There are times, if you're provisioning things like VMs, when it might take 20, 30, 60 minutes to do something; with this, you can keep observing it and come back with different kinds of status. Other tools will often just block and not return, whereas here you have this reconciliation loop that keeps checking in the background for you. So that's very nice. It's a different pattern if you've spent a lot of your career doing if-exists-then, but it makes a lot of sense once you understand how Crossplane works. Absolutely. And one of the things you mentioned there is something I really like to point out: the Packet — well, I guess Equinix Metal now — provider is maintained by the Equinix Metal team. Their provisioning of bare metal instances actually has different stages, and since we have this status portion of our API types — here we have the node ID, which we'll show populating in just a moment — in their status they have, I think, a stage field for their bare metal device type. And they update that as it goes along, so you can monitor the progress of your resource provisioning, and it actually gives you a percentage value.
So it's really nice to do your kubectl describe and see, oh, my VM is 77% provisioned, and go from there. Not all API types are that generous with the information they give you, but that's one of my particular favorites. Now, one of the more tedious parts, I'd say, of the managed reconciler is checking whether the resource needs an update. There are a few tricks to get around this — we've seen folks actually generate JSON structs and then use libraries to diff them. Here we've only defined two fields in our spec, besides the org, which has to be up to date essentially because we're providing it as the way to call the methods. We're basically just saying: if we have provided a description in our custom resource, and the team description — the representation we get back from GitHub — is nil, or the team description does not equal the one that we have, we want up to date to be false. And then we do that same check for our privacy field. Based on that, we return whether the resource is up to date. If we return false there, then we're once again going to call the Update method, which is going to use this Edit call here to edit our team and set it to the fields that we want. Once again, you can provide your connection details; we actually don't need to do that here, because we don't have any. Lastly, when you delete a Kubernetes resource, if it has a finalizer on it — which lives up at the top in the metadata, kind of like an annotation or label — it's going to hang around, even though you've deleted it, until that finalizer gets removed. So Crossplane is going to go ahead — once again, you can override this — and put a managed resource finalizer on your resources. And when you delete them in the Kubernetes API, it's basically going to keep them around until it can guarantee that the resource was deleted externally.
And so what it's going to do is check for the deletion timestamp on your Kubernetes resource, which gets set when you issue your kubectl delete command, and then call the Delete method that you've defined in your controller — in this case, DeleteTeamBySlug. If that returns successfully, then it's going to say, okay, I can now see that the resource no longer exists, and I'm going to remove that finalizer, which then allows Kubernetes to garbage collect the resource. So I think that was a pretty good overview of what you can do here with the controller. We've got a pretty full working controller here with not too many lines of code, and a lot of it boilerplate that we just populated ourselves. So what do you say we actually change a little bit of the behavior here and rerun it, and we can see how this helps us and how this continuous reconciliation works. One of the things I pointed out earlier is that we have this node ID field, but we don't actually set the node ID anywhere in our controller. And if we look at the YAML output here, you'll see that our atProvider is just empty. I'd like to know the node ID, just because it sounds interesting. To populate our status, we usually do that in the Observe method. So we can do that by just saying: okay, if we got here, our team exists, so we'll say cr.Spec.AtProvider — sorry, that's gonna be cr.Status.AtProvider — and our node ID is equal to that of the team, team.NodeID, if I can actually hit caps lock there. All right, let's actually do a check on that and make sure it's not nil — if it does not equal nil, then we'll go ahead and set it. And I'm actually just gonna go ahead and show some of the methods that we defined to make this development experience really nice for you. When you clone from that provider template, we have a Makefile in here with some really simple targets.
I like to do this make run, which is gonna regenerate your CRDs, apply them to the cluster — in this case we have no changes — and start your controller. So if I hop back over here, I'll go ahead and stop my controller and do make run again. Here we see it's using go generate, and it's just using my kubeconfig to talk to the cluster — so we actually have cluster admin here, which is a little bit different than when we showed installing a provider earlier. It looks like our external resource is up to date, which we would expect. And let's see — now we have that node ID present in our atProvider. So you can surface any information that the API gives you back in this field. But one of the things I wanna do is show how, when you modify a resource outside of your source of truth — which in this case is your Crossplane Kubernetes cluster — Crossplane is gonna make sure it drives you back to the specification that you've defined. Something we might see is someone going into a GitHub team and changing the privacy from secret to closed, which are the two values they give you; closed basically just means it's visible. So let's go ahead and hop over to our KubeCon NA org here and modify this a little bit, and see how Crossplane drives it back. This is a bit of a trivial example, but Stephen, have you seen any examples in your experience that are a little more powerful in demonstrating this functionality? Well, I think this is good just for compliance purposes, right? One of the biggest concerns we have is that people are making changes manually — they're adding users or changing permissions — and we wanna make sure that if that happens, it gets corrected pretty quickly. And thanks to having things like events in Kubernetes, you can actually see what's happening, right? You can be notified.
So I think that's probably one of the best use cases for it: this constant making sure that your desired state is actually out there. For sure, I totally agree with that. And a nice thing is we're using the default wait here, which basically checks every minute to make sure that things are up to date. You can obviously configure that as you see fit. So if you said, I need to be more stringent, or I really only need to check this resource every so often — once a day, maybe — and see if it's been updated, that works for you as well. Obviously, if you modify the resource in the Kubernetes cluster, then immediate action is taken. But I'm gonna go ahead and set this to visible and save the changes, and we should see that this is now visible — you'll no longer see that secret. If we hop back to our controller here, our last status was external resource is up to date. But if we wait a few minutes — you can see our time here — we should be coming back around for a reconcile. It'll actually set that field back to secret, and we'll see that update. And while we're waiting on that, we can also do the opposite, right? Modify these fields in our resource and then see that propagated to the external resource, which we'll show momentarily. All right, so there we see that we got a successfully requested update of external resource. This extra information is just because we're printing out the struct for demo purposes. But we should see, if we hop back over to the org, that it's once again in secret mode. So it's basically making sure that you stay compliant. And this gets really powerful — this is one of the benefits of standardizing on the Kubernetes API — when you leverage other projects as well. A demo we like to show a lot is integrating with OPA, and one of the examples we've shown there is creating a database where you want to have a limit on the size of the database someone can create.
So you don't have a GCP Cloud SQL instance that's 500 gigs or something like that, costing you a lot of money in a development environment. You could create an Open Policy Agent policy that says only accept databases with a size of 20 gigs or less, and have those limits checked on your resources — because once you have everything defined as a Kubernetes resource, you can integrate with any project that uses the Kubernetes API. Another place we've seen this is backing up your infrastructure with a project like Velero. You can just save your infrastructure and restore it into a new cluster, and have Crossplane manage it continuously there as well. So let's try one other change here. I want to go into our team.yaml, and let's just change our description. Right now it's "our description", and we'll say "some other description", if I can type here. We'll save that, and I'll go back to the terminal. Like I said, when you change the spec in your Kubernetes resource, we're going to see immediate action taken on its behalf, because the controller is getting an event from the Kubernetes API server. So we'll apply our examples org team, and we should see that that immediately requests an update of the external resource. We'll quickly see that propagated — now it says some other description. And we can also manage the deletion of our resource. I'll do k delete team --all here, and you'll see that we successfully requested deletion of the external resource. It's actually going to reconcile again here in a moment, make sure that it's actually been deleted externally, and then allow the resource to be cleaned up. If we go and refresh here, we're going to find that this team no longer exists. So you could really do a lot of management — the GitHub API has a lot of different endpoints. One of the things that we've looked at doing for the Crossplane organization, where we frequently spin up new repositories with similar permissions:
We've talked about having a Crossplane cluster basically manage those for us, and then having a GitOps pipeline that applies some of our kubectl changes through it to create new repositories, update teams, update permissions, et cetera. If we go back over here, you'll see that our team has been deleted — no resources found for our team — so we've cleaned up all of our resources. So I just wanted to summarize here. We wanted to show you that, without a lot of code, you can have really full-featured infrastructure software. First, we define the APIs that generate the CRDs that get applied to your cluster. We take your infrastructure and we model it — whatever features and parameters you wanna manage, whatever things you wanna look at — and those automatically get generated into CRDs that you apply to your cluster. You can also have multiple providers to talk to on the backend with different kinds of credentials, if you wanna segregate which accounts can manage which parts of your infrastructure. So that's the first part: defining our API. Then we define the controller, and the controller has three main functions. One is to talk to the Kubernetes API and look for events for our type, our Team. The next is the connector, where it creates a client connection to GitHub, where it's going to talk to the API. And finally, we have our managed operations, right? We observe the external resource, and based on what happens to it, we either create, update, or delete the resource. These are the core parts of the controller. And you can see here, this is just a little over 200 lines, and we have a controller that's managing the entire lifecycle of an external resource, pretty much within 60 seconds of any change that it observes on the outside. So that's the end of our deep dive into this controller.
And Dan is going to talk about some other topics related to Crossplane. Okay, so now that we've built our provider out, I'd like to show how we can package it up, push it to a registry, and install it just like we did with provider-aws before we did this tutorial. We have some helpful commands here. To start off, we need to build the controller binary — which is what's gonna run in our pod and manage our CRDs — package that into an image, and push it. Then we're gonna push our actual package image, which has the metadata that instructs Crossplane how to install the CRDs and how to start the controller. So to start off, let's go ahead and make this version v0.0.1. These are just some helpful make commands to make it easier to go through this process: we're gonna build the Go binary, create the image, and then push it. Then we're gonna use the Crossplane CLI to push our specialized image for our provider, which has some information that Crossplane is going to look at to unpack the provider appropriately. And we're gonna wanna use this exact controller image that we've built, so I'll specify here that we want v0.0.1. Okay, switching over to our terminal, the first thing we need to do is make build, and you'll see that we're running the build there. While we're doing that, we can go ahead and go into the package directory and start the build of our other image, since it's just gonna have a reference to this one, which will exist after we push it. It looks like that build is still going, so let's go ahead and build our package image. We have some helpful commands in the Crossplane CLI here: we can say kubectl crossplane build provider, and since we're in this package directory, it's gonna look at the crossplane.yaml and know how to produce a package image.
All right, if we look, we should see that we now have this .xpkg file, which is basically a specialized tarball, an OCI image that can be pushed, and the Crossplane CLI knows how to push it and format it correctly. But before we do that, let's finish making our image here. That was a pretty quick image build, so I'll go ahead and push it; make image push, I believe, is the command we want. All right, it looks like that's been pushed up as hasheddan/kc-provider-github-controller, and we're referencing that in our provider package here. So we can now run kubectl crossplane push provider, and since we're in this package directory and there's only one .xpkg present, it knows to use that one. We just need to give it the tag we want: hasheddan/kc-provider-github with that version. And we should see that this is pushed successfully.

We also have a kind cluster already running here, so let me scoot this over a bit and we can look at what we have. I already installed Crossplane, so we have the Crossplane pods running; let me go ahead and close up this window so it's a little easier to see. So we have our Crossplane pods running, and just like we did before the tutorial to install provider-aws, we now want to do the same thing with provider-github. We can run kubectl crossplane install provider and specify the image I just pushed, hasheddan/kc-provider-github, and we want v0.0.1. All right, it looks like that was created. It'll take just a moment to get installed; you'll see it doesn't have a status yet, but if we keep watching, we'll see that it comes available, so it's installed but not healthy yet. We check again, and it looks like it is healthy now, and we can see that Crossplane has looked at the contents of our package and gone ahead and started our controller for us.
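The install command here is a convenience for creating a Provider object in the cluster; applying something like the following by hand should have the same effect. The package name and image are assumed from the demo:

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: kc-provider-github
spec:
  # the package image pushed to the registry in the previous step
  package: hasheddan/kc-provider-github:v0.0.1
```

Crossplane's package manager watches for this object, pulls the package, installs its CRDs, and starts the controller, which is why the status flips from not-yet-healthy to healthy after a moment.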
So we see our kc-provider-github controller running here, and we can also see that we've installed the CRDs that were specified in our package: our ProviderConfig, ProviderConfigUsage, and our Team CRD. Let me once again create my configuration, and let me pull up our GitHub org again; I want to make sure the team we previously created has been cleaned up. It looks like it has, so we don't have any team here. So once again I'm going to create my ProviderConfig, which has my access token in it; that's in secret config.yaml. All right, we've created my ProviderConfig, and this is the same workflow we were going through in development, but now we've been able to install this kc-provider-github package, which also allows other folks to consume this provider and lets us distribute it easily. I'm going to go ahead and create our team again; that was our examples org team. We should be able to kubectl get team and see that it is synced, and if we go back over to our GitHub org, we now have our team here again.

So packaging things up as a provider makes them a lot easier to install, and you can also have specifications for how to reproduce whole environments. Maybe you want an environment with GitHub, AWS, and GCP installed; you can easily reproduce that across Kubernetes clusters after you've installed Crossplane. All right, so we have our team present, we have our pods running, and we have our CRDs installed in the cluster. Now what we want to do is add more CRDs, update our controller, push a new version of it to our registry, install it, and have Crossplane update it in place without modifying our existing infrastructure. So we want this team to stick around, but we want to add new functionality as well. I've added a new type here, a Membership type, which allows us to specify an org, a team, and a user, and associate a specific user with the team we created in the last step, for example.
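A sketch of the ProviderConfig and its credentials Secret described here. The API group, secret name, and key are illustrative assumptions, not the demo's exact manifests:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-credentials
  namespace: crossplane-system
type: Opaque
stringData:
  token: <personal-access-token>   # placeholder; never commit a real token
---
apiVersion: github.kc.crossplane.io/v1alpha1   # hypothetical API group
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: github-credentials
      key: token
```

Separating credentials into ProviderConfigs is what allows the multiple-accounts setup mentioned earlier: each config can point at a different token, and each managed resource says which config it uses.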
I've also added a new controller to manage this, and what we want to do is publish a new controller image as well as a new package image. You'll see I've updated to v0.0.2 here, and we'll also want to make sure our Makefile commands are updated as well. So I'll go ahead and run those again; we need to rebuild our binary, and this will take just a moment. All right, it looks like that completed. Next we want to do our make image, and that should go pretty fast since we're using a lot of the same layers; I do have a little bit of latency on my side, it looks like, but it shouldn't take too long. Obviously it would be a good idea to cache some of these module dependencies in the future. Once our image finishes building, we can go ahead and do make push; make image push, I think I've forgotten that one twice now. All right, that's going to push it up to our registry with v0.0.2, and the last thing we want to do is rebuild our package image to point to that. I wish my network were a little faster, but it's almost done, it looks like. All right, let's go into that package directory and kubectl crossplane build provider. Once again we see it here; you'll see that the digest has changed, and we're going to push that up to the registry: push provider, and this time it's going to be hasheddan/kc-provider-github at v0.0.2, so we match that controller image. We should see that pushed up successfully. And now what we want to do is actually update the provider that exists in the cluster. So let's look at our provider. We have it here, and it's using that v0.0.1 image, so let's go ahead and edit that and bump it to v0.0.2.
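The in-place upgrade here amounts to changing one field on the Provider object, for example via kubectl edit; Crossplane then rolls the controller deployment over to the new image. A sketch, with names assumed from the demo:

```yaml
# kubectl edit provider.pkg.crossplane.io kc-provider-github
spec:
  package: hasheddan/kc-provider-github:v0.0.2   # bumped from v0.0.1
```

Because the existing CRDs and managed resources stay in place, the Team created earlier survives the upgrade untouched.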
All right, if we get that provider package again, we should see that it's healthy, and we should see that the controller has been switched out here: you'll see the old one is terminating, and we'll see this new digest actually present, matching our image digest. Our new controller is running, and we should have all the same CRDs plus our Membership one. There we'll see our Membership CRD, and we should also still see that our Team exists, and that's true: our team exists, it's still synced, we didn't delete it or clean it up, and we've automatically upgraded our provider here to pick up our new versions.

Okay, so the last thing we want to look at today is how we can compose these managed resources we've created for this provider. We're going to show a pretty simple example, but there's another type of package in Crossplane besides a provider, and that's the Configuration package type. You can see here that you can declare dependencies on providers, and what a Configuration package does is combine different infrastructure primitives, in this case our Team and our Membership, into a higher-level object. This is kind of a trivial example, but you can imagine that if you were using GCP, AWS, or Azure and you wanted to define a VPC, put an RDS instance in it along with an EKS cluster, have those all wired up, and present that to users or developers within your organization as a simple abstraction, this could be really useful. The other nice thing about this is the ability to declare dependencies on providers: you can imagine that if you were creating a networked database-and-cluster abstraction, you could declare dependencies on three different providers and have them all installed. Then you create what we call a CompositeResourceDefinition, which declares the schema. This looks a lot like a CRD; in fact, it actually renders out a CRD. It's where you declare the abstract schema.
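A hedged sketch of what the Configuration package's crossplane.yaml might look like, including the dependency declaration described here; the package name and version constraints are assumptions, not the demo's exact file:

```yaml
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: source-control-paas
spec:
  crossplane:
    version: ">=v1.0.0"          # minimum Crossplane version to accept
  dependsOn:
    # providers Crossplane will install automatically if missing
    - provider: hasheddan/kc-provider-github
      version: ">=v0.0.2"
```

For the networked database-and-cluster example, the dependsOn list would simply grow to three providers, and installing the one Configuration would pull in all of them.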
So in this case, we're just going to say: give us the org and user, and we'll create a team and a membership for that user to the team on the backend. You can imagine that we could instead say: give us your database size and node-pool size for your cluster, and create a more complex abstraction there. Then we can have an arbitrary number of Compositions that satisfy a CompositeResourceDefinition, or XRD as we call it for short. These basically tell how an abstraction is satisfied. In this case, we're going to satisfy it with a GitHub team and a GitHub membership. In a more complex case you could have different cloud providers backing a single abstraction, or different configurations on a single cloud provider, on-prem, in-cluster, all these sorts of variations that can satisfy a definition, really allowing you to define your own platform and your own console for consuming resources. We can package these up just like we did a provider, push them to a registry, and install them. And when they're installed, they'll automatically do things like check the Crossplane version, make sure all dependencies are there, install dependencies if they're missing, et cetera.

So let's go ahead and do that. I'll go into the configuration directory, and you'll see this is a lot like how we packaged the provider. We'll do kubectl crossplane build configuration, and that'll build our configuration; once again, we see our .xpkg here. I called this our source-control platform as a service. So let's go ahead and also push that: we'll say kubectl crossplane push configuration, and we'll call it source-control-paas at v0.0.1.
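To make the XRD/Composition split concrete, here's a minimal sketch of the pair: the XRD declares the abstract org/user schema, and a Composition satisfies it with a Team and a Membership. Group names, kinds, and field paths are illustrative assumptions, not the demo's exact manifests:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xuserteams.example.org       # must be <plural>.<group>
spec:
  group: example.org
  names:
    kind: XUserTeam
    plural: xuserteams
  claimNames:                        # namespaced claim for developers
    kind: UserTeam
    plural: userteams
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                org:  {type: string}
                user: {type: string}
              required: [org, user]
---
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: userteam-github
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: XUserTeam
  resources:
    - name: team
      base:
        apiVersion: github.kc.crossplane.io/v1alpha1   # hypothetical group
        kind: Team
        spec:
          forProvider:
            description: a composed team               # default, not exposed
      patches:
        - fromFieldPath: spec.org
          toFieldPath: spec.forProvider.organization
    - name: membership
      base:
        apiVersion: github.kc.crossplane.io/v1alpha1
        kind: Membership
      patches:
        - fromFieldPath: spec.org
          toFieldPath: spec.forProvider.organization
        - fromFieldPath: spec.user
          toFieldPath: spec.forProvider.user
```

Alternative Compositions, say one backed by a different source-control provider, could target the same compositeTypeRef, which is how multiple backends can satisfy a single abstraction.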
So once again, this is going to push to the registry, and once that completes, remember that we already have provider-github present here, so it's going to see that the provider has already been installed and won't have to do any extra installation; it'll just bring in these composite types we were talking about. So we can go ahead and kubectl crossplane install configuration, and I'm just going to copy-paste this in. Now we can do things like look at our configuration and see what version it's using; it's already installed and healthy, and what we should see is that the XRD and Composition we installed are now present in the cluster. The XRD actually creates other CRDs for us so that we can create instances of this abstract type. So we should see that there's a UserTeam CRD here, and I can now create instances of it.

So let's look at what an instance of that might look like; I've created an example here. We're going to create a UserTeam, we want it to be called user-team, we're saying we want it in this KubeCon NA org, and we want user stevendborrelli in it. So this is actually going to create a user-team team as well as add Steven to that team; behind the scenes, it's going to render out a Membership and a Team. Once again, kind of a trivial example, but it does show the power of this model. All right, I need to get out of this directory first; then kubectl apply -f on our examples org user-team file. All right, if we look at the rendered-out resources, it should take a minute for them to become ready, but we can look at the UserTeam itself, which is what the user interacts with. A developer who just wants a team with a user in it would only care about the status of this abstract resource, but being infrastructure-aware, we're going to look at the actual rendered-out resources. It looks like our Team has now become ready; let's see if our Membership has as well.
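For illustration, the developer-facing instance might look like the following YAML; the kind, group, and field names mirror the UserTeam example but are assumptions, and the org and user values are placeholders:

```yaml
apiVersion: example.org/v1alpha1   # hypothetical claim group
kind: UserTeam
metadata:
  name: user-team
  namespace: default
spec:
  org: kubecon-na          # placeholder GitHub org name
  user: stevendborrelli    # GitHub username to add to the team
```

This is the whole interface the developer sees: two fields in, and the machinery renders and reconciles the underlying Team and Membership managed resources.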
Yep, our Membership is also ready, and that will result in the abstract resource also becoming ready. If we go over and look at our KubeCon NA org, we should see that our user team is present here; you'll see that both Steven and I are in it, and we have the description that we defined. That's a good thing to point out: in the Composition you can have arbitrary mappings from the abstraction to these base resources. You can also have the resources reference each other; here we're saying, please use the same team that's composed with me, so you can resolve those references automatically. You can also set defaults: for instance, we're not exposing the description, we're just saying please always set it to "a composed team," and you could decide whether you want to expose more or less in the abstraction and make parts of it optional or not.

All right, so thanks for joining us today; it was definitely a blast to go through actually implementing a Kubernetes controller for Crossplane. Please feel free to join us in the Crossplane Slack; you can use this link here, slack.crossplane.io, to set up an account, and we'd love to chat with you. We'll also be in the chat here for the presentation, so if you have any questions or thoughts or want to talk to us afterwards, please let us know, and we'll stick around for any questions you may have. Thanks for joining us today. Yes, thank you for joining us.