 Let's go ahead and get started. This is a fairly jam-packed presentation, so I want to make sure to jump into the content quickly. I do promise to do some live demos at the end, so hopefully you all can finish out your KubeCon with watching me break some stuff. But I appreciate you all sticking around to the very end. I hope you all have had a great week. Today, we're going to be talking about content-addressable CRDs, and that's really the solution to a problem. And that problem, generically, is that we aren't able to establish uniqueness of types across Kubernetes clusters. This has become more of an issue as we've moved to what you might call a cluster as cattle model, right, where we have lots and lots of Kubernetes clusters. And also, when we have new models of Kubernetes, I don't know if you all have seen any of the talks or demos around KCP this week, but that's an example of a newer model of Kubernetes that forces us to think about a larger scope than just a single physical cluster. So I'd generally like to start off by giving you a bit of an idea of where we're going to be going in this talk, so you can stand up and go catch your flight if this doesn't look good to you. But what we're going to start with is just generally talking about how you can extend the Kubernetes API surface. This is likely an area that lots of you folks are already familiar with, but we want to set that ground level for us to be able to build on top of. Then we're going to talk about two aspects of programming generically that are important that we don't really have in Kubernetes today. And that's internal versus user-facing types and programming in an abstract cluster sense. And I know those may not mean anything quite yet, so we'll make sure to define those terms as we go along. And then we'll come up with a solution for these types spread across many clusters and round out with a demo. 
So starting off with extending the Kubernetes API surface, there are a couple different ways to extend the Kubernetes API. But the Kubernetes API comes with a set of built-in types. You can really just think about these as endpoints if you want to think about a REST API server, which is actually what the Kubernetes API server is. And these are the ones you know and love, like pods, deployments, services, et cetera. Things that are there to allow you to do the core job of what Kubernetes was created for, which is container orchestration. I did mention KCP earlier and some other projects that are similar. Usually, all of these built-in types really are built in, but that doesn't actually have to be the case. And thinking in that mindset, I think, actually allows us to look at the Kubernetes API in a different way. So you can add additional types to the Kubernetes API at runtime, so we're not actually modifying the code base of the API server, right? We're adding these new types as we go along. And there's two main ways this can be done. First is API server aggregation. We're basically saying, hey, Kubernetes API server, I have another API server over here that can serve some additional endpoints, so just send requests for them my way. The next is custom resource definitions: a built-in CustomResourceDefinition type that allows you to add a new type, or you can think of it as a new endpoint, if you like. These are both valid ways to extend Kubernetes clusters. Today, we're going to be focusing in on custom resource definitions, or CRDs for short. Custom resource definitions are generally viewed as a little bit simpler to use, and they can also be integrated in more environments. Some hosted Kubernetes offerings don't allow you to do things like API server aggregation, so it's a non-starter for generic portability. 
So when we think about Kubernetes in the sense of types, we have to go back a few steps and think about, what is a type in general? Outside of Kubernetes, what does it mean to define a type? Well, at their base level, types just specify a structure for data. It's an ordering. It's a validation mechanism to say, hey, this data is organized into a certain structure, and we're going to call it by this name or this identifier. In the case of Kubernetes, when we have types that are either built in or we define at runtime, we do that via an OpenAPI v3 schema. Any given type, to be a type itself, has to be uniquely identifiable. If we're going to be able to say, hey, this type has this structure and can be validated in this way and also has this behavior attached to it, we have to be able to identify it relative to other types that exist. In the case of Kubernetes, we identify types by group, version, and kind, or you'll frequently hear it referred to as GVK. All types exist in the context of a scope, so they have to be defined in some sort of namespace, which impacts what their identifier is. There's a key insight here, though: that doesn't necessarily mean they can only be instantiated in that namespace. Whether they're user-facing or internal determines whether they can be instantiated outside of the scope in which they're defined. In the context of a single Kubernetes cluster, every type is defined at the cluster scope. It might be able to be instantiated at the namespace scope or at the cluster scope, but the type is actually defined at the cluster scope. There are no namespaced CRDs. In many systems, types are composable, so they don't just exist as primitives. You can use existing types or types you've defined to build up higher-level types and then potentially even take those higher-level types and build them up further. 
Kubernetes doesn't inherently offer type composition, but as we're going to see in this presentation, projects like Crossplane allow you to bring composition into your cluster. The last thing here is that generally, and specifically for the purposes of this presentation, we are going to say that any types that are not composed of other types are referred to as primitives. That's whether they're built-in or user-defined. If we're moving towards a KCP model or something like that, you could view all types as user-defined, perhaps. At the end of each section, I'm going to do a quick check-in just to make sure that we've covered the high-level points. So Kubernetes includes some built-in types, which can vary depending on implementation. Kubernetes allows you to introduce additional types at runtime via a few mechanisms. For the purposes of this talk, we're just focusing in on custom resource definitions. Both built-in types and types defined via CRDs we're defining as primitive types: they're not composing other types, they're just defining a structure. And Kubernetes does not offer type composition out of the box, but we're going to bring it in and see how that impacts how we program the Kubernetes API. Before we continue on, if we're going to add a programming-language-type construct to the Kubernetes API and be able to use that to do more powerful things, we have to start thinking about the features that Kubernetes offers to us to implement this kind of functionality. In other words, if we're going to add programming-like features to the Kubernetes API, we need to start treating it like a programming language. Or, in other words, with great power comes great responsibility. Now, I'm not saying that we want to take the Kubernetes API and make it into a Turing-complete programming language. That would be very complex, very hard to observe, very hard to debug, so we don't want to do that. 
But we are bringing in some new functionality that adds a lot of capabilities to the Kubernetes API. And so we need to consider how we're going to manage that additional complexity, or additional power, here. So one of the things we're going to need is internal and user-facing APIs: we may want to define things that are used to compose higher-level types but that we don't actually want to expose as part of the interface to users. That means being able to separate which types are internal and which are user-facing. So we're going to make an analogy. This isn't a perfect analogy, but I think it's useful because a lot of folks in the cloud-native ecosystem are also engineers or engineering adjacent. We're going to talk about another place where we can define types. And I've kind of already spoiled the surprise here, but that would be programming languages, right? We define types all the time in programming languages. They may be statically typed or dynamically typed; there's all sorts of different type systems that exist. So how do we define types in a programming language? Well, usually via some sort of built-in mechanism that offers the ability to define new types. And this sounds pretty familiar, right? We just talked about how custom resource definitions allow us, as a built-in type, to define additional types. So what does a very simple type declaration look like in Go, for example? Here we're defining a MyType type, and we're just basing it on an integer. But this is a brand new, distinct type, right? It is specifying how the data is structured and organized, but we might define additional behavior that's attached to this type, and we may put it in a scope, for instance, in the stuff package. So we're defining a new user-defined primitive here. The stuff here, the package, is the scope in which we're defining this type. 
And we're using a built-in type to be able to do that, and we're providing an identifier to be able to say, hey, when this type gets instantiated, it needs to adhere to these sorts of restrictions, or it's not valid. You'll see I have an asterisk on the identifier there. We're going to talk about why that's not the full identifier here in just a moment. So if we wanted to define a user-defined primitive type in Kubernetes, like I said earlier, we would use a custom resource definition, which looks something like this, where we give some metadata about how the type can be identified. And what you don't see here is the schema, the organization of data, the validation that has to take place if someone wants to instantiate or create one of these types in our cluster. Once again, as we said earlier, custom resource definitions are always defined at the global or cluster scope within a single Kubernetes cluster. We can also define higher-level types, or composite types, in programming languages. Once again, we have a scope. We have an identifier. We're using a user-defined primitive, the MyType we defined earlier, as well as a built-in primitive, and composing them into a higher-level thing called HigherLevelType here. So together, those make up a composite type. Like I said earlier, Kubernetes does not offer the ability to compose types natively. But when we add Crossplane, we get the functionality of something called compositions. And compositions, kind of like a higher-level type in a programming language, allow you to say, hey, here's a new type that exists, and it composes these primitive types or potentially other composed types. In this case, we're looking at an example of a composition that composes an RDSInstance, which would be a primitive that was introduced, as well as potentially some other ones that we're not showing here. But we're defining some metadata for how this is identified. 
And then we're listing the types that it composes into a higher-level data structure. Lastly, programming languages also give us the ability to define abstractions. In Go, we use interfaces to be able to do this. So here, we're saying there's an abstract type that is being defined by an interface. And here, we're just specifying the behavior that can be present for this abstract type. And with the behavior, you have these different methods that say, hey, this is the signature for how you can invoke this in an imperative fashion, and here's some data that can be passed into it. So once again, we have the scope, the identifier. We're defining some data that comes through the interface and eventually passes to a concrete implementation, which is some other type that we've defined, potentially that one that was on the last slide for a composed type. Mapping this to Kubernetes, once again, Crossplane introduces the concept of a composite resource definition, or XRD. It's very similar to a CRD, a custom resource definition, and the structure actually looks quite similar. But you'll see here that we actually have a cluster-scoped variant, which is under the names here, and a namespace-scoped variant, which is under the claim names. And these are two separate types. And individually, they don't actually have any behavior or implementation attached to them. They have to be matched to a composition, just like an interface in a programming language uses something like dynamic dispatch to be able to use an actual implementation, a struct that implements the behavior. So we have all these nice features now. How do we actually implement types that are internal or user-facing? Well, let's lay out some ground rules to start off. Both our primitive and composite types can be internal or user-facing. 
In a program, we might want to say, yeah, we have some higher-level types, not just primitives, that are introduced, but we don't actually want to expose them as part of a library API or something like that. They're just internal. They're helping us do the job that needs to be abstracted away from the consumer of the package. Programming languages allow us to do that by choosing which types are exported, or made public, or made part of our API surface. And this allows us to divorce the implementation of behavior from the interface. You interact all the time with packages or libraries or things like that. They give you really robust functionality, but they don't actually expose all the details to you. That's actually the beauty of having libraries and having portable code that you can move around. And we want that same sort of functionality in Kubernetes. So Kubernetes does not offer the ability to say, hey, this is internal and this is exposed, where internal might look something like, hey, this is just used for composing another higher-level type. But we can bring that functionality into a Kubernetes cluster via convention. And you could actually enforce this convention with webhooks or something like that if you wanted to. And that convention, somewhat controversially to some folks, is that all user-facing types are namespace scoped. So not the type definition. The type definition still exists at the cluster scope. But they can only be instantiated at the namespace scope. And types that are not offered at the namespace scope are not user-facing. We're not expecting users to create those. They're more like intermediate types used to surface that higher-level one to the user in a namespace. So in our interface earlier, we showed that we use capitalization, which is Go's syntax for saying, hey, this type is exported. It's available to folks outside of the stuff package. And in a composite resource definition, we define claims. 
And claims are saying, yes, please expose this at the namespace scope, thus making it user-facing. So you'll see later on in this presentation some examples of internal types that are used to build up a higher-level thing that we actually want to expose to users. And only that last level is going to be exposed at the namespace scope. So checking in again: we're adding some additional functionality to Kubernetes, something that's somewhat similar to how programming languages work. And if we're going to do that, we need to reevaluate the features that it offers. We have enumerated how programming languages can allow you to define types that are accessible within the scope that they're defined in, as well as outside of it. And we've also noticed that Kubernetes gives us the ability to expose types at the cluster or namespace scope, which is a differentiation. All types are defined at the cluster scope, but we can pick where they are instantiated. And using the convention of only allowing user-facing types to be instantiated at the namespace scope gives us that separation to be able to define what's user-facing and what's internal. So with that convention, we've accomplished one of the goals of what we want in programming the Kubernetes API. But there's a bigger issue that's really more important and even more relevant for this talk. Just being able to define internal versus user-facing types gives us the ability to do more things. But really, the problem we're trying to solve is that outside of a single physical cluster, our identifiers of types in a Kubernetes cluster are mostly meaningless. What I mean by that is, if you put a manifest in a Git repo and you say, hey, everyone on Twitter, come apply this to your cluster, hopefully no one would go do that. But someone might. And a manifest that exists in a Git repo doesn't exist in any cluster. 
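Tying the last few slides together, a trimmed XRD showing the two variants being described (group and kind names are illustrative, not from the talk): the cluster-scoped composite under names is the internal type, and the namespace-scoped claim under claimNames is the user-facing one.

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xpostgresqlinstances.example.org
spec:
  group: example.org
  names:                # cluster-scoped composite: internal
    kind: XPostgreSQLInstance
    plural: xpostgresqlinstances
  claimNames:           # namespace-scoped claim: user-facing
    kind: PostgreSQLInstance
    plural: postgresqlinstances
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
```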
So how can you guarantee that a type identifier that is unique to the existence of type definitions within a single cluster means the same thing when you apply it to two different ones? Another way of looking at this is that a manifest in a Git repo only exists in the abstract cluster. And if we think about it in that sense, as this global abstract cluster, we realize that we have to be able to uniquely identify that type in the context of all clusters. So let's figure out how we're going to do that. I showed this earlier. This was one of the types we defined in Go, and it had the asterisk on the identifier. And that's because HigherLevelType is not actually the full identifier of this type. The identifier of this type, if you were using Go modules or something like that, would be a DNS name and a hash of the content of the module that it came from. And then it would also be identified, if you literally looked at the symbol table in the binary, by the package that it came from. So that means that if you go to my repository and you clone it and you build the project, it's either going to fail because it can't find that package, or it's going to build with the exact same content that I defined it should. So we can reproduce that behavior in different places, on different machines where we're compiling. We don't have that same ability with types in a Kubernetes cluster. So we already went through this a little bit, but the GVK is unique within a single cluster. You can't have conflicts at that level, but GVKs are not unique in the context of the abstract cluster. Another way to say what I've been saying over the past couple of slides is that types are not portable. They can't be moved across clusters with a guarantee that the same syntax has the same semantics. And so this causes some issues. And if you're Qui-Gon Jinn, you might say, GVKs will do fine. And I would say, no, they won't. 
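To make the Go modules point concrete: a module's identity in go.sum is its path, its version, and a hash of its content, so a build either fetches exactly those bytes or fails. The module path and hash below are placeholders, not a real entry:

```
github.com/example/stuff v1.2.3 h1:<base64 digest of the module content>
```

The DNS-based path is the name; the hash is the content address that makes the name trustworthy.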
Before we actually address this problem, though, we need to address a key fact of types. And that's that the data structure alone is not enough to separate types from each other. You could have two types with the exact same data structure, but they are not the same. And why are they not the same? Because we're able to attach behavior to types. We saw this with some methods earlier that were defined on an interface. When we're using an imperative programming language, we attach behavior to types and we call it explicitly. We instantiate the type and we say, create. Or we instantiate the type and say, do something. In a declarative programming model, which is what we're doing with the Kubernetes API, we provide the system with enough data to actually call the methods for us. So in the Crossplane project, we frequently think about a reconciliation loop not in terms of just a constant loop that goes and does a bunch of things. We think of it in terms of CRUD operations: create, update, delete. And really, when you're authoring a reconciler using some of our shared libraries, you actually just define those create, update, delete operations, because the rest of the logic that invokes that behavior is generic. But in both of these environments, you are attaching the behavior of the type to its definition. And the key thing is that even if a type literally has the same structure, the same OpenAPI v3 schema, its semantics are dictated by the controllers that are reconciling it. So we need to actually tie the behavior of creating a type in a Kubernetes cluster to the definition of its structure. So this is just an example of how we might attach some behavior to a type in a programming language. We define an operation with a signature here. We might provide data via arguments. And there might also be data that's inherent to the type. Like I said, in the context of a declarative system, we're actually just providing all that data up front. 
We say, here's the thing that I want to happen. You, system, perform the operations and make sure that it looks as I've specified here. So we said this causes some issues. The first one is potentially pretty straightforward, and that's opaque type instantiation. When I create an instance of a type in any cluster, I don't know whether a type by that identifier exists. And that's actually a great case, because that's going to get rejected by the API server. If I try to create that RDSInstance type that we saw earlier, and there's no type by that GVK in the cluster, it's just going to fail. And that's actually a pretty good outcome for us. Maybe not the best, but it is a good outcome. More critically, though, it could succeed but do something I don't expect at all. When we create an RDSInstance using the AWS provider in Crossplane, we expect that it will provision an RDS instance on AWS. But it could also do literally anything else. It could delete all your infrastructure or delete all the pods in your cluster or something like that. So we need to be able to actually verify that the behavior attached to the type that we're instantiating is the one that we want. So here, in this case, is the example of opaque type instantiation. The GVK is called out here in the apiVersion and kind fields. Essentially, what we're saying is, in the abstract cluster, this doesn't tell us anything. The second issue that we run into is opaque type composition. If we're composing things into a higher-level type, like the composition of the RDSInstance we saw earlier, we don't know whether the types associated with the GVKs in the composition will match the types associated with them in the user's cluster. 
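As a sketch, a manifest like this carries nothing but the GVK as its type identifier; from the manifest alone there is no way to tell which controller, if any, will reconcile it in a given cluster (spec fields trimmed for brevity):

```yaml
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance        # the GVK is the whole type identifier here
metadata:
  name: example-db
spec:
  forProvider:
    engine: postgres
```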
So we want to be able to say, hey, if we're providing a package of abstractions to you, we want to know that the primitives we are composing are what we tested with, what we said they will do, because the GVK could have gotten there any other way. So here, once again, we have an example of a composition. We're composing the RDSInstance here, but we don't actually know what that RDSInstance is. So checking in again, we haven't solved the problem. We've just enumerated all of these issues. In order for Kubernetes manifests to be portable, their type definitions cannot be tied to a single physical cluster. We said the definition of a type includes both the structure of its data, which we traditionally think of, and also the definition of its behavior. We've said there are multiple levels of composition within a type, and that their dependencies must be verifiable. That's what we were talking about: we need to make sure that the RDSInstance that we're assuming is present in the cluster is the one that we expected when we authored the composition. So now we're going to solve the problem, right? And we're going to do that by creating globally unique types, or types that are unique within the abstract cluster. And we don't want to redo a lot of work that's already been done before, so let's look and see where we can find some examples of global uniqueness. One of the first places, and we've talked about it ad nauseam at this point, is programming languages, because we need to be able to compile a program and have it be reproducible from just the contents of a Git repo or some sort of source control. Another place where we see it frequently, especially in the cloud native ecosystem, is with OCI images. And OCI images we need to be able to identify uniquely because they're doing a pretty sensitive thing, right? They're running arbitrary behavior on our compute. 
So we need to make sure that when we actually run a container image, it's going to be the one that we expect, and it's not going to mine Bitcoin or do whatever your favorite malicious thing is. OCI images solve this problem with what we call content addressability. And the simplest definition of content addressability is that we ask for something by what it is, not by what it's named. An example of asking for something by what it's named would be a DNS name with no verification of the content, or a tag on OCI images. When we ask by what it is, we're taking a cryptographically computed digest of the contents of something and saying that's how we're going to request it. And that also frees us up to say we don't actually care where we get it from, as long as it comes from somewhere and we can calculate and verify that the digest matches. So we know that what we get is what we asked for. So if we wanted to apply this to Kubernetes types, which one of these systems would we leverage? Well, OCI registries are already ubiquitous. You can probably get one from your favorite cloud provider. You probably already have one internally in your organization if you're of any sufficient size. And if you don't have any of those things, you can actually run one on a Raspberry Pi in your basement. OCI also has a thriving ecosystem, so there's lots of tooling present, and it's actually growing all the time. There are a few new projects that hit 1.0 or gained maturity around this KubeCon in the OCI ecosystem. So we can leverage all that tooling and all the work that folks are doing. It's also constantly improving. I know the GKE team released something about image streaming at some point this year, which is a pretty cool concept. 
Image signing has been a hot topic in the sigstore community and broadly in the Kubernetes ecosystem, something we've actually been working on for our packages in the Crossplane ecosystem as well. So this seems like a pretty good solution, right? Folks already know how to use it. So a few years ago in the Crossplane community, we said, what if we take our Kubernetes types and just put them in OCI images? Then we can push them around, we have a good distribution story, and they're content addressable. And that could work fairly well. So let's look at packages as a reference for how we can achieve this global uniqueness for types. Crossplane packages are referred to as xpkgs. There's a specification in the Crossplane documentation for what this means. It's basically a superset of the specification for OCI images, so additional constraints on OCI images. They come in two different flavors. Providers bundle a bunch of CRDs, as YAML, into an image, and they also have the controller. So you can literally docker run a provider if you want to, or install it and run it as a pod. Crossplane knows which layers to look at to get the CRDs out. And when it pulls down a provider, it cracks it open, applies those CRDs, and then runs the controller for you. All the same image, though. We also have configurations. Configurations are how you introduce compositions and XRDs to your cluster. So they basically just have a bunch of YAML in an OCI image. They're not runnable in any form or fashion. But bundling the primitive types, those types defined via CRDs and included in providers, with their controllers means that we have one digest, the digest of the image manifest, that uniquely identifies a type and the behavior associated with it. 
And another key point is that when these types are installed in a Kubernetes cluster via Crossplane, Crossplane will verify that only one controller, one implementation, can be reconciling a type at a time. So you can't just bring along another controller, install it via Crossplane, and have it also define behavior for the type, right? The manifest is defining the only behavior that is applied to the type that's in the same package as it. So we have the content-addressable image manifest and the GVK. Another restriction here is that there's only one definition of a given GVK within a package, which might be expected since they're going into the same cluster. So let's look at the two problems that we defined and look at our solutions. Solution number one is verified type instantiation. This is not something we force upon users, but something you can do with something like an admission webhook. You can supply additional information about the digest of the package from which a type came and validate that digest when you actually create the manifest. So here's an example of putting a digest in a package reference on a manifest that gets applied to a cluster. An admission webhook can very easily check and see, hey, is this package already installed? If so, I can verify that that type came from that package, and therefore this instance should behave as the person who specified this annotation would expect. Which means that you could put a manifest in a Git repo and actually have guarantees around what it means when someone applies it to their cluster: it's either gonna fail, or it's gonna do what you expected. The second solution is verified type composition. This is something that's built natively into Crossplane packages via support for dependencies. Crossplane will not install types into a cluster until the dependencies of the package that came in are satisfied. 
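A sketch of what that might look like: the manifest pins the package digest, and a hypothetical admission webhook rejects the create unless the type in the cluster actually came from that package. The annotation key here is made up for illustration, not a standard Crossplane annotation, and the digest is elided:

```yaml
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: example-db
  annotations:
    # Hypothetical key: pins the package (and thus the behavior)
    # this type must have been installed from.
    example.org/package-digest: registry.upbound.io/crossplane/provider-aws@sha256:...
spec:
  forProvider:
    engine: postgres
```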
So here we have an example of a package, which we're actually gonna show installing in just a moment, and it has some references to dependencies as well as some constraints on them. Now, in this specific case, we're using tags, so we don't have full content addressability. We could have someone intercepting our DNS resolution and giving us the wrong thing back. So this is not actually achieving it fully, but the reason why we do this is because a lot of times humans are authoring these, and it's kind of hard for humans to work with digests. But we don't wanna sacrifice that functionality just because humans have some difficulty with stuff. So our problem is that humans are lazy, or perhaps we're incapable sometimes, but the solution is that we can ask for things by name and then verify them by their content. So before we actually do the thing that can bring malicious behavior into a cluster, we'll verify that the content is what we expected. In Crossplane, we do this with package revisions, where you can set a manual activation policy. That means that when you install a package, Crossplane will pull it down, crack it open, look at it, and say, okay, these are all the types that are gonna get installed, and I'm gonna run the controller image that is associated with them, and here's the digest. But it won't actually do it yet, right? It'll wait and say, hey, here's a revision, it has this digest, it has a list of all the types that will be installed, but it doesn't actually proceed until you say, yes, activate: you flip a field and you say this revision is now active. So you see the digest, you say this looks good. This is now active and running, and ostensibly it's doing what you asked for. Then you wanna upgrade to a new version of a package. So you say, I now want provider-aws v0.19. You still have the old revision in play, but you've got a new revision that's been created but not activated yet. So you can once again say, okay, what's the digest here? 
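Concretely, that policy is a field on the package install resource: with Manual, new revisions sit inactive until someone inspects the digest and flips them (the registry path and version here are illustrative).

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: registry.upbound.io/crossplane/provider-aws:v0.18.2
  # New revisions are created but not activated until a human (or
  # automation) inspects the digest and approves them.
  revisionActivationPolicy: Manual
```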
What types have changed? What controller is gonna be run? And if that all looks good, then you can say: I'm gonna activate that one. Crossplane will automatically deactivate the other one, migrate types to the new one, shut down the old controllers, and start up the new ones. All right, so we've solved our problem. Let's actually see it in action, but before we go to that, let's check in. We can take advantage of the content addressability of the OCI ecosystem by packaging our content in OCI images. Packages allow us to expand the definition of types defined via CRDs to include their associated behavior. Package revisions allow us to present a human-friendly interface, where we can use something like semantic versions without sacrificing verification using content addressability. And with that, let's get into the demo. I have one minute left, so this is gonna be a very fast demo. If you would like to follow along, I will post on Twitter right now the manifest for the description of what we're going through. So let me do that, and then I'll do it quickly. I don't know if anyone else has seen this around yet, but a bunch of demos have been failing recently because Docker Hub is rate limiting the KubeCon network. So that's been a lot of fun, but we're actually not using Docker Hub here, so ideally that's not gonna happen. All right, let's see if this seems to be correct here. Yeah, can y'all see my term? I can't actually see the screen. Does that look okay? Bigger, smaller? Good? All right, so I have a kind cluster running locally. So you can see we have crossplane and the crossplane RBAC manager going. So what we wanna do is install that type, and let me see if I can get my browser up here. Okay, it looks like that's working. Let's actually look at the type that we're installing here. If we go over to configurations, we can see we're gonna install this platform-ref-AWS. So it has two dependencies.
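The two mechanisms just walked through, semver-constrained dependencies and manually activated revisions, map onto crossplane's packaging APIs roughly as follows. The names and versions below are illustrative, and the field names reflect the package manager around the time of this talk:

```yaml
# Inside a Configuration package (crossplane.yaml): human-friendly,
# semver-constrained dependencies rather than raw digests.
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-ref-aws
spec:
  dependsOn:
    - provider: crossplane/provider-aws
      version: ">=v0.18.0"
    - provider: crossplane/provider-helm
      version: ">=v0.9.0"
---
# Installing a provider with a manual activation policy: crossplane pulls
# the package, records its digest and the types it would install on an
# inactive ProviderRevision, and waits for you to flip it to Active.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.18.0
  revisionActivationPolicy: Manual
```

Upgrading to v0.19 then just means bumping spec.package: a second, inactive revision appears alongside the active one, and marking it active is the step where crossplane migrates the types and swaps the controllers.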
That's actually the manifest we saw in the slides there: one dependency on provider AWS and one on provider Helm. It brings some XRDs and some compositions. So for instance, the EKS composition here is gonna take a bunch of the primitive types supplied by provider AWS and compose them into a higher-level thing that represents an EKS cluster. If you've ever created an EKS cluster, you know this is very painful. So now you can just have a very simple interface, and we'll create all of these things on the backside for you. So let me grab the image reference here and I'll go ahead and install this. Configuration, install. All right, that looks good. And we're gonna see a package get created... or actually, we're a little slow. So two packages are already present. So we have our platform-ref-AWS. It is not quite healthy yet; it's unknown, because we're waiting for our dependencies here. But you see we've already resolved one of our dependencies to provider AWS. And let's see if our other one has come along. Not quite yet. I was having some issues with the network earlier, so we'll see if we experience any. It looks like they are coming along. We also have our package revisions, one for each. So here's the digest, or the short digest, of the configuration package we installed. And here's the short digest of the provider AWS version we have installed. And you'll see our revisions are numbered as well. And we have a count of the dependencies that we have, but none of our dependencies are quite ready yet. All right, so it looks like those are taking a bit of time here. You can check and see: it looks like crossplane is still fine. Here we go. Now we have a provider Helm coming along as well. And once those are both ready, we'll actually see platform-ref-AWS say: okay, all of the types that I expected, and their associated behavior, are now present in the cluster, and so I can add my types.
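The install step performed in the demo boils down to applying a Configuration object pointing at the package image; the registry and tag here are illustrative:

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-ref-aws
spec:
  # OCI image reference for the configuration package. Crossplane resolves
  # it, records the digest on a ConfigurationRevision, and holds the package
  # unhealthy until the declared dependencies (provider AWS, provider Helm)
  # are installed and healthy.
  package: registry.upbound.io/upbound/platform-ref-aws:v0.1.0
```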
And now when someone creates an instance of my EKS abstraction, or my cluster abstraction, it's gonna result in the creation of types with their associated behavior that are what I specified when I built this package, having no idea what cluster it would actually get installed into. So it looks like they're just about there. Provider AWS is pretty big, so it's just finishing up. And we can see that the controller image there is already running for provider Helm. So we snuck in a quick demo there. I know this was kind of a conceptual talk, but hopefully this end tied it together a little bit. And yeah, I'd love to answer any questions folks have, if that's permitted given how far over time we already are. But thanks for being here. Yeah, he's got a mic. I don't know if these are good questions, nobody's here, so I don't care. Two questions. If you're defining with a CRD, the name of the CRD kind of tells you what it does. You know, there's a general convention. So if you're including the behavior as part of the definition of the type, is there going to be a convention for how you will, with brevity, describe the behavior in a way that will be unique and informative to people who say, I want this type versus that type? And then my second question, which is founded in my ignorance of crossplane: can your goals be achieved without crossplane? Yeah, so speaking to the first question, I think the brevity would come from the org/repo:tag structure of OCI images. An image itself is a bundling of types and their representative behavior as well. So I guess that would be kind of the brief way to describe them: you are defining a bundle of them. Actually, something that's come up quite a bit in the crossplane community is: hey, can I just say I want one type, and just get that one and its associated behavior?
Even if we did something like that, let's just say just the RDS instance came in and not all of our AWS types, which you can actually see here: we now have lots and lots of types present in our cluster. If we just wanted one of those, we'd likely still say that the binary that's able to reconcile all of them is the associated behavior. So it'd still be the same digest of the package image uniquely identifying that single type. And then speaking to whether you need crossplane to be able to do this: in theory, no, right? If we just had OCI, if we put our types in OCI images or any content-addressable storage, that could also work. Some of the guarantees that the crossplane package manager gives you, namely that one where we only let one controller, or one set of controllers, be reconciling a group of types, are important to make this work, right? If you can just add arbitrary other controllers via other means, which, by the way, you can do if you're not adhering to convention, then you could in theory do this without crossplane, but you'd need to replicate that sort of behavior. Crossplane is a fairly generic project though, right? We view it as a framework, so there's kind of anything you can do with it, and it can be applied to a lot of different use cases. So one of the things that has become a little bit difficult about managing custom resource definitions is often version updates and migration. And so when you showed the example of, like, oh, crossplane will go from, I don't know, AWS 18 to AWS 19: how does it handle CRD version conversions, and just anything that's a composite? So let's say the deployment spec changed. Like, how would you handle that? We can start with that; I have a few more, but yeah. Cool, cool, yeah. No, that is a good call-out, right? It's not as simple as the diagram made it look there. So the first way we handle it today is we allow you to specify webhook configurations in your provider package as well.
So you can put conversion webhooks in there. We've also been starting to experiment, early on, with Common Expression Language, so CEL, and being able to bundle that in as well. So that's how it's handled today. And if we install the new types, sometimes we have run into cases with some providers where they don't have a good migration story, in which case the install may break, and you have to say: hey, delete the old CRD and have it replaced. We've also had folks develop tools where they say: since we have a big bundle of CRDs in a single package, if I don't actually have an instance of a CRD, just go ahead and delete it and let it get replaced with the new version. So not a silver bullet, but it does the job. Did you have... I think she had a couple more. Yeah. You briefly mentioned that you check to make sure that only one controller modifies a given kind in a cluster. Does that include the composite types inside? Let's say you have a deployment that's wrapped by a CRD. Like, how would you differentiate the scope of responsibilities between controllers, or how do you identify that? Yeah, so only crossplane will touch the composition process right now. So if you defined an XRD, well, for instance, you can't put an XRD into a provider package; we say you can only have CRDs, and those webhook configurations as well. And in the composition package, there's no actual behavior coming in. So if you do everything through the crossplane package manager, there's no way to install additional controllers that can manage the XRDs, for instance, the higher-level types. It would strictly be: let's say you define an XRD and then you put that in another composition, right? So you have nesting of abstractions here. It's still crossplane that would say: hey, okay, we'll take the highest level of abstraction and render out all of the things it composes.
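For reference, the conversion-webhook mechanism mentioned in the upgrade answer is the standard Kubernetes CRD conversion stanza; a provider package would ship something along these lines (the group, service name, namespace, and path are illustrative, and the schemas are collapsed for brevity):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: rdsinstances.database.aws.example.org
spec:
  group: database.aws.example.org
  names:
    kind: RDSInstance
    plural: rdsinstances
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: false
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
    - name: v1beta1
      served: true
      storage: true
      schema: {openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}}
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          name: provider-aws-webhook   # illustrative service run by the provider
          namespace: crossplane-system
          path: /convert
```

The API server calls the webhook whenever an object is read or written at a version other than the storage version, which is what lets a new package revision serve both the old and new shapes of a type during migration.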
If another one of those is an XRD, we just render it out again until we actually bottom out at a primitive type that was installed by a provider, and then the controllers for that will start reconciling it. Sweet, cool, thank you. Oh, we got one over here, I think. Sorry if I wasn't following along, but is any of the suggestion for this to put this upstream, or is this just a feature of crossplane? Yeah, so I would like to see this become a more standardized thing, whether that involves lots of people installing crossplane to be able to do this with arbitrary things, or it becomes some sort of more generic standard. I think I'd be fairly indifferent to it. I don't necessarily think it's something that, for instance, belongs in the Kubernetes API server. However, if someone did wanna have that discussion, I'd definitely be open to it. But we're kind of generically talking about the concept of using content-addressable types here, rather than saying this implementation specifically is the best example. It's just the one that we use, and it's been useful for us. So yes, if there is an opportunity to make this a generic standard, I think that would be great. So someone could use crossplane today to install their CRDs with their dependencies? Yep, exactly. And the providers, who are generally thought of as managing external infrastructure, that's typically how people think of crossplane, they're not actually restricted to that in any shape or fashion. So you could actually have a provider that did almost anything. It's kind of just actually packaging types with controllers. So if you wanted to use crossplane in that mode, that would be somewhat unconventional, but not restricted. Cool. I hope everyone has safe flights back to wherever you call home. And thanks for sticking around.