All right, so let's get this talk started here. You're all here to hear about Crossplane, so let's get the Crossplane maintainer track talk going. So my name is Jared. This is Philippe over here, and we are both very involved in the Crossplane project. This is always an interesting talk because we try to have something for everybody: we try to have some intro material, and we try to dive deeper into some topics. So, quick show of hands, who has never used Crossplane before, ever? Awesome, this slide is for you. What is Crossplane? Crossplane is your cloud native control plane. You can use it to manage all of your resources, compose those resources into higher-level abstractions, and then offer those to your developers so that they can provision infrastructure when they need it. Kubernetes is an awesome control plane, really, really good for containers, but Crossplane teaches it how to manage everything else, basically. Control planes are not a new concept, right? Cloud providers have been using control planes for years, so it's not a new concept, but now it's your turn to build your own control plane using Crossplane.

So we've been doing this for a bit. We've been around for a little over five years now, and we now have an official, formal proposal to graduate with the CNCF. One of the things that really hit me when we were pulling together the proposal is how many people have gotten involved in the project. Almost 2,000 people have contributed to the project in some way now, which is absolutely amazing. We're in the top 10% of all CNCF projects for people writing code: almost 700 people have written code for Crossplane, which just blows my mind. Almost 350 companies have contributed something to Crossplane, too. So this community is getting bigger, the project is mature, it's amazing to see this growth, and obviously lots of people are using it, right?
So it's in use in production, at scale, by lots of companies that you may recognize as well. Okay, so back to the basics for folks. Managed resources are a key concept in Crossplane. For every resource out there in the cloud — let's say AWS — there is a Crossplane resource to represent it. So for all the 900-plus services AWS has, there's a Crossplane managed resource for basically all of them. Each managed resource is exposed in the Kubernetes API as an object. So we've got this S3 bucket here, and it's got what you'd expect out of a Kubernetes object: it's got a spec, it's got config fields, it's got status, it's got events. It's a well-behaved Kubernetes object, and it represents an external resource that's out there in the real world for your control plane to manage.

So how does that work? Well, probably like you'd expect. You take a Kubernetes cluster, you put Crossplane in there, and you add some providers for Crossplane as well, like the AWS provider. The AWS provider has a series of controllers that watch for events from the API server, and when you create an object with the Kubernetes API server, probably through something like GitOps — say, creating an S3 bucket — the API server tells the Crossplane controllers to go and reconcile that with the real world. The S3 controller goes and talks to AWS over their API and makes that S3 bucket happen out there in the real world. So providers are what make Crossplane know about external resources and extend it further.

There's some big provider news as of late. At Upbound, we created a framework and tooling to basically take any Terraform provider out there and code-generate a Crossplane provider for it. We did that with a few of the popular providers, and then we've gone ahead and donated that tooling, the providers themselves, et cetera, to the Crossplane community.
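A managed resource like the S3 bucket described above might look roughly like this as YAML — the API group and fields here follow the Upjet-based AWS provider's conventions, but treat the exact names as illustrative and check your provider's documentation:

```yaml
# A managed resource: one S3 bucket, represented as a Kubernetes object.
# Group/version/kind and field names are illustrative.
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: example-bucket
spec:
  forProvider:
    region: us-east-1
  providerConfigRef:
    name: default          # which AWS credentials/config to use
```

Creating this object is what triggers the provider's S3 controller to go reconcile a real bucket in AWS.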
So those are all part of the upstream Crossplane project now, and along with that, there was an architecture change in those providers as well, which led to some fairly massive performance improvements. We're talking more than 90% improvements in CPU and memory usage, and sometimes something like a 1000x improvement in time-to-readiness — which maybe says something about how slow it was to begin with, but still, it's fast now. They have a lot of coverage of the resources, they're reliable, performant, ready for production, and they're part of the Crossplane community now. There are links there in the crossplane-contrib org to check out those providers, and I think those are definitely ready for folks to be using in production.

Where can you find all the Crossplane extensions? There's configurations, there's providers, there's functions — all this stuff that teaches Crossplane more tricks — and the place to find them is the Crossplane Marketplace. You can go there, find all the extensions to Crossplane, and see their documentation, how to use them, examples, all that stuff. So marketplace.upbound.io, that's where to go find all the extensions to Crossplane.

All right, so we've gotta go one level higher now. We talked about managed resources, and now it's time to talk about assembling those resources into higher-level abstractions and basically building your own platform API. A good example here is, as a platform engineer, composing together GKE, a node pool, network, subnet, all those things, and then offering that as a simple, higher-level cluster abstraction to your developers, so that they have a limited surface area of configuration — just a few configuration knobs that you expose for them — and they can get a cluster for their workloads when they need it.
All the complexity about how the cloud providers work, the configuration it takes, the policy, all that stuff is below the API line, and then your developers get a very simple experience on top of that. So this is what it looks like — this is a really important model in Crossplane to visualize. Your developer on the left, your app team, they have a simple abstraction, a claim, and that's their interface to be able to get infrastructure. Behind the API line, you as the platform engineer have defined your platform API with the composite resource definition. That's the shape of your API — what config knobs you want to expose, all that stuff — and then you write compositions to say: these are the resources I want to bring together, this is how I'm going to compose them, this is how configuration values flow, all that sort of stuff.

To make it more tangible, we as the platform engineer have created a Postgres abstraction, a database abstraction. So our engineer says, "I want a small Postgres, please," and behind the API line we have this composition that we've written — for AWS specifically, but it could be GCP, it could be Azure, it could be gold, silver, cheap, expensive, whatever. We have an AWS composition here, and for Postgres that happens to be an RDS instance, a DB parameter group, a security group, probably some networking stuff, et cetera. But to the developer, all they've asked for is one small Postgres, please, and the complexity of all the infrastructure and what it takes to do that is in your hands as a platform engineer, and exactly as you said it should happen, it goes and happens in the real world.

So this is what they look like. To define your own platform API, you use the composite resource definition in Crossplane, and there are basically two high-level things you need to do. You say: this is the type, this is the API object that I want to expose to my developers; and then you define the schema for it. What configuration knobs do you want to expose?
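The "one small Postgres, please" claim from that example could look something like this — the group, kind, and knob names are made up for illustration:

```yaml
# Hypothetical developer-facing claim for a small Postgres database.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-app-db
  namespace: team-a
spec:
  parameters:
    storageGB: 20          # the single knob the platform team exposed
  compositionSelector:
    matchLabels:
      provider: aws        # select the AWS composition at runtime
```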
What is the shape of your platform API? Then you write at least one composition, maybe a couple of them, so you have options at runtime. And this is something that has changed recently: in mainline Crossplane right now, to compose resources together, what you use is a pipeline of functions. There's a series of functions that execute, and those are what define how to compose the resources together, how to mutate their values, and how to end up at tangible resources in the real world.

So let's talk more about functions. We said it's a pipeline of simple functions to compose resources — that's true. The really important thing to notice here is that they are written in your language of choice. If you want to write a function to capture your unique platform logic in one place, you can do that in your language of choice, with your tools, all that stuff. And with this design we're trying to find a sweet spot between all-declarative, no code at all, and building an entire controller — writing all the reconciliation loop logic, all that stuff. We're trying to find a sweet spot in the middle, so you focus only on your platform's unique needs.

Also really important: you don't have to write code to do this. If you have unique needs and you want to write code to define your platform, great, write a function for it. Otherwise, there's a bunch of functions in the ecosystem now that you can use without writing the code yourself. This is what it looks like to write code. This is Go; we're programmatically, dynamically creating a bucket, an S3 bucket. We're assigning values to it, we can take an input, and so on. We can use code to build our infrastructure — unit tests, linters, validators, all that stuff. Great stuff. If you don't want to write code, then there are lots and lots of functions to use. So this one here is a template function.
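A function-pipeline composition of the kind described here might be sketched like this — the pipeline shape is the v1.14+ Composition API, while the function name and input kind follow function-patch-and-transform's published schema; the resource details are illustrative, so verify against the versions you run:

```yaml
# Sketch of a pipeline-mode Composition using one function step.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: postgres.aws.example.org
spec:
  compositeTypeRef:
    apiVersion: database.example.org/v1alpha1
    kind: XPostgreSQLInstance
  mode: Pipeline
  pipeline:
    - step: compose-resources
      functionRef:
        name: function-patch-and-transform
      input:
        apiVersion: pt.fn.crossplane.io/v1beta1
        kind: Resources
        resources:
          - name: rds-instance
            base:
              apiVersion: rds.aws.upbound.io/v1beta1
              kind: Instance
              spec:
                forProvider:
                  engine: postgres
```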
So in your composition, you can say: hey, run a function for templates, here's my template, make it happen. You could also do things like use CUE if you wanted to, or KCL — there's a bunch of different experiences popping up now with functions that mean you don't have to write the code yourself. You just get all these new experiences to define your platform in the way that you want to.

All right, this is the thing I was most looking forward to in this talk. All the folks that raised their hand earlier who have not used Crossplane, you won't understand this, but folks that have used Crossplane know it can be challenging to use sometimes. We have taken that to heart and are really trying to make Crossplane easier and faster to use. So, two huge improvements here. One, Crossplane is now more powerful and flexible than it's ever been before. You can literally do things that were impossible before in Crossplane. A year ago, you could not do all this stuff; now you can, and a whole world has opened up. You can do it in your language, with your tools of choice, and there are more functions appearing basically every week. So that ecosystem is growing — lots of things you can do with Crossplane now. And maybe more important is that when you're building your control plane with Crossplane, you can successfully get it running in production easier and faster than ever before. Basically, the concept is we took all this work that you have to do and shifted it left, so you can do it on your local laptop, rapidly iterate, and get it correct before you ever touch a live control plane. We're gonna see that in this demo in just a second with all this new Crossplane tooling. This stuff did not exist like six months ago, basically.
So there's all this tooling for the full lifecycle of your control plane: initializing a new Crossplane project, testing it locally, and when it's out there running in the real world, tracing through things, observability, all that stuff. These are all new tools that didn't exist six or nine months ago.

Okay, demo time. I am pumped about this demo. Let's see if it works well on this window here that is not very visible for me. All right, okay. So we are gonna go back in time a little bit, and we are going to build a platform with Crossplane the hard way. A year ago or so, I'm a platform engineer, and I wanna expose database-as-a-service to my engineers. So I've created this database abstraction for them. When they wanna create a database, they can say: cool, give me an Acme database and make it 100 gigabytes, please. That's the only config knob I've given them. Now, under the covers, what I did as a platform engineer to make that happen is create an XRD that basically says: here's the platform API for databases, here's the shape of it. You can ignore most of this stuff, but the key part is that I'm exposing this one storage knob that says how many gigabytes the developer wants. Then I write a composition, and I basically say: okay, this composition for a database — what that means here in my organization is a Cloud SQL database, it's gonna run Postgres, and the default will be 10 gigs for the disk, but that value the engineer gave for how much storage they want, I'll patch down into the Cloud SQL instance. So what the developer wanted will happen in the cloud. Cool, so let's do that. Let's go ahead, as a platform engineer, and apply my definition and my composition. They're on my control plane now. Cool, I'm gonna test this — does it work? Let's see if it actually works or not. So I will kubectl apply -f the claim — I'll create an instance of that claim, which, if I recall correctly, was 100 gigs or so.
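The XRD from the demo might look roughly like this — the kind names and group are illustrative stand-ins for whatever the "Acme database" abstraction actually used:

```yaml
# Sketch of a CompositeResourceDefinition exposing one storage knob.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xdatabases.acme.example.org
spec:
  group: acme.example.org
  names:
    kind: XDatabase
    plural: xdatabases
  claimNames:
    kind: Database
    plural: databases
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                storageGB:        # the one knob given to developers
                  type: integer
```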
Cool, 100 gigs, so I'm testing this. Is this database coming up? Does it work? Is it 100 gigs? Well, let's see. All right, so what I'm gonna do now is — actually, let's just look at stuff here. Yeah, okay, it's not ready yet. Here's the database object — that's the real thing in Google Cloud right now. It's not ready yet, but is it what we asked for it to be? Did my logic in my composition work? Let's see — oh man, it did not work. That stinks, 'cause I wanted a 100 gig database. I got a 10 gig one, and I had to go to a live control plane and spin up infrastructure in Google Cloud to test this thing. Can't there be an easier way to do this? A faster way? This is too hard; I don't like this.

Yes, there is a faster way. It's 2024, Crossplane v1.15 is out, and we're gonna do this the easy way now. All right, so that old-style patch-and-transform composition had a bug in it somewhere, and I had to go to the real world to find it. Nah — I'm going to use new Crossplane tooling to convert it. It's a little small; let me make it a little bit bigger, sorry folks. All right, I am going to use this Crossplane tooling to convert that old composition to a new function-based one. Automatically migrated — cool, it's done, it's there, it's ready to try. Now I'm gonna use local tooling to test this all on my local laptop. I'm gonna say: Crossplane, please render out that composition for me with these particular values, and let me know what happens. Crossplane runs it, all on my local laptop here. Ah, here's the problem: my default of 10 gigs is there, but then this silly patch that I typoed is also there — D-I-S-C size instead of D-I-S-K size. Oh, there's the bug right there, found it. Okay, let's fix it. So we go to our composition that's based on functions and we fix that now. So it's with the K, we save it, let's try it again.
So again, all on my local laptop here — I'm not touching the real world. We run it again; cool, 100 gigs, that looks like it works. That's awesome. Now, to my human eyes, this looks like the Cloud SQL database will work correctly, and that's good, but let's do a little further validation. So, one more command: we'll run the render on the local laptop, test our composition function, test our composition, make sure it looks good, and then pass the output to more new tooling — the crossplane validate command — and that will tell us whether everything is good. It's gonna look at the schemas, make sure all the fields are correct, everything's all good. All on a local laptop, instead of out there in the real world — where that database probably is still spinning up now. Yeah, it's still not even ready yet. So we have made Crossplane way easier to use by basically shifting all this stuff left, so you can do it all on your local laptop. Crossplane is easy to use now. So, Philippe, your turn, my friend.

Okay, so yeah, as Jared said earlier, composition functions are a way to teach Crossplane new tricks. Composition functions were introduced in alpha in the 1.11 version of Crossplane, but we completely reworked them and released the beta version in 1.14. So if you previously tried composition functions in the 1.11 alpha, you should definitely have a look at the version we shipped with 1.14, because it's completely different. Let's see a few additions we added recently. We added a Python SDK, which gives you a first-class developer experience for writing composition functions in Python. Previously you could do that with Go, but now you can also do it with Python.
And with that, we also released a template repository, which you can use from GitHub directly, or also, as Jared was mentioning earlier, through the init subcommand of the crossplane CLI. About metrics: we added some nice things for the observability of composition functions, like exposing the number of calls and some statistics about the execution times of functions, so you can monitor your functions, see how they're behaving, and see if there's anything you should improve to get your composition times tighter.

And then we also added a completely new capability to functions quite recently, in 1.15, which is the ability to request additional "extra resources". Functions can request, back from Crossplane, additional cluster-scoped resources — which is most of what Crossplane handles — and use those as part of their composition. So it's essentially a very flexible cross-resource reference. And obviously, functions can expose that with whatever API they want through their input, as we saw. We're gonna see a bit more of that later.

As I was saying, we completely revamped composition functions in 1.14, which proved to be the right choice, because we've seen the ecosystem of functions thriving. Jared already showed a few, but we'll go rapidly through a few highlights of the functions available at the moment. function-kcl allows you to define your composition logic entirely in KCL. KCL is a constraint-based configuration language, and actually a CNCF Sandbox project. So you can write things like for loops and conditions, which may seem obvious, but given the previous in-tree implementation of compositions, it's actually a pretty big achievement to have all these features and the whole power of KCL for defining your compositions. Similarly, we have function-cue, which allows you to define your composition logic in CUE.
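As a rough idea of what a KCL-based step looks like, here is a sketch of a pipeline step using function-kcl — the input kind and the embedded KCL are assumptions based on the function's docs, so double-check them:

```yaml
# Sketch of a function-kcl pipeline step with a for-loop -- the kind of
# logic that was awkward in the old in-tree composition engine.
- step: compose-with-kcl
  functionRef:
    name: function-kcl
  input:
    apiVersion: krm.kcl.dev/v1alpha1
    kind: KCLInput
    spec:
      source: |
        # One Bucket per name, via a KCL list comprehension.
        items = [{
            apiVersion = "s3.aws.upbound.io/v1beta1"
            kind = "Bucket"
            metadata.name = name
        } for name in ["logs", "backups"]]
```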
Here we can see part of a more complex composition function using CUE, which is composing an S3 bucket and creating IAM policies for that bucket — but only if a base ARN is provided — and also checking if there are additional ARNs to create additional policies for. And then we're gonna talk about this one in more detail later, but leveraging the extra resources functionality, what previously was only possible in-tree with environment configs and the composition environment functionality can now be completely reproduced by a function. So you can see how in the future we could have more generic composition functions doing even more.

function-cel-filter is a different kind of function. While the ones we just saw are producer functions — stuff you'd expect to use at the start of your pipeline — there are also simpler functions that modify the result of those more complex functions, and this one is exactly that. It's a simple function whose only job is to filter out resources based on an expression you define in CEL. In this case, for example, the usual function-patch-and-transform is creating a bucket, but function-patch-and-transform doesn't expose any easy way to filter out resources or to create resources only conditionally. So we can filter downstream, and only create this bucket if the export field in the composite resource's spec is set to S3. That's definitely useful.

And then there's function-sequencer. This one is about defining dependencies between multiple resources. You can define multiple sequences, as you can see down there in the rules. In this case, for example, the second resource and the third resource depend on the core resource, and so the second and third resources are only going to be created once the core resource is ready.
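A function-sequencer step expressing "second and third wait for the first" might be sketched like this — the input group/kind and field names are recalled from the function's README and should be verified against it:

```yaml
# Sketch of a function-sequencer pipeline step. Resource names are
# illustrative; each later entry in a sequence waits on the earlier ones.
- step: sequence-resources
  functionRef:
    name: function-sequencer
  input:
    apiVersion: sequencer.fn.crossplane.io/v1beta1
    kind: Input
    rules:
      - sequence:
          - first-resource
          - second-resource
      - sequence:
          - first-resource
          - third-resource
```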
So that's pretty awesome too. Another one: function-switcher. This one is about simply exposing some annotations at the composite resource level, so you can enable or disable resources based on whatever the upstream user decides. That's a nice addition too. And obviously there are a lot of functions still to be written, so feel free to reach out on the Crossplane Slack channel and we can definitely help you out. It's super awesome to see so many contributions from the community.

Let's talk now about composition environments. Composition environments are an alpha feature — we already spoke about function-environment-configs, and this is the original, in-tree implementation of that. It's still an alpha feature, but we were discussing promoting it to beta. The initial intent of environment configs was to have some environment-dependent data, as the name suggests, which you could pull into your compositions — like a glorified ConfigMap. But it ended up being a lot of other things and being used in a lot of different ways. It became a way to share information between compositions; it became a way to patch resources inside the usual standard resource-based compositions. The API became extremely complex as a result, and a little bit difficult to maintain. So we think the path forward for that is actually functions.

So let's have a little demo of function-environment-configs. Here, for example, you can see a composition using the old approach. Big enough? Yes, that'll be good. This is an old-style composition. As you can see here, we are defining an environment from environment configs — a list of selectors or references, either by label or by name, the usual approaches.
And then we can use that from either a pipeline or the usual patch-and-transform-based composition, as we can see down there. But the usual approach to debugging this would have been to actually deploy it again — similarly to what Jared showed earlier, it would have been to just deploy it in a real cluster with real environment configs. So it would have been pretty hard to understand what's going on without having stuff actually running in production, or running somewhere. Now, there is no support for this in the convert tool yet, but we are working on that. It's actually just a matter of pushing down the exact same configuration: function-environment-configs' main goal was to implement the exact same API as the in-tree implementation. So we can just use yet another function to get all the resources — under the hood, as I said, it's using the extra resources functionality.

And we can see that we can actually run that. So how do we run it? We saw crossplane beta render earlier, but we recently introduced some more flags for it. Two flags: one for extra resources, as I was saying, where we can provide a list of YAML manifests that the crossplane beta render command will use to serve resources to the function if and when it asks for them; and one to include the context in the output, so that we can see what it actually understood and what it's putting into the context. So let's have a look at the actual result. Crossplane — let's just wait. As you can see: crossplane beta render, include context. So we're gonna see the context, and as extra resources we passed the environment configs as a few YAML manifests — no need to replicate the same logic as we would have had to previously. We can see, for example, in this case, that the composite resource is produced with a field in the status actually coming from the environment, from one of the environment configs we defined.
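An EnvironmentConfig handed to crossplane beta render as an extra resource could be as simple as this — the data contents and labels are made up, but the group/version/kind follows the alpha EnvironmentConfig API:

```yaml
# A minimal EnvironmentConfig to serve to functions during a local render.
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: EnvironmentConfig
metadata:
  name: example-environment
  labels:
    purpose: demo          # selectable by label from the composition
data:
  region: eu-west-1
  accountID: "123456789012"
```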
And we can see it's actually the right field, coming from the context. So in case of any issue, we could have just run the same command: what's going on, what's the actual result in the context, why is the context not updated? And that's the thing. So let's go back to the slides.

Let's go through a few other new features we introduced recently. Server-side apply — you might have heard about that. The control loop of controllers in Kubernetes is usually written as: get resources, mutate them, then update. That was the only way available when Crossplane was started, but this approach has its problems, as you might know. It's complex to handle, it's mainly additive — so it can't easily remove a stale field from an object — and with arrays it's hard to handle existing elements that may have been put in by another controller. The Kubernetes solution to that is server-side apply: let the API server figure it out for you. We already use server-side apply between composite resources and composed resources for compositions using functions, and now we also have that capability between claims — composite resource claims — and composite resources. You can enable that with an alpha flag right now, --enable-ssa-claims, but it's probably going to be promoted really soon.

Real-time compositions: usually, compositions are re-executed — and composed resources re-updated according to whatever your composition logic is — on a schedule, on an interval, or when the composite resource is updated. But we weren't watching the composed resources, the downstream resources that your composition creates. With this alpha feature, behind the --enable-realtime-compositions flag, you can tell Crossplane to dynamically spin up watches for composed resources.
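For reference, turning on both of these alpha features through the Crossplane Helm chart might look like this — the flag names are the ones mentioned in the talk, and passing them via an args value is an assumption about the chart, so verify both against your version's docs:

```yaml
# values.yaml sketch for the Crossplane Helm chart: enable the two
# alpha features discussed above (flag names as given in the talk).
args:
  - --enable-ssa-claims
  - --enable-realtime-compositions
```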
And given that the API server is gonna send events on any change to those composed resources, Crossplane is gonna reactively reconcile all the resources as needed. And so let me pass the ball back to Jared.

All right, thanks Philippe. So, as we saw when we were talking about getting ready for graduation, with the official, formal proposal out there, we would not be where we are as a project without this entire community working together and building this thing, right? There are many, many ways to get involved. Probably the one thing to remember here is crossplane.io — you can go there and find links for all this stuff. We're very, very active on Slack and GitHub and whatnot. Come chat with us, and start getting involved, either building things for Crossplane or building your own functions; we can talk about all of it. And then a final thing here: it's always good to know more about the people that are using Crossplane. So if you wanna share your story with the public, with the rest of the community, about how you're successfully using Crossplane, just go to the ADOPTERS.md file in the Crossplane repo and share your story there.

So we've got three minutes for questions, I believe, if I'm doing my time right. There's a mic stand right there. Does anybody have a question they wanna ask? You started walking towards it before I even said that. Ha ha.

So: can we create dependencies in such a way that, say, before I create a Postgres database, I need to wait for a VPC to be created? Can we do that? Yeah, there's a couple of different ways to do that. One of the common ones is, if you need a field from that particular resource, then in the patching policy you can say it's a required field — so until I have that field, don't go any further. That's one way to do it, depending on the use case.
And then, like some of the functions Philippe was showing off, like function-sequencer, you can say in your function pipeline that this resource has got to get done before you even try to start the next resource. So there are a couple of different ways to do something like that now. Another minute? Some more questions? Do you have a question? I'll maybe repeat it for you.

So the question is about bootstrapping Crossplane — getting a cluster up and running, having all the account information, like, what do you use to do that first, right? Do you wanna take that, Philippe? All right, yeah. If I got the question correctly, it's left to the user to handle that. Obviously Crossplane runs in a Kubernetes cluster, so you need somewhere to run the Crossplane controllers. So yeah, the answer is no. But it's a similar issue you have with other similar tools that need to spin up a kind cluster to run some stuff, and then you can create a cluster from there. Yeah, I've seen people do that before — bootstrap a cluster with kind. And then there are managed services for Crossplane as well if you don't wanna spin up anything. So there are options for that too.

Fifty seconds — anybody else? Really good question, because maybe a year and a half ago or so, you did have to install the whole thing and have 900 controllers running, which has a number of issues — one of them being that the Kubernetes API server's scaling thresholds did not handle that many CRDs very well, so we'd essentially tank the entire control plane when you did that. So what we did is separate them out into families, or groups, of controllers. You can install the S3 controller, the EC2 controller, the RDS controller — specifically the ones for the services you're gonna use — and not all the other ones if you don't want them. All right, we are officially out of time.
So thank you, thank you so much. And I'll probably just go hang out in the hallway for more questions. Thank you.