Yeah, I think, first of all, the thing to say is that I am absolutely shocked and grateful at the number of people that showed up to what is literally the last session of KubeCon EU. So thank you very much for showing up. You must not have a plane that's leaving soon. You're probably leaving after one of our speakers, who has a plane at 7. So let's get this thing going and get out of here. All right, so this is the introduction and deep dive session for the Crossplane project. We are all maintainers and contributors on the project, and we're going to share some of our knowledge and experience with you. So first of all, what is Crossplane? Just a quick introduction: you can think of Crossplane as a framework that you can use for building your own cloud native control plane or platform. You can do it declaratively, where you don't have to write any code to make this happen. But then let's also take a step back and ask, what is a control plane? Good examples of control planes are things like AWS or GCP. They've been using control planes for years. If you ask for a cloud resource, in their back-end services they've got a control plane running to do all the orchestration of the machines, the storage, compute, et cetera, to provision and dynamically give you services. So that's a control plane. Kubernetes is obviously a control plane as well, orchestrating your applications, containers, pods, et cetera. But there are a lot more resources than that, right? So Crossplane is something that will help you build your own control plane, but with all sorts of resources beyond just containers and applications: databases, caches, buckets, all sorts of things. And it allows you to put your own opinion into that control plane. We're going to get into a lot of details on that, as this is just the first slide. But let's think about Crossplane as having two points of extension.
So when you're building your control plane, on the back end there are all sorts of ways you can extend it with providers, and basically anything with an API can now be managed with Crossplane: all the cloud providers, on-premises services, et cetera. Then on the front-end side, you can aggregate these resources together and declaratively say, hey, this is an API or an abstraction that I want to give to my developers to be able to access my control plane. So we'll go into more details about the back-end and front-end extensions of Crossplane as we get into more slides here. So right, this is part of the maintainer track. This is a CNCF project. We originally donated the Crossplane project to the CNCF back in 2020, and then we moved to the incubation milestone in late 2021. So it's a neutral place for multiple vendors, companies, organizations, individuals, contributors, et cetera, to come together and start enabling and building control planes together. So it is a community, and we're glad you all are here, and we want you to be part of the project as well. We're not going to go into these numbers here, but the project is growing. We have this many people here at 4:55 on the last day, on a Friday. So we're growing, and there are lots of ways to get involved in the project and continue being a part of it. So I'm going to hand it over now to Steven to start with some of the details about managed resources. Thank you, Jared. Is my microphone working? Woo! All right, thank you. So we're going to talk about probably the fundamental concept of Crossplane. If you understand this, it's really going to help you understand pretty much everything else about Crossplane. And this is something called a managed resource. So what a managed resource is, it's a Kubernetes representation of something that's external to the cluster. It could be anything. In our demos today, we're going to talk about cloud resources.
But if you've been to some of the other talks, people have been talking about managing things like ships and trucks. Basically anything with an API could be managed by Crossplane. So let's talk about examples of this. In the case of AWS, what kind of things do you want to configure? There are hundreds of things you can configure on each of the cloud providers. In the case of AWS, it'd be things like certificates, queues, EKS clusters, databases, and networking, right? So our goal with this project is: with all these resources that are available in AWS, can we manage them using Kubernetes and all the tooling that you have around it, the GitOps tooling, all the controls, things like validation webhooks? So what does this look like in a Kubernetes context? We're representing external objects in Kubernetes, so that means we have to translate the remote state into Kubernetes. The first thing I want to show here is that the YAML we use in Crossplane is basically 100% native Kubernetes. Things like groups, versions, and kinds are supported. We use the same kind of API versioning that's used in Kubernetes. It's all standard. And then you'll notice here that every single kind represents one resource type on the remote side. In the case of AWS, some of the providers install over 700 of these. And then we have metadata support: every object, like in Kubernetes, is named. You can also add labels and annotations to it. So if you want to label that this resource is owned by your dev team, you could just add it there and manage it however you manage labels in Kubernetes. And then finally, we have the spec. The spec is the desired state: what we want the remote object's state to be. And we have a special stanza called forProvider. This is what we actually send to the remote API server.
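As a rough sketch of what such a manifest can look like (the group, version, and exact field names vary by provider and provider version; this assumes a community AWS provider's S3 Bucket kind):

```yaml
# A minimal managed resource sketch; field names are illustrative.
apiVersion: s3.aws.crossplane.io/v1beta1   # group/version, just like core Kubernetes
kind: Bucket                               # one kind per remote resource type
metadata:
  name: example-bucket
  labels:
    team: dev                              # ordinary Kubernetes labels apply
spec:
  forProvider:                             # desired state sent to the remote API
    locationConstraint: us-east-1
    acl: private
  providerConfigRef:
    name: default                          # which credentials/config to use
```

Everything outside `forProvider` is standard Kubernetes machinery; everything inside it maps to settings of the remote resource.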
And this spec in a managed resource is high fidelity. What this means is that if the cloud provider has 100 different things you can configure, Crossplane will have 100 different settings here. That's the goal of a managed resource: to be as high fidelity as possible to the remote resource. How does this work? Well, when you install a Crossplane provider, two things get installed. One is all those CRDs that we talked about, and that could be dozens or hundreds of CRDs. Those get installed in the API server using the CRD mechanism, and the API server starts watching for changes on them. And then we install a pod, which is a controller that actually manages these resources and knows how to talk to a back-end API. This is running all the time, and it's watching for any creates, updates, or deletes on the API server. So when you apply something to the cluster, the controller says, I'm looking to manage RDS kinds. Any create, update, or delete on those will go to this controller, and it will immediately start talking to the AWS API. Now there are two things that a controller does. First, it asks: does this resource exist in the cloud? If it doesn't, it's going to go do a create. If the resource exists already, it's going to calculate a difference, and then it's going to go and change it. The controller runs continuously. So this is a declarative state: anything you want here, Crossplane will attempt to get into the remote state. So that's the spec, the desired state. Now, there's two-way communication with remote APIs. What comes back is the status, and this is the atProvider stanza. forProvider goes out; atProvider comes back. In this case, when we create a bucket, AWS will give us an ARN. The other thing that comes back is events. Providers don't generate standard out or anything like that.
They generate events, which you can consume with whatever eventing system you use. The kinds of events you'll see: errors, inability to authenticate, creates, updates, and deletes. So if you collect your events, this is how you can monitor things. And I think that's a very quick overview. So I'd like to do a demo of a managed resource just to show you what this looks like in practice. What I have here is just a bucket. And you can see here, again, this is very simple. There are hundreds of things we could set, but this is a minimum. You can see here we have a region selected and a name. We talked about groups, versions, and kinds. And we're going to create a bucket in US 1 with a private ACL, and we're going to add a tag to it. So this is how you create objects in Crossplane. You're not running a script or anything like that; you're just applying a desired state to the cluster. We've created it. And now we can just get this. You can immediately see it hasn't done anything yet, so we can watch it. What's happening now is that we're just watching what's happening to this resource. Crossplane is asynchronous; like all controllers, it runs in a loop. The controller saw that something was requested, so now it's syncing. It's asking, can I communicate with the remote API? And then it gets to ready in a few seconds. And now it's ready; the bucket has been created. So you're not running a script. You just say, hey, create me a bucket, and eventually the bucket will come to a ready state. Now, if you're doing things like creating a VPC or a subnet, things that depend on it will not become ready until it becomes ready. So there's kind of an internal dependency mechanism within Crossplane, but it's a little more decoupled. So we've created our resource, and now we can see what's going on there.
And you can see here we have a bunch of conditions. We got our atProvider status that I talked about, and then we have the conditions: all the things that came back. So in Crossplane, you're not looking for the output of a shell script; you're looking at the conditions. The other thing that's important to note, probably one of the more important things, is the concept of an external name. This is the way that Crossplane matches a Kubernetes object to the remote one. If you don't have this, Crossplane thinks the object doesn't exist. So in terms of backup or restore or something like that, this is the one piece of data that Crossplane uses to reconstruct in Kubernetes what the remote state is. And I think Chris talks about this a little bit in his controller-writing section. But this is really an important concept in Crossplane; it's the link between the remote resource and the one on the cluster. Finally, here (sorry, it's weird to type on this keyboard), you see the events that are being created, like the brand new event here. So this is how you know what's happened. I think in terms of my time, that's it. So yeah, let's delete this bucket, and then I'll move over to Yuri. He's going to talk about composition. Thanks, Steven. Can you folks hear me? All good? Thank you. So hi, I'm Yuri. And I truly believe that Crossplane rocks. Sorry, I always dreamed of saying that from a stage. So thank you. Yeah, just a quick one. OK, it's apparently a full-screen issue. Yeah, sorry for that, a small glitch. So Steven demonstrated the core ability of Crossplane to describe any cloud resource the Kubernetes way. You inherit automated configuration, drift reconciliation loops, and all those core Kubernetes powers. So it's already very powerful. But Crossplane has more. Crossplane provides you with the ability to create your own custom platform API, which is very specific to your use case, your company, your teams.
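For illustration, the external name lives in a well-known annotation, and readiness comes back through status conditions; a hypothetical observed object might look like this (the atProvider field is illustrative):

```yaml
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: example-bucket
  annotations:
    # The link between this Kubernetes object and the remote resource;
    # without it, Crossplane assumes the remote object does not exist.
    crossplane.io/external-name: example-bucket
status:
  atProvider:
    arn: arn:aws:s3:::example-bucket   # value returned by the provider
  conditions:
  - type: Synced                       # controller can talk to the remote API
    status: "True"
  - type: Ready                        # remote resource is up and available
    status: "True"
```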
And with the help of Compositions and Composite Resource Definitions, you can implement all of this custom logic in a declarative way without any coding. No need to write any Go code for controllers or any kind of code; everything is declarative. So this is a simple illustration of the concept. Here we define our own custom cluster API. It's pure coincidence; it's not the Cluster API project, it's just our custom cluster API, which we implement to create managed services with the associated cloud providers. So we have two Compositions in that case: one is AWS and another is GCP. And these Compositions are actually a collection of dependent resources which can create your very custom clusters, from a managed EKS cluster down to Helm charts and services like Prometheus on top. And the same for GKE. The core concept behind that is that you provide a stable API for your consumers, the platform consumers. Frequently that's the application developers in your company. And they are not exposed to any of the complexity of the underlying infrastructure. So how can we actually do that? We have a special kind, the Composite Resource Definition, also called an XRD. It's very similar to the standard Kubernetes Custom Resource Definition, but it extends the Kubernetes resource model. We can define a custom API group, which is very specific to a company, and describe the desired API with a standard OpenAPI resource schema. So pretty standard stuff, very similar to a CRD. So how do we actually implement the logic behind this XRD definition? Again, you don't need to code anything up. You can use another special kind, a Composition, which satisfies the associated composite type and creates the list of managed resources that this Composition will manage. If you want an analogy from the Terraform world, it's kind of like a Terraform module, or like server-side Helm. Everything is server-side; there is no client-side job here at all. So this is like a list of static resources.
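A minimal sketch of what such an XRD can look like (the group, kind names, and schema fields here are illustrative, not the exact ones from the demo):

```yaml
# Sketch of a CompositeResourceDefinition (XRD); names are illustrative.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xpostgresqlinstances.database.example.org
spec:
  group: database.example.org          # your company-specific API group
  names:
    kind: XPostgreSQLInstance          # the cluster-scoped composite resource
    plural: xpostgresqlinstances
  claimNames:
    kind: PostgreSQLInstance           # the namespaced claim developers use
    plural: postgresqlinstances
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:                 # standard OpenAPI schema, as in a CRD
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  storageGB:
                    type: integer
                required:
                - storageGB
```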
So how do we make it more dynamic? We have a concept of patches. We can propagate data from the Composite Resource, which we create and instantiate out of the Composite Resource Definition and the associated claim, down to the managed resources composed by the Composition. So you can expose only the required API fields and name those API fields as you like. And in addition to that, we provide runtime transforms as a form of patch. In this specific case, we are abstracting the instance types of some cloud provider, and we expose to our teams only internal names like small, medium, large. The actual meaning of small, medium, and large can be defined by the platform builders. So we can actually make a live demo of this stuff. Yeah. First of all, what are we actually trying to achieve? We want to expose this minimalistic API to our customers. We want to hide the complexity of the underlying resource. Just for the sake of a quick comparison: that's a managed resource of an RDS instance in AWS, and there are all the possible values there, hundreds of them, to configure a specific RDS instance. And we want to encapsulate the platform builder logic, parameters, security, all that stuff, within a Composition, and expose to our developers only the required parameters. In this case, it's just the size of the database and a password secret to pick the credentials up from. So that's the goal. How do we implement that? Exactly as I mentioned in the slides: first, we describe an XRD, a Composite Resource Definition. Again, OpenAPI with a schema; it describes the required parameters of storage size and a referenced password secret name. So pretty straightforward. The only thing we should do is just instantiate it. So we have now defined this custom platform API. What we should do next is actually implement it, and we will implement it in the form of an associated Composition.
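A sketch of how such a patch with a map transform can look inside a Composition (the resource kind, field paths, and the small/medium/large mappings are illustrative):

```yaml
# Sketch of a Composition patch with a map transform; values illustrative.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: example.aws.database.example.org
spec:
  compositeTypeRef:
    apiVersion: database.example.org/v1alpha1
    kind: XPostgreSQLInstance
  resources:
  - name: rdsinstance
    base:
      apiVersion: database.aws.crossplane.io/v1beta1
      kind: RDSInstance
      spec:
        forProvider:
          engine: postgres
    patches:
    - fromFieldPath: spec.parameters.size   # exposed as small/medium/large
      toFieldPath: spec.forProvider.dbInstanceClass
      transforms:
      - type: map
        map:                                # meanings chosen by platform builders
          small: db.t3.small
          medium: db.t3.medium
          large: db.t3.large
```

The claim's simple `size` field is translated server-side into a concrete instance class; consumers never see the cloud-specific values.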
So there is a Composition that satisfies the previously created XRD, PostgreSQLInstance. And it composes a couple of resources: the main one, the RDS instance itself, and an associated custom parameter group. Here we have just a couple of resources for demonstration, but it gives you a picture of how we can compose an arbitrary number of dependent resources, and we can create Compositions of different complexity. And yeah, again, we have patches, and we can propagate the required data from a claim instantiation. As you can see here, we're propagating the size with this fromFieldPath to the toFieldPath of a composed managed resource. And everything from the associated claim will be propagated to the resources that are composed. And one important difference: a Composite Resource is a cluster-scoped resource, and a claim is almost the same, but it's a namespace-scoped resource, which is designed to be consumed by platform consumers, while the XRD is in the scope of platform builders. So now we are changing hats. We built our custom API, we implemented it in the form of a Composition, and we can now consume it as application developers, as platform consumers. So we're just applying the claim. Yeah, as you can see, our claim is created. It's our custom resource: our custom example database API group and a PostgreSQLInstance, our custom abstraction. So that's exactly what we want. We have a shortcut, get claims, so we can see the status. You can obviously get or describe this resource like any standard API, using any standard kubectl commands. As you can see, this claim is created, so we can use the full path to it and see the status. It's not yet ready; a database takes some time. The current status of the composite resource claim is waiting for the resource to become ready. And we can run get managed.
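For completeness, a sketch of the namespaced claim a platform consumer might apply (names and fields are illustrative):

```yaml
# Sketch of a claim; only the fields the platform team chose to expose.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-db
  namespace: team-a                  # claims are namespace-scoped
spec:
  parameters:
    storageGB: 20                    # the simple API defined in the XRD
  writeConnectionSecretToRef:
    name: my-db-conn                 # where connection details will land
```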
It's a standard Crossplane shortcut to get the list of all managed resources that the current Crossplane instance is managing at the time. As you can see, both composed managed resources are created: the parameter group and the RDS instance itself. And just for the sake of an end-to-end demonstration, we can go to RDS and double-check the actual state. Was it applied by Crossplane? I'm just refreshing the page now. Yeah, so we have a custom group. It's visible, but I'll go to database instances. There's our crossplane-deep-dive instance in a creating state. That's why it's still waiting for the database to be ready. And we can go to configuration and check parameter groups. Yeah, and it is referencing a custom crossplane-deep-dive parameter group, which was created as part of the Composition. So that's pretty much it for the demo of a custom platform API and the power of custom abstractions. And we can proceed with the provider extension. Chris, please take the stage. And we will do a small magic of laptop change, yeah. Thank you. So the microphone is working. Cool, then hopefully also the projector. So let's talk about extending Crossplane. As we said in the beginning, Crossplane is highly extensible, and it's a framework to build universal control planes. There are both sides. One is the back-end side; we call those providers. With providers you can, in general, build integrations to manage anything with any API out there. So you get CRUD operations for cloud resources, on-premises systems, and whatever you want. On the front-end side, we call those configurations. That means you can compose the resources from the providers together. You can define your control plane's declarative APIs and abstractions, like we saw before. And this is, in the end, what your devs or your customers see, and what they can consume from your control plane. And we also take care of the provider versions; in general, we check that everything we need is available.
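For reference, installing a provider is itself declarative; a sketch of a Provider package manifest (the package reference and version below are illustrative):

```yaml
# Sketch of installing a provider package; package ref is illustrative.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  # Crossplane pulls this OCI package, installs its CRDs,
  # and runs the provider's controller pod.
  package: crossplane/provider-aws:v0.24.1
```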
So let's have a look here at the visualization. In the middle, we have our Crossplane control plane running. On the bottom level, we have the providers and their resources. So think about it: you want to create a Kafka cluster for your customers, and you have two cloud providers available, in that case AWS and DigitalOcean, and it doesn't matter if the Kafka cluster is running on AWS or DigitalOcean. As the creator of the infrastructure here, you can say, OK, for AWS we need the following version, for DigitalOcean we need the following version, and for Kafka we check this. If it's not available in your Kubernetes cluster, then we set it up for you. And on top of it, you have all the configuration stuff: the compositions, the representation of what your app devs can consume in your control plane. So let's talk a little bit about the ecosystem. I think in the last months it's been growing and growing in the community. Normally, folks start using the public cloud providers, like AWS, Google Cloud, Azure, DigitalOcean, whatever, and set up, for example, a Kubernetes cluster. And then you think, OK, once my Kubernetes cluster is ready and available, I need Helm charts, Kubernetes manifests, whatever. And then you start with the next providers. And so you can compose a lot of resources from your control plane perspective. And then you see a lot of folks starting to use more and more. So think about GitLab runners, whatever: then you also need tokens, and you can create them via the GitLab APIs and move them back to your cluster. And yeah, as an announcement, I think today in the ecosystem there's also now a provider available for Ansible. So you see folks adopting more and more stuff in the ecosystem. So let's talk a little bit about the internal stack, how we build our providers in the platform. At the very bottom level, we have the Kubernetes runtime.
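The version pinning described above can be sketched in a Configuration package's crossplane.yaml; the dependency names and version constraints below are illustrative:

```yaml
# Sketch of a Configuration package declaring provider dependencies.
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-config
spec:
  crossplane:
    version: ">=v1.6"                  # required Crossplane version
  dependsOn:
  # If these providers aren't installed in the cluster at the
  # required versions, Crossplane sets them up for you.
  - provider: crossplane/provider-aws
    version: ">=v0.24.0"
  - provider: crossplane-contrib/provider-digitalocean
    version: ">=v0.1.0"
```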
So the runtime takes care of running our controllers, our containers, and also things like ingresses, services, and load balancers in front of the services. On top of it, if you're not familiar with programming for Kubernetes, there's something called the API machinery. This is how the APIs are composed together in the cluster, plus a lot of other cool things around it. But the thing we need here is custom resource definitions. That means you can create custom types in your cluster and use them like real native Kubernetes objects. Think about a subnet group for ElastiCache: you can set it up, and it feels completely like Kubernetes. You can kubectl describe, get, delete, apply, everything. Then the next layer is controller-runtime. I think almost all Kubernetes operators use parts of controller-runtime. It helps us, in general, with the reconciliation stuff in Kubernetes, and it also helps to watch resources: if someone changes a resource in the cluster, the runtime knows what happened there. On top of it, there's the interesting thing from the community here, and this is crossplane-runtime. So normally, a lot of things are built as operators for Kubernetes, but we build things in Kubernetes for external APIs so that we can manage them. Think about everything we can do against external APIs: create, update, delete. And this is, I would say, pre-configured for you, and you plug in your custom logic: what do you need to do for updating stuff, et cetera? And the rest is there. And the very cool part here is you can use the tooling from your Kubernetes ecosystem, like Helm, Kustomize, Flux, Argo CD, whatever, on top of it. And then you have your cloud infrastructure completely as Kubernetes workflows. So let's have a small demo of how easy it is to implement a new resource in one of our providers. So I will change to my Visual Studio Code. I think it's there.
OK, in general, this is one provider here from the crossplane-contrib repository. It's provider-jet-aws. Let's have a short look. I said to you we want to implement a custom type, a subnet group for ElastiCache. So we open this, and you see a lot of auto-generated stuff: the types for parameter groups, replication groups, clusters, users in the ElastiCache group, and the specific API version. And what we want to do now is go implement the subnet group. We can scroll a little bit. There's a config folder, and in the config folder, we have provider.go. There's this one. And for reasons of scaling issues with a lot of CRDs, we need to enable, or include, the resource for generation. So we can do this here now. This is, in the end, a string here for the resource. And then we have configurations: config, elasticache, config.go. What we see here is a Configure function, and what we can do now is add the resource to the configurator. I prepared this. You can see now here it's also the AWS ElastiCache subnet group. We specify the API version in the cluster; I picked it up from the other resources here. Steven talked about external names, and we're using the name as the identifier here, because this is how it's represented in the external API. So let's save this. And then it's very easy: we have Make scripts set up, and we run make generate. Now the resource is being generated, and it also runs all the other code generation here. In a few seconds, we can see a new generated CRD, and we can apply it in the cluster. I will show you this. It took a few seconds. We can scroll there. We have here the package folder and the CRDs folder, and all the CRDs are inside. So it's finished now; you can see there was one green thing. And what we can see here now: OK, cool. There's now a subnet group for ElastiCache available as a new CRD. We can apply this in my cluster. So, kubectl apply the subnet group here.
Then the next thing we can do is run the provider here locally, because it's now in my environment. We also have make run. Now the provider is starting up; it looks for all the CRDs in the cluster and starts managing them and the stuff behind them. So here, kubectl get managed. We pre-installed the VPC and two subnets, because we need subnets for subnet groups for ElastiCache. In AWS, you can see everything is available. What we can do now is kubectl apply the example subnet group. I will show you the subnet group directly; it's also set up. So you can see, I'm using the new CRD we created for ElastiCache. Here you can see the version and the kind. It's called KubeConExample, in region us-west-1. We have a description. And the magic of references is used here: from these subnets we need not only the metadata names, we need the IDs, because the AWS API needs the IDs to create this. And let's have a look: kubectl get. We can have a look here. You see the new subnet group is ready and synced. I will go to the AWS console, and you can see now here the KubeConExample is there. So we implemented a new resource in the controller, in the provider for AWS. Awesome. That's awesome. All right, we'll finish with the microphone on you. All right, so we'll go ahead and finish up this session here. I think we've just got one or two more minutes. But as we said before, this is a community project. It was donated to the CNCF, so there are lots of opportunities to get involved with the project. If there's functionality or things that are missing, get involved: open issues, give us feedback, contribute pull requests, et cetera. A great place to start is crossplane.io; that's the website, and you can basically find all the other links from there. We're super active on Slack. We'd love to meet more of you and talk to more of you. So with that, we can go ahead into, I think, maybe one question or so. And then I'll be here.
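A sketch of how such a subnet group with resolved references can look (the group/version and field names are illustrative; a selector is one way to avoid hard-coding subnet IDs):

```yaml
# Sketch of the ElastiCache subnet group manifest from the demo;
# group/version and field names are illustrative.
apiVersion: elasticache.aws.jet.crossplane.io/v1alpha1
kind: SubnetGroup
metadata:
  name: kubecon-example
spec:
  forProvider:
    region: us-west-1
    description: KubeCon example subnet group
    # Resolve the real AWS subnet IDs from other managed resources,
    # since the AWS API requires IDs rather than Kubernetes names.
    subnetIdSelector:
      matchLabels:
        example: kubecon
```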
I don't have a flight to catch, so I'll be here to continue talking afterwards if you want to come up. Yes, you, right here in the first or second row. Oh, nice. Thank you so much, brother. Hello. Yeah. My question is: do we have any control over the transition between one state and the other, or is it up to the provider to decide what to do? Because I can imagine, for some resources, it might be important whether I recreate or modify; those kinds of decisions. Yeah, yeah, I'll take that. Yeah, so that's a good question. A pattern you'll find in general is that, much like other controllers in the Kubernetes control plane, everything running in Crossplane is actively reconciling, and eventual consistency is an interesting point there. So when certain transitions can't necessarily happen yet, the active reconciliation will continue to try to drive the actual state toward the desired state you have, to eventually reach it. If there are more complicated sequencing or other types of operations, there are some declarative ways to describe that this depends on this resource, or that we need this value from here. And once again, eventual consistency and active reconciliation will make that happen over time, so you don't need to intervene or be very specific with it. And then for further cases, which in my experience over the years so far tend to be fairly minimal, there is an upcoming feature that's going to be in alpha sometime soon where you can write some, let's say, imperative code to make decisions and control the logic even further. Right now, you can write the whole thing in code if you really, really wanted to, to make sure exactly what you want to do is possible. But otherwise, use the declarative stuff, and only when you need to pop into code, implement it in code. Awesome. Let's do a quick time check. Yep, so that is all the time.
But I'll be right here to keep talking if you have any more questions. And thank you so much to everybody for the whole KubeCon week, I'd say. Thanks.