OK, welcome, everyone. Welcome to Cloud Native Live, where every single week we dive into the code behind cloud native. I'm Bill Mulligan, and I work at the CNCF. Every week, we bring you a new set of presenters to showcase how they work with cloud native technologies. They'll build things, they'll break things, and they'll answer your questions live on our stream. Join us every Wednesday at 11 Eastern. And this week, we have Dan Mangum here to talk to us about Crossplane conformance. So one note on the code of conduct: this is an official live stream of the CNCF and, as such, subject to the CNCF code of conduct, so please don't add anything to the chat or questions that would be in violation of that code of conduct. Basically, just please be respectful to all your fellow participants and presenters. With that, I'm super excited to have Dan on the show. I've heard a lot of great things about Crossplane. I personally haven't had a ton of time to dive into it, so I'm really excited he's on the show today. So with that, Dan? Awesome. Well, thanks for the intro, Bill. And I'm super excited to be here. So for context for folks, I'm a maintainer of the open-source Crossplane project, which is a CNCF sandbox project that is moving into incubation, or in the process of doing that. And I also work at Upbound, which is the company behind the Crossplane project. And so last week, as part of our Upbound offering, we had a big announcement about our new product offerings. And one of those was the first enterprise distribution of Crossplane, which we call Universal Crossplane, or UXP for short, which is what I'll refer to it as mostly today. But around that, in order to be able to have distributions of CNCF projects, you have to have a conformance program, which basically allows folks to create distributions and validate that they're conformant. So a lot of folks are probably familiar with the Kubernetes conformance program.
So our conformance program is similar. So today, I'm going to talk a little bit about that conformance program. I'm going to talk about UXP, the first enterprise distribution of Crossplane. And we'll talk a little bit about what it means to be conformant, how Crossplane works, how UXP works, what Upbound provides, that sort of stuff. Oh, awesome. This is everything I wanted to know and more. Well, as I said before we came on air, definitely feel free to jump in at any point. I know we have the chat going here. So ping me with any of those questions, or feel free to bring up your own. But I'm ready to go if you are. Yeah, sure. Do you want to kick it off? Or I guess maybe the first question, for people that don't know: what is Crossplane? Can you give a 10-second overview? Yeah, absolutely. So a lot of folks come to Crossplane because they want to manage cloud infrastructure from the Kubernetes API. So like many projects and organizations that have been on this live stream, we extend the Kubernetes API through the use of controllers and custom resource definitions. So the initial thing that folks encounter frequently is provisioning infrastructure via Kubernetes API types. So we have providers, which essentially give you the ability to create something like an RDS instance, an EKS cluster, or an EC2 instance on AWS, or the equivalents on GCP or Azure, et cetera. But then on top of that, the real value that Crossplane brings is the ability to build a platform. So when I talk about building a platform, really what I mean is creating abstractions on top of these granular managed resources, as we call them. So you may have the ability to create an RDS instance on AWS, a Cloud SQL instance on GCP, and an Azure SQL instance on Azure.
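To make "managed resource" concrete, here's a rough sketch of what one of those granular Kubernetes API types looks like with provider-aws — the field values and names here are illustrative, not taken from the demo:

```yaml
# A Crossplane "managed resource": one granular piece of cloud
# infrastructure exposed as a Kubernetes API type by provider-aws.
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: example-db
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t2.small
    masterUsername: masteruser
    allocatedStorage: 20
    engine: postgres
  # Crossplane writes endpoint/username/password here once provisioned.
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: example-db-conn
```

Applying a manifest like this causes the provider's controller to create the real cloud resource and keep it reconciled with the spec.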
And what we want is for infrastructure operators or platform developers to be able to say to the developers within their organization, "this is how you create a database," and those developers don't need to worry about the actual implementation details. So we're gonna go through that exact example of creating a database, but these can get much more complex. And we'll talk about our packaging system and how you can build and share these different abstractions. Oh, cool. Yeah, that would be great. Do you wanna start with the demo or, I don't know... Yeah. Okay, yeah, I'd love to see it. Awesome. So I have a number of things that we can show today. But what we're looking at here is the homepage of Upbound. And this was just launched last week. And you'll see here in our product section, Universal Crossplane, or UXP, is kind of our big enterprise distribution that we just launched. We also have a registry and Upbound Cloud components as well. But before we get into that: UXP being a conformant Crossplane distribution basically means you're gonna have the same API, both syntax and semantics, when you interact with UXP and Crossplane, so you can actually go ahead and install Crossplane first. So we're just on the Crossplane documentation here, and then we'll upgrade to UXP using some of our tooling. So this is the normal flow that all folks go through when they're first starting with Crossplane. They typically do something like create a kind cluster, or maybe they already have a development Kubernetes cluster running on a hosted offering. And then they install Crossplane. So I'm just gonna go ahead and start off with doing that. So I'll hop over here into the terminal and we'll create a cluster, and we'll see we're spinning up a 1.20.2 Kubernetes cluster here just using kind locally. I'm sure a lot of folks are familiar with kind, but if you're not, it's basically a way to run Kubernetes in Docker, as the name would suggest.
And it's just a nice way to get a local Kubernetes cluster. And once this is spun up, I'm gonna go ahead and install Crossplane, and maybe we'll talk about the different components once that's installed. So I can go ahead and grab this helm install command here that, once again, we see lots of folks use. I'm actually just gonna grab this portion of it, and I'll go ahead and create a namespace to install this in. So I'm gonna create an upbound-system namespace here, and I will do helm install crossplane and put that in the upbound-system namespace. And once that's installed, you'll see we're using Crossplane 1.2.1, which is our most recent release. And if we look at our pods — and I'm gonna shrink this down a little bit just so folks can see; it looks like that's still large enough, but please let us know in the chat if it needs to be bigger — you'll see here that we have two core parts of Crossplane that come as core Crossplane, which is what is installed via Helm. One of them is core Crossplane itself and the other is the RBAC manager. And the reason why we have these different components is Crossplane actually offers its own package manager to allow you to install extension points for it. So similar to how you can install controllers and CRDs into Kubernetes, we have packages that you can install into Crossplane in the form of providers and configurations. So folks that are familiar with infrastructure-as-code tools are likely familiar with the concept of providers. These are basically things that give you the ability to create — it looks like the name is covering the bottom line of the terminal here; we'll see if we can get that corrected in just a moment. But providers are basically just a way for you to extend Crossplane's functionality. So if we go and look at our providers — and I'll go over here to our documentation site and make that a little darker for us — we can take a look at something like provider-aws.
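Installing a provider through that package manager is itself just another manifest you apply to the cluster — roughly like this sketch, where the package tag is an assumption for illustration:

```yaml
# Tells Crossplane's package manager to pull the provider-aws OCI
# image and install its CRDs and controller into the cluster.
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.18.1
```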
And basically this is gonna bring a bunch of different CRDs and controllers to be able to provision these different things from Kubernetes. And once those are installed, you can, you know, kubectl apply your RDS instance or something like that. If we go and take a look at configurations, which is what we actually install here in the Crossplane getting-started docs, they are basically higher-level packages that declare dependencies on providers. So if we look at the actual manifest of a configuration — I'll take us down here — you'll see that you can specify a version range of providers. And so when you install this configuration package, it's essentially gonna say, please install these providers. And then within that configuration package, it's gonna tell you what the abstraction mapping is from the abstract type to the underlying managed resources. So in this specific configuration that we use in the Crossplane getting-started docs, we have an abstraction for a PostgreSQL instance. And then backing that, we have mappings to an RDS instance, a different configuration of RDS instance where it creates a VPC and subnets for you, a Cloud SQL instance, as well as an Azure PostgreSQL server instance. And basically once you install this, the provisioning experience looks the same. So you'll see, whether we're using AWS or GCP, the only thing that we're changing in these manifests that a developer would actually create is the label that selects the composition. So at this point, Bill, I know we're kind of sprinting through a lot of stuff here, trying to give an overview of Crossplane before we dive into how Upbound Cloud works, but are there any questions that have come up for you or any parts you'd like to drill down on further? No, I think it all makes sense to me so far. And we don't have any questions in the chat, so I think, yeah, it is moving fast, but I think it's good. Awesome, awesome.
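A sketch of the developer-facing manifest being described — the API group and field names follow the getting-started docs' conventions but should be treated as illustrative. The point is that only the label changes between clouds:

```yaml
# An abstract "claim" a developer creates; the platform team decides
# what it maps to underneath.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-db
  namespace: default
spec:
  parameters:
    storageGB: 20
  compositionSelector:
    matchLabels:
      provider: aws   # switch to 'gcp' or 'azure'; nothing else differs
  writeConnectionSecretToRef:
    name: db-conn
```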
Well, I'm not going through installing any of these packages right now because we're gonna upgrade to UXP first. So I'm gonna go ahead and show that. So as I mentioned before, as part of our announcements last week, we also announced the Crossplane conformance program with the CNCF, which you can see here in this GitHub repo. And basically this defines what it means to be both a conformant Crossplane distribution, which is what Upbound's UXP is, as well as a conformant provider, right? So what makes a Crossplane provider valid and compatible with other Crossplane components? So as I mentioned before, our crossplane organization here has a number of different providers that we support. And if you actually hop over — this is something some folks are not aware of — into the crossplane-contrib org, you can see in-development providers as well. So you'll see things like GitLab, New Relic, SQL, IBM Cloud, DigitalOcean, GitHub, et cetera. So we can really write a provider for anything that has an API and then integrate that into the Crossplane ecosystem. And so — go ahead. Who writes most of these providers, or where do they come from? So we have a pretty robust community around Crossplane. So right now the main providers were kind of bootstrapped along with the initial project. And when I say the main providers, I'm mostly talking about the big cloud providers. We also have a number of other ones, as you saw there, that are maintained by different folks within the community. And really anyone can go and write them. So these are, what I'm showing, our examples of open-source providers. A lot of different organizations that use Crossplane actually write custom providers for their internal infrastructure systems. So they might have some legacy data center infrastructure solutions that they write custom providers for. And essentially what that does is it gives you those building blocks.
Again, it gives you those managed resources that then you can compose up into higher-level abstractions. So if we go back and look at our configuration here, just like how we have these mappings to types in provider-aws or provider-gcp or provider-azure, you could add your own "my internal infrastructure system" provider and create a composition for that. And then you can do things like scheduling to the public cloud or a private data center based on the contents of the workload that's gonna be consuming the database or other infrastructure. So it's really infinitely extendable. So I guess the answer to your question would be that anyone writes Crossplane providers, but those main ones are maintained by the Crossplane community. Oh, cool. And would you say if somebody's just experimenting that would be a good place to start, or do you think that's too complex? So providers — and we can take a look at one of them; our most popular right now by stars, and I think by image pulls as well, would be provider-aws. There is definitely a fair amount of complexity here; it's like anything that you build using Kubebuilder or something like that, but with some extra Crossplane packaging. So you can see there's quite a few API types here. There's lots of different controllers. So there's a fair amount of complexity depending on the API you're interacting with, AWS being one of the most complex. So if you're just getting started, writing a full provider might not be necessary, but of course you could take a look, and we actually have a provider-template repo here, which allows you to basically do a one-click create from this repo, as it's a template repository. So that's definitely a valid way to get started, but most folks are infrastructure or platform owners who consume things that are already defined in providers, and in that case we'd suggest getting started with composition and starting to define what your platform looks like.
So that's kind of where I'd recommend folks go, unless they need additional functionality. Okay, cool. Absolutely. So to continue along with this demo and show what conformance means, first of all I'll say that we have a plugin here to be able to run your conformance suite, your Crossplane conformance suite. So for instance, we run this against Upbound's UXP to verify that it is a conformant Crossplane distribution, and this is just like how Kubernetes does it as well. So if anyone has questions about the conformance program, or about what it means to be a conformant provider or a conformant distribution of Crossplane, please feel free to reach out afterwards. We'll also have some more documentation coming for this in the coming weeks. So with Crossplane installed in our cluster, I'm gonna show the workflow for upgrading to UXP. So we have a command-line tool called up, and I have it installed here on my local machine. And basically what I'm gonna do is, instead of doing an up uxp install, since we already have Crossplane in place, I'm gonna do an up uxp upgrade. And so essentially that's gonna say, with whatever Crossplane version I have, I want to upgrade to a compatible UXP version. So one of the restrictions we place on upgrading to UXP is the requirement of matching versions. So we version UXP with a scheme that is the semantic version of the Crossplane release followed by a UXP iteration. So this is similar to something like GKE, if you've seen the console when you're picking your version while provisioning a Kubernetes cluster. So here, as I mentioned before, we have the 1.2.1 version of Crossplane installed. So I'm gonna use a compatible UXP version and just say, please upgrade to that. And it's a pretty simple operation; for those who are interested in the technical details, behind the scenes we actually use Helm under the hood to go and upgrade to UXP.
You can install UXP itself with Helm directly if that's a requirement, or if you're doing it through a CI/CD pipeline or something like that. But if we look at the pods in the cluster now, you'll see that we've kept — and actually replaced — the two Crossplane pods we had running before. But we also have these additional components, which are part of the UXP distribution, that allow you to do things like connect to Upbound Cloud; or xgql, which actually gives you a GraphQL interface into the Crossplane resource model. And so there's a number of extra components that you get when upgrading to UXP. And the number one thing is the ability to connect to Upbound Cloud. So I'll go ahead and hop over to Upbound Cloud and we can take a look. I am here in the Upbound Cloud console, and essentially what Upbound Cloud allows you to do is create what we call control planes. So a control plane is essentially a link to a UXP cluster, and you can create a hosted or self-hosted version. So hosted essentially means that we'll spin up a Kubernetes cluster for you, we'll run it for you, we'll install UXP there, and we'll give you a friendly interface into it. We'll also allow you to connect via kubectl if you like, to be able to do any operations you need. Or you can run self-hosted, which is kind of the big feature that we've seen a lot of customers wanting. They want to be able to run UXP in their data centers or on their cloud provider and connect that up to Upbound Cloud and get the benefits that it has. And so today we're gonna be showing off this self-hosted ability. Another thing I'll mention, just for folks who want a bit of a tour of the Upbound Cloud console: with Upbound Cloud, you can have your personal account. So this is my personal account, and it's kind of where you could do testing and development, and you can create control planes as well as repositories, which we'll take a look at in a minute. And then in an organization, you have some enhanced features.
So you have things like teams and users, and the ability to create permissions, and we'll show that off. So we'll definitely show some of that functionality after we connect up our control plane. So if we look at the self-hosted workflow, you'll see it's using this up CLI tool that we released. Another thing I'll mention as we're going through this is that all of these tools are actually open source. So as part of our release last week of our different products, we went ahead and open sourced a lot of the different components, including UXP and the up CLI. So you can actually ask for features there, you can contribute — we already have community members contributing to UXP. And so we really wanted to be open about that. And we can talk about the relationship between Crossplane and UXP further on if folks are interested. So going through this self-hosted workflow here, I've already logged in to my Upbound Cloud account with the up CLI. So we're gonna run this control plane attach command and then pipe the output to UXP connect. And I'll go ahead and type that in and we'll talk about what it's doing. And xp here is just short for control plane. So we'll call this CNCF live, and I'm gonna create this in the Dan account here. And then, like I said, I'm gonna pipe that to my up uxp connect. And what's happening here — it looks like I left off attach — is we're basically saying to Upbound Cloud: we have a self-hosted UXP instance that we want to connect up to Upbound Cloud, so please make an entry for us to be able to connect, and give us credentials to be able to talk to Upbound Cloud from the cluster. So that's what up cloud xp attach is gonna do. It's gonna create a CNCF live control plane and give us a JWT to be able to pass to our agent to be able to connect to it. And up uxp connect is really just a helper method to get that secret into the cluster and allow the agent to be able to access it. So I'll go ahead and run that command.
And what we'd see here if we looked behind the scenes — let's take a look at our secrets in the upbound-system namespace. And the one that gets created, as you can tell from the age, is this Upbound control plane token. So what the agent's gonna do is look for that token to exist. And when it does, it's gonna go ahead and connect that up to Upbound Cloud. Now, if we hop over to Upbound Cloud and look at our control planes, you'll see the CNCF live one is provisioning. This process looks different, obviously, if you're using a hosted control plane versus a self-hosted one. So in the hosted case, we're actually spinning up this Kubernetes cluster for you, installing UXP, and then giving you access; in the self-hosted case, you'll see it's pretty quick, right? Because this cluster is already running, and we just need to establish communication between the two. So once we go ahead and connect that up, you'll see we have a pretty empty console here, because we haven't actually installed anything into Crossplane yet — or in this case, UXP. But in our control plane settings, you can see that it's identifying the version of Crossplane we have running. And it says we don't have any packages installed. And then we can edit some attributes. But you also see that you can connect via kubectl. In this case, we already have kubectl access to our cluster, but if you're running hosted, or you wanted to give team members within your org the ability to connect via kubectl to your cluster, we have some nice commands to be able to do that using your Upbound Cloud account. So if we want to go ahead and actually use this for something, we can get started by browsing the Upbound registry. And Bill, I wanted to give you another moment here. I've kind of been keeping an eye on the chat, but is there anything that's come up that you wanted to chat more about before we move on? No, I think we should keep going with the demo.
I have a couple of questions, but I don't want to interrupt the flow. I think you've got some momentum going now. Awesome, awesome. Well, I do like to go through a lot in these demos, so definitely feel free to just interrupt me with anything that comes up. But we have moved over to the registry here. The Upbound Cloud registry is actually just an OCI registry. So all Crossplane packages are packaged as OCI images, but they're kind of special OCI images in that they are single layer and just contain a YAML stream in them. So maybe if we have some time at the end, I'll actually look into the contents of these packages. But once again, I mentioned that we have different providers as well as configurations, and configurations can declare dependencies on the providers. So with some extra stuff we provide in the Upbound registry, we allow you to provide metadata, which surfaces different information about the packages and also shows things like the dependencies on providers for configurations, as well as different information and links and that sort of thing. And we can talk about how to do that in a bit. But I'm gonna go ahead and go with our getting-started package here, which is a really simple package that we use in our documentation, just to get started with UXP. And it only declares a dependency on the AWS provider, and it's going to give us that Postgres database abstraction and a mapping to an RDS instance that we were looking at originally in the Crossplane docs. So normally with Crossplane, or even with UXP, you'd go through and you'd install this package via kubectl and our Crossplane kubectl plugin, and you'd specify where that package resides. Once again, our registry is just an OCI registry, so you can also push these to Docker Hub or anything like that.
But if you're connected up to Upbound Cloud, you can just click this Run in Upbound Cloud button, and whether it's hosted or self-hosted, you have the ability to go ahead and install this in your cluster via this nice UI workflow. You can select the version that you want — we'll use the 0.0.2 — and then you can select the control plane it goes into. And so what we're doing right now — I like for folks to think about this as building our platform, right? So the goal here is to not give developers the same kind of interface that they'd have with AWS and have them need to understand all of the parameters that they'd configure on one or many cloud providers or infrastructure services. Instead, we're giving developers a Heroku-like experience, or a Layer 2 cloud experience. And we're doing that by building our platform here. So you can think of what we're doing right now as building our Layer 2 platform that we're going to then provide to developers within our organization. So I'm gonna select this CNCF live cluster, which is connected up to my local machine here. And it's gonna be a pretty quick installation, because it's basically just creating a configuration package type in the cluster, which Crossplane then reconciles, as folks familiar with Crossplane will know. And you'll see that it specified this provider-aws dependency, and then based on the versions, it's gonna select an appropriate one. And we can go over to finish and view control planes. Before we actually look at the UI, I'll show a little bit about what's happening behind the scenes. If we do kubectl get packages — if folks are familiar with Crossplane, you're probably familiar with this command — you'll see that our getting-started configuration was created. Once again, I'll make this a little smaller so folks can see. You'll see that we have our getting-started configuration as well as an appropriate provider-aws version installed.
And you'll see AWS is ready, and our getting-started configuration is just waiting on all dependencies to become available. And now that they are, you'll see it also become available here shortly. Sometimes it has a blip there, but you'll see they're both installed and healthy now. And if we look at our cluster now, we can take a look at our CRDs, and you'll see we just brought in a ton of different AWS CRDs. So installing that provider is essentially going to bring along all these different CRDs and controllers to reconcile them. You'll also see that we have some CRDs for getting started — I believe that's the org here, or it might be upbound. We can take a look, actually, because I'm gonna show you the source of what we installed here. So the package contents are actually in the universal-crossplane repo; if we go into this configuration directory, this is all that's in our OCI image here. So if we look at our crossplane.yaml, this is where we're specifying our dependency on provider-aws. It's also where we're specifying our abstract type, which is our PostgreSQL instance, as well as our mapping to the underlying types that are gonna get created in response to this abstract type. So there is a question in the chat: if this operation fails, where would I be able to see the error logs? Yeah, so it depends on what your persona is. You could go through Upbound Cloud to look at the status of these different packages. Also, if you're more savvy on the command line or something like that, you can do things like — let's show the events here, so we can try... okay, describe our configuration here, and just like any Kubernetes resource, if you spell the command correctly, you get events on it. You can also take a look at the provider. So let's do the same here. And you can see the different steps it takes to install them.
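The crossplane.yaml being described is the package's metadata file. A rough sketch of its shape — the API version and version ranges here are assumptions for illustration:

```yaml
# Package metadata for a Configuration: declares the Crossplane
# version constraint and the provider dependencies the package
# manager should resolve and install.
apiVersion: meta.pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: getting-started-with-aws
spec:
  crossplane:
    version: ">=v1.0.0"
  dependsOn:
    - provider: crossplane/provider-aws
      version: ">=v0.18.0"
```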
And if you want to, you can see we have these different revisions, which basically allow you to roll forward and backwards. We'll show the configuration revision here; you can see explicit information about — let's see, configuration revision — you can see the information about the dependencies found and installed. So essentially, just like any Kubernetes resource, we have information that we surface there. Also, if you'd like to get deeper into the weeds, you can look at the logs of the crossplane pod, which would be the main core Crossplane component here, and see what's happening. One of the other things I wanted to point out here: when a provider is installed, I mentioned all those CRDs that are brought into the cluster, but we also need controllers to reconcile those and create those external resources. So here you'll see our actual controller for provider-aws. And this pod is running and basically watching those CRD types that we installed. So once again, taking a look in the console, you'll see that we now have some populated information here in the form of our providers and configurations. And if we look at provider-aws, you'll see all the different API groups we support for provider-aws today, or at least in this version we have installed. And you can expand those actual granular types. And if any of them existed, you could see instances of them, but we haven't created anything yet. And then at the getting-started level, in our configuration, you'll see this abstract type. And we'll show here creating an instance of this abstract type, how that breaks out into granular managed resources, and then how we can see the relationship between those. But in order to be able to do that, we're going to need to create some credentials to be able to talk to AWS. So the way that we provide credentials to providers in Crossplane is we create a secret with some credentials in it.
So we're just calling this aws-creds here, and I already have my creds.conf there. So that was pretty easy; we created that. And then we create a provider config, which basically just instructs our AWS provider: look at this secret we just created when you try to perform operations on AWS. So I already have our config.yaml here, so I will cat it to show what that looks like. It looks just like what we have there in the documentation. I'll go ahead and apply it. Another thing I want to note is that since we named this default, it's going to be used as the default if we don't specify a different provider config. What that allows you to do is basically say, in this situation use these credentials, and in this one use these other ones, which allows you to point at different accounts and sets of credentials and that sort of thing. All right, so we have our provider config created. So at this point, we're ready to create any of the abstract types that we defined. So let's take a look at what this might look like for a developer who's deploying their app via Helm or something like that and wants to be able to get a Postgres database to interact with. They have a pretty simple interface here, where they're only needing to define the storage size. Behind the scenes, as we'll see in a moment, this is going to create a lot of different resources on AWS, and as a platform builder you actually get to define what that mapping looks like. So in this case, we're going to create a VPC and subnets, as well as the database in a DB subnet group. But you may have a composition that just creates an RDS instance directly, and you can also select different compositions of resources for a single abstraction — saying, for instance, if this is created in this namespace, it will do this; if it's created in a different one, it will do something else, that sort of thing. You also get to define what credentials those are provisioned with, et cetera.
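The two steps just described — creating the credentials secret and pointing a ProviderConfig at it — look roughly like this (namespace and key names assumed, following the docs' conventions):

```yaml
# First: kubectl create secret generic aws-creds -n upbound-system \
#   --from-file=creds=./creds.conf
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  # Named 'default' so it's used whenever a managed resource
  # doesn't reference a specific ProviderConfig.
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: upbound-system
      name: aws-creds
      key: creds
```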
So once again, we're focusing here on defining what interface you're giving to developers to provision a Postgres database alongside the deployment of their application. So I'll go ahead and apply this. And you'll see it gets created, and we can get this just like any other resource in our Kubernetes API. And you'll see it's not ready yet, because it's actually provisioning a number of different resources. And when those are all provisioned, that's gonna create a connection secret, which we specified as db-conn here, which is gonna have basically all the credentials we need for our application to be able to connect to the database we provision. But in Upbound Cloud, you're going to get this experience of being able to see this and how it maps to the underlying resources in the cluster. So everything behind the scenes here is a Kubernetes object. So if you're really familiar with how Crossplane works and how different things interact, then you could just use kubectl directly. However, once you get a bunch of these at scale, you're gonna start running into issues with knowing how different things map, knowing what's connected to what. So this gives a really nice interface for that, as well as the ability to do user management for who can create what. So you'll see here that when we look at our abstract type, we're gonna get some nice stoplight status around whether it's available or unavailable. And since it's unavailable, let's take a look at the different components and see what may be preventing it from becoming available. So you'll see here that we have our subnets and VPC; those all look good to go. Our security group, route table, internet gateway, and DB subnet group are also all ready, but we're waiting on our RDS instance, which makes sense, right? Because those other resources are kind of networking components that get created immediately, while an RDS instance takes a while to come up.
And if you're wondering where all of these came from: if we look back into our composition here, for the package that we installed, you'll see all of these present in the composition. So we basically said that abstract type maps to these underlying types, so they get created directly. And there's a lot of functionality built into this composition model we've defined that allows you to say things like match controller ref, which means that when we create this RDS instance, it's gonna use the DB subnet group from the same composition. So if you created a bunch of these, right, you want your RDS instance to be provisioned with the components that are provisioned alongside it. So there are a lot of different facets to composition and the schemas you can provide here. And you'll also see that the way we populate from the abstract type to the underlying types is we say fromFieldPath. So once again, if we looked at our Postgres instance, this storageGB field is kind of an abstract representation of storage, right? There might be different parameters for different types of databases. And we're mapping that to the allocatedStorage field, which is essentially what AWS says is the parameter to specify storage size for an RDS instance. So as that comes up, we can once again see the status of things, and we can drill down lower, and you can see that we had some warnings, but we also got to the latest one with a successfully requested creation of the external resource. And we can wait for this to come available. Once again, as I mentioned, the abstract type will wait until all the underlying resources are available and that secret can be populated. And if we wanted to actually look at these directly, we could do something like kubectl get aws, and we'll see all of these different resources here, including our RDS instance, which is not quite ready yet.
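The patch being described, mapping the abstract storageGB field onto RDS's allocatedStorage, looks roughly like this inside a Composition (a sketch; the paths follow the hypothetical claim shape above, and real compositions carry many more patches):

```yaml
# Excerpt of one Composition entry for the RDSInstance managed resource
- base:
    apiVersion: database.aws.crossplane.io/v1beta1
    kind: RDSInstance
    spec:
      forProvider:
        engine: postgres
  patches:
    # Copy the abstract field from the composite resource into the
    # provider-specific field that AWS expects.
    - fromFieldPath: spec.parameters.storageGB
      toFieldPath: spec.forProvider.allocatedStorage
```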
Once that does come available, in the namespace where we created our Postgres instance we should see a connection secret get created. It's not there yet, right? Because it's not ready. Once it is ready, that connection secret will be available. So if you reference that from your deployment to be able to get those credentials into your application to connect to the database, you'll be able to see that. So I know we ran through quite a few different things, and there's at least some more stuff that I could go through here to show off, but once again, I wanted to pause and give you, Bill, an opportunity to ask any questions, or anyone in the chat as well. Well, first I wanna say you have some fan mail in the chat from DanPopNYC: hashtag Dan is one of the greatest resources in the community; Upbound / Crossplane, great projects and amazing people. So there's some love going around the community right now. Well, that's high praise coming from Dan, as he's certainly a big name in the CNCF community. Yeah, I guess, so one question I have is we talked a little about crossplane, a little about UXP. Can you tell me the difference between them, and what does UXP provide on top of just crossplane? Yeah, absolutely. So the first thing I'd say is it bundles the exact crossplane that upstream distributes. So those components are going to look exactly the same, at least for the time being. One of the things is that if you're familiar with how Kubernetes works, you'll see similar patterns in crossplane. We build crossplane focusing on interfaces rather than implementations. So what I mean by that is when there's an opportunity to create a kind of plugin model, or the ability to support different functionality, we opt to go for that plugin model. So with UXP, we have the opportunity to replace or put in our own implementations behind different plugins. So an example would be composition.
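Consuming that connection secret from a deployment is ordinary Kubernetes (a sketch; the image is illustrative, and secret key names like endpoint and password follow common Crossplane connection-secret conventions, so verify them against your provider):

```yaml
# Pod template excerpt: inject the provisioned DB credentials
containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    env:
      - name: DB_HOST
        valueFrom:
          secretKeyRef:
            name: db-conn    # the writeConnectionSecretToRef name
            key: endpoint
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-conn
            key: password
```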
One of the things we're talking about right now is being able to support different composition engines. There are other things like deployment models in terms of how providers actually get provisioned and their controllers run. So there are lots of opportunities to be kind of opinionated about the implementations behind these interfaces. So that's the first thing. Today, the main part of UXP that's different from crossplane is the ability to connect it up to Upbound Cloud. That being said, it's kind of an open point for us to be able to add in new functionality. But to be clear, we'll never keep functionality out of crossplane, and we don't actually even have that prerogative, right? Because crossplane is a CNCF project. It's not run by Upbound, though lots of the maintainers currently work at Upbound. It's a community-driven effort to decide what goes in there. So really, UXP just gives us an opportunity to say, we think this is the best implementation behind this interface, so we're gonna use that. But as I pointed out at the beginning, UXP is totally open source as well. So everything that we just deployed into our cluster is present in this repo. You can go and see what we're doing, what we're adding. You'll see the different components. You'll see that we actually bundle crossplane into the deployment here. And this is 100% open source. So in terms of functionality, it's really things that we think make using crossplane alongside Upbound a better experience. And there are different things around security and that sort of thing that are also added. Okay, yeah, definitely. Just another piece of fan mail here. Oh, I guess, sorry, not from me, it's from the chat. So Eve says, Dan is definitely the best, so I'm agreeing with you. So high praise all around the community. So we do have a question from Eve now.
My issue with Kubernetes objects mapped to cloud provider resources at AWS is always when something goes wrong; I waited a whole day for an NLB controller and realized there was an IAM permission issue in CloudTrail. Absolutely. Well, first of all I'll say, Eve, I feel your pain. That is definitely something that is difficult just in general. So even outside of managing your cloud provider resources with Kubernetes, even if you're just doing things in the console, click-ops style, it's a hard problem, and there are lots of different moving pieces. What I'd say about that is what we're trying to do here is give the folks who have really strong expertise in those areas, and the ability to troubleshoot those issues, the ability to build abstractions so that developers don't have to troubleshoot those issues, right? So for instance, I know I showed in the beginning, and we've touched on this a few different times, but in this demo that we do, we opt to create all new VPCs and subnets and that sort of thing. The reason why is that's the best way to get this to work, right? We're basically provisioning all new infrastructure to get this database stood up. You could also reference existing subnets or an existing VPC and provision your database in that manner. But in this case, we're opting as platform builders to say, we want to give you an all-in-one solution, and we'll make sure that those different components work together. That's not to say that you're not still going to run into issues; there are gonna be credentials mismatches and that sort of thing. And that's really an area where we as Upbound would like to supply additional information. So obviously, if you're a customer of the company or something like that, we have additional insight into your environments, at the level that you ask us to, essentially. And so we can do things like evaluate, when you create this abstract type.
For instance, how does that actually map to the credentials being used? So how does it know how to create an RDS instance? How does it have permissions to create all these different resources? So you'll remember at the beginning, we created our provider config, and if we look for that, we should be able to see it. So we have our default provider config, which points at our secret, right? If you have a whole ecosystem view of the cluster, then you're able to say what credentials, what IAM roles and permissions that secret has. And then we're able to trace through the resources that get provisioned, so from that Kubernetes object to the granular managed resources that come out of it, see what credentials are associated with them, and then we can give you a view of maybe why your RDS instance isn't able to be provisioned. So that's definitely a key problem area that I think different distributions of crossplane are able to address in different ways. But it's certainly something we as Upbound would like to provide greater insight into. Another comment for Dan: Dan is always so helpful in the Slack server. I'm not sure which Dan we're talking about, but both are awesome. We're lucky either way. We can share the praise. Yeah, there you go. It's all friends here. So the next one is from go for IT: is Upbound aware of the secrets created in my cluster, or is it just using them? So this is a great question, and another case where using something like UXP is different than crossplane. So importantly, Upbound is not using your secrets or accessing them in any way. And the reason why is really all we're doing from the Upbound Cloud perspective is showing you these Kubernetes objects that you've installed via Upbound Cloud, or that you've installed in your cluster and exposed to Upbound Cloud. What's actually accessing that secret is running in your cluster directly, right?
So the thing that needs to be able to assume those credentials is our provider-aws controller pod here. Excuse me. And essentially it's the only thing that needs to get to it. So there's no reason why Upbound would ever need to touch your AWS credentials or something like that. We basically just need to make sure the controller is running, and you're in charge of putting the credentials there. That being said, there are a number of different integrations to be able to put those credentials into your cluster. So I know in crossplane, for instance, we have an example, I believe in our guides here, of using Vault to be able to drop those credentials into your pod. And that basically is just controlled by how you install your provider. So there are lots of different ways to manage your secrets, but long story short, no, we're not looking at your AWS credentials or anything like that. Cool. The next one was for me: thank you, by the way; please bring back the Flake Finder Fridays. Any comment about that? Absolutely. So I think we've only had two episodes of Flake Finder Fridays, which is definitely less than we wanted. We wanted to do at least one a month. For folks that aren't familiar, Flake Finder Fridays is essentially a Kubernetes community show where myself and Rob Kielty go through and talk about different test failures in upstream Kubernetes and why they happened, and kind of walk through how the test infrastructure works, and visit some different parts of Kubernetes in the process. Rob actually messaged me this week, so I'll say the delay in episodes is 100% on me, not on Rob, and he's certainly pushing me to get those going again. So I'm hoping that we'll get some more episodes out there. As always, if there are specific topics you want to address, please let me know and we'll try to work those in. Yeah, so it sounds like there's some momentum there. The next one is from Lucky Lordy again.
How do you envision multi-cluster with crossplane: a single management cluster, or delegating each cluster's crossplane deployments into its own crossplane instance? Yeah, so this is a great question that a lot of folks have. So one of the things that crossplane does really well is spin up new Kubernetes clusters, right? So then you come to this situation where you're wondering if crossplane needs to be installed in all of those clusters, or if crossplane is just used to manage other Kubernetes clusters, and what that looks like. One of the things I would point to is that in the crossplane docs, again, we have a new guide section on multi-tenant crossplane talking about some of these different models and how you might want to set this up. It's also really useful for understanding composition, honestly, because that's a big part of the multi-tenant aspect. One of the things we didn't cover here today in terms of providers, and maybe I'll briefly show it quickly, is that we have providers for things like Helm. So this is actually one of our most popular providers, which may make you wonder: how does provider-helm work alongside something like provider-aws? Well, when we're writing these compositions, in the one we showed today we had just AWS infrastructure resources. If you want to do something more complex, like maybe spin up an EKS cluster and then deploy a Helm chart into it, because you can target any API, including the Kubernetes API, that's entirely possible with composition. And you can actually build these compositions together and create higher- and higher-level abstractions, or just basically take the different pieces. So this is an example of another configuration package that we could show off. But this cluster abstraction has a pretty simple schema here, not a lot of fields, where you're basically just specifying the size and count of the nodes in the cluster. But behind the scenes, this is doing two different things.
One of them is spinning up an EKS cluster, which has quite a few components and IAM roles. Once again, getting back to Eve's comment, we want to make it easy to have the right permissions set up. So it's creating an EKS cluster, but it's also creating a Helm release for Prometheus in that cluster. And because of the way that we allow you to reference resources between different crossplane-compatible resources, you can say, wait for this Kubernetes cluster to come up. But the user basically has a single interface that they use to actually spin up that cluster and install things into it. So if we look at our higher-level composition here, we're taking that EKS abstraction and that services abstraction and basically pointing the services one at the EKS cluster. So you can really do a lot of different things with this composition; in this case, spinning up a Kubernetes cluster with Prometheus, but you could also spin it up with your application in it. So that would be one model of doing multi-cluster, where you have kind of your control plane on Upbound Cloud and you spin up other clusters, and from your control plane you put your applications in there, including your database credentials and that sort of thing. So all of the infrastructure provisioning happens from a single cluster. We also see folks, and this is more in this multi-cluster, multi-tenancy model, create a bunch of different clusters. So that could be a bunch of different clusters on Upbound Cloud; that could be them using crossplane to spin up other clusters and using Helm to install crossplane into those clusters, which is kind of a meta scenario there, but something folks do. And one of the things that we want to support in that case is, instead of trying to maintain connections between all of these different clusters, we'd like for you to be able to treat them as units where you can create a reproducible platform in them.
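The provider-helm piece of that composition is roughly a Release managed resource like this (a sketch; provider-helm's actual schema may differ in detail, and the chart and config names are illustrative):

```yaml
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: prometheus
spec:
  forProvider:
    chart:
      name: prometheus
      repository: https://prometheus-community.github.io/helm-charts
  # Points at a provider-helm ProviderConfig whose kubeconfig targets
  # the EKS cluster created earlier in the same composition.
  providerConfigRef:
    name: eks-cluster-config
```

The interesting part is the providerConfigRef: because it can be patched from the cluster resource in the same composition, the Helm release waits for, and lands in, the cluster provisioned alongside it.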
So it was as easy to create this kind of abstraction for our Postgres database as installing a single package into the cluster, but that actually brought along a lot of complexity that we had designed behind the scenes. And this is probably the simplest configuration package you could have. Most folks who use crossplane would have potentially hundreds of abstractions here. You'd have your database, your cluster, your VM, some networking stack, et cetera; basically anything you wanted. Because these packages are OCI images that can be versioned and pushed around to either the crossplane or the Upbound registry or Docker Hub or Harbor or whatever you like, they can be installed into a new cluster to get the same platform. So let's say you've kind of defined what your Heroku looks like for your organization. To reproduce that in another Kubernetes cluster is just a UXP install and then installing the configuration, or even doing that in a single step. So when we talk about having many, many clusters that we're still treating as individual units, to be able to get a reproducible environment in them, we have this nice packaging technology that allows you to do that. One of the other things I'll point out, and I'll keep going on this a little bit since I don't think we have any other questions at the moment, is that one of the benefits of using a composition is your provisioning happens in a namespace. So for instance, we created our Postgres instance in the default namespace. But let's say you have a team one namespace, a team two, and a team three namespace. Based on the namespace that the actual abstraction is created in, you can control how that mapping happens to individual managed resources. So maybe for team one, you want them to use specific credentials or a specific AWS account when their abstraction is created, so when they create a Postgres database, it goes into the team one AWS account.
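Installing one of these packaged platforms into a fresh cluster is a single Configuration object (a sketch; the package reference here is illustrative):

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Configuration
metadata:
  name: platform-ref-aws
spec:
  # An OCI image from any registry: Upbound's, Docker Hub, Harbor, etc.
  # Crossplane pulls it, installs the XRDs and compositions inside,
  # and resolves the provider packages it depends on.
  package: registry.upbound.io/examples/platform-ref-aws:v0.1.0
```

Applying that one manifest into a new cluster is what makes the "reproducible platform" story work: the same versioned image yields the same abstractions everywhere.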
You can actually use the same underlying composition and just use things like our patches here to say: from wherever that claim was, which is what we call the abstraction, from whatever namespace that was in, use the provider config that references that. So it gives you a really powerful abstraction mechanism that also provides isolation between different namespaces and that sort of thing. So I'd definitely encourage folks to check out this guide. And, to kind of harp on Upbound's functionality there, one of the things we didn't talk about too much is the user management aspect of this. So for these different teams that you have, you can give them permissions in a specific control plane. So for instance, in CNCF Live here, I could give someone view, editor, or owner. This actually maps to RBAC in your cluster as well, which means that if they have kubectl access or a GitOps pipeline or something like that, they'll have the appropriate access to provision the resources they need. So for instance, if I make these editors, then folks in this account now have edit access to resources. And another thing that we do is give you the ability to create a robot. So I'll call this my CNCF robot. And once that's created, I can create an access token for it and put that in my GitHub workflow or something like that, and then assign that robot access to this team. And we'll go ahead and say the CNCF robot. So now whatever token I use with the robot account that we added is gonna have that type of access to the cluster. So for instance, instead of kubectl applying my Postgres instance here, I could put that in a manifest that I commit to my GitHub repository, and it gets deployed using the credentials of that robot account. So that's actually what we encourage folks to do mostly: use a GitOps workflow. We really embrace that, and we encourage folks to use GitOps for absolutely everything, because then you have versioned config for your manifests.
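The namespace-to-credentials routing described at the start of this answer can be sketched as a patch in the shared Composition (hedged: Crossplane stamps the claim's namespace onto the composite resource as a label, but the exact label key should be verified against your Crossplane version):

```yaml
patches:
  # Route each team's resources to a ProviderConfig named after the
  # namespace their claim was created in, e.g. a claim in "team-one"
  # uses the "team-one" ProviderConfig and its AWS account.
  - fromFieldPath: metadata.labels[crossplane.io/claim-namespace]
    toFieldPath: spec.providerConfigRef.name
```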
And once again, if you have that stored in Git, it's as simple as installing crossplane, installing your configuration, and then deploying whatever's in that GitHub repository, and you've brought up the exact same infrastructure you had before. Yeah, cool. I love the shout-out to GitOps. So we have about three minutes left here. One question that I had that we didn't get to cover earlier: you talked earlier about this conformance stuff. So can you tell me, what does it actually mean to be a conformant crossplane distribution or provider? Yeah, absolutely. So some of that is still underway right now, but you can kind of go through the instructions here to understand exactly what you have to pass to be considered conformant. So it's pretty simple to run this suite of tests against it. And if I go back over to our conformance suite here, you can actually take a look at the different tests that we run. So one of them would be things like being able to create compositions and that mapping happening correctly and that sort of thing; that would be an aspect of it. We also have things like provider and configuration testing within core crossplane. So that would do things like create a provider, or maybe a better example would be: create this configuration and make sure all of its dependencies get resolved. That would be a good example. If you're not able to do that, then you wouldn't be a conformant crossplane distribution. And then if we look over at provider, they would do things like make sure that it gets installed and becomes healthy, and make sure all of the CRDs that it creates fit the crossplane conformance model. Essentially, all of the CRDs that providers in crossplane have have some standardized fields, we'd say, embedded in them; so just the different conditions in the status and the providerConfigRef and that sort of thing. So that's really all this is checking. And you can just run it through the plugin, and it'll make sure that you're up to date.
And we actually have quite a few folks that are already interested in doing that. For instance, I think for all of these different organizations, you'll see folks from each of these orgs; so Equinix Metal, IBM, AWS, Azure here, they're all submitting for conformance for their various providers. So there's definitely a lot of interest already. And then we also will be submitting our UXP conformance results here. And then other folks who come along and build a crossplane distribution can also submit their results and go from there. Awesome. Well, I think this has been a great show. Do you have anything else you wanna leave the audience with? I don't think so. The one thing I'd say is, join us in slack.crossplane.io and we'll be happy to answer any questions. We're very active in that channel. So we definitely encourage you to come hang out, and definitely subscribe to CloudNative TV as well. Yeah, awesome. Thanks, Dan, and thanks everyone else for joining the latest episode of Cloud Native Live. It was great to have Dan with us today to talk about crossplane, UXP, and the exciting new stuff there. We also really loved the interaction and all the questions from the audience, too. So thanks Lucky Lordy and go for IT and Eve for joining us in the conversation today. So join us every Wednesday at 11 Eastern to hear the latest from across cloud native. Until next week. Thanks, everyone. Bye.