Welcome to another OpenShift Commons. It's been a while since we've all seen each other, and today we have Dan Mangum and Chris Chattery. I'm hoping I'm saying your names right. They're from the Crossplane project, and Crossplane is a CNCF Sandbox project that is gaining more traction. It allows you to extend your Kubernetes cluster to provision and manage your cloud infrastructure, as well as your services and applications. So Dan is going to kick us off with a demo and we'll get right into it.

Absolutely. Thanks, Karina. It's definitely an honor to be on the show, and I'm excited to be here with Chris, who has become quite a large contributor to the Crossplane community. He's going to talk about some of the stuff that he's been working on lately towards the end. But as Karina said, I'm going to start off. We're going to break the demo into parts, which is a pretty frequent thing to do when you're provisioning infrastructure that takes some time to come up. So I'm going to jump into the demo, then take a step back and look at the Crossplane project as a whole, and then we'll circle back to see what's happened behind the scenes while we were talking. So with that, I'm going to go ahead and share my screen here.

All right, you should see an editor here, and Chris, go ahead and let me know if anything goes haywire while we're running through this. Here I basically have a little bit of a guide, and this will be available for anyone to use after this recording as well. What we're going to do is install Crossplane into a Kubernetes cluster. So I'm going to go ahead and create a cluster while we're waiting. Then we're going to install Crossplane and a configuration package, which I'll talk about more later in the show. We're going to set up a GCP provider, which is basically going to give us the ability to provision infrastructure on GCP from our Kubernetes cluster.
Then we're going to create some east and west clusters, as we'll call them, which are an abstraction on infrastructure defined by our configuration package. Once again, I'll circle back to all of this a little further down the line. So it looks like our cluster is ready; we're just running a local cluster in Docker with kind. Before we do anything else, I'm going to build my configuration package. That's the package here that I've defined in this repo. You'll see it's just called cross-cluster, and it has some specifications of the dependencies it has, specifically on Crossplane itself and then on GCP and Helm, because those are the two providers it's going to use to abstract infrastructure. Let me go into that package directory, and we can use the Crossplane kubectl plugin to build this package. If we look, we should see this .xpkg extension here. The next thing we want to do is push this configuration. Let me make sure that I push it to the right place here, per the README. This will just go to Docker Hub; Crossplane packages are actually just OCI images behind the scenes. So we're going to push it up to Docker Hub, and it looks like that was successful. The next thing we're going to do is install Crossplane into our cluster. These are just the instructions I pulled off the Crossplane website, which we'll also take a look at in a little bit. This is just going to helm install Crossplane, and if we get the pods in the crossplane-system namespace, we should see that they are present, or creating. Once those are ready, we can install the configuration that we just built and pushed, or at least get ready to do that. Once again, I know we're moving a little fast here, but we're going to circle back to what all of this means. So those are up and running.
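The exact contents of the package aren't shown on screen, but the dependency metadata Dan describes typically lives in a crossplane.yaml file in the package directory. This is a sketch of what a configuration package with those provider dependencies looked like around Crossplane 1.0; the package name and version constraints are assumed from the narration:

```yaml
# crossplane.yaml -- package metadata for the "cross-cluster" configuration
apiVersion: meta.pkg.crossplane.io/v1alpha1
kind: Configuration
metadata:
  name: cross-cluster
spec:
  crossplane:
    version: ">=v1.0.0-0"          # minimum Crossplane version (assumed)
  dependsOn:
    - provider: crossplane/provider-gcp
      version: ">=v0.15.0"
    - provider: crossplane/provider-helm
      version: ">=v0.5.0"
```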
I'll go ahead and install our configuration. Like I said, that has a dependency on provider-gcp as well as provider-helm, so we should see those also get installed with compatible versions. We've said anything greater than 0.15 and 0.5 for these two, so we should see compatible packages installed by Crossplane. It looks like they are installed; they're still becoming healthy. We got the latest versions of each, which were 0.15 and 0.5, and shortly they'll become available. This will just take a little bit of time, and we'll start to see things like CRDs added for different GCP types. For instance, let's look at a GKE cluster, which is something we're going to create today. You can now basically provision a GKE cluster right from a Kubernetes manifest. And then with Helm we can see we have some resources as well. The last thing I want to touch on is that we have XRDs, which are abstractions that we've defined over these granular managed resources. You'll see we have an app abstraction and a cluster abstraction, and these are part of our cross-cluster package here.

All right, like I said, we're just going to do a few more things before we jump into more information on Crossplane. What I'm doing here is creating my GCP credentials in the cluster so that Crossplane has access to actually talk to GCP. Let's see... there we go. So that should be all set up. Let me create these clusters, and then we'll be done for now. You'll see I created an east cluster and a west cluster, and we'll come back to the whole architecture here. So now that we've sprinted through that, as I promised, we're going to go back and actually talk about what Crossplane is and everything that we just did. I've got a short presentation here. We'll try to spend more time on demos and actually interacting with Crossplane, because I feel like that's a little more tangible.
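The steps narrated so far roughly correspond to the commands below, adapted from the Crossplane install docs of that era; the Docker Hub image name is a placeholder for wherever you pushed your package:

```shell
# Build and push the configuration package (run inside the package directory)
kubectl crossplane build configuration
kubectl crossplane push configuration <your-dockerhub-user>/cross-cluster:v0.0.1

# Install Crossplane with Helm
kubectl create namespace crossplane-system
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm install crossplane --namespace crossplane-system crossplane-stable/crossplane

# Install the configuration; provider-gcp and provider-helm come in as dependencies
kubectl crossplane install configuration <your-dockerhub-user>/cross-cluster:v0.0.1
kubectl get pkg    # watch for the packages to report INSTALLED and HEALTHY
```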
But we'll show the presentation as well, to give you a little bit of background. Chris, is it showing up okay for you there? Yep, I can see it on my end. Awesome, sounds good. We'll assume that folks at home can also see it. Like this says, I'm just going to give a short Crossplane overview that's current as of today. Starting off, the important thing to think about with Crossplane, though it has evolved since this time, is that looking back on the roots of the project and when it was released really informs the capabilities that we have today. Crossplane was announced in December 2018, at KubeCon Seattle that year. Since then, like I said, it's changed quite a bit, but one thing it has remained is open source and open governance, and standardized on the Kubernetes control plane. Now, there are lots of different infrastructure provisioning systems: there are infrastructure-as-code tools, there are cloud provider APIs, all sorts of things like that. But one of the things the founders of the Crossplane project noticed early on is that we were standardizing around this Kubernetes API. Whether you're provisioning workloads and orchestrating containers, which was the initial purpose of Kubernetes, or doing the wide variety of other things the Kubernetes control plane has been extended to do, integrating on that API allows all of those different tools to work together, and for you to have consistent workflows. So that was a core tenet from the beginning. As it's evolved over time, we've developed these three main feature areas. We're going to primarily look at the first two today, but you'll also get a flavor of the third. The first one is provisioning infrastructure from Kubernetes. That's pretty straightforward; that's what we're doing right now behind the scenes.
That's what adding that GCP provider and that Helm provider did: you're now able to create these types, and Crossplane knows how to provision them on the cloud provider. The second is publishing your own declarative infrastructure API. That's basically saying, yes, you want to be able to create these different managed resources, these granular infrastructure representations, but likely within your organization, or even just personally, you want some abstraction on top of that, right? You don't want to have to fill out the hundred fields in an EKS configuration every time. You might want to auto-populate some of those, or hard-code some of those fields and then allow for configuration, a common module approach that you'll see with infrastructure tools. And you may even want to publish that abstraction, distribute it, and let other folks use it or build on top of it. So we have a nested infrastructure composition concept in Crossplane, which we'll also touch on a little later. The last one is just running and deploying applications. Kubernetes is really already very good at this, and there are lots of frameworks built on top of Kubernetes that make it even easier. Because they're all standardized on the Kubernetes API, it's quite easy to integrate those with your infrastructure with Crossplane.

All right, this is just some more on the history and origin. We'll move past that and look a little more at the model. I'm guessing a lot of folks who watch OpenShift Commons are fairly familiar with Kubernetes and OpenShift, and are probably very familiar with Kubernetes operators, right? Kubernetes itself internally has a number of controllers that make it possible to provision container workloads onto different nodes in a cluster. But you can also extend that and add your own controllers and your own API types in the form of custom resource definitions.
So the core thing we're going to see with lots of projects is that they create their own CRDs: what CRDs does a project expose, and how does that extend the API of your Kubernetes cluster? And then their controllers register with the API server, watch for changes to instances of those CRDs, and take some action on your behalf in a declarative manner. With Crossplane, for instance, we saw earlier that we wanted provider-gcp installed. Basically, what that's doing is saying: I would like these CRDs, which represent GCP resources, and this pod to run with a container that has Kubernetes controllers in it. Those controllers talk to the API server and say, when someone creates a Cloud SQL database, let me know, and I'll go create it on GCP and then report back its status. That's a common pattern you're going to see across Kubernetes projects, and it's what you see in Crossplane as well. So we have a number of different providers here. These are just the major cloud providers that are listed, but we have countless others; you can go check those out in the crossplane org or the crossplane-contrib org on GitHub. We also have things that are not traditional cloud provider APIs. One that we're seeing today is provider-helm. Basically, anything with an API is something you can write a Crossplane provider for. In provider-helm's case, that API is the Kubernetes API. We've also seen toy providers for things like ordering pizza, creating a GitHub org, or sending a Slack message. So while traditionally folks are more interested in the large cloud providers, it can be really helpful to know that if you have proprietary APIs within your organization, you can easily write your own provider as well. Moving on to the next level beyond these managed resources: we want to be able to create abstractions over them, right?
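To make the Cloud SQL example concrete, a managed resource from provider-gcp looks something like the following; the field values here are illustrative, not from the demo repo:

```yaml
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
  name: example-db
spec:
  forProvider:
    databaseVersion: POSTGRES_11
    region: us-east1
    settings:
      tier: db-custom-1-3840
      dataDiskSizeGb: 20
  # provider-gcp writes the connection details here once the DB is up
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: example-db-conn
```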
Whether you're multi-cloud, multi-region, or just have anything that differs between your infrastructure, which I think includes every kind of organization, you want to be able to create abstractions on top of it. Over the history of cloud computing, which is still relatively short, we've seen a lot of different levels of abstraction, right? By that I mean, within a single large cloud provider like AWS, for instance, you'll have granular resources like an EC2 instance all the way up to something like a Lambda function. So there are these different levels of abstraction within a single cloud provider. Then you see specialized cloud providers, something like Heroku, that say: we're going to create those abstractions and only expose the abstractions to you. This is really useful, but the problem is that your organization is not choosing those abstractions, right? You are forced into another cloud provider's abstractions, and if the one you've chosen to integrate with suddenly doesn't have functionality that you need, it takes a lot of effort to add a new cloud provider. What Crossplane wants you to be able to do is create abstractions with pluggable backends. We're using provider-gcp today, and we're going to look at a multi-region example. You could also very easily have a provider-aws backend for the same abstraction that we've created, which today is a cluster. Our cluster consists of basically a Kubernetes cluster and a PostgreSQL database. We're satisfying that with a GKE cluster and a Cloud SQL database. Without changing anything at the abstraction level, we could switch out, behind the scenes, an EKS cluster and an RDS instance if we'd like.
But today we're actually just going to switch between configurations of Kubernetes and Cloud SQL on GCP, in an east region and a west region. So once again, you can see here that you're hiding that infrastructure complexity. The last thing is that you can include policy guardrails. I'm sure on this show there's been talk of things like OPA, the Open Policy Agent, where you can write policies that control who can provision what configuration of Kubernetes objects. Once again, when you expose these concepts as Kubernetes objects, these different projects integrate really well. You can write a rule that says no one can create a database larger than 20 gigs, or something like that. This is just a visualization of what it looks like to create abstractions in front of those resources. This is a slightly more complex example, where you see we're using Azure and AWS, and it also shows a little bit of how you can expose these at the namespace level. So if you have a multi-tenant cluster, where folks in different namespaces need different capabilities, you can choose what's exposed to which namespace.

All right, that's the overview; we're getting towards the end of this presentation, and like I said, I really like sticking with the demos, so I'm going to swap back over to that. Let's see how our provisioning is going. Like I said, we created an east and a west cluster right before we left, and we'll see here that both of those are now ready. So what does that mean? Before we went over to the presentation, the first thing we did was build this configuration package. When you have these abstractions and these different providers, they need to be able to be distributed easily.
Crossplane has its own concept of packages that go through the Crossplane package manager, which does things like enforce constraints around how many controllers can be reconciling a single type, and what permissions are given to controllers. You can also tune the package manager to say, for example, that no controller should ever be able to look at ConfigMaps. That gives you some strong guarantees. Obviously, if you go outside of the Crossplane packaging ecosystem and have other things running in your Kubernetes cluster, it can't make guarantees about those. But that's where we see a lot of folks migrating towards having a cluster dedicated to infrastructure provisioning. That's what we're doing today, and you'll see a little more of that down the line as we provision new clusters and then put services into them. So the first thing we did was build our configuration package; that's in our package directory here. This Configuration type here is actually not a CRD; it just follows the same schema. It's basically informing the Crossplane package manager: I'm a configuration package type, as opposed to a provider package type, and I depend on these two other packages, which are providers. provider-gcp, as we saw earlier, brought all those GCP types and their controllers. If we look at the pods in crossplane-system now, we can see that we have provider-gcp and provider-helm running, in addition to the RBAC manager and Crossplane itself. So those were installed successfully; Crossplane took care of that for us. The other thing we're doing in this configuration package (once again, configuration packages carry these abstractions around and have dependencies on providers, while providers bring the granular resources and controllers) is defining a composite resource definition, which is frequently abbreviated to XRD in Crossplane parlance.
We're calling this a cluster. We're saying we want an abstraction, available to folks in our infrastructure cluster, that allows them to create clusters. Then behind the scenes, we're creating a single Composition here that says: I satisfy the cluster type that was defined as an XRD, and when a cluster is provisioned, based on the fields that are given to it (here the only field in the spec is a region, west or east), I'm going to provision a GKE cluster, a node pool, a Cloud SQL instance, and a Helm ProviderConfig, which we'll touch on a little more in a moment. Essentially, this allows you to create a large abstraction over some pretty complex resources, like a GKE cluster, a node pool, and a Cloud SQL instance, and make sure they're always provisioned consistently, right? Here we're creating a database in the same region as the GKE cluster and node pool, which means later on, when we provision a Helm chart into the GKE cluster that was created, it's going to consume a database from the same region. Likewise, when we go to the west, the service there is going to consume the west Postgres database. We're doing that by using the light patching system Crossplane has in Compositions, which allows us to overwrite the spec we have here. We're basically saying: based on the region that's provided, go ahead and overwrite the region in the Cloud SQL instance, in the node pool, and in the GKE cluster, and that maps to the actual regions on GCP. You could also imagine having many Compositions.
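The demo repo itself isn't reproduced here, but the shape of the cluster XRD and the region patching described above is roughly the following; the API group, region names, and the east/west-to-GCP-region mapping are assumptions for illustration:

```yaml
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: clusters.demo.example.org
spec:
  group: demo.example.org
  names:
    kind: Cluster
    plural: clusters
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                region:
                  type: string
                  enum: [east, west]
              required: [region]
---
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: cluster-gcp
spec:
  compositeTypeRef:
    apiVersion: demo.example.org/v1alpha1
    kind: Cluster
  resources:
    - name: gkecluster
      base:
        apiVersion: container.gcp.crossplane.io/v1beta1
        kind: GKECluster
        spec:
          forProvider:
            location: us-east1    # overwritten by the patch below
      # The same patch pattern is applied to the node pool and
      # Cloud SQL instance entries, which are elided here.
      patches:
        - fromFieldPath: spec.region
          toFieldPath: spec.forProvider.location
          transforms:
            - type: map
              map:
                east: us-east1
                west: us-west1
```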
Instead of having the values flow through here, we could have a whole separate Composition, label one of them east and one of them west, and based on labels on the instance of the definition that's created, select one or the other, or select an AWS cluster, and you could create different criteria for doing that. If we jump back over to our cluster, we should see that those different resources we defined are actually present. If I do the shorthand get gcp, you'll see that we have two Cloud SQL instances running, one east and one west, and it looks like they're ready to go. If we go up a little further, we should see two GKE clusters, one east and one west, with a node pool attached to each, all ready and running. We also created a ProviderConfig here, which is basically how you configure a given provider: you point it at the credentials secret, and then instances for that provider reference it. For the GCP provider, we actually just created one named default, which means that's the one provider-gcp will use to provision infrastructure if another one isn't specified. Once we're provisioning to two separate clusters, two different APIs, with the application that's going to go into the GKE clusters we provisioned, we need two separate ProviderConfigs. So alongside the east and west variants of the GKE cluster and the database, we're also creating a Helm ProviderConfig that points to the secret created from provisioning that GKE cluster. Up here, if we look at the GKE cluster, you'll see we have writeConnectionSecretToRef, and we're saying write it to the crossplane-system namespace and name it kubeconfig- plus whatever region it's in.
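A default GCP ProviderConfig of the kind mentioned here looks roughly like this for provider-gcp 0.15; the project ID and secret name are placeholders:

```yaml
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default    # used when a managed resource names no other config
spec:
  projectID: my-gcp-project
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-creds
      key: creds
```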
If we look in the crossplane-system namespace, we see that we have a kubeconfig-east and a kubeconfig-west, and you'll see we did the same thing with the database: we have a database secret for east and one for west. All right, if we hop back down to the Helm ProviderConfigs, we should see that we have two, one referencing kubeconfig-east and one referencing kubeconfig-west, which basically gives provider-helm the ability to provision things into either one of those clusters. Once again, we'll take a look at the ProviderConfigs; these are cluster-scoped. You'll see we have helm-east and helm-west, and if we make those a little bigger, we can see that we're referencing kubeconfig-west in the helm-west ProviderConfig. Excuse me. All right, the next thing we want to do, now that we have all this infrastructure provisioned and managed by Crossplane, is provision applications into it. Specifically, what we want to show today is that when I put something in the east cluster, it's going to use the database in the east region, and when I put something in the west cluster, it's going to use the database in the west region. What we're going to be provisioning today is actually a documentation site for CRDs. I'll pull this over here right now: doc.crds.dev is a website where you can go look at documentation for different CRDs. For instance, if we installed all of those provider-gcp CRDs and want to see which ones are available, it's basically going to parse the repo and discover all of them for us. It's running a little slow right now, but you can see it looks at all the versions in that repo and shows us the different ones available. So, for instance, if we needed to see the GKE cluster spec while writing our configuration, we'd just click on it. Once again, my internet is a little slow here, but it will come up momentarily.
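The two Helm ProviderConfigs described here would each point at one of the kubeconfig connection secrets, along these lines (the API version and secret key reflect provider-helm of that era, and the names are assumed from the narration):

```yaml
apiVersion: helm.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: helm-east
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: kubeconfig-east
      key: kubeconfig    # key written by the GKE cluster's connection secret
```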
You'll see it's just the v1beta1 container.gcp.crossplane.io API, and we can see all the different fields that are available, which basically helps us fill it out. So this is the application that we're going to be provisioning today, in two different regions. Once again, in our configuration we defined an app, and this also has just a single field, so it looks very similar to our cluster XRD. You'll see here that I'm referencing a doc Helm chart that I just published this morning, and then providing some values from the DB secrets that we provisioned. We're basically saying: provision this Helm chart with these secrets for the given cluster. If it's in the west, we're going to use the west database; if it's in the east, the east database. Those flow through to the application, which basically tells it which database to connect to. So if we hop back over and look at our XRDs again, we can see that we have apps and clusters. We want to create two apps now, one for east and one for west, and I believe I have the commands for that. Yes, we do. I'll go ahead and create those. What that's going to do, once again looking at our Composition, is create a Helm Release, which is basically a representation, in our infrastructure management cluster, of a release that gets deployed into another cluster. If we look behind the scenes, we can see there are two Releases; they've both been deployed and installed successfully, one in the east and one in the west. If we look at our abstraction over them, they're not quite ready yet, but in a moment it should show that they are ready too, because their one composed resource is ready. What we can also do, which is kind of handy because we have those kubeconfig secrets in our cluster, is connect from my local machine and take a look. Our apps are now ready there.
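Behind the app abstraction, the Helm Release that provider-helm creates takes roughly this shape; the chart repository, value paths, and secret keys are illustrative, not taken from the demo repo:

```yaml
apiVersion: helm.crossplane.io/v1alpha1
kind: Release
metadata:
  name: doc-east
spec:
  providerConfigRef:
    name: helm-east    # deploy into the east GKE cluster
  forProvider:
    namespace: default
    chart:
      repository: https://charts.example.org    # illustrative
      name: doc
      version: 0.1.0
    set:
      - name: database.host
        valueFrom:
          secretKeyRef:
            namespace: crossplane-system
            name: db-east    # connection secret from the east Cloud SQL instance
            key: endpoint
```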
Once again, in the crossplane-system namespace, we can see that we have our kubeconfig-east and kubeconfig-west. I'm going to pull those out into kubeconfig files here, because instead of exposing these via a service or something like that on the public internet, I'm just going to port-forward them both locally. What we want to see is that they're talking to different databases, right? One should talk to east and one should talk to west. We should see both of those are now present. In one of my terminal windows here, I'm going to port-forward to 8081 from the east cluster. Oh, I do need to check; these are in the default namespace, and I need to change the names. So this is the east kubeconfig; let me update this. And in the west, we have our instance in the west. Like I said, I'm going to port-forward both of these to my local machine and show that they're connecting to different databases. We have our east and west set up. Let me open a browser window here and pull it over. All right, first I'm going to look at, let's see, which port was which? Okay, 8081 is going to be the east, so I'll connect to that. All right, we're able to connect. Our app is running successfully, but we haven't actually hit the database for anything yet. What I'm going to show is that if you ask doc.crds.dev for a repo it hasn't indexed, it will index it behind the scenes on the first request. So the first time we ask for it, the repo should not be available, and it will say: we'll index that, please try again soon. I'll just do the crossplane repo here. There we see it's not indexed yet; they're working on that. So that's great. And if we refresh the page, we can get results. This isn't the latest version; it's indexing all of them right now.
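The local setup walked through here amounts to something like the following; the secret keys and service name are assumptions based on the narration:

```shell
# Pull the kubeconfigs Crossplane wrote into connection secrets
kubectl -n crossplane-system get secret kubeconfig-east \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > east.kubeconfig
kubectl -n crossplane-system get secret kubeconfig-west \
  -o jsonpath='{.data.kubeconfig}' | base64 -d > west.kubeconfig

# Port-forward each app to a different local port
kubectl --kubeconfig east.kubeconfig port-forward svc/doc 8081:80 &
kubectl --kubeconfig west.kubeconfig port-forward svc/doc 8082:80 &
```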
But you can see that behind the scenes, the website has gone and indexed the repo, and if we kept refreshing, we'd eventually get all of it. So 1.0 is the most recent version, and we have documentation on all of these. This is talking to the east database, right? Now what we want to do is look at the application in the west and make sure that it's actually communicating with its own database. What we should see, since it's using a separate database (and you can imagine that in a real-world scenario you might want something like a replica of the database instead), is that it has not already indexed the crossplane repo. Perfect. It hasn't, because this is using a separate database; it's a separate app instance. So it's going to go ahead and index it, and now if we refresh, it's catching up to the other one, slowly but surely. We should eventually see that it has indexed all of the versions of crossplane. It's taking a little bit of time here, but anyway, you're able to see that it's using a different Postgres database, and it's done indexing now. So you've now seen that you can, excuse me, provision applications to different clusters and have them consume the infrastructure that's close to them. This is kind of a trivial example, right, because we're just passing through the region that we want. Since you're standardized on the Kubernetes API, you could do things like evaluate the capacity of each cluster, or use some other metric that says this should be provisioned here and consume this infrastructure. Here we're talking about consuming the database that's close to each application; you can also think about edge scenarios where you may want to use an in-cluster solution as opposed to an external database, or something like that.
And that's getting a little closer to what Chris is going to talk about here and some of the work that he's been doing. Okay, I'm going to stop sharing my screen and pass it over to your demo. We can take a look at that and then talk about how these two things can work together.

Perfect. Hopefully you can see my screen. Looks like we're just getting the... you are sharing your screen again here. Oops, let me try that again. No worries. Okay, how about now? Fingers crossed. It looks good. Okay, perfect. Yeah, I'm going to be talking about a hybrid cloud deployment of Red Hat Quay, or "Key" if you want to say it the technically correct way, using Crossplane on OpenShift. I'll skip the intro slides since we've already gone over that. The basic idea here is that we want to integrate and compose different AWS resources, along with the already existing Quay operator, to create a reusable and automated package for deploying Quay. The real thing that brings everything together in this demo is the in-cluster provider, which is a newer provider that I'm the primary maintainer of at this point. The idea of the in-cluster provider, as the name suggests, is to provision resources within your Kubernetes or OpenShift cluster. The benefit is that we can mimic the interface of all the cloud-provisioned resources. So now you can interchangeably provision an RDS instance, or a Google Cloud SQL instance, or Postgres locally inside your cluster. We can then fulfill a requirement with any implementation, based on what your use case is or where you want to run it. As Dan mentioned, you might want to run in-cluster if you're in an edge scenario, or on RDS if you're running OpenShift inside of AWS. But there are trade-offs, because of course we can't replicate all resources.
If there's something proprietary, we can't necessarily replicate it with the in-cluster provider. Something else we have support for is operators: you can create any OLM or OperatorHub supported operator using the in-cluster provider, which is actually how we create the Quay operator. There are a couple of different components to this deployment. We create a custom catalog source, and we're also going to be utilizing the Helm provider, the AWS provider, and the in-cluster provider. The AWS provider is what primarily provisions our resources, so Redis, S3, and Postgres, and it also handles the networking, security, and IAM. The Helm provider is responsible for more granular resources, along with jobs for configuring the database prior to actually starting up Quay. So how do we bring it all together? Well, we first install Crossplane and the providers, then we set up the configuration, set up the provider configs, and configure and create a requirement. So let's run the demo. First things first, we have to run the make crossplane command and the make provider command. These commands are already set up within the Crossplane Quay repo, which there's a link to at the end of the presentation. If we run make crossplane, we'll see that we start creating Crossplane within our cluster. It's just going to take a second, and once it's done, we will have created the crossplane-system namespace up here, along with the actual Crossplane deployment. We can check that. Perfect, Crossplane is up and running. Then we can run the make provider command, which sets up all three of the providers. Something to note for OpenShift: you also have to set up a ControllerConfig, which is basically responsible for doing some setup work on the deployment for each of the providers.
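The ControllerConfig mentioned here is a Crossplane type that tweaks how a provider's deployment pod is run. A minimal sketch might look like the following; which fields you actually need depends on your cluster's security context constraints, so treat this as illustrative:

```yaml
apiVersion: pkg.crossplane.io/v1alpha1
kind: ControllerConfig
metadata:
  name: aws-controller-config
spec:
  # Illustrative: leave the security contexts empty so OpenShift can
  # assign UIDs itself instead of the provider pinning runAsUser.
  securityContext: {}
  podSecurityContext: {}
```

A provider then opts into it via a controllerConfigRef pointing at this object's name.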
Then we're going to create our authentication secrets for AWS, along with one for this Kubernetes cluster, or this OpenShift cluster, that I'm running on. And in a second, all three of the providers will be created. So we can check that these are all spinning up now. This, as Dan mentioned before, populates all of the CRDs for the different resources exposed by AWS, Helm, and also the in-cluster provider. Next up, we can set up all of our CRDs using the make configuration and then the make catalog commands. make configuration installs a Crossplane package, and this package exposes different XRDs, which are responsible for provisioning different sets of resources. So if we go into the repository, we can see that we define one XRD specifically for an S3 bucket. And this actually wraps a couple of different underlying resources: an IAM user; an IAM user access key, which creates a set of access keys as a secret; the actual bucket; and a bucket policy, right? And there are similar XRDs for Redis (ElastiCache) and RDS, which is Postgres, along with some networking resources and, in the end, the actual Quay resources for the operator. So if we do make catalog, all of our XRDs are now created. And the final step to actually get everything spinning is to run make quay and then make watch. But as Dan mentioned, oftentimes spinning things up on AWS can take a bit of time, so I've just pre-recorded what it would look like if you were to run this. Once we run make quay, we'll see that it creates the requirement for the XRD resource, and we can then watch all the resources spinning up. So we'll see that we've already created the subnets for our VPC, and then we create our different networking resources for the security group and route table. Sorry, my internet's not that great today. And we'll see that our IAM user and the IAM user access key are both already up and running, and the bucket has also been created at this point.
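To make the "XRD wraps several resources" idea concrete, here is a heavily abbreviated sketch of what the Composition behind such an S3 XRD might look like. The composite type, resource names, and field details are illustrative assumptions, not copied from the crossplane-quay repo:

```yaml
# Hypothetical Composition: one composite S3 abstraction fans out into
# the four underlying AWS managed resources Krish lists above.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xobjectstorage.aws.example.org
spec:
  compositeTypeRef:
    apiVersion: storage.example.org/v1alpha1
    kind: XObjectStorage
  resources:
    - name: iam-user             # IAM user that will own the bucket
      base:
        apiVersion: identity.aws.crossplane.io/v1beta1
        kind: IAMUser
    - name: access-key           # credentials land in a secret
      base:
        apiVersion: identity.aws.crossplane.io/v1beta1
        kind: IAMAccessKey
        spec:
          writeConnectionSecretToRef:
            namespace: crossplane-system
            name: bucket-keys
    - name: bucket               # the S3 bucket itself
      base:
        apiVersion: s3.aws.crossplane.io/v1beta1
        kind: Bucket
    - name: bucket-policy        # grants the IAM user access
      base:
        apiVersion: s3.aws.crossplane.io/v1alpha2
        kind: BucketPolicy
```

Creating one claim against the XRD then causes Crossplane to reconcile all four managed resources as a unit.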
And now we're just waiting for the replication group for Redis and the RDS instance. So RDS is done, our operator has been created, and it will finish in just a second. Perfect. Now that all of those are done, we can validate that our actual pods for Quay are coming up. If we just watch that, those will take another second or two. I'm fast-forwarding through this because I don't want to make everybody wait for two minutes, but the Quay deployments will finish in just a second, and then we can switch over to the OpenShift console and validate that both deployments are up. And yeah, they look good to me. And we can switch over to the actual routes under the networking section and access Quay. And just like that, we have a container registry that we can use within our enterprise, for example, or for any other application that you would deploy. This isn't relevant right now, but you can also run make clean, and that will clean up all the dependencies for Quay along with Quay itself. The Kubernetes resource model allows us to clean everything up through the reconciliation loop, and we don't need to worry about any collisions. So that's all for my demo. I'll hand it back to Dan. Yeah, that was awesome, Krish. If anyone else wants to see more demos like this, we did have a Crossplane Community Day before the end of the year that Krish and a lot of other awesome individuals presented at, so there are a lot of great demos and solutions that you can check out. Also, we have a livestream, The Binding Status, which we do every two weeks, which you can see on the Crossplane YouTube channel. But as we're nearing the end of our demos and our overview of Crossplane, we're obviously on OpenShift Commons here, and Krish and I, along with other folks, have been talking for the last few weeks or so about what the future looks like for Crossplane and OpenShift.
So one of the things that's kind of interesting about the Crossplane model, which I mentioned earlier, is this ability to pass things through its package manager, and that gives you some guarantees. However, if there's something else that's also trying to manage that, for instance OLM from Red Hat and OpenShift, there can be conflicts, and I think we've gone a long way toward reducing some of that with things like the ControllerConfig, which Krish referenced there, which basically allows you to customize different parts of how that installation process happens. But Krish, I just wanted to ask you, and maybe we could have a conversation, about some of the things that you are excited about in regards to integration between Crossplane and OpenShift, and some of the things you see challenges with. Yeah, I'm really excited to see the continued growth of Crossplane as a project, along with seeing more and more people getting involved. I do think there's still a lot of room to grow and a lot of interesting things that we can do using Crossplane and OpenShift, whether that's multi-cluster or looking at edge scenarios. I think those are two areas where we're definitely going to be seeing a lot more interest in the next few months and as the project grows. Yeah, for sure. One of the things that was a larger focus earlier on in the Crossplane project was this concept of intelligent scheduling. In my demo and Krish's demo, we talked about scheduling infrastructure, or scheduling the consumption of infrastructure, to different places, whether it's regional or whether the location is in cluster or external. But that was mostly a somewhat manual process, in that we had to say, "I want this in cluster," or "I want this east," or something like that. But I alluded a little bit earlier to potentially being able to automate some of that.
So one of the nice things about being on the Kubernetes APIs is you could do something like I mentioned, of having a mutating webhook that says, put this in the east if the west is really overloaded or we're seeing more traffic, and you could actually design a whole separate system that integrates really well with Crossplane. But I hear that from folks on OpenShift a lot, and I admittedly don't have as much experience with OpenShift as I do with vanilla Kubernetes, but what are some of the customer use cases that you see, Krish, of folks that are wanting to do this sort of intelligent scheduling? One of the teams that we've been working with a lot, one of the customers, or someone who is looking to adopt Crossplane for their own project: something that they've seen as one of the core features of Crossplane, and one of the reasons that they're looking to adopt it, is that they're really interested in being able to schedule workloads intelligently. Whether it's, you know, if you want to create a workload for production, being able to deploy that with a custom XRD that's set up for AWS, for example, right? And then, with the flip of a switch, being able to switch that to a development workload where things are provisioned in cluster. So when it comes to scheduling, just that level of control and flexibility, and also the fact that it's opaque to developers, is, I think, something that is really appealing for commercial teams and enterprises.
Yeah, absolutely, that's a great point. I think there are folks all across the Kubernetes ecosystem that are looking for that kind of workflow, and you can see that through CI pipelines and that sort of thing. That's where we see a lot of folks using Crossplane with something like Argo, where they have, based on the stage that they've defined in Argo... or just last week I did a livestream with Keptn, which is from Dynatrace, and they do things like different development, staging, and production steps in a workflow, and run tests against them. One of the things we were talking about in that case was temporarily spinning up development infrastructure to run these tests. So maybe I spin up a new Kubernetes cluster using Crossplane and run load tests on it, and that's isolated away from my production workloads and that sort of thing, and then I can just tear it down, right, with these same systems, like Argo or Keptn or whatever your favorite one is. So there's a lot of flexibility in being able to do all these operations from the Kubernetes API. But yeah, those are the main things I wanted to cover. I think we've gone through quite a lot today. Chris, was there anything else you wanted to chat about today?
Yeah, I guess the last thing that I wanted to say is, if you're looking to get involved with the Crossplane project, feel free to reach out to Dan or me, or hop on the Crossplane Slack. We're always looking for new contributors, right, Dan? Yeah, absolutely. And there are lots of opportunities, even if you're not that familiar with Kubernetes development. If you're some type of operator or something like that, and you want to put together a demo, or turn the different projects that you like to combine into your cloud recipe into a guide or a livestream or something like that, we'd love to have you. Also, another thing that just started up, which we're looking to get more folks involved with, is an on-duty rotation, which may not sound super fun, right? It sounds like being on call for free, but it's a little bit different from that. There's obviously flexibility with your time and that sort of thing, but it does give a good opportunity to continue to cultivate our Slack community, which, in my opinion, is a pretty good one, with lots of folks helping each other, which I think is a common thread across the cloud native ecosystem. But I've been particularly happy with a lot of the folks who have decided to join our community. All right, well, I think that's all we have for y'all today. Like Chris said, definitely feel free to reach out to either one of us individually, and we'd love to have a conversation with you or talk about your use case or anything like that. But I'll hand it back to Karina here to round us out.
Thanks, Dan. Thanks, Krish. So, speaking of maintainers, and I hope you don't mind, but we do have a couple of other contributors that are watching right now. The IBM team has been doing a lot of work integrating IBM Cloud into Crossplane. So, Paolo, I don't know if you wanted to jump in and say anything, or now you're not going to speak to me anymore because I'm calling you out like this. But can you talk a little bit about the work that you've been doing? Yeah, sure, thanks for your question. The work we've been doing... yeah, it's actually been a very interesting journey together with the Crossplane community. Dan has been very helpful for us, and I think in a very short period of time we were able to deliver an initial release of the IBM Cloud provider. This allows us basically to integrate a few IBM services from the IBM catalog, and so this gives us this portability aspect. The way I see Crossplane now is through this aspect of portability, right? We have customers that want not just portability of the workload, but also of the infrastructure services they require. So I'm running, for example, OpenShift, maybe on AWS, but now I somehow want to move this to, I don't know, GCP or IBM Cloud, and have all this portability of the services using these common definitions. This, for us, is a very important aspect, and one of the reasons I think Crossplane has a lot of value for our customers. And that's an area, as you said, this area of smart scheduling, that I think is a very interesting aspect as well. And probably by having operators that can use the Kubernetes API provided by Crossplane, we can make some of this more autonomic, in a way more intelligent, right? We can do this kind of autopilot now for workloads and infrastructure. I think there are a lot of possibilities that we definitely want to explore. That's awesome. I love, you know, all the
community coming together, right? I mean, we've got Upbound, IBM, Red Hat. So for other people that want to join and write service providers, when is your community meeting? Yeah, so we have a community meeting every other Monday. It would have been this week, but it was canceled due to MLK Day yesterday. So not next Monday, but the following one, we'll have our community meeting, which is at 10 a.m. Pacific. I think I need to double-check that, but it's posted on the Crossplane website and also on the Crossplane GitHub page. And that's definitely a place where you can come and bring ideas that you have, bring use cases or questions or anything like that. We would love to have you there and love to, you know, help out as best we can. Awesome, thanks. And for anybody that's watching: Chris, are there any questions in the livestream chat? And feel free to drop any questions into this chat. And thanks, Paolo, always love having you on these calls. You're welcome. Absolutely. I don't have any unanswered questions in chat right now, but there's a lot of going back and forth in the chat. Yeah, folks have been talking about what they've been doing with Crossplane. It's pretty fun to see users and how they're enjoying the experience of using any product, and Crossplane is one of those nice ideas where kubectl becomes the de facto interface for everything, and that's a huge win for people. So really, really cool to talk about it. One of the things that I love that Paolo was talking about there is the flexibility that provides, right? Providers are kind of just, you know, Kubernetes operators, if you will, so you can really get this nice interaction between these different pieces of the ecosystem. And one of the things that I know I've been chatting with Chris, and also Scott from Red Hat, about recently is getting an even smaller kind of operator deployment unit, which we're referring to as functions, that gives you some of that day-two operations feel. So something like, I
want, every time I delete a Cloud SQL database, to send a Slack message or do some extra cleanup. Because if you've worked with cloud infrastructure, I'm sure you're very aware that it has the capability to leak resources. In a lot of these cases, we're solving for the generic use case in the provider: you create the instance, you bring it back down. We can't really make assumptions about your day-two desires, how you specifically would like to work around that. But if we make a smaller deployment unit that makes it really easy for you to just script out some actions, then that could really enhance the workflow for a lot of people as well. So, we have a couple of great questions in chat. Can you talk about the intersection between Crossplane, operators, and Helm, and why you'd want to use them together versus standalone? Yeah, absolutely. There are a couple of different facets to this, and I see that some of the other provisioning solutions are mentioned as well. So right off the bat, one of the things we've talked about is the standardization on the Kubernetes API, right? A single interface. If you've worked with other infrastructure-as-code tools, having an imperative system, or just a different set of tooling to provision your infrastructure separate from your Kubernetes environment, can lead to some fragmentation. It's also hard to synchronize a lot of those operations and create useful abstractions in front of the infrastructure. So that's one of the benefits.
Another one is that Kubernetes and Crossplane are running continuously, so it differs from infrastructure-as-code tools in that it's always watching your infrastructure. Let's say I provisioned a Cloud SQL instance in this demo, and then I went and tried to scale it up to 10x its current capacity. Crossplane would say, hey, that's not what you said you wanted when you provisioned this, I'm going to scale that back down. And if you want it scaled up, you come to the source of truth, which is your Kubernetes API. So that's one of the benefits. Looking more at the operator side, one of the questions we sometimes get asked about the different providers, since they are Kubernetes operators themselves, is why not just helm install them in the same way that we helm install Crossplane? And that goes back to some of the benefits of the package manager that I was talking about earlier. One of the things that we saw is that in Crossplane you can establish dependencies, which addresses a real problem in the Kubernetes community: managing dependencies that are deployed in your cluster. Being able to have dependencies managed in a central location means that I can install that configuration, and in that case it brought in Helm and GCP. That was kind of a simple case, but let's say they were already installed and were the wrong version: it would give me information about that, or it would say, please update to this other version. So you get some guarantees there. And then the big one, I'd say, is that when it comes to reconciling these CRDs that we installed, Crossplane is going to guarantee that only one operator is actually acting on those CRDs. If you have a lot of operators installed in a cluster, it can be very confusing to associate different API types with those operators. So when you bring in a new provider, a new configuration package, or something like that,
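Those dependency guarantees are declared in the configuration package's metadata. A rough sketch of what the crossplane.yaml for a configuration like the one in Dan's demo might contain (package names and version constraints here are illustrative assumptions):

```yaml
# Hypothetical package metadata for a configuration that depends on
# Crossplane itself plus the GCP and Helm providers. The package
# manager checks these constraints at install time and surfaces
# conflicts instead of silently overwriting what's in the cluster.
apiVersion: meta.pkg.crossplane.io/v1alpha1
kind: Configuration
metadata:
  name: cross-cluster
spec:
  crossplane:
    version: ">=v1.0.0"
  dependsOn:
    - provider: crossplane/provider-gcp
      version: ">=v0.15.0"
    - provider: crossplane/provider-helm
      version: ">=v0.5.0"
```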
it's going to say: all right, you are the owner of the types that you install, no one else is going to mess with them. And it's going to guarantee that another operator isn't going to come along and break them. Once again, that's all configurable, but if you stick to the base functionality, you are going to get all of those wins by standardizing on Crossplane there. Now, what about provisioning solutions like Terraform? Is there an intersection between Crossplane and Terraform? Yeah, I mean, one of the benefits of having such a robust cloud native ecosystem, and Chris, feel free to jump in on this one too, is that some folks really like HCL and some other people really don't, and some people like YAML and some people don't. Some people want to write their configuration in an actual, well, that's probably slanderous, in a more traditional programming language, maybe like TypeScript or something like that. And the nice thing about having this wide ecosystem is that there are all these tools to take whatever source you like and essentially compile it to the Kubernetes API, or that's how I like to look at it, kind of like a compiler toolchain. Terraform, for instance, has resources to actually be able to use HCL to produce Kubernetes objects. So just like I applied YAML here, you could write your HCL to create these objects as well, using Terraform as a front-end there. Similarly, we work with CDK from AWS to create a TypeScript front-end for doing this type of thing, and then you can get all the benefits of versioning and whatever distribution mechanism that programming ecosystem uses. You can get all those benefits by standardizing on one of those. So that's one of the ways I feel people have an easier path for migrating. But Chris, have you had any experience with those tools?
I don't have much to add to what you said, but the one thing I'd chime in with, one of the things that I like more about Crossplane when compared to Terraform or some of the other tooling, is that since it's so integrated with the Kubernetes resource model and Kubernetes as a whole, you really get to take advantage of the whole body of tooling that is available as a result of being part of Kubernetes. So Dan showed doc.crds.dev before, the tool for visualizing CRDs exposed by any repository, and there's also tooling like Helm, and everything that exists in the Kubernetes world for IAM and for orchestration. That's one of the really big benefits from my perspective. Awesome, thank you. We have one minute left. Do you have any final thoughts, or are we done for today? I don't think there's too much. I would just echo again what Chris was saying earlier about the community. That's a really big part of what open source means to me personally, and I know it's true for a lot of the other Crossplane maintainers. So yeah, if you have any desire to be involved, or if you're just looking for a place to learn, you're definitely welcome here, no matter your background, experience level, or anything like that. So please feel free to join us in Slack, on Twitter, etc. So, crossplane.io? That's right, everybody, just go to crossplane.io and you can access GitHub and Slack, etc. But thank you so much, we're out of time. We do have one last question, about how it relates to Knative, but hey, everybody, go to the Crossplane Slack and they can answer it there. I will say very quickly, there is a livestream where we have Matt, one of the Knative maintainers, so if you want to check that out on YouTube, give that a look. Nice, that's really good to know. All right, awesome. Thank you so much, both of you, Dan and Chris. That was awesome, really great demos and overview and discussion. And thanks for having us. Yeah, thank you. And Chris, do you want to see us out?