Hey everybody, if you are watching right now, we had a slight technical hiccup, so Victor and Nick will be with us very shortly. We apologize for that. We'll just wait for them. As we're waiting, I'll really quickly highlight Crossplane. Crossplane is a Kubernetes add-on that basically enables your team to assemble infrastructure resources that may exist outside of your Kubernetes environment, think of a cloud resource like AWS RDS or ElastiCache, via a declarative model in the Kubernetes realm. So instead of going to the API or the console for AWS, you would... oh, he's made it, Victor's here. Okay, cool. Hopefully no more issues. One more person, we'll wait for them. But Victor, I was just talking really quickly about what Crossplane is. There he is, Nick is here. Okay, all right, so I think we're back on track. Thank you to everyone in the background doing all of the heavy lifting and making sure production is going as it should. We apologize for the hiccups; let's continue. I'm gonna let Victor and Nick introduce themselves. Go ahead and take it away, Victor. So yeah, I'm Victor. I am principal something-something at Codefresh. I mean, I'm not sure what I do anymore, but I work for Codefresh. We are the company behind a few products, and lately my major interest is around GitOps. I'm a huge fan of Argo CD and similar products, and Crossplane just happens to fit perfectly into that movement, let's say. Awesome, thank you. Nick, go ahead, why don't you tell us more about your background? Hey folks, my name's Nick. I'm also principal something-something at Upbound, where I've been leading Crossplane engineering for the last two years from the Upbound standpoint. Upbound is the company that originally founded and open sourced Crossplane, as well as the Rook.io project for the CNCF. I am a steering committee member and pretty active contributor to Crossplane.
I definitely have a lot of architectural insight into how it's all put together. My background is site reliability engineering, same as you, I think, Mario. I was an SRE for about 10 years at places like Google and Spotify before I decided to go try and build my own infrastructure project. Very, very cool, yeah. Well, you both are at higher levels than I am, so you have much more industry experience and much more of an idea of how to build something of this stature. So with that, it sounds like Victor has a presentation, a scenario he wants to share with us. I'm gonna let him take the floor, share his screen and get that going. And then Nick, being from Upbound, being the principal that he is, is going to provide insights, and a lot of your questions will probably go to Nick, I think, if I can't answer them myself, which I won't be able to, I guarantee it. So Victor, why don't you go ahead and get your screen shared. And again, please use the chat facility in whatever platform you are using to watch this and leave any questions you have. We will get to them as we can throughout the hour. Thank you for tuning in. And Victor, go ahead and take it away. Sure. Okay, so I'll try to keep it short so that we can discuss and potentially answer questions instead of you just staring at my screen. I hope that you see a black screen over there; that's my terminal. I didn't do much in advance. I created a simple, quick Kubernetes cluster, I installed Crossplane and I installed the GCP provider, simply because I will be using GKE today, but basically whatever I'm doing applies to Azure, AWS and whatnot. So let me actually start creating a cluster using Crossplane, and since that will take five, six minutes, something like that, I can talk about why I'm doing all this and why bother in the first place.
So I'm using Kubernetes, right? So kubectl apply, filename, and some Kubernetes definition, in this case gke.yaml. And that's about it. Now, there we are. Let me show you the... actually, before I show you the definition, very quick: I'm not a developer of Crossplane, I'm just a user, and I have certain motivations that led me to it that might not be the same for everybody. So I will speak more from personal experience about the reasons for it in the first place. On one hand, if you look at infrastructure as being something that we define as code, and then we apply those changes somehow and infrastructure and services and whatnot happen automatically, then from that perspective Terraform, Pulumi, Crossplane and many others are all the same. We can just talk about how many libraries they provide and the differences in the syntax, but I don't really care much about those things. What I do care about a lot is the Kubernetes API, right? I do not really see Kubernetes as being a platform for running my containers. From my perspective, that is more like a first step towards something else. And that something else is that Kubernetes is the closest thing we ever had in this industry to a single API through which we can do everything. And that everything can be scheduling containers or creating infrastructure or managing VMs or, I don't know, working with Mac machines or whatnot. Theoretically it could be anything, and it is designed to do that, even though most of us are focused on containers at the moment. So going back to Crossplane, I tend to lose focus very often. The reason why I like it is because it allows me to use the Kubernetes API to do absolutely everything: my applications running in Kubernetes, infrastructure being somewhere else, and so on and so forth.
And another thing: since it is based on CRDs, Kubernetes CRDs essentially, and Nick will correct me, that means I can leverage the whole ecosystem that we have set up around Kubernetes. If I like using Kustomize, I can use Kustomize to manage my Crossplane definitions. If I like using Helm, that's also okay. If I want to use Flux to manage my GitOps processes, or Argo CD, I can do that as well. So it's more about looking for something that is part of a broader ecosystem than evaluating the value of that something in isolation. And there is nothing that beats Kubernetes today. Anyway, those were my main motivations. Now, let me go back to what I created, right? So I executed this definition here, which is the simplest possible I can have. And this says, hey, I want to create a Kubernetes resource that happens to be called GKECluster, and, well, you can probably guess from the name what it is, right? It's a GKE cluster, and I want it to be in us-east1. And then I want to create a node pool, and that node pool is based on the GCP API, so obviously it will be a node pool in Google Cloud, plus a few other parameters. If you're familiar with Terraform or Pulumi or whatnot, these are more or less the same arguments, the same definitions I would use there, except that I can now treat this the same way I'm treating almost everything else I'm doing today: my applications, creating StatefulSets, Deployments, Knative, this or that, right? Everything goes in the same box, using the same API, Kubernetes in this specific case. Now, Victor, really quick, this is great. We actually had a question: someone wanted to see your GKE YAML. Yes. And this is his gke.yaml file, which he has cat-ed out for you to view. He's showing you two different manifests in here. And these manifests are custom resources, right?
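For readers following along at home, here is a minimal sketch of what such a manifest might look like, assuming the classic crossplane/provider-gcp container API group; the exact kinds, API versions and field names vary between provider releases, so treat this as illustrative rather than a copy of Victor's file:

```yaml
# Illustrative only: names and versions follow the classic provider-gcp
# container group and may differ in your provider release.
apiVersion: container.gcp.crossplane.io/v1beta1
kind: GKECluster
metadata:
  name: demo-cluster
spec:
  forProvider:
    location: us-east1          # regional cluster
  writeConnectionSecretToRef:   # kubeconfig credentials land here
    namespace: crossplane-system
    name: demo-cluster-conn
---
apiVersion: container.gcp.crossplane.io/v1beta1
kind: NodePool
metadata:
  name: demo-pool
spec:
  forProvider:
    clusterRef:                 # attach the pool to the cluster above
      name: demo-cluster
    initialNodeCount: 1
    locations:
    - us-east1-b                # only one zone: the mistake corrected later in the demo
```

The point Victor makes is that the arguments mirror what Terraform or Pulumi would take, but the object itself is an ordinary Kubernetes resource you can `kubectl apply`, list, and watch.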
You can see the container.gcp.crossplane.io API version. Victor, can you really quickly just brush up on what CRDs are and what this actually means for people interfacing? Sure. So CRD is short for custom resource definition. In plain words, Kubernetes comes with certain resources: this is a Pod, this is a ReplicaSet, this is a Deployment, this and that, right? So out of the box, Kubernetes comes with certain capabilities, certain resource types that you can use. But Kubernetes was never supposed to be something that we use directly, at least from my perspective. Kubernetes from day one was designed for building platforms, and you're supposed to choose what you want Kubernetes to do beyond what comes out of the box. We do that by installing custom resource definitions. Basically, we are creating additional resource types that Kubernetes can consume. And that can be almost anything, right? Right now you're seeing custom resources that extend Kubernetes capabilities, and those resources are GKECluster and NodePool, which were defined when I installed Crossplane. Similarly, later on I will show you how I extended it further, for example by adding even more CRDs so that Kubernetes understands Argo CD Applications. So with CRDs we are extending the capabilities of what Kubernetes can do. Perfect, thank you so much for that. Yeah, that was wonderful. Really quick, there's a question: can you also interact with Azure ARM and the Kubernetes API using Crossplane? My understanding is that you can do this. And I'm gonna lean on Nick a little bit later to talk more about providers and where they are in terms of versioning. What's ready, what's right? What can you use now versus what's kind of in-progress, mileage-may-vary sorts of providers, right?
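To make Victor's explanation concrete, here is a heavily trimmed sketch of the shape of a CRD. The real GKECluster CRD that the provider installs declares a full OpenAPI schema, but the structure is the same; names here simply mirror the demo and are illustrative:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # CRD names are always <plural>.<group>
  name: gkeclusters.container.gcp.crossplane.io
spec:
  group: container.gcp.crossplane.io
  names:
    kind: GKECluster
    plural: gkeclusters
  scope: Cluster
  versions:
  - name: v1beta1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # The real CRD spells out every field; this stand-in accepts anything.
        x-kubernetes-preserve-unknown-fields: true
```

Once a CRD like this is registered, `kubectl get gkeclusters` works exactly like `kubectl get pods`, which is the whole point of the extension model.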
Because not everything is perfect yet, I don't think. So, all right, Victor, continue. Okay, so if I go to my browser, sorry, wrong browser, here, this one. I left the screen intact, right? This was before I did whatever I did in this demo, and if I refresh the screen, here's my Kubernetes cluster, right? It was created because I sent a request to my Kubernetes, in this case Minikube, running locally. I'm cheap, right? Saying, hey, I want to create this and that, and that happens to be a Kubernetes cluster that has one node pool. Here it is, with only one node, right? The simplest possible Kubernetes cluster that I could come up with. Now, on the other hand, if you prefer to use the CLI, and that's my preference without a doubt, you can just examine all of that through the CLI. You can do kubectl get gkeclusters, right? That comes from the custom resource definition; this type did not exist before I installed Crossplane, right? And you can see, yes, that cluster that I created, it's ready, it's synced, right? It's showing me similar information to what I see in the Google Cloud console. And I can do the same thing with the node pool. Now, before I go on, let me just confirm that my cluster is really running, and I'm going to do that by fetching the kubeconfig and outputting the nodes, and, come on, there we are, right? Ooh, okay, that is really nice. And okay, I also want to verify this in my head. We got a question from John. Thank you, John. Just to confirm, you just provisioned a GKE resource from a Minikube CRD? Yes, I mean, from a Crossplane CRD running in Minikube, yes, exactly. Yeah, perfect, okay, awesome. That is so cool. It's that double-check in my mind of, like, wait a second, you just did that? That's very cool. It's a chicken and egg, right? For Crossplane and the other tools that I will show later, I need a Kubernetes cluster. So I need that.
And for me, for this demo, it was easier to just do minikube start, right? I could have gone to GKE and created one. So I need that initial Kubernetes cluster, and I need to install the initial tools on it. And basically, what I'm trying to say here: the only tool I would ever install manually is Argo CD, but I'm getting there later, right? But yes, the short version: I sent a request to my Minikube cluster, and that request was, create and manage my GKE cluster, and then it happened. I could have created 57 GKE clusters and 17 Azure clusters; the result would be exactly the same. That is super cool. And for those of you looking at developing and using Kubernetes locally on your laptop, there's of course Docker for Desktop, which includes a Kubernetes component. And there's also kind, Kubernetes in Docker, which I've used and I love because of its flexibility and its declarative nature. So there are ways of doing this beyond Minikube, depending on your environment. So very cool. Victor, take us through this. It looks really awesome. Now, if I go to the node pool that was created for me: I created a regional cluster, but there is only one node. I'm supposed to have three nodes if I created my regional cluster correctly, and I obviously didn't. I made a mistake in my YAML definition and it didn't do what it should do. Now, if I go and edit here, and I'm now skipping Crossplane completely, right? I can see here that the mistake I made was that I created the node pool using one of the zones in that region, but I forgot to add the additional two regions, sorry, zones. So that's the mistake I'm going to correct right now, and I'm doing it completely outside of Crossplane. And I have a sinister reason for doing that, you will see soon, right? But anyway, I made a mistake, I created the cluster in the wrong way, and I corrected that mistake by going to the console.
In this case the Google Cloud console, where I added the missing zones, right? And if you look at my screen, and be prepared with questions because this will take a couple of minutes, you will see soon that the number of nodes will increase to three, because I extended from one zone to three. So that will take a couple of minutes. If you have questions, this is a great time to ask them. If not, I will come up with something to talk about for three or four minutes. Yeah, there was a question for Nick, right? I just want to give Victor a pro tip for the future, as someone who's been doing Crossplane demos for a long time: Cloud SQL instances are where it's at. They come up super quick. Nice, nice. I'm a little worried, because he's kind of out of band, right? In my mind, he's out of band. He's in a console doing something, but he originally used Crossplane to create the resource. What is going to happen? I'm really interested. I'm also interested in what it would look like to edit and right-size or correct your configuration from Crossplane. Right? We got three nodes. This was not done with Crossplane. Now, that is the thing, and I know it's not nice at all. You will see in a second, right? Actually, what will happen in a second, or in a minute or two or five, will be nice. So let me explain what will happen, because that's one of the things that Crossplane does, and Kubernetes in general does, and other tools don't. There we are, right? Crossplane, through the Kubernetes control loop basically, works just like when you define a Deployment with two Pods: if you change that somehow manually, if you destroy a Pod, Kubernetes will recreate it, right? The same thing is happening here. I defined that I want to have a cluster in one zone. That's what was propagated to my cluster, and that's what Crossplane did: it created a cluster with one zone.
I went and changed the state of my cluster manually to three zones, and additional nodes were created. But what is happening right now is that Crossplane is kicking in and saying, no, no, no, stop there, right? You said that you want one zone. We made a contract, we made an agreement, that I will do my best to make sure that it's always one zone. And basically it is rolling back my manual change, which sounds scary to many people, but for me that's exactly what I'm looking for, right? I want a tool that will make sure that my desired state, and my desired state is what is stored in my definitions, is always the same as the actual state, right? And if anybody wants to change the state of my infrastructure, in this case, or my applications, that change cannot be done manually by hocus-pocus magic. It needs to be reflected in the definitions, right? So if I would like to do what I just did in a proper way, let's say the only way that is allowed, I should actually do something like this: I should change the definition and have the zones in my manifest. Or, to put it a different way, this is what I had before and this is what I should have. Basically, I'm changing my manifest to have three zones instead of one zone, right? The right way to do it. Now, the right way to do it, and actually I'm going to copy it, is not, at least from my perspective, in my head, for me to interact directly with Kubernetes either. The right way to do it is something like this. Actually, let me go to the UI, right? What I have here is Argo CD, as simple as possible, right? And I told Argo CD that it should monitor this repository over here, and that it should monitor the directory production in that repository. So whatever is in that repository is what should be created in this cluster, right?
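As a hedged illustration of the kind of change Victor describes, the fix is simply listing all three zones in the node pool manifest and committing that, rather than clicking in the console. The field name and zone names here follow the classic provider-gcp NodePool and are assumptions, not a copy of the demo repo:

```yaml
# Before: a single zone, which is why the "regional" cluster had one node.
#   locations:
#   - us-east1-b
# After: all three zones, expressed in the manifest that lives in Git.
spec:
  forProvider:
    locations:
    - us-east1-b
    - us-east1-c
    - us-east1-d
```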
So if I want to change the state of something, of my applications or infrastructure or whatnot, the way I would normally do it is... I lost my thought, what did I want to do? Yes, I will take this manifest that I showed you, put it into the directory production that my Argo CD instance is monitoring, add those changes, commit those changes with a creative message that says something, and push those changes to Git. I'm not interacting directly with my cluster. Everything is stored as code, and all the code is always in Git, right? And if I watch the nodes, you will see that there is only one node there. The last remnant of my manual changes is being destroyed; soon it will go back to the desired state. But what will happen here, since I committed some changes to the repository that Argo CD is listening to, is that Argo CD from now on is managing, among other things, those definitions, in this case the GKE cluster and the node pool, right? So from now on Argo CD is making sure that whatever is defined in that cluster, in that Minikube, in this case the simplest possible one, is exactly the same as what is in my Git repository. And then, if some of those things managed by Argo CD happen to be resources controlled by Crossplane, then Crossplane will take it over and do whatever needs to be done for that infrastructure, in this case, or those services, to be up and running, right? While for the rest of the things, I might be having a KubeVirt something, or simply Deployments or StatefulSets, I don't care anymore, because from now on my interaction goes only and exclusively through Git, neither with Kubernetes nor with consoles or whatnot. So now this will take a few seconds. This is wonderful. Yeah, well, that's churning. I want to really quick take a step back, okay? So Argo, what is Argo doing? And can you explain a little bit more about the nature of how Argo works?
Victor, just to give people who have never really heard of it or worked with it before an idea of what it's doing for us in this process. Absolutely. So let me actually draw it for you, right, while you're watching my pods over there. So what is happening is that we have a Git repository, right? You can see this on my screen. Yes, it has some manifests. Those manifests are something, whatever they are. And I as a person am interacting only with Git, right? Whatever I want to do is defined as code, and my code is in Git. That's the only place; we're not using SVN anymore, right? Now I have a Kubernetes cluster, or I could have 1,000 Kubernetes clusters, doesn't matter, right? Now, I don't have access to that cluster. I don't need access to that cluster, because I have installed Argo CD in that cluster, and it is monitoring this Git repository, right? And from that perspective, this Git repo is the desired state. Desired state: that's where my desires are defined. And Argo CD is making sure that whatever is in the desired state there is exactly the same as the actual state, which in this case is my Kubernetes cluster. So if I define a Deployment, the Deployment will be running here. If I define, for example, like in this case, a GKE cluster, Argo CD will make sure that whenever I push a definition of a GKE cluster or anything else, that something is created in this cluster, or maybe in some other cluster, no matter how many clusters I have. And then, since I have Crossplane running here, when I create that custom resource, which in this case is a GKE cluster, Crossplane will communicate with the Google Cloud API and make sure that my desires are propagated further. This is wonderful. I love live diagramming, and apparently the demo gods are with you, Victor, because this is perfect. There are no screw-ups; everything's working as expected, for the most part. I'm jinxing ourselves, though.
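The "monitor this repository, this directory" instruction Victor gives Argo CD is itself just another Kubernetes custom resource. A sketch, with a placeholder repo URL, and the common self-heal options assumed rather than taken from Victor's setup:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/crossplane-demo  # placeholder URL
    path: production            # the directory Argo CD watches
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc  # the cluster Argo CD runs in
  syncPolicy:
    automated:
      prune: true               # delete resources removed from Git
      selfHeal: true            # revert manual drift, as in the demo
```

With `selfHeal` enabled, Argo CD plays the same role for cluster objects that Crossplane plays for cloud resources: continuous reconciliation of actual state toward the state in Git.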
Okay, so to reiterate in maybe layman's terms: what you saw Victor do in the Argo CD UI there is sync changes from the Git repo into the cluster. Argo, an agent running in the Kubernetes cluster, took it upon itself to ensure that the state in the cluster was what was declared in the repo, right? And this is GitOps. When you hear the term GitOps, this is what we're talking about, right? And then in this case, one of the items was a GKE cluster definition, which you saw Victor create earlier and put in this repo. Argo is now making sure that exists in Kubernetes, and then Crossplane and its associated CRDs and controllers take it upon themselves to talk to Google and create those resources for you via the API, right? Does that all connect? Perfect, yeah. That's correct. Now, one more thing that I would just like to say, right? There is an alternative that we've been using for many years. From this Git repo, I could have triggered a webhook that would go to some sort of CI/CD pipeline that would create those things. I could have done that. The major difference between using some form of CI/CD pipeline and Argo CD is that Argo CD is continuously monitoring my Git repo. It's not a one-shot action, after which the live state, the real state, might drift. It's continuously monitoring and syncing those two things. That's the major difference, apart from me not needing access to my cluster. For sure, okay, got it. Okay, so yeah, this is what we talk about when we talk about GitOps. This is the modern CD pattern that we're starting to see, right? And as many of you have said in the comments, this is really true infrastructure as code. And I would say this is so much more developer friendly as well. We don't have to worry about access. We have a very well-defined path for changes to get deployed. And there's a lot more that you can do, I'm sure.
We're not gonna get into it in this session, but think dev, staging, production, disparate environments, right? There's a lot more you can do in terms of PRs and testing in certain environments and things like that. So really quick, I don't wanna go too far without answering just one or two questions. We have one question here, I think it's from John: once a cluster is initially provisioned, can you force it to keep its state by deploying these CRDs to that cluster, kind of like bootstrapping? Do you want to answer, or shall I? Sorry, what was that? Can you repeat the question? Sure, yeah. Once a cluster is initially provisioned, can you force it to keep its state by deploying these CRDs to that cluster, kind of like bootstrapping? So I think what they're saying is, after this new GKE cluster resource that you're creating is made, can I then use Kubernetes resources in that cluster to maintain its automated state? Yeah, to manage it. Yeah, effectively, I think if what you're asking is, could you use Crossplane to make a cluster and then use Crossplane on that cluster, the answer is yes. The various Crossplane resources, like the GKE one that you're seeing Victor demonstrate, are installed by what we call providers. We have tons of providers for all kinds of different things. We wanna have providers for every API you can imagine; we have a Domino's Pizza provider, for example. One of the providers we have is Helm. So you can use Crossplane to spin up a GKE cluster, use Crossplane's Helm provider to install Crossplane on that cluster, and then go drop stuff on that cluster and inception your Crossplane. Gotcha, gotcha. Okay, so there's another really good question as well here. Let me see. Oh, hold on, Hannah. What happens if you delete your management Kubernetes cluster? So this Minikube cluster that you created, what happens if you delete that? Does it change anything with any of the resources that were created?
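Nick's "inception" pattern, using the Helm provider to install software onto a cluster Crossplane just created, can be sketched roughly like this. The chart coordinates and the provider-helm API group are assumptions based on the upstream provider, not something shown in the demo:

```yaml
# Illustrative: install Crossplane, via Crossplane's Helm provider,
# onto the GKE cluster created earlier.
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: crossplane-on-gke
spec:
  forProvider:
    chart:
      name: crossplane
      repository: https://charts.crossplane.io/stable
    namespace: crossplane-system
  providerConfigRef:
    # A ProviderConfig pointing at the new cluster's connection
    # secret (its kubeconfig), so the release lands on that cluster.
    name: gke-cluster
```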
I'm guessing the answer is no. The answer is no. I mean, as long as you are capable of recreating that state in some other cluster, you can let Crossplane continue working with the existing resources. Got it, okay. And there was another question too: what will happen if you stop your Minikube? So you will not be negatively impacted. The resources you created will still exist; you can still go and interact with them. Is there any provision where... If I destroy my Minikube, in this case, that will not result in the destruction of the resources that were created, if that's the question. Yep, yep. My GKE cluster will keep existing, yes. Gotcha, gotcha. Okay, cool. One more, and this is the last question for now, I wanna make sure we can continue here: what happens if Victor is part of a team? Does that person need access to his Minikube? So I can actually answer that one. Lars, thanks for the question. I think the big thing here is that Minikube is running locally on his own laptop. It's not another resource that the company has; it's a local dev environment. And in that dev environment there's a Kubernetes cluster, right, which he is using to provision outside resources. And, we can talk about it now or later, Victor is providing the authentication needed for GCP, right? I'm not sure exactly how, and you can get into that really quick. So he has some access to create resources in GCP, right? I'll let you take it from there. Yeah, exactly, exactly. It's Minikube, right? You cannot access my Minikube cluster. And in this context, even if you could access my Minikube cluster, I would never let you, because you shouldn't. But what you can do right now, if you're brave: here's the repository.
If you want to change my GKE cluster yourself, let's say that you work on a team with me: go to this repo, fork it, make a pull request, and change anything in... here, let me see where it's defined. Yeah, change the... I don't know, wrong directory. Let me fix it right away. Here. Go here and change this definition, and it will be directly applied to my GKE cluster. I will merge your pull request, I promise. That is wonderful. So the DevOps Paradox crossplane-demo is the repo, everybody. Definitely check this out. It's a great demo; this is something you can play with locally just like Victor is doing right now. All right, Victor, let us know, what's the status of things? How are things going? Things are going; let's double-check. I'm surprised, actually, that nothing has failed so far. Let's see, maybe it's not going. Maybe I messed up; demos always fail. Let's see my cluster. Yes, failed to load. I will blame it on Google if it doesn't work. Cluster creation can take five minutes or more. Nodes... okay, I messed up at the very last step, something. Well, we can continue waiting for it. I know there is another... go ahead. Actually, it's fine, it's fine. I can see three nodes over here. Yes, yes, it's fine. Three nodes over here, or three instance groups, and then Google is messing up with the nodes, doesn't matter. Imagine it's fine. Excellent, I love it. Yeah, let's just imagine everything's okay. We'll just sweep that under the rug, no big deal. Okay, while we're looking at this: is there anything else you wanted to share, Victor, about this process, what you've seen, and other things that you feel are really powerful, either with your current team or that you've seen with other teams using Crossplane? No, that's about it.
The rest is more... I mean, there are many great features of Crossplane. The ones I personally like the most are simply that it's a Kubernetes definition, and that it is continuously syncing the desired state and the current state. So whatever happens will basically be corrected by Crossplane. Those are the things I like the most. Now, there are also compositions, which are awesome if you want to create your own definitions. Nick can speak much more about them than I can, and he's probably eager to. From my perspective, what I'm always most interested in is how a tool can work within the ecosystem I have, and the ecosystem I have is always somehow related to Kubernetes. That's why I brought Argo CD into this story today: to demonstrate the permutations we can create when we use a tool that is part of Kubernetes, one way or another. For sure. So there's a ton of questions and discussion. I love this. Thank you, everyone who is chatting. We're seeing all the comments across all of the platforms you are using. I think our producers will probably try to link the repo; that's the github.com/devops-paradox/crossplane-demo repo. And one of the questions here is around giving developers the ability to deploy only infrastructure that they're allowed to, like HashiCorp Sentinel does. How would we manage that access? I think that access would be managed, and maybe we should unpack what access looks like here. Access would be based on... so let's say your developer is putting a change into Git, and they push that change up as a PR. That developer isn't doing anything else at this point. That person has committed code. That code represents a desired state, and Crossplane, which exists inside the environment that the configuration is applied to, creates those resources.
Now, the question then becomes: what kind of access does Crossplane have? What is it configured for in the cluster that this declarative state will be deployed to, or that Argo is picking this up from? In this specific case, since I'm using Google, I created a service account, a Google service account, nothing to do with a Kubernetes service account. And I created a secret that tells Crossplane to use that service account. In this specific case, since I'm lazy, I gave myself admin permissions to do whatever I want, but those permissions could be limited to something else, right? So that's one level of control you can have. Another one, if you use GitOps in one way or another, as you mentioned, is that the permissions are about who can create a pull request, who can review that pull request, and who can merge it. That's basically how you control things. And finally, we could introduce OPA into the mix, Open Policy Agent, and say, hey, okay, even if somehow, after all those pull requests and merges and confirmations, the resource gets created, does it comply with certain rules? We could say, hey, you can create a GKE cluster, right, and I will merge your pull request, but OPA can be configured so that, for example, it doesn't allow you to create a GKE resource that has more than 10 nodes, just to say something, right? If it's more than 10 nodes, OPA can reject the request to create that resource in the first place. At least, that's how I would approach it. I think of access here as having multiple layers that you can opt into. As Victor touched on, at the very lowest layer there is: what does your account for Google or Azure or AWS, what does your service account, have permission to do in the cloud?
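The "secret that tells Crossplane to use that service account" is typically wired up with a ProviderConfig. A hedged sketch, assuming the classic provider-gcp; the secret name, namespace, and project ID are placeholders:

```yaml
# Illustrative: point provider-gcp at a GCP service-account key stored
# in a Kubernetes Secret, created with something like:
#   kubectl -n crossplane-system create secret generic gcp-creds \
#     --from-file=creds=./sa-key.json
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  projectID: my-gcp-project      # placeholder project ID
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-creds
      key: creds
```

Scoping the IAM roles granted to that service account, rather than granting admin as Victor admits doing for the demo, is the first of the access-control layers discussed here.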
Then you tell Crossplane somehow to use that service account, which can be done using what we call a ProviderConfig. Like Victor said, you basically give it a secret that has credentials in it. We also support IAM-based authentication, so you can use the permissions that the pod running Crossplane has instead of giving it a secret. Then once you're at the Kubernetes level, you've now got RBAC going on. So you can actually grant RBAC access. You could say Mario has access to make GKE clusters, but Victor does not, or vice versa, at the Kubernetes role-based access control level, basically just describing the verbs that people can do to resources. Then, as Victor touched on before, we have what we call composition, which allows you to basically build your own APIs. We haven't touched on this much, but you could effectively say, I'm defining something called my great Acme Corp Kubernetes cluster that just doesn't have a machine-size field; that's always fixed. So that's a form of access control. Then you could have Gatekeeper, or just OPA, anything in the Kubernetes ecosystem. And then on top of that, you can go up to your Git repo and use Git as a mechanism, and control who has write access. So as Victor points out, just the fact that this is all built in the Kubernetes ecosystem gives you so many different integration points and levels to do these things that it's super flexible. It's layering. There are layers by which you can choose your tool, you can choose your process, and you can implement it in many different ways. That's what I love about Kubernetes and the community here. Really quick, there's a quick question on GKE Autopilot. Daniel Mangum, who I will talk a little bit more about later on, actually did a live stream adding Autopilot support a week or two back. And I think we can probably link that, if we can find it. Nick, we can get that shared out.
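The RBAC layer Nick mentions is plain Kubernetes role-based access control applied to Crossplane's custom resources. A sketch of granting one user the right to create GKE clusters (the apiGroup and resource names depend on the installed provider, so they are illustrative here):

```yaml
# Allow creating/managing Crossplane GKE cluster resources...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gke-cluster-creator
rules:
  - apiGroups: ["container.gcp.crossplane.io"]
    resources: ["gkeclusters"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
# ...and bind that role to a single user.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mario-creates-gke
subjects:
  - kind: User
    name: mario                    # illustrative username
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: gke-cluster-creator
  apiGroup: rbac.authorization.k8s.io
```

With this in place, Mario can create GKE clusters through the Kubernetes API while Victor cannot, exactly as described.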
And then there was another question on what your preferred solution is for secrets. I don't want to go too far down that path right now; it's a little bit tangentially related. I think everyone has different needs when it comes to handling secrets. There's, you know, Vault. There's Bitnami's Sealed Secrets. There's Mozilla's SOPS, which I have had the pleasure of using and is not very developer friendly, but is definitely a solution. There's a lot of different things there. I don't think secrets are the key thing we want to talk about right now, but thank you for bringing that up. Someone did ask, though, and I think they were just getting confused: is there Terraform going on? Is there other provisioning code going on here that is being used with Argo? How is Argo deployed? Things like that. How did you deploy Argo? How did you set up Argo, Victor? And what was the process for that? So Argo, I had to set up Argo myself manually. In this case, I used Kustomize to do it, right? But from there on, I'm not managing Argo anymore in that way. It's a similar thing to the answer to: can you use Crossplane to manage the cluster where Crossplane is running? Same thing with Argo. So basically, my goal is that the only direct interaction I have with Kubernetes clusters, at least write interaction, is with Argo. I tell Argo, this is the repository from where you manage yourself and everything else. And then I added Crossplane to that repository, and Crossplane was installed, and whatever else I had there. So basically, installing Argo CD would be the only manual step. Yep, for sure. Well, I'm sure you can install that with, there's probably an Argo Helm chart, there's probably an Argo CD operator and all that, right? Like, of course you would, yeah, good. Yeah, there is a Helm chart. There is a Kustomize setup, you know, all of that, whatever it was. I wanna chime in here for a second with just a little agenda I've been pushing relating to Argo CD.
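The pattern Victor describes, pointing Argo CD at a repository that then manages everything including Argo itself and Crossplane, boils down to a single Argo CD Application resource. A sketch, with an illustrative repository URL and path:

```yaml
# Argo CD Application that syncs a Git repo into the cluster.
# Everything else (Crossplane, providers, further apps) lives in the repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: control-plane
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/crossplane-demo   # illustrative repo
    targetRevision: HEAD
    path: infra                                           # illustrative path
  destination:
    server: https://kubernetes.default.svc                # this same cluster
    namespace: crossplane-system
  syncPolicy:
    automated:
      prune: true      # remove resources deleted from Git
      selfHeal: true   # revert manual drift in the cluster
```

After this one manual apply, all further changes, including to Argo CD's own configuration, go through Git.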
I'm gonna keep saying this until someone from the community does it. So you're absolutely right. You know, we have a Crossplane Helm provider, as I mentioned before, that you could use to ask Crossplane to declaratively go install Argo on another cluster. And I really wish we had an Argo CD provider for Crossplane, where you could have Crossplane resources that would talk to the Argo CD API and go tell it to set things up. I'm gonna keep saying that until someone comes and builds an Argo CD provider. So if anyone's listening, talk to me. Wait, wait, wait, it's open source. It can be somebody else, so it could be you. Exactly. Everybody watching, we are telling you, you can do this. We're giving you the power. It is in your hands. So there's a bit more discussion here around secrets. One question: are we able to fully replace a GitLab pipeline with Argo CD, or is Argo CD dedicated only to deploying and updating objects? So GitLab is Git, right? And so again, we can do this with any Git repository. Victor, do you have thoughts on this? Yeah, I mean, Argo CD doesn't care which Git repository you use, right? Right, exactly. Any Git repository. I mean, I haven't tried it with everything. I haven't tried it with Gitea, so it might not really work with every single one, but it should. Definitely with GitLab. Yep, for sure. And that's what I'm thinking as well. Another really good question from John: what's the recommendation around provisioning the initial cluster for management? So should we say that we have a management cluster that these configs get deployed to, and that management cluster then works on creating other clusters? I also wanna tell John here a little bit: I'm not sure I would think about it as much from a management side, like I need a management cluster. I know there's a lot going on with Cluster API and a lot of other abilities.
When you think about how we just manage an environment of clusters as cattle, not pets, right? I think in the environments that I've seen, of course there is like a dev and a sandbox, and there might be like five or six different environments. What I'm thinking is more that Crossplane is a controller that exists in all of these environments. Let's say we need a resource for the sandbox environment. The way we would create that resource is with this declarative Kubernetes resource that we're defining, right? And the object probably wouldn't be another cluster as much as it would be a developer who needs a Redis instance that is an AWS Redis instance, right? ElastiCache would be an example. Or they need a bucket created, right? Or some other resource, a database, right? I think that would probably be the most common use case, and Nick, I'm interested here: what other use cases have you seen customers have? Do they kind of take the approach of, we're gonna have a management plane and we're gonna do everything there and that will manage other resources, or do they do it more co-located? We see a bit of both. Many of the people who come to Crossplane are already enthusiastic about Kubernetes and have Kubernetes deployed. It's becoming less rare these days, but historically it's been relatively rare to have people come to Crossplane and just not be Kubernetes users. So for a lot of people, the path of least resistance is to install Crossplane onto their existing cluster or clusters. But as Victor pointed out, Kubernetes isn't just about containers. Kubernetes set out to build the best container orchestration platform and accidentally built the best control plane platform. So you could just go and spin up a cluster that does nothing but run Crossplane separately, and we see people doing that as well.
Plug for my company, Upbound Cloud: Upbound Cloud has a great system to go and run your own Crossplane control planes. You can just go to cloud.upbound.io and click a button to get a Crossplane and start playing with this. And in that case, there are no deployments there or anything like that, but you can go and spin up other clusters. You can just spin up your databases, your caches, et cetera. And some people do that just because they wanna try Crossplane and they don't wanna go put it on their production cluster. So it's just a nice, safe way to go and deploy it. Absolutely, that makes a lot of sense. There's actually another quick question too: is this similar to Rancher Fleet or Cluster API? Nick, I think you said you guys are working with the Cluster API folks, right? Yeah, Dan, who was in the chat at least, Dan Mangum from my team, is deeply involved in Kubernetes release management and whatnot, and he's been working with the Cluster API folks. An interesting thing about composition in Crossplane is you can effectively define your own CRDs, which we call XRDs, composite resource definitions. And they can have any shape that you want. So in the Cluster API, you've got a bunch of CRDs that say, hey, please give me a Kubernetes cluster. Crossplane can just satisfy one of those by saying, sure, I can make a GKE cluster or an EKS cluster or whatever, or several node pools and VPCs and all the things that actually make up a cluster. So Dan is exploring the integration points that we can have there, so that maybe the Cluster API could be optionally powered by Crossplane, because they're writing code to go and spin up VMs and things like that, and that's Crossplane's bread and butter, so we want to help out there. Right, right, absolutely, very cool. That's really exciting to hear. One other thing, this is a really solid question, I think, something I didn't even think about.
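An XRD, the "define your own CRD" mechanism Nick describes, looks roughly like this sketch (the group, kind, and schema fields are illustrative; a Composition would then map this shape onto real GKE or EKS resources):

```yaml
# Defines a new composite resource type, XCluster, plus a namespaced
# claim, Cluster, that app teams can request.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xclusters.acme.example.org
spec:
  group: acme.example.org
  names:
    kind: XCluster
    plural: xclusters
  claimNames:
    kind: Cluster
    plural: clusters
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                region:           # the only knob exposed to users;
                  type: string    # machine size etc. stay fixed in the Composition
              required: ["region"]
```

This is also the access-control angle from earlier: the platform team decides which fields exist at all.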
What options does Crossplane provide to reference and enforce dependencies between different resources, i.e. IAM policies? So this feels like something that someone from the Crossplane community has asked, because that's a very on-point, deep question. In Kubernetes, there's a big document called the Kubernetes API conventions that talks about how you build custom resources, and one of the things it recommends is: if you need to refer to something else, use a reference. And that's what we do. We call them cross-resource references. A quintessential example of this, one of the first things we solved, was Azure databases. Azure does this really cool thing where when you deploy a SQL database, it doesn't allow any traffic by default. You need to go and create a firewall rule to use your database at all. And the firewall rule has to declare: what database do I apply to? So we support doing this three different ways, for any reference to another resource. In this case, where the firewall rule needs to reference the database it applies to, you can either just give it the name of the database in Azure. The database doesn't even need to be managed by Crossplane. You can just say, hey, firewall rule, I told you there's a database, please go apply to this. Or you can give it a reference to the database in Kubernetes. So you can say, hey, I'm managing my database in Kubernetes using Crossplane, it's called Foobar, please go apply to Foobar. Or you can actually give it label selectors, the same way that you might select what node your pod runs on. Firewalls are a bit of a weird use case for this, but you could be like: firewall rule, please go apply to a database that is big and on the east coast, rather than specifically that database.
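The three reference styles can be sketched as alternative fields on the firewall rule's spec. The field names below are illustrative (modeled on the ref/selector convention Nick describes, not copied from the provider-azure schema), but the shape is the point:

```yaml
# Three ways to point a firewall rule at its database server; use one.
spec:
  forProvider:
    serverName: my-sql-server       # 1. external name; server need not be
                                    #    managed by Crossplane at all
  # serverNameRef:                  # 2. reference a Crossplane-managed
  #   name: foobar                  #    resource by its Kubernetes name
  # serverNameSelector:             # 3. label selector, matching any
  #   matchLabels:                  #    server with these labels
  #     region: east
```

The selector form is what enables "apply to a database on the east coast" instead of naming one specific resource.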
And the nice thing here is that, unlike other infrastructure-as-code tools that sort of build a giant graph of these things and do it all step by step, because we're continuously reconciling, we can just keep trying. So even if you refer to a thing that doesn't actually exist yet, it doesn't matter; once it does exist, Crossplane will make sure the reference gets resolved. Gotcha, okay. And then, yeah, I think, thank you, Magno. Magno's highlighting the fact that your dog is making himself comfortable in the background there. Just a side note, this is my friend's dog, a friend who's also an SRE, that I'm dog sitting, and he is famous: Elon Musk tweeted about him recently. Wow, we'll have to share the link to that tweet. To stay semi on topic: do the XRDs, and kind of what you just discussed, also work in the same plane for when we need to get outputs? So instance URLs for resources and buckets and things like that. So, like, for instance, access and secret keys if we were creating an IAM resource. Yeah, so we handle this in a couple of places. In most Kubernetes resources, you have spec and status. Spec is the desired state; status is the observed state of the thing. So we tend to look at a cloud resource like a GKE cluster, and usually in, like, the Google API, they'll say, hey, these are the outputs of this thing. So we take those and we put them in the status. That's interesting, but it's kind of hard to refer to the status of something; you can't tell a deployment to go and easily read the status of an object. You can look at it with your human eyes and see what region it's in or whatever. So the other thing that we do, optionally, for any resource, is we have a field called writeConnectionSecretToRef. And that allows you to just name a secret, and we'll output all the interesting connection details to that secret, which are kind of like outputs in the Terraform sense. So that'll say it has this URL, or its credentials are like this.
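A sketch of writeConnectionSecretToRef on a managed database, so the connection details land in a secret a deployment can mount (resource kind and forProvider fields are illustrative of provider-gcp):

```yaml
# Managed Cloud SQL instance whose connection details (endpoint,
# username, password) are written to a regular Kubernetes secret.
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
  name: demo-db
spec:
  forProvider:
    databaseVersion: POSTGRES_11
    region: us-east1
    settings:
      tier: db-custom-1-3840        # illustrative machine tier
  writeConnectionSecretToRef:
    namespace: default
    name: demo-db-conn              # pods reference this secret as usual
```

The consuming deployment then just mounts or env-injects `demo-db-conn` like any other secret; it never needs to read the resource's status.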
That way your deployment, or whatever's consuming a database, let's say, could just go and read that and use it. Gotcha, that's most excellent. Another really good question, thank you, Ether: what will the migration path look like once these CRDs graduate from v1alpha1 to v1beta1 and eventually v1? Will downstream cloud resources be affected or unaffected? What does upgrading look like? Yeah, so we follow pretty closely the Kubernetes versioning semantics. Real quick, Crossplane is architected as Crossplane core, which supports composition and what we call a package manager, which is a quick, declarative way to install providers. Providers are what give you functionality for each thing. So if you only use Google, you can install the Google provider; you don't need to have AWS CRDs sitting around, all that kind of stuff. So Crossplane core is actually already all v1 CRDs. So you don't really have to worry about us making breaking changes there for a long time; we'll only make backwards-compatible changes until a v2. The providers are mostly a mix of v1alpha1 and v1beta1 resources. Like Kubernetes, when something hits beta, it's a pretty serious contract for us. So we effectively don't make breaking changes to v1beta1 resources. There may be some edge cases, but we almost never make breaking changes to those things; we never have so far, as far as I'm aware. So you'll be able to go from those v1beta1 resources to v1, and honestly, they'll probably be the same schema; you'll probably be able to just change the v1beta1 to v1 and you'll see it'll work. v1alpha1 is where the risk is. Folks should be aware that if you're going in and using v1alpha1 resources, we don't offer a contract; we might just make a breaking change on those one day if we need to. But we've got our patterns established, so usually when something lands at v1alpha1 there are no big breaking changes these days.
If we did, there is a path that's a little bit painful, but Crossplane can adopt existing cloud resources. So what you can do is delete the v1alpha1 resource, and when you delete it, you can set its deletion policy to Orphan, which means we'll delete the custom resource, so it'll be gone from Kubernetes, but the thing in the cloud will still exist. Then you can create a v1 replacement, and you can tell the v1 resource the name of the thing in the cloud, and the v1 replacement will just go and pick it up and start taking it over. So it's a little bit manual, but there is a path; it is possible. Sure, that is really good to hear. I didn't even think there was a path to do that. I thought you'd just be in the dark woods by yourself, alone at night. It'd be scary. You wouldn't know what to do. Someone had a question around cross-referencing and enforcing dependencies between different resources, i.e. an S3 bucket to a Route 53 record, with Kubernetes objects in YAML, and deletion compared to something like Terraform. I don't want to get too much into Terraform here. There's another question too: any plans to support Terraform's HCL, the HashiCorp Configuration Language, to ease migration? Nick, what are your thoughts? What are we seeing in terms of customers and people that are trying to leverage this that literally have tons of Terraform, right? That's every organization I've ever been in; there's almost always one engineer full-time writing Terraform. Are organizations thinking, can you help me make migration easier? Are they starting from scratch? What's the thought process? So there's a couple of different integration points for Terraform, and I totally feel for those people. As I said, I was an SRE for a long time. I've written Terraform resources and added Go code to Terraform and used Terraform a lot. I love it; it's a great project. Terraform has a Kubernetes provider. So one integration point is you can actually use Terraform as an interface to Crossplane.
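The orphan-and-adopt migration Nick walks through relies on two real Crossplane mechanisms, the `deletionPolicy` field and the `crossplane.io/external-name` annotation. A sketch with illustrative resource kinds and names:

```yaml
# Step 1: before deleting the old alpha resource, set its deletion
# policy to Orphan so only the custom resource is removed; the cloud
# resource itself stays untouched.
apiVersion: example.crossplane.io/v1alpha1   # illustrative group/kind
kind: Widget
metadata:
  name: old-widget
spec:
  deletionPolicy: Orphan
---
# Step 2: create the v1 replacement and point it at the surviving
# cloud resource by its external name; Crossplane adopts it.
apiVersion: example.crossplane.io/v1
kind: Widget
metadata:
  name: new-widget
  annotations:
    crossplane.io/external-name: old-widget-in-the-cloud
spec: {}
```

After the second apply, the v1 resource reconciles against the existing cloud object rather than creating a new one.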
You can use Terraform. If you prefer HCL, let's say, you can write HCL that goes and outputs these Crossplane YAMLs, and sort of get a Terraform-like interface to this consistent, self-healing reconcile process that Crossplane has. Another thing that we're doing is we actually have a couple of providers that are generated using Terraform. So you basically point a tool at a Terraform provider and we'll make a Crossplane provider from it. The first one that we've shipped recently, which I think Dan and my other coworker Casey just did a live stream about, is the VMware vSphere provider. That Crossplane provider is actually an abstraction on top of the Terraform provider. As for folks who want to write HCL in Crossplane, I will say: watch this space. We have some really interesting ideas going on, but nothing that I can speak publicly about at the moment. Very nice. That's really exciting to hear. Go ahead, Victor. Correct me, Nick, if I'm wrong, right? But both Crossplane and Terraform, if you take, let's say, GCP, and the same applies to AWS and Azure, have syntax that is based on the same API, which is the API of the provider. It's almost, I mean, at least I made my conversions just by changing the syntax. I took Terraform, changed a few lines, and then replaced equals with colon because it's YAML and not HCL anymore. I know it's not that simple, I know that, but basically almost all the fields are the same. There is actually very little difference in the field names. Right, they've done that deliberately, and a lot of folks do this; CloudFormation is similar. They're very wise, and I think the same is true of some of the people who work on Crossplane. We call this high fidelity, right? You want to take the cloud provider API and you want to make your representation of that API as close as possible. You don't want to innovate, you don't want to have opinions, because then anything someone can do with the cloud, they can do with your resource.
And if people want to innovate and have opinions, they can use composition in Crossplane to build their own different thing on top of it that outputs what they need and matches the cloud. So, being in the infrastructure management game, we see people come to the Crossplane Slack who are like, I'm really excited about Crossplane, but I hate YAML and I love HCL. We see the opposite as well. We see people coming in and being like, I'm so happy, I hate this HCL thing. It's very subjective. So ideally, we'd like to support both, but YAML is sort of the lingua franca of Kubernetes and the API server. Absolutely. We need another, this was fantastic, and we need another session on Crossplane. This is great. The questions are great. I think we did the best we could answering everything. We're at time. I just want to make one last mention. We mentioned Daniel Mangum. He is a driving force for at least myself and many others I know in understanding what Crossplane does and getting involved. I'm looking at his Twitter right now. Crossplane Community Day Europe is happening. It's, I believe, a day zero event on May 4th with KubeCon EU that's coming up here in May. And I hope everyone has registered for that. That's going to be virtual. And then there is also the Kubernetes Podcast, which, if you're not tapped into the Kubernetes Podcast, you need to get on that. It's from Google. There was an interview with Daniel on Crossplane talking about a lot of these pieces that we've mentioned here today. I strongly urge everyone to check it out. I think we'll try to provide all of the links that we can. I want to thank Victor and Nick for joining us today. I want to thank the CNCF for putting this on. This was amazing. Crossplane is, for me, the way that we modernize and help developers; we help them self-service so they can get the things they need and keep moving, without all the complexity in getting there, right?
This is how SRE teams can move faster and it's just, it's wonderful. So thank you everybody for joining today. My name is Mario, signing off. Thank you to Victor and Nick and our producers as well. Everyone have a great rest of your day, rest of your week. We'll be talking about exciting cloud native topics again very, very soon next week. Talk to everybody later. Thanks so much. Bye.