Hey everybody, my name is Jared Watts and I'm a founding engineer here at Upbound and a co-creator and maintainer on the Crossplane project, and we're going to talk about how you can build your own platform as a service from the components in the cloud native landscape. So speaking of the landscape, it's pretty big. There are a lot of entries and a lot of projects in it. People have found some humor in this, and in recent months somebody actually made a thousand-piece puzzle out of the landscape, and it's sold out. That Etsy link will show you where it was, but it's no longer for sale because it sold out; it's pretty popular. There's a lot of stuff going on in this landscape, right? We see what happened to old Charlie here when he tried to make sense of it. So let's do a little bit better than Charlie did and start making some sense of all the sprawl in this landscape. The CNCF itself says that complexity is the number one user-reported issue in the ecosystem, and it's been that way for a number of years, so that's a recurring theme. One way we can help out here is to start composing these entries in the landscape together into higher-level solutions, right? Turn these ingredients into recipes. And so what we're going to do today is just that: we're going to build our own cloud native platform as a service, and it's going to empower our developers to run their applications with a rich platform underneath them made up of all these great projects from the landscape. We're going to make some sense of all of that into a nice platform, put some best practices and operational wisdom into it, and then we're going to be able to share it and collaborate on it as well. What we're building here today is a starting point, and it's just one opinionated platform, right? Out of that landscape you could build a number of them, I'm pretty sure.
So we're just going to start with one today, and then we can use that as a starting point to collaborate in the ecosystem and keep building. Let's do a quick refresher on how you can build your own custom platform using Crossplane. Basically, we can assemble together low-level, granular resources from multiple vendors, clouds, and environments, and then expose those as a higher-level abstraction to our application teams that serves as an API for them to self-service and get the infrastructure they need. To make a more tangible example, we can compose together a GKE cluster, node pool, network, subnetwork, a bunch of GCP resources, and then also some Helm charts for our platform services and projects, and compose those all into a single cluster object that is basically an abstraction of what it means to be a cluster. That cluster object is going to have a small set of config for our developers so they can tweak what they need to, and then all the complexity and the details and policy and all that sort of stuff is going to be hidden away from them, underneath this simple API line that we're building for them. All of this is going to be done with the Kubernetes API, so it's going to be compatible with anything that talks Kubernetes, and we're not going to have to write any code to do this either; it's all declarative. A little visualization for this: our application developers, all they'll see is the cluster object, this API that we've surfaced for them with some simple configuration, and then behind that API line we can have multiple compositions to fulfill what it means to be a cluster. In this example, we've got one for AWS and one for GCP, with all the specific cloud resources that make up a cluster within those particular environments. But it doesn't have to be multiple clouds for our compositions.
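To make the pattern concrete, the developer-facing cluster API described above would be defined with a Crossplane CompositeResourceDefinition (XRD). This is a hedged sketch: the group, names, and schema fields here are illustrative assumptions, not the reference platform's exact definitions.

```yaml
# Hypothetical XRD defining a "Cluster" abstraction developers can claim.
# Group and field names are illustrative, not the reference platform's exact schema.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xclusters.example.org
spec:
  group: example.org
  names:
    kind: XCluster            # the cluster-scoped composite resource
    plural: xclusters
  claimNames:
    kind: Cluster             # the namespaced claim developers actually create
    plural: clusters
  versions:
  - name: v1alpha1
    served: true
    referenceable: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              parameters:
                type: object
                properties:
                  nodes:
                    type: object
                    properties:
                      count:
                        type: integer
                      size:
                        type: string
                        enum: [small, medium, large]   # abstract, vendor-neutral sizes
```

The `claimNames` block is what surfaces the simple `Cluster` object above the API line, while everything beneath it stays with the platform team.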
It could be all within one cloud, and it could be something like fast or slow, expensive or cheap, gold or silver; it doesn't matter. We can have multiple compositions that serve as the runtime definition for what a cluster means for our application developers. So let's talk about what we're going to put into this platform as a service we're building from the cloud native landscape. Well, we're definitely going to start with Kubernetes, because we're going to need a container orchestration cluster to run our apps and our workloads. Building on that, we're also going to put Prometheus in there, so we can do monitoring, collecting metrics from all of our microservices, and alerting when things go wrong. Jaeger is going to be useful for distributed tracing; it will give us insight into all of the complicated interactions between the microservices in our distributed system and help us find out when things are going wrong. Fluentd will be helpful for logging, consolidating all of the logs being output from our services. Rook we're going to put in there as well for storage; Rook provides persistent block, file, and object storage, so if an application needs a volume to write to or a file system, it can get it from Rook. And then lastly, we're going to put Flux in there, so we can connect our Git repos with our cluster and do continuous deployment of our applications from the Git repo using GitOps, keeping our applications up to date as developers make changes. Here's an architectural system diagram that puts this all together. Remember, we were looking at our cluster object that our developer will be able to configure through a simple API. When that cluster object is created, Crossplane is going to take a look at it and render out all the compositions that we've defined.
And then the providers in Crossplane will actually be talking to external APIs and making this system happen out in the real world. So provider-gcp will be talking to Google Cloud, and it will be creating the network, the node pool, the subnetwork, and the GKE cluster itself. And then provider-helm will also be taking a look at all of the composed resources that came out of our high-level cluster abstraction, and it's going to be deploying all these Helm charts for Prometheus, Jaeger, et cetera, into a namespace inside the GKE cluster, the operators namespace. And then lastly, Flux, running inside the operators namespace too, is going to be connected up to GitHub, looking for changes there and using continuous delivery to take the application from GitHub and put it into some namespaces inside of our clusters, so that we'll have the pods, deployments, and services running that make up our workloads. Let's start designing the shape of this cluster object, this platform API that we're putting together for our developers. What sort of config knobs do we want to give them to tweak and configure to their liking? Some things that might be interesting are characteristics of the workload cluster itself: how many nodes it has, and what type of machines are going to make up this cluster. We're probably also going to care about what versions of the platform services they might depend on. And we're definitely going to care about which Git repository we want to run continuous deployment from. So we can start thinking of an API here that exposes these knobs for the developers to set and tweak. We see here we can specify the count of nodes and then the size of the nodes that we want. Note this isn't a specific machine type, because this is going to be a universal API for multiple vendors, so we're abstracting it into a small, medium, large type of format.
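Each of the platform-service charts that provider-helm installs into the operators namespace, as described above, can be modeled as a provider-helm Release object. This is a hedged sketch only: the chart repository, version, values, and ProviderConfig name are illustrative assumptions, not the reference platform's exact manifests.

```yaml
# Hedged sketch of a provider-helm Release for the Flux chart.
# Chart details and the ProviderConfig name are illustrative assumptions.
apiVersion: helm.crossplane.io/v1beta1
kind: Release
metadata:
  name: flux
spec:
  forProvider:
    namespace: operators              # installed into the workload cluster's operators namespace
    chart:
      repository: https://charts.fluxcd.io
      name: flux
      version: 1.4.0                  # a default; patchable from the developer's cluster object
    values:
      git:
        url: https://github.com/example/app-repo   # the repo Flux will sync from
  providerConfigRef:
    name: workload-cluster            # points provider-helm at the newly provisioned GKE cluster
```

The `providerConfigRef` is what ties the Release to a specific target cluster, which is how the charts land inside the GKE workload cluster rather than the management cluster.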
We've got the versions for the services in the cluster and then what Git URL to be syncing from. And note too that we're probably going to want some policy here underneath the API line, because you don't want to just let your developers specify willy-nilly how many nodes are in the cluster; we want to put some upper bound on it. So we're going to need some policy underneath our API line as well. Here's another diagram of the composition hierarchy. Starting at the top, the developer, all they interact with is a simple cluster object; they're making a claim on a cluster. Then beneath the API line is this whole hierarchy of composite resources and the compositions that compose them. We can see here that underneath the API line there's a cluster composite resource, and then there are two different compositions we can be looking at: one for GCP that is made up of yet more composite resources, a GKE one and a services composite resource, and underneath those there are more compositions of lower-level resources. So we'll see the GCP resources there, and the Helm charts for the various landscape projects as well. We can see this hierarchy of composite resources and the compositions underneath them that, put all together, make up the entire platform that we're building. All that complexity is, yet again, underneath the API line, and the developer doesn't have to worry about it. Let's also talk about how we're going to get configuration from the developer, from this simple cluster object, down into this composition hierarchy so that the leaf resources, the granular low-level resources, can get the configuration they need as well.
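Pulling the knobs described so far together, the claim a developer creates might look like the sketch below. All field paths, group names, and values here are illustrative assumptions modeled on the knobs just described (node count and size, service versions, and a Git URL), not the reference platform's exact API.

```yaml
# Hypothetical developer-facing claim; schema and values are illustrative.
apiVersion: example.org/v1alpha1
kind: Cluster
metadata:
  name: team-a-cluster
  namespace: team-a
spec:
  compositionSelector:
    matchLabels:
      provider: gcp                 # select the GCP composition behind the API line
  parameters:
    nodes:
      count: 1                      # policy below the API line can cap this
      size: small                   # mapped to a concrete machine type per cloud
    services:
      prometheusVersion: "..."      # platform service versions (elided here)
      fluxVersion: "..."
    gitops:
      url: https://github.com/example/app-repo   # repo Flux will sync from
```

Note that nothing GCP-specific leaks into this object except the composition selector; the same claim shape could be satisfied by an AWS composition instead.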
So we see here that we expose the size and node count for the cluster to the developer in our cluster object, and we can take that from the developer's object: we'll take the size and patch it down into the particular GKE managed resource, a low-level resource. Note here that small, medium, large doesn't mean anything to GKE, so we're making a transformation of the developer's configuration into something more specific for GKE. We do something similar for AWS with its different AWS-specific machine types, but basically we're taking small, medium, large and mapping that to a specific set of machine types in GKE, getting our developer's intent down into our compositions and into the real world. A similar thing can be done for the node count, where we set that on the number of nodes in the cluster and the autoscaling properties as well. And this is exactly where we want to apply more policy, something like Open Policy Agent, to say, hey developers, you can't set a node count higher than 10 or something like that, so they can't go too high. So policy being configured around these patches in these compositions is very important too. One more patch to look at is how we get our continuous deployment and platform services configuration down into those Helm charts in our composition hierarchy. We can see a very similar pattern here: we've got some defaults defined for, let's say, the Flux Helm chart, but then we're going to patch in everything that comes from our developer's cluster object as well: the versions and the Git repository URL they set. We're going to take those from their cluster object and patch them down through to the underlying Helm charts in our compositions, so that they get reflected in the real-world instance that we're bringing up for them.
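The patching just described can be sketched as a Composition fragment. This is a hedged example, not the reference platform's actual manifest: the composite type, resource kinds, field paths, and machine-type map are illustrative assumptions, and the Helm Release patches mentioned above would follow the same `fromFieldPath`/`toFieldPath` pattern.

```yaml
# Hypothetical Composition fragment patching developer intent onto a GKE node pool.
# Kinds, field paths, and the size-to-machine-type map are illustrative assumptions.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: cluster-gcp
spec:
  compositeTypeRef:
    apiVersion: example.org/v1alpha1
    kind: XCluster
  resources:
  - name: nodepool
    base:
      apiVersion: container.gcp.crossplane.io/v1beta1
      kind: NodePool
      spec:
        forProvider:
          initialNodeCount: 1          # default if nothing is patched in
          config:
            machineType: n1-standard-2
    patches:
    # Transform the abstract size into a GCP-specific machine type.
    - fromFieldPath: spec.parameters.nodes.size
      toFieldPath: spec.forProvider.config.machineType
      transforms:
      - type: map
        map:
          small: n1-standard-2
          medium: n1-standard-4
          large: n1-standard-16
    # Pass the requested node count through; an admission policy such as OPA
    # below the API line would enforce the upper bound (e.g. no more than 10).
    - fromFieldPath: spec.parameters.nodes.count
      toFieldPath: spec.forProvider.initialNodeCount
```

The `map` transform is where vendor-neutral developer intent gets translated into a cloud-specific value; an equivalent AWS composition would carry its own map to EC2 instance types.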
So a quick reminder as well that we're looking deeper into some of the complexity here, but that's all beneath the API line, right? This is something for the platform team or the infrastructure owners to worry about, and the developers still focus on the high-level, simple object, the simple API that we're exposing to them with a small set of config knobs for them to turn and tweak. That's all they have to deal with; the composition hierarchy and patching and all those things are only something the infrastructure team and the platform team have to worry about. The developers get to keep a simple focus. All right, so let's hop into the demo now and see all this running together on a live, practical system. Okay, let's get this kicked off by starting on the Upbound Cloud registry, because we have made this cloud native reference platform and it's available in the Upbound Cloud registry, so we're going to start from there to get it running. I'm going to go ahead and just run this in Upbound Cloud. I don't have a Crossplane instance running right now, so I can just create one on demand here to run this demonstration with a Crossplane instance in Upbound Cloud. So that gets kicked off and creates a Crossplane instance for me. Okay, now that my Crossplane instance is ready in Upbound Cloud, the cloud native reference platform was installed into it automatically, along with all of its dependencies, so we're ready to get started. In my cloud native reference platform here, I'm going to go ahead and connect to the command line and start showing some things there. All right, so on the command prompt, let's start examining what is in the platform we brought up. Let's take a quick look at the packages that are installed, just to make sure everything's there. We installed our cloud native reference platform, and it brought in its dependencies of provider-
gcp and provider-helm, so those are all there and ready to go. Also included in the platform that we installed are the XRDs, our composite resource definitions. You can see here that we have a cluster object, and one for GKE and services as well, so those all look to be ready to go and ready to consume. Let's take a quick look then at what we're actually going to create here. Now I'm the developer; my infrastructure platform team has installed everything for me and everything's ready to go, and as a developer I want to get a cluster. So here is a cluster claim that I'm going to go ahead and create as the developer. We've seen this in the slides before: we're going to create one node, it's going to be a small one, these are all the versions of the platform services that I want, and note that I'm pointing specifically at my Git repository to do the continuous deployment from. So that's what we're going to apply now; let's go ahead and apply that file. That gets kicked off, and underneath, Crossplane is going to see that we have requested a cluster, and all that machinery for the compositions and the composite resource definitions that our platform team defined is kicking into gear. We can see here that we've got a cluster composite resource created, and it's not going to be ready right away, because underneath it there's a lot of infrastructure getting brought up in Google Cloud. So I'm getting all the managed resources, which means basically all the services in Google Cloud. In response to requesting a cloud native platform cluster, I'm now getting in GKE a node pool, subnetwork, network, et cetera. All of that is kicking off and installing, and then the Helm charts for all of our platform services, our Jaeger, our Fluentd, Flux, all that stuff, are now actively being reconciled and the actual state is being
driven to match the desired state that I've requested, and all this is happening right now. All right, let's check in on our deployment and see how things are going. It looks like all the GCP infrastructure is up; we can see they are all READY true, so our cluster and network and everything looks ready to go and happy. Let's also take a look at the platform services that we deployed. Yes, our Jaeger, our Prometheus, Fluentd, all that stuff looks ready too, and deployed out to the workload cluster in GKE that we brought up. So it looks like everything should be about done here. One other thing to look at is that I have a separate kubeconfig to connect to that remote workload cluster; I got the kubeconfig from the connection secret that Crossplane saved for me after it completed provisioning the cluster. If we look at the operators namespace in that workload cluster in GKE, we can see that all the operators for Jaeger and Fluentd and everybody, and Rook down here, all look to be running, and this one's a job that completed. So everybody's ready and running, and I think the platform is ready to go. We can now start putting our application into this workload cluster that we brought up from our platform in GKE. In order to start getting our applications deployed to that workload cluster using Flux, let's jump over to my repo that has a couple of different applications in it that I forked from Flux's upstream examples. We can see here that we've got a Ghost blog, and Mongo and Redis for database and caching as well. These are the components that we would expect Flux's continuous deployment to be syncing from this GitHub repo into that workload cluster we showed you. One thing I need to do to finally make that connection is add my deploy key to the repo, so that the identity Flux uses will be accepted
by GitHub to let it access this repo and start deploying the applications, the Redis, the Mongo, the Ghost blog, from this repo into the workload cluster. So we've added the deploy key now; let's go check on the workloads inside the cluster. All right, back to the command prompt. We're going to run a kubectl command to connect to the workload cluster in GKE, and then we're going to get the Helm releases from Flux, the ones it's syncing from the GitHub repository down into this cluster. And we see that, excellent, it looks like Ghost, MongoDB, and Redis were successfully grabbed from the GitHub repository using continuous deployment to get them into this workload cluster. So it looks like everything is pretty much complete now. We've deployed our cloud native platform; we've gotten the infrastructure provisioned through a simple cluster object that I as a developer was able to tweak a couple of config settings on; we've got all this complicated machinery and a full cloud native platform brought up for us on demand; and then we were able to start using continuous deployment, allowing these operator platform services inside of our workload cluster to get our applications up and running, using some of those services that our platform team provided for us. Okay, let's wrap this up with some conclusions. Basically, not everyone's going to be an expert on the cloud native landscape, so let's take that expertise and knowledge and consolidate it in our platform team. They can go through the effort of making sense of the sprawling landscape, and they'll define a platform for our developers, and all of our app teams will get to benefit down the road from their efforts. They'll be able to write their apps to focus on their business logic, and they'll get all that great functionality right out of the box from this platform we've designed. So basically we can now with
Crossplane make an opinionated platform of our own. We can design APIs for our developers to get their needs serviced, and today we made one that was a cloud native environment with all the fixings in it, right? We can also share this with the ecosystem; we've done that here today. There's a link to the GitHub repo that has this cloud native reference platform in it, so you can use it directly, or you can use it as a starting point to start tweaking and building your own cloud native platform, or platforms in general, because there's a lot of good content in there to help you get started on that journey of building your own platform APIs with Crossplane. So thank you so much for attending, and I think we're going to get into some questions now.