Hello everyone, and welcome to this tutorial: managing apps and cloud resources in a unified approach. I'm Andy Shi from Alibaba Cloud. Alongside me virtually is Jared from Upbound and Project Crossplane. My colleague Jianbo was originally the co-host, but he can't make it today, so I'm standing in for him. That's why I'm showing my face here, so you don't get confused. Later we'll switch to a screen recording, since there are a lot of exercises to do. This is roughly the schedule for today. We'll go through the prerequisites first, and then I'll talk a little bit about KubeVela. Then we'll move on to Lab 1, shipping your first cloud-native application. After that we'll cover a bit more about KubeVela and go to the next exercise, extending KubeVela. Then Jared will talk more about Crossplane, and finally we'll do the third exercise, which uses Crossplane and KubeVela together. That's pretty much what we'll cover today. Let's talk about the prerequisites. The instructions are hosted on the GitHub page. I'd suggest you go there and check them out; in particular, there are a couple of scripts that will be used, so I'd suggest you clone the repo as well. Let's take a look at that GitHub page. What are we going to need? We need a clean Kubernetes cluster; Minikube and kind are both fine, but the cluster has to be newer than 1.16. Please verify your cluster installation. In the third and last exercise we'll use Crossplane to provision a public cloud database, so you'll probably need an access key and secret to do that. If you don't have one, you can still go through the first two labs without using any public cloud resources. Also, in that last lab we'll be installing Crossplane. Next, let's install a KubeVela release from the release page. The current release is v0.0.8; I'd suggest you use this release, as it has been tested and proven.
After you download one of these distributions, the binary will be under the folder matching your computer's architecture. Let's take a look: there's a binary. Now let's move this binary to /usr/local/bin, and after that we'll be able to use the vela command. So now we have the binary installed on the local machine. What we need to do next is install KubeVela onto the Kubernetes cluster, so let's run vela install. That will install the vela-core chart and a couple of other CRDs, and we're done with the prerequisites. Let me first give you some background on Project KubeVela. Who are we? We are the platform builders at Alibaba Cloud. We often call ourselves YAML engineers, because that's mostly what we do. Jokes aside, we do work on very interesting technologies, and we deal with some very unique challenges. One of them is that Alibaba runs probably the world's largest Kubernetes cluster, with more than 10,000 nodes. We also deal with many different customers, both internally and externally, so we get to experience a lot of challenges that I think are still ahead of the community. As to why we build these application platforms, I think the answer is that Kubernetes is not designed to be used directly by end users, especially the YAML part. Just as we wouldn't expect users to interact directly with the Linux kernel, we need to build platforms and tools on top of Kubernetes. Especially for cloud-native applications, there are so many operational tasks relying on Kubernetes or the underlying platform that it is very important to provide an app-centric view, an app-centric abstraction, so developers don't feel overwhelmed, don't have to keep track of all the resources that get provisioned, and don't have to deal with API fields they have no clue about.
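The installation steps just described might look something like this on a Linux machine; the download URL, archive name, and folder layout are illustrative, so check the actual KubeVela release page for your platform:

```sh
# Download and unpack a KubeVela release (URL and version are illustrative)
curl -LO https://github.com/oam-dev/kubevela/releases/download/v0.0.8/vela-v0.0.8-linux-amd64.tar.gz
tar -xzf vela-v0.0.8-linux-amd64.tar.gz

# Move the binary onto your PATH so the `vela` command is available
sudo mv linux-amd64/vela /usr/local/bin/vela

# Install the vela-core chart and its CRDs into the current cluster
vela install
```

Since `vela install` talks to whatever cluster your current kubeconfig context points at, verify the context first if you work with multiple clusters.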
A lot of our focus, therefore, goes into creating app-centric APIs, abstractions, and user interfaces. If we look at these three different products, we'll notice that they have a lot in common: canary deployment, auto-scaling, Ingress, et cetera. So why do they all have to redo those features themselves? Well, we believe it's because people tend to map their Kubernetes APIs directly to the user interfaces. In the process of creating a very opinionated template or schema, they eliminate the possibility of reuse, because people will notice, oh, this feature comes from that pipeline, and we don't want that user experience. So everyone is reinventing the wheel. That's what we want to avoid, because it creates fragmentation, causes silos, and stretches engineering capacity very thin. What we propose is adding layers in between. Just like in application development, where we have data modeling, we add abstractions and create building blocks, and people can build their own user experience on top of those building blocks by reusing them. It's like building houses: you buy lumber, you buy bricks, and they all come in similar sizes, but the houses end up looking different because you create your own unique experience on top of them. That's where the idea of KubeVela came from. KubeVela is already used inside Alibaba; we have a unified application platform engine that follows KubeVela's design. We are also open sourcing it, hoping it will help the community solve this problem of silos. And I guess a lot of us agree on the same problem, because from day zero we have had contributors from several different organizations, all coming to help us bootstrap the basic features.
Another point I want to make is that KubeVela currently still lives under the OAM community, because KubeVela follows the OAM spec and is built on top of the OAM runtime. Eventually, though, we want it to be an independent, developer-facing project. We intend to donate the project to a neutral foundation, and we will do that very soon. So if you have concerns that this project is owned by a single company, don't worry; it will move into a foundation. The goal of KubeVela: KubeVela, of course, is designed to serve developers. We want to give developers an app-centric user experience, so they can concentrate on their own code and KubeVela will take care of the rest. On top of that, we also want to help our fellow platform builders, because as platform builders, especially in a community as vibrant and active as Kubernetes, we see new features come out on a daily basis. We constantly struggle with the questions: do we need to catch up, and if so, how? Day in and day out we deal with that pressure. So we're asking: why don't we reuse these capabilities? Why do we have to serve customers during the day and then write new code during the night? Why can't we have a better life? That's also the goal of KubeVela. There are three design principles behind KubeVela. The first and most important one is being application-centric. You might ask: there are so many other projects claiming to be application-centric as well, so how can KubeVela be better? That's because KubeVela is built upon the Open Application Model, or OAM. This allows us to decouple the Kubernetes APIs from the UI. Those of us who have built pipelines or platforms always worry that by building an opinionated pipeline, we're going to lose some use cases. For example, say there are 10 fields in one Kubernetes-native API. If you expose all of them, the result is very hard to use. So suppose we expose seven of them and leave out three.
Now, what happens when a user asks for one of the fields we left out? We lose those use cases. Vela, on the other hand, is built upon OAM, and we don't have to expose even one field from that API directly; it totally decouples from the API itself. That's why we can be application-centric. The next principle is that we want KubeVela to be capability-oriented. What that means is we would like to have something like a repository or marketplace for all those building blocks we talked about, and then simply build our UIs and user experiences from those building blocks. The building blocks are independent and can be reused by other projects as well. Currently there are three traits installed along with KubeVela, and we'll see them in a minute. For the rollout trait we use Flagger; for auto-scaling we use KEDA. For those who are familiar with these projects, I'll challenge you: when we use them in the lab, you won't notice them at all. The last principle is to be highly extensible. On the user interface side, we have this appfile technology, built on top of CUE templates. It allows us to modify or customize the user interface without rebuilding, recompiling, or reinstalling Vela, and that's a very important feature to have. In today's labs we're going to see all three of these principles. The first is Lab 1: we'll get a taste of the KubeVela command line and see how it gives us an application-centric user experience. The second is to add a new feature, a new capability, to our existing Vela system and use it. And in the third, we'll go through the details of the appfile and use it to install a fairly complicated cloud-native application. Exercise 1: ship your first cloud-native application with KubeVela.
In this exercise we mainly want to get used to the vela commands and see how the application is treated as a first-class citizen. Let's go to the instructions. First, run vela system update. It's important to make this a habit: Vela is a client and the cluster is a server, so we need to sync with the server from time to time, especially after any changes made on the cluster. Next, the command vela workloads shows all the workload types available for us to use. Currently there are three built-in workload types that ship with vela-core. The templates and the definitions are all there; we don't have to do anything to use them, and they're very straightforward to understand. The next command is vela traits. Traits are operational tasks. Again, there are three installed that we can use right away: metrics, route, and scaler. They get attached to workloads. Next, let's create an application. This is a very simple application called lab1 with two components, a backend and a frontend. The backend uses the worker workload type, and the frontend uses webservice. Let's create the backend first. So we have created this app, lab1, with two components. Now let's take a look at the app's status. It's a very simple printout, but it shows the application-centric view of the application: the components and the traits. There are no Deployments, no other Kubernetes resources; it's just your application. And that's our Lab 1. Now, the appfile. In Lab 1 we used the CLI, and that was fine because those were very simple commands. For any complicated or production-grade application, we probably need a richer way to describe it, and the appfile is what satisfies that requirement. At first glance, it looks like a Docker Compose file.
Our goal is for this appfile to be the Docker Compose of Kubernetes. What it offers is also somewhat similar to what Docker Compose can do: we allow users to specify a Docker image, or even to build from a Dockerfile and have the image created for them. I also want to point out that it's fairly extensible, meaning each section you see here maps to a different capability definition. The first section is the buildpack capability; the second one is the route capability; then we have the environment variables, then the scaling trait, et cetera. So it's fairly extensible. The third thing about the appfile is that it's based on CUE templates, which are a very powerful tool. The format is totally customizable through a template file, and once you define a template, it takes effect right away. There are no intermediate steps to customize your CLI or add new things to your GUI; it's available immediately. For example, if we import a new workload type or a new trait, it shows up in your CLI and your GUI, and you can use it right away in your appfile as well. So CUE templates give us the ability to customize the look and feel and to extend it. How does the appfile work? Let's look at this diagram. There are two concepts in KubeVela: one is called workload types, and the other is traits. Workloads are just workloads; traits are operational tasks attached to workloads. When we define a workload type, we can define a template that decouples the workload itself from its presentation layer. Remember, workloads and traits are still Kubernetes APIs, but they are already simplified, somewhat reshaped APIs compared to the original controllers and CRDs we see at the bottom. On top of that, we create another layer of abstraction, which is the CUE template.
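As a rough illustration of the shape being described, an appfile might look like this; the image names, ports, and trait fields here are hypothetical, since the exact schema depends on the workload and trait definitions installed in your cluster:

```yaml
# vela.yaml -- a hypothetical appfile sketch
name: lab1
services:
  frontend:
    type: webservice            # workload type
    image: example/frontend:v1
    port: 8080
    env:
      - name: BACKEND_ADDR
        value: backend:9000
    route:                      # route trait section
      domain: frontend.example.com
    scaler:                     # scaler trait section
      replicas: 2
  backend:
    type: worker
    image: example/backend:v1
```

Each top-level key under a service (`route`, `scaler`, and so on) corresponds to one capability definition, which is what makes the format extensible.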
When you write your appfile, it gets evaluated against the templates defined for those capabilities. Say you write an appfile for the workload type deployment: it will be evaluated against that workload type's template. That's how the appfile works. For example, if you have defined a template for the route trait, and in your appfile you declare that you use the route trait, it will be evaluated against the template you wrote, not against the route API itself. We talked a lot about not reinventing the wheel, but how do we actually do it? Basically, all you have to do is create a definition file. In this case we're creating a trait definition for metrics. Look at this definition file: the top of it is a Kubernetes API object based on OAM, and at the bottom you see the template, nested under the extension field. Once you apply this template and it's synced to your local Vela system, it takes effect right away. You can write your appfile and it will be evaluated against it, and it will show up in your CLI and your GUI. Exercise 2: add a new capability to your Vela. In this exercise, we're going to add a new capability called Kubewatch. Kubewatch is a community project; it watches events at the API level and sends notifications to a channel. In our case, that will be a Slack channel. Let's go through the instructions. First, we need to create a Slack bot; I'll do that quickly. Make sure we use the right Slack workspace, and we'll pick incoming webhooks. That's the simplest option, and it has been successfully configured. Now let's add the capability. This CRD is hosted in the catalog repository.
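The general shape of such a definition file, as a rough sketch, is a small OAM object with a CUE template embedded under `extension`; the exact CUE field names and parameters below are illustrative, not the precise schema of any particular release:

```yaml
# Hypothetical trait definition with an embedded CUE template
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: metrics
spec:
  definitionRef:
    name: metricstraits.standard.oam.dev
  extension:
    template: |
      output: {
        apiVersion: "standard.oam.dev/v1alpha1"
        kind:       "MetricsTrait"
        spec: scrapeService: {
          format: parameter.format
          port:   parameter.port
          path:   parameter.path
        }
      }
      // `parameter` is the simplified surface exposed in the appfile;
      // CUE disjunctions (*default | type) supply defaults
      parameter: {
        format: *"prometheus" | string
        port:   *8080 | int
        path:   *"/metrics" | string
      }
```

The `parameter` block is what the appfile user sees; everything else, including the underlying CRD, stays hidden behind the template.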
The idea is that many other developers can contribute to this registry too, and we'll end up with something like the Node.js npm registry or Java's Maven repositories, so that platform builders don't have to reinvent the wheel all the time. So, let's add the capability center first. Next, let's take a look at the capability. It's not installed, so we need to install it. Okay, now it is installed; let's take a look. You can see that Kubewatch has been installed. Next, we'll create a component just to try out this new capability. It's the same as in Lab 1, except this application has only one component. Then we'll attach this capability to the component. Look at the structure of the command line: it's vela, then the trait name, applied to the component name, then the application name, and then the environment variables. Let's copy that. This is the webhook URL; we'll just copy it in. Okay, it succeeded. Let's take a look: do we see any events? Yes, it keeps populating events. So that's a simple example of how to add a new capability from the community into your Vela system. This is a simplified diagram of KubeVela. From the user's perspective, you deal with workloads and traits through the UI. The UI consists of the CLI, the GUI, and the appfile, all of which are application-centric, meaning the application is the first-class citizen, the main API, the entry point of KubeVela. Under the hood, we have the capabilities from the community, and we also have the capability discovery and management system; like any repository, we need to be able to manage our assets. So that's KubeVela in a nutshell. This is the overall architecture of the KubeVela system; it provides a more detailed, more comprehensive view of all the logical systems within KubeVela.
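The capability-center workflow just walked through might be sketched like this; the center name, catalog URL, component and app names, and flags are all hypothetical, so consult `vela cap --help` for your release before relying on them:

```sh
# Register a capability center backed by the community catalog repo
vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry
vela cap center sync

# Install the kubewatch capability from that center
vela cap install my-center/kubewatch

# Attach the new trait to a component: vela <trait> <component> --app <app> [env vars]
vela kubewatch mycomp --app myapp --webhook https://hooks.slack.com/services/...
```

Once the capability is installed and synced, the new `kubewatch` subcommand and appfile section appear without any rebuild of Vela itself.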
At the bottom, we see that all the capabilities come from the community; that's the first takeaway from this diagram. We've talked about KEDA and Flagger, but now I want to talk about Crossplane. Crossplane does two things for us. One, of course, is provisioning resources on different clouds. The second is that Project Crossplane also includes the OAM Kubernetes runtime. This OAM runtime shapes the raw CRDs into building blocks that can be reused by KubeVela and other projects. That leads to the second takeaway: the KubeVela system actually has two layers of abstraction. One is the OAM runtime, which shapes the raw CRDs into building blocks. Once you import those building blocks as traits and workloads, the CUE template and the appfile add another layer of abstraction to present the user interface. That's two layers of abstraction, and it gives us great flexibility in designing the best user experience for an application-centric application platform on Kubernetes. We've heard a couple of references to Crossplane so far in this presentation, but let's take a quick second to introduce it a little further. The easiest way to explain what Crossplane does is that it handles the infrastructure for your applications. It does this by extending the Kubernetes control plane with a set of CRDs and controllers that allow you to provision and manage infrastructure resources. Those could be cloud provider-managed services like an Amazon DynamoDB database, or they could be on-premises infrastructure, but Crossplane allows you to control and manage them from inside Kubernetes. It is now a CNCF Sandbox project; we donated it in June of this year. It comes from the same creators as the Rook project, which is also a CNCF project and graduated this year, so we're happy about that news. There are three main feature areas to Crossplane.
The first one we've already touched on: you can provision infrastructure in a declarative way from inside Kubernetes. But then you can build on top of those infrastructure primitives to essentially create your own infrastructure API, your own platform, that you can offer up to your teams so they can consume that API and self-service provision their infrastructure when they need it. Third, it allows you to run and deploy applications that consume this infrastructure. Okay, so now let's do the last lab: managing cloud resources and applications in an application-centric way. In this lab, we're going to deploy a fairly complicated cloud-native application. The application has six different components; one of them is a cloud resource, a database, which we'll provision using Crossplane, and the others are regular microservices. Before we get started, we have to install Crossplane. For this lab we have verified against Crossplane version 0.13, which is why we've also provided the Helm chart locally. The first step is to create a namespace, and the next is to install the Crossplane chart. All right, that's done. The third step is to install the Crossplane CLI, which I have done already; keep in mind that we need to add this CLI to the PATH, otherwise kubectl cannot execute it as a plugin. The next couple of steps configure a cloud provider, in this case Alibaba Cloud. This step creates a Secret using your access key and secret, so make sure you change these strings here. I'm going to enter mine and edit that part out of the recording. All right, coming back, we're going to create and configure the provider. Let me explain a little of what we've done so far, though I think Jared would do a much better job.
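The provider-credentials step generally looks something like the following pair of objects; the resource kinds, API group, and key names vary by provider and Crossplane version, so treat every name here as illustrative:

```yaml
# Secret holding the cloud credentials (values are placeholders)
apiVersion: v1
kind: Secret
metadata:
  name: alibaba-account-creds
  namespace: crossplane-system
type: Opaque
stringData:
  accessKeyId: <your-access-key-id>
  accessKeySecret: <your-access-key-secret>
---
# Provider object pointing Crossplane at those credentials
apiVersion: alibaba.crossplane.io/v1alpha1
kind: Provider
metadata:
  name: alibaba-provider
spec:
  credentialsSecretRef:
    namespace: crossplane-system
    name: alibaba-account-creds
    key: accessKeySecret
  region: cn-beijing
```

The important pattern is the indirection: managed resources never embed credentials; they reference a provider object, which references a Secret.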
So up to this point, the cloud provider part, we have created a CRD and a controller in our own cluster that will control the lifecycle of the remote cloud resource. The second part is the configuration part, which chains different resources together. In our case, we're provisioning a database, but also the storage for that database, so it's two things together; it's a composite. This configuration is a composite configuration. Now, keep in mind: whose job is this? It's the platform builder's job to provide these to customers and developers, but today, since we're self-serving, we have to do it ourselves. Let's also reflect on where we are right now. Remember, in the last system diagram we had all those raw resources at the bottom, and through the OAM Kubernetes runtime they became building blocks. That's where we are now: we have the building blocks. The next step is to write a definition for those building blocks and import them into the Vela system, where they become traits and components. In our case, we're going to create a definition for this database, and it will become a workload type. Let's do that. Okay, that's done. One important thing: this was created on the cluster, so we need to synchronize back to our local Vela system. Let's take a look: there's the RDS, which is what we named it. Remember, we have a template, so we could have called it a different name; I'm just using the simple word RDS here. That's all the prep work, actually. Now let's do a vela up, and we'll just ship this application. That's how simple it is compared to all the previous steps. This deployment takes a long time, because we actually have to provision the database on Alibaba Cloud. Meanwhile, let's take a look at this vela.yaml file, which is an appfile. There are six different components, and they use two workload types: webservice and RDS.
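The definition that turns the database building block into a Vela workload type could be sketched like this; the referenced claim CRD, API group `database.example.org`, and CUE parameters are hypothetical stand-ins for whatever the composite configuration actually exposes:

```yaml
# Hypothetical workload definition wrapping a Crossplane database claim
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  name: rds
spec:
  definitionRef:
    name: rdsinstances.database.example.org   # the composite claim CRD
  extension:
    template: |
      output: {
        apiVersion: "database.example.org/v1alpha1"
        kind:       "RDSInstance"
        spec: {
          engine:        parameter.engine
          engineVersion: parameter.version
          storageGB:     parameter.storage
        }
      }
      parameter: {
        engine:  *"mysql" | string
        version: *"8.0" | string
        storage: *20 | int
      }
```

After applying this on the cluster and running `vela system update`, the `rds` workload type becomes usable in an appfile exactly like the built-in ones.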
These workload types, again, get their templates from the definitions, so you can change them; you can negotiate with your platform builder, because they resemble nothing of the Kubernetes API underneath. For the web UI I actually added a route, which is a trait. Look at this: we simply added a domain. Do you recognize Flagger here? Probably not, right? That's the beauty of using templating, and the OAM runtime, to basically model the data twice. Look at how many environment variables there are for just one component or service. If you did this through the CLI, it would obviously give you a hard time, and with so many different components, it's really hard to keep track of everything. So the appfile really is a practical way of keeping your application together and providing the app-centric user experience. Now, let's take a look at how many resources are actually generated in Kubernetes. See, there are so many of them; imagine having to keep track of all those resources yourself. All right, let's check the database. Okay, I think the database is ready. Let's access the UI. As I mentioned, I attached a route trait to the web UI, but my local cluster is a kind cluster where I didn't install an ingress controller, so I can't use it that way. If you do have ingress, which most public clouds will, you can use the get ingress command to get the IP address. See, this is the web UI's route trait, and normally the IP address would appear here. Then you would edit your hosts file and add that host so you can access it by hostname. But since I don't have ingress configured, I'm just going to use port-forward; that's also very straightforward. We'll use the web UI pod, and the port is 8080. Okay, this is the dashboard. Let's refresh some data and take a look at the flights.
Here are all the flights available at this hour, and this is the earthquake information: all the earthquakes that happened recently. And lastly, the weather. So that's the demo app; we need to give credit to the folks who actually built this demo, not us. All right, let's recap what we did. Basically, we just used one vela up command, with the appfile written and prepared in advance; that's the developer's job. Overall, across the three labs: in the first lab we created two components; in the second we just added one trait to a component; and in the last one we just used the appfile. So in total, about four commands are really all a developer has to run to do everything we did. If you think that's impressive, please come and join us. And that concludes the demo; thank you. Now that we've seen a little bit of Crossplane in action, let's dive a little deeper into the architecture and the three functional areas we talked about earlier. Let's start with the first one: being able to provision infrastructure using the Kubernetes API. As mentioned earlier, cloud provider-managed services and on-premises infrastructure can be represented in Crossplane as CRDs. This lets you declaratively configure a custom resource to capture the desired state you want for your cloud provider-managed services or other infrastructure. Controllers inside Crossplane watch for events on those resources and reconcile their desired state with the actual state in the cloud provider or other infrastructure. This enables you to use kubectl, or any other tool that talks to the Kubernetes API, to provision and manage infrastructure that actually lives outside of Kubernetes in most circumstances. In the diagram at the bottom of the slide, we use the example of Amazon RDS.
On the left, we have a custom resource that captures all the configuration you might want for your Amazon Relational Database Service database. You use kubectl to send it to the Kubernetes API server. Inside the control plane, Crossplane has an RDS controller that watches, via the API server, for events on that RDS resource. When it sees that the RDS resource was created, it calls out over the network to the AWS API, using AWS's RESTful APIs, to turn the desired state captured in the resource into actual state inside Amazon's cloud. On the screen now, sticking with AWS, is an example of some of the CRDs that represent services and infrastructure inside Amazon. You can see this on the doc.crds.dev site, which hosts the documentation, the specs, and all the fields for the CRDs offered by Crossplane right now. Basically, we see a bunch of CRDs in the aws.crossplane.io API group, and they capture a whole range of services and resources inside AWS: networking, caches, databases, Kubernetes clusters themselves, et cetera. For all of those, you can create instances inside Kubernetes, using Crossplane to create, provision, configure, and manage real-life instances of those infrastructure resources inside Amazon. And that's just one example; you can do the same with Google Cloud, Azure, Alibaba, Packet, and others as well. The second feature area of Crossplane is offering declarative infrastructure APIs for your application teams to consume. We do that by composing together some of the infrastructure primitives we saw previously, like the cloud provider-managed services and on-premises infrastructure.
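To make the RDS example concrete, a managed-resource manifest could look roughly like this; the field names follow the general shape of Crossplane's AWS provider, but they vary across provider versions, so treat the specifics as illustrative:

```yaml
# Hypothetical AWS RDS managed resource for Crossplane
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
  name: example-db
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t3.small
    engine: mysql
    engineVersion: "8.0"
    allocatedStorage: 20
    masterUsername: admin
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: example-db-conn
```

Applying this with kubectl is all it takes from the user's side; the Crossplane controller then keeps the real database in AWS reconciled with this desired state, and writes the connection details into the referenced Secret for applications to consume.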
We compose those into higher-level API abstractions of infrastructure and then offer them to our application teams to consume. As an example, think about a MySQL resource that an application team might want: they want a MySQL database for their app. One way to do that would be to compose that MySQL abstraction, under the covers, out of some Azure resources: the Azure MySQL server, a resource group for it to live in, and a firewall rule to open up access to it. We compose those Azure resources together and offer them to our application teams as a MySQL abstraction, a MySQL infrastructure API they can consume on demand when they need it. A nice part about this is that it hides the complexity of the infrastructure and environment details from the application teams. Even better, it allows the infrastructure owners to encode the policy, best practices, and configuration that are important to their organization, and expose only the simplified infrastructure abstraction, this API, to their application teams, so they get the infrastructure they need, when they need it, but in a safely configured and secure way. And this is all done with no code writing at all; it's done declaratively by the infrastructure owners to surface this API to their teams. Let's look at a diagram that shows a little more of what I'm talking about. In the top left, we have our application developers, the application team, and they want to consume a MySQL database. We, as the infrastructure owners, have composed together a MySQL instance API, an abstraction that represents MySQL. One of my application teams wants the AWS flavor, and another application team wants the Azure flavor. But to both teams, it's pretty much exactly the same thing, right?
They're dealing with the exact same MySQL instance, an infrastructure abstraction we've offered up to them; they're just setting a little config knob to tweak which flavor they get. Underneath the covers, that selects a specific composition we've put together to represent an Amazon or an Azure MySQL. Infrastructure primitives like an Amazon RDS database, a subnet group, and a security group are composed together to form a specific implementation of this simplified MySQL API we've exposed to the team, and likewise we can do something similar with Azure. Note that in this example we're making different compositions to fulfill the MySQL abstraction with different cloud providers, but you could do the same thing with different classes of service. For instance, you could write one composition that represents a high-performance database, and a separate composition, for the same infrastructure API, that represents a low-cost version. Either way, you as the infrastructure operator control the policy, the configuration, and which primitives are composed into this abstraction, in order to give the team what they need with the policy that you're in control of. At the bottom of the diagram, once again, we have the providers reconciling the infrastructure primitives with the cloud provider APIs, so that the actual state in the clouds matches what the team has requested for their infrastructure. The third feature area of Crossplane is running and deploying applications as well. It does this through its support for and implementation of the Open Application Model, the OAM spec, which is definitely a focus of this presentation today as well. It allows you to deploy applications alongside the infrastructure they need to run on.
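In Crossplane's composition model, the pieces described above map roughly onto a definition, a composition, and a claim. A heavily simplified sketch follows; all the names, the `database.example.org` group, and the API versions are hypothetical (Crossplane 0.13-era composition APIs were still in alpha/beta, so check your release's docs):

```yaml
# Define the abstract MySQLInstance API offered to app teams
apiVersion: apiextensions.crossplane.io/v1beta1
kind: CompositeResourceDefinition
metadata:
  name: compositemysqlinstances.database.example.org
spec:
  group: database.example.org
  names:
    kind: CompositeMySQLInstance
    plural: compositemysqlinstances
  claimNames:
    kind: MySQLInstance
    plural: mysqlinstances
---
# One composition fulfilling that API with AWS primitives
apiVersion: apiextensions.crossplane.io/v1beta1
kind: Composition
metadata:
  name: aws-mysql
  labels:
    provider: aws
spec:
  compositeTypeRef:
    apiVersion: database.example.org/v1alpha1
    kind: CompositeMySQLInstance
  resources:
    - base:
        apiVersion: database.aws.crossplane.io/v1beta1
        kind: RDSInstance
        spec:
          forProvider:
            engine: mysql
---
# The app team's claim: a label selector is the "flavor knob"
apiVersion: database.example.org/v1alpha1
kind: MySQLInstance
metadata:
  name: team-db
spec:
  compositionSelector:
    matchLabels:
      provider: aws
```

An `azure-mysql` Composition with `provider: azure` labels would satisfy the very same claim API, which is exactly the AWS-versus-Azure flavor switch described above.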
And it allows you to do that in a standardized, normalized way, where you can declare your applications in a very similar way to how you're declaring your infrastructure. Crossplane is the Kubernetes implementation of the OAM spec. And it's a very good fit, because they both employ a model with a strong separation of concerns, where you've got a few personas at play. You've got the infrastructure operators, the people that are the owners and in charge of the infrastructure and services of the platform. Then at the top layer, you've got the application developers that are building the application components and don't really need, or have, much insight into the specifics of the environment they'll be running in; they want to express their needs for their applications and infrastructure in a very general way. And then you've got your application operators that are the runtime deployers and builders of the applications, marrying the infrastructure and the applications together. So that strong separation of concerns is supported in both the OAM spec and Crossplane, and being able to declare your applications and your infrastructure in a single standardized way is a very nice fit. So let's put all three functional areas of Crossplane together now with a final diagram that summarizes the architecture. Your application team at the top there can use the Open Application Model.
They can use some of the Kubernetes core resources directly to declare their application, and they can get the infrastructure they need from the layers below: the infrastructure APIs and abstractions that you as the infrastructure owner or operator are defining and exposing to them, with just the simple configuration that they need, enabling them to self-service, on demand, the infrastructure for their applications in a standardized way, very similar to how they define the applications themselves. Those infrastructure abstractions that you're exposing are composed of infrastructure primitives underneath, using Crossplane's composition feature to pull together, and set policy and configuration on, a number of infrastructure primitives, all composed together to form this infrastructure API. And at the bottom, those are talking to the cloud provider APIs to make that infrastructure happen in reality. So application teams are getting their applications deployed, and they're getting the infrastructure that they need through a simple abstraction, all in a consistent way. The infrastructure operators are putting together these platform or infrastructure APIs, encoding the organizational policy, best practices, and configuration that's important, and enabling the application teams to get that infrastructure when they need it, but in a safe and secure way. And everybody's happy at the end there.
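To make the "declare apps and infrastructure the same way" point concrete, here's a minimal, illustrative pairing of an OAM-style application component with an infrastructure claim (shapes simplified; API versions and names are examples, not taken verbatim from any release):

```yaml
# Illustrative OAM component describing the application workload...
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: web-frontend
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
      - name: web
        image: example/web:1.0
---
# ...declared in the same spirit as the infrastructure claim it needs.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: web-db
spec:
  parameters:
    storageGB: 20
```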
Okay, let's go ahead and start exercise 4 now, the final exercise of this tutorial. In this exercise we're going to be building and offering an infrastructure API to our application teams, and this API is going to focus on making some infrastructure resources in AWS available for application teams to consume. You can follow along with everything, because it is all published and made available on GitHub at the link provided there; it's under the upbound organization and it's called platform-ref-aws. Basically, we're going to be making some network resources, a Kubernetes cluster, and a Postgres database, and we're going to be building an API that we will make available for consumption to our application teams. We're going to author this, push this configuration up to a registry, and then install this configuration into a Crossplane instance, a control plane that we have. Then we'll go ahead and publish some of the claims that we make available in this configuration, offer those to our application team, and allow the application teams to provision their infrastructure self-service, on demand, whenever they want to. And it's going to have all of the policy and configuration that we declare inside of this configuration that we're going to build together. All right, let's get started. The easiest way to get started here is to go to Upbound Cloud at upbound.io and create a hosted Crossplane instance, which also comes with some nice UI to watch the platform and infrastructure API that we're going to be building come up together. All right, so I am at upbound.io. I already have a kubecon-staging platform here, but I'm going to go ahead and create a new one for kubecon-production; this is my prod kubecon environment (can't spell that, so let's fix it), and let's kick this off. So that's running now, and it doesn't take very long for our
Crossplane instance to come up, but let's go ahead and start talking about the configuration and platform infrastructure APIs we're going to build. The first thing we're looking at here, at the root of the repo, is just a simple metadata file, the crossplane.yaml. This says that we are building a Crossplane configuration, that it's the AWS reference platform, that it was made by me, and some simple metadata about what version of Crossplane it runs on, what its dependencies are, and things like that. The more interesting content, though, is in some of these folders. I've got the repo organized by resource or infrastructure type, so let's first dig into the network type. We're going to see a similar pattern for each one of these types, for network and also for cluster and database: two main files, the definition of that API, and then its composition, what it is made of. In the definition for the network infrastructure API we're building, we're going to see that it is a composite resource definition; we like to call these XRDs in Crossplane. It's got some UI metadata, which will influence how it shows up in the UI if you wanted to create one from there, the configuration fields that would show up, et cetera. But another important part here is that we're defining a network API for our app teams to consume, and we're defining the shape of that API: what are the configuration knobs that the application team will get to turn when they want to create a network, self-service, on their own? There's not too much we're giving them here; we're giving them the name of what they want the network to be, and what cluster this network will belong to. So we're not really giving them a lot of configuration at all. And then, under the covers of this definition of an infrastructure API we're building, is the composition: what are the underlying
infrastructure primitives that will make up this network API that we're building? And so here is a composition, a Crossplane composition, that matches back to that network API that we're defining, and there's a whole bunch of infrastructure primitives that belong in here: a bunch of AWS resources, so a VPC, an internet gateway, multiple subnets that we're building here, a route table, and a security group, and I think that's the end of them. So basically this is composing together a bunch of different AWS networking primitives, and all of those will be instantiated when the application team, self-service, asks for a network. All this configuration information and the policy and everything: this is a set of basic infrastructure primitives capturing that configuration and policy, but underneath the API line that we're building together. We'll see a very similar pattern for the other types of infrastructure APIs that we're building. We're also building a database API, so let's look at that. We're building another composite resource definition, underneath the database/postgres folder, and this XRD, this composite resource definition, is going to be for Postgres. The shape of this Postgres API is going to have a couple of configuration knobs for the application team to turn as well. For instance, we have a storageGB field that we're defining here; it's an integer type, and that field will basically be used to determine how big of a database they get. We're not exposing much to them here at all, once again: just how big do you want it, and what network do you want it to use (a network reference field), and that's about it in terms of the definition of the Postgres infrastructure API we're exposing to our users. And then underneath the covers again, underneath the API line, this Postgres API is composed of a couple of different AWS resources. So
here's a composition, once again, that is for the Postgres infrastructure API that we defined, and it will have a DB subnet group and an Amazon RDS instance. So here is some policy configuration being captured: what size we want it to be, or rather what instance type. And here's a very interesting and important part of these compositions that we're authoring. Remember that in the definition we exposed a storageGB field; inside the composition, underneath the API line, we're going to do a patch. We're going to take the storageGB field from the application team's request for a Postgres database, and we're going to map it down into the RDS instance, the AWS infrastructure primitive, into a particular field inside the Amazon API: the allocatedStorage field will be the recipient of our storageGB field. So this is a way that we expose configuration to the application team, but without exposing the entire surface area of the Amazon RDS database, or whatever other infrastructure primitive we are wrapping for them. Remember that you can have multiple compositions for each infrastructure API you're exposing. For instance, we could have a fast, high-performance database with a certain set of configuration parameters here, like maybe a beefier instance type, and then we could also have a cheap version or flavor of the infrastructure API that uses a smaller, lower-cost instance type. Through those means we can expose different classes of service for our application teams, but without giving them the entire surface area or the ability to create these instances in the cloud providers' APIs directly. We're putting this API line in front of them, which makes sure that they use the configuration and policy that we, as the infrastructure owners, are okay with. Okay, so then, we're not going to go too deep into the cluster one,
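The storage-to-allocatedStorage patch being described can be sketched roughly like this (a hedged approximation, not copied verbatim from platform-ref-aws; the provider API group, version, and instance class shown here are examples that may differ by release):

```yaml
# Composition fragment: the claim's storageGB value is patched onto the
# RDS primitive's allocatedStorage field; the instance class is policy
# the platform team fixes, not something the app team can set.
resources:
- name: rdsinstance
  base:
    apiVersion: database.aws.crossplane.io/v1beta1
    kind: RDSInstance
    spec:
      forProvider:
        engine: postgres
        dbInstanceClass: db.t3.small   # baked-in policy
  patches:
  - fromFieldPath: spec.parameters.storageGB
    toFieldPath: spec.forProvider.allocatedStorage
```

A "high-performance" composition for the same XRD would differ only in bases like dbInstanceClass, which is how different classes of service share one API.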
but cluster is a self-service infrastructure API for the application team to get their own Kubernetes cluster when they want one. So there are some configuration fields here too: what size of cluster do they want (small, medium, or large), how many nodes do they want in the cluster, et cetera. But this is an interesting one, because its composition is actually a nested composition, where underneath the top-level cluster API that we're exposing, it's composed of an EKS API and then a services API that will install a bunch of platform services like Prometheus, tracing, and things like that. So not only can we put together infrastructure primitives like cloud provider services and networking, but we can also put together other composite resources, creating a kind of nested tree of them, and build more complicated abstractions to expose to our application teams. Let's go ahead and build and package up this configuration that we built together, this set of infrastructure APIs, and push it up to a registry. We're going to use the Crossplane kubectl plugin, so we're going to do a kubectl crossplane build. We're building this configuration, and we're just going to say ignore the examples directory and call it package.xpkg. So let's do that build real quick; that was easy. Now we should have that package.xpkg sitting on disk here, which we do. Oops, accidentally copied that, didn't mean to. What I want to run is the kubectl crossplane plugin to push this configuration up to Upbound Cloud, the upbound.io registry that we are using right now. I'm going to tag it as a 0.0.3 version, and I'm going to push it up to my repo here, the reference AWS platform. So let's push that, and that should push it up to the registry. Let's head on over to Upbound Cloud again and check my repository. Sweet, we got
that 0.0.3 version that we just pushed. So let's go ahead and take a look at our platform, which finished quite a while ago, and install this configuration into the platform, so I can start exposing it and making it available for my application teams. I've got my kubectl pointed at that hosted Crossplane instance I have in Upbound Cloud, kubecon-production. There we go. Now I'm going to use the kubectl crossplane plugin, while pointed at that hosted Crossplane instance, to install the configuration that we pushed up to the registry: my platform-ref-aws at 0.0.3. I'm going to go ahead and kubectl crossplane install it. Click that; that's good. And now we can do things like kubectl get pkg to get all the Crossplane packages that are installed here, and yes, our reference platform AWS is now installed. That's great. So now that that is available, we can see that, as the infrastructure owner or the administrator of this platform for my team, I've got all the raw composite resources exposed: the composite clusters, the composite networks, the EKS services, all that stuff. But it's not quite ready for my team still, so I want to go ahead and create a workspace for my team to use. Let's create a team-one workspace. Okay, let's go ahead and create that. That will be the place in my kubecon-production environment where team one will work. What I want to do now is enable some APIs, because nothing is available yet. I want team one to be able to, on their own, self-service create clusters when they want to, create networks when they want to, and create Postgres instances when they want to. Now that that is done, I am going to log in as a team-one member, and they will get a different view, where when they log in they will see a custom cloud control
panel, basically, which is a view of all the APIs that I have defined, published, and made available for them. So let's go ahead and do that too. Okay, so I am logged in as a member of team one, and basically I am looking at a custom cloud control panel, a console that my infrastructure team has essentially built for me using the configuration and everything that we defined and declared earlier in this exercise. I log in here and I am looking at all of the infrastructure APIs and abstractions that I have been enabled or allowed to create. Similar to when you log in to the AWS console or the GCP console and you see all the services that you can create, I am seeing essentially a custom one that was defined by my infrastructure owners, my infrastructure team. So I come in here, and I am allowed to create a cluster or a network or a Postgres instance. I want to start off by creating a network, so I will click in here, and this UI is generated by the declarations, the definition, the schema that we defined earlier on when we were playing the role of infrastructure owners. So I create that network, and it will start getting ready for us. Okay, our network infrastructure resource is ready to go, done being created. As a member of my app team, team one, I can keep creating more infrastructure resources in much the same way. Now that we have an underlying network fabric created, we could go ahead and also create a cluster or a database for my application to use. We are rolling out a new service and it needs a database, so I could create, on demand, a Postgres instance for it as well; I can specify the size that I want for it, which is part of the configuration that my infrastructure team has configured or enabled for me to set. I could go ahead and create a whole cluster in order to roll out some containers or other services that my app team needs to run; I can say how many nodes I want in it and what size I want those nodes to be.
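Behind those console clicks, what actually gets created is just a claim against the published API. As a rough sketch (the group, kind, and field names here are illustrative, and the real platform-ref-aws claim shapes may differ), the Postgres request might look like:

```yaml
# Hypothetical claim the team's console submits on their behalf; only
# the knobs the XRD exposed (size, network) are settable here.
apiVersion: aws.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: team1-db
  namespace: team1
spec:
  parameters:
    storageGB: 20
    networkRef:
      id: team1-network
  writeConnectionSecretToRef:
    name: team1-db-conn
```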
But essentially, the infrastructure team has created an API that the console is built on top of and that generates the UI, so that I, as an app developer for team one, can on demand create the infrastructure that I need to get my job done and roll out my applications more easily and with less friction. So let's review everything we did here. We started, as an infrastructure owner, with creating an infrastructure API: we created a network definition, a database definition, and a cluster definition as well. We defined the schema for these infrastructure resources, the infrastructure APIs that we're going to make available for our team; we defined the configuration knobs that we want the application teams to be able to set and configure; and we encoded all of our policy and configuration that we think is important into this API that we're building. We defined the infrastructure primitives that these infrastructure APIs should be composed of; we used composition to put together infrastructure primitives, to specify how they should be rolled out and what their policy and configuration is. Then we built and published this package of configurations up to the Upbound Cloud registry, and we created a running instance of Crossplane for the application team, with all of these infrastructure APIs and abstractions that we defined for them, with all the configuration and policy baked in underneath the API. We exposed that API to the team, so the team could then, self-service, get the infrastructure that they need, with a nice UI, a custom cloud control console, basically, to do it. Now they can create the infrastructure that they need when they need it, to keep their application development rolling, and be empowered to get the infrastructure that they need without creating a ticket and coming back to us, while that infrastructure will still have all the policy and configuration that we
baked into the infrastructure abstractions that we defined for them. Let's talk about the community, currently, for OAM and KubeVela together. For OAM, we have the specification that's getting stabilized; we're moving to beta, which will be backward compatible. And KubeVela is still a work in progress, not ready for production yet, but we plan to have a 1.0 release in December. Currently we have features such as the Appfile, the CLI, and the dashboard; we have traits like rollout, route, and scaling, and they are all coming from the community, we didn't do anything ourselves; and the default workloads are web services, tasks, and backends. We have an active community, and we have the Slack channel and Gitter for both OAM and KubeVela; if you're interested, please come and join us. And here are some links for how to get involved in the Crossplane community as well. We mentioned it's a CNCF sandbox project, and it's open and very welcoming to new contributors and adopters and anybody who wants to get involved. crossplane.io is the main website to jump into everything, and all the links to everything else can pretty much be found from there. But we're also super active on Slack, and welcoming and talkative there too, so join us at slack.crossplane.io as well. You can check out the rest of the links on the page here, but we would love to have you join the project. And a quick look at our roadmap: the big news here is that we are working towards a v1.0 release for the end of this year. Beyond that there are many good and exciting things on the roadmap, but basically a big focus here is getting to v1.0, and that means some hardening; we're graduating some of our APIs to get to a stable place, so there won't be breaking changes and you can upgrade between versions with minimal headaches and hassles. There are definitely some exciting features around composition itself and our package manager as well. A big investment we'll be making is around the providers for
all the cloud providers, to greatly expand the amount of coverage that they have. Our goal is to get to 90% coverage of all the services offered by each of the cloud providers, and we'll do that by working with the cloud providers themselves to do some code generation and get hooked into those pipelines, so that new services will come out very quickly and with minimal maintenance effort. We're very excited about that. And we have some investments still into OAM as well, to get that one to a v1beta1 level with some new features too. So there are a lot of exciting things on the roadmap as we get to a v1.0 for Crossplane by the end of the year and further beyond into 2021. That's everything we've got for the presentation and tutorial today. We definitely are really appreciative of you joining us and following along, and we will have some time for some questions now as well. So thank you, everybody; really appreciate it.