Hey, welcome. Guess it's the evening crowd. Tonight I'm going to be talking a bit about composability for cloud native applications, and the theme of the talk is "plays well with others." When I was growing up, young children were graded in school on how well they played with other kids in the schoolyard, because we want to make sure that when they grow up, they're capable of working well with others in different environments. We've seen a lot of lessons learned from the open source Kubernetes project on how to build cloud native applications that work well with others and have greater composability and reuse, and as it matures, we hope to see more of that. That's what I'm going to be talking about. I'm Steve Judkins, a program manager at Upbound in Seattle. We love Kubernetes, and you all probably are aware of and love Kubernetes as well, or you wouldn't be here. I'll be talking a bit about the benefits of container orchestration for scaling your own software deployments. But that's not all that Kubernetes brings to the table. It also gives you a declarative management API, and its active state controllers reconcile the actual state of your application with the desired state, which has proven great for complex apps. But there's even more: it's extensible. Kubernetes lets us define custom resource definitions, and implementing controllers is the right way to extend Kubernetes, so we can manage more things within our infrastructure than we have been. If we take a look at the modern cloud application, you're probably leveraging Kubernetes in your apps; it just makes sense to deploy our apps to containers. Like many companies, you've probably built these apps and deployed them to a cluster, just like the big container ship here. When we talk about deploying with Kubernetes, those workloads are running in your own cluster, and you have a responsibility for managing that cluster.
Most of us want to take advantage of cloud platforms. The cloud providers have built these big, beautiful cities with lots of infrastructure, and they have an SLA they can give you. By taking advantage of that for managed databases, we get scalable databases with managed backups, replication, and disaster recovery. There are just a lot of features there that we don't have to manage ourselves when we're deploying to our own cluster. And of course, public cloud providers have been rolling out new, advanced features at an amazing rate. If you want to take advantage of those differentiated services like search, AI, and ML, it's kind of a pain to manage equivalents in your own cluster. So what's wrong with this picture? Modern applications are composed of more than just the services that we're writing and maintaining. You're going to have dependencies on databases, object storage and buckets, pub/sub, search, monitoring: the typical application components. But do you really want these all running in your cluster? Do you want to be paged at midnight? I don't. The other aspect of this is that your IT and DevOps groups are typically running a completely different set of tools for orchestrating and provisioning infrastructure. That has become a dumpster fire of tools that don't align with all the great Kubernetes work that's been going on. So can we solve this in an elegant way? If we take a moment to look at the different approaches to infrastructure orchestration that are out in the wild, we'll see some patterns, and I find it useful to categorize these components this way. Along the vertical axis, you have your services: those running in your Kubernetes cluster, which you might have deployed with Helm, and the basic cloud provider services you're taking advantage of. Something like CloudFormation can cover the gamut of AWS, but it doesn't extend to other cloud providers.
At the top of that axis is utilizing differentiated cloud provider services across multiple cloud providers, and something like Terraform and HCL lets you do that kind of thing for installation and some upgrade cycles. Along the other axis, you have whole-resource lifecycle management. Many tools like Ansible, Chef, and Puppet have gotten you closer on the install and upgrade path. But when we look at CoreOS, who introduced the operator framework, we see what it means to have the entire lifecycle managed within an active state controller. There have been some projects, like the AWS service operators, that have taken that into individual cloud providers. But nothing has really reached the target of having all of the complex managed services available across all the cloud providers, for the entire lifecycle. So how could we solve this in an elegant way? What if it were based on the Kubernetes engine, brought those cloud provider services and infrastructure into Kubernetes, gave you one API to manage everything, and provided portability for the workloads you have beyond containers? It'd be nice, wouldn't it? If we look back for a moment, the time feels right for this. We're at an interesting moment in the evolution of cloud native: the level of abstraction has increased over time. If we go back to virtual machines, OpenStack and AWS EC2 got us into infrastructure as a service and gave us operability, resiliency, and elasticity. When Kubernetes came along, with managed Kubernetes in GKE and EKS, we got containers as a service, and with that, more modularity. With modularity, we saw the evolution of lambdas and functions as a service. But I think now we're at a point where we can start to get real benefits of portability and modularity out of this. We have all the pieces in place within Kubernetes to start building what I think of as portable resources.
In a workload-centric model, those portable resources can span things that we're writing and things our applications have dependencies on. When we look at building this on the Kubernetes engine, there are lots of advantages and lessons learned in Kubernetes, including a declarative API, which is great; kubectl for native integration with other tools, libraries, and UIs; and obviously a rich ecosystem and community around all of this. It lets us apply some of the lessons learned from container orchestration to multi-cloud workloads and resources. When we look at the resource lifecycle management piece of this, Kubernetes has custom resources that we can use to model cloud provider services as well. And we can use custom controllers to provision, configure, scale, monitor, upgrade, fail over, and back up: all of that lifecycle management logic can be put into a controller. That controller can handle active reconciliation, so that we essentially have hands-off management of those cloud provider services. We also have this portable resource abstraction notion. Kubernetes has a powerful volume abstraction that gives portability to stateful applications. But what about other resources? Cloud provider resources might include databases, buckets, clusters, caches, message queues, data pipelines, everything there. So let's abstract those too. We really want to get to writing once and running anywhere. How do we do that? About seven months ago, Upbound introduced Crossplane. It's an open source multi-cloud control plane. It's still young; it's at version 0.2. What it does is leverage the Kubernetes API machinery, etcd, and the workload scheduler, and it introduces resource controllers. Today we support three cloud providers, AWS, Azure, and Google Cloud, and there are other cloud providers we'd like to extend that support to.
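As a sketch of what modeling a cloud service as a Kubernetes custom resource can look like, here's a hypothetical managed Redis instance. The group, kind, and field names here are illustrative, not an exact Crossplane schema:

```yaml
# Hypothetical example: a managed cache modeled as a custom resource.
# A custom controller watching this kind would provision the actual
# cloud instance, then keep reconciling actual state against this
# desired state (hands-off lifecycle management).
apiVersion: cache.example.org/v1alpha1
kind: RedisInstance
metadata:
  name: demo-cache
spec:
  version: "4.0"                # desired Redis version
  memorySizeGB: 1               # requested capacity
  region: us-central1           # where to provision
  # The controller writes connection details here once the
  # instance is up and reachable, so apps never see raw credentials.
  writeConnectionSecretTo:
    name: demo-cache-conn
```

The important part is that this is ordinary declarative Kubernetes: the spec describes desired state, and the controller owns everything from provisioning through failover and backup.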
It can also provide controllers for actively managing other third-party software; we'd like to see that extended to things like Elastic, Confluent, Databricks, and other typical frameworks your app is going to depend on. And of course, by building on the Kubernetes API machinery, you have all the same user interfaces and client libraries that you've grown to love and use. One of the concepts we're introducing here is separation of concerns: a developer can compose their app and resources in a general way, think about the dependencies that they have, and at development time make sure those dependencies are less tightly coupled. I think of it like this image from Todd McLellan, who deconstructs common consumer and household items, and you see just how complex some of these simple items have become. When we think about how we build upon the machinery of software, we have to get to a factory model of components that are designed to be modular and reused in other applications, regardless of where those applications are intended to run. The separation of concerns also lets an administrator define the environment specifics and policies. At development time, I might be running this in Minikube, installing Crossplane and using cloud resources. But when I go to production, that environment can be maintained by an administrator who can set policies. I don't necessarily see secrets; I just need a connection string, or the secret references, for a database request. So how do we model that? We looked at persistent volume claims and storage classes in Kubernetes, and took that as the model for creating resource claims and resource classes. If there's a resource class that is a database, I can make a claim on it as an app developer. This allows us to do dynamic, on-demand provisioning of resources when we deploy to a specific environment.
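For reference, the existing Kubernetes pattern we're borrowing from looks like this: an administrator defines a StorageClass for the environment, a developer makes a PersistentVolumeClaim against it, and the volume is dynamically provisioned when the claim binds:

```yaml
# Administrator side: a class of storage for this environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/gce-pd   # GCE persistent disks in this environment
parameters:
  type: pd-ssd
---
# Developer side: a claim with no cloud specifics at all.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: fast-ssd
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```

Resource claims and resource classes apply this same split to databases, buckets, caches, and so on: the class carries the environment specifics, the claim stays portable.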
If we look at how this enables us to get to a GitOps-style cloud native development pipeline: we're going to have app owners who are developing the YAML for their application, which includes resource claims and workloads. We'll have administrators who are provisioning the environment, that is, the resource classes available within it. They'll be choosing the providers that can be used, say AWS or GCP, and they'll be defining concrete resources like secrets and VPCs and other things within that environment. As an app developer, I'm going to be insulated from that. In this way, the dev and ops worlds converge on a single Kubernetes API-based resource for that stack or app definition, composed of those two different pieces. To give you an example of what a resource claim looks like as an app owner, here's Postgres. The request for Postgres doesn't have a whole lot of environment specifics; I don't yet say where this is going to run. I basically specify the engine version that I need and that it's going to come from a cloud Postgres provider. As for the resource class, there are defaults that are installed when Crossplane is installed, but the system administrator can go in and update those defaults. The class is going to contain properties that are specific to that cloud provider; on AWS it might be a certain instance size or database size. Administrators are going to be able to define policy and allow apps to default to it. One of the partners we've been extremely fortunate to be working with is GitLab. We wanted to choose a partner with a fairly complex real-world application to prove out Crossplane's model, and we're learning a lot about how to design the controllers and the workloads. Just to give you an example, GitLab is currently deployed as a Helm chart with about 4,800 lines of YAML.
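To make the split concrete, the Postgres claim and class pair described above might look something like this. This is a sketch; the API groups, kinds, and field names are illustrative rather than the precise 0.2 schema:

```yaml
# App owner's portable claim: just what the app needs, no environment
# specifics, no cloud provider names, no secrets.
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: gitlab-postgres
spec:
  engineVersion: "9.6"
  classReference:
    name: standard-postgres     # resolved per environment by the admin
---
# Administrator's class: cloud-provider-specific defaults and policy,
# installed with sensible defaults and tunable by the admin.
apiVersion: core.example.org/v1alpha1
kind: ResourceClass
metadata:
  name: standard-postgres
provisioner: cloudsqlinstance.database.gcp.example.org
parameters:
  tier: db-n1-standard-1        # Cloud SQL machine tier on GCP
  storageGB: "10"
  region: us-central1
providerRef:
  name: gcp-provider            # credentials owned by the admin
```

An AWS shop would swap in a class whose provisioner and parameters target RDS; the claim itself doesn't change.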
It consists of 14 deployments, three jobs, nine services, and 16 config maps, so there's a lot going on in there. Their main dependencies are on Postgres, Redis, and object storage. They wanted to be able to deploy across different cloud providers, so we started to look at their current install and how we could make it better. With a custom resource definition, there's a simple config experience. With the custom GitLab controller, we can generate these artifacts and essentially manage the state of the deployment, and get to the point where the deployment of GitLab is a fully automated and portable multi-cloud deployment. This is how it looks when you see what we've designed for GitLab. By the way, this is all out on our open source site, so you can review the examples for this. You'll see that the Postgres, Redis, and bucket controllers are pulled in as resource claims within GitLab on AWS or GCP or Azure, and those are mapped to the different cloud-provider-specific services, so it might be buckets on AWS. We're also going to be spinning up and deploying to a managed cluster, so GKE or AKS or whatever the cloud-provider-specific resource is. Crossplane just manages this for you. Now I'll show you a demo that steps through this step by step. It decomposes things: the app would normally bundle these steps together, but I think this will give you an idea of what it looks like. First we're going to install Crossplane. In this case I'm going to run it in my local Minikube cluster; you might just run it where your workloads are going to run for test and staging. We grab Crossplane from the master channel. And by the way, I can never type this fast, and I don't want to screw up the demo, so it's a video for you. We install the Helm chart from our master channel to get Crossplane installed.
The key part of this is that once we've installed Crossplane, we will have on our system custom resources for a set of classes. You can see here, when we run kubectl to get those CRDs, that we now have various CRDs for buckets, databases, memory caches, and other things that have been installed and are available for all three cloud providers. Once we've installed Crossplane, we want to grab our cloud provider credentials. For this example, we're just going to do the simplest possible thing: we're going to grab our credentials from GCP and stuff them into a YAML file. So here we grab the credentials and just copy them into our provider YAML. Now that we have those credentials, we're going to deploy them to our cluster as secrets. With the secrets available, we can then provision some resource classes. The process of provisioning resource classes is really the domain of the administrator; if you're playing both administrator and app developer, you can put these two roles together. An administrator in this case might decide they're going to make available the resource classes for AWS only, if you're an AWS shop. And you can go in and set various properties; in this case, we were setting the storage size and region for the standard Cloud SQL instance in GCP. We can come in and configure these defaults. Here, that was the Postgres configuration; we also look at the bucket configuration and make sure that it's good to go before I start having apps unleashed into our buckets. Once that's set up, I can now kubectl create those resource classes. Those are now created; they're essentially provisioned in that Crossplane instance and available to any apps that want to consume them. Now we go back to what the app developer does. In this case, we're going to provision some managed services in Google Cloud, and then I'll show you AWS.
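The credentials wiring described here can be sketched as two resources: an ordinary Kubernetes Secret holding the GCP service account key, and a provider resource that points at it. The group, kind, and field names are illustrative, and the key material is a placeholder, not filled in:

```yaml
# The GCP service account JSON key, stored as a plain Kubernetes Secret.
apiVersion: v1
kind: Secret
metadata:
  name: gcp-credentials
  namespace: crossplane-system
type: Opaque
data:
  credentials.json: <base64-encoded service account key>   # placeholder
---
# A provider resource referencing that secret, so the resource
# controllers can authenticate when provisioning managed services.
apiVersion: gcp.example.org/v1alpha1
kind: Provider
metadata:
  name: gcp-provider
spec:
  projectID: my-gcp-project       # illustrative project ID
  credentialsSecretRef:
    name: gcp-credentials
    namespace: crossplane-system
    key: credentials.json
```

Because the credentials live in a Secret owned by the administrator, the resource classes can reference the provider without app developers ever handling the raw key.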
These are the dependencies for GitLab. We make use of Kubernetes, so we're going to spin up a cluster that we're going to deploy GitLab to, and we're going to make references to Postgres and Redis. What we did first was deploy the buckets; they have about nine different buckets that they use. Since we kubectl applied the bucket resources, we now have that controller going out and spinning up the buckets in GCP, and once those are reconciled, they'll be bound within the resource. You can kind of see that right now: only one of the buckets is bound, but the others are still being created. If we switch over and look, we can just keep polling this, or you can go get a coffee and just wait. The buckets are fast; other things might take a few minutes. But now you've got a reconciler that's just doing that work for you. It's hands off. Here I just switch through the console and look to see what the state of things is, making sure that things are spinning up. But really, I don't have to do this; it's just to show you. Once those resources are created and bound within Kubernetes, we can then take those resource claims that we have and export them to our GitLab Helm chart. These two steps would generally be bundled together in your application, but we're showing them here so that you can see what's going on under the covers. Here we have the GitLab resource claims that we made in our default namespace. We're going to copy those to our target cluster and make sure that they're available in the GitLab namespace. Then we're going to update the Helm chart and make sure we have a values file that has all of our secrets, or references to our secrets, and is ready to go. So there we go: we've copied everything into the target cluster, and it's ready to go.
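One of the bucket claims being applied here might look something like this. Again a sketch with illustrative names, not the exact schema:

```yaml
# Hypothetical bucket claim for one of GitLab's object stores.
# kubectl apply creates the claim; the bucket controller then
# provisions the backing bucket and flips the claim to Bound
# once reconciliation succeeds.
apiVersion: storage.example.org/v1alpha1
kind: Bucket
metadata:
  name: gitlab-artifacts
spec:
  name: gitlab-artifacts-bucket   # desired bucket name in the provider
  predefinedACL: Private
  classReference:
    name: standard-bucket         # admin's class: GCS here, S3 on AWS
```

With about nine of these, watching them move from creating to bound is exactly the kubectl polling shown in the demo.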
Now, if we grab the Helm chart and everything from the GitLab repo, we are ready to install it, updated with our values. That's it; now we're ready to install GitLab. This will install GitLab to the target cluster that we provisioned as a resource claim, and we go ahead and run that. GitLab takes a little while to spin up, so we can either go get coffee again or watch as it spins these services up. Now we're waiting for real, using Kubernetes, of course, to check the status of whether these services are bound, and you can see here the services that are being initialized, including the buckets and Postgres. Once those are spun up, we will have an IP address that we can grab. Crossplane doesn't yet provision things like VPCs, open ports, and so on; we'd like to extend it to manage those things. So you'll see here that I just grab the IP address manually, drop it in, and create a new DNS entry in Google. We keep waiting here and watch the state of our pods. Now we see that the status is running or completed on everything except their test pod, so we are basically ready to go. We can then grab the status URL from the controller, fire up the console, and we are ready to go. So that took a complex app and made it hands off, with the GitLab controller handling all of that reconciliation. That controller can also continue to monitor the health and status of that GitLab deployment. And that is GitLab with Crossplane. We'd love to welcome the community to continue to grow Crossplane. We're just about to hit a 0.3 release, and we'd love to see folks get involved. I've put some resources up here if you want to take a look at the GitLab deployment and other sample applications, like WordPress, on the site. We have regular community meetings, and we'd love to extend Crossplane into infrastructure provisioning and other cloud providers.
Thank you. Any questions from the audience? OK. Thanks, everybody. Oh, did you have a question? Yeah. First of all, I'm a newbie to containers and Kubernetes, so my question might be a ridiculous one. But what is the biggest advantage of introducing multi-cloud through your product? Is it correct to understand that by using your product, application code portability is guaranteed? Right, right. So one advantage is that all the different sides of your house, your administration and your developers, are converging on Kubernetes. But also, the things your application depends on may not live in a cluster; they may be managed things across clouds, and your application spans all of that with a single development framework. Most applications today are essentially targeting a cluster, or they're doing multi-cluster deployments, but it's still limited to your clusters. Are there any activities to minimize the gaps between clouds? From an application developer's point of view, if there were no gap between two clouds with regard to programming against them, there'd be no problem. So is there any activity to minimize such gaps? Well, if I understand your question: there are going to continue to be differences across cloud providers. They're creating walled gardens, and their incentive is to have differentiated services. But for many of the services, particularly services that have been taken from open source and put into the walled garden, like databases, you have compatibility at the wire protocol level. If I want Postgres, it doesn't really matter that it's RDS.
So taking those and creating a portable abstraction for requesting a SQL database, and letting apps use portable abstractions for the 80% scenario before getting into cloud provider specifics, is a great way to reduce development time and give people a common set of controllers that are well tested and very rich in their active state management. You can certainly go and develop a controller that is specific to a cloud provider and has all of the rich features, so the breadth and the ability to configure provider-specific features is there. OK, thank you. I understand. All right, thank you. I think we're out of time. Yeah. Well, one more question? OK. Coming from that question: I thought there were some service provider controllers being developed. Has an interface been developed for each cloud service provider? Isn't that helping to alleviate some of the issues with compatibility? Right, so we do have resource classes for AWS, Google Cloud, and Azure, so there is support directly for those. It doesn't extend to the full breadth of their APIs, and that's something we would continue to evolve as Crossplane matures. OK, thank you. Cool. Thank you very much.