Wow. There's a lot of you. So I'm going to be talking about managing applications on Kubernetes. I'm Brandon Philips, CTO of CoreOS, and I have five minutes. The goal here, I think, with Kubernetes is to make it easier to build, deploy, operate, upgrade, and decommission applications over time, and to ideally do this with minimum toil, removing the drudgery of getting your application into production.

Now there's some good news and some bad news. The good news is that we're making it easier and easier to get apps into production. The bad news is that there are going to be more and more apps that we need to manage in production. This is an economic effect that was observed during the Industrial Revolution: the price of coal went down, and then people started to use more coal. And this holds true in pretty much any economy. If you lower the price, people will use more of it. If you make it easier, they'll use more of it. And I think the fact that my voice is echoing in this room full of all of you indicates that there is a lot of demand for making deploying apps easier. So we're going to end up with more apps.

But if you have a lot of apps, there are a lot of things that you need to be accountable for. What versions of those apps exist? Where are they deployed? Are they healthy or not? Who actually owns the app? Who put this on the cluster? There are just a lot of questions you need to answer over time.

Pre-Kubernetes, apps made it into production by being a bunch of binaries, some scripts, and then, boom, production. In reality it looked more like this: a bunch of binaries, some scripts, and then, what exactly exists in production? Our current state is that we have a bunch of containers, some manifests, and then those containers end up in production. But still, it's really just a bunch of containers all over the place across our Kubernetes cluster. And then there are all these questions of where are the dashboards for this thing, who owns it, where are the docs, who has the SLAs. And it's usually a really poorly maintained internal wiki page, or just tribal knowledge, where you're asking, can somebody tell me where the health monitoring is for this app?

So what I'd like to see, and what a lot of us in the Kubernetes ecosystem would like to see, is a state where we start to define containers plus manifests as a distinct version of the application. Then those versions end up in a catalog where you say: this is the app, it has some internal name, and it has a number of versions. And then you're able to say that these instances of the app tie to those versions. It's a lot less confusing when you can go back and say, all these containers map to this version of this app, which was built in this way. Then you can start to do things like, oh, I need a temporary penetration test environment or a temporary debug environment. And what we'd like to do is tie all those things together with all the metadata that makes an app an app, and not just a bunch of containers.

So, a quick demo here. This is the Tectonic Console, and we've created a catalog of services called Open Cloud Services that uses this idea. You come in and you enable apps like Vault. This is all driven via the Kubernetes API; the console just makes it nicer, since doing demos against the raw API gets a little old after a while. And we can do things like resolve dependencies: Vault depends on etcd, so we deploy both of those things.
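To make that catalog idea a bit more concrete, here is a rough sketch of what catalog, version, and instance records could look like as Kubernetes custom resources. The kinds, API group, fields, and names below (Application, ApplicationVersion, ApplicationInstance, apps.example.com, and so on) are hypothetical illustrations of the idea, not the actual schema used by Open Cloud Services.

```yaml
# Hypothetical catalog entry: "this is the app, it has an internal name
# and a number of versions." Not the real Open Cloud Services schema.
apiVersion: apps.example.com/v1alpha1
kind: Application
metadata:
  name: vault
spec:
  displayName: Vault
  owner: security-team@example.com          # who owns the app
  docs: https://wiki.example.com/vault      # where the docs live
  versions: ["0.9.0", "0.9.1"]
---
# Hypothetical version record: containers plus manifests pinned together
# as one distinct, identifiable version of the application.
apiVersion: apps.example.com/v1alpha1
kind: ApplicationVersion
metadata:
  name: vault-0.9.1
spec:
  application: vault
  images:
    - quay.io/example/vault:0.9.1
  dependencies:
    - application: etcd                     # Vault depends on etcd
      version: "3.1.8"
---
# Hypothetical instance: a running copy of the app in a namespace, tied
# back to the version (and therefore the containers) it came from.
apiVersion: apps.example.com/v1alpha1
kind: ApplicationInstance
metadata:
  name: vault-prod
  namespace: payments
spec:
  application: vault
  version: "0.9.1"
  purpose: production                       # could also be pen-test or debug
```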
And immediately, inside of the namespace where I deployed them, we see the couple of base things required to spin up these Vault instances, and the application is available inside of that namespace. We can do things like configure an instance of this application, click create, and it will go off and deploy all of the pieces and parts necessary to get a Vault application going. So we've deployed that, we've configured it, and now all the pieces and parts to get that app up and running are deployed.

And the cool thing is that we can tie all these things together. We can start to ask: what are all the resources in Kubernetes that exist for this instance of this app? And similarly, we can tie things together inside of the etcd cluster: where are all the pods that exist to support that etcd cluster for this app?

So this is kind of a vision of what we see happening in the future, where we use Kubernetes to hold all this metadata together. And we want to create a shared toolkit, using Kubernetes APIs accessible via kubectl, that lets you tie all this stuff together. The catalogs and the instances that I just showed you are actual, real Kubernetes resources.

And wait, what about Helm and everything else? I think these are solving good problems, but by creating a toolkit instead of tools, I think we're able to solve this problem in a better way, and we can answer these questions in a consistent way.

If you want to get involved, there's the App Def working group that is starting to tackle some of these fundamental problems, as well as SIG Apps. If you'd like to chat more about this, come see us at the CoreOS booth. And thanks.
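As a rough illustration of the "what are all the resources for this instance of this app" question, the pieces an instance deploys could carry a common label and an ownerReference pointing back at the instance resource, so a single label selector pulls up everything that belongs to it. Labels, selectors, and ownerReferences are standard Kubernetes mechanisms; the label key, names, and the ApplicationInstance kind below are made up for the example and are not the actual Open Cloud Services schema.

```yaml
# One of the pods deployed for the vault-prod instance. The label and the
# ownerReference tie it back to the (hypothetical) ApplicationInstance, so
# "kubectl get all -l apps.example.com/instance=vault-prod" answers
# "what exists in the cluster for this instance of this app?"
apiVersion: v1
kind: Pod
metadata:
  name: vault-prod-etcd-0000
  namespace: payments
  labels:
    apps.example.com/instance: vault-prod   # made-up label key
    apps.example.com/component: etcd        # which dependency this pod serves
  ownerReferences:
    - apiVersion: apps.example.com/v1alpha1
      kind: ApplicationInstance
      name: vault-prod
      uid: 0f1e2d3c-0000-0000-0000-000000000000   # placeholder uid
spec:
  containers:
    - name: etcd
      image: quay.io/coreos/etcd:v3.1.8
```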