Hello. Welcome, everybody. Welcome back from lunch. Hope you got some good food and got a little re-energized. Welcome to the second afternoon. I'm going to be presenting, sharing, talking with you all about Kubernetes as your cloud control plane. We'll talk about what that means exactly. We'll show how that's realized to some degree today. As we go through it, I'm going to point out some of the things that aren't perfect yet, and that's where we can work together to improve. And then we'll wrap by showing how you can build on this pattern yourself and summarizing what's next. So yeah, we'll start off — I guess I just said this introduction. I'll tell you who I am and why I care about this; I've been thinking about it for a lot of years. What exactly is a cloud, and what's a control plane? Why might we want a common one? What life was like before, or without, Kubernetes. And then we'll jump into a couple of demos: how you can use Kubernetes to provision infrastructure, and to provision entire applications along with infrastructure. We'll hopefully get a chance to at least briefly show how you can build your own extensions for cloud control via Kubernetes operators. And then we'll wrap with what's next. So, a little on me — like I said, why am I interested in this. I've been working in open standards and open source for a good 10, 12 years now. I worked at Microsoft evangelizing the Swagger spec way back in the day, and OpenAPI and related specs. I worked in Azure for a pretty long time, on the Azure Resource Manager spec; I actually pushed them to open-source that and make it an open standard way earlier. Now I participate in the CNCF's TAG App Delivery in particular, and if you want to continue these discussions, I think that's a reasonable place to do it, so you can find us over there. Yeah, and now I work for Red Hat — great place. We use Kubernetes as the control plane for everything, which is one of the reasons why I'm excited to be with Red Hat.
Okay, so before I share a definition of a cloud control plane, I wanted to ask: when somebody says "a cloud," how would you define that? Does anybody want to offer a definition? I'm just curious. Not cloud, the abstract computing concept, but a cloud. Yes — "someone else's computer." Someone else's computer, yeah. Okay, so hosted somewhere else. Anyone else? Okay. So here's my definition of both of these terms. A cloud — and I'm just going from what exists out there, AWS, Google, Azure, though this definition is going to extend further — is a collection of infrastructure capabilities and services that you can take advantage of, integrate into your app, and utilize. And by this definition I'm including any service provider, like MongoDB with their Atlas service, or Confluent — and Red Hat offers a few services too; I'm sure they're happy if I mention them. So a cloud is really just a bunch of managed services. The biggest ones, like AWS, have a lot. And a control plane is the mechanism by which you control and provision all of those services. So in this graphic I just used AWS — and I have a graphic on Azure on the next page — to show what the control plane looks like if you're managing AWS directly. At the heart of it, there in the middle, is of course AWS's API, and that's really the thing you work with to provision your AWS services underneath. And then there are a couple of layers on top. There are AWS's own first-party libraries and tools: of course they have a slew of SDKs, they have CloudFormation, which itself builds on the API, they have a CLI, and a couple of other tools as well. And then there's a layer above that, where we have the Terraforms and the Pulumis and the Crossplanes, and certain libraries like the Go Cloud library, which build further on top of the AWS SDKs. So that comprises the cloud control plane for AWS, or at least one perspective on it.
Here's Azure; it looks pretty similar. The difference is Azure's central service — it is an API, but it's a little more declarative to begin with, oriented around submitting templates and having an orchestrator reify them. Okay, so this is kind of the world without Kubernetes. And — this is not my best, it's the last graphic I created here — just to summarize the control planes that we have today: there are really those three layers. We could talk about provider-specific libraries like the SDKs and CloudFormation or Azure Resource Manager. We could talk about things like Terraform and Pulumi; I tried to illustrate this here with that kind of purple column going up from each of the SDKs. Terraform and Pulumi and Crossplane are trying to project the entire API from the underlying cloud providers. They're not trying to genericize — you can't provision a VM in EC2 the same way you would in Azure — but they're also not hiding any of the available features from you. That's in contrast to the last four on the bottom, the generic cloud libraries, where they are trying: libcloud, for example, lets you provision a compute instance or a storage instance, but tries to do it generically so it works across all the clouds. So I just wanted to bring that up as part of the topic of discussion as we go. So, one more graphic illustration of what cloud looks like before, or without, Kubernetes. The idea here — I know it's a lot of tiny text — is four different patterns for basically deploying a workload, deploying an image or a container instance. The far left one is a task in Fargate described with CloudFormation. The middle one is a container instance in Azure described with Azure Resource Manager templates — ARM templates, if you've heard of those. The top right is a droplet in DigitalOcean provisioned by Terraform, because that seems to be their first-class infrastructure as code.
And then finally Kubernetes — that's kind of Google-native, I guess you could say; that's what I was thinking. But they're all kind of similar. I mean, you can't see it that well out there, I know, but they all have a version, a type, and a list of resources specifying their desired state. But as software developers, we know it doesn't matter if they're close: these are totally different API patterns. If you want to provision these four things, you're going to have different patterns, different processes for each one. And this is the ultimate place where we end up in this world of multi-cloud that most of us are embracing — maybe we should have asked that as a question. As we bring in more providers — and like I said before, the definition that we gave of cloud highlights this: it's not just AWS and Azure, it's also Confluent and Mongo and whoever you add tomorrow; you've got to bring them in also, and they have a different API shape for provisioning, say, a database in Mongo — it starts to become a full mesh, and it starts to get too hard to manage. So that's kind of stage one: setting the stage, the background before Kubernetes and where we were getting to. And now I want to present the case for using Kubernetes as the interface to manage all of those clouds. So, I accidentally used the wrong template for this deck — you might have noticed I used CloudOpen instead of ContainerOpen. But I felt like it was kind of Freudian, you know, appropriate, because here I am saying it's not about containers. And of course that's not exactly true — Kubernetes is all about scheduling containers. But I think a really valuable part, and maybe one of the reasons it succeeded, has been its API. It's been that common format for describing a pod, in this case — resources, things that you want to create, services you want to run in the cloud. Some folks are maybe familiar with the Kubernetes control plane already, but I'm going to go ahead and describe it.
I'm just going to describe these components a little bit more. So, the core components in Kubernetes's own control plane, if you will: there's the scheduler, which does the work of — hey, I've got a container or a pod that I need to run somewhere, I've got a bunch of nodes, let me pick a node and tell it, communicating through the kubelet, to run it. But there are two other important components. One is the API server — if I were to use a more descriptive term, maybe "resource manager" comes to mind, no doubt influenced by having worked in Azure for 10 years. That's basically all the API server does: you send it a resource, an API resource, it puts it through its admission webhooks and things like that, it decides, okay, this is good for my cluster, and it puts it into etcd. And then it's done, honestly. That's all the API server does. In fact, you can declare a custom resource and start applying instances of it into your cluster. They won't do anything, but they'll be admitted by the API server. Behind the API server are your controllers, and that's where all the heavy lifting happens. Basically, they're watching for changes to resources through the API server. So any time you do a kubectl apply, the API server says, okay, we're good, puts it into etcd, and notifies the controllers: something new has come in; whoever needs to know, now it's your time to check that out and do your work. So it's this paradigm of an extensible resource manager with a slew of reifying, reconciling controllers. That's what is allowing Kubernetes to become a control plane for so much more, to do so much more. Now, I thought it was important here also to call out another big difference between, say, Terraform and Kubernetes in particular. Remember before we said that one of the paradigms is the Terraforms and Pulumis that project the provider APIs.
And a lot of the stuff we're going to see in Kubernetes does that too. There is an advantage even here for Kubernetes over, say, Terraform, in that Terraform is not continuously reconciling. With Terraform, you run a plan or an apply, and that's it — in fact, it needs a tfstate file to know what the previous state was so it can get you to the next state. Of course, there are projects looking to run Terraform in a continuous loop. Reconciliation — let me make sure everyone understands that concept. It means that when a resource is admitted to the cluster, these controllers look at what's desired, what was specified there; check the current state of whatever's out there — it might not exist yet at all, or maybe it's there but not in the desired state; calculate what they need to do; and then apply it. Most of these controllers also run on a periodic basis, so they wake up every 10 minutes and say: did it drift? Okay, let me bring it back — so, drift detection. That's the core of how Kubernetes works: this reconciliation, bringing things back to that desired state. And that's a difference from, say, the Terraforms. Even Helm charts — we'll contrast this with operators later on — Helm charts, too, you apply once. They kubectl-apply stuff out into your cluster, but they don't necessarily make sure that it stays that way. Okay, so reconciliation is a big deal; I even made it a little bigger here, the same graphic. You run kubectl apply, you're submitting a resource to the API server. It checks it, says okay, writes it into etcd. Then your controller managers at the bottom get notified: hey, something new has arrived. They check whether the stuff that's out there now matches, and if it doesn't, they figure out what to do. So that's a core part of how Kubernetes works. And now the culmination: with Kubernetes's model like this, we can start moving everything behind it.
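The reconciliation idea just described — compare desired state against observed state, then act to close the gap — can be sketched in a few lines of plain Go. This is a conceptual sketch only: real controllers do this against the Kubernetes API with client-go or controller-runtime, and the resource names here are made up for illustration.

```go
package main

import "fmt"

// state maps a resource name to its configuration. A real controller
// would compare typed objects fetched from the API server instead.
type state map[string]string

// reconcile compares what was declared (desired) with what exists
// (observed) and returns the actions needed to close the gap. Running
// this both on change events and on a timer is what gives you drift
// detection.
func reconcile(desired, observed state) []string {
	var actions []string
	for name, want := range desired {
		got, exists := observed[name]
		switch {
		case !exists:
			actions = append(actions, "create "+name)
		case got != want:
			actions = append(actions, "update "+name) // drift detected
		}
	}
	for name := range observed {
		if _, ok := desired[name]; !ok {
			actions = append(actions, "delete "+name)
		}
	}
	return actions
}

func main() {
	desired := state{"kafka-cluster": "3 replicas", "db": "small"}
	observed := state{"kafka-cluster": "1 replica", "old-cache": "x"}
	for _, a := range reconcile(desired, observed) {
		fmt.Println(a)
	}
}
```

The same comparison runs whether a change event arrives or a periodic resync fires, which is why the cluster converges back to the declared state even after out-of-band drift.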
I see that my graphic got a little messed up — one of them is missing — but I tried to move those graphics from the last section underneath here, into the controllers. So we're moving all of those cloud provider SDKs, maybe even the Terraforms and Pulumis too — everything — behind the Kubernetes API server, implemented in controllers. And this isn't just a dream; it actually is starting already now. All the big clouds have offerings at this point: there's AWS Controllers for Kubernetes, Azure Service Operator, GCP Config Connector. And I'd be remiss not to mention Crossplane — Crossplane tries to bring all of the controllers for those clouds, and more clouds, together and put some common paradigms around them. But it's not just services from the managed providers. You can also provision things in your cluster — a Strimzi Kafka cluster, Knative, the cert-manager PKI system — by just submitting a resource to the API server. And then finally, like I was saying before, Atlas or Confluent. I wanted to call those out because it's not just the big clouds; these guys have operators as well. By the way, I'm using the terms operator and controller kind of interchangeably. Maybe that's not exactly right, but the idea is that those are common terms for things that look at a resource and reconcile the state in your cluster to whatever it needs to be. Oh, I just did it as a graphic, apparently. Okay, so we'll do one more eye chart here. We're going to do a demo of this, but here are three specs: for a Kafka cluster, for a certificate issued by my internal PKI, and for an RDS database. And this is a good time — I said I would call out some of the warts. So these all share what's sometimes called the Kubernetes resource model. They've got an apiVersion. They've got a kind.
They've got metadata with name and namespace, labels and annotations. They've got a spec and a status. At the second level after that, there are some conventions: there are things like conditions in statuses, which you might know, and other common ones — replicas is a commonly used term, template for a pod. There are some common attributes, but this is an area where we could use some work to get more commonality at the second level of those specs, in describing the attributes of the resources. Okay, I'll get a chance to call out more of those here. So now the demos. Let's see — what time is it? Okay, we've still got time; let's see if we can get everything done here. We're going to show a few of these: creating Kafka, a certificate, and that database instance at the least. This cluster is not going to last very long anyway, so don't bother trying to copy anything. I'm just going to have this in the background here — I'm going to watch my Kafka namespace so you can see as things go. And we're going to bring up this repo, which, by the way, you can go to. It's on github.com under my name — I'll put it at the end too — github, joshgav; oops, devenv. This is like the modern dotfiles or something. But I'm going to do it here in VS Code, so we'll just close that. You're welcome to check it out; it's a constant work in progress. The first thing — it looks like I should make that bigger: Appearance, Zoom In, Ctrl+=. There we go. Oops. Okay, so the first thing we're going to do is Kafka. And I just want to show you the script; it's a really simple script. Let's deploy the operator — oh yeah, let's just show the script, deploy-cluster. This is it. All of this is just boilerplate, and then I just do a kustomize apply. So a big benefit of having a common control plane is that the same tools work on everything. Kustomize, which does transformations and renders your templates — I can use that everywhere here.
So that's what we're going to see throughout this repo. I have this services section, and basically I have like 20 services that I can deploy into my cluster. Each one is just a pretty simple script that calls Kustomize and applies a bunch of templates that look just like the one I showed. So, just to prove it, I'm going to do it with make. I'm just going to run the make target for that Kafka cluster, and it's going to apply those things. And we should see — you'll see ZooKeepers coming up, you'll see Kafka coming up next. I could do a demo where I actually get in and publish and subscribe, but I don't think we need to do that right now. Come to me after if you don't believe that this is going to keep working. The next one I want to show is certificates — so, cert-manager. We'll leave that going in the background. There you go — there's my Kafka cluster coming up; see, it says ContainerCreating. We are going to deploy cert-manager, and this is going to look very similar. So here's deploy.sh, and this is a good chance for me to call out an issue also. Of course, once I get cert-manager installed, I can declare a certificate, I can declare an issuer, and that's what I'm going to show here, actually. Oopsies. So here, when I declare it, the way I do it is: I create a self-signed issuer, I have that issue me a root certificate, and then I create a CA issuer on top of that. And then what you'll see in, say, my dashboard is that I declare a certificate — a cert-manager Certificate — for the dashboard. And what I get out here, if I look at my connection: the secure certificate is valid. You can actually see it's signed — I can't zoom this in, but it says the CA root, it says the name of that certificate. And to do all that, all I had to do was the same idea: a kustomize build and apply. And so, as before, I can just do make cert-manager. Oh yeah, but I was going to mention the tricky part.
So I was able to create all these resources, but getting cert-manager itself installed — that's an area where we need some work. For example, here I used a tool called cmctl — "cert-manager cuddle," I guess — and internally it seems to just be deploying a Helm chart. But I'll show you other ones where they say, well, just deploy the Helm chart, so it's a helm install. Or other ones where it's a kubectl apply pointed at a YAML file. Yeah, and I'm going to show one more — actually, I'll show Keycloak really quickly here, because it gives me a chance to discuss Operator Lifecycle Manager. So the way I've decided to deploy Keycloak is using a framework called Operator Framework. There's a tool called Operator Lifecycle Manager, which can help provision any kind of operator that's been packaged in a certain way — so add that as a fourth way to deploy an operator. And that's why I do Keycloak this way: because I've installed Operator Lifecycle Manager, I can just say, subscribe me to the Keycloak operator. So I could say make keycloak, and it would do that here — install the Keycloak operator and bring it up. Now, the last one I wanted to show is Crossplane. Crossplane, again, was another one where they said to use Helm, so I installed Crossplane itself with Helm. But once it's installed, it's again a kustomize build. I point it here; I install a Provider for AWS; and then finally, here's the RDS instance, which I named dbinstance-2. And I can say make crossplane. It will go through, and at the end it will apply that RDS instance. And over here we should see this come up — the dbvpc already exists, but we should see that I created dbinstance-2. And I can say kubectl get dbinstances — hopefully it works. It says synced: true, ready: false. Hmm, maybe it's not going to work right now... no, it should. Oh, you know what — ha, there it is. I don't know if you saw that change.
It was dbinstance-1, because I created that earlier today and was deleting it. Now it says dbinstance-2. You'll see it's creating. So here I was able to declare this resource over in Kubernetes — dbinstances.rds — and have it go and create a database, a service, in AWS. And it actually wrote a secret out to my current namespace here, too, that I could use to connect to it — the dbinstance-1 password. Yeah, so the idea there is to show that using Kubernetes constructs, you can deploy any kind of infrastructure, and every day more and more is becoming available. Okay. Next thing — I don't want us to think it's only infrastructure; it's also applications. This might be controversial, and if you want to come fight with me afterwards at the Red Hat booth or something, I'd be happy to talk about it. GitOps. When you think about GitOps, you're really creating a custom operator, a custom controller, for your application. I've got a bunch of Kubernetes resources describing an API server I want to deploy, but I don't have anything that knows to watch those for changes and apply them into my cluster. So that's what an Argo CD or a Flux is for. Their job — well, they're their own operators, but their job is: when I create a descriptor of my application, saying where it should look for my manifests and such, they create this virtual controller, if you will, that's going to sit there, subscribe to events from Git, subscribe to events from etcd — like if I make a change to the configuration — and then make sure that the cluster reflects that state. So it's again applying this concept of getting things to a desired state: reconciliation. Argo and Flux become frameworks for creating your own operators — in this case, extremely opinionated operators that pull from Git or from S3 and that apply, say, Kustomize or Helm or maybe nothing; that's one of the options. But they're ultimately very similar to any other operator.
So what I want to show here is that, with that, I can deploy — and I'll show here — two components, which become Application YAMLs in Argo. So here's an API server which, as I'll show you when it's up and running, basically just says hello, or returns a list of widgets from a database. I just declare this Application. In this case I happened to keep all of the manifests in this same repo, even though the code is in a different repo. So I just say: hey, find the manifests here in this repo. I'm actually going to do the same thing for the Postgres database — I have a Postgres operator running here already. I'll run the same kind of Application YAML and have it point at my services/postgres base. And here's the same thing: I've got a kustomization file, I've got my Postgres cluster. And actually, another thing to point out — one of the tricky parts here is bindings. I've deployed my Postgres database, I've deployed my API server; how do I get the URL and the password from one into the other? That's actually another area of research, of work. In this case, my deployment just correctly loads up the environment variable from a secret that gets created. So let's just look here at my Argo. And I will say make api-server-argo, which is basically just creating those two Application instances. You see right away — wait, there's the database; and there's the API server. These guys are already out talking to that repo. They pulled it in; they've created the API server here already. So I can say — I should have shown you that it wasn't there before — kubectl get pods. What names do I have? argo-app, I think. And a couple are creating, but they're coming up. And I can actually click this button here; let's see if it works. Yeah — it's very hard to read, sorry — "Hello world." And there's actually a widgets endpoint; it's a Swagger server, so we could put that in there, but it will return an empty array at this point. Sorry. Widgets.
That will just prove the database is working. Okay — I hear them clapping; that means I'm low on time. So there we saw an application being created from a bunch of manifests, with the exact same API framework that was used to deploy the infrastructure. Now I can get them both out at the same time. Yeah. Okay. So we've gotten through the most important parts: Kubernetes as a standard API — you can use it to deploy infrastructure, you can use it to deploy applications. I did want to quickly show your options if you want to build your own mechanisms to extend Kubernetes, to use Kubernetes as your own control plane. There's a lot of work in this area. The biggest projects are Kubebuilder and Operator SDK. And what these do — there's a layer of libraries there; they build on some of the core components from Kubernetes itself, and they also have the libraries controller-runtime and controller-tools. Kubebuilder is those libraries plus a scaffolder. And out of the box it comes with — here, I'll just show you. If you do kubebuilder -h, you get these — they call them plugins. Scroll up a little; you see the main one here is the Go one. And then Operator SDK — I'll just do the same thing, -h for it — adds a few more. Kubebuilder and Operator SDK have really come together in the past couple of years, and Operator SDK adds on a few more scaffolds. By the way, you can just download this deck, and these things are all linked in there. And the key point that I wanted to emphasize — and actually I'm going to do a demo in a second — is that all any of these things have to do is just that core loop. Yeah, you've got to wire up to the API server and listen for changes; yeah, occasionally somebody deletes something, so then what do you do — there's a lot of boilerplate and shared concerns like that. But ultimately your concern is just: implement Reconcile. Implement that one method. So I went ahead and already pre-scaffolded a Go operator.
We'll just skip right to showing it to you. It's called the scratch operator. All I did was run operator-sdk init, and I told it to add one resource called a Scratcher. "Scratch" to me just means empty space, like a scratch container. And it created everything. The main thing it created is this: the Scratcher reconciler. There's the Reconcile method; you see it says "TODO(user): your logic here." There's my logic — it just says "reconciling resource." And in fact, if I... let's see if I have that running. I don't think I do, but I can run scaffold-go. And I've actually got a Scratcher resource right here; it's going to create that. Let's see if it can do it quickly — I'll let it do that. Yes, now I'm going to go ahead and just look at the logs: kubectl logs -n — so it created a controller manager for me; there it is, in the scratch-operator system namespace. Just going to look at the logs. You can see the very bottom one there — let me see if I can scroll it up a bit for folks in the back — it says "reconciling resource," and it tells you it was this Scratcher called scratch-scratcher. Okay, so it worked. Yeah, that's an interesting one, but I also don't want to lose the chance to show you the Helm one. So you can do operator-sdk init and say use the Helm plugin. This is an easy way to get started; that's why I'm showing it. If you already have a Helm chart — in this case, I borrowed from a project that I sometimes contribute to, Podtato Head. They have a Helm chart. So all I did was clone that repo down, point operator-sdk at that Helm chart, and say: create me an operator based on that. And it wraps the chart into an operator. Same thing here: I can run scaffold-helm, and we'll deploy it out. And then I can create — this is the Helm chart that it's wrapping; actually, this spec is the same as a values file. So it's going to apply this now. In fact, I've just got five minutes — kubectl get services.
Oh yeah, they're coming now — took a second. But there's the load balancer; let's make that work. Hopefully — come on. Oh, port 9000, sorry. Yeah. And the nice thing about doing it this way is that it's very easy to apply a change to my Helm values, by just saying kubectl get podtatoheads — let me just make sure that I'm in the right place; yes — and then kubectl apply -f... what have I got there? Oh no, not that one — the Podtato Head one. And now it should apply that out. Let's just watch kubectl get pods — that's going to be too late; it already replaced the hat one. And there I go messing that up too. Now let me do this. Aha! And the hat changed. All I had to do was change the declaration in there. Okay. What did I do with the deck? There it is. Okay. So: just implement Reconcile. Here's a list of all of those plugins. I mentioned the Go one; I mentioned the Helm one. The Hybrid one is pretty interesting — I didn't have a chance to exercise it before, so I didn't want to talk about it today, but it's one where they let you reconcile a Helm chart and add some extra Go methods. So that could probably help with a transition, if you want to start from a Helm chart and gradually graduate to using more Go. Okay, we already did this; there are no more demos, so we'll get through the last few things here. I would be remiss if I didn't let you know about the other frameworks that are around. They don't seem to be as popular, at least based on the articles and open source repos that are based on them. Metacontroller, though — it came from Google originally. The big difference with it is that it runs the reconciler out of process. I don't know if you can see that diagram — I tried to put it in there — but it's got a central controller which wires up to all the events from the API server, and then when it finds a relevant event, it sends an HTTP call to your process. Which has limitations, but it also makes it a lot cleaner.
You don't have to have all that boilerplate which I showed in the other thing. Yeah. Kopf is a popular one for Python, if you want to write something in Python. If I remember correctly, it breaks up the reconciliation into a more structured process, so you can do more with that. Java Operator SDK — that's from Red Hat; they're working on using Operator SDK to scaffold out Java operators. And then the last couple of things here: registries. Artifact Hub is a project from CNCF — so, you know, open source; well, they're all open source. There's also OperatorHub; that one's managed by Red Hat. But if you filter with kind=3 on Artifact Hub, you get OLM operators — ones that can be deployed with Operator Lifecycle Manager. Yeah. So, I promised I would call out, as I went along, some of the warts that exist right now. I tried to a little bit, but here's kind of a summary of them. And yeah, if you want to talk about these more, I'm happy to. I think one good venue is the CNCF's TAG App Delivery; for Kubernetes in general I'm not sure where else to talk about it, but that's a good place — or come talk to me at the Red Hat booth. The Kubernetes Resource Model — you've probably heard of that; there's a doc. That's the basic metadata/spec/status stuff. The Crossplane Resource Model tries to standardize more — we showed how it's projecting things through from the providers. I won't go too deep into it, but it's got a few sections there; they're trying to come up with some standards. Knative, if you've worked down deep in their stuff, uses a concept called duck types, which really are about, you know, does the status have an address with a URL property? So again, they're kind of defining APIs. Bindable is another one they have in there, which maybe implies: oh, I should inject a secret from this one into the environment of that one.
Things like that. So those are areas where, you know, there's a lot of discussion, but we could use getting to more standards, or at least more de facto standards. Service binding — I mentioned that there are different ways. Red Hat has a project called Service Binding Operator, where you can create a spec that says to bind two things together, and it will inject, like I said, the secret from one into the other. HashiCorp Vault is obviously kind of a de facto standard; they've got an agent sidecar and things like that. Crossplane has this idea of writeConnectionSecretToRef: it looks at that and puts the connection details into a secret. So these are all workable, and that's good, but if we could bring them a little closer together, it would make it easier for people trying to use these things. Operator managers — I kind of tried to highlight that there are, like, a billion different ways to do that. And the cloud providers — forget about it; they talk about their add-ons and things like that, which are operators that get deployed in their own bespoke ways. So there's room to improve there, so that we could manage the life cycle of our operators, and upgrade them, in a consistent way. Tenant-scoped operators — this is also a big deal. If you're used to using namespaces for tenancy, unfortunately custom resources and operators are basically cluster-level. So how do you say to a tenant of yours: you can have this resource but somebody else can't, or you can install this operator but they can't? There are things to work out there. And then the last thing I wanted to mention: this is all about specs, but devs don't necessarily want to write specs. So I'm kind of throwing out the term "spec-less" — if we have serverless, let's get to spec-less. We want to be able to infer, basically, the specs for common frameworks: oh, he's using a database; oh, he needs a certificate.
Be able to infer that without a developer having to write out full YAMLs for everything. That's like the Heroku app.yamls of the day, or the func.yamls of the functions world. So yeah, that's something to think about, too. So with that — I'm here for questions, and I'll be at the Red Hat booth. Yeah, we're at time, so I'm going to let everyone go, but I'll be here for questions. Thank you.