Hi, everyone. I'm Tiffany Jernigan. And I'm Josh Long. And we're going to be talking about beautiful Kubernetes controllers. Sorry we're not both there in person, but I'm here, so if you have any questions for me, feel free to come up and ask afterward. All right. As you can see on the screen, there's a bunch of contact information for Josh and me. There's also a GitHub link: everything you're going to see today, except maybe the slides — I don't know if they're up there — is something you can follow along with if you want to, so go ahead and check that out. And then, yeah, Twitter and email for Josh; mine's not up there, but yeah. All right. So a little while ago, Josh made a post saying that Kubernetes is his favorite YAML database. In response to that, Brian Grant said, basically, yes, by design, and included this diagram. So we have a database with a definition of something we would like, and then there's a loop that responds to the information put into that database. Those loops come in the form of controllers. More specifically, Kubernetes is a level-triggered, reconciling system: it spins up a loop that evaluates some system state, and if that state should ever drift away from the operator's desired state, the controller's job is to bring it back to that desired state. So there are two different types of triggering: edge-triggered and level-triggered. Edge is basically what you might think: you have an edge — something happens — and you trigger based on that. For instance, you might have a temperature that dropped by five degrees and then went up by three. With edge triggering, if you don't know where you started, you don't know exactly where you end up. Level triggering is more: what is the state at this point in time, and what is it at another point in time, regardless of how it got there.
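To make the distinction concrete, here's a tiny self-contained sketch — ours, not from the talk's repo — of why a missed event breaks edge-triggered logic but not level-triggered reconciliation:

```java
public class TriggeringDemo {

    // Edge-triggered: apply only the deltas (events) you actually received.
    // If a notification is dropped, your picture of the world stays wrong.
    static int edgeView(int[] deltas, boolean missFirstEvent) {
        int observed = 0;
        for (int i = 0; i < deltas.length; i++) {
            if (i == 0 && missFirstEvent) continue; // simulate a dropped watch event
            observed += deltas[i];
        }
        return observed;
    }

    // Level-triggered: ignore history; observe the actual state and compute
    // the action needed to reach the desired state.
    static int levelAction(int desiredReplicas, int actualReplicas) {
        return desiredReplicas - actualReplicas;
    }

    public static void main(String[] args) {
        // Two events arrive ("+2 pods", "+1 pod") but the first one is lost.
        System.out.println("edge view of pod count: " + edgeView(new int[]{2, 1}, true));
        // A level-triggered loop just lists the real state (3 pods) and reconciles.
        System.out.println("level-triggered action: scale by " + levelAction(3, 3));
    }
}
```

With the first event lost, the edge-triggered view believes there is 1 pod when there are really 3; the level-triggered loop reads the real count and correctly does nothing.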
So Kubernetes comes with some primitive controllers by default. If you've already worked with Kubernetes, you may have heard of or dealt with some of these: things like Deployments, ReplicaSets, StatefulSets, et cetera. Some of these controllers can handle multiple use cases — managing stateless or stateful apps, running your app on a specific node or machine, or running an app only for a specific amount of time or a specific number of intervals. So there's a bunch of things there by default. Then there are other, custom controllers out there for more sophisticated tooling. Some of these require something like persistence, or a quorum, or just something more sophisticated than what you get with Deployments or DaemonSets, et cetera. You can't necessarily get these things by mashing together a bunch of the built-in controllers that already exist. So this is a small — or big, depending on how you look at it — list of some of them. Josh wrote an entire blog post on this, so you can probably just go to the internet, search for "get to know a Kubernetes operator," and look into a bunch of these. You might recognize some of the ones here, like Elasticsearch, MongoDB, et cetera. And if those two lists aren't enough for you, well, don't worry: there's a bunch of products in the CNCF landscape that rely on custom resource definitions and the controllers behind them. Josh likes to describe this as a fractal: you click on one thing and you end up seeing a million more things. If you'd looked at this, I don't know, four years ago, it was a lot smaller — only a few thousand entries back then — but it just keeps getting deeper and deeper; back then the whole thing fit on one screen and you couldn't even click into anything.
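To give a flavor of those built-ins: a minimal Job — one of the default controllers mentioned above — that runs a pod to completion a fixed number of times might look roughly like this (the image and names here are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello
spec:
  completions: 3            # run the pod to successful completion three times
  template:
    spec:
      restartPolicy: Never  # Jobs require Never or OnFailure
      containers:
        - name: hello
          image: busybox
          command: ["echo", "hello from a built-in controller"]
```

The Job controller watches this object and keeps creating pods until three of them have exited successfully — the same observe-and-reconcile loop the talk is about, just built in.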
But now it just keeps going, which is really cool, because the ecosystem just keeps getting bigger and there's a bunch of new stuff there. So this here is a pretty picture of what Josh is about to be demoing soon. First off, when you communicate with Kubernetes, everything talks through the API server — you can see that up there. Here, we're going to be creating this thing called foo. It could literally be whatever you want it to be called; if you've been dealing with Kubernetes, you could replace this with an existing kind — maybe a pod, maybe a deployment, et cetera. The API server saves the request to the control plane database, which is typically etcd. And this is why Josh was basically saying that Kubernetes is just a database: without any action behind it, you just have data. You need something to ask the API server what's going on with those resources and whether there are any requests related to them. Otherwise it's like: hey, I have a pod, or hey, I have a foo — okay, well, now what? So here, for instance, we have our custom resource of kind foo. It gets added to the database, and if there is no controller, this is where things end. All right, that's it. But the way we've built foo here, foo will create a deployment, which uses the deployment controller, and that gets added to the database too. The deployment controller will then inquire with the API server: hey, there's this request for a new deployment. And it continues on down through creating the ReplicaSet and the pods. These controllers basically compare the actual state to the desired state. Then there's the scheduler, which is — go figure — another type of controller; it assigns the nodes for those pods. Basically, it's just controllers galore. So what we're going to do next is actually see all of this in action, in code. Yeah.
So, mind if I share my screen? Oh, yeah. Awesome. Okay, good. So we're going to build a controller. We've got the idea of contributing a foo to a database, but as Tiffany just said, without something monitoring that API server, there's no path forward for that thing — it's just going to sit there in the database and nothing will happen. Something needs to respond to those changes. So we're going to write a CRD to describe our custom object called foo. And again, as she just said, it could be anything. What do CRDs usually describe? They usually describe infrastructure. If you wanted to build a CRD for Google Cloud, you might have a custom CRD that understood and had attributes for configuring Spanner, for example, or Azure Cosmos DB over at Microsoft. You could build custom CRDs to act as configuration points for anything, really. That data goes into the database, and then a controller pulls it down and uses it to act on whatever you want. In this case — as we're going to do today — we'll have a custom controller that creates new deployments. But you could be talking to some back-end REST API that configures some infrastructure behind the scenes in some third-party system, whatever. The point is, once you have a CRD, you can use Kubernetes to manage these objects — whether they live on the cluster or off the cluster — in the same consistent fashion, through YAML. Or, yes, you can also use JSON, but most of us use YAML, and that's really very sad. Now, we're going to create a custom foo. Ours is not a particularly interesting object. It's just a foo — just a wrapper that manages one property; I think it's called name, something like that. That's it. If you were to write the equivalent definition in Java, that's what it would look like: it's a Foo with a string field called name. We're going to describe that type, with that field, to Kubernetes as a CRD. It's a class definition, if you will.
Then you can create an instance of that class by creating a new instance in config. You can say: I'm going to use this custom kind called Foo, and then you can parameterize it by providing a new value for the name attribute. Let's talk about that. We've got this code here — it's all in controllers-101, under Kubernetes Native Java. Tiffany and I worked on this, and it's there, and it works fine. Please check it out. Normally we'd take a gallant effort at live-coding this from scratch for you, but as I understand it, we've got like 40 minutes. Is that right? Yeah. You don't want to sit here for hours — literally hours. Yeah. It would take me days, typing every single character of all the Java code. Me too. It's not about proficiency; it's just that this is not a message of hope. Kubernetes controllers are painful. They're no fun at all to write, and I don't begrudge anybody their frustration. We're just going to take this code — I've got it cloned on my local machine, so I've got it here. I'm going to pull it up. Controllers. There we go. Now, this code is here. We're going to generate a simple project and work from there: a new project, io.spring, and we're going to call it "controller." We're going to use Spring Native, and Spring Native gives us the ability to take our Java code and turn it into an operating-system- and architecture-specific binary. Now, the reason this is nice is that the process of producing that kind of binary eliminates all the dead weight implied in a Java deployment. You don't have any extra types in the binary that are not being used — and that includes stuff from the JRE, stuff from the classpath, anything besides what your code actually needs.
The resulting binary can start up in tens of milliseconds and take maybe 20 to 120 megs of RAM in a typical application — far smaller than you might otherwise have become accustomed to with Java. The idea that you can deploy a Java application with what, for a lot of organizations, is effectively a tenth of the RAM is great, and it's an even better fit in the context of Kubernetes, because you don't want to deploy a controller that takes more resources than the things it's controlling do. In this case, we can deploy this very nice, lightweight thing. So we add Spring Native to enable that process, and I think we want Lombok, and I think that's it for now — we're going to add everything else manually. Starting on your second favorite place on the internet, start.spring.io. Exactly. My first favorite place, of course, is production, and the controllers that make it to production can be anything you need them to be, right? Where production, for most people these days, is Kubernetes. So: controller application. Okay, first things first. Before we can write a bit of code, we need to go back to that foo, right? That's the beating heart of what we're trying to do here. We're going to create a definition of foo as a CRD. I've got that configuration here, and you can see there are 56 lines to describe to Kubernetes what the foo is: what group it should be in, what the plural form of it is, what the singular form of it is, whether it's bound to a particular namespace, the versions — all kind of basic stuff. Oh, and just when you thought you were done describing it, you also have to describe the object itself, using OpenAPI — Swagger, basically. Here's that definition, openAPIV3Schema: the definition of the object again, in schema form.
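In outline, a CRD like the one being described looks something like this — the group, names, and schema here are illustrative, not copied from the repo:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.spring.io          # must be <plural>.<group>
spec:
  group: spring.io
  scope: Namespaced             # bound to a particular namespace
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:        # ...and here you describe the object all over again
          type: object
          properties:
            spec:
              type: object
              properties:
                name:           # the one property
                  type: string
```

All of that ceremony for a single string field — which is exactly the tedium being complained about here.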
So we're creating a CRD, which in turn embeds a schema to describe the object. All of this for one field, right? I think it's called name. There — it's called name. The name is name. Yeah, the name of the name is name. Okay, so there's our one property. It's tedious. It's soul-annihilating. Nobody likes doing this. I'm not happy that we have to do this, but such is life. So we're going to take this. cd into the directory. Okay, okay: apply -f — oh, and by the way, k get crds first, before I get ahead of myself. k get crds — what have we got here? And k is just an alias for kubectl — or kube-cuddle, or kube-C-T-L, however you pronounce it. Absolutely. Thank you. So let me delete this: k delete crds. Good. Okay. So now: k apply -f foo.yaml. That's going to define the custom resource definition. And if I now query the CRDs, I've got my foo, right? k describe crds, like that. And you can see that what we just contributed is now in the API server. And that's fine, right? We have a definition of a class, but again, we need to new up an instance of it. And that's where we need a test, something we can work with. So here we have a simple test file: apiVersion v1, kind is Foo, name is demo2. This name is what Kubernetes cares about — this is the name for Kubernetes. But the name that we care about in our custom object is "SpringOne Tour" — or, in this case, what's the conference called? Open Source Summit. Is it two S's or one? That's my question. Sorry. It could be two M's and a T. We can make up our own names; we can fork the English language. That's the whole beauty of OSS. So there we go. We've got this wonderful — I don't know if it's that spelling or which one; we could look it up, but who's got time for that — custom test object. I want to create an instance of it. And if I do that, again, nothing will happen, because there's nothing out there that knows what to do with it. Okay: k apply -f test.yaml, right? Okay.
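And the instance — the test file being applied here — is just a handful of lines (group and values again illustrative):

```yaml
apiVersion: spring.io/v1
kind: Foo
metadata:
  name: demo2                  # the name Kubernetes cares about
spec:
  name: Open Source Summit     # the name our controller cares about
```

Applying this puts one row of data in the database; nothing acts on it until the controller exists.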
k get foos. Okay. k get foos demo2 — do a describe. All right. So you can see nothing has happened, right? It's just in the database, but again, nothing knows or cares about it. I want to now write Java code to respond to the lifecycle changes of foos — any foo, this foo, another foo; you can create as many as you like, obviously, in the same way you can create as many deployments or pods or whatever. So we need to actually write code to do that. And that's where we need the Kubernetes Java client — the client for the API server. It even has a nice, sort of passable, Spring Boot autoconfiguration we can use, but it doesn't work out of the box with GraalVM and native images. So that's why I'm also going to bring in a set of hints that make working with GraalVM and the Kubernetes Java client just a little bit easier. So let's go back to the pom.xml. Okay. And I'm going to copy and paste these two bits right here into my build. There we are. Re-import. Okay. So there's that. Now I have these types on the classpath. I'm almost ready to start writing code, but again, I want Java types I can deal with, instead of talking to the API server directly in terms of the YAML, right? And so I can code-generate Java types that match the types on the server side. And there's this nice "convenient" — I'm using air quotes here — solution that came from the Kubernetes Java client project: you can point a Docker container at the directory containing the YAML file for your CRD, and it'll dump out all the Java code that you can use. Now, I'm not going to re-run that process — it takes a solid minute and it's time-consuming — but I do have those types here. They're not glorious, but whatever.
So we go here, and we can see the models are code generated — and it says so: the schema, the OpenAPI code generator, the languages, blah, blah. So I'm going to take this code right here, open it in my Git repository in my local Finder, Command-C, and paste it into here. I'll just paste it into the models package. Okay. And I suppose this could go up here; it doesn't need to be in the controllers package. Get rid of that; get rid of this, because we do not have time for that. — I want to quickly explain what those hints were that you dumped into your pom.xml. — Right. Well, so again, those are the things that I wrote to make the Kubernetes Java client work well with Spring Native. Spring Native works by analyzing your application using the GraalVM compiler and finding all the things it can throw away, which has the effect of trimming down the resulting application. But it does this analysis at compile time, and therefore it cannot see some of the very dynamic things that Java is capable of doing, like serialization, reflection, proxies, resource loading, et cetera. So you need to provide configuration. There's an extension mechanism you can use to contribute more configuration than what Spring Native automatically derives from its understanding of the Spring Boot application. And that's what I've done: I've written a set of extensions to Spring Native that know about the Kubernetes Java client and contribute more configuration, so that GraalVM does the right thing when you try to use the Kubernetes Java client — as we are here — to build this custom controller. Okay. So we have our custom types. We have this API. Now I want to build the actual controller. So let's just copy and paste the controller here. There's a lot to it. Okay. It's called ControllerApplication. There. Good. Okay. So let's work backwards. Okay.
There are a lot of moving parts here, but it's not really that scary when you think about it. Conceptually, the heart of the contract you have when you build a controller is to build something called a reconciler. The reconciler has a very simple job: given a request, provide a result. And this request goes back to Tiffany's explanation around edge versus level triggering, right? So what's happening here is we get a request. The request has the name of an object in the API server and the namespace in which that object lives. That's it. You don't know the extent of the changes. You don't know when it happened. You don't know anything. You just know that something has happened to that name in that namespace. So if somebody creates a new instance of a deployment or a pod or a custom foo or anything, you get a callback specifying the namespace and the name of the thing that changed. There is no explanation as to whether something was created or deleted or updated or scaled out — none of that. It's up to you, given the namespace and the name, to go to the API server, sift through all the data, and see what changed in any meaningful way that you need to respond to. Maybe somebody did kubectl apply on the foo.yaml definition and there's now a new value there. And in our case — since our example deploys a deployment, which deploys nginx, which serves an HTML page, which in turn is informed by an HTML file whose contents we customize based on the name we specify in the foo — when that changes, we want to redeploy the deployment so that we get the new nginx with the new HTML file, based on the new value in the foo.
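That contract can be modeled in a few lines of plain Java. The real Reconciler, Request, and Result types live in the Kubernetes Java client; the stand-ins below are simplified to show just the shape being described:

```java
public class ReconcilerSketch {

    // All you get: the namespace and name of something that changed. No event
    // type, no diff, no timestamp.
    record Request(String namespace, String name) {}

    // All you give back: whether the request should be redelivered (requeued).
    record Result(boolean requeue) {}

    @FunctionalInterface
    interface Reconciler {
        Result reconcile(Request request);
    }

    public static void main(String[] args) {
        // It's up to the reconciler body to look the object up and figure out
        // what, if anything, changed.
        Reconciler reconciler = request -> {
            System.out.println("reconciling " + request.namespace() + "/" + request.name());
            return new Result(false); // handled; don't redeliver
        };
        Result result = reconciler.reconcile(new Request("default", "demo2"));
        System.out.println("requeue? " + result.requeue());
    }
}
```

The essential point survives the simplification: the input carries identity only, never the nature of the change.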
So we have this very simple controller mechanism, this reconciler: requests come in, you get the namespace and the name, and in response we're supposed to say whether we've handled it or whether it should be redelivered. And the contract is pretty straightforward from there. It seems very simple. But again, there's a lot of delicate, fragile state that you have to be aware of. You have to figure out whether something was created or not, whether it was deleted, and so on. So let's work backwards from that. In order to do our work, we're going to need a shared index informer. Do you want to talk about what the shared index informer is? Yeah. So we have a few different things. There are informers in general: basically, instead of having to reach out to the API server a bunch of times — which ends up being a lot — the informer caches things. Then there's the index part: each object the informer holds has a key. And then they're shared, so that you don't end up with a million informers on top of all that. Right. Exactly. So it's a cache, and you can have one of these per JVM process, right? The state is being kept in the client process, whether it's Java or Go or whatever. Is that true? I'm not sure about the specifics in anything besides Java. Okay. Oh yeah — as I understand it, the state, the cache, the data is being stored in the shared index informer in the client process. And so it makes sense to just keep one of these, right? And then it'll be the arbiter of all the data for that particular binding, be it Java, Go, or whatever. Okay. So we've got this, and you're going to see these used all over the place. They're very convenient, and they're a natural way to start: unless I have a very specific thing I want to do, I start by looking for a shared index informer for the type that we're trying to work with.
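Here's a toy model of what the shared index informer buys you — a local, keyed cache so reconcilers don't hammer the API server. This is our illustration of the idea, not the client's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ToyInformer<T> {

    // One shared, in-process cache of API-server objects, keyed by namespace/name.
    private final Map<String, T> store = new ConcurrentHashMap<>();

    static String key(String namespace, String name) {
        return namespace + "/" + name;
    }

    // The watch machinery would invoke these on ADDED/MODIFIED/DELETED events.
    public void onUpsert(String namespace, String name, T obj) {
        store.put(key(namespace, name), obj);
    }

    public void onDelete(String namespace, String name) {
        store.remove(key(namespace, name));
    }

    // Reconcilers read from the local cache instead of calling the API server.
    public T get(String namespace, String name) {
        return store.get(key(namespace, name));
    }

    public static void main(String[] args) {
        ToyInformer<String> foos = new ToyInformer<>();
        foos.onUpsert("default", "demo2", "{spec:{name:'Open Source Summit'}}");
        System.out.println(foos.get("default", "demo2"));
        foos.onDelete("default", "demo2");
        System.out.println(foos.get("default", "demo2")); // null: the object is gone
    }
}
```

The "shared" part of the real thing is exactly this: one cache per process per type, fed by one watch, read by everyone.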
In this case, our foo — the V1Foo that was code generated from our CRD. There are also some lower-level — I'm sorry, higher-level — APIs in the Kubernetes Java client that you can use, which correspond more or less to the things you might do with kubectl and to the API server groups themselves. So there's a lot of stuff in the apps group, a lot of stuff in the core group, and those are represented by these two different clients — these different things you can use to talk to those different APIs. So we're going to use those for a couple of things. We're going to be creating ConfigMaps and Deployments. And rather than programmatically defining and creating each one of those in Java each time, I have left most of the definition in a YAML file. And I'm using Spring here to load that definition in as a Resource — basically a buffer of bytes that I can then read. And same thing here for the deployment. We're going to use that to programmatically create a new instance: we'll change one little thing about it and then save it to the API server, rather than having to build it all up programmatically. So it's like a template, and we're just going to change one little part of that template to get the thing that we want. Okay, so the shared index informers are defined up here. Let's see: we have the foos shared index informer, we've got the deployments API and the foos API — we need these as well. The AppsV1Api in turn depends upon the ApiClient. This ApiClient knows how to talk to the API server — the wire protocol and the authentication mechanism — but it doesn't do much more than that. It's very low-level: "I know how to talk to that server in the right protocol, but I don't care about what payload you're sending over the wire." We need that created for us — I think the autoconfiguration creates it for us. Yeah, it looks like it does.
So we just inject that. So we've got these beans that all, in a way, act as clients to the API server, at different levels of granularity. We feed them here into the controller: the controller consumes the shared index informer for foos, it consumes the reconciler, and it consumes the shared index informer factory. And it's the thing that actually wires everything together and then gets run, right? This is the outermost level — the wrapper around all these different dependencies. And we're saying: we're going to create a custom controller by watching V1Foo types, and if there's any event — created, deleted, updated, whatever — we want to send those events onto this queue we've got here. We're going to listen for events and publish those events onto that queue. We're going to re-sync every second. We're going to have two threads doing the work. And we're finally going to build the controller using this reconciler that we've defined below, using this ready function that says: hey, has the foo informer synced? That is to say, I want to make sure that we don't do anything before that foo informer is fresh — has all the data from the API server. We're going to give it a name, and then we hit build. So that's going to be the engine, but somebody needs to start the engine. And that I do down here, with this little bit of threading. What I'm doing is injecting the controller that we built — this is not a Spring MVC controller; this is a completely different concept. And I'm going to use the thread pool here, the ExecutorService, to kick off a thread. In that thread, I'll start all the registered informers, and then I'll kick off our controller, which depends upon all of those having been started. So this is just a way to bootstrap all of it — make sure everything is up and running when the application starts.
Now, with that in place, I think we're basically able to start trying to decipher what this code is doing. It's a little inglorious, but that's okay. We are getting the current request — remember, the contract is: given a request, we provide a result. So this is the lambda, right? A result, given a request. We take the name, we take the namespace, and we use them to create a key. And usually, with the shared informers, you can get any resource in the API server by its fully qualified namespace-plus-name key. So here we're saying: I have a request for something in this namespace with this name; I'm looking it up. And if it's null, then I assume somebody deleted something, right? Because why would there be an event with nothing there? If there was an event and there was something there, then it was created or it was updated. But if there's nothing there, it was deleted. In this case, we don't need to do anything particularly special. And the reason is that we have defined our CRD and our code to use owner references, right? Do I specify that here? No — down in the code somewhere, as we'll see in a minute, we explicitly set the foo as the parent of the child objects that we create. So when you create a foo, we automatically call the API server and create a deployment, and that in turn creates other objects, and there's a chain of ownership there: the deployment belongs to the foo. And if the foo instance in the API server should be deleted, then, of course, so too should the deployment and all the other stuff that gets created. So you can handle that yourself, if you like. But if you explicitly set the ownership in the code when you're creating these objects, then Kubernetes will do it for you. The only time you want to be careful is when those child objects correspond to some other expensive thing that needs to be cleaned up externally.
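The "null means deleted" inference just described can be sketched against a toy cache (this is our simplification, not the real informer API):

```java
import java.util.HashMap;
import java.util.Map;

public class DeletionInference {

    // Stand-in for the informer's local cache.
    static final Map<String, String> cache = new HashMap<>();

    static String reconcile(String namespace, String name) {
        String key = namespace + "/" + name;   // fully qualified namespace/name key
        String foo = cache.get(key);
        if (foo == null) {
            // An event fired, but the object is gone: it was deleted. With owner
            // references set on the children, Kubernetes garbage-collects them,
            // so there is often nothing left to do here.
            return "deleted";
        }
        // The object exists, so it was created or updated — and you can't tell
        // which. The usual move: try to create the children, fall back to update.
        return "created-or-updated";
    }

    public static void main(String[] args) {
        cache.put("default/demo2", "{...}");
        System.out.println(reconcile("default", "demo2"));
        cache.remove("default/demo2");
        System.out.println(reconcile("default", "demo2"));
    }
}
```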
Maybe you've got, I don't know, a controller that creates a volume or something, and that in turn uses some sort of proprietary API for mounting a disk or whatever. You want to make sure that gets cleaned up. Well, of course, there's probably a CRD for that as well, and it's going to clean itself up. But if you have some complex cleanup state that has to be handled when something is deleted, here's your chance, right? Before we return the result, you do that. Otherwise, we keep going, right? We've got the namespace. I'm not sure why don't we just... oh, it's down there. Otherwise, we keep going. So we've got the namespace. We've got... these are the same fields that you have to specify whether you create or update an instance. And that's the pattern here: there's no clean, easy way to say "show me what has changed." Instead, we try to create the thing, and if that fails — if it throws an exception — then we try to update the thing. Okay? And that's a very common pattern. You just try, it fails, you try something else. Okay? These parameters get fed in for both. So the first thing we're going to do is create a ConfigMap name, and then we're going to programmatically create a ConfigMap. We're going to use that resource that we looked at earlier for ConfigMaps. That's this right here, the config map; we're going to copy and paste that from here. — So what's happening here, basically, is that since we're creating a foo, and this foo creates a deployment, which has a pod, that pod runs nginx. And we are using a ConfigMap to change what shows up on the web page. So you have your default nginx page; we want it to say something that comes specifically from our foo. — Exactly. So this ConfigMap is the default version, right? This is what it looks like, just regular, before...
...we've had a chance to change it: it's got some index.html content. We're mounting that ConfigMap as a volume and exposing a directory, and that HTML file is in turn informed by this ConfigMap. And then nginx knows to look in this directory for any HTML to serve. And as the HTML is informed by the contents of the ConfigMap — which we change each time we update the foo — the HTML will reflect that change, I think. So that's the templated version. We're using the Resource to get access to those templates. And then here, I'm using this convenient little method I wrote called loadYamlAs: I give it a Resource and a class type, and it'll read the YAML — the bytes of the file on disk — and load it as an object. It'll map the fields and the hierarchies onto the different Java objects, okay? So here are the properties. We're going to then go back up here. — And if you're ever wondering whether something belongs in core, or in apps, et cetera, you can just use kubectl api-resources and see where things are located. — Exactly, yeah. So we're going to create a custom ConfigMap. We loaded the YAML as an object, but we still want to change the HTML, right? So in this case, we use the foo that we got — it's no longer null; we know it's not null. We get the spec, get the name — and this should say OS Summit — and we create a new HTML file. We get the data map for the ConfigMap, put the new key (or update the existing key) with this new HTML content, and then we add that ConfigMap to the API server. If it's already there, we want to update it. Well, that gets us to that other pattern I'm telling you about, where sometimes you want to create it and sometimes you want to update it, but you don't know until you try.
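For orientation, the kind of ConfigMap the controller ends up writing might look like this — the object name and the HTML are illustrative, not taken from the repo; nginx's default html directory is where the deployment template would mount it as a volume:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: foo-html                # name derived from the foo by the controller
data:
  index.html: |                 # rewritten from foo.spec.name on every reconcile
    <h1>Hello, Open Source Summit!</h1>
```

The deployment's pod spec mounts this ConfigMap as a volume over the directory nginx serves from, so updating the ConfigMap (and rolling the deployment) changes the page.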
So I've created this method here that takes the type you want to update and two suppliers — API suppliers, basically. This is a custom functional interface that I created, one that explicitly accounts for the fact that these calls can throw an ApiException. You could use a regular Java Supplier — it's the same thing — it's just that you'd have to do the try/catch yourself, and I figured it would be easier to have this. So it's a functional interface, and the method takes two of them, and your job is to return a foo — or whatever T is, in this case. So here I'm saying: okay, first, if it doesn't exist, then create it. And here's how you create it: you create the owner reference, you associate the owner reference with the current foo and this new ConfigMap, then you create a namespaced ConfigMap, passing in all those fields. If it already exists, you'll get an exception, and so this block gets run instead. And here we're updating it: we say replaceNamespacedConfigMap. So I'm using V1ConfigMap and the CoreV1Api to do that work. Great. So now we've got a ConfigMap. Now I want to create a deployment that points to that ConfigMap. Same thing. Same pattern. Load the template into our Java objects. Change some of the parameters that are dynamic, based on our custom foo spec. Make sure that we've got the right volumes — and make sure we don't keep adding volumes. And then create or update the deployment. And here, very specifically, we add a custom annotation as a demo. These annotations are great for all sorts of bookkeeping and state management that you want to keep across multiple runs of the application. And then there's addOwnerReference again: that method gets the metadata and adds the foo as the owner, so it'll now be explicit in the API definition that this is a child of that parent type. Okay. So: create the namespaced deployment; otherwise we replace it, et cetera. So that's pretty much it.
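The create-then-update pattern described here can be sketched in isolation. ApiException below stands in for the Kubernetes client's exception, and the two suppliers stand in for the create and replace API calls (all names here are ours):

```java
public class CreateOrUpdate {

    // Stand-in for the Kubernetes Java client's checked exception.
    static class ApiException extends Exception {}

    // A Supplier variant whose get() is allowed to throw ApiException,
    // so callers don't need try/catch inside every lambda.
    @FunctionalInterface
    interface ApiSupplier<T> {
        T get() throws ApiException;
    }

    static <T> T createOrUpdate(ApiSupplier<T> create, ApiSupplier<T> update)
            throws ApiException {
        try {
            return create.get();   // optimistically try to create the object...
        } catch (ApiException e) {
            return update.get();   // ...it already existed? replace it instead
        }
    }

    public static void main(String[] args) throws ApiException {
        boolean alreadyExists = true;
        String outcome = createOrUpdate(
                () -> {
                    if (alreadyExists) throw new ApiException(); // simulate a 409
                    return "created";
                },
                () -> "updated");
        System.out.println(outcome);
    }
}
```

There's no "upsert" in the API, so this try/fallback dance is the idiom: the same fields feed both branches, and the exception is the only signal for which branch you're on.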
You know, it seems logical, but there's a lot of code here, obviously, because there's a lot of stuff to do, a lot of moving parts. I think there's a lot of room here for a framework or something like that; goodness knows what the future holds. We could do something to make this a lot easier. So, okay, here's the actual full application. I think we're ready to try running it. What do you think? I'm ready. Okay, good. Let's hit go. Okay, there we go. It worked: we created a new config map and a new deployment. So let's go to our cluster. Okay, get all. So you can see it created these things 14 seconds ago; I've got two new pods. There's also a service that I created, so I just have to do port forwarding. Okay, port-forward, 8080:80. Then localhost:8080. There's always something, right? Okay, that has worked; we've got the page. Now, hey, I could have gotten this wrong. What is the conference we're speaking at? It's literally phrased as Open Source Summit. Oh, see, I got it wrong. So now I need to change the definition. How could you? Darn it. So where's my definition here? Yeah, test, Open Source Summit. Okay, there we go. So I've got the same definition with the new spec, and we're going to go back and apply it. I'll go to the terminal here. Okay, apply. Let me find the file: controllers, then the CRDs, test. Okay, so we're going to apply that one: k apply -f test.yaml. And we should see this running. It says: okay, we've already got that config map, we're replacing it; already got the deployment, replacing it. We've successfully updated the equivalent objects, so there should be a new thing running. Oh, the port forward is no longer working. Okay, because that pod went down. Yeah. 
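The demo step above — edit the spec, apply it, and watch the controller replace the config map and deployment — is the level-triggered reconcile loop in miniature. A toy sketch in plain Java, where a single string stands in for the whole set of child objects:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ReconcileLoop {
    // Toy level-triggered reconciler: compare desired state against actual
    // state and converge, regardless of how many edits happened in between.
    static String reconcile(String desired, AtomicReference<String> actual) {
        if (!desired.equals(actual.get())) {
            actual.set(desired); // "replace the config map / deployment"
        }
        return actual.get();
    }

    public static void main(String[] args) {
        AtomicReference<String> actual = new AtomicReference<>("Hello, OS Summit!");
        // The operator applies an updated spec; the next reconcile converges on it.
        System.out.println(reconcile("Hello, Open Source Summit!", actual));
    }
}
```

The key property, as in the talk's controller, is that reconciling only looks at current desired vs. current actual state, so a missed intermediate edit doesn't matter.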
Yeah, new pods. Okay, get pods. You could maybe do the port forward against the deployment instead, for instance. Oh, yeah. Port forward; what is the name of the pod? Use the deployment. Okay: port-forward deployment/... like that, I think. Hopefully. Okay, there we go. Right, so all we did was change the configuration, and we got an application that works in our scenario for this very simple thing of just propagating the name in the spec. Now, I think we have one more thing on our list, which is: how do we get this into production? We need to turn this into a GraalVM native image, and that's where those custom hints come in. I've already added the things to the classpath, and I have some custom types that we need to tell GraalVM about. That's what the annotation is for: it's registering V1Foo, V1FooList, V1FooSpec, and V1FooStatus for reflection configuration. Everything else, the auto-configuration and the GraalVM configuration will figure out for us. So we can go to the terminal and say cd Downloads, cd controller, then mvn -Pnative -DskipTests clean package. Okay, that'll take, oh, I don't know, about a minute, and it's going to actually build a native image. And then we can package that up using buildpacks: you just say mvn spring-boot:build-image and you'll get a Docker container. You can docker tag and docker push that to your container registry of choice, and you're done, right? You can now deploy it to your Kubernetes cluster. While that finishes, it's nearly there. If you haven't dealt with buildpacks before, it's really cool: instead of having to create a Dockerfile, you can just say, here's my code, it's Java, it's Node, it's whatever language is supported. 
And you can basically give your code to buildpacks and it will create a container image based on it. Absolutely. It's a Cloud Native Computing Foundation spec, right? So there are a lot of interesting use cases there, and it's not just for Java; you can do it for other languages as well, which is great. And I have one consistent recipe. Okay. So in target, there's the controller binary: it's 62 megs. Let's run the application. It's up and running; you can see it's doing the same thing as before. That's interesting, but the most interesting thing to me is the memory footprint. Let's find the process; let's assume the latest one is ours. Okay. And then I have a little script here that takes a PID and reports the RSS. So I pass in this PID and get the result: it's taking 71 megs of RAM. Basically I'm doing ps with the RSS field for that process and then scaling it with some formatting. So I'm getting the RAM of the application: that Java application, complete with a runtime and all the other stuff we brought onto the classpath, takes 71 megs of RAM, which is a very svelte thing indeed. You could put that in a minimal Docker container, deploy it, be off to the races, and have something working, you know, now. Back to you, my friend. Awesome. Well, thank you, everyone. If you go to github.com and go to that specific location, you can pull down all of the stuff that Josh and I have gone over today and run it yourself, which is really great. And again, you can reach us at the contact info on the slides; you can find us on the internet and ask questions, and I'll also be around at the conference. So thanks, everyone. Thanks, everybody.
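The RSS script itself isn't shown in the talk; the scaling-with-formatting step it describes (kilobytes or bytes down to "megs") is just integer division. A small stdlib sketch, with the caveat that the JVM's own heap numbers used in `main` are an approximation and not the RSS that ps reports:

```java
public class MemoryFootprint {
    // Format a byte count the way the demo's script formats RSS: in megabytes.
    static String formatMegs(long bytes) {
        return (bytes / (1024 * 1024)) + " megs";
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // Heap currently in use by this JVM -- not the same as process RSS,
        // which also counts the runtime itself, thread stacks, and so on.
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.println("approx heap in use: " + formatMegs(used));
    }
}
```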