So here are some of the things I'd like in any system that deals with this. As I mentioned before, I'd like to stay close to what the project does, because I want to be using the same stuff that other people in the community are using, so I can talk to them and say, yes, I saw that issue, or I filed a bug about that, or we can work through this together. I'd like the resource control I mentioned, and if I don't need VM overhead injected, I'd rather not have it. Staying close to the project also goes hand in hand with using the upstream packaging. And I'd like one system to rule them all.

In the time I've been at Red Hat, we keep acquiring companies and getting into new spaces, and that's awesome; it's kept me super engaged and it's been super fun. But each time a new thing comes along, I ask: how does this fit into the setup I've put together to test things out, learn about them, understand them, and hopefully, as a user of them, be a more useful part of the community?

So I thought everything could be in VMs. I could have this kind of overlord oVirt layer where my whole lab is actually running in VMs one layer down. But the VM overhead is exactly the sort of thing I didn't want to get into. As for OpenStack, I've been paying attention to the different container efforts within OpenStack; Red Hat is big on OpenStack, and I've done a lot of OpenStack work in my time on OSAS. But I keep waiting for one of the container systems in OpenStack to present itself as the one to learn, one that won't be like other things in the past, not just in OpenStack but in other projects, that you learn about and then attention shifts somewhere else and we're not going that direction anymore. I just haven't seen that one emerge yet. So up to this point, OpenStack just hasn't fit the bill for me. I want a system that will allow me to use parts of OpenStack alongside the other projects I care about.

OpenShift is an interesting one. Picture me here stroking my chin, wondering what other options I have; this is taking place a year and a half ago or so. OpenShift in its version 2 incarnation (we're currently at version 3) is a platform as a service where your applications run in containers, but in an OpenShift-specific sort of container. You say, I have my WordPress code (we'll use WordPress), I have my PHP runtime, I have my SQL database; you pick this runtime and these cartridges, you put the code together, and it runs, and it's great. There is a DIY cartridge in OpenShift version 2 where you can run pretty much anything that runs on Linux. But if you look at what it would take to run Gluster inside OpenShift, it would take more decoupling from the ways of the project than I was interested in doing. Again, I don't want to go way off on a DIY tangent; I want to stay close to what the project is doing. Now, version 3 of OpenShift is actually based on Kubernetes.
So that's a somewhat different reality, and I'll talk about it a little later. The third option was containers. When I gave that talk about how to build a cutting-edge cloud, I made the point about Gluster resource usage and said maybe some kind of container thing would help, but at the time there wasn't a clear option for me to use. In fact, this little screen here is from SCALE 11x. I thought the first time I heard of Docker was at SCALE 12x, and I guess that was the first time I saw Docker presented as Docker, but at 11x this presentation by Jerome was about Docker, and I thought it was super cool. The reliance on a union filesystem that was not part of the upstream kernel, and probably was not going to be, made it seem less interesting to me, and I could tell that sort of thing would be less interesting to Red Hat, so I thought, well, we'll just watch and see what happens with this.

I'm jumping ahead a little on containerization in general. It's a totally cool topic, and it's been a cool topic for a long time: you take everything you need, all the dependencies you have, and bring them along together in the container; you get some isolation, though how much depends a lot on how you deploy; and you get lower overhead versus a virtual machine. When I was at eWEEK and Solaris 10 came out, I reviewed it, made all sorts of visits to Sun's campuses down the peninsula, and was totally stoked. The problem was that we didn't have all these companies coming to us to pitch, hey, here's our application that runs on Solaris. It was Windows or it was Linux, and the fact that I could run Solaris applications really efficiently just wasn't compelling. It was: this is totally cool, but I don't know what I want to do with it, and most of the people I'm talking to don't really know what they want to do with it either. Still awesome, though.

This is interesting, too, because it's still a major thing now with what they're doing with illumos. The company whose name was escaping me is Joyent, the one behind Triton and SmartOS; they have this illumos-based system, descended from that Solaris 10 work, that runs Docker containers now, and it's looking awesome, by the way. They basically took the work they began in Solaris 10 for running Linux applications on that system, finally circled back around, and are getting it working. I haven't spent time with it myself, but from the talks I've seen it looks pretty awesome.
But anyway, okay, great, zones and containers, but Solaris. Then there are all these other options. When I started at Red Hat, I signed up for way more open source mailing lists than I had been following before, because now I'm not looking at VMware anymore, I'm not looking at Microsoft anymore, I'm not looking at a million companies; I'm working at Red Hat, so I'm going to dive into all these projects. The lists around LXC were among the first I signed up for, because I was interested in it and wanted to see where it went. But again, it was just a little too DIY for me. It's a matter of traction. When you get something that's really popular, like Docker has become, then everywhere you turn, on the project's site on GitHub or the forks of the main contributors, people are working on things: okay, there's an issue with containerizing this, and here's how I fixed it. Everywhere you turn, people are solving these little problems. With a VM, the application expects to get its own hardware and the VM says, here, you have your own hardware, so lots of problems go away. With containerization there's a little more work to be done sometimes, and if you don't have the traction, the community and the developers and the users all working at it, it's harder. These other options continue, and you'd see people on the mailing lists doing really cool things; it's not that you can't do it, it's that it's harder without the community alongside you.

So, Docker. This whole containers thing is not new; that's what everyone likes to point out, though maybe people are starting to get over it, since I'm not hearing "containers aren't new" piped up as much anymore. What's new about what Docker has done is that they've executed it well, and they've got this kind of magical traction where everyone has gotten behind it, so when issues come along, people are able to figure them out. Two big parts of that traction stand out. First, the Dockerfile: you can take a Dockerfile and just say, okay, I use Ubuntu, so I start FROM ubuntu, or I use CentOS, so I start FROM centos, and then you're using apt-get or you're using yum. You're basically assembling the pieces in a way that you could hand to somebody, and they could go to bare metal and use it as a little how-to and install the thing the same way. The fact that it's so much more straightforward to take some existing app from the ecosystems we all work in and bring it over to Docker is huge. And then Docker Hub is awesome too: the fact that you can put something together, push it up somewhere, and someone else can pull it down has been huge.
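To make that concrete, here is a minimal sketch of the kind of Dockerfile I mean. The image and package names are placeholders for illustration, not anything from my actual setup:

```dockerfile
# Each step reads like a little how-to you could follow by hand on bare metal.
FROM centos:7

# Install the runtime and dependencies with the distro's own package manager.
RUN yum -y install httpd php php-mysql && yum clean all

# Bring the application code along inside the image.
COPY wordpress/ /var/www/html/

EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]
```

From there it's a docker build, a docker push to a registry like Docker Hub, and a docker pull on the other end.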
So Docker is awesome, but going back to my example here in my lab: I'm looking at a multiple-host situation, and anything that's really going to be interesting is going to need to exist in that multiple-host scenario. Kubernetes is what does that: it takes your multiple hosts that will be hosting containers, and you orchestrate the containers among them. At the Kubernetes conference, KubeCon or whatever they call it, a couple of months ago, the kickoff keynote, which is available on YouTube, was a really good introduction to Kubernetes from one of the project leaders. One of their repeating slogans is that it's about managing applications, not machines. That's what I want to care about: I want to care about my app. If part of what I'm testing is machines or virtual machines within my test lab, that's fine; then I do care about machines and I want to care about them. But the infrastructure I'm using to pet my pets and test things out, I'd like to think about less, and just think about the apps I'm trying to run. Also, this system is based on Google's internal experience with their own systems, and they run everything in containers. That made me think, wow, this seems like the sort of system where I could just throw up my hands and say: take the wheel, Kubernetes. Possibly.

Last year at SCALE I gave a talk about this. I run a GitLab instance inside the Red Hat firewall that our team uses; it's actually one of the things sitting on that dusty oVirt machine that I've promised not to mess with, so I don't break it and get people mad at me. The talk was about how I broke that down to run in Kubernetes. If all I cared about was the user-mode me from an earlier slide, that would be fine. I took GitLab as an example to learn Kubernetes, but then the next question was: okay, so where is this cluster? What are all these resources you're effortlessly calling on as a developer? So after I learned how to basically use it, I needed to run it. And that was a challenge, because it's really only since the 1.0 release, back in late July, that Kubernetes has settled down. I would have the nasty experience of coming back after some weeks went by, going to use it again, and finding that various things had changed. That was frustrating, but it has really settled down since it hit its 1.0 and subsequently its 1.1 releases.

Just some basic things about Kubernetes. There are the containers; Docker containers is how I use it, although there's some support for other kinds of containerization. Then there's the idea of a pod: a set of containers that run together on a single host, so they can make some assumptions about where they are and the sort of resources they have access to.
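As a rough illustration of the pod idea, here is a minimal sketch; the image names are placeholders of my own, not anything from the talk's setup:

```yaml
# A minimal pod (v1 API): two containers scheduled together on the same host,
# sharing the pod's network namespace and any volumes defined in the pod.
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  containers:
  - name: web
    image: example/wordpress   # placeholder image
    ports:
    - containerPort: 80
  - name: db
    image: example/mariadb     # placeholder image
    ports:
    - containerPort: 3306
```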
Then there's the idea of a controller. The most common controller type you come across in Kubernetes is the replication controller. With a replication controller you say, I want to make sure that at least X instances of this pod are running somewhere. When a pod dies, it's simply reborn somewhere else; you don't try to revive that specific pod. What does live on are services. A service maps to a set of pods, and the big thing about services is that, even as the individual pods and containers that make them up live and die, the service gives you an enduring place to point to those applications.

So, the cluster. When I set up a Kubernetes cluster, I use the Ansible scripts in the contrib repository of the Kubernetes project, and they're pretty nice. Most of the time I use Vagrant when I'm just testing. We do releases of CentOS Atomic Host, an operating system that's streamlined and optimized for running containers, and it ships with the Kubernetes parts; Kubernetes is the tool you use to cluster multiple Atomic hosts together. When I test out our builds, I use Vagrant to spin up a quick cluster and make sure I can run Kubernetes apps, which makes it super easy to cycle through. I use a single master, although I am going to switch to multi-master; I just haven't gotten into that yet, though the scripts have options for it. With these scripts you can use Fedora, CentOS, RHEL, Debian, or Ubuntu, and I think you might be able to use other Linux OSes as well. You can either use packages from the distro you've chosen or build from source; that's all rolled into the scripts. So if you're interested in poking at Kubernetes on some of your own systems, this is a good place to check out, and it's pretty straightforward to get going. For my lab I didn't use the Vagrant option: you just edit the inventory file, put in the addresses of the servers you want to use, and you're off and running.

Okay, so in my setup here, I have Gluster in a container; the idea is to have everything in containers. This repo is from a fellow Red Hatter, and all of these pieces that I currently have running in my Kubernetes-ized lab are proof-of-concept-ish. They all come from work that people on the projects are doing, so it abides by my rule, it's not crazy exotic, but the stuff really is all experimental. The idea here is that you run Gluster on each of your hosts in a container. When you go to set up your Gluster piece, you send some configuration information to etcd, the key-value store that Kubernetes uses for its own needs, and when the containers come up they talk to etcd and say: this is who I am, hook me up with the config info I need. The way this is set up, each pod is attached to a specific host: it mounts its configuration info from directories on the host, and it uses a specific brick device on the host to store the data. And this is really just a Dockerfile.
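Before getting to that Dockerfile, here is a minimal sketch of the replication controller and service ideas described above; the names and counts are placeholders:

```yaml
# Keep at least two copies of this pod running somewhere in the cluster.
apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress-rc
spec:
  replicas: 2
  selector:
    app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: web
        image: example/wordpress   # placeholder image
        ports:
        - containerPort: 80
---
# A stable name and address that endures while the individual pods come and go.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
  - port: 80
```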
But this, again, is basically just: CentOS, give me EPEL, give me Gluster, install the packages. And this is a big thing. I know you're supposed to run one process per container and have everything exploded apart, but for my uses, for things like this, I'm asking: what's the least I can change, so that when I have a conversation with someone about how my Gluster or my FreeIPA or my oVirt engine is running, we can at least be talking about some of the same things? Using systemd inside my container allows me to treat the services the same way I treat them in a VM or on bare metal. So that's what I do. You install it, and it's just installing Gluster, plus these little script bits, the parts that go out and talk to etcd and say, okay, what's up, who am I and where should I be, and it comes up.

Then these are little bits from the Kubernetes side. This is the pod definition, just a zoomed-in part of the pod. It's using the host network. One of the questions I've had, and I'm still early in my journey of doing my lab stuff on Kubernetes, is, given the sensitivity to overhead I mentioned, what sort of overhead am I injecting into the system? The way this Gluster proof of concept was set up was to just use the host network, so I figure that won't impose the overhead I might have if I were communicating over the flannel overlay network that I typically use with Kubernetes. Then there's this little nodeSelector bit: I've added a label to each of my bare-metal hosts, and the pod says, I'm this node, I match this Gluster node label, so it lands on the appropriate machine. And this, another part of the same file, shows how Kubernetes mounts these locations from the host: the volume mounts are listed, and below, under volumes, you have the name of each volume mount and the actual host location it maps to.

Okay, so that's how that's set up, and it works; it's pretty straightforward. Once it comes up, the way I've done it is to use kubectl exec. It's just like docker exec, except you don't have to be on the particular host. I access one of my nodes, use the Gluster client from there, create volumes, and do the regular Gluster things that I'm accustomed to doing.

For the oVirt engine, that's the management server, here's a little screenshot of another project called Cockpit, a web-based admin interface that works on RHEL, Fedora, and CentOS; I'm not sure what the support is on other distros at this point. There's a plugin for Kubernetes where you can get a view into your Kubernetes setup, drill down into particular pods, and access their shell, and the oVirt engine is just running there. Like with Gluster, I use a systemd-based image, CentOS 7, and for the volumes where the data needs to persist, I use persistent volumes; I'll show you a few little screenshots around that.
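Going back to the Gluster pod for a moment, the definition described above is roughly shaped like this; the label, image, and path names here are placeholders of my own rather than the exact ones from the proof-of-concept repo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-node-1
spec:
  hostNetwork: true              # use the host's network directly, no overlay overhead
  nodeSelector:
    glusternode: node-1          # pin this pod to a specific labeled bare-metal host
  containers:
  - name: glusterfs
    image: example/gluster-centos7    # placeholder systemd-based image
    securityContext:
      privileged: true           # currently needed to run systemd inside the container
    volumeMounts:
    - name: gluster-config
      mountPath: /etc/glusterfs
    - name: brick
      mountPath: /bricks/brick1
  volumes:
  - name: gluster-config
    hostPath:
      path: /srv/gluster/etc     # configuration kept in a directory on the host
  - name: brick
    hostPath:
      path: /bricks/brick1       # the brick storage on the host
```

Once a pod like that is up, kubectl exec gets you a shell in it from anywhere, and from there the Gluster client works as usual.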
And then the service: for the oVirt engine there's a web-based admin interface, which is how I typically use it, and that's exposed via a Kubernetes service. The way this works is that I went to work one of the oVirt project members had done on GitHub; a couple of different project members have made different levels of effort at containerizing the oVirt engine. I picked one that looked good and made a few modifications. For the directories that the Dockerfile says need to be volumes, I created Gluster volumes. Then you create persistent volumes in Kubernetes. So one of the volumes I created was vol5: I said how big it's going to be, you can set some access modes, and this endpoints bit is how you control what it will be accessible to. Then Kubernetes knows: okay, I have a series of persistent volumes available to offer. And then you set up persistent volume claims for the specific application that's going to use them; that's where you claim, I'm going to be using this volume.

And this is one of those volumes. Now, oVirt doesn't need to have Postgres side by side with the engine; that's just one option. Actually, one of our new colleagues, Josh Berkus, is going to be giving a talk on some containerized Postgres HA stuff that I'm looking forward to, because I want to break this part out in my infrastructure here. Right now I have it all smushed together into one thing, not even in separate pods, which arguably it should be. So one of the volumes is my Postgres data volume that needs to persist, and I put out the claim. I set it for nine gigs where I had set the persistent volume to ten, because I wasn't sure if I needed to give myself some leeway, and I figured, what the heck.

Then the next step is in the engine pod definition in Kubernetes itself. The developer doesn't really have to know about the persistent volume; they just make the claim, and that's their part of it. So I've got my claim, and this is where I say which volumes I want: these are the mounts and their names, and then below, under volumes, it says: okay, for this mount, the engine-pgsql mount, go ahead and use the claim of this name to satisfy it. And one more thing, as you can see with this privileged: true here. Right now, one of the things about using systemd inside a container is that you have to run the container as privileged. Fine-grained privileges, making sure that only the capabilities that are required are granted and the others withheld, is not something I'm focusing on right now; that's a next-step thing I need to work on.

So, virtualization. This is the key part, and I've been talking about all the pieces that surround it.
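Before getting to the virtualization piece, here is roughly what the persistent volume, the claim, and the engine pod's use of that claim look like, as just described. The names, sizes, and paths are placeholders of my own rather than the exact ones from my setup:

```yaml
# A persistent volume backed by a Gluster volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol5
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # an Endpoints object listing the Gluster nodes
    path: engine-pgsql             # the Gluster volume to use
---
# The claim the application makes against the pool of available volumes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: engine-pgsql-claim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 9Gi
---
# In the engine pod, the mount and the claim that satisfies it.
apiVersion: v1
kind: Pod
metadata:
  name: ovirt-engine
spec:
  containers:
  - name: engine
    image: example/ovirt-engine-centos7   # placeholder systemd-based image
    securityContext:
      privileged: true                    # currently required for systemd in the container
    volumeMounts:
    - name: engine-pgsql
      mountPath: /var/lib/pgsql
  volumes:
  - name: engine-pgsql
    persistentVolumeClaim:
      claimName: engine-pgsql-claim
```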
And by the way, the engine piece, the FreeIPA piece, and the other components I mentioned, any additional Glance or Cinder pieces that I would otherwise have to run in VMs, all work in exactly the same way as that engine piece. If they need persistent volumes, I give them persistent volumes. They're all just additional applications.

Now, the virtualization piece is kind of the most fun piece, and it can definitely work: I've done libvirt in a container. Again, I don't know how visible it is, but I've got a little CirrOS test image, and that's running in a container on KVM, using the host's KVM. So that works. There are other ways of doing it, too: Rancher Labs has RancherVM, where you run virtual machines in containers, and there's the Kolla project in the OpenStack big tent, one of the many smaller projects in there, which is all about running OpenStack in containers.

The problem, though, is that for oVirt it's not working. oVirt is expecting to have a full host to play with: either a regular host, or oVirt Node, which is basically a specialized version of the OS that has exactly what oVirt needs. When the engine is talking to the hosts and setting them up, it expects to get all the sorts of information and interactions that it would get from a physical piece of hardware. That does work in a VM, because the VM thinks it's talking to physical hardware. But with a container, this is one example of a bug closed as CANTFIX: libvirt works, but oVirt uses more than libvirt. It uses a daemon called VDSM, the Virtual Desktop and Server Manager, which calls on libvirt but also does more than libvirt does, and it's querying for information and expecting things that it's not getting from a container.

This is kind of an interesting area: containers that reach into parts of the host OS, doing more than a container typically would and requiring more privileges to do it. Around the Project Atomic campfire we've talked about these as super-privileged containers, and there are a lot of cool things you can do with them. But as projects try to containerize themselves, and as individual community members explore it, we're still bumping up against issues. So I went a little way down the path of watching the logs and the interaction as oVirt talked to the container. I started the node portion up in a container, and as the engine configures the node you start hitting all these deal-breakers where the engine expects this or that piece of information. You can go through and start thinking, don't worry about this, don't worry about that, but that starts to get into tearing out the guts of this component of the project to get it working, and I'm not going to do that. So in what I have running in my lab right now, this part is uncontained: I've gone from having everything installed side by side on each server to having everything but the virtualization component containerized on each server.
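For the libvirt-in-a-container piece, the invocation is roughly along these lines. This is a sketch under my own assumptions, with a placeholder image name, not the exact proof of concept:

```bash
# Run libvirtd in a privileged container, sharing the host's KVM device and network.
docker run -d --name libvirtd \
  --privileged \
  --net=host \
  -v /dev/kvm:/dev/kvm \
  -v /var/lib/libvirt:/var/lib/libvirt \
  example/centos7-libvirtd    # placeholder systemd/libvirt image

# Then drive it with virsh from inside the container, for example to check on a CirrOS test VM.
docker exec -it libvirtd virsh list --all
```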
So, am I making progress? It is definitely a simpler view into my infrastructure, especially with a tool like Cockpit. You can visualize the difference: because you're running all the pieces in containers on Kubernetes, side by side in the same manner, any tool that gives you a view into those workloads now gives me a view into my storage component too. The storage acts as if it's a separate piece when really it's converged onto the same hardware, like before. I can do things like update the components independently; I've encountered maddening issues where, in the course of an update needed for one part of my converged stack, I break another part of the converged stack. And I haven't had to radically reshuffle everything: I can go into one of my containers and use my regular systemd tools to manage the services that are running. I am installing everything with packages. The basic setup doesn't look a ton different from the inside, at least from a user perspective, than it did before, and that's going in the right direction.

It also gives me a place to host new components. I mentioned that the Kolla project containerizes all the parts of OpenStack, and the oVirt team is taking advantage of that: when they introduced their new feature for Cinder disk support in oVirt, they point you to containers, and I think there's even an RPM they created that makes getting that container simple. But then, where are you running that container? I think one of the release notes says, hey, you can install this on your engine. Well, I've got my engine running on my virt nodes, and now I'm also going to have this separate Cinder image and separate Glance image; how many different things am I supposed to pack into that one VM? I don't think the project really expects me to pack my oVirt engine VM full of all these additional services, but it's not clear where the right place to put these things is. In this sort of setup, I have a place to put all of those components. And when Red Hat announces the next thing it's into, it looks to me like, with the characteristics of Docker and the characteristics of Kubernetes, I'll have a place to host that without reworking my whole setup. At least once I get my whole setup into really good shape.

So here are some other things, looking ahead, that I need to deal with. SELinux is permissive in this setup right now; the Ansible scripts I pointed to set SELinux to permissive. I know that in particular the DNS add-on component is doing some things that run afoul of SELinux, and that's one of the areas of Kubernetes, and it's young, so there are plenty, where the right way things should work is still being defined. That's something that's currently being worked on, along with nicer systemd integration.
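As an aside, here is the sort of day-to-day interaction I mean when I say I can manage services and updates inside these containers the same way as before; the pod and package names are placeholders:

```bash
# Check on the Gluster daemon with the usual systemd tooling, from anywhere in the cluster.
kubectl exec -it gluster-node-1 -- systemctl status glusterd

# Update one component without touching the rest of the converged stack.
kubectl exec -it gluster-node-1 -- yum -y update glusterfs-server
```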
There was a really good talk a couple of months ago at systemd.conf in Germany, I think, where Dan Walsh from Red Hat talked about systemd and Docker, the rough spots between them, what the future looks like, and the areas to pay attention to. There are things like: when Docker is killing the container, or when you're stopping the container, say when Kubernetes is destroying a pod, the process of shutting those things down, from the perspective of systemd inside the container and systemd on the host, has gaps in information about what is happening. Things need to be smoothed out there to really be able to use systemd well inside containers, along with the privilege issues I talked about.

The contained virtualization piece: if oVirt is going to do that, it's going to need work upstream. They're currently doing some big reworking around oVirt Node; I'd like to see some of that, and I'll have to see just how likely some of it is. And if it's not going to work, I don't have to stay with oVirt. Interestingly, the Kolla project started out aiming to run OpenStack on Kubernetes, and gaps in Kubernetes functionality, I think specifically some around running with privileges, led them to move to, I think, Mesos and Marathon. But some of those gaps have since been filled in Kubernetes, so maybe I'll just rejoin that universe and shift my allegiance to OpenStack if I want to continue with the containerized approach.

Looking ahead: Gluster has been suiting my needs, but Ceph is also now in the fold as one of our projects, and this is the sort of setup that gives you a model for deploying things like that side by side, even if it's just for testing purposes. Other things: I mentioned networking, and I need to clean up that part of the setup. Atomic Enterprise is an upcoming product from Red Hat that was announced a while ago; if you want more information, there's more for sure in the Project Atomic GitHub repos under atomic-enterprise. It's a piece that sits between Atomic Host and OpenShift version 3 with its Kubernetes basis, and it taps Open vSwitch for some of its networking needs. So I'm going to look at that for some lessons about how I can bolster the networking portion and get it more appropriately contained. Also, I'm running all of this on regular CentOS hosts; running it on Atomic Hosts, where all the applications above the base OS run in containers, is a route I haven't even tried yet with this particular project, just to save myself potential headaches, but it's something I want to get into. And then there's smoothing out the deployment and getting better automation of the whole thing. I've been getting up to speed on Ansible through the work I've been doing bringing up the cluster, and I've really been liking it; it's great for filling in a lot of the gaps. But really, a lot of the deployment details are encoded in your Kubernetes definitions. And this last bit here: there's this Atomic App tool and the Nulecule project.
What that allows you to do is take something like a very complicated Kubernetes definition, with all its different manifests, and boil it down to a one- or two-command setup: you fetch the app, it comes down as a Docker container that itself pulls in other Docker containers, and there's an answers file where you can set environment-specific details, and then it's off and running. It's one thing to set this up in my lab and get it working; sharing it with other people is where that sort of thing will help out quite a bit.

And that's all my slides. If anyone has any questions, I'd be happy to answer. Am I over time?

Why would I still want oVirt if I'm running Kubernetes? Well, oVirt is for running virtual machines, and Kubernetes is for containers. Now, I could just run VMs in Kubernetes, but I like running oVirt, and oVirt is also one of our projects, so for that reason alone I pay attention to it and want to continue to; there's a dog-food portion to it. But it's also that that style of virtualization is what I cut my teeth on, with VMware, and sometimes you want to have pets. oVirt is super for that.

As for the case where you're actually live-migrating the container: there is some work around live-migrating containers, but that's not something I have specifically worked on, though I have seen some things around it. The basic idea of this style of running your applications is that the pod you want to get rid of dies, the pod in the new place wakes up, and it connects to whatever needed to persist. That's the assumption, which doesn't mean it's not doable, or that some people aren't trying to do it. And if you need to maintain state, you can connect to the persistent volumes you set up: the one pod dies, the new one comes up and says, okay, I need X, Y, and Z, I have these claims, and Kubernetes says, all right, these are the volumes associated with those claims, so enjoy.

Anyone else? All right, cool. If you want to ask me anything later, I'm on Twitter, email, and IRC, and that's my blog, which I don't write on often enough. I'd be happy to answer any questions later. Thank you.