Good morning, good afternoon, good evening, and sometimes good night. This is so awesome. The music you just heard means you've stumbled into yet another Level Up Hour, where we discuss containers, Kubernetes, and OpenShift across the Red Hat portfolio. My name is Randy Russell. I'm the director of certification at Red Hat, and I'm joined by my co-hosts, Jafar Shraibi and Scott McBrien. Please like, subscribe, share — let people know that we're here contemplating all the wonderfulness that is containers, Kubernetes, OpenShift. And so this is our first show of 2022. We made it back. And I'm looking forward to our show today and the shows to follow. We have a lot in store here in 2022. So with that in mind, greetings to those who are joining us again, and let's get into it a bit here. Scott, we're going to be talking about auto-updating containers in Red Hat Enterprise Linux. So we're going to step away a little bit from OpenShift today and focus a little bit more on Red Hat Enterprise Linux. Jafar, this is your cue to nod meaningfully. Yeah, yeah. So Scott, set us up here. What is the background of wanting to do this? So in a lot of organizations that have made their transformational journey to containers, one of the big benefits is the speed of development, right? And so as you are making changes, you want those changes to propagate across your infrastructure so that you can have the newest, hottest stuff delivered to the end user. Traditionally, we've used a variety of different technologies to try and accomplish that, from things like CI/CD pipelines to other tools. But in the Red Hat Enterprise Linux world, we use Podman for this. And so in RHEL 8.4, we started shipping a newer version of Podman — Podman 3-point-something — and we included a Podman subcommand called auto-update.
And that's kind of what we're going to be taking a closer look at today. Essentially, what it allows you to do is set a configuration option at the container level that says this is one container you're interested in turning on auto-updates for. And it ties into systemd for managing the redeployment and re-execution of containers. And it can also tie into systemd timer units to do that on a schedule, right? By default it checks every day, but you can change it to every hour, every second, or whatever you want by adjusting the timer configuration. And so it's a couple of different Red Hat Enterprise Linux administration skills that come into play to really make this magic happen. It sounds like the cron job of auto-updating containers. It is — in fact, we've kind of supplanted cron with the idea of systemd timer units. Yeah, because systemd does all the things, so why not have it do cron as well? Sure, sure. So, Jafar, what are your thoughts here so far? Is this a little bit out there, that we're going to be doing these operations on containers outside of the lovely orchestrations of OpenShift? So actually, that's a very interesting feature, because a few years ago — I believe it was four years ago — we had that conversation with one of our customers who was using RHEL and who was looking at OpenShift for managing containers at scale. But they had these sort of edge facilities where they wanted to run independent systems that were containerized — they had already containerized some of their applications — but those factories didn't have the horsepower to run a complete OpenShift platform, and they had, I believe, hundreds of small facilities around the place.
So basically, we were looking to implement that, but we had to come up with ways of doing it that were manual and that were sort of workarounds, because we didn't have that type of capability. And so basically it was something that we needed back then, because, for example, the CI/CD would kick off on the OpenShift platform centrally, but then we would want to push the new images out to all the facilities so they'd benefit from the updated version of the application. So I think that's a great feature that will make it much easier to implement that type of use case, and I'm looking forward to seeing the demo and learning more about it, because it's not a feature that I had in mind. It's one that you wished you had at one point. Yeah, exactly. And so actually it kind of brings us full circle for a moment, before we launch into some further discussion of it: the description of the use case that you just gave brings to mind one of our recent episodes where we were talking about Single Node OpenShift. I just have to ask, out of curiosity, might that have been an alternative solution to the use case you were dealing with? You mean Single Node OpenShift? Yeah, the fairly recently released one, as an alternative for that particular use case you described. So interestingly enough, we were using OpenShift 3 back then — I think we didn't have OpenShift 4 yet. And we had the equivalent of Single Node OpenShift with OpenShift 3, which we used to call all-in-one, but still, they didn't have enough resources to run the full Kubernetes orchestration, and they didn't want to, because it seemed too complex for a use case where all they had to run was like 20 containers on a very small-footprint machine. So even at that time, and even with the OpenShift 3 equivalent of Single Node OpenShift, it didn't really match their use case — because of resource constraints, basically. Okay. And the complexity of managing it.
They didn't want to need a Kubernetes expert on site to troubleshoot whatever happened. They just had RHEL admins, and they didn't want to go to that next level, which is getting full speed into Kubernetes. All right. Well, so that kind of sets it up really well, Scott, right? Because that's essentially what you're gonna introduce to us today. And across the Red Hat portfolio, there are kind of different shades of gray on managing containers, right? So what we're talking about today, I would classify as really basic automation. We're just gonna stamp this thing out. There's not a lot of logic involved in it. It's just: after a certain amount of time, I'll check this thing; if there's a newer one, I'll pull it down and execute it. You can get more sophisticated automation if you throw in something like Ansible, right? Because you have the ability to do some level of logic. And then if you want orchestration — which is not just the logic of deciding what to do, but also adds where it should be done and how it should be done, and complex ordering of tasks, especially in a situation where you have multiple containers relying on each other, and maybe there's a very precise startup process that has to happen, and different things have to be exposed to the machine — that's where orchestration comes in, right? And Red Hat Enterprise Linux does not do orchestration. We don't even do a ton of automation, right? We do automation in the sense that we provide a framework for it, and you can use things like shell scripts or Ansible to give you that next level of programmatic automation. But if you want orchestration — you wanna monitor your deployment, you wanna make sure the containers are running, and if not, start them up again or start them on a different node — all that stuff is Kubernetes and orchestration through OpenShift. And in fact, Jafar was talking about the edge.
That's one of the places where we see this a lot, because edge devices are not super beefy. Even Single Node OpenShift today, I believe, requires eight processing cores. Yeah, and so if you're talking about something like an Intel NUC, you could do it there if you buy the highest-end one you can get, but a lot of the computers of that form factor are dual- or quad-core, single processor. That's pretty much it. And then RAM is another one where they struggle to get large enough to be able to run Single Node OpenShift to get you that orchestration component. But that said, a lot of the use cases don't need that, because they're not running really complex workloads on those things. They're running five containers, ten containers. They tend to be designed to be more independent. So it's okay to just use not-very-sophisticated automation to apply updates and manage those boxes. What sort of decision process would an organization or a team go through to arrive at the conclusion that this might be the way to go, and that something more involved — getting fully into orchestration — might be overkill? What's the thought process, or what's the checklist? What are the kinds of things that somebody should go through and say, okay, yes this, yes this, yes this, or no, whatever the case might be, that would help them decide this is the way to go as opposed to a more fully orchestrated approach? So that's a tough question, because if you asked Jafar — that's why I asked it. I'm sorry, I thought you were talking to both of us. I am. Yeah, so if you asked Jafar, it's gonna be OpenShift. If you ask me, it's gonna be RHEL. And I think that there's a bunch of white space in between where either one could be the correct choice, or a valid choice. Yeah, so I think that one of the things to take into consideration is whether you need advanced scheduling capabilities and things like placement rules.
So for instance, if you are speaking about one machine, then it's probably fine to just run with Podman and all the primitives that come with RHEL. If you have two or more machines, and you have to start guessing where it's better to place the container, it becomes a more complex problem to solve, because you have to implement basically all the logic to compare the available capacity, the required capacity, et cetera. And basically you are then rewriting Kubernetes or OpenShift. Yeah, exactly. That's where the tipping point is: do you find that you're actually rewriting Kubernetes independently? Well, that might be an indication that you need to go OpenShift. But it does sound like there are actually some pretty clear use cases, particularly around edge, particularly around resource constraints. Let me ask one more question. Do you think this is the sort of thing that an organization that is OpenShift-savvy would decide — well, okay, look, we know OpenShift, we do OpenShift, we're implementing it, but for this particular use case, we're making the choice not to? Or is this the sort of thing that we're going to see from an organization that hasn't even dipped its toe into OpenShift, regardless of the scale? Any sense of that, either of you? Yeah, I think both can coexist. I mean, you can have an enterprise that runs OpenShift at large scale and still have some use cases where they are going to choose to go with that very low-footprint containers-on-RHEL pattern. So it's not necessarily one or the other — it's not an OR, or a XOR, it's not an exclusive or — they can both be valid for different types of needs. And of course, if you have no experience with OpenShift, it's probably easier to at least learn about containers by using Podman and doing things on laptops or single machines, et cetera.
And once you get familiar with those things, then you can go further with new concepts like orchestration and such. All right, yeah. Well, so Scott, do you want to run us through a little bit of what the process is here? Yeah, I just want to tag on one comment on our last thread of discussion, and then I'll shift into the demo. By all means. So, I think there's a natural inclination for people to stick with what they know. So if you are a Red Hat Enterprise Linux organization and you have a lot of Red Hat Enterprise Linux expertise, that's probably where your feelings tell you you should stay, right? And that may not be the right decision. If you are an OpenShift organization, and you use OpenShift for all your containers, but now you're deploying onto a different class of hardware or with a different method of deployment — your feelings are going to tell you that you should stay with OpenShift, because you've already got all of that background and expertise and other stuff with it. But that may not be the best choice. Your hardware might tell you something different than your feelings. Right, and then there's a ton of gray area in between. And I noticed in the chat, somebody asked about K3s. Red Hat does not have a supported K3s implementation. We either have Kubernetes, or we have, you know, automation with tools like Podman. And that's just because — how many things can we shove into that gray space to create even more confusion on which route someone should take when they're going through a deployment or learning a technology? How many more technologies do we want to heap onto them? So that was a very conscious choice on our part not to include that. Yeah. All right, so demo — live demo. Live demo? What could possibly go wrong? Indeed. Okay. But it's all gonna be automated, so everything will be great, right? So I think we'll have to ask for a jingle for the live demo. That's right, with some vocals on it.
All right, let me take note. So, just to lay some groundwork here, this is a Red Hat Enterprise Linux system. It has Podman and some other tools installed. And I've gone ahead and pulled down an older UBI image. So you can see at the top of my images list there's one that's tagged 8.5-200. That is old, right? And the reason I pulled down an old one is that I wanted to be able to auto-update to the latest, right? And if you don't have something old, how do you get to the latest if you're already on the latest? And I'll go through towards the end how I actually accomplished this. But you can see that there's another one in there tagged latest that has the same image ID. So essentially, I pulled down this old one, I cloned it, and said it was really the latest one. So the next time the system checks in with the registry and sees that there is a newer UBI at latest, it'll go ahead and pull it down as part of the auto-update, right? There are a couple of other small things that I've done here, and again, I can go through those towards the end. But I wanna jump in and show the critical parts, which is: how do we actually set up auto-update? I also wanna mention that this is related to an article that is available in the Enable Sysadmin community — I think our producer has already linked it in the chat. It was written by Dan Walsh and Preethi and, I think, one other author, and they base theirs off of Fedora containers, while we're basing ours off of the Universal Base Image and the Red Hat registry for the freely redistributable Universal Base Image. And if you are interested in the Universal Base Image and you're not familiar with that subject, go back a few episodes in the Level Up Hour and you will find a meaningful discussion about said UBI. Indeed. Sure. All right. So I have this image, and I'm gonna create a container out of it. So I'm running this podman create. I'm gonna call my container my-app-container.
And then here's the magic, right? This label, io.containers.autoupdate=registry. This special label is a configuration option that tells our system that when we run podman auto-update for this container, it should check with a registry to see if there's a newer update available. There are two options that you can pass here. One is registry, which is what we're using. The other is local, which means: look only in the local container storage for making this decision. So if you're applying an update locally, you can then run your production system off of that local storage and pull down the update from there. So Scott, before you hit enter — is there a typo where you have sleep infinity? No. That is not a typo. Infinity? Yeah, in infinity there's an 'i'. Sorry. Yes, there is. Okay. Come on — we can't let him hit enter yet, okay? We don't have the jingle yet. When we have the jingle, I'll let him. You'll let him wander into the minefield, okay. I literally copied and pasted this last night. So it does still work, even with the typo — maybe not as much as I wanted it to. All right. It may not work now that we have removed the typo. What could possibly go wrong? So the other thing there is the base container image that my-app-container is gonna be based off of, right? So I'm telling it that we're gonna take this my-app-container and base it off this UBI image in the image repository. And then the sleep infinity — that is the job, the command that will be executed within the container, right? So it's just gonna sleep — it's just gonna run, essentially, and not do anything. But this could also be starting up your web server or a variety of other things, right? All right. So let me go ahead and hit enter here, maybe. And if I do a podman ps -a — yes, there's that new container that I created, right? It's based off of this UBI latest from my local repository.
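The transcript never shows Scott's literal command line, but based on his description, the create step would look roughly like this (the container name follows the demo; the exact registry path for UBI is an assumption on my part):

```shell
# Create (but don't yet run) a container flagged for registry-based auto-updates.
# The io.containers.autoupdate=registry label is what `podman auto-update` keys on;
# "local" instead of "registry" would check only local container storage.
podman create \
  --name my-app-container \
  --label io.containers.autoupdate=registry \
  registry.access.redhat.com/ubi8/ubi:latest \
  sleep infinity

# Verify it exists (created, not running):
podman ps -a --filter name=my-app-container
```

The `sleep infinity` at the end is the container's command; in a real deployment it would be your web server or application process.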
When we run it, it's gonna run the command sleep infinity, and the name associated with it is my-app-container. All right. So now we've got our container created. It's tagged to do auto-updates, and when podman is called with auto-update, it's gonna check against our registry, right? So at this point, I could run the command podman auto-update, and it would execute that automation to check the registry and pull down a newer version if it existed. But we want to run this container — and not only do we wanna run it, we want it to automatically pull down the auto-update and redeploy. And so for that, we're gonna have to mix in a little bit of systemd. So you may not know this, but systemd allows regular users to have their own systemd units. If you create a systemd directory structure in your local home directory, you can use these user-based systemd units. And I'm just gonna create a systemd unit for my new container. Yes, Jafar? Yeah, I was gonna ask — so I see you have the RHEL web console. Is that like the admin UI that we provide, or is it something else? It's a web-based UI that you can use, and there actually is a container management section in there that's relatively full-featured. Yeah, that's what I was gonna say — I think you can also create systemd units for containers from that UI. But it's been a while since I've looked at it. Yeah, it's Cockpit, right? It is, but it's not the upstream Cockpit project. Yeah, yeah, okay. I mean, it's the web console we have, right? Right, correct. And that's a function of time: when RHEL 8 was originally released, the container management there was not super sophisticated, and over time it has gotten much, much, much more fully featured. Last time I looked at it was about six months ago, and I was like, wow, that's really useful for a lot of things. Yeah, I actually use it a lot on one of my web instances, especially for those containers.
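The unit-generation step Scott describes would look something like the following sketch. The `--new` and `--files` flags are my assumption based on how this is commonly done (they aren't read out in the demo): `--new` makes the unit recreate the container from the image on each start, which is what lets auto-update swap in an updated image, and `--files` writes the unit to a file instead of stdout.

```shell
# User-level systemd units live here:
mkdir -p ~/.config/systemd/user
cd ~/.config/systemd/user

# Generate a service unit for the existing container. With --name, the unit
# is named after the container (container-my-app-container.service) rather
# than its ID.
podman generate systemd --new --files --name my-app-container
```

Running that one command writes out a complete service unit, which is the "super slick" part Scott calls out a moment later.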
So just for my edification and clarification: we did our initial step, which put it into place forever, because it's infinity. However, I'm assuming that after your first step, had you restarted your system, this would not be in place, because it was done live on the running system, and it would not be persistent. And the step that you are now about to take — you're poised, your hand is twitching to hit enter — this is now what's gonna make it persistent across reboots and so on? So, kind of. The actual my-app-container is persistent. I created it, it's out there in local storage, it's there — but it's not running. Notice that when I did a podman ps -a right here, it said that it was created, but it's not running, right? It was just out there in the library, essentially. Yeah, it's available — available, but not running. Correct. If I rebooted the machine right now, before hitting enter, that my-app-container would still be in the local repository. But it has not yet been run, and after a reboot it would not automatically start up; it would just sit there in the image library. So what we're doing here with this podman generate systemd is creating a service unit for this container, which will start the container. Then I'll have to enable it using systemctl, just like I would a normal service. So I'll have to start up the service, and then, thanks to that systemd unit and the unit being enabled, it will survive reboots. So if my machine reboots, it'll start up this container automatically. That's where we're going. And then I just wanted to point out that this podman generate systemd is super slick, because what it really did was create this my-app-container service unit, which looks like this. So just running that one podman command made all your systemd stuff happen. And down here in the service section — this is what I think you were talking about, Randy. Thank you.
When this unit is executed, it runs that container out of the container library. With like one command, the magic happens. Or you could just go and type all of that by hand — fire up your favorite editor, if you have all day and nothing to do. Well, what's the likelihood that you wouldn't mistype anything, especially during a live demo? Well, given today's show, I'd say it's high. All right. Nifty. A couple of other small things, because we're not rebooting: I'm going to reload — or rather, send a signal to the systemd daemon that it needs to reread its configs — because I just dropped in this new service file, but the systemd that's running doesn't know about it yet, because it's not in its cache. So the daemon-reload has it recache that file, and now it's available. And then I'm going to go ahead and start it. And if I do a status, we can see that this container is executing. And if I wanted it to survive reboots, I'd also have to enable it so that it would start up at boot time. All right. So if I podman ps — yes, there's my running container. It is running its sleep infinity there; status here is up 27 seconds ago. So it's actually executing. And — so, pause for a moment. Joe Brown asked a question, which I think you touched upon earlier, but let's clarify it again: no admin permission needed to add it to systemd? You've actually taken some steps within systemd and its user-level capabilities, right? That's what sort of set the stage. Correct. That's built in. I didn't change anything about this box; I didn't change anything about this user to grant them this permission. That's part of the systemd services: users can create their own services within the context of their user, right? So I'm logged in as this unprivileged user named rhel, and processes that I run as rhel have all the rights and privileges of the rhel user, right? So we also talked a while ago about rootless containers.
That's what my-app-container is: it's running rootless. So all of the things that it may be doing inside of its container image are only allowed to operate on the container host with the rights and privileges of the rhel unprivileged user account. And the same thing is true for the systemd services. Anything that I do as the rhel user with systemd operates with the rights and privileges of the rhel user, and is not able to exceed those privileges and do root stuff. Right — and so if we're talking about not necessarily a live demo but a production system in the field, very likely there would be some user created for the purpose of doing this, and not running as a privileged root user unless there was absolutely something that required it, right? Essentially, it would be the same process, except it wouldn't be Scott's user account — it would be an account created for this particular purpose. Would that be a safe statement? It highly depends on the organization. For example, in the DoD world, you don't have service accounts. So you would not be allowed, under security protocols, to create a generic user account for managing services without a lot of effort. So it depends. It depends — fair enough? Yeah, all right. And so just to circle back on Joe Brown's question there: I just wanna show you the couple of things I did. The first thing was, in my home directory, I made a directory under .config/systemd called user. That's where my user-controlled stuff goes. And then, when I used my podman generate to make that new unit file, notice that I stored it in this .config/systemd/user directory, right? So the unit names are the same in the user configuration of systemd as they are for the system systemd. Here we are — we're running our container. You can see here its unique container image ID that goes along with it, so we can keep track of this guy. All right, so that's a lot of setup.
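The setup sequence Scott just walked through can be sketched as the following few commands — a reconstruction, not a verbatim capture of his terminal; the unit name assumes the default `container-<name>` naming that podman generate systemd uses, and the final `loginctl` line is an extra consideration the demo doesn't cover:

```shell
# Make the user systemd instance reread unit files after dropping in the new one:
systemctl --user daemon-reload

# Start the container via its service, check it, and enable it for future boots:
systemctl --user start container-my-app-container.service
systemctl --user status container-my-app-container.service
systemctl --user enable container-my-app-container.service

# On an unattended box, user services normally stop at logout; lingering
# keeps them running without an active session (assumes the demo's user name):
# loginctl enable-linger rhel
```

All of this runs as the unprivileged user — no root, no sudo — which is exactly the point of the Joe Brown question above.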
And so what I'm going to do now is actually run the podman auto-update. And I'm gonna pass it an option: instead of actually doing the auto-update and executing it, I'm asking it to just show me the steps and the output that one would see if one were to auto-update. All right — so my-app-container has been flagged as an auto-update container, and it went out to the registry and sees that there is a new container image available. And notice the updated status over here is pending, meaning we have not applied this update. So if I run podman auto-update without --dry-run, it will pull down that updated container image from the registry, and thanks to my service, it will redeploy this container running on the system with that newer, updated image. All right, you ready? Hold my beer. Exactly — hold my beer. So here it is, pulling the updated base container image from the registry. All right, so that's good. It recognized that our container was based on an older base image. All right — and now, when I do a podman ps, hmm, there's nothing running. Before, it was running, but now it's not. Is that what you wanted? So it pulled it down, it re-executed the update of my-app-container, and then redeployed it. Right, and we can tell that's the case because up here, the last time our my-app-container was running, look at its container ID, right? The last four characters are alpha, five, delta, echo, and down here the last four are six, five, foxtrot, eight. So it actually pulled down and made a new container from that updated UBI and redeployed it, so we can see in our ps that it's a new image. In fact, you can see there's a difference in the runtime and a whole bunch of other properties that show us that this has actually happened. All right, so that's the magic. There was a lot of lead-up for like one command. Well, you know, that's the essence of magic.
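The two invocations in this part of the demo are simple enough to write out; this sketch shows the preview-then-apply flow Scott describes:

```shell
# Preview: report which labeled containers have a newer image in the registry,
# without pulling or restarting anything. Status shows "pending" when an
# update is available, "false" when nothing is newer.
podman auto-update --dry-run

# Apply: pull the newer images and restart the corresponding systemd services,
# so each container comes back up on the updated image.
podman auto-update
```

After the real run, `podman ps` shows a new container ID and a fresh start time — the evidence Scott points at to prove the redeploy happened.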
There was a whole lot of this-and-that, so maybe it's not so magical, right? That's true — so maybe there's more science. The magic would have been if I just auto-updated and it just worked. All right, so we have this guy auto-updating when you run the command, right? So if I run this command again, it goes out to the registry, it sees that there's nothing newer, right? So it says updated: false, and we didn't have to do anything. And if I look at my podman ps — yes, it did not redeploy this container, because there was no update available. It's still running the same container it was running a while ago, and it will continue to do that. So that's how you can auto-update by hand. And for some people, this might be where they stop — but we're not those people. We're not those people. Oh, no, no, no, no, no, we have to forge on. So the last thing that we're gonna do is add a timer unit, so that behind the scenes we will check to see if there's an updated base container in the registry, and when we find that's true, we'll go ahead and pull it down and redeploy it. So for that — well, so, before you amaze and impress: what are the considerations that somebody might undertake in terms of that timing? Do you watch how frequently the particular image that you're using in a registry tends to update, and try to set your timer to that? Or is there a best practice around your timer interval? Or is it just, we're gonna guess and see what happens? So the default timer — that I accidentally hit enter on — will check every day. So we'll take a look at what the systemd timer is set to, but it'll check every day, and if there's an update, it'll execute it. So the default is check daily? Yes, check daily, but you can override that. So you could make it check hourly, or check every second, if you really want to.
I think the bigger thing is knowing whether you really want it to check automatically and pull down and deploy with automation, or whether you want to have a person making a conscious, manual decision. Yeah — so would you prefer to run some Ansible and contact your fleet and tell them all to auto-update at the time you choose, right? Then you know that if things go cattywampus, you know exactly when that happened and exactly what occurred when things went belly-up, and you can maybe reverse it. Or is something happening in the middle of the night, across various systems in your organization, the way you want to go? It's certainly less effort, because you don't do anything — you just sit back and things happen. But then, in the morning, or when the pager starts going off, you need to figure out that it actually occurred, what systems it occurred on, when it occurred, and then you have to find it and roll it back. So I think those are the... Grep for cattywampus in your logs, I guess. Exactly. So I have a crazy thought here, but I'm gonna go ahead with it. Imagine you have hundreds of containers. And the first thing, as soon as I say hundreds of containers, we're gonna say: okay, go ahead and use OpenShift. Okay, we need it. Yeah, that's probably a valid answer. But imagine that you want to have a pub/sub pattern: when the image is updated centrally, you push a notification to everyone who's interested in it, and that's when they do the update — rather than polling the registry every 30 seconds. Because when you implement that continuous polling, it might overwhelm the central unit or the registry, because you are doing HTTP requests on a fixed interval. And of course, depending on the load — if you have hundreds of thousands of containers running — it might be more complex to handle. But yeah, is there something like this that can be implemented?
Like, if you have a Kafka stream somewhere, and your registry sends an event into the stream, and Podman — when you say podman auto-update, it translates into: you subscribe to the update event, and then you do the update whenever you receive that event. Is that something that makes sense? Kafka and I — I mean, I know it's a technology, but I have no idea what its capabilities are or how to interact with it. That sounds very OpenShift-y to me. Yeah, it's basically an event-streaming technology, I would say, where you can trace all the events that you are interested in. And as a subscriber, you can say: whenever this type of event gets written into the stream, send me a notification. So I'm sure that one could come up with something, right? Because with shell scripting and Ansible, as much effort as you wanna pour into it, you can get some output out of it. But it's gonna be effort to make those things happen. Whereas we keep things simple, right? Our technology, if it's broken, simply apply more hammer and it will fix. There's not a lot of complexity without the user — the administrator setting this up — putting in a lot of complexity. But with complexity, you now have more complexity to maintain over time, right? So I'll leave that as an exercise for the user, or the watcher of the show: decide what their organizational skill level is and how complex their environment is, and whether they wanna invest in that or not — or do they just go with something like OpenShift that has a lot of this complexity already built in, with management tools to make using that complexity easier. Right, that's kind of the value brought there. All right. So the last thing I did here was put in the auto-update timer, and I just wanted to tie that off. So I'm just looking at the auto-update timer that I enabled. It is using the system podman-auto-update timer unit.
And when I cat it, these are the instructions, right? So it's gonna do OnCalendar=daily, a once-daily check-in. RandomizedDelaySec=900 means it fires sometime between zero and 900 seconds after recognizing it needs to run that day; that's to keep all of your daily jobs from all firing at the exact same moment. And then Persistent=true says, if we reboot the machine and the scheduled run was missed, run it on boot. You can also put in your own customizations to override the system defaults, if you'd like, and those go in your local systemd timer configuration, where you can add, change, or override these parameters. But that's what it's doing every day now. So tomorrow when we come in, it will have checked, and I'm not expecting a UBI update tomorrow, but if there were one, it would go ahead and pull it down and redeploy it. So, question: within what we're doing here, is there any sort of concept of rollback? So I know that the article Stephanie posted earlier does cover the concept of rollback. I did not invest a bunch of time in figuring out how to roll that into this demo. So yes, there is such a concept, but I didn't put in a bunch of time to figure out how it works, or what kind of logic one could put in place to determine whether you should roll something back or not. Okay, so that's left as an exercise for the reader. Again, take a look at that article that our producer posted. And it seems to me like a logical next step, if you're doing this sort of automation, is to have a plan in place for rolling it back when things don't go the way you want them to. Correct. Well, what else have we got, Scott? I'm asking people, if they have any questions, to post them in the chat, but are there some other things we wanna share about this capability?
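A sketch of that local override, using a standard systemd drop-in directory; switching the schedule to hourly is just an example value I picked, not something from the demo.

```shell
# Drop-in override for the shipped timer: an empty OnCalendar= clears the
# default before the new value is set. Written only where a running systemd
# and a writable /etc/systemd/system are present.
OVERRIDE='[Timer]
OnCalendar=
OnCalendar=hourly'

if [ -d /run/systemd/system ] && [ -w /etc/systemd/system ]; then
    mkdir -p /etc/systemd/system/podman-auto-update.timer.d
    printf '%s\n' "$OVERRIDE" > /etc/systemd/system/podman-auto-update.timer.d/override.conf
    systemctl daemon-reload   # pick up the new schedule
fi
```

The same file can be created interactively with systemctl edit podman-auto-update.timer, which drops you into an editor on override.conf.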
So the one thing that I did to get this whole thing started, you may recall, is that I said we had two containers in the image registry, and I had specifically pulled an old one down and then forced it to be tagged as latest. The command I used for that is this guy right here. When I did my podman pull, I pulled version 8.5-200; that's command number one in this history. And command two is the podman tag, which says create a new tag, ubi:latest, based off of ubi:8.5-200. That was to force it so that when I did the auto-update, it would have an update to apply, right? By starting my whole thing on this older version of the Universal Base Image container, I ensured that there would be an updated version available when I ran the podman auto-update. So there was a little bit of fakery there, but that's what it was. Shh, we don't have to say that. All right, cool. Well, I'll ask again if folks have any questions about doing some of these OpenShift-less operations within Red Hat Enterprise Linux, or maybe some standalone deployments. We stand at the ready to answer. "We" here meaning Scott, not me. Possibly Jafar; he'll convince you that you need to go to OpenShift. No, what do I have to do with OpenShift, right? All right. Great, then, thanks, Scott. Yeah. And we learned new things today, so it's always good to be around. Yep, and again, take a look at the chat; we dropped some links in there for more information. What we've given today is a bit of a peek into a different way of looking at deploying containers, because OpenShift, wonderful as it is, might not always be the answer for certain kinds of deployments where the need is somewhat different, right? Great demos, Scott. I'm not seeing any additional questions, so I would just again say: like, subscribe, and share. We're here more Wednesday mornings than not, morning here in the Eastern US. So with that, I think we can call this a show. Join us next time.
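The two setup commands from the demo, reconstructed; the full registry path is the standard UBI location and is my assumption, since only the tags were read out on air.

```shell
# Stage an intentionally old image as :latest so that a later
# `podman auto-update` has something newer to move to.
IMG=registry.access.redhat.com/ubi8/ubi   # assumed registry path

if command -v podman >/dev/null 2>&1; then
    podman pull "${IMG}:8.5-200"                  # command 1: fetch the older tag
    podman tag  "${IMG}:8.5-200" "${IMG}:latest"  # command 2: alias it as latest
fi
```

With that in place, the daily timer run finds a genuinely newer :latest in the registry and redeploys, which is what made the demo work on cue.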
And again, like, subscribe, and share. Thanks a lot, folks. Thank you, guys.