Let's get into the big topic of open source, something we've actually covered in the past. This is so awesome. We are an open culture. How's the Kubernetes ecosystem really doing?

Good morning, good afternoon, good evening, wherever you might be. Welcome to the Level Up Hour, where we discuss all things containers, Kubernetes, and OpenShift. I'm Randy Russell, and I am joined by my co-hosts, Jafar Charibi and Mr. Scott McBrien. Hello gentlemen, how are we today? Good morning, Randy. I've missed you guys. I haven't been able to participate for a little bit here, but I'm back now, and I am very excited about our show today, because we are covering that reality of life: we don't do things once. We don't get to just deploy and forget. No, no, no. We have to upgrade container images, and we have to do it in production. If you're new to container management, maintaining container image life cycles might seem a little complicated, maybe even intimidating. Not for our audience, though. They're brave souls, each and every one. But today we're going to talk a little bit about how you maintain container images over time. We're going to look at the Red Hat Ecosystem Catalog and container catalog, and talk about upgrading, automating upgrades, certified container images, and something called the Container Health Index, which, in the modern era, the past two years, anything with "health index" in the phrase makes me very, very nervous, but I'm sure it's entirely benign. Anyway, audience, if you have any questions as we're going along, please do post them in chat, and please remember to like, subscribe, and share, so that everybody knows we're here every other week with the Level Up Hour. Let's get to it. Scott, where do we even begin? I think the first place we begin is: where do you get your containers?
You could choose to build them yourself, and that's perfectly viable; then you own the production of those images, and you can set pretty much whatever you want for your maintenance cadence, processes, and procedures. Most people choose to start with a base image first. There are a couple of different places where one can get a base image, and I'm sure some of you can probably guess where. Anyone? Anyone? Bueller? Probably the most popular place would be Docker Hub. So if we take a look at the screen share here, Red Hat has their universal base images available via Docker Hub. So Scott, are you sharing something? Hopefully. I'm just waiting for our producer to toggle it on. All right, so Red Hat is a registered, verified publisher with Docker, so we are included in the Docker registry and the Docker Hub UI, and you can see that we share four images through Docker Hub, the four flavors of UBI. And if you're interested in the four flavors of UBI, we had a show where we talked a little bit about them, and we'll reference that a bit later. Here in Docker Hub, you can look at what the most popular containers are, and if you click on one, you can see how many pulls it's got and any tags the container image has, so we can actually look at older images in the catalog, not just the most current. And then you can click the pull button, or use the URL and pull it with a Docker command. All right, we can toggle this back off again. So it turns out that Docker Hub is our secondary location for container images; we push those four UBI images over there whenever we build UBI images. Our primary location is the Red Hat certified container catalog.
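To make the two locations Scott describes concrete, the same UBI 8 base image resolves to references like the following (a sketch: the repository paths are the published UBI locations, the `latest` tag is illustrative, and `podman` could just as well be `docker`):

```shell
# Fully qualified references for the UBI 8 base image in Red Hat's
# catalog registry (primary) vs. Docker Hub (secondary)
CATALOG_REF="registry.access.redhat.com/ubi8/ubi:latest"
HUB_REF="docker.io/redhat/ubi8:latest"

echo "$CATALOG_REF"
echo "$HUB_REF"

# To actually pull (needs network access):
#   podman pull "$CATALOG_REF"
#   podman pull "$HUB_REF"
```

Either reference yields the same base image content, since Docker Hub is a push mirror of what is built for the catalog.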
All right, so if we bring the screen share back in: Red Hat maintains its own container registry, and we have more containers here than just those four base images, although it hosts those too. It's a little bit different in how we store the images and the information about the images; we'll get into that in a bit. There are other registries out there, but Red Hat currently doesn't participate in them. So you might see, for example, some CentOS containers in the WSL registry, and that is not a CentOS image from us; that's someone else who has made a CentOS-based image and put it there. We have been working with Microsoft to determine what it would take to get into WSL, but right now we're in very early stages of just talking about it and trying to figure out what that looks like for us, how we support it, and a variety of other topics around managing base containers.

Scott, if I might, Kristoff actually posed what I think is a very relevant question: do I need an active subscription to use these images? So if you are accessing the Universal Base Image, or UBI, those are free to access and free to redistribute. That's why we have them over on Docker Hub, and they're free to access and free to redistribute there. UBI and UBI-derivative images in the Red Hat container catalog are likewise free to access and free to redistribute. However, today, not all the software that comes with RHEL is available for UBI. So today UBI is a subset of RHEL software, and it includes a lot of stuff: for example, language runtimes like Node.js or OpenJDK, but not other things like Postgres. For that, you'd have to package and install your own Postgres if you want to retain that redistributability. Now, there are some images that are subscription-only, and those would be the ones that fall into the RHEL catalog of images. So UBI: free, all good, do what you want.
RHEL images provide the full package set for RHEL, and you would need a RHEL subscription to access those; that's noted on their Red Hat catalog pages too, so it's very clear whether you need a subscription when you're trying to access an image.

Well, and we have another question already, barely ten minutes in. The question is: Docker Hub has an API, using the provided API endpoints for the registry, so is there comparable or analogous functionality on the Red Hat catalog side? Yeah, the API on the Red Hat catalog supports most, if not all, of the same calls as the Docker API. So you could literally take the thing you're using to make calls to Docker Hub and just swing the URL over to the Red Hat catalog, and generally it should work. Yeah, I used a lot of weasel words there, but there's more metadata and other things in the Red Hat catalog, about how we track images, image health, and some other stuff, that would not be included in Docker Hub metadata. So that may differ between the two.

All right. So now that we've talked about where to go, maybe we just delve in, find an image in the Red Hat container registry, and talk about how we manage those. All right, I'm just going to go ahead and pull up the UBI base images. You can see that we have RHEL 7-based UBI images, RHEL 8-based UBI images, and there's even a RHEL 9 beta one; when RHEL 9 is released, those will obviously move out of beta as well. So let me just pull up the UBI 8 one, this guy. Recall that when we looked at the Docker catalog, the big metric shown on the page was pulls, and generally when somebody is judging the quality of an image in Docker Hub, I think they look at pulls: the more pulls it has, the more popular it is, and the more popular it is, clearly the better it is, right? Well, we have other mechanics embedded in the Red Hat catalog for providing more depth of quality there.
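As a concrete sketch of "just swing the URL": both hosts speak the standard Docker Registry v2 HTTP protocol, so the same API path shapes apply to either registry (the repository names here are illustrative, and Docker Hub additionally requires fetching a bearer token before this call will succeed):

```shell
# The Registry v2 tag-list endpoint has the same shape on both hosts
REDHAT_HOST=registry.access.redhat.com
HUB_HOST=registry-1.docker.io

TAGS_PATH="v2/ubi8/ubi/tags/list"
REDHAT_URL="https://$REDHAT_HOST/$TAGS_PATH"
echo "$REDHAT_URL"

# curl -s "$REDHAT_URL"                                  # lists tags (needs network)
# curl -s "https://$HUB_HOST/v2/redhat/ubi8/tags/list"   # same shape; needs an auth token
```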
So, number one, we also include when the container image was generated. We also have the health index, which I'll come back to in a second, as well as size. The other information we provide is stuff like security; this goes into a little more detail about the health index, and I'll come back to it. There's a little bit of metadata about the content: these are all of the RPMs that we used to build the container image. RHEL UBI and RHEL images are actually built out of the same RPM software that we use for building RHEL, rather than, say, unpacking tar archives or other things that could be in other base images. We also provide you a Dockerfile: if you want to build your own, here's the Dockerfile you can download, and using a build-from-Dockerfile option, it should make the same thing. Then, under "get this image," this is how you can pull it. You can see there are instructions for using oc, which Jafar is going to get to a little later in the episode, and Podman, depending on whether you're doing OpenShift stuff or RHEL container stuff. And then you can also use Docker if you're using something like Docker Desktop or the Docker tools on your system. All right. And then the last thing: we provide source containers. We had an episode where we talked about this as well. So we tell you where to get the source code for the packages that we shipped with the image. Okay. There are literally no mysteries here. Hopefully as transparent as possible, right?

Well, yeah, that's what I was going to say. This is really the wonderful thing in the world of open source: as you were going through all the various pieces of information, how much insight there is into what you're getting. That is about transparency. There shouldn't be any mysteries in there. Right. So, in a previous episode, episode 57, we talked about choosing the right image, right?
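In the spirit of the downloadable Dockerfile Scott mentions, building your own image on top of UBI looks roughly like this (a minimal sketch, not the catalog's actual file; the package and file names are illustrative, and `nodejs` is one of the runtimes carried in the freely redistributable UBI repositories):

```dockerfile
# Illustrative Containerfile: start from the UBI 8 base and layer your own bits
FROM registry.access.redhat.com/ubi8/ubi:latest

# Install from the UBI repositories, then trim the package cache
RUN dnf install -y nodejs && dnf clean all

COPY app.js /opt/app/app.js
CMD ["node", "/opt/app/app.js"]
```

Because everything installed comes from UBI content, the resulting image keeps the free-to-redistribute property discussed earlier.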
So there are four UBI images: minimal; micro, which is even smaller; regular; and the init, or multi-service, image. We talked about why you would choose one over another in that episode, so if you're interested in more information on the four flavors of UBI and why you might choose one over another, I would recommend you go take a look at that episode. There are also other images here in the catalog that include the language runtimes, and I'll show you one of those in just a second; those are based on UBI.

So, I promised we'd come back to the health index. The health index is basically how we gauge the quality of an image, and you can see that for this image we have a Red Hat bug advisory outstanding, and it is a severity-important security vulnerability based on this advisory. So we could go in, look at what it is, and determine what we want to do about it. Because of that outstanding CVE, this container has a health index of B, meaning it's not the best, right? The best would be A. Some other things we use for gauging the health index would be the length of time, or I should say the time elapsed, since the image was built and uploaded. The longer a container image sits in the catalog, the worse its health index will get over time. So if you have something that's been there a year, just because it's been there a year and not been rebuilt, it's going to have a lower health index; in fact, that might be an E or an F, because typically you want things to be maintained. To address that, for the Red Hat-managed images, we rebuild the UBIs and derivative images every six weeks, generally. And if there is something like what we see with this one, where there's a critical or important security CVE outstanding, then when the remediation for that CVE is available, we will rebuild the images. So every six weeks is our general schedule.
But if there is a reason to rebuild sooner than that, because a remediation has been published for a critical or important CVE, we will rebuild sooner. So I expect this one to be rebuilt once we have a remediation for this OpenSSL libs CVE; then we'll upload it, and then we'll push it to Docker Hub. So we rebuild in the Red Hat catalog first, and then pretty much every day, or actually multiple times a day, we push updated images over to Docker Hub as well.

All right. And to give you an example of what a really ugly one looks like, let's look at this super old one from eight months ago. This one has one important and two moderate CVEs outstanding because of its age, and we can see it has a health index of C. Generally, I would choose a container image that's an A or a B; C is looking a little sketchy. And then if we look at some other container images, like maybe the Node.js image: here's the Node.js 12 image built on UBI 8, and we can see that it is also health index B, probably for that very same security errata. So once we have remediated that, all these derivative images will also be updated to go with it.

Again, a lot of transparency there. Even these images that maybe have more CVEs, and really perhaps shouldn't be in active use, are still there, and it is possible to, in a sense, see the provenance of a particular image over time, right? Absolutely. And the reason they're still there is that sometimes people build their life-cycle tooling to pull a specific tag of an image, right? So if they used this tag, which carries the version information, when they made a call to build their stuff into the container and move onward, and we just did away with this tag and this image, that would break their build process, right? Right. It would be a denial-of-service attack. Well, hey, let's not go there. I'm kidding.
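The rebuild policy Scott describes, a roughly six-week cadence plus an immediate rebuild whenever a fix ships for an important or critical CVE, can be sketched as a small decision function. The 42-day figure and severity names come from the discussion; the function itself is purely illustrative, not Red Hat's actual tooling:

```shell
# Decide whether an image is due for a rebuild under the described policy
needs_rebuild() {
  local age_days=$1      # days since the image was last built
  local fixed_cve=$2     # worst CVE with an available fix: none|moderate|important|critical
  if [ "$age_days" -ge 42 ]; then
    echo yes; return     # past the ~six-week cadence
  fi
  case "$fixed_cve" in
    important|critical) echo yes ;;   # remediation available: rebuild now
    *)                  echo no  ;;
  esac
}

needs_rebuild 10 none        # fresh image, nothing outstanding
needs_rebuild 50 none        # past the six-week cadence
needs_rebuild 5  important   # important fix available
```

Note the trigger is the availability of a remediation, not the CVE disclosure itself: an image with an unfixed CVE keeps its lowered health index until a rebuild can actually help.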
But you see my point, though: if you start doing that, if one day you take an image that might have some outstanding CVEs and you simply yank it from existence, you've effectively done worse than what might have actually happened with the issue itself, right? Possibly. Possibly. Oh, stop splitting hairs. I'm having fun here. It's Wednesday morning. Come on. All right, continue.

So, while we keep that image in place and allow people to continue to use it, of course, the recommended practice is that you periodically check back with the catalog, pull the latest, and rebuild your stuff based off the latest tag. That makes sure that, as we do these rebuilds, your automated rebuild process at your own facility, where you're putting your stuff into the containers, starts with one of those fresh images and builds it out. Latest and greatest. Yeah. And then, as an added bonus, containers are a very persistent entity, right? So if you pull down the latest, build your stuff into it, deploy it, and it doesn't work, you still have your older one in your local library or your local repositories. So you can always revert back to that known-good state of the container if there's some kind of issue. And that's something else to account for when you're building rebuild processes: how long do you keep an image? My recommendation would not be to just go, oh, there's the latest, pull it down, rebuild it, throw away the old image, and deploy the new one. You want to have that ability to roll back, right?

So we talked about rebuilding, we talked about how we show you when things are vulnerable, and a little bit on some of the additional features. At this point, I'd mention that we also did a Level Up Hour episode, episode 55, on configuring auto-updates of containers for Red Hat Enterprise Linux.
So, in that episode, we talked about setting up some systemd configuration so that periodically a machine would check with the Red Hat container registry, or your own local registry if that's how you configured it, to see if there's a newer image available; if so, it would pull it down and redeploy your container images based on that change in state. And Jafar, I know that there's some OpenShift machinery that can do a similar process.

Yeah, yeah, sure. So what you've explained so far is that there are frequent releases of the container images that we provide in the Red Hat catalog. And as you all probably know, OpenShift relies on those images for everything that is going to run in a container by default. Of course, you can enrich it and add your own images, but most of the time what you're going to end up doing is using those images we provide, what we call base images, to build upon, and have your applications running on top of them. The interesting thing that we offer our OpenShift customers is that there are automated mechanisms that allow you to pull those updates from the registry; we use concepts called image streams and image stream tags to basically have dynamic pointers from your OpenShift cluster to those images on the Red Hat catalog. So if you want, I can do a quick demonstration and try to make those concepts a little more visual. A live demo? Yeah, always crazy things. What could possibly go wrong? Let's see how it goes.

So, the first step is to be able to share the screen. A serious moment in a live demo. Okay, cool. Can you see my screen? Yeah, let me just move the window there. Okay, cool. As you can see here, I have an OpenShift project with a Node.js application. This Node.js application has been built based upon a Node image that comes from the Red Hat catalog.
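On the RHEL side, the episode-55 mechanism centers on Podman's auto-update support: you label a container, and a bundled systemd timer periodically checks the registry for a newer image. The label name `io.containers.autoupdate` is Podman's documented one; the image and unit invocations below are an illustrative sketch, not a full setup:

```shell
# The label Podman's auto-update machinery keys off
AUTOUPDATE_LABEL="io.containers.autoupdate=registry"
echo "$AUTOUPDATE_LABEL"

# In practice (needs Podman and a systemd unit generated for the container):
#   podman run -d --name web --label "$AUTOUPDATE_LABEL" \
#       registry.access.redhat.com/ubi8/httpd-24
#   systemctl --user enable --now podman-auto-update.timer
```

With `registry` as the label's policy, Podman compares the local image digest against the registry and restarts the container on a newer image, the host-level analogue of what OpenShift image streams do in-cluster.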
And the way we can check that is: I have a build config, which is basically what we use to define how the application is going to be built, and it says that this image is based off an image stream tag. An image stream tag is basically a dynamic tag that points to a specific image, and this one points to the OpenShift image that is Node.js 14 on UBI 8. So that's what Scott explained: we have different flavors, and this one is using the UBI 8 version and the Node.js 14 runtime. If we check that image in the internal registry, we see that we are now in the openshift namespace, which is where, by default, all those images used by all the projects are stored, and I can see a list of tags. I see that there's a tag here called ubi8 that points to a version of the image that is a little bit outdated. So let's have a look at the image: currently it's pointing to this tag, which is 163, and I see here on the catalog that there's an updated image, and they recommend moving to that image tag instead. So what I want to do now is update the image that is being used by this application, and see what happens in OpenShift as I do that.

So, how do we sync our OpenShift images with the ones that come from the catalog? It's very simple. What I'm going to do is tag the current image stream that is in the openshift namespace and point it to the latest tag that we've just seen here. Basically, there's the oc tag command: I want to add this new tag to my OpenShift image, and that's the Node.js 14 UBI image stream that is used by my application. So I'm updating it here, and as I do that, you can see that OpenShift has automatically detected that the image the application is based upon has been updated, and it's now running a new build of my application.
If I look at the build now, it's rebuilding the application image based on that base image, and as soon as it's finished, it's going to redeploy the application in my namespace. The reason we've been able to do that is that the build config has been configured to be re-triggered on an image change. Basically, OpenShift detected that we updated the image stream this application is based on, and because we allowed the build to be triggered on an image change, it rebuilt the application on the new image and deployed it to the OpenShift namespace.

So, first thing: you can see that it's dynamic, which is great. This is a very simple use case where I'm only updating the application that runs in my dev environment, but of course, if you were to do that in a production environment, you would have your CI/CD pipeline kicked off when this image changes. Here we've kicked off a simple build config, but in a more realistic environment, the image change would kick off the CI/CD pipeline itself, I would have all of those tests run in the different staging environments, et cetera, and then, once everything is satisfied, the application gets redeployed into the production environment based on that updated image. So that's basically what I wanted to show you, and those are the main concepts we wanted to cover.

If I wanted to frequently update those images in OpenShift from the catalog, there's a --scheduled parameter that we can add when we tag an image. So, for instance, if instead of just oc tag I add --scheduled, it's going to update the image on a schedule: by default, every 15 minutes we pull those remote images and check if there's an update, and if so, automatically import that content into OpenShift. All right, so that's it for me. I'm going to stop sharing my screen.
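The two invocations from the demo look roughly like this. The stream, namespace, and tag names are taken loosely from what was shown on screen and should be treated as illustrative; the commands are echoed here rather than run, since they need a live cluster:

```shell
# Source image in the Red Hat catalog and the image-stream tag the build consumes
SRC="registry.access.redhat.com/ubi8/nodejs-14:latest"
DST="openshift/nodejs:14-ubi8"

# One-time repoint of the image-stream tag (fires any image-change build triggers):
echo "oc tag $SRC $DST"

# Periodic re-import, re-checking the remote image roughly every 15 minutes:
echo "oc tag $SRC $DST --scheduled"
```

Remove the leading `echo` to run these against a real cluster; the `--scheduled` form is what turns a one-off sync into the ongoing catalog-to-cluster tracking Jafar describes.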
And before anything goes wrong. Yeah, exactly. So far, it's been working, so let's stop there. It works 100% of the time, every time. That's what I like to see. Exactly.

So, Jafar, I know that both of us have kind of gone through and said use the latest, use the latest. Are there ever times in the OpenShift world where you don't use the latest, and what might be a consideration for that? Oh, yeah. I would say most of the time we don't use latest, because there are two things here. Using the latest is a good practice for updating the base images, because that gives you the updates for the CVEs that we've been able to uncover and fix, et cetera. So it's a good practice to update those base images to the latest ones. But what we provide with image stream tags is that you don't have to call your image stream tag latest for it to point to the latest image. In your application life cycle, a good practice is to never use latest, because then you don't actually know what's in your application anymore; anyone can update the latest tag, and you no longer have versioning of your application. So usually, your base image is going to point to the latest image that we provide from Red Hat, but the application image you've built is going to be tagged with, for example, your commit ID, so you can trace the image back to the commit in the Git repository. And you can then add an application version tag that says version 1.0.1 or something like that. Those are the tags that are going to be used by your CI/CD process to update the deployments of your images, because that way you have more control over the history of your deployments, and you can trace things back to both image changes and code changes if you go back to commit IDs and such. In a sense, latest turns into a black box, if that's how it's used.
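Jafar's tagging convention, base image tracking latest while the application image is pinned by commit and by version, can be sketched like this (all names are hypothetical; in a real repository the commit would come from `git rev-parse --short HEAD`):

```shell
COMMIT=abc1234          # stand-in for: $(git rev-parse --short HEAD)
APP_VERSION=1.0.1

COMMIT_TAG="frontend:$COMMIT"
VERSION_TAG="frontend:$APP_VERSION"
echo "$COMMIT_TAG"
echo "$VERSION_TAG"

# Against a real cluster, each successful build would be recorded under both tags:
#   oc tag myproject/frontend:latest "myproject/$COMMIT_TAG"
#   oc tag myproject/frontend:latest "myproject/$VERSION_TAG"
```

The commit tag gives traceability back to source; the version tag is the stable handle the CI/CD process deploys, so `latest` never appears in a deployment spec.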
And you don't want to find yourself in that situation, which is pretty typical, where you have to trace back: okay, things are not working now, or we are having an issue now; let's go back and try to reconstruct what happened or where it happened. It's like latest gives you a brick wall: you lack the clarity to see past it to what, specifically, that tag was. Am I getting that about right? Yeah, exactly. Exactly. But again, one interesting thing with image stream tags is that even if you've been pointing the latest tag at a specific image, because it's a dynamic pointer, you can always say, okay, we've messed up here, and I want to change the latest tag to point to an earlier version of my application, and then you can still manage that situation. But it's always better to have a semantically clear tag, so you know where you're at with your application versioning and such.

All right, so it's kind of interesting to consider: we had Scott's presentation about a very non-OpenShift, non-Kubernetes approach to this whole topic of updating, and then we added the OpenShift piece. And there's actually an area of intersection. First of all, I'd say that if you have a particular image, that image is tagged and has a lot of the same characteristics viewed through OpenShift that it would if you were looking at it through one of the registries. So that's a common area.
It seems to me, and I want you to correct me if I'm wrong here, Jafar, that what you really get on the OpenShift side is a little more of that life-cycle capability when it comes to CI/CD and how you manage this, not narrowly from the perspective of, okay, here's my container and I'm updating it, but through the whole exercise of saying, okay, I'm going to do that, and I'm going to rebuild in dev, rebuild in test, rebuild in stage, and then put it in prod. Would it be a fair characterization that that's really the piece OpenShift gives you incrementally when you take that approach? Yeah, definitely. So OpenShift provides a lot of capabilities. We've spoken about automated pipelines using Jenkins, or using OpenShift Pipelines, which is based on Tekton; those allow you to have full production-grade CI/CD pipelines that are kicked off on OpenShift and can trigger things within OpenShift itself, but also outside of it, and they can interact with registries that are in OpenShift or outside of it. So, for example, you know that we provide the Red Hat Quay registry, which is, I would say, a more production-grade, highly available, geo-replicated registry that you can use for your deployments as an upgrade to the internal registry we provide. It allows you to have integrated security scans and such, and you can configure OpenShift to automatically push and pull data from a remote Quay registry, which enhances the capabilities with even better security scan results within your registry.
So we've seen that we provide, for instance, the Container Health Index on the Red Hat catalog, which is very useful for understanding the quality and security implications of a specific version or tag of an image, but it doesn't translate as-is to the OpenShift registry, because we don't show those security results in OpenShift itself by default. If you use the Red Hat Quay registry, which provides those capabilities, you are now able not only to have your results displayed in the registry, but they can also dynamically be used as a security gate in your pipelines. So, for instance, you can kick off the build of your new image within the pipeline, and then you can say: if there are security issues with a critical severity, the pipeline won't go further, because I have defined a policy within my pipeline to prevent the application from being promoted if a specific security criterion is met. So that gives you these types of capabilities, where you can shift into what we call DevSecOps. And with the addition of Red Hat Advanced Cluster Security for Kubernetes, or what we call ACS, you also get additional capabilities in terms of security scanning and applying security violation policies, et cetera, to your applications. So we do provide these notions of CVEs in the Red Hat catalog, but if you use the combined value of OpenShift with Quay, ACS, and CI/CD pipelines, you can have a more robust, DevSecOps approach, where things are automated, and where you have gates to prevent security issues from being propagated to the other environments. And that's, I would say, the overall value you can get from that.

Right. Well, that provides a lot of clarity. So, Scott, we lost you for a bit there, and now you're towering down at us. I'm trying to be intimidating.
I saw the power company trucks on my street earlier this morning and I went, I wonder if that's going to be a problem? Apparently, the answer was yes. So I'm slowly regaining power again. Okay, so any additional thoughts, questions, concerns, or commentary from you? I mean, while you were offline, I'm sure you had a lot of time to think about these subjects in greater depth.

Well, before we left, we were talking about always using latest or not, and I wanted to point out that while, and Jafar made the same point, pulling your base images and getting those updated so you remediate the CVEs is important, notice that in the catalog we do things like carry Node.js 12 or 14 or 16. We continue to update the Red Hat bits underneath, and even the Node.js bits, but we keep it at that Node.js 12 or Node.js 14 version. So you don't always have to run the latest Node.js on the latest container image; we give you the choice of which runtime on which version of the image. We even still support RHEL 7 images at this point, although time is running out on those; we only have a couple of years left in their life cycle.

Yeah, speaking about that: if you're familiar with .NET, or the .NET versions that ran as .NET Core, you've probably seen that there are some breaking changes when you move from one version to another. One of my previous demos was based on .NET Core latest. I hadn't run that demo for years and decided to come back and pull it up, and when I tried to rebuild the application that was pointing to latest, it tried to pull version 6 of .NET, which isn't even called .NET Core anymore, because that's what the latest tag in OpenShift had been updated to. And of course, it broke everything. The application wasn't building or running anymore, because there was such a huge change between .NET Core 2 or 3, which I was using, and version 6.
So, by going into my build config and specifically saying, do not use .NET Core latest, use .NET Core version 2, I was able to very easily rebuild my application and have everything running together. Because, as you said, we do provide an image stream for .NET Core, but within it we provide several versions, going as far back as we support those versions of the component itself. So that's a very useful feature we provide, and from a support standpoint, we're able to give you different versions of those runtimes and make them easier to consume. So that's an example of when not to use the latest tag. A textbook example, based on the description you gave.

All right. Just another comment about managing the image life cycle: something we haven't spoken about is the Quay mirroring integration with OpenShift. As mentioned, there's a bidirectional configuration where Quay can become the default registry for OpenShift, but we also have a mirroring capability in Quay that allows you to use it, for example, to mirror those Red Hat catalog images. So basically, you use Quay as your source of truth, but the images you use in Quay are mirrored from the Red Hat catalog. They get periodically updated and scanned for CVEs, et cetera, and are used by OpenShift as the default source for images.

All right. We did have a question; if we could tackle it before we wrap up, that would be great. The question is about image streams: are image streams similar to API endpoints, and can they be used to search for OpenShift images? So, I don't know exactly what the intent of the question is, but an image stream is basically something we added on top of OpenShift to give you this notion of dynamic tags.
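The fix Jafar describes is a one-line change in the build config's source strategy. This is a hedged sketch of the relevant fragment: the `dotnet` stream and its version tags assume the standard builder image streams in the `openshift` namespace, and the surrounding BuildConfig fields are omitted:

```yaml
# Fragment of a BuildConfig: pin the builder to a specific runtime version
strategy:
  type: Source
  sourceStrategy:
    from:
      kind: ImageStreamTag
      namespace: openshift
      name: "dotnet:2.1"     # was "dotnet:latest", which had drifted to .NET 6
```

With the tag pinned, rebuilds still pick up Red Hat's security rebuilds of the 2.1 builder image, but never jump across a breaking major version.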
So you can think of an image stream as a collection of image stream tags, and an image stream tag is basically like a Docker image tag, except that an image stream tag is something you can change over time. It's dynamic; it's not a static reference to a Docker or container image. It's a dynamic reference that you can use, and the image stream is just a collection of image stream tags. Say, for instance, that you have an application that uses three components: a front end, a middleware, and a back end. The front end is going to have different versions, and all the images that are built get pushed to the front-end image stream. Whenever you create a new version, you create a new image stream tag: front-end 1.1, front-end 1.2, et cetera. So the image stream tag is the equivalent of a container image tag, and an image stream is just a way of grouping those image stream tags together.

All right, well, thanks for asking the question. Gentlemen, I think we're going to run a little bit short today. We're the Level Up Hour, but that doesn't mean we will always necessarily be an hour. Scott, any parting thoughts from the world of RHEL? No. Enjoy your redistributable UBI container images based on RHEL. All right, then. Jafar, anything else you want to add? I think we've covered the main topics. So thanks again, everyone. Yeah. Well, thank you for joining us. Again, remember to like, subscribe, and share. We're here every Wednesday, every other Wednesday, rather. So tell your friends, tell your family, tell your kids. And we will see you next time on the Level Up Hour. Thank you. Thank you. Bye-bye.