Hello everybody. Today we do not have Chris Short, so as a result we're having all kinds of challenges, as you might have noticed. But today I have with us some pretty special guests. Somebody's running the Twitch channel with the live sound. But today, Andrew, do you want to introduce yourself as my guest co-host? Yeah, although you can't blame issues on the lack of Chris Short. That's not fair. Yeah, well, you'll see in the slides how I furthered it. So not to worry. Yes. So hello everyone. I am Andrew Sullivan, technical marketing manager, peer of Langdon. I also host the show that is happening today at 11am. So with Chris being out today, taking a well-needed day to rest and do his thing, Langdon and I are joining each other's shows as co-hosts. So look forward to seeing Langdon on my show later and being a participant here. Although I admit I am not a developer by any stretch of anyone's imagination, so I'm hoping to learn an awful lot here. Nice. All right. So to make the show even more fun, and to taunt him further, we have invited a special guest named Brian Cook. And we will talk about the slides in a minute and talk about what the show's about in a minute. But Brian, do you want to introduce yourself? Tell us what your... I can never remember anyone's titles at Red Hat anymore because, like, every three months they change. So I always like people to introduce themselves. Sure. Hey, I'm Brian Cook. I'm a product manager here at Red Hat. I work on an internal product that we call the container factory. It's actually kind of a big coalition of people inside Red Hat who work for many different teams, and who work to make all the machinery that produces all of our containers and operators, and all the stuff that has to go along with building and shipping those, work. And it's actually quite a large group of people. It's an amazing group of people.
And the fact that it's sort of a cooperative, and not a forced function, makes it kind of an interesting thing here. Nice. And so some of your work leaks into the outside, is basically what goes on? I'm sure it leaks... what? Into the outside, versus just being an internal project. Oh yeah, yeah, we have a lot of code that we push outside, and we're actually making even more efforts as we change things. So yeah, a lot of our efforts become open source, kind of upstreamy things, and some of them get included into products or things like that. I gotcha. Cool. So why don't I do my ugly, ugly slides. It's still bothering me that I can't seem to get into the Twitch live stream to watch the chat. But I think I finally figured it out, so now it should be better. But I get no history on chat, so if we missed anything, please let us know. So let's see, technology trying to defeat me for a minute. And it's defeating me. Here we go. Oh, hey, guess what? We're going to share on Wayland and things are going to be not as useful. We're still sharing on a new laptop. Yeah, exactly. My laptop wouldn't boot this morning, so that was fun. I suspect it ran out of disk because I was doing an upgrade over the weekend. And none of these desktops that it's presenting me look like the desktop that I want to share. This is super weird. Oh, let me just try this. We're really firing on all cylinders today. It's an ominous start to a Wednesday morning. Exactly. All right. So you should be able to see the slides now, even though the aspect ratio and the sizing and all that is probably off. But long story short, here is the Level Up Hour. Normally we talk about containers, and containers we can do; apparently, you know, streaming, booting, all those things are out of my wheelhouse. Hey, Langdon, it's sharing your whole desktop. You might want to maximize that. Yeah, that's what I was wondering. All right. It's very, very big for me now. So, yeah.
So this is the Level Up Hour, and today, as discussed a minute ago, I'm Langdon White and you can find me on Twitter at Langdon with a one. And Chris is not here today, so we have crossed him out in a particularly harsh manner and invited Mr. Sullivan in his stead, who you can find on Twitter at Practical Andrew. So join us there. You can obviously still talk to Chris Short. He will be back next week, and, you know, all the better for it, I'm sure. And you can also find us on our Discord, and there is the link there. Let's see, can I use the magic Twitch? Let's see if it works for me. No, I have to be logged in, which I am currently not. So you will have to type in the link yourself, or maybe somebody else can share it if they feel like typing. So, yeah. All right. Still trying to calm down from my panic attacks this morning. You can find out more information about the Level Up program in general, which, as we talked about, offers free and reduced-cost training, as well as licenses to OpenShift. If you go to that URL, you can check it out. And today we're going to talk about the container health index, which I find particularly interesting, and which I'm not sure a lot of people are aware of. As we usually do, we have show notes from last time, which you can find at that link there. And I think I actually managed to prepare in advance, but I don't know if it's going to let me. So let me chat it out. I'm not logged in. No, it does not; hold on, I have to log in. Oh, good gracious, now I have to 2FA. So let's see. We'll get started, and I will sort that out and give you links in a second. And I think that's the last slide we want to talk about for right now. And this Wayland sharing thing is going to be fun for any kind of browser-based activities, because browsers aren't a lot better at dealing with Wayland lately. Yeah, you know, I actually just had been talking to some of the desktop team about just that.
And I was actually really surprised that Firefox wouldn't let me share it. I wonder, with Chrome, if it would let me do it, you know, let me do two different versions or whatever. So let's talk about the container health index. Why don't you tell us a little bit about what the goal is, Brian, and then we can go kind of from there. Sure. So as we all know, and maybe we don't all know, I'll just say: containers are immutable, right? Like, once you get one, it is what it is. You can theoretically create a new container from the original one, right? But then it's a different container, not the one you started with. So the container itself is immutable, and kind of our normal pattern for using them is: get one, use it for a while, and then throw it away and get another one from the source, not continue to patch the one you have, right? So we talk about, like, a pets versus farm animals kind of methodology with containers. And the problem is, when do you know when to get a new one? Particularly if you use our ecosystem catalog to look for images that you might want to use, how do you know what the status of that is? And the container health index was conceived a few years back as a way, essentially, to understand the container's status with regards to available patches. And that's a critical thing to understand, because actually, originally, the container health index wasn't called that; it was called the freshness grade. And it was meant to convey... like, we're not trying to say this container is safe, right? It's just: how new is it? How recently has it been updated? And the container health index has grades, sort of like school, except toward the end it gets a little weird, like we never had E's. Yeah, I'm actually always wondering about that. Why don't we have E's, you know? I don't know. Is it because it collides with "excellent"? That was my only theory.
I'm not sure. I thought maybe it was vowels, and then I realized they had vowels, so I don't know. But so the deal is, we're trying to convey a message about the most important missing patches. And for us, the line is important or critical, right? So CVEs have a rating system, and the top two ratings are critical and important. After that it goes to moderate. Most of the things that we rate as moderate are not, you know, super concerning to be running for a while on a production system, while important and critical are the ones we think people should be focusing on the most. And so the container health index only deals with the lack of important and critical patches. There's a scale, and I shared a document if you guys want to put it on the screen. Oh, the KB article. It was just the container health index KB article. Yep, sorry. Yeah, I got it. Speaking of logins. So what I wanted to point out real quick, if you notice over here on the screen that I'm sharing, is that health index right here. That is what we're talking about, which may not be obvious. Okay, so let's make that a little bigger so people can read it. That's what you're talking about, right? No, not that one. Oh, the main one. Sorry. Yeah, yep. I am really firing today. I also can't seem to get my mouse from the one screen to the other; that's why you see it popping up. Oh, look at that. Hey, that's really useful. I've never found this before. So this is on the public knowledge base, and this explains what the grades mean in detail, right? And that shows the grades there. So grade A: no missing patches, essentially, like no missing critical or important CVE patches. And then you go down to grade B, and this is when the factors start affecting the grade. And the factors are what kind of patches, critical or important, and they affect it differently, and how old the patch is, how many days since that patch was released. And that's important, right?
Because the longer the fix is out there, the longer the vulnerability is known, the more people take advantage of it, right? The same vulnerability will get a lower grade the longer it's been fixed. So grade B is missing critical or important errata: no missing critical fixes older than seven days, no missing important fixes older than 30 days. And then, you know, you guys can read, but as you go down, it's older, older, older. Until you get to grade F, which is like, you're missing stuff that's a year old, right? And this is something you will find in our catalog, for the simple fact that we have support life cycles, and when something goes end of life, we stop rebuilding, right? So if you go and look for some super old image in the container catalog, it will be graded F. It should be graded F, and we want you to know it's graded F. Don't use it. Well, is there a distinction between, you know, a grade F because it's a grade F, and a grade F because nobody cares about this anymore? Or should there be, in a sense? In the container catalog, there's also a release category on that page. So if you want, you can flip back over to the RHEL page, or I actually linked the UBI page there, which I had looked at earlier, and I know there's a tag on it that has a different grade. There's a release category, and it should say Generally Available if, you know, you're looking at something that is in its support life cycle. Okay. All right. Yeah, let me... you said the UBI... oh, UBI is here. We have used UBI on the show many times. UBI, yeah, I gave you a link in the doc to UBI. Yeah, I'm trying, failing but trying. Let's see if this is the right place. This is the one, right? Yeah. So you should have a release category of Generally Available. Oh, here. Yep, I see it now. And then you have your grade as A, right? But if you go to the top where that tag says latest... Yep. And pick the tag that's two months old.
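The grading logic Brian walks through can be sketched in a few lines of Python. Only the meaning of grade A and the grade-B boundaries (seven days for critical, 30 days for important) come from the KB article discussed above; the other cutoffs below are invented placeholders, not Red Hat's real scale, and note there is deliberately no grade E.

```python
from datetime import date

def health_grade(missing_advisories, today):
    """missing_advisories: list of (severity, ship_date) for unapplied fixes.

    Grade A and the grade-B boundaries follow the KB article; the C/D/F
    cutoffs here are invented placeholders for illustration only.
    """
    grade = "A"
    for severity, shipped in missing_advisories:
        if severity not in ("critical", "important"):
            continue  # moderate/low never affect the health index
        age_days = (today - shipped).days
        limit = 7 if severity == "critical" else 30
        if age_days <= limit:
            g = "B"
        elif age_days <= 90:    # placeholder boundary
            g = "C"
        elif age_days <= 365:   # placeholder boundary
            g = "D"
        else:
            g = "F"             # a year or more out of date: don't use it
        grade = max(grade, g)   # letters compare A < B < C < D < F
    return grade
```

Note that a missing critical fix that is only a few days old already costs the image its A, exactly as described: A means no missing important or critical patches at all.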
It's like 8.4-203 in the doc, and I'll tell you what that is in a minute. Yep. Gotcha. You'll see that that one is not an A, it's a B, right? And if you click on the security tab there, you can see why it's a B. Right. Because it's missing that important vulnerability. So, but if I need this 8.4-203, plausibly, right, I could put a layered container over it and update this CVE, right? You could, but that latest tag that we were on before is... Already has it, right. Yeah, right. But if I needed, like, 8.3 or some weirdness like that for some dumb reason, theoretically I could overlay it and apply those patches. Yes, you could do that. Obviously, there are some challenges with that, but you know, you can... There are challenges with that, one of which is that the point releases of RHEL are not supported after the next one is released, immediately. Oh, that's true. Right. Yeah. Right. But if you're playing the catch-up game, it'll bridge the gap while, you know, being able to defend you against some pretty major stuff. Although, you know, nine times out of ten, at least for me, the reason you can't upgrade is the same reason that it's flawed, right? Like, they're usually related. And so you get a little stuck anyway, no matter what you do. But I got you. Cool. All right. So, you know, do you have a lot of people kind of giving feedback on this? I mean, are you hearing feedback from the community, from customers or whatever, about using this info? Yeah. I mean, we get feedback. We get positive and negative, right? The negative feedback we get is all focused around some of the gotchas, right? And so it's important to keep in mind, again, that grade A doesn't mean we're saying it's safe. It just means it doesn't have any outstanding patches that need to be applied, right? Right. Right. Depending... We do...
We have a very good security department who ranks the patches, but depending on how exactly you are using that container, a moderate vulnerability, which doesn't factor into the health index at all, could be more important to your use case than we think it would generally be, right? Mm-hmm. I would say we're pretty conservative on the security ranking, but it's not impossible, right? Another thing to keep in mind is that there is some lag between when a vulnerability becomes known about and when a patch becomes available, right? So the container health index is about: you're missing a patch, and so we drop the grade. So consider the scenario of a critical, but not embargoed, vulnerability, right? And I don't know if we need to explain that... Actually, yeah, let's just explain embargoed, right? So, when there is a secret patch, basically... These are the worst of the worst vulnerabilities, right? The super worst vulnerabilities that you can think of, the ones that make news headlines. Like, when those things are discovered, typically somebody will say, I found this super egregious vulnerability, and it's so bad we're going to keep it a secret until we actually release the fix. And so that's called an embargo. But it's only a secret from the general public. Usually the security teams of a bunch of, like, the Linuxes, even the Windows people, whatever, are often communicating about it. And that's necessary, yeah. It's necessary, essentially, for global information security, because the same patch, or the same CVE, will typically affect many different operating systems, right? It could affect Red Hat, you know, RHEL and Fedora. It could affect Ubuntu. It could affect Windows. I mean, it could affect tons of stuff. And so, yeah, all these companies have to work together to prepare their patches.
And then there's an agreed-upon release date when the vulnerability is disclosed. So, Brian, we've got a question from playing risky about the security reports. Sure. So, more or less, the question is: where does that information come from? The various security alerts that it's testing against. So the precise question is: is the security report integrated with Red Hat errata and Clair, or where does that info come from? It is integrated with Red Hat errata, and specifically the OVAL streams. And it is using Clair to produce the list of vulnerabilities. Yeah. I put some resources in the doc earlier, because I assumed this question would come up and we would probably trend in this direction. For anybody who's going, what are you talking about: OVAL is a common syntax for describing vulnerabilities and the fixes to those vulnerabilities. And I have a link to the OVAL KB article for you guys, and also a link to the RHEL 8 OVAL file, so you can see what those things look like. Yeah. I'm not sure we want to look at a BZ2 necessarily, but you know. No, you can send it to them. Oh yeah, that's a good idea. I'll throw it in the chat. Unless, Andrew, you already beat me to it, but now I'm scrolled back. Oh, look at that. Oh, you got the article too. So, yeah. So, and this is not, like... OVAL is not a Red Hat thing, right? It's a... Yeah, OVAL is an industry-standard language for describing CVEs and the available fixes to CVEs from different places. Red Hat provides OVAL feeds for its own products and for open source distributions like UBI. And there are other OVAL feeds out there. If you go and search around on the NIST site, you'll find OVAL feeds for lots of other software as well. Yeah. Actually, going back to the embargo thing: that's one of the weirdest things, I think, about now working in an operating system company, you know, which I am now and have been for quite some time.
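To make the OVAL idea above concrete, here is a drastically simplified, invented OVAL-style snippet and a few lines that pull out the CVE-to-advisory mapping. Real Red Hat OVAL files use XML namespaces, criteria trees, and much richer metadata; the element names and IDs here are illustrative assumptions, not the actual schema.

```python
import xml.etree.ElementTree as ET

# Invented, minimal OVAL-shaped data: one definition tying a CVE to the
# advisory (RHSA) that ships its fix. Not the real Red Hat OVAL schema.
OVAL_SNIPPET = """
<oval_definitions>
  <definition id="oval:com.redhat.rhsa:def:20211234">
    <title>RHSA-2021:1234: httpd security update (Important)</title>
    <cve>CVE-2021-0001</cve>
  </definition>
</oval_definitions>
"""

def fixes_by_cve(xml_text):
    """Map each CVE to the advisory title that ships its fix."""
    root = ET.fromstring(xml_text)
    return {d.findtext("cve"): d.findtext("title")
            for d in root.iter("definition")}
```

The point is just the shape of the data: a tool like Clair consumes feeds of definitions like this and joins them against what it knows is installed in each image.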
But before that I never had this, right? I don't know about the two of you, but I often am aware that we have some number of embargoed patches coming. I don't know when they're coming. I don't know what they actually are. I don't even usually know how egregious they are. I just know that in the ether somewhere there's an embargoed patch; that actually happened a couple of weeks ago. And that's by design, right? Right. Right. It is literally need-to-know, even inside Red Hat. I am responsible in part for most of the stuff that ships our containers, and I am not informed when we have embargoes. Right. I think that's what, I guess, kind of looking at it from the outside, I always assumed everybody at Red Hat knows and they just keep it a secret, you know. But in fact, next to no one knows; it's basically the engineers. In fact, on the development teams, usually the developer that is writing the patch is the only person who knows what the actual issue is. Right. And not even their manager, a lot of the time, which I think is particularly interesting. Although, you know, a lot of times the managers at Red Hat, particularly in engineering, tend to be lead engineers as well. So it can be a little bit of a misnomer, in that they will often be consulting with the developer themselves about the patch. But yeah, I find it really interesting. And they're not even all that bad. The one that was relatively recent, I remember I was a little surprised at its CVE rating. So there are ratings from, whatever, one to 10, I guess, or zero to 10, with 10 being the worst. And this was like an eight and a half, and yet it was embargoed, which seemed, at least to me, a little low for an embargo-level CVE.
So it's a pretty rare event to do an embargo. You know, we only process maybe like two to four a year, right? Generally just a couple a year. Right. So those guys are doing their thing. But actually, why I went down that rabbit hole: I was saying, imagine a vulnerability that is not embargoed, like a critical vulnerability that's not embargoed. Okay. So the world knows about the vulnerability. And there could be a week, or a couple of weeks, you know, it depends, but there's some time gap between when the world knows about the vulnerability and when we ship an RPM that fixes the vulnerability, right? And then there's some other time gap between when we ship the RPM and when the new container is respun that includes that RPM. So in the first time gap, between when the vulnerability is disclosed and when we ship the patch, the container grade does not change, because there is no available patch. Right. Okay. So even though it's vulnerable to it. Yeah. The grade reflects the status of its outstanding patches. There is no patch. You know it's there, but there's no patch. So even though it might be an important or critical vulnerability, that container will still say A. Then, in the second time gap, there is a patch, because an RPM has been shipped, and that comes with an RHSA. So that is the errata the questioner was asking about before: we have shipped an errata, a Red Hat Security Advisory. Now a patch exists, and that RHSA will be reflected in the OVAL feeds. Clair would be triggered by that errata shipping to update the security tab. It's metadata behind the scenes, but the manifestation of it that people see is that security tab in the catalog. You're going to now see, okay, this thing is there. And the grade would be impacted. So I think that's an important distinction to make, right? It's not doing vulnerability scanning. It's doing patch checking. Correct.
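The two time gaps Brian describes can be made concrete with a tiny timeline sketch. All the dates are invented; the point is that the index tracks missing patches, so the grade of the image behind a floating tag only moves once an advisory ships, and recovers once the respun image is published.

```python
from datetime import date

# Invented dates for one hypothetical critical, non-embargoed CVE.
DISCLOSED  = date(2021, 5, 1)   # the world learns of the vulnerability
RHSA_SHIPS = date(2021, 5, 10)  # the fixing RPM and its advisory release
RESPUN     = date(2021, 5, 14)  # the rebuilt container is published

def grade_of_latest(day):
    """Grade shown for the image behind the floating tag on a given day."""
    if day < RHSA_SHIPS:
        return "A"  # vulnerable, but there is no patch to be 'missing'
    if day < RESPUN:
        return "B"  # a patch exists; the current image doesn't have it
    return "A"      # the respun image includes the patch
```

The first gap (disclosed but unpatched) is invisible to the index by design; the second gap (patched upstream but not yet rebuilt) is exactly what the grade is meant to surface.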
What Clair is currently doing is looking for missing patches, because it's using the OVAL feeds that talk about what patches are available for installed containers. Right. But in the same way that, you know, I could configure my httpd container to be writable, you know, to have the files writable from the world: it's going to be pretty insecure, but there's really no scan for that per se, unless you get into, like, straight security... Runtime scanning. Yeah. You can get into runtime scanning, and there are lots of companies that provide stuff like that. But that's not what this is, right? This is static; we'll call it static vulnerability assessment. And that means we don't know anything about how you're going to run it. We're just looking at the container itself. So Brian, what triggers a re-scan of an existing container? Is it the release of new errata? Is it just periodic, it happens every, you know, one, two, six, 12 hours? Yeah. So interestingly, the way Clair works, and one of the things that's fantastic about it, we don't have to re-scan the containers ever. What Clair does is: there is a database, and when we produce a container, it is scanned by Clair. And essentially what Clair is doing is just creating an inventory of what's in the container. And it puts it in this Postgres database that kind of behaves like a graph. Then when we ship an errata, to follow Langdon's example, say we ship an errata against httpd, Clair gets notified by our message bus that we've shipped an errata. And it gets information about what vulnerability was shipped from OVAL, right? And basically we just kick it and say, like, there's new stuff, just to be a little more proactive. It would do it on its own anyway, but we can kind of push it, right? And so it will go get the updated OVAL feeds.
And inside that OVAL feed, there will be information about the errata that was shipped for the package. Then it can look inside its database and see all the containers it's ever seen with that package, and the vulnerable versions that were patched. And it can immediately link the new errata to those affected containers. So that takes care of Clair sort of knowing about the vulnerability. We have an extra step wrapped into errata in order to make the catalog and the container health index work, which is: we have a separate database that powers the ecosystem catalog view that we're looking at. And when Clair updates that information, it takes a subset, kind of a summary subset of that, and it updates the catalog as well. So that is what finally updates the security tab view and the container health index grade. So, kind of in a related way, do the containers in the container catalog ever get updated aside from a security patch or whatever? They do. They do. Well, they get updated because we push new features or release new versions of software. Right. So like 8.3 to 8.4, for example. Sure. I meant more like... okay, maybe that is kind of the related thing. So, you know, whatever these patches here, 8.4-199 or whatever: are these always security driven, or are they feature driven as well? Although I guess there shouldn't be any features. Okay. So, you see those tags... this is really small, let me pull it up. So the tag we were looking at, this is like 8.4-203.1622660121, whatever that crazy thing is, right? Yep. The part after that, the end of that string, is an epoch, right? Like UNIX time. And whenever you see those tags in the container catalog, you can immediately know that that was triggered by us shipping a CVE. Because we have an internal automation process called Freshmaker, and it is triggered by us shipping CVEs. Being special. Yes. Yep. Yeah.
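The Clair flow Brian describes, inventory once at build time, then join newly shipped errata against that inventory, can be sketched as a toy model. All the image names, package versions, and the simplistic string version compare are invented for illustration; real Clair uses a Postgres database and proper RPM version ordering.

```python
# Toy inventory: what Clair records once, when each container is built.
INVENTORY = {
    "ubi8:8.4-199": {"httpd": "2.4.37-30", "bash": "4.4.20-1"},
    "ubi8:8.4-203": {"httpd": "2.4.37-39", "bash": "4.4.20-1"},
}

def affected_images(errata):
    """errata: {'package': name, 'fixed_in': version} (simplified shape).

    Naive string compare stands in for RPM version ordering here.
    """
    hits = []
    for image, packages in INVENTORY.items():
        installed = packages.get(errata["package"])
        if installed is not None and installed < errata["fixed_in"]:
            hits.append(image)
    return hits
```

This is why no re-scan is ever needed: when an advisory ships, the affected images fall out of a lookup against data collected at build time.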
And it does a bunch of stuff now, but Freshmaker essentially will be triggered by us shipping a CVE, and it will queue up all these rebuilds and process them automatically to build new containers that include the newly shipped patches. We do have tags like you'll see 8.4-199 and 8.4-203, right? Like, that is an example of what you're asking about. That was a tag cut by the RHEL engineering team and released for some reason. I can't tell you exactly what the reason was. It probably does include some lower-level, like maybe some moderate security patches or something like that. Maybe bug fixes. Like API improvements or something, yeah. Right. You know, remember the errata categories: we have bug fixes, security vulnerabilities, and enhancements, right? And Freshmaker only responds to security vulnerabilities. Right. And that's kind of what we're getting at. The RHEL teams are on a release cadence, where they'll hit their release cadence and cut a new container with all the updated RHEL stuff in it that's not security related as well. So yes, you do get those inside of point releases. And you can tell the difference by looking for those epoch stamps on the tag. And just to be clear, once a container is pushed, it's no longer changed, right? So effectively, over time, it slowly marches downwards in grades as more releases or more errata are released against it. So if today I pick a specific tag and it's an A, next week there might be an errata that degrades it to a B, and then the following week a C, and the following week a D, and so on and so forth. Yeah. So I just plopped over to 8.0, right? And it has a nice fat F right there. Yeah. And that's exactly what it should have. Right. Yeah. That's an example. And I guess, kind of going to the other point I was talking about, it's kind of like running DNF or YUM, you know, with the auto updates, right?
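The tag convention described above, a trailing dot-separated Unix epoch marking a Freshmaker CVE-triggered rebuild versus a plain tag for a manual RHEL-team release, is easy to check programmatically. The tag format comes from the discussion; the ten-digit heuristic is our assumption, not an official rule.

```python
import time

def is_cve_rebuild(tag):
    """True if the tag ends in a Unix-epoch segment, e.g. 8.4-203.1622660121.

    Heuristic: treat any trailing 10-digit dot-separated number as an epoch.
    """
    suffix = tag.rsplit(".", 1)[-1]
    return "." in tag and suffix.isdigit() and len(suffix) == 10

def rebuild_date(tag):
    """UTC date encoded in a Freshmaker tag's epoch suffix."""
    epoch = int(tag.rsplit(".", 1)[-1])
    return time.strftime("%Y-%m-%d", time.gmtime(epoch))
```

For the tag from the stream, `8.4-203.1622660121`, the suffix decodes to a date in early June 2021, i.e. when Freshmaker respun that image.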
Like on your laptop or whatever: it only automatically rebuilds for security reasons. Feature reasons are always driven by a human, in the large majority. I mean, obviously some engineering team may have some automation that says whenever they do a release, it goes out. But long story short, as far as the infrastructure is concerned for the catalog, the only releases that are automatic are ones that are done because of a security update, essentially. The reason I was asking is, there's no... I don't know, I'm just going to make up a day here, right? But there's no weekly update that's automatically going through and rebuilding everything. They're event driven. Right. Okay. So as a consumer, I can also be event driven, in a sense. I don't need to go and poll how often the container is changing. I can feed off of the errata or whatever myself as well. Yeah. In fact, there are a couple of ways that you can be more proactive about that without literally having to go look at the grade. One is, in the container catalog, you can sign up for vulnerability notifications, RHSA notifications, for the images that you're interested in. Oh really? Where's that? I don't have the link to it. I of course ask the difficult questions. Yeah, we can post it later. But yeah, there's a way to sign up for RHSA notifications. But trust me, trust me from experience, because when they rolled this feature out, somebody decided to sign me up for all of them: this is not scalable. So if you are a heavy container user, you don't want this feature. Right, right. Or at least you don't want it going to a human-read email address. Right. The better thing to know is that if you're using OpenShift, if your image streams are based on tags, then there are these floating tags inside the container catalog that you're going to want to use. Right.
So if you look down at the tag dropdown again, on, like, the RHEL UBI 8 that we were looking at. Yep. There are these blue bubbles around some of the tags, and those are what we call floating tags. Those are tags that get moved. When a new image is produced, that tag moves. So latest always points at the latest container in that repository. 8.4 is going to point at the latest release of 8.4 in that repository. You could choose to use either one of those when you're referencing a build or something like that in OpenShift, and you will continue to get updated images all the time. You have to be aware that if you use the 8.4 tag, which I do not recommend, at some point it will become deprecated: literally the day after 8.5 ships, it will be deprecated. From there, it will start to decline. The health index will start to decline. Right. So for things like a UBI 8 image, you would generally want to use latest, because that's going to be compatible throughout its lifespan. Other repositories might not be the same, right? There are also these things we call multi-stream repos, and an example of a multi-stream repo would be, like, if we squished the UBI 7 and 8 repos together and we had a UBI 7 tag and a UBI 8 tag, and you would want to pick one of those depending on which one you wanted. UBI is not like that, but we have other repositories that are like that, and choosing the right floating tag actually gets you a different major variant of the software. Right. So it's good to always look inside that dropdown and see what the situation looks like in there with floating tags. Right. Yeah. I'm always on the fence about, you know, what to choose here, right?
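The floating-versus-pinned distinction above can be modeled as a small tag table: pinned tags keep their digest forever, while floating tags are re-pointed at every publish. All digests and tag names here are invented for illustration.

```python
# Toy tag table for one repository. Pinned tags are immutable; "8.4" and
# "latest" are floating tags that move with each new build.
TAGS = {
    "8.4-199": "sha256:aaa",
    "8.4-203": "sha256:bbb",
    "8.4":     "sha256:bbb",  # floating: newest 8.4 build
    "latest":  "sha256:bbb",  # floating: newest build in the repo
}

def publish(pinned_tag, digest, stream="8.4"):
    """Publish a new build: record its pinned tag, move the floating tags."""
    TAGS[pinned_tag] = digest
    TAGS[stream] = digest     # the minor-version floating tag follows
    TAGS["latest"] = digest   # latest always follows
```

This is why a build that references `latest` keeps picking up respins automatically, while a build pinned to `8.4-203` quietly ages in place as its grade declines.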
Because, you know, as a many-year developer, and to some extent... I joke around that I played an administrator on TV. I don't like things changing underneath me, right? So in that argument, and kind of in the container immutable world, I don't like things changing underneath me without being aware of it. And so in that world, I want to choose the sticky version that is not a floating tag, right? I want to know exactly what I'm getting. The thing is, I say that, but at the same time, this is kind of where... do you trust your vendor, in a sense? And Red Hat in particular is pretty good about not shipping backwards-breaking changes, and certainly makes promises about that: if it happens, it's a bug or whatever. But, you know, whether I can treat that as a bug and file a ticket and get it fixed or whatever, that doesn't keep me from having an outage. So I'm always on the fence there, but I think this goes back a little bit to, you want to choose your tags, and what you follow automatically, somewhat based on the provider of the content. I would actually say that you should really think about what your use case is when you're choosing those tags. The best-case scenario is you choose the tag that keeps you up to date automatically, right? Right. Because if you don't choose that, you have to do something, and we all get busy and forget. And then now you have this super out-of-date thing sitting there. So what we should think about is: in what circumstances could the changes between versions of RHEL cause me problems? Okay.
The binary compatibility between RHEL point releases is extremely good, but if I was going to worry about it, it would be in the case that I'm doing something like running low-level C programs that interface with the operating system, you know, potentially pretty low-level parts of the operating system. Or if I'm using hardware access, if I'm doing things like using AI accelerator hardware, like machine learning accelerator hardware, directly from a container, you know, drivers and things like that. That's when I might say, hey, I might run this through a test environment. But on the other hand, a lot of the container use cases we see are using things like Java and Python, which are interpreted languages running business logic, which doesn't care anything about the underlying hardware; the code's not even compiled, it's interpreted. You know, and I can say that in about four years of running a pretty wide fleet of Python apps using the latest tag with RHEL containers, I've never had a problem with RHEL updating. The only problem I had with all the apps in all those years was one time when SQLAlchemy decided to update to a new major version and it broke some stuff, and we ended up pinning it to the previous version. And that's it. So it's really quite safe. But it is a double-edged sword, right? Like, you can literally free yourself from patching cycles in large part, which we all would like to do. So, well, can I? Yeah, I was going to go ahead and ask your question. I just wanted to ask, and I think coinciding with that, but also jumping back a little: it sounded like you said that the UBI images are only updated for errata patches. No, that's not true. The errata patches are the ones with the epoch timestamps. The ones without the epoch timestamps are manual releases cut by the RHEL team to provide something. Got it.
So is there a scenario where, when you're building your container, you would want to do like a dnf update -y or something like that, was more or less my question. You can do that. Is there a situation where you would want to do that? I think the short answer is no. It sounds like a no. It's a really weird use case. It does happen. It's weird. I've actually done it a little bit: if you're relying on a very, very new thing, then I have to do a dnf update because I need to get something that's later than the latest RHEL 8.4 (or whatever) image that's been shipped. But generally speaking, if you're doing that, you're probably doing something wrong. Or, it's not necessarily even wrong, to Brian's point earlier, right? Backwards-incompatible changes are pretty rare, and so it probably won't hurt you, in fact, but it's probably also not valuable. I would say it's another one where the general answer is you should probably not do it. If you know a whole lot about your specific runtime situation and, you know, maybe there's some moderate patch or a bug fix that you really need that isn't available yet, it is doable. The downside to doing this as a blanket policy across everything is that sometimes RPMs need tweaks or some kind of modification in order to run properly in a container, particularly the kind of containers that we build for OpenShift, where we have to assume that they're being used by an arbitrary user ID and that they're running inside of, like, the security context constraint jail, right? So taking the general RPM available for Red Hat Enterprise Linux or UBI and installing it into the container (and I would say this is not the rule, but it happens in certain cases) will produce a different result than the one we ship in the container.
And it will not run the same as the one we ship in the container, because there's some extra configuration tweak being done to make that run properly inside the security context constraints. Oh yeah. So if you were going to do that, you would have to test it ahead of time, make sure everything worked the way you thought it was going to work. And I would only do it if you had a real good reason to, because it could certainly cause some issues. Yeah. Well, and kind of going back to the earlier point, and related to this one too, is where you're doing those kinds of things. So, you know, as you were kind of giving the example: you're doing machine learning that's leveraging the driver from inside the container, and you're doing something really weird to it or whatever, and so you don't want to take those automatic updates. Those are the places where it's really important to invest in automated testing. And then you do take those automatic updates, but your testing runs through before anything actually gets deployed. So even if you don't feel like you can put the effort into a full CI/CD kind of infrastructure, what you can do is say: okay, I have these, you know, grade-A, triple-A apps, right? They're mission critical for whatever reason, and they are also highly volatile, right? They're really close to the operating system, or they're really close to the tools in the operating system. Maybe focus your energy on automated testing and automatic release cycles and that kind of stuff on those components. It'll probably be less effort and money and all that other jazz than doing your whole infrastructure, but at least it will cover the worst cases, in a sense. So two things. One, just a five-minute warning. Oh yeah. Yeah.
And two, we do have a question from Murph: what process does engineering go through after a Red Hat security advisory is released for an application? I noticed that it does take longer for an image to be updated with the patch for the vulnerability versus what's already available via the software package. So I assume he means it takes longer to get the container image than it takes to get the RPM. And that is because the container image actually has to wait for the RPM to become available before it can be built. The RPM is actually the fix that we're applying to the container, so it'll always follow. And, I think it's more like, what's the timeline, and is that an extended timeline? It depends on the criticality rating of the CVE. Generally, with an important or critical vulnerability, we essentially have to wait for the RPM to become available. So that's up to RHEL engineering to create a fix, package it, test it on RHEL, go through all that testing for the package, and then push it into the RPM repo. When it's pushed into the RPM repo, there's a bit of metadata that goes with it called an RHSA, a Red Hat Security Advisory. You can actually see those on the website. If you search for the CVE, CVE-2021-blah-blah-blah, there's a number; there will be a corresponding RHSA for that CVE when a fix is shipped. The shipping of that metadata, that RHSA, is what triggers our rebuilds. And so for important or criticals, for example, Freshmaker should pick those up and rebuild them pretty quickly. If it's a CVE that affects our entire portfolio, and a lot of times when there's a critical RHEL vulnerability, like, you can imagine the stack of software, right?
If a critical vulnerability comes out for RHEL, or UBI, then everything else we build is on RHEL, which means literally every repo in our container catalog needs to be rebuilt. And so what happens in that case is that Freshmaker basically goes and queues up a zillion builds, and then we have to start working down against that queue. And so it can take a little while, but it usually doesn't take more than a day or two. Yeah. And obviously the ones that are lower in the stack will show up before the ones that are higher in the stack. So a UBI 8 update should land before, you know, an image higher up the stack. Yeah. So imagine you're using, let's see, I'm going to try and think of one. This might be a contrived example, because I don't exactly know how it builds, but follow my example. Imagine you're using an OpenStack Cinder container from us, right? Then when the RPM ships, UBI 8 needs to rebuild. And when UBI 8 ships, then we can rebuild Python 3.6 on UBI 8, and when Python 3.6 is done rebuilding, then we can rebuild the Cinder container, because it requires that, right? So actually, Freshmaker, while it sounds like a little simple program that just goes, I need to rebuild all these things, it actually has a bunch of internal dependency logic. Before it can start rebuilding, it actually sorts out in what order everything has to be rebuilt. And that's actually the trickier part of building this stuff: figuring out what order everything has to be rebuilt in. And so yes, if you're using UBI 8, that's going to happen really fast. And if you're using something that's way up at the top of the stack, that's going to take longer, because all the things below it have to get shipped first. It also is, like I said, Ralph being special. So if you are unfamiliar with his work, he's done a lot of really good pieces of software.
But the other thing that you have to do is, once you do the dependency graph, you also have to collapse the combinatorial explosion so that you not only figure out all the things that have to be rebuilt, but also figure out how to rebuild each individual thing only once. So it is a challenging problem. Freshmaker has a team around it. And Ralph is, so, the team we used to call DevOps, which would mean more to our audience, we call it EXD now, and Ralph is the EXD cloud architect. And we have a team that maintains Freshmaker, and it's constantly upgrading its capabilities. They've recently been working on making it able to do automatic rebuilds of operators as well, so that when, like, an operand image gets rebuilt by Freshmaker due to a CVE, it can actually find the controller and bundle images, update the references there, and ship a new version of the operator that includes the patched operand. Yeah. And sorry, to be clear, I don't know that he wrote all the code, he may have written some of it, but it was his name on the app. When he and I were working on modularity originally, we had to name a piece of the infrastructure as well, one that we couldn't really talk about or whatever, and we ended up just calling it pixie dust: this is where the pixie dust happens. Freshmaker was not in that bucket, but it kind of was. So yeah, it was pretty funny. So let's pause there for a second. Let's see if I can just reuse the same window and get my sizing good here, but we can pause for some sweet, sweet internet points, which I'm not sure if Brian is familiar with, but we give out some sweet, sweet internet points for coming to watch the show or participating in the repos and stuff.
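The rebuild-ordering problem Brian describes (rebuild everything affected, each image exactly once, always after the base images it depends on) is essentially a topological sort over the base-image graph. Here's a minimal sketch using Python's standard-library `graphlib`; the image names and dependency edges are invented for illustration, and Freshmaker's real logic is of course far more involved:

```python
from graphlib import TopologicalSorter

# Invented dependency edges: image -> set of base images it builds on.
deps = {
    "ubi8": set(),
    "python-36": {"ubi8"},
    "nodejs-14": {"ubi8"},
    "cinder": {"python-36"},
}

def rebuild_order(affected, deps):
    """Return one valid order that rebuilds each affected image exactly
    once, always after the bases it depends on. Restricting predecessors
    to the affected set is the 'collapse the explosion' step: images that
    don't need a rebuild never enter the schedule."""
    graph = {img: deps[img] & affected for img in affected}
    return list(TopologicalSorter(graph).static_order())

# A base-image CVE fans out to everything built on top of it:
order = rebuild_order({"ubi8", "python-36", "cinder"}, deps)
print(order)  # ['ubi8', 'python-36', 'cinder']
```

The chain forces a unique order here; in a real graph many images are independent and can be rebuilt in parallel once their bases have shipped, which is why `TopologicalSorter` also offers an incremental `get_ready()`/`done()` interface.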
And then we talk about who has won, or who the leaders in the sweet, sweet internet points are. Right now we are up to 5,700 points for Norenda, and Netherlands Hackham, is how we pronounce that, at 5,400 points, and no affriction and Joe Fuzz still holding steady at 4,000 and 2,300 respectively, with Detective Konokudo and bacon fork steadily rising up through the ranks. And if I can find my gedit file, not my notepad, but my notepad, I have links to where you can get points for today's episode. But yeah, so that's kind of our current status on the points. Thanks everyone, as always, for participating. We really appreciate it. We're getting closer on our running joke of: while we understand how important the intrinsic reward of getting enough internet points is to everyone, we're hopeful that the extrinsic rewards are going out soon, and some of the leaders may have already received some extrinsic rewards, kind of sorta. But we should be able to make more stuff available very, very soon now. Yeah, it's kind of driving me nuts. So there are our sweet, sweet internet points. And yeah, the rewards are very likely, I'm not going to say exactly, but basically the idea of the rewards is going to be, for lack of a better term, gift certificates to the brand new Cool Stuff Store. But we've had so many challenges; I think this is like our third brand new Cool Stuff Store in the life of the show. So that's been a huge challenge, and then T's and C's. Oh, and then there was a pandemic, if anybody caught that on the news. So it's just been one thing after another trying to make these things happen. But we think we're very, very close. So tick tock, tick tock. So from here, I did want to say, you know, Brian, what is kind of the future here?
What do you think is the next big step for the health index, or for keeping our containers fresh and, you know, safe? So on the building side, like I said, the Freshmaker team is working on automatic rebuilds of operators. That's about done; probably in a matter of weeks it'll be in production. Oh, nice. On the data side, we have two different teams that work on Clair. We have an upstream team, and we have an internal team; the internal team works with my folks, and we are the ones that make sure Clair is able to use all the Red Hat data sources. And we're constantly thinking of new ways to work with security to produce new data and have Clair consume that data to produce useful information for folks. So the next thing you're going to see on that front is that the container catalog, on that vulnerabilities tab, will start showing data about unpatched vulnerabilities. It still won't affect the grade, but you will be able to see all the vulnerabilities that are in the container that don't have a fix available, as well as the ones that do have a fix available. Cool. Okay. So kind of like what we were talking about with those two windows earlier: the window where the CVE's been released, but the patch hasn't been released yet. Right. Okay. Gotcha. Oh, so that would be good. From there, we were going to start looking into grading and vulnerability assessment. We're already looking into it, but we will start producing OVAL feeds, first with security, in regards to non-RPM content that's shipped. As we move further and further into container space, we start to see packages that we just never shipped as an RPM; they only get shipped in containers, and we call that container-first content, meaning that we didn't produce an RPM installed in the container. The content is written right into the container.
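The "two windows" split being described (vulnerabilities with a shipped fix versus unpatched ones) boils down to partitioning CVEs on whether an advisory has shipped for them yet. A tiny sketch, with CVE records invented for illustration:

```python
# Invented CVE records: a fix is "available" once an RHSA has shipped.
cves = [
    {"id": "CVE-2021-0001", "severity": "important", "rhsa": "RHSA-2021:1111"},
    {"id": "CVE-2021-0002", "severity": "moderate",  "rhsa": None},  # no fix yet
]

# Partition into the two windows the catalog would display.
fix_available = [c["id"] for c in cves if c["rhsa"] is not None]
unpatched     = [c["id"] for c in cves if c["rhsa"] is None]

# Per the discussion, only fix-available CVEs count toward the grade today;
# the unpatched ones would be shown for visibility without affecting it.
print(fix_available)  # ['CVE-2021-0001']
print(unpatched)      # ['CVE-2021-0002']
```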
Oh, right. Yeah. And we will have direct OVAL files available that describe that content, that are slightly different from the RPM ones. Yeah, it's similar in mechanism, right, to the challenge that we have with building, like, Java things, for instance JBoss. So sometimes we produce content that is not an RPM, and containers are kind of the latest iteration of that. We're working on formalizing how that works internally. You know, for a very long time, like 20 years, everything has been focused on the RPM being the unit of software that gets shipped, right? RPM is the thing we ship, RPMs are the thing we version, RPMs are the thing we update, RPMs are the thing we provide source for. Everything was so concerned with RPM. So when we look at the container ecosystem, we've actually had to start to figure out how to reorient a lot of those processes around containers. For example, when we ship containers, we want to be able to ship all the source for that container as a unit, instead of you having to go and figure out, like, okay, how do I find the source for 172 RPMs? Right, right. And there's this whole engine that we built inside the container factory that produces source containers. So if you go find an image on the container catalog, you can actually find the source container for that image as well, in a lot of cases, and how many of them are available is increasing all the time. And you can pull a container which has all the source available. Right. Yeah. Yeah, is it, click on the, click on to get this? Yeah, I was like, I knew I'd seen it. Yeah. Which I think is really cool, right? Because, I mean, this is one of those things that are ancillary to open source, right? It's like, open source: the source is available, you can fix any problems you have. Well, except you can't, right? Because you don't have enough context.
You don't know how to do a rebuild. You don't know how, whatever. I mean, I think one of the things that I really appreciate about Red Hat a lot of the time is, for the vast majority of cases, we follow the spirit of the thing rather than the letter of the thing, the literal of the thing. Generally speaking, we don't just release source code; we generally release kind of enough to get you where you want to get to with the source code. I don't know. But I think we do that with a lot of things, and it's something I appreciate. Cool. So, are you getting feedback on this stuff? Is this stuff you want more community participation in? Does the upstream have the same goals as we do for the container health index, in a sense? I'd say the upstream and our internal are converging. So the backstory here is that Clair was originally developed by CoreOS, and, you know, CoreOS was a company bought by Red Hat. And when we purchased CoreOS, it did not have very good ability to do vulnerability detection for RHEL. There was a CentOS plugin; however, it didn't work very reliably, and it produced a lot of false positives. So at that point, two different streams sort of happened. Like I said, there's an OpenShift-engineering-related team that works on upstream Clair and is concerned with the future of Clair and how it integrates into the Quay registry and things like that. Then we have an internal EXD DevOps team who works with the container factory. And I started that team off to the side because I knew that we would need to have somebody who was more focused on making Clair's plugins (which are not plugins anymore, but at the time they were plugins), making Clair's ability to work directly with Red Hat content, the best, right?
My goal was to make Clair into essentially the reference implementation for security scanning for RHEL. We don't sell Clair at the moment. We don't have a subscription; you can't get a supported version of it. But what we have now is a security scanning certification program run by our partner team, and there are some links to that in the doc that I gave you. And our internal version of Clair is used to provide the vulnerability data in the container catalog, a wrapper around it is used to produce the grades, and the data from it is also used as the reference data for benchmarking how well other security scanners are using our OVAL data to identify vulnerabilities, to make sure that there are not false positives and not false negatives, right? Right. We have always had a history of having lots of false-positive problems in RHEL with external security scanners because of our backporting practices, right? Right. So we have to ship some certain version of Python forever and ever and ever, because that's what we baked into the operating system. We're going to support it for six, eight, ten years, depending on the version of RHEL you're using. And so when a patch comes, there's an upstream patch in a newer version of Python, but we have to backport that patch to the version that we've committed to support forever, right? And a lot of security scanners will just do a version comparison and say, oh, the one in RHEL is less than the upstream one where the fix is, so it must be vulnerable. But what they didn't do is look inside our OVAL files and see that we backported the fix to some version, right? Right. That's layer one of where the confusion comes from.
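The false-positive pattern Brian describes (naive upstream-version comparison versus checking the advisory's backported fixed-in version) might be sketched like this. The version strings and advisory data are invented, and the comparison is deliberately simplified; real RPM comparison uses epoch:version-release semantics and the actual OVAL definitions:

```python
# Simplified version comparison: dotted integers only. Real RPM
# comparison (rpmvercmp, epoch:version-release) is more subtle.
def ver_lt(a, b):
    return [int(x) for x in a.split(".")] < [int(x) for x in b.split(".")]

installed      = {"python3": "3.6.8"}  # what RHEL ships, long-term
upstream_fixed = {"python3": "3.9.2"}  # where upstream fixed the CVE
advisory_fixed = {"python3": "3.6.8"}  # RHEL backported the fix here

def naive_scan(pkg):
    """Wrong: flags the package just because its version trails upstream."""
    return ver_lt(installed[pkg], upstream_fixed[pkg])

def oval_aware_scan(pkg):
    """Right: vulnerable only if below the advisory's fixed-in version."""
    return ver_lt(installed[pkg], advisory_fixed[pkg])

print(naive_scan("python3"))       # True  - a false positive
print(oval_aware_scan("python3"))  # False - the fix was backported
```

The whole point of consuming the vendor's security data rather than upstream version numbers is that the fixed-in version lives in the advisory, not in the version string.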
And then with RHEL 8 modularity, something about that actually becomes even more problematic, because inside the operating system there might be multiple versions of the same thing that got shipped. And even for people using the OVAL files, it gets tricky. You have to be very specific about making sure you're looking at the patch stream for the right modularity component, right? Right. Because there can be multiple module streams. Right. And that was actually something that even caught us off guard when RHEL 8 shipped, and it took us a month or so to get it fixed, because we actually had to enhance the OVAL data stream to include more information than it was currently carrying, and then go back and amend the Clair data-consumer part so that it could pull that data and use it to figure out which of the RHEL modularity packages were vulnerable and which ones were not. So, you know, it can be complicated. And our goal is, one, to make sure that we never have surprises like that modularity thing, or at least that if we do, we know about them, right? Right. And then we know how to fix them; we know what data we need to add to the OVAL streams. And then by doing it, we learn how to teach other companies who are doing this. We can teach them how to do it. We can provide test-case containers, benchmarks for them to use, to figure out if their product is working right. And that's all what that security scanning certification program is about. Right. Okay. So in a sense, right, it's as you say: it's almost more of a reference implementation. And yeah, we're going to provide some of that information, but there's a whole field out there of companies that we don't really want to compete with. We just want them to look at our stuff and be accurate. Right. And so that's kind of cool. I mean, the landscape is changing a little bit.
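The modularity wrinkle can be sketched the same way: an advisory for one module stream must not be applied to a package installed from a different stream. The package and advisory records below are invented for illustration:

```python
# Invented data: a package installed from one module stream, and two
# advisories for the same package name in different streams.
installed = {"name": "postgresql", "version": "10.17", "stream": "postgresql:10"}

advisories = [
    {"rhsa": "RHSA-2021:2222", "name": "postgresql",
     "stream": "postgresql:12", "fixed_in": "12.7"},
    {"rhsa": "RHSA-2021:3333", "name": "postgresql",
     "stream": "postgresql:10", "fixed_in": "10.17"},
]

def applicable(pkg, adv):
    # Match on the module stream, not just the package name; otherwise
    # the postgresql:12 advisory would wrongly flag a postgresql:10
    # install (or vice versa).
    return adv["name"] == pkg["name"] and adv["stream"] == pkg["stream"]

matches = [a["rhsa"] for a in advisories if applicable(installed, a)]
print(matches)  # ['RHSA-2021:3333']
```

This is the extra piece of information (the module stream) that, per the discussion, had to be added to the OVAL data and understood by the consumer before the matching could be done correctly.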
And I can't comment much on this because I don't know, but, like, we bought StackRox, and StackRox is a security product. So we have some overlap in that space now. And actually, StackRox has some components built on public Clair as well. So all this work finds its way upstream and finds its way over to StackRox too. But for my part in this, my team is focused on making Clair absolutely bulletproof in being able to identify, you know, missing patches and vulnerabilities in RHEL-shipped content. The upstream team does a lot of different stuff. They work on integrations with Quay. They look at how to identify vulnerabilities across different distributions; they can identify vulnerabilities in Ubuntu, and they look at, like, upstream Python packages and things like that. My team is focused only on RHEL content and making sure that that is airtight. I got you. So we should probably wrap it up here. We're a little bit over time, as we often are. But, you know, thanks so much for coming, Brian. If we have any further questions, or if they come up or whatever, we will maybe have you back, or we will try to ask them individually. As you keep referencing, we have some notes with links and that kind of stuff. As usual, I'll dump those in the further reading section of the show notes for this show. So if you're interested in any of that content or want to follow up, everything we mentioned in the chat will also be put there. I will be putting a link up to that when I'm done with it, before the next episode. Usually I tweet out when they're done, because it'll happen sometime between now and then. But thanks so much for being on the show. We really appreciate it. Thanks a lot. Yeah. And Andrew, thank you for being my co-host again and minding the chat, especially given our challenges with all of the things.
And I think that's, that's it for the show today. Cheers.