Hey, and good morning. Welcome to the Level Up Hour, where we contemplate containers, consider Kubernetes, and opine on all things OpenShift. I'm Randy Russell, the Director of Certification at Red Hat, and a special greeting to all the Red Hat Certified Professionals out there. And I'm joined by my co-host, Mr. Scott McBride. Hey, Scott. Good morning, Randy. Today we have a great guest with us, Mr. Scott McCarty. Hey, Scott McCarty. Good morning to you. What do you do at Red Hat? I am a product manager in the container space, so our team builds technologies like CRI-O, Podman, Buildah, Skopeo, and Red Hat Universal Base Image, which are then kind of added into OpenShift and RHEL, or part of OpenShift and RHEL, essentially. Right. So today we're going to talk about UBI Micro, but before we dig into UBI Micro and some of the advantages it offers, particularly around security, maybe we ought to talk a little bit about what the UBI part is first and how that fits into the overall system of containers and Kubernetes and OpenShift. Yeah. So I always talk about the journey as like, you know, when I first discovered Docker containers back in the 2013-14 time range, the first thing you did was you installed this piece of software, then you pulled down a container image and you ran it, and you were like, oh, wow, this kind of looks like a VM. I can imagine I could use this for all kinds of interesting things, like looking up man pages on other versions of RHEL, which is what I used to do back then because I was a solutions architect and people were like, hey, how do you run this cluster thing in RHEL 5? And I'm like, I don't know how to do that. I can't remember that at all. And I don't have RHEL 5 installed. So let me run this container image and do a man. And from there, I kind of had the epiphany. I was like, oh, wow, this is pretty cool. It unlocks me, you know, from these different versions.
And so I realized the first thing that mattered was the container images, right? Like, I need container images for the different versions of an operating system to make it useful, essentially. And then eventually you want to build on it and build something useful, because basically all software development is about collaboration and building something new and adding some kind of differentiated value. And then you want to share that. So in a nutshell, Red Hat historically had a tension around the way we do our end-user license agreements. Servers weren't done like that. Servers were like, I got the flour, sugar, eggs, and water and made the cake myself, and then I shared it in my house. Containers is like, all right, I want to deliver pieces of the cake to everyone. And so we kind of had this tension between the way we monetized RHEL and the way we wanted to do containers. And so when we launched RHEL 8, we changed that; we essentially released a subset of RHEL as free, you know, free to redistribute, so that it could be used in container images. And we called that UBI, Red Hat Universal Base Image. And we basically made, at the time, three base images; a bunch of layered images that were things like Python, Ruby, PHP, you name it, all these different programming languages, Java, .NET. And then we... All the usual suspects. Yeah, all the usual... Technically, I'm lying about Java. It took us a little while to get Java out the door the way people wanted. People were mad, but they didn't know that it was not on purpose. It was just logistics, you know, software's hard. And then the final thing, the third piece, is we have a set of RPM repos that look quite stunningly similar to RHEL, but with a subset of RHEL in them, that are out on a content delivery network and freely available across the globe. And they're very quick and easy to use.
And you can basically, you know, bake the cake; basically add packages to the container base images, or add packages even to the PHP one, for example. And kind of that whole set of three things is Red Hat Universal Base Image. Although I think a lot of people refer to the base images as the UBI, it's really kind of all three things. Right. So there's, to use an overused word, an ecosystem of things around UBI. And so if you're somebody who is familiar with Red Hat Enterprise Linux, RHEL, and RPM and some of the concepts associated with installing into an operating system and all of that, you can almost think of this as sort of a pared-down version of that, is what it sounds like. Right. Yeah, that's exactly right. I think PaaS was really easy, but not enough control. Just doing RPMs on a RHEL system was a lot of control, but not as easy. And in the middle there somewhere is kind of like containers, where I can do some of the things with the operating system that I want to do; mostly I'm focused on the app, but I can still do some of the operating system things that make it flexible enough that it meets like 80% of the use cases, you know, maybe 90. Right. Well, one of the things we've heard over years of software development and working with RHEL is that RPM packaging is hard. Right. And so containers actually provide kind of an alternate method for packaging. So you can still use RPMs if you want; we provide all of our software as RPMs, but I mean, you could untar a tar archive into the file system of your container and then still pass it around as a single unit of software. Yeah, I had a slide; actually, I think it's one of the Container Tidbits, something about a secure supply chain or something like that.
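To make the "bake the cake" idea concrete, here's a minimal sketch of adding packages to the UBI standard base image in a Containerfile. The image name is the published UBI 8 base; the httpd package is just an illustrative choice:

```dockerfile
# Start from the freely redistributable UBI 8 standard base image
FROM registry.access.redhat.com/ubi8/ubi

# Add packages from the UBI RPM repos, exactly like on a RHEL system
RUN dnf install -y httpd && dnf clean all

EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
```

Built with `podman build -t my-httpd .` (or `docker build`), this works anywhere, no Red Hat subscription required, because the UBI repos are on a public CDN.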
I forget what the title was; I'd have to go back and dig up the article, but I published it, gosh, like four or five years ago. But I joked, because I have a pitch, or not really a pitch, it's one of my little rants that I talk about, you know, the Java guys. If you went back in time to when I was a sysadmin, around '98, '99 when I first started, you'd have the Java people, the Linux and UNIX people, and then developers on top of that. And by Java people, I mean there were Java app people that kind of knew the infrastructure. And I'd laugh, you know, your job as a sysadmin was like, go spin up the server. So I'd go spin up the server. And we used CFEngine at the time, which was fine; it was config management that was pretty good. In the late '90s, I think we were on trend, pretty hip. And at the time, all the cool kids were doing it. And then, you know, you'd hand it over to the Java guys, and they'd pull down all these tarballs. And I'm like, what are you guys doing? You guys are bananas. Like, I have no idea. You know, as a sysadmin I was pretty organized; I could basically do reproducible builds. They were just like, no, man, it's cool. We just pull tarballs down. Like, the Java guys are totally hippies. And then, you know, the app developers are pulling down WAR files and running them inside an app server. I remember, around 2005, I was like, oh my God, this is horrible. So we're all speaking different languages, right? Mostly sysadmins are speaking RPMs, mostly the Java infrastructure guys are speaking tarballs, and then the Java app programmers are speaking WARs. And you're like, this is clearly not going to work, right?
So the beauty of a container is you kind of boil it down to like, okay, we have this Dockerfile or Containerfile that's kind of a blueprint that shows how to build this packaged thing. And we're all speaking the same language. Now, in that Dockerfile or Containerfile, I can go do whatever nasty, weird stuff I want to do and hide that from everybody else. So the Java guys can go do their weird tarball madness, and the WAR guys can either deploy in the app server or dump it on the file system using, you know, a Dockerfile. But now everybody has a container image that they can share. And there's kind of accountability, and there's a blueprint for how we got from nothing to the end state. And so now we finally have kind of a blueprint, which is what I think we wanted 20-some years ago when I started. And you can tag it and other stuff, so you can keep track of the version of that thing. Do you do that with tarballs? You just pull them down. Yeah, exactly. I would like to point out, as an old man yelling at clouds, that there was a time where there was a Red Hat product whose tagline was "unzip and go." So that was it. Wait a minute. Wait a minute. What was that? JBoss? That was JBoss. Yeah. Oh, yeah. Okay. That's still how they roll, I think, to this day. I mean, most Java guys are just like, unzip and go. Or at least that sort of application administrator part of the crew, as opposed to the application developers, who are all about WARs and EARs and all of that stuff, right? Yeah, exactly. Yeah. So you realize there's kind of three different constituencies all speaking three different languages. Now the beauty of this is, at least even in the Java world, you have all three of those constituencies speaking the same language.
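The "hide the tarball madness behind a blueprint" point can be sketched in a Containerfile. Everything here except the UBI base image name is hypothetical, including the myapp archive and its paths:

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi

# The "weird tarball stuff" lives here, invisible to consumers of the image.
# ADD auto-extracts a local tar archive into the image filesystem.
ADD myapp-1.2.3.tar.gz /opt/myapp/

# Everyone downstream just sees a tagged, versioned container image
CMD ["/opt/myapp/bin/start"]
```

Tagging is what gives you the version tracking mentioned above: `podman build -t registry.example.com/myapp:1.2.3 .` followed by a push, and every consumer knows exactly which build they're running.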
Now you can even have the PHP guys and the Ruby gals and all these different groups of people, and they're all going to speak the same language. And it's not a language that's too complex. I think we love "just enough," you know, where it's just enough to kind of get everybody organized, but not so much that it's oppressive. And I think that's why it kind of took off. Yeah. Well, there's one other thing that has come up in discussions about the thinking behind UBI, the Universal Base Image, and that's something that you don't always hear about: GPL compliance. GPL, of course, being that governing set of rules around the sharing of software that has been released under the GPL. Yeah. Talk a little bit about that. Yeah. So that's one that we worked on for a long time, actually. We worked for a couple of years trying to figure out the right way to do this. And essentially, what people don't understand is, okay, a couple of things about licensing. I think the hipster world has gone to Apache licenses and BSD-style licenses, where they're like, whatever, dude, we just put it everywhere, it doesn't matter. And they don't really think about attribution, even though technically they still are on the hook for attribution in a lot of these licenses. And then Red Hat goes above and beyond and pretty much provides the code for almost everything, for everything, basically, because the easiest way to attribute code is to just provide the code, because the people's names are in there; it says what they did. And so attribution is basically saying, hey, this person wrote this code. That's attribution. Delivering the code is the GPL. And the GPL is the most famous one, but there are families of licenses, right? But Red Hat just goes above and beyond, goes all the way to basically the furthest one, because it's just the easiest.
You deliver attribution and code at the same time, and you get both for the same buck. So what a lot of people don't realize is the GPL mandates that if you deliver a binary, you actually, theoretically, should be delivering the source code in like and kind. Which basically means if you put the binaries out on an FTP server, you should have the code out on an FTP server. If you deliver a CD, like 1990s-style, to your customers in Best Buy, then you should deliver the source code on that CD in that same box set, or at least through the mail in a fairly like-and-kind manner, right? It might take about the same amount of time to get it, and you get it in the same format, essentially. Well, what people didn't realize is, when we went to containers and you dumped containers all over the world, and literally there's a bunch of public registries and all the cloud providers, there's GitHub, there's Docker Hub, there's all these different public places, people are dumping binaries everywhere and they are not attributing the code and not actually delivering the source code for basically everything that's out there. So there's definitely kind of a hairball out there that I think a lot of people aren't thinking about. And so we kind of thought, how could we make this easier, especially for our partners that actually care about this and don't want to get in trouble or have any kind of static or problem from this? And so we said, well, what if we could tuck source code in a container registry? And this is becoming a very common pattern, by the way. So we looked at what was going on in the Helm world, and they were starting to deliver Helm charts in the container registry.
We're also looking at, and this will get into the roadmap, we're actually planning on implementing Sigstore and doing that with Podman and with OpenShift, and that delivers the container signatures in the registry, which is beautiful, because now I want to spin it up, I want to create the source code, I want to sign it all, and I want to shove all of that in a registry. Like, that's beautiful. And I have one place, like in kind. It's all the same. And so with UBI, we launched this concept of source containers. With UBI, it's actually very easy to pull down the corresponding source container for any base image that you pull from us; any image, actually, any of the images that are in the UBI family. Pull them down, get the exact corresponding source code. So you're like, oh, the UBI Micro image only has these 27 packages in it; I want the 27 source RPMs that go along with this, and I only want those. I don't want to deliver 10 gigabytes of source code just so I can deliver a 27-megabyte image. So you can find the like-and-kind corresponding source code for all of our images now. And that's actually pretty cool. And then if you are an end user and you want to go push those binaries out anywhere you want to redistribute them, you can also push that source code and say, hey, I'm meeting my obligation to the GPL; I'm meeting my obligation to attribution for BSD and Apache licenses and all the other blah, blah, blah licenses. Wow. Well, yeah. It is, and you know, the thing about it is that the compliance piece here is a thing that, as you mentioned, is very easy to miss, because, you know, fundamentally the high-level story is all about sharing and caring, right? But there are some rules here. There are some things about attribution that are required.
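As a sketch of the source-container workflow described above: source images aren't runnable, so skopeo is the usual tool for fetching them. The exact tag name here is an assumption (the `-source` suffix is the convention Red Hat has documented; check the catalog for the tags that exist for a given release):

```shell
# Sketch: fetch the corresponding source container for a UBI image.
# Guarded so it degrades gracefully where skopeo isn't installed.
if ! command -v skopeo >/dev/null 2>&1; then
    echo "skopeo not installed; skipping"
    status=skipped
else
    # Copy the source image's layers (tarballs of SRPMs) to a local directory
    skopeo copy docker://registry.access.redhat.com/ubi8:latest-source \
        dir:./ubi8-source
    status=done
fi
echo "status=$status"
```

The result is a directory of layers containing the source RPMs that correspond, like-and-kind, to the binary image, which you can then push alongside your redistributed binaries.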
And so it's really fascinating to hear about how that's been factored in and addressed in this way that's actually very manageable and scalable, I guess, is the way I'd put it. So I'll throw in one more thing, since this is more of a conversational one. You know, I remember hearing, back in the day, and I've been in this way too long, but I remember back when I was in the Python community a lot, and the guy that wrote, I want to say Grails, but that was a different language. I think it was Ruby, not Grails; it was one of the web servers the guy wrote in Ruby. And I remember he was an interesting cat. I can't remember his name off the top of my head. But basically he got annoyed because he had released a bunch of code under, I think, an MIT or Apache license, and this other company basically consumed all this stuff. And then they had customers, and they were like, oh, yeah, it's this upstream stuff; that guy screwed it up. And he's like, dude, it's the non-repudiation problem, right? They're blaming it on this other guy that gave them this code for free. It's like, why are you blaming me for them? Like, what the hell? And so he literally swore he would never use attribution-only code again. He would only release GPLv3, so that it was always copyleft, and then basically they had to release the source code, and you could prove, non-repudiably, whose fault it really was. And so in this open source world, there are some of these edge cases that people don't realize. Like, when you're building your brand in this open source world and you're releasing all this code, honestly kind of out of the goodness of your heart in a lot of ways, and then people start to blame you and attribute problems to you that are not your problems, that makes you kind of mad, you know. So yeah, these things matter still, even in 2021.
I don't think a lot of people realize that. Yeah. Well, it comes down to accountability, right? So can you tell us something about the existing family of UBI images? Because there are actually multiples of these, or some variants. And then maybe we'll move on to our principal topic for today. But can we talk a little bit about the family of them, and sort of why there's a family? Yeah. So when we first launched UBI with RHEL 8, which is two-and-some-change years ago, we launched three. We launched a minimal, a standard, and an init container image. And we basically planned the standard one as kind of the middle-of-the-road one. We thought 80% of people would use that one. It looked very similar to a Fedora image, very similar to a Debian image, an Ubuntu image. It just looked like a standard Linux distro image. It has yum and DNF in it, blah, blah, blah. You can just pull down packages and install them. It works in a Dockerfile or a Containerfile, and it's totally fine. The interesting thing was we created this other one called minimal, because people were complaining about size. Everybody's trying to minimize size. There's this concept of attack surface, although I critique the way people sometimes apply this concept wrongly. But nonetheless, it is always wise to try to trim down the size of container images. So I remember Colin Walters was working on this thing called microdnf, and I remember we were at a conference and he's like, I think I could knock this out by this afternoon. And he literally worked on this thing and was like, I figured out how to basically install RPM packages with just a small C binary, instead of pulling in Python and all these dependencies and blah, blah, blah. So he was able to save like 100 megabytes; a hundred-ish megabytes, maybe even a little more than that, you know, saving like 120, 140 megabytes.
So you're getting now like a 70, 80-megabyte image instead of like a 200, 210-megabyte image. And so we created this thing called minimal. And that was still good. It didn't have all the functionality, so in your Dockerfile or Containerfile you would need to kind of know the syntax of microdnf. But it was pretty cool, because you didn't have to have a full-fledged package manager in there. So that was kind of our first step on the journey, when we released that. And then on the other side, we had the regular image plus systemd. And this is a somewhat controversial use case with some people. But I think they just don't really think about it; sometimes it's really nice to just do yum install mysql, yum install httpd, systemctl enable httpd, systemctl enable mysqld, and it just works. And you're like, oh, I don't have to know how any of this freaking crap works. I don't want to have to have SME knowledge on how to start MySQL right and reap zombie processes and all these wacky things. There's this argument that you run one process per container, which doesn't even make any sense, because MySQL runs multiple processes in and of itself. Apache runs multiple processes. Nginx runs multiple processes. All of these things that people think are one process are actually multiple processes. So, long story short, there's this argument: when should I run multiple pieces of software in a container? When should I run multiple processes? Almost all the time, because almost everything's multi-process. And then when should I try to minimize and really, truly only run one tiny little use case? And I've actually been thinking for a long time about writing a blog entry on this. But it would require a lot of, like, creating families of applications and showing people examples of these different things.
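A minimal-image build using microdnf might look like this sketch (the package choice is illustrative; the point is the slightly different syntax and the absence of the full yum/dnf stack):

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi-minimal

# microdnf is a small C binary, so there's no Python-based package
# manager stack in the image; its syntax differs slightly from dnf
RUN microdnf install -y httpd && microdnf clean all

CMD ["httpd", "-DFOREGROUND"]
```

The init variant goes the other way: full yum/dnf plus systemd, so a pattern like `RUN yum -y install httpd && systemctl enable httpd` followed by `CMD ["/sbin/init"]` "just works," with systemd handling service startup and zombie reaping.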
But needless to say, we tried to create three different options to tackle a lot of different use cases that we think are pretty much the 99%. But then, if you look at RHEL 8.4, we released something called UBI Micro. So now we actually have four base container images. So on the very tiny side, coming in at like 36, 37-ish megabytes uncompressed and like 12.5 compressed, which I think is pretty damn good, we have that micro, but it has no package manager in it, and none of the dependencies. It has a very, very minimal set of things: standard tzdata, a lang pack, nothing beyond a standard UTF-8 English language pack, things like that; a very minimized set of things in UBI Micro. And to install anything into it, you actually have to use the package manager on the host, or, down the road, we're kind of looking at how we could use the package manager from a different container image to essentially remotely install packages into UBI Micro, but not actually install the package manager in UBI Micro, and then deliver this thing. A lot of people call that a distroless image, because once it's baked, you can't add more packages. It's done. Even in production, there's nothing there; you can't really muck with it. And they're usually pretty small. But there is still a dependency tree. There is still a C library. There is still an OpenSSL library. With Apache, the quality of how it's patched still matters. All of those security things you would think about in that family of software still matter, but we do get it down to pretty darn small. I mean, I think I've built an OpenSSL image that's around 63 megabytes, and an Apache image based on micro is about 156-ish. I think Nginx is similar, 153-ish. So you're getting into really small, tiny images that definitely compete with Alpine in size. Yeah. That's absolutely microscopic, what you're describing.
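The "use the host's package manager" approach for UBI Micro can be sketched with buildah. This mirrors the pattern Red Hat has written up for ubi-micro, with the httpd package as an illustrative choice:

```shell
# Sketch: install httpd into a UBI Micro image using the host's dnf,
# so no package manager ever lands inside the image itself.
# Guarded so it degrades gracefully where buildah isn't installed.
if ! command -v buildah >/dev/null 2>&1; then
    echo "buildah not installed; skipping"
    status=skipped
else
    ctr=$(buildah from registry.access.redhat.com/ubi8/ubi-micro)
    mnt=$(buildah mount "$ctr")            # mount the image's root filesystem
    dnf install -y --installroot "$mnt" httpd \
        --releasever 8 --setopt=install_weak_deps=False --nodocs
    dnf clean all --installroot "$mnt"
    buildah umount "$ctr"
    buildah commit "$ctr" ubi-micro-httpd  # result has no dnf or microdnf inside
    status=done
fi
echo "status=$status"
```

Note that `buildah mount` typically needs root, or a `buildah unshare` session when running rootless; the resulting image is "baked" in exactly the distroless sense described above.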
You know, we've talked a little bit about things from the past. I can remember my second Linux distro ever installed in maybe 40 meg. That was a long time ago. And, you know, you think about now doing something in, was it 37, you said? Yeah, 36.4, I think, if I remember right. I was just mucking with them yesterday. That's unbelievable. So, yeah. So micro really pares it down to the absolutely essential elements. You know, we don't have the package manager in there, but there are some interesting approaches to getting packages in there. And even if you're doing something like Apache or Nginx or whatever, it still remains something that, if I think about how RHEL has grown over time, to think now about having a container usefully doing something at those sizes you're talking about is amazing, really. So, quick question. Go ahead. Randy, I was going to add a little more color. So upstream in Fedora, for a couple of years now, they've been working on this minimization effort to essentially go back and clean up the supply chain and trim down the dependencies. In DNF, there's this concept of soft dependencies. These are RPM packages where you're like, hey, this RPM depends on this other one, but it's a soft dependency. You can actually pass a command-line flag to DNF to basically say, don't install the soft dependencies, only install the hard ones, where this software will actually break if this other thing's not there. But there are optional ones that, unless it's a full-fledged server, you don't necessarily care about, or maybe you want to pick and choose which dependencies you actually want to add. And so we're actually expecting the RHEL 9 base images, the UBI 9 images, will be even smaller. I'm not 100% sure yet, but we might get it down to like five-ish megabytes, seven-ish megabytes. Pretty darn good.
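The soft-dependency flag Scott mentions can be used directly in a build; a sketch in a Containerfile (package choice illustrative):

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi

# Skip weak (soft) dependencies and docs to trim the image; only hard
# requirements, the ones the package actually breaks without, get pulled in
RUN dnf install -y --setopt=install_weak_deps=False --nodocs httpd \
    && dnf clean all
```

The same `install_weak_deps=False` setting can also live in dnf.conf, which is effectively what the smaller image variants rely on by default.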
Yeah. That's unbelievable. But an interesting approach, too, because, as you say, there are the dependencies that it will not run without, and then there are dependencies that are required to deliver certain features, but maybe you don't care about those features, right? Yeah. There's no such thing as distroless; there's only somebody else's distro that you rely on. So even the Google distroless project relies on Debian underneath for a lot of the dependencies. And so if you're building a MySQL container, it's still a C program that's compiled against a C library. Let's be honest, nobody wants to maintain a C library if you don't have to. And then let's also be honest, this is the dirty secret that nobody wants to talk about in the container world: the C library is probably the most important library from a getting-hacked perspective. There are a few entry points. There's the kernel, but now you're talking about container host quality. In the container image, the most important thing, the equivalent of the kernel in a container image, is the C library, basically. And so having a high-quality C library that has a proactive security team that is literally looking at it and analyzing it and double-checking it, and a bunch of people that can commit code and fix CVEs and blah, blah, blah, the quality of that C library, in my mind, is probably a plurality of what's important in the container image from an attack surface perspective. You know, you could have time zone data and all kinds of other dead files that sit there and don't do anything, but you'd better be careful with that C library. Absolutely. Absolutely. So just a quick question: where and how do people get these UBI images? How are they accessed? Yeah.
So the official place where they reside, sort of the canonical place, small-c canonical, is registry.access.redhat.com, which is an open registry that Red Hat runs where you can go and pull them. And they're also now available on Docker Hub, as of like May 20-something; I forget exactly when those went live, but at the end of May we put them up on Docker Hub too. So basically you can get them at your favorite neighborhood container registry, but mostly the Red Hat registry and Docker Hub. So, Scott McBride, any thoughts or comments on the wonderful world of UBI? What are you hearing about it? Well, I did want to point out that we maintain all those UBI images. So we rebuild them every six weeks automatically, and then we will kick off an out-of-cycle rebuild if something like a critical or important security vulnerability is discovered in one of the software packages that comes with it. If you look at the Red Hat catalog for containers and the UBI listing there, you can actually get access to the RPMs that are components of it. We do a scoring of all the containers provided through our registry that includes things like open vulnerabilities, age, and some other factors, which is a little bit different than some other container registries that rate things by the number of downloads or the number of likes people have given them over time. So if you are interested in more metadata about your containers from Red Hat, I think the Red Hat registry is good. But I mean, Docker Hub is kind of, as McCarty said, the friendly neighborhood purveyor of container images, so we are there now as well. Yeah. And like Scott said, those get synchronized every day, essentially, so they're always up to date. So there has been some chatter. I'm sorry, Randy.
There has been some chatter in the chat, since we have the principal product manager for containers here at Red Hat. One person asked, what about doing hardened container images for people who want hardened container images? And my response was that if we made a thousand different derivative container images, we'd then have to maintain a thousand different derivative container images. But I wanted to get the hot take from Scott McCarty on it. Yeah. And I'll say, I've looked at a bunch of different hardening guides, like the STIGs. I forget what STIG stands for; it's an acronym, but it's a famous federal government one that a lot of the federal government uses to harden systems and servers. So there's a STIG; it's a set of standards for how to harden an image. There are a bunch of problems with all these things. The first thing is, if you harden all the images, like you said, you have two options: you can either have one for each of these flavors, which would mean we'd have like 10 different hardened images, or we'd have to pick a few. And then the next problem is, once you harden them, well, security is always a trade-off between ease of use and more security. And so you've got to look at it like, if I get a 1% improvement in security, but it's 99% harder to use, that's almost never worth it. You know, SELinux was probably like 47/53, and people disable it all the time. You get into that middle road where you're like, I know I'm a lot more secure, but I still want to, you know. And so that's the problem with hardened images. And then another problem with them is that some of them are just ridiculous. I don't want to name which ones, but some of them, you look at them and they're checking for things that make sense on a server, but not in a container image, and things like that.
And so there are some spurious little things where I'd call balderdash on some of the ways people analyze it, where they're actually trying to, for example, prevent problems with the container engine, as opposed to what's wrong with the actual image. And so for some of these things, one could argue where it's better fixed: is it better fixed in a container host with a container engine, or is it better fixed by hamstringing a container image so it can't do things? And so, you know, there's a lot of debate about where you harden these things. And I would argue that the Red Hat ones are hardened out of the box. It's not like these things are not hardened. I would argue that our C library is hardened out of the box. Our kernel is hardened out of the box. When a binary is compiled in RHEL, there are like five, six different security technologies that we basically compile into those binaries, and all the binaries in there have that. The way we analyze a C library, it's hardened, you know. And then on the host side, we're doing things like SELinux; our default policies for SELinux are actually very hardened. And so one could argue that some of these hardened-image things are adding, like, 0.01%. You know, it's a warm and fuzzy, but it's not actually providing you much value. And so that's the challenge. You're like, I want a safe minivan. And it's like, this minivan is pretty safe. And then it's like, how safe? Well, we could put big metal plates around the outside, and then it'll look like a combat minivan. It's like, how far do we go? Right. And I will say that the DISA STIG, for example, is about servers. And so there's a whole bunch of things in there about managing user access, which in the container world is not really applicable.
And then also, a lot depends on the container host. If you're running rootless, then it doesn't matter that your container thinks it has root access for running services or doing tasks, because on the actual host, it's not doing those things. Yeah. And then if you're looking to have a smaller attack surface, just start with one of the smaller images, like micro or minimal, and then you don't have to worry about network services and all those things — they're not in there. And yeah, I mean, that's actually what hardening often means, right? Take all the stuff that you are not using out of there. Yeah. I think a lot of people want a checkbox, and I totally understand why, but this is like a five-star crash rating — what is a five-star crash rating for a container image? I would argue, like you said, if you want to build an Apache image that is super secure, install Apache on top of UBI Micro with the glibc that we provide, and then update it all the time, every day, you know. And that to me is about as minimal as you can possibly get, with the highest-quality components. And I would argue there's nothing else in there. There's no systemd. There's nothing, you know. Yeah, but clearly the next thing is: but I don't want to update it every day, that's ridiculous. Velocity is, I would argue, a huge part of attack surface that people don't understand. Trust is temporal. I try to explain this. The world rots around you every day. As you age, you know this better — I can testify to this. But, you know, the day you release a container image, it's great, but three months later, not so great. I don't care how good the quality of your components was, because the world discovers new problems with the software in your container. And so, you know, we're constantly discovering new CVEs, et cetera, et cetera.
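A minimal sketch of that "Apache on UBI Micro" idea, assuming the freely redistributable UBI 8 images and a multi-stage Containerfile so the yum work happens in a full UBI builder stage and only the result lands on ubi-micro. The httpd setup here is illustrative, not a hardened reference:

```shell
#!/bin/sh
# Write a multi-stage Containerfile: install httpd into a root file system
# in the full UBI image, then copy that root onto ubi-micro.
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi AS builder
RUN mkdir -p /mnt/rootfs \
 && yum install -y --installroot /mnt/rootfs --releasever 8 \
        --setopt=install_weak_deps=False --nodocs httpd \
 && yum clean all --installroot /mnt/rootfs

FROM registry.access.redhat.com/ubi8/ubi-micro
COPY --from=builder /mnt/rootfs/ /
EXPOSE 8080
CMD ["httpd", "-DFOREGROUND"]
EOF

# Build only if podman is available on this machine (it may not be).
command -v podman >/dev/null 2>&1 && podman build -t ubi8-micro-httpd . \
    || echo "podman build skipped/failed"

# Rebuild and redeploy on a schedule (e.g. daily) to pick up fixed packages.
```

Rebuilding frequently is the point of the velocity argument above: the image stays small and the packages stay current.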
And so your CVEs build up on old software. And so velocity is huge. You know, time and space are linked. I hate to break this to you, but attack surface should really be attack surface over time, because it grows, right? As time goes on. And time is, I would argue, almost more important than attack surface in a lot of ways. Well, yeah. So along those lines, would you say that the security aspects are one of the principal things bringing people to UBI Micro, or is it perhaps some of the other advantages that accrue from having this small, very tightly packaged piece? You know, what is driving customers most? Is it the security thing? Or are there some other things that are maybe driving people to this particular choice? I think it's a little bit of both, being very honest with myself. You know, as a product manager, I have to be willing to admit — I have a technical brain, and I have a business brain, and I have a cool brain, like everybody else. I just want to do cool stuff sometimes. And I will admit that all three of those things have an actual effect on people, and we have to admit that to ourselves. There are functional requirements, sure, but there are also emotional and social requirements. I mean, even in finance we're starting to realize this, with ESG and things like that. So if you're more honest with yourself about why people are selecting your product, you can do a lot better job. I think part of it is, to your point, that it just makes you feel good. You're like, I know nothing's there. You remember the first time you installed a Linux server and ran it in production? I remember when I was at NASA and I had all these servers that were under my care, under my control.
And we had this hacking team try to break in, and in the first year, nobody got into my servers. And I was like, yes — that feeling you get. I knew I had minimized everything on those servers, and it worked. And mind you, is that the reason why they didn't break into my stuff? Who knows? I could be getting the right answer for the wrong reason, but it feels good. So there is some element of it just emotionally feeling good when you know you don't have anything you don't need. I have to admit, I think that's about 33% of it. Another 33%, though, is attack surface. I think it's reasonable to say that once you've standardized on a core build — and this is anything, I don't care if it's desktops, phones, laptops, routers, whatever — once you've standardized on one version of a thing, that truly minimizes your attack surface. That gets you from 27 different types of routers with 27 different versions of SSH running to one router with one version of SSH running, and that clearly limits your attack surface. I mean, if you think about how many lines of code — each line of code is a potential vulnerability, right? And so if you have fewer lines of code, then you have fewer vulnerabilities. Exactly. It's deduplication, like storage, right? The more deduplicated things you have, the lower your attack surface. So I think standardize first, and sticking it into a container image is purely: standardize on a base image and then use that everywhere. And then within that, optimize — big O is standardize, little o is optimize, you know. Then, I would argue, get down to things like UBI Micro and truly minimize the attack surface per workload. But really look at your entire environment first — I think that's probably the most important thing.
But yeah, getting back to it, I think people do it partially for technical, functional reasons — like, oh, our security. And I think some of it is checkbox, too; that's the third thing. Some of it is just like, hey, we need to check this thing, people said they need this thing, let's check this thing. And then some people really do understand attack surface and actually have real, good security requirements. And then other people just like the way it feels — I think that's a big chunk of it. And I get that. I do appreciate that. I like Ferraris for a reason. I like Ducati motorcycles. There's a reason — a Ducati is not necessarily better than my Suzuki, but yet I look at it and I go, I think it's a work of art. I hear you. So, you know, if somebody's interested in this, how do you dig into understanding more about how to be effective with UBI Micro? Because, I mean, the premise of it is to make things simpler, but any approach to making things simpler always requires some net new knowledge, right? Yeah. I think containers are that in general. I think containers look so deceptively simple. I mentioned at the beginning, when I first started using them, I went and built CentOS 4, 5, 6, and 7 images — this was back in like 2013, 2014 — and I had them up on Docker Hub. And they became decently popular, because nobody had built these things. I don't know why, but for me, I wanted them all because it was annoying. And I couldn't redistribute RHEL, so I couldn't create RHEL ones, because our EULA prevented me from doing that. But I realized, okay, it's pretty easy to create a base image. But then as soon as you go to put an application into a container, that's where you really get the smack in the face. You're like, oh, this is really easy to consume, but there's the tip of an iceberg as soon as you go to build one.
So, I don't know, maybe a year and some change ago, I published an article — I can share the link with you guys — where I put three different applications in containers, using all the best practices I've learned over the last seven years. And when you really look at it in action, I'm not going to lie, even I look at it and go, this is a lot of brain power. Like, I'm operating here at 90% CPU; sometimes it was overheating, you know. This is a lot of work to figure out: where do I put the config? Do I put the config on this volume? Do I put the data over here? And I break these things down and use systemd, and I do ones where I just fire up MySQL and Apache in the same container, and I run it read-only, which is even funnier — you can actually run it with systemd read-only. So, I mean, I've opened this to the world in good open source fashion. No security person on the planet has attacked me about it, because I did my homework. I know this pretty well, and I'm perfectly comfortable putting it out there publicly. And I'd challenge anybody to come talk to me about why the security is bad, or break into my server if it's that bad. I mean — not that I want to challenge; I know somebody could figure it out. Now I hear you walking it back. There are still always ways to get into servers, but mine would be pretty hard to get into. But again, the brain power you've got to put into it is non-trivial. So I would argue you need really strong Linux skills to do this. You know, early on, we called containerization application virtualization, because you actually need to know the app to virtualize it. When you virtualized the hardware, you needed to know the hardware. You're like, oh, I need a network card, I need a video card.
I need this much RAM, I need this many CPUs with this much power. You know, that was hardware virtualization. And to do that, you kind of had to understand hardware, because you're like, hey, I'm going to put this app in a virtual machine; now I need to know how much RAM, how much disk, whether it needs a video card, whether it needs a network card, why the network card needs to be on this network, blah, blah, blah. That's hardware virtualization. But when you get into application virtualization, you need to know: where's the config for the MySQL thing? Where's the config for this Apache thing? Oh, the app that lives in Apache has this other config. I'll give you a perfect example — a hard one, a really hard one that I thought deeply about for a long time: WordPress auto-updates itself. Okay? So that seems like a really nice feature. I don't necessarily want to rebuild a container image every day and redeploy just so that WordPress can update itself. But see, WordPress was designed before containers, so it just wants to write to the home directory and basically change everything. So you're like, well, let me do a read-write volume for this area where WordPress updates itself, but I'll do a read-only volume on this config file, because I don't want some hacker getting in and changing my config file. A read-only volume for this and a read-write volume for that. And you've got to do a lot of architecture. And that basically takes senior Linux sysadmin skills. There's no other way to say it. When you're going to dig that deep into an application and deconstruct it, and then put it in a container and actually limit what it can do, and do it in a secure way — it's harder than people realize. And so not all applications are good candidates for that, and not all people are up to the task of doing it in a way that's sane and secure.
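A sketch of the WordPress volume split described here — read-write only where the app self-updates, read-only for the config file. The image name and paths are hypothetical, purely for illustration:

```shell
#!/bin/sh
# Stand-in for a real, locked-down WordPress config file.
touch wp-config.php

# Run only if podman is available; registry.example.com/my-wordpress is a
# hypothetical image, so the pull will fail outside an environment that
# actually hosts it.
command -v podman >/dev/null 2>&1 && podman run -d --name wordpress \
    -v wp-content:/var/www/html/wp-content:rw,Z \
    -v "$PWD/wp-config.php:/var/www/html/wp-config.php:ro,Z" \
    -p 8080:8080 \
    registry.example.com/my-wordpress:latest \
    || echo "podman run skipped/failed"
```

The `:ro` on the config mount is the "I don't want some hacker changing my config file" part; the named volume with `:rw` is the one area the app is allowed to modify.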
Well, you know, I try to avoid too much pitchmanship here on the Level Up Hour, but I have to — I kind of teed it up a little bit. You know, it's one of those things where I think a lot of people want to skip straight to the glories of Kubernetes and containers and OpenShift. But underneath all of that, you do have to have those Linux skills. And, you know, we made a big change to the Red Hat Certified System Administrator program a year or two ago — actually last year, now that I think about it — in which we said, you know what, knowing containers is part of what you need to be a qualified Linux system administrator. But it works in reverse, too: you have to understand something about Linux. You can't look at a container as a black box. It's a convenient box. It's a tiny box. You can see what's in the box, and sometimes the work has been done for you about what's in the box. But at the end of the day, you have to understand a little bit about what goes into that box and why it goes in there, to really be effective. And, you know, nothing illustrates your point about understanding Linux more to me than this: if I look at pass rates on our OpenShift exams, people who are Red Hat Certified System Administrators do so much better than people who are coming in cold because they've been told, hey, you've got to learn OpenShift — and maybe they haven't got those underlying skills. So yes, the container world makes for a lot of convenience and ease of use. You know, the whole metaphor of a container, like a container ship, and the revolution of container shipping — the same thing is happening here, that you just drop in your container. It sounds really good, and there is some truth to that. But you do have to know something about what's in the container at the end of the day, right? Oh, yeah, for sure.
I would argue, with application virtualization, aka containerization, there are three main skill sets you need. You need Linux skills, straight up. You need to know how config files work and where they go, how to mount devices, blah, blah, blah — all the normal stuff that you know with a Linux system. Then you need application-specific subject matter expertise. So, like, I know MySQL pretty well. I know PHP pretty well. I know Perl pretty well — I'm dating myself, but there are certain technologies I know pretty damn well still. I don't know Ruby that well. I don't know Node.js that well. To containerize Node.js would take me forever, because I'd have to farm around and get the application-specific SME knowledge around how people run it as a best practice, how they lock it down, what modules they use, what things they use to administer it, how they get logs out of it — all these operational things that somebody who knows Node.js or knows Ruby would just know how to do, right? And then thirdly, I need container-specific skills. I need to understand the way a container engine works, the way you do volume mounts, blah, blah, blah. Once I know all three of those things, I can get an application into a container. Now the next step is: how do I run that at scale? Okay, cool. Now, the Kubernetes stuff, in my opinion, is not actually as hard as everyone thinks it is. It's just that they're trying to bite off all four things at the same time. They're trying to bite off basic container stuff, basic container tools, the app-specific stuff, the Linux stuff, and then also running it at scale. And you're like, oh my God, I've got to learn distributed systems along with all this. Yeah, of course you're going to get overwhelmed and fail. But if you look at this as four separate skill sets, you can start to say, okay, I do need these four skill sets to run Apache at scale in Kubernetes.
Like, once I know how bind mounts work, PVs and PVCs — persistent volumes and persistent volume claims — are actually quite easy to understand. Honestly, the Kubernetes language is actually quite elegant, in my opinion. I think it's actually quite easy once you know what you want to do, but you need to know what you want to do first. And that's the problem: most people don't even know what they want to do. Pulling it back full circle to the concept of base images — isn't this one of the reasons why organizations tend to organize around operations and developers working together? Because having one person with all the skills is super hard. And then that one person, they're not portable; you can't clone them. So having people with really serious expertise in certain aspects of that operation, or that development, is key. And base images help you bridge the gap on that Linux and systems expertise, because we bring the Red Hat expertise to that environment. Building and managing a base image is a completely different skill set — trust me, as a PM who has to deal with this every time we do a new dot release — versus when I would use it as a sysadmin. I mean, I love using it; it's so much easier to use. But building it is a lot of work. The building-it piece — of course you don't want to do that yourself. Why would I ever want to do that myself? That piece is a pain. Just like, honestly, I don't want to build a Kubernetes distro from scratch myself. I don't want to do that. That's why I use OpenShift or I use a cloud provider. I don't want to do it myself, in most cases. There's not a lot of business value in building your own base images. It's definitely better to leverage ours, with our expertise. In my opinion, it's better to leverage ours, and then leverage our expertise in the C libraries and the crypto libraries and all that magical stuff that you don't want to deal with yourself.
All right. Well, yeah. I think there is a lot of truth in what you're saying. I mean, fundamentally, the thing people have wondered about open source — and we've all been asked this many times over the years; I'm sure I have, and you guys have been in this for quite a while as well — is, look, if you can just download all this stuff and build it yourself, why not build it yourself? Well, because you're an insurance company. That's not what you're in business to do. That's not where you want your IT team spending their time — figuring out how to build their own operating system. And that was our long-time story around the operating system. And I think the same sort of story applies now in the world of containers and Kubernetes, and particularly to our subject today: why not leverage the work of a known and trusted source, rather than taking some of the risks associated with trying to cobble something together? Maybe you haven't considered some of those factors around GPL compliance. Maybe you haven't considered some factors around the attack surface and some of the things you might need to do to really ensure that you're using a highly secure image. Yeah, why do landscaping companies buy dump trucks? And why do they pay somebody to maintain them and modify them and do all the things they do? They don't do that themselves. It doesn't make any sense. You're a landscaping company, not a truck maintenance company. So, yeah, same exact thing. Well, actually, Hayes in the chat makes a very good point. He says it seems easy to wander down the rabbit hole of thinking you need to learn this by building images from scratch. Yeah, maybe you actually start with some images that have been built, and you understand that image very well. And I think one of the beauties of the way this works is you can start with that base image and then start to dig into it and understand what those components are.
And who knows, there probably are those edge cases where you might need to do something from scratch, but better to be informed by having understood what experts have built from scratch first, right? Yeah. And I mean, shout out to my old homeboys — I still love Gentoo, to this day. I remember when I went down the rat hole with Gentoo, I learned Linux better than I ever had in my life. I think it's a great learning tool. Would I ever run Gentoo in production? No. I did that when I was 24, because I was an idiot, but I wouldn't do that now. But I would still use it to learn, and run it on my laptop and mess with it. I mean, I don't have that desire now, because I already know how a lot of these things work. But I still think it was one of the most amazing communities ever, because you kind of had to work to get involved, and then all the people on the inside of the bubble were really good. So yeah, building a base image yourself to learn — absolutely. Building one and then running it in production — not so much. You know, like, I want to understand how tzdata works; basically, how do I build a Linux distro? You get some of that knowledge, you kind of understand that. I think that contributes to the Linux skills that you need to truly know how to run apps in a container. That's cool. I don't remember who told me this early in my career, but they told me that open source is free if your time is worthless. And so, you know, you use other people's work — leverage their expertise. And then that allows you to leverage your expertise without needing to bother learning all the things. We've beaten this horse pretty dead, but I will say, I thought I knew Linux pretty well until I became a product manager on the RHEL team. And then I started — I deal with Carlos O'Donnell and Florian Weimer and the glibc team.
And I deal with Matt Newsome's team about Go, and this weird thing with this feature in Go, and that. And you start to realize how all these different pieces of software have to come together in this magnificent way to make something actually work right — and keep working right for like 10 years. So when I had to sneak my last features into RHEL 7 and then launch RHEL 8 around the same time, that was really hard. I mean, you have to interact with so many different teams. And these guys and girls know so much about specific things — like how seccomp works, and why this one syscall has been deprecated for this reason, because it had this weird security thing, blah, blah. The rat hole is so friggin' deep, and it hurts my brain every day. And once you see that — and I think any senior, you know, principal sysadmin probably has some of this gut feeling — they're like, I don't want to know all this stuff. I literally don't want to know this. I do, because I have to build the damn thing, but even I don't want to know it a lot of the time. I want these different SMEs to know it, and then I just trust their opinions. Honestly, I'm like, okay, if you say we shouldn't do this, let's do that — sounds like it makes sense to me. You have to be able to trust the people around you. This is just basic trust. So, yeah. Well, yeah. Do you want to have to be the expert in glibc, right? Oh, jeez. No. To build a container. Yeah. It's funny, because people will think I am. I'm like, no, no, no, no. I'm not even close. I have like 10% knowledge, and the average human being has like 1% knowledge, so mine looks really impressive. And then when I talk to these guys, I know nothing. I'm like, I have no idea what I'm talking about. Well, a quick question: were we wanting to perhaps do a little demo?
Were we wanting to take a walk on the gangplank and see how that goes? Yeah. Oh, it is getting close to the top of the hour. Oh, it is. You're right. I just realized. We need the sweet, sweet internet points. Well, we are getting close to the top of the hour. I don't know how much time we require for a potential demo. Or are we going to bail on the demo? I could do a quick three minutes. How brave is Scott McBrien, is the question. I think I can build an OpenSSL image with buildah in like three minutes. Yeah. So I think we go for it, and I will discuss the sweet, sweet internet points if I have to interrupt, but I think we are going to go for it here. And while I'm thinking about it, we have to acknowledge the prom dress moment here — you know, Scott and I are both wearing our Level Up Hour swag. So anyway, let's go ahead and give this a shot. Yeah, let me share. All right. So you'll see on the system, I just have this tool, buildah. This is a RHEL 8.4 box. And then if you look — I'll run podman images here — let's grab micro. You'll see I have a UBI Micro image here. This one's been pulled down. So it's actually quite simple to build on UBI Micro. It's like one command: buildah from, and then we'll grab this guy right here. And then you'll see what it does — buildah has created an instance of a container here. This is its internal representation of what a container is, because it's not defined in the kernel; it's defined in user space. So buildah has its representation; podman has its own. Now I can do a buildah mount. And what this will do is say, okay, there's this container — now let's mount the storage of this container. And then I can do a yum install. So let's look at this guy. So what I'm doing here is using this complex yum command — it's actually not as complex as it looks.
The magic of this is the install root. I'm using the yum on the underlying container host — on this RHEL 8.4 box — to basically install into this install root, which is the directory that that buildah command gave me back. And what that is, is a container, essentially; it's a root file system. If I go out here and look at this bad boy — actually, let me do this real quick so you can see — this is a root file system; it looks just like a server. And so if I run this command, what I'm going to do is install OpenSSL. I'm basically going to install OpenSSL out into this directory. On top of — I think I screwed it up. That's what I did. Let's do this one more time. So really, we're talking about placing files within the base image. That's essentially what you're doing here: you're using yum on the host to unpackage those files from the RPMs and then place them in the file system within the container image. Yep, that's exactly right. And you see it looks like a normal install, because there's already an RPM database in UBI Micro. So what it's doing is saying, hey, I'm going to install these packages and then add them to the RPM database in the UBI Micro image. And you'll see it's 36 packages because of dependencies — there's a decent number of dependencies — but you're actually going to be surprised that this is not that big. These are pretty small dependencies. There's nothing crazy in here. And I'm seeing a lot of that listed in KB, right? Exactly. We've done a pretty good job of minimizing these, and like I said, we're working on even more in RHEL 9, but it's decent. This is pretty good. I'm pretty happy with it. You'll see here — this thing should finish pretty quick; I think we'll be real close to the three-minute mark. But you see glibc, some glibc things — you start to get a feel for the kinds of things that get installed here. CA certs. This should take one second. Boom.
And now let's do a commit. So if you look — I think it was ubi-working-container, I believe. Yep, there it is. So I just committed that bad boy, and then let's do a podman images. So what this did was write it as ubi8-micro-ssl, and then we'll do a podman images and look for ubi8-micro-ssl. And actually, it's saying that it was only 38 megabytes — I don't buy that. That's interesting. For some reason it's saying I only added 2.6 megabytes, which seems strikingly fishy, because I could have sworn the last time I built it, it was 63. Something's going on in my file system, I suspect, but nonetheless, it's pretty darn small. Well, is that a compressed size or an uncompressed size? That should be uncompressed, on disk there. Yeah, that's interesting. I don't know why it's so small. I would think it'd be about 60. Well, look at this — you can see here, what were these things? One point something. That's pretty small. So maybe we had a dependency torn out somewhere, but this is two megabytes. Yeah, that's 25. That should have added more than that. But I don't know. I don't think that's actually 25, though. I think it's compressed on disk. Yeah, I don't know. That's an interesting one. Something wacky with my tools going on there. I want to say the last time I built this it was about 63 megabytes, but nonetheless, pretty darn tiny. Here it's showing I only added 2.6 megabytes, which — hey, if somebody fixed something that changed the dependencies and made this smaller, I'm even happier. And so the question is: might you discover, when you build something that you've built before, that additional optimization and improvement has happened in terms of the size of the image? Yeah, absolutely.
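The demo narrated above, collected into one script: create a working container from ubi8/ubi-micro, mount it, install openssl into the mounted root with yum's `--installroot`, and commit the result. This is a sketch under stated assumptions — it requires buildah and podman with access to registry.access.redhat.com, and rootless users would wrap it in `buildah unshare`:

```shell
#!/bin/sh
# Wrap the flow in a function so the steps can fail gracefully when the
# registry or repos are unreachable.
build_micro_openssl() {
    ctr=$(buildah from registry.access.redhat.com/ubi8/ubi-micro) || return
    mnt=$(buildah mount "$ctr") || return

    # Use the host's yum to install into the mounted container root,
    # skipping weak dependencies and docs to keep the image small.
    yum install -y --installroot "$mnt" --releasever 8 \
        --setopt=install_weak_deps=False --nodocs openssl || return
    yum clean all --installroot "$mnt"

    buildah umount "$ctr"
    buildah commit "$ctr" ubi8-micro-openssl

    podman images ubi8-micro-openssl   # inspect the resulting image size
}

# Only attempt the build where buildah actually exists.
command -v buildah >/dev/null 2>&1 && build_micro_openssl \
    || echo "buildah not available; skipping build"
```

Note that the install looks like a normal yum transaction because, as mentioned in the demo, UBI Micro ships with an RPM database even though it has no package manager of its own.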
And it's a place we're focusing on, and it's a place where, if it doesn't break something, we might make changes. And you'll notice in that yum install command I had some optimizations: I had set --setopt install_weak_deps=False. So if people had updated their weak-deps lists — even in RHEL 8, this would change over time. Since I'm using that optimization, I would pick up those improvements. So yeah, that is exactly right. And in RHEL 9, I should pick up even more. When I run a RHEL 9 version of this, you'll see that it will end up being even smaller, because there are more optimizations in the dependency tree in the newer versions of Fedora and then RHEL 9. Quite the teaser. So with that, it's time to talk about the sweet, sweet internet points. And we have a really, really tight race here on the points. Narendev, of course, at 6300, but NLHACM at 6200, and some other leaders on the leaderboard here. So in order to get in on the sweet, sweet internet points, the SSIPs, hit one of the URLs below that you can see on screen. And I think we might also post that into the chat. And that's how you can perhaps try to get on the board, maybe even catch up — good luck with Narendev. And so, just a last word here: thank you, Scott McCarty, for joining. This has been enormously informative and fun. Thank you, Stabby, as always. And thank you, everybody, for watching and listening. Make sure to like and subscribe, share it with your friends, tell your family — your grandmother is especially interested in this stuff. So until next time, thank you for joining the Level Up Hour.