All right, sweet. Can you guys hear me OK? I think you can, yes. All right, so I'm going to ask a provocative question: do Linux distributions still matter with containers? I get people asking me questions like that all the time. So a while back — I guess it was February, so about six months ago — I published a blog with the same title. And it's really a combination of this conversation and a bunch of different people talking about distroless and scratch builds and all these different things.

I think whenever we get into new technology — something new comes out that's hot — we go back and re-evaluate. I was around when Linux first came out; I was in that first wave. Then virtualization came out. And now containers. Every time one of these major pieces of technology comes out, we go back and ask: do we need these other things? Perhaps we've forgotten. We re-evaluated, even with virtualization, whether we needed the operating system. And now we're asking it again with containers.

What inspired my thoughts around this, and the entire framing of this conversation, was actually a random article I found that talked about the fact that, in our minds, we have a bias towards innovation. In fact, the word innovation has become a buzzword at this point in a lot of ways. The way I heard it described by somebody I was talking to was: when we build a new bridge — when the Golden Gate Bridge was built — we had a tuxedo dinner for the engineers and the architects. But 30 years later, 100 years later, when the construction workers resurface it, they barely get Taco Bell. They go to Taco Bell for lunch. So we have this bias towards innovation, and we probably then subconsciously undervalue maintenance and don't think through all the things that are happening. We don't necessarily know which thing is actually making our life better. I think that's probably even true with our cell phones and everything else — we constantly get these new things and then have no idea how they're affecting us over time.

So, the question I mentioned: I have the young kids come to me and go, I don't wanna care about the operating system anymore. Okay, how many of you don't care about the operating system anymore? Raise your hand if you don't care about it. He doesn't care, he's good. He doesn't care either. So two people don't care. All right, that's good actually. I'm preaching to the choir, like I said, to an extent.

So let's use tires as an analogy — I love this one. How many of you are into cars? Raise your hand if you're into cars. All right, good, this is a little mixed. How about the people that aren't into cars? How many don't care about the tires on their car? All right, good. Let's see — how many of those people that don't care about tires have kids? God, just kidding, I'm going right for it. So you have kids and you don't care about the tires. I was like, how's your stopping power in the rain? Is it good? Don't know, don't care. I'm not a lawyer, but I would suggest that maybe you re-evaluate that. So right, you have a minivan. All right, do you care about the tires on a minivan?
I might at first glance say I don't care, but then, wait a minute, I think about the problem a little bit deeper — maybe I do care. All right, so a sports sedan, you know, a $40,000 or $50,000 car, fairly quick. Driving in the mountains — the advertising shows that's what I'm supposed to be doing with it, right? Do I care about the tires? How many of you have BMWs or something like that? All right, so do you pay attention? How long do you research the tires before you buy a set for it? Days? No, don't care, don't know. Do you race it at all? Yeah, because they're ridiculously expensive, right? Yeah, exactly. Because even the cheapest ones are pretty good, right? The quality is always good, it's just the cost that's too much. Fair.

Now, if you buy one of these, you care about the tires. How many people in here have a Ferrari? I don't. Yeah, you have a Ferrari? All right, I'm thinking maybe if you buy a Ferrari, you probably care about the tires. And honestly, price probably doesn't matter anymore. You're not going the other way — you're like, I just want the best thing. You're like, hey, tire guy, put my tires on, they're awesome, right? And then let's go all the way — racing. We know they care about tires there. We absolutely know that the tires are gonna be the difference between winning and losing.

And so the question becomes, all right, what kind of system are you building at work with containers? Safety — tires probably matter. Road performance, amateur racing, professional racing — tires probably do matter. But we start to think a little more nuanced about it. At first glance, I may say, I don't care, I don't want to have to think about it. But maybe I don't want to think about it because, to be honest, I'm an amateur. Maybe I'm not a professional at this. Maybe I'm just driving a minivan around. Maybe I'm just building containers for my house, for my router or something. Maybe I don't care that much. But if I'm starting to put things into production and do really professional things with them, and there are transactions happening and money going through it, then it starts to become a completely different set of calculus, right?

So maybe I've convinced you: okay, we do care. But how do I even know how to think about this? With the analogy, it's fairly easy to see why you care about tires in certain contexts, but with containers, what is the context, what are the criteria? So let's start with some criteria for understanding. Traditional options, typically, when you're doing containers, people look at RHEL, Fedora, CentOS, Debian, Ubuntu, Windows. Obviously you care at some level, because Windows containers don't run on Linux at all. Linux kind of runs on Windows, but typically now with WSL 2 it's a real Linux kernel, so it's basically Linux running on Linux in a VM. Before that it was this emulation thing, which was crazy, and they were like, yeah, that's too hard.

So on this side, though, what happens is, as I mentioned, once we have this big new technology thing in front of us — when I buy a new car, that's when I care about tires the most, right? Or when I get into a new hobby.
When I first got started F1 driving, I really cared about the tires a lot — I'm joking, I've never done that. But as you get into different aspects of these things, that's when you re-evaluate. And I think what happens is people go, oh, I'm gonna do containers, maybe I should check out this distroless thing, or this scratch thing, or RHEL minimal, or Alpine. And then they re-evaluate what they care about and come up with a new set of criteria in their brain.

And the thing I find hilarious is that, typically, they don't evaluate it from a purely engineering perspective where there are trade-offs and costs and benefits. I talk to my mechanical engineer friends and they are not emotionally attached to which bolts they use when they're building machinery to manufacture brakes. My brother-in-law does this, and he never says, I love the bolts from blah-blah-blah, they're amazing. But in software we have a massive emotional connection to the different tools we use. We talk about it all the time. It reminds me more of people in the 1700s, who probably cared about their individual tools a lot, lot more. We're still, in my mind, a little immature compared to mechanical engineering and electrical engineering — we have a much more emotional connection.

So we attach certain values, like, well, size matters. Okay. And then we'll even do the thing where we attach: well, the security has to be better if the quantity is smaller, right? Attack surface, attack surface, attack surface. But the attack surface could be one package, and if that package's quality is terrible, it's very easy to hack. It's a two-dimensional problem, not a single-dimensional problem. So I'd argue we need to think through these things a little more nuanced and actually dig into that 201 level.

To use the old joke: there is no cloud, there's just someone else's computer. Well, I would say a corollary to that is: there is no distroless, just another dependency tree that you manage yourself. That you have to patch over time and change and make sure it works. And then the dependencies for that dependency, and then the dependencies for those dependencies, and then the dependencies for those. The next thing you know, you're building a Linux distro in your container. And so, I don't know, maybe I've gotten to the point where, yeah, all right, maybe the Linux distro does matter in this.

And then let me go back to this one. Even distroless — actually Neil and I were digging into this last night out of morbid curiosity — still uses Debian. It's still using the Debian dependency tree for some of the stuff. It's using the Golang dependency tree upstream. It's using PyPI for the Python stuff. It's still relying on somebody's dependency tree to go grab all the dependencies. So there's somebody that maintains that dependency tree — they're not actually doing it themselves. So you have to say, maybe it's not just Linux distros, maybe it's anybody that builds a dependency tree. And so you start to think about what criteria you should even use to think about this. Well, I think they're very similar to the criteria that you would use for any Linux distro if you're looking at a container image.
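(As an aside, you can see the distroless point for yourself — those images still ship Debian package metadata for whatever libraries are inside them. A rough sketch; the image name and commands here are illustrative, not from the talk:)

```
# Create a container from a distroless image without running it, then list its filesystem
podman create --name probe gcr.io/distroless/base-debian12 /bin/true
podman export probe | tar -t | grep dpkg
# You should see /var/lib/dpkg/status.d/* entries describing the Debian packages
# (glibc, libssl, ca-certificates, ...) the image was assembled from.
podman rm probe
```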
So I've now probably convinced you that you need to think about the quality of that dependency tree — the C library, for example. I'll give you some examples. musl libc is very small. Who thinks musl is faster than glibc? Faster to run? Yeah, I don't think so. There is a lot of money spent making glibc very fast in many, many, many use cases. Not just fast, but consistent, so it always gets the same response — works for the real-time kernel, blah, blah, blah. There is a long tail of 40 million things that get added to glibc. So the quality of the C library matters, not just the size of it. Core utilities are the same thing. BusyBox is great, but GNU coreutils have their own advantages — if you have a bunch of legacy scripting that relies on this stuff, the command line options are different, et cetera, et cetera.

But I would say even more important — probably the most important thing on here — is life cycle. I kind of tied it together when I said you start to manage those dependencies. Now think about that life cycle two years from now: you have a CI/CD system going, you're just pulling the latest version of glibc, and then glibc changes and your app breaks in some strange way. You're like, okay, what's wrong with this? It's running 20% slower — why? And I'm being nice to you here: I'm telling you it's glibc, I'm telling you what the problem is. In reality, you're gonna run your app, it's gonna be 20% slower, and you'll have no idea what the hell happened. You're gonna be farting around for three hours or three days figuring out which dependency broke things. Then you're gonna be figuring out, wait, what happened to this dependency, what regression happened, blah, blah, blah. That's stuff I don't like dealing with.

Or a better example: a CVE happens. You're like, oh, this version of glibc or OpenSSL that we use has some CVE. Okay, well, then I just roll to the newest version, and then it breaks API compatibility. Now I've taken what was essentially a CI/CD yum-update event — which didn't even require a sysadmin, didn't require anybody, it was just pulling updates — and turned it into a developer event. Now a developer has to get in there, muck with code, change arguments to function calls. Whenever a new version of a library comes out, maybe a function call changes and you just have to get in there and hack around with the code. Now it's a two or three hour developer event, at two AM or something — hopefully not at two AM, because with containers, hopefully you're not doing updates in the middle of the night. But you see, you've taken an operations event, or even better, a CI/CD event, and turned it into a qualitative thing where human beings have to go touch it.

So managing those dependencies — how you manage the life cycle of those dependencies, how long they're supported for — matters, big time, even in containers. Because really that's based on how often you wanna re-architect your app, not on how often you rebuild the container. I think people confuse those two things.
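(To make that CI/CD yum-update event concrete, here's a rough sketch — every image name and path below is a placeholder, not something from the talk:)

```
# Rebuild the image on a schedule; no human touches code
cat > Containerfile <<'EOF'
FROM registry.example.com/my-base:8
RUN dnf -y update && dnf -y clean all    # picks up the patched glibc/openssl builds
COPY app /opt/app
CMD ["/opt/app"]
EOF
podman build -t registry.example.com/my-app:$(date +%Y%m%d) .
# As long as the distro keeps the ABI stable, that's the whole event. If the update
# breaks API compatibility instead, this turns into a developer event.
```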
Second, security response: having some set of human beings whose job it is to go look at that set of dependencies — that dependency tree, aka the Linux distro — look at it, analyze it, proactively figure out if there's stuff wrong with it, write down what's wrong with it, file CVEs, and then incrementally make that sandcastle better over time and constantly maintain it so it doesn't get washed away into the ocean. I think that's an important thing to think about.

I think the same is true from a performance engineering perspective. Having a set of people — and this could actually be users; having millions of users using those bits already is one safety mechanism. If I'm using an Apache that's built into a Linux distro, there may be millions of users using that Apache. I put that Apache in a container. It's a heck of a lot better to use the one that I already know is tested by 50 million people than to go build my own from scratch with some version that nobody's using. So distros do create gravity around certain versions of upstream software. But then also having specific professional people that go and load test it and write white papers — for example, in the CNCF, go do upstream testing where we run a 2,048-node cluster and test all that software stacked together — and proactively push the software. Because there is no such thing as generically good performance. Typically when you're tuning, you're tuning for specific purposes. General performance is something that improves over time, but to specifically tune something takes human beings going in and tuning for specific workloads.

So how do these things work? I'll now go into the deeper part where I try to hurt your brain. Let's start with: how many of you have programmed in C before? Decent number, good. And for the people that have not programmed in C — who here has never programmed at all? Raise your hand. All right, good, I'm talking to a fairly technical audience. So when you program in C, to refresh your memory, you can basically just use GCC to compile your .o files and link them into a binary — pull in libssl, pull in the components of glibc that you use, different libraries — and create a binary that has everything in it that it needs. Literally, when you type ./ and run that binary, the Linux loader will know how to load the thing into memory and start executing at bit one and just start going. That's called statically compiling. We learned how to do this in, I don't know, the 1970s, ish. Before that, actually, it was worse, because you couldn't even create a binary that was portable at all. But at least with this, we have some level of portability: I can run this binary on any kernel that's similar to the one I compiled it on, and the source code itself is completely portable because I can move it between different architectures, et cetera, et cetera.

But you should be thinking, in the context of containers: oh, wait a minute — obviously an ARM binary is not gonna run on x86, obviously a POWER binary is not gonna run on ARM, blah, blah, blah. So there are definitely limits to compatibility and portability, and this is one I harp on a lot. People say containers are portable. I'm like, define portable.
I mean, yeah, I could pull an ARM manifest list and cache it on an x86 box. That doesn't mean I can run it. It's compatible in the sense that I can pull it with Podman or Docker and save the image locally — but you can't do anything with it, you can't run it. So this is where we start to realize, oh, wait a minute, I do need to understand how some of this works.

The downside of statically linking, though — and this is becoming trendy again, especially in Golang and C; it's become kind of a hot hipster thing to statically compile stuff, and people have completely forgotten why we created libraries and why we have dependencies — is that when you go to update it, it's a pain in the butt. Every time I need to update it, I have to recompile. If libssl has a problem, I now have to recompile the entire binary. And if I have libssl in 20 binaries, I have to recompile all 20 of those binaries. Rebuilding a container image may take five minutes or two minutes or one minute; recompiling a giant C program and then rebuilding the thing may take 10 or 20 minutes. So we get into this thing where it gets worse and worse.

So instead of reinventing the wheel and going back to 1970-something, we realize, oh, wait a minute, there's this awesome feature in Linux with ELF binaries where we put the loader as the first bits in the binary. The binary is smart enough, when the Linux loader loads it, to basically read that and go: hey, go find those files on disk. OpenSSL and glibc don't actually have to live in the binary, because the Linux operating system has this capability built in — and so does every other operating system on the planet. With these technologies — GCC, ELF, and the loader — we can go find those dependencies on disk.

But we've created a new problem now: we have dependencies. How do we get those files on disk? That's a different problem — a logistics problem. And if you just hand those dependencies out with your binary, now you're in this hell world that Mika just talked about with Fedora Silverblue. How many of you were in that talk? Some of you. If you have two different overlays and they deliver two different versions of a library, you now have two copies of that library on disk, because we're not managing the dependencies, we're not managing the versions of the Flatpaks and the set of dependencies that go into them. So this problem is very apropos of that. You start to realize, oh man, we have a dependency problem and a versioning-of-dependencies problem, and that's a really hairy problem — spatial and temporal in nature.

So to fix that, we come up with packaging. What is packaging? Packaging is a set of human beings with SME knowledge who go out and curate that set of dependencies into what we call a dependency tree, which is nothing more than a group of these dependencies that we put in one place and version. We go: that's Fedora 30, that's RHEL 8, this is Ubuntu 14.04, whatever.
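(A rough sketch of those two worlds — static versus dynamic linking. The commands and library paths are typical of a glibc-based system, not literally from the talk:)

```
# Statically linked: everything gets baked into the one binary
gcc -static -o hello hello.c
ldd hello                      # => "not a dynamic executable"

# Dynamically linked: the ELF binary records the loader and the shared libraries
# it needs, and those files have to exist on disk at run time
gcc -o hello hello.c
ldd hello
#   libc.so.6 => /lib64/libc.so.6
#   /lib64/ld-linux-x86-64.so.2    <- the Linux loader baked in as the first bits
```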
And then we have a dep solver — a dependency solver built into the packaging system — that knows to search the right version of the dependency tree, aka the yum repo, pull it down, and install only the specific versions of that set of dependencies that I need. So now when I install this version of libssl and this version of glibc, it gets onto disk, the Linux loader can find it, boom, we're good. We can load the binary into memory and off we go, everything's good.

This all happens in a container too — none of this is different. And just because we have two different containers, it doesn't necessarily mean we want two different versions of libssl all the time. Sometimes we still wanna make sure they're using the same version of libssl so those two containers perform identically — sometimes we want them to communicate and perform identically. So these dependencies — this doesn't solve everything, but we're getting closer and closer to understanding why we built all this stuff.

The same is true with other things, not just glibc. With Python, a lot of people don't realize it's the same thing: instead of this happening when I load the script, it happens when I load the Python interpreter, which is written in C and still relies on OpenSSL and libssl and things like that, because most people don't reimplement cryptography algorithms in the actual scripting language — except for Golang. In fact, as an aside, Red Hat is patching Golang to have an optional ability to use OpenSSL, because OpenSSL has been through all the FIPS requirements, so it's much easier to get through FIPS compliance that way than actually using the BoringCrypto stuff that's built into Golang. So even that is solved, optionally, in different ways.

But you'll see, now we have another set of dependencies. Not only do we have the operating system dependency tree, we now have pip and npm and PyPI and all the other dependency tree tooling for every different language, because nobody could ever standardize on one. I thought about starting a business around this as I was doing this — I need to start a business where I package all of that stuff, have it in one place, and create the ultimate Satellite server for every possible version of every language: Maven, everything. I know — all of us, let's go start it.

So, all right, now we understand why we have dependencies, why we don't statically compile things anymore — because that's 1970 and it's cool, but it's not that cool. It's cool for some little things, but not in the general case. But now let's think about this new level. We have this new set of technology, these OCI container images. And now it's the same problem Mika described with the Flatpaks: we've got another set of metadata on top of this dependency tree, and we've got snapshotted versions of these dependency trees basically baked into these container images — or OSTrees; it's the same exact technical problem. With containers, though — I don't know how many of you saw the Podman talk by Urvashi and Sally, but they talked through this a little bit — there are a couple of different JSON files that basically get created.
Basically one gets consumed from the container image: the container engine analyzes it and uses some of its components, like architecture and other things, to decide which version to pull — which versions of the actual blobs to pull — and then assembles some of those pieces into a new JSON file called config.json. It hands that off to runc once it has expanded the rootfs, which has all of these dependencies in it.

The difference now, though, is that the dependency tree came along for the ride — it was already installed correctly. The developer decided what dependencies needed to be pulled in when they were building the code. The end user doesn't decide any of that; I just get whatever they gave me. But then the same thing happens when I load this into memory: the httpd binary still has that old Linux loader reference, and it goes and finds those libraries — except it finds them in the rootfs that's been expanded from the OCI container image on disk — and then all that magical stuff happens.

So we have some new technologies — the OCI spec, tar, gzip, JSON. These are very basic technologies, but this is how you construct and use an OCI, aka Docker, container image. Dan's not here, so he can't yell at me for using the D-word.

But now we can create this new, nastier dependency tree — a layered dependency tree. I like walking people through this because people always get confused. All right: in this example I have the base image we created, and then these other images are built from it. Maybe this person builds on top of that and replaces the Apache, so now I have the same glibc and libssl with a newer or different version of Apache. Then this person builds over that one and replaces the glibc with a different version, so now the glibc and the Apache are different from the base image. And in this one they replace the libssl, so the Apache and the libssl are different. Each one is a different set of permutations. Now all four of these container images look kind of the same — they could even have the exact same Dockerfile to build them — but they have completely different performance characteristics, because maybe there's a regression in glibc, or a regression in libssl, or maybe there's a new feature in libssl that makes it faster. They have different performance characteristics, different security characteristics, but they look identical architecturally — there's nothing different architecturally.

All right, so now that I've basically scared you about container images — what happens next is we define those different sets, right?
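(A quick way to see that kind of drift for yourself — the image names and tags here are pure placeholders:)

```
# Two images built from what looks like the same Dockerfile, at different times
podman run --rm registry.example.com/webapp:build-42 rpm -q glibc openssl-libs httpd
podman run --rm registry.example.com/webapp:build-57 rpm -q glibc openssl-libs httpd
# Architecturally identical images, but potentially different glibc/libssl/httpd builds
# underneath -- which means different performance and security characteristics.
```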
So in that particular slide I showed a different version of Apache, but here I'm showing a version of MySQL or something. And then we communicate this to the end user: the builder of the container image builds out this dependency tree — this actual versioned dependency tree — uses it to construct a container image, then builds another dependency tree, and then labels them and communicates to the end user: this is how you use the 4.0.1 version of this stuff, which is this set of permutations, and here's how you use the 4.0.0 version, which is a different set of permutations — different permutations of glibc and Apache, et cetera. But it's another layer that adds confusion for people. And you think, okay, do I wanna do all this stuff myself? Do I wanna manage every single piece of this myself?

I won't go deep into this, but this represents what happens at start time, which Urvashi and Sally covered. The container image gets pulled down; there are variables embedded in the image, there's metadata, there's a config file — basically a JSON blob. Then, for example, if you run podman run and pass it a command, you're overriding the CMD variable: if there was a CMD in the container image, the user option overrides it. So the container engine's job is to take the summation of the stuff you typed on the command line plus what's in the image — the image variables, which the user can override — and then the engine itself can also override some things, like, for example, SELinux rules. The container engine decides whether it uses those SELinux rules, not necessarily the end user, and seccomp rules, things like that. The user can override them, but typically the container engine will default them. Then it creates this config.json, hands it off along with the rootfs to the container runtime — basically runc — and then the kernel goes and does its thing. So there's a lot of complexity happening there.

And as I showed you before, now you have these different sets of dependency trees, and you can start to architect at a higher level than the Linux distro, if you will. This is out of the Linux distro's hands. It maps more to the actual business problems that enterprises have, where I'll have a customer that says: I want a RHEL base image, and then we want to add some stuff that everybody at our company should use — we want this specific version of libssl, we want the /etc/motd to say something like, don't break into our computers, we'll sue you, blah, blah, blah. And then we'll have SMEs for Apache and Nginx and MySQL.

What this really represents is the same collaboration that we would do with those SMEs, except historically we collaborated in a file system. There would be some kind of burn-in team that would go and build a server. Then they would hand it off to sysadmins, and the sysadmins would configure the core build. Then the sysadmins would hand it off to the Java people, and the Java people would dump a bunch of tarballs on the file system, because they're crazy. And then the Java programmers would say, thanks for those JVMs, we'll dump our jar files in there.
But if you really think about what's happening at a higher level here, this is a place where people collaborate in the file system. We may have used Ansible for this, we may have used RPM, we may have used Yum, we may have used all these different tools, but it's flour, sugar, eggs, and water for us to bake cakes, with different SMEs — subject matter experts — owning different parts of these stacks. This is probably where we should spend more of our time, rather than down in building all the dependencies again. We should probably think about how we wanna remix the flour, sugar, eggs, and water rather than milling the flour ourselves. I don't necessarily wanna grind up flour at my house; I wanna use the flour in the way that I want. I would say this is where we should start to think about it more. And this is what it would look like from a container perspective: there's a lot of work to be done by the image builder around how you tag things and how you communicate to the end user how they should consume your container images. But I don't know that there's a lot of value in going back and re-evaluating the underlying dependencies and mucking with those and rebuilding them all yourself.

So I talked about the criteria, but what about the context? From this perspective, we thought containers basically solve the works-on-my-laptop problem. We always had this problem before — how many of you are familiar with the works-on-my-laptop problem? Good, a lot of people. So historically, I worked at American Greetings back in 2003 or 2004 or '05. We would have developers build Python apps, and they would literally develop on their laptops — we didn't really even have VMs on laptops back then. They would just have Python on their laptop, some random version of Python, not the one we had on the servers. They would use pip — or at the time, easy_install; yeah, we were using eggs and easy_install. They would muck with those and get some hacky thing they could pull together, then hand that to us in ops and be like, okay, run my thing. And it would break in all kinds of horrible ways, because I would try to install a different set of dependencies with a different version of Python than they were using, and there would always be some weird circular thing where certain dependencies would break, blah, blah. That was an annoying problem. And we had it with Perl, we had it with Python, we had it with Ruby, we had it with all these different things. If you were using different versions in dev and prod, you would always run into these annoying upgrade/downgrade, chicken-and-egg problems. You're like, oh, if we upgrade, then this library breaks, but if we downgrade that version, then this other stuff won't have the things that ops wants, blah, blah, blah.

So containers are beautiful because, at least from this perspective, we've solved the works-on-my-laptop problem. Now at least we're using the same version of that dependency tree in the container image. And now there's a currency for the operations and development teams to talk to each other.
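(That currency, and the layered SME collaboration above, can end up looking something like this — a rough sketch; every image name and file here is illustrative, not from the talk:)

```
# The burn-in / platform team publishes the corporate base image
cat > Containerfile.corp-base <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi
RUN dnf -y update && dnf -y clean all
COPY motd /etc/motd                      # "don't break into our computers, we'll sue you"
EOF

# The Apache SMEs layer the web server on top of that
cat > Containerfile.corp-httpd <<'EOF'
FROM registry.example.com/corp-base:8
RUN dnf -y install httpd && dnf -y clean all
EOF

# The application team only adds their content on the very top
cat > Containerfile.app <<'EOF'
FROM registry.example.com/corp-httpd:8
COPY site/ /var/www/html/
EOF
```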
You're like: here's my Dockerfile, here's my container image, go muck with it. You can rebuild it from scratch, you can maybe tweak it a little bit, but we're now all pulling from the same set of dependencies, and we have sort of a contractual agreement — at least written in code — about what we're actually using to build. We know what grocery store we went to for the flour, sugar, eggs, and water. We know it's the same quality of milk, the same quality of butter; we know exactly what we're dealing with.

But it doesn't solve everything. Here's one it doesn't solve — I call it the one-million-transactions-per-second problem. I can fire up an app with a web server on my laptop, build a container image, push it to a registry server, have the ops team pull it down — and how do I know it will actually run at a million transactions per second? What about that set of actions that I just took guarantees it will work at a million transactions per second, if that's a requirement? Nothing, right? I have no idea if it will work. We can load test it and start to understand what its baseline performance characteristics are. And again, now we at least have some basic understanding: we used this set of flour, sugar, eggs, and water and it was able to perform at this level — this dependency tree worked that well. But it doesn't really solve actually getting better performance.

The same is true with security. Nothing about agreeing on which versions we're using solves a security problem. If I fire up that same web app on my laptop and then throw it into production by pushing it to a registry and having them pull it down, nothing says it won't be hacked in two seconds as soon as you put it into production and put real traffic on it. Nothing about a container stops that from happening; nothing makes it better or worse.

So looking back, the only way to really solve that problem is to have bits that we know actually work at a certain quality level. And this is where it differentiates — this is where you start to really decide which Linux distribution you wanna use and why. Because, again, some dependency trees — like with the C library, musl may be very good for having a very small library, but it may not be very good for a real-time use case, et cetera, et cetera. The only way you can really know that those bits are battle-tested is if they are used by millions of people out there. So now, again, you're relying on the quality of the Linux distro, and knowing that that Linux distro has been battle-tested and has run things like the New York Stock Exchange. Now you start to have confidence that this permutation — this set of dependencies pulled together into this Linux distro in this container image — actually functions and has a baseline of security. There is a certain minimum level of performance and a minimum level of security. At the very end, I would say any Linux distribution is probably gonna be better than no Linux distribution. If you build this dependency tree yourself, you have no history of usage for any of these bits.
I mean, you have a little bit, because these are still upstream projects — but you have none on the code you wrote, and only a little more on some of the upstream stuff, depending on how battle-tested it is. But I would still say that intermediary, the Linux distro, pulls those things together: Apache releases thousands of upstream versions, but really only a few of those — it's almost like the tags on a container image — are the ones that get pulled into Linux distros and then really battle-tested at scale. So I think the value of the Linux distro is pretty obvious here. And I don't think we've thought through, as an industry, how to explain to the rest of the world what all these Linux distro packages actually do for you.

Now I'll leave you with one that Red Hat released in May, called Universal Base Image. I'm biased, I'll fully admit it, and I'm only throwing it up here as an example. This one is built on the RHEL bits — essentially the Red Hat Enterprise Linux bits. There's a Red Hat Universal Base Image 7 and a Red Hat Universal Base Image 8. It's a set of RPMs — I try to show it here — and it's not all of RHEL, not all of Red Hat Enterprise Linux. It's a smaller subset of things like Python, Ruby, Java, Node.js — things you'll coincidentally notice look a lot like Software Collections and Application Streams, because those are the things that end users need. They don't necessarily need graphical packages to run servers. They don't need kernels — there's no kernel in Red Hat Universal Base Image, because you shouldn't be booting that thing. But all of the dependencies are there in a set — a dependency tree that's a subset of the giant dependency tree that's in RHEL — to build what you would typically consider cloud-native apps: things in Python, Ruby, and so on. I wouldn't necessarily call Perl cloud-native — although I have people that really love Perl, and I did a lot of Perl programming myself, so I don't want to insult it — but it is definitely an end-user language.

Then we re-swizzle that set of dependencies into three different configurations and release them as different base images. We release a minimal one, which I would say is good when you really do have a small C binary that maybe only needs libssl and glibc. You're like, all right, cool, my binary with that is eight megabytes, sweet, I'll just use this minimal image. I get access to a Red Hat supported glibc that gets backports and survives for 10 years, and that's actually a really good quality base image to use, because now I can run yum updates on that base image for 10 years in a CI/CD system and I just know my binary will always be patched for the latest CVEs and I don't have to worry about it. Then there's the standard one — we typically say that's the 80% use case; it looks the most like a regular base image that you would see.
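(Going back to that minimal case for a second, a rough sketch — the registry path is the public UBI 8 one, but the binary and everything else here is made up:)

```
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi8/ubi-minimal
RUN microdnf update && microdnf clean all     # the long-lived "yum update" path for the base layers
COPY my-tiny-binary /usr/local/bin/my-tiny-binary
CMD ["/usr/local/bin/my-tiny-binary"]
EOF
podman build -t my-tiny-app:latest .
```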
And then we have one that will actually let you run systemd in it. I know some people hate that and some people love it, but I will fully admit I used it, because I'm a lazy bastard. I used it for my wiki when I migrated it — I migrated from RHEL 6 to RHEL 8, and I basically went to UBI. I run a MediaWiki instance with Apache and MariaDB side by side in a single container, and I run it in read-only mode. There's no 3306 port exposed anywhere, internally or externally — only within that container can they communicate. The only things exposed externally are port 80 and port 443, and it's running in read-only mode, so the only things that are writable are basically /var/www and /var/lib/mysql. For me, that was the level of security that gave me a warm and fuzzy. But the beauty of it is my Dockerfile is super simple. It's basically yum install mariadb, yum install httpd, then all the PHP requirements, and then systemctl enable httpd, systemctl enable mariadb, and that's it. It just works. Beautiful. And then I have a little bit of a nasty command line for Podman to fire it up in read-only mode, but I capture that in a script. It's useful for those kinds of things.

And the beauty of Universal Base Image is that it's a very high quality dependency tree that we released under a different end user license agreement. It's more like CentOS or Fedora, where you can basically distribute it anywhere you want. Whereas RHEL — Red Hat Enterprise Linux — is a product, so it has its own end user license agreement where customers sign an agreement that says they won't just distribute it all over the internet or install 50 copies of it and only pay for one, things like that. Universal Base Image has no restrictions like that.

So, of course, I think it's valuable. I think Linux distributions are valuable in a container, and I've just explained what I think a very good example of one looks like, which is UBI. I'll leave you with some links to other things — there's an introduction to Red Hat Universal Base Image — but in general I would argue, in defense of Linux distributions: they do still matter, even in containers. That dependency tree, and the quality of that dependency tree, absolutely matters.

So with that, I will say: are there any questions? If you have questions, I'd ask that you raise your hand, and if you can, try to move to an aisle so I can get the microphone to you. We have about five minutes for questions here. Scott, are you open to questions outside the room afterward? Yeah, absolutely. Great. Does anyone want to make an argument that they don't matter?

I like tires on my car. I like stopping in the rain, like when my kids are in the car. Well, I only have one kid, so it's a single point of failure for me — this is an HA thing. I don't have two kids, so I have to care about the tires. I'm just saying, when you only have one primate offspring, you have to be careful. It just happens — chimpanzee, you drop the one offspring, that's it, you're done, it's a single point of failure. I've gone down a rat hole. It's a dependency tree. It's just genetics, nature.

Any questions that aren't about tires? Nothing? Does anyone want to yell at Scott? Do you guys want to give this talk? I would love for you to give this talk and educate other people on why Linux distros do still matter. If you're interested in that, let me know. You can definitely get these slides and do it. Sweet.
We need more dependencies. Recursion. All right, I'll be around to talk afterwards, if not. Thank you. Thank you.