Hello — okay, I think I have to talk into this for the recording. Yes, I'll talk loudly. Hello everyone, how are you feeling? 9 a.m. on a Sunday — I'm very happy to see each one of you, because I thought there were going to be three people in this room, so I'm extremely excited. Thank you. I got super sick last year — I was sick for five days and quarantined in London, and they would not let me fly back home; I had to stay in a hotel. So I'm glad to be here this time without getting sick.

This talk has been inspired by a bunch of different things over the last year, year and a half. I'll back up: I've been doing container stuff since the sub-1.0 releases of Docker, and now at Red Hat I'm the primary product manager driving our container images. I build the Red Hat Enterprise Linux base images, which are now UBI — the Red Hat Universal Base Image — which I'll talk about a little bit later. I think about this stuff all the time, and I get a ton of questions about it. I'm also the product manager for CRI-O — how many of you know what CRI-O is? All right, I won't bore you with it, but it's still cool that some of you know. And how many of you know what Podman is? More — okay, good, because I'm also the product manager for that, and for Buildah and Skopeo. And it used to be Docker too — still is, technically, because we still support it in RHEL 7, so I'm still the product manager for Docker as well. I've basically eaten, slept, and breathed this stuff for the last five years, and this talk started culminating in my mind.
Every time something new comes out, everybody says the old thing is dead, and now there's this new thing, and nobody cares about the old thing anymore. That's just how people are — it's politics, it's technology, it's everything. So I published a blog entry on opensource.com, and it did pretty well, and I thought maybe I should do this as a talk — actually, I was talking with someone who told me I should. So this talk is a culmination of that, plus a concept I came across: people undervalue certain things. In economics there's the concept of discounting — you subconsciously discount certain things and overvalue others. If you see a lion, you run away from the lion, but then you could starve to death because you discounted the fact that you also had to go get food. People discount maintenance. I used to say, until somebody corrected me, that when a bridge is built, people have a tuxedo dinner, and when the road crew comes in to resurface it 20 years later, nobody cares — they get Taco Bell,
or kebab, at the end of the day. But I was corrected by a civil engineer, who said they definitely do not get tuxedo dinners, because apparently they can barely pay for bridges. So now I use the skyscraper example: when a bank builds a skyscraper, I'm guessing there's a tuxedo dinner. We overvalue innovation — I see this everywhere. Even in our planning meeting last week, I noticed everyone claps when we announce the new thing, but when we fix the old thing, everyone's like, "oh, that's cool," and nobody really cares. And so I realized: we basically undervalue what's in the container image, even though it's essentially a Linux distribution. I've seen this multiple times. Twitter drives me crazy, but this is a quote I've seen on Twitter more than once: "Who cares about the operating system?" So — how many of you care about the operating system? All right, I can leave, I'm done, I've already cut to the chase. You'll see this comment especially from people working with containers. How many of you were in Daniel Riek's talk yesterday? One?
Okay, a couple. He talked about how there are two ways to look at an operating system — I almost stole a slide, but I talk about this anyway. You can look at it from the bottom up: the operating system enables the hardware. The Linux kernel enables certain pieces of hardware, and when you run your app, it looks the same no matter what kind of hardware it is — x86, Arm, POWER, whatever — given the same architecture, it looks the same. Or you can look at it from the top down, as an application platform: I write my app once and it should work in other places. I'm a systems person, so top-down is not how I looked at it early in my career — I always looked at it from the bottom up. But for this talk I'm going to tackle it from the top down, because I think we all know you still need a Linux kernel to boot the hardware. We're moving to cloud, and everybody talks about running containers on a cloud platform — that's a VM, a fairly generic VM, and it's fairly easy to light up because they're pretty standard. So the more interesting problem for developers, and for people thinking about this, is inside the container, and that's where people are really blind to the fact that the operating system still matters. Let's start with something not technical — then I'm going to hurt your brains in the middle, and then I'll calm down and stop hurting your brains. Let's use tires as an example. How many of you have kids? And how many of you care about your tires?
How many of you would have said that before I asked whether you had kids? We don't think about it, right? Okay, so a minivan is one thing — a very common vehicle in the US; I know you don't have as many here. But when I hear people say "I don't care about the operating system," it's like saying "I don't care about my tires." I think you do. If you have a family in your car, you care about the tires — especially in winter, or in rain, you start to go: oh, there are certain variables I do care about with the tires. I don't want them to blow out on the expressway and send the car off the road. So I care at least a little. Then here's a higher-end car — you care about your tires with this. How many of you own sports sedans? None of you? So you might care a little — these just came over to the US, I think, and they're stupid expensive there too. Then you move into this territory — it's not quite professional, but... does anyone own a Ferrari? All right, good, I'm talking to my people. Although I would think that once you have that much money, you don't care — you just say, give me the best tires, put new ones on, I'm done. And then, in this scenario — Formula 1 — you definitely care about the tires. And I would argue this is where we should be thinking, if we're professionals. How many of you do some kind of computer stuff for a living? Okay, exactly — so we should all be thinking in this category. You would never hear an F1 mechanic say, "do I care about the tires?
I care about the new carbon-fiber wing we're putting on this thing." Nobody says that. If you're a professional, you worry about all the components that go into the thing you're building. I'm a sysadmin by trade — most of my career, software engineer and sysadmin — and I care about the operating system because it's one of the things I've got to care about. I still care about Kubernetes too. Actually, with Kubernetes and containers there's a lot more to care about — I don't think anything went away; there's just more stuff to worry about. So: safety, I care about here. Road performance — at least in the US it's common for people to drive through the mountains on weekends, and they care about their tires, because with the wrong tires you'll slide off the road and die. Here, these people definitely care. And here is where we should be — the professionals. All right, so maybe I've convinced you to care a little bit — most of you already cared. But how many of you care about the operating system now? I think it was all the same people; I don't think I've convinced anyone. Politics. All right, so what's the context? I love showing this picture, because I think it's actually genius: this is still more efficient than not using a container, right? If there were a bunch of loose crap piled up on that boat, it would be less efficient. Some crane probably put that container in there, right?
That container went on a big ship, then onto something smaller, onto a crane, onto a truck, and now it's on this boat for the last mile, maybe going to some island. There's still something interesting about containers — I've always said it's a packaging format, and I think that picture proves it. All right, so there are a lot of different options. I work for Red Hat, but I still value all of these Linux distributions — and even Windows. At this point you kind of have to say: if I'm running Windows, I need Windows containers, and in the Windows world you especially have to care about what's in the container, because it has to match the host — you can't even mix different versions; there's no ability to fudge it. But in the container world — actually, let me ask: how many of you have used Docker?
I saw how many of you have been using containers for more than three years Decent amount how many more than five So a few yeah, there was us that were like talking about all this early on and how many of you more than a year So actually it looks like the last year shot up a decent amount It looks like it almost doubled if I were to take a quick estimate, so like we're still in that phase where people are figuring this out so These are two of the common patterns that I've seen and admittedly as an old like curmudgeonly, you know old person now that I might not show up, but I am I've been doing this way too long I see people trying to reinvent the wheel they they look at these minimal options This has become very popular in containers, and they'll say well we use distro is we don't want to use a Linux distra Or they'll say we start from scratch and I'm like But then on the other hand some people are have common sense and they go I'm just gonna use a Linux distra I'm gonna use what is already there. I'm a Debian user I want to Debian in the container because now I can you know app get all the dependencies blah blah blah But there's two schools of thought happening, and I would say this is the hipster cool And these are like the old curmudgeon, you know, I mean in general or people that just don't think about it But I would argue that it's still it's still pretty valuable to think about this All right, so I was joking. I had the sticker. I didn't put it on there, but there is no cloud, right? There's just someone else's computer if you all heard that But there's also no distro list. There's only dependencies that you manage So like you have to think about this like actually here's the funnier part How many of you know about what distro list is have you heard about this? All right, do you know what they actually use? It's a Debian dependency tree. 
Distroless relies on the Debian Linux distro to do the work for it. And it's funny — it almost reminds me of politics in 2020. We say "distroless," and I'm like: there's a whole team of people working on that behind the scenes, and you're minimizing everything they do — yet you're using their work. It's complete insanity. So hopefully I've caught you ahead of time, before you encounter "distroless," and put you back on the right path: there is no such thing as distroless. All right, so why would we use a Linux distro in a container image? Basically for the same reasons you'd use a Linux distro at all. I've tried to boil these down — some get into things Red Hat thinks about more, but I think they're common across all Linux distros: size; which coreutils it uses; which C library; the life cycle — how long will they patch what's in that container?
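A quick way to answer "which distro and version did I actually inherit?" — inside a container image or on a host — is the standard os-release file, which carries exactly the fields you'd key patching and life-cycle decisions on. A minimal sketch (runs on any Linux system with /etc/os-release):

```shell
# Every mainstream distro ships /etc/os-release with its identity and version
cat /etc/os-release

# It is shell-sourceable, so you can extract just the fields you care about
. /etc/os-release
echo "distro=${ID} version=${VERSION_ID}"
```

Running the same two commands inside a container (e.g. `podman run --rm <image> cat /etc/os-release`) tells you which distribution's security team you are actually depending on.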
People don't realize — I ranted about this — that on Docker Hub the CentOS 7 and CentOS 8 container images are actually in the same repository. So if you were building from the latest tag, when CentOS 8 came out, all of your stuff probably just stopped working — and you did a Linux migration in the five or ten minutes it took to rebuild your containers. This is something people find hard to get their brains around: every time you rebuild a container image, you're essentially reinstalling the operating system, and when you go from major version to major version, that's an actual upgrade. You have to invest engineering time in going from one major version of Linux to another. That can mean reading tons of docs and discovering they went from sendmail to Postfix — I remember that happening. These kinds of things matter; you have to think them through. The life cycle in a container probably matters even more, because with containers, even more than VMs, we get lazy and don't pay attention for a long time — until eventually it's: oh crap, the latest tag just pulled the latest major version, and I've got to figure out what's broken.
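One boring but effective defense against that latest-tag surprise is pinning the base image in your build file. A minimal sketch (image names and digest are illustrative placeholders):

```dockerfile
# Risky: "latest" silently jumped from CentOS 7 to CentOS 8 when 8 shipped
# FROM centos:latest

# Safer: pin the major version, so a rebuild stays on the same major release
FROM centos:7

# Safest: pin the exact digest, so every rebuild uses byte-identical base content
# FROM centos@sha256:<digest>

RUN yum -y update && yum clean all
```

The trade-off is that a digest pin also freezes security updates — you have to bump it deliberately, which is exactly the point: major-version migrations become a decision, not an accident.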
And what programs have changed. So I think these are pretty important things to think about with containers. I mentioned security — I'd go a step further and think about the security response team. I don't want to say anything bad, but different Linux distributions have different qualities: some have a better reputation for actually patching things, others less so. That's pretty important for being able to quickly track things down when the security team scans your entire container environment and says, "you have all these CVEs, you have to patch these things." That typical feedback loop that happens in any professional environment is going to happen with containers — it already is. And then automation and performance engineering — proactively going out and making sure things work. I show a bunch of examples here instead of just one because I think it's pretty funny, but as you scale up and have more containers, here's one of the weirdest problems you can have. I always find these edge cases, and then they happen, and then I laugh, because I try to tell the world and they don't listen to me. I haven't seen a Bugzilla for this one yet, although I've seen them for all the other ones I've predicted. This one is going to happen soon: you're building a web app, running it on Kubernetes, and you set your performance target at, say, 200 milliseconds — I need the web server to respond in 200 milliseconds — and you fire up ten containers to handle a certain traffic load. What happens three years from now, when Kubernetes has just kept firing up more containers?
You're still getting a 200-millisecond response time, but now there are 72 containers, and you're like: wait, what happened? Kubernetes will just fire up more containers; even if performance went completely sideways inside your container, you wouldn't even notice, because the load balancer keeps spreading the traffic. People are not thinking this through: the glibc version, the way the web server was compiled — all the standard stuff we care about in Linux distros still matters in a container. In fact, it might matter even more, because the platform will fire up more containers and make things "work," and the next thing you know you're spending way more money on Amazon, because you have 22 more virtual machines — you needed that much capacity because you took a performance hit and Kubernetes "fixed it" for you by adding more containers. So let's go into how all this works — to remind you, and to go a step deeper. This is where I'm going to hurt your brain a little bit. We invented all of this a long, long time ago. Since only two of you were in Daniel's talk — he went through the entire history from Unix to now, and we were messing with him afterwards — I have a talk,
and a couple of blog posts, where I go through this entire history. I'll skip most of it and come up to about 25 years ago. So — this is actually becoming a trendy thing to do in containers again: people do what they call scratch builds, and nobody really agrees on what that means. Sometimes it means pulling binaries out of a Linux distribution and stuffing them into the container; other times it means compiling a C app or a Go app — and we have a lot of Go in our container team, because a lot of tools in the container world are written in Go, and Go is compiled. This seems like a really good idea at first: you think, this gets me the smallest container image possible, because if I compile everything statically — no dynamic linking — I come out with one small binary and I stuff that binary into a container. But this has some really strange caveats. If I have 5,000 containers in my environment, I now have 5,000 glibcs embedded everywhere, and 5,000 different libssls embedded everywhere. We get into the nuanced problem of attack surface. People do this very not-smart thing where they compare container images: "this image is smaller than that image, so it has less attack surface." That's actually not the way you should look at it. You should think about all of the containers in your environment, and the attack surface of all of that. If you have 200 different teams using 200 different versions of libssl and 200 versions of glibc, you have a much larger attack surface, because the attacker does not care about that
one container — they're targeting your entire environment. The more containers with different versions of glibc and libssl in them, the worse this gets. So, many years ago, we solved this problem with dynamic linking. What I show in this example is compiling Apache. For those of you who don't remember how this works: when you dynamically compile a binary, you statically link in a reference to ld.so, and when you run the binary, the Linux operating system is smart enough to analyze the ELF binary, see that it has a dynamic linker, and at runtime the linker goes out and finds the dependencies on disk. This goes back to computer science classes 20 years ago for me. Now, the beauty of this — and I should have explained at the beginning that I'm going to build up a container image, turning on more technologies as we go, until we truly have a container — is that now we're using ELF binaries, glibc, GCC, and ld.so, and we can say: this is cool, because if libssl or glibc needs a patch, for some security problem or some performance problem, I can patch it.
I can patch it I Can patch it without Basically recompiling the app every time but I still have to rebuild the container every time So there's there's still a problem there and then we've also introduced a new problem of now We have dependencies we have to figure out how to get these dependencies on disk inside the container image So now basically we've said oh well now we need to build something called a depth solver right Oh, well in this case right at world. It's young and and RPM and then I turn on this technology that I call Linux distro Linux distro is a bunch of human beings like that have Subject matter expertise in a bunch of different pieces of software They compile all this software into one giant repository and then they build out all the dependencies for that And it's a nightmare and honestly like except for operating system people in Linux distro people Nobody wants to do that work right like as it developed. I'm trying to build a PHP app I don't want to do that. I don't have to go to hunt down all the like gmc dependencies or the lib ssl dependencies things like that It's a nightmare so We've now solved that problem, but now we have Linux distro We're back to having a Linux distro inside of the container image because I want to have that dependency tree basically But that only solves the the bottom up part of the problem inside the container image We still have to worry about like PHP You know our Python or or npm for node We still typically end up layering on some kind of you know application on top like like Pearl Python Ruby PHP Bova and almost all of these languages have their own dependency their own depth solver and their own You know supply chain of you know dependencies and there's teams that go and analyze and you know basically manage all that And that's a ton of work So now you're getting into I got two different dependency trees I have Linux distro and then I have some language maybe multiple and now I've you know depth solvers for both 
And I need access to all of this tooling at the time I'm building the container image — you're almost back to an operating system install. But let's go one more layer. This is where we get into the technologies that truly make a container, because if you think about what a container image is, it's not much: it's a tar file and a config file, and that's about it — possibly multiple tar files, depending on how many layers you have. And then, I joke, we have the Open Containers Initiative, which governs the specification. How many of you know what OCI is? Not many — okay. The Open Container Initiative is managed by the Linux Foundation. It's an open governing board that governs — not dictates — the format of container images, and actually runtimes too: they have a sort of reference implementation of the runtime standard, which is runc. I don't go deep into that in this talk, but this organization is really cool, and Docker gave the spec to it. There were some politics at the beginning, but luckily a split never happened, so at least in the container-format world we're never going to end up with an RPM-versus-Deb situation — everybody's going to use the same OCI spec. There have actually been a few others, though. How many of you have heard of Singularity?
Singularity is a different format, and LXD has its own thing — oh, I didn't know that; yeah, so there are still people nibbling at other approaches. But I can tell you right now: at Red Hat, OCI is the one we're doing everything on. There are downsides to the specification — Aleksa Sarai from SUSE has a great talk about all the downsides of the technology we selected for container images, and he has solutions for all of them, but it's going to take the next 20 years to get any of them done, because we have a working standard. I'm even seeing people in my world become less interested in Singularity and more interested in OCI containers, especially in the last year. What people didn't realize at first is that the beauty of having one container spec is that I can have one piece of infrastructure: one registry server that has everything in it. In fact, I think CNV — KubeVirt — can even pull virtual machine images from an OCI image. And I know for a fact that in OpenShift 4, CoreOS — the basic distribution of RHEL that we snapshot for OpenShift, a read-only operating system — pulls down its updates as container images, lays them out on disk, and then updates the OS. If you look at OpenShift 4, it needs exactly one thing: a container registry. That's all it needs to manage OpenShift — it even manages the operating system updates as containers.
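Backing up to the format itself: the "tar file plus config file" idea can be sketched concretely. Here's a toy Python model of the shape of an OCI manifest and image config — the field names follow the OCI image spec, but the digests and sizes are fake placeholders:

```python
# An OCI image manifest points at one config blob and an ordered list of
# layer blobs. Each layer is just a tar archive, usually gzip-compressed.
manifest = {
    "schemaVersion": 2,
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:aaaa...",  # fake placeholder digest
        "size": 1234,
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:bbbb...",  # base layer: the "Linux distro" part
            "size": 56789,
        },
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:cccc...",  # app layer stacked on top
            "size": 4321,
        },
    ],
}

# The image config: architecture-specific metadata, plus the default
# variables the engine will use when it runs the container.
config = {
    "architecture": "amd64",
    "os": "linux",
    "config": {
        "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"],
        "ExposedPorts": {"80/tcp": {}},
        "Entrypoint": ["/usr/sbin/httpd", "-DFOREGROUND"],
    },
}

# The engine pulls the layers in order and lays them over each other.
assert [layer["size"] for layer in manifest["layers"]] == [56789, 4321]
```

That's really all there is to the format: content-addressed tarballs plus JSON metadata, which is why one registry can serve container images, OS updates, and even VM images.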
So But this can these container images are basically a bunch of metadata and some architecture specific stuff all stuff to the config file And then basically we we basically just put that in with the image layers side by side And then the container engine which I won't go deep into it, but when it pulls down the container Image and runs it it knows how to digest digest these variables that the image builder has put inside of the container image But now this is kind of all the technologies working together to make something you know basically a manageable container image that will work that will be constructive and we can use over a Over an operational life cycle like three years or two years or five years or longer sometimes I I still I still hold to the fact that people will use technology more than they for a longer period than they want to Like the container world is under this impression that we move fast So we won't have things for like a year or six months But in reality I guarantee there will be container images that are like ten years old And they'll keep rebuilding them until till whatever bits you have in there are not updateable anymore All right, so then though this opens us up the fact that containers are also layerable So so the way container images work is that that config file Points to like what layers make up that container image and then when the container engine goes and pulls down that config file and It looks and it says oh, I need this layer this layer this layer pulls those layers down And then it lays them over each other So like if you want to delete a file you actually have to it This is another one that Alexus right as he rants about it He's like when you want to delete a file the container image doesn't get smaller It gets bigger because you have to write metadata saying you deleted the file in the layer below So like say you have a five gig you know ISO image in the base image And then you delete it in another container, you know layer 
on top: you now have the five gigs still there, plus the metadata pointing back that says, "hey, I deleted the five-gig file." It doesn't go away. So there are good things and bad things about layering. On the good side, you can delegate to different specialists — historically you've always had sysadmins, DBAs, Java specialists — and you can delegate the work in different layers to different teams and let them add things. But it also has downsides if you don't do it right. In this example — I don't have names for all these problems — this is the "I've overwritten something without realizing it, and actually increased my attack surface" problem. What I'm showing is: one team built a container image on top of another team's — say the operating-system team built a base image, then a second team built on top of it and overwrote Apache or Nginx, but didn't overwrite glibc. So far so good: if the base team patches glibc, the second team inherits it. But then a third team said they needed some other, newer libssl, compiled a different way, with different encryption options or who knows what — and a fourth team needed a different glibc, for who knows why. The problem now is that when the base team rebuilds libssl, the third team doesn't get that update, and when the base team rebuilds glibc, the fourth team doesn't. So one image can have CVEs that show up that aren't in the others, and you end up with this cascading layering problem that you really have to think through. I've seen organizations with very, very complex supply chains — maybe 35 different branches to the tree — and then they have to
figure out who added what. In computer security we'd call this non-repudiation: this team says, "we didn't add it, it must have been the team ahead of us" — there's some trojan in the image, and you're asking, who added it? You can only point to the next person in the chain and say it was somebody above me. There's plausible deniability if you don't have non-repudiation. So sometimes people do things like putting a signed file in a container image layer, to verify that a given team actually put that layer together. It's not perfect, but it's better than having no idea who did what. So this is what container images in their full glory look like. I already talked through layers — but with image tags, you get to communicate to the end user what the different versions mean, and you can think of these as human constructs. They're arbitrary; the OCI doesn't govern what you put in them. I've seen people do nasty stuff, like having Apache configured one way and Apache configured another way, branching them as apache-1 and apache-2, with different configs embedded. You can do that;
I would not recommend it, but you can do that. This latest tag is the only one that's governed, and these don't have to be numbers. In this case I show that, and I try to explain it this way: tags are how the container image builder communicates to the container image consumer how they should consume the image. Really, this is about simplifying the API surface of the thing you're getting. You're like, hey, here's a container image — I don't know which layers are usable; there could be 25 layers. But I'm going to label a few of them so the end user knows what they should be consuming. It makes it more intuitive. I already talked about how these layers come together, and then I give a quick overview here of what happens at runtime. This is the container engine — Docker, or Podman, or CRI-O. Any container engine has basically three jobs. One of the main ones is to build this config file, which then gets handed off to runc, which then talks to the kernel and runs a container. To build that config file, the engine uses the default variables that were embedded inside the container image, and then it adds user options.
So say you run podman run -p 8080:80, mapping port 8080 to 80 when you run the container image — those are the user options, and they would override any port settings you had embedded in the container image. Then the engine itself adds all kinds of other things, like SELinux rules or seccomp rules. If you look at a config file — which I don't demo in this talk — it's 400 lines long, with all these seccomp rules and all these things the container engine itself adds in. Then finally it passes this config file off to runc, which, as I mentioned, is the OCI reference implementation of a container runtime. Podman, Docker, and CRI-O — all three of them end up using runc (Docker by way of containerd). Basically everybody in the world uses runc. So you hand this config file off to runc, and runc talks to the kernel to fire up a container. I kind of mentioned that already, but I also show here — where before I showed a smaller tree — what it really starts to look like in a real environment. We may have an operating-systems team that builds a base image: it pulls down, say, a Debian image or a Fedora image or CentOS or whatever, and then adds some standard things they want to have. Then you start to specialize: this might be an Apache team, this might be an nginx team, this might be a MySQL team. This is where we start to break things out by SMEs, right?
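To make that runc hand-off concrete, here is a heavily trimmed sketch of the kind of config.json a container engine writes out. The field names follow the OCI runtime spec, but the values are illustrative, and a real file runs hundreds of lines:

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "args": ["httpd", "-DFOREGROUND"],
    "env": ["PATH=/usr/sbin:/usr/bin"]
  },
  "root": {
    "path": "rootfs",
    "readonly": false
  },
  "linux": {
    "seccomp": {
      "defaultAction": "SCMP_ACT_ERRNO"
    }
  }
}
```

The image defaults (args, env), the user's command-line options, and the engine-added SELinux and seccomp policy all land in this one file before runc ever sees it.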
The reason why I'm boring you with all these details is that you start to realize there are a lot of other problems I should be working on and thinking about beyond just questioning whether I'm going to use yum or RubyGems or npm. I have to think through all these layers of things. And to be very honest, I'd rather spend more of my time up here. I'd rather spend my time setting up a CI/CD system and figuring out how I can make this CentOS container image build over and over for five years without having to think about it. That's where I got to "the life cycle matters" — because the life cycle of the Linux distro in the container image essentially governs how long I can let this thing run in a CI/CD system, constantly rebuilding and actually getting the updates it needs: security, performance, etc. In a nutshell, I want to focus most of my time and attention up here and not down here. So obviously an operating system still matters, even in a container image. All right, so I mentioned some of the problems these things solve. Just having a container image starts to help the "works on my laptop" problem, because I did bundle all those things together — the PHP and everything, right? I call it a jiggly stack of software: all the different versions of these things can change, but once I've cemented them in a container image, at least I know I've snapshotted it. It's almost like a core build, like we used to do back in the day — or still do. I have it safe forever, and then I can run it on my laptop, move it into production, and I pretty much know it's going to work. There are some caveats, though, because people will mix and match.
I'm not a big fan of that. They'll be running Ubuntu here and RHEL there. I'm not a big fan of that, because you're running binaries from one Linux distro on another. I do prefer — let's say in this scenario you're using a Red Hat Universal Base Image on an Ubuntu laptop and then you move it over to RHEL — I like that better than going the other way. Because if it ran on Ubuntu in development, it will only run as well or better on the operating system it was actually compiled for. There's less danger going from something that wasn't compatible to something we know for sure is; going the other way, I think, is very dangerous. So I typically suggest to people: if you're going to run on an Ubuntu server, run an Ubuntu server image, because those binaries were compiled with that kernel. And with RHEL, for example, SELinux and everything is dialed in the way it should be in those binaries, so things won't break. We've seen that happen. Probably the biggest problem I saw was an Ubuntu image that ran fine on Ubuntu, and then they moved it over to CentOS in production, and the CentOS host had SELinux turned on. The binaries that were compiled on Ubuntu actually had SELinux options on, which nobody knew, and that broke the binaries. And there was no way to change it around inside the container image, because the tools were not there. Does that make sense to everyone?
So, another problem I mentioned: my containers, in a big Kubernetes environment, just keep scaling out. Say when I first put the application into production, I need a million transactions per second. I might be able to achieve that on day one. But how do I maintain a million transactions per second? Like I mentioned, on day one I might deploy a thousand containers to achieve a million transactions per second — that's the way you start to think in a Kubernetes environment. But three years from then, I might have 1,500 containers running to get the same million transactions per second, and nobody will ever notice. There's no monitoring or fault management that we use that would ever catch that. That would just be decay in whatever you had in that container image. You might have made a config file change at some point that made it need 33 percent more resources, or you might have made some SSL change that caused it to use a different, less efficient algorithm, or the operating system could have updated a binary in there, and things broke down the road and caused it to use more resources. These problems get very tricky when you move into an essentially distributed-systems environment like Kubernetes. So again, I suggest offloading that problem to an operating system — to a Linux distro — because the Linux distro is good at doing this already. Another one: beyond the million transactions per second, I have the hacker problem. "Works on my laptop" doesn't help you with performance or security. Just because it works on your laptop does not mean it doesn't get hacked two seconds after you put it into production. So how do you even have any kind of warm and fuzzy feeling that it's not going
to get hacked? You can compile it yourself and be like, well, I think I know this stuff pretty well. Or you can use something a Linux distro is already building, and you're like, well, there's a whole bunch of other people using it and they're not getting hacked, so now I have precedent that I probably won't get hacked either. That's another nuanced reason to think about using a Linux distro in the container image. And then here I break out a whole bunch of other user-space things you need to think about with performance, especially compute. There's a ton of work that goes into Linux distros to make sure all of these capabilities remain consistent. In any Linux distro, when they bring it together for the next version, they compile things against glibc, with whatever compiler they use, and then make little tweaks and set a baseline of performance for all these things. If you're doing all this from scratch, every time you make a little tweak it's almost like you've created a new operating system. So if you're back at that model where you're like, we're going to compile everything ourselves, do all this ourselves, manage all these things — and you give all that to the app team — then every different container image in your environment is going to have essentially different performance characteristics for all of these things. That can be a pretty heavy cognitive load, and it's just inefficient. It's not an attack surface,
it's a work surface, or a maintenance surface. You're creating a huge maintenance surface, because all these different container images may have different performance characteristics, which will inevitably bite you down the road when it's in production three years from now, when that team's gone and they don't even know how it works anymore, and it's just sitting there in a CI/CD system being rebuilt, and you're like, it kind of works, but I don't know what it is. It's a bad place. I predict this is already happening to people who have done this. I'll talk a little bit about Red Hat Universal Base Image — I won't go deep. Red Hat saw this problem. RHEL, historically, was something we had always charged a subscription for, right? And this container world turned my world upside down — this has been my life for about the last year — in that we had to figure out how to put some RHEL bits out there so people could use them to do all the things I just said, to rely on a Linux distro inside of the container image, while still being able to redistribute the image. Historically, Red Hat had not let people redistribute RHEL. We basically used the trademarks; it was kind of the only way we could enforce our business model. It allowed a contractual agreement, because the way Red Hat's business works, there's no legal mechanism otherwise — the customer could totally share their stuff wherever they want, but they sign a contract that says they won't, and we sign a contract that says we'll give you support, as long as we both agree to this contract.
That's basically the only enforcement we have. So we created this thing called Red Hat Universal Base Image, which changes that end-user license agreement to allow people to redistribute the images. We introduced Node.js and PHP — and we're actually introducing Java soon — a whole bunch of different language runtimes. And we have three standard base images: a standard one, a minimal one, and one where you can run systemd inside the container. How many of you would think about running systemd in a container image? All right, so some people are pretty pragmatic. Personally, I actually defend this, even though I'm kind of a curmudgeon, because there's a lot of subject-matter expertise built into that. If you've ever been tasked with building a container image for some piece of software you don't know, and you have to figure out how to get it to start up with a single line, it's really annoying. Like Apache — you can run httpd -D FOREGROUND, and there are a few command-line options you can pass to that daemon. But it's annoying, because I don't want to go into the systemd unit file and figure out how to start every different piece of software I'm running in a container. It's much easier, in a container image build, to do yum install httpd; systemctl enable httpd. And to be honest, I run mine that way — my blog and all my stuff run as essentially systemd-started Apache, which starts with like two commands — and I actually run them read-only. So I get systemd and Apache running in a container, read-only, and it's pretty secure. It's definitely more secure than running on a regular server. So this does not preclude doing really serious security things, even if you're running systemd. And anybody who says systemd eats up a bunch of resources is probably crazy. Running Apache and systemd side by side, at the small scale
I'm doing it, is not a big deal. Anyway, we're getting down to the end here, but before I give any recommendations — because this is my world now; I'm constantly thinking about this — how many of you are actually involved in a Linux distro, like you contribute to it? Okay, a decent amount. So I'm curious: those of you that do, how many of you have seen similar problems, like your users asking about things in the container image? Has anyone asked anything? A little bit. All right, a little bit, but not that much. And those of you that are working on this — are you thinking about specific features or changes you can make to your Linux distro to make it run better in a container image? Raise your hand if you're doing that. Just some, okay. I would say I'm in that boat: I'm thinking about things, and we do certain things, but it's still pretty much a regular Linux root filesystem at this point, for the most part, even for us. So I decided to put up something funny to show how containers happened. I'll let you read it for a second. This is basically what we did with containers, right? We just glued two phones together to make a tablet, because we wanted a split screen. We're still basically a regular operating system inside the container image. We're not doing that much — we're just gluing stuff together in a slightly different way: some tar files and some config files.
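Coming back to the systemd pattern from a minute ago — the two-command build looks roughly like this. This is a sketch assuming the UBI init base image, which boots systemd as PID 1:

```dockerfile
# Sketch of running Apache under systemd inside a container.
# ubi-init is the systemd-enabled UBI variant.
FROM registry.access.redhat.com/ubi8/ubi-init

# Install Apache and let systemd own how it starts; no need to
# reverse-engineer httpd's own daemon flags.
RUN yum -y install httpd && yum -y clean all && \
    systemctl enable httpd

EXPOSE 80
```

You could then run it read-only with something like `podman run -d --read-only -p 8080:80 <image>`; the exact flags needed vary by engine and host setup.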
That's basically what we added to this whole magic. So, especially for the other people who work on distros: the call to action, I think, is first off all of us telling the world the same thing — telling people that this still matters, and having confidence that it matters. I noticed early on at Red Hat — because I've been doing this like six-ish years — some of our packagers and such didn't fully feel confident that this mattered. We kind of slapped some things together in containers early on, and then we eventually built out an entire huge team to handle all this. But I was the Cassandra, whining early on that this was going to go sideways if we didn't think about it. Especially container image rebuilds — it takes a lot of image rebuilds when you get into layers like this. I mean, Red Hat does probably several million container image builds per month; we're at that scale. It's a huge environment. We don't actually talk about it publicly a lot, but we could easily do one of those High Scalability blog entries on how we do all of our container image builds. It gets huge. But I think all of us should be telling the world that this still matters. The dependency tree that we build matters — the value and quality of that dependency tree, and all the subject-matter expertise for all those packages. A packager on our container subsystem team — I work on the container subsystem in RHEL — knows how to build Podman better than other people do, because they build it all the time, and they know all the little secrets to make Podman run better in the way we want it to run. And that's true for BIND and Apache and every other type of software we put in there. Then you scale that out across an entire Linux distro, and now you're talking thousands of pieces of software. So I would say my call to action, or ask,
for everyone is: think through that, and tell the world that your stuff still matters. Don't let people say it doesn't. The second thing I would say is, let's think about features we can add to our Linux distros that will start to make them more optimized for container images. There are all kinds of things we're thinking through right now. For example, MySQL is a perfect example: there's a bunch of environment variables that the official MySQL image or the official MariaDB image will accept. You can pass the database user, the database password — there are probably ten-ish different environment variables. We should all start to think about how the Linux distro can do that out of the box. For Apache, imagine if you could set the encryption algorithms you want to use, or the security level of the container image. Those are really nice-to-have things that I think the Linux distro should start thinking about doing. Again, optimizing tooling — for example, FIPS compliance is a perfect example for us at Red Hat. We have to be able to turn on the FIPS algorithms inside the container image, and we do it by setting some variables on the host; then the container engine is smart enough to know that and run the container image in a certain way. But this requires a lot of moving parts. It's the way the container engine is built and compiled and configured, and it's also the way the container images are built and compiled and configured. So you end up with this dependency tree where you can make things kind of magically happen, to make them a lot easier. Sane defaults are another thing: making sure these things come up in the right way and start in the right way. That would be good for a
container image. Another one to think about is minimization. For anybody that's doing this — the ones that raised your hands — minimization is probably the biggest ask I hear around the dependency tree. And we're working on that. I don't know how many of you are from Fedora, but we're working on it upstream: Adam Samalik is driving a minimization effort there, because the way Red Hat does everything is we do it upstream first. So probably around the RHEL 9 time frame you'll see this work arrive in RHEL. That's a huge one, where we really want to optimize the dependency graph so we can have loose or weak dependencies, make things smaller, and maybe even break up some of the packages in new ways. Like libssl — we could break it out — or especially glibc is one we could break into different pieces. We did it a little bit with languages: with the langpacks, we have a default minimal language pack that we can pull in with glibc to make it smaller. But we want to do more and more stuff like that. So I guess my call to action to everyone, especially the ones who work on distros: let's all start thinking about this stuff and make our stuff relevant in this future world, because it annoys me when people say operating systems don't matter. I think there's a ton of work to do here. And with that, I will leave you with my Twitter handle and some other stuff I wrote. I publish a lot — if you Google me, I write all kinds of crazy stuff about this, probably 20 or 30 blog entries a year. I would love to continue the conversation if you want to chat about it. And I will leave it to questions, if anyone wants to ask anything, or even share thoughts — especially the other distro maintainers.
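Before questions, one concrete note on the environment-variable idea from a moment ago: the pattern the database images use can be sketched as a tiny entrypoint helper. This is purely illustrative — DB_USER, DB_NAME, and the flags are hypothetical names, not the real MySQL or MariaDB ones:

```shell
# Sketch of an image entrypoint that turns environment variables
# into daemon flags, in the style of the official database images.
# DB_USER and DB_NAME are made-up names for illustration.

build_args() {
    # Fall back to safe defaults when the user passes nothing at
    # `podman run -e DB_USER=... -e DB_NAME=...` time.
    printf -- '--user=%s --database=%s' "${DB_USER:-app}" "${DB_NAME:-appdb}"
}

# A real entrypoint would end with something like:
#   exec mydaemon $(build_args)
```

The point is that the distro could ship this glue for its packaged daemons out of the box, instead of every image author reinventing it.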
I'm curious. That is hard — yeah, I'll repeat it. He said they're actually having a problem: whether to use the latest tag or not. If the end user that pulls down the container image specifies a tag, whether it's latest or not, you get burned one way or the other. If they pull latest, it will possibly break them constantly. But if they pull down a static version, like 1.2 — I say container images age like cheese, not like wine — they will end up accumulating CVEs over time, and then they complain no matter what you do. So the short answer is, it's a hard problem to solve. Red Hat solves it because we do ABI/API compatibility over a life cycle. We guarantee that if you pull down a RHEL 7 image, it will stay RHEL 7 forever — it keeps getting CVE fixes, but it doesn't break anything. It's a hard problem to solve, and a lot of investment, and that's kind of why we do it. It's the only way I know to solve that problem. I'm sorry — that's essentially what I'm doing, yeah. Oh, sorry — it wasn't a question, but I'll repeat it: he said you can stay on the stable branch. For example, with Podman upstream we have stable branches that we actually rebase down into RHEL; we basically just follow those stable branches, and if there's a CVE, we'll patch the CVE there, and then anybody can pull it. Even SUSE contributes a lot to Podman, but they can just pull down the stable version. That is one way to solve it — hopefully the upstream will maintain those CVE fixes. For some things you can probably do that, like MySQL and others. Any other questions?
Yeah. So, okay, this question is around host and container image compatibility, which I love ranting about, and HPC proves the point. What he said is they basically bind-mount things like the MPI library or glibc in from the host. They may have a directory with a specific version of glibc or the MPI libraries built — or Cray has their own MPI library, and then you have to use that one. So the question is, what are we doing? I may know you by your question — I think I might know who you are, if you've been on a call with Sandia Labs and all those HPC folks. Okay, so I may have talked to you before; this sounds like such a familiar problem. The question is, what is Red Hat doing to try to make that better? Well, it's kind of the same answer as the last question: what we do is try to maintain that ABI/API compatibility. So at least a RHEL 8 or UBI 8 image on RHEL 8 should work — even if you're using the underlying host's glibc, everything should work. And then hopefully the ecosystem of RHEL is big enough to attract the hardware vendors and all of that, so they'll enable it — and we are doing a lot of that in OpenShift and in RHEL. No, no — there's an ABI... Yes, so his question is, is there a way to basically compare two ABIs and diff them? We do have some tooling to do that. I think we mostly use it to check old versions against newer versions, to make sure we didn't break compatibility, but I don't see why you couldn't use it to compare two different container images. Carlos O'Donell is a really good guy to talk to about that — he's a Red Hat guy who's a glibc maintainer, and he talks about this all the time. This is deep in his area.
I don't know that we've built anything we've exposed to end users to do that, but we definitely do it internally — between RHEL versions and things like that. So you could probably use it now. Actually, I'm sure if you Google it, we talk about it publicly, but admittedly I haven't gone deep on that. That's an interesting one — we should chat more about it. I think we should probably wrap it up. So thank you, everyone.