And y'all are spread out all over the room. That's fine. Honestly, this is way more people than I was expecting at four o'clock on a Friday afternoon after a long week, so thank y'all for being here. In this talk, the unsung hero of cloud native, I'm going to talk through a little bit of history and a little bit of where we are with container Linuxes, and Linuxes in general.

First, I know a few of you; I'm already seeing faces in here. If you want to get to know me, please reach out. I love meeting new people. I've been around a couple of these different communities for quite a while, and I lose track of how all these things overlap, but for this community particularly, over the last ten years: I've been involved in Docker since 2013, then at some point appc, and now the Open Container Initiative specifications, how registries even talk to each other, how runtimes work, how images are packed. I love playing with tar layers, so please reach out to commiserate. Most recently I was with Kinvolk, and for the last year and a half the Kinvolk team has been with Microsoft Azure. Among the favorites there: Flatcar Container Linux. So I'll go ahead and disclaim any biases or frustrations. I like to keep things very objective, but my current affiliation is with Flatcar Container Linux, and I've been involved in aspects of a lot of things over the years, so we'll leave it at that.

So first off, container Linux. All these nice little buzzwords here on the slide. What would y'all say "container Linux" actually means to you? Thin, minimal, whatever. Anybody else? Come on. What? Purpose-built, use-case-specific builds, hardened. Hardened: you kind of want that from all of them, maybe not exclusive to container Linuxes, but again, in the purpose-built theme, you start figuring out what that means and you harden it down. I put Ikea versus artisanal up there, the cattle-versus-pets idea: you go to Ikea and it's a warehouse of mass-produced things, good luck, whereas anything artisanal, where this one thing doesn't quite look like the other, is something tailored. Not necessarily different from purpose-built, but we'll get into that a little. And atomic updates, which gets into a theme here.

But first you have to back up and say: okay, cool, container Linux, but what actually is a Linux? Obviously there's a good place for a "well, actually" here. What makes it what you and your tools are expecting? That's really what people come down to, and that's really been the most interesting part about cloud native and any distribution or platform you've been familiar with, whether it was Solaris Zones, whether it's Windows, otherwise: what tools, what are you trying to deploy? At some point the thing you're deploying has multiple targets, so it starts blending together, and at the end of the day you really don't care whether it's Linux or not. Because Linux is so malleable, it's been a good target for that. But because Linux is so malleable, it has made things wildly confusing for a lot of us who are either new to the area or having to support more than one use case. Would y'all say that's true or not? A couple of heads nodding, okay.
Because at some point you're going to be trying to figure out: what about the thing you're trying to deploy is grinding? Is it that the kernel wasn't compiled quite the same? Does it not have the drivers you're looking for? Maybe you're on a kernel that says it's 2.6.32, but it somehow has kernel features that you know weren't introduced until 4.x or something. So it's still kind of the Linux you're looking for, but it gets into the use-case-specific pieces.

Here lately, some of the particular grinds: we've figured out how to work around most of these, syscalls, ioctls, whether it's seccomp, whether you know why something is or isn't available to you. We work around it, sniffing around trying to see if it's the kernel config and how to work around it. Is the libc glibc or musl? You find yourself in edge cases: how do I do DNS resolution in a consistent way? Good luck. But often, I'd venture to say most of the time, it's not just the developers of the stuff but the end consumers of the stuff who start figuring out: oh cool, you tested only on Ubuntu. The Kubernetes end-to-end tests, where's Tim St. Clair, the Kubernetes end-to-end tests run only on Ubuntu, and then you find yourself on a different version of Ubuntu, or an RPM variant of some kind, and the /etc stuff is in a slightly different place, and you can't reproduce it. It takes a lot of effort and lift, and you feel like you're dying in isolation because something is different from how the community tested. It could just be something in /etc or how the tools play with it; it's inconsistent.

Early boot provisioning: there was Ignition, which came out of the CoreOS Container Linux project. Once it got into the Red Hat side of life, they actually changed the spec a little, and it's progressed, and there are good things about that, but version two to version three. And then cloud-init, and the spectrum of colors in that cloud-init landscape: whether you're using the upstream or the particular build in the Ubuntu you're using, and all the patches in between, it's different. Where are you actually storing your secrets when you do that early provisioning? Are you storing your bootstrapping secrets in your cloud-init, or are they coming through Ignition? People start figuring out how and where they're doing all these things, and you'll find conversations where people think this is just a Linux problem, or it's how you're interacting with it, like "make it consistent", and we all know it's intentionally inconsistent. Get into your use case, figure it out and tailor it, but making these things consistent is the pain and the burden of this whole conversation: how do we drive toward that consistency?

Whatever your flavor is on systemd, whether you love it or hate it, it is an API: interacting with these INI-esque unit files, landing them somewhere on the host, dealing with them, reconciling them. systemd might know what to do with them, but do you have tools that can parse those config files to know what the behavior of your host is? Are you willing to subject yourself to D-Bus? Anybody in here, who's interacted directly with D-Bus or written code to do it? Three people, out of probably 60 people in here. That's higher than I was expecting, but there's a bias because you're here.
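That "can your tools parse the unit files" question is answerable today. Here's a minimal sketch using the coreos/go-systemd unit package to turn a unit file into inspectable options; the unit file contents are a made-up example:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/coreos/go-systemd/v22/unit"
)

func main() {
	// Unit files look INI-esque, but the escaping and continuation rules
	// are systemd's own, so use systemd-aware parsing, not a generic INI lib.
	contents := `[Unit]
Description=Wipe the log directory that keeps filling up

[Service]
ExecStart=/usr/bin/find /var/log/myapp -mtime +7 -delete
`
	opts, err := unit.Deserialize(strings.NewReader(contents))
	if err != nil {
		panic(err)
	}
	for _, o := range opts {
		fmt.Printf("[%s] %s = %s\n", o.Section, o.Name, o.Value)
	}
}
```

And on the D-Bus question, here's roughly what "written code to do it" looks like, sketched with the godbus/dbus library: ask systemd, over the system bus, for a unit's state. sshd.service is just an arbitrary example unit:

```go
package main

import (
	"fmt"

	"github.com/godbus/dbus/v5"
)

func main() {
	// systemd (pid 1) exposes its manager API on the system bus.
	conn, err := dbus.SystemBus()
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	mgr := conn.Object("org.freedesktop.systemd1", "/org/freedesktop/systemd1")

	// Resolve the unit name to its D-Bus object path.
	var unitPath dbus.ObjectPath
	if err := mgr.Call("org.freedesktop.systemd1.Manager.GetUnit", 0, "sshd.service").Store(&unitPath); err != nil {
		panic(err)
	}

	// Read a property off the unit object.
	u := conn.Object("org.freedesktop.systemd1", unitPath)
	state, err := u.GetProperty("org.freedesktop.systemd1.Unit.ActiveState")
	if err != nil {
		panic(err)
	}
	fmt.Println("sshd.service is", state.Value())
}
```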
D-Bus is painful. Yeah, it's an API, and even the authors of it realize it's painful, but it's the standard.

And this last one is kind of ambiguous: the NVR, the name, version, and release of any packages. This gets into a packaging conversation, but is "the Linux" the packages you're using? How do you know that Bash 4.whatever is the one you're expecting? You can get into all kinds of SBOMs: where did it come from? Who signed that package? Is it the same build? Did I just pretend to have the same NVR so that your scanner will pass? I could do that all day, but how did you actually attest that whole chain? How did these things land? Did you compose it statically, test it, and then deploy it, or did it just happen to update on my system in a non-deterministic fashion? Is that your Linux experience? It might be. That might be what you're expecting, but don't just expect that it's the same for everybody else. If y'all have questions, I'd love you to interrupt me and ask what's going on. Happy to take them at the end as well.

So, as a real rudimentary history on this: I'd venture to say the first widely used read-only Linux that people are aware of would be Knoppix, however you pronounce it. It was intended to live on a CD, so it often ran around 700 megs. The point wasn't necessarily the size; it was the fact that it was a read-only, in-memory state, or at least able to read and pick up from the disc. It was mind-blowing that you could walk into somewhere with a CD and a thumb drive and have a workstation up, carrying your work with you. That adventure was the initial dream for a lot of people, and we're still working toward it today: how do you instantiate something and get off to the races immediately? To your point on thin and minimal, did anybody ever put their hands on Damn Small Linux? Hell yeah, love Damn Small Linux. If you ever had those business-card CDs (I wish I still had a CD-ROM drive, just to keep a business-card CD around), DSL would fit on one, so you could fit it in your wallet and walk into some place. And that was 2005, even. In and around that same time, the Linux kernel was introducing namespaces. The first was the mount namespace, around 2002, and then more started happening around 2006, so a lot of these things started coming together around the same time.
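Namespaces are worth a tiny concrete example. Here's a minimal sketch of the primitive those kernel releases introduced: run a child process in its own UTS namespace, so its hostname change doesn't leak to the host. It needs root (or CAP_SYS_ADMIN) and is Linux-only:

```go
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Give the child its own UTS (hostname) namespace, then change and
	// print the hostname inside it.
	cmd := exec.Command("sh", "-c", "hostname in-a-namespace && hostname")
	cmd.SysProcAttr = &syscall.SysProcAttr{Cloneflags: syscall.CLONE_NEWUTS}
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	// The host's hostname is untouched; only the child saw the change.
}
```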
Yeah, it's on the next slide, but through all this time, if you go look at the Wikipedia page for Linux distributions, it's fascinating. Unfortunately it hasn't been updated in probably a year and a half, but you can still see how the sprawl of Linux distributions has exploded since the inception, since SLS, or Debian, or Slackware, or whatever you want to say the beginning was; it doesn't matter. Just constant sprawl. And every single one of those is an ellipsis: things happened, everybody has been experimenting with lots of different approaches. Do you make it thin? Do you just harden it? Do you produce something that can build operating systems or render a file system really quickly? But there's something about the kernel and the userspace tooling, how you play with those things and make consistencies, that is probably single-handedly the most valuable part of the open source area in general, and of Linux distros specifically: being able to fork and play with a flavor of something until you see that it's its own use case, or that it needs to roll back in. We learned a lesson; you let that thing die and you roll the learnings back in, over and over. So if you look at any of those distributions, Debian, Ubuntu and the others, the different Fedora and Red Hat ones, they learn along the way, and some of them die and some merge into others, but it needs to happen; it needs to keep iterating.

So then you get into thinking about containers. I mentioned namespaces, but really the first thing putting those namespaces together was LXC, which started about 2008. Things happened, and I'm glossing over things here, but particularly for this topic: CoreOS. The original namesake was the OS; it later became CoreOS Container Linux, but CoreOS the company and the distro started around 2013. That was the spring of 2013; dotCloud did the demo of Docker, the Python scripts, that summer, and by the fall and winter we had the emergence of the Go version of Docker, and there was some cross-pollination in both directions there. Lots of history I'm not going to worry about.

CoreOS Container Linux was based on Chromium OS, which was just meant for Chromebooks, but to this day the successors of that distro still do the same kind of A/B partitions with a verifiable, read-only partition. You have a hot side and a cold side, you can build in the trust at boot time, and if one fails, it'll roll back to the other, tag the cold side as available for update, and wait while the hot side runs the workloads (there's a small sketch of this flow below). That's, to some extent, how Chromebooks work, that's how CoreOS Container Linux worked, and that's how Flatcar continues to work.

Whether you love or hate Gentoo as an upstream, all of them used Gentoo as an upstream: Chromium OS, Container Linux, and Flatcar. What's interesting about that, just in quick technical detail, is that you have your upstream, the portage tree you pull from, or a snapshot you build up so you can test incrementally, and then you have your overlays. The Gentoo overlays at least allow you to override, so you can say: here's our build of the kernel. You don't have to go to great lengths to rip out the kernel that was in the upstream; you just overlay it. You can also add your own packages at that time, and then you rebuild world. It allows you to leverage a community and the speed at which they iterate, and to iterate faster yourself, and you still just build a rootfs at the end of the day, with known points in time to build from.
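To make that hot-side/cold-side dance concrete, here's a toy sketch of the priority-flip logic. Every name in it is a hypothetical stand-in for illustration, not the actual Chrome OS, CoreOS, or Flatcar update client:

```go
package main

import "fmt"

// partition models one side of an A/B ("hot/cold") layout.
type partition struct {
	name     string
	priority int  // the bootloader tries the highest priority first
	healthy  bool // stays false until the booted system marks itself good
}

func main() {
	hot := &partition{name: "USR-A", priority: 2, healthy: true}
	cold := &partition{name: "USR-B", priority: 1}

	// 1. Stage the new image on the cold side while the hot side keeps
	//    running workloads, untouched.
	fmt.Printf("staging update onto %s\n", cold.name)

	// 2. Flip priorities so the bootloader tries the new side next boot,
	//    leaving it marked unhealthy ("untested") for now.
	hot.priority, cold.priority = cold.priority, hot.priority

	// 3. After reboot, the new side either proves itself and marks itself
	//    healthy, or the bootloader falls back to the old side, which is
	//    still intact.
	bootedOK := true // stand-in for real post-boot health checks
	if bootedOK {
		cold.healthy = true
		fmt.Printf("%s is now hot; %s becomes the cold side, tagged for the next update\n", cold.name, hot.name)
	} else {
		fmt.Printf("boot of %s failed; rolled back to %s\n", cold.name, hot.name)
	}
}
```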
2014: Kubernetes. Since then, particularly thinking about the container Linux space, there have been a lot of iterations. There was the Atomic project that came out of Fedora and Red Hat, and since then it has become Fedora CoreOS, and there's a RHEL CoreOS and a CentOS variant. Interesting, good developments there, though even with the shared namesake, CoreOS Container Linux to Fedora CoreOS was not a clean jump; there was a breaking change in there, so you actually had to redo your workloads.

Then you have Talos doing interesting stuff, completely rethinking the entire problem. It doesn't even have a /bin/sh in the root file system; you interact with it over different interfaces. It's not "you do this thing and then you SSH into the host"; if you're SSHing in, it's probably into a container and not the actual host. It's drastically rethinking what a Linux is. Fascinating developments in that space. Bottlerocket, coming out of AWS: fascinating work, rethinking the whole situation as well. k3OS: I know I was told this at some point, but k3OS may well have been using LinuxKit behind the scenes. How many of you have put your hands on LinuxKit? It originally came out of Docker. Cool. The idea: how could you find the tooling that just builds the image, so you can iterate on it, build your trust up to a certain point, and then, rather than just consuming, actually build the thing for yourself and say this one package is in the base, or whatever? Fascinating, but then you still find yourself owning some amount of the same full end-to-end testing framework, rather than saying: we know it was good on a certain base and we just needed to add on top of that.

And there's the open source Garden Linux. A couple of these leverage an upstream: Talos is building a lot from an upstream; the Atomics and the recent CoreOSes are leveraging their relative Fedora, RHEL, or CentOS as an upstream. Bottlerocket? Jesse, you could actually answer this for me quicker: Bottlerocket, while RPM-based, I know they own a lot of those pieces, but does it build from an upstream? I know Amazon Linux now uses Fedora as an upstream; does Bottlerocket also? Okay. So, for those who couldn't hear, it's kind of a two-step piece: Bottlerocket, being minimal in its own way, is actually a Cargo/Rust build system, but it sources some of its packages from Amazon Linux, which has Fedora as an upstream. And Garden Linux is based on Debian as an upstream. Anyhow, I'm sure there's more; this is one of those things that is always happening organically, people have derivations, it might even just be internal, and you're excited about it. And that's good. But there's more, there's other history. I've got to put together a repo of past talks, if that would ever be an interesting thing.

So: common challenges.
How easy is this, or would you say this is a point of frustration: when does something in your deployment, particularly in the Linux space, become artisanal? You did your best to put everything in Terraform or Ansible or whatever it was, and then somebody SSHed in and put in an iptables rule. Do you know when that happens? Probably not. Or they ran a job, or they dropped in a unit file, like: ah, this log directory keeps filling up, let's drop in a systemd unit to wipe it, or logrotate it, or something, who knows what. And suddenly it became artisanal. This is almost an unanswerable question. Most of the time, due to the nature of things, it's not necessarily a problem. It is a problem, but it's the nature of things, and it's about embracing that piece of it.

I've alluded to this already: package management, where packages came from, do you trust them, are you managing these packages or leveraging an upstream and their trust chains? But the other part of packaging is determinism, and this bites everybody over and over. As long as your packages, whether Debian, RPM, or the long tail of other formats, contain a scriptlet that runs, a pre-install, post-install, pre-uninstall, or post-uninstall scriptlet, you have a non-deterministic state, period. You're going to be fighting dragons constantly. Even just for this one reason, being able to render a root file system, test it, and then deploy it is a winning solution. Because when you push something out through unattended-upgrades or some other update channel, and it deploys and a scriptlet runs and does something: good luck. This causes outages all the time. So even just being able to push out some kind of immutable blob and roll it out is great. But even with that, the reason so many people use scriptlets is that they are, in effect, the file system's API. How do you add a user to the file system? How do you do a lot of these different things? You can't always just drop something into a users.d directory; systemd has talked about this, but there are lots of these systems where, no, you have to go edit this file somewhere, twiddle a file somewhere. How do you do that? That is the API of interacting with a Linux, like it or hate it. It's something we should expect.

When you roll out a security update, or any update, who actually gets to say that this thing needs a reboot? Is it the maintainer of the kernel? Is it the maintainer of libc? Is it your admin? Is it your CISO, or whichever security person says: no, you took an update of any kind, it's not considered complete until you reboot, or who decides a reboot is even needed? Or does whatever ran the update just get to reboot the node? Or do you actually have to talk to the cluster on top and say a reboot is needed? Who gets to say that? That experience, A, is not consistent, and B, there's no API for it. There's no interaction for that; it's mystical. And because of that, most processes just say reboots are pretty much always needed. And then immediately, everybody knows, you're going to have customers, or yourself, saying: do not reboot until some higher authority signs off or some scheduled time arrives.
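There's no API for "does this host need a reboot", but people approximate one. Here's a minimal sketch of a common heuristic: compare the running kernel against the newest one installed on disk. The paths are standard Linux conventions, but this is only one naive approximation, not the interface the talk wishes existed:

```go
package main

import (
	"fmt"
	"os"
	"sort"
	"strings"
)

func main() {
	// The kernel we're actually running.
	rel, err := os.ReadFile("/proc/sys/kernel/osrelease")
	if err != nil {
		panic(err)
	}
	running := strings.TrimSpace(string(rel))

	// The kernels installed on disk: one directory per version.
	entries, err := os.ReadDir("/lib/modules")
	if err != nil {
		panic(err)
	}
	var installed []string
	for _, e := range entries {
		installed = append(installed, e.Name())
	}
	sort.Strings(installed) // naive: lexical order, not a proper version sort

	if len(installed) > 0 && installed[len(installed)-1] != running {
		fmt.Printf("running %s, newest installed %s: a reboot is probably wanted\n",
			running, installed[len(installed)-1])
	} else {
		fmt.Println("running the newest installed kernel:", running)
	}
}
```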
We all know that's why people are interested in kpatch or ksplice or whatever it is: how long can we put off a reboot? But because of the non-determinism of who gets to say when a reboot is actually needed, like, is it a security issue that is only mitigated once the system comes back up, they just say you must reboot. It's almost an unanswerable question, and a point of contention, constantly.

The last couple of pieces here. I've already alluded to this, but one lesson learned in all the projects I've been involved in: if you're not allowing a path for your users to stumble into the right way of doing things, or to stumble into a migration path, you have screwed up. You have legit messed up. How do you migrate? It's great that you innovate, it's great to find drastic new ways of thinking about things, but if your users can't find that bright shiny path into migrations or upgrades, like the lighted aisle in an airplane, you've messed up. Please stop and rethink it. Even if you have a great visionary thing in the future, you have to think about how users get there easily, because a lot of us are dying in silence for that innovation, cumulatively dying in silence. And what's the last one? Oh yeah, everybody thinking their approach is right.

So again, I've already talked about the deterministic piece of things, and this adds to it. You're at KubeCon, you're hearing all about the Kubernetes. When you go to deploy a Kubernetes, whether you're a Linux distro person involved in those things or you're using them, it is not just a single thing. Config files and YAMLs will have implications on what's underneath in the host, whether it's runtimes, networking stacks, kernel modules, all kinds of things (there's a small example of this after this section). You don't just deploy "a Kubernetes". That's why there are vendors selling you solutions to make it easier. So as a Linux distro, how do you make that feel as easy as possible? It's difficult, near impossible, but that's why some of us are toiling away at this. Sometimes those config fields are implicit; sometimes, thankfully, they're explicit. But it is a very strong common challenge.

All that to say: however you render this file system, however excited you are about the distro you're building, even if you've ventured away from an upstream, in any conversation where you'd say we're moving on to functions or some other higher level of the stack, and I don't have to think about all that other stuff, the packages and the upstreams these distros are built on are absolutely critical. If at any point you think we've moved beyond that, like, who still publishes packages, RPMs, and spec files? My gosh, they are absolutely critical. And if you don't think so, then you're taking for granted the thousands and millions of person-hours that are still being worked, basically, to make your life easier. All these tools: you need them, period. It is highly relevant. You need those things. The kernel and the OS interfaces you use absolutely matter. There's not yet enough abstraction to get away from that.
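As one tiny example of YAML-on-top having host-level implications underneath, here's a sketch that probes two host settings most Kubernetes networking setups depend on. kubeadm's preflight checks do richer versions of this; these two probes are just illustrative:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Kubernetes networking generally needs net.ipv4.ip_forward=1.
	fwd, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
	if err == nil {
		fmt.Println("ip_forward:", strings.TrimSpace(string(fwd)))
	}

	// Many CNI setups need the br_netfilter kernel module loaded.
	mods, err := os.ReadFile("/proc/modules")
	if err == nil {
		loaded := strings.Contains(string(mods), "br_netfilter")
		fmt.Println("br_netfilter loaded:", loaded)
	}
}
```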
Every new feature that happens at the top matters all the way down. I was just chatting at the party last night; somebody from Intel was talking about how you publish something like cache and validation behavior on the hardware underneath all the way up to the function level, bubbling it straight through. It matters at every step, every new iteration. And again, I was talking to Brian, and I was talking to Tim yesterday: at the end of the day, even though this is big and juicy and we can nod our heads to a lot of this stuff, when it inconveniences us to have to deal with it and we get no value out of it, we don't care. We just want to not be frustrated. All of us who keep finding that, oh, this next version of the same thing is different, and I'm having to spend a lot of hours on it: I don't care, please make me not have to do this. So this is the kind of thing where we could actually start talking together so we don't have to do that over and over and over.

So, all that to say, and this is enough of a rant, but the big takeaway is that all of us are feeling this collectively. If you're new to this space and you think you're going crazy because you're frustrated with it, you're in good company. If you've been in this space for a long time and you think, why haven't we solved this yet, you're in good company. Not that we have to get together and rant, though sometimes it's cathartic; this is something I think we've all suffered in isolation and collectively.

So just this week we actually kicked off a group called kube operating systems dev. It's a channel now on Slack, and we're trying to figure out what success in that channel would look like, and how we could actually elevate some of these interfaces. Again, everybody wants their baby to win, but there are obviously lessons learned, and there are things we could at least take away to make it easier for the tools to be more consistent, to bubble these things up. Maybe it's to work around the kubelet and that API, or to begin to iterate on it. We love the control plane API of Kubernetes, but the kubelet: everybody has something they wish would be improved there. How do we talk to the OS? Is it through the existing CSI and CRI, or is there something new? How much overlap is there between the kubelet and systemd? At the end of the day, again, most of us don't care; we just want it to run, and we want to be able to figure out what's going on. So this is that conversation.

And honestly, it's not about choosing a winner; that is honestly not the end goal here. There are use-case-specific things. We, the people involved in this space, make decisions that have implications for everybody, and if the people who have to suffer because of those choices don't have a voice to speak back and say the tools do or don't support it, then it's a broken feedback loop. This is a gap that's been in the space, because the OS "hasn't mattered" in a lot of the container world; people have moved on from it. But time is proving that it absolutely matters. There's absolutely a place to have a conversation around this, and a place to iterate on some interfaces, make a step forward, and keep a migration path, so that we can continue not caring and just do cool stuff together. All right.
Almost on time. Find me online: vbatts at most places. Online is great, and that's how most of us live, but I love the human connection, so again, feel free to reach out. Don't feel like you can't reach out to me. I look forward to working together on these projects. So, cheers. And since nobody interrupted with questions: does anybody have questions at all? Going once, going twice, and sold. All right. Thank you, everybody.