All right, so my name is Ben Breard. I'm a Red Hatter; I've been at Red Hat for eight years now, and I've spent the past three years really focused on the technology and the layers in the space we're going to talk about today: everything on the systemd side, container runtimes, what we do for image content, how we produce images and keep everything fresh, everything we've done on the Atomic Host side, and then, looking forward, where we're going on the CoreOS side. So that's me in a nutshell. I thought today we'd really focus on the why. Why are we making some changes here? What are we doing this for? Some of my peers have given other talks on this subject that cover more of the how and the what. Benjamin Gilbert and Dusty Mabe gave a really good talk on this at Flock; if you want to learn more in this space, that's a good one to look at. Colin Walters gave another one at, no, sorry, not Flock, DevConf.us, that was probably more on the how side. These are really good talks and those guys are all brilliant, so I recommend them, but today I want to talk about the why. And every time I talk I like to tell a bad joke, so I apologize, but we're going to start with that now. So we got a new logo, which is not new, but the colors are slightly tweaked to align with Fedora. That got me thinking about logos. I've always loved the CoreOS logo; it's perfect. Even the name is perfect. So I'm really, really glad these things are still in play, because they're very symbolic of what we're doing in this effort. But something Lennart and I have talked about is that we need a real logo for systemd. And you know we can talk about systemd this whole conference, right? Because this conference kind of grew out of systemd.conf. And I like systemd a lot.
But there are actually services out there that will generate logos. Have you seen this? With a logo generator, you just type in a name, it goes and scours the internet, and based on what it finds, it comes back with a graphical representation for whatever you want a logo for. So I thought, oh, this is perfect. Let's see what the internet would say a systemd logo should be. Do you want to see this? No? Nobody? OK, a couple of people want to see it. So, the internet can be a cruel place, but this is what came back. We are not going to use this logo. This is not good at all. Anyway, I thought that was pretty funny. OK, so let's get into what we're doing with CoreOS in this space, and why. The last time I was in Europe, we had a three- or four-day planning meeting with the team, figuring out our next steps. I got on the plane to fly home, and over the Atlantic, on the satellite internet, I got a ping from my boss: hey, guess what? CoreOS is coming to Red Hat. And I thought, OK, this is super exciting, right? Because the people are brilliant and the technology is really, really good. But sitting on that plane, I needed a really big trash can to throw away all my plans, because everything was just out the window. So I was a little nervous flying back out here, fearing another ping like that, but that has not happened. Anyway, one of the reasons I like Red Hat so much is that there aren't a lot of companies out there that, when they do an acquisition, aggressively open source any proprietary IP. I think that's really unique, and something I really like about Red Hat. Primarily everything at CoreOS was already open, except a couple of pieces of Quay, and those are being open sourced right now. So that's really cool and something I really admire about the company.
When we look at the technology there, it's really, really compatible, and a lot of the things we were doing at Red Hat, CoreOS was also doing a really good job of forwarding that mission. You know Tectonic is the Kubernetes platform that CoreOS has, and OpenShift is the one Red Hat has, and the two align quite well. Tectonic historically has a stronger story on the operational side, how you instantiate the cluster; it's more of a self-driving model, and the whole theme of automated operations plays very strongly there. I don't really want to talk too much about products today, but on the OpenShift side, our strength was always onboarding applications, the flow of just getting workloads onto the clusters. One of the problems we've had in the cloud space is that companies go through these initiatives and end up with this empty-cluster problem. Across our industry there are stats saying, I want to say, something like 70% of IT projects tend to fail. There are several conclusions we can draw from that. One is that we may be solving the problems the wrong way. Or two, we're all not very good at our jobs. Take it as you will, but that ease of onboarding applications is one reason I think we've seen success in this space. Long story short, putting those two together is actually a pretty powerful value proposition, and that's what we're working on. What CoreOS is probably best known for is its operating system, CoreOS, which was later renamed Container Linux. The concept they pioneered, stripping down the OS to just a few hundred packages, the bare essentials you need to containerize applications, has ultimately been incredibly successful. And the thing people like about it is that minimal footprint: just the bare essentials you need, not a lot of extra stuff.
So anything you would traditionally get in a distribution like Fedora, say, that isn't essential gets stripped away. That helps from a security perspective, and when you're trying to ship the latest software, the less you have to compile, build, and version in that stack, the more possible that becomes. And then, of course, the OS is basically transparent underneath. The update cadence and the care and feeding that sysadmins give to Container Linux is mostly automated where possible, primarily through the auto-updates model. The thinking is that the way containers work, the Docker-style, OCI model, everything we ship for the application is bundled there and effectively isolated from the operating system. If we decouple those two things, the result is we can automate and rev the OS without impacting the application. This makes sense, right? We need to get away from the patch-Tuesday problem of coming in once a month and figuring out what's broken, and containers have gone a long way to further that; just looking at the adoption, we're seeing the results. At Summit, Brandon Philips and I did a bit of a retrospective on what worked and what didn't, looking back at the Container Linux side and the Atomic side. I don't want to rehash that whole talk, but for context I do want to hit a couple of points; most of these were specific to Brandon's side. If you think of the goal as shipping new components, so when upstream cuts a new kernel release, or when systemd cuts a release, for example, Container Linux did a really good job of versioning, building, testing, and then shipping that as one unit. And there was an assumption that this would become a pretty clear-cut architecture with clean layers of separation.
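To make that auto-updates model a bit more concrete: on a Container Linux node the knobs live in a small config file, per the Container Linux update documentation. This is a sketch; the values shown are just the commonly documented ones, and what's right depends on your environment:

```ini
; /etc/coreos/update.conf — Container Linux auto-update settings
; GROUP selects the release channel this machine follows
; (stable, beta, or alpha).
GROUP=stable
; REBOOT_STRATEGY tells locksmith how to apply a staged update:
; etcd-lock coordinates reboots across a cluster so that nodes
; don't all restart at once; other documented values include
; reboot (immediately) and off (operator reboots manually).
REBOOT_STRATEGY=etcd-lock
```

The point of the etcd-lock strategy is exactly the decoupling described above: the OS revs itself continuously, and the only application-visible event is a coordinated, rolling reboot.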
So: the kernel; systemd as our service manager and system manager right above that; and these things always work nicely together. Then Docker works nicely with systemd, and of course the kubelet controls Docker. Actually, the kubelet does a lot more than just talk to the Docker APIs; it's much broader, almost a system manager in its own right now, doing quite a bit in /proc and everywhere else on the system. Brandon's take was that most of these assumptions did not hold. The kernel was really the primary one that, from a compatibility perspective, does a really, really good job of carrying workloads forward. So this is interesting when you start to think about it as a layered-cake model. The other thing, from a Kubernetes perspective, and I'm sure most of you know this, is that Red Hat is heavily invested in Kubernetes; this is a huge strategic deal for us and we put a lot of resources into it. Updates for Container Linux are done centrally, right? You have this kind of monolithic update model where which channel you're on determines the update you're effectively subscribed to. But the cluster doesn't control or push updates down. So you can end up in a state where nodes are at various versions of the operating system, and you don't have top-down versioning from that side. Tectonic got really, really close to this, and I'd say we're on the path, but this is something where we think we can turn a couple of knobs and actually have the cluster drive the OS version, which is powerful from a consistency standpoint. I talked to one of the big cable companies in North America just a few weeks ago, and their issue right now is that when some of their nodes come back up from reboots, they don't actually know what version of Container Linux they're going to reboot into. So there's a bit of uncertainty for them.
And they have a couple of thousand systems running right now. So this is one small thing where I think we can make a big impact for that use case. The other big thing, and I hate this picture, by the way, I just put it up to make a point, is that the container ecosystem and everything in the CNCF has changed a lot over the past five years. When a lot of us see something like this, it's like, oh my God, this is great, this is a good-looking sushi menu: give me one of those, one of these, and, I think, three of those, and it's all going to work together. Experience tells me that in many, many cases that's not true. When the container revolution kicked off, Docker really did a phenomenal job helping people understand the concepts and see the value. But we've seen proliferation at many different levels. There are now far more container runtimes. This is, I don't know, the Greek tragedy of open source: there are 500 ways to do everything. We probably have 40 different ways to run containers on Linux, maybe more, and they're becoming highly specialized in the runtime space; think of technologies like SCONE, which use some of the CPU's privileged execution features. So you've got this explosion of projects in the space, and everybody's trying to build distributed systems that actually solve real problems. It's pretty easy to pick and choose and build a cluster, but what's incredibly difficult is versioning and lifecycling so many disparate projects that all have to fit together. It's incredibly difficult, and it's a trap a lot of us fall into: "I can do this," not "should I do this." And I'll give you another good example of this: a different telco in North America. They have a huge Kubernetes system.
And on a lot of their nodes, the file systems were flipping to read-only, seemingly at random across the whole fleet. They call us and say, Red Hat, your file systems suck. And I said, well, no, they don't suck; they work for a lot of people, actually. We traced it down, and the actual problem for them was a bug in Prometheus, via the endpoint metrics. Why would you ever think something that reads endpoints and collects metrics would result in that? I surely never would have, but it's interesting how interconnected all these pieces are. And again, that bug has been fixed upstream. I think one of the biggest traps we run into here is actually on the logging side: the way containers just spew everything to standard out, and the bottleneck that creates at scale, is a really challenging thing. So I think the reality is, yes, containers of course help separate the OS and the application, but when we look at larger distributed systems, these layers are so much deeper and more tightly coupled than we'd like to admit, right? We think it's all just APIs that are going to talk to each other and everything's good, but I'm not convinced that's the case. Maybe I'm wrong, but experience says otherwise. OK, so that's a little bit of context. Again, everything in this space has been really, really successful, but there are a few knobs we can turn at the OS level itself that we think will make things better.
So at Red Hat, again, we have a product path, and I don't really want to talk too much about that, but what we're doing is basically taking the best of both Atomic Host and Container Linux. I was such a big fan of the whole user experience of Container Linux: the way it's built, the mission, the vision. In a Kubernetes environment, you really should not have to SSH into a node, and that's a concept we really want to bring forward. And then from the Atomic side comes a lot of the Linux work we do at Red Hat and how the host is actually composed. There was a sarcastic tweet a while back I wanted to find, but couldn't, finding old tweets is tough, that said something like: oh God, rpm-ostree, that's really what CoreOS needs to get better, right? I couldn't remember who made that crack, and on one hand I'm sympathetic to it, but rpm-ostree is quite good technology. We can put out spins really, really rapidly and ship updates as container images, which is really appealing because when you look at mirroring content for disconnected environments and so forth, container images are a really powerful vehicle: you're not grabbing yet another content type or setting up yet another repo, since pretty much everybody already has some kind of registry. So specifically, we're making a few changes on the product side. Some of these are different on the Fedora side, which we'll talk about in a minute, but basically, going back to linearizing updates from the Kubernetes cluster perspective and really driving that down, one of the other realizations from the upstream side is that containerizing the kubelet probably creates as many problems as it solves.
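Since rpm-ostree came up a moment ago: for anyone who hasn't used it, the host is managed as one atomic unit, and day-to-day interaction looks roughly like the session below. The commands are real rpm-ostree subcommands; the session itself is illustrative, not a transcript from a specific machine:

```
# Show the booted deployment and any staged/pending one
$ rpm-ostree status

# Pull and stage the latest compose; it takes effect on the next boot,
# so the running system is never modified in place
$ rpm-ostree upgrade

# If the new tree misbehaves, flip back to the previous deployment
$ rpm-ostree rollback
$ systemctl reboot
```

Because the previous deployment is kept on disk, rollback is just a matter of changing which tree the bootloader points at and rebooting.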
So whether you're running the kubelet as a container or in the OS compose, you're going to pay for it one way or another, and containerizing it has actually become problematic in some ways. OK, I've got to speed up. Anyway, containerizing it works great for small PoCs, but there are a lot of code paths that never really get tested upstream, especially in the storage space. So we're making a conscious decision to move it into the host. And again, on the product side, this is perfectly versioned for Kubernetes, so we'll be driving that stack down. It's a little bit different on the Fedora side, though, because that's all from the Kube/OpenShift model. On Fedora CoreOS, we really want to be less opinionated. We want to find the right balance between being opinionated and being a general-purpose operating system; we already have plenty of those, and we don't need another one. Container Linux does a great job right now: out of the box, you get this minimal container host that's ready to go, ready to take on workloads, and that mindset is what we want to carry forward. Again, this is becoming the upstream to what we're doing on the Red Hat side; keep the same mission. Looking at releases, we're marching toward getting regular releases out next year; I'm hoping this lands with Fedora 30. We definitely want to keep the same alpha, beta, stable mentality going here, and we're going to be plumbing this into a lot of the CI work that's been going on at Red Hat over the past couple of years, which is something I'm super excited about. Yeah, that's a big deal for us. The update model is going to change from the Omaha and update_engine side; the successor to Omaha is called Cincinnati. Again, I don't know where the names come from, some other Midwest city. And that's actually going to be tweaked a bit to also update standalone hosts.
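For anyone curious what Cincinnati looks like on the wire: as I understand the design, the server exposes an update graph, a DAG of releases, as JSON over HTTP. The shape below follows my reading of the Cincinnati graph format; the version strings, registry name, and digests are made up for illustration:

```json
{
  "nodes": [
    { "version": "29.20190101.0",
      "payload": "registry.example.com/fcos@sha256:aaaa",
      "metadata": {} },
    { "version": "29.20190115.0",
      "payload": "registry.example.com/fcos@sha256:bbbb",
      "metadata": {} }
  ],
  "edges": [ [0, 1] ]
}
```

The edges are pairs of node indices, so [0, 1] means the first release may update to the second; a client walks this graph to pick its next version, which is what lets the same mechanism serve both cluster-driven and standalone hosts.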
So again, you'll be able to do the same kind of model as Container Linux, where if you want to deploy into a large cluster or have standalone nodes that just update themselves, we'll support that. From a runtime perspective, I don't think we've actually fleshed out which runtimes are going to be included, but probably these, and I think we'd be open to others. So if people have ideas or things they want to see, we definitely want to capture and hear that. I do want to go back to the whys a little bit here. You know the five whys: if you ask somebody the reason for something, they'll never tell you everything at first, right? They'll slowly open up. If you have kids, this is free parenting advice: keep asking why. I think the main reason we're looking at making some changes is just acknowledging that the ecosystem is quite different now than it was five years ago, and hopefully everybody here feels and realizes that, and there are some things we can do at the OS level to make things work better. The community development angle is a big one; that's obviously super important to us. Historically, Container Linux seemed to be driven by a small team of super smart people, with definitely some outside development in the OS, but not all that much. So this is something where we're hoping to benefit from expanding that, and that's an easy way to drive a heck of a lot of value from Fedora itself. And that ties into sustainability, right? It's no good to adopt an OS for high-value systems if you don't think it's going to be there in six months, or a year, or ten years. So the sustainability side is a big deal for us. And again, if we do this right and tweak the right knobs, the potential for growing the user base is huge. About one minute left, so I've got to wrap up, but just briefly: future opportunities.
I don't have much specific to say here, other than there are internal talks about all kinds of things we can do with how we version these lower pieces of the OS. We're really excited about what this is going to look like one to two years from now, and I'm actually hoping to come back and do a retrospective, maybe with more folks from the team, on what this looks like and the results from it. That's super exciting to me. So I'm going to wrap up; here are some of those other talks I mentioned. We don't have a very good website right now; we'll fix that when we start getting releases out and actually do the launch. But again, we're trying to have all the discussions out in the open on GitHub right now. So if you're interested in this space, which hopefully you are, please swing by; we want that feedback, and we want to make sure we're doing everything we can. All right, with that I'm going to stop talking. Thanks for coming.