Oh, he found another microphone. All right, we're going to get rolling. My name is Josh Boyer, for those that don't know me, though I think I know or have met quite a few of you already. So that's cool. This is Brendan Conoboy. He is my counterpart over in RHEL land. He comes up with crazy ideas; I try to make them reality. Sometimes it pays off. We'll see how this talk goes. Sound good? Excellent. All right. Before we get started, you all had lunch, so before you fall asleep, I want to do an audience poll. Raise your hand if you participate in or use Fedora. Good. Keep your hands up. Keep your hands up. Leave your hand up if you also participate in or use RHEL. OK, that's a good cross-section. Same question, but CentOS. Wow, that is actually way more overlap than I thought we would have across all three. One big happy family. You guys are definitely in the right talk. So all right, let's get going. The title of today's talk is Solving the Penrose Triangle. And if you're looking at this screen, you can see that it doesn't exactly make sense. It's kind of hard to understand what's happening. But I submit to you that this is actually just a visual representation of a Möbius strip that has three sides. So with that in mind, we need to refute one thing that Matthew said. This is not the continuation of "what Red Hat wants." That was a thing that we kind of talked about. Screen saver. Oh, man. Amateur hour. That's your password? Does it need a longer password? Is that the one you use for your luggage? That's really it. We'll be going so fast you don't need to worry about it. Just put those lines back. Do you think so? Yes. All right. So this is not a continuation of "what Red Hat wants." We talked about that for five years, and I think that question has been well answered. The question now is: what is Red Hat doing? We've long had this problem where we can't talk about our product plans, but we can talk about technology.
We can talk about what we're up to and why we're up to it. So we're going to talk about a lot of things that we're doing right now that we haven't really talked about before. If you've seen anybody's slides earlier today, there's a lot of "and go to this presentation later on, or tomorrow, or whatever." And that's because there's a lot more coordination than there's ever been before. So first of all, in case anybody has never heard this before, there is a relationship between Fedora, RHEL, and CentOS. The conventional understanding of it is that everything kind of starts in Fedora, then RHEL gets built from that, and then CentOS gets built from that. And that's true sometimes, but it's also misleading. When Fedora started, that was kind of the big picture, right? But things have changed a little, in that the technology around Linux has gotten more advanced. Linux is still the thing that runs on the hardware; that's the "light up your hardware" layer. But after that, sometimes you add a virtualization layer. And then sometimes you have a virtual hardware OS that may or may not be the same thing. And after that, sometimes you want to run a bunch of containers, so you have a container orchestrator. Then you have a user space inside those containers. And after that, you have an actual application that runs. So it's been a long time since the OS was the only thing. And I think it's time to kind of reset and ask: all right, what are we doing, especially when there are Fedora, RHEL, and CentOS all to be considered? So let's start with Fedora. OK, this is kind of preaching to the choir here, but what is Fedora, right? If you look at it from a Linux distribution standpoint, there are 21,000 packages. Is that what it was, Matthew? Some large, multi-thousand number of packages that get pulled into what we call a Linux distribution. They all converge. We pretend that we can manage them as a single unit.
Sometimes that's true, sometimes it's not. So we talk about things like packages and repositories and ISOs and images, right? These are all things that people think about when they think of Fedora, and they all have value. Our packages are high quality. Our repositories, for the most part, generally work; they have repo closure, things like that. People use them, people expect them. Our ISOs: obviously, if you can't install your operating system, it's not really worth a whole lot. And our images: images are kind of new, not really new, but they're getting more focus, right? Container images, qcow2 images for virtual machines, things like that. We're getting better at producing those in Fedora. Atomic images are clearly in there as well. But these are artifacts. That's not what Fedora is; that's what Fedora produces. So it's really interesting to have conversations with people about what Fedora is, and they say all these things, but really that's not what Fedora is. So what else does it do? Well, we have editions. We have spins. We have tools. We have desktops, IoT, all these other things. And people like to take basically what I said to begin with and move it to a higher level, right? These are focal points. We created editions so that we could have kind of opinionated collections of the artifacts that we produce. But that's not what Fedora is either. Those are just collections; there's nothing magical about them. It's still content. So what actually makes Fedora valuable? And valuable is an interesting word: valuable to itself, valuable to others, meaning upstreams, downstreams. What makes it valuable? In my opinion, and I think this actually ties in with what Matthew said earlier, and we didn't plan this, I promise: it's the people. So this picture here is actually from Flock in Brno? Prague. Sorry, I was in Brno before I went to Prague. So yes, Flock Prague.
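To make the "repo closure" idea concrete: a repository has closure when every dependency declared by a package in it can be satisfied from within that same repository. The real check is done by tooling such as dnf's repoclosure plugin; the sketch below is only a toy model of the idea, with made-up package data, ignoring versions, architectures, and rich dependencies.

```python
def repo_closure(packages):
    """Toy repo-closure check.

    packages: dict of name -> {'requires': set, 'provides': set}.
    Returns the set of requirements nothing in the repo satisfies
    (empty set means the repository is closed).
    """
    provided = set()
    for name, pkg in packages.items():
        provided.add(name)                  # a package provides its own name
        provided |= pkg.get('provides', set())
    unsatisfied = set()
    for pkg in packages.values():
        unsatisfied |= pkg.get('requires', set()) - provided
    return unsatisfied


# Hypothetical three-package repository: myapp's 'libfoo' dependency
# is satisfied by nothing, so closure fails on exactly that name.
repo = {
    'bash': {'requires': {'glibc'}, 'provides': {'/bin/sh'}},
    'glibc': {'requires': set(), 'provides': set()},
    'myapp': {'requires': {'/bin/sh', 'libfoo'}, 'provides': set()},
}
print(sorted(repo_closure(repo)))  # → ['libfoo']
```

The real tooling does far more work (version ranges, multilib, rich deps), but the invariant it enforces is exactly this one.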
And that's package maintainers, that's release engineering, that's users, QA, right? Fedora is a community, and the community itself is what is valuable to a wide variety of people. So why is it valuable? Because we have collaboration, we have innovation, and we have solutions. Solutions kind of fall into the artifacts section, or the editions, or whatever you may call it. The innovation thing: in some areas we're really good at it. In other areas it looks like failure when we innovate and it doesn't exactly pan out. And I think that's something we have to correct. Fedora is this space where we can do experiments, we can fail fast, and we can learn fast. And if something doesn't pan out, we shouldn't be, sorry to put you on the spot, Matthew, but we shouldn't be down in the dumps, because every time I fail, I try to look at it as a positive thing. We learned something, right? We learned what not to do, or maybe we learned how to do it better next time. And I think we need to get a little bit better at that than what we've become in Fedora, where we've kind of stagnated on the innovation front. That's particularly true for stuff that comes from outside contributors that maybe isn't necessarily right in line with what Fedora is doing today. But sometimes that space is really interesting. And not to be too negative, we have good successes there as well. If you actually read our mission statement, which I'm not going to do, and you can just trust me that that's what it says, it basically says all the things that we just talked about. That is what Fedora is supposed to be for, and that's what we're actually going to strive for going forward. Now, on to RHEL. RHEL in a sense is a downstream of Fedora. But what is it unto itself? It's a long-term supported operating system for commercial use.
And when we're making RHEL, our particular concerns are security, performance, stability, kind of all those ops things, right? Even though we experiment in Fedora, we try to never experiment and fail in RHEL. One of the things that distinguishes it from Fedora development is that we co-develop with hardware and software partners. While we pull from upstreams, we actually insist that partners get their code into upstream before we pull it down again. And the other thing about RHEL is that it is the foundation for Red Hat's entire commercial product portfolio, because, again, it's been a long time since it was just RHEL and maybe Satellite. So what makes RHEL most valuable? Well, it's enterprise versus fast-moving, stable versus features, backport versus upstream, and it's all about less churn: getting things working really well and then leaving them alone for the rest of your life. And when you actually do have to make a change, make the most minimal change you can. So here's a question: where does the RHEL user space come from? Well, about 13% of Fedora is chosen to become RHEL packages. The other 87% is other things that are happening in Fedora that aren't directly related to RHEL; maybe they're experiments, maybe other things are happening there. It's really cool stuff, because I don't know about the rest of you, but I couldn't get my job done if I didn't have a Fedora desktop. It is an essential ingredient in my life. But the last time we actually did that 13% transfer was 2014. And I think there's room for improvement there, because it's been four years. And then, where does the RHEL kernel come from, Josh? Thank you. So, where does it come from? If we look back at the original graphic of the flow from Fedora to RHEL to CentOS, everybody would assume that the RHEL kernel comes from Fedora. That's not true. There's some spec file stuff that, in that 2014 timeframe, maybe we pulled in and looked at.
But just because of the development workflow and the vast number of people inside of Red Hat that are working on the kernel, they threw most of it away. They have a totally different setup. They kept some of the structure of the spec file itself, but the workflow is completely different. And the source is upstream. Now, you could argue that that's also Fedora, because Fedora ties very closely to the upstream kernel, but the development space, the maintainership of the kernel in RHEL, has very, very little to do with Fedora. And that's kind of an interesting point, because when we talk to hardware partners or ISVs that have kernel-space drivers and things like that, they ask, well, where do we do our work? And our answer is upstream. It's not Fedora. That's kind of a problem sometimes, because we tell them Fedora for other things, but we don't tell them that for the kernel, which is obviously a major component of a Linux distribution. So can we do better? I think we can. Laura Abbott and Stef Walter, or Walters, depending on who's saying it, have a talk right after this one, I think we established that. They're going to talk about kernel maintenance and how we could possibly make Fedora a better place to contribute: not develop the kernel, because development happens upstream, but integration, maintainership, et cetera, for the kernel. So I would highly recommend, if you're interested in that area, staying for that talk. All right, that leaves number three: CentOS. It's a rebuild of all the RHEL packages, and some assembly is required, but I'm sure it's very straightforward, just like this schematic. So why is CentOS most valuable?
We know why RHEL's valuable, we know why Fedora's valuable. CentOS gets kind of all the RHEL ops stuff, it's free, and there's this one other cool thing, which is the special interest groups, which give a space to take RHEL and then change the parts that are important for inclusion in future RHEL. And that's a thing we just can't do with Fedora today. So maybe there's room for improvement there too. So remember that technology stack we showed you? Let's talk about how that actually affects the Red Hat portfolio. OK, he said the portfolio word; I promise it's not architecture. We're just going to talk about where Red Hat's products land in these spaces. So this is the stack. At the bottom, obviously, hardware is hardware partners and OEMs and things like that. Red Hat doesn't make hardware, but we have a lot of partners that do, and we work very hard to help ensure that their hardware lights up with our operating system. Also cloud. Also cloud. The operating system layer is obviously RHEL, right? That is our main bread-and-butter product, and it is very stable, very secure, very performant. That's the focus for the "light up your hardware and your cloud" layer. For guests, we have management of guests through RHV, and we have OpenStack. I'm not going into details here; this is very high level. And anything that I say is my opinion, and not necessarily what Red Hat marketing or sales would tell you. For the virtual hardware OS, we have RHEL, and obviously we also have Fedora; you can put that in there. CoreOS we added to this slide; you could possibly call it Container Linux. I know Container Linux also runs on bare metal. There are a lot of interesting things going on there, and we announced our CoreOS acquisition, and this is just a guess at where we're going to land it. This is me just making stuff up, so kind of ignore the CoreOS part. There are more details coming on that when they actually have plans.
Container orchestration: obviously Red Hat OpenShift, and OKD from an open source perspective, is where the container orchestration happens. The application stack and libraries? Who knows. Sometimes it's RHEL, sometimes it's stuff that you pip install on your machine. The point is, there's a wide variety of applications and libraries, and they don't all necessarily come from anything that Red Hat provides. That's kind of a key thing to keep in mind. And then the container applications: obviously we have OpenShift S2I, we have customer applications, we have Podman and Buildah and other things that we can use to create those containers and run them in this technology space. So that's just kind of a mapping of where we're at. Now, if you look at all that, Fedora plays in exactly one of those. None of the other products really use Fedora as their upstream, as their source. That's not to say that they don't have packages in common, but they don't do it. So what do they use? Obviously, RHEL uses Fedora plus some stuff directly from upstream. OpenStack uses RDO, which is based on top of CentOS. Red Hat Virtualization uses oVirt; they also use CentOS. And I should qualify this slide. When I looked this up, sometimes I could figure out where they do their CI/CD or their development space; RDO, for example, is very well known to be on top of CentOS. Sometimes I couldn't find that, so I looked at what their install guide said: what do they tell you to lay down as your host operating system? And almost all of them said CentOS, except for Ceph. Ceph is weird. They have this really awesome quick-install video, and they say use Ubuntu. So we maybe want to talk to them a little bit. OpenShift Container Platform, obviously, uses RHEL. They also have a platform-as-a-service SIG among the CentOS SIGs where they do some integration. But mostly they're kind of a self-contained thing.
They do a lot of upstream development through GitHub as well. And then Gluster: they're actually the overachievers here. They have install instructions for every major OS, which is really cool to see. I was happy about that one. But the point is, only one really claims to have Fedora as its upstream source. I can't remember, are you covering this one or am I? I'll take it. OK. All right, so this isn't arbitrary. There are actual reasons for it, and it really comes down to the things that make RHEL valuable to customers. If you aren't developing an operating system, if you want to develop on top of one, it's a lot easier to develop on top of an operating system that isn't moving. And so this is why they choose CentOS. Inside Red Hat, we have tons of discretion in what we do, and we do the thing that's easiest most of the time, as long as it aligns with upstream-first principles. But that upstream is not necessarily Fedora, and it's not necessarily CentOS. It is the highest point of the mountain, where the snow falls. So we started with this view: Fedora Linux flows to RHEL flows to CentOS. But the truth is that it's more like this. Because every once in a while we make a new RHEL release, but most of the time we're spending our days and nights on updates. Between 2014 and now, there have been a lot of RHEL updates, and per package, maybe an individual maintainer is going to pull and run their stuff through Fedora, but often they're actually going to push their stuff through CentOS, test their stuff in CentOS, because it is highly stable. And if they need to interact with a hardware partner or a community member and that's their upstream, then they're going to work there too. So sometimes upstream is feeding itself, sometimes the waterfall goes backwards. And the problem that we have is that we just want to have a functioning community. We want to be able to give consistent messaging. This is where, how do I say that?
I used to be an engineer, I swear. So we just want to be able to tell people: go here, work on your stuff, share, enjoy, make it good. These are the problems that we see right now that we're actually working on. Within this audience, there are tons of other things being worked on, and many of them feed into this; many of them are other experiments. But where Josh and I sit, this is the thing we care about a lot. So: the Fedora kernel and the RHEL kernel are unrelated. We could do so much better than that. Fedora releases are valuable to individuals but, statistically speaking, irrelevant to Red Hat's business, except for every few years. Red Hat partners participate to get content into RHEL, but usually not through Fedora. And again, what we want is for Fedora to be Red Hat's upstream, not just RHEL's upstream. We want it to be Red Hat's upstream. And again, most of Red Hat's product portfolio is using CentOS, not Fedora. This is a thing that we would like to fix, because it's not valuable differentiation. So, summed up: there's no clear answer that we can provide to anybody, whether they're a user, a developer, a partner, a software developer, an ISV, for how they should engage the community, which community they should engage in, what the path is to join and work with us. But it doesn't have to be this way. We have other options. I think you have option one. I don't remember why I picked option one. So, revisiting Linux distributions, again. Matthew's talked about this with rings. Langdon's talked about this at length for the past couple of Flocks. This is something that we talk about, and it's something we have to really start doing. I started off saying we have a pile of packages. We try to manage them as a unit. Sometimes it works. We get a release every six months out the door. After that release happens, what happens to that pile of packages? Do they stay cohesive? Do they move at the same pace? No.
After that, people update packages willy-nilly, or based on whatever they want to do. And that's OK, unless you're still claiming that it's a cohesive unit as a distribution. The rules don't apply evenly, and we mean that two ways. One, the maintainers don't pay attention to the rules evenly across the package set, which is human nature. But also two, the rules don't make sense across the package set. If you have something that is slow-moving upstream and stable, you don't really need to update it every single release. Brendan likes to use the example of Bash. How different, really, at a fundamental level, is the version of Bash in RHEL 7 from the version of Bash in Fedora 28? The Bash maintainer could probably get up here and talk for an hour on the differences. The average user would be like, I don't know; it makes my commands run, right? I'm sure there's a whole spectrum of opinions in between. But the rules don't necessarily apply generally, so we have to start focusing on that a little bit. Right, so the first thing we want to cover is the distribution, and whether it's time to reevaluate it. We think it is, but I want to say something beforehand, which is that we're retreading some ground here. Previously, when we looked at each one of these things in isolation, they didn't make sense, for practical reasons. But when you put them together, they make a lot more sense. So I know everybody's going to have a healthy dose of skepticism for some of these things and wild enthusiasm for others, except for Matthew, who's crossing his arms. So, first one: does it make sense to split the distribution into an operating system and applications? Windows doesn't have a distribution; it splits them. macOS doesn't have a distribution; it splits them. iOS doesn't have a distribution. Android doesn't have a distribution. Each has an operating system and applications. There are distinctions between the two. There are different rules for the two.
There is a different app store for the two. I don't think we have to go quite that far, but maybe we create some opportunities for ourselves by having different policies for the operating system side of the distribution and the application side of the distribution. So let's consider what that gets us. What are the rules? Operating systems: you need one to boot. You don't actually boot an operating system for the fun of it; you want to run something. So you need to be able to boot, you need a certain set of features, and generally, once it boots, you don't want to change it very much. And the smaller it is, the better, because again, you're not booting it for the fun of it. You're booting it so you can run something. We have a lot of operating system distributions right now: Fedora, RHEL, Atomic, Container Linux, CoreOS. They all have a role, but we treat them the same way we treat applications. On the application side, people usually want the latest. More features are good. They can be closer to upstream, and more is more. If you have a one-gigabyte application, yeah, it's kind of big, but you wanted it and you were OK with it. For operating systems, small is non-negotiable. One thing, though, is that variation between the different OS distributions is wasteful when it comes to applications. If you could have one version of Bash that ran on Fedora and CentOS and RHEL, and it's probably the same maintainer, they'd have one third the amount of work, nominally. And even if that's just sharing the source and swapping the binaries out because of static or dynamic linking rules, they're still coming out ahead. So if we split the operating system and the applications, maybe there's more room for sharing between the different communities, at least on the application side.
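The OS/application split described above is easiest to see in a layered container build, where everything below the first instruction is "operating system" and everything above it is "application." A minimal sketch (the base image path is the real Fedora registry; the application file and its runtime are hypothetical):

```dockerfile
# OS layer: a small, stable base image.
FROM registry.fedoraproject.org/fedora:28

# Runtime layer: just what the (hypothetical) application needs.
RUN dnf install -y python3 && dnf clean all

# Application layer: the app itself, hypothetical here.
COPY app.py /srv/app.py
CMD ["python3", "/srv/app.py"]
```

The open question the Q&A at the end of this talk touches on is what it would mean to swap that bottom FROM layer for a different operating system while the application layers stay put.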
Anyone who's paid attention to rings or modules is probably thinking: but these things have a gray area. How do you know if something is operating system or application? Dependencies get very complicated if you start intermixing them, and the QA matrix gets huge. So we've got kind of three answers there. First, it's OK to iterate. If you do it once and it doesn't make sense, you can do it again and again and again. This is a work in progress; it is not final. Second, dependencies are complicated, but out of all that work on modularity, two things came out of it from my standpoint: stream branches, which provide the option of parallel availability even if you don't have parallel installation. And that means you can have a situation where your operating system and your applications have different provides and dependencies, as long as you have some sort of repository, some sort of module stream, that gives the apps what they need. Third, the QA matrix: the QA matrix is already huge. We've gone from just an OS to all these other technologies, like 26 Red Hat products, and nearly all of them exist in some capacity inside Fedora. And we're definitely not testing them all. CI is the way that communities actually bind together and can trust each other. So real CI is no longer optional. All right, so let's say... actually, are you doing this? Sure, I'll do this one. All right. Why not? Can you tell we finished these slides at 2:30 in the morning? So, revisiting the life cycle. Life cycles can be different, as Brendan said, between OS and applications. Now, he actually said maybe you want your OS cycle to be longer; sometimes you want it to be shorter. And Atomic is already kind of doing this, in the way that they do a two-week release of their images. There are all kinds of reasons for having different life cycles for things. Fedora has a single life cycle across the entire package set, except for Atomic, who's cheating.
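The "stream branches" mentioned above are what modularity uses to make several versions of the same content available in parallel from one repository. A heavily simplified, illustrative modulemd fragment (not a complete valid document; the module name, stream values, and summaries are examples) might look like:

```yaml
# Two parallel-available streams of the same module, built from two
# dist-git stream branches. Only one can be enabled on a host at a time.
document: modulemd
version: 2
data:
  name: nodejs
  stream: "8"
  summary: JavaScript runtime, stable stream
---
document: modulemd
version: 2
data:
  name: nodejs
  stream: "10"
  summary: JavaScript runtime, current stream
```

A user then picks one stream (e.g. `dnf module enable nodejs:10`) while the operating system underneath keeps its own cadence.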
But that's okay, because that's the kind of innovation that we're looking for. Huh? KDE is also cheating, right? That goes back to the whole "the general rules don't apply" thing. When I say cheat, I'm actually kind of applauding them, because they're getting away with something that people actually want to do, and we're not really yelling at them. So that's OK. But if you have different life cycles, you've got to figure out how that works. How do you have an OS that moves out from underneath the applications while the applications still work? There are a lot of really smart people working on this problem, and I think we can actually carry it forward. I think there's also a lot of benefit to having an OS, a set of content, that is small, that you can update, and that you can trust. Now, Fedora's pretty good about this from release to release. I'm going to say it's through a lot of heroics, from people in this room and people that couldn't make it today, that that's the case. It's not because we have CI in place. It's not because the package set is inherently stable. It's because we literally crush ourselves every six months to get a release out the door that we know will update from the prior release. Earlier today, I actually updated Android on my phone, and somebody from the CoreOS team looked at me and said, good luck. Like, you're doing that at a conference, you're crazy. It worked fine. Fedora has also done that for me for the past four releases. It's been very boring, but at the cost of a lot of work. So we've got to figure out how to make it less work, do it automated, and figure out how to split the OS and the applications. So, can we do better? Paul Frields, after Laura, is going to tell us that he thinks so, and how, and Josh is going to be there. I am. It's true. So it must be true. All right, the next one on the list. Again, the place that we hope we can get to is consistent guidance on where to send anybody.
We want to send people into the community, but there are differences between the Fedora and CentOS communities, and we have to honor their founding principles and make it work. That doesn't mean we can't narrow the gap between them, though. One of the things that we can do is reduce the differentiation in the infrastructure. It's more efficient, and it lets the infra team do more. There is a talk by our fearless leader, Matthew Miller, and Jim Perrin on this, after Paul Frields. So again, we are spending a lot of time coordinating these things, because we think that when you put it all together, it makes sense in a way that each piece by itself just does not. And the last thing on the list is a commitment to continuous integration. I think a lot of people hear the word CI and they think, oh, that's Adam's problem. He's the QA guy. He's in Seattle. He can just go to Starbucks and it'll work out. And that has been true, kind of, but at the same time it is not. Is he in Vancouver? He's in Washington. He's in the Northwest area. Anyway. I like how, out of all the stuff we've said so far, you had to be ecstatic about where Adam Williamson lives. That is the best. I'm going to take this as a read that we're doing pretty well so far. So he's in the Puget Sound area. The thing is, it's not a QA problem. CI is fundamentally a development problem, and the more packages you have, the more complexity you have, the more essential it becomes. This has already been pitched once today, but Brian Stinson is doing a talk on Thursday at 11:30 in Hamburg, where he's talking about the current state of CI. And again, when all of those other products are doing their upstream work with CentOS, a big part of that is because CentOS CI is in great shape. It's something that they trust and believe in, and that's something that we need to have in Fedora too. So let's put it all together. We have some problems. Do we have solutions? Kernel problem: talk to Laura.
Well, yes, we have some possible ideas there. Laura's going to go into them in a little more detail, obviously, but we really want to have that incubation of the kernel across a bunch of different stacks. Problem two. Apparently talk to me, I don't know. There's a lot of time between RHEL releases where Fedora just isn't used by Red Hat. Can we do something about that? I think so. Paul thinks so. He's going to talk about it this afternoon, after Laura. Problem three: hardware partners, actually. Yeah, I'll give this one. OK, so hardware partners for Red Hat, or really any partner; it could be an ARM partner that's interested in Fedora. They sometimes participate just as a step to get content into RHEL. The first question they always ask is: how can I get this random piece of hardware, or this specific package, into RHEL? We tell them Fedora, because that's upstream. That's how it should be. But if it's just a process step, are they really engaging? Do they participate? Some of them are a lot better than others. Dan knows, IBM actually pays attention to Fedora. They run Fedora. They donate hardware for Fedora to have multi-arch capabilities. They're actually a really good partner in that space. They do it, one, because it's fun. I know the people at IBM who do it, and they enjoy it, and that's cool. But also, from a business standpoint, they do it because they think it's a view into the next RHEL release. And that's really poor visibility for them. They're guessing. If somebody comes up with this cool Atomic thing, they're trying to guess: is Atomic the future of RHEL? Are modules the future of RHEL? They don't know, and they can't ask, and we can't tell them. So we want to make it clear why they want to participate in Fedora, and it can't just be to play guessing games about the next RHEL release. So we have a little bit of a twist here for them.
Let's make it possible to participate in Fedora, RHEL, and CentOS in an easier way, so that they can understand what's going to happen, where their content is going to go, and how they can get that feedback loop across all three distributions. Then we can experiment in Fedora, and they can engage in Fedora and not get burned by the experiments. Yes. All right, problems four and five. Most of Red Hat's product portfolio uses CentOS, not Fedora, as a foundation. In some ways this is fundamental; in other ways, infrastructure-wise, it is not. And if we could get to the point where the distinction between "is this a Fedora build or is this a CentOS build?" is just the branch it came out of and the build root that was used, that would be pretty cool. That would be hyper-efficient, which is a thing we're looking for, but it would also open additional possibilities for where people participate. So we have some problems. Do we have solutions? Yes, but actually, let's not talk about problems. Let's talk about opportunities, because Red Hat is growing, sometimes organically, sometimes by acquisition, and in any case Fedora's getting bigger. We're really happy about that, and we want to keep everybody inside the Fedora tent. We want to serve more people, to grow a bigger community, and that would provide a clear answer on where users, developers, partners, ISVs, and every other kind of community should participate. When we started this journey, we didn't even know what community meant. We had a user community. There was a mailing-list thread about it, and we were like, we don't even know, but there's a thing called Fedora.us. Six months later, we had the announcement; a year after the announcement, we actually had some kind of traction on what that meant, a little bit. But we haven't gone all the way. We haven't gotten to the point where every kind of community can participate in Fedora.
Not every kind of participant can be in the community, and that is where we'd like to get to. And of course, it's already happening. Like, organically, Fedora is addressing additional opportunities as they come up, but there are some that are just kind of intractable because there is a business side. So even though it's happening, and this is a thing that Red Hat is working on, because we think upstream first, we think Fedora first is important, we're telling you about this because we want you to join in. That's why these talks are taking place, so that you can say, you know, I think this is good and I want to make it better. And lastly, this is a thing that we are working on, but we do not have all the answers. It would be wrong for us to come to you and say, we have all the answers, we're doing this thing. Again, we participate in Fedora like you participate in Fedora. It is the place that we take our ideas, we do our experiments, and some of them work out, and some of them require rebooting three or four times. But ultimately the community to us is people, and if we work together, we will make the best choices. So in other words, today we have this Penrose triangle where it's really hard to explain what is the right place to engage, what is going on, what is the relationship, but with a little bit more community focus, we can at least turn that into a nice Möbius strip. And if that looks familiar to you, it's because I've co-opted the Fedora logo for it to be so. Questions? Laura. [Audience question, partially inaudible, about separating the operating system from the applications and what that means for containers.] So I'm gonna repeat the question. The question was, with separating the operating system from applications, what do you do about things like containers and features that you want to exploit in those containers? Sure, so I'm gonna say I can't make forward-looking statements.
No, that's a very good question. So containers are an interesting one when we talk about the operating system and the applications, right? Because a lot of people take an operating system container, like a base image, they use it, and they build their application container on top of it. So if we have a separation at the base image layer, obviously you have a separation between your OS and your applications in your container. Now, what does it mean to switch your base image? That's a really good question. That's hard, because we don't know yet, right? We're still trying to figure that one out. You wouldn't necessarily be able to take your layers and swap out the Fedora image at the bottom for a RHEL image, or for an Amazon Linux image, or whatever other operating system you have. So we definitely have to keep containers in mind as we go forward. And I think there will be a separation that does make sense. And I think containers already kind of do it for us in the way that they're built, right? Like, layered images are a unique take on how you actually construct your application, which includes your operating system at that point. So I think there's some stuff we can learn there, but I think there's also some stuff that we need to really focus on to make sure that the fundamental base images, one, are small, because containers want to be tiny and our containers are large right now, but are also stable and easy to move along. I know that doesn't answer your question, but that's because I don't think we know the answer yet. It's because there's a thousand answers to your question. Yeah, containers are awesome. We'll put it that way. Next question. [Audience question, partially garbled: what about the contrary view, that the issue of application speed versus OS speed should be solved by speeding up the rate of development of the OS itself? A pure rolling release. If you keep your base frozen and just put everything in containers, the problem just moves into an abstraction layer; it doesn't actually go away.] So I'm not pausing to answer your question, I'm trying to summarize it succinctly. The question was, what about the contrary view, where you want to move everything at a faster rate, as opposed to keeping your OS stable? So Brendan said shorter, or longer, life cycles for the OS bits, because he's from RHEL land, right? I actually happen to agree that we need both. We need an operating system that moves at each pace, but it doesn't have to be RHEL versus Fedora, right? Like, that's the focal point we're talking about here. If we want to do a slower OS release in Fedora for the core bits, we can do that, but we can also do a faster one at the same time. Like, variable life cycles is the point of that particular thing, because you're right. There is no single use case that will be solved by, oh, slow down your OS, because then you disenfranchise people who want to move fast and take advantage of features and things like that. So it's not necessarily that the separation is solving that, it's the variable life cycle piece. The separation makes it possibly easier. We don't know, we're experimenting with it, right? Yeah, I think the thing is, the one-size-fits-all world is no longer true. So this morning, Peter Robinson talked about the IoT release cadence. It's pretty aggressive. It's not quite Atomic aggressive, but here's the question. Would you block your, like, one-month IoT release because there's a bug in Firefox that's considered a release blocker?
If you split these into two, you don't have to answer that question anymore, because it's distinct. No, seriously, questions. There's gotta be more questions. All right, questions for the audience. Who thinks this all makes a lot of sense and is a good idea? All right. No, we are, we're totally, no. So who thinks that we're all smoking something up here and we want to be sharing right now? Laura does, all right, that's cool. Spot does. Okay. Peter does, which is interesting. The CoreOS guy does. No, I'll share the question. [Audience question, partially garbled: I get the overall discussion in this whole framework, but I didn't get the specific part about why, after the separation, you want to make the OS slower. I mean, if you separate it, then you can make the OS faster.] I just answered that. I said we need both. You answered that, but I didn't get the why. You can do that now, yes. Oh, sure. So why would you want to make it slower? I'll take that. Yeah, good. All right, so let's go back to: 25 out of 26 Red Hat products are using CentOS as their base. The reason they do that is because the OS is slower. No, no, that's a different question. If you go back to the slide, then my question for that slide is, how many of those products depend on the OS? Like, how many of those actually ship with RPMs installed of some stuff? And then the answer is yes, you want something stable, because you don't want to catch up every single month or whatever with all the dependencies on all the OS versions. But assuming that we do this split between the OS and the application containers, then why do you want to make it slower, if the application can go at its own pace? So when we talk about the split between the application and the OS, it is not a concrete artifact. It is a separation in theory, right? Such that it might be a set of packages or a set of sources that moves at independent rates from other things. It's not, you do it in a container.
Like, that's a technical implementation of a split. We're talking about it from a process standpoint, from what you consider your traditional OS core bits and the rules that kind of go around them, versus the rules that go around applications, right? Like, Brendan. I think the disconnect might be that it sounds like you're saying that the OS moves at one pace and the applications as a set move at a separate pace. And I think that's where the disconnect is. Yeah, no. Applications aren't one set; each individual application sometimes varies separately from the OS. Right. Yeah, Brendan's example of would you block your OS release because Firefox doesn't work is a good example, right? Like, I wouldn't. If I have an OS that has a specific purpose and it's passed CI and we think it's good to go, I would release it. If Firefox doesn't work on it, maybe Firefox doesn't have to work on it, because it has a different purpose, right? I'll add one other thing, which is that we have multiple OSes. So maybe you want to have a really slow-moving hardware enablement OS and a really fast-moving virtual OS, and that's where the containers run from. And maybe that's where the user space comes from too. Yeah, so maybe if you have an OS running, I don't know, like, a nuclear submarine, you might have one of these, right? If Fedora is running on nuclear submarines, I would be pretty proud, that would be cool. But in theory, that was, like, the longer life cycle. You're right, yeah. All right, so, question way in the back. So Justin asks, why wouldn't you just have a single source that builds across all the versions, and he uses the Fedora kernel as an example, because way back in time, when I was on the Fedora kernel team, and they still do this today, they use literally the same upstream kernel across all the supported Fedora releases. Now, there's a little bit of a staged rollout based on reasons. But why wouldn't you do that, right? And so my answer to you is, that's what we're asking.
That's essentially what we wanna do, right? Have a single common source, and if you can run it across all the distributions, build it for all the distributions. And oh, by the way, we have technology called modularity that makes it really easy to do that. Yeah, so, I mean, this is getting a little bit into the infrastructure we have in Fedora, where it is set up such that our branches are release-specific rather than content-specific. And Jim and Matthew are gonna talk a little bit about how dist-git is organized in their talk, and I would recommend going there, because I think we can dive into the technical details a little bit more in that session. But stream branches were designed for the exact thing that you're talking about, where you have a stream branch that covers your source, and then you build it for the targets that you're looking for. And one of the really cool things about stream branches is that if you have a common nomenclature, it's the sort of thing that we could develop in Fedora and absorb into RHEL, or develop in RHEL and push back out to Fedora again, or into CentOS as it is today, and we could have a continuous back-and-forth development loop. And we think that would be incredibly valuable, because that would be a way that we could have, like, an interaction between major releases, which is most of the time. Speaking of time, we have time for one more question. [Audience question, partially garbled: one of the solutions on this slide is the two-speed split between the OS and the applications. Maybe that's going to be solved by modularity itself, having the base move slower than the pieces that move fast.] So the observation was that modularity itself allows some amount of differentiation. I think the thing it doesn't allow, at least in current policy, is to have different streams be release blocking versus not.
And that's probably just something that we need to consider, because we have this new technology that's actually starting to deliver results, and we haven't really integrated it into our rules or our thought processes. And so that's going to be one of the next steps. So you're probably right, there is some opportunity there, but it probably won't work on its own, because not every package is a stream branch. We don't always have the working version and then the experimental version. Maybe we'll get there, and maybe that's the direction we should go. All right, I think we are out of time. So next we've got Laura, then Paul, then Matthew and Jim. So we can dig into a few of these details, and hopefully their slides are kind of aligned with ours. I think we're all aligned in our thinking, but we've all got different takes on it. So thank you for coming.