So I'm Langdon White, who probably many of you know. What we decided to do for this talk, even though it's listed as just Ralph, is that I'm going to talk a little bit about the future stuff, and then Ralph's going to talk about building this stuff. Oh wait, I had a slide for this. I also can't find my mouse. Oh, there we go. So I'm Langdon, and I tried to find embarrassing photos, but these are the best I could do. I'm going to talk about the future and the "why" of what we're doing, then Ralph's going to talk about the "how," and then Adam is going to talk about the "with what." So with that incredibly detailed introduction, I'll move on. All right, I'm going to talk a little about what we're doing right now and in the near future, and then we'll get further and further afield. We're targeting an MVP of Fedora Server for F27. What is MVP? I was getting there. [Audience: "Masochistic villainous plan."] Yes, that's it. And just for the audience, MVP stands for "masochistic villainous plan," per Ian Clapp. What we actually did was work with the Server Working Group and ask: what do you consider the most important set of things that would make up Fedora Server? Then let's modularize those and deliver that as Fedora Server. The term comes from the startup world: a minimum viable product. Not the bare minimum you could build, and not everything you want to build, but something people could actually find useful. So that's the first thing. The next thing is Workstation. We haven't really gone too far into Workstation. They are already working with us on trying to marry modules and Flatpaks, and trying to figure out how those will work together.
We also have an interesting discussion going on on the devel list about the overlap between module metadata and AppStream metadata and how we can marry those things together. But that's something we still need to get to; we're not quite ready for what we want to do with Workstation. Atomic, however, has already, I think, successfully built the first version of a modularized Atomic. The idea with Atomic is that because it's a very well-known set of things that make up that base OSTree, it's simpler than the rest of Fedora: we know exactly what goes in that box. The Atomic Working Group also wants to use the module infrastructure to enable some of the CI improvements we want to see in Fedora; we can use the module definitions to allow for things like gating. That's why they've been working with it. We hope maybe we'll have something for F27, although they may do a slightly different release schedule, because it's Atomic. So there's that. And I would like to point out my blocks, even bigger than Duplos, I guess. One of the things we like to talk about with modules is how they make things simpler, particularly when you talk about containers. As you can see here, we have your typical Dockerfile, albeit a pretty lightweight one. Because we have this stream concept, we only have to update one file and rebuild the container, and we get to new and different versions of those containers more easily. I had a better segue to this slide earlier today, but I don't remember what it was, so sorry, that's a little out of context. If you notice down at the bottom, that's the nodejs.module file. You can just change that stream there to 6, or 10, which is probably more likely, then rebuild your container, and it just works.
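The slide itself isn't reproduced here, so the following is only a hypothetical sketch of the kind of Dockerfile being described; the base image name and the module-enablement syntax are assumptions, not the actual slide content (the Boltron-era prototype used a separate `.module` file rather than the later `dnf module` subcommand).

```dockerfile
# Hypothetical sketch; image name and module syntax are assumptions.
FROM registry.fedoraproject.org/f26-modular/boltron

# Enable and install the nodejs module stream. Changing "8" to "6" or
# "10" and rebuilding is the only edit needed to move the container to
# a different Node.js stream; the rest of the file stays the same.
RUN dnf module install -y nodejs:8 && dnf clean all

CMD ["node", "--version"]
```

The point of the example is the single point of change: the stream name, not a list of individual RPM versions.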
You don't have to change anything else in the container to deal with the fact that all the different RPMs you want to install have now changed. That's one of the things that makes it a little easier, which I think is cool. This is what we want to do for the tooling: make it so that modules have as little impact as humanly possible on the RPM workflow. And apparently, even though seven minutes before this talk there was hardly anybody in here, we were going to try to negotiate for something grander. But yeah. Sorry, that made no sense. The idea is that we want to allow a packager to essentially give a single input, "here's the SRPM I care about," and have everything else generated, so that it's a very easy workflow. The first time, maybe you have to touch it and clean it up a little more, but over time, as much as possible just happens; you don't really have to be involved. That's the generating-from-SRPMs piece. A general goal I have is that the human-editable part of the modulemd file should be about four lines, if that. Another thing we discovered during the Boltron activity is that it's sometimes important to be able to view the whole ecosystem, to see how the different pieces fit together. That's not that important for an individual package, but it matters when you want to do something new: you can ask what's available out there, and where your thing would fit into the overall ecosystem. Adam has been primarily working on that. Oh, and I was going to comment on this: we actually have implementations of a lot of these, but we have three or four cases where individuals said, "you know what, this is really a pain, I'm going to go write a little tool to make this easier and faster."
And now we're in the process of merging all of those, so we end up with one good tool for each of these spaces. But we wanted to let everybody see what pain points they ran into, and then consolidate after the fact. The next thing is a kind of nascent-ecosystem problem. We need tools that validate across the ecosystem, to ensure that the modules aren't overlapping, that they work together, that they keep staying updated, that kind of stuff. We have some of these as well. It's almost like we need repoclosure for the whole ecosystem; it's a similar problem to repoclosure. And the last thing: the Copr team already has, in their development environment, a way to build modules and use them. But they're a little blocked on the initial content, from both us and the platform team, to be able to ship that. We really want that to come online, so that you have a good place to test, because in the modular world we're probably not going to do something like scratch builds. Instead, we'll probably do something more like building in Copr and then promoting into the real infrastructure. But because we're introducing all this gating, that doesn't necessarily mean it will actually get released; it has to pass the tests first. This is stuff we're still working out: how should this work? But one thing I think people find confusing is that there is no scratch build, excuse me. Just be aware that that's intentional, maybe not long-term intentional, but right now it's intentional, and that's why it's not there. The other thing is, some people want to work on stuff out in the infrastructure using something like Copr, but some people want a purely local build option.
So we've also been working on a Vagrant image that has all the stuff you need to build modules already installed and set up, which makes your life a little easier. The nice thing about doing it with Vagrant is that it's pretty easy for us to extrapolate it into an Ansible playbook, or for you to just copy and paste the shell script, so you can set up a local machine. But that's why we're starting with a Vagrant box: you get an idea, just plug and play, and it works. Any questions so far? Not saying anything too controversial today. So here's what we need: we need help. We now have a process in place so that you can create your own modules, and we would like you to create your own modules. There is a workshop later in this session where you can first learn how to do that; Tomas, sitting in the back, will be running it. After that, there's one about building tests for modules as well. Petr will be doing that one, but I don't see him, so he is apparently somewhere else. Then we have this list of issues we've been working on, on the actual Modularity project itself. The review process is that second bullet down, and at the end we'll have a takeaway slide, so don't worry about it too much. All the documents about how to build modules are at the third URL. And then, where we've been putting them: what I was saying is that we have this human-editable component, and we've been keeping that in GitHub for now. Now that dist-git is on Pagure, we might want to switch, but I'm not sure we're ready to do that yet; we just don't have time between now and F27. So the human-entered stuff we've been storing on GitHub, and then we generate the more complete modulemds, which we put into dist-git.
All right, I could probably be hitting the space bar. So, looking ahead. Let's see, how am I doing on time? Good, four minutes. All right, so looking ahead, one of the big challenges we've been running into, and we knew we would, is that many, many packages bundle together too many things. It seems logical when you're looking at it from an RPM-based distribution. ImageMagick is everyone's favorite friend right now, and it's a really good example of this: ImageMagick has a library, which you may want multiple versions of, but because the convert command-line executable is in the same RPM, you can't have two of them; they conflict on convert. Excuse me. And if you came to Adam's talk the other day about documentation, docs have a similar problem: if they're bundled in there, it's hard to allow for divergence. Unit tests are my pet peeve around this: we have to build all these things that we may not actually use on the deployment side of the house. So I think as we move more modular, we're going to want to see more repackaging. We may find automated ways to do that, we may ask people to repackage things, or we may decide that for these 37 different things we just don't care. We'll see over time, but that's why this is the future slide: this is one of the problem areas, and it's something we're going to have to deal with, whichever way we decide to deal with it. The next thing is dynamic linking. One of the big things we have in modularity is parallel availability, but we don't do a good job of parallel installability at this point. My original thinking was to make the OS, basically the thing that loads all these libraries for you, smarter about which libraries it loads.
So that any given application, when it was installed, would actually say: I want this version of that library, and that version of that library, and that thing over there. And you could then handle where they come from and make sure you get the right stuff when you ask for it. That is probably a bunch of work. It's also a ton of repackaging, because it basically relies on things like rpath, which is disallowed per FPC policy right now. So that's a ton of work. However, the thing we have going on right now, and I'm just using the term "native containers" because there are a bunch of competing technical implementations of this, is containers that feel like part of the operating system, rather than feeling like they're off over there. You have system containers doing this, Flatpak kind of does this, and I'm sure there are others I don't know about, or that somebody is working on in their backyard. That also solves the same problem of parallel installation. And if the container folks do an even better job of making containers feel more and more native, that might just be the answer, and an easy answer, which doesn't require tons of repackaging and a new way of doing things and everything else. We might be able to just rely on containers. So this, again, is why it's a future statement. But my guess is we'll actually end up with a blend of both: for some things, it will make a lot more sense to have parallel installation native to the OS, for whatever reason, and all the other things will do it with these native containers, whatever they end up fully looking like. My bet is it'll be something like the system containers effort, which is pretty good; we just need to move all the things so that they can actually run that way.
All right, so the next thing. One of the things that makes the future part of this talk hard is that modularity is really meant to be an enablement for innovation. I think there are going to be a lot of things we can do with very flexible metadata that we can't do now. The thing is, I don't know what that stuff is. We kind of need it to land, and then we need people to start playing around with it and say: oh, it would be really useful if we knew this or that about this particular application, and it would be nice if we could compare them in this way or that way. So I think a lot of what we do next comes out of having this much more flexible framework that we can now start to play with. And I'm hoping that people in this room, and people elsewhere, are going to come up with whatever our next innovation is. It's not just us: we're trying to set up this environment, and we want everybody to play. So I think that was my last slide. Thank you. Yes, sir. [Audience: Would you consider any or all of those things a continuation of modularity, or are they things we're going to do in the broader ecosystem now that modularity is in place? Is that a meaningful question?] Right, yeah. So the question is: are these projects of modularity, or are they projects of Fedora? In some ways, I half-jokingly, half-seriously say modularity is done. We pretty much feel like we've solved all the questions. There are still a couple we would really like to get cleaner and tighten up a little. There are things like tooling: we can't just walk away and say, hey, you're on your own for tooling or any of that stuff. So we have a bunch of stuff we still have to do. But this stuff, yes, it's exactly that.
What we're trying to do is enable a flexible environment. We're going to get to it; the train's moving. But the experimentation is a Fedora-wide experimentation, and we need to enable everybody to participate in that innovation. I certainly don't want the modularity team to be responsible for trying to figure out what that innovation is; it's just not enough people. We need everybody, right? Does that answer your question? Yes. Cool. Thank you for the right answer, I guess. That was my last slide, so I thought I'd do a little throwback for everybody. Nice. So, does anybody have any questions specifically for me? We're also going to do some more questions at the end. [Audience: You mentioned tooling to detect whether modules are overlapping in their package sets. Is there going to be some centralized authority that decides how packages get divided into modules, or is it going to be wide open?] I don't know. That's a policy question. I mean, it is a policy question. We have the exact same problem with RPMs today, right? Who decides, and how do we decide, whether or not this thing should be included in this block or that block? So I think we're going to continue to have that problem. Actually, the Council said the Modularity Working Group will in some ways morph into something like the FPC, in the sense of trying to be the centralized authority on how you write modules: keeping track of those guidelines, updating processes, and maybe making those kinds of decisions. Or maybe those decisions go to FESCo. A little bit, we'll have to see the problems we run into before we can come up with the answers. But yeah, it is definitely a potential issue. Cool? Do you want to use mine, or do you want to...?
If it loaded, I'll use yours. Oh yeah, it did. I just need to learn how to use a computer. Yeah, they're hard. Whose idea was this computer thing anyway? I just learned about the graphics stuff. Right? We never need that anyway. Oh, good, I see mine. It works, right? Oh, cool. Full screen? Oh, sure, there it goes. Hey, cool. Hi everybody, I'm Ralph. Is the microphone not on? I'll just talk loudly. Hi, I work on the module build service, and so can you. So, I was introduced broadly as talking about how we're going to do this stuff, but it's a little more narrow in scope than that, because the "how" includes other services beyond just how modules are put together. There's the automation and orchestration framework that Jan Kaluža presented on, called Freshmaker, which plays a bigger, governing role in the build system. But I'm just going to talk specifically about the part that's responsible for putting together modules themselves. There were three things I wanted to cover. First, comparing last year to today: what has changed in the module build service that was presented at Flock a year ago? Some things changed; we'll talk about those. Second, a review of the MBS internals: how does it work, and how could you help make it better? The point there, like in Langdon's pitch, is that my team has been working almost exclusively on the module build service, but we would like more people to be involved in it. It's a community-owned tool that's part of our process, right? There are bandwidth issues with a small group of people having all the intimate knowledge of how something works, and we need everybody to be able to fix it, patch it, fix bugs, file RFEs, and so on. And lastly, if I have time, I'll get into missing features that we know we don't have right now, but that we're going to be introducing in short order.
And if you want to get involved, that would be a place to help with coding on the MBS. So, on how we build modules: nothing has really fundamentally changed since we presented at Flock last year, but there are a couple of things: a couple of new backends were grown, some efficiency improvements, and changes like that. I only want to focus in detail on the last two items on this slide, which I'll show here. The first is that we introduced the notion of build order groups in a module. Let me back up: the way we were building modules last year, imagine you had a module with 400 RPMs in it. The module build service would submit the build of the first RPM to Koji, wait for it to complete, then wait for the repo to be regenerated so that that RPM would be available in the buildroot of the next RPM. Then it would build that RPM, wait for a repo regen, wait for that to finish, then start the third one, and so on. This took an insanely long time. Our theory was that we had to do that because any one of the RPMs in the module might affect the buildroot of another one, so we couldn't just build them all at once; what if they depended on each other? So we did the naive thing, and that's how it worked then. We then introduced the notion of a build order group, which is a way for the module maintainer, the packager, to specify groups of RPMs that can be built in parallel; only once they're all done do they get tagged into the buildroot for the next build order batch to start. So, as the packager, you have control over the order in which the MBS actually executes the RPM builds in your module. Here's an example of a hypothetical module. I think I stole this from the shared-userspace module that was part of the F26 Boltron release, and which is going away in F27. Here are some RPMs; there were many more in the real module, so this is a limited set.
You might have, in group one, chrpath and libdwarf. Those two get submitted in parallel; one might finish quickly and the other take longer, but only once the last one is done do we do a repo regen and build the next set. Submit in parallel, build dyninst and sqlite-devel, and then the same thing repeats: when they're done, we regenerate the repo and can then build the last RPM, in the third group. Reusing components is the second feature introduced since last Flock. Before, again, we had no way to know what in the buildroot was affecting anything else, so if anything changed, we felt that in order to be safe we had to rebuild every one of the RPMs from source. If you had a module with 400 RPMs, that meant 400 rebuilds any time one spec file in that 400 changed, which is dramatically inefficient, right? We knew that couldn't stay. So we came up with some rules for when we get to reuse components from a previous module build. Obviously, if a spec file changed, we have to rebuild that one; but of all the other ones, how do we decide what to reuse? Here in the center, and you don't have to read them because I'll talk through them in the next slides, are the three rules we came up with for our reuse logic. It leverages the build order groups I talked about in the previous section. So consider the example where the systemtap spec file changes. If you submit a module build of this hypothetical module and the systemtap spec file has changed since the last build, the systemtap RPM will be rebuilt, but all of the other RPMs will be reused from the previous module build; they'll be tagged from the old tag into the new tag for your module build. This is the optimal case: one thing changed, one thing was rebuilt, and nothing is wasted. [Audience: Wouldn't the dependency graph provide you with the information about which things need rebuilding?] That's in my last slide.
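The build order groups Ralph describes are expressed in the modulemd file with a `buildorder` value per component; components sharing a value are built in parallel. The following is only a sketch in the modulemd v1 format of the hypothetical module above; the `ref` values and `rationale` strings are placeholders, not the real shared-userspace content.

```yaml
document: modulemd
version: 1
data:
  components:
    rpms:
      chrpath:
        rationale: Build tooling.
        ref: f26
        buildorder: 1        # group one: submitted in parallel
      libdwarf:
        rationale: Build tooling.
        ref: f26
        buildorder: 1
      dyninst:
        rationale: Instrumentation library.
        ref: f26
        buildorder: 2        # starts only after group one is tagged in
      sqlite:
        rationale: Database library.
        ref: f26
        buildorder: 2
      systemtap:
        rationale: Main payload.
        ref: f26
        buildorder: 3        # sees everything from groups one and two
```

A repo regeneration happens between each group, so group three's buildroot contains everything built in groups one and two.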
So this is our current state of reuse logic, and the research project is how to get that dependency information out, to do even more intelligent rebuilds. Part of the problem is that RPM spec files aren't really parsable; they kind of have to be executed in a context. So what are the real BuildRequires? You don't know until you've already started to build it. So I need RPM scientists. That was the first case, the nice case. Here's a slightly worse case: let's say the dyninst spec file changed. You submit a build of this module, or, as in Jan Kaluža's talk about Freshmaker, the automated system submits builds of this module. The module build service, once it receives the request to build the larger module, will look and see that dyninst changed. Because it knows that sqlite-devel and dyninst are in the same build order group, it can reuse sqlite-devel: dyninst wasn't present in sqlite-devel's buildroot the last time it was built either, so nothing influencing it has changed. Dyninst has to be rebuilt, because it changed. But then everything in group three gets rebuilt from source, because something new is influencing its buildroot that wasn't there the last time around. And here's the worst case: if the chrpath spec file changes, it gets rebuilt, the other things in its build order group get reused, but then all of the subsequent build order groups get rebuilt from source. And that's the current state of things. Next, a review of MBS internals; let me check my time real quick, I'm running short. The point here is to give an idea of how things are organized in the MBS source code, so that you can get into it and have some bearings to help patch and change things. There are two major processes: a web frontend and a backend scheduler.
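The three cases Ralph walks through reduce to a simple rule: a component is rebuilt if its own ref changed, or if it sits in any batch after the earliest batch that contains a change (because its buildroot now differs from last time); everything else is reused. A small sketch of that decision, which is not the actual MBS code, just the logic of the slides:

```python
def plan_rebuilds(batches, changed):
    """Decide which components to rebuild and which to reuse.

    batches: list of build order groups (lists of component names),
             in build order.
    changed: set of component names whose dist-git ref changed.
    Returns (rebuild, reuse) as two sets of component names.
    """
    rebuild, reuse = set(), set()
    dirty = False  # True once an earlier batch contained a change
    for batch in batches:
        if dirty:
            # Something new entered the buildroot before this batch,
            # so everything here must be rebuilt from source.
            rebuild.update(batch)
            continue
        for comp in batch:
            # Within the earliest changed batch, siblings are safe to
            # reuse: the changed component wasn't in their buildroot
            # last time either.
            (rebuild if comp in changed else reuse).add(comp)
        if changed & set(batch):
            dirty = True
    return rebuild, reuse
```

Running it on the talk's example, `[["chrpath", "libdwarf"], ["dyninst", "sqlite-devel"], ["systemtap"]]`, reproduces all three cases: a systemtap change rebuilds only systemtap, a dyninst change reuses sqlite-devel but rebuilds group three, and a chrpath change rebuilds everything except libdwarf.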
The web frontend receives requests from users, or from other automated systems like Freshmaker, saying: I want you to build this particular module. It doesn't do very much. It does some validation on the modulemd file to make sure it's sane. Listed in the modulemd file are the RPM spec files that should be pulled in as part of the module, and their dist-git branches. The MBS will validate that, go check dist-git to make sure those branches exist, and record the git refs at that point in time, so we know exactly what was built in this round of the module. It then announces a message, picked up by the backend, that says to start actually doing work on building this module. As modules are built, they pass through a variety of states. Here's a diagram I'll kind of skip over, but if you want the details, come back and look at it. Things move from init to wait to build, and the build takes a very long time. At the end, there are done and ready states that denote things are ready to be composed. The building step happens in Koji; that's the center state, the build state, that takes a long time. The bulk of that work is the process of going through those build order groups, as I described in a previous slide. But note that two things happen at the very beginning of that process that are worth being aware of. The first is that the MBS creates the tags in Koji that are going to contain the RPMs of this module: it creates a build tag, and a destination tag where the content you're outputting ultimately gets tagged. Importantly, though, the build tag uses Koji tag inheritance and sets up the relationships based on the build requires you specify in your modulemd. Say we're building an httpd 2.4 module; that might depend at build time on the platform module.
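In the modulemd file, that build-time relationship lives in the dependencies block, which is what the MBS translates into Koji tag inheritance. A sketch in the modulemd v1 format, where the stream names are illustrative rather than taken from the talk:

```yaml
document: modulemd
version: 1
data:
  dependencies:
    buildrequires:
      platform: f27    # pulled into the Koji build tag via inheritance
    requires:
      platform: f27    # needed at runtime, not rolled into the output
```

The `buildrequires` entry is what makes another module's RPMs visible in the buildroot without shipping them as part of this module's own output.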
And so the platform f27 tag, produced by another module build at some other point in time, is brought in through Koji tag inheritance, so that all of its RPMs are available at build time, even though they don't get rolled into the output of your httpd 2.4 module. Cool. For the curious: the build groups in Koji, which define what gets installed by default in the buildroot, are specified in terms of the install profiles of the modules. We reuse that feature, which exists for client-side use (whether you want the server profile or the client profile), to define buildroot and srpm-build profiles that determine behavior at build time. The code in the backend is organized something like this: there is a central consumer that receives messages from the message bus and passes those messages off to a variety of handlers that are part of the code. Each of those handlers corresponds to the type of event it's responsible for handling. The modules handler, for instance, responds to events from the MBS web frontend that say: Ian has requested a module build, and it has entered the wait state. That's received by the modules handler: a module has entered a state; what do I do in that context when it starts building? The build process submits builds to Koji, and as each of those components, those RPMs, finishes building, the events are routed through the consumer to the components handler, which says: this RPM finished successfully, okay, remember that. This RPM failed, okay, remember that; oh, and now we can fail the module build, right? That code lives in the components handler. The code is organized in response to the events each handler is responsible for, and that's the takeaway. [Audience: The arrow at the bottom of that box kind of curves up to the tags...] Sorry, what are you referring to?
[Audience: At the top, the line goes straight from the consumer to the modules handler, and the other ones are curved. I was curious whether that's significant to the way the process works, or just a product of the diagram.] No, that's just the diagram; it's not significant. It's just that there are different kinds of events coming out of Koji, and the consumer determines which one goes where. I was just trying to suggest that there is routing involved. Yeah, no problem. And I was supposed to repeat the question: is there significance to the curviness of the lines? The answer was no. Let me check the time. A word about local builds. When you run the MBS local command on your own box, you can build modules locally using the local mock backend we implemented. The thing to take away is that when you do that, it's not a separate piece of software: you're actually firing up, on your own machine, the exact same MBS scheduler software we run in the infrastructure, just with a different builder backend. That means local builds are maybe a little more complex than they strictly need to be, but the benefit is that we have one piece of software shared between those two environments, so bug fixes in one fix bugs in the other. That's a thing to be aware of if you get into MBS development. And that's it; I'm out of time, so here are the things we still need to work on. One was the question about doing smarter, more intelligent component reuse, and that's on our radar. There are other things, like build-time filtering and transitive deps: say you're httpd and you depend on an intermediary module, which in turn depends on platform.
You unfortunately also have to specify that you depend on platform in your module at build time, because the transitive runtime deps aren't respected in the MBS. But that's easy to fix, and we understand how to do it; it's just a matter of cycles over the next couple of weeks. Transitive deps and build-time filtering both fall into that bucket. Smarter component reuse is very tricky, but we know roughly what we need to do. And context values and stream expansion I don't have time to explain, but there's an email that was sent to the devel list about a month ago that describes the problem and our approach to the solution, if you want to look that up. Any questions for me specifically before I hand it off to Adam? Adam, it's yours. So, this almost works, that's great. All right, there we go. Okay, good. Hi everyone, my name is Adam, and I'll be talking about how we do packaging in the modularity world. I'll basically be talking about three main things: what it is to do packaging in modularity, what to do, and how we can do it. So first, what it is. The main concept of modularity, and this is just recap, is that we are transitioning from one monolithic distribution to smaller pieces. Instead of building F25 and F26, we can build independent modules and then somehow merge them together. And this is a little more detailed picture. You can see that traditional Fedora is sorted out by branch: if I build, for example, a web server from the F25 branch, it goes into Fedora 25; if I build it from the F26 branch, it goes into Fedora 26. That's pretty simple. But with modularity we have arbitrary branching; we have these modules, and we somehow decide what goes where. And that's why we have the modulemd file: it does many things, but it describes the module and lets us say what goes into it.
And we can then reuse it if we want to build the final distribution. So this is the whole path for a package or a distribution: I can say that I want to build, for example, a database module with a database package 2.0 and a library 5.0, and have it depend on host and platform. And I can say, with another modulemd, that I want to build the Fedora Atomic edition, for example — that's what they do — and it has the Atomic platform, host, and maybe some kind of atomic CLI. And that's how we can build the distribution. Yeah, there's also, like, Cowsay LTS. That's my example: my own spin. I can do my own spin on host and platform that ships Cowsay LTS, for example. Because it has stability at all costs. Yeah. It depends on your needs. Good. And there's also the concept that in modularity, one source builds many things. So when I have the modulemd, and this is my module, I can build containers, RPM packages, or in the future maybe Flatpaks, all from the same source. So we build many things, but we patch it in one place: if there is, for example, a security issue, it gets rebuilt everywhere. So that's one of the concepts we have. All right, so I said modulemd. What is modulemd? So I have many pictures. It basically decides how to build a module, what to ship in a module, and it gives some hints on how to use it. So first, how to build. I can decide what components go in there; this is at the source level. I can say the package name and the branch of each package I want to be present in the module. So for example, in httpd I could add the httpd package from the 2.4 branch or the 2.6 branch. Then I can decide the build order, and this is what Ralph talked about: these are the build groups, and that's also in modulemd. And it also defines its build dependencies, which is also something Ralph already covered. So I can say I depend on platform and some build dependency, like texlive — that's a favorite one.
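The build-group idea can be sketched in a few lines: components with the same `buildorder` value form a group and are built together, and groups are built lowest value first. The component names here are made up for illustration:

```python
from itertools import groupby

def build_groups(components):
    """Group component names by their buildorder value, lowest first.

    components: dict mapping component name -> buildorder integer,
    as the values would appear in a modulemd file.
    """
    ordered = sorted(components.items(), key=lambda kv: kv[1])
    return [[name for name, _ in grp]
            for _, grp in groupby(ordered, key=lambda kv: kv[1])]

# Hypothetical httpd-like module: apr builds first, then httpd and
# mod_ssl together, then the tools that need both.
groups = build_groups({"apr": 0, "httpd": 10, "mod_ssl": 10, "httpd-tools": 20})
print(groups)  # [['apr'], ['httpd', 'mod_ssl'], ['httpd-tools']]
```

The gap between buildorder values (0, 10, 20) is deliberate slack so new components can slot in between existing groups without renumbering everything.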
Then I can decide what to ship. So this is the binary module, and you can see that some packages got built into multiple binary packages. So for example, I have a package2-devel and a package3-extras subpackage, and I can decide that I don't want to ship the devel package. So I can use a feature called filter to get rid of one of the packages and just ship the rest. An example of that would be stripping the X libraries out of things like the system runtime. Oh, right, yeah, that's a good example: filtering could be used to strip out the X11 libraries, because you might have a subpackage that pulls them in. Right. Please don't filter subpackages casually, though. Yeah, but there might be, for example, different modules for different use cases. You want to have a small container image, so you don't want to provide all of these packages. But yeah, that's up to you; there might be reasons, there might not be. Probably profiles cover that as well. So, profiles. How to use. So I have this module that I ship with these four packages, and I can say that these two are the API; that basically means that's what I support. So if I have an httpd module, I would have the httpd package as my API. If there are some dependencies, that's fine, but I don't guarantee anything about the dependencies; I guarantee only httpd. And there is something called profiles, and that helps users with the installation. So if I want to install a module on my system, I can either select the packages I want installed, or I can choose one of those install profiles, and there are many ways to use them. So for example, if I have a database, I can have a server and a client install profile. If I have Vim, there could be a normal and a minimal profile; with httpd, there could be production and development. So there are many ways to use these. So it helps you build the module.
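Those three ideas — filter, api, and profiles — all live in the same modulemd file. A sketch of how they might look, with hypothetical package and profile names:

```yaml
# Illustrative sketch of the "what to ship" and "how to use" sections
# of a modulemd file; package and profile names are hypothetical.
data:
  filter:
    rpms:
      - httpd-devel        # gets built, but is not shipped in the module
  api:
    rpms:
      - httpd              # the only package whose interface is supported
  profiles:
    production:
      rpms: [httpd, mod_ssl]
    development:
      rpms: [httpd, httpd-manual]
```

Everything outside `api` may still be installed, but the module makes no compatibility promises about it; everything in `filter` never reaches the repo at all.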
It describes what gets shipped and also how to use it. One of the filter examples I didn't mention, example number two, is that if I bundle a build dependency, I don't have to ship that build dependency. So yeah, that's one of the uses. Any questions so far about modulemd? All right. So now, what to do? If I want to create a module, what should I do? So yeah, build the distro in two easy steps, right? I need to build the modules, and I need to group them into a distribution. That's pretty easy. So how to do the first step? I need to determine which packages go into the module. That might be easy, or it might be tricky. We had a question earlier about determining what's in every module so they don't overlap and they work with each other; that's one of the problems we have to deal with. Then the module name: it might be easy, it might be tricky. For httpd, that's httpd, but there might be groups of packages that need to be sorted out, and also stream names. So for example, again with httpd, the stream can be the version: httpd 2.4, httpd 2.6. But if I have something called autotools, or a LAMP stack, how do I determine the version or the stream name? That's one of the things we need to think about, I guess. So that's what I need to do. And these are some links. The first is our first attempt to deal with the dependencies and overlapping; it's basically a set of initial scripts that just take care of it. And if you want to create a module, there's this dist-git: this is the package namespace, this is the module namespace. And we have the packaging guidelines in process right now. I'll share the slides, so the links will be available. All right, so that's what we need to do to build the packages. And then there is this idea of how to form the actual distribution. So what we're doing now, for example: we want to ship the F27 server as a modular prototype, but who will decide what modules go in there?
It's easy right now, just all of them, right? But we still have to list them. So there is a pungi config we need to fill out, and we still need to decide who will own it and what to do. There is a discussion after this where we can talk about it. But in the future, it could work the way I had on the previous slide, and on the next slide too: I could just write a modulemd file that lists all the modules I want to include as dependencies, and then use that in the pungi config, just a single module. It would contain everything we need, even the SLA or end of life. So this is just an idea; let's talk about it later. And now, how to do it. That's part of the workshop, so Dimash in the back will have a workshop at four. You're definitely welcome to come and try this out. That's all for me. Do you have any questions? What's the meaning of life? Sorry, I'm not the right one to tell you. I was going to just make one quick comment. All that stuff that you saw linked, everything that we talked about, is basically there, so if you want one URL to remember, that's the one. Go ahead, Mike. A question about the filters: in the modulemd, you're listing out the set of packages you want to pull in, right? So could you just not list the devel package in the modulemd and have it not included? In the modulemd you list components at the source level, so the subpackages still get built. Oh, okay. Yeah. You wouldn't list them in the API; that is, we're not asserting that they're guaranteed to be compatible. But the reason you would filter something out is if you were expressly trying to avoid pulling it in. For example, you have a package that ships a set of plugins as subpackages, and one of those plugins nobody uses, but it pulls in the entire Emacs stack. If you don't need that, you can filter it out.
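That future idea — a single modulemd that pulls in everything an edition needs — might look something like the following. This is purely a sketch of the proposal as described, not an implemented format; the module names, streams, and the `eol` field are all hypothetical:

```yaml
# Hypothetical "edition module" sketch: a module whose only job is to
# depend on the modules that make up the F27 Server edition.
document: modulemd
version: 1
data:
  summary: Fedora 27 Server edition
  stream: f27
  eol: 2018-11-30          # the SLA / end-of-life mentioned above (speculative)
  dependencies:
    requires:
      platform: f27
      httpd: "2.4"
      postgresql: "9.6"
```

The pungi config would then only need to reference this one module instead of listing every module by hand.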
So the problem I was referring to is that if you make a module and somebody wants to make a variant of it — say they take the httpd module, and you've filtered out some of the subpackages — then it's very hard for them to take that and add a subpackage back in. Yeah, filtering should really be an exceptional case. It really shouldn't be done unless it's preventing more problems than it's causing. Well, and there's a flip side to that too, which is that one of the problems we have today in the RPM world is that anything that's built as an RPM is assumed to be in good shape, stable, and supported by the people who built it, right? One of the things you can do here is say: no, I don't want you building stuff on top of this — or you can, but only on top of this small set of pieces. That's what the API is for. Right, that's the API. So I just want to make the point that the flexibility here also lets you scope what people are going to depend on. And if you want a completely different scope, you are free to fork the original module. Right, or negotiate. Yeah, humans involved; work together. I mean, it may be that you end up splitting that package out into a different module that then yours and theirs would both depend on. Right, exactly. There are lots of different options. And I really think, with the Pagure and dist-git model — not without its problems — the real advantage is that it changes the mindset around collaboration on packaging, in my mind. It's very, very easy now to say, hey, it'd be really cool if you did this, and here's the patch to do it, and then the receiver can decide to consume that. I think that was much harder with plain dist-git before. More questions? Yeah.
When you install a module, what's happening then? Do you still get every package that's included in the module, or just the API ones and then the closure of their dependencies? This is a matter of contention. The problem is that we use the term "install" with RPMs, and what we actually mean is that at some prior point I enabled a yum repo that allows me to install this RPM. But those two steps are so far apart from each other that people don't realize you have to do one in order to do the other. So to collapse that and make it less confusing with DNF, when you install a module, you're actually enabling the module and then installing some set of RPMs. That set of RPMs is defined by the profile, the install profile. Right. Well, only if you have a profile. No, there's an actual thing that says: okay, here's the minimal, or here's the maximal, or here's the dev version, or whatever. And there's a reserved word for the default, which is what you get if you don't specify one. Right, which I have some problems with. In existing technology, that could be compared to install groups. Right. So the kicker, though — the point I was getting to in a long-winded fashion — is that you can just enable the module by calling enable, and get no packages. Or, really, what you will probably want to do is enable the module and then say: I want that one package, and that one other package that has nothing to do with the normal install, for whatever reason. Sounds fine. Yeah, good. Can you repeat the question? Yeah, I'm sorry, I couldn't hear your question. How much of Fedora's modules and tooling can I leverage if I'm running a third-party package repo, or module repo? So in that context, you're building additional software on top of Fedora's base? Yeah. Right — for instance, I think I heard something earlier about IUS.
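The two-step semantics being described — enable makes a module's content available but installs nothing, while install collapses enable plus installing a profile's package set — can be modeled roughly like this. This is a toy model for illustration, not the real DNF plugin; the module, profile, and package names are invented:

```python
class ModuleRepo:
    """Toy model of modularity-aware enable/install semantics."""

    def __init__(self, profiles, default_profile):
        self.profiles = profiles          # profile name -> package list
        self.default = default_profile    # the reserved "default" profile
        self.enabled = False

    def enable(self):
        # Step one on its own: the module's repo content becomes
        # available, but nothing gets installed.
        self.enabled = True
        return []

    def install(self, profile=None):
        # "install" collapses enable + installing a profile's packages.
        self.enable()
        return list(self.profiles[profile or self.default])

# Hypothetical httpd module with two install profiles.
httpd = ModuleRepo({"production": ["httpd", "mod_ssl"],
                    "development": ["httpd", "httpd-devel"]},
                   default_profile="production")
print(httpd.enable())                # [] -- enabled, nothing installed
print(httpd.install("development"))  # ['httpd', 'httpd-devel']
print(httpd.install())               # ['httpd', 'mod_ssl'] -- the default
```

The "enable, then cherry-pick one package" case from the discussion would then be enable() followed by an ordinary package install against the now-available repo.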
But knowing nothing about IUS personally, I would guess that, presumably, if you were using the DNF tool that's modularity-aware, you could still build things in the traditional way on top of the repo of content made in Fedora, and then in your build process enable which modules you want to build against. So that would get you traditional RPMs on top of the modular base. If you wanted to additionally build your third-party content as modules, you would probably need orchestration tools like the MBS to intelligently interact with Fedora, or you could do that by hand. I'm not sure if I could just grab one of these from Fedora. No, you can grab one of these from Fedora. So I think the short answer is: the same way you would today, with slightly more complexity. Just like today you need a way to actually build RPMs as a third-party RPM creator, with modules you also need a way to build modules. But there's no block or anything on you being able to pull that stuff as a base. Yeah, good. So how does this fit in with the concept of package groups and meta-packages and that kind of thing? So we talk about this problem a lot. The question is how this relates to meta-packages versus comps groups. We have a friendly fight about that. Yeah, so modules kind of combine both, in a sense, or they look like both in different ways. We have some people, like Radek, who said modules are kind of like super-groups, but at the same time the install profiles are kind of like meta-packages. So the idea is that meta-packages are a way to try to accomplish the same kind of thing as an install profile: some hint about how to actually get the software you need to accomplish some goal. Groups are kind of the same thing, except in more of an eclectic sense.
So like — what was it, the system tools group? It's like: here's a suggestion of what you need if you were going to do system administration. So modules are kind of like those, but more formalized. And the way I argue it, and I'm sure the people who invented the other approaches argue it the same way, is that I think modularity is trying to go after the root of the problem that there are a lot of symptomatic solutions for: the alternatives infrastructure, software collections, meta-packages, groups, et cetera, et cetera. What modularity is trying to do is go after the actual problem rather than just fixing symptoms. But I bet if you asked them, they would say the same thing. So, Mike, again. Does the chain-rebuild model that modularity is building on top of imply a lot more churn in the distro? Yes, maybe. So, as I said at Flock last year, and maybe the year before, modularity enables a significant ability to shoot yourself in the foot. I don't think, as a Fedora community, we should shoot ourselves in the foot. But I'm just saying. I think we should all shoot Langdon in the foot. Adam just wants to shoot Langdon in the foot. You should shoot yourself in someone else's foot. So, in the build-group stuff, if module packagers aren't aware of how you can control that, and it's done in a naive way, then we will be rebuilding everything all the time, and we'll have to go around and have conversations about how you should structure your modules. You have the ability to do it in a way that rebuilds everything all the time, but we also have a way to limit it much more closely, to rebuilding only exactly what we do today plus some delta. So it's going to be somewhere between that and a lot more.
You're just going to have to watch it, keep track of it, and restructure the content as you go. But this is what guidelines and processes are for, not enabling technology. But I was kind of curious: I'm looking at how frequently glibc gets rebuilt, and I presume almost everything will inherit from it. I assume glibc is in host or platform, and almost everything is going to inherit from host and platform. One thing is that those chain rebuilds happen within modules, not between modules. So platform rebuilds: everything in platform rebuilds on a new glibc. Oh, okay, so it doesn't cascade from there. The question was about chaining rebuilds between modules, and cascading when platform gets revved: do we then rebuild every other module? That's a question we're actively investigating right now, and we had good discussions about it last week. I can't give you all the details, but we came up with, I think it was seven scenarios in which we might want to trigger a rebuild, and we're looking at each one as its own policy question: do we want to pursue it, because they all come with increased cost, and how much do we want to do it? One of the simple options is just a whitelist or blacklist for platform: everything else rebuilds, except what's blacklisted. Or, if the load gets too high, you could annotate modules to say this is a build-time-only dependency and doesn't require a rebuild. And then how do you choose which modules do or don't? The important ones, of course. Oh yeah, hand-waving, exactly. If you're interested in that question, talk with Jan Kaluža, who is setting up the Freshmaker project; that would be a good place to start. Is this going to kill RPMs? Is RPM going away? No, certainly not anytime soon. For F27, workstation is going to be produced without modules having any impact on what's going on.
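One of those simple policy options — cascade a rebuild through the reverse-dependency graph, but skip modules whose dependency is marked build-time only — could be sketched as follows. The module names and the graph are invented for illustration:

```python
from collections import deque

def modules_to_rebuild(changed, requires, buildtime_only=frozenset()):
    """Walk the reverse-dependency graph from a changed module.

    requires: dict mapping module -> set of modules it depends on.
    buildtime_only: modules whose dependency on `changed` is treated as
    build-time only, so the change does not trigger their rebuild.
    """
    # Invert the dependency graph: dep -> modules that require it.
    rdeps = {}
    for mod, deps in requires.items():
        for dep in deps:
            rdeps.setdefault(dep, set()).add(mod)

    rebuild, queue = set(), deque([changed])
    while queue:
        current = queue.popleft()
        for dependant in rdeps.get(current, ()):
            if dependant not in rebuild and dependant not in buildtime_only:
                rebuild.add(dependant)
                queue.append(dependant)
    return rebuild

# Hypothetical graph: a platform rev triggers httpd and nodejs, and
# cascades to a stack built on httpd, but the annotated docs module
# is skipped.
graph = {"httpd": {"platform"}, "nodejs": {"platform"},
         "docs": {"platform"}, "wordpress": {"httpd"}}
print(modules_to_rebuild("platform", graph, buildtime_only={"docs"}))
```

The policy knob is entirely in `buildtime_only`: an empty set gives the rebuild-the-world behavior, a large one approximates what Fedora does today.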
In F28, it's still an open question. And even then, if everything were fully modularized — well, the session after this is called "when to go fully modular." But to answer the question: all of these modules are made up of RPMs, so RPM doesn't get displaced by this, because it all builds on top of RPMs. So that's the now. It's more a question of what replaces the traditional whole-repository metadata associated with a particular release or spin. So let's say Fedora 30 goes fully modular: does it provide DNF- or yum-compatible metadata so that plain yum or DNF can fetch it? That would be the question to answer, not whether RPM is there. At present, yes, that's the goal. The problem, a little bit, is that you basically won't see a bunch of the new stuff, so how can you solve for that? We've actually toyed with the idea of generating what I refer to as name-mangled versions of things so that they're still available and out there, to basically accomplish that: some old DNF or yum could still work with it. The thing is that an edition or spin defines exactly what goes in it. So from the yum or DNF point of view, it's just a bunch of RPMs exactly defined by the spin. For that, you generate metadata that yum or DNF understand, and they only understand that spin, exactly like now. So you're not losing anything. Of course, you're not getting the flexibility to generate spins, but if you're only consuming a spin — if that's your point of view — your software continues to work. And that's exactly right. The only thing is, you also need to ensure that your repository doesn't have two different versions of things — for example, database 1.0 and database 2.0 in the same spin. That just makes both available.
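Generating that spin-level metadata amounts to flattening: pin exactly one stream per module for the spin, and expose only those RPMs to a non-modular yum or DNF. A rough sketch, with invented module and RPM names:

```python
def flatten_spin(available, spin_streams):
    """Select the RPM set a spin exposes to plain yum/DNF.

    available: dict mapping (module, stream) -> list of RPM NVRs built
    for that stream.
    spin_streams: dict mapping module -> the single stream the spin
    pins, so the resulting repo never carries two versions of a module.
    """
    rpms = []
    for module, stream in spin_streams.items():
        rpms.extend(available[(module, stream)])
    return sorted(rpms)

available = {
    ("database", "1.0"): ["database-1.0-1", "libdb-5.0-1"],
    ("database", "2.0"): ["database-2.0-1", "libdb-6.0-1"],
}
# The spin pins database to stream 1.0, so 2.0's RPMs never appear.
print(flatten_spin(available, {"database": "1.0"}))
# ['database-1.0-1', 'libdb-5.0-1']
```

A name-mangled variant would instead expose both streams under distinct package names; the pinning approach above matches the "one version per spin" constraint discussed next.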
But you need a way to say: if I install database, I want the database from database 1.0. Let's say it's a profile within a spin, effectively. So there will be something called a system profile that decides, for an edition or spin, what the default is. But in your scenario, what I would actually do — if we really get to this idea of the module as the definition of the output artifact — is basically blacklist all the things that were other versions, whatever is not in the default profile. Or you generate metadata per profile and let people choose the exact URL to point to. I think we have a lot of options. But the short answer, basically, as I wrote in the blog post about the Boltron release, is that the idea here is not a green field. We're not building a brand new thing; we're trying to build a new feature set on top of a bunch of things we already have. I kind of refer to it as turning it on its side: we're not deleting it and starting over. We have time for one more question — we have two minutes left formally in this session, and then we'll hold our full discussion. Yeah. Should I start campaigning now to get more disk space for the mirrors? Yes. I mean, the answer to that was yes even without modularity; that's a side effect of this work. Again, it's a policy decision. We can shoot ourselves in the foot and have 50 million different versions of things out there, or we could be sane and have maybe two for some things, but for the bulk of it only one. Because it's a volunteer community, we're not going to be able to maintain every version of everything that ever existed, and I don't know why we would want to. But during the transition period between, say, Rails 3 and Rails 4, instead of saying no, you can't upgrade, we could support both for a while. If we're guided by the popularity of the projects, we might have four different Python versions but only one PHP.
The practical aspect of it is, I imagine we have a number of leaf packages in our 53,000-package set that, if they don't make their way into a module for the F27 server, won't be a part of the F27 server, or of F28. We might have an opportunity to cull some packages that have been abandoned for years, where nobody noticed that the maintainer has actually left, but we're still just rebuilding them and shipping them over and over. And the flip side of that, too, is: do we really need more than one version, as long as API compatibility is guaranteed? We have build gates that can help guarantee things about the source code, and retention policies. But again, it's just policy. Hold on a minute. No, I meant policy as in we just have to set a policy that we will retain those things for the appropriate amount of time, based on the rules we're supposed to follow, without a particularly strong technical impact. All right, Matt, now it's your turn. Do you have any modules in common with content that would otherwise be rebuilt many times? Yes. So this is a conversation that we're going to keep iterating on. In our case, for that shared content, we have our own module, similar to the shared-userspace module, and we've been exploring different ways to structure that; presumably that applies in the build service too. Okay, we are out of time for this one. All right, so let me just close it out for the video. Thanks, everybody, for coming. That's a wrap for this final talk.