Hello everyone. I hope you're having a good time with DevConf so far. I'm Ucie and I'm the second session chair for this room. I would like to introduce you to the next talk, which is What's New in Modularity by Martin Curey. Please enjoy this talk and welcome Martin.

Welcome and hello to my talk, What's New in Modularity. I'm Martin and I will be talking about what has happened in the modularity space in the last year. Before we start, I want to introduce the team working on modularity in Red Hat. There is me, the software developer and product owner of modularity. Then we have Petr Pisar, who joined our team last year in April. Before that he was the maintainer of Perl, and he already had a lot of experience with modules; now he is the maintainer of libmodulemd. Next is Philip, a software engineer in modularity who works mostly on infrastructure-related things. Before modularity he worked in Factory 2.0, the Red Hat team which created the microservices in the infrastructure for building modules, containers and so on. With this I also want to thank all the people who are involved in modularity and who enabled us to do the things that we did. We are very grateful for that.

So let's get into it. Just a disclaimer: this will not be very newbie friendly. I assume that you know what a module is, how modularity works and so on.

In the last year we created a new version of the modulemd-packager format. We needed to do this to fix a design issue in modularity: the broken upgrade path problem. Let me make a small intro to explain what that means. This is a simple example; the upgrade path problem has many variations and can be more complicated than this, but to illustrate the point I made the simplest example I could. The table represents a module and each column represents a stream. Each field in the table is a build of that stream, for example perl, stream 5.24, version 001, with its context. Going from the bottom up, the versions are 001, 002, 003, and that represents an upgrade path.

The first stream, 5.24, is okay; it works, there is no problem, because nothing has changed except the version, and the version determines the latest build which should be used when DNF makes its transactions. The next stream, perl 5.26, is okay up to version 002, but in version 003 you can see the context changed. The old way modularity worked was that every time your modular dependencies changed, a new context was generated by MBS, the service which builds modules. That is a problem, because if the context changes, DNF has a really hard time identifying the next upgrade for your stream when doing its transactions. So 5.26 version 001 is fine, 5.26 version 002 is also fine, but when DNF wants to identify the next upgrade, version 003 with its different context looks to DNF like a different stream, so it cannot put the two together. DNF either couldn't identify what to upgrade to next and upgraded your stream wrongly, or it didn't upgrade at all, because it couldn't put those two contexts together.
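To make that concrete, here is a small illustrative sketch of how DNF sees the broken stream from the example; this is not real Fedora data, and the version numbers and context hashes are made up:

```yaml
# Hypothetical builds of the perl:5.26 stream (name:stream:version:context).
# The first two share a context, so DNF can follow the upgrade path 001 -> 002.
# The third build got a newly generated context, so DNF treats it as a
# different stream and cannot offer it as the next upgrade.
- perl:5.26:001:6c81f848
- perl:5.26:002:6c81f848
- perl:5.26:003:9edba152   # context changed -> upgrade path broken
```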
So what did we do? One reason why the context was generated is something called module stream expansion. Module stream expansion is an automatic way of creating the modular dependency combinations for you: you don't have to spell out which modular dependency streams you need, you just leave it empty and module stream expansion will do it for you. That is why, every time those dependencies changed, the context changed as well; the context was dynamic, it changed at build time, and the packager didn't have any control over it.

So what we did was replace the whole context generation with a static context. A static context is tied to a configuration: the context is defined by the packager before the stream goes to MBS to be built. You specify the name of the context, which is an arbitrary string; it has some restrictions, but you can read about those in the specification of the format. This string should stay the same for the lifetime of that module stream context, because if you change it, DNF will again see the same situation as with the old way of doing things and there will be a broken upgrade: it will think of it as a new module stream context combination. So yes, the context is now set by the packager.

Why did we do a new version of modulemd-packager? Because the changes are not backward compatible, so we needed to bump the version. What also comes with this change is that you now have different metadata for input and different metadata for output. modulemd-packager version 3 affects only the input file: in the input file you specify the configurations you need, one for each context, so that your module streams can be built. The output that ends up in the RPM repository, in the repodata, is still modulemd version 2 metadata. Before this you put modulemd v2 in and got modulemd v2 out; now you put modulemd-packager v3 in and get modulemd v2 out. And v2 didn't change a lot; the only addition is that modulemd v2 now has a field called static_context, set to true, so DNF can identify that the context is not generated but static.

Let me check that I didn't forget anything. Yes: this wasn't really easy to do, as modularity is implemented in a lot of systems in the pipeline, so it took a long time to actually get it into the pipeline, about nine months. The main problem was that we didn't have a lot of people to do it, and additionally we needed to check that everything works as expected, because this was a big change. And here you can see the fixed upgrade path, for example.

An additional thing we added to modularity, one of the features which was missing from the start, is removing packages from modules, or as we call it, demodularization. Don't be confused: by demodularization we don't mean removing modules, just removing the modular RPMs from a module. So, demodularization of packages.
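As an illustration of what the input and output look like, here is a minimal sketch based on my reading of the modulemd-packager v3 and modulemd v2 formats; the module name, streams, dependencies and the context string are made-up placeholders, and most fields are omitted:

```yaml
# Input: what the packager writes (modulemd-packager v3).
document: modulemd-packager
version: 3
data:
  summary: Practical extraction and report language
  description: >-
    Hypothetical example of a module stream definition.
  configurations:
    # Each configuration defines one context, named by the packager.
    # The name should stay the same for the lifetime of that stream context.
    - context: maincontext        # arbitrary string chosen by the packager
      platform: f36
      buildrequires:
        libxml2: [main]
      requires:
        libxml2: [main]
  components:
    rpms:
      perl:
        rationale: The main interpreter.
        ref: "5.26"
---
# Output: what ends up in the repodata (still modulemd v2), one document per
# built context; the generated fields (artifacts, licenses, ...) are omitted.
document: modulemd
version: 2
data:
  name: perl
  stream: "5.26"
  version: 3
  context: maincontext
  static_context: true            # tells DNF the context is static, not generated
```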
Right now we have a demodularization section in the metadata where you can list the RPMs that you no longer want to be provided by your module, so that the non-modular counterparts are used instead. There is not much more to say about it. This is already released in Fedora; in RHEL it is only in RHEL 9, which is not released yet.

Okay, next: modular obsoletes. This feature was a long time in limbo, or in development, you could say. The implementation of the support for obsoletes and end-of-life metadata in the tools that use it, which are DNF, createrepo_c and others, was done quite a long time ago, but again, the problem was implementing it in the pipeline itself. The development work on this feature is now done; the last piece we needed was finished about two weeks ago, and now we are testing it. We also have real-life data for obsoletes and EOLs for the testing.

What are obsoletes? This was another feature missing from modularity. Basically, you can add metadata to the distribution, to the RPM repository, which tells which module streams obsolete which other streams, or when a stream is end-of-life. The way you add your obsoletes metadata to Fedora, or to another distribution, is similar to default streams: you just add the metadata to a git repository; the link is in the presentation. It then gets taken into the repodata during compose time. The specifications are bundled together with the modulemd ones, so you can check them after this.
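For illustration, here are rough sketches of both kinds of metadata as I understand the formats; the names, streams, dates and messages are placeholders, and most fields are omitted:

```yaml
# Sketch of demodularization: RPMs listed here are no longer provided by the
# module, so the non-modular packages of the same name take over.
document: modulemd
version: 2
data:
  name: perl
  stream: "5.26"
  # ... other fields omitted ...
  demodularized:
    rpms:
      - perl-generators
---
# Sketch of a modulemd-obsoletes document: marks a stream as obsolete and
# points DNF at the stream that replaces it.
document: modulemd-obsoletes
version: 1
data:
  modified: 2021-06-01T00:00Z
  module: perl
  stream: "5.24"
  message: perl 5.24 is no longer supported, please switch to 5.26
  obsoleted_by:
    module: perl
    stream: "5.26"
```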
The last thing, which was also a highly requested feature that modularity lacked, is building modules locally. I know we already have a way to build modules locally, but it is through MBS, and that means you need to know how MBS works. MBS is a service, and it is not a very easy system to understand if you don't know all the concepts of modularity. Also, MBS builds modules the way they are built in the pipeline, which has some quirks and hacks that I would say are not really the best thing you can have.

So we wrote a tool called module build. This tool is quite fresh; the first proof-of-concept version was created last year in November, so it is still new. The benefits of this tool right now are that it has minimal dependencies for installation: when you installed MBS, you got dependencies on Flask and other things that you didn't need for building modules, while the only dependencies this tool has are libmodulemd, mock and createrepo_c, plus two small Python libraries that enable PyGObject. The other benefit of the local building tool is that it doesn't need the quirks or hacks that MBS needs. One of those is module-build-macros, which exists because Koji doesn't know how to work with modular repositories: module-build-macros carries the information needed to filter out all the unnecessary RPMs from your buildroot. This is not needed in the module build tool. Also, virtual module streams, for example platform, are not needed when you are building locally, because you, as the packager, are providing the environment: you provide the mock config against which you want to build your module. It is closer to the traditional way of building RPMs.

One more thing: right now you need MBS to have the information about which packages should be built first. It arranges the packages into build batches which are then forwarded to Koji as normal builds, because Koji doesn't really understand how modules work internally and how they should be built. This tool can do the same arrangement into build batches, and when it is building, it uses one finished build batch as a modular dependency for the next batch.

Hello, just to remind you that you have two minutes. Okay then, I will speed this up. So please, this is a new tool, it needs a lot of testing and a lot of love, and I am still working on it. I already have the package for Fedora; it is in Fedora review, so I hope I will be able to release it in Fedora for installation in the next two weeks. Also note that the module build tool only works with the new modulemd-packager v3 metadata. And I think that's all. So that was quick, a quick ending. Thank you.

Thank you for your talk, it was really interesting. If you have any questions, please head to the Q&A tab and ask there. Martin will also be present at our virtual event. Yes, it's called Work Adventure. After this meeting I will be staying outside of this room; there are virtual rooms like Room 4, so I will be staying outside of it and we can have a chat. The link is already in the chat. Please go ahead and look at it, it's wonderful, I've just been there. If you have been at an offline DevConf, it's like the old venue, it looks so cool. Just go ahead and talk to your favorite people.

And we already have one question for you, from Zbigniew: do you have any modules in Fedora? Yes... basically, no. If someone is interested in this technology and wants to try it, you can, but there is no mandate that you have to, or anything like that. Okay, thank you for the answer. If you want to talk more, just go to the virtual venue. And there is another question, from Jens: previously it was quite a strain to maintain large packages under modularity; will that become easier? For example, the build system was not very predictable and sometimes modules were fragile. Yes, I can speak a little bit about the plans for the future. One of the next biggest problems we have, and this is not only about the small problems you hit when you are packaging something, is that the build pipeline is basically not up to the standard it should be, because, as I mentioned, Koji doesn't know about modules; it's just a little hack that you put the builds there. So what we are trying to do next, this year, is to make Koji aware of modular repositories so we can build modules directly in Koji. But this is still in the planning phase, it is not something we already have. I hope this will make the packager and user experience better.
That sounds cool. Hopefully everything will go well. We don't have any other questions, so thank you very much, Martin, for your talk. Please have a look at the Work Adventure and meet with Martin there if you have any other questions. I will see you in the next session, have a nice day. You too. Bye.