I'll introduce myself. I suspect that most of you know me, but I'm Steve Gallagher, general cat herder and misanthrope. Mohan Boddu. Did I get it right that time? Cool. He is our Fedora release engineering lead, and we'll be talking today about how a YAML file turns into modularity unicorns. Do we have a clicker? Say hello, Mohan.

Hello, everyone. Hello, everybody. As Steve was saying, I'm Mohan Boddu. I work as a release engineer for Fedora, and we're here to talk about how you can build modules and how they'll be available in the Fedora repos. The first part is how you build them, and Steve is going to help you out with that.

Alright, so some of you may have come to my earlier talk where I explained why you might want to build a module, so I'm not going to cover that again. Let's assume for the moment that you've decided this is the right thing for your project. So I'm going to take an example that I put together fairly recently. Alright, I realize that this is not terribly legible; we'll have the slides put up later. But this is a relatively simple example of a modulemd YAML file, which is the basic recipe for how you put a module together. This example is based on Hub, which is a tool for interacting with GitHub repos. When we started on Modularity, I'd been maintaining the stable version of Hub for some time. Upstream hadn't done a stable release in well over a year, but they had lots and lots of really new features. So I decided I'd keep the stable release in main Fedora and make a module stream for the latest, just a Git snapshot that I'd release about every month. Great.

The first thing I had to do was create a new branch. At the time that meant fedrepo-req, but now fedpkg itself has the capability to request a new branch for an RPM from releng. So for my Hub RPM, I asked it for a pre-release branch.
It took about six hours' turnaround to get, and that was particularly long; they were busy that day. And I got a new branch that I could commit to. On that branch, just like a regular dist-git branch, you prepare your spec file; you can do fedpkg local and so on and so forth. The only thing you don't do is fedpkg build, because that's going to get rolled up into the actual module build. So you push to the repo, but you don't actually need to do a traditional build. Then you request a module repository in dist-git, and you drop this file into it. This is, honestly, a very simple one, but they don't really get a whole lot more complicated either. It's a very simple format, so I'll walk through it quickly. Sorry, we're having a little trouble with the speakers; I hope that doesn't get picked up on the recording.

Aside from the header information, which just says "this is a modulemd document in format version 2," the mandatory entries are a summary, which is basically the same as an RPM spec file summary, and likewise a description. For the license, there are two kinds of licenses, but you only need to specify the license of the module itself. The module build service will automatically populate the content licenses with the license fields of any RPMs that are built into it, for compliance purposes. The dependencies are the only complicated part of this, so I'm going to do that one last and run through the rest. References are fairly easy: you just point at your upstream resources for informational purposes.
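The header and informational fields just described might look something like this in a modulemd file. This is a minimal sketch with placeholder values, not the actual slide contents:

```yaml
# Sketch of a modulemd v2 header; the summary, description and
# license values are placeholders.
document: modulemd
version: 2
data:
  summary: Command-line wrapper for Git and GitHub
  description: >-
    Pre-release stream of Hub, built from upstream Git
    snapshots roughly once a month.
  license:
    module:
      - MIT            # license of the module metadata itself
    # the content licenses are filled in automatically by the
    # module build service from the RPMs it builds
  references:
    community: https://hub.github.com/
```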
Profiles are sort of like comps groups, except that instead of a cabal of people with commit privilege to comps, you can just say, "I want to have this set of profiles." So let's say my project has a server and a client: I can say that if you install modulename/server you get all the server bits, and if you install modulename/client you just get the client pieces. And the packager gets to define this, which is a significant usability improvement over comps.

API is one of the more interesting concepts here. It allows you to specify which output RPMs from the module build you are treating as acceptable for general use. The implication is that any binary RPM produced by the module build that is not listed under the API is implicitly included and supported only insofar as it is used by this module. One of the classic problems we've had in Fedora is that any time you package a piece of software you care about, you almost certainly have to package three or four dependencies that you don't. This gives us a way to bundle those together in such a way that you can say, "I only care about this for my package, and you shouldn't be using it for anything else."

And then, lastly, the components. These are the things that make up the module itself. The name field references a repository in the rpms namespace of dist-git. The ref is any "commit-ish," if you know the Git term: any branch or specific commit ID in that dist-git repository that you want to build from. Normally this will be a stream branch; in very rare cases it's a particular commit, if you know that something got broken and you need to build against an older version, for example.

So I'll jump back now quickly to the dependency section. The rationale is just a comment, essentially. It's there to remind yourself why you put this particular component into the module. It's mostly useful for dependencies:
"This library is required in order to use this function in the main package," or something like that. It's a useful hint for the future, but it doesn't get considered as part of any programmatic decision making.

Someone asked whether the rationale can be left out. No, it can't, and there are historical reasons for that. It's a vestige of the Modularity 1.0 effort where, since we were trying to modularize everything including the platform, we made it a mandatory field: you had to justify why each component got in there. I suspect we'll drop that when we get to version 3 of this format.

So, the dependencies. The way it's written here will be a little non-intuitive, but I think you'll find once I explain it that it's really handy. The buildrequires and the requires will, in almost all cases, be identical. They indicate that you build against this platform and you run on this platform. In YAML syntax that's an array, and the reason we specify an empty array is that it's a special case for the module build service which tells it to build against every currently active platform. Right now, that would be Fedora 28 and Rawhide; in a little over a week, that'll be Fedora 28, Fedora 29 and Rawhide. Eventually we hope that'll also include EPEL. So this just says, "When I build this, try for everything." If you want to limit it, you just use YAML syntax and specify the specific releases you want to build for. As long as those two things are the same, the MBS will automatically build for each of those platforms separately, and they'll all get pushed out into the repositories. The other feature, which is not quite ready but hopefully will be soon, is that you can specify a different set for each and have it, for example, buildrequire F28 but be installable on F28, F29 and Rawhide, because you know it has no dependency on the actual platform; you just need somewhere to build.
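Pulled together, the sections just discussed might be sketched like this. The package names are hypothetical; only the platform values come from the talk:

```yaml
data:
  profiles:
    client:
      rpms: [myproject-client]
    server:
      rpms: [myproject-server]
  api:
    rpms:
      - myproject-client    # blessed for general use
      - myproject-server
      # anything the build produces that is not listed here is
      # implicitly private to the module
  dependencies:
    - buildrequires:
        platform: []        # empty array: build on every active platform
      requires:
        platform: []
      # to limit it, list streams instead, e.g. platform: [f28, f29]
  components:
    rpms:
      myproject:
        rationale: Main package of the module.
        ref: pre-release    # a stream branch, or a specific commit ID
```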
I would like people to notice that there is no dash in front of requires: the buildrequires and the requires here form a single list item. The comment from the peanut gallery (sorry, Petr) is that there's a little trickiness in the YAML here. This is actually a single entry; you can have multiple entries, but that's a complicated feature that I hope no one ever uses, and that people will get wrong.

Okay, so the question is what happens when you have other modules besides platform in there, and what you do when the buildrequires and requires aren't exactly the same. I'm going to gloss over that in this particular talk because it's a fairly complicated case, and we anticipate that in the majority of cases they'll just match. We'll probably only support the "build once, run in many places" case; I don't think it's likely that we'll try to do the reverse. By policy, I suspect we won't allow it.

So, once you have written that YAML file, that was the hardest part of this process from the perspective of the packager. What remains is fedpkg push and then fedpkg module-build in that module's dist-git repository. In this case you'll see that it submits builds 1978 and 1979, and the reason is that I requested it be built on all available platforms, which was 28 and 29. So that, I believe, is my half of this. Mohan is going to talk to you about how the tagging structure works and how that gets out to a repository.

So, now, how to get those modules into the repos: basically, Koji tagging. How many of you know how normal RPM tagging works in the normal world? Okay, four people. That's great, actually. It's basically similar to normal RPM tagging, with slight differences which I'll go through right now. We have three different life cycles: one is Rawhide, another is branched, and then released.
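To recap Steve's half, the packager-side commands amount to roughly the following. The branch name follows the Hub example, and exact syntax may vary by fedpkg version, so treat this as a sketch:

```shell
# Request a new stream branch for the RPM from releng
# (this used to be a separate fedrepo-req tool)
fedpkg request-branch pre-release

# On that branch, prepare the spec as usual, but skip "fedpkg build";
# a local test build is fine
fedpkg local
git push

# In the module's dist-git repository, commit the modulemd and
# submit the module build
fedpkg push
fedpkg module-build
```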
So we'll go through each of them. First one is Rawhide. Basically, you call fedpkg module-build, and once you do, the build gets tagged into that release's Rawhide modular signing tag, where it waits for signing. Once the module gets signed, which means all the RPMs in the module get signed, it's tagged into the base tag of that release, which is f29 in the current scenario, and the repo is generated every night in our nightly composes. And basically, when I say repo: until about a week ago we had a subpackage of fedora-repos, the Fedora modular repos package, which contained all the module repo definitions and so on. A week ago we dropped that, and now modularity is enabled for each and every variant. So, for those of you who don't know about this: if you're using Rawhide, please go try it out and let us know how it's going. That's how you consume the repos; every night they get updated with modules.

Then we have branched. Branched is a little bit different because Bodhi is involved. How it works is: once you build, it gets tagged into the modular updates candidate tag, where the build is going to sit until you submit an update in Bodhi. Once you submit the update, it gets tagged into the modular signing pending tag, where it waits for signing, and once it's signed it gets tagged into modular updates testing pending (I think I got that right). We release engineers and infra people push updates every day, so once it's in that testing pending tag, when we push updates it gets tagged into the modular updates testing tag, where we generate the repos; those are the testing repos that you can consume. And once it meets the Bodhi requirements, it gets tagged into the base tag, from which the stable repos are generated nightly, again just as with Rawhide.
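The two flows just described can be sketched roughly as follows. The tag names here are my shorthand for the Koji tags Mohan mentions, not necessarily their exact names:

```
Rawhide:
  fedpkg module-build
    -> rawhide modular signing tag    (waits for signing)
    -> base tag (f29 today)           -> nightly compose -> Rawhide repo

Branched:
  fedpkg module-build
    -> modular updates candidate      (waits for a Bodhi update)
    -> modular signing pending        (waits for signing)
    -> modular updates testing pending
    -> [daily releng push]            -> modular updates testing -> testing repo
    -> [Bodhi requirements met]       -> base tag -> nightly compose -> stable repo
```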
And released: for the sake of time, it's similar to branched, but we introduce the modular updates tag as a stable tag. So once the Bodhi requirement is met, it waits in the modular updates pending tag, and once the push is completed it gets tagged into modular updates, from which the repos are generated, and you can consume those as the stable repos.

The difference from normal RPMs is that, as Steve was saying, you can build modules for different releases at the same time, based on what your build requirements are. So the same module-build command can generate multiple builds, each pointing to a different release, and based on the release, each goes through that particular life cycle: if it's Rawhide, it just gets tagged and the nightly compose generates it; if it's branched, it goes through the entire Bodhi process. So that's how you get your repos. That's mostly it, and I hope everyone will try Modularity and let us know how it's working out for you. Questions?

A question about targets: we found that for development it would be good to have at least a target for the last module build, but I think they're pruned currently. Do you know how long they stay in Koji, and could we reuse them for scratch builds? It's a complicated issue, but the specific question is how the module targets are pruned in Fedora. Up until recently, that was a bug: they're not supposed to be pruned, but they were getting pruned. Right now, pruning is entirely turned off for them, and we clean up manually if we know something is gone. It was just a bug; I think I fixed it probably two weeks back, so it shouldn't do that again.
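If you want to poke at these tags yourself, the standard Koji client can show them. The tag names below are illustrative guesses following the pattern Mohan describes; `koji list-tags` will show the real ones:

```shell
# List the modular tags for a release (pattern is a guess)
koji list-tags 'f28-modular*'

# Show the latest module builds sitting in a given tag
koji list-tagged f28-modular-updates-testing --latest

# Inspect a particular build
koji buildinfo <build-nvr>
```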
Yeah, what's the expected result if, let's say, you built against platform 28 and 29 and one of the builds fails? The expected result if you build against 28 and 29 and one of them fails is pretty much the same as if you'd built the RPMs individually against 28 and 29 and one of them failed: the updates don't happen automatically. Well, except for Rawhide. If it fails, you won't have it available to create a Bodhi update for it; you just have to go fix it before you can do that.

Does that mean you have to do another build, or would you first push the working update, then fix it and do another build? The question was: does that mean you have to do another build? Yes. We don't really have a policy on whether or not we want to require you to build them both a second time. You can always choose to modify the modulemd for a single build so that it builds for just the one platform, by changing the buildrequires and requires fields. So if you know that there's no reason to rebuild one of them, you can do that, and then revert that change after you've done the build.

What is the policy about using a branch name in the modulemd? Do I need to do a new commit to pick up the new content of the branch? What's the policy behind reusing the same branch name? Oh, this is not about policy; this is a technical question. What he's asking is why he needs to commit to the module again when he just updates the RPM. So the question is: why do I have to do another commit to the module repo, when I make a commit to the RPM's repo, in order to rebuild it? That's a question better asked of the MBS maintainers; I'm not entirely sure. But when the MBS does its build, it takes that branch, goes and looks up the real commit ID, and saves it as part of the build. The reason for this is that if you have test flakes or issues where the build fails, it's recoverable: you just try it again, and this
ensures that it actually rebuilds the same thing it was trying to build before, even if you've made another commit. They don't want to change things out from under you while you're doing that, so it requires you to make a new commit in order for it to go look up the latest commit ID and see if it matches. It's a really esoteric technical problem. The question from Langdon was: didn't we push a change to make sure that wasn't the case anymore? And the answer was no. Let's take that discussion to the expert help desk in the afternoon.

So the question is: if you build something that's available only as a module, and not in the standard traditional RPM repositories, will a user have to go and enable the module in order to see it? The answer is that it depends on whether or not you have requested a default stream, and this is something we probably should have covered. You can pick one stream of a module to be the default on this platform and on that platform, and then it will simply show up, similar to if it were enabled. There's only one minor technical reason why it's different from being enabled: defaults are not visible in the buildroot. You always have to explicitly state a module in your buildrequires if you need it in order to build.

So the question is: can I have a non-module RPM depend on something that's in the default stream of a module? Technically you can; by policy, we say no.

The question is: can a module depend on another module in buildrequires? Yes, absolutely. I also kind of glossed over this: when that list starts getting long and divergent, the number of MBS builds that gets fired off gets large. That would be one of those cases where you might see more than one list entry: if you know that it depends on this version stream in this release and that version stream in that release, you can actually dictate that in the file format. But we are out of time, so I'm definitely not covering that today. Apparently we have a different clock, so I had time to say thanks for coming.
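On the consumer side, the default-stream behavior discussed above looks roughly like this with DNF's module commands. The module and stream names follow the Hub example and are assumptions:

```shell
# See available streams; a default stream is marked [d]
dnf module list hub

# With no default stream, you opt in explicitly
dnf module enable hub:pre-release
dnf install hub

# Or enable a stream, pick a profile, and install in one step
dnf module install hub:pre-release/default
```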
Thank you very much.