Hello. Hello there. Morning. Did you get enough sleep? No? Okay. This is Zbyszek, I'm Miro. We work at Red Hat. We are also members of FESCo, and we have some strong opinions about modularity, and I think you know that, otherwise you wouldn't be here. This talk is called Alternatives to Modularity. We are not here to give our opinions to you and then go home and everybody will be happy. This slot has an hour and a half, and we expect everybody to discuss after what we say. If you feel a need to throw things at us, please do. It's fine. The thing we say about modularity is that it brings a lot of benefits, and we recognize them and we see them. We are not fanatics. People say that we have the too-fast/too-slow problem and that modularity helps to solve it. People say it helps to solve independent life cycles. It provides us multiple versions of RPMs with parallel availability, not parallel installability. It also brings us unsupported RPMs and buildroot-only RPMs, so you can build your stuff with stuff you don't ship to your customers, which makes perfect sense in some scenarios. It also gives you easy rebuilding for multiple Fedora releases, or releases of anything. Easy chain rebuilds, and installation profiles. Possibly it also gives you more, but those are the things we identified as what people who know modularity think modularity is about. But if we want to be more official, there is this objective; this is why we all do this in Fedora. It says: modularity will transform the all-in-one Fedora OS into an operating system plus a module repository, which will contain a wide selection of software easily maintained by packagers. We would like to achieve that goal without the stuff we have now, without the modularity stuff. In order to do that, we first need to remove the modularity words from this goal, which gives us: transform the all-in-one Fedora OS into an operating system which will contain a wide selection of software easily maintained by packagers. Wait. We have been doing this since day one. Are we good at it? We don't know. We try to be. Is it getting easier to maintain those packages? Sometimes. Sometimes it's getting less easy, or even harder. But we try to make this happen. And suddenly a lot of things fall out. Zbyszek will tell you what modularity actually is on the technical level, because when we ask what modularity is, we usually just talk about the requirements and the features. But it actually is something. So it's a big thing that contains many, many parts, and I think it's good to consider them one by one. The most obvious part is the modulemd language that defines modules. This is what packagers interact with. And then there's the server-side infrastructure we have, to take those definitions and use them. Those two parts are big, complicated things that have to be written and have to be maintained and so on. Once we build stuff, we deliver it to users. This is something that is visible to users, and it requires additional support in the infrastructure. And we also need to adapt pretty much everything that Fedora is. Koji needs to know about modules. Bodhi needs to know about a new type of updates. Lots of requirements are put on DNF: it must understand modules, it must report conflicts, both modular conflicts and package conflicts, and so on. In dist-git, we have a new namespace. Pungi also needs to understand modules to compose the deliverables. This must be mirrored. This must be maintained by releng.
In fedora-release, there was a plan to define the platform version, so even fedora-release is impacted. And on top of those changes in the tools, we need to update policies. We need to create new update policies and teach our developers how to follow them. We need to teach developers how to do things in this new scheme. And in the end, users also need to learn this new stuff. Modules are not transparent, and they cannot be transparent, because the way that packages are chosen for installation and upgrades is changed. So even users need to learn those new things. So, we don't want to rant. We want to talk about solutions which build incrementally on the stuff we already have. And as far as possible, those solutions are localized, in the sense of impacting just some parts of this long list of things I listed here, so they are easier on both our developers and our users. And they're also probably easier to develop. In order to talk about this, we think it's important to actually look at the modules we have in Fedora, because a lot of people don't know what modules we even have. Unfortunately, it's not trivial to figure out. But that's an implementation problem, not a design problem. And this is what you have in Rawhide when you do dnf module list. We split it into several categories. Can I have the clicker back? Thank you. So, we have a lot of modules that have one stream, one package. When we say one package, sometimes it's two or three, but they are very tightly interlinked with dependencies and they basically behave like one package. Some of them are alternate versions of what we have in the non-modular repositories. Some of them are the same versions as what we have in the non-modular repositories. Some of them are only available through this one-stream, one-package module. We also have a lot of multiple-streams, one-package modules. So you have different versions of MariaDB, different versions of Avocado. You can even have different versions of Subversion for some use cases. Then there is the Java-stack kind of module. For those of you who are not familiar with these, they basically include the whole Java dependency tree of the thing they are named after, except Java itself. They try to have everything for themselves, which is a good argument from a maintainer perspective, but technically there have been a lot of issues with conflicts with non-modular content and stuff like that. And there is one thing that's different, and that's the Perl module, which is a language stack. The difference is that it doesn't include the dependencies of Perl and then Perl as the final product. It contains Perl and a lot of Perl modules, as an ecosystem, and you can then switch between different versions of Perl, including the packages that have been built on top of it. It's special because only Perl is doing it, but you could do it with other languages as well. We are just not currently doing it. And there is one module, perl-bootstrap. We just assume this is something they use to build the Perl module. But yeah, that's true. So, if we go back to the goals, the list of things like too fast, too slow and so on, we try to propose some alternatives. The first thing is the too-fast/too-slow problem. Obviously this is a meta problem, because we try to solve it through the independent life cycles and the multiple versions of software, so we could just point there. But there are some important things to consider here. There was another talk here at DevConf.
This is a slide taken from that talk. It has an interesting observation, which I understand as the following: in order to want something to be fast, I need to be deeply involved or interested in that very thing, and that makes me a different kind of user of that thing. A specific example is Python. As a Python developer, I care about the new Python version. I wait for it. I need to support it with my library, I need to test it, or I need the new features so I can actually start using them. But on the other hand, I am a Python developer and I know how to handle Python using the tools that Python provides. So all I need is the interpreter and I'm fine. I don't need DNF to run on this new Python version. I don't care, as long as DNF works. And DNF is just an example of something that's written in Python and important for me. On the other hand, if I am not a Python developer at all, I am not interested in the new Python version so much that I would need to do something specific. I don't mind waiting for the next release. So there is no particular need, in this very example, to have, I don't know, different versions of DNF built on different versions of Python through some stream expansion. That is something that might be useful for DNF developers, but they can run their DNF tests without any RPMs. Independent life cycles are one of the things we want to use to solve the too-fast/too-slow problem. Right. And I mean, the idea sounds great in principle: we build new versions and we deliver them to users, and users pick when they want to have the new version. But in practice, this might work for leaf packages; it doesn't work for packages that interact with other packages. And it doesn't even really work for other stuff. Again, this is like the previous slide: if this is the stuff that I deeply care about, I am fine with the new version. But if this is just some part of the OS, I don't want to come to my desktop one day and see a new version of GNOME running, if it's much different. In Fedora, this problem has been, well, seen forever, and we have a very specific set of rules and guidelines. You are not supposed to do version bumps, when the software changes in any significant way, at an arbitrary time. You can only do it at a release boundary. And I mean, this is the thing that allows us to coordinate changes in the distro. This is the thing that allows users to have some stability and know when they will get a new system with possibly annoying or incompatible changes that are significant. And if something is delivered as a stream, as part of the distro, the same rules need to apply. And in fact, this has been discussed and voted on by FESCo: you cannot obsolete streams at any point in time, you cannot do significant version bumps at any time, you must coordinate with the releases of Fedora. So this is for those packages that interact with other packages. But then we have the special case of stuff that either is backwards compatible, or where the user could opt in to having a rolling stream and version updates at a different time. And in fact, we already do this. I mean, the policy discourages it, for good reasons, but in those cases where it makes sense, we do it. So Firefox is essentially a rolling package, for two reasons. One is that it tries to maintain backwards compatibility. And second, it's a very big project that releases often, and backporting any significant number of patches to older versions
is just not feasible, and it's better to have users get new Firefox versions even in stable releases. The same is true for the kernel, some antivirus software, possibly some other packages. And if we want to have a package that is rolling, we can achieve this already in our current system. If I have Node.js and I want, I mean, I need new updates: if a nodejs-rolling compat package, a second package, was built, users could have access to the rolling stream, the rolling version of the package. This is just a question of policy and naming. Technically, we can do this. So, when we say alternate versions of packages, like newer versions, older versions, rolling streams or whatever, this has been brought up multiple times on the mailing list. People repeat this: we have compat packages. And the arguments against compat packages, and for those of you who don't know what that means, there will be an explanation on the next slide, are that people are often afraid of them. They are annoying to create. I wanted to say annoying to maintain, but in the end, once you have them, the maintaining is the same as for any other package. They are annoying to create. And people say they are hard to discover. So it's hard to tell what Node.js versions there are in Fedora, from the user perspective, if some of them are in compat packages and some of them are in normal packages. So, a compat package is a package that has something, usually its version, or a version of some stack, in the name of the package. This is from the guidelines. We needed to put some fancy stuff in, so here's a regular expression. But there are some examples. So you put the number 0.5 in the package name, so you can have the python-sqlalchemy package and a different python-sqlalchemy compat package that have different names and don't collide. You don't need to put a number in there; you can put a feature, stuff like stable or rolling, or a different kind of malloc, or whatever. What's important, and people often forget it, is that there is no package review required when you introduce a compat package. So there are no roadblocks. If you have, I don't know, a python37 package and you want to introduce a python38 package, boom, you just run fedpkg request-repo with an exception. There is nobody who would actually check whether you qualify; they just create the repo for you. But if you follow the rules, you can do this. The good thing about compat packages is that they may or may not be co-installable. So, for example, what usually happens is that if you have a compat package for OpenSSL, which ships the .so file with a different SONAME version, the runtime packages can be installed together, while the -devel packages conflict with each other. Or you have different setuptools built for Python 2, one Python 3 version, and another Python 3 version; if they are named differently, they install to different locations and everything is fine, they don't conflict. But you can get conflicts, and when we go back to the goals, the goal was to have this parallel available, not parallel installable. So conflicts are fine as long as they are explicit, as long as DNF tells you: hey, you can't have this Perl and that Perl at the same time, you have to choose. Then we are still getting where we want to be, while on the other hand we have the benefit of using the same mechanism for stuff that actually can be co-installable, like the Python packages, for example.
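To make that co-installability point concrete, here is a minimal spec sketch of the OpenSSL-style compat case just described. The package name and version are made up for illustration; Fedora's real compat packages may be structured differently:

    # hypothetical compat package shipping an older libssl/libcrypto
    Name:     openssl10
    Version:  1.0.2u

    # the runtime subpackage is co-installable with the regular openssl
    # package, because the shared libraries carry a different SONAME

    %package devel
    Summary:   Development files for openssl10
    # both -devel packages want /usr/include/openssl and the unversioned
    # .so symlinks, so the conflict is declared explicitly
    Conflicts: openssl-devel

This way DNF can offer both versions side by side and, if you ask for both -devel packages, report a plain, explicit package conflict, which is exactly the "parallel available, not parallel installable" behavior the goal asks for.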
While within modules, even if this is technically possible, as long as you put it in the same module as different streams, you just can't. Then there is the discoverability problem. So yes, you can list all the packages. You can do dnf list with nodejs and a wildcard and it gives you all of them. This is cheating, because if you try to do this with Python, you also get all the Python packages, not just the Python interpreters. So yeah, you can discover the packages, but it's not very nice, and we'll get back to that. One of the problems with compat packages I mentioned is that people don't want to create them. It's hard. Or tedious, let's say that. And we want to make this easier. There are a lot of people who try to make packaging easier with various macros and dark rituals and stuff like that that we put in Fedora, so we have this automatic generation of BuildRequires and so on. And we are getting there; it's getting easier and easier. So, for example, you can have something like this. This is a Node.js package that has a conditional macro in its name. And then you have the Babel package, which is a Node.js package. I think somebody just said: isn't this Software Collections or something like that? We'll get to that. The idea here is that you define, or don't define, the mod_version thing, and I use "mod" for the lack of a better term, like module, in this case, but it could be anything. When you define this macro in the buildroot, you get a different build artifact, which is named differently. Otherwise, you maintain the same spec file, ideally in one place, or in two places if that's not possible. All right. So this is a mechanism to build both the compat and the non-compat package from the same source; a sketch of how this might look follows in a moment. What's important when we do this is to make sure that the automatic RPM provides and dependencies actually encode the version this was built for. So, for example, in Python, you can have python3dist(setuptools) as a provide, or python3.8dist(setuptools), and we now provide both. But we make sure we require the more specific one at runtime. And this is good because then the conflict happens early. It will tell you: I need the specific version of Python for this library. Whereas if there is something like "I just need the Python 3 version of setuptools", and it tries to get the other Python 3 version, because we can have multiple, the resolution might end up somewhere totally different, and then the error for the user might be very, very hard to understand. But even that is possible. This is good because you get conflicts when necessary, versus dependencies hidden by design, which is the "this has been excluded" thing. You should never get a message saying something has been excluded when you didn't exclude anything. Here you get "these can't be installed together", which is kind of better. Not the best, but better. This looks a lot like Software Collections, because you need to put the special macros in the names and requires everywhere. But what's important is that you don't move the files to /opt, and you don't need to activate anything. It just happens to be there. And if you can install things together, good. If you can't install them together, it's also good. Not that good, but also good. And the discoverability problem: this is what we currently have with the module implementation in DNF. So you list modules and it tells you: Node.js, we have this stream and we have that one, and you can pick. This is actually very nice. Thank you.
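Going back to the conditional-name trick from a moment ago, here is a minimal sketch of how it might look, assuming a hypothetical mod_version macro injected into the buildroot; the actual macro and package names from the slide may differ:

    # nodejs.spec -- one source, two artifacts
    # with %mod_version undefined  -> builds the package "nodejs"
    # with %mod_version set to 10  -> builds the compat package "nodejs10"
    Name:    nodejs%{?mod_version}
    Version: 10.19.0

    # a dependent package (hypothetical nodejs-babel.spec) uses the same
    # macro, so it is built against, and requires, the matching interpreter
    Name:          nodejs%{?mod_version}-babel
    BuildRequires: nodejs%{?mod_version}-devel
    Requires:      nodejs%{?mod_version}

Build the same sources once with the macro undefined and once with it defined, and you get nodejs plus nodejs10, and nodejs-babel plus nodejs10-babel, from a single set of spec files.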
We would lose this with compat packages, because if you have a package that's called nodejs and a package that's called nodejs10, DNF doesn't know that this is the same package in a different version; we tricked it into thinking it's a different package by renaming the package in the first place. Well, we could solve this quite easily using virtual provides. So, there was a question. We have this idea that we will go through our slides and then we will return, so please keep the question. We want to encourage discussion, but we figured that if we allow people to discuss after everything we say, we might never get to the end. So we will try to get to the end and then go back through the slides and take questions like that. So, if nodejs and the others provided something like alternate(nodejs), and this is just a crazy idea, and naming is hard, and blah, blah, blah, then we could have a tool that finds all the alternate versions of this package and shows them to you. It could either be implemented in DNF, or somewhere completely different if we think DNF already has enough work to do; we could implement a separate tool. The idea is you would list alternatives, or whatever, and it would just give you the package nodejs and all the other packages that provide alternate(nodejs). This should be an easy task to do, but we admit that it's not there yet. So, for unrelated reasons, people have been working on something that is very useful for this subject, and that is buildroots, right? We want to test packages when they're built in Rawhide, and we want to gate them; we want to admit them to Rawhide only if they pass some sort of tests. For a while we had this for single packages, but we didn't have it for the case of multiple packages that depend on each other and need to be rebuilt together, so that only the resulting set can be tested. And to do this, we implemented something called side tags. So you make, well, you send a simple request, and a separate buildroot is created for you. We had this idea for a long time, but it was always manual; it was done through a ticket in the releng tracker, and some human had to go and click some buttons to make it happen. Now this is automated. And this gives us this beautiful ability to build packages, like we do with streams, in this custom environment that we can possibly, well, make different, for example by installing some specific versions of packages, macros, whatever. And this is a slide for Igor, who is not here because his kid is sick, so I think we should just skip it. I mean, the important part is that we request a side tag, we have some scripting that is particular to Rust packages and understands the Rust metadata, this scripting spits out a list of packages that need to be built in a specific order, and then we just tell fedpkg to build those packages in this custom buildroot. And once this is all done, we create a Bodhi update. So this already works. The availability of side tags as a general thing was announced a few days ago. What's important to say is that this looks complicated, and we can't just have every packager write scripts like this, because that would be a step back, and we realize that. We just try to show that we have the low-level tools to achieve the goal, and then we can build something on top of them. And it's also very good to realize that the number of people who actually need to build packages in a particular order is very limited.
There are packagers, I see some of them here, who have to do this regularly. But if you look at the contributor base of Fedora and the number of packages we have, most of the community contributors don't ever need to do this. And we shouldn't spend all our energy on the use cases of the few; we think we should rather spend our energy on the use cases of the many. And I don't mean users, I mean contributors in this case. But this allows every experienced packager to hack around, to create something that works for them. And then maybe gradually we can make it a proper general-purpose tool, but we don't have to. Yeah, I don't know what's on the next slide, so I don't know who will be talking. Yeah, I can say that. So, this allows us to create an RPM macros binary package, either as a subpackage of one of our interpreters or something, or even a standalone package, create some macro definition that defines the mod_version macro or something like that, build that into the buildroot, tell the buildroot to install it by default, and then build the dependent packages on top of it. It still requires a manual step, which is tedious, but it's technically possible. And if there were a way to just tell Koji to build the package with a particular macro set to a different value, you could even avoid this step. I know Koji knows how to do this internally; I'm just not sure whether it's possible from the outside, or whether you need to be an admin to do something like that. There's a comment from the audience: when MBS builds something and wants to change the macros, it installs the macro files in the buildroot; it doesn't just tell Koji to set this macro, right? So, just to have it on mic: MBS does this internally for you in basically the very same way, but you don't have to deal with it yourself. Thanks. I can say that. One thing that people have been using, especially in some of the Java modules, is private build dependencies. I think it was David Cantrell who called it bundling, and I quite agree that this is bundling, only much worse. And it's perfectly sane in the enterprise world to say: in order to ship this software, we need that other software, but the other software is hard to maintain and we don't want to sell it to customers or give them support or whatever, so we will protect them from this unmaintained software. But in a good, healthy community project, we think this is something we should never allow to happen, especially when it has leaky containment. If there are situations where you can actually install the package, whether by design decision or by a bug, and bugs happen, then you end up with a package that you installed through Fedora, but nobody actually considers it supported, and you will get known security holes in it and nobody will care. But we acknowledge the technical need for this, for example in the enterprise, so thanks. This is still technically possible, even in Fedora, with the on-demand side tags, and it's actually one of the easiest things to do. You build a dependency in the side tag, you build the dependent package in the side tag, and then you untag the dependency from the side tag. The only problematic part, of course, is to prevent the dependency from being accidentally rebuilt in regular Fedora, but we can block things in Koji tags and so on. So you can actually prevent a package from ever being built into the regular release and only have it available from the side tags. But as said, we would like to avoid this at all costs.
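As a rough command-line sketch of the side-tag workflow just described, including the private-build-dependency variant: the package names here are hypothetical, and the exact flags and the untagging step may differ in practice (untagging may also go through releng):

    # request a custom buildroot (side tag) based on Rawhide
    fedpkg request-side-tag --base-tag f32-build
    # suppose this prints: f32-build-side-1234

    # build the private dependency there, wait for the repo to regenerate,
    # then build the package that needs it
    cd somedep && fedpkg build --target f32-build-side-1234
    koji wait-repo f32-build-side-1234 --build=somedep-1.0-1.fc32
    cd ../mypkg && fedpkg build --target f32-build-side-1234

    # untag the dependency so only mypkg ships
    koji untag-build f32-build-side-1234 somedep-1.0-1.fc32

    # submit what is left in the side tag as an update
    bodhi updates new --from-tag --notes "update" f32-build-side-1234

Again, this is the low-level mechanism; the point is that higher-level tooling can be built on top of it, not that every packager should type this by hand.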
So, another thing that has been discussed in Fedora is the removal of the need to have changelogs and release bumps in spec files. The reason why this is related is that those changelogs and release bumps, which are done manually by the packager, are super annoying, because when you want to build a specific version of the software in multiple releases, chances are that the only thing you will need to touch is that release number. So there's this busywork of updating the spec file with completely uninteresting things. And the second thing is, if you have a few branches which you build for a few releases, and they are different but you make some self-contained change, if you want to cherry-pick this patch between branches, it would usually be quite easy, but you get conflicts, again in the changelog and in the release number. So if we do what is being discussed, if we make those things handled automatically by our tooling, then suddenly it becomes much easier either to have this case of a single version of a package built in multiple buildroots, or to maintain multiple branches that differ a bit and copy changes between them. When this is implemented, it obviously improves packaging for all packages, because everybody needs to do this and it's annoying. But it also removes some of the push towards streams, because suddenly it becomes much less annoying to take the same thing and build it multiple times. If we remove those manual parts that we need to update, a simple for loop around fedpkg build with a target is enough to have a single package built for multiple releases; there's a sketch after this passage. When we talk about building the same thing everywhere, there are two variants that need to be considered. I'll call the first one "build once, ship everywhere": we do just a single build and push it into multiple releases. The second variant is when we take a single source package and rebuild it as many times as needed, and each target gets a different rebuild. Those two things seem similar, but they are quite different. The second one is already happening. Many packages just have a single version that is backwards compatible enough, so their maintainers rebuild it in a loop over all targets and submit updates for all Fedora releases that are active at any given time. The first variant would be to build a single package and just tag it into multiple Koji tags. The reason we don't want to do this is that, to make the binary build compatible with all the releases, it must always be built on the oldest release. In the particular case where we would like to have the same package built for Fedora and EPEL, we would always have to build packages on EPEL and ship them to Fedora. This means that they must follow the lowest common denominator of all features; they must use the old compiler, old libraries, and in fact most likely such a package wouldn't even be compliant with our packaging guidelines, which require us to use specific build flags. Technically possible, but not wanted. It might sound interesting to build stuff on EPEL 8 right now and ship it in Fedora, but it will be different in ten years. If you look at the packaging guidelines that apply to EPEL 7 packages and at what we have in Fedora, you realize it's completely different, and we think that building stuff on EPEL to ship it in Fedora goes against the Features and First foundations, and we just don't think it's a good idea.
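For completeness, the "rebuild everywhere" loop mentioned a moment ago, assuming the changelog and release bump are generated automatically so the branches can stay identical; the branch names are just an example:

    # rebuild the same package for every active Fedora release (sketch)
    for branch in master f31 f30; do
        fedpkg switch-branch $branch
        git merge master        # trivial: no changelog/release conflicts left
        fedpkg push
        fedpkg build
    done

Each iteration is a separate build against that release's buildroot, which is the second variant. The first variant, build once and tag everywhere, would skip the loop entirely, but inherits all the lowest-common-denominator problems described above.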
But yes, for third parties, for example, this is an excellent thing to offer. We just don't want to ship Fedora packages this way. So, okay. One of the other goals that we identified on the first slide was to make it easy to do chain builds. We have some support for this in fedpkg, but it's slow and annoying and doesn't work well, and modules do make this much easier. You can have a long list of packages to build in a specific order, and the module build system will do the right thing. This is defined in the modulemd YAML: we have a list of packages, we have some macro overrides, and we have the build order, and actually this is pretty much what modulemd is; all the other metadata is not really interesting. We already covered how we could inject macro overrides into the buildroot, so here I'm just talking about the part that is the list of packages and the build order. When we are writing such a YAML file, we need to put the build order there. How do we define this order? Well, for non-trivial cases there is tooling to do this. There was this example of the Rust packages a few slides back; it doesn't really matter how the Rust tooling figures it out, but it takes the package-specific metadata, does some black magic, and spits out a build order. And well, okay, we put this in the YAML file. But if we had this kind of tooling, we would also use it for other things: we would use it for mass rebuilds, we would use it when bootstrapping Python, we would use it when we build a new version of Golang and need to rebuild a bunch of packages. Right now, those things are done either mostly manually or by brute force, looping builds in a loop. If we had tooling that understood this, we would have something generally useful. SONAME version bumps also require rebuilds, so this is all stuff that would benefit from having this automated. We believe that build-order generation should be as automatic as possible. It's not possible in the general case, because we have dependency loops, and we have things that cause builds to fail without being declared in any way, so a fully automatic solution is certainly not possible. But we can build partial solutions, and so we want to have tooling that does this as far as possible, and then, in various cases, it will be necessary to provide additional information. But this information should not be stored in some separate file that describes a specific build order; it should be part of the package metadata in dist-git. So, one of the things that I think would help a lot would be to have a simple file which says: if we flip this conditional define, these dependencies are added to or removed from the BuildRequires. This is the kind of thing that happens at the package level, and it could be fed back into the generator script for the build order to make the result better. This could even be automated in some of the next steps, but you don't have to do that. You could teach some service we already have, or create a new service, to continuously rebuild packages, and then you could query it and say: okay, I need to generate a build order, but there is a loop; let's look at what happens if I flip this. And you could get the data from a cache, where a computer already calculated it. So, one thing to note is that we are moving towards a model where we have BuildRequires generated at package build time. So we take a nice-looking SRPM file and we get some list of dependencies, but we don't know what this list will be until we try to build the
package, and the list will be different in different buildroots. For example, the latest Python moved some stuff into the standard library, so suddenly we build a package against Python 3.9 and a certain dependency is gone; there are other cases, of course, where certain dependencies are added, and so on. So taking this build order specification and putting it in version control in a static file is not useful. There is a question: don't architecture-specific BuildRequires make any build order harder, even static, even dynamic? Okay, sorry, this goes at the end. The question, or note, was that architecture-specific BuildRequires make it harder for tooling to decide the build order, because, as I understood it, you run it on your machine and it only considers your architecture, while it builds in Koji, where it builds for all architectures. Is that roughly what you were trying to say? Yeah. It's complicated; it's a very hard problem. Let me give you an example of this. We do bootstraps of Python packages. I am very happy that Python synced their release cycle with Fedora, but I am very unhappy that I need to do this more often, and we have like a couple hundred more every release. It's hard; the build order is really hard. What we try to do is have a YAML, which is not a module, it's a plain YAML list of RPMs to build, and it has the initial order, and macro overrides that deal with the dependency loops and stuff like that. We have like 500 packages in there; the initial ones are ordered and the rest is brute-forced, because we are lazy. We maintain this YAML file and we adapt it when it fails, and we adapt it again when it fails, and we do this every time, and the packages on the list are not maintained by the same people who maintain that file. It is like trying to build a house on water. So, if I have a module that has a static build order, and all the packages in the module are maintained by the same person who maintains the module, and changes are only ever done together, then making it static is a perfectly valid approach. But if we want to have large-scale stacks, where different maintainers actually maintain the packages in those stacks, any static order would be out of date the second you push it to git, or maybe a couple of minutes later. But to the comment: yes, architecture-specific BuildRequires make the whole thing much harder, but I don't think this in any way changes the fact that there are cases where you need to provide additional data to generate a successful build order. Yes, we acknowledge that. And we think that the driving of the chain builds can be done locally; we don't need a service that does this automatically for us. We can combine this local generation, the language-specific generation of the build order, and even the busy loop over packages until they actually do build; we can do all of this from a script, we don't need to push it out. And if we reach a level where this actually works properly, we can always run the script on a server and push it out there. I mean, doing it first remotely and only then locally is, we think, the wrong way to attack the problem. I can talk, that's fine. So, one thing that modularity has is that it expresses dependencies between modules: this modular stream of Django needs this, or ReviewBoard needs this modular stream of Django in order to work. And the problem is that you have two levels of dependencies: modules on modules, and packages on packages. Even the packages-on-packages level is quite complicated already, because it's actually packages depending on virtual provides, and virtual provides are provided, and
sometimes it's a name, sometimes it's something different, and then you have booleans and rich dependencies and stuff like that. And on top of that you have: this group depends on this other group and disables that other group, and it makes things incredibly complicated. We think we should just have one set of criteria, and the way the RPMs were built, like, we built these RPMs the fancy new way and we built those RPMs the boring old way, shouldn't affect how they interact with each other. There should just be one simple mechanism, and we already have that mechanism; it has been battle-tested and improved over the last decades, and that is RPM-level dependencies, in our opinion. Another thing that goes with this is installation profiles, which are often forgotten when we talk about modularity; we almost forgot them as well. It has a fancy name, but it's just a grouping of binary packages. If you have a different profile, it doesn't mean your MySQL is built a different way; it's still the same package. It just says this profile gets these packages and that profile gets a different set of packages. And there are existing mechanisms for that: groups and meta packages. Those are some examples of groups that we already have in Fedora, and there are some meta packages that we already have in Fedora. What we think is better with groups and meta packages is that you can have groups that overlap with each other, while with module installation profiles, a single profile can only install stuff from that module, plus its dependencies, not things from different places. You are shaking your head, so I'm lying; but this is still possible with groups and meta packages. I'm no longer saying this is better; it's at least the same. Sorry about that. So, let me summarize it and then finally let you vent; I see you have a lot of comments. We think, and we believe, that for Fedora as a project to benefit from this, we should not do the things on the left and we should do the things on the right. I hope I did it the right way, and I'll try to go through this. If we keep modularity as we have it today, we need custom stuff in DNF to understand the modular dependencies, and custom resolving on top of the regular resolving. We need things like Ursa Major or Ursa Prime. We need to keep figuring out technical solutions to the problems, and it's getting more complicated. Some people say it's workarounds upon workarounds, and other people say no, these are not workarounds; that's not the important part, but we need to acknowledge that we keep adding stuff to make things work, and it gets more and more complex. We have the shadowed packages, we have stream branches, and unfortunately we have two classes of packages, and that's obvious from the objective: the objective is to make some packages easier to maintain, and the objective should be to make all packages easier to maintain. For that we need gradual improvements for everybody, not revolutions in how we ship some of our content. We need proper tooling, but that can evolve gradually. And we need to focus on the content, because, if we scroll back: we spent six years, I don't know, a lot of years, working on this, to ship 20 modules with two alternate versions of MariaDB. And now, this is not entirely true, we have some fancy stuff in there, but if we focused on the content that we ship, we could actually have multiple versions of a lot of things. Instead of focusing on the content, we are focusing on how we fix the delivery mechanism. The same applies to groups and meta packages. When we were preparing the slides, I
realized we have a MongoDB group in Fedora, but there is no MongoDB in Fedora, because nobody takes care of the group definitions. But if we had somebody who curated the content, and this is somewhere we could get new contributors in... The current way we define groups is terrible, and putting it into the hands of the maintainers of the packages that are in the group is an excellent idea, and I agree with that. That's also why I very much prefer meta packages to groups, because they are local. And we think that we could make DNF handle conflicts and actually-unresolvable situations, with errors that are easy for users to understand, instead of focusing on enabling, disabling, shadowing and stuff like that, which we don't actually find useful. So DNF would tell you: okay, you can't install this package, because at the end of the dependency chain the Perl it needs conflicts with the Perl you have, and in order to proceed you need to pick one. This is complicated, I realize that, I see the DNF people being scared, but you have to admit that the stuff that has already been done in DNF to support the modularity we have now is complicated as well. And if I could choose, I would definitely go with the better UX rather than these quirks with enabled, disabled, shadowing and so on. This is our point: we think we should start focusing on stuff that is useful to everybody, instead of focusing on stuff that is useful to our 20 modules, or 30, or 50, because so far there has been no community buy-in, only internal, which is sad, but we need to live with that at this point. And finally, you can shout at us. Do I need to pass you the mic? Yeah, you need a mic, so you can drop it. So, I have a question that I'll phrase as a use case. We have the too-fast/too-slow problem up there as a statement, and one of the things that is tantalizingly frustrating to me all the time is that, for almost all of our packages, we actually have the too-fast and the too-slow versions in source form, sitting in git repos: that is, the Fedora RPMs and the CentOS RPMs, and we are working on putting those in the same place, that's in progress. So what I would really like is to make it very easy to just use those existing streams that we have, to ship that software in both places. So the use case, specifically: I am a CentOS SIG and I work on a specific thing, let's say storage, and I have a fast version of Gluster that I am maintaining in Fedora already, and then I am also maintaining the slow version in RHEL and in CentOS. So what they are doing as a CentOS SIG is basically taking the Fedora packages and repackaging them as a CentOS thing. It would be nice if, instead of doing that, they would just be reduced to: I am maintaining it in Fedora, and it builds into EPEL, into a version that can be selected, and maybe even build the CentOS one into Fedora, if people want that for compatibility reasons or whatever, to have the older version available in Fedora. With modularity, give or take some automation which is theoretically going to happen, that should be easy: you define your module once, then you just do the module build, and then you can do all the other things with either stream. With this proposal, it sounds like the solution is: I now have to create one or two more streams, that is, the compat packages, for each one. So it's not just the creating; I now also have the maintenance of more streams, which is probably never going to happen. Which was... I will just point at what it says here, I don't need to repeat it. Right now I was afraid I would need to say all of that. So, you are correct, I don't argue with
that. But I think we should focus on making the repackaging part, where you take something from CentOS and put it in Fedora, or from Fedora to CentOS or whatever, trivial: by removing the changelogs, by putting everything in the same dist-git where you can cherry-pick between branches, stuff like that. While creating a general-purpose mechanism that allows you to ship packages built or defined in CentOS into Fedora is a very nice goal, we just think it's too far out to build an entire thing around. And it's so easy to just build the stuff for yourself; like, you're running CentOS and you want an up-to-date systemd from Rawhide, there, you build your own, because it's easier than trying to develop something that suits everybody. And in the end it goes like this: you think this package is important, while I don't care about it, and I think this other package is important. This is something we need to figure out: what is the content that we want to ship, and focus on delivering that, instead of designing something that fits everybody and in the end fits nobody. Sorry, that was not English. Let me let you reply first, if you want. Before we start devolving into implementation details and nitty-gritty stuff that I am not smart enough to understand: here is what I took away from your presentation. You restated a lot of the goals that modularity was trying to achieve, you're unhappy with the implementation of it, and you have some other ideas that basically take those goals and make them general-purpose. I don't see anything that conflicts with modularity, so why is this an either-or? What is stopping you from doing what you're suggesting? So, there are two parts to this. First, in principle things can happen in parallel, but in practice we have limited manpower, and I think it's pretty clear that when the DNF team is working on implementing modules, they are not working on implementing better conflict handling. Those things can be developed as side projects to some extent, but they are not the alternative solution that we are proposing; that will not happen without some larger group working on it. So yes, this is in some ways about taking the developer resources that we have and assigning them to work on different things, and we are advocating for that kind of shift. And the second thing is that modularity is coming to Fedora, and it is not something that just happens on the side; it is something that impacts everyone, whether they want it or not. I'm not sure about that. Just a little bit. Yes, you're right, resources are very limited, and one thing I tried to express on the mailing list a few weeks back, and I'm not sure I did a great job coming across, was that, whether or not it is the ideal solution, and I've never claimed that it was, I've claimed that it was the solution we could deliver in the time that the people paying us gave us. Modules have shipped in RHEL, so no matter what we decide to do in Fedora, finishing that and maintaining that is something that the Red Hat employees are going to have to do for the next ten years anyway. And I realize that that has backed us into a corner a little bit, but I'm not sure that corporate need... I'm not sure that reassigning resources away from that is going to be plausible. And I don't disagree that all of those things that you're talking about, like Josh said, are complementary and in many cases would also enhance things that we were trying to do in modularity. So I think that they should be done, but I'm not sure that we'd
be able to pull existing resources off to do them. I understand this concern very much, and I don't know how much I'm allowed to talk about internal stuff, so I'll try to be very general here. Imagine there is a next version of RHEL coming. Something, thank you. So in RHEL 9 we may decide to ship this instead of that, while we still need to support that for the next decade for RHEL 8. We don't have to support it for the next two decades, and there needs to be a certain point where we stop supporting this, or else we will support it for eternity, or until there are no more internets or something like that, and zombie apocalypses and stuff. And I don't want to have this kind of modularity, as it is now, forever. I can say it: I don't like it, sorry, I just don't. And I think we can do better, and saying "we can't do better because we need to support this" is not a valid argument. So, I definitely did not do a good job of expressing what I meant to say, which was this: I believe that this stuff is doable in parallel, and that it may benefit both potential approaches, but with resources limited, we need help; we're going to need someone to come and bring the patches. I'll give it to Langdon in a second, he talks too much. So, within the context of a company, your arguments are correct. Within the context of an open source distribution, which is a community project, they're bogus. If you want to do this, show up and do the work, and find people that want to do the work with you, because that's how open source works. You have to have the idea, pursue the idea, and convince people that your idea is better, because you're never going to get somebody who says: I'm going to spend $20,000, here you go, here's four people for two months. It just doesn't happen that way, necessarily. And I totally get the irony of that coming out of my mouth. But when you do the work, I think there are ways that you can take the existing resources that we have in the community, and internally within Red Hat, and generalize some of them to make both things better at the same time. That's what I'm saying. I just want to say that if we go with this argument, like, we are not changing modularity unless you show up and give patches, that corners us totally; we are done and it's over, we can go home. If this is the argument we want to build on, we might as well decide to never change anything ever again. It's also a good point: if we never change anything ever again, there's a lot of free time for us. But Langdon really wants to say something. I just didn't understand the comment: why can't you show up and do the work? I'm not agreeing or disagreeing with Josh, I just didn't understand your answer. So, there has been a lot of work on modularity that happened with people at Red Hat being paid to work on it, and then if you say: this is not correct, we shouldn't do this, we should do this instead, then the argument is: fine, do it, show us how it works, you don't need sleep, just deliver. It's just not fair. So the group of people you have to convince to let you work on the patches just happens to be a different group of people. You can still show up and do the work, but Red Hat may be paying you to do that, and you have to convince your management chain to let you use your work time. I think what you're saying is just that you don't see a realistic way to do that, as part of your job or as part of your free time. That's correct. To answer in a different way: the stuff that we are talking about, like the buildroot or the build-order generation, is actually something that would also
benefit modularity, because right now it's manual, and the solution would cover both cases. So I'm saying that we need to work on the tooling anyway. Have you presented these ideas to the people who are maintaining the modules that exist today? Like, have they said: if you can do the things you're saying, then I will undo my modules because I don't need them anymore? I can't tell if you're speaking on behalf of people who haven't modularized. I don't know whether what you're proposing is something that the people who are already invested in modules actually want, so I'm not sure where your motivation springs from. We haven't spoken to all of them. We know cases of both. We know cases of people who say: I will only maintain this in modules and I don't give a damn about the non-modular packages, and even if it makes things conflict and break, they don't care, and it's very hard to discuss. We also know module maintainers who abandoned all of their modules, for example Igor, who was supposed to be here with us. He maintained a lot of modules in Fedora, the Rust modules, even DNF in a module, to have dynamic BuildRequires in all the releases and stuff like that, and he just decided that it's not going anywhere and wants to change it the way we propose. There are always people in both camps. This has been very much splitting the Fedora community over the past years: either you are pro or you are against. Obviously there are a lot of people who don't care that much, but you don't hear them saying stuff that often. I will not pick people to ask questions, because I'm probably biased. You, in the middle area, please, go ahead. Well, at the present time we have about 50 modules, and I think it would be quite easy to ask, well, 50 teams, 50 people, even fewer because some of the people maintain multiple modules, why they created the modules and what they expect. Because I believe that, compared to the list of what was mentioned as the main purpose of modularity, there are much different reasons than we thought. We always think that we are solving the use cases, but we provide tooling, and usually people take the tooling and use it for their own needs, and those are quite often different. And also we have to think about another part of the people, I mean the end users. End users are very poorly represented in this discussion, because they are simply not going to attend any developer conference, but we have to think about how they want to consume the modules. And this is why we are here: we are providing a better world for the end users, not for us. That's our primary goal; the secondary goal is to provide a better world for us. Thank you. Well, I think that all you said is true, especially the part about the users. So yeah, we believe that; that's why we want to concentrate on the content, not on the delivery mechanism and making things more complicated. And yes, we didn't talk to every module maintainer, but actually, if you look at this list, the first part, the first two classes, they are essentially trivial; I mean, they are toy things where making them modular or not doesn't really change anything. And if you look at the two last classes, that's not true, I'm sorry. We see the data and we suggest the reasons. Well, sometimes the reasons are different; that's what we see. We don't see the long-term plans of the teams that present this stuff. Or, probably, yeah, that was the point: sometimes the reason could be, well, let's do it because it's cool stuff, but we never know. Yeah, we only suggest. Well, let's ask them. So, a lot of module maintainers gave us a very general answer, and the answer was: I choose to maintain my
stuff in modules because it's easier for me. We didn't dig deeper yet, like why exactly, what are the features, and we should definitely do that; I agree with that. But the point is, we fundamentally disagree with a goal that makes certain packages easier to maintain. This is a wrong goal, an unfair goal; the goal should be to make all packages easier to maintain. I think this should be the spirit of Fedora, not having two classes of packages. And, to be honest, when we created the slides, there was a typo that we considered funny but also true; we accidentally typed "two classes of packages", and we decided not to keep it, but this is what I see, what I feel happens. So, just to answer that one part: what about, sorry, what about anything related to modularity indicates... like, none of your evidence, as far as I can tell, nothing that you've commented on, indicates in any way that the people involved in modularity would not like (a) everything to be modules, (b) everything to be maintained in the same way, and (c) everything to be as automated as possible. Like, nothing about an intent to have two classes of packages, or anything else, is in any of your evidence, as far as I can tell, and you're putting a lot of assumptions on us that are frankly completely invalid and kind of insulting. I never had the intention to insult any of the people who work on this, and I am very sorry, every time I hear this argument. I try to make my points valid, I try to make my points technical. I know we can get biased; I admitted that I don't like the modularity thing, because I don't think I should lie about that, but it doesn't mean I disrespect your work or anyone else's, and I mean this sincerely; I'm not just saying it because I need to say it or something like that. That is an actual question. Okay, because the goal... you need the mic. This is from an objective; that was one of the phases here, and I don't remember the exact steps, but the idea was originally that everything would be modularized, and this was a step along the way: we were going to add a modular repository on top, as a way of getting to that goal. The goal here isn't to have a separate class of packages. And I think one of the other things about modularity is that it actually leaves the packages alone and adds metadata on top, so the packages themselves stay the same; there's no such thing as a modular package. So why, I mean, there are two classes, because effectively, when stuff is modularized, it becomes much harder for the non-modular stuff to consume it. I mean, we were talking about Ursa Major and some ways to solve this, but effectively the current state is that modularization of stuff makes things harder, or different. Well, okay, it's not a goal. Yes, and it's just how things happen to be currently. And sorry, I wanted to say one thing before: if we look at the list of modules that we currently have, I mean, it is clear that there are only a few use cases being served. This is all in a single YAML file, and you look at this file and you see what is happening there. People might have some vague plans for the future, but what is being done right now is either relatively simple rebuilds of packages, possibly with some macros defined, or two cases where you have a tree of a stack that goes one way up, with a single package at the top, or the inverted version, where you have a single package at the bottom and a bunch of packages built on top. I mean, we look at the modulemd files and we see that there's not that much complication there. One thing about
modularizing everything and then making everything better by doing that, which is probably a nice idea, especially for people who like modularity: we can't ever achieve that without community buy-in. And we should work on that, but maybe, after this much time, we should acknowledge that it's not happening unless we fundamentally change either how things are done or what things are done. If we keep doing things the way we are doing them now, and if we keep doing the things we are doing now, the community buy-in will not just appear out of thin air. On the left side and on the right side there are also other people; we should be inclusive about this. Hi guys. Just a couple of comments. I think everybody agrees with your general premise of trying to make packaging easier. It is really strange to see, in this presentation, how you've somehow tied that energy to stopping modularity; for some reason I kind of missed that part. But I will say, like, community buy-in, I think that's a definite problem. I guess I'm confused as to what this presentation did to help that problem, to get community buy-in. I mean, because it seems like you're just fueling the fire, and I say that with a little bit of extra... just reminding everybody in the room that certainly by now all of us understand that it's not enough to be right. I'm not saying you're right, I'm not saying modularity is right, but this hasn't moved the ball at all. I'm not saying you shouldn't put energy into this, but why is it energy against the energy going into modularity? I'm confused why. Yeah, so, I mean, why stop that energy? Let's consider a different project that is happening right now: Packit. I have my opinions about the way that Packit approaches the problem of tying upstream to Fedora, and I might like it or I might not like it, but it doesn't really matter: it's on top of Fedora, and if there is buy-in, good; if there is not, I'm also happy. But in this particular case, this changes how Fedora is developed. It is not transparent to anyone; I mean, it impacts everything. I get that it wasn't transparent to you guys. If we are talking about a solution that we think doesn't work, then we have to make a choice: either we implement it and we buy into it, or not. I mean, where do you want disruptive changes to happen?
Well, I'm saying that those particular disruptive changes, I don't want to happen. And so you've decided to get up in front of this group and spend 90 minutes convincing us all that we should stop it. This has been discussed on fedora-devel for, I don't know, the last two years, maybe a hundred messages every month. Good luck, guys. We are sincerely convinced that the stuff we think we should focus on would get the community buy-in. We might be wrong. Go get it. I think, also in fairness, we should point out that some of the reason the community buy-in is not happening is that some of the enablers we haven't been able to put in place, and partly that's because of the discussion over the issue of disruption that Mike's talking about. So if we're hospitable to letting that disruption take place, or at least to trying things, and then failing fast and trying to recover, as opposed to stopping change, I think we end up in a better place overall. Finding out quickly that something doesn't work and then reversing course is better than having to fight a lot of headwind to get there, because that way the progress just slows down, as opposed to finding out that your path was wrong and then changing it. So again, I'm not trying to make a value judgment on which approach is right; I'm just saying we could probably get through the process faster approaching it differently. And Adam Williamson over there; I know, Brendan, you had your hand up too. So, just a quick comment: it's not 2016 right now, and this is not the alpha version, and we are not saying that the alpha version is bad. This solution, and the alternatives that we are proposing, have been hashed and rehashed in various ways over the last four years. So, yeah, I mean, when you say that we need to be able to iterate fast and fall back when it doesn't work, this also applies to modularity. We work on the alternatives. Yes. Just to try and bring a little bit of sweetness and light: I think everything went off the rails at that exact point. It's like, I see there are kind of two things you've done in this presentation. You've pointed out some legitimate shortcomings in the current modularity; I think everyone said they agree with that, and my perception has been that the modularity team is pretty open and willing to acknowledge shortcomings and discuss them, and when it's agreed that there's a problem, they will try to improve it. And I think it's definitely an uncontroversial, valuable thing to point out: hey, maybe we need to think about buildroot-only packages in a Fedora context, just as an example; maybe we can come to a way to change that kind of thing. You've also said that, on a broader scale, you think modularity is the wrong design and you'd like to try something else, and people have been generally open to that. But the problem was when you kind of implied that you think Red Hat ought to pay you to work on the alternative. Which, well, that's kind of what you said; they said, well, go away and do the work, bring it to Fedora and show it off, like, get the buy-in, and you said, well, we don't have time to do that on work time. So effectively you're saying you should be able to use your work time to work on this, and that means Red Hat should give you space to work on this alternative. And the problem with that is, really... I thought we had another half an hour. Okay, so I just think that's the thing we need to resolve, that nexus. Okay, so let's step back a bit. Those things, those are big
things; they are not things that anyone can solve on their own, right? And in a way, this discussion that we are having is something that, well, yes, we are trying to convince people to go for a different set of solutions than the other set of solutions. We are out of time.