Welcome. That's right, get out of here. Well, I'm sure everything will be good, Stephen. Passive-aggressive, good. Just so you know, I got that on tape. I'm sure it is, Stephen. Thank you for coming.

I'd introduce myself, but if you don't know me by now... I've been talking all morning, so: hi, I'm Stephen Gallagher. I'm with Red Hat. I've been working on the Fedora Server edition since it began. Actually, since before it began. It has been an interesting ride. So it's now been, what is it, four years? I think it was... it was first thrown together as an idea at the first Flock, in Charleston, which was... that would be six years, so... that can't be right. Fine, I guess we'll split the difference; we'll call it five years. I've been giving one of these talks at each Flock since, so most of you have probably been to one in the past. What I'll do is talk for 10 or 15 minutes, and then it will turn into a discussion, at which point we'll probably turn off the recording because it won't pick anything up.

So I'm going to go over a little bit of what we've done in the last year, some of our trials, some of our tribulations, a few successes, and then we'll talk a little bit about where we think Fedora Server is going to go in the next year or the year after that.

So, first topic: what I call Modularity 1.0. We were talking about this a bit at the previous Flock, in Cape Cod, and the idea at the time was that Fedora Server was going to be the prototype for a completely new way of building the distribution. From the ground up, it was going to be just these interchangeable modules that you could swap in and out as you pleased. And, you know, we made some good headway towards this, but we really, really were boiling the ocean. Not all that long after the last Flock, we ended up more or less scrapping the plan. And that was pretty disheartening.
I can speak for myself, and I suspect for a number of the people who were working on it, that that was a pretty low point in the fall. A lot of people put in a lot of hours on this, and it turned out to be a dead end.

However, we did come up with a backup plan, and we were able to reuse a lot of the framework we had built for Modularity 1.0. We figured out, almost by accident, that the technique we had been using could be tricked into treating the main Fedora repository as one big module, and we could go from there. And hey, what do you know? All of a sudden, modules sit on top and they're great. We were happy again. And we've pretty much been putting all of our effort into this Modularity 2.0 thing since.

It's been a real labor of love for a number of people, many of whom are in this room, so you guys already know this. But it's involved a lot of people and a lot of teams. And I will say this: I've been at Red Hat now for over a decade, and I've experienced very few times like this, where this many teams and this many people have actually come together to work on a project with the same goal in mind, and actually been optimistic about achieving it. And you know what? We've got something good now. So I'm actually going to applaud you guys for a moment. That really didn't work. But it wasn't just the Modularity team, or the Base OS team, or the DNF team, or the releng team, or the infrastructure team, or the docs team. It was all of these groups working together.

So we actually did achieve something really, really nice with Modularity 2.0, and we had our first successful release in Fedora 28, with the Fedora 28 Server edition. I'll talk a little bit more about it when I get to the future slide, but as of two weeks ago, Rawhide now has modularity enabled for all users of Fedora. So that's pretty exciting.
In the last year... when we started the Server edition, one of Fedora Server's key fundamentals was that we were going to design these server roles. We were going to do these pre-packaged, best-practices solutions for how you would deploy a popular service, and we were going to try to make that the de facto standard for how you do this. This is still a good idea, and Matthew is saying this is still a good idea; I think that's true. But I think the world around us has moved on, such that people have found other, better ways to do that than rolekit, which we finally put out of its misery this year. It was a simple D-Bus API for doing deployments. It only ever really supported two server roles, FreeIPA and PostgreSQL. It was mostly limping along, and I hadn't killed it before this because QA was using it; it was really easy to set up a FreeIPA or PostgreSQL server for their testing. However, it did not survive the move to Python 3.7, and it was sufficiently complicated that I didn't feel like keeping it on life support any longer. So, yeah. Sorry, kitty. Your time's up.

So, one of the interesting problems we have right now is that we've actually done almost too good a job, especially with modularity. We still have some bugs to work out, and we definitely need to clean up the user experience a little bit, but the technology is sound and we've proven that it can work. So we started in the Server SIG doing a little bit of a thought experiment: where does this go in the future? Well, the obvious next step is, we get this into EPEL. Because, of course, as Matthew showed in his State of Fedora speech, EPEL is, what, two orders of magnitude more popular than Fedora proper, something in that area? Three, okay. Three orders of magnitude more hits to the mirrors, at least. That's pretty sizable. Binary orders of magnitude, not decimal ones.
It's not a thousand times more popular. You kernel-panicked me. So, I think, you know, the logical next step is that we migrate this to being able to support our enterprise users as well. But that's a bit of a double-edged sword, because what we will have effectively done at that point is make the Fedora Server edition completely redundant from a user's perspective. We will have reached a point at which... right now we have a very small but dedicated and loyal set of people who actually deploy Fedora as a server. What we know, well, from anecdotal data, is that approximately zero of them actually use the Server edition. They use the Server install media to install the smallest set of packages they can find. So, although...

My mirror stats, they showed actually it's better than that, right? Yeah... Oh, great. Matthew is correcting me: his mirror stats showed that there were people actually hitting the modular repo, which was only enabled by default on the Server edition. That did not mean that they couldn't have manually enabled it to play around with it, either. But there was a small blip in the statistics that said somebody was using this. It's more than zero. It asymptotically approaches zero.

So, we are sort of engineering ourselves out of a job, but we're in kind of an odd place because, of course, for all practical purposes, I think everybody knows that RHEL ultimately comes from Fedora. Fedora Server is a place where we try things out, where we start to stabilize things, and where we fail fast, too. We try out new things, and we figure out which of those are likely to be useful to an enterprise customer down the road. I don't think this is surprising anyone. I apologize to Red Hat if I'm not supposed to tell that secret, but I don't think anybody doesn't know this.
So, we have value to Red Hat, but we're rapidly getting to a point where it doesn't appear that we have value to users, which makes it difficult to actually use it as a testing ground and to figure out what will be in the next enterprise release. So, where are we? This is where we are: we really don't have a clear vision, past modularity, of what we are doing next.

We talked in the Server SIG, and I think it was Adam Williamson who pointed out that most of what the Server SIG and the Server edition have accomplished in their lives has been because somebody was really excited about something and drove it to completion. Early on, that was what happened with rolekit, and it kind of petered out. Later, it was the modularity stuff: it was an interesting new thing, and we drove it through, and now we're approaching that annoying part where you have to actually make it stable, but that's where the career folks among us step in.

But how do we expand the Server SIG? How do we grow it? What is the next exciting thing that we can get people to work on, that we can get people to come and say, hey, I've got this idea? Right now we don't have a lot of good ideas for that, because we've kind of engineered ourselves into a position where we expect that people will probably use the Server edition less because of the things we've done. So this is the part of the talk where I point the microphone in your direction and you help me figure out what the hell we're doing next.

The floor recognizes Langdon. Are we using the microphone? Do you want to shut off the recording? Yeah, I think we should probably cut the recording at this point, because the conversation is just going to be in the room. So the big gap that I think the server can still fill is one of the problems that I haven't been able to articulate... This is my first arm release where I built the lock and it has a special signature.
It accepts all the resolutions that it doesn't actually display. Okay, I'm going to renegotiate some of this. I'll just turn the laptop around and everybody can just crowd in. That depends on how you want to use the slides. Can you just read and articulate? To some extent, but then you'll miss the funny cat pictures, is the problem.

Alright, so, as most of you know, I'm Langdon White. Steve Gallagher, you want to go to the next slide? So, I always try to include some pictures of my kids, because that way I can embarrass them for a long time. This is my current running joke of trying to get them all in the same picture at the same time, looking reasonable. I joined Red Hat as a developer advocate, and my joke about it is that they got tired of me complaining about all the problems and pulled me into engineering to try to fix them. Then I got suckered into basically the Fedora.next kind of project, and I've been involved in that for a few years now. So, most of you know me, I'm Steve Gallagher; that is an actual photo of me doing my job in Fedora.

Alright, so first, a little bit of the history of the problem that we're trying to solve. This is kind of the short version of this conversation, which is that we have different life cycles, we have different styles of things. That's kind of what the planes are supposed to represent: you know, we have some planes that fly fast and some that fly slower, and sometimes you need to be able to hook them up together, because, you know, one needs to feed gas to the other. The butte, I think that's what it's called, this cliff here, kind of represents the idea that all the stuff in the distro is very tightly integrated, one big huge vertical stack, and so as a result it has a lot of problems when you try to shift things around, because that whole wall will just crumble down. Also, it's easy to fall off. That too.
Another thing I'd like to point out is that over the last ten years or so in particular, software development has really changed a lot. We've shifted, in a lot of ways, to the power being with developers rather than with sysadmins. When the distros started, the concept of the distro was sysadmins trying to take power back from developers. By developers I mean either vendors or literally the software developers in your IT department. The distros were trying to simplify the overall problem they were having by locking down what developers could do. Over the last maybe ten years, the pendulum has started to swing back the other way, and software developers are now taking more control, so we see things like containers, which are a great example of developers wanting to put what they want to put in production and forget about the sysadmins in the way.

How we do development has also significantly changed. You put up a website now, and it's on top of several million lines of code before you get your little smattering of code on the very top. So we have a couple of interesting things there. You have that vertical-stack problem, but on the flip side, it means you can tear down your entire architecture and replace it, sometimes in a matter of a day, sometimes a couple of weeks, versus when I started doing software development, where you literally spent six months just building the architecture before you even got to your real content.

Then, kind of the last thing, and I still haven't found a better way to show this, is that different use cases have different needs. My little graph up there is about mutual funds: when you're trying to plan for retirement, where you are, what age you are, means you should make different investments. You want to make lower-risk, more conservative investments later in life, and riskier investments earlier in life. This is also true, for the same person, that you want to make
different financial decisions depending on the position you're in right now. We have this problem with software as well, and distros don't really tolerate it very well. They assume one use case: I installed a web server, so it must be for production. When, in fact, when I install a web server, I may be installing it so that I can do development on the web server itself, like I want to actually make commits to httpd, or I might be writing HTML pages, or I might be writing PHP. All those use cases are different, and they require different things to be installed, or to be set up differently. This is why developers instantly disable the firewall: they don't have the time or the energy to figure out how to turn this highly hardened, production-ready system into one that they can just do work with.

This ties into our next slide here. Yeah, so basically those are the problems that we saw when we wanted to go into this solution. So this is how the distribution thinks of its users, right? They're all neat and tidy. And we're kind of running with the joke that it's the 1995 Fedora distro. And that's what... I think it was, it was like 2007? Yeah, it's 2003. Fedora as a distribution is still trapped in the Red Hat Linux days. It originates at a time when a distribution was really your only way of getting open source software. You started from a distro, you started from a basic install, and then, you know, once yum came around, you yum-installed everything; before that you went through RPM hell. But everything came from a single source. You generally would decide you either trusted or didn't trust that source, and that's how you got your software. And if it wasn't there, your two choices were: package it, or don't use it. That world... we won. Open source won. We are the default choice for writing new software throughout the world now, and the distros didn't keep up with that. The distros are still thinking, we're the only way you can get software safely, and we
have to test it all together, and it has to be delivered on this schedule, and if you miss that schedule, well, you're out six months. It envisions a world where this is what your distro's users look like: they are very rigid, they are very cautious. And so, if we move on to the next slide, this is what it actually looks like. And I'll argue this is actually what it always looked like; it's just that we could force them a little bit into the slide before. So I tried to find some fun pictures here. Stephen actually had the idea yesterday of getting a toddler to go around with a stamper, and that's what you end up with for the Venn diagram of your users. So I tried to make it, but I'm not a very good artist. This is kind of the idea: users are actually very messy. It's about use cases, it's about what they are trying to accomplish that day. And when I say users, I mean a kind of broad swath: I mean developers, I mean maintainers, whoever is actually your user. And all of those considerations need to be thought about, and that's what led to modularity.

So this is kind of where modularity fits. We didn't really have a whole lot of great pictures for this slide, so what I did was take an example. When we did Fedora 28, one of the examples I used was: I've been maintaining a package called Review Board in EPEL for the better part of a decade now. It lived in Fedora for two years, before its dependency stack, Django, moved past the version it could support, and that happened back in Fedora 17 or 18. It's been out of the distribution simply because its upstream decided to lock onto an old version; they maintained the old version of Django outside of upstream, but we just couldn't have it in our distro, because Fedora is First, Fedora only has the latest one. So when we came up with this modularity idea, suddenly I was able to actually package this old but still-supported version of Django, and then bring my pet package back into Fedora, where
it's been very popular in EPEL, but, you know, it's always been kind of an embarrassment to us that we couldn't keep it in the main Fedora repositories. And so that was an opportunity that this gave us.

Right, so we're going to move on. Basically, this was the introduction: this is why we did modularity, this is kind of the point. And I wouldn't say that modularity necessarily meets all of these goals perfectly, but it's a start, and I think the important part is that it's a start at the OS level, instead of things like alternatives, or like pyenv... pyenv? Yeah, virtualenv. I always mix them up, because there's RVM and 87 different ones. All those different solutions are trying to solve very similar problems, but they're doing it from a particular perspective. The Ruby developer who wants to use multiple versions of Ruby has a particular solution for Ruby, whereas somebody who wants to run different versions of Java in production might use the alternatives infrastructure. They're coming from different perspectives, and so making different trade-offs, and not actually providing a quasi-universal solution, one that's integrated into, when I say the OS, package management and all that, in a way that is part of the system. So that's what we're trying to do with modularity: we're trying to make it so that it's at the lowest level and part of the overall system itself.

Was alternatives not part of the OS? In a sense. So, the question is: is alternatives not part of the OS?
I would say it's sort of part of the OS; it's an add-on, in the sense that it's a piece that is not part of what a normal user uses regularly. When we were designing this, we looked at our potential set of requirements, and alternatives was one of the solutions we looked at, as well as Software Collections and containers. What we realized, with feedback from PM talking to customers and from us talking to users, was that people didn't actually care about parallel installability as a general case. There was the one or two percent that did, but for the most part, people only really cared about having these alternative versions available. And doing multiple packages with alternatives was kind of heavyweight. It required users to make conscious choices, the same way they did with SCLs; it required them to change their install scripts and their deployment scripts to be aware of it and to use it. Whereas the approach we took with modularity, just switching the streams, allows people to drop their software in exactly where the upstream expects it to be. That just cut out a huge barrier to entry for users, and it's why this is a much simpler way of doing things than, for example, Software Collections. It loses the parallel-installability side of things, but we've reasoned that that was probably not important enough.

And I would add to that: it doesn't necessarily lose parallel install forever; that's just our current state. The other thing I would add about alternatives is that it's very sysadmin-biased. As a developer, it's hard to use the alternatives infrastructure to switch between language versions a lot, especially if you're a polyglot developer and you jump between different versions of different languages. The alternatives infrastructure is a very high barrier to learn, because it's a new thing you have to learn, whereas I already know Ruby, right?
So that's also an argument for it. I don't know, it's tough. Go ahead. Is it modularity...? Theoretically, almost no; that's the goal, right? Oh, sorry, the question was: isn't modularity a new thing I have to learn? The idea with modularity, as a user, is that it's basically transparent. You have to indicate that you want a different version of something, but that part is obvious. Otherwise, you can still just use the system exactly the same way you've been using it this whole time, because we have the defaults component. So, in that way, if I wanted to just get the Postgres that is shipped by Fedora, I'd just say dnf install postgresql, and it's done. I don't know anything about modularity. It's only when you want to go further.

I think what he was saying, though, is: yes, it is still learning. You have to learn to type dnf module enable for this version, this stream. But it is essentially a one-liner, compared to alternatives, where you have to understand how the alternatives interact, especially in the case of Java, where you have to actually change a whole bunch of different commands all at the same time, and know exactly which ones you have to change. At the very least, it narrows it down to a single command that we can document really easily, as opposed to a labyrinth of esoterica. Yeah, good.

Sorry, go ahead with your point. I know that in Fedora there are several cases where you have to keep multiple versions of the one package, because you have the Fedora repo and the updates, and as I understand it, you can work around the infrastructure to keep multiple versions of the same thing. So the question is: don't we already have multiple versions of something, given that we have multiple repos, right? There's the Everything repo, and we have the updates.
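The "one-liner" workflow described above can be sketched concretely. The stream name here is an illustrative example, not a promise about what any particular Fedora release ships:

```shell
# Default path: no modularity knowledge needed. You get whatever
# stream Fedora has marked as the default for this release.
sudo dnf install postgresql-server

# Opting in to a different stream is a single extra command.
# (The "9.6" stream name is illustrative.)
sudo dnf module enable postgresql:9.6
sudo dnf install postgresql-server

# See which streams exist, and which are default [d] or enabled [e]:
dnf module list postgresql
```

Since these commands operate on a real Fedora system's package manager, treat them as a sketch of the interaction rather than a copy-paste recipe.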
Fedora tries, on the infrastructure side, to keep or provide the latest packages, and as I understand it, you provide a new approach to, let's say, get around this restriction and provide multiple versions of the same function. Okay, so... I guess I'm not sure what the question is, though. Are you just asking whether you're correct? Yeah. Correct, if I understood the point, yeah.

So the comment is basically: what we have today is, say, the Everything repo and the updates repo on top of that, and basically we keep at most two-ish versions of any given thing, the one that was shipped and then whatever the current update is, excluding tricks and hacks like specialized naming. Right, so we already do hack around that, and that's actually a little bit of the point. With modularity, we would potentially massively increase the number of copies of any given thing that we might have on the repo side, and that is definitely a possibility. However, I would argue a couple of things. One, we do that already, maybe not to this extent, by using name mangling rather than actually using the metadata that we have and could provide, and I think the way we do it now is kind of ugly. The other thing is that I think people forget how much policy we have in place around RPM. We could do all kinds of that stuff right now, today, by just standing up new repos or whatever, but we have policies in place that say, you know what, we're only going to ship the newest thing. It's not because we technically can't. With modularity we have the same situation; we just haven't built up ten years of knowledge of what those policies should be. So we do need policy that says, hey, we need to limit the number of versions of things that are out there, because we can't maintain a distro where everything under the sun is available. I'm going to contradict you there.
Sure. And say that, ultimately, the decision on how many versions, how many streams of something you want to maintain in modularity, should really be up to whoever is going to maintain it, not up to the distro as a whole saying, no, you can only have two or three of them. If you have a very easy upstream that just happens to release the very old, the stable, and the new shiny releases on a regular basis, and you want to maintain those three, it's going to be up to the maintainer to decide when they want to carry more than one stream and how much maintenance they want to do on it. As long as somebody is willing to step up and do the work, modularity will let them step up and do that work. I mean, if we need to, you know, I'll kick in the hundred bucks if we need to go buy another terabyte drive. Space is cheap, so it doesn't really matter that much from a disk perspective. It does matter from a network perspective, potentially. And we might have policy around the actual core components, right; this is where we get into things like the rings ideas and stuff like that. We're going to have to move on. I'm going to go to him, and then I'll go to you.

Yeah, I just have a small point, because Fedora is not distributing the packages itself; they have partners that are allowed to distribute the Fedora content, so it will not cost Fedora anything. Yeah, I was mostly joking about the distribution. So the concern is that the mirrors may not want to distribute a much larger amount of content. We have actually already run into that; it is definitely a concern, so we have to consider that problem. That's part of our policy problem, right? Personally, what I would actually recommend is: why don't we make mirroring significantly easier, so I can run a mirror out of my house? Because right now it's relatively difficult. I would say, actually, let's try to increase our number of mirrors, rather than worry too much about the amount of
content we're pushing through them. To be fair, this is a problem that Fedora was facing long before modularity. We have mirrors that are uncomfortable with the amount of content that Fedora carries, simply because we just carry so damn much. Yes, and it's increasing at an exponential rate. Having dealt with the mirror people before: there was an agreement about the amount of space and bandwidth that we're allowed to use, etc., etc., and we're well and truly over that semi-pseudo-agreed limit, and we have huge amounts of problems getting mirrors as it is. And there are certain areas in the world, South America, Australia... like, if you're on certain networks in Australia, you basically have to go to the US and back to get access to the data. So there is definitely concern about mirrors; it existed prior to modularity anyway, but modularity potentially could make it worse. And I definitely agree, but again, I would reiterate: we've got to remember that we've spent whatever, ten-plus years, standing up policy around the way we do things today. Just because we have a new, more flexible way of doing things doesn't mean we don't need policy around how we do them, if that makes sense.

I'd like to comment on this and then move on, because it's kind of outside the scope of this talk, but there is actually one place where modularity can make this less of an issue. In the Fedora project, because of the way that RPMs have worked in the past, you've always had to package all of your dependencies first and get those into Fedora, and those fill up the repositories, and they also fill up your maintenance time, because now you're maintaining some package that some other packages are depending on. Modularity gives you the ability to build your build deps and use them just for your package, and then not ship those build deps. So if we move a lot more people to modularity, I suspect we will discover that there's an awful lot of software in Fedora that exists solely to support
the build of some other package, and we can probably trim down the mirrors to some degree by doing that. So, Adam, did you have something else? Yeah, that was my question.

Do you want to...? I'm just saying, on this slide, maybe there should be a policy about how many maintainers a module can have, because, for example, if the maintainer wants to keep versions 3, 4, and 5, but I need version 2, we may want to maintain version 2 even with the problem of having three other streams. So the question is essentially: what is the policy around who owns which streams are available for any given module? And I would say: come to the Modularity working group, and let's make sure we set a policy for that. There's a lot of policy that's open right now, and to some extent this is where we need the community to get involved and start helping us feel out what the right policies are. I also think that we're going to have the wrong policies at first, and we will fix them over time. I assume that the way we would handle that policy is the same way we do in Fedora and EPEL: there are plenty of people who maintain a package just for EPEL, because the Fedora maintainer doesn't want to support an older OS. I assume we will simply allow them to have access to that branch if they're willing to do the work. But I reiterate the statement: come help us define the policy.

Alright, so, to talk about the reasons why you might... kind of the point of this talk is really: when does a module make sense for you, especially given the architecture that we ended up with for F28? Right, that's the word. The current architecture kind of makes it so that you add on modules when you need them, so what we want to do is talk about when you might need them. The first example, which I think most people know about already, is the one where you want to have two versions of something. And so we have, you know, dogs, which I thought was entertaining, and goats. Those goats might be sheep. I don't know, animals. So we have
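Both ideas from the discussion above, stream ownership and "build it but don't ship it", live in a module's metadata. Here is a hedged sketch in the modulemd format; the module name, stream, and package names are all invented for illustration, and real modulemd documents carry more fields than this:

```yaml
document: modulemd
version: 2
data:
  name: mywebapp            # hypothetical module
  stream: "2.0"             # the stream this document defines
  summary: Example web application, 2.x stream
  description: Illustrative module metadata sketch.
  license:
    module: [MIT]
  dependencies:
    - buildrequires:
        platform: [f28]
      requires:
        platform: [f28]
  components:
    rpms:
      mywebapp:
        rationale: The application itself.
      mywebapp-parser:
        rationale: Only needed while building mywebapp.
  filter:
    rpms:
      - mywebapp-parser     # built as a dependency, but not shipped
```

The `filter` section is the piece that matches the point about build deps: a component can participate in the module build without ending up in what users (and mirrors) actually receive.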
multiple versions of something maintained upstream. You know, Django is a great example, right? They are supporting probably at least two versions, as do most of those kinds of frameworks; there are usually two in flight. Node.js is the perfect example, because they maintain two LTS versions and a development version. Right, so Node.js has even three, but most frameworks have at least two. You know, Drupal, which I've actually been fighting with lately, is currently maintaining three versions, I think six, seven, and eight, and none of them are easy to port across. That's why they end up maintaining them for so long: it's very, very difficult to move versions.

So this is one of the big examples. Basically, what we're trying to do here is... it's good that the newest version of something is available in Fedora. What's bad is that if it's something like a framework, it means the newest version of something else is not available in Fedora, because it hasn't had time to port to the newest version. So what we're trying to do is give them lifecycle flexibility: they can keep maintaining their application on the current version until they're ready to do the work of upgrading. So that's that one. You want to move on to the next one?

And similarly, in keeping with Fedora's First identity, it also allows us, like I said with Node.js, to carry both LTS versions and make the newest one of those the default, but then also carry their development one. So we can encourage people who want to do new Node.js development to use Fedora as well, and not be required to go off to NodeSource or one of those other places that have a really hacked-up Fedora implementation that doesn't work very well.

A good example for this was with Python, right? We didn't want to switch Fedora to Python 3 for a long time because of how much Python 2 stuff would break. If you're
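From the user's side, the Node.js case above might look something like this. The stream names are assumptions for the sake of the sketch; the exact streams depend on what a given Fedora release actually ships:

```shell
# Show the available streams: e.g. an older LTS, a newer LTS,
# and a development stream, with the default marked.
dnf module list nodejs

# Install an LTS stream (default profile of the hypothetical "8" stream):
sudo dnf module install nodejs:8/default

# Later, move to the development stream: reset the module's state,
# then install the other stream.
sudo dnf module reset nodejs
sudo dnf module install nodejs:10/default
```

The design point is that either choice lands the binaries where upstream expects them, so scripts that just run `node` keep working regardless of which stream is enabled.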
Using modularity, you don't have to have that pain: we could have kept Python 2 as the default and made Python 3 available alongside it. "But we have that without modularity — sort of." — I'm going to stop you on that one; Python is a terrible example. That one is a minefield, and we'll end up in a debate rather than a conversation. The counterpoint is that Python can do parallel installability on its own. I definitely disagree that it's the same thing, but whatever — let's move on.

The next case is when upstream releases don't align with ours. We've already alluded to this: Fedora comes out on a theoretical six-month cadence. What happens when something is released in month seven? That means you have to wait a whole additional cycle before you can get it. With modularity, we can release things whenever we want — in theory; it depends on what it is and how it works — but for the most part that's the idea: whenever a version comes out and is ready, we can ship it for whatever currently supported versions of Fedora exist. We could actually do it for more than that, except, going back to policy, we don't want to. So that's what this case is about: how do we make it so that software can land when we want it to?

Then, on the flip side, there's older software — things with, typically, something like a five-year life cycle. Do you really need to upgrade your database every time a new version of the OS comes out? Most people find that incredibly risky, so they stick with one version of the database for a long time. What this lets us do is let you maintain your existing — whatever, MariaDB 10 — across multiple versions of Fedora, without forcing you to upgrade the database until you're ready to make that choice, given that the database itself doesn't require an update.

All right, the last one — I have no recollection of this slide — is similar to the previous one. The case we usually use as the example here is, hypothetically, FreeIPA. Traditionally, FreeIPA and the OS release have been very tightly coupled.
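A sketch of what that database case looks like from the admin's side: enabling a stream pins it, and the pin is intended to survive an OS upgrade. This is a console session sketch, not a tested recipe — the `mariadb:10.2` stream name is an assumption, and the commands are the F28-era `dnf module` and `dnf system-upgrade` plugin syntax:

```
# Pin the database to one upstream line (stream name illustrative):
$ sudo dnf module enable mariadb:10.2
$ sudo dnf install mariadb-server

# Later, upgrade the OS underneath. The enabled stream stays
# enabled, so the database stays on 10.2 until you explicitly
# switch streams yourself:
$ sudo dnf system-upgrade download --releasever=29
$ sudo dnf system-upgrade reboot
```

The point is the separation of decisions: "upgrade the OS" and "upgrade the database" become two independent choices instead of one forced bundle.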
And similar to the database case: when you upgrade Fedora 28 to Fedora 29, you're not just saying, "okay, I'm going to get a newer kernel, a newer glibc, newer system-level tools, a new GDM." What you're saying is, "everything on my system is upgrading, including these applications that I rely on — so I am not going to move anything until I can test that the entire stack I rely on in my infrastructure works." That's one of the main reasons people don't like to deploy Fedora in production: you can't update the OS and the applications on their own separate life cycles.

Modularity gives us the ability to say: hey, this critical app you're relying on — you can lock it here. On an upgrade, or even an update, between Fedora 28 and Fedora 30, as long as Fedora 30 is still capable of supporting that application, you just update the OS underneath, and the bits of the module providing the application stay the same. So you can update your OS — which may mean you get low-level kernel security updates and all sorts of new performance enhancements — while the thing you actually care about, your application, stays put. We've hammered this home: nobody really cares what OS you're running as long as your application keeps going. This is a way to do that while letting us keep our fast OS schedule.

It is, hopefully, a way to win over the set of users who always update two releases at a time — they do that because they're pushing off this pain for a whole year instead of every six months — and it lets us get to a point where Fedora 28 to 29, and 29 to 30, are more like service packs than they are upgrades.

You also have this case in the application-developer scenario, in reverse: you want the bleeding edge of Node.js, because you know you're going to be deploying in three to six months and you know that version will be stable by then. So you want to do all of your development on the development version of Node.js.
But the idea is that you still want to be able to upgrade the operating system itself on your laptop or workstation. Providing some independence there means you don't turn off updates for six months while you're doing development — which I know I have done before, particularly on Windows, because there's a potential that those updates may impact my code, and I want to scope the period where I'm fighting through that fire to one window of time. So you turn off updates for X amount of time, then all at once you turn them back on, take all the updates, go through your whole test/CI cycle, and deal with the bugs from the upgrade in one go. So this is the reverse case: you want the bleeding-edge stuff, but you still want to be able to take regular updates.

All right. One of the other things I want to make a point of — and it's also a teaser for my next talk — I meant to insert a slide here and forgot to. Before this, I wanted to cover a couple of cases where modularity is *not* a good fit. For example, I do not foresee a world in which we modularize glibc. I don't see a world where extremely common low-level libraries are anything but part of the OS itself.

I'm sorry — Peter? "Aren't some of the problems — where you say, I want to keep one thing at version X and upgrade a different one — isn't some of that just fixed by running more things in separate containers?" So the question is basically: if you want lifecycle independence between the parts of your OS or distro, couldn't you just use containers for that? And the answer is: sometimes. In fact, if you look at data centers before containers, most people ran a VM per application; now they're making that cheaper by doing it with containers, but it's the same one-application-per-user-space pattern. And in the case of the FreeIPA example, one of the flaws in that is that FreeIPA doesn't support running alongside other applications on the same machine anyway.
The problem, though, is that you also have other stuff besides IPA in your container, and you may want to upgrade that stuff too. Oh, sorry — to repeat for the recording: going back to the container example — or a VM, or even a physical server — if you're dedicating a user space to one given application, does that solve this problem by itself? I would say sometimes, but I think that glosses over the other updates that need to take place inside that user space, which modularity can still help with.

The other side of it is the content that goes into your containers. There are times when you want the container's user space to be the latest and greatest — because there's a new Heartbleed, or whatever the newly named vulnerability of the day is — but you still want the same exact application. You don't want to be required to pull down some new upstream version just because the image was rebuilt on top of a newer base. When constructing those containers, modularity gives you that control as well.

Okay, so the example of when modularity is a bad idea: low-level, heavily shared system components are probably not a great fit. That's what we discovered when we were trying to build the base OS or platform modules: the maintenance effort around that is as much as doing the whole distro, and having multiple versions of any of those individual components available is not very useful. So that's when you should shy away from it — if something is heavily reused by everyone else, a traditional RPM probably makes more sense. The flip side is: maybe we'll get there someday. Maybe we'll find a way to simplify those components so that we can do more shared components this way. But that day is not today, and it's probably several years off. Like a lot of new things, when we have new capabilities we should feel out how they work before making long-standing, hard decisions about them.
What modularity is trying to do is offer flexibility in our decision-making. It is not trying to say that everything should be a module; it's saying that sometimes a module makes sense and sometimes it doesn't, and we want build infrastructure flexible enough that we can make those choices based on the software itself, rather than based on what our build infrastructure happens to be able to build.

Now to the next slide. What I wanted to point out here is that there are other distros doing this too, so here are some quick examples. Amazon Linux 2 "extras" — that's what it's actually called — does something kind of like modularity: they offer alternate versions of things in separate repos that you can install. If you want to know more about it, I'll talk about it more later.

SUSE modules: SUSE is now offering, in their enterprise editions — at least they were last time I looked — alternate repos with different versions. It's more like RHEL Extras: they have one called Web Development, I think, with a bunch of newer versions of PHP in it, but those are mixed together into one repo. So you enable that repo and you get options for a newer version of PHP and a newer version of nginx. They have something like five or seven different repos on various subjects.

Guix and NixOS are basically what modularity would be if we could burn everything down and do it right. It's really interesting, and very different from a user perspective. If we were totally fine with everyone forgetting about this whole yum-and-DNF thing altogether and doing things completely differently from the get-go, Nix does a really interesting job of this. But I'd go back to Peter's point: a lot of the advantage they give is actually solved by containers. The parallel installability they're capable of is, while really interesting, maybe not necessary — and especially not necessary enough to burn everything down for.
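For comparison, the Amazon Linux 2 mechanism mentioned above looks roughly like this from the command line. This is a console session sketch; the `amazon-linux-extras` tool and its subcommands are real, but the `php7.2` topic name is just an example of one that existed at the time:

```
# Show the available "topics" and their versions:
$ amazon-linux-extras list

# Enable a topic, which switches on its separate repository:
$ sudo amazon-linux-extras enable php7.2

# Then install from it with the normal package manager:
$ sudo yum install php
```

The design choice is similar in spirit to module streams — alternate versions live in opt-in repos — but without the per-component stream metadata that modularity carries.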
"I was going to comment — I have an example from the past where containers did *not* solve it: when Rails 4 went into Fedora, the version of Fedora with Rails 3 went end-of-life before Rails 3 did, and if you had not yet done the non-trivial task of porting to Rails 4 — like, what do you do?"

So the example, basically: Rails 4 came out and missed the Fedora release window — it wasn't timed with Fedora — and it was incredibly difficult to get from Rails 3 apps to Rails 4 apps. Rails 4 went into Fedora — let's say Fedora 15 came out with Rails 4, and then Fedora 16 came out and still had Rails 4 — but Rails 4 had completely replaced Rails 3 in the distro. "We had an app, on the team I was on, in Rails 3, and it was very non-trivial to port to Rails 4. By the time we were able to port, the version of Fedora with Rails 3 was end-of-life. We could have stuck it in a container, but then the entire container runtime doesn't get security updates." — I understood your point; I just repeated it poorly. — "It doesn't matter; this is one of those cases that modularity would make easier."

Let me repeat that for the recording. Rails 3 was in version X of Fedora; Rails 4 came out in version Y; the upgrade from Rails 3 to Rails 4 was non-trivial, so most large applications using Rails 3 were stuck — many of them long enough that the version of Fedora that still had Rails 3 went end-of-life before they actually got to do the upgrade.

"And this is my exact example with Drupal — same problem. I remember when Drupal had not yet upgraded to PHP — I want to say 5.3 — and all the distros had adopted it. So there wasn't even a distro you could choose to run Drupal on; you had to pin it to older versions, and it took Drupal something like six months or a year to upgrade. Which is not cool — not just on the Drupal guys; not cool for anybody trying to run that in production."
So this slide explains modularity in the new architecture: there's a set of RPMs — which we have been jokingly referring to as "bare RPMs," which of course sounds like the bear, the animal — so we end up with "ursine RPMs," and "medvěd RPMs," and so on. Any word for bear, plus "RPMs," is entertaining to us. And then you have the modules sitting on top of that. The names are a little arbitrary, but imagine the application streams up on top, and then the base — essentially what the Everything repo is today — underneath.

"I'm glad you're covering my very important calc use case." The calc use case is a major driver of all of modularity, because there are apparently both a stable and a development version of the calc application, you know.

All right, moving on — we're at Q&A, which is good; we're at two minutes, so we timed it well. We planned the tech problems. So, do we have any more general questions? We weren't sure if we wanted to show actual examples or whatever... oh, but he wants Brendan to go first.

"Do you have specific policy that you're advocating for, now that we have this technology in Fedora?" So the question is: do we have specific policies we would advocate for in Fedora? Well, my first policy would be: back the CI objective, because that widens our abilities by a lot. The second we have good, solid CI in the infrastructure, a lot of our concerns about the number of different streams of things that we have at least lessen, if not go away entirely. That's a huge thing that I think is really, really important to the modularity project, and I've thought so from the get-go. So that's the first thing.

The next thing: I think we should start small, in the sense that two streams of something is probably the limit. And when I say two streams, I mean there's a version of it in the base, and then there's one other version — so, two versions.
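To show what "a module sitting on top of bare RPMs" amounts to in the metadata, here is an abridged modulemd sketch in the style of the F28-era format. This is a config fragment, not a complete, buildable definition: the field names follow modulemd version 1 as I understand it, and the concrete values (the nodejs component, the `f28` platform stream, the `'8'` branch) are illustrative assumptions:

```yaml
document: modulemd
version: 1
data:
  summary: Node.js runtime
  license:
    module: [MIT]
  dependencies:
    buildrequires:
      platform: f28        # built against the base OS, treated as a stream
    requires:
      platform: f28
  profiles:
    default:
      rpms: [nodejs, npm]  # what installing this stream's default profile pulls in
  components:
    rpms:
      nodejs:
        rationale: Runtime and package manager.
        ref: '8'           # dist-git branch that provides this stream's sources
```

The `ref` per component is what makes a stream a stream: the same module definition, pointed at a different branch, yields a parallel version without touching the base repository.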
Now, we might have a few exceptions, but for the most part we probably want to try to limit it to two-ish until we really start to feel out how much work it actually is. So those would be some of the things. Let's see, what else... I don't know; that's the biggest one I've got. Do you have a—? "Again, I'm going to disagree with you. I wouldn't say this should be a formal policy; I would just say that anyone who is willing to do the work is allowed to do the work, and if they find it's too much effort, they drop a stream in the next release. What's your experience been doing Node.js in multiple versions? What does maintaining multiple streams actually look like?"

Honestly — sorry, the question was about my personal experience, and I know I'm getting flagged for time, so this will be our last answer — my experience with Node.js specifically has been that it's been minimal additional effort on top of maintaining Node.js to begin with, although, you know, with Node.js even one percent more is a very large chunk. It is a little bit of additional work, but the module stream expansion stuff — which we'll talk about later in my talk with Mohan — makes it a lot easier for me to get it running on 28 and 29 at the same time as well. So it pretty much balances out; I haven't actually found it to be more work than the regular package.

I think we are at time, so thanks for coming.