Please welcome our next speaker, Langdon White from Red Hat, who will be talking about Rethinking Linux Distributions. Hi, I'm Langdon. In case you couldn't figure out who was speaking. I didn't know I could actually change slides. So I'm Langdon. Oh, good, it's not cut off. I was an evangelist for RHEL, although we call them advocates now. But now I'm a platform architect working on developer experience. My running joke is that they got tired of me complaining, so they said, okay, now you have to go to engineering and fix stuff. I think they're still waiting for that. And I've been working on Fedora modularity, which is loosely related to the rings stuff as well. This is Thomas. He plays a lot of football, I think you call it here, and basketball. Does lots of homework. And as my joking tweet said earlier, he uses Instagram to talk to his friends: they post a picture on Instagram, and then they use the comments to chat. The picture has nothing to do with the conversation. And apparently it's a mechanism by which they can avoid parents monitoring what they're talking about, which I just thought was fascinating. Between that and Snapchat, I'm like, you know, I'm going to stick with Twitter. I know I'm old, but come on. So, starting off: distributions are awesome, just like that dinosaur. They're always awesome. But why is it a Lego dinosaur? Well, because it's a whole bunch of pieces that fit together to create something, right? And the thing they create is really, really cool. However, that whole thing has to move together all the time, right? Otherwise it all falls apart. And that's not so good. And I will say it was not my idea to include the dinosaur originally. I just thought it was hilarious. All right.
So what we're doing here today is we're going to talk about some of the motivations, kind of what's been going on in the software industry and why maybe Linux distributions aren't the right answer for all the world's problems anymore, at least the way they're constructed today. And to start off with, the single biggest driver, I think, is that we've really started to recognize that operating systems, application frameworks, and applications themselves have different life cycles. And distributions, almost by definition, disallow that disconnection, right? And in a sense, that's actually what's good about them: as a sysadmin you could always update any application any time you wanted, because if there was a critical CVE or whatever, you didn't have to rely on the vendor to ship the appropriate patch at the appropriate time; you could update it underneath them. So that's the first thing. The next thing is that how we do software has also changed a lot in the last 20 years, right? Roughly speaking, that's about how long distributions have been around, or at least how long they've been popular. And so we do things like Agile, right? And actually, this is why I think the examples of Agile and DevOps are interesting. In the mid-90s, when you were a developer, that meant you could also replace a hard drive. It meant you could also fix a server, right? Because developers were basically all there was. We didn't really have the striation we have now into all these different types of developers, right? Or sysadmins or UX designers or any of that stuff. It was all the same people, and all the user experience stuff showed it. So in the late 90s, everybody started to separate that stuff out and we started to have specialists, right?
And as one CFO I worked for once used to say, it's like we took what the steel industry did over 200 years and decided to do the same exact thing, except in 40, right? So we're basically going through all the maturation processes that every other industry has gone through, except where medicine took several thousand years, we've decided to do it since the 60s, right? And so we separated all these people out and then we said, oh crap. We just found out that when it's not all in one person's head, it doesn't work so well. So what we did was we said, hey, maybe we could actually get the people who do the requirements to come and talk to us, instead of it just being something they ship over a wall. And we called that Agile, right? Then a little bit later on we said, oh, you know, there are these people who operate the actual applications we run. Why don't we invite them too? And we could call that DevOps, right? Then the other thing that's been crazy is the amount of data building up. I used to know the stat off the top of my head, but it's like petabytes every year or something. It's just insane. And so now we have an entire subject field around data analysis, right? Another whole new thing. You've got all these software systems that use streaming data as a way to trigger events. I mean, just everything about software is so different now than it was 20 years ago. And then my favorite part is that the pendulum has swung back towards the developers. So now the developers are in charge. As a developer, that's awesome. As a sysadmin, that is terrible. So basically, like everything else, everything is always on a spectrum, and there's a pendulum that goes back and forth about who's in control. And so, again, in the late 90s it was mostly developers who were in control. As a result, it was very difficult to manage systems, very difficult to manage applications, et cetera.
Then in the early 2000s the sysadmins took over; the pendulum swung back towards them. And everyone became very concerned about the resilience and reliability of systems, way more so than about the applications themselves. But then that started to swing back the other way. And my goal with some of this talk, and the work I've been doing, is to not let it swing all the way back, right? Why don't we actually help developers not be idiots? Because I know I'm an idiot. I don't really understand the appeal of DevOps. Like, why in God's name would I not want to just throw my application over the wall and let somebody else get woken up at two in the morning? But I understand that from a quality and usability perspective, yeah, DevOps is probably a better choice. So this is a great book, if you haven't read it, by Stephen O'Grady, who works for an analyst firm that specifically focuses on developers. Then the next thing we've started to realize is that different user scenarios have different risks. Oh, by the way, if you couldn't tell from my personality, feel free to shout out questions; raising your hand would probably be slightly more polite. But don't wait till the end. So, different user scenarios have different risks. In other words, when I'm deploying an ERP that runs my entire business and costs me millions of dollars to implement, you know what I would like to do? Never, ever touch it again, right? Now on the flip side, take something like a movie website. A movie is about to come out, they put up a marketing website, and it's going to go away very quickly. It's probably very static. There's just not that much risk involved if it gets destroyed, right? You can replace it pretty easily, it doesn't really matter that much, and it doesn't have much lifespan. Why do we want to hold those two things to the same level of quality?
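To make the "nines" of uptime that come up in the next anecdote concrete, here's a quick back-of-the-envelope calculation (my own illustration, not from the talk) of how much downtime per year a given number of nines actually allows:

```shell
# Allowed downtime per year at N nines of availability.
# 2 nines = 99% up, 3 nines = 99.9% up, 5 nines = 99.999% up, etc.
for n in 2 3 5; do
  awk -v n="$n" 'BEGIN {
    secs = 365.25 * 24 * 3600 * 10^-n   # seconds of allowed downtime per year
    printf "%d nines: %.1f hours/year\n", n, secs / 3600
  }'
done
# prints:
# 2 nines: 87.7 hours/year
# 3 nines: 8.8 hours/year
# 5 nines: 0.1 hours/year
```

Which is also why "37 nines" in the story below is a joke: each extra nine cuts the allowed downtime by another factor of ten.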
There may be a counterargument for my particular example, but the point is that different applications can require different levels of quality, different levels of reliability. I actually worked at a big financial firm a long time ago, and one of the best things they did was they had this grid, right? Reliability along one axis and uptime along the other; I can't even remember exactly what the X and Y axes were. But basically, if you were out here, it was like 37 nines of uptime, right? And if you were down here, it was like, if you were lucky, it was up. And what they actually did was charge the business to be over here. So the business had to make a determination about where they were on that spectrum, and if they were over here, it was like, well, you know what? Your budget is a million dollars a year right now; you need 10 just to run this app. So they would drive them back into the realm of sanity without actually having to explain to somebody what 37 nines of uptime means. All right, so one of the things that is particularly annoying to developers is that sysadmins update their applications in production. Why is this a problem? Well, and this is also kind of the point: the updates do not have bug-for-bug compatibility, okay? Just because you're putting in a CVE fix doesn't mean I didn't notice the bug behind that security flaw and write a workaround for it. So the sysadmin who introduces that new patch might actually break my workaround and introduce a new bug, right? So we have this entire fragile set of things, fragile in the sense that things are very carefully stacked up, and that's actually what the picture over here is: this is a particular kind of rock that cracks and the entire surface falls away. But the point being, we talk about standing on the shoulders of giants, right?
As a software developer now, we're standing on the shoulders of skyscrapers, okay? I mean, there are just millions of lines of code underneath your application. The last thing you want is to touch anything in there without essentially running all your tests again, right? So this is some of the popularity around containers: it's so easy as a developer to defend my application from the updates coming in from, you know, the sysadmin side. Now on the flip side, they also do a nice thing of containerizing, but containerizing what is exposed, right? So the sysadmin can still do a firewall update and, assuming my app doesn't touch firewalls, it doesn't have any chance of affecting my application, even if they both use the same SSL library. Well, it's not a great example, but you get the idea. So that's the last of the problems I was talking about. So then a few years ago, just checking how I'm doing on time here, a few years ago Matthew Miller, mostly him as the presenter, although he claims there were a lot of people involved in the concept (I think that was mostly to avoid getting hurt), proposed this rings concept. So what does the rings concept do? Well, it says, okay, why don't we start to think about how we separate the applications from the frameworks from the actual core of the OS, right? So instead of being a distribution, we actually have an operating system, and then we have applications that run on top of it, and then, where we can, we share pieces between applications, and that's what we call frameworks. So the next thing we did was release the Fedora editions, okay? And this was kind of the first step down this path. So there's a shared library of RPMs, essentially, that you can choose to use in any of the editions.
So we have, you know, three different editions with different use cases, which means, to some extent, that on the workstation one, for example, we have the shorter-lifecycle components, right? Because as a desktop user, it's much better to be using much more current things and updating regularly. The system itself is usually less at risk, except for things like conferences and stuff; you know, I don't recommend going to DEF CON with your own phone. The server you want to solidify, right? You want it to be able to age over time. So you want to choose a little more carefully; you want an older set of packages, perhaps, and you want to make choices that are less risky. Cloud is actually really interesting because it's kind of both, but at the same time you want to be able to choose, for that use case, the individual applications you want. For example, the Cloud edition doesn't have a firewall, right? Which would seem really, really odd on any other kind of server, but you don't need a firewall when you're deployed in something like EC2, because you have a firewall service coming from somewhere else. In theory. All right, so then what we decided to do was, okay, we've established the first idea of ring separation by doing the editions. So then we jumped to the next easiest thing, which is: let's take a look at the outermost ring. Okay, so applications that are of dubious quality. And by dubious, I don't necessarily mean the normal negative interpretation, but dubious in the sense that we really have no idea what kind of quality they are. They may be awesome, but we don't know. And we don't have the time or the energy to review them quickly enough for people to want them to be available. So that's when we introduced this Copr thing, which has apparently proven to be very popular with most people.
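For what it's worth, consuming a Copr repo is a couple of commands. This is roughly what the workflow looked like on Fedora at the time; the repo and package names here are invented for illustration:

```
# Enable someone's Copr repo (needs dnf-plugins-core), then install from it.
$ sudo dnf copr enable someuser/someproject
$ sudo dnf install someapp

# Or go the other way, as described next: grab an SRPM out of Koji
# and rebuild it yourself for whatever Fedora version you're on.
$ koji download-build --arch=src someapp-1.0-1.fc21
$ rpmbuild --rebuild someapp-1.0-1.fc21.src.rpm
```

The rebuild path is the same trick described below for keeping a retired package alive on a newer release.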
And it does a couple of nice things, right? This is how I got LibreOffice 5, for example, on Fedora 22, because it wasn't available yet for Fedora. So that gives me the cutting-edge stuff. But then on the flip side, for a long time I was using system-config-lvm after they took it out of Fedora, because I can't figure out LVM, so I like the little graphical interface. And so I would just pull SRPMs out of Koji and rebuild them for whatever version of Fedora I was on. So it lets you do both things, right? I can do a net-new application that we don't know if it's good yet or not. Or I can do an older application that's probably riddled with security holes. But I can choose to run it, right? It's my choice. It's my application. It's my risk. And because it's my laptop, I know enough about my laptop to quantify that risk. So that's one of the nice things about Copr. One of the things that was proposed but never actually got passed, at least not yet, was to try to start thinking about the next ring in from the outermost. Again, being kind of the simplest thing to do, this was the Playground proposal: the idea that some set of the Copr repos will be of high enough quality that we want to give some sort of attribution for them, right? So, you know, attest to them as Fedora, that they are of some level of quality. Not as good as all the stuff that's in the main repos, but definitely... somebody's taken a look at it, okay? And so that was kind of the next idea there. Then we also had a proposal called the Aleph proposal, which actually tried to define five rings, okay? And basically, with aleph-0, if you guys are familiar with aleph, it comes from math: the set of integers is aleph-0, and, like, the set of ordinal numbers is aleph-1, so it's basically different sizes of sets of numbers. So, same kind of idea here, right?
The set of RPMs that is the most core is aleph-0, and then essentially aleph-5 is all the way out in that outermost ring. So that was the next idea. It also didn't get passed, but it was close. Yeah. And then basically a lot of things got stuck, and my interpretation of the big reason why is that with the rings proposal the metaphor falls apart, because we have lots and lots of orthogonal concerns around the quality of a package. At first, with the rings proposal, everything fits neatly in a ring, and it looks pretty, and it's really easy to understand. However, take build dependencies, for example. If you want to build something, you need to have the build dependencies. Do the build dependencies belong in the same ring as the application they build? Do they need to be maintained at the same level of quality as the things they build? Not necessarily. As long as they do their job correctly, that's fine. They could be packaged terribly, right? They don't have to be the same quality. Okay, so what do you do with that? Does it go in the ring, but somehow marked as yucky? We don't really want that. So I did all kinds of cool diagrams, with bubbles and Venn diagrams and a bunch of other things, and it all worked out terribly. It just got worse and worse. If you were at one of the talks earlier today, they showed another of Matt Miller's pictures, which was the rings, but then with a bunch of drugs in it. So it really, really falls apart. And the orthogonal concerns, right, the metrics by which you want to rate the flexibility of something, are wide-reaching, as we know already. And on top of that, we don't even know what we don't know yet. Sorry, is there a question? Let's put it this way: I don't know.
And the point I'm trying to make here is that we want flexibility in the decision around, I hate to say every given package, but every given package, right? We want the flexibility for it to be in or out of a ring, or at varying levels of quality, and the system we're designing has to support that. So that was the first problem. The other problem is developers still won't do it, because it has the word RPM in it. So pretty much by definition they won't do it. There are lots of reasons why developers don't really like RPMs, but I would argue that, whether RPM is the best way to package things or not, the fact is that as a developer who does Python or Ruby or Java or whatever, every three or six months when I want to do a release I need to relearn this esoteric language that I don't use for anything else. This is the same reason I hate httpd virtual hosts. Every time I want to start a new web development project I have to go figure out Apache again. I don't do it often enough to actually learn it. I just learn it well enough to accomplish that one goal, then I run away and go back to the stuff I do every day, and when I come back three or six months later, oh my god, I have to learn it all again. So even if it were easier to learn, it wouldn't really help that much, because you have to jump out and come back all the time. So my little picture here is: we want simple packaging. We don't want them to have to custom-touch everything and make nice boxes. The example here was actually from an article, which is kind of interesting: that kind of custom packaging is like 15 euro per box, versus the more generic boxes, which are like 50 cents, so half a euro. Well, we're done. That's all you guys came for.
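For anyone who hasn't suffered through it, the "esoteric language" in question is the RPM spec file. A minimal one looks roughly like this; the package name and contents are invented purely for illustration:

```
Name:           hello-web
Version:        1.0
Release:        1%{?dist}
Summary:        Tiny example web app
License:        MIT
Source0:        %{name}-%{version}.tar.gz
# Build deps need not live in the same quality "ring" as the app itself
BuildRequires:  python3-devel
Requires:       python3
# A weak dependency (the RPM feature mentioned later): pulled in by
# default, but the user can drop it without breaking the package
Recommends:     python3-gunicorn

%description
An invented example just to show the shape of a spec file.

%prep
%autosetup

%install
install -Dm644 app.py %{buildroot}%{_datadir}/%{name}/app.py

%files
%{_datadir}/%{name}/app.py

%changelog
* Mon Jan 04 2016 Example Packager <packager@example.com> - 1.0-1
- Initial package
```

The point of the complaint stands: none of this syntax transfers from the language ecosystems (pip, gem, Maven) a developer lives in day to day.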
So, modularity. The idea here is that we want to start to think about these things in terms of larger blobs, and by blob I don't mean anything negative; this isn't necessarily about bundling or not bundling or any of that, as much as it is that we want to start to think about applications as applications, rather than as a set of packages. And the reason we want to do that is because we want them to be defended from other applications. So, the advantages of shared RPMs between different applications: there's some advantage in disk space, there's some advantage in knowing where one thing is and updating just that one thing. However, the disadvantage is basically all the stuff I just went through. So how can we figure out a way to actually guarantee an application its own defensibility, where the application can have some level of expectation around when it decides whether it wants to share or not share? So that's one aspect. What I'd really like to see, and this may or may not be something we can pull off, is for the OS to be able to decide whether an application gets a new copy of something or gets one that already exists. Because when you ask for a library, there is somebody who knows whether or not it's already there, and that's the OS itself. Why do we rely on human packagers to figure out whether something is there, or whether it's built, or whatever, when the application, when it's talking to the OS to get those libraries, could just go get them itself? Right? I'm crazy, just to be clear. The other part: there's a project going on with Fedora minimization, right? A lot of people look at that as if there's some sort of intrinsic value in small disk size. I don't care. I don't understand why anybody cares. As far as I'm concerned, half a gig or a gig of whatever for a server is irrelevant. I don't even know of a server that's actually that small. So it just doesn't matter. The reason it matters is attack surface area. However, here's an interesting byproduct; Stephen Tweedie brought this up in a meeting I was in the other day. The localization libraries are one of the things we keep trying to remove in concepts like Fedora minimization. Localization, sorry, translations actually, not libraries, the actual translations themselves. They're not providing an attack surface, guys. They don't execute. But people want to take them out anyway, because they want to get it as small as possible. So there's the attack surface argument, which does make sense, but it still doesn't drive the minimization down as much as I think we need. What does drive the minimization down is minimizing the overlap in dependencies between different things. So we need not only a minimization around core, which is the obvious one people are focused on for container base images or a minimal install or those kinds of things, but we also need to do it around applications. So when we start to talk about modules and applications, we need a way to minimize their dependencies on all other applications, and in this sense minimize their dependency on the OS itself. So for the number of libraries that are considered part of the application, the higher that number is, the better. That doesn't necessarily mean it doesn't share components when it's actually on disk, but it does mean that conceptually the operating system can continue to change as much as it wants while the application stays the same, right? So the more libraries we push up into the application itself, the more flexibility we have in the OS. So, you know, one of the classic problems: there's Fedora 14, right? Everybody and their brother was still running Fedora 14 for a really long time, until, I don't know, Fedora 21 came out; Matt's got all these stats, but I can never remember them. But the point being, why did they stay there
right? Is it because they thought everything about Fedora 14 was awesome and there was no need for any advancement in computers? Again, I'm guessing no. My guess is that the applications they used on that OS were what they liked, and they wanted those versions, for whatever reason. So as a result, they had to stay with that version of the operating system. We don't need that; we don't need to keep them so tightly connected. All right, and then the last thing is that we need a way to let application developers ship their applications. So, my little bag here: we want to give them a bag they can shove all their stuff into and then ship to their customers, users, whatever. And we want that bag to look a lot like the bag they're already using, if not exactly the same. What that means, though, is that we lose one of the major advantages we have with RPM. And, you know, I'm old, so: that's a card catalog, it's how you found books in libraries 20 years ago. The idea being, there's all this metadata we still need to ship to the users. We need to be able to tell them all these quality metrics about the application they want to use, but we need to give them a separate channel for that metadata content, distinct from the application itself. And we also need to be able to say this metadata element is attached to this particular binary. So that's no fun; that's why RPM was invented. So let's break that and do something completely different, just to make things more fun. So basically, the combination of these three concepts is really what we're trying to do with modularization, or, you know, with Fedora.next or the rings stuff or whatever you want to call it. This is what we're trying to do: meet the problems from the beginning in a way that will not only protect the sysadmins but also help the developers ship their applications more easily. So, where are we now? These are some examples of applications, or concepts, or initiatives, let's say, that are starting to go down this route. rolekit, for example, is this idea that we need a generic way to provide an application, and a way to install it, so that a consumer can know how to install server roles and not have to know the intricate concepts that go on with something like an RPM install. In other words, when you RPM-install something, there's only one way it can be installed, generally speaking, in the RPM world. What rolekit tries to do is say: when you're initially installing certain kinds of applications, you want a level of interaction you can engage with, but not all the time. You don't always want it updated; you may have a golden version that you don't want to have to manually reinstall every time. So it's trying to strike a balance between those two things. Then you have xdg-app. xdg-app is containerization technology, but what it's trying to do, which I think they don't focus on enough, is to provide essentially a platform that you can write your application against that is separate and distinct from the OS. It's mostly focused on desktop applications for GNOME, but the idea is: you have your container, right, but then you have GTK3 of some version, potentially pinned even down to the z version. So as an application author, I can target that particular version, and when I'm ready, I can upgrade to the latest version, as an application developer. So they're providing an application platform layer that is completely independent from what's running on the rest of the machine, so that the application can have a different life cycle, in this case from GNOME, sort of, but also from the OS itself. However, it's actually not limited to GTK; you could put other application frameworks in there
they just happen to be biased toward that because they're GNOME guys. Atomic Workstation is kind of the underpinnings of that same concept: if you have all of your applications delivered as these containers, then your OS can maybe operate completely differently from the applications themselves. The idea is that it's an immutable operating system that you can then put applications on top of. You couldn't really do that with RPM the way it works today, so that's why they're talking about the containerization side of it. Nulecule and Atomic App are an attempt to say: hey, most applications are not just one container, not just one app, most of the time. If you want to do a website, at a minimum you need a web server and a database, almost always. So how do you coordinate those two things together? Most applications are actually multi-component, so we need a way to describe them, a way to create them, and a way to deploy them where we treat them as an autonomous unit. Sounds kind of like the modules thing I was talking about. The Base working group was working on, or is working on, how to identify that center ring, which I think is a very, very difficult thing to actually get anyone to ever agree on. So I tend to like the idea better of: why don't we just start pulling all the applications out, and then whatever we end up with at the end is the core, right? That's the center. Rather than trying to figure out the center first, because everyone will have their own opinions. Another change that's coming is weak dependencies, and this allows us, even with RPM, to start to modularize things. It starts to say: you know what, you don't have to provide this particular dependency. And then the Env and Stacks working group: they're the ones who produced the guidelines and stuff for Copr, as well as the Playground proposal and the Aleph proposal. So the Env and Stacks working group is where a lot of this stuff should be taking place, and where those kinds of definitions should happen, even if the implementations happen in individual projects. So there's still lots more to do. Oh wait, I needed to update more of my slides than I realized; I gave a very similar talk last week. But I did plug DevConf, so everybody should be happy about that. And we're also starting to have developer.fedoraproject.org, right, which is targeted at application developers, the people we want to attract to Fedora. We want to attract more application developers as users and contributors to Fedora, because as that pendulum swings over, we want more of them to come start participating, especially as we start to see things like DevOps, where there are fewer and fewer sysadmins in the traditional sense. Very soon sysadmins will be managing, if they're not already, thousands of machines, tons and tons of machines. It used to be that one person managed maybe a hundred at the high end; you can't do that anymore. You're now managing hundreds if not thousands of machines, and we need better tools to enable that. But more interestingly, for the future of distributions, it's really important that we attract the people who are actually going to be doing the software stuff, much more than we do today. Because in this future space, we won't need to support operating systems directly, per se; it'll be much more managed, and it'll be about the applications, with the OS just providing enough underneath to run those applications. And then, let's see: we'd love you to come participate in the Env and Stacks working group, or in any of the apps I mentioned; there are more of those coming along. More documentation about how to get
plugged into this is starting and we'd like to see there and I'm giving myself a deadline for an update on all of this at Flock in August and so I hope they'll see you all there or at least you can watch the talk over streaming video that's pretty much all I've got are there questions and if once the slides are up there's links to basically everything I mentioned so sorry go ahead I'm doing something wrong yeah it's working now thank you if there are apps in the system that are on the same library but another version of it isn't going to create a huge mess at least how I understand it so can you define mess ten versions of the same library no ten different versions of the same library on the same system right what's wrong with that well configuration file formats may change and maybe let's just look at a GTK themes they break from one immersion to another so this is an argument I get a lot so that's why I'm trolling you a bit the so as people who use computers every day we have certain expectations around how they operate so that our muscle memory works correctly a whole bunch of why people hated GNOME 3 is because their muscle memory stopped working there may be lots of other reasons but that's definitely one of them the exact same thing happened with Microsoft Office when they moved to the ribbon model it's not so much that the ribbon thing was a bad idea as much as people's muscle memory stopped working create example of that too I worked with a woman who worked on a project that cost like a million dollars give or take to replace an old terminal sorry like a mainframe system with a web app and their users hated it hated it because all the super fast things that they had memorized over the last 20 years that they could do on a mainframe in seconds now they had to click around on a website talk about actually maybe talking to users first right so one of the expectations we have particularly cis admins is that I know in my head where certain kinds of files are 
That's one of the arguments about the mess: if the files are littered all over the place, how will I ever find them? Another example, as you said: the configuration for a particular version may change over time, so they actually have to have different configs to go with different libraries, and who knows, maybe it's a whole stack of things that have to be different. I argue, number one, that I really don't understand the fascination with disk space; as far as disk space is concerned, who cares? Number two, one argument you can definitely make is: what about RAM? Okay, that starts to be a better argument in my mind. But the last part, about your muscle memory remembering where things are: we have a computer, and it's really good at keeping track of stuff. How about we just make it so that the computer presents things where the user expects them to be, rather than making the user remember? Because that's what computers do, right? So can we change how some of the operating system works, or how the file systems work, or at least how they look to the user, so that all that muscle memory still works, even though it's a lie? And you know what, file systems have been lying like crazy for a long time now. If you know anything about what's actually going on in a hard disk when you try to write a file, you will keep all of your notes under your pillow, handwritten, in multiple copies. Very much like, if you've ever worked at a bank, you really, really have to convince yourself to put money back in there. Go ahead.

So, two things. One, and I didn't really mention this earlier, but one thing that is significantly more true today than it was 20 years ago is that you can much more regularly trust your vendors. Okay, Matt would disagree, he hates all vendors, but you can much more regularly trust your vendors. A lot of the vendors are open source projects or distributions, so you can trust your distribution. Part of the reason distributions are there is
because you couldn't trust vendors: you wanted an independent way to get those security patches down. So what I'm saying is, let's keep the pendulum from swinging the whole other way. If pure, straight-up containerization continues, it doesn't matter what you want as a sysadmin: you will not be able to patch it. What I would argue instead is that if we allow an application to have different versions of different libraries, then, barring some of the other problems we have with that as far as security is concerned, you can actually decide to take the module, take the application, as it gets updated. The other thing is that if we have knowledge of those modules, of where the applications are in the Fedora infrastructure, then the second a CVE comes out, it starts the testing and triggers a build and everything else. It's not like it was 20 years ago, where you basically had to wait for them to ship you a CD or something. We can actually know about those applications, particularly in the open source world, in the actual Fedora infrastructure. So just because a CVE came out against a particular patch, or against a particular library, doesn't mean we can't just as quickly get the resolution out for all the individual applications. Oh, and by the way, they'll actually have been tested, too, versus just blindly updating the library. Does that make sense? I didn't explain that terribly well, but I'm stressing out because he's yelling at me about time. Is there anything else, do we have one more or not? I don't know what time it is. Okay, yeah, go ahead, shoot.

Do you know about other Linux distributions, whether they are trying to solve similar problems, with modularity and so on? Sort of. I don't know of any other distributions that are trying to solve the problem from the perspective that I'm taking. I think all of the distributions have this problem and do various things to
try to solve it, but for different needs or with different goals. So Nix, I think, is a great example of a packager, but I'm not sure the motivations are the same, so they may not end up with a result we could use. But I hope, I mean, I think this is the future. The popularity of things like containerization: how do things actually get run in the real world? One VM per application, with every single RPM in the stack for that application built by the person who's running it. Why? Because they don't want any updates to affect that application without it being under their control. Why does the sysadmin sit down and look at the 20 CVEs that came out last night? Because he only wants to pick the ones that he thinks apply. But how does he know? Shouldn't he be able to just blindly say "yum update" and have magic happen? This is kind of the point: we do all of these workarounds in the real world to avoid the fact that we're using a distribution. So that's my argument.
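[Editor's note] The "computer presents things where the user expects them, even though it's a lie" idea from the Q&A can be sketched with nothing more than symlinks: the real files live in per-version directories, and a link keeps the path that muscle memory remembers. The paths and app name here are hypothetical, purely for illustration, not any distribution's actual layout.

```shell
# Per-version config trees for a made-up app; each version keeps its own
# config format, so nothing has to be shared or overwritten.
mkdir -p /tmp/demo/etc-versions/myapp-1.0 /tmp/demo/etc-versions/myapp-2.0
echo "format=ini"  > /tmp/demo/etc-versions/myapp-1.0/myapp.conf
echo "format=toml" > /tmp/demo/etc-versions/myapp-2.0/myapp.conf

# The "muscle memory" path is just a symlink into the versioned tree.
mkdir -p /tmp/demo/etc
ln -sfn /tmp/demo/etc-versions/myapp-2.0/myapp.conf /tmp/demo/etc/myapp.conf
cat /tmp/demo/etc/myapp.conf    # prints: format=toml

# Switching the system to version 1.0 only repoints the link; the user
# still opens the same path they always have.
ln -sfn /tmp/demo/etc-versions/myapp-1.0/myapp.conf /tmp/demo/etc/myapp.conf
cat /tmp/demo/etc/myapp.conf    # prints: format=ini
```

The user's habits keep working, while the filesystem quietly tracks which version is actually in play, which is exactly the kind of "lie" filesystems already tell about where bytes land on disk.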
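[Editor's note] The "a CVE comes out and it triggers testing and a build" flow described above can be sketched as follows. The manifest format, file locations, and the rebuild step are all invented for illustration; the point is only that when the build system knows which applications bundle which libraries, a CVE against one library maps directly to the set of builds to redo and retest, instead of one shared copy being patched blindly underneath every consumer.

```shell
# Hypothetical per-module manifests declaring the libraries each app bundles.
mkdir -p /tmp/modules
printf 'libs: libfoo libbar\n' > /tmp/modules/webapp.manifest
printf 'libs: libbaz\n'        > /tmp/modules/dbtool.manifest

# A CVE lands against libfoo (made-up name for the demo).
VULNERABLE_LIB="libfoo"

# Find every module that bundles the vulnerable library...
affected=$(grep -l "$VULNERABLE_LIB" /tmp/modules/*.manifest)

# ...and kick off a rebuild plus test run for each affected module only.
for m in $affected; do
  echo "rebuild and test: $(basename "$m" .manifest)"
done
```

Running this prints `rebuild and test: webapp` and leaves `dbtool` alone, which is the promise of the modular approach: the fix arrives per application, already built and tested against that application's stack.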