So, yeah, I'm Adam Šamalík, I'm a software engineer at Red Hat, and I have a question. Who writes software? Who is a developer, a software engineer? Who writes horrible shell scripts? Excellent. I'll be commenting on you. And who runs it in production, then? All right, also a lot of people. I'll be commenting on you as well. So let's start with developers. What do developers want? So if I'm developing something, I don't want to write everything from scratch. So I search the Internet for things that are useful to me. That would be libraries, that would be all sorts of things. And I don't really care that much where they come from or how well supported they are. I just need to make my work easy. And then off we go. Is that all right? Maybe I'm just a horrible person. I have no XKCD comic, I'm sorry. But operators, so I think you look like pilots. If you run something in production, again, there will be dependencies, there will be things. But you probably prefer packages, you prefer auditability, life cycles and some kind of support. And to sleep at night. And to sleep at night, that's also a very, very good point. So you might notice something very impractical about these two groups. They are very different. They have completely different needs. But they somehow need to work together, because the software goes from the developer to the operator. So how to make them both happy? So there's a question: isn't this just packaging? That's what many distributions do, right? There are packages that you can install very easily. Let's have a look if that's enough. So packaging, that makes for integrated, tested, updated and easily installable software. When I started with Linux, that's 12 years ago already, I went from Windows XP. And that was quite magical, because I didn't have to search the Internet for anything I wanted to install. It was just one install command, and it just magically appeared.
That was amazing. And I got updates, too, and it always worked. That was really nice. And there is a lot of work going into distributions to make sure that everything works together nicely. Everything is tested, updated for security issues, et cetera, et cetera. So packaging is great. And then there's a second thing with Linux distributions. I didn't really care about this that much back then, but: life cycles. I'd say that brings long-term stability to the diverse open source world. I actually have a picture. So imagine that this is the open source world. Well, it's not really, right? There are more than seven projects, there are millions. But they're very different. This is just a life cycle, just a maintenance timeline: one version of an application appears, then another version appears, and the first one dies. No one cares about it anymore, no one maintains it. And if you run something in production and you need to somehow make sense of all of that, that's crazy. So how to fix this? Well, Linux distributions came into play, and they picked a few of them. They just magically changed the life cycle, because they care for them for a certain period of time. Sometimes it's longer, sometimes it's shorter, but it's the same for everything. And very conveniently, they put it all together and release it as distribution releases, like Fedora 27, Fedora 28, and everything just works together nicely. And of course, there are many Linux distributions out there: CentOS, which has long-term support and maintenance; Fedora, where every release is maintained for about 13 months; or something like Arch Linux, where there's basically no release, it just sort of happens, new versions appear as they do. So there's a big variety; you can basically choose what you want. And that's great. So that sounds really good for operators, sort of. Packages, life cycles, updates, transparency. I can see everything in there. But there's one problem with it.
It's kind of monolithic, right? Everything's the same, every piece is in just one version. And it might be inflexible. Probably too inflexible for developers, and sometimes even for operators, because even existing applications might need something different. So let's have another question: what about containers? Do they make both groups happy? So containers are these isolated and portable things, right? Like these raspberries. Imagine a pile of raspberries, that's like the upstream, and then we can put them in boxes and we can sell them nicely, ship them nicely. So this is basically containers. But how would this look after a week or three? Right? So containers are great, but they're just isolated and portable. They won't somehow magically fix the software inside them. And this is nicely visible here. So they're great, but they might not be the answer to this question. So what do we actually need? We need new versions and a variety of versions for developers, but with the qualities operators and system administrators need. How to do that? So there's a project called Fedora Modularity, which I've been working on for some time. And I have stickers, if you like. I have stickers, because that's important, right? And what we do: Modularity basically separates the life cycles of different pieces of the distribution. So, life cycles. We care about life cycles, we care about maintainability, we care about stability, and sometimes support if you have a commercial distribution. So let's have a look at this image again. That's what we saw; this is the maintenance timeline. So what's inside? This is a little bit simplified version of Fedora with just a Node.js package, nothing else. And as you can see, there's Fedora 26 and Fedora 27. These are quite old versions, but I have them for demonstration purposes. There's just a different version of Node.js in each: there's Node.js 6, there's Node.js 8, nothing else. So what happens with Modularity?
That's what we did in Fedora 28. We introduced so-called modules, that's how we talk about them, and they give you a choice of version. So you can choose what you want. And then when a new release comes in, you can somehow keep your version, and everyone's happy. By the way, this is a lie. We're not making portable binaries. It's more like this: it's a different binary, but with the same promise, right? The same version. So Node.js 10, Node.js 8, whatever I want, I can have. So before I show you a demo, I'll have a drink. And then we'll walk through four concepts that we're using in Modularity, and then we'll have the demo. So: packages. Distributions are built out of packages, everything is packages. And we're not changing anything in this regard. We basically don't even touch them, they're the same. What's new here is modules and streams. And naming is hard, so I apologize. Modules are some kind of logical groups of packages representing an application, a language runtime, or something that just makes sense. And we can take them and put them on an independent life cycle. So, as we could see, they can live throughout multiple releases. And then we have this thing called streams, which are something like versions, but more like streams of compatible versions. So for example Node.js 10, or Node.js 8. And I can choose any one of those; there are multiple available. So that's modules. And then, when we have so many choices, we don't want to go crazy about it, right? I don't want to have to choose every single component of the operating system. So we have defaults, and that means that you only need to choose when you want to. And if you don't care, everything just works as before. You will only see one version, and that's fine. But if you want to change, you can. And when you do choose something, it's very important that updates won't break everything. So you say: I want version 8, and then you run an update.
You get the latest 8, but you won't get 10, even though it's available. So this is very important: you get updates, but just within the stream you chose. All right. Demo. I really like live demos, and I don't want to ruin their reputation, so I'll do a recorded demo. What do you think? All right. Before I start: who uses Fedora or an RPM-based distro? All right. So we use this package manager called DNF. It used to be called yum. And that's how we basically install software, and that's important. It'll be command line, just showing some new commands for how to manage the modules. All right. So I'm typing dnf module list, and that shows me a list of all the modules available, after I hit Enter. There we go. And I'll be focusing on Node.js in these examples, so I'm highlighting two versions right there: Node.js 8, Node.js 10. And this is Fedora 29 Beta, so there might be some content missing, but for the demonstration, that's fine. I choose to install Node.js 8, so I type the command dnf module install nodejs, colon, 8. And I have a lot of time, so I'll wait for a while. And then I hit Enter. I say yes. I really mean it. Excellent. And then I'll wait for the internet to connect. There's an impressive Wi-Fi here. Oh wait, that's a recording. Never mind. All right. So I have Node.js 8, and if I type node -v, I can prove it's Node.js 8. It's where I expect it to be, it works the way I need it to, everything's great. I'll try an update, just to demonstrate that it won't break, even though there was 10 available. So, very slowly: dnf update nodejs. Nothing happened. I have the newest version, because I just installed it. But I could have got an update within the 8 stream. All right, let's switch it to 10. Now I'm making the choice: I really want to upgrade to 10. So I type dnf module install, it's the same command, nodejs:10. And it should ask me if I really want to do this. It says it's switching the stream from 8 to 10, over there. And yay, I got 10. So again, node -v, and I'll see. I said node -v.
Thank you. And it's 10. All right, so that was managing streams. And there's one thing that I haven't mentioned; it's sort of a bonus. If I have an application, I can sometimes install it in multiple ways, like a database: I can install the server, I can install the client, or both. And that's what I'm going to demonstrate here as well. So I'll do the list again, and I need to find a database. There we go, I found MongoDB. I can see client and server. There's also something called default, which is a little bit ambiguous. I think we're getting rid of it, but just ignore it. And now I type dnf module install, then the module name, so it'll be mongodb, colon, the version, and slash, the profile. And I'm just installing the server portion. And I don't have to care which packages that is, because there are so many packages in there. That's why we're trying to make it easy: I just want to get the server, whatever the packages are. All right. Installing. And just to show it's really there, I type mongo and hit Tab-Tab, and that'll be it. I have mongod, which is the daemon, and something else, which I don't know what it is, because I'm not a database person. OK. And if I decide I also want the client, I can use the same command, but I change server to client. So I just delete the server part, type client, and I get even more packages. So the profiles are just package subsets, and I can install one of them or all of them. Or I can just choose packages manually; that's also doable. So this is really just a bonus to make it easy for people. There we go. And again, mongo Tab-Tab, and I'll see that there are things like mongo-tools and other interesting things. All right. So that was the demo. That was Modularity: multiple versions of packages in Linux distributions. That was, I hope, interesting. So, containers again. I was a little bit hating on them, so let's fix that. Let's have a look at containers for their true benefits.
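To recap the recorded demo, the whole thing boils down to a handful of DNF commands. This is a sketch, not something run here: it assumes a Fedora 29 or later system with modular repositories, and the MongoDB stream name is a placeholder you would take from the dnf module list output.

```shell
# List available modules with their streams and profiles.
dnf module list

# Install the default profile of the nodejs 8 stream.
sudo dnf module install nodejs:8
node -v                            # reports a v8.x release

# Updates stay within the chosen stream; this never pulls in 10.
sudo dnf update nodejs

# Switching streams is an explicit choice; DNF asks for confirmation.
sudo dnf module install nodejs:10
node -v                            # now a v10.x release

# Profiles are package subsets: install just the server part of MongoDB...
sudo dnf module install mongodb:4.0/server   # stream name is a placeholder
# ...and later add the client subset with the same command.
sudo dnf module install mongodb:4.0/client
```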
Containers run almost everywhere, and I can do the composing and testing up front and then just ship it to production. So let's use them for what they're good for. Not shoving things in there and just letting them run; let's use them for what they're good for. So this is great. And basically, it's so easy to make them that there are third-party repos, and I can just go to them, find a container, and run it, right? What could go wrong? This is an old-ish article, from like half a year ago, but it has an interesting quote in it: someone was pushing containers into Docker Hub, but it could be any registry, right? And they were functioning fine, but there was a script inside that was mining something on your system, and they made quite a lot of money. So yeah, back to containers: let's just use them for what they're good for. So if we have modules, if we have the software, we can build custom containers with Linux distributions, right? And we can leverage both: the life cycle benefits and the packaging benefits, together with the container benefits, the portability and isolation. And I don't have a demo here, but I have a slide just to demonstrate it. It's super easy. So this is a Dockerfile, well, three Dockerfiles, and I do FROM fedora:29, and then the command we saw: dnf -y module install nodejs:8, where -y means yes, or nodejs:10 or nodejs:11, and then dnf clean all to clean the metadata and make the image smaller. And that's how I can very easily make my own container, and I know what's in there. And what's nice about it: if I need an update, I just rebuild it, and I get the newest versions. So I get security updates and whatever. So there we go. All right. So we saw the multiple versions, and then we saw how we can run it in containers. So if I run everything in containers, what about the operating system? Do I need to care? Well, I think I should definitely care. For security reasons, performance, hardware, et cetera.
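The slide with the three Dockerfiles can be reconstructed roughly as follows. This is a minimal sketch of one variant, pinning the nodejs:10 stream; the base image tag follows the talk, and the build command at the end assumes you have podman or docker available.

```shell
# Reconstructed Dockerfile from the slide (one of the three variants).
cat > Dockerfile <<'EOF'
FROM fedora:29
RUN dnf -y module install nodejs:10 && dnf clean all
EOF

cat Dockerfile
# Build it with e.g.: podman build -t my-nodejs:10 .
# Rebuilding later picks up the newest packages within the same stream,
# security updates included.
```

The -y answers yes for you, and dnf clean all drops the cached repository metadata so the image stays small, exactly as described above.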
But it doesn't need to be my pet, in a sense. I don't need to care which packages exactly are on the system, or install individual things on it. It can be immutable. And we have two projects in Fedora. Some of you might have already heard about CoreOS. We have Fedora CoreOS and Silverblue, and these are immutable operating systems for containers. CoreOS is for your server, and Silverblue is for the workstation. And what's kind of interesting about them is the way they upgrade and the way you can manage them. So if I look at a traditional upgrade, and this is not Silverblue or CoreOS, this is just a traditional distribution, right? Let's say this is the system, and I'm going to update it from orange to green. So what happens? Well, the packages start updating, right? The system modifies itself underneath, and then I'm done, and hopefully nothing broke. And if it did break in the middle, I can be in some weird state, which I would need to recover from. But if I have a look at Silverblue and CoreOS, which use a technology called rpm-ostree to manage the system, what I do is download a new image on the side, so this is like a new system, and I just reboot into it. So there are just much fewer ways this can break. And this can be from major release to major release, like Fedora 28 to Fedora 29, which I did during the lunch break, a major release upgrade right before a talk, but I didn't really worry, because if something breaks, I can go back. And you can go as crazy as going from Fedora to CentOS. As long as the configuration files are compatible, there's no problem with that. So, yeah: basically, fearless upgrades. That's great. CoreOS, if you run a server somewhere... yeah? I have a question: what happened to Atomic? If you've heard of Project Atomic, there was a Fedora initiative to basically do the same thing. And yes, CoreOS and Silverblue are basically a new version of Atomic.
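The image-on-the-side update flow described here maps to a few rpm-ostree commands. This is a sketch, not run in the talk; it only works on an ostree-based host such as Silverblue or Fedora CoreOS.

```shell
# Show the currently booted deployment and any pending one.
rpm-ostree status

# Download and prepare the new OS image on the side;
# the running system is not modified.
sudo rpm-ostree upgrade

# The new deployment only takes effect after a reboot.
sudo systemctl reboot

# If the new image misbehaves, boot back into the previous deployment.
sudo rpm-ostree rollback
```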
So when Red Hat acquired CoreOS, that work also came to Fedora, right? And they took the best from both worlds. So for example, rpm-ostree is from Atomic, and the CoreOS brand and other technology, they had a lot of interesting automation in there, came from CoreOS, and they're making it all work together. Yeah. So that's like a next generation. And yeah, this is kind of useful if I want to have a server and I need to make sure it always boots when I update it. I can just switch the image, and if it doesn't work, I can just boot the old one. And Silverblue is great for experiments: I can just try new versions, and I can make sure that it always works, or I can go back. All right. That was quite quick. So if there are three things I want everyone to remember from this talk, it'll be these three. First, Linux distributions: we saw Fedora Modularity, and distributions are great for packaging and life cycles. Then we can take that and build containers with Linux distributions, and there are tools like Buildah and Podman to help you with that. And containers are just portable and isolated; that's what we need to keep in mind. And then, if we have everything in containers, we don't need to care that much about what's on the OS, but we still need to care a lot about the OS itself. That's why we have immutable operating systems like CoreOS and Silverblue, with fearless upgrades. And you can follow me on Twitter. All right. That's everything I had on the slides, and now we can have questions, where I can show you more demos or whatever. OK. So, you started with some history. Back in 2000 or so, when Linux started to be used in production, there was a promise that everything would get onto your system the same way. You do a yum-style or dnf install, whatever. And it was such a great thing for end users that it even made it into a marketing motto: Linux, batteries included.
You didn't need to go to different optional things like on the proprietary systems. At the time, you had the middleware of the day, like Apache, directly inside your Linux distribution as a package like everything else, while on other systems it was an optional add-on or whatever, not really well integrated. So today's middleware is something like Elasticsearch, which everyone here has to run somewhere if they have a modern information system. So Elasticsearch is what it is: it has Java code, it has a nice modern JavaScript frontend, lately they've been adding some bits in Golang, and then to wrap it all together, you usually have Ansible or something else in Python. So you showed us how to install developer-oriented modules, Node.js or whatever. If I take, let's say, the Elasticsearch module: how do you compose all those developer-oriented streams into something which is useful in production, not in development but in production? That's a good question. How do I basically install something complex... for a developer? So you need to think about a developer of what, right? I want my Elasticsearch. Oh, I need to repeat the question for the record. So basically the question was, if I try to make it shorter: how do I install Elasticsearch as a developer with this? A complex application. That uses a lot of different projects. Yeah, so how do I install a complex environment that's using a lot of pieces, like Python, Elasticsearch and maybe other languages, at once? Well, if they're packaged in Fedora, you can just type the same command: dnf install elasticsearch. But how does Modularity help? Oh, how does Modularity help? So if there are multiple versions that are actively used, there might be multiple modules that you can choose from. But that's basically it. It won't somehow change the packages. And if you want a better experience for a whole stack, you could say: yeah, you can have a stack module.
So just for the user experience, you can take multiple modules and wrap them into one, if that's desired, if it's a really common thing. So that's like a meta-module; that's what you can do. But otherwise, you can basically take all the pieces and install them. If you need a specific version of something, you can choose it, but if you don't, you don't have to. But this is about using the software for development, not about development of the software itself. So, is the module feature based on different channels inside the DNF configuration? Do you have different repos that you're pointing to that the DNF module support is dealing with, or is it implemented in another way inside DNF? Yeah, so the question is: how is it implemented? Is it multiple repos, or is it something else? So it's all one repo, because we believe that a repo is a source of software from a third party or from some entity, right? And you can have multiple modules in one repo. It's implemented in a way that we have a module definition that basically points to different packages and makes a group, something like a comps group, but with additional things that make it work as modules. So everything is one repo. And I can show you details if you... So if everything is one repo, we need to do a DNF update. Sorry: if you download the package and you upgrade it with RPM directly, it will upgrade it, so you need to use DNF for it to not update past the stream? Yes, I need to use DNF if I want to update and make sure that everything works here. DNF knows what modules you have installed, and then it just follows the right stream for each module. Good question: are dependencies part of the module? And what happens if two modules need different dependencies? So this really depends case by case. You can have dependencies in the module, and you can have dependencies outside of the module if they're pretty common. And if two modules conflict, you can't install both. So you can only have one of them if they conflict, right?
So this makes it basically... in general, you can't install all the modules at once, because at some point they will conflict. So, yeah? Sure. Yeah, that's a good point: we say that it's parallel availability, not parallel installability. So you have multiple versions available, but you can only choose one stream of each module. And yeah, if there is another module that conflicts, you can't install them in the same user space. But we figure that if people are using containers anyway, or in the enterprise model, it's just one app per user space. So it's one app per container, per VM, or even still per physical machine. So it works in those cases, right? Another comment to the original question: if you want to have the whole stack of, say, an application that uses this stack, we go to the second part. We have containers, and we have a group of containers, which is called a pod, that represents essentially the configuration we want to run. Right, yeah, there was a comment that another way to build a complex developer stack is, for example, to use something like Kubernetes, where I can make a pod, which is multiple containers for an application. Yeah, but that's an alternative to where we came from. Yeah. There's a question in the back. Yeah, you're next, you're next. How does it work with software collections? So, yeah. I don't know if you know about Software Collections. Software Collections are basically RPM packages that install things into separate paths, so you can install multiple versions of the same thing at once. But they posed some problems with how to maintain them and how to use them, because you need to manipulate the paths; you need to change your application to use them. So in theory, we could put them into a module, but we don't do that. We figured that people mostly don't want multiple versions of the same thing in one user space, and it was causing more complication. Well, most people don't want that. A few people do, but that's fine.
They can still use Software Collections, or they can use containers, right? But the general use case was that people don't care about that, so we just simplified it and use standard packages. And it can replace some of the use cases of Software Collections. Yeah, there are many cases where people just want one version, just a different one, so that's even simpler for them right now. Right, next question? How many versions do you want to provide in modules? Yeah, how far can we go? So that really depends on the community. Basically, our team produced the technology and we helped a few people to produce some modules, but it's really up to the community what they want. You can go as far as you want. You could go as far as GCC, but that would be kind of crazy, because if you have two versions of GCC, you would need two builds of everything, right? Which is possible. By the way, that reminds me: we have something called stream expansion, which means that if we build, let's say, two versions of Fedora, two versions of a language runtime, and two versions of an application, and I just say I want to build everything against everything, that's two versions of the language runtime for each distro and then two versions of the application for each of those, and it kind of explodes. I can control it and limit it to just the combinations I really care about. So in theory, yes, I could do that, but I don't think there will be someone who really wants to maintain something like that. Yeah, I was first. You're second. No, the question was whether there's something you change in the spec file of the RPM packages for modularity to work. No, there are no changes in the spec file.
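Since the spec files stay untouched, the per-stream information lives in separate module metadata (modulemd) shipped alongside the packages in the repo. Here is a rough, hand-written sketch of what such a definition carries; the field names follow the modulemd YAML format, but all the concrete values are hypothetical, and real definitions carry more fields than shown.

```shell
# Hypothetical modulemd snippet, written out just to show its shape.
cat > nodejs-10-sketch.yaml <<'EOF'
document: modulemd
version: 2
data:
  name: nodejs
  stream: 10
  summary: JavaScript runtime
  profiles:
    default:
      rpms: [nodejs, npm]
    development:
      rpms: [nodejs, npm, nodejs-devel]
  components:
    rpms:
      nodejs:
        rationale: The runtime itself.
        ref: "10"
EOF
cat nodejs-10-sketch.yaml
```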
Okay, so the question is whether Flatpaks somehow fall into this. So yes. Flatpaks, I don't know if everyone knows Flatpaks, are containers for graphical applications, and we have people in Fedora, at least one guy, who tries to build Flatpaks from RPMs, so the same content is available as Flatpaks. So yeah, that's a way to ship the software as well. The benefit of Modularity is that we have the software available for multiple deployments: we can deploy just RPMs right on the machine, you can ship a container, you can ship a Flatpak, you can even ship a VM, I don't know if people do that. But the benefit is you have the same thing across multiple deployment methods, yeah. Installation, that's right. So Nix does that, like, I can switch the versions of... no. Yeah, so the question was: it's parallel availability, not parallel installability, and, for example, NixOS can do parallel installation; are there any plans to do that? No. So basically, that's the funny thing when we started. We had this requirement, or we made up this requirement: let's innovate, and let's not make big changes. Right? Because people are using RPM distributions on their laptops, in the data center. If you have, for example, RHEL, stock exchanges run on that, right? And in case it gets adopted, which it did, even in the RHEL 8 Beta, we can't change it too much. So we needed to stick to RPMs, and we shouldn't change them too much. If we had started from nothing, we would probably do that, but we didn't start from nothing. We had to keep the RPM packages. But yeah, if you need something different, then maybe different distributions might help as well, or containers. But yeah, these are basically RPMs, and that was part of the deal. Question? I'll just point out that with Silverblue, applications... Yeah. Yeah, with Silverblue, which is the desktop version of the immutable container OS, Flatpak is the mechanism for how you install graphical applications there.
And there's even a repo, and GNOME Software, which is like an app store for GNOME, where you can install Flatpaks directly. But then Flatpaks are... no, Flatpaks are not in any way connected with the OSTree itself. So it works kind of like a phone: you have the OS part, which in this case is the OSTree, and you update the OS; and then you have the application updates, which come separately, from whatever place you choose to consume them, Flathub for example. And is upstream happy about all this stuff? So that's the thing. Upstreams sometimes don't care that much, sometimes they do. And the reason why we have packaging is that upstreams are really good at developing new features, making interesting things and whatever. But they live in their own worlds, right? And if you want to run things together, to build a complex development environment or a complex application, they might somehow collide or just not work together. So that's why we have packaging. We have packaging guidelines to make sure that everything is packaged in a consistent way. So we have packagers who actually have opinions about where files are on the disk, and they somehow reshuffle everything and make it work together. So these are very different worlds, and it's very rarely the same person. So in principle, upstreams shouldn't care. And if they're maintaining two versions, we consume two versions, right? If they don't, we probably wouldn't just take something and support and package it in Fedora ourselves. That would be weird, yeah? How do you deal with divergence? Because we all know, for example, that in an ideal Fedora Modularity world, you'd have a Python 2 stream and a Python 3 stream, and we all know that almost no one managed to jump from 2 to 3. That's a very good question. You have apps which are... Yeah, that's a very good question. So what about Python, right? Python 2 and Python 3? These are usually on the same system at the same time. So Python is a special case.
And we have modules called python2 and python3. These are different modules, because they're different Pythons. And they even went to the extreme of having modules called python3.7 and python3.6, so you can install them all at the same time. So they're kind of abusing the system, but they got an exception to the rule. Is that going to stay the exception long term? Because once you have modules, and developers only care about their own little dependencies, their own little stack, won't we soon have a whole lot of big apps that each depend on incompatible modules? Oh right, yeah. So what happens if this divergence goes too far and we have large apps that depend on different versions of modules? Well, what we encourage is that you should consume the default whenever possible. So if there is a default version, you should really consume the default version, and then you don't need to care about this. You need a good reason to require an alternative version, because exactly then it breaks for other applications, right? But on the other hand, if you really need to do that, you can. Of course, it's not magic that will somehow fix everything. But that's the recommendation: if there is a default, try to use the default, so it's not in conflict with everything else. But yeah, you can of course package something really weird and just include everything, knowing it works with nothing else. That's fine. If there is a use case for it, we can do it. But it won't be the general case. Oh, by the way, we're doing a discussion here. If people came just for the talk, I won't be offended if you leave. That's fine. If you're interested in the discussion, because I wanted to have a little discussion here, you're very welcome to stay. I just don't want to pressure anyone to wait for the end. Yeah. How do I kill versions? That's a very good question.
So, for example, if Node.js 8 goes end of life: we're actually working on a mechanism, which is almost done, where we say that, for example, Node.js 8 ends in Fedora 30, and this is just a made-up example. We can record this information, and the build system is clever enough to just stop building it in Fedora 30. So what if I already have it installed on my system and I upgrade, what happens? That's basically the same scenario as when a traditional package dies in Fedora and gets removed: it stays on your system, but you will be prompted that, hey, this is end of life, so you should probably switch to something that's still maintained. Do you have any other questions? Yes. Yeah. Good question. So, when I update CoreOS or Silverblue and I have these snapshots, how long do I keep the old snapshots? So the system always keeps two, and it keeps them until the next update. As long as you do an update, it always keeps the previous one, so you can just go back whenever; but if you do two updates, only the more recent one will be there. Anything else to discuss? All right, thanks for coming.