[The session opens with several minutes of technical difficulties: laptops are swapped, passwords exchanged, and the projector repeatedly drops its HDMI signal while the presenters log in to another machine.]

Host: I would like to introduce Courtney and Adam, who are part of the Modularity team. In this presentation, we will look at what they have been working on.

[More projector trouble follows before the talk can begin.]
Adam: Finally — thanks. It took only twenty minutes to connect the projector. So, hi, everyone. My name is Adam; I'm from the Modularity team, and this is Courtney from the Factory 2.0 team. Before we start, there is something we need to do: if you could put the stickers you found on the table on your laptop, because we can't continue without that. If you don't have a sticker, that's your problem, I'm sorry. And please remember to pay after the talk.

Now, let's start with the good stuff. I don't know if you were at Langdon White's talk last year about distributions. We started it by saying that distributions are great, and nothing has changed about that. They are great — I guess we all know that, right? They ship packages with their dependencies, so software is easy to install; they are integrated and tested, so everything works together; and they take care of the boring but important stuff like security. That really matters: if I develop an app, I may have no time to track security fixes, but the distribution patches them for me. Distributions also have a lifespan: they are released at some point in time and die at another. Take a Fedora release, for example: all the software in the release is tied to the release. There are applications like Firefox, there are databases, there are dependencies — and everything shares the same life cycle. Which is fine, but there is a clash, because applications are released independently of the OS, on their own life cycles. Apache or MariaDB release whenever they want, not when Fedora does.
And upstream often has multiple versions available at the same time — but distributions ship just a single version of each component: Fedora 22 ships one version, Fedora 23 another, and so on. Within a release it's almost always just one version, because of the dependencies and the integration: a newer version can need different libraries, different dependencies, and they can't all fit together. So if we can include only one version — which one? What's better for you: fast updates, or stable versions? I don't think there is one answer, because as a developer I want the latest version, but if I run a server, I want the stable one. So that's the answer: we need all of them. But how?

Option one would be what Ubuntu, or Fedora and CentOS, do today: have one fast-paced distro with all the new versions, and one stable distro — CentOS, or Ubuntu LTS. So I can choose: the fast track, or something stable. But if I pick something stable, like Enterprise Linux, it's supported for ten years. After ten, or even five, years, some of the old software might still be fine for me — but what about the kernel? What about the drivers? If I buy a new server, I can't run the old release on it, because there are no drivers for it. So this approach is limited.

Another solution for parallel versions is Software Collections, which is a different method of packaging RPMs that installs the software into a separate path. But it's kind of hacky to type scl enable, and it doesn't always work well — there are issues such as hardcoded paths. Still, it's an option.

Option three: we have containers. Who knows what a container is? Docker? Almost everyone. A container is an isolated part of the operating system that runs processes with their own file system, network stack, and so on.
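To make the "just one version" limitation concrete, here is a toy sketch (package names and the namespace model are invented for illustration): a flat RPM-style namespace allows one version per package name, while per-container namespaces let two versions coexist on one host.

```python
# Toy model: a "namespace" maps package name -> installed version.
# A classic distro is one flat namespace, so a second version of the
# same package is a conflict; containers each get their own namespace.
def install(namespaces, where, name, version):
    ns = namespaces.setdefault(where, {})
    if ns.get(name, version) != version:
        raise ValueError(f"conflict in {where}: {name}")
    ns[name] = version

host = {}
install(host, "web-ctr", "mariadb", "10.1")
install(host, "legacy-ctr", "mariadb", "5.5")  # fine: different namespace
```

Installing "mariadb 5.5" into "web-ctr" as well would raise the conflict error — which is exactly the situation a single-version distro cannot express at all.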
So a container is pretty well separated — but I still need to build it out of an OS, out of some distribution. If I have a distro with one version of a component, I can build a container with that one version; if I want something else inside it, I may need to tweak things a bit. It's a solution — but the best one, I'd argue, is Modularity, which tries to solve all of these problems for you. Let's have a look at the video; I'll be doing the subtitles, so please just read along. ... All right, that was mostly an excuse for me to have a drink.

What we want to do is provide a very small system called the Base Runtime — there was a talk about the Base Runtime here a couple of hours ago. It's a small system that includes only the basic pieces, with a well-proven, stable ABI, the kernel, and that kind of thing. Then we have modules available on top of it, and they have their own life cycles, independent of the distribution — so there's a lot of freedom. As I said, the Base Runtime includes, for example, the kernel, glibc, and basic libraries, and the rest of the software runs as modules. You can think of a module roughly like an application on Android or an iPhone: the same concept, different technology. Modules right now are basically just groups of packages with a purpose: I can have packages for Firefox, packages for a database, or even for a whole LAMP stack. A module looks like this: just the packages plus some metadata — and you can see there are two kinds of packages.
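A module's metadata can be pictured roughly like this — a plain Python dict standing in for the real modulemd file, with made-up contents. The "api" packages are the module's stable contract; everything else among the "components" is an implementation detail that may change between versions of the same stream.

```python
# Hypothetical module definition, loosely mirroring modulemd fields.
module = {
    "name": "firefox",
    "stream": "50",        # stream, not "version" -- see the talk
    "version": 20170127,   # a point in time within the stream
    "components": ["firefox", "nss", "nspr"],
    "api": ["firefox"],    # the packages the module promises to keep stable
}

def implementation_details(mod):
    """Packages shipped by the module but not part of its public API."""
    return sorted(set(mod["components"]) - set(mod["api"]))
```

Here implementation_details(module) yields the bundled dependencies — the parts that an update within the same stream is allowed to swap out.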
For example, in a Firefox module, Firefox itself is an API package — the package that is the reason for the module to exist. But the module also includes all of its dependencies, and we treat those dependencies as implementation details. That lets us guarantee that if I update the module, the API stays the same, while everything else is allowed to change. So there are two different types of packages in a module.

A module is defined by something called a modulemd file, which defines everything: the components, the API, the build recipe, and install profiles. Let me walk you through the file. It starts with name, stream, and version. The name might be firefox, and the stream could be a Firefox version — but we don't call it "version", because there are more complex modules, like a LAMP stack, that contain an httpd of some version and a database of some version. What's "the version" of a LAMP stack? That doesn't make sense, so we call it a stream. The version is then just a point in time within the stream: we can bump it for security updates and the like, but the API stays the same for the life of the stream. Then there is a summary; there are licenses — separate licenses for the modulemd file itself and for the content; where the project lives — community, documentation, tracker; and some optional metadata, where really anything can go.

Then we get to dependencies and components, which are two different things. A dependency is something that is not part of the module — for example, a module needs the Base Runtime to run. A component is something that is included in the module. If I have a Python app, I can have a module with my app's packages and, say, Python itself bundled in as a standalone component. Python is maybe a bad example, though — if I have five Python apps on the system, I don't want five copies of Python, right? That's something we want to solve in the future; I guess it will become just a dependency, so Python is there only once. But for now: dependencies and components.

Then there is something called a filter. When we build source packages — these are RPM packages, and a single source RPM can produce several binary packages — I may not want some of them in the module, so I can filter them out: devel subpackages, for example. Then there is the API: I say that only these three packages are the public interface, and everything else is an implementation detail. And there are install profiles, which are still a bit of an open question, because we haven't worked them out very well yet. Right now a profile is defined as a set of RPMs that get installed, but I want to add configuration and more. For example, if you install httpd today, it's basically configured for production by default. But what if I'm a developer? I could have two profiles, one for production and one for development, where the development one is tweaked to, say, serve HTML from my home directory.

So that was modulemd. Modules are built in Factory 2.0 — there will be a talk about that tomorrow, and Courtney will also be covering some Factory 2.0 pieces shortly. I said that modules are packages, but that's only about the build: they can be delivered as several kinds of artifacts — RPMs, a repo, a Linux container, a Flatpak, anything we want.

So how do we run them? This is roughly how the distro is going to look. There is the Base Runtime — the minimal system from the earlier talk — and I'll be able to install modules either directly on the system or in a container. Why containers? They solve one problem we have: as I said, a module includes all its dependencies, so what if there is a conflict between the dependencies of two modules? One solution is containers. Another would be something like the Software Collections technology I mentioned before: the packages in the API would stay normal packages, but everything else would be repackaged, or built in a way that installs it separately, so the modules won't conflict. I guess the default approach will be containers, because that's the easiest, but we have the other option as well. From the usability side, as a user I don't care whether it's a container or RPMs — I won't even notice. The workflow is: I want to install httpd, so I type "dnf install httpd", just as I would right now. I edit the configuration file as I would now — maybe under a different name, but that's a detail. I drop my index.html into the local file system as I do now, and start the server. Underneath, this can be a container or plain RPM packages; I don't know and I don't care — it's the same, and the system decides for you. A few days later I run "dnf update" and everything keeps working, just as it does now.

Audience: One of the use cases for the container model is modules that conflict. What if you've got two different container modules installed that both include Apache, and you want different configurations?

Yes, that's possible, and it's part of the future plan. The single instance is the default — doing parallel installs of things is not a typical default, that's what we found. The idea is that the configuration files on the host are actually symlinks to somewhere else on the system that is owned by the individual container, and the next instance of the container would get its own copy. So the host has symlinks to content that is owned by the container — we think symlinks are better than the actual files. At that point, admittedly, some of that ease-of-use starts to drop away, and you have to recognize that you're doing something unusual — you may need to reach into a container to take files out of it. But at least at the basic level we can give a similar, distro-like experience even when some of these things are shipped in different models.

So, back from the future questions. Conceptually there might be no changes to the workflow at all; there might be some, I don't know — but as we saw, we mostly don't even need them. Now, how does it work if there are multiple versions of a module available — how did "dnf install httpd" know which one? I talked about streams: modules come in several streams, which can be versions or even variants of an application, and you choose the stream you want before installation. Or you can even download an ISO, as you would for Fedora 25, with the components selected for you. We don't want to change that concept: you'll still be able to download an ISO with everything pre-selected, and DNF keeps the system updated, integrated, and working together — you just type "dnf update" and everything works.

So that's basically it for the usage side; now for the tooling that builds all of this. Let me just switch the slide decks — is that the first one? No, it's not; I screwed that up, let me fix it. Okay. This is Courtney, and she will tell you about the build part.

Courtney: To build modules we use something called the Module Build Service. The Module Build Service is responsible for setting up tags in Koji and rebuilding the components of a module from source. So how does it work?
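The build lifecycle that this section walks through — states init, wait, build, done, failed, and ready — can be summarized as a small state machine. The state names come from the talk; the transition table below is my own condensation of the description, not MBS source code.

```python
# Module build state transitions as described in the talk:
#   init  -> wait   (modulemd parsed OK)      or failed (unparsable)
#   wait  -> build  (the scheduler picks it up)
#   build -> done   (all components built)    or failed (build-root/component errors)
#   done  -> ready  (module can join a larger compose)
def advance(state, step_ok=True):
    """Return the next build state, given whether the current step succeeded."""
    if state == "init":
        return "wait" if step_ok else "failed"
    if state == "wait":
        return "build"
    if state == "build":
        return "done" if step_ok else "failed"
    if state == "done":
        return "ready"
    return state  # "failed" and "ready" are terminal

# A successful build walks init -> wait -> build -> done -> ready.
```

A failure at parse time or at build time drops straight into the terminal failed state, which matches the two failure paths described below.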
The Module Build Service coordinates module builds and is responsible for a number of tasks. Number one, it provides an interface for module client-side tooling: build submission and build-state queries. For example, we can design our own httpd module, submit it, and then query its state. Number two, the MBS verifies that the input data is available and correct — if you submit an invalid modulemd file, that gets caught and rejected. Number three, it prepares a build environment in a supported build system such as Koji. Number four, it schedules and builds the module's components and tracks their build state. Number five, it emits bus messages about all state changes, so that other infrastructure services can pick up the work. I'll review what the state changes are in the following slides — I don't expect you to know them yet.

To reiterate: the Module Build Service implements a RESTful interface for module build submission and state querying, and not all REST methods are supported. For a module build submission, we do an HTTP POST to this URL, and what we need to supply is an SCM URL — this tells the MBS where to get the module's sources. I've highlighted the module git repo in red and the modulemd file in yellow: the git repo name has to match the modulemd file name, and I'll talk later about why the git hash is necessary beyond just identifying which commit you want.

What if we want to query a module? We can query by build ID, which returns the tasks and the module build state. The build state is one of init, wait, build, done, failed, or ready — I'll describe in the next slide what these are. The tasks are a dictionary of component names, in the format "type/NVR", mapped to the related Koji (or other supported build system) task IDs and their states.

So here is the workflow. We start in the init state, where we parse the modulemd file, learn the NVR, and create a record for the module build. From there we go either into the failed state — for example, if the modulemd file can't be parsed — or into the wait state. We have something called the scheduler; the scheduler picks up builds in wait and switches them to build. Once a build is in the build state, we prepare the build root, submit builds for all the components, and wait for the results. From there it goes either to done or to failed — failed if, say, we can't prepare the build root, or some of the components listed in the modulemd file aren't valid. Otherwise it goes to done. We also have a ready state, where the module is ready to become part of a larger compose.

Here's an example of a module build state query: an HTTP GET on the same URL as the POST we had before, plus the module build ID appended. I've highlighted the build ID in both, along with the module build state I described before, and the component names at the bottom — as I said, in "type/NVR" format, so for example "rpm/foo-...".

What if we want to query all the modules — just list everything? We can do an HTTP GET on the same URL as the POST. What I have at the bottom here is all one output; I just couldn't fit it on one slide, so I broke it in half. On the left are the items, each with its ID from the Module Build Service and its state — the states are numbered, so for example init equals one, build equals two, things like that. On the right is the metadata about the results: first, last, and next are all page links, so we can see which page we're on, that the total number of pages is three, that there are ten modules per page, and that we have thirty modules in total. We do have options available: there is a verbose option, false by default, but if you set it to true you'll also get the owner of the build and timestamps for when the build was submitted, when it finished, or when it failed. The page parameter, as I said, specifies which page should be displayed, and per_page specifies how many items go on each page.

So where is the Module Build Service actually used? Here we have the module developer on the left. The developer creates a modulemd file and sends it to the MBS, which communicates with the rest of the infrastructure through fedmsg. Then we have dist-git, which is basically where we put all of our modulemd files — there's a branch for each version of Fedora: Fedora 23, Fedora 24, Fedora 25. From there we look at Koji, which builds the RPMs for the Fedora project in general: Koji pulls the information it needs from the Module Build Service and from dist-git. And then Pungi is our distribution compose tool: it produces release composes — snapshots that contain release deliverables such as installation trees, RPMs, repodata, and maybe things like images for PXE boot.

But wait, there's more: there's still Factory 2.0, which Adam mentioned before. The Factory 2.0 framework includes the infrastructure design for modularity — in other words, modularity is part of a bigger picture. Here's the infrastructure design for Factory 2.0. I don't expect you to be able to read it, but Factory 2.0 is in red, and modularity is the small piece we highlighted in blue.

So let's switch gears: what are we doing in the present? Right now we are able to build modules locally, and we are finishing up the first version of the Module Build Service, so that we are able to
build modules in the infrastructure shown on the previous slide. Next, we are preparing content — in other words, we are preparing our modules and we will be building them: an httpd module, for example, and things like that. We are still waiting on Factory 2.0, which is still in progress, but we are working on an experimental modular release of Fedora Server called Boltron, which I believe Petr mentioned in his talk.

So what are the next steps for modularity? Well, how do we install modules? We have one major problem with module installation — something Adam brought up: module packages can overlap because of their dependencies. We can use containers to solve this, or something similar to Software Collections. Why containers? Our goal is to prevent conflicting dependencies, and containers are one way to isolate packages so those conflicts can't occur. I have a quote here from cio.com: "By containerizing the application platform and its dependencies, differences in OS distributions and underlying infrastructure are abstracted away." And why might you want Software Collections? I have this quote from redhat.com: "With Software Collections, you can build and concurrently install multiple versions of components on your system. Software Collections have no impact on the system versions of the packages installed by any of the conventional RPM package management utilities."

So how do we actually ship our modules? This is something we're still considering, but we want to ship modules as ISOs, RPMs, containers — basically whatever the user wants. And what is our vision? We want to rebuild and test our modules automatically. That's what Factory 2.0 is all about; that's what the Module Build Service is for, anyway. We also have a workshop tomorrow at 16:00 — tomorrow at 16:00 — where we'll show you how to build your own module, install it, and use it, things like that. So that's it; we hope you all come to our workshop. Thank you for listening, and again: tomorrow at 16:00 in A113.

Audience: I remember your talk last year, but I still think this isn't quite there yet. I don't know if you are familiar with Nix — you still need a complete hash checksum of all the files, because the main point is that you always want some known state to work from.

Modularity, in a sense, is really only trying to solve the first step on a potentially very long path. Some distributions have actually gone sideways to get down that long path — Nix, for example; there are other solutions to this too. As a core requirement, we want to keep the user experience of working with Fedora the same. As soon as you go for the full, complete answers — and there are a couple of them — there are some really interesting ways to do it, like changing how the C library and the loaders work, or the Nix model, which basically hashes everything. They are either an incredibly large amount of work, or they are a lot of work and they change everything about how the distro works. So I think this is really a first step in that direction, but I'm not sure it's the last step, if that makes sense. Switching wholesale to a Nix style of management would probably be too disruptive to actually pull off — the open question being whether we could write tooling on top to make it act just like Fedora. I'll say we looked at it.

Audience: At this moment, is there an OSTree in the background?

OSTree is definitely on the list of output artifact types. It's just that we're starting with RPM repos — those are relatively easy — and then containers based on those repos, but we'd like to see lots of other artifact types. Flatpaks are coming as well, I guess, for the GUI applications, because we don't want to run those in Docker-like containers.

Audience: I apologize in advance, because I'm doing a lot of catch-up at this conference — I'm unpacking things in my head, and I haven't had as detailed a look at this as I'd like until now. This feels to me like it's combining a bunch of different things. On one level it's almost comps-slash-kickstarts 2.0 — which is great, because we've only needed that for the last six years — about modernizing how we build a conventional system, because, as you said, if you don't have containers or collections or something like that, everything has to be compatible; all the RPMs have to be cross-compatible, like they are right now. But then there's also this idea that, hey, we could build containers out of this, so it becomes a way to ship applications, and so on. I'm wondering two things. One: how tight is your focus on what this is for, initially? And two: when you get into building those collections, are you relying on upstream tooling, or is this going to reinvent things like Dockerfiles that are already out there and well understood?

Right — this is one of the things we think about as well; it's not a single project, it's multiple projects. On the focus first: we will start with the server use case, and we will start with containers and RPMs — just that. The first step will be the Fedora 26 server edition as a preview, which will not be updated; it's there to prove out the technology. Fedora 27 should be something bigger, even updated. So that will be the first step, and as the second step we are going to
look at the organization side of things. On upstream tooling: I don't think we will rely purely on upstream tooling; we may run upstream tools with our own tooling around them, maybe something based on them.

I just want to speak to how we got here. One of the core requirements that drives this is the ability to have different life cycles between the OS and the applications. To do that, we had to clean up a ton of the distro-building tool chain, because even right now it is incredibly hard to build the whole thing — there is just a lot of human effort involved. As soon as you say, "oh, by the way, we're going to have 37 copies of everything," it becomes unmanageable, if it isn't unmanageable already; as soon as you have two copies of one thing, it starts to become incredibly unmanageable. So we have to clean up how things get built, because we need that automation. Also, part of the promise is that an application knows that it works — which is why we really need what is technically not CI, but basically a CI test loop for each of these things, built into the infrastructure, to make sure a module can actually deliver what it promises. At the end of the day, the core scope requirement was that one little thing — independent life cycles — and it just cascades into a lot of other things. But from a scope perspective, that is exactly what we are doing. For Boltron we are going to flesh that out a little more: we will have the Base Runtime and hopefully twenty or so applications built in this model, and it will actually be a real distro, so you all can check it out, start helping, and see what is broken. Does that help?

Audience: Yeah, I have more questions. Is the modulemd file a runtime artifact that would be delivered via the repository, or is it more like a spec file — a build-time artifact? What is the security around it? If I were to mirror a repository in its entirety, does that modulemd file now exist in my mirror?

It's both, in a way. The developer provides some of the information — though, for example, the name, stream, and version you won't write yourself, because they are taken from the git repo: the name comes from the repo name, the stream from the branch name, and the version is a timestamp of the last commit. You do specify the dependencies, the components, and so on. But when the module is built, the build system adds more information to the file, so it becomes an artifact description. So yes — if you mirror everything, there will be a live modulemd file on your mirror.

Audience: You had an example of doing "dnf install httpd". What would the workflow be for a user who wanted a different version of httpd? Would they just say httpd-equals-some-version, and likewise for all the other versions?

Okay, so a user types "dnf install httpd", and if they want a different version than the default one, they can specify much more metadata: they can specify the stream, the version of a component, even a specific provider of a component, and DNF can do the magic — even switch modules. For example, if you care about, say, PHP and a web server, and you don't care which web server, there can be one module that satisfies the requirement now, and a completely different, newer module a week later; it can even be updated to a different module, because you didn't pin anything else, so nothing you asked for breaks. So plain "dnf install httpd" is the default, but you can do more interesting things if you really want to.

Audience: In the recipe that you build, you specify multiple different things, and the result is multiple different artifacts that you throw to the next level to consume. On the consuming side, there are attempts to unify deployment — I want to consume things in a unified way. Is there any vision of merging the installation and use of the modules together, so that I really don't care whether it is an RPM, a container, or something like that?

I guess — do you want to answer that one?
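To illustrate the earlier answer about streams — "dnf install httpd" takes the default stream unless the user pins one — here is a toy sketch. The index contents and the default-stream rule are invented for illustration; this is not the real DNF resolution logic.

```python
# Hypothetical module index: module name -> {stream: metadata}.
INDEX = {
    "httpd": {
        "2.4": {"default": True},
        "2.2": {"default": False},
    },
}

def resolve(name, stream=None):
    """Plain install picks the default stream; an explicit stream pins one."""
    streams = INDEX[name]
    if stream is not None:
        if stream not in streams:
            raise KeyError(f"no such stream: {name}:{stream}")
        return name, stream
    default = next(s for s, md in streams.items() if md["default"])
    return name, default
```

So resolve("httpd") follows the default, while resolve("httpd", "2.2") corresponds to a user asking for a specific stream.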
Actually, tell me if I got the question right. The problem we have right now in most of these environments is that you have to have a spec file, and a Dockerfile, and a kickstart, and whatever else, in order to produce each of those artifact types. There is an effort to ask: is there a way we can canonically describe an artifact once, using something like Ansible — actually, literally Ansible. There are engines that will say, "I have an Ansible input and I can produce a container output," or an RPM output, an ISO output, whatever — there is already Ansible Container, for example, which does roughly that. Then, further, there is installation and deployment: one of the things we would like to see in the install profiles is essentially the equivalent of RPM post-scripts, and their reverse. We are actually trying to build this for Boltron — it's on the stretch-goal list — so that as an end user on the system you could say, "I want IPA," and it would go to Ansible Galaxy, find the IPA playbooks, which interact with a set of modules, and then install and configure them. Obviously that would probably not be unattended — you would have to provide it with data, though maybe you could do that with a script file — but there would be real interaction, a full installation activity. So, to the question of whether we can see, conceptually, the installation of a module as part of the description of the module: what I would really like is to get rid of the word "install" altogether, because no one actually cares about doing an install; they care about running whatever it is. Install is a caching mechanism, or a byproduct. Fundamentally, modules should know how they work and how they become usable. But that is still coming.

Last question — you already had one, so someone from the back.

Audience: I guess this is one of the things you want to solve — security. If I have modules with some component bundled in them on my system, how do we patch that?

There are multiple parts to that. The first is the sheer number of copies to patch, which today we mostly don't have, because there is a single installed version of everything. But there is a second issue: when you have a module that depends on an old version of an API — OpenSSL, say — and you need a very large backport to fix a major security issue, you no longer have just a security issue; you have "how do we fix the security issue" plus an entire new module, and the complexity blows up at exactly the worst possible time. Although — that second part is actually much, much better in the module world: assuming that library, like a lot of these libraries, is an implementation detail and not part of the module's external API, you can just update it. There is an effort to do exactly that — update the library, assuming it has changed materially, with no impact on the rest of the system from that module. Granted, that assumption often doesn't hold.

Audience: Two years ago I was working at an ISV on top of exactly these problems; now I work for Red Hat, so I have experience with both sides of the mess. The ISVs want this modularity because they don't want to do the work of staying current with the most recent version. But then you have something like RPE and its versions — yes, it's an internal implementation detail, but so much of the code has changed over that time that they can't really patch the old versions, so they only patch the latest one, and now, for all the modules to get patched, they would all have to update.

Okay, I got it — and we need to finish. All right, thanks for coming, and if you didn't find a sticker on your table, we have a lot of stickers up here.

[Closing chatter as the session ends.]