[Unintelligible. The beginning of the talk, introducing Fedora's package set, was not captured clearly in the recording.]

All of these are of varied quality: some of them are very well cared for, some of them are really rotten and unmaintained. The problem is that we don't build them all at the same time and we don't give them the same attention. They are all tangled together, so the really critical packages that form the base of the system depend on the really rotten parts that nobody cares for. And it's all released together, we promise the same level of quality for all of them, and we promise to support it for the whole lifetime of the Fedora release.

You can see in the picture the number of source RPMs and how it evolved during Fedora's lifetime. Is it flattening out, or is the graph just cut off? It's flattening out. [Inaudible comment.] You can see when Fedora Core became the Fedora of today, and how the number of packages kept increasing over time.

So this sucks. We cannot guarantee this promise of quality for everything we include, for all those 20,000 packages. So how do we solve that situation? We modularize everything. Out of this many people, how many of you have heard of Modularity at some point? Awesome. And how many of you know what it actually is?
Two, three people. So let me summarize it for you. Modularity is about serious bundling: every module bundles all of its dependencies except for a very small set that is included in the base. It's content duplication: the libraries bundled with the applications mean you have several copies of them on your system. It's maintenance overhead: every application developer needs to maintain all their dependencies, patch the security issues, and so on; in some cases they also have to maintain several parallel copies. It's different: you need to get used to something new and learn a new way to manage your system. And, mainly, it's still not ready.

On the other hand, it enables upstream-driven life cycles. We don't have to release everything as one thing; we can release, for example, Python whenever upstream releases it, we can release glibc whenever there's a new upstream release, we can release a new kernel anytime, and it all still works together. We can offer several parallel versions of the same thing: for example Perl 5.20 and 5.22, or Python 3.6 and 3.7 in the future. It offers parallel installability: in some cases it's possible to install parallel copies on the same system, or you can use containers to provide them. It's more honest: since we deliver these as stand-alone modules, stand-alone applications with their own life cycles, we don't have to promise the same quality for everything at the same time. And it's familiar: other systems have been doing this for years. Take Windows, where applications shipped for XP still work because they bundle all their dependencies. Take the mobile operating systems such as Android or iOS: you just install the applications and you don't have to worry about all the libraries or other packages on your system; it's all included.

I won't go into any more details, since there's going to be a dedicated talk led by Adam and Courtney.
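The parallel-streams idea above (Perl 5.20 vs 5.22, Python 3.6 vs 3.7) can be sketched as a toy model. Everything here, the module names, streams, and package names, is illustrative and not real Fedora metadata:

```python
# Toy model of module streams: the same module name can ship several
# parallel streams (versions), and the user picks one per installation.
# All module and package names below are hypothetical.

MODULES = {
    ("python", "3.6"): ["python3-3.6.0", "python3-libs-3.6.0"],
    ("python", "3.7"): ["python3-3.7.0", "python3-libs-3.7.0"],
    ("perl", "5.22"): ["perl-5.22.1", "perl-libs-5.22.1"],
}

def enable_stream(name, stream):
    """Return the package set for one module stream, loosely mimicking
    what a 'dnf module enable name:stream' style operation selects."""
    try:
        return MODULES[(name, stream)]
    except KeyError:
        raise ValueError(f"unknown module stream {name}:{stream}")

print(enable_stream("python", "3.6"))
# → ['python3-3.6.0', 'python3-libs-3.6.0']
```

The point of the model is only that two streams of the same module can coexist in the repository, and the choice of which one you consume is made per system or per container.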
They will explain how it all works, how to put it together, how to develop modules. Now, about Base Runtime and how it fits into this module picture. Base Runtime is a module like any other. It provides the base operating system capabilities, the tools, and in some cases the services, and you link and develop your modules against this one. It's the basis of the operating system and it defines the release of the traditional Fedora.

One of the challenges we faced during development was deciding what it actually is: what components we include, what API we guarantee to applications, what runtime it actually provides. In the picture you can see that our goal was to provide hardware abstraction, which is mostly the kernel and drivers, and glibc, for example, as it provides the interface to the kernel. Also system tools and services, such as coreutils, so you can boot the system, log in, and do basic stuff, plus authentication daemons and the like. It should also provide shared libraries for the applications above. This is the most difficult part, since we don't know in advance what people will expect, but we can make some guesses using the traditional Fedora releases as input for analysis.

This is also where Generational Core and Base Runtime differ. The original idea of the Generational Core was that we provide all three boxes listed below, split into smaller modules included in the Generational Core stack. Base Runtime was supposed to be only the hardware abstraction layer; then there was supposed to be a System Runtime, providing the authentication services, basic tools and such; and the shared libraries were meant to be part of a Shared Components module. Since we decided to simplify this concept for Fedora 26, we took all three, or two and a half, and named them just Base Runtime, which is easier for people to comprehend.
The main challenge is finding the balance between the system footprint, on disk and in memory, and the usability of this core. The fewer packages we include, the lower the maintenance burden for the Base Runtime team, the lower the disk usage, the lower the memory footprint (because you don't have as many libraries loaded at the same time), and the faster the builds; the fewer components we have, the faster we build them, of course. And every module update means you have to rebuild all the included components; especially in this case, since Base Runtime provides the buildroot for itself, you need to make sure that everything builds again using the same components. It also means a smaller attack surface: the less API you expose, especially when it comes to cryptography or network management, the less there is to attack and exploit.

Another major challenge in developing Base Runtime was building it. Since it's actually the first real module in the Modularity infrastructure, we had to bootstrap it completely from scratch, which means that to build the components we want to ship, we had to provide all the build dependencies first and build those build dependencies too. We had to analyze the whole build dependency chain of the packages we preselected; just to add to the previous slide, we decided to go with the POSIX userland utilities, part of that LSB-inspired package set, the minimal buildroot as defined in Fedora today, the kernel, glibc, and so on; that's it. We took this package set and recursively found all the build dependencies of this set, and their runtime dependencies, and so on, and made this bootstrap. Oh yes, I forgot to mention the boot stack, so also GRUB, dracut, setup, procps and all of this.
So, all together, when you consider this runtime, it's roughly 170 source packages, I believe, which expands, when you include all the subpackages that module developers can develop against, to roughly 700 binary packages now. They don't all have to be installed together, but they need to be available. If you compute the whole dependency chain recursively, it ends up being around 3,030 source RPMs at this point. And to bootstrap it, we need to build the set to be able to build the set.

The problem is that, as mentioned at the beginning, the quality of packaging in Fedora is quite low in places. Many of these packages are quite rotten and unmaintained, and they unfortunately appear in this build dependency chain. When we took this candidate tree as the initial repository set and tried to rebuild it using only the components in this set, it resulted in over 400 build failures. This was mostly caused by the removal of Perl from the standard buildroot: since there was no mass rebuild following it, many packages that actually required Perl at build time didn't declare this build dependency. After updating this set to Fedora 25 Beta (no, it was Beta first in the candidate tree), the number of failures went down to roughly 170, if I remember correctly, and we have resolved most of them since. But with the updates, other changes happened, mostly the switch to the system Python in RPM builds, because Python is no longer part of the standard buildroot, and that introduced new fails-to-build-from-source issues. It's almost resolved; we have about three packages that still need to be fixed at this point, and we hope to get them in the near future.

Another set of challenges is the dependency chains, both at runtime and build time, and how we should split the packages so that we can include only this minimal set, plus the bootstrap procedure.
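The recursive dependency walk described above (start from a preselected package set, keep pulling in build dependencies and their runtime dependencies until the set stops growing) can be sketched like this. The dependency graph is a toy stand-in for real repository metadata, not actual Fedora data:

```python
# Sketch of computing the self-hosting dependency closure: a breadth-first
# walk over both build-time and runtime requirements. In reality these
# mappings would come from repo metadata (e.g. repoquery output).

BUILD_REQUIRES = {
    "bash": ["gcc", "ncurses"],
    "gcc": ["glibc"],
    "ncurses": ["gcc"],
    "glibc": ["gcc"],
}
RUNTIME_REQUIRES = {
    "bash": ["glibc", "ncurses"],
    "gcc": ["glibc"],
    "ncurses": ["glibc"],
    "glibc": [],
}

def dependency_closure(seed):
    """Closure over build + runtime dependencies of the seed set."""
    closure, queue = set(), list(seed)
    while queue:
        pkg = queue.pop()
        if pkg in closure:
            continue
        closure.add(pkg)
        queue.extend(BUILD_REQUIRES.get(pkg, []))
        queue.extend(RUNTIME_REQUIRES.get(pkg, []))
    return closure

print(sorted(dependency_closure(["bash"])))
# → ['bash', 'gcc', 'glibc', 'ncurses']
```

This is why a ~170-package selection balloons to ~3,000 source RPMs: every package drags its whole build chain in, and the closure only stops growing once the set can rebuild itself.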
The smaller build dependency chain for the self-hosting prototype also means we can build it much faster, and we can bootstrap it on new architectures and operating systems much faster. So, how many of you know what TMTOWTDI is? No Perl programmers here, are you? TMTOWTDI means "there is more than one way to do it"; the acronym is written differently, but it's usually referred to like this.

To reduce dependency chains, we can introduce new subpackages; we can disable optional features; we can combine these two by moving optional libraries and plugins into subpackages and then filtering those out of the set. We can repackage things into non-standard locations so they become private, which is also one key feature of Modularity, and we need to make sure that even after this, people can still use our API, which mostly means including the development packages so they can build their applications against the API we provide. This is not always easy, as many packages include development scripts and tools in their development packages, which pulls in a lot of dependencies; it's not just the header files.

This is the map of the self-hosting prototype and the dependencies between its components; the picture was provided by Harold over there. It's really too messy to look at, but I can link you to the SVG after the talk so you can see the dependencies in detail.

Another set of challenges is implementing packaging changes in Fedora. This is mostly a people issue: many maintainers are not as responsive as we would like them to be. Generally we have to wait several weeks before they apply the patch or respond in any way to the bug we file for them. If they reply, they might have a different opinion than we do, which is not always a bad thing, since it encourages discussion and finding new solutions, but sometimes it can stall the progress completely. It also leads to repetitive, endless discussions. So, what to expect in the near future?
In Fedora 26 we would like to deliver the Base Runtime module, or at least a technology preview of it, which would include the package set I mentioned earlier, and a module-based product, codenamed Boltron, which would be a modularized Fedora Server. Since the infrastructure isn't ready to provide any updates, this will be a one-off thing and will serve for demonstration. In Fedora 27 we hope to have a polished Base Runtime module with several modules running on top, and we would like to provide these modular composes as the primary release of Fedora. And much later in the future, we hope to provide a lot more content which will be a lot more polished: there will be several competing implementations of the same thing, parallel-available language stacks, for example. And who knows, maybe it will finally enable the year of the Linux desktop.

I have a small demo for you. As of recently, this week actually, we have a demo repository with the RPMs we would like to provide in Base Runtime for Fedora 26. It's still a work in progress: we don't ship all the -devel subpackages yet, so you cannot develop against all the libraries we include, but we are working on resolving that. The repository looks like this. [Fiddling with the demo.] Does anybody know what the parameter is? Like that? No, it doesn't. Minus R? No. Oh, I see what you mean. So, most of those are the main dependencies of the main packages we include. Altogether it's those 699, call it 700, RPMs. I uploaded them to fedorapeople.org/groups/modularity/repos/base-runtime/26; there will be a link in the presentation. You can install whatever small set you decide on into mock, and then you can create, for example, Docker images out of that. There is also the Base Runtime module definition in the modulemd format, which is the format that describes modules: how they get built, how they get installed, and all the metadata.
It looks like this, and it also defines the installation profile for the base image, for example. In this case it would be bash, coreutils, filesystem, glibc-minimal-langpack (so we don't include all the language packs), rpm for package management, and shadow-utils, so that it passes the mock installation completely. So if you do that: I have the mock config here. You can see that it actually hardcodes the packages from the modulemd. We plan to have some scripts to automate this, and later on it will be part of the infrastructure, so you won't have to worry about it. It references the repository I mentioned. If you install that and then decide to create a Docker image out of it, for example, you can. You can see that it installed 81 packages, which is the preselected small set plus all their dependencies, enough to run a shell in there. And this Docker image allows you to try to develop your first modules against Base Runtime; it's really just for experimentation.

I built all those RPMs on my laptop using the prototype tag in Koji in the staging infrastructure, since we don't have anything for this in production yet. So I'm not publishing this Docker image anywhere at this point, but we plan to do that in the near future. If you want to build it yourself: there's the link to the RPM repository; there's the link to the staging dist-git, which includes the Base Runtime modulemd file. That includes the list of all the components for the self-hosting prototype and the installation profile for the small base-image set. It also includes the entire API definition of the Base Runtime module, which is the set of packages you may rely on. And this is the staging Koji self-hosting repository, which includes all those 3,000 builds, so if you want to build your own Base Runtime, for example, you can use this, modify the modulemd file, install the module build service locally, and build whatever you like.
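As a rough illustration of the "hardcodes the packages from the modulemd" point above, here is a sketch of deriving a mock-style package list from a modulemd-like installation profile. The dictionary only imitates the shape of a modulemd profiles section; treat the field names and package list as assumptions, not the exact schema:

```python
# Hypothetical, simplified stand-in for a modulemd document. Real modulemd
# is YAML with more levels of structure; only the profile idea is shown.
modulemd = {
    "name": "base-runtime",
    "profiles": {
        "baseimage": {
            "rpms": [
                "bash", "coreutils", "filesystem",
                "glibc-minimal-langpack", "rpm", "shadow-utils",
            ],
        },
    },
}

def profile_packages(md, profile):
    """Flatten one installation profile into the package list that a mock
    config (or a plain package-manager install) could consume."""
    return sorted(md["profiles"][profile]["rpms"])

print(" ".join(profile_packages(modulemd, "baseimage")))
```

A script like this is what "automate this" would amount to: generate the hardcoded package list in the mock config from the modulemd instead of maintaining it by hand.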
Once you try it, you may want to help. The main point of contact is the Modularity Working Group; the website is on the Fedora Project wiki, and it will point you to the Modularity Working Group meetings, the IRC channel, the mailing list, and everything else. There is also the Base Runtime project website, which describes what we are trying to do and shows you the API, the development documents, and all the decisions: why we split something out, why we did this or that. But the easiest way really is by talking to us. That was much shorter than expected, so I hope there will be some discussion. Go ahead.

OK, the question is how Base Runtime relates to Atomic Host and whether it is a superset or subset of Atomic Host. That's a good question; there's a lot of overlap between the two. The package set included in Atomic Host is definitely an inspiration for Base Runtime, and the goal is to generate Atomic Host images from Base Runtime in the future, since that's just one of the ways we can deliver modules. I call them an abstract unit of software build and delivery: you can build them as repositories, images, VMs, whatever.

OK, the next question is about the schedule for F26 and what we plan to deliver besides Base Runtime itself. This is something Stephen Gallagher can answer; he knows this one. Actually, the comment was that getting from that to the plan for F27 seems ambitious, because F26 is a single module on top of Base Runtime and F27 is everything. Right. So, to clarify one thing that was left off the slide: F26 is just a proof of concept. It's something we're going to ship, but not put on getfedora.org; it's just going to be a developer preview, just a proof that we can get something out of this. For Fedora 27, the goal is to still produce the traditional release and have this as a secondary choice, and the goal for Fedora 28 is to be able to switch it to the primary choice. That was the decision we made in the Fedora Server working group, so we're
going to have a few weeks of meetings. We will also keep delivering the traditional release for some time; the idea is that we want to produce both for a while, and F28 is probably going to be the inflection point where we make the modular one the top button on getfedora.org.

Yes, that's a good question: how do we plan to actually build and ship this when we have no infrastructure for it, we are already at max capacity, and there's no ability to put any of this into production in the next weeks? We've got to get it into production before we can do stuff for Alpha. The question was about how we can actually build this with no resources in the infrastructure and in such a short time. For Fedora 26, I think we're going to be building it manually. Again, it's just a proof, so it's not going to be Fedora; it's going to be a remix. Yes, it's going to be like a preview edition; we'd have to consult about the branding; it's still going to be a small package set in any case. After Fedora 26 branching we will deploy the module build service in production and build whatever we can in the standard production Koji, in parallel to the standard release.

Yes, Mike? Is there a target for how many packages are in the Base Runtime? The question is whether the package set in Base Runtime is somehow defined and whether there are targets for it. No. Our goal is to try to avoid the "how many packages" question, because what we really want to do with Modularity is say: this is the API; it provides whatever is inside of it. We're going to go to whatever lengths we can to reduce its potential attack surface, and likewise its disk footprint, but the "how many packages" question is kind of misleading. We do want to shrink it, of course; that reduces the attack surface, which is the important part. But we don't want to shrink the visible API; that's the important part. How far from the goal API is the current package set, that is, how far is the current set from the ultimate goal of the API
that we want to provide? Pretty far. We only want to provide the kernel, glibc, the runtime, system management and container management tools, and nothing else. Everything else, unless it's intentionally provided, as in this picture, as a shared library that is supposed to be consumed by applications and that we actually plan to support, should be hidden, and it should be packaged in a non-standard location so that you have to jump through hoops to actually consume it.

Yes? So, I was asked to speak more about the history of the Generational Core: how it came to be and how it relates. It's basically the same thing as Base Runtime, just a different name. The concept of the Generational Core was meant to include more of the shared libraries than Base Runtime does, and the main reason for the change was that the name was too long and people didn't like to pronounce it; that's really the case. So Base Runtime basically is what Generational Core was originally. Originally Base Runtime was just the bottom layer; is that not going to be a separate thing anymore? Exactly, it's not just the lower layer anymore; it's the whole thing.

The question is about modules and Flatpaks. They are kind of similar: Flatpaks rely on runtimes and bundle all other dependencies, and they are mostly focused on desktop applications. Yes, there's an idea to build Flatpaks out of modules; you would use the modulemd description as the recipe for how to build a Flatpak. You could also define the Flatpak runtimes as modules and build Flatpaks out of modules on top of those runtimes. For more details, that might be a good question for Adam's talk in two hours; it's a little out of scope for Base Runtime. [Partly inaudible question about the tooling used to build modules and how to rebuild them.] Do you consider build
dependencies and their closure as part of base, and if not, what do you call them? OK, so the question is whether we consider the entire build dependency chain part of Base Runtime, and if not, what we call the leftover packages in this build environment. We do not: out of those 3,000 packages we only plan to ship those roughly 170, and the remaining 2,800 or so will be part of a module with the codename Build Environment. In the long term we would like to split this Build Environment so that the components currently included in it end up in separate modules where it makes sense. For example, the build dependency chain contains the full Qt stack, and TeX Live for generating documentation. We don't really care about it and we definitely don't want to ship TeX Live as part of Base Runtime, but in the future there would be a TeX Live module, which would provide the distribution including most of the common TeX libraries and modules, and Base Runtime would build-require this TeX Live module, just like it will build-require the Build Environment module along with other stuff.

Wouldn't the closure be as big as the one you have now? It almost is, yeah, and that's part of why we are focusing on the binary closure for the modules. As Petr said, we did have to rebuild everything in that self-hosting set, and let me tell you, that was a pain in the neck. But once Base Runtime is there, we really want to be able to say... sorry, I lost my train of thought. It would be much preferred if we could take a page out of something like Microsoft's book and say: here's the C development module, and it contains most of the stuff you need to build something that will run on Base Runtime; the GCC compiler, autotools, the common set of things. Not necessarily all the popular libraries, but at least the basic set of tools you need. And that would be one of the build-requires for Base Runtime, but also for
any other module. And we want to split those out so that they are sensible, self-contained sets that can be maintained as a unit rather than as individual packages. This is a long-term goal; this is not a Fedora 26 goal.

So, this question is going to sound more snarky than I mean it to: do we generally consider things that Microsoft has done in the past to be shining examples of the right way to do software engineering? Not as a general case, necessarily. However, there are certainly places where they have done things that were easier on their users than ours, and it's best to copy where it makes sense. I ask in earnest, because I lack visibility; I ignore the fact that they exist for the most part, maybe that's my environment, but I wanted to qualify that it was not meant to be snarky. I was just genuinely curious, because I don't generally expect things like this to come out of their camp, and I think most people in the room share that. For the record, the comment was about whether we admire Microsoft and how they do things.

What's the plan to get QA in, to the point where testing modules is going to work? The question is about QA. This is mostly for the Modularity talk later, but the whole Modularity effort heavily relies on QE and CI being readily available and everything being tested automatically. In the grand scheme of things, you would just push a change to dist-git; it would get automatically built, and it wouldn't be released to the public until it passes everything you prescribe. There will be a lot of duplication, a lot of rebuilds against everything, so without automated CI it's not really possible to do anything. This ties back exactly to what Dennis was saying earlier: if you're building these modules manually, that's going to make it very hard to QA and almost impossible to tie into a CI workflow. That's just a temporary thing, since the module build service is supposed to be made available after F26 is branched. The earlier you can get
it into the product pipeline, the better. Absolutely, I agree with you. Just to add on to that: one of the major pieces, one of the driving forces, is that all the tests will be publicly available and modifiable. We'll have a pull request system or something equivalent, so we want to make sure that it's not just a core team writing tests; literally anyone can pick it up and add to it.

There's a question about what kind of modules we would like to ship in F27 or F26. There's a list of roles for Fedora Server that the Modularity team is putting together. I'm not aware of anything else at this point, but we expect that most of the packagers and special interest groups will assist with creating logical units where it makes sense. Over time it could be a couple of hundred.

The question is whether we've cooperated with the Anaconda team. No, not the Base Runtime team, but I would ask Ralph Bean and the Factory 2.0 team. OK, so the question is: is Fedora Server then not going to be installable, but instead be some kind of container? The question is about the deliverable of the Boltron Fedora Server; do we have any more information on that?
What we were expecting to do for the first pass was probably, not hand-generated, but basically a qcow image for virtualization, so it's more or less pre-installed. We're probably not going to do a full installer for F26; it's too much, too soon. But Base Runtime itself is a separate deliverable, and we are not really focusing on how to ship Base Runtime by itself.

Do we have any more questions? About architecture support and what we've done so far: we've only built the prototype, or attempted to build it, in the staging Koji, so we have only tested it on ARM (32-bit ARM) and x86_64. I have no information on whether it will build on PowerPC, for example; not at this point. I'm fairly certain we would discover new FTBFS issues, because there are quite a few packages in Fedora whose source RPMs have per-architecture build failures that we just weren't able to detect when we did this the first time. We will mostly discover those issues in the coming few weeks when we move to production.

What about Bodhi: when would you want Bodhi to be involved in updating the modules, or would updates be automatic? The question is about Bodhi. Yes, we would like to include Bodhi in the process, but not in F26; it will be in F27. There are no plans to modify Bodhi in the F26 time frame. As for whether it's going to be automatic or manual: Bodhi currently has no concept of anything but RPMs. It's actually a challenge we already face for containers, for layered image builds. The work was briefly scoped, and you're probably looking at a minimum six-month lead time with the current pool of developers available to work on it, which is half a person, quote-unquote "full-time half of a person". So if that is something the Modularity team, or Factory 2.0, whichever, would like to have, we would probably want to start planning sooner rather than later and see who can become available to help with it. Yes, for F27 we would want that done in about six months from now. Mike, do you know
anything about the plans for Bodhi in F27? No; he's sick today. Whenever anybody speaks to them, they have no idea about it yet, which is fine; I just want to make sure to bring it up as a point to have on the list of items to check on. But it's going to be a real thing for F27; it's going to be important. This is kind of what I was getting at: it's a pretty short timeline getting from 26 to 27, so it is ambitious, definitely, especially given Matthew's hypothesis about making 27 a year-long process. Yes.

The question is about plans for a modular Workstation, and about plans and policies for packaging modules; did I get that right? This is again mostly for the Modularity talk. I'm not aware of any immediate plans for Fedora Workstation, but it would be logical to provide those. It probably won't be in F27, given all the other issues we have to deal with, especially the infrastructure and the lack of manpower, but it might come after that.

[Comment from release engineering:] Given prior commitments and everything else, because it's a pretty aggressive schedule, from a rel-eng standpoint almost everything we're going to be able to do for F27 is already stepped up; we're in the finishing stages of figuring out what we're going to commit to for F27, and the other teams are likewise at capacity. So what are the external dependencies? It seems like you really need to be on the ball now with anything that needs to land in F27, and it seems like you're not yet talking to the external teams, so this is more of a comment: this is an area where you really need to reach out and start working with other people, otherwise it's not going to be possible, because everyone needs to plan for this. The comment is about cooperation with other teams, especially infrastructure. We don't actually need that much: we only need the module build service, the dist-git namespaces, and the update systems at some point, but it will be mostly packaging work. And as far as I know,
the dist-git namespaces are already implemented in staging and will be part of dist-git after it is branched into production. The module build service is a standalone service which will be maintained by the Factory 2.0 team, and it's also near production-ready. There are some changes to PDC; I'm not sure about those. But at least managing the content: where is it going to go, how is it going to be mirrored for distribution? There are a lot of pieces that need work with the external teams, and the installation too. I'm not trying to be overly critical. No, it's a good point: there are a lot of external dependencies. We have already laid out most of what we're going to be doing in F27, and all this stuff, none of it is in our scope, yet we're the ones that have to deliver it all together and figure out problems like making sure we can use delta links or whatever for the modules. There are many open questions in this field, and I don't even know what the requirements are. We probably want to get together and have a meeting, and maybe pull in the right people to try to scope it, because there's a lot of stuff to work out about what needs to be done where. And that's mostly a comment, not criticism; we've got to lay it out and do the work. OK, thanks so much. OK, I guess, no more questions, we can finish early. OK, there's still time, because all the modules and everything come with ponies.

[Post-talk off-mic conversation in Czech about microphone levels and room acoustics, largely unintelligible.]