Okay, so let's get started. For those of you that don't know me, I'm Dennis Gilmore, I'm one of the Fedora release engineers, and today we're going to be talking about redefining how we deliver Fedora. We can wait a moment for our transcriber to get settled. So, we're going to be talking about how we deliver Fedora. In Fedora 24 we made a bunch of significant changes to how we build and ship Fedora. The biggest one was that we switched to using Pungi 4. Quiet in the back, please. We switched to using Pungi 4. It's based on the code that was used internally to create RHEL, with a pretty significant amount of work by Lubomír Sedlář, who is here, and other people. It's probably the biggest change in how we build and ship Fedora since we merged Core and Extras and got Pungi initially. We also switched to using livemedia-creator for making live CDs, which is part of the Anaconda-supported set of tools. That's nice because, one, it switched from yum to DNF, and two, it means it's something that the Anaconda team is committed to supporting long term. So we'll see how we end up with that. We also switched Rawhide and Branched to be composed completely every night. That means that we make all the installer DVDs, the install ISOs, all the live CDs, all the Docker images, the cloud images, the massive multitude of things that go into making a Fedora release. It gives us the benefit that when we actually get to branch point and we go to do Alpha, we can actually be sure that Fedora is going to compose. In the past, what we would do is, when we were done with, say, Fedora 23, we just didn't do any composes for two or three months, or whatever the schedule was between GA and the branch. We'd get to branch point, we'd try to make an install DVD, and it doesn't work, and all this got broken or that got broken. So now we have a continuous stream of install media that is able to be tested on a regular basis, and that ensures that when we get to branch point we
can be pretty sure it's going to go smoothly. It has a side effect where we are now pushing probably 10 DVDs onto the mirrors a day, with all the ISOs. It's almost 8GB alone for the Games spin; it's almost 70 in total. So we're pushing about 70GB a day onto disk just with all the ISOs. It's nice to have them, but it presents other problems. So at some point I think we're going to look at changing it so that we only put a tested compose onto the mirrors once a week. We'll still put the Everything repo on the mirrors every day, but the full compose we'll only put up once a week, once we've done some kind of validation of it and ensured that it's going to install, the live CDs boot, and everything that's release blocking is, in theory, going to work. That leads into part of my longer-term plan to entirely kill off Alpha and just do a Beta release and a final release, and that's it. Essentially, Rawhide will always be Alpha quality, if we put in enough testing and gating and all that kind of stuff to ensure the quality. So in Fedora 25 we're going to do some new things, and we've got some new people. One of the things we've already got in place is a Workstation ostree, with an installer DVD for that. Every night for three or four months now, as part of the Rawhide compose, we've made a Workstation ostree, so you can install that. The ostree being immutable, you can't then go and yum install your latest favorite thing, so you need to use Flatpak, formerly known as xdg-app, to be able to install extra things. I was told today that it's working really well, so the Workstation guys are pretty excited. That was a pretty minor config change; we dropped a little snippet into a repo with a large template, and a lot of it just worked, so that's kind of cool. We're going to be making the Cockpit layered image, and I don't know if anyone wants to know the details of that, but it's the first of the layered images. It's a new
deliverable type. It's something that builds upon the Docker base images that we've had for a while, but allows you to get more apps, more containerized content. As part of that, we have a Docker registry that Fedora is running, so hopefully in the longer term at least the CentOS and Fedora Docker builds will pull primarily from their own registries: if you pull the Fedora image, it's going to come from the Fedora registry; if you pull the CentOS image, it'll come from the CentOS registry; and possibly even beyond that. We need to work with the people that pull the strings higher above me on whether that's okay, but we'll see. We're also going to be doing Windows and OS X builds of the media writer tool. We've got a couple of Mac machines that we're using for the OS X build, because you have to do that entirely natively, but for the Windows build the plan is to use MinGW, build it as an RPM on Linux as a cross-build, and then we'll extract the executable binaries out of the RPM, sign them, and ship them, so that people can install it without any kind of errors or prompts saying, hey, you need to give special permissions. The result is that people coming from Windows and OS X have a verifiable tool, that we provide, that they can install onto their machines; it will download the ISO for them and put it onto a USB stick. That's kind of cool, and a very different route for Fedora from everything we've done historically, so that's going to be fun. And we have a new guy called Mohan Boddu, and he's going to be taking over doing most of my stuff. He's going to be doing all the composes for Fedora 25, with a lot of hand-holding from me, and probably some hand-holding from Peter and other people within Fedora infrastructure, as he gets up to speed and understands how we do everything. It's kind of exciting; it's going to allow me to focus on unifying how we build Fedora and RHEL and making it totally consistent between the two of them. So then, some stuff coming in
Fedora 25, and quite a few things that we want to do after that. I mean, OSBS is coming in, but it's very limited in Fedora 25: essentially we're doing the Cockpit layered image. That's something that's been used on Atomic Host for a long time, but they've gone off to the side somewhere, built their own Cockpit layered image and container, and pushed it into the upstream Docker Hub, and they say, hey, this is this Fedora thing, but it's not built within Fedora; it comes from off to the side. So we're actually going to do it ourselves in Fedora 25, but then going forward we plan to open the floodgates and allow people to be able to build layered images themselves; there have been other talks on that. We're going to be working really heavily on automation. We want to get releng to a point where we're only dealing with exceptions: when a compose request comes in, QA clicks a button and says, hey, we want a compose, the compose happens automatically, and the only time anyone in releng needs to step in is if it blows up. So automation is a big part of our future, as is alternative architectures and changing what they are. We're starting to make slow progress with the goal of unifying all of those architectures into a single Koji instance, where the thing that defines what is primary and what is alternative is where the output goes. Possibly the first thing that you'll see is that i686 will not be in pub/fedora and we'll move it to an alternative location, maybe in F25, depending on how we're getting on with support in Pungi. I'm just going to throw Lubomír under the bus while he's sitting in front of me. But yes, even if it's not F25 that we demote i686, it's essentially demoted already. FESCo has said that i686 is not release blocking in any way, shape or form, so if something fails on it, we move on. Multilib is the reason why we
need to keep it in the primary hub at all. Moving all the other secondary architectures into the primary hub means that people like Peter and Dan Horák will no longer need to run koji-shadow, which takes up probably half of their time. It's a steaming heap; it's slow, clunky, error-prone. It does the job, and it does it somewhat well, but it's a very, very time-consuming thing to keep up and keep working. So that's going to free up release engineering people, and the different architecture people, to work on fixing architecture bugs, enhancing architectures, or enhancing the Fedora release experience, enabling us to do more and get different things done and out. So I think it's a really important thing for releng that alternative architectures move into the primary Koji and we redefine what that is. I think we can say at this point that the term "secondary architectures" is dead. Well, it's going; it'll take a while to disappear. In Fedora 24, those of you that have actually read the release announcement will have seen that the alternative architectures went out as part of the primary announcement, and we released everything simultaneously. In that part of the release announcement I used both "secondary architectures", because that's what everyone knows, and "alternative architectures", to start the transition. Because the fact of the matter is, they're not really secondary: a lot of the people working on things like toolchains treat them exactly the same as x86_64, so it hasn't been the right term for quite some time. So we're sort of going to x86_64 and alternative architectures, and then we have the term that Josh came up with the other day, "experimental architectures", for things like RISC-V and MIPS. Making that change will help a lot. We've almost got that completed; I
think there's a couple of bugs that still need to be squished, and then we'll be able to have signed repos in Koji, and that'll enable us to actually fully support file triggers and Suggests and Recommends and such in RPM. At the moment, the bottleneck we have for all the new fancy features in RPM is that Bodhi runs on RHEL 7 and uses RHEL 7's RPM when it mashes the repo, and RHEL 7's RPM doesn't know a thing about file triggers, Suggests, Recommends, Enhances, all the new fancy stuff that RPM's added in the last two or three years. So we can't support that today: if we try to mash the repo on RHEL 7, it blows up. We're going to switch Bodhi to using signed repos, and potentially switch Koji to using signed repos; longer term we'll probably switch it, shorter term probably not, but it's another enhancement that would be very useful. We're also looking at rewriting the support for making DVD ISOs and install ISOs in Koji as proper Koji tasks. Today they're done in Koji as a runroot task, so they're a little opaque, a little hard to find, because Koji Web is stuck in the 90s and needs an overhaul; I guess we do have the runroot tasks searchable, but then you don't know what the runroot was for. So having the DVD building in Koji as a first-class task would be kind of nice. Is it extra work? Oh god, yes. We're also working with a couple of guys in RCM, in release engineering, who are doing the modularity work. What it means today, who knows, and what it's going to mean tomorrow we don't know, but at some point it's coming, and being involved means making sure that we're fully able to embrace it as it comes along, as opposed to having this thing come down the road by itself. It will make our life easier to be involved; it's not going to be a shock. We're going to be sure that we get the things out of it that we need to get, that we have
reproducibility and all the good stuff that we need to have to provide assurance that what's in the modules is good Fedora content. We've got some work that hopefully will start very soon, it's been on-again, off-again trying to find resources for it, on automated signing. We're going to set up a service to enable us to request signing, so that hopefully within the F26 development cycle we'll be completely signed all the time; the RPMs will always be signed. With the two-week Atomic composes that we've been doing, we have to manually sign the checksums and some pieces of that as part of the release process; we can automate doing that. We have the OpenH264 codec repo that we added in Fedora 24. We sign the repodata there, so that you get verification that the repodata came from Fedora, because it's not using the mirrors in any way, shape or form. Fedora hosts the metadata for the codec, but due to the patent grant from Cisco, there's a requirement that Cisco delivers the binary for you to receive the patent grant. So what we do is we redirect all RPM downloads to Cisco, which was a fun challenge to solve in the last cycle, because our initial plan of implementation was that we ship Cisco a tarball with all the repodata, all the RPMs, and everything that goes with the codec. It turns out that their CDN doesn't support directories, and yum and DNF require directories, because there's a repodata directory you need in order to get the repomd.xml file. That was not possible, which is why Fedora is hosting all the repodata. As a side effect, we actually know how many people download the codec, because it's logged; but only the IP is logged, so we don't know who they are, we have no idea. We also have a desire to sign the Atomic commits. At the moment, the ostree commits are done at the end of long-running processes, part of Rawhide, part of Branched, at the end of pushing updates, and we can't sign there today, so all the ostree commits are unsigned. Going forward we
want to sign those, so it's a better thing for the end users; they get a little more assurance that the bits come from Fedora. So, we have some challenges. One of the big challenges, which I briefly touched on, is mirror churn, not only in the 70-odd gig of ISOs that we change every day in Rawhide, and the same in Branched. We had the ostree repos in pub/fedora from when we very first implemented ostree. We ended up having to remove them, because there were almost a million tiny little files, and mirrors were having a hard time rsyncing that content; it was taking them forever to go through and stat all the files to see if anything had changed. It caused all sorts of dramas, so we ended up pulling it off, and due to the way that ostree is currently implemented, not using mirrors, it was actually a fairly simple thing: we put in a redirect and we changed the location. But that's a problem. We need to figure out how we can mirror ostree content, and we need to figure out ways that we can deliver frequent changes in Rawhide without massive churn on the mirrors, because mirrors get cranky when you're wasting bandwidth and wasting disk. We are taking more room, too; it's not gotten smaller. It's about 11 terabytes, all of pub, currently, and that is going to continue to grow forever, because we don't actually remove the release content and updates-testing at the point a release goes end of life; it gets moved to archive and hard-linked, and it lives there for posterity. We used to make jigdo templates, and no one was using them, so we dropped them silently in Fedora 13 or 14, and nobody ever said anything. Oh, actually, there was one guy: there was a mirror that was apparently using the jigdo templates. He would rsync all the RPMs, use jigdo to recreate the ISOs, put the ISOs in place, and then run rsync again over everything just to make sure he'd gotten all the bits exactly right. There's a mirror in
Brazil where bandwidth can be an interestingly limited resource. So there's a lot of issues we need to think about with mirrors. Consider the zero-day updates for Fedora 24: the change that we had queued up between when we froze for final and when we said, hey, we're good to go, was just under 9 gig worth of data, in one shot. The OS isn't even released and we've got 9 gig worth of change. Why don't we just compose that incrementally, since the changes are already there? We don't compose it incrementally because it has the potential to break the compose. As soon as the compose gets declared gold, you have until the release Tuesday where there are no changes to it, because the compose is done; but you can push updates in that window, and we do. We have the go/no-go meeting on the Thursday, where we decide it's gold. Once we've decided it's gold, we lock the tags down in Koji. There are usually one or two packages that were included in the compose that had not gone stable; we make sure they've gone stable, and that everything that is in the Fedora 24 compose is in the f24 tag, then we lock it, and it's done forever. We then make some changes in Bodhi so that Bodhi, instead of tagging updates into f24, tags into f24-updates, and once we've made that change we start doing pushes for the zero-day updates. We did have one or two slips in F24, right in the final freeze. I think it may be two. Was it two weeks, Matthew? What was that?
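The go/no-go flow just described — check that everything in the compose has gone stable, lock the release tag, then retarget update pushes to the -updates tag — can be sketched roughly like this. This is a toy model, not the real Koji or Bodhi API; the class, build, and tag names are all illustrative:

```python
# Hypothetical sketch of the "declare gold" flow: every build included in
# the final compose must be stable, then the release tag is locked and new
# updates flow to the -updates side tag instead. Not real Koji/Bodhi calls.

class ReleaseTag:
    def __init__(self, name):
        self.name = name
        self.builds = {}      # build NVR -> "stable" | "testing"
        self.locked = False

    def declare_gold(self):
        """Lock the tag once every build in the compose is stable."""
        pending = [nvr for nvr, state in self.builds.items() if state != "stable"]
        if pending:
            raise RuntimeError(f"cannot go gold, not yet stable: {pending}")
        self.locked = True
        # After locking, update pushes are retargeted to the side tag.
        return f"{self.name}-updates"

f24 = ReleaseTag("f24")
f24.builds["kernel-4.5.5-300.fc24"] = "stable"
f24.builds["glibc-2.23.1-8.fc24"] = "testing"   # the usual one or two stragglers

try:
    f24.declare_gold()
except RuntimeError:
    # releng pushes the straggler stable, then declares gold again
    f24.builds["glibc-2.23.1-8.fc24"] = "stable"

updates_tag = f24.declare_gold()
print(updates_tag)   # zero-day updates now accumulate in f24-updates
```

The point of the lock is the one made above: once the compose is gold, nothing in the release tag may move until the release Tuesday, so all churn lands in the side tag.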
The Fedora 24 final slipped from when we froze; was it one or two weeks? There were some slips there; it might be one. So instead of the approximately two weeks that we usually go from freeze to gold, we ended up with almost three weeks, and in that three-week period we accumulated nine gig worth of changes. It's a lot of churn on the mirrors. The mirror network is both bigger and smaller than it used to be: we have fewer hosts and fewer mirrors, but the mirrors we have have more bandwidth, more disk, better resources. Ten years ago we had maybe two hundred universities across the States running mirrors, and most of them only had ten to a hundred megabit connections. The mirrors we have today mostly have gigabit to ten gigabit connections to the internet, so they can provide significantly more bandwidth than they used to, but it would still absolutely be great if we had those two hundred universities and more on the mirror list. You get the odd email from someone saying, hey, I just took over here, and apparently the guy who left three admins ago had set up this mirror thing and it's just been working for three years, so I'm shutting it down. They don't really realize it's even there on their network. At least they let you know; sometimes they don't, but MirrorManager is actually really good about going, hey, this isn't being updated, or this has gone away, and removing it from the pool of mirrors. But mirror churn is something we need to think about. Deltarpm is pretty terrible; it's terrible server-side. But the place where deltarpm is really useful and beneficial is for people who live in countries with limited bandwidth: India, Brazil, even in Australia it's really useful, because a lot of people have caps or satellite connections, in countries where connectivity is really limited. I have friends who only get online maybe once a week, and it's
like, okay, see you next week; they don't know when they'll next get a stable connection. [Audience] Sometimes deltarpm is great; it saved me 20 minutes this morning on the crappy network here. I mean, it's useful, but from the mirror standpoint... [Audience] I run a local mirror, and I'm on one of those really crappy networks where we have eight megabits, and I actually use deltarpm kind of like that guy in Brazil: I have a script that regenerates the full RPMs from the deltarpms and then does a final rsync, which is useful. I mean, it's extra complication, but it's amazing. And at my own house I ran my own Fedora mirror of everything. I moved last year and got a nice fiber connection, but it has a download cap; I went from unlimited to having to fit everything in 500 gig a month, which is reasonable, but if I wanted to start syncing all of Fedora, it's 1.6 terabytes or something like that. A long time ago we had this soft agreement with the mirrors that we wouldn't go over a terabyte of disk. We've not been under that for quite a few years, and it's certainly gotten worse with the new composes. One of the bigger problems is probably that deltarpm creation is embedded into createrepo, and when createrepo runs it, it doesn't hash the directories; it's probably a small patch to createrepo to fix that. We have at least one mirror that was running on OpenAFS, and OpenAFS has a limit of 65k files within a directory; the drpms directory has like 70,000 files for one of the updates trees, I can't remember which one. He filed a ticket saying, hey, can you please hash the deltarpm directories, because I can't mirror this. Plus it's at least two small files for every regular RPM that's updated. You wouldn't think the file count makes
that much of a difference, but it actually does: with deltarpm it takes longer to stat all the files than it does to transfer them. We also have an issue of sorts, and I think this is a place where modularity is going to help a lot, where Fedora is so big that if you get a new Fedora install and you say yum install irssi, that's, you know, 200-300k, but you first need to download 42 meg of metadata about all the RPMs in Fedora. Part of the issue with that is that in yum there are two different databases: there's one which is basically a database of packages, and there's one which is all the files, all the contents of /usr/bin and every single file in every single RPM, any possible thing you could say yum install or yum repoquery against. Yum only downloaded the small database unless you ran a command which needed to query the bigger database, at which point it would then download it; that made it quite quick. DNF pulls them both down every single time. So what should only be pulling down about three megs for the average user doing dnf upgrade or dnf install xchat is actually downloading 42 megs every single time. [Audience] Is that a bug or a regression? In the vast majority of people's opinions, yes, it's a bug or a regression or whatever; according to the DNF team, it's a feature. Discuss it whichever way you like. [Audience question about whether modularity just transfers the metadata problem into the modules.] Where modularity would help is that you get the module metadata, and if you say, I want to install these five modules, it's only going to pull down the metadata for those
five modules, which is much smaller, instead of pulling down all the metadata. Possibly; I mean, it could be really big. An issue that we're faced with is that new technologies have come along, and I'm going to pick on ostree here, and Docker as well, that were built with an x86_64-only design in mind. In the ostree case, the JSON file that defines what you install lists a bunch of packages, but the bootloader configuration files and the bootloader programs vary across architectures. It also hardcodes the architecture in the ref: where in the yum case, and in DNF replacing it, there's a $basearch variable, in the JSON file the architecture is hardcoded. So it's not been well thought out as far as, how are you going to make this thing support multiple architectures; and people want it on Power, people want it on ARM. It's an interesting thing, because in a lot of cases it's about time to market, and in most cases you speak to those people about it and they're like, yeah, we're sort of aware of that and we're taking it into account, but it's not a priority. For me it has been a challenge: I regularly have to send out patches to Anaconda, where they don't take secondary or alternative architectures into account, and to various other tooling. All the maintainers are always very, very responsive, but by the time I get to send patches and do various other pieces, it costs me a day here and there which I just don't actually have. And Docker just doesn't support anything other than x86_64 well; they're trying to figure it out. I mean, Docker itself runs fine on ARM and AArch64, and there are people using it, and it's coming to a mainframe near you in F25, but it's not well supported and it's not a great experience for the users. We need to figure out how we can do more with less: we have this finite
number of resources in release engineering, in engineering, and in all the pieces that go into Fedora, and people want to deliver more artifacts, more different types of stuff. Something's got to give: either flexibility, in that, you know, we can't do something, or, if something fails and it's not released, then sorry, it missed the boat, which kind of sucks when people are putting in effort. It's really hard to get the right balance, so that's a challenge that somehow we need to address to make things better for everybody. Sometimes that's the result of doing more with less; sometimes that's infrastructure, sometimes that's people. And we have a new constraint: we added PDC as one of the F24 features and changes that we did in release engineering. PDC, for those that don't know, is the Product Definition Center. It records the output of all the composes, so you can go and query it and say, this is what was in Fedora 24, which is fantastic. But it then means that we're kind of tied to having this integrated process, or at the least we need to ensure that whatever we do, if it is separate, all updates the same release information, so that when you query it and say, tell me what was in Fedora 24, tell me what was in Fedora 25, it tells you, and it's the truth, not a lie where we had all this in there at the time but then added these pieces on the side that are there but not really there. It makes for an extra complication in how we do stuff. Ideally, longer term, I would like to have PDC be where you define everything that's supposed to be in the compose, whether it's release blocking or not, and then Pungi will talk to it and say, tell me what I need to make, because I'm doing a compose for Fedora 25. Yes, sir?
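The kind of lookup PDC enables — "tell me what was in Fedora 24" — can be sketched like this. The real PDC is a REST service; here a plain dict stands in for its compose records, and the compose IDs, fields, and package names are only loosely modeled on the real schema, not taken from it:

```python
# Toy model of compose records, illustrating the "what was in Fedora 24"
# query that PDC answers. Structure and data are illustrative only.

composes = {
    "Fedora-24-20160614.0": {
        "release": "fedora-24",
        "status": "FINISHED",
        "rpms": ["kernel-4.5.5-300.fc24", "irssi-0.8.19-2.fc24"],
    },
    "Fedora-25-20160801.n.0": {
        "release": "fedora-25",
        "status": "FINISHED_INCOMPLETE",
        "rpms": ["kernel-4.7.0-1.fc25"],
    },
}

def whats_in(release):
    """Return every RPM recorded for finished composes of a release."""
    rpms = set()
    for compose in composes.values():
        if compose["release"] == release and compose["status"].startswith("FINISHED"):
            rpms.update(compose["rpms"])
    return sorted(rpms)

print(whats_in("fedora-24"))
```

The integrity concern in the talk maps directly onto this sketch: if some deliverable is built outside the integrated process and never lands in the records, the query silently returns an incomplete answer.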
So, somehow, over the 12 or 15 years or however long Fedora has been around now, we've accumulated lots of technical debt, and we need to get rid of that. And on the people topic, we need to enable ways for people to do stuff, and to do it in a way that, when it becomes something the greater Fedora community wants to embrace, we can say: that's great, you've worked with us, you've done it in a way that we can easily integrate with the proper process, we get all the metadata, we get all the good stuff. Because, as I said before, we have limited resources, we can't do everything, so we need to figure out ways to enable people to support themselves, and to provide the guidance and direction on how to go about it. Do we have questions? Yes? No questions? Going once, twice... [Audience question about release engineering having oversight of the modularity transition; there is a prototype in progress and a hackfest three weeks out.] That would be awesome. So the question was whether the Fedora release engineers will oversee what the modularity working group is doing. We need to be involved, and we need to find the time to do that. I honestly believe that we are going to have to keep doing the stuff we have been doing, the way we have been doing it, for longer than some people think. We may end up shipping two versions of Fedora Server in 26: in the modularity talk earlier today, Langdon said they want to have a modular version of Server as an F26 deliverable. That's great, but there are going to be people that really just want to do it the way they have been doing it, at least for some transition period. We
are probably going to have to do, you know, two Server DVDs: one that is modular-based, one that is the traditional comps-based one. I think Everything needs to go away; I really think it needs to be broken up into smaller chunks, and then people will need to enable the pieces that they want to use, and the tooling needs to be there to support that. You know, MirrorManager probably needs work to support that, DNF will need work; there is going to have to be a lot of work to do something like that, but I think longer term the Everything repo just has to go away. I mean, it's big. [Audience joke about just renaming Everything.] So, we've got a lot of stuff going on, and the Fedora of the future is not going to look like the Fedora of today. If the Everything repo goes away, that sort of implies that some things currently in the Everything repo still need to be accessible; we'll probably end up having, instead of the Everything repo, a whole bunch of smaller repos, and either by modules or by something else there will be methods to make it all available. I mean, if we want to get to a point where, as I know Matthew would like, we support Server for 18 months and Workstation for 6 months, we've got to rethink what we ship and what we enable. To be more granular than that, I would say we might support this module for 6 years and that module for 2 weeks. So the world's just going to change. To ultimately answer your question: we don't have any answers to that yet; it needs a solution, but we don't have the answers, and we're not there. All of this requires a lot of resources that we currently
don't have. So somewhere along the line, engineering or the community is going to have to come along and help us implement all of this stuff that has been requested. Engineering is providing some people: we have 4 people tasked full time from Fedora Engineering today, helping with releng things in different shapes and forms, and they help us get a lot done, but it's going to take more than 7 or 8 people; it's probably going to take 20 or 30 people to get everything done. This future world is very messy and complicated, and it's different: a large part of the workload that historically has been on the end user, where you have the repo and you just pull in what you want, is now going to be put onto release engineering, who will be the ones having to pull together the curated content sets and make them available. That greatly increases the work. I mean, even if we automate the crap out of everything, if we're making 10,000 different objects instead of the few hundred we make today, and a tenth of them fail on a given day, it's probably going to be somebody's whole job just figuring out why did this one fail, why did that one fail. It kind of already is, but it's going to be a bigger thing, because instead of this smaller subset of things to look at, there's this massive thing. And that's going to happen with things like language stacks, because there's the potential for pretty much everything, to some extent, to be part of a language stack. It's also going to cause our disk usage to go up, because you can't de-duplicate the content inside of a layered image. The mirrors can pick up the de-duplication, there are technologies to deal with that, but that's not something that's exposed to a user who's rsyncing, and they may not have any kind of de-dup that works that way on their mirrors; we can't assume anything about de-duplication. We're going to want to stop there, if that's it.
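As a footnote to the de-duplication point above: a content-addressed registry stores each layer once by digest, while a plain mirror of exported images pays for every copy of the shared base. A minimal sketch, with made-up sizes and digests:

```python
# Two hypothetical layered images sharing a Fedora base. A content-addressed
# store keys layers by digest, so shared layers are stored once; a naive
# mirror of the flat images duplicates them. All numbers are invented.

base_layers = {"sha256:aaaa": 200, "sha256:bbbb": 50}     # MB, shared base
cockpit_image = dict(base_layers, **{"sha256:cccc": 30})  # adds a Cockpit layer
httpd_image = dict(base_layers, **{"sha256:dddd": 20})    # adds an httpd layer

# Content-addressed store: merging by digest keeps each distinct layer once.
store = {}
for image in (cockpit_image, httpd_image):
    store.update(image)

naive_size = sum(cockpit_image.values()) + sum(httpd_image.values())
dedup_size = sum(store.values())

print(naive_size, dedup_size)  # the shared 250 MB base is counted once
```

This is exactly why the dedup cannot be assumed on the mirror side: rsync sees the flat bytes, not the digests, so a mirror without filesystem-level dedup pays the naive size.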