[Pre-talk chatter in Czech, garbled in transcription.] Sorry, folks, we cannot find Dennis Gilmore. So maybe we have an hour break until our speaker rushes off. I messaged him on Telegram, but I saw him earlier, so he's actually... Do you guys want the presentation in Spanish or in English? Dennis is asking. Spanish? Yeah, the best is Czech. So maybe voting, hands up: who wants it in Spanish? Czech? English? Oh, sorry. All right, can you all hear me okay? No, no, no, no, no. Hola, my name is Dennis, and mi charla is moving the whole world to Rawhide. I'm not really doing it in Spanish. The English translation of that is: let's move everyone to Rawhide. Moving everyone to Rawhide. Like, what? You crazy? Well, maybe. But the aim of the talk is to tell you the things that we're going to be working on changing in Fedora over the next period of time, like six months or so, to largely incorporate a lot of the Factory 2.0 stuff and make Rawhide be the place where developers and users want to be. So, for real, it's making Rawhide a viable option for day-to-day use, making sure we don't have unbootable systems by testing the kernel all the time and getting extra testing.
As soon as a Rawhide kernel is built, we test it, and if it doesn't meet some set of criteria, we kick it out. No more broken dependencies. So if you push an update to Rawhide — say you're the libpng maintainer, and you're like, oh, I'm just going to push this build, and it happens to change the Provides, and the soname bumps, and half the distribution is no longer installable — we'll kick it out and deal with that. I also plan to no longer do Alpha composes from Fedora 27 on, so we'll just have a Beta and a Final, and that will be it. And hopefully that leaves more development time so that you guys can then add more features and do more in Fedora each release. So how can we do that? The main way we do that is CI, CI, CI. For a long time, Fedora's done way too many things too manually, too often, and it doesn't scale. Three years ago, I was the entirety of Fedora Release Engineering, and I could only do X amount. We now have five, six people full time doing Fedora Release Engineering, and we're trying to get more. We still struggle to deliver, and we struggle to add new things, because we don't have the time to do the project work — we're doing the same old things over and over and over again. Those things need to be automated. For dealing with things like broken dependencies, take Python 3: rather than requesting side tags — say Python comes along and says, we're going to go to Python 3.8 now — they could just build it. In our ideal world, we'll automatically set up a side tag, move that build into there, and rebuild everything against the new Python. When at least the base of the operating system — Anaconda, Workstation, Atomic Host, Server, the base core pieces — when they're no longer broken, that is when we merge that over, bring it in, and push it out to everyone, with CI checking to make sure that it actually works and doesn't leave you with a broken Anaconda that doesn't function for anyone.
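As a hedged sketch of what that "no more broken dependencies" gate amounts to (the package names and Provides/Requires data here are invented for illustration — the real gate would read repo metadata, not Python dicts):

```python
# Refuse an update if any package's Requires is no longer satisfied by the
# candidate repo's combined Provides - the libpng soname-bump scenario.

def broken_requires(provides_by_pkg, requires_by_pkg):
    """Return {pkg: [unmet requirement, ...]} for the candidate repo."""
    all_provides = set()
    for provs in provides_by_pkg.values():
        all_provides.update(provs)
    broken = {}
    for pkg, reqs in requires_by_pkg.items():
        unmet = [r for r in reqs if r not in all_provides]
        if unmet:
            broken[pkg] = unmet
    return broken

# libpng bumps its soname to .so.17, but gimp still wants .so.16:
provides = {"libpng": ["libpng.so.17"], "gimp": ["gimp"]}
requires = {"gimp": ["libpng.so.16"], "libpng": []}
print(broken_requires(provides, requires))  # {'gimp': ['libpng.so.16']}
```

If the returned dict is non-empty, the build gets kicked out instead of landing in Rawhide.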
In order to get to that, we need to have things like PDC, which we have today, but we need to put a lot more data into it. We need to know what's in the minimal runtime for Anaconda, what things go into Workstation, what goes into Server, what goes into Atomic Host, what goes into the different Docker containers — because then we can check much more quickly, as soon as something changes, whether it's going to cause a change in those objects. Ideally, with that, we can then automatically start to rebuild things using a policy engine. There's a thing I got told about yesterday — a project called Robot — which is an effort by QE to bring tests that are run internally for RHEL into Fedora and make sure that the tests work there. So one thing we need to do is make it dead simple for everybody to write tests. You hit a bug, you write a test that covers it, check it into a tests area in dist-git, and that gets run every time your build happens. And we'll build up a big amount of testing so that the CI actually works and is useful and gives us a Rawhide that is consistently stable and working all the time. And at the end, more CI, CI, CI. We're doing a little bit of continuous delivery for Atomic today, but we actually don't have any CI on it. Every 15 minutes we've got a cron job that builds Atomic Host in Rawhide against what's in the Rawhide buildroot, but we're not actually testing to make sure that that's right. It broke at one point, and people came and we fixed the bug and got it going again, but we're not doing as good of a job as we should be in making sure that all that stuff happens automatically. And the cron job runs every 15 minutes. We build Atomic Host for aarch64, 32-bit ARM, and x86_64, which is more than we build Atomic Host for in releases, but it's kind of brain-dead: we don't build it when we know that something in it has changed, because we don't have that data today. We're working on getting all of that in place.
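A "dead simple" per-package test of the kind described could be as small as the sketch below. The layout and the binary under test are hypothetical — `sort` stands in for whatever tool your package ships — and real Fedora dist-git tests later standardized on their own interface; this just shows the write-a-test-for-the-bug-you-hit idea:

```python
# A tiny regression test as it might live under tests/ in a package's
# dist-git: reproduce a (hypothetical) fixed bug - the tool must not
# crash when given empty input.
import subprocess

def test_handles_empty_input():
    # 'sort' stands in for the packaged binary under test.
    result = subprocess.run(["sort"], input=b"", capture_output=True)
    assert result.returncode == 0, result.stderr

test_handles_empty_input()
print("ok")
```

Once checked in, a test like this would run on every build of the package, so the bug can never silently come back.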
So we want to make fewer artifacts to do more good. And by that we mean doing things like building only when changed. Today a Rawhide compose has the XFCE live CD, KDE — we build everything: the Docker base image, cloud images, the whole swath of what we make in Fedora. We build it every day, because it's kind of simple, but it's a bit of a sledgehammer. We could, and we should, be building those artifacts only when something changed. An XFCE package is updated? Let's build the XFCE live CD. Let's test it, make sure that it's right. If it's not, then we can file bugs and kick it out and deal with it. It also makes the composes much faster, because rather than making a whole bunch of things, the compose is just gathering a bunch of output. And by doing less, kind of more often, we can do more good: we can do more things, we can deliver more, we can test more new deliverables, new artifacts. So the question was: you're talking about making fewer artifacts each time there's a compose event, but having more composes? More times when an artifact appears — maybe we'll make Workstation three times a day, maybe we'll make it five times a day. Who knows? Maybe we'll make it ten times a day; it depends on how much the stuff changes. But then when we do the compose event, we're grabbing the things that we know are good, because they've gone through testing, and we're pushing those out to the mirrors. So in theory, if something in the XFCE live CD only changes once a week, we push that XFCE live CD out, and it sits on the mirrors for a week before it gets changed, which results in less churn on the mirrors. The mirrors will then, in theory, be able to serve out that content much more. And mirrors don't like it when we churn things on them too much, because that costs them I/O cycles, and there's all sorts of things there.
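The build-only-when-changed idea boils down to knowing, for each deliverable, which packages feed into it — the data PDC is meant to hold. A minimal sketch, with entirely illustrative artifact names and contents:

```python
# Map each deliverable to the packages that go into it (PDC-style data,
# contents invented for illustration), then rebuild only the artifacts
# actually touched by an update instead of composing everything daily.

ARTIFACT_PACKAGES = {
    "xfce-live": {"xfce4-session", "xfce4-panel", "lightdm"},
    "docker-base": {"bash", "glibc", "rpm"},
    "workstation-live": {"gnome-shell", "glibc", "anaconda"},
}

def artifacts_to_rebuild(changed_packages):
    """Return the deliverables that contain any changed package."""
    return sorted(
        name for name, pkgs in ARTIFACT_PACKAGES.items()
        if pkgs & set(changed_packages)
    )

print(artifacts_to_rebuild({"xfce4-panel"}))  # ['xfce-live']
print(artifacts_to_rebuild({"glibc"}))        # ['docker-base', 'workstation-live']
```

An XFCE update triggers only the XFCE live CD; a glibc update fans out to everything that contains glibc.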
So the idea is to actually make the things more often, deliver them a little less often, and make for a better experience. But then we want to do more projects to make less work. We want to make the pieces separately. We want to automate things like configuration management. Today we have a manually configured Pungi config that does Rawhide. When a new spin comes along, we need to go and manually edit that config file. When we need to change some part of the definition of, like, what's Workstation, we need to go in and manually edit config files. We need to manually edit kickstart files. We need to manually edit the JSON files that define Atomic Host, the Workstation OSTree. I had the idea the other day that it would be really cool if we had an OSTree-based virt host that runs as a node in OpenStack or oVirt, where you can really easily add it and then update it independently — that would be really cool. But in order to do that today — say we want a compose of sorts that does just Workstation, or, in Matthew's vision that he keeps touting, where he wants to do the schedule with Workstation shipping here, but KDE's not ready for two weeks, so we ship KDE later — that today means a completely separate compose, with a completely separate manual set of configuration, with a completely separate manual running of the compose, which is not going to scale. So in order to get to the world where we can do more and get more things out, we need to do a lot of work. PDC is the Product Definition Center. It is supposed to be the source of what is in Fedora, what defines Fedora.
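One way the generate-instead-of-hand-edit idea could look, sketched very loosely — the deliverable definitions and the output format below are invented for illustration and are not real Pungi or PDC syntax; the point is only that the compose config becomes a rendering of PDC data rather than a hand-maintained file:

```python
# Render a compose-config fragment from PDC-style deliverable definitions,
# so adding a spin means adding data, not hand-editing config files.

DELIVERABLES = {
    "Workstation": {"type": "live", "groups": ["gnome-desktop", "base-x"]},
    "Server": {"type": "dvd", "groups": ["server-product", "standard"]},
}

def render_config(deliverables):
    """Produce an INI-like config section per deliverable."""
    lines = []
    for name, spec in sorted(deliverables.items()):
        lines.append(f"[{name}]")
        lines.append(f"type = {spec['type']}")
        lines.append("groups = " + ", ".join(spec["groups"]))
        lines.append("")
    return "\n".join(lines)

print(render_config(DELIVERABLES))
```

Adding a new spin then becomes a one-line data change, and every consumer of the definition stays in sync automatically.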
So we're going to be working on pulling that out of PDC — making the configs on the fly, and making it not a manual, error-prone thing to do. And that will get us to the Bikeshed, which is a magical universe where we can argue about the color of the bike shed, or we can figure out how to do more things. Atomic is becoming more central to a lot of pieces — we replaced the Cloud working group with the Atomic working group — and how does Fedora's future look? We really don't know, and it's really hard for many things to change, because we've got a lot of big monolithic pieces with a lot of technical debt. So, in the end, I'm hoping that we get to a place where we get rid of that. We do more testing. We get everything out. Rawhide is the place. A benefit that we get from that: we have no Alpha. Great — it gives us another four weeks to do development. We keep Rawhide more stable, and then you as developers will be using Rawhide, because you'll want to use Rawhide, because that's where you can go get the latest bits. At some point, we're probably going to have to say Rawhide is going away — we no longer have Rawhide. Maybe we'll call it the Bikeshed, maybe we'll call it something else. Who knows? We can argue about that later. But at some point, I really feel like we're going to have to rename Rawhide to something else, because there's a lot of baggage with the name Rawhide. People are like, oh, it eats babies, or kills kittens, or does all sorts of bad things; it throws the baby out with the bath water. There's a lot of negative connotation with the name Rawhide. Getting to that point, I kind of see that we'll probably end up redefining what we consider Fedora, and potentially — likely — not actually doing releases anymore.
You'll have the development stream, the Bikeshed stream, and we'll have a stabilization stream where we take things we're like, this is getting pretty good, this is something that we want to make available for the general populace and the people that are slightly more adventurous. So the Rawhide stage will be for people like Fedora packagers, developers, technologists that want the latest and greatest of everything all the time — but they want it to be stable. Today a lot of people do that — particularly maintainers — by using Fedora 25 or Fedora 24, and they push everything everywhere, because they want it all stable, except for the latest piece that they care about, where they want the latest and greatest. The problem with that — and it's a bit of a misnomer to say everybody does it — is that the amount of churn we have in updates in Fedora is ridiculous. On day one of Fedora 25 we had over 5 GB of updates that we pushed out onto the mirrors with the first updates push, which is insanity. We're pushing so many updates that it's ridiculous. So when we get to the point where we have Rawhide stable and you're all using it, we can then look at going: well, you know what, I don't need to push my thing to Fedora 25 and Fedora 26. When we consider something to be really stable, or it's a bug fix, we can then push it to the users. So users, instead of getting bombarded every day with "hey, your Fedora install wants to install 267 packages", they'll get, once a week, "oh, we've got 20 packages for you" — and we'll give a much better experience to the users that don't care to have the latest and greatest all the time. So that's my big goal. So the question was: are we talking about making the Fedora updates into a service-pack type of thing? Maybe. Maybe it'll just be service packs, or maybe it'll be monthly batched updates, or maybe it'll be something else.
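The batched-updates idea sketched here is just policy, and nothing in it is decided — but the mechanics could be as simple as the following sketch, where the update records and the urgent/non-urgent split are entirely hypothetical:

```python
# Sketch of batched updates: urgent updates (security fixes, critical bugs)
# are pushed immediately; everything else is held for a weekly batch, so
# users see one small weekly push instead of a daily flood.

def split_push(updates):
    """Partition updates into (push_now, hold_for_batch) by urgency."""
    now, batch = [], []
    for u in updates:
        (now if u.get("urgent") else batch).append(u["name"])
    return now, batch

updates = [
    {"name": "openssl", "urgent": True},
    {"name": "leftpad", "urgent": False},
    {"name": "kernel", "urgent": True},
    {"name": "cowsay", "urgent": False},
]
print(split_push(updates))  # (['openssl', 'kernel'], ['leftpad', 'cowsay'])
```

The "urgent" flag here stands in for whatever real criterion — security karma, severity, maintainer request — would actually drive the split.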
There's still a lot of questions in the air as to how it's going to look, and it's going to take time for us to get there, but I think the first step is making Rawhide stable — because once developers start moving to it, it's the place where everyone wants to be, and then it disincentivizes — well, not disincentivizes, but if you're running Rawhide and you're getting your bits, do you really want to push the updates? So I think there's a natural flow where it'll slow down the churn of things in the stable Fedoras. Right, absolutely. So let me just repeat it for the camera — at least summarize it. Matthew is saying that the way to get the latest and greatest, when you really want to opt into it, is something like Flatpak, which then makes the Workstation OSTree much more attractive, because it's a stable core, and then you can pick and choose the applications that you want to get and run. The goal is not to push the OSTree Workstation — that is just another option for getting things — but users will be able to selectively pick the latest and greatest of stuff from Rawhide via Flatpaks, so it's not like you still have to push everything, because that's available to people. The next question was that, with CI running, quite often today on Rawhide things will break — a dependency is broken, and you're unable to run the tests until either you get hold of the packager, or they notice it and fix the issues, and then things start moving again — and how do we get from that model to where we keep Rawhide stable but still get the new content in? Is that what you're asking?
Yeah. The other question was about the push to enable Bodhi for Rawhide, in order to gate builds and ensure that they get through. We may want to automate that, and at some point go to a Bodhi model, but a lot of this comes back to something we've kind of skimmed over: we've got ResultsDB in Fedora today, which stores the results of everything, and other systems. So if we wanted to work out a way to enable people that are running external CI infrastructure to provide results, we could talk to Infrastructure and work out a way to do that. A lot of this is going to be implemented in the Factory 2.0 stuff that Ralph Bean is running — his talk covers how all the Factory 2.0 stuff works, so I kind of skipped a large part of that. It was yesterday, so you should watch the video of Ralph's talk from yesterday to get the overview of how Factory 2.0 is going to work and how the CI and stuff integrates into there. I think it would be a good thing for people running their own CI to provide the results back. We may even look at having Bodhi enabled everywhere, but Bodhi today is slow and painful — you've got the two weeks — and if we put those same restrictions on Rawhide, everyone would just complain, and it wouldn't be good. It needs to be automated, so that we run the checks, the checks are fairly quick, and then we let it in or we kick it out. Without that automation in running the tests and having the CI, it's not going to work. There is an API — it's fedmsg. Sure. The comment was that having Bodhi could allow you to batch a group of, say, five packages that need to be updated together. At least my thought on that is that we would have testing that checks, say, the base package that you have to build first — when you build that, there would be a breakage in a package that we detect, and we would automatically rebuild the rest of them, or we would set up something so that it would go, hey, this is broken; we would set up a side tag and move it over to the side. I'm going to be filing an RFE for Koji to make the cost of side tags really inexpensive, because they're really expensive today. A newRepo task for f26, for the first run through, has to read through 50,000 RPMs across every single architecture to make that repo, and then every time you update it, it checks to see if any of the files in that repo have changed. It's a really expensive, clunky process. We need ways to make that quicker and cheaper, because it takes about 10 minutes, I think, to run createrepo through everything — it's just checking so many files, and it's a really slow, clunky process. He's not, but I told him already. So the question was about when Python 3.6 landed: there was a lot of stuff that broke, and it takes months to deal with all the fallout of that. The way we deal with it today is that you have to file a ticket, and we make a side tag; you build the Python, you build the other things, and at some point we merge it in. Python 3.6 ended up being rushed, and it caused a lot more breakage than things normally do. But what we would do is: you would just do a Python 3.6 build, and the CI would detect that, oh, the Provides for Python 3.6 now have this whole new soname, and there are 5,000 packages that depend on the previous soname, so we're going to set up a side tag. The policy engine — which comes into the Factory 2.0 slide — the policy engine piece would detect it, talk to ResultsDB, and say, hey, this thing just changed, which causes all this breakage; we would then set up the side tag and rebuild all the things. The CI would determine — or the policy engine, or a piece talking to the policy engine, would determine — at which point we're going to move that side tag into the main distribution, and it will be based on making sure the Anaconda runtime is complete and functional, making sure that Workstation is complete and functional, making sure that Server is complete and functional, making sure that Atomic Host is complete
and functional, and the Docker base image, and the pieces that we care about — and then potentially looking at the outer ring of stuff as well. So we may say we don't want the Labs to be broken, we don't want the Spins to be broken either, but then there are the things on the far edge, the ones that we care about less — not because we don't care about the packages, but because brokenness in them affects a lot fewer people. It's a bad experience for the people using it, but there has to be a point where we say: the core pieces, the things that 99% or 95% of the people use, are functional, so we bring it in. But we'll enable repos on the side, so if you wanted to opt into the new Python stack, you could, because we would make that repo of the packages available. We have that same problem today with Rawhide and with stable releases when people bump anything in a stable release. We'll be filing bugs automatically as we hit the issues, and we'll need to deal with them. I strongly suspect that we're not going to hit that many issues. When libicu — ICU — gets a major version bump, it changes its soname, but every version of ICU also changes its API, so in that case everything that builds against it needs to be patched to deal with the new ICU. And there are other packages like that, where we'd need a mechanism to say this thing is really critical — it's kind of the rings definition. There's going to be some outlying stuff that is sometimes going to be broken. We have that today: Fedora 25 shipped with a whole bunch of broken dependencies in the Everything repo. It's not going to be a perfect solution, but we should be aiming at something where, for 90-95% of the people, it's going to be working all the time. If we do a Python bump and half the stuff is still broken, then we probably don't want to merge that over. We may say it's fine if it's a quarter of the stuff and none of it is in any of the core pieces. I mean, people will hopefully run Rawhide on production systems and stuff like that, and we would hope that we don't break them, but that gives the incentive. And we need to give the notifications — we need a lot more work on the Fedora notifications setup, so that when we run the CI on live CDs and such, the notification gets back to people. One of the biggest issues with Atomic Host today is that when stuff breaks, they don't know about it, because they're not paying attention all the time. So it's probably not a lot different to the Debian notion of unstable, testing, and stable. I would hope that the difference would be that Rawhide is more stable than Debian unstable, and all the Debian developers will want to come use Fedora, because they get their things and it suits them. The comment was that by making Rawhide stable and keeping the broken things out, we won't let developers know that they're breaking other people's work, and won't force them to fix the issues. Yes and no — because you can still push things, and if it's broken, we need to make sure the notifications go out right, that bugs get filed, and that it doesn't just sit there on the side indefinitely. The output needs to be available for people to consume: repos in Koji — and there might be 20, 30, 40, 100 different repos at any given time for Rawhide — where, if upstream's broken something in an update and you need to port your app against it, you can get the Rawhide build of the library that you're linking against. Kind of a Rawhide updates-testing, but it's not going to be as rigid as updates-testing; it's more dynamic and more flexible. It's a matter of dropping a .repo file in place that points at the location of the Koji packages. It's really simple to enable any of the Koji repos — they're in the mock config by default — so we could look at having a DNF plugin where you say, I want to enable this extra repo from Koji, similar to the Copr one. It won't be a terribly hard or expensive thing in order to
enable that on user systems. At the back. So the comment was about side tags and dist-git, where today it's quite difficult to work with side tags. Side tags today are intended to be very short-lived, because if you do a build against a side tag for a package in dist-git, and then the package maintainer comes along next week and does another build before you've merged your side tag, you need to rebuild it against the side tag again. Because of the complexities of that, a lot of it is going to be dealt with by the modularity work that's going on, where we're likely not going to have an f28 branch — we'll probably have an f27 branch, but we may never have an f28 branch in dist-git. Instead, for the Python case, we would have a python3.6 branch and a python3.7 branch, and the branching structure will be different. Potentially, for doing the mass rebuilds, we could say we're going to make short-lived branches: we do the build from the side branch and merge it back in once everything gets merged over. Or maybe it doesn't even need to be that — we just make a new branch for, say, python-requests: a python3.6 python-requests branch, or something like that. Yes, Adam. A lot of this was covered in Ralph's talk. F27 Alpha is never going to exist — yeah, that's what we're planning on: no F27 Alpha; we do Beta, we do Final, we're done. I'll get to you in a sec; I just want to reiterate what Ralph had said: the Pagure on dist-git work is currently underway and should be live fairly soon, and the Taskotron-to-ResultsDB integration work is very close behind as well. Yeah, all the infrastructure stuff is coming, and it will be there in an F27 timeline, because Ralph is going to make it so. So the question was: with the different branching, will we have the ability to have the old version and a new version of something available? Maybe. A lot of that comes down to the modularity work that Stephen Gallagher's been working on, and his talk covers a lot of how they envision that working — oh, actually it wasn't his; the modularity talk on day one covers a lot of how they're envisioning the module life. Different modules are anticipated to have different end-of-lives, and there's a lot of work going on there to manage that. It could be a possibility that we could have different versions of the same library in the same Fedora release. It's different, and it needs to be presented to FESCo, and they need to decide what will be allowed and what won't be allowed, but it could be a possibility to have multiple versions — multiple Python stacks in parallel, or whichever. Would it be possible, in the new world with the multiple branches, to have different versions of the same thing in Fedora on the same system? On the same system today, without RPM changes, you wouldn't be able to do that easily, but you could very easily have different versions in Fedora in different containers. I wouldn't say we could never support it, but there are technical issues — the modularity talk covers a lot of the stuff around dealing with different versions and modules and the end of life. So, I mean, this is really more about bringing the modularity stuff and the factory stuff all together, and the real-world impact in Fedora. I'm not the expert in how the modularity stuff works; I'm the person that gets Fedora out the door. The question is: how many people use Rawhide today? Can we get a show of hands? Except for Kevin. All the Rawhide users except for Kevin Fenzi are here today. There's a decent number of people using Rawhide, and for the most part it works really well. I heard an anecdote today of a large corporation who has their installation infrastructure running on Rawhide, for installing Fedora releases and CentOS 6 and 7 from the one installation. Okay, and I've just been told that's all the time we have, so thank you all very much. Bye-bye.