Hello, everybody. Can you hear me? Hello? Cool. I guess it's about 10:30, so let's go ahead and start. My name is Kevin Fenzi. I do many things in Fedora, but today I'm going to talk to you about Rawhide: all the stuff that I do in Rawhide, a lot of the problems we've run into, interesting things that we fixed, why you should care, and all sorts of things about Rawhide. If anyone has questions at any point, just feel free to raise your hand or shout out your question. There will also, hopefully, be time at the end for questions. I'm going to look over the past that we've had with Rawhide, then where we're at now, and then some thoughts on problems we're hitting now, and hopefully we can brainstorm some solutions to some of the things we're seeing.

I don't know if you can read this, it's kind of small print, but Rawhide is almost 20 years old. This email was sent to the Red Hat Linux devel list on August 18, 1998, announcing the first Rawhide release. You can see at the bottom there, they dubbed it "Raw Hide." Of course, over the years, that's changed to Rawhide, just one word. There's a long history here; we've had it around for a very long time.

Looking back at the past: composes were mostly daily. At first they were done internally and synced out, so there wasn't a whole lot of visibility into what was there until it was synced out. This was repository trees only: groups of packages, no images or artifacts in particular, just a tree of packages to begin with. It was kind of a preview of the next release, but not necessarily an exact copy of what the next release would be; more like a test compose. Then, of course, Core and Extras merged into Fedora. This was the era of MASH: we used a tool called MASH to compose Rawhide. This was done, of course, in the open, because Core and Extras had merged.
MASH had multiple-architecture support, so it could compose various different architectures. Each instance was separate, so there would be a SPARC Rawhide and a PowerPC Rawhide compose, and all the architectures were done independently, on their own time. Toward the end of MASH's tenure, it actually started producing more than just a tree of packages: we also produced a boot ISO that you could actually use, a netinstall ISO, to install from that tree. Also toward the end of this tenure, fedmsg started appearing, and we added fedmsg support to MASH so you could see when composes started and finished, and when they went through various phases along the way.

MASH served us long and well, but it certainly had its share of issues, and then along came some more issues that we ran into in the MASH era. Failures at this point were of a particular kind: usually it meant that the buildroot it was trying to build the compose in failed for some reason, so a very fundamental package broke. It was pretty rare that that happened. It would have to be the installer or the kernel or something very, very low-level for this to break, so it didn't break that often. MASH was also built on a code base of YUM and Python 2 and all those great things, and of course, when we started getting new features in RPM, rich dependencies and things like that, it had no concept of them. One of the interesting things we ran into: somebody added a rich dependency to a package in Rawhide back when we were using MASH. When YUM sees a rich dependency, it reads it as if somebody said "Requires: something or something-else," and it says: I can't find a package named "or." So it just bombs out at that point and doesn't process anything after that. Obviously we needed a better solution for all of the new stuff. So, of course, along comes Pungi, another tool, which does use DNF.
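To make that rich-dependency failure concrete, here's what such a dependency looks like in a spec file (the package names here are hypothetical, not the package that actually broke the compose):

```
# A boolean ("rich") dependency, understood by RPM 4.13+ and DNF,
# but not by old YUM:
Requires: (mariadb-server or community-mysql-server)
```

Old YUM has no grammar for the parentheses at all, so, as described above, it effectively treats the expression as a request for a package literally named "or," fails to find it, and bails out.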
Pungi is Python 3 aware, et cetera, et cetera. One of the fundamental changes here is that instead of just building a tree of packages, we wanted to build everything. We wanted to make every compose as if we were going to release the whole operating system. There was a specific reason for this. In the old days with MASH and Rawhide, you would go along, MASH would be fine, you could compose a Rawhide tree, everything would be great there; but then when it came time to actually do a release, you found all these problems. You couldn't build images, you had dependency problems, or whatever. So we specifically wanted to make things as much like a real release compose as we possibly could. Pungi actually produces all the images, and everything it produces is as if it were a full release: all the ISO images are there, all the checksums are there, all the trees, everything that would be in a regular release is there in each nightly compose.

Of course, this presented a much higher surface area, right? Before, the only things that could break the compose were things in the buildroot, very fundamental things. Now, since we're building all of this stuff, anything that breaks any of the things we require is going to cause the compose to fail. So obviously we have a different issue here.

One of the things we also added: Dusty, I don't see him now, actually came up with this idea, and it's been very helpful for us. He set up a Pagure instance that listens for fedmsg messages, and when a compose fails, it files a ticket and shows all of the tasks that failed. One of the things this has been really helpful for is coordinating between people trying to fix problems, because often there are multiple problems, or one person starts working on something and doesn't tell other people, and they start working on it too, et cetera.
So if you're curious as to what's failing the compose, what's going on, who's working on it, or whatnot, you can look in this issue tracker and see what the most recent compose failure was and what the release engineering folks think is going on. It's a bit overwhelming, because obviously we do a compose every day, so if there are failures, there are lots of tickets in there. But it's been very useful.

I don't know how readable this is, it's probably pretty small, but this is from the Rawhide page on the wiki. These are the high-level goals of Rawhide: to allow package maintainers to integrate the newest usable versions of their packages. A lot of people miss that they're supposed to be integrating usable packages. We want this thing to be usable. We don't want to just throw something over the fence and, you know, oh, that's broken completely. No, you want to make sure that what you're doing is integrating something that's useful and usable. (Connection failure. Excellent.) It also allows advanced users access to the newest usable packages, and it allows incremental changes to packages that are too small or too large for other releases. There are a lot of things we can do in Rawhide, you know, simple fixes that don't actually need to be pushed out to every user. A good example of this: recently, I think it was in the RHEL 6 timeframe, RPM specs used to have a BuildRoot invocation in them. You could specify what you wanted the temporary buildroot to be, and ever since the RHEL 6 days, that has had no effect in our RPMs; RPM just defaults it to something sane. So somebody actually went through and cleaned up all of the spec files that still had those in them. It doesn't actually change anything; it just gets rid of cruft, things that confuse people when they're looking at the spec file.
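For reference, the obsolete tag being cleaned out looked roughly like this in old spec files (this is the typical boilerplate of that era, not any specific package):

```
# Ignored by modern RPM; the build root is chosen automatically now.
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root-%(%{__id_u} -n)
```

Dropping the line changes nothing in the built package; it just removes a directive RPM no longer honors.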
So, you know, if there's a very minor issue, you can push it into Rawhide and make sure that it's working before it goes into stable. It's a good arena for those sorts of changes. Also, we recently added this last line about GCC and glibc: Rawhide is a place where the low-level packages can get real-world testing of pre-release versions. Fedora works very closely with the glibc folks. They try to align their cycles so that they can take advantage of the Fedora mass rebuild to see what's going on in the compiler or glibc. It's very beneficial to both of us, because by the time a Fedora 29 comes out, it will have the latest glibc and GCC, and they will have had all this test data from this huge distribution building all these strange things, working the bugs out of their tools.

Just a quick note about why we should care about Rawhide. The goals mentioned in the last slide are pretty important, but the integration work that we do in Rawhide is just super vital for the rest of Fedora. If you don't have this ability to integrate stuff at that point, it becomes so much harder to get it to users. It really is something that I think everyone should care about.

So here's an interesting statistic. Those among you might notice how many times the Rawhide compose has worked recently. I'm happy to report, I didn't add this to the slide, but we have a compose today. We actually have a compose today. And I have a slide on why this has been the case a little later. But if you look at that "11 completed"... actually, let me back up a second. Pungi has various states to indicate how the compose worked or did not work. DOOMED means that a required deliverable did not compose, did not function, so it's no good. INCOMPLETE means that all of the required deliverables did complete, but some of the non-required deliverables did not.
So for example, the i686 media are not required deliverables. There was a long time, I forget how long, a week or two, when there was no i686 kernel, and so all the i686 media failed, which was fine for Rawhide; it just said INCOMPLETE because those things did not finish. There is one other state that Pungi has, called FINISHED, which means everything worked, everything composed. Earlier this year, in March, we actually had that happen. It printed out "Rawhide compose FINISHED," and we're like, whoa, is that finished finished? Everything worked? And then we found out it was a Pungi bug, because it had reported that everything worked, but there are a number of tasks where it tries to make media for, say, four architectures, and some of those are not required; some of those had failed, but it marked the overall task as completed correctly. So we'll get there. We will get there someday.

Those of you who went to the "Making Composes Faster" talk may be familiar with this diagram, which I shamelessly stole from the Pungi website. This is an overview of what Pungi does and its steps. If you're interested in this, go look at Pungi, or go look at the recording of the "Making Composes Faster" talk, because they went into a lot more detail on this. But you can see that Pungi does a whole lot of stuff. It gathers a lot of packages, and Fedora is huge; I mean, there are 20,000 packages. Some of them are gigantic; some of them have tons and tons of subpackages. So it's moving around tons and tons of stuff. And because we're trying to make this compose exactly like a real compose, it builds tons of images. We basically add new images, new deliverables, all the time. People come along and say, oh, well, like this cycle, the Minishift spin, I think, is going to be added. We have all the labs, all the spins, all the live media. So tons and tons of deliverables.
So here's a quick list of things that break the compose now, or problems that I've seen break it.

Scriptlet errors in an initial-install chroot: all of these image builds that are building live CDs or DVDs or things like that are done in Koji. Koji goes to a builder, creates a mock chroot, installs packages into that, and in some cases installs packages into a further loopback-backed image underneath that. So there are all these layers of stuff here, and at the lowest level you're using RPM to install packages into a chroot that has nothing else in it, right? You're doing the initial install of these packages. Sometimes maintainers don't think about this case. They'll call something in their scriptlet that they don't have a Requires(pre) for, so it's not there; it doesn't exist yet. Or they try to grep a file that doesn't exist yet, because the package that has that file hasn't been installed yet. Or they try to look at something in /proc or /sys, which doesn't exist. Or they try to call systemd, and systemd, running in a chroot, says, no, sorry, I don't know what you're talking about; I'm not init. So that is a really common issue that we see in the base packages.

Unannounced version updates for libraries, causing broken deps: this happens far too frequently still. Somebody will update a package, sometimes without even realizing that the library has increased in version, and, you know, four other things are broken. Then we see that in the compose, because it tries to install stuff and can't, because of the broken dependencies. That is all too common these days.

Not fully coordinated changes across several packages: we see this all the time. I have a good example of this from today's or yesterday's compose, which we'll see later.

Size changes.
This is another one that we see all the time. Lately it seems like glibc has had a lot of trouble with their locales. A lot of these media are defined to a certain size: somebody says this is a DVD, or we want this under two gig, or something like that. And then glibc has some kind of bug or issue where they do a build and the locales are suddenly, you know, 500 megabytes bigger, and then it doesn't fit, and boom, things don't work. So that is an issue that a lot of people hit. I think that could possibly be mitigated by informing people more quickly what the differences in their builds are: you're doing a build of this package; hey, your last build was 498 megabytes smaller; something is wrong here.

ExcludeArch is another one. We haven't hit this too much recently, but occasionally people will run into a problem with a particular architecture, and the process for that is to add ExcludeArch, file a bug blocking the architecture tracker bug, and mention it all to the architecture team. But the case where this doesn't work is things that we need in base images or buildroots or things like that. There was an example of this earlier this year. I can't remember the name of the package, but it was some package that was basically in the base package set, and they excluded ARMv7, and one of the ARMv7 deliverables is required. So: no compose. We have to be careful about that.

So here's a quick list of a few things. This is interesting: I only added locking to the compose process early this year. Before, it would just compose from a cron job, however many would kick off, and we ran into a very, very bad problem with this, which we recovered from, but I doubt very many people are aware of it. In the run-up to the Fedora 28 release, we were doing RC composes.
We got an RC that was gold; it passed all of the tests, and we were going to release it the next week. And unbeknownst to us, or unnoticed by us, there was another compose running from before that. It was branched instead of Rawhide, but same principle. It only finished after we had staged the GA release; it was like two days after the release that it finally completed. Of course, this messed up the staging of the regular Fedora 28 release, so we had to clean that up manually and make sure everything was in the right place. So I put locking in place. We have definitely hit cases now where a compose has been running and the cron job doesn't kick off, because it has locking around it now, just to avoid these sorts of problems where you get multiple composes stacked up behind each other. We also had cases where a Rawhide compose would be going along, another one would start and complete, and then the first one would complete and write over the newer one that had already completed. So the locking has definitely been a useful addition.

Why do we make the compose fail on required deliverables? We want Rawhide to be alpha quality at all times. This was part of the No More Alphas proposal that we did with Fedora 27. The idea is that Rawhide is always alpha: it always meets the alpha criteria, the deliverables are there, the tests pass, et cetera. And we have openQA running to do a lot of those tests. Adam Williamson, who is not here, catches all kinds of things with openQA; I mean, he's been filing stuff right and left as he hits it, and it's great. This also prevents us from having the problem that we had earlier with MASH, where you get to, like, a beta and you say, okay, well, it's been composing, we're probably great, and then you find out everything's broken.
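Going back to that compose locking for a moment, the idea is simple enough to sketch. This is a minimal illustration using flock-style advisory locking; the lock file path and the structure are assumptions for the example, not the actual releng compose script:

```python
import fcntl
import sys

LOCKFILE = "/tmp/rawhide-compose.lock"  # hypothetical path

def run_compose():
    print("composing...")  # stand-in for the real, hours-long compose

# Keep the file handle open for the duration of the compose; the lock
# is released automatically when the process exits.
lock = open(LOCKFILE, "w")
try:
    # Non-blocking: if a compose already holds the lock, bail out
    # instead of stacking a second compose behind it.
    fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit("another compose is already running")
run_compose()
```

Because the cron job exits immediately when the lock is held, composes can no longer pile up behind each other or finish out of order and overwrite a newer result.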
So if you keep things at a high enough quality to begin with, a baseline quality, then doing those beta and final releases is a lot easier.

Composing all arches at the same time: as I mentioned, with MASH, things were split out by architecture, so each architecture team did their own thing. That ran into strange artifacts where, you know, PowerPC would be behind, say, ARMv7; you'd run into these very strange version-skew issues. This way we do all of the architectures together, and we can promote certain things from certain architectures as being release-blocking or not. Like the ARMv7 Xfce image, I believe, or the ARMv7 server image. Anyway, we can decide which architectures and which images are release-blocking or not, and we don't have to worry about whether they're being done in certain other places. So that also is helpful, but it of course increases the compose time.

Now for a few of the more amusing little problems we've run into; I thought I'd share. We ran into a string of compose issues where Rawhide would not compose, and it was the appliance images, which are ARM images that you dd to ARM devices, that would not complete. All it would say was that it failed. This is one of those layered, onion-like processes: it goes to the builder, creates a mock chroot, creates a loopback device, installs into that loopback file, closes that off, does some things, and then uploads the result. Well, it was unable to unmount that loopback; something was holding the image open, so it couldn't actually finish unmounting it. This one was a big pain to track down. But looking at the changes from the previous working Rawhide: SSSD had updated their package, and they had declared in their package that certain files were owned by the sssd user, which is fine and all. But when they're installing in the chroot, that user has to be looked up.
glibc says, okay, the sssd user, what is that? I'll go down the list of NSS modules I have to look things up in. So it opened those libraries, and all of those libraries were already open in the chroot, except one: the nss-systemd library, systemd's NSS user support, was not open in the chroot. So it opened the one in the image, because it was looking for this library and there it was in the image; it opened that, looked up the sssd user, and then kept it open. This was worked around in appliance-tools: we basically added something to say, hey, when you start making an appliance, before you start that loopback, make sure all the libraries that you need are open in the chroot and not in the image that you're trying to make. This was a big, big pain to find.

Let's see. Oh, another issue that we've run into. The way package signing works: Rawhide is now fully signed. You do a build and it lands in the f29-pending tag, and then we have an automated process called robosignatory that looks at the stuff that lands in that tag, signs it, and then moves it over to the f29 tag. And that's great. But every once in a while there's a problem with it, and usually it's because of something we did, like rebooting servers, or there was a database outage, or some issue that caused it to not process some number of builds. The problem then becomes that you get these packages sitting in there, and later somebody says, hey, my package never got signed; it never went out. Well, if you then flush that queue out, you say: sign all these packages, put them in the right tag. But some of them are from five days ago, and maybe there's a newer version of one of those things, and you've just tagged an older one on top of it. So you get these strange artifacts where packages go back in version, which is not what you want.
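One way to guard against that when flushing the signing queue would be a sanity check comparing each candidate build against what's already tagged. Here's a grossly simplified sketch; real RPM version comparison uses rpmvercmp, which handles epochs, letters, and tildes, while this toy version only handles plain dotted numbers:

```python
def version_tuple(version):
    # Naive: split on dots and compare numerically.
    # rpmvercmp is far more involved; this is only an illustration.
    return tuple(int(part) for part in version.split("."))

def would_go_backwards(currently_tagged, candidate):
    """True if tagging `candidate` would regress the package version."""
    return version_tuple(candidate) < version_tuple(currently_tagged)
```

With a check like this in front of the tag operation, a stale five-day-old build could be flagged before it lands on top of a newer one.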
So we need to solve that better: put some monitoring on it or something similar.

Yes, I thought I would mention this; Laura is shaking her head. The random kernel issue was one that we ran into. The aarch64 images were not composing, and we were like, why is that? So I launched a test one off, and sure enough, it just sat there, did nothing, and then timed out. I looked at the console, and it got partway through boot and then just sort of sat there. Then Patrick looked at it even further, and he found that it was a confluence of things: GnuTLS, I think it was, using libgcrypt; libgcrypt using FIPS mode, so it needed randomness; and the kernel changing the way randomness is gathered early in boot. I don't know if that actually ever got solved upstream, but okay, yeah, the issue has been solved in libgcrypt. Again, it was a strange one to try to debug, because it didn't seem like anything was going wrong; it was just timing out.

Another fun one that we hit very recently: DNF 3.1 landed in Rawhide, and live media images stopped composing, because they said "package whatever is blocked," for a whole bunch of packages. This gets back to the fact that a lot of the stuff we do has no spec, right? I mean, if you ask somebody, what is a well-formed comps file, what is the spec for this? Well, there isn't really one. And what is a well-formed kickstart file? We don't know the intended behavior of some of this stuff. What had happened here is that we have a base live CD kickstart file that includes the standard group, and at some point the Workstation folks, to reduce size, decided they did not want to include the standard group. So they did "-@standard".
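The effective, merged package list ended up looking something like this (a reconstructed fragment for illustration, not the actual Workstation kickstart):

```
%packages
# From the included base live CD kickstart:
@standard
# Workstation's override, intended to drop the group again for size:
-@standard
%end
```

The question is what "-@standard" means when the group was also requested: is it a no-op cancellation, or a hard exclusion of every package in the group? Nothing specifies it.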
Old DNF and YUM treated this as: you wanted standard; now you don't want standard; that cancels out. DNF 3.1 made the assumption that since you said you don't want the packages in the standard group, it would block them and not let you install any of the packages in that group. Unfortunately, that includes dbus and coreutils and things like that. So it did not work. The DNF folks fixed this up pretty quickly. But this is a case where the problem isn't necessarily that they changed the behavior; it's that we don't define some of our inputs very well at all.

So we did get a Rawhide compose today. I thought I'd share the last four or five things that had been preventing it from working over the last few weeks. First, we ran into a grub2 relocation toolchain issue on ARM, where the toolchain was doing something funny to the grub2 binary and essentially messing it up. Peter fixed that pretty quickly, only to run into file conflicts between two of the grub2 subpackages. So he fixed that. Then the next compose after that hit one of those uncoordinated-changes issues: man-pages used to carry a man page for the time command, but now the time package wanted to carry that man page, so they both had it, and it was a conflict, and nothing installed. Again, that got fixed quickly. But you have to realize that between all these issues, you fix one issue, you start a compose, and it's eight and a half, nine, ten hours later before you can tell what the next broken thing is. So that makes things take a really long time. The latest thing, which was fixed just yesterday by Adam Williamson: DNF 3.1 changed again. In dnf.conf or yum.conf or the repo files, there's a failovermethod= parameter, and Koji had been passing this configuration, for many years I'm sure, with no value. So it was "failovermethod=" with nothing after it.
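The offending configuration was of this shape (reconstructed for illustration; the actual file lives inside Koji's generated buildroot configs, and the URL here is a made-up placeholder):

```
[build]
name=build
baseurl=https://kojipkgs.example/repos/f29-build/latest/x86_64/
# Passed for many years with an empty value:
failovermethod=
```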
DNF previously just ignored it; DNF 3.1 said: traceback.

Okay, so I've thrown out a whole bunch of problems here, and if people have ideas for these, we can certainly discuss them, try to write them down, and see what we can do. One of the problems, as I mentioned, is the ability to find breaking changes and block them. There's a proposal for gating that has been discussed many times on the devel list, and I actually have a slide on that here in a second. There's also the ability to test proposed fixes as they come in, because this would help us isolate things like that DNF issue or the grub2 relocation issue and catch them before we have to endure eight or ten hours of composing. The compose time is obviously a problem for iterating on this stuff. The signing issues, which I mentioned earlier, we can probably address with monitoring and a few other things.

I also wanted to add marketing issues here, because I still hear people saying, you know, Rawhide's unusable day to day, or it's bleeding edge, or, ha ha, it eats babies, or whatnot. I've been running Rawhide on my laptop for something like six years. And sure, there are problems, but I think it's vastly better than it used to be, for a number of reasons. For one thing, broken dependencies don't cause the headache that they used to, because DNF will basically say: I'll resolve this; oh, all this stuff is broken; I'll just not update that. So you stay on the working thing until the logjam clears in Rawhide. I think we're also getting better about catching these things in composes instead of letting them get out to end-user systems. All those DNF or grub issues would, in the MASH era, have been caught by end users, and people would have had to fix and iterate and push out again, breaking users' stuff.

So, real quickly, the gating proposal. You can look up the full thing on the devel list.
Basically, we want to teach Bodhi about Rawhide and try to make it as transparent as we possibly can. All changes would get a Bodhi update: if you do a regular one-package update, for no reason, or for a version bump or whatever, it would get a Bodhi update, and you wouldn't have to worry about it. In the common case, the update would be created, tests would run on it, it would get a +1, and it would go out the next day, just like it does today. If you have a collection of packages that you need to build together, we would teach Bodhi about side tags. That would add a little bit more overhead, but it would add a lot more help for our composes. Bodhi would get a side tag ability: you'd say, I need a side tag; Bodhi would say, here's your side tag; you'd build your 20 packages or whatever, and it would take them as a collection and test them as a collection. If there was a failure in those 20 packages, you could address that, iterate, and then get them through. This would help us when there's a problem with a group of packages: we could test the whole collection at the same time, which would be extremely helpful for QA, because right now things come in at whatever pace the maintainer is working, and that doesn't necessarily reflect the completed state they want all their packages to be in.

So the question was: how does this affect the buildroot? I believe we said that side tags would populate their own buildroot, so you can build against other things in that side tag, but not in the base. And the next question was: what if the compiler is rebuilt and you want to use that compiler, you know, that same day? After it goes through the gating, it would be tagged into, like, just f29, so it would be added to the buildroot after the tests for it pass, and it would go out in Rawhide the next day, if that makes sense. Right, exactly.
It would take however long the CI stuff takes to run and approve it. All right, so let's see: updates merge into the pending tag for testing, and then the tests run on that. This would also give us another place for user feedback, if need be.

Another thing that we would really love to have, and I believe Mohan has been working on this, is a quick smoke-test compose sort of thing. This would really help us for critical packages: the kernel, Anaconda, GRUB, Lorax, stuff that's used in all the images. Right now we untag something and we have to wait eight, ten, twelve hours to see if everything works. It would be really nice to see up front that there's an update to one of these things and go, okay, let's do a test compose. Oh no, it doesn't work; untag it and get back to a working state until that can be fixed. Also, a subset of images for openQA would be very useful. If we do just the real high-profile ones to start with, workstation live media, the server DVD, that kind of stuff, openQA can run tests on those and tell us whether a proposed thing is going to work or not.

So, more future stuff. I put... man, my smiley didn't show up up there, bummer. I put "drop i686," with a smiley. We still continue to make all the i686 images we've always made: every lab, every spin, workstation, server; that is a lot of images. It may be something to consider, saying we're going to cut our compose time by a couple of hours by just not making all that stuff, or making less of it. Justin? Okay, Justin points out that we can talk to the i686 SIG and see if they're targeting or care about any subset of those specifically. That's a real good idea, because there are just so many of these, and I'm unsure how many people are using them, especially when you look at things like the Design Suite lab: how many people are doing intensive GIMP work on an i686 box, right? Right.
They're more interested in Xfce or LXDE, those sorts of things. Yep. Yep, that's a good idea.

Try to do more in parallel: this was already discussed quite a bit at the "Making Composes Faster" talk the other day, so I encourage you to go look at that recording when you get a chance. We also talked about incremental mode there: being able to cache a previous compose and reuse things that have not changed from that compose, if we wish to do things faster.

There are always new deliverables: modules, new OSTrees, new containers, all kinds of things. But Rawhide should really strive to push the latest working versions to users. There are a few cases right now where we're not; the Rawhide container is really old at this point, and hopefully we're fixing that. I think OSTree is all on a two-week cadence, but at one point we had an OSTree that was built from the Koji buildroot, I believe, and we may want to explore doing something like that at a later date. OSTree lends itself very well to testing Rawhide, because you can bisect your problem: it's working at this point, it's not working here; all right, I'll just bisect and see where it broke and what changed.

A few things for the future. New mock: right now we're using an older version of mock on all our builders, and we need to move up to the newest version. We need to leverage systemd-nspawn, and bootstrap mode is something we need for builds going forward. So we need to really start working on that, especially since there's a lot of pressure now with Python 2 going away next year; we really need to move on and get Python 2 out of the environment. We may also want to consider allowing packages to go backwards in some cases; this was discussed on the devel list recently. In the past, DNF distro-sync didn't work that great, but I've been using it lately for all my updates, and it's doing a pretty good job.
But as somebody pointed out on the mailing list, you kind of want a common understanding here, a common platform for everyone to build on. If you're trying to integrate your packages, and then somebody you depend on moves their package back versions, it makes it very difficult for you to find that thing usable. So I don't know; we may want to look at that rule. Right now there's a FESCo rule that you're not allowed to go backward in an update that shipped out in Rawhide, but we may want to revisit that. We may bring in a lot more users with Rawhide containers, because it's a much easier way to consume Rawhide. You just fire up a container and you have the environment. You have those newer packages. You can do tests; you can do anything you could do in a container. So I think we may get a lot of people looking at using it for that reason. Mass bug fixes and spec changes. This is something that's kind of upticked in the last year or so, and I think it's probably a very good thing, because it saves people a lot of time and it ends up making things better. But we need to look into making that easier and leveraging it. Python 3, I mentioned: we want to move everything to Python 3. A lot of things are, but we need the newer mock, and Koji, I think, still has a dependency somewhere; I forget where. Also, there's the releasever=rawhide change, which was discussed on the devel list. Basically, this is to allow the Fedora release in Rawhide to advertise its version as "rawhide" instead of the number. Right now it says 29. If we do this, it ends up making things like QA and so forth a lot easier, because they don't have to compute what number Rawhide is right now; if they want a Rawhide something, they can just say releasever rawhide. We can make the number still work also. But I think this will actually be a nice win for making it uniform.
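The releasever point boils down to this: today every tool that wants "the Rawhide repo" must know which number Rawhide currently is, and that mapping goes stale at every branch point; with releasever=rawhide, the literal string works forever. A hypothetical sketch of the two approaches (the path layout mirrors the Fedora mirror tree, but the functions are made up for illustration):

```python
# Today: tools hard-code or compute which number Rawhide is, and this
# constant must be bumped by hand every release cycle.
RAWHIDE_NUMBER = 29

def repo_path_numeric() -> str:
    """Repo path using the computed release number."""
    return f"development/{RAWHIDE_NUMBER}/Everything/x86_64/os/"

# With releasever=rawhide: the symbolic name is stable across branch
# points and MirrorManager resolves it, so nothing needs recomputing.
def repo_path_symbolic() -> str:
    """Repo path using the literal 'rawhide' name."""
    return "development/rawhide/Everything/x86_64/os/"

print(repo_path_numeric())   # development/29/Everything/x86_64/os/
print(repo_path_symbolic())  # development/rawhide/Everything/x86_64/os/
```

The symbolic form also makes usage statistics cleaner, since "rawhide" in the request is unambiguous while "29" is not around branch time.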
We can also drop the fedora-repos-rawhide package, because the fedora, updates, and updates-testing repos can all use the rawhide name with MirrorManager and it will just all work. So that will also save things needing to be changed and tweaked. I mentioned OSTree earlier for testing. I think there's a lot of ability to leverage OSTree for Rawhide testing. If we can get the compose time of those down enough, it might be worth having something where we compose an OSTree for every package that lands in the buildroot; then you could actually bisect down to the package level to find what broke some particular use case of yours. Actually, we're close to the end; how are we doing on time? Oh, all right. Well, questions, comments, concerns? Yeah. Right. So the question is about these periods where there's a week of no Rawhides: we're not updating the package repositories on the mirrors, and people can't use those packages for other builds and other things. Yeah, absolutely. And that's something I think we can address with the gating and the CI things; basically, make it so that these composes are not this unreliable. I think that's the easiest solution. The other thing we could do is go back to the MASH world, where we push the trees out even if the images don't compose. But the problem there is that we're just kind of pushing the problem off a little bit; it isn't actually solving it. So yeah. So the question is what happens with signing Rawhide on branching day. We talked about that a lot last branching; I don't know if we came up with very many solutions. Well, in the past we have signed it with both keys, and I've done that. Yeah, I don't know if we came up with a good solution there, because there's a period of time where they may both be signed, but you also have to push out the new RPM signature file.
We could talk about that some more and see if we can come up with a better solution. But signing them in advance is definitely good. Yes, right. Yes, we should sign all those now. Yeah. Huh? Yeah, I'm not sure; I seem to remember a ticket about that. The question was that the rpm-ostree Silverblue composes in Rawhide aren't functioning currently. I saw a ticket on that; I don't remember what the problem was. I actually just tried out Silverblue not too long ago, and there was a problem where rpm-ostree couldn't layer packages, and there were some other issues, but I think those got solved. So we'd have to look and see what the problem is. That brings up another issue: when these non-blocking deliverables fail, the only people who pay attention to them are the people who care about them. Releng does not have the cycles to care about any of those. So if something breaks and nobody is actively looking at it, it can be broken for a while. Right. Right. So that brings up the broken-dependencies report. There isn't one currently, because the old one was written to use yum and did not understand rich dependencies. Okay, I see what you're saying. Right. So the observation was that if we pointed out the things that fail in the Rawhide compose message, people would be more likely to notice them and fix them. That's a very good point. Any other questions, comments? Yeah. The question was, what is the user base? And we don't have a real good idea. It's small. But that's one of the things that changing the release version from the number to rawhide would give us, because then we could actually look at the people who are using Rawhide specifically. The numbers get really murky, especially around branching time: whether somebody is on 29, or was that Rawhide before branching, or a branch after branching, that kind of thing.
So doing rawhide there will actually give us better numbers for that. But it's not large right now. I think Matthew Miller may have some information on how many it was, but I'd say thousands at most. Okay, yeah. So the observation was there may be more users because there's more interest in these things. Yeah. I filed bugs on rpm-ostree and podman because they weren't working, and on rpm-ostree people were like, oh, it works fine on F28. Well, I'm not on F28. So yeah. Yeah, so long ago there used to be a blog site called israwhidebroken.com, and we used that for a little while, but it's really hard because things move so fast. Somebody will notice a problem, and then you post about it, and then it's fixed already, or something like that. We talked last year, I think, about setting up something in Bugzilla, a whiteboard field or something like that, where you could say "rawhide important" or "rawhide noticeable", something like that. And then you could do a Bugzilla search and see if there were any new bugs. I think that might be useful. I don't know. Yeah. So the question is where to look for broken issues and packages in Rawhide. IRC, probably; sometimes, if it's really bad, it will make the mailing list, but not always. All right. Anything else? Anyone? All right. Oh, sure. We need to talk to Randy about that. I don't think he's in here. He has a whole bunch of other stuff on his plate, so I don't know. I'm thinking it's probably more an F30 type of thing, but I don't know when he has it planned. But we really need that stuff to land before we can do it in a meaningful way. Okay. Okay. Thank you, everybody.