So, I'm Neal, this is Igor, and we're talking about how the packager experience really stinks and how we should improve it with some robots and, ideally, simplify the human parts.

Let's first start with the problems. The first is that every time we have to deal with things that depend on each other, getting everything fixed is horrible, because it's a manual effort to figure out what the reverse and forward dependencies are that need to be rebuilt. Dealing with those goes like this: you update your thing, the world breaks, people bitch on the mailing list, and then it has to get fixed again, manually. And if you're not a proven packager, this is way worse, because you don't have the ability to fix it yourself. That really cramps the style of anybody who's not empowered like, say, Igor or myself or the small cabal of proven packagers who can just go and fix the distro. It basically means hero work has to happen, and that shouldn't be something that needs to be done on what amounts to a fairly regular occurrence.

Then there are duplicated changelogs all over the place. In theory, what's supposed to happen is that there's a single packager changelog entry, and then there's user-facing information that goes in the Bodhi update notes. What winds up happening is that people write a mixture of both into the spec file changelog, which is committed to Git as well, and then the same text is just pasted into Bodhi, so the fields aren't even serving their separate purposes. This gets really silly when you look at the updateinfo returned by DNF, or at the emails sent out by Fedora Announce: Fedora Announce shows you the Bodhi update notes, which is the stuff you put in when submitting the update; then there's a section showing the package changelog; and below that is what's actually included in the update. I think most people have never looked at those emails, which is why they don't know why these are separate fields. So that probably needs partly education and partly some fixing of the workflow.

And then the release-specific information that lives in the packages, in the spec file, the changelog and whatnot, is all kinds of pain for cherry-picking across branches for pushing into different releases, which I will get into. That's another can of worms that should never have been made to exist in the first place. Those things cause all kinds of issues for maintainers: for example, you make a bug fix in Rawhide, and in theory that can be cherry-picked into a stable release without bumping the version or anything. And then it can't, because it causes conflicts all over the place, you spend half an hour fixing everything, and then you realize you still got it wrong, because the changelog isn't chronological anymore and you broke it.
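Just to make that first dependency problem concrete: the discovery step is manual work today, and amounts to something like the sketch below. It assumes dnf's repoquery subcommand is available, "libfoo" is a made-up package, and it only finds direct consumers (you would have to repeat it to walk the whole chain).

```python
"""Sketch of the manual discovery step: who do I break if I bump this?"""

import subprocess

def reverse_deps(capability: str, releasever: str = "rawhide") -> list[str]:
    # --whatrequires lists packages whose Requires match the capability;
    # collapsing to source names gives the list of rebuilds needed.
    out = subprocess.run(
        ["dnf", "repoquery", "--whatrequires", capability,
         f"--releasever={releasever}", "--qf", "%{source_name}"],
        capture_output=True, text=True, check=True,
    )
    return sorted(set(out.stdout.split()))

for src in reverse_deps("libfoo"):
    print(src)
```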
And then this next one is something that doesn't happen to a lot of packages, but for the ones where it does, it's a severe issue: dealing with large collections of packages being pushed and updated outside of Rawhide. Unfortunately this is now a Rawhide problem too, but before, it was only a stable-updates problem. If you have to do a collection of builds in the correct order, get it all together, and push it out as one thing, that's actually really, really hard, because the process is not designed to handle people doing large chunks of things at once. It was more or less optimized around people doing one thing at a time, rolling with it, and having other people fill in their bits after the fact. These days, that's really not what a lot of people do anymore. Fedora KDE, for example, is the best-case scenario: they get a side tag, they do all their stuff, and then somebody magically shovels it all through until it makes it to the Bodhi update, where Bodhi starts crashing because there are too many builds in one update, and sometimes things have to be split up, and it's even more broken because one of the updates pushes out out of sync with the others and all the Fedora KDE people are sad, including myself.

In the Fedora workflow, this is the starting point of where the failures are. You start with packaging your software. You do review requests, which are sadly in Bugzilla and have no workflow tools or automation or formatting or anything useful for telling you what you're supposed to be doing, except that if you type the wrong URLs into the wrong spots on the little form that people don't necessarily know they're supposed to be using, everything goes haywire, because we parse the text ad hoc. Cool, right?

Then we have to find somebody to review and approve it. This is going to be a problem no matter what, because there needs to be a human element to it. But the discoverability of new reviews, and the ability to locate people who are capable of reviewing, is hard. It's even worse if you're unfortunately a brand-new packager, because now you have to find this nebulous group of sponsors. This is a group of people who are supposed to be known, but it's not like it's easy for anyone to figure out who they are, who to contact, or how to get started. So there's this whole other human-element problem that I'm not completely certain how we're supposed to solve, but it's definitely a problem. And it doesn't help that, because of problem one with dependencies, most of the sponsors, who are also often proven packagers, are overtaxed, so they can't really help new people over the barrier.

Then there's request-repo, which is actually probably the nicest part of this process, because it's just running a command and pointing it at a bug. But then there's the waiting part. You have to wait, and you have no idea what you're waiting for, because there's no feedback when you create the request. request-repo says, oh, this URL was created; but what about in the bug? What about anywhere else people might actually be watching? For example, when I did a review to get something we wanted to ship in Fedora from my employer, just a font package, the people tracking the bug were asking, well, what's going on? Nothing's happening. And I don't know; I'm waiting, but for what? There are no feedback loops in the review process. Even fixing that one minor thing would be an improvement.
And then once it's done, you've got to push content there. The default workflow created by request-repo gives you multiple branches pointing to specific releases. We haven't actually needed to do that for a few releases now, but nobody is sure whether the new workflow that has been implemented works, so everyone's a little bit scared to jump onto it. The default workflow is therefore unnecessarily cumbersome for shipping packages that go into multiple releases, especially since, the last time I looked, a little over 70% of all spec files are identical across the stable branches, because most people don't really have the bandwidth to maintain them separately per release. I know that of mine, only two out of 160 are maintained differently across branches. That's a pretty low number.

So then let's go fire off the builds. It's supposed to be easy and nice, but it ain't, because after you fire off the builds, you have to submit them to Bodhi, since nothing actually happens after you've built them. And that part's not obvious either. Cool.

But then we get to the multi-build workflow, and this just gets worse, because I was only describing the single-build workflow. The multi-build workflow adds a new step: figure out the build order by hand, or keep hitting it until it works. I can guarantee you that most people in this room don't know how to figure out the build order, because it is not obvious or easy to do (see the sketch after this paragraph). Even I get it wrong most of the time, and I'm supposed to be somewhat good at this. And the override stuff in Bodhi involves a whole new set of steps that aren't correctly described anywhere, nor what I'm supposed to do with them. And then there's waiting. This isn't a human kind of wait; we have to wait for a machine to work out something it could do for us, so that we can keep going. If you have multiple overrides you need to put in place, this takes up to a day or so, depending on all kinds of server-side factors, because it's regenerating repos and things like that in the background. Nobody is supposed to care about that part, but that implementation detail matters.

But there's an alternative workflow that I've experienced a fair bit in my work over the last four years being involved in openSUSE alongside Fedora, and it might surprise you a little. It's: package the software, push it to the VCS. Side note: the VCS in the openSUSE OBS system is absolutely terrible, and don't ever look to it as a model of how to implement a version control system by hand, because there are a lot of things wrong with it. But once you push, it gets built automatically. All of the building, figuring out the order and what's supposed to happen, resolvability states, that's all done for you. You don't have to care. And then, once you have all of your stuff uploaded into your project, you just go ahead and do a submit request. If you know about pull requests or merge requests, this is the same model, except it's actually very much oriented around packaging. It shows you relevant diffs, collapses diffs that don't matter, but also lets you see the software changes. It has bots that run specific packaging checks, it shows human review queries, bot responses, things like that. It lets you get feedback in a way that's optimized for exactly this kind of work.
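That build-order step, by the way, is just a topological sort over BuildRequires edges between the packages in the update, which is exactly the kind of thing a robot should do for us. A minimal sketch, with the dependency data hand-written for illustration (real tooling would extract it from spec files or repo metadata):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# package -> packages (within this update) it BuildRequires
deps = {
    "libfoo": set(),
    "foo-tools": {"libfoo"},
    "bar": {"libfoo"},
    "baz": {"foo-tools", "bar"},
}

ts = TopologicalSorter(deps)
ts.prepare()
batch = 1
while ts.is_active():
    ready = sorted(ts.get_ready())   # everything buildable in parallel
    print(f"build batch {batch}: {', '.join(ready)}")
    ts.done(*ready)                  # pretend those builds succeeded
    batch += 1
```

OBS effectively does this for you on the server side; nothing about it needs a human.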
Now, bear in mind, this comes with the caveat that you have to have the human resources to make it work, and openSUSE has a pretty bad case of no resources. There are only four or five people who can review any and all packages going into Factory, which is their equivalent of Rawhide. So when you have 100 or 1,000 or 10,000 packages in the queue, it can take months for packages to actually land. This is amplified by the fact that in the openSUSE workflow, every single update goes through the review process again and again and again, and that includes legal review. There is a system with a bot, which I am working on trying to adapt for Fedora (it is slow going), that does a good chunk of the legal review process, and when it figures out that something is too complicated, it bounces it to a human. That lets them do license-rule checks and things like that automatically. So the last-mile check for the human is a spot check: making sure that maybe some human-specific things are going on, some policy things, that nothing wigged out in a weird way. They do install checks and things like that, make sure those are all good, and then hit the final accept button, and it gets merged.

Now, the only downside to their particular system is that when that happens, the commit history is lost on transfer to the other project, which is why they maintain a .changes file with the full changelog: it's the only authoritative source their system has. Since we use Git for our version control system, if we implemented an equivalent of this model, we could preserve that commit history, because of the way Git works; it is not completely stupid. It is only so much stupid. So we can keep commits as they come in from pull requests and things like that, and that's how it works now with Pagure dist-git. It's just that nothing useful happens after you merge a pull request. Who knew? Apparently nobody.

So some of the things Igor and I have been talking about, and this conversation actually started when we were at the openSUSE Conference back in May, is that we want to look at moving the changelogs out of the spec file. Wow, really? Out of the spec file and into Git, ideally as Git commit messages. I don't actually know how we want to do this, because one of the problems with Git in Fedora specifically is that Git commits are immutable, which means you can't amend the commit message, unless we figure out another way of doing it or decide that some aspect of commits is mutable. And I don't know what we would do or how. Annotations, for example, could be mutable while the actual commit is not.

I don't think anyone else would be happy about that, but I would agree with you on it. Yeah, sure, amending produces a brand-new commit when that happens. But what you were saying was basically: we accept the history as ugly and just commit it, and then we generate changelogs based on that, or use prefixes to match on, as the way to produce a valid message to ship out. My counterpoint to that, as someone who's actually worked in distributions that do use commit messages for changelogs:
In the Mandriva family, for going on almost 20 years now, all of their changelogs have been generated from the VCS. They started with CVS, they went to Subversion; OpenMandriva does it from Git, Mageia still does it from Subversion. The problem is that with Git specifically, this becomes really painful if you want to be able to identify the same build, the same iteration, but with a different message, because the act of changing the message changes the object, which dereferences it from the system.

The thought I had about this, and I'm not completely sure it would work, is that we never allow the real commit message to be used, and we always generate an annotation. If we annotate, and allow annotations to be edited, that lets us change the text. But I don't know how dist-git's rules work for annotations, and I'm afraid to experiment with our dist-git, because I don't want bad things in there. Yeah, this is bad. This is bad for your conception of the world. If we wanted this to work across a RHEL/CentOS/Fedora sandwich, we would actually need the real commit messages from the real dist-git that exists within RHEL to have a way to be mimicked when they're pushed out to CentOS, and that is going to be non-trivial. Possible, but non-trivial, because sending annotated messages via an API is weird and hard, especially if they get amended later, which will definitely happen.

Yeah, sure, we could do that, but I will point out that that is actually the approach SUSE uses right now. The commits going into the OBS VCS contain all the same information that's in their .changes file; the .changes file is a separate file, and at build time it's merged into the spec file to produce the RPM changelog. So that approach can work. The issue is that people have to agree on what the freaking file format is going to be, and that is a lot more work.

So, two things. One, you're wrong, because there have always been VCSes in the era of package managers; VCSes predate package managers. And package changelogs are intended to represent information that people can consume to understand what the hell happened. So the idea of obsoleting the package changelog is out of the question, because you're making the assumption that everybody has access to the same data you do. We know that's not true, because RHEL's dist-git is quite obviously not available for people to look at, and we know it's not true because CentOS ships less than enough information to build itself. So we know this happens, and that means we need a way to externalize that information. In the Mandriva family this has never been a problem, because from the very beginning of how the distro is built, they started with a VCS import and built everything that way, and as part of their process the changelog is externalized and copied into the spec file as it produces the source RPM. So there is absolutely no way anybody misses the information; they even include the revision it came from. All that information is there, and they actually give more detail than most other distributions that do VCS-based changelogs. That's sort of the model I want to aim for, because I want us to have more reproducibility, not less. So I don't think it's in the cards to say we don't do a changelog at all. Thank you for leaving me that, because I was going kind of blind here.
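One way to prototype that "mutable annotation" idea without touching commits is Git notes, which can be rewritten after the fact while the commit itself stays immutable. This is just a sketch of the concept, not anything dist-git supports today; the tag and helper names are made up:

```python
"""Render an RPM-style %changelog from Git, preferring mutable notes."""

import subprocess

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

def changelog_since(tag: str = "v1.0") -> str:
    entries = []
    fmt = "%H|%an <%ae>|%ad|%s"
    for line in git("log", f"--format={fmt}",
                    "--date=format:%a %b %d %Y", f"{tag}..HEAD").splitlines():
        commit, author, date, subject = line.split("|", 3)
        try:
            # A note, if one exists, overrides the immutable message.
            text = git("notes", "show", commit)
        except subprocess.CalledProcessError:
            text = subject
        entries.append(f"* {date} {author}\n- {text}")
    return "\n\n".join(entries)

# Fixing an entry later never rewrites the commit, only the note:
#   git notes add -f -m "Fix crash on startup (reworded after review)" <hash>
print(changelog_since())
```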
So the other bit is that the Release field is a nightmare by itself, because weird things happen as we build stuff and we have to keep changing it. People who don't know that failed NVRs can be rebuilt again with the same NVR keep bumping it, causing conflicts and churn as we do cherry-picks and other weird things with the Git workflow we have for dist-git. It's also a point of contention for figuring out how to do upgrades properly and how to version the package as a whole, and it's a problem if we want automatic rebuilds on dependency changes.

So what we're looking at is making it so that whatever you put in the Release field does not matter: it gets reset to 1%{?dist}, and it gets a suffix indicating the build, some kind of count of how many builds have happened from this last commit, since this last version bump, things like that. That information lets you understand how many changes have happened with the same NVR, how many rebuilds have happened with the same change, and you get a better picture of how many times something has been used. If it's a suffix, then the prefix being set by the build system's mechanism will be predictable, and if you bump a new version, it resets to one. There will be a set of rules you can actually rely on. Otherwise, you could add something like what the openSUSE folks do: they do a plus-one on the version for an Obsoletes, which serves the same role, because they're never really going to do a real plus-one version. But that's kind of dumb and based on really weird, hacky rules. Again, this is something we're thinking about; I don't have fully solid details on how it's supposed to work, but the Release field has to be solved some way, because otherwise we have no way of doing automatic rebuilds.

And this leads to creating tooling that will rebuild packages on dependency changes. I want to make it so proven packagers don't have to be heroes, because it sucks when we have to be. Fixing half of a dependency chain because something broke or changed, while the person who made the change doesn't have the power to fix it and has to beg, is terrible. We put people in a terrible position when we do that, and that should never be the case. We should never put people in a position of being afraid to ship new software, even into Rawhide, because they can't fix everything else that depends on it. And I think this is actually one of the underlying symptoms that caused the creation of the modularity stuff: a lot of people were saying, well, this makes it easier for me to make sure my stuff is always going to work, and then I don't have to deal with breakage as the distribution churns. But if this part were already handled for you, why would you care anymore? It's already fixing itself, and if it fails, you can fix your piece without having to worry about the rest. So maybe; maybe, I'm not sure, right? If it requires human intervention, then there's a problem, and that might sometimes be necessary. So again, I'll refer back to my experience working with OBS internally and in openSUSE: OBS actually has a policy control for how it does rebuilds.
It can rebuild based on whether a dependency changed, it can rebuild always, or it can not do it at all and require human intervention every time. That last one is the stupid mode, you should never use it, and it is the mode we operate in in Fedora. The first mode is the mode Tumbleweed and Rawhide should operate in, because that's the mode where everything gets fixed. The second mode is the way openSUSE Leap operates, and it's the way RHEL-like distros and stable releases should operate; it allows things to be fixed without everything blowing up. So we would need a policy control like that, but rebuilding on dependency changes absolutely needs to happen, because otherwise lots and lots of things will just subtly break.

And yeah, that's kind of it for that part, but if there are any questions, feedback, whatever... Sure. Why don't you talk now, since you're talking? Well, you wanted me to say all the words, so I did. But you came up with most of the grand plans; I just happen to have a better command of English.

So, about the Release field: Neal mentioned that we want to generate it, but he didn't mention exactly what we want to put there. Our idea is to put two numbers there. The first number is the number of commits since the version bump, and the second number is the number of rebuilds of that commit. So if you don't push any new commits, and that would answer Miro's question about how you handle Obsoletes, only the second number gets incremented over time. And you know the number of commits, you know when the version changed, so you actually know the first number; the second one is the number of rebuilds. We can do this in Koji when building the SRPM: we can look up previous builds, find the commit each was made from, count the commits in between, and put that number in the source RPM. So the source RPM you get from Fedora would already contain the number.

So, the question was: what will happen if we have two different side tags and we do builds over there and there? We did actually talk about that. If you just counted builds, you could count only the builds, but the problem is that the count keeps resetting; the policy is to reset it every time you bump a version, and if you do it in different tags it's difficult to count them, and it becomes really, really weird. We want a more or less linear path for understanding this, and counting the Git commits is, "stable" is the word I would use, for getting a number you can use for Obsoletes. The main reason for even caring about this is that we need a stable number for Obsoletes; otherwise I wouldn't care, because in the openSUSE world they don't do this at all. They just let everything dist-upgrade up and down, and they don't really use Obsoletes the way we do, so they never deal with this problem. And I don't particularly want to break that aspect of ours, because it's actually not a bad way to deal with package replacements and version downgrades and upgrades. Yep, because I don't want to do that.
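To make the two-number scheme concrete, here's a rough sketch of how that Release field could be derived at SRPM build time. The Git plumbing is real; the rebuild counter is stubbed, because where it actually lives (Koji, per the plan above) is still open, and the spec file name is made up:

```python
"""Release = <commits since Version bump>.<rebuilds of this commit>."""

import subprocess

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

def commits_since_version_bump(spec: str = "example.spec") -> int:
    # Find the last commit whose diff touched the Version: line...
    bump = git("log", "-1", "--format=%H", "-G^Version:", "--", spec)
    # ...then count the commits that have landed since it.
    return int(git("rev-list", "--count", f"{bump}..HEAD"))

def rebuilds_of(commit: str) -> int:
    # Stub: the build system would look up how many builds already
    # exist for this exact commit.
    return 0

head = git("rev-parse", "HEAD")
print(f"Release: {commits_since_version_bump()}.{rebuilds_of(head) + 1}%{{?dist}}")
```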
So I have actually worked in a distro like that, so I'm going to point out something I wasn't going to say, but you brought it up. Matt was asking: why not just ignore the E and V parts and only count the Release field for version sorting? I have worked in a distro that does it that way, and it is a bad idea, because it means you can never figure out what you're actually moving to. Yes, I don't want to think about that right now; an Epoch of 12 is insane. So, Solus actually works this way: their eopkg system, in the EVR comparison, only compares the release, and it always goes up. Except for the case where it doesn't, which is not important because it doesn't happen very often. But because the release always goes up, counted with their Git-commit method from the base of the tree all the way up, they also can't support pull requests, and that's actually the main reason they can even do this: your commit history has to be fully linear for that counting to work correctly. If you have merge commits and sideways trees, the counting gets really fucked up. We do have merge requests you can in fact accept, and we have places in our dist-git where people have done merge commits, and that messes up the Git counting of commits. Yeah, don't think about it too hard. Don't do this.

Ideally, in that case, DNF would just follow the repo source as a key for identifying where a package should come from, and then not care about other repos until the user says to switch. That's how zypper operates, or at least supposedly how it operates; it's a little strange about this behavior. They call it sticky vendor: they use the vendor key together with the repo ID to figure out whether something should follow a given source or not, and they have vendor classes and other fun stuff. But the point is, I don't want to mess with the EVR comparison too much, because it breaks too many user expectations, and it also makes it super hard for people to experiment.
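For the concrete difference, here's full EVR comparison next to release-only comparison, using the rpm Python bindings (python3-rpm on Fedora); the numbers are made up:

```python
import rpm

old = ("0", "2.0", "5")   # (epoch, version, release)
new = ("1", "1.4", "6")   # upstream regressed, epoch bumped to force the path

# Full EVR: epoch dominates, so `new` correctly sorts higher, and the
# version field still tells you that you're moving to older software.
print(rpm.labelCompare(old, new))   # -1: old < new

# Release-only: 6 > 5 is literally all you know about what you moved to.
print(int(new[2]) > int(old[2]))    # True
```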
Experimentation is important and valuable. If we make it so that everything must go through Koji all the time, there is literally no way for people to do really crazy things, because Koji doesn't support being crazy. It literally has no experimental mode, no way to do playground-type things properly; everything must wind up in dist-git, which means it's stuck there forever. There are too many problems with that to consider it. In the Solus workflow, they actually don't have a playground mode either; there's no way for people to do any of these things, and there are no third-party sources, because nobody makes eopkg packages outside of Solus, so it's not a big deal for them and they can have that kind of control. Fedora has an ecosystem, and that has to be considered.

Yeah, sure, give me a second to hit the buttons, because our tools don't work. No, they don't. But that's part of why: in OBS, this guarantees that builds get built and released. No, no, sorry, let me back up: every single build in OBS happens in a project that publishes, which means it releases, which means people can test it and use it and whatever. That part I forgot to mention. Because otherwise you don't know whether a build worked or not, right?

So there are several problems with what you're saying. The first is... well, that part is equivalent to our tagging model; that's not the problem. The problem is that we don't run any tests at build time, at all, ever. They happen at update time, which means everything happens too late. So if we just automatically built all the time like this, we'd essentially be pushing broken packages into Koji and reserving NVRs for no particularly good reason. That's not fine right now, because it takes up storage and those builds never go away. Yeah, but the tests happen at Bodhi, so that's still too late. It's still too late: you're literally not making it possible to replace a build in Koji, so it takes up space. Right, and in the OBS model, if the package doesn't need to be pushed out, the binaries are not made available and nothing changes; if it fails, nothing gets published and nothing changes; and you don't have to care.

The way our workflow does the automation side needs to be inverted: it has to happen early rather than late. That means it has to happen when somebody does fedpkg build, and that build has to fail, not succeed. If the build does not fail, and the NVR is not marked as failed in Koji, there is no way to reap the build, and that is a problem. In the openSUSE workflow, the assumption is that nothing is persistent until it is persisted into a compose, which is their publishing process for the repo, and that's the only part that actually matters. So when you submit a package and it gets built and the system says, oh, this is exactly the same as the previous one, it doesn't get published; who cares. Or if it's broken, if the post-build checks that run at RPM build time say there's something wrong with it, it fails, it doesn't get published, the artifact gets thrown away, nothing happens. That is a very important distinction we do not have in Fedora, and it's something we need in order to optimize things better. What we have now for Rawhide is marginally better than what we had before, but it still doesn't fix a lot of the fundamental problems with improving packager workflows. And the other part of it is that, because Bodhi is slow,
it actually takes longer for people to do multi-builds right now. I understand you're working on fixing that, but one of the simple things would be to mimic the devel-project model that OBS has, by letting people have side tags that represent it, so that the side tag can be the artifact that gets merged as the package update, rather than individual package artifacts. If that happens, it simplifies a good chunk of the pain points, because it means we can handle whole collections much more easily. Shipping the Plasma update is probably the worst-case scenario we have right now: it's somewhere close to 300 packages that have to be updated all together, and because it's so big they get chunked into two updates, and then, depending on the race of which one actually gets pushed out first, your desktop is broken. Congratulations, you have to wait 48 hours before it works again. KDE has an exception because of ABI compatibility; but behavior compatibility is a different matter, and when you mess with private ABIs, as long as everything is updated together it's not really a problem, except that Bodhi has limitations handling large updates and crashes a lot when you deal with them. Most of this is Bodhi crashing on large updates, not the policy forbidding it: the policy allows updates as large as all hell, it just doesn't work, and neither do multi-build updates with Rawhide.

This is something that's going to have to get fixed if we want to use Bodhi as our integration point, and I think we have to, because nobody has done any development on Koji to make it useful in that regard, so we have this weird bifurcation of our system. And because everything happens so late, we need a way to call back into Koji to mark a build as failed; that would actually let us do something approaching this without having to re-engineer the whole world. So I would say that one of the things that has to happen, at least for the testing and automation and gating work, is that if a test fails, the build has to be marked as failed in Koji, so that it can be reaped and replaced. If that can't happen, we're in trouble.

And that's one of the reasons why one of the things I've advocated for a long time is that we need to move these checks earlier in the process, before the publishing phase, which is what Bodhi does now; but we can't, because we have no tooling for it. Koji in Fedora does not publish to the mirror network. Right, but here's the thing: if you mark a build as failed, it can be reaped, it can be deleted, because it falls under the garbage collection. Yeah, that's what I'm saying: if it's marked as failed, it can be reaped. True, but that's why I want this to happen before builds get to Bodhi. One idea I had about doing this, before I was told summarily that Koji is never going to get modified and we're never going to fix this, was to add a separate step that happens after all the build steps, before the tagging that pushes a build out and makes it available to download: a post-build check step where the tests would run, and if they failed, nothing gets published, nothing gets pushed, and the build is marked as failed. But since everyone told me we can never do that, I dropped it. At that point, though, it's the same as a scratch build; that's the whole point.
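What that inverted, fail-early flow might look like, purely as a hypothetical sketch: neither hook exists in Koji today, and both helper functions are placeholders for tooling that would have to be written.

```python
"""Hypothetical post-build gate: fail the build so it can be reaped."""

def run_gating_tests(nvr: str) -> bool:
    # Placeholder: installability / ABI / legal checks would run here.
    return True

def mark_build_failed(nvr: str) -> None:
    # Placeholder: flip the build state to FAILED so Koji's garbage
    # collector can reap it and the NVR becomes reusable.
    pass

def post_build_gate(nvr: str) -> None:
    # The point: this runs *before* tagging/publishing, not after the
    # update is already sitting in Bodhi.
    if not run_gating_tests(nvr):
        mark_build_failed(nvr)
        raise SystemExit(f"{nvr}: gating failed; nothing gets published")
    # Only now does the build become eligible for tagging into Rawhide.

post_build_gate("libfoo-1.0-1.fc99")
```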
The thing, though, is that all the tests still act on an individual package, an individual build, no matter what form of multi- or single-build workflow you're in. If you decide to change that, then that changes some of the calculus for all of this, but I'm working off the assumption that we currently test every package with a common set of tests applied to everything. Now, if those tests run as a post-build step that happens in Koji, before it marks the build as tagged into the system, then there's a potential for avoiding the legal-review problem. That's what I was hoping for. We probably can't do that, so the next step would be figuring out how to not publish until Bodhi says it's okay, and I don't know whether that's actually possible either, because of how Koji currently does its process flow into Bodhi.

The reason OBS works the way it does, where it doesn't preserve any binaries except the published ones, is that they would run out of space very fast otherwise. For example, my OpenShift Origin package, from when I boarded all of OpenShift into openSUSE about a year ago, has been built 41 times in Tumbleweed, and each iteration of those packages is close to 250 megabytes. If OBS kept every single one, they would run out of space in my home project before they ran out of space in Tumbleweed itself. It was built 41 times in Tumbleweed, 25 times in Leap; all those builds add up. That is a major problem from a space-preservation point of view, especially since Fedora's storage is not cheap, because it's a NetApp appliance, and adding more space to it is not simple and probably not worth it.

What would we test? Yeah, we could do it that way too, but we still need the testing part, and tests don't happen for scratch builds, because there's no interface to kick them off from that, because that system works by shipping the bits off to actually run the tests. No, no, that only works if we disable Pagure's ability to do local pull requests. If every pull request came from another Git server, that would be less of a problem, but if it comes from the same Git server it doesn't matter, because everything has already been committed somewhere on the Pagure dist-git. Sort of; it's supposed to work that way, it doesn't always, but yes.
But the issue is that you're still building something on Fedora infrastructure; something is already going to be stored there, something that Fedora, and unfortunately that means Red Hat, is responsible for. So somebody is going to care about the fact that you just built a weird thing on there. And in order to go through the testing process today, it has to go to Bodhi, and scratch builds are forbidden from going to Bodhi because they're not supposed to be real; and if they're not real, they can't run tests, and since we can't run tests properly for them, we can't run all the tests. That's the problem. Because he made it work through gating? Yeah, exactly; that would be the ideal state, where it could actually happen everywhere.

But we also need to make sure that for anything people have decided is okay, there's a final check that it doesn't slip through; or if it has an extended check, like the legal check, that has to happen before it actually passes through. If there were an automated, magical thing for doing legal checks before anything is submitted into Fedora, it would be nice if it ran before final submission, or something that says: this is going to hang until somebody picks it up, or this is good, or this is just bad and we're not going to allow it, and then it stops it from being distributed. So there are multiple checks that humans may miss that still need to happen somewhere. The admitted status quo today is that we don't do them at all, but if we want to be better, we want to do them all the time.

So what is published in this case? Sorry; yeah, this is done-ish. Whatever. Anyway, we can talk about this more outside. This is an ugly, ugly talk.