Thank you, everyone, for coming. We'll get started, I guess. So we're here to speak about something big that landed just a few weeks ago: we are finally gating Rawhide packages. We'll go through the usual: why we want to do this, what the challenges are, what we currently have, and what's upcoming. We'll tell you about some of the issues we foresee, you will tell us about the issues we have not foreseen, and together we'll find a way to make this work somehow.

So, who are we? I'm Pingou. You may have seen me at previous Flocks, though not the one last year. I work in the Fedora infrastructure team, and I'm leading the Rawhide gating initiative. I'm working closely with Dominik, who's going to introduce himself. Dominik: I'm the Fedora CI Objective lead, and I work for Red Hat as a manager in the Fedora CI space, and also in the RHEL space for those of us who work at Red Hat. You may have already seen me around.

So this is all nice and easy: you know who we are, you know what we're talking about. But why are we doing this? The answer is simple: we want Rawhide to be more stable. We want Rawhide to be something people can actually use on a day-to-day basis, and we want to stop with the idea that Rawhide eats your babies and you shouldn't touch it; that it's OK to break Rawhide, that it's OK for Rawhide to be broken for weeks in a row. Because it actually is not. Think of Rawhide as our master branch. There is no deep reason it has to work the way it does today, but if you think of it as your master branch, you don't want your master branch broken all the time. You have CI for that. You fix things. You want your master branch working all the time. There is no reason for Fedora not to hold itself to the same standard.

So we want a more stable master branch. We want composes to work in Rawhide. There are frequent stretches where composes in Rawhide are broken for days, if not weeks, in a row. And when composes fail in Rawhide, no updates get pushed, which means all the builds and updates you have been carefully working on reach none of the people who actually run Rawhide. We have some of those people here; there is even one in the first row. Thank you, Kevin, for running Rawhide. Every time we break the compose, Kevin cannot get his latest packages, so he's very sad, and when Kevin is sad, it makes us sad as well.

There is a side benefit: if we make Rawhide more stable, we also gain at Fedora branching points. If Rawhide is stable and composes work, then the branch we cut from Rawhide for a beta release is also more stable, and we can stop making heroic efforts at branching time just to get the beta to compose. And faster update releases: that's basically what I was saying. In Rawhide, if you don't compose, you don't get updates, so we want to avoid that. There is also a mindset aspect: currently you can dump something into Rawhide that breaks the world, then go enjoy your weekend, or go on PTO, and leave people with a broken Rawhide for weeks. We want to move toward the mindset of: if you break it, you fix it.
But if you were at Neil's talk yesterday, you may remember he pointed out that not everybody can fix everything, and that's where Aleksandra is definitely right: we want to increase teamwork in Fedora. If you are working on something of significant importance and you don't have all the access you need, then you need to reach out to the people who maintain the parts you can't touch and work with them, to make sure your entire change, dependencies included, still works and passes. But if you do have the power, if you are a proven packager, then you have no excuse not to fix what you break. A bunch of our contributors are volunteers. Most of our contributors, including some who also work at Red Hat, are volunteers contributing to Fedora in their spare time: weekends, evenings, PTO. We cannot dump something into Rawhide and let someone else spend their time off fixing what we broke. There is no reason for me to push something to Rawhide that breaks the world and then let a contributor, someone giving the project their free time, go clean up my mess. So we have to be careful. People say on the devel list, and if you follow it you've seen this before, that it's OK for Rawhide to be broken. No, it's not. If you break it, you fix it. It may be OK for a part of Rawhide to be broken, but that part should not affect the entire project. It's not OK for me to break your work when you're contributing freely on your own time.

So what are the challenges in making this happen? First, it has been a long process. We have been trying to make Rawhide more stable for years, and we have been talking about gating Rawhide packages for a long time. So the first challenge was simply: let's make it happen, let's actually gate Rawhide packages. But we had a few requirements. We want to fit into the existing tooling: we don't want to reinvent the wheel, we don't want to create a new build system, we don't want to write a new Bodhi. We want to fit into our existing infrastructure. We also want to impact the packager workflow as little as possible. That matters because, as I said, contributors contribute in their free time, and the more change we impose on them, the more likely we are to drive them away, or the more time things take and the less interesting they become. So we need to be careful about the experience; we need a smooth user experience. That doesn't mean zero changes: we are changing the way our packaging workflow works in Fedora, so there will be changes for packagers, but we want to keep them as minimal as possible. And then there is a big challenge, which lands largely on Aleksandra and Dominik: false positives. How do we make sure the tests report proper results, so that packages are not blocked because a test is faulty rather than because the test is valid and the package genuinely should be blocked? That is the eternal question with any CI. If you've played around with CI, or with testing in general, you know about false positives and the trouble they cause. So how do we plan to do this? We plan to roll out in phases.
We start with a smaller change and then try to go for the bigger one. We want your feedback as early as possible. Even when we submitted the change proposal to FESCo, we did a call for feedback on the devel list, and we got very good input there. Some of it was already in our minds but not properly worded in the change proposal; some of it we had missed, so we added it. And we definitely want a polished user experience.

So, where are we? We are currently gating single-build updates in Rawhide. How does it work? This diagram is the entire workflow. It's unreadable, I know; it's meant to be unreadable here. The idea is just: the user is on the right, with ten systems behind it, and hopefully, as you can see, the user does not interact with most of them. Here is the simplified version, which is the one I'll run you through. When the packager does a build, it lands in a certain tag in Koji. A tool called robosignatory takes the build, signs it, and moves it to a second tag. There Bodhi sees it, creates an update from it, and waits for the CI system to report either that the update was tested and the tests passed, or that the update is not gated and therefore should not be blocked. If everything is good, the build moves into the Rawhide tag and you get your usual Rawhide experience. Dominik: just to reiterate and emphasize this part: the first step, the build, is the only one where you actually make a decision; the rest is automatic. It might look complicated, but by default these things just go through. No watching and chaperoning things through the process. Pingou: that's definitely right, it's all automatic. The other thing to consider is what changes from the workflow before gating, and it's basically the second box here: we were using two tags, we are now using three; we did not use Bodhi for Rawhide, we are now using Bodhi for Rawhide. That's basically the amount of change we've made.

Now I want to go a little into how the CI system makes the decision about what can go in and what cannot. The CI system runs the tests, and the results land in an application called ResultsDB. The three applications that are key, the three names to keep in mind, are ResultsDB, Greenwave and WaiverDB. These are the three services used to make a decision. The CI system sends the test results to ResultsDB. It's just a database exposing a JSON API that says: this build passed this test, this test and this test, and did not pass that test, and so on. Every entry in the database is about an item that was tested (a Koji build in our case), the test that was run, and the outcome of the test, along with a URL, so that when we tell the user "it failed", we can point them at the log explaining why it failed. Then we have Greenwave, which is a policy engine, a decision maker. It goes through a set of rules and, based on the results it finds in ResultsDB, it makes a decision. It has two sources for those rules. One is its own configuration file, managed in Fedora infrastructure, which contains policies, rules that apply to every package in Fedora.
So, if you were at David and Tim Flink's talk yesterday about rpminspect: if one day rpminspect becomes stable, reliable and useful enough for the community to say we want rpminspect to pass on every single package in Fedora, we will put that in Greenwave's own configuration file. Currently there is nothing in Greenwave's configuration file; everything is in dist-git, package specific. So you can make a decision per package: which tests do I want my package to be gated on? Which means that if you don't want your package to be gated, you just don't tell Greenwave to gate it, and that's it. It's entirely up to you at the moment. You put a gating.yaml file in your dist-git repo; there is documentation online on how to do that. That's what I was missing: I wanted to add a link to that documentation in the slides, so I'll have to do that. So you put a gating.yaml in your dist-git repo that tells Greenwave: I want to gate on these tests and these results. When Greenwave receives a new result about your package from ResultsDB, it checks that file and asks: are all the rules satisfied? Yes, no, or some results are missing. And it announces the decision. Bodhi listens to Greenwave's messages, and every time Greenwave announces that a build can go through, Bodhi checks whether there is an update corresponding to that build; if so, the update goes to stable. If the build did not pass CI, Bodhi blocks it, and the user can go and waive the test results.

Waiving is basically saying: I have checked the test results, and I know they failed, but they should not have failed. This is a false negative; they should be passing. It was something in the CI system itself: there was a network issue, a disk issue, something could not talk to something. We try to minimize that, but, Schrödinger's cat and Murphy's law being what they are, it happens. One way we want to avoid waiving tests in the future is being able to retrigger a test: if you hit a network issue, you should be able to look at the logs, decide this doesn't look like an error in the test itself, rerun it, and see if it passes that time. But sometimes you end up in a situation where you really do want to waive the results, and that's what WaiverDB is for. You, as a user, tell WaiverDB: this test result here can be ignored. Greenwave then receives the notification from WaiverDB, again pulls the data about your build from ResultsDB, and says: if I ignore that test, everything else is clear, so the update can go through. Bodhi picks up the message and pushes it to stable.

I included some links here. You probably don't want to click on them, because they are APIs meant to be consumed by computers, not really by humans. ResultsDB has some sort of UI on top of it, which makes the JSON prettier, while Greenwave and WaiverDB are really meant to be used by computers only. But if you're ever curious about where they live and what they look like, feel free to have a look.

Dominik: maybe from my hand-wavy perspective, some of the background for this: there is a pretty nice separation here between the tools that implement this and the policy. Even I, who don't know these tools in detail, can go in and understand what the rules are.
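To make the per-package rules concrete, here is a rough sketch of what dropping a gating.yaml into a dist-git checkout can look like. The test case name is purely illustrative, an assumption for this example; check the gating documentation for the names your CI systems actually report.

```
# Sketch of opting a package into Rawhide gating from its dist-git checkout.
# The test_case_name below is an assumption for illustration only.
cat > gating.yaml <<'EOF'
--- !Policy
product_versions:
  - fedora-rawhide
decision_context: bodhi_update_push_testing
subject_type: koji_build
rules:
  - !PassingTestCaseRule {test_case_name: fedora-ci.koji-build.tier0.functional}
EOF
git add gating.yaml
git commit -m "Opt into Rawhide gating"
```

With a file like that in place, Greenwave will refuse to let a build through until a passing result for that test case shows up in ResultsDB, or the failure is waived.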
If I'm a packager, I can put a rule in my package saying what I want to be gated on; I have control over that. Looking at Fedora as a whole, as FESCo or the community, we can set central policy that applies to everything, without changing these tools. And as a packager, or a proven packager, or whatever, I can use the waiving mechanism to say: whatever these tools are telling me, I know better. Because we should trust our judgment and our experience over these tools. So, to summarize the levels: you can set the policy for your packages or distribution-wide, and you also have the control to waive and say this is what the result should be, not what the tools are saying.

Pingou: I just wanted to come back to the waiving part a little. We still want waiving to be the last resort, but we understand that sometimes it's needed, and to keep the user experience good, waiving is actually pretty easy. You have the Bodhi CLI, and you can just say bodhi updates waive, then the identifier of the Bodhi update and a comment on why you are waiving it. My favorite, when I run a test through the entire pipeline to make sure everything works, is the comment "this is fine". You probably want something a little more descriptive, because we want to be able to go back to these waivers and ask why they were made: because the tool did not behave as expected, because there was a networking issue, a drive issue, because I knew better than the tool. Dominik: as an example, we use this internally as well, and at least in the beginning the most commonly used comment on a waiver was "this is an example waiver". You can use that, but a month later, trying to understand why exactly you waived that test result is going to take some debugging. It's like writing "TODO: add comment" or "TODO: add description". Just don't do that, please. Pingou: and one thing I want to point out is that with bodhi updates waive you can waive failed tests, but you can also waive missing tests. That's especially important right now, while things are still in progress and we can't re-trigger tests yet. So if for some reason the CI pipeline or something else broke and no results show up after a certain time, you are able to say: you know what, just let that update go through. I know something is wrong here, it has been reported; let it through so I can keep on with my work.
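As a sketch of what that looks like from the CLI: the update ID below is made up, and the exact flags can differ between Bodhi versions, so check bodhi updates waive --help.

```
# Waive the failed or missing gating results on an update, with a comment
# explaining why. The update ID here is made up for illustration.
bodhi updates waive FEDORA-2019-1a2b3c4d5e \
    "CI lost network connectivity mid-run; not a package bug, see infra ticket"
```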
So we got some early feedback. The first item is that it works, so: yay! The second was that if you do a single update in Rawhide, you currently get between five and seven emails, and for people who do a significant amount of work in Rawhide, that is a little too much information to receive by email. There is something else we found out, not feedback but a discovery: if you use a buildroot override, your build ends up in a different tag than any of the ones on this slide, which means nothing moves it into the next stage automatically, and the build, and therefore the update, ends up stuck. Fixing this is work in progress; for now, do not use buildroot overrides. Don't. You shouldn't need them, because everything goes through without them. If you think you need them, you are probably really looking for a multi-build update, in which case you don't want to opt into gating yet, because that part is not released; we currently only support single builds. So if you're reaching for a buildroot override, something is probably off.

Something we saw very clearly: we released single-package gating on a Wednesday, right in the middle of the mass rebuild for F31. When the mass rebuild builds landed in the signing tag, well, when you have 20,000 builds, robosignatory takes them one by one: shall I sign this? Yes, no. Shall I sign that? Yes, no. That takes a while. And when the queue is filled with 20,000 builds, the Rawhide builds coming in behind them just sit in the queue, and it can take quite some time until your build works its way up to being the next one processed. So we will need to make some adjustments around signing mass rebuild builds. What will that look like? We don't know yet. Dedicated signing? A worker-based model for signing? It's open. I'm happy we found this out now rather than later, because it will also impact multi-build gating, but it is something we need to fix.

On the roadmap, we are still polishing the single-build experience as well as working on multi-build updates. I'm happy to tell you that in Bodhi, in git, we have already fixed the notifications, so for a Rawhide update you will get between two and three emails, depending on whether your tests pass, instead of five to seven. Fewer emails for you. We may reduce that further; we kept some notifications because we've seen they are useful, and if the community thinks they really are not, we can always look at reducing the number again. We have also fixed the buildroot override issue, so you shouldn't need to avoid them anymore, and if you do get stuck, we now have ways to unstick you. Dominik: one comment on the emails. We are also looking at a CI dashboard, and at where the trade-off sits: what is useful to people, whether you want to opt into emails or would rather check a dashboard. That's a balance we'll have to find and iterate on, so when you see things happening there, watch the lists and tell us what's useful to you. Pingou: one other thing we know we are currently missing: when you create an update for a certain build, you have no way of monitoring what's going on. The CI pipeline actually announces "hey, I'm starting to work on this build", so we want to comment on the Bodhi update saying: if you're interested, this is where your tests are running, you can monitor them over there. That's something in the pipeline; I haven't put it on the roadmap slide, but it is in our heads and we want to get it in sooner rather than later.

Okay, and the next big thing we are working on is the multi-build update.
So this is, again, how it looks; again, it's unreadable, again it involves ten systems, and again I have made a simplified version. The user has to do a little more work here. The first step is to create a side tag. A side tag is the place where you do your work in an environment isolated from everything else in Rawhide, and it will contain everything you want to have tested. So you create your side tag. The fedpkg work for this just landed, thanks to Lubomír, who I'm not seeing in the audience, so you will be able to do a fedpkg side tag request or create, I forget the exact syntax, which automatically creates a side tag for you in Koji. Then you do your usual git commit, git push, fedpkg build, but you need to pass --target with the side tag you created as the target for your builds. Once you're all done, when your 10, 5 or 200 packages are rebuilt, you go to Bodhi and say: please create an update from this side tag, passing in the name of your side tag. Bodhi interrogates Koji: which builds are present in this side tag? It comes up with the list of builds, moves them to a dedicated signing-pending tag where robosignatory picks them up, signs them, and moves them to a testing tag, at which point Bodhi picks up the list of builds again and notifies the CI pipeline: this side tag, with these builds, is ready to be tested. The CI pipeline does its magic and reports to ResultsDB and Greenwave, we come back to the usual suspects, and Greenwave tells you: this list of builds you told me about, this side tag, they are all good to go.
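Pulling the steps together, the multi-build flow could look roughly like this. The side tag name is made up, and since this tooling was still landing at the time of the talk, treat the exact subcommands as assumptions and check fedpkg --help and bodhi --help.

```
# Ask Koji for a side tag branched off Rawhide; fedpkg prints the tag name.
fedpkg request-side-tag

# Build each package of the set against the side tag instead of Rawhide proper.
fedpkg build --target f32-build-side-1234

# Once all builds are done, create one Bodhi update from the whole side tag.
bodhi updates new --from-tag f32-build-side-1234 \
    --notes "Rebuild of libfoo and dependents for the new soname"
```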
Question from the back. I'll repeat the question while Aleksandra picks up the microphone to answer it. The question is: will it be possible to define tests for a set of packages, rather than for individual packages, and do we want to do that? I'll say two things before handing over. First, you're preempting my next slide; that was the very last question on it. Second, here is the mic. Aleksandra: for now, the first iteration is simply that a side tag passes if every package in the side tag passes its individual gating policy. That's the rule, because we configure policy per component. But obviously that's not where we should stop, and there are discussions about implementing a more generic gating policy; we still need to work out where it lives and how to define it. Technically it's not too complicated to define additional rules, but we really need to figure out the user experience around it: where to put it, how to update it, how people will manage it. So if you're interested in this topic, follow the Fedora CI and Fedora devel conversations, provide feedback and participate. Dominik: one aspect to also think about there is why you would want specific tests for a set of updates. Why should they not be part of the individual packages as well? The answer from Stephen was that there are ecosystems out there that have a whole set of tests, and we had the talk yesterday about Fedora Server where that came up as well.

So I think the trade-off to think about is: is this another test I want to add to specific packages or sets of packages, or is it something I want to monitor separately? For example, say I have rpminspect running on the entire distro, and the CI pipeline running for all packages that have tests in dist-git, but I might also have a separate test called "Fedora Server" that provides feedback whenever a specific set of packages is involved. There are multiple models we can think about; for now, what you can definitely do is add tests to the packages involved and start from there. Pingou: there has also been some reflection about gating composes, and I believe, correct me if I'm wrong, that Greenwave already publishes decisions on whether composes pass or not. Mohan confirms that our composes are already gated with the same tooling. Dominik: gating composes is definitely a good augmentation. What it shouldn't replace is testing before something actually lands, because that is the point where we can fix a change before it breaks someone else, and that's the place we all want to be, where we're not under time pressure.

There was another question here: what is the expectation for how long these tests run? Pingou: I have a package, fedora-gather-easyfix if you want to look it up, with a very simple tests.yml; all it does is call the fail module from Ansible, so it starts and immediately crashes, which lets me test the actual gating mechanism. The pipeline overhead around that is about eight minutes. An audience member responds that their suite takes six hours, partly parallelized, and would otherwise take maybe 12 to 14 hours, but that they then get real assurance the stuff works, because it runs a thousand or so real deployment-type cases; so where is the balance? Dominik: so the question is how long tests should run, what's acceptable. I think that varies per package, because essentially what you're saying as a maintainer is: I'm willing to wait this long for my change to go in. How sure do you want to be? The answer is probably different for bash than for GCC or something more complex that involves a lot of other components. The questioner points out that this is exactly why they are asking what's acceptable: we need a policy as a project. Dominik: where policy comes in, and I'll touch on this in the Fedora CI Objective talk later when we talk about reverse dependency testing and running tests from other packages, the short answer is: it takes as long as it needs to take. A good guideline: if we look at this as part of the developer workflow, more than an hour or two starts to feel really long, and past a day it's questionable how useful the feedback still is. But there is a flip side: if the infrastructure goes down, you shouldn't say "this takes too long, let's just waive everything". So the guideline remains: as long as it needs to take. If you think a test is essential, that's how long it takes; if it's not essential, it doesn't need to be included.
Audience member: for our updates we test reverse dependencies; there's a list of packages, things like SSSD, NSS and whatnot. It's about one hour for everything: upgrading the operating system, client and server, and rebooting machines in the process. If we add the same to Rawhide gating, that's probably okay. But if we want to add something more, that should hopefully be post-compose testing that reports back when rarer use cases fail. Dominik: the differentiation between rare and not-so-rare cases is a valid one, I think. But the question to ask yourself here is not "how long should I have to wait?" but "what happens if I don't catch something, and everyone else can't use Rawhide because of what I did?" That's the hidden cost: blocking everyone else and breaking everyone else's workflow. Aleksandra: I would actually question the claim that there should be a unified policy on testing time. The good thing about gating is that while you haven't landed in a compose yet, this is your part of the process, and you don't block other people from doing their work on the compose. So it actually gives you the freedom to decide, for yourself and for your level of involvement, how much waiting you are okay with and how beneficial these tests are for you. Our current framework allows full flexibility: we can use various CI systems in various cases, and configure them per component or per group of components. We need certain guidelines for generic tests, but for component-specific tests we can allow very different policies. Pingou: when it comes to the time it takes to run the tests, since the tests are in your hands, it depends on how long you are willing to wait. If you start to affect other people, for example by making your tests run on your dependencies without consulting the people maintaining those dependencies, that's probably a problem, but it's something to communicate about with those people. If FreeIPA's tests run for six hours and that only impacts FreeIPA, and all the people working on FreeIPA are okay with that, then I think it's fine. If the OpenSSL folks say they don't want to wait six hours and would rather have the one-hour suite, then maybe you run the one-hour suite for OpenSSL and the six-hour one for FreeIPA. It's a matter of communication, and of having the right places to have that conversation. Dominik: two things here. One, what will probably help, especially as you do more testing and become more involved, is the Fedora CI SIG, to exchange with others who do this and see what works and what doesn't. And based on my experience, the minimum time you should wait is slightly more than what developers say they are comfortable with: if you ask someone how long they are willing to wait for test results, you should definitely wait longer than their answer. That's the lower bar. Question: so what happens in the case we've been discussing, where something like FreeIPA, with a long test suite, lands in Rawhide and breaks it?
What happens if something with a long test suite lands in Rawhide and breaks it in a way that was not caught by the long test suite, so you have to wait all that time before a fixed build can be tested and land? Pingou: yep, there it is on the slide: you can always waive missing results. If you need to get something through quickly, you can waive results, including missing ones. Of course, with the caveat that when you say "let me just add this quick fix", make sure it really only fixes that one problem and definitely doesn't introduce more. Dominik: I would recommend running at least a basic set of tests and using judgment before waiving. It's a valid case for waiving; I'm just saying be careful, especially when the reason is "let me just do this really quick", or "there's a deadline", or "let me just add this one fix". Those are the classic cases of... oops. Pingou: the complementary view is that if Rawhide is broken for six hours, that is still better than the state today, where it can end up broken for weeks.

Question: one small question on the whole process: is it possible to cancel a build while it's being tested? Pingou: the build itself is already finished, so you cannot cancel it; it has already been built. But if you bump the release and resubmit, the tests for the old build will still run, but the new build will take precedence over the old one in Bodhi. Canceling the obsolete test runs would be an optimization for the CI pipeline that we may want to look at at some point; that's something we can discuss.

Question: I've never written a gating test. Say I created a gating.yaml and a tests.yml: how can I trigger the tests without actually pushing a real build and update through? The same way that, when I'm writing a spec file, I can do a local build in mock, can I somehow trigger the tests so I know I wrote them correctly? Aleksandra: there are several options. We are still lagging behind on documenting them properly, but we will work on that. You can create a pull request in Pagure; the CI systems will pick it up and provide feedback on the pull request, so you get your test results as a flag there. You can also run scratch builds, and I think some of the CI systems are triggered on scratch builds, though probably not all of them right now. And you can run the tests locally, but you need to follow a doc for that, which we are going to publish. Pingou: there is some documentation on how to run the tests locally. There is probably room to improve it, and I think we should also look at providing some sort of wrapper to make this easier. But what we don't have right now, as far as I know, is proper validation or pre-checking of the test definition itself, which I think is what you're asking. Yes, there is no lint for the YAML files yet.
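As a very rough sketch of the local option: under the Standard Test Interface, the tests are Ansible playbooks living under tests/ in dist-git, so a local run can look something like the following. The package name and paths are illustrative, and the exact invocation depends on your test type, so follow the published docs rather than this sketch.

```
# Rough sketch: run a package's Standard Test Interface tests locally.
# Package name and paths are illustrative; see the Fedora CI docs for the
# exact invocation for your test type (classic, container, atomic).
sudo dnf install standard-test-roles ansible
fedpkg clone mypackage && cd mypackage/tests
sudo ansible-playbook --tags classic tests.yml -e artifacts=$PWD/artifacts
```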
So you've already given us quite a bit of feedback here, and there are some things we are aware of ourselves, especially with regard to multi-build updates. There is the question of side tag proliferation: how will Koji handle it if a lot of people use side tags in Rawhide for their development work? That's something we will need to test and observe. I mean, if everybody uses side tags to do their work, that is a good problem to have, but I would rather we anticipate it a little than hit the wall because Koji cannot cope. There is a big question about merging side tags: how do we resolve conflicting builds, and how do we give the user feedback with instructions on how to fix them? There is a question about race conditions between Koji and Bodhi, because Bodhi stores the list of builds present in the side tag; that is what it sends to the CI system and what it receives information about. What happens if, just before the decision is made, the user realizes "hey, something is broken, I missed one package" and pushes one more build? Suddenly there is one more build in the side tag that Bodhi was not aware of. So we need to be very careful that what Bodhi pushes to stable is exactly what is present in the side tag, and minimize the possibility of race conditions between the two. Then there is the whole question Stephen raised earlier: how do we test side tags? Do we consider them the sum of all the individual package tests present in the side tag? Can we create side-tag-specific tests? How do we handle that? I'm not going to go back over it, because Aleksandra already covered it. We also have some points of attention for testing itself, and I'll let Dominik take that.

Dominik: maybe just to add on to the previous slide: the goal here is not to have something perfect. We want something that improves stability and makes our world a better place, and there will be a lot of points where we can improve it; if we iterate together, it will get better. But the goal is not to wait until everything is finished and then have the system that solves all the problems; that would probably take a long time to wait for. Pingou: you mean the goal is not to wait until everything is finished and then deliver the system that solves all of the problems? Dominik: yeah, exactly.

So with testing there are a lot of aspects to think about. When we do have tests, we have to think about stability: how stable are our tests, how stable is the testing infrastructure, how reliable are things? All of these play a role. Some of them we can influence, some of them you can influence, and some are easier to influence than others. Feedback is welcome on all of this, but we have to tackle specific problems to solve them. Then there is the impact of these tests, something to keep in mind: when you do this, you are of course slowing down individual package development. The benefit is not breaking others, and giving yourself the time to fix things and address issues. But also, as we ramp up and as you contribute tests, think about what your test does for others. If it runs as a reverse dependency test on other packages, can people understand the results? Do they know what it does? Can they interpret the outcome? How can others contribute if they see an easy fix? We are also aware that Fedora runs on more than one architecture. Do our Fedora tests run on multiple architectures right now? I don't think any do. That is definitely something we want to do; it is a question of prioritization.
So if you have a case where multi-arch testing is really important, you can reach out to us, and the answer will most likely contain, somewhere, the words "contribute" and "please". We are aware it's a thing, but coverage, and getting things in for at least one architecture, is a prioritization we made; if you have feedback on that decision, please reach out to us. Maintainability: I'll get to this a bit more in the CI Objective talk at 3:30, but think about how maintainable tests are. If you add something, especially if you contribute a test to another package or take something to the upstream, think about what it would take to maintain it. Does it make sense? Can I augment something that already exists? Can I take my tests upstream, even further upstream than Fedora? The further upstream, the better, usually. Otherwise the same principles apply as with other code: write maintainable tests the way you would write maintainable code. That brings me to scope: of course we are aware that we don't just have packages but also modules and containers, and those are tested too, but to different extents, because the infrastructure tied to them is simply different in some cases. This is mostly a question of priorities, not that we think one is more important than the other; we just have more RPMs right now, and modules and containers are built from them, so packages seemed the logical place to start.

And with this: well, you've already been very helpful, but we would like to invite you to help us more. We need more testers. Take a look at the current workflow, tell us what's right about it, tell us what's wrong about it; don't forget the first part, please. If you want to test multi-build updates, reach out to us. We will be looking at deploying this in staging in the coming months, so if you're interested, if you have a set of tests or packages you want to test with, again, reach out to us; we will work with you and see how to get it going in staging. It's going to be a fun ride, so don't hesitate. Give us your feedback, that's the main thing: try it, tell us what works, what doesn't, and how we can improve it. Dominik: and also your workflows. Think about what Aleksandra said: you have the freedom to choose which tests you run, how much you run, and where you run them. Use that to decide what fits your workflow, and if you're unsure how that fits with what we've implemented, talk to us and we'll help you figure out whether it fits into what we've built. If it doesn't fit, do we need to change what we've built? Maybe. Or maybe we have suggestions on what you could do differently, and we can talk about how to make it work. And with this, we would like to thank you and take any questions left. Mohan?

Question: two questions. First one: what is the process when the CI just fails? If I say bodhi updates waive, blah blah blah, does it automatically go up to stable, or do I need to do something else? Pingou: so the question is what happens when the tests fail and I do a bodhi updates waive. When you do bodhi updates waive, you are actually sending a request to WaiverDB, via Bodhi, to waive the failed tests.
With bodhi updates waive you can specify a single test, or all the tests, which is the default, whether they are missing or failed. So you're sending something to WaiverDB. WaiverDB sends out a message that a new waiver was added; Greenwave picks it up and says: this waiver concerns that test result, which concerns this package. What are the rules for that package, and what are its test results? If I ignore the test result I was just told to ignore, what is the outcome? If the decision is still negative, nothing happens. If it becomes positive, the package can go through: Greenwave announces it, Bodhi picks that up, moves the update to stable, and pushes the build to Rawhide.

Question: and the second question, related to testing: are there any plans to add distro-level testing, especially for things like a soname bump that breaks the compose? That's the most common thing. Dominik: so the question is whether there are plans for distro-level testing, especially around soname bumps, broken dependencies, that kind of thing. Yes; more on that in the Fedora CI Objective talk. We also have distro-wide tests being developed right now, like rpminspect with David and Tim, which they presented, and more options are coming; if you have more ideas, that's a perfect place to contribute, because the system allows you to plug in other systems that provide feedback and test results. There is no single place you have to put them; the messaging interface stays the same. And the intent is that we can start deploying such systems, watch the results, and then decide as a community whether we want to gate on them. I think that's the key difference. Pingou: was there another question? No, not anymore. We are right on time, so if there is one last question we are happy to take it; otherwise, as always, we are happy to let you go to the next session.

Question: okay, so if I have a lint-style check, something that should be tested but shouldn't block the package from going through, is there a good way to see the results for the whole distro for these lint-style checks? Pingou: so the question is how to get distro-wide testing without gating, is that correct? You can add test results to any build. You can basically run tests on anything, report them, and integrate that into ResultsDB. They will show up on the Bodhi test results page, which is basically how Taskotron and OpenQA have been working so far. And as long as you don't list that test in a Greenwave policy, whether the distro-level one or a package-level one, it won't be used in the gating decision. Dominik: and individual packages could opt into gating on it by choice, for example. Question: and is there a good way to see those results distro-wide? Pingou: basically, yes. ResultsDB holds all the information, so that would mean querying ResultsDB for all the results for a certain test over a certain period of time and seeing how the trend goes. So there are ways to monitor that.
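At its simplest, that distro-wide view is just an API query against ResultsDB; here is a sketch, where the testcase name and query parameters are illustrative and the exact options are in the ResultsDB v2.0 API docs.

```
# Pull recent results for one testcase across the distro and show the outcomes.
# The testcase name and parameters here are illustrative.
curl -s "https://resultsdb.fedoraproject.org/api/v2.0/results?testcases=dist.rpmdeplint&limit=50" \
    | jq -r '.data[] | "\(.outcome)\t\(.data.item[0])"'
```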
Question: on the Bodhi update page, do we show only the test results that are considered for gating? Pingou: we are showing all of them. Okay, Mohan gets the last one, but we should stop after that. Question: say certain results, certain tests, should not be waivable: are there any plans for a policy about not being able to waive certain tests? Pingou: Dominik seems well placed to answer that one. Dominik: so, that is not implemented right now. I would say that in the future it would make a lot of sense to restrict certain waivers, but that would be a decision for FESCo and the community to make. Technically it is possible; it is definitely not planned for the near future, and it would not happen without community involvement. This would need wide consensus, in my opinion. Pingou: and I think with this we're going to close, because it's about time. Thank you, everyone. Dominik: thank you very much.