Have you already seen what's coming? Hi! Welcome to the last talk of the 2020 distributions devroom. This talk is going to be on Fedora Rawhide package gating. So, this being the last talk of the last day, I am of course very thankful that you want to be present in the room. We're going to be speaking about the development version of Fedora, which is the Rawhide branch, and how we've implemented gating packages in Rawhide. I'm not going to go into which tests are used to make the decision about whether a package should go through or not; instead I'm going to run through the architecture and the mechanisms that we had to put in place to be able to gate. The question of which tests we gate on is something we can discuss afterwards, but it's not what we're discussing here. So what are we going to do today? I'll start by quickly going through a glossary, because while I see some familiar faces in the room, I see some unfamiliar faces as well, and I don't know how familiar you are with Fedora, so I'd like to give you a bit of context about what we're speaking about first. Then we'll go into why we want to gate Rawhide, what some of the challenges were that we had to face, and some of the constraints that we put on the project. Then, of course, some things will break, so there's a section on debugging: how you can unblock yourself in some of the corner cases. Ideally, you know, this is a great presentation, everything's working great, so there shouldn't be a debugging slide, but since I've gotten some questions about it, it's a good thing to have. Then I'm going to explore where we want to go once we have this in place.
And then I have a small surprise about some related experiments that we've done here as well. Then we'll come to the questions; there are a lot of questions I'll be sharing with you as well. So, first, what is Rawhide? Rawhide is basically the development version of Fedora. It's a rolling release; it's a release that never gets released itself. It's composed every day, you can find it on the mirrors, and it has new packages coming in every day. Then, how does one package for Rawhide? There is something that we call dist-git, and this is basically the term we use for a collection of git repositories. Every single package in Fedora has a corresponding git repository, in which the packagers are maintaining the spec files and patches. There is also a web service sitting on top of that. Question: so is this a GitHub-like platform? It's kind of like that, but they are plain git repos. What sits on top of them is a web interface that we have designed, but the core infrastructure is basically two elements: a plain git server and what we call the lookaside cache, where the packagers upload the tarballs from the upstream projects. So it's really a collection of git repos, not a central one, and each one is dedicated to a package. There are some limitations: you can't force push, for example, if that's what you're thinking about. But these are configurations that you can apply to any git repository, so there's nothing specific to our architecture there. Then, once you have your dist-git repo, with your spec and patches and the sources from upstream, Fedora uses a build system.
So, everything that is shipped in Fedora is built in a controlled environment, which in our case is called Koji. That's where everything is built. Builds are managed with tags: a build in a certain tag is in a certain state, and if it moves to the next tag, it moves to the next state. So, basically, the tags give you the state of the build. If it's in the f32 tag, that means it's available in the buildroot for Rawhide, which is currently F32. Something that we're going to come back to are what are called side tags. They are basically tags which sit next to the usual main ones, and they are used to be able to do work without impacting the main tags. They are isolated from the users, but they rely on a base tag: there is a hierarchy between the tags, so you have the base tag, and the side tag sits on top of it. From a given tag you can always access what is in the tags below you in the hierarchy, but you can't access the tags which are above you. Then we have our update system. Once we have made a build, we can say: I want this build to go out to the users. This is handled by an application called Bodhi, and basically every packager gets to choose whether a build goes to the users or does not go to the users. That is true for every stable Fedora release. That is not true for Rawhide: in Rawhide, as soon as you build something, it goes to Rawhide. So there may be builds which are wasted, builds that did not need to go to the users, but the mechanism of Rawhide has always been: if you build it, it goes to Rawhide. In stable releases, you can build something and say: well, actually, I don't want to push this one. One of the aspects there as well is that if I push something and then realize it broke, I can unpush it up to a certain point, so it does not affect our users until we are reasonably sure it won't. We also have an AMQP-based message bus called Fedora Messaging.
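The side-tag inheritance described above can be sketched in a few lines. This is a toy model, not real Koji: the tag name and build NVRs are made up for illustration, and real Koji keeps all of this in its database.

```python
# Toy model of Koji tag inheritance: a side tag sees the builds of its
# base tag, but the base tag never sees what was tagged only into the
# side tag. (Illustrative sketch only.)

class Tag:
    def __init__(self, name, base=None):
        self.name = name
        self.base = base       # the tag we inherit from, if any
        self.builds = set()    # NVRs tagged directly into this tag

    def visible_builds(self):
        """All builds reachable from this tag, walking up the chain."""
        seen = set(self.builds)
        if self.base is not None:
            seen |= self.base.visible_builds()
        return seen

f32 = Tag("f32")
side = Tag("f32-build-side-1234", base=f32)
f32.builds.add("bash-5.0-11.fc32")
side.builds.add("bash-5.0-12.fc32")

print(sorted(side.visible_builds()))  # the side tag sees both builds
print(sorted(f32.visible_builds()))   # the base tag sees only its own
```

This is why work done in a side tag is invisible to Rawhide users until it is merged back.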
Well, Fedora Messaging is basically a wrapper around AMQP to make it easy to send messages on the bus. This is heavily used for making the different applications react to each other: one service will listen to messages from Koji, Koji may react to messages from another source, and so on. And finally, we have something called RoboSignatory. It listens to the messages coming from the bus, notably from Koji, so every time there is a build it will be notified of it, get the RPMs, sign the RPMs, and move the build to the next tag in Koji. So, why do we want to gate Rawhide? Well, simply put, Rawhide is, and I think that's fair to say, fairly well known in the Fedora community as something which is not stable. I don't think it's wrong to say that. It is wrong, though, to accept it. There is no reason why Rawhide should not be stable. The only reason we allow Rawhide to not be stable is because we don't have ways of grouping changes. If I build something and it bumps the soname, then I need to rebuild all the dependencies, but that can take me some time, and during the time that I'm rebuilding all the dependencies, well, Rawhide is broken. That leads to situations like F29, I believe, or F30: when we branch, we branch off Rawhide to make the stable releases. When we branched off Rawhide for F29 or F30, I forget which, we did not have a working compose for about a month, because Rawhide was broken when we branched, so we branched a broken Fedora. So there is no reason why we want to keep Rawhide broken, and gating is about preventing this breakage from happening. We want a more stable Rawhide, and we get working composes. This means when we branch off Rawhide we get a working compose, we get a working F31, F32 for the next one, and we can already work on that product. That also means faster updates for Rawhide itself, because in Rawhide, updates are pushed out to the mirrors as part of the compose.
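The publish/subscribe pattern that lets these services react to each other can be sketched as a toy bus. The topic and message fields below are made up for illustration; they are not the real Fedora Messaging schema.

```python
# Minimal sketch of the pub/sub pattern Fedora Messaging provides on top
# of AMQP: services subscribe to topics and react to each other's
# messages. (Topic and field names are illustrative, not the real schema.)

class Bus:
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

actions = []
bus = Bus()
# A RoboSignatory-like consumer: react to every "build tagged" message.
bus.subscribe("koji.build.tag", lambda m: actions.append("sign " + m["build"]))
bus.publish("koji.build.tag", {"build": "foo-1.0-1.fc32"})
print(actions)  # → ['sign foo-1.0-1.fc32']
```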
If you don't have a compose for a month, you're not pushing Rawhide updates out to the mirrors for a month, so everybody using Rawhide is not going to get their new versions. Question: could you be more specific about what a compose is? So the question is: can we define more precisely what a compose is? That's an overloaded term which is always very hard to define. The basic answer is: when we compose Fedora, we are rebuilding the update repositories, the DNF repositories that we push out to the mirrors, but we are also building our ISO images, for example, our base image for containers, and so on. So depending on who you talk to, you may be talking about a part of the process or the entire process, but when we talk about the Rawhide compose it means generating the repos, making sure the repos resolve, so the dependency resolution works, and building the basic images. If one of these steps breaks, the entire compose is considered a failure and is not pushed to the users. One of the last reasons why we want to gate Rawhide: some of us have the chance to be paid by companies to work on Fedora, but the vast majority of the contributors in Fedora are volunteers. They do that on their free time, which is a great investment; we can't thank them enough for using their free time to come and help us work on Fedora. It's awesome to have them there; I've been there myself, before I got the chance to be hired to work on this.
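The fail-fast behaviour of a compose described above can be sketched as a short pipeline. The step names and the helper are illustrative; the real compose tooling is considerably more involved.

```python
# Sketch of the compose failure mode described in the talk: steps run in
# order, and if any one fails, the whole compose is a failure and
# nothing reaches the mirrors. (Step names are illustrative.)

def run_compose(steps):
    for name, step in steps:
        if not step():
            return ("FAILED", name)   # the whole compose fails here
    return ("FINISHED", None)

steps = [
    ("generate-repos", lambda: True),
    ("depsolve-repos", lambda: False),  # pretend dep resolution broke
    ("build-images", lambda: True),
]
print(run_compose(steps))  # → ('FAILED', 'depsolve-repos')
```

Note that `build-images` never even runs once dependency resolution fails, which is why a single broken package can block a month of Rawhide updates.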
The thing is, you may be maintaining a package, and then something lower in the stack breaks. The person who broke it can't fix everything, because for X and Y reasons they don't have the permissions to do that, and then suddenly, while you're on holiday, having family time, or working on something else, on your pet project, you get an email saying that your package can no longer be installed, because there is a soname bump in one of the dependencies and you have to go and fix it. You have to go and rebuild your package because someone else bumped a soname and did not warn you, and you couldn't coordinate the change with that person. So we want to get into the mood of: you break it, you fix it. It's fine to break something; you just fix it, and when you push something out, it's something which is fixed, something which is working. That's one of the changes that we want to bring to Rawhide: it's not okay to have Rawhide broken. It is okay to break things, but let's break them in a way that does not impact everyone but you, until you have figured out a way to fix them. So what are the challenges? Well, the first challenge was to make it happen. It's not the first time: if you're part of the Fedora community you may have heard about it some months ago; it's not the first time that we've tried to work on this, but it failed for a number of reasons last time. So when we started to look into this again, we needed to be careful about what did not work last time and how we could make it work this time. The second one, well, these are more like requirements: we want to fit into the existing tooling. We don't want to reinvent a build system that would provide us the gating, we don't want to reinvent an update management system, I'll come back to this, and we want to disrupt the packager workflow as little as possible. Every time we change something in the packager workflow of contributors: if you update a package every day, you quickly learn the trick to use the new
way; if you're packaging something every three months, every time you have to go back to the documentation: where is it again, is that page of the documentation up to date, or is it this one over there, or is it that email which I found somewhere in my mailbox? And then the last one: there will be bugs. There will be bugs in the CI, there will be bugs in the tests, there will be bugs in the way we're testing. So we need to have a way to handle false negatives, something which works but is identified as not working. We need a way to bypass the result of the tests in a way that is satisfactory for our packagers. So, having all of this in mind, one of the ideas was that we would roll out the changes in phases. We'd do the easy case first: an update that only contains a single package. Say it's a Python module that is updated, that does not depend on anything; it's pure Python, it's Python 3, you know, the simplest thing you can think of. This is the easy case: one build, one package, one update. And then, as we get user feedback and polish things, we also work on how to deal with updates that contain multiple builds that are tested as one unit. So this was our idea: do single build first, multi-build after that, gather feedback as early as possible, account for that feedback as much as we can, and try to get a polished user experience at the end of it. Where is it today? Well, we announced at Flock, the Fedora conference, last summer that we were able to do single-package gating, and we are happy to announce that we can do multi-package gating as well in Fedora. So how does it work?
These are slides which I'm taking back from the presentation I gave at Flock. For the single-build gating, it's a fairly complex system. This is basically the flow of everything; it's not meant to be read, so don't worry that you can't read it, it's just to give you an idea. Every column here is a different system, and every box is a different action, the packager being the first column over there. As you can see, the interactions with the user are actually fairly limited: there is only the packager starting the process, and there is the override here, which is basically the handling of the false-negative case if something goes wrong. Question: do you have any steps where a human would have to provide some info for the gating to proceed? No, everything is automated. So this is the version of the large graph which is much easier to grasp, and even for myself, that's the one I go back to when I need to see if something is working the way it should. We have a packager; that person does a Koji build in a certain tag in Koji. In this case it's a candidate for an update of F31; this was made this summer, before F31 was out. Koji announces that it has built and tagged the build into this tag. RoboSignatory receives the message, signs the build and moves it into the pending tag. Bodhi gets the message that a build was pushed into the pending tag, it creates an update, and then it waits. When it creates the update, it sends a message saying: this update is ready to be tested. The CI system is going to test it; it's going to come back to Bodhi and say: yep, that can go, and Bodhi moves it along; or the CI system says: no, that can't go, and Bodhi says: well, you know, I can't do anything here. One of the changes that we have introduced here is that Bodhi was not part of the equation before: before, you would build, the build would be signed and pushed directly to the stable tag, to the buildroot. Nowadays
everything goes via Bodhi, because Bodhi was the natural place for packagers to get feedback about an update. So we are fitting into the existing workflow; well, we change the workflow because we introduce gating, but this is a natural place for packagers to receive feedback about whether something is going right or not. Why is it a natural place? Because when you create an update, Bodhi is the place where people can report whether an update is working or not. There is already a mechanism there for comments from contributors in the community to say: well, that update broke my system, let's unpush it. So, that was single build. Now multi-build: the picture didn't get clearer, the number of systems did not get smaller, but the number of interactions actually didn't get that much worse either. So, the simplified version again: you start by creating a side tag. This is a manual step, so that changes from the way you were working before. You create a side tag, and f32 is going to be its base, because currently Rawhide is F32, so that's always going to be f32 plus an integer, well, an increasing integer. Then you're just going to build in that side tag as many packages as you like: if you have two packages you do two builds, if you have 100 you do 100. Once you are ready, you're going to be the one going to Bodhi, and you're going to tell Bodhi that this side tag is ready. Bodhi is going to take all the builds that you've made in that side tag and turn them into a single update, and it's going to signal the CI system so it can kick off. At the time when Bodhi creates the update, it also creates two more side tags, using the same ID: a signing-pending one and a testing-pending one. Once it has created these two tags, it moves all the builds from your side tag into signing-pending. RoboSignatory has pattern-matching rules that say: well,
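That pattern-matching step can be sketched with a regular expression. The exact tag naming below (`f<release>-build-side-<id>`) is my guess at the shape described in the talk, used only for illustration.

```python
import re

# Sketch of the RoboSignatory routing rule described above: any side tag
# ending in -signing-pending is picked up and, after signing, its builds
# move to the matching -testing-pending tag. (Tag naming is assumed.)
SIGNING_PENDING = re.compile(r"^(f\d+-build-side-\d+)-signing-pending$")

def destination(tag):
    """Where builds from this tag go after signing, or None to ignore."""
    match = SIGNING_PENDING.match(tag)
    if match is None:
        return None          # not a tag this consumer listens to
    return match.group(1) + "-testing-pending"

print(destination("f32-build-side-1234-signing-pending"))
print(destination("f32-updates-candidate"))  # ignored
```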
everything that matches an f32 side tag ending in signing-pending, I'm listening to; I take all the builds in the update, I sign them, and I move them to testing-pending. Bodhi gets the notification that something was moved to testing-pending, it marks the update as ready to be tested, and it signals the CI system, which will perform all the tests on all the packages at once. So if you have two packages that need to be tested together, it will make sure that both packages are present on the test system before the tests are run. Sorry? Question: who is the packager here, is it internal Red Hat employees, or people outside? Anybody in the Fedora community with a package can do these steps. Question: how can we create the tags? fedpkg request-side-tag. So the question is: how do we create the side tags? There is a utility used by packagers called fedpkg; it's a small CLI tool, and there is a simple action, fedpkg request-side-tag, that basically goes to Koji, asks for a side tag to be created, and returns you the ID of the side tag, which you can then pass on to your build command and which you specify to Bodhi. The next thing is that a side tag can also be shared: if you are working on an update with a group of people, the first one creates the side tag, gives the side tag ID to the rest of the group, and everybody can build in that side tag. Then, once everything is ready, someone goes to Bodhi and creates the corresponding update. Question: if we have packages depending on each other in the side tag, how do we end up with package A being present in the buildroot when we build package B? There are two ways to do that. The hard way is to build package A and then wait for Koji to regenerate the repository for the side tag before you trigger the build for package B. The nicer way to do it is a fedpkg command again, called chain-build, that basically does that for you. It tells Koji: I want to build these packages, and they depend on
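The chain-build ordering just described can be sketched as a small scheduler. Both `chain_build` and `build_one` here are hypothetical helpers for illustration, not the real fedpkg or Koji API.

```python
# Sketch of chain-build ordering: builds within a group can run in
# parallel, and each group waits until all previous groups are in the
# buildroot. (Hypothetical helpers, not the real fedpkg/Koji API.)

def chain_build(groups, build_one):
    in_buildroot = []
    for group in groups:
        for package in group:          # real Koji runs these in parallel
            build_one(package, available=list(in_buildroot))
        in_buildroot.extend(group)     # repo regenerated: group now visible
    return in_buildroot

log = {}
def build_one(package, available):
    log[package] = available           # record what the buildroot offered

# A and B build together; C waits until both are in the buildroot.
chain_build([["A", "B"], ["C"]], build_one)
print(log["C"])  # → ['A', 'B']
```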
each other, and you can group them, so you can parallelize: say packages A and B can be built together, but C needs packages A and B. Question: can we build the same package twice? Koji has a uniqueness requirement on the NEVRA, that's the name, epoch, version, release and architecture of a package. You can send Koji a git hash to build as many times as you like, but if the resulting NVR has already been built, Koji will say: well, that has already been built. Question: so I guess the question behind that is, can I have a build that's present in multiple tags at the same time? Because you might want to send an NVR to this side tag as well as the same NVR to another side tag; is that what you mean? No, no, I think in this case I want to build a package with its tests disabled, then build another package that depends on this one, and then do a full build of the first package with the tests enabled, so I need a bump of the release in between. So in this case it's a bootstrap case, where you start with the tests disabled to build the dependency, and then build the first package again with the tests enabled, because it needed a package to be able to run the full test run. Yeah, in this solution you have no choice at the moment but to bump the release again. Then a follow-up question: can I untag a package? So the question is: can I tag and untag packages? And yes, you can tag and untag builds; if you're not allowed to do that, Koji will tell you that you're not allowed to do that. But it's one of the elements of how you can unblock yourself: if you find that somehow a build of yours lost a tag, you can actually just run koji tag-build with the tag and the build name, and Koji will happily tag it for you if you're allowed to; if you're not allowed, it will just say: well, I can't do that. So that's one way you can untag, or unblock, yourself. Question:
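The NVR uniqueness rule above can be sketched in a few lines: splitting an NVR on its last two dashes (the name itself may contain dashes), and a toy build system that refuses a repeat. `ToyKoji` is purely illustrative; real Koji enforces this in its database.

```python
# Sketch of the N-V-R rule: parse name-version-release and refuse to
# build the same NVR twice. (Toy model, not the real Koji behaviour in
# detail -- e.g. epoch and architecture are ignored here.)

def parse_nvr(nvr):
    # Split on the two rightmost dashes: the name may contain dashes.
    name, version, release = nvr.rsplit("-", 2)
    return name, version, release

class ToyKoji:
    def __init__(self):
        self.built = set()

    def build(self, nvr):
        if nvr in self.built:
            raise ValueError(f"{nvr} has already been built")
        self.built.add(nvr)

print(parse_nvr("python-requests-2.22.0-1.fc32"))

koji = ToyKoji()
koji.build("python-requests-2.22.0-1.fc32")  # fine the first time
# Rebuilding the same NVR is refused; bump the release instead:
koji.build("python-requests-2.22.0-2.fc32")
```

This is exactly why the bootstrap case in the question above requires bumping the release between the two builds.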
can someone create an arbitrary target and define their own tag structure? So the question is: how much freedom do packagers have to configure Koji? And the answer to this one is: none. This is a release engineering task, basically; they are the ones defining the tags and the hierarchy. You can create your side tags, but you're not going to be able to create official tags. Question: do we need to start the CI before the signing, or do we need to sign the packages first? So the question is: is it possible to start the CI process earlier, before the signing? That was one of the questions we considered, and the issue I had with that is that if you end up installing an unsigned package, you end up testing the installation of that package not in the way the users are going to use it. So if something goes wrong here and the package is not correctly signed, suddenly my test is going to fail, because the package is expected to be signed: in Fedora, by default, the yum/DNF configuration requires packages to be signed. If you say: I'm triggering from here, or directly from the Koji build, it means you have to test with gpgcheck turned off, which is different from what the users would have. So it was a conscious decision to start the CI so late in the process, in a way that makes sure that what we test is actually what we are going to push to the users. Question: is RoboSignatory a bottleneck? Yes and no. On a daily basis, no, because the amount of builds that we get is low enough that RoboSignatory just takes them. It can be a bottleneck during mass rebuilds. A mass rebuild is when we take all the 20,000 git repos of packages in Fedora and we say: Koji, please rebuild. So Koji gets queued up to rebuild 20,000 packages, which means RoboSignatory gets to sign 20,000 packages. If we mess up a little bit, which has happened, not in this
mass rebuild maybe, but in the one before, where we forgot to turn on RoboSignatory from the start: RoboSignatory then had to go through all 20,000 builds one by one: are you signed? No, you're not, okay, I'm signing you. Are you signed? Yes, you are, okay, next one. Are you signed? Yes. Are you signed? Yes. And that basically swamps RoboSignatory. But that was a mistake on our side, because we forgot to turn RoboSignatory on earlier in the process. In the case of the current mass rebuild, which is just happening, RoboSignatory was set up correctly from the start and was signing builds as they came out, so we didn't get the issue of swamping RoboSignatory that we had at the last mass rebuild. Question: what about the updates, is there an update for each package, or one update for a whole group of packages? So the question is: is there an update for every build, or one update for the group? In the case of single-build gating it's one update per build. In the case of multi-build it's one update per side tag: all the builds in the side tag are going to go into one single update, they are going to be tested as a single update, and they are going to be pushed to the mirrors as a single update. So the maintainer has to signal to Bodhi: okay, I have done all my builds, now you can start. That's one change in the packager workflow: before, you just had to build, and that's all you had to do; now, if you have multiple builds, you need to create a side tag, then do your builds, and once you're done with them you say: okay, you can go ahead. There was a thought about automating this and saying: well, we just keep on testing the side tag until it passes, and as soon as it passes we merge it. But there is no guarantee that there aren't some builds left over that should still be included. Question:
can you put any combination of packages into a side tag, including packages maintained by others, and get them gated or blocked by adding them to your side tag? So there are two parts. First: can I create arbitrary side tags? You can't create a side tag from anything; you have to create a side tag from one of a limited set of existing tags. Second: what happens if I'm hiding packages in a side tag that their maintainer may disagree with? We're coming back to the NVR question: to build something in Fedora, you need to change the spec file, and to change the spec file you need the rights to change the spec file, and not everybody has permission to change every spec file out there. So the chances that you have the permission to change the spec file, do the build, and put it in your side tag are going to be pretty low for packages that aren't yours, because you normally have permissions on the packages you should be concerned about, the ones you're maintaining, your own set of packages. And if you are one of the few people that have access to everything, then we expect you to know better than to mess around like that. So, there is one change between what I presented at Flock and what is currently in action. What you saw with multi-build is that we have the user, they build, it goes to Bodhi and then through RoboSignatory. If you remember, before, and if I'm clicking in the right direction that will be easier, we had the build, and then it ends up in Bodhi, and I was basically creating two workflows, depending on whether you were considering a single build or a multi-build. So we changed the workflow for
single build a little bit: now, when you build, it first goes to Bodhi, so you already have your update; that update is then being signed, and then it's being tested. So we have a much more similar workflow between the multi-build and the single-build cases. One of the advantages here is that basically as soon as it's built you have a Bodhi update. Whether you have a single build or a multi-build, you have an update in the pending state; once it's signed, you have it in the testing state; and once it's out, you have it in stable. It's just to make sure that we have something coherent between the two workflows. Now a little bit on how the decision framework works. As I was saying, we have Bodhi that has an update, and Bodhi basically triggers the CI system by saying: that update is ready to be tested. The CI system will run all the tests it finds, and it will send the results to something which is nicely called ResultsDB. It's basically just a database with an API that stores whatever we throw into it; it's essentially a key-value store backed by a SQL database, and you can put almost anything in there. Every time there is a new result, it announces it: I have a new result. Then we have Greenwave, which is our decision engine. It basically has a set of rules, and it says: well, every time I see a new result, I'm going to check: can I make a decision now? This package has been tested, I have a new result about this package, can I make a decision, what are the rules applying to this package? Nope, I'm still missing a test, I'm not going to do anything. Hey, a new result is coming. Hey, that result failed. Alright, I already know that this result was a requirement, so I can announce that this update is not going through, because that build has failed. So this is what happens: the CI sends the results to ResultsDB, which stores them; Greenwave gets
the results from ResultsDB and makes a decision based on these results, and sends that decision back to Bodhi. If the user disagrees with it, because, hey, bugs are bugs, networks are networks, and computers sometimes fail us, surprisingly, the user can override that, and overriding is basically storing a waiver. You send a waiver that says: this result that you find in ResultsDB, ignore it. You can also say: these results that you don't find in ResultsDB, ignore them. So if something is stuck, if the CI system simply doesn't work anymore, well, you're not getting your results, so your update is blocked because something is broken and just doesn't move along. In that case you can say: well, you know what, just ignore everything that's not present. Then Greenwave is going to be notified about the new waiver, it's going to ask: can I change my decision, should I change my decision? And if it does, it notifies Bodhi, which will react based on that. So that's a little bit of the framework I was mentioning. I will stop for a second on this slide, because ResultsDB, Greenwave and WaiverDB, these are the trio that's used for actually making the gating decision about the results. So, how can we debug ourselves? Well, as I was saying, one of the quick ways is to check the Koji tags: you run koji buildinfo, you put the NVR of your build there, you get a line that says "tags", and then you just look at that. If it's a single build and it's still in the candidate tag and hasn't been picked up by Bodhi, something is wrong with Bodhi. If it's in a side tag for a multi-build, then it's still a Bodhi problem. If it's in signing-pending, then there's something wrong with RoboSignatory: it could be that RoboSignatory is just going through a mass rebuild and is very busy and hasn't caught up with your build yet, or it could be that RoboSignatory somehow broke the connection with the signing server and needs a gentle poke, or a restart. If it's in testing-pending, it's waiting for the CI system: it could
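The gating decision with waivers described above can be sketched as a tiny function. The policy shape and the test name are simplified assumptions, not the real Greenwave configuration.

```python
# Toy version of the Greenwave decision: an update passes gating when
# every required test has a passing result in ResultsDB or a waiver in
# WaiverDB. (Policy shape and test names are simplified assumptions.)

def decide(required_tests, results, waivers):
    unsatisfied = [
        test for test in required_tests
        if results.get(test) != "PASSED" and test not in waivers
    ]
    return ("pass", []) if not unsatisfied else ("fail", unsatisfied)

required = {"installability"}

print(decide(required, {"installability": "PASSED"}, set()))
# A missing or failed result blocks the update...
print(decide(required, {}, set()))
# ...unless the packager waives it, e.g. when the CI system is stuck:
print(decide(required, {}, {"installability"}))
```

Note that a missing result blocks exactly like a failed one, which is why the "waive what is absent" escape hatch matters when the CI system itself is down.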
be that the CI system is processing a lot of requests, or it could be that the CI system is down. If you're stuck in one of the first two steps, it's probably an infra issue; if you stay there too long, that's definitely an infra issue, it could be RoboSignatory being swamped or needing a restart. If you're stuck at the testing step, it's less of an infra issue and potentially more a CI problem, a CI system issue. Another way to look into it is the state of your Bodhi update. If it's pending, it's not signed: pending means we've seen the build in updates-candidate, we've created the update, and we're waiting for it to be signed. If it's in testing, then the tests are running, or the package is gated. And if it's stable, then you're not debugging anymore; it's working, or it should be. Some of the user feedback we collected over the last months: introducing Bodhi as an element of the Rawhide workflow introduced a lot of email notifications. Basically, a single build would give you between five and seven emails: your update has been created, your update has been pushed, your update has been tested, your update has been pushed to stable, and so on. Rawhide used to send you no emails, and it was now sending you five per update; that's too much. So we reduced it to three: it tells you your update has been created; if the test results failed, it notifies you, and if they pass, it notifies you once; and your update has been pushed. That's basically the only three that you should get. If it's still too much, we are happy to revisit; this is the middle ground we've found for now, and if people complain too loudly we are happy to revisit and see which of these emails are actually useful and which are not. When we introduced multi-build gating: if you're a Fedora packager you may have seen the new UI introduced in Bodhi, but you didn't see the mechanism underneath it for multi-build gating, because we hadn't announced it at that time. But there were some changes there: the list of
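The triage checklist above can be condensed into a tiny lookup: given the tag a build is sitting in (as shown by koji buildinfo), point at the likely culprit. The tag suffixes and messages are my simplified reading of the talk, not an official diagnostic tool.

```python
# Sketch of the debugging checklist from the talk: map the Koji tag a
# build is stuck in to the service most likely at fault.
# (Tag suffixes and messages are assumptions, for illustration only.)

def diagnose(tag):
    if tag.endswith("-signing-pending"):
        return "RoboSignatory: swamped, or lost the signing-server connection"
    if tag.endswith("-testing-pending"):
        return "CI system: backlog, or results never reported"
    if tag.endswith("-updates-candidate"):
        return "Bodhi: build not picked up yet"
    return "no gating issue visible from this tag"

print(diagnose("f32-build-side-1234-signing-pending"))
print(diagnose("f32-updates-candidate"))
```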
builds used to be either space-delimited or comma-delimited, and we removed the support for commas, which was a lot more complexity for little gain; so now a list of builds should be space-delimited only. There also used to be a mechanism that would go and list all of the builds you've made in Koji, and it would block the input field while it was loading those builds. It turns out that request to Koji is sometimes very slow, or you have hundreds of builds, and so you couldn't copy and paste. That was a feature we thought was good, and we were told: no, that's not good. The GNOME folks, for example, who upload many packages at once, are actually tracking the NVRs in a text file and then copy-pasting them into the field, and they couldn't do that anymore. So we've reverted that change: you can now easily copy-paste your NVRs as you did before. And, next point here, the logic that goes and tells Bodhi which builds you've made is still a little bit slow, and we still need to look at the performance there. We also had someone come to us and ask: there is no point in allowing people to comment on an update once it's pushed to stable, I've had that happen to a few of my updates and it's annoying, so please disable comments once an update is pushed to stable. And then we had Adam W coming back to us and saying: nooo! Because it turns out an update is still a mechanism where you can point people somewhere: well, that update brought this and this, the bug is over there, see that ticket. So as a user, if you run into a Bodhi update, it may actually be useful to be able to comment on it even though it has been pushed to stable. So that got reverted as well. If you have more feedback, take it up with me and I'm sure we'll work through either finding a way to make it work for you or just fixing the tooling. OK, so where do we want to go from that?
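Before moving on: the "how can we debug ourselves" heuristics from a moment ago condense into a small lookup table. This is only an illustrative sketch of those heuristics; the tag suffixes are simplified assumptions, not the exact Koji tag names used in the infrastructure:

```python
# Illustrative triage table for "where is my build stuck?", following
# the heuristics above; the tag suffixes are simplified assumptions.
def diagnose(koji_tag):
    """Map the current Koji tag of a build to the likely culprit."""
    if koji_tag.endswith("updates-candidate"):
        return "not picked up yet: likely a Bodhi problem"
    if koji_tag.endswith("signing-pending"):
        return "waiting for signing: RoboSignatory busy or stuck"
    if koji_tag.endswith("testing-pending"):
        return "waiting for test results: CI system busy or stuck"
    if koji_tag.endswith("updates-testing"):
        return "tests ran: check the gating status on the update"
    return "unknown state: check the Bodhi update instead"

print(diagnose("f30-signing-pending"))
```

The ordering of the checks matters, since a tag like `f30-updates-testing-pending` must match the testing-pending case, not the updates-testing one.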
Well, we now have mechanisms to gate a single build, but we still need to optimize testing and reporting of test results for groups. Currently, if I have 100 builds, I'm going to query Greenwave 100 times: give me the status of this build, give me the status of that build, and so on. That's a little bit inefficient when the CI system knows it has tested these 100 packages together and could just say at once: this group of packages that I have tested, they failed CI. That would be one request to Greenwave, and one way to batch things nicely. So this is something which is in progress; we still need to work on it. Now I'm going back to what I said would not be part of the topic: what do we want to gate on? What are our tests? Currently the tests are defined in the package repository, and they can be defined by anyone: anyone can go there, and there is a specification the tests should follow. They use Ansible, and you can basically write your own test. This is great, except: we all want to test whether we can install, update or remove a package, and there is no need to have 2,000 packagers write that same test for 20,000 repos. We could do that for the entire distro in one go, and that's one test to maintain, instead of a different Ansible YAML doing the same thing for every single package. So we want to look into distro-level testing. Some of that is reverse-dependency testing: being able to run the tests of the packages that depend on me; being able to install, to upgrade, to downgrade, to remove. And we also want to look at the impact on the composes: does your update break the compose? Can we push a new Rawhide compose if your build goes in there? We don't know yet how we are going to achieve this one, but that is one of the places we would like to get to. Something else that we are looking into: we have
the agreement from FESCo, the Fedora Engineering Steering Committee, so we have introduced three test packages. They are just spec files which only ship a UUID in a certain file; you can install them, they won't do you any harm, but honestly you shouldn't install them: no use at all. They are useful for us, though, because while working on this we wanted scripts to be able to make sure things were working, and since we are touching so many systems, it's easy to, you know, break one. So the idea is to have this script run on a regular basis, once a day, twice a day, every two hours, something like that, and check: is everything fine? Are the tests running? Are the tests failing? Can I waive the test? Is my update going through? And that for a single build as well as for an update of two builds, two single builds, many builds. Those are the things we are currently working on, and that should come up soon. Yes, soon, I guess. And one of the discussions on the mailing list recently was about the infrastructure not being reliable. Well, using this we may be able to look into how often things break in the entire workflow, not in a single application but in the entire workflow. How often does the workflow break because one application breaks? It may be a different application each time, but the result, as a packager, is: my workflow does not work. So we may be able to measure how often things break, and how often they don't. That would also help us when we roll out a new version of Bodhi or a new version of Koji, to actually ensure that what we want works: these are best-case scenarios, we don't test every single edge case, but we want to make sure that at least the best-case scenario works fine. And another thing which is on our roadmap, and I don't know when we'll get to it, is the mass rebuild: RoboSignatory falling behind on signing during a mass rebuild is something which has happened in the past.
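Those canary packages make the workflow measurable: run the check script on a schedule, record pass or fail each time, and you get a breakage rate for the pipeline as a whole. A toy sketch of that bookkeeping, with invented sample data:

```python
# Toy sketch: turn periodic canary-package runs into a workflow
# reliability number. The run results below are invented sample data.
def workflow_reliability(runs):
    """runs: list of (timestamp, passed) tuples from the canary script."""
    if not runs:
        return None
    passed = sum(1 for _, ok in runs if ok)
    return passed / len(runs)

# e.g. the canary ran 8 times today; twice, some service was down
sample = [(hour, hour not in (3, 4)) for hour in range(8)]
rate = workflow_reliability(sample)  # 6/8 = 0.75
```

The point is not the arithmetic, but that the number covers the whole chain (Bodhi, RoboSignatory, CI, Greenwave) rather than any single application.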
That was annoying for a few days, and there are high chances it will happen again in the future, as much as we will try to avoid it. So finally, a small surprise for all of you who are Fedora packagers; if you are not a packager you are probably less interested in this one, and yet you've stayed this far. With multi-build updates we have introduced on-demand side tags, so you can create your side tag whenever you want. They were built for Rawhide, but they don't work only for Rawhide, which means you can create a side tag for a stable release branch. Which means fedpkg chain-build, which was a Rawhide-only feature, now works on stable Fedora releases as well: you can create your side tag for F30, chain-build the builds in there, and group them into one update, in basically two commands. So enjoy that, and that's it for me. If you have any questions, I'd be happy to take them. [Question from the audience] So the question is: how can I plug my own CI system into this workflow? We are working on standards here: there is a clearly defined message from Bodhi that says "this update should be tested". The idea is that you are triggered via fedora-messaging and you report via fedora-messaging. So there is a defined message from Bodhi that triggers you, and there is a standard format that is expected for the report. Then there is one small piece of software which listens to the bus and stores the results, and that is the piece which needs to be adjusted: if you follow the defined format, it just needs to listen to the new topic and process the messages that way. That's all that should be required. [Question from the audience] So the question is: how do we envision reverse-dependency testing? There are a few ways to look into that. One of the first things I would do is go look at my friends in green over there, in the OBS world, where they actually do rebuild the entire dependency tree upon a change. That problem has been solved for them, which means it shouldn't be a problem to solve for us. There are a few alternatives there.
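Whichever alternative we pick, the scoping step is the same: compute the reverse-dependency closure of the changed package, i.e. everything that transitively depends on it. A toy sketch over an invented dependency graph (the package names and the graph itself are made up):

```python
# Toy reverse-dependency closure: find everything that (transitively)
# depends on a changed package. The dependency graph is invented.
from collections import deque

def reverse_closure(deps, changed):
    """deps maps package -> set of packages it depends on."""
    rdeps = {}
    for pkg, reqs in deps.items():
        for req in reqs:
            rdeps.setdefault(req, set()).add(pkg)
    seen, todo = set(), deque([changed])
    while todo:
        current = todo.popleft()
        for dependent in rdeps.get(current, ()):
            if dependent not in seen:
                seen.add(dependent)
                todo.append(dependent)
    return seen

deps = {"glibc": set(), "gcc": {"glibc"}, "python3": {"gcc"},
        "perl": {"gcc"}, "perl-Foo": {"perl"}, "pure-noarch": set()}
affected = reverse_closure(deps, "gcc")  # python3, perl, perl-Foo
```

Note how `pure-noarch` stays out of the rebuild set: that is exactly why rebuilding the closure of GCC would be cheaper than today's mass rebuild of everything.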
Clouds would be one way to scale up resources when we suddenly rebuild GCC and need to rebuild most of Fedora. The interesting piece is: we do mass rebuilds currently, and those go through all 20,000 packages, but the 20,000 packages don't all need to be rebuilt for a new GCC update. All the Perl and Python and PHP modules which are not C plugins don't care about the new GCC. So if we were to do reverse-dependency testing and rebuilds for a new GCC, we would actually rebuild less than we do today with mass rebuilds, because we would only rebuild what depends on GCC; all the noarch packages we would ignore, noarch packages don't need to be rebuilt. [Question from the audience] So the question is: what about the feedback if I rebuild GCC, do I really want to know the test results of 5,000 packages? Those are good questions, and we need to figure out the answers. The first approach may very well be: well, rebuild GCC, get 8,000 notifications about things not working. Or maybe not 8,000; maybe we can group them and you get the 8,000 in one email, that would be one way to do it. One thing which would help, maybe, is what I said about optimizing test results for groups of builds: instead of one build, one result, having one result for the entire group. That would solve it, because then you have a group of 8,000 packages but you get one piece of feedback that says all of these packages have failed; it's one action and not 8,000 actions. So that may be it. Those are clues, or ideas; they are not defined anywhere, so I'm not going to put my hand on the table and say this is the way it's going to be. [Question from the audience] The question is: what do we need to do to actually have the install, update and upgrade tests? I'll say this is a chicken-and-egg problem. As long as we didn't have a gating mechanism, we
could build all the tests we wanted: if they had no impact, nobody cared. A broken test that runs on the side and does not impact me, I can live with; a small red dot on the side of the test results in my Bodhi update, if it doesn't affect me in any way, fine. We had that for years: Taskotron has been running and looking into upgrade tests for a long time, and they were not blocking in any way, they had no impact in any way. So there is a chicken-and-egg problem: as long as we did not have gating, there was no incentive to actually go and look at these tests and try to fix them. Now that we have it, I think it's time to look more into that problem. The question is going to be a lot about ordering: which one should we start with? I know rpminspect is being worked on at the moment; it's probably going to be one of the first distro-level tests that is enabled, and it's also used as a playground for how we plug a new CI system in: since it's going to have its own pipeline, how do we expand our testing system there? So that one is probably going to be first. And the next one after that? It's a good question: fixing upgrades versus impact on compose, reverse dependencies versus install, which should we start with? I don't know, and that's where Aleksandra, who is not here, but Aleksandra Fedorova, who works on the CI SIG, is looking into this question; she would be the person to contact to help or provide feedback on some of these ideas. [Question from the audience] So the question is: where do we get the information about the number of packages present in Fedora? There are two places. The first one is to look into dist-git, which is going to tell you the number of Git repos that we have, and that's over 20,000 Git repos. You can look per namespace, and the rpms namespace will give you the RPM ones, ignoring the modules, the containers, the flatpaks. There is a problem with that number, though: we keep the Git repos also for
packages that are no longer shipped in Fedora: we keep those Git repos because that's part of our history as a distribution. So that number is a bit skewed: you will get all the packages that Fedora has ever had, whether they are active or not, and you won't get that information there. Well, you could, but that means going through each and every package and checking whether it is marked as retired. The one place where you would have the actual number, if you query for it, is PDC. PDC stands for Product Definition Center, and it's a database: it's the place where we record, for every package, the status of each branch, and basically whether you are allowed to push to the master branch or not. If you cannot push to the master branch, it basically means the package has been retired. So that's the place where you could say: give me, for all packages, the status of each branch, or of the master branch, and from that number you would be able to see how it's going. The last place where you could see progress is Bodhi, but Bodhi works with binary RPMs, and one source RPM can give you... how many sub-packages do we have in texlive again?
I think it's up to 9,300, something like that. So, way too much. So yeah, Bodhi's number is going to be skewed, because it's going to count binary RPMs rather than source RPMs. I think PDC is going to be your best bet, and it's not going to be the easiest one. PkgDB used to have a feature where it would give you the number of packages per branch, but we no longer have that graph. [Remark from the audience] So the remark was: it's the same for langpacks, where we have one source RPM that splits into a number of RPMs per language. We take the translations out of the interface and make one package per language, so you don't need to install translations for languages you don't use; that way you can reduce the install surface on containers and the like. And that's one more reason why the number to look at is the number of source RPMs, and how many RPMs they split into. That's not a question, I believe... anything else? [Question from the audience] The question is: is it only for the x86 architecture, or also for ARM? The mechanism is arch-independent; the problem is that the CI system currently works on the x86_64 architecture. There are plans to add ARM support to the CI system, but as far as I know it's not there yet. The idea is to start simple: if we can get 80% of the work covered with the x86 architecture, it's still better than the zero we have now, and then adding ARM gets us further. 20% of the work is going to get you 80% of the benefits, and the remaining 20% of the benefits is going to take 80% of the work. So following that rule, we start with the easy one, try to get the most results, and then see if we can improve on that later.
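As a footnote to the "plug in your own CI" answer above: the adapter a third-party CI needs is essentially one translation step between Bodhi's "please test this update" trigger message and the standard result format. A pure-function sketch of that step; the topic name and every field name here are illustrative assumptions, the real contract being the defined fedora-messaging message formats:

```python
# Illustrative sketch of a third-party CI adapter: take the update
# identifier from a (simplified, assumed) Bodhi message body, run the
# tests, and shape a result message for the bus. Topic and field
# names are assumptions, not the actual message specification.
def handle_update_request(bodhi_msg, run_tests):
    """Translate a Bodhi trigger message into a CI result message."""
    update_id = bodhi_msg["update"]["alias"]
    builds = [b["nvr"] for b in bodhi_msg["update"]["builds"]]
    outcome = "passed" if run_tests(builds) else "failed"
    return {
        "topic": "my.ci.update.test.complete",  # hypothetical topic
        "body": {"update_id": update_id,
                 "builds": builds,
                 "outcome": outcome},
    }

msg = {"update": {"alias": "FEDORA-2019-0001",
                  "builds": [{"nvr": "pkg-1.0-1.fc30"}]}}
result = handle_update_request(msg, run_tests=lambda builds: True)
```

Everything else, as said in the talk, is listening to the right topic on the bus and having the result-collection piece accept your reports.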