I think we can get started. Good morning, everyone. Good morning. So I'm from Australia; it's such a long way to come here, and this is my first talk at such a conference. So first of all, thanks to Flock, and thanks to Red Hat, for bringing me here. Over the next 30 minutes, you're going to hear about something that will change the way you are doing package updates in Fedora. Don't be scared, we are not going to make your life harder.

My name is Matt Jia, but don't get confused, I think the Flock schedule is showing my Chinese full name somewhere. If you want to reach me, you can reach me by this email. I'm working in the Factory 2.0 team at Red Hat; I used to be a Beaker developer, and I'm also a Python developer.

All right, so let's move on. Have you ever noticed the automated test results in Bodhi before? If some of you have been to the earlier talk about working with automated test systems, I think you would have already got some idea of how these automated testing systems work. Fedora QA has done a great job running a lot of automated tests when you file a new update in Bodhi. This is a screenshot of the Automated Tests tab in the Bodhi web UI, where it shows all the test results from ResultsDB; as shown in this picture, it has the results for one package update.

So how many of you have seen the test results in ResultsDB before? Okay, great. So I think most of you know what ResultsDB is and how it works, but it would be good just to refresh your memory. ResultsDB is a simple service for recording the automated test results generated by many different testing systems. When we are talking about testing systems, we are talking about Taskotron, openQA, the CI pipeline, and many other similar testing systems in Fedora.

As you can see, we have got many test results here passing, which is cool, right? But we also have two failed test results here. Do we actually care about these failing test results? I think the answer is that we do need to care. So what can we do with these failing test results? Can we use them to help us enforce package update quality? Let's find out.

Today I'm going to introduce two new services we are working on. The first one is called Greenwave; the other one is called WaiverDB. So what is Greenwave? Greenwave was originally known as the "policy engine" in Fedora. It was recently renamed to Greenwave because we want to make it clear that Greenwave is not driving any of the processes, and it's not storing or applying arbitrary policies about anything. Greenwave is a service just for making business decisions, answering yes-or-no questions about artifacts such as RPMs, modules, and containers. It can be used at certain gating points in our release pipeline; when we say gating points, we're talking about Bodhi or internal tools. The decision is based on test results in ResultsDB, according to some policies. These policies just describe what checks need to pass before an artifact is considered good enough.
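As a rough illustration of what "asking Greenwave for a decision" looks like, here is a sketch of a client calling Greenwave's HTTP API. The endpoint path and field names follow Greenwave's documented REST API as I understand it, but treat them as assumptions; the server URL and the NVR are made up for illustration.

    # Sketch: ask Greenwave whether an update may be pushed to stable.
    # Endpoint and field names are assumptions based on Greenwave's REST API;
    # the deployment URL and the build NVR are invented for this example.
    import requests

    GREENWAVE_URL = "https://greenwave.example.com"  # hypothetical deployment

    payload = {
        "decision_context": "bodhi_update_push_stable",
        "product_version": "fedora-26",
        "subject": [{"item": "python-requests-2.18.4-1.fc26", "type": "koji_build"}],
    }
    resp = requests.post(f"{GREENWAVE_URL}/api/v1.0/decision", json=payload)
    resp.raise_for_status()
    decision = resp.json()

    print(decision["policies_satisfied"])  # True/False: is the update OK to go?
    print(decision.get("summary"))         # human-readable explanation

The key point is that the caller (Bodhi, or any other gating point) only asks a yes-or-no question; the policy evaluation happens inside Greenwave.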
So why do we need Greenwave? What problems can Greenwave help us solve in Bodhi? Right now we have identified two problems in Bodhi. The first one is that we don't enforce any checks across all the packages in Bodhi. What that means is that, if you want, you can release your package update regardless of any failed test results. However, a test failure could be a sign that your package update is broken, and you could end up breaking other people's packages, which is not what you're expecting. To solve this problem, we as the Fedora community want to enforce certain checks across the whole distribution. We want to gate package updates based on the test results when an update is going to be released in Bodhi. The goal here is to prevent broken changes that would affect other packages, as well as to improve package quality.

So what checks do we want to enforce? Here are the three big checks we really want to enforce right now: dist.abicheck, dist.rpmdeplint, and dist.upgradepath. Basically, Taskotron is running these three checks for most package updates. Actually, Pingou raised an issue for Greenwave a couple of days ago, and we realized that dist.abicheck is, I think, only applied to critical path packages, right?

No, it's not restricted at the moment, but there is a blacklist of packages that it is not applied to. And the next one, dist.upgradepath, I think only runs when a package is being pushed to stable. Right, so depending on the gating point, only some of these three can actually be applied.

Right, right. So basically we want to enforce these checks for all the packages. Yeah, question?

One of the things that allows people to accept such enforcement is the ability to then affect the results. For each of these three, can the packager affect how the check runs against their package, and how it impacts their package?

I can answer that one, if you want. Basically, they cannot, because these are not packaging or functionality tests; they are policy decisions. The ABI check here is to prevent someone from breaking the ABI in a stable release, and that's something which Fedora, by policy, does not want to allow. If there is a bug in the ABI check, the next service that Matt is going to introduce allows you to waive that constraint, but it's a policy decision, and it will have to be a rel-eng action to actually lift it for that specific update. And the same for upgradepath: you need to keep the upgrade path always compatible between Fedora releases. As a packager, you can influence that by not creating a newer update for the older Fedora release than for the newer one. So you can go and fix your package.

What about rpmdeplint? Yeah, it's the same thing: if you get a failure, fix the package. Fix the package first, yeah.

And upgradepath, does it compare NVRs, or...? It's NVRs between Fedora releases, say 24, 25, 26 and higher. It prevents something like a regression between releases, where a package in the older release is newer than in the next one; they did check that for the base set of packages.

Is this still needed with the new system upgrade mechanism? Can you talk about how it relates to package versions? It is. I mean, it's arguably less imperative than it was before, but I think it's still important. What it's trying to prevent is having a newer build in an earlier release, because then at upgrade time you end up not actually getting the package upgraded.
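To make the upgradepath idea concrete, here is a toy sketch. This is not Taskotron's actual dist.upgradepath code; the NVRs and the dist-tag handling are made-up illustrations, using the rpm Python bindings available on Fedora.

    # Toy illustration of the upgradepath rule, not the real dist.upgradepath check:
    # the build proposed for the newer release must not be older than the build
    # already shipped in the previous release (NVRs below are invented).
    import rpm  # Fedora's RPM Python bindings

    def strip_dist(release):
        # Drop the trailing dist tag, e.g. "3.fc26" -> "3", per the Q&A above
        # ("at least the NVR minus the dist tag"). Simplified for illustration.
        return release.rsplit(".fc", 1)[0]

    def upgradepath_ok(older, newer):
        # Each argument is an (epoch, version, release) tuple of strings.
        e1, v1, r1 = older
        e2, v2, r2 = newer
        cmp = rpm.labelCompare((e1, v1, strip_dist(r1)), (e2, v2, strip_dist(r2)))
        return cmp <= 0  # the newer release must carry an equal or higher build

    # foo-1.2-3 already in Fedora 26 vs foo-1.1-1 proposed for Fedora 27: fails.
    print(upgradepath_ok(("0", "1.2", "3.fc26"), ("0", "1.1", "1.fc27")))  # False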
So what that check is looking for is to make sure that, if you were to upgrade from 26 to 27, there are no newer builds in 26 than in 27; 27 has to have at least the same NVR, minus the dist tag. It's fairly simple, as I recall. Because, I mean, it's still important, just arguably not as important as it used to be.

One more question? For each of the checks that you're enforcing, am I, as a packager, able to run each of these checks against my package during development and packaging, without going through ResultsDB and waivers, so that I can proactively fix the problem?

I think you can, with some difficulty. Right, so you need to document those steps. What moves the needle for me is not someone telling me what to do, but something that's genuinely helpful. To be honest, we've been running these for four years now and you're the first person to ask. Yeah, but now they're going to be enforced, that's the difference. And this has been part of the release criteria for years.

All right, sure, I understand. Another question: these tests are used just for compose acceptance testing, so they're not really focused on a single package, but more or less on the whole compose. Is that true or not? I think that's not true; the tests run for each package update. Yeah, but do they run against the update in the update system, or do they use the stable versions plus just this one build? In Bodhi, for each package update there is a list of builds, right? So basically Taskotron is just running these tasks against those builds, and that's what the test result is about.

I just wanted to clarify my earlier answer on this. The result comes out as a yes or no against your build, but you may not necessarily have caused the failure. If your package is involved in a dependency problem, that needs to be sorted out, and there's no good way that I know of to automatically figure out who caused it; the easiest thing to do is just report it against the build involved. So some of these, yes, you can run them yourself, but when you start bringing in repo-level checks, there's a certain amount of "where you run them and when you run them" that matters. It's a general principle that we should apply, when we try to enforce checks and gate on test results and so on, to have them be reproducible by definition. In general I agree with you, and that works. Anyway, I want to go back to my talk, so you guys can discuss this later.

All right, so where are we? Okay, so the reason we want to enforce these checks is that we think they are extremely important for the distribution. These failures on a package update would, I would say, almost certainly break applications or libraries that depend on the updated package. We think such a failure should be inspected carefully by the owner or by Fedora QA. Having these checks could help us find problems earlier in our release pipeline. In Greenwave, these checks can be expressed as a list of rules in a policy for different products.
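A policy along those lines might look roughly like this in YAML. This approximates the format Greenwave uses for its shipped policies; the !PassingTestCaseRule tag and the exact field names are from memory and may differ in detail.

    # Sketch of a Greenwave policy: the three distro-wide checks, required
    # before a Bodhi update may be pushed to stable (format approximate).
    id: taskotron_release_critical_tasks
    product_versions:
      - fedora-26
    decision_context: bodhi_update_push_stable
    rules:
      - !PassingTestCaseRule {test_case_name: dist.abicheck}
      - !PassingTestCaseRule {test_case_name: dist.rpmdeplint}
      - !PassingTestCaseRule {test_case_name: dist.upgradepath}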
The second problem: Bodhi actually has a feature that allows packagers to specify extra required checks, but people are not really aware of it. I don't think anyone uses it, because it's a manual process and it's not much fun: you have to specify your extra checks on every package update. Greenwave can automate this process by allowing packagers to define policies for what extra checks need to pass before a package update is considered good, and as a consequence, Greenwave will automatically apply those policies when making decisions.

So why do we need a new service? Why not just define policies in Bodhi itself? Bodhi sounds like a perfect place for defining policies; it's already a gating system that packagers are used to. The reason is that Bodhi is not the only place where we want to introduce gating based on test results. We want to reuse the same gating logic as much as possible at the other gating points, so those gating points don't need to reinvent the wheel. This will make our life easier, maintaining this logic in one place rather than many places. That's why we think it makes sense to put Greenwave into a microservice, along with the other microservices we have developed in the Factory 2.0 project.

Now, in terms of failing test results: what happens when a test goes bad, or when the testing machinery is wrong? If a test fails, it could be because of infrastructure problems or other known issues, and as a packager you may want to waive it to move your package update forward. Since the results in ResultsDB are immutable and cannot be changed by humans, we need a new service, which is WaiverDB, to allow humans to override the test results. In short, WaiverDB is just a microservice for storing waivers against test results in ResultsDB. It generalizes the existing waiving functionality that we have in some of the testing tools, like rpmgrill. Just like ResultsDB, WaiverDB is a central place for storing all the waivers.

All right, so let's put all three services together. When a package update is going to be, let's say, pushed to stable in Bodhi, Bodhi will ask Greenwave to make a decision on whether the package update is okay to go, and then Greenwave will query both ResultsDB and WaiverDB and look at the results and waivers together to make the decision.
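Conceptually, that combination is simple. Here is a pure illustration in Python (not Greenwave's actual code) of how results and waivers combine into a yes-or-no answer: a required test is satisfied if its latest result passed, or if a human has waived the failure.

    # Conceptual sketch only: combine test results and waivers into a decision.
    def decide(required_tests, results, waivers):
        """results: {test_name: "PASSED"/"FAILED"}; waivers: set of waived test names."""
        unsatisfied = [
            test for test in required_tests
            if results.get(test) != "PASSED" and test not in waivers
        ]
        return {"policies_satisfied": not unsatisfied, "unsatisfied": unsatisfied}

    required = ["dist.abicheck", "dist.rpmdeplint", "dist.upgradepath"]
    results = {"dist.abicheck": "FAILED",
               "dist.rpmdeplint": "PASSED",
               "dist.upgradepath": "PASSED"}

    print(decide(required, results, waivers=set()))
    # -> not satisfied, because dist.abicheck failed and nobody waived it
    print(decide(required, results, waivers={"dist.abicheck"}))
    # -> satisfied once the failure has been waived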
Jan has given a great talk about Freshmaker, but, how many of you have been to Jan's talk? I think most of you, okay, that's good, so this is just to refresh your memory. Freshmaker is a service for automatically rebuilding artifacts when their dependencies get updated. For example, it can keep containers fresh when RPMs are released to stable: when you update an RPM package, Freshmaker will automatically trigger the rebuilds of all the modules and containers that contain that RPM package. It will save you a lot of time and effort in rebuilding all of those things yourself, which is really nice. But at the moment, Freshmaker unconditionally triggers the rebuilds every time, and in some situations the rebuilds are not actually needed. For example, if an underlying artifact is released but didn't pass certain checks, which may be a sign that the underlying artifact is somehow broken, Freshmaker should not trigger the rebuilds of all the dependent artifacts; instead, it should wait until the underlying artifact gets fixed first. So, to make Freshmaker more efficient as well as to enforce quality, Greenwave can be used here to gate the rebuilds: based on the test results, it decides when to rebuild and how much to rebuild.

The next topic is going to be hard, because I'm going to talk about the implementation details: basically, how Greenwave works under the hood, and where we are today with the current implementation. Greenwave and WaiverDB are implemented with Flask. How many of you know Flask? Wow, that's great. Flask is a microframework for Python; it's well documented and easy to code with.

First, let's talk about how to define a Greenwave policy, because it plays such an important role in Greenwave. A policy is the place where packagers can specify extra checks. Each policy has an ID, a product version, a decision context, and rules. The ID is just a unique identifier. The product version is a PDC identifier, such as fedora-26 or fedora-27. The decision context is a label, named through coordination between the policy author and the consuming tools. And the list of rules describes which tests are required to pass. With the current implementation, policies are expressed in YAML configuration files shipped with the application, but this will most likely change in the future, because ultimately we want the policies enforced by Greenwave to be self-service, so people can just come along and define a policy for themselves.

Now let's imagine that we're going to push a package update to stable in Bodhi, and we have got one failing test result in ResultsDB which is required by the policy I mentioned before. If we ask Greenwave to make a decision, it's going to tell us the policy is not satisfied yet, and it will say which test failed; here, it's because dist.abicheck failed. If you think the failing test is a false positive, you can create a waiver by calling the WaiverDB API. In the request you need the result ID, the product version, and a good reason, so other people can know who waived this failing test result and why. Having said that, we're not expecting people to call this API directly; we picture that people will be able to waive the failing test result from the Bodhi web UI, but that is still something we are currently working on.

[Inaudible question from the audience about automatic waiving.] Yep, unfortunately we don't have that yet, but we are considering it; I think we've been talking about introducing an auto-waiving service or something like that. Right, cool, no worries.
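For reference, the WaiverDB call just described might look roughly like this. The result ID, product version, and reason are the fields the talk mentions; the endpoint path and the "waived" flag are assumptions, and authentication (which WaiverDB requires) is omitted from the sketch.

    # Sketch: record a waiver against a failed result in ResultsDB.
    # Endpoint path and the "waived" flag are assumptions; auth is omitted.
    import requests

    WAIVERDB_URL = "https://waiverdb.example.com"  # hypothetical deployment

    payload = {
        "result_id": 123,                 # the failed result's ID in ResultsDB
        "product_version": "fedora-26",
        "waived": True,
        "comment": "dist.abicheck failure is a known infrastructure issue",
    }
    resp = requests.post(f"{WAIVERDB_URL}/api/v1.0/waivers/", json=payload)
    resp.raise_for_status()
    print(resp.json())  # the stored waiver, including who created it and when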
So, this is just my last slide: next steps. Bodhi integration, I think, is almost complete; we have been putting a lot of effort in, so thank you to the Bodhi team. It's quite exciting to say that it has been deployed since last Friday, but we still have some small issues to sort out before turning it on in production. The next step is making it message-bus driven: basically, we would like to use fedmsg messages to automatically drive the whole process. And lastly, we want to allow packagers to define a policy per package. Okay, I think that's all I wanted to present today. If you want to get involved and if you want to contribute, here are the links; feel free to ping me. Thank you. Question?

Have you considered putting the policy in the package itself? Is that something worth talking about later? I'm wondering whether it makes sense to allow per-package policies, when on one side we are pushing for tests in dist-git, which are entirely in the hands of the packager, who controls which tests are being run, while the distro-level checks are made by Taskotron. I wonder whether what the packager needs to control are the tests the packager already has control over, and what the distro wants to control is what the distro already has control over, which is Greenwave policies. So we may be able to simplify things here by saying that distro-level decisions are controlled via Greenwave policies, and everything that is package-level is based on the tests which are present in dist-git. That does imply that the CI pipeline is open to more than just the set of packages you described earlier.

We were talking about this in terms of distro-level checks, because that's what is in Fedora right now, but Greenwave is not limited to that. It can gate on anything that's available in ResultsDB and WaiverDB, which makes it possible to also do tests at a more granular level and gate on those. So it doesn't have to just be tests that are defined by the distro; as a packager, you could say "I want to make sure that this command returns the right thing", and gate on that. That's what the CI pipeline is allowing you to do: testing at build time is easy, but you also see it at CI pipeline time, so the CI pipeline may run after we package all of it. I think we can discuss this later. Right, so sorry about that.