So welcome, everyone, to the Fedora CI Objective talk. I think the Rawhide package gating talk we had earlier was the higher-impact, more immediate one, and that was well attended. This is, yeah, a look back at the Fedora CI Objective we had back in 2017. Hard to believe it's that long ago already. Then a small detour into what we actually mean by continuous integration, because that sets the stage for where we want to go, and then a look at the current CI Objective. So in 2017, we set out to deliver the Atomic Host in Fedora with continuous integration and continuous delivery. That was the goal; I took this directly from the objective page. We wanted to provide a framework for automating tests and providing results. And the primary goal was to know that component builds are tested, working, and ready for compose. So this was, yeah, a lot of words. The objective page still says it should be done by Flock in September of 2017. So yeah, the update there hasn't really happened. And we can see that a few things here were done. Others, like Atomic Host, are not quite that relevant anymore today. But some things that did come out of this: the Standard Test Interface that we defined, which provides a standard way of discovery, staging, and invocation of integration tests and provides a baseline for other tests that we can run. We have a pipeline to actually run tests. And we actually do provide feedback to users. So these things came out of the CI Objective and all the work of the teams involved. So the most visible part for most people is probably tests in dist-git. We have packages with tests in dist-git. You don't have to read this; it's not meant to be read, like Pingou would say. But basically, we still, to this day, regularly look at which packages have tests in dist-git. So of the roughly 1,500 components, I think we have about 7% that do have tests in place. While that is not a lot of tests, it still covers a significant portion of the packages. And a lot of these tests actually came from the upstream-first initiative, porting tests that were formerly internal to RHEL into Fedora and to the upstream. And some of them also come from integrating upstream tests into Fedora. So already here we can see a bit of connection happening. And while it may seem like a small thing, I think one of the biggest advantages is that I can go to a package and run its tests, and I know how to do that. I don't have to contact a team or figure out which environment to set up or where to find the code (a rough sketch of running these tests locally follows below). So the discoverability of this, and just the fact that there is and should be a common way of doing this, this mindset change, is I think one of the biggest outcomes. So the testing part of that, even while the gating, for various reasons which I can mention briefly, had its ups and downs, the testing still is a thing. These CI pipelines run to this day. I took the screenshot this morning; we can see recent builds, saying for different Koji builds whether tests were skipped, or tests were actually run and it's green, or we have test failures, in this case the build failed on Fedora 30, Fedora 29, and Rawhide. So these things are still run; the CI system is still churning away in the background. For those who are not familiar with this, the tests are currently run on Rawhide and on the branched releases. Since Atomic Host isn't really a thing anymore, we switched to running tests on these branches. So for the CI objective, I'm just gonna call it 1.0. Version numbers.
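For anyone who has not tried this yet, here is a rough sketch of what running a package's dist-git tests locally can look like, assuming the package follows the Standard Test Interface layout with a tests/tests.yml Ansible playbook. The exact invocation (tags, extra variables, privileges) varies, so treat this as an illustration rather than the canonical procedure; the package name is just an example.

```python
#!/usr/bin/env python3
"""Illustrative only: check out a package from Fedora dist-git and invoke its
Standard Test Interface playbook.  Real runs usually need extra variables
(test subject, artifacts directory) and often root privileges."""
import subprocess
import sys


def run_dist_git_tests(package: str) -> int:
    # Anonymous clone of the package's dist-git repository.
    subprocess.run(["fedpkg", "clone", "--anonymous", package], check=True)
    # STI packages keep their integration tests in tests/tests.yml;
    # the "classic" tag selects tests meant to run against an installed system.
    result = subprocess.run(
        ["ansible-playbook", "--tags", "classic", "tests/tests.yml"],
        cwd=package,
    )
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_dist_git_tests(sys.argv[1] if len(sys.argv) > 1 else "gzip"))
```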
We can say it's done, sort of. Because let's look at what really happened. We wanted to deliver the Atomic Host; okay, let's just say that's kind of moot at this point. We had the continuous delivery aspect; that part we did not do. So we focused on testing, on a standard way of looking at tests. So we have the Standard Test Interface and we have an implementation of that, the standard test roles, which is a package in Fedora. There's a common way of staging tests and of running them; you can run them locally, you can have them run by the CI pipeline. And that is in place and working. We also have the mindset change, the whole CI conversion happening. You see more and more projects do CI and testing upstream. You see more projects and more components caring about how do I test my things, how do I combine things. It's not just unit tests after a build, it's actual integration tests: how do I use this? How do I bring these pieces together? Not just pieces on their own. And we also did have some workflows that hurt. We had the package gating that was enabled and disabled, then kind of enabled and kind of disabled, and had its ups and downs, until now we have proper enablement again. But I'll get into that with the current CI objective. So the retro for the first CI objective really is that we said, okay, we need more design on this. We need to step back and see what we actually want to do so that we can make it happen. We have these building blocks, but let's see what we can do with them. So for that, we went back to the definition and thought: what is continuous integration? What is CI? And there is an actual manifesto that is available on the docs site, and I'm quoting a bit from it here. So primarily, for all intents and purposes here, continuous integration is a developer and packager process and workflow. It's quite a mouthful, right? But it kind of sets the scope for what we want to achieve. What we do want: we want to ensure that broken changes don't affect others. Breakage shouldn't affect developers, shouldn't affect package maintainers or users. Before CI you would say, there's a QA process, right? So everyone just puts things into Rawhide, and we'll assemble them and we'll figure it out; let's just make sure users don't get affected by this. But it turns out that developers and maintainers are also users, and they are also happy if they're not impacted by breakage. I know I am happy if I can use my system. And I also know that development is a lot easier if the system works. So CI kind of makes sense, and I think on this everyone agrees. So what does it actually mean? And that's where it's a bit more tricky, because that's when you start getting into: how does that affect me? So for continuous integration, what we want to do is assemble things like in production. We want to really drive it like a user. That's what we call integration. On the whole difference between unit tests and integration tests: yes, unit tests have their uses, but before I start impacting other people, I do want integration tests, because that's what we'll use. And then we also want to do these tests for every single change, whatever change means; in Fedora, those are the things that go into Rawhide. That is continuous. So together we have continuous integration. That's all well and good, but we don't want to spend all our time on CI and testing, right?
We actually do want to develop features, work on the packages, and get the cool stuff out, because in the end CI should be a tool or a process. I mean, some people would probably say CI is a worthwhile end in itself, and I could spend all my time on it, but I would venture to say not every packager sees it that way, and that's fair. So how do we make it sustainable? That means tests have to be changeable by the people making the software change. If I change a piece of software, I should be able to change the tests, because I know what I'm doing at that moment and I can change the test at the same time. And I also want rapid feedback to the person who makes a change. That's often the question: how long should testing take? How long should I wait? What are the guidelines? We can debate a lot about that, but Alexandra had a good comment on that earlier in the Rawhide gating discussion, saying this is part of the freedom we have. There's a good reason we don't have a policy saying your tests have to be done in three minutes, because that would be the wrong incentive. The incentive is still: we don't want the breaking change, we don't want the breakage. So do what you have to do to prevent that. If you say you need a certain set of tests to run to prevent breakage, then run those tests. If they take too long, then either make them faster (hardware, software optimization, the usual), or change your definition, or wait longer. So there's a sense of freedom there. And I think this is the part where we also need community, where we share experience. This is one of the places where we said, okay, as a central driver we want to make sure we provide best practices and tell you: hey, if you're unsure, come into a discussion with us and we can tell you how other teams have done it. We can show you how other components have done it and say, these are some options for you. So you're not operating in a vacuum. So let's look at the Fedora CI objective in 2019, which I think is the current year, if I'm not mistaken. First of all, why does this need to be an objective? We could set all these cool goals without having an objective, right? I try to phrase it as: this thing is larger than one problem. It's not just one individual thing we need to solve, it's really an overarching story. And for this to be effective, we need to really synchronize on multiple goals. I'll list a few that we have set out here, but just from looking at the previous objective, we can see that it's not really just one thing. It is also very important that, aside from the technical goals, we see the whole community aspect of this. There is an overall rate of change, an overall impact. We can do a bunch of small changes to the workflow, and each of them seems fine, but if for a maintainer that means there are 10 small changes they have to make to their workflow for the next release, that's a lot; that's asking a lot of anyone. So we want to make sure that by aligning these goals, we minimize the pain, which sounds very negative; phrased more positively, we want to optimize the benefits. So what is the objective for 2019? We're away from the Atomic Host stage, where we said: let's do it all, let's take Atomic Host, do CI, do CD, establish the basis of this all, the principles, let's go for it. We stepped back and said, look at this, look at the feedback cycle, right?
Rapid feedback: where does this go? Where is CI the most useful? That is where we do the development and where things come together, and that is Rawhide in Fedora. So we want continuous integration for Rawhide. That's a pretty good scope. And as a good guideline, we say changes shouldn't break other contributors' work. There's some leeway in that, but I think it's a good guiding principle. So the key areas we have identified are: Rawhide gating, driven by Pierre, or Pingou; there was a talk on that earlier. Distribution-wide tests for packages, driven by David and Tim. Documentation for the packager experience, driven by Alexandra. And the tie-in with the upstream via Packit, driven by Tomas. Does anyone not know who these people are? So, just because I called them out: Pierre, if you could please stand up for a quick second, that's Pierre; you can ask him about Rawhide gating. The distribution-wide tests for packages, that's David and Tim. Tim, there you go, thank you. The documentation for the packager experience, that's Alexandra. And Packit, that's Tomas. Thank you. And let's look at Rawhide gating first. This part is kind of looking into the future: what is current and what is coming. This is also the part where I actively encourage questions and raising your hand if you have questions about where this is going or what the current status is. I kept these slides very short, but please raise your hand if you have a question. So what currently works, summarized very succinctly, is that we have single-package updates. The next part that we want to do, very roughly spoken, is multi-package updates. Once that is done, we should be in a place where we essentially gate all the changes going into Rawhide, and we should be getting a more stable Rawhide with that. There was a lot of discussion on that already in the Rawhide gating talk, but if there are any more questions on this... you're getting a microphone, one second. Okay, what I'm going to ask is coming from the teams who are working on different parts of the operating system. This is coming from the team behind DNF, libdnf and all these things. Their request, whenever we talk together, is that they are working on their software and their software is interconnected, which means that there are a bunch of upstream projects which depend on each other. So usually when they are rolling out a new feature, the new feature is spread across different pull requests in different projects. And usually they need to test their thing as one unit, which means that in order to test the feature, you need to build all the packages in a specific order, have one repo, install it, and then you can finally test it. So my question is: could the work being done on Rawhide gating and multi-package updates be the upstream CI system for this use case, or is that completely out of the question? So if we are talking about upstream pull requests which were not merged into dist-git yet, then the current Rawhide gating setup is not a fit for this use case, but we have Fabian here, who presented the Zuul pipeline, here in the second row. The Zuul pipeline is much better suited to the pull request part of CI. Zuul can provide groups; Zuul covers the use case of cross-project dependencies, when you have pull requests into multiple components and you want to run tests on them as a group, or as a dependency chain.
Probably this is a better direction to look into for an unmerged group of changes; the Zuul pipeline can help here. So I think in the future we're going to split the responsibilities a bit: Rawhide gating will focus more on the post-merge setup and on tests running in the gate before we compose, while for pull requests we will, for now, try the different story with Zuul, trying to cover this case. Okay, thank you. Also, a good guideline there is that Rawhide essentially means the set of packages that we work with as Rawhide. So the pull requests are really earlier in the workflow. That is somewhat out of scope for Rawhide gating, but definitely in scope for the CI objective. So, going beyond Rawhide gating, which I think is awesome, and which is being done this year, not in a planning stage but actually done. Going back briefly, I want to say one of the lessons learned from this, compared to CI objective version one, was to take this more slowly, get more community feedback, and really look at the workflows that are impacted. And I think the CPE team, and Pierre, did a good job of really looking at what we have in the community and building that out from within Fedora, so thank you. Now, when we have gating in place, that is well and good, but what provides the information? How do I decide what gets in and what doesn't? When we did this the first time, we said okay, everyone should add tests to dist-git, which is good, but not every project is in a place to do that. And there's the question of how to explain how much value a package will get out of adding tests. I would need to have that conversation with every maintainer, and that did not scale that well, because CI gets a lot better when more people already have tests in place. So we need a kind of critical set of things in place, at a certain quality, to be able to see the gains. Otherwise, I mean, you get what you incentivize, right? And what we had before, with a small set of packages with tests, was: if you added tests, your development was slower, or seemed slower, because your things were gated, while people who did not have tests could just get everything in. That's not quite the incentive we want to have. So we said: let's invest with a small group in distribution-wide tests, and let's write those for the benefit of everyone. For that we have rpminspect, driven by Tim and David. And we want to set that up as a reliable distribution-wide test. So there's Fedora QA experience, and lots of experience from running rpmdiff internally within Red Hat. So this is not just something we dreamt up, and this is a project that, if you went to the talks, you can contribute to, that is accessible, and where you can see what's actually happening. You can run it locally, you can try it out, you can try changes (a rough sketch of running it locally follows below). So this is, let's say, the modern way of how we imagine testing should go, and we will start running this for everyone, for all the updates. Once that has established itself, once we've seen that it works and refined it to a place where it's an acceptable gating test, we'll first see people saying, hey, I want to gate my package on this. This is good information. And then there will probably be a tipping point where we'll say: okay, as a community we have so many packages gating on this, some people chose not to, but really everyone should, so let's adopt the policy of gating everyone on that.
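Here is a minimal sketch of trying rpminspect yourself on a build, roughly what the distribution-wide gating test does for every update. It assumes rpminspect and its Fedora data package are installed; the flag, the inspection names, and the example NVR are illustrative assumptions, so check `rpminspect --help` for the exact options on your system.

```python
#!/usr/bin/env python3
"""Illustrative only: wrap a local rpminspect run against a Koji build.
The "-T" flag, the inspection names and the example NVR are assumptions
for this sketch, not a canonical invocation."""
import subprocess
import sys


def inspect_build(nvr: str, inspections: str = "license,emptyrpm") -> int:
    # "-T" restricts the run to a comma-separated list of inspections;
    # leaving it out runs the full default set.
    cmd = ["rpminspect", "-T", inspections, nvr]
    print("Running:", " ".join(cmd))
    return subprocess.run(cmd).returncode


if __name__ == "__main__":
    if len(sys.argv) < 2:
        sys.exit("usage: inspect.py <koji-build-nvr>")
    sys.exit(inspect_build(sys.argv[1]))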
I think that's what we'll get to, hopefully for rpminspect, certainly for some tools. And this policy aspect is one of the objective goals we're setting for the next couple of months: talking to FESCo to see what it would take, what the requirements are, to make one of these tests gating for everyone. Then, in addition to rpminspect, we also want to look at other distro-wide tests. We want to look at installability: can my package be installed? Similar question there; I think the policy question there is probably going to be pretty easy. I always like to say: can you tell me why your user shouldn't be able to install your package? Give me a reason to get that in anyway. It's a hard argument to make. Reverse dependency testing: that incentive I talked about, let's turn it around. If people break your package, add tests that tell your story. Why is your package in Rawhide? Say you depend on a library, on its API; so add a test where your package uses that API. If that API suddenly changes and they didn't talk to you, that test should fail. And it should fail when that library wants to get updated, not later on when you want to update. That's the right incentive: prevent the breaking change where it occurs, not too late. Otherwise we'd just be punishing people who use a lot of other packages, which is again not the right incentive. And rebuild testing: let's do automated rebuilds. As was raised a couple of times here, and I think everyone is aware, rebuilding is not always a trivial thing, right? But let's see how we can test it. Let's see how we can rebuild depending packages to make sure that nothing breaks. These are all things that are solvable on a distribution-wide scale and make sense. And we want to work with Fedora QA, and whoever wants to contribute, on making these available distribution-wide, and then allow the community or individual packagers to decide what they want to gate on. Any questions on this part? Maybe someone who doesn't believe in reverse dependency testing; that's a nice argument to have. Maybe I would add a different aspect to this. When we talk about those tests and Rawhide gating, I believe people might be a bit scared by the fact that they're going to be blocked from getting into Rawhide by some tests which they are not quite familiar with yet, or need to get familiar with. So I really want to send this message: as a distribution, we are just entering the world of continuous integration, which means it's not easy, and we're a huge project. It's not going to be one step for us; we're going to move in slowly. And while we will be talking about FESCo setting up policies and enabling distribution-wide gating tests as blocking tests, in fact everything we've built regarding gating here is more informational than really blocking. We don't take control away from a maintainer in the end. We built a system which will test your change, which will send you results, which will block you from getting into Rawhide immediately, but it will still be you who has full control of what happens with this change further on. You will be the deciding person, and I see the first phase of gating as giving you the information to make a decision in the end, to pass or not to pass. So, for example, reverse dependency testing is going to be complicated.
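To make that "informational versus blocking" point concrete, here is a minimal sketch of the kind of decision a gating service makes: required tests must pass or be explicitly waived by the maintainer. The test names and the policy shape here are illustrative assumptions, not the real Greenwave policy format or API.

```python
"""Illustrative sketch of a gating decision: an update moves on only when every
required test result is a pass or is covered by a maintainer's waiver."""
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, List


@dataclass
class Decision:
    allowed: bool
    unsatisfied: List[str] = field(default_factory=list)


def gate(required: List[str], results: Dict[str, str],
         waived: FrozenSet[str] = frozenset()) -> Decision:
    """`results` maps a test name to its outcome, e.g. 'PASSED' or 'FAILED'."""
    unsatisfied = [t for t in required
                   if results.get(t) != "PASSED" and t not in waived]
    return Decision(allowed=not unsatisfied, unsatisfied=unsatisfied)


# Hypothetical example: rpminspect passed, installability failed but the
# maintainer looked at the failure and waived it, so the update may proceed.
print(gate(required=["ci.rpminspect", "ci.installability"],
           results={"ci.rpminspect": "PASSED", "ci.installability": "FAILED"},
           waived=frozenset({"ci.installability"})))
```

The point is only that the maintainer's decision stays part of the outcome; the reverse dependency example that follows shows why having that information matters.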
So, you send an update, and your update triggers a test of a certain package which depends on you, and that test fails. You are getting a failure of a reverse dependency test because something else was broken, and at this moment you may find this error isn't relevant to your change, or you don't know how to deal with it. But at least you will have the information, and you will have a talking point. You will have a place to collaborate with the dependency, to discuss things. So the main goal of the first stage of gating is not to actually block you from doing things; it's about showing you the impact of your changes and letting you make a decision based on knowledge of that impact. You should know what is going to happen in Rawhide after your change lands, and then you should decide whether that is what you want to do, whether you need to do more communication, or whether you need to deal with the consequences before you land. At least the information will be there. Thank you. So, the next aspect is documentation. Do you want to continue directly? Yeah, so this is the part where we need to improve our documentation and all the material covering this workflow. I feel, from reading the devel list and from participating in conversations, that people are really a bit scared by the fact that we are introducing some new terms into the workflow. People are more scared by the fact that there are new terms than by the actual content, by what these terms bring to the workflow. So I believe that once we explain it better, people will stop being scared of these changes, because in fact the underlying change is really not that big. You're already doing a lot of this stuff, you're just not calling it gating for some reason, and we can really improve on that, so that you get more familiar with the terminology while not being scared of possible changes which look like a black box to you. This is a huge effort, of course, and we need everyone to participate in it. This is one part where, as soon as we get distro-wide gating in place, we hope that people will get more information on how this works and will contribute to documentation. In the end, we want to build the community around CI the same way we build the community around every other project. So, this is about it. We have a git repo with documentation where everyone can contribute. You can join right now, or you can follow, and we will be posting updates and doing some work in that documentation repo later on. Yeah, and valid contributions are also: "I don't understand the documentation you have", or "I want to do this, tell me how I can do that". That's also a valid contribution. You don't need to write guides. You can ask the questions and tell us: hey, this is confusing, this is missing, my workflow is different. You talk about these packages, but mine is completely different for this reason. And then we can tell you, hey, it's actually not that different if you do these things, or, wow, we didn't think of that one, which can also happen. So yeah, there's already a lot of documentation. I think in this case it's, like Alexandra said, more a question of how we can explain things maybe a bit differently and make the documentation we have a bit more accessible. So, Packit: Tomas had his talk earlier, so I just put a quick picture in here about connecting the upstream to Fedora.
This is kind of a Red Hat view of where Fedora Rawhide sits, and you can see a lot of things really happen, right? A lot of things go into Rawhide. You have different groups of people: the kernel, application stacks, open source projects, partners of Red Hat and other companies contributing, operating system tools. Lots of things go into Rawhide, and then you have Fedora coming out, you have Red Hat Enterprise Linux coming out, and eventually you have CentOS. So it's quite the complex environment, but what we want to do is really make the connection to Fedora easier, make it like a default thing. And I don't want to go into too much detail here; you can look at Packit if you want more. But the short of it is that, for example, if you open a pull request on GitHub for a project, it is easy to then see feedback on the pull request from Packit: packaging in Fedora worked, you can download the packages here, and the tests passed or the tests failed. You don't have to tag a release in GitHub, push to dist-git in Fedora, update the spec file, do all those things; you can automate those things away if you want to. This is not intended as a replacement for everyone. This is an optional tool to really tie in the upstream projects and put the logic where it belongs, because if you have that connection with the upstream project, that's where the development is happening, that's where you want to have the discussion. Not after the upstream has released and the package is in Fedora and then you want to change something; that's probably a lot harder than when it's still a pull request in the upstream. So Packit really makes that very easy, and I think one of the next steps is that Packit can do builds; I think it will use Copr. It has, I think, tens of packages right now that it supports, and as that pool grows, so will the functionality of Packit. So with this, I'm pretty much at the end. Who has questions on the objective? Good questions, for example, could be about timelines. So it's kind of scary, but... David. Hello. So in all of these conversations we're having, the focus is obviously on individual packages, but I haven't heard a lot, I've heard a little bit but not a lot, about any handling of things like mass rebuilds, since that would obviously result in a lot of activity all at once. Do we have any thoughts on that right now, or is that still an unknown? So first of all, the important part is things like the short feedback loop, right, so when changes actually happen. The question is: is the mass rebuild really the change, or are we doing the mass rebuild because we introduced a change across many things? If it's just a rebuild, the question is whether we should have done parts of that earlier. I think some of these things will go away when we actually test dependencies, when we test changes as a whole before they go in. So I think some of the rebuild questions will go away, and for other cases, I think mass rebuilds are probably a special case, because who gets the feedback if there's no actual change? I think Alexandra also has an opinion on this. I think that mass rebuilds don't fit into the CI and gating workflow in general, but that's okay, because the operation by nature is not CI. It already happens in a side tag, yeah? So I think we need to consider adding an additional layer of testing on top of this side tag.
So we don't just rebuild things in a side tag, we also test them after we rebuild them in a side tag. This could be one direction of thought: what can we do with these mass rebuild operations? And the second part would be, of course, trying to eliminate mass rebuilds; like, why do we still have them? That's also a direction. So I would say we are not going to try to fit mass rebuilds into the gating framework; we rather treat it as a separate case. But once we understand the dependency graph better, maybe with a rebuild service to help us drive that understanding, we are going to have better information about what we need to rebuild, and we can move slowly from mass rebuilds to targeted rebuilds of the things that actually matter, and then the gating will cover that part. Yeah, and if you introduce a change that affects a lot of packages, then that's not a mass rebuild, it's actually a massive change request. So, one of the things that I've at least picked up as a common theme for what we're trying to do going forward is efficiency. And here I'm hearing you talk about adding another platform for running CI. So not only would we have the Jenkins on top of OpenShift, but now we're adding Zuul to the mix. This seems to fly in the face of a lot of the stuff that's been talked about so far. Can you elaborate on why we're looking at having two separate systems when there's been such a gigantic push to have one up until now? Yeah, so first of all, I'm not a big believer in let's-have-one-system-that-solves-all-our-problems. I think the key question here is: what is sustainable, what can we maintain? Where is the complexity, where is the actual work? If we have multiple tests, different tests that provide completely different results, for example rpminspect, installability, reverse dependency testing, tests in dist-git, they might have different maintainers, they might have different ways of running, so they will probably have different requirements. If one group of people decides to maintain all those tests, they may decide to run them all in the same system, for simplicity's sake, for maintenance. But if they're maintained by different people, they could choose to implement them in different technology. So for these, as the objective lead, the systems they run on are implementation details to me. I think there's a good reason to make sure that we use similar things, that we use scalable approaches and don't reinvent the wheel, but we have an architecture that we've set up, with the messaging system, ResultsDB and everything, where we're pretty independent of where the test results come from. From a gating perspective and from a Rawhide stability perspective, I don't have to worry about which systems actually produce those results. So I think we should try to keep the number of different systems down when the same group of people maintains them, and look at whether we can share best practices and reuse things, for sure; that's always good. But I would not want to force people to use the same system, just for the sake of having one system, if they chose a different technology to implement their tests. Maybe I can continue with this thought: there's a difference between bringing in another system for Fedora infrastructure to maintain, and enabling another system which is going to be maintained by different people.
So there are different workloads here which we're talking about. For the Zuul part, the Zuul team actually offers to maintain the whole service. And I think this is a possibility to actually reduce the load on the Fedora infrastructure team and split the workload, so that it is not all brought to Fedora infra to deal with. By the nature of our system, it's a distributed gating framework, so we can enable various CI systems, each maintained by various dedicated people who are then completely independent from the Fedora infrastructure team, which means we are going to reduce the load on the main Fedora infra. This is one of the benefits we get from such a system. And, I suspect I'm getting close to "this should be taken offline", but if they're willing to maintain a Zuul system, why aren't we using that for everything? That is a good question. But that is a question for the test systems, for the individual test systems, to decide. If we look at this from the Fedora perspective, we look at how this is set up with our additions and services and teams. Essentially, Zuul in this case would be, I guess, a team that chooses to contribute to Fedora by providing a service, providing results, and the community decides to consume those results. We're not telling the team to do this; they are providing it, and we're consuming it and saying, hey, this is nice to use. I think it goes along with what Matthew presented in the State of Fedora talk: we are an innovation platform, actually. He was talking more about the possibility for different people to create different spins and different secondary artifacts based on Fedora, but infrastructure is also a place, and continuous integration infrastructure is also a place, where you innovate. And that's why we provide a platform for people to come with their solutions. We don't want to be gatekeepers who don't let people in just because they don't fit into our workflow. We want to provide an API so people can come with something they want to implement and start doing it. So in the case of Zuul, Fabian presented the proof of concept which is already there. It works, and it works nicely. It's integrated with Pagure and so on. So there is literally no reason for us to stop this effort, because there are people who are willing to do it. We just need to let them contribute the thing they wanted to contribute. And our responsibility here, so that we keep it working in the end, is that we need to keep certain compatibility layers so we can switch between CI systems: we can enable and disable CI systems, we can maintain these workflows, but we want to give people the freedom to contribute. That is basically the rationale behind it. Does it make sense to you? I understand your words. Yeah, sorry. I wanted to add on that a bit. In Linux kernel development, you have a number of companies that provide tests for every patch that is sent to the list, and they run them in ways that the kernel community does not have access to. But as the development and the feedback from these tests has grown, people have started to realize that when a test fails and they get a report that this patch is failing on that hardware, it turns out in the long term that that test was actually valuable and was providing input.
So the kernel community has started to learn to rely on tests that are running on systems they don't have access to, and they have been open to that. And I think that, long term, Fedora as a community also needs to be open to the idea of receiving test results from systems that are not within the direct control of the community. If Amazon, or Facebook, or IBM want to... Facebook actually wants to do this; they want to participate. We had Facebook standing here for the first keynote. We had IBM mentioned in the news this morning. And if these companies want to contribute to Fedora, to Rawhide, to Fedora's development in general, and provide us with a set of test results coming from systems they have access to, their own specific architecture, their own specific hardware, and tell us that this kernel update, or this GCC update, is breaking on their hardware for these and these reasons, we need to be open to receiving that feedback. Whether we want to gate on these results or not, that is going to be a decision over time; it is going to involve the maintainers, once these results are consistent enough that they can rely on them. But at least we need to be able to onboard them and let the community decide whether the value they provide is good enough. What we definitely don't want is for developers to have to write tests for Zuul, tests for Jenkins, and tests for something else, in different formats. This is what we want to avoid, and this is our agreement: CI systems will need to comply with certain standards. So if I describe my integration tests once and put them in dist-git, then any of those CI systems that wants to run them should be able to run them. There should be this compatibility layer between CI systems. But other than that, for the actual implementation of how the CI system runs, we would like to give people complete freedom to work with us, to play with us, to participate in that. I think what Fedora Infra has built allows that feedback to come in without changing the underlying tooling all the time, which I think is awesome. The infrastructure itself is now in a place where we can try these things out. If the Zuul team, or Facebook, or some other team wants to try this out, why not? How does this hurt? Let's try it out and see what happens. And, like with all other communities, if acceptance of this grows, then that's a good thing. And we don't need a central policy to define that. I think that's one of the biggest changes; if you look at the Fedora policy as a whole, what Matthew outlined, this reflects that very well, like Alexandra said. So I think that is a good development towards freedom, without tying decisions to tooling overmuch. And literally anyone in this room can just go and start being a CI system for Fedora. If you want to run a certain test on a certain package change, you can just start doing it. You can just start sending your results, and your results will be visible to people who are updating the package. And yeah, people may complain that you're sending some rubbish, but if you actually talk with the maintainer of a package, it may be valuable input. So literally anyone can just jump in and start testing and providing feedback. That's nice, I think. Yeah, okay. And yeah. So yeah, to repeat the question: how many packages, did you say, have tests? In dist-git, yes. Like, in dist-git, or...? Anywhere.
I mean, tests that are actually executed during the gating that we have. I think... that should be the number we track. Yeah, it was on that page I had earlier, I think. So here I have, I think it's 111, in the category that's listed there. So not that many, but that's part of the reason for the approach here. I think the total is something like 1,500; this is a subset of those roughly 1,500 packages, I'm not sure if that's exact. So about 7% of those 1,556. But these are the integration tests. If we're going to talk about rpminspect, it will be 100% coverage, because it's a generic test which will run for everything. So for the dist-git tests, we decided to leave it as it is: we will run dist-git tests in those places where people have configured them, but we're not going to focus on extending the reach of dist-git tests right now, because we really don't want people to fight alone with the gating. By focusing on generic distro-wide tests, we share the experience: all developers work with the same test and all infrastructure teams work with the same test, and we don't have people stuck alone with their framework and their component, just trying to get it running. So for the first stage, distro-wide tests are the primary focus for the next months, probably; we're not talking about timelines here. Yeah, to put this into perspective: instead of having all these services, what we could have done is add rpminspect as a gating test in the pipeline framework for every individual package. That would have been one way of doing it. But we're saying explicitly: we're maintaining this in a different place. You don't have to maintain it. You can contribute, you can see it, you can try it out, but it's not on you to maintain, and you still get the results. So we want the benefits, and basically we're saying this is worth maintaining in a centralized place. Okay, I think we're out of time. One last question, if anyone has one? Okay, thank you very much.