Hi, everyone. I'm Tim Flink. I'm a quality engineer at Red Hat, and I'm going to be talking about test cases and the initiative called the upstream-first initiative. A lot of this is meant to be Q&A, so the presentation itself will be relatively short; the idea is to save most of the time at the end for questions. The things I specifically want to cover today are a bit of a discussion of where we are now, what this upstream-first thing is that I'm talking about, and then, as I said, questions at the end.

I tend to start presentations like this with a valid question: where are we now? Red Hat does a lot of testing and has a lot of automated test cases for RHEL that aren't in Fedora, and these test cases haven't been released. Sometimes they use tool sets that aren't really part of Fedora, but a lot of that testing happens as part of getting RHEL out the door and making RHEL into the thing that Red Hat's customers buy.

That being said, what exactly is this upstream-first thing I'm talking about? The short way to put it is that there is a renewed interest in, and a push toward, getting a subset of the test cases we have for RHEL, in particular the automated test cases, upstream into Fedora. The focus is not on all the test cases, but on the lower-level things: package-level functionality and some of the integration tests. The idea is to move those test cases upstream, in this case to Fedora, and end up with tests that work in both places.

As far as motivation, I would hope this is somewhat obvious: the earlier a bug gets found and fixed, the cheaper it is. What Red Hat gets out of this is testing being run further upstream, so we avoid the situation where RHEL is branched and all of a sudden we find a bug that could have been fixed two years ago.
Otherwise you have to go through the process of writing the patch, submitting it upstream, rebasing that patch onto the version you had, and then getting it out through the RHEL release process. It benefits Red Hat to get those things found upstream, and at the same time it benefits Fedora, because we then get more test cases and more testing being done, with the idea of improving the quality of what we have as Fedora. So it seems like a win-win; I would assert it is a win-win for both sides.

But I do want to say that this is not Red Hat forcing test cases on Fedora. Fedora is still a community distribution, and it has the final say. This is not Red Hat coming in and saying "you will do all of these things"; the negotiation of which tests, in what form, end up in dist-git is in a lot of these cases still up to the individual package maintainers.

Just as a quick summary: new test cases are coming, and a few of them have started to show up. We have a somewhat temporary Pagure instance where some of the test cases from inside Red Hat are landing, and eventually the destination of those is going to be dist-git. If you are a maintainer, you will likely be seeing pull requests and conversations started in the very near future about getting test cases moved into dist-git, and that's where this is coming from. It's a win-win: Fedora gets more and better testing, and Red Hat gets bugs fixed quicker.

As this is a Q&A session, are there any questions? Ian?

How many tests are we talking about?

I don't know the number of tests. The initial focus is around the Atomic Host CI initiative, so the minimum we want is one test case for every package that makes up Atomic Host. There will be more than that coming, but that is the shorter-term definitive goal that has been set. And I forgot to repeat the first question.
The question was whether I can give some examples of the test cases that we want to move from Red Hat to Fedora. I can point you to where those are, but to be honest, I'm doing more of the facilitation than the actual moving of test cases. Let's see if I can... I probably should have planned to have a browser window open. This is going to be fun. No, there is no stand; I can't do that. Are you trying to tell me my whole presentation isn't going to show? No, nobody uses this network thing.

So this is the Pagure instance that we have, and this is where things are landing. We have repositories that line up with the packages they are associated with, so I'm just going to go to one of the ones that I know was first.

Is there a test case associated with every repository? Sorry, the question was whether there's a test case associated with every repository. That is the eventual hope. I imagine the process is that people create the repository and then move stuff into it. For example, this is gzip, and it has a simple test. I think this may have been one of the examples; these are all using the Standard Test Roles, and the aim is to have these running in CI as that system matures and increases in scope. Mike?

You said it would be up to the maintainers whether they accept tests that are submitted. Is that process going to go through Pagure? Is it going to be a pull request process? The question was asking for clarification on what I had said about the maintainers having the final say on whether the test cases are accepted, and what that process is going to look like.
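For reference, a test written against the Standard Test Roles is driven by a tests/tests.yml playbook. A minimal sketch in that style looks roughly like this; the package name and test script here are illustrative assumptions, not the actual contents of the repository shown in the talk:

```yaml
# Hypothetical tests/tests.yml using the standard-test-basic role.
# The script name ./test_simple.sh is an invented placeholder.
- hosts: localhost
  roles:
  - role: standard-test-basic
    tags:
    - classic
    required_packages:
    - gzip
    tests:
    - simple:
        dir: .
        run: ./test_simple.sh
```

The `classic` tag marks the test as runnable on a conventional installed system, as opposed to tags for container or Atomic Host test subjects.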
To be honest, I think it's going to depend on the situation, because there are going to be packages where the testers already know the packages, and it may be easier for them to just submit patches. But I imagine a lot of it is going to be via pull request, because that's the sanest thing I can think of. That is how I imagine things are going to work, yes.

Would these be pull requests on the src.fedoraproject.org Pagure instance, and how does that relate to there being two of these instances? It's very confusing to me. The question was where those pull requests would be, and whether they would end up on src.fedoraproject.org. The upstream-first instance is meant as a staging area; we needed to start somewhere. A lot of the test cases were not yet in a usable state; if someone had submitted a pull request at that time, it would have been rejected because the tests weren't ready. The reason we stood up a second Pagure instance is that people are familiar with the interface, it was an easy resource for us to set up, and in case something gets pushed upstream that is not supposed to be pushed upstream, we can delete repositories. These are meant to be temporary, so we can delete stuff. I do not, and will not, go mucking around in the bowels of the dist-git repositories; that I'm not doing. This instance I can mess with.

So the workflow will be pull requests, and the people submitting tests will create forks on src.fedoraproject.org, pull the tests in, and then create a PR from there, or something along those lines. The next question was: as a packager, do you need to know about this instance? In general, no. We're still trying to figure out, or I'm trying to put together, a way to run these tests automatically, so that it isn't just "hey, here's a pull request, you should trust that these tests are doing something interesting." They're doing what they're supposed to be doing, and it will help you.
But the goal is to get them running in a way that someone can make a fork on the dist-git Pagure instance, add the tests, open the pull request, and say: here are these tests, here are the logs of them running against the last build of whatever package we're talking about, here are the results, here's what things are going to be doing. So you may see this particular Pagure instance, but it is mostly the people porting test cases who are going to be using it directly. Does that answer your question? Yes. Okay.

In the context of that question, how does this interact with arbitrary branches? The question was, in the context of how this process is going to work, how does it intersect with arbitrary branching? I'm going to say it doesn't, really. We don't have enforced branching. If people have branches for F27, F26, that kind of stuff, they made them on their own. That is something that will need to be dealt with on a package-by-package basis. Am I answering your question, or am I not understanding what you're asking?

Well, the prototypical example is httpd 2.4 and 2.6, which have their own branches that follow upstream. If I have a 2.4 branch and I request a 2.6 branch, does it inherit the tests from the 2.4 branch, or do you start from scratch and have to move things over? There's not a logical branching structure like the F25, F26, F27 sequence that always goes forward. So that was a clarification, trying to figure out how this is going to play with the arbitrary branching that is coming; for F28, isn't it, or is it 27? Now that 27 is branched, that's where the idea is going to land. But how does the inheritance work? If you have httpd 2.4 and 2.6, when you create the 2.6 branch, do the test cases get inherited, those kinds of things. Am I restating your question correctly? I want to answer two different questions.
The first answer is that this particular Pagure instance doesn't care about arbitrary branching. That's not what it's for, and it is not going to be around all that long. Famous last words, I know. The idea right now is to have the test cases in dist-git so that they live alongside the code; the thing that's commonly said is that we want the test cases to live alongside the code. Whether or not you consider spec files to be code in general, in this particular case I definitely do. So however the branching is set up, and however that is done on a package-by-package basis, is going to affect how the test cases work between branches. Am I answering your question, or just throwing words out?

I think we need to think through some details there. Yep, there are details that need to be thought through, and I figure it's going to be one of those things: just because things make sense now, change is inevitable and an eventuality.

As a baseline assumption, what happens when you cut a new branch, say for F27? Is there a process for populating that test directory? There are some logistics there. Well, the branches are still going to happen in Git, no? When you create a branch in Git, that stuff is all going to be there at the time you create it. So regardless of whether that's what we want, when we create that branch within the Git repository, those tests are all going to be there. Or am I misunderstanding? We need to talk about that one.

So I have a request rather than a question. I like numbers for things.
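The point about Git branches carrying the tests along can be seen with a small local sketch; the repository name, branch name, and file contents here are made up for illustration:

```shell
# Create a throwaway repo that mimics a dist-git layout with a tests/ directory.
mkdir demo-pkg && cd demo-pkg
git init -q
mkdir tests
echo "- hosts: localhost" > tests/tests.yml
git add tests
git -c user.email=demo@example.com -c user.name=Demo commit -q -m "add tests"

# Cutting a new branch carries tests/ along automatically, no extra step needed.
git branch f27
git checkout -q f27
ls tests
```

Because a branch starts from the commit it was cut from, anything committed under tests/ before branching shows up on the new branch without any separate population step; divergence only appears once the branches are edited independently.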
It would be really helpful for me to have a weekly or monthly count of the number of tests: how many pull requests have been opened, how many pull requests have been accepted, how many pull requests are just sitting there not accepted, so maybe somebody can take action on those, and then how many tests get in or are created without pull requests. Just being able to track those different things so that I can see how well it's progressing, because it seems like a pretty exciting project, and it's something it would be nice to show off.

The request was that Matt wants numbers and graphs showing the status, in the sense of how far along we are. He would also like to know if we have a lot of developers who are just not accepting the PRs, so that it can bubble up as an issue and someone can work on it; or maybe it won't be an issue at all, I don't know. He was talking about tracking the process.

To be honest, I still haven't figured out how to track that part of it; if anyone has ideas, I am very much willing to listen. One thing I was hoping to have ready for today does not quite work yet; I'm still running into some deployment problems that I was hoping to have fixed by this morning, but they're not. Basically, it tracks how many repositories there are and uses some heuristics to figure out: do they have test cases, do they not have test cases, have those test cases been moved? When it's working, it will have a list of the things that are there and, of the packages we are tracking right now, what percentage of them have test cases in the upstream-first forge versus don't have anything.

Can you make it refresh every day? We can try; I think we'll see. The person who wrote this is no longer around, so I haven't gotten that far into the code yet, but we've also been asked to make it send emails at various points, so that's one of the thoughts.
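The kind of heuristic the tracker uses can be sketched in a few lines. This is a minimal illustration, assuming the package repositories are already cloned locally under one directory and using "has a tests/tests.yml" as a rough stand-in for "has test cases"; the function name and layout are invented, not the actual tracker code:

```python
from pathlib import Path

def coverage_report(checkout_root: str) -> tuple[int, int, float]:
    """Count how many cloned package repos under checkout_root contain a
    tests/tests.yml file, and report the percentage that do.

    Presence of tests/tests.yml is a crude heuristic for 'has test cases';
    the real tracker would likely query the Pagure API instead of local clones.
    """
    repos = [d for d in Path(checkout_root).iterdir() if d.is_dir()]
    with_tests = [d for d in repos if (d / "tests" / "tests.yml").is_file()]
    total = len(repos)
    pct = 100.0 * len(with_tests) / total if total else 0.0
    return total, len(with_tests), pct
```

Run daily, the three returned numbers are enough to plot the trend line Matt asked for: total tracked packages, packages with tests, and coverage percentage.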
Yeah, we can talk afterwards about what is going to be most helpful to track, but in all seriousness, if someone has an idea of how we can track the process of getting test cases into dist-git, I don't have any good ideas, so I would love to hear them.

Is this initiative focused on tests, or CI, for packages as they are built, or on compose testing, where you qualify composes by putting all the tests together and running them against a fresh nightly build? The question was whether the emphasis is on stuff that is specific to packages, or whether we're including things that are more for the qualification of composes and things that are at a less granular level than packages. Is that correct? For now the focus is very much packages. It's one of those things where the scope could become very large, and we just want to get started somewhere, and this is the easiest thing for us to wrap our heads around. I imagine there can be a discussion at a later date about whether more compose-level things are going to move as well, but for now the focus is definitely package-specific.

Thanks for this talk, first of all. You said that this will be part of a larger CI process. To be a part of that CI process, do we have any guidelines for people who are preparing the test cases? The one you showed was in YAML. Any guidelines for that? The question was about the CI initiative that I mentioned and whether there are any guidelines about how to write things for that system. Yes: on the CI page in the Fedora wiki you'll find links where they talk about the format, what the various responsibilities are, and what you can expect as someone who's writing tests for that system. Does that answer your question? Yes.

Any other questions? Thank you very much for coming, and that is going to be it.