Okay, everybody, it's time for the next session. Just a reminder to rate the talks and also to ask questions, because you can get scarves for questions. So let's welcome our next speaker, Tim Flink. Thank you. My name is Tim Flink, and I'm going to be talking about upstream first testing. A little bit about me before I get started: I work for Red Hat, and my job is to test Fedora, so I end up on both sides of testing, upstream and downstream. Here's what I want to talk about today: what exactly is this whole idea of upstream first testing? Why should I care, and why should you care? What's available to help people test in Fedora, and how can people actually help test Fedora? Then I'll leave some time at the end for conclusions and questions. On this whole upstream first testing: as a downstream project, you can do all of the testing, keep it downstream, and just look at yourself. For example, in Fedora, if we did all of the GNOME testing and never reported anything upstream, there's nothing stopping us from doing that, but it seems counterproductive at best. Taking some of the testing upstream instead of keeping it to yourself does have benefits. The key emphasis here is that it's upstream first, not upstream everything. Keeping with the example of the relationship between Fedora and GNOME: if we tried to send all of our Anaconda testing upstream, well, that may be the wrong upstream for the example, but either way, it doesn't always make sense, so it's not everything, all the time. There are always going to be some things which don't make sense to push upstream. That being said, among the benefits you can get from pushing some of that testing upstream is a better relationship with that upstream project.
You can catch important bugs earlier and, in general, improve the quality of the upstream, which, if you're depending on it, will trickle down to you eventually. So why should I care? Using an image that I'm sure no one in this room has ever seen before: test all of the things. Why should I care as a downstream? More testing upstream means issues get found faster and fixed faster; everything gets upstream earlier, and that eventually means less work for me as the downstream. So it's faster, it improves my product, and it improves their project. And as an upstream, why do I care about upstream first testing? It increases the testing, and it increases participation in my project, and more testing means finding more bugs and more fixes. So the benefits are similar; sometimes the target of those benefits changes, but in general both groups really do benefit. Getting on to what's available in Fedora to help push some of these tests upstream, the three things I want to go over are OpenQA, Beaker, and Taskotron. First, OpenQA. There have been several presentations already, and there was a workshop this morning on OpenQA. It came out of openSUSE, where it's used very heavily, and we've recently started using it in Fedora, where it tests nightly Rawhide composes to make sure that nothing has broken in the last day or so. OpenQA is great for graphical testing. The whole idea of OpenQA is basically: take a screenshot; here is a picture I'm looking for; if I can find that picture, click on it, or click on it and type something; and it goes through an entire installation process like that. Another way it was put to me is that it's great for tests where you can't go in the back door.
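The take-a-screenshot, find-the-picture, click-on-it idea described above can be sketched in a few lines. This is not OpenQA's actual code (OpenQA does fuzzy matching on real images, with "needles" defined as JSON metadata); it's a minimal exact-match version over plain pixel grids, just to illustrate the core loop.

```python
# Illustrative sketch of OpenQA's core idea: find a known "needle"
# image inside a screenshot, then act at that location. Not OpenQA's
# real implementation -- a minimal exact-match version on pixel grids.

def find_needle(screen, needle):
    """Return (row, col) of the needle's top-left corner, or None."""
    sh, sw = len(screen), len(screen[0])
    nh, nw = len(needle), len(needle[0])
    for r in range(sh - nh + 1):
        for c in range(sw - nw + 1):
            if all(screen[r + i][c + j] == needle[i][j]
                   for i in range(nh) for j in range(nw)):
                return (r, c)
    return None

# A 4x4 "screenshot" containing a 2x2 "button" (the needle) at row 1, col 2.
screen = [
    [0, 0, 0, 0],
    [0, 0, 5, 6],
    [0, 0, 7, 8],
    [0, 0, 0, 0],
]
needle = [[5, 6], [7, 8]]

pos = find_needle(screen, needle)
print(pos)  # (1, 2) -- a real test driver would now click at this position
```

In real OpenQA the comparison is tolerant of small pixel differences, which is exactly why font or toolkit changes can invalidate needles, as discussed later in the talk.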
There are certain times when you can't modify the thing under test to put in some sort of test interface; the only way to really go at it is to approach it the way a person would. That's one of the things OpenQA does really well. The tests can be difficult to maintain, but sometimes there's really no other way to do it. The execution model, in general, is: you schedule a job against an artifact; that artifact can be an ISO or a kernel/initrd pair. You schedule against it and give it a test, which is delegated to a worker. The worker does its work, hence it being called a worker, and the graphical output from the running job is compared to pre-rendered screenshots. Again, it's looking at what the screen is supposed to look like: is there a component in there that I'm looking for? Click on it, move on, and make sure that what you're getting back is what you expected. Next, Beaker. Given the audience, I suspect most people here have heard of Beaker before. Beaker is a test automation and lab management system written and maintained by Red Hat, and it's used very extensively to test RHEL. Fedora does have a Beaker instance with both virtual machine clients and hardware clients, and it is something we'd definitely like to see more use of. The question was what the relationship is between lab management and test automation. The best way I can think of to answer that is: whenever you involve hardware with automation, it's going to be difficult; things are going to go wrong. At some point you have a pool of hardware that needs to be managed. You need to know: how many of them have which generation of Intel processor? How many of them have 32 GB or more of RAM?
How many of them have ARMv7 versus AArch64? The overlap, I think, is there because you can tell Beaker: I want to run this job against AArch64; I want to run this job against a machine in a specific lab; and so on. By putting the lab management together with the test automation, you can match the test you want to run against a specific piece of hardware that is also being managed by the automation system. Does that answer your question? I suspect, well, that gets into a whole different issue: pretty much everyone has their own homegrown automation. I don't know if I have a better answer as to why Beaker has both instead of splitting them up, because I'm pretty much a consumer of Beaker; I'm aware of what it does, but I don't know as much about its back story. But since I'm supposed to be giving these away anyway, thank you for asking. I think I've dodged your question sufficiently, or at least its more pointed parts; I hope it was at least some of an answer. There are Beaker folks here you can ask; the guy back there in the blue shirt would be a better person to ask. Blue, green, that's more blue. I think of Beaker as something that is really good for anything that needs bare metal. One of the things it was originally designed for was to give you bare metal hardware with a known setup so you can actually run your test. That being said, when you're using bare metal especially, because Beaker does that install every time, a job that takes five minutes is kind of weird: you end up with 20 or 30 minutes of setup, then you run for five minutes, grab your results, and it's done. You can do it; it just seems a bit odd. Getting into the execution model for Beaker: you submit a job; a client is allocated to that execution; on the client, the operating system is installed.
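The kind of submission just described, pick an architecture and a class of machine, then run tasks against it, is expressed in Beaker as a job XML document. The fragment below is a sketch from memory, not a verified schema: element names may differ across Beaker versions, and the distro values and the second task name are placeholders (`/distribution/install` is Beaker's standard install-check task).

```xml
<!-- Hedged sketch of a Beaker job definition: request an AArch64
     machine with at least 32 GB of RAM, install a distro, run tasks.
     Element names from memory; distro values and the second task
     name are hypothetical. -->
<job>
  <whiteboard>Example: smoke test on AArch64 bare metal</whiteboard>
  <recipeSet>
    <recipe>
      <distroRequires>
        <and>
          <distro_name op="=" value="Fedora-23"/>
          <distro_arch op="=" value="aarch64"/>
        </and>
      </distroRequires>
      <hostRequires>
        <arch op="=" value="aarch64"/>
        <memory op="&gt;=" value="32768"/>
      </hostRequires>
      <task name="/distribution/install" role="STANDALONE"/>
      <task name="/examples/my-smoke-test" role="STANDALONE"/>
    </recipe>
  </recipeSet>
</job>
```

A job like this would typically be submitted with the `bkr` command-line client, which is also how Beaker can be driven by scripts rather than the web UI.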
Originally that was done with Kickstart; there are also ways to get, for example, OpenStack clients that have the operating system pre-installed. Then your job is run, logs are collected, and the machine is returned to the general pool. That's pretty much how it goes. One thing I notice about Beaker is that it was not originally designed to be triggered automatically. One of its main user interfaces expects me to go in and say: all right, I want to run this job against this release of the operating system, and I want this kind of machine; I push the button and come back to look at it later. There are ways you can work with Beaker so that it goes beyond that, but that's one part of its design that needs to be taken care of, I guess. Last, Taskotron. Taskotron is a system, not one single thing. It's written and maintained by my team in Fedora, and at its core is the libtaskotron runner, which is responsible for most of the guts. What is it good for? I have "everything" with a question mark on the slide, because I honestly don't believe there is a single automation system out there that is good at everything. The reason I have it up there is that one of the design ideas behind Taskotron is to delegate the specialized stuff off to things that know what they're doing. If we need to do hardware tests, the idea is to delegate to Beaker; if we want to do install tests, delegate to OpenQA. There's less of trying to do everything, and more of acknowledging that some other piece of software is going to have that expertise; let's take advantage of that and just provide a place to run things and an interface to put the results into, so that other people can see what has happened. Similarly, going into the execution model: it's triggered on fedmsg messages.
Basically anything that can create a fedmsg message in the Fedora infrastructure, be that Koji builds, Bodhi updates at different parts of that process, compose completion triggers, or even other tasks, can start off something in Taskotron. The way we have it set up now, it will create a virtual machine, so that no matter what happens in that virtual machine, the client is always going to be in the same state for the next person. You can remove the file system; you can install whatever you want; you can destroy whatever you want. If you destroy the whole virtual machine, your task may not complete, but it's not going to affect the next person. Once that virtual machine is created, Taskotron assumes that every task is its own Git repository, so that the person who wrote it can maintain it by just pushing to Git, without worrying about anything beyond that. It clones the repository, executes the task described in the repository, and then takes the results and reports them to a central database. Again, a bit more emphasis: it's designed to coordinate those automated tasks, not necessarily to know how to run everything itself. It's also relatively language agnostic. As much as I like Python, there are people who hate Python as much as someone else loves, I don't know, Perl or Ruby or shell or OCaml. I may not like that, but we don't want to say you have to write everything in Python. So it's relatively language agnostic: as long as whatever is running that task returns something we can understand, be that xUnit XML or a YAML format that we can work with, it doesn't matter whether you write it in Perl, in OCaml, or in C. And the runner itself is just a single SRPM, split into modular RPMs. You can run it as easily on your local machine as if it were being submitted to the system we have set up.
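The "YAML format that we can work with" mentioned above looked roughly like the sketch below: one or more results, each naming the item under test, an outcome, and pointers to logs. The field names here are illustrative, from my recollection of libtaskotron's result format, not an exact schema, and the build, check name, and log path are hypothetical.

```yaml
# Hedged sketch of a Taskotron-style YAML result, as described in the
# talk. Field names are illustrative, not a verified libtaskotron schema.
results:
  - item: httpd-2.4.18-1.fc24          # hypothetical Koji build under test
    type: koji_build
    outcome: PASSED
    checkname: examplecheck.basic-auth  # hypothetical check name
    note: basic auth works
    artifact: /var/tmp/task/basic-auth.log
```

A task written in any language just has to emit something in this shape (or xUnit XML) for the runner to forward its results to the central database.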
I just want to emphasize that, because I've used systems before where you have to have a server component and a client component, and it's two different virtual machines that take four hours to set up. That's not what we're trying to do; the idea is to make running it on my local laptop as close to running it in production as we can reasonably make it. Getting on to how you can help test Fedora: try to get appropriate tests released. For example, if you're a Red Hat employee and you're working on something that could be appropriate to release, try to get that released; talk to management and see if that could happen. Even if it can't, help work with upstreams to get things tested, to get your project tested; well, that came out wrong, just work with upstreams. One of the reasons we have a Beaker system is that the Beaker folks helped us put it together and are helping us administer it. The last thing is a bit of a shameless plug: come to my automation workshop at two o'clock, where I'll go into much more detail on how to use Taskotron to write automated tasks. All right, so, as much as I should know better by now than to do live demos, I'm going to show off a few things. One example is this: a repository of Beaker tests that was contributed by people who have internet. Is it going to load? Maybe. These are a subset of the tests that one of the QE teams within Red Hat uses to test Anaconda. It's not everything they run, and it's not the most sophisticated of what they run, but it is something we can use in Fedora that will probably make their lives easier: if we find things while we're in Anaconda's development cycle, instead of them finding things after RHEL has been forked and that patch has to go back into multiple places. And then just a few examples of the systems. This is OpenQA.
This is just one of the last things that we ran. Not a whole lot to say; I'm worried to click on any of these links because things are taking so long to load. Same thing with Beaker; let's see what happens. Be adventurous. Alrighty then. Okay, so instead of that, since I had kind of hoped to do a demo, but considering how bad my network connection is right now... let's see. Yes, I tried one; I don't have a network port on my laptop, well, I have one, but it's in my hotel room, and I think it would take a little long to get over there. At this point I'm thinking I'll just show this: the standard output from one of the things I was going to demo. Like I said, you'd think by now I'd have done enough presentations to know better than to even try. Okay. Again, this is the demo I was going to do, just to demonstrate how this can work well. What do you mean? Yeah. Well, I wouldn't want to take that too far, because I actually can't use this; I hope you weren't trying to make that point. Oh, I wasn't. They're generally very helpful. The idea with this is: this is Taskotron trying to run some shell scripts. This is a test case that was written for Apache httpd; a Fedora contributor wrote some very simple shell scripts that are meant to run against httpd. This is a very verbose mode. Is this big enough? Can everyone read it, or should I make the font bigger? Okay. So: basic stuff, where it's starting from, when it started. What this does: it's triggered on a build of httpd; it goes and downloads the Koji build, creates a repository from that, and installs httpd and a couple of other dependencies needed for the test.
It runs through those tests and then reports back to a central system. You can see it downloading the things from Koji, creating the repository, and installing the dependencies needed to run the actual tests. And these are then the results: there's something for basic auth, something for PHP, something for serving HTML, serving HTML over SSL, and whether the vhost works. This is how it would be reported to the central result storage system. Unfortunately, again, with the demo, I was expecting that to take longer, because that's honestly just about all I have. So, in conclusion: pushing things upstream, when it makes sense, helps everybody. Thinking of my audience: if you're an employee at Red Hat and you have tests that would make sense to go upstream, please try to get that to happen. Not all of them can, and not all of them should, but it doesn't hurt to ask, I guess. And please take advantage of the resources we have. If you want to use our Beaker installation, or if you want to learn how to use Taskotron, come find us, come find me, because the whole reason for having those things is so people can use them. And that's what I have; like I said, I thought the demo would take longer. So, on to questions. The question was whether I could talk about dist-git for Taskotron tasks. That is going to be our next major feature. Dist-git is the system that Fedora, and as far as I know RHEL, uses to keep track of spec files when things are built. Our plan is to make it available for all packagers in Fedora to push tests into dist-git, so that whenever there is a change in Git, or whenever there is a Koji build of that package, Taskotron will go in, read which things need to be run, run them, and then report them to the central database.
At this point that's about all I have, because we're just getting started on the actual implementation, so I don't have the details yet, but that is the gist of what we're planning. Does that answer your question? The next question was: what about dependencies? What happens when the dependencies of something change? That is a use case we have not gotten to yet, to be honest. I'm open to the idea, but we just have not gotten there yet. The next question was about OpenQA, and how much overhead there is in the needles and the other things that need to be maintained within OpenQA. I can only answer that from a Fedora perspective. From what I understand, the openSUSE folks have a different way of doing things than we do, so for them it tends not to be as disruptive. In my opinion, this kind of testing is great for smoke tests. In practice, we tend to have to redo things: if there's a new GTK build, things are off by a couple of pixels and all the visual comparisons fail; if there's a change in one of the fonts used in Anaconda, we have to redo all the pictures. In practice, that's every couple of weeks, and it takes about a couple of hours, but that's for the number of tests that we have. Was that your question? Okay. The next question was whether I had any estimate of how many of the major components are tested in the way I'm talking about. I don't. At this point, I think most of the upstreams are doing their own testing, and then things drift into Fedora; there's little in the way of formal arrangements. In Fedora we have QEMU, we have KVM, those kinds of things, and the kernel folks do a lot of their own testing in Fedora.
But as far as this idea of taking tests from the upstreams of Fedora, or having them take ours, whichever way you phrase it, it's not very many, certainly not as many as I would like. I would very much like to see people use Fedora, but at a certain point I do understand that productivity can be affected. So: please use Fedora, and if you have any ideas on which areas need testing so we can look into them, or especially if you have ideas on where we can find resources to run those things, that would certainly be appreciated. Yeah, that has just started in the last month or so; the features we needed in order to really accept tests from other groups went out two weeks ago. So it's relatively new, and I like his answer better than mine. Any other questions? I'm going to try rephrasing your question, and please tell me if I've misunderstood; there are two parts. The first part was whether the results from tests are machine parsable. I'm not exactly sure what you're asking, so I'll try to answer, and you can let me know if I'm getting close. As far as the things that are running themselves, whether they're machine parsable is completely up to the people writing them, and we don't force people to report into our systems. But it's very easy to put results into what we call ResultsDB, and ResultsDB is just a RESTful interface to a database that is open for pretty much everyone to query. So if you know what you're looking for, you can ask: has rpmgrill been run against the latest httpd? You can use the RESTful interface to search for that. Is that what you were asking? Okay, kind of, not really. The task itself is responsible for making sure that ResultsDB can parse its output.
When the task is run, we store the artifacts from it, so if it generates an HTML report or some other artifact that explains what's going on, that is stored, and you can find links to it through our result storage system. But the result storage system has a single format, a single layout, and it's relatively simple on purpose: this is what it was run against; this is when it was run; here's a link to the logs; here's a link to the original job; and here are some notes. That's queryable. But as far as getting from the output of nose or pytest, just to name things off the top of my head, into that online database: it's the responsibility of whoever writes the task to make sure that translation happens. Does that answer your question? The second half of your question, which I'm still a little unclear on, was about the difference between RPMs and tests; can you try that again? Okay. That is not something we have yet. One of the things I want to get done soon is using libabigail; I think they recently released a tool, abipkgdiff, if I have the name close to right. That is one thing I'd like to see run on every package, so we have a way of finding out that the ABI changed when it wasn't supposed to. Or run rpmgrill, which is not quite what I think you're asking about; abipkgdiff sounds like the one closest to what you're asking about. It's not there yet, but I'm hoping it will be relatively soon, because the components are all there: as far as I know, all the features are in Taskotron, and the packages are in the Fedora repositories. It's just a matter of getting someone, either someone on my team or a volunteer, to sit down and write the actual task for it, make sure it gets scheduled, and then store the results.
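The kind of ResultsDB query described above (has rpmgrill been run against the latest httpd?) can be sketched as follows. The host, endpoint path, and parameter names are assumptions based on ResultsDB being a simple REST-over-HTTP service, not a verified API contract; only URL construction and response parsing are shown, so no network access is involved, and the actual HTTP GET is left out.

```python
# Sketch of querying a ResultsDB-style RESTful interface, as described
# in the talk. Host, endpoint path, and parameter names are assumptions,
# not a verified ResultsDB schema.
import json
from urllib.parse import urlencode

BASE = "https://taskotron.example.org/resultsdb_api/api/v2.0"  # hypothetical host

def build_results_url(testcase, item):
    """Build a query URL for results of one testcase against one item."""
    params = urlencode({"testcases": testcase, "item": item})
    return f"{BASE}/results?{params}"

def latest_outcome(payload):
    """Pick the outcome of the first (assumed most recent) result, if any."""
    data = json.loads(payload)
    results = data.get("data", [])
    return results[0]["outcome"] if results else None

url = build_results_url("dist.rpmgrill", "httpd-2.4.18-1.fc24")
print(url)

# A sample response body shaped like the simple layout described above.
sample = json.dumps({"data": [{"outcome": "PASSED",
                               "item": "httpd-2.4.18-1.fc24"}]})
print(latest_outcome(sample))  # PASSED
```

In practice you would fetch `url` with any HTTP client and feed the response body to `latest_outcome`; the point is that the single, simple result layout makes this kind of query trivial to automate.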
Any other questions? Mm-hmm. Okay. The question was about the earlier point on the dist-git style tasks: since we're putting them into the repositories used for packages' spec files, will only packagers be able to add these tests, or will other people be able to as well? The way I'll answer that is: at the very least, packagers will be able to. Beyond that, I think it comes down to our exact implementation. That's something in the back of my mind, and I think it's a reason not to put the tests directly into the same Git repository. In my mind it's going to be a balance: making it look like the tests are in that repository, so that maintainers and testers only have one place to put things, so that a packager doesn't have to say, okay, I've got my Git repository with the different branches, my spec file, my patches, my sources file, and then if I want tests, I have to go over here and clone this other thing and put the tests in there. I really want it to look to the packager like it's all in the same repository, but I do want other people to have access, because the packager isn't always going to be the best person to test a package. So the short answer is: I don't know, but I really want more than just the packager to have access to push tests. Any other questions? All righty, with five minutes left, I guess that's it. Thank you very much. Okay, so if you asked a question and want a scarf, you can come by and grab one. Oh yeah, I was answering questions without thinking about scarves; does anyone want a scarf? Oh, he's got one.