Hi everybody. I'm Adam Williamson. I've been working on Fedora QA for quite a long time now, since about 2009. And this is Miro Vadkerti, who is helping out with the Fedora CI parts of this talk. I'm going to get right into it, because the slide deck is pretty long and I want to make sure we get through it in the time. Miro, do you want to do a quick personal introduction?

Yeah, so I'm Miro. I work mostly on RHEL CI, but we have some infrastructure also in Fedora, and I'm a Fedora contributor. We provide some of the infrastructure for running the testing, the Testing Farm service. That's it.

So what we're aiming to cover today is not a deep dive into the technical details of the systems. What I want to talk about is: what are we actually testing? What is this achieving? And for people who are Fedora package maintainers, how do you interact with it, and how is it helping you? So let's get going.

A little bit of quick background. How do we test an operating system? This is a huge topic. It's a challenge deciding what to test, when to test it, and what's going to be most useful, especially with the resource constraints we have and the time cycles we operate on. There are four levels right now at which you can really do testing within Fedora. There's package source control: Fedora packages are stored in a git repo (dist-git), and you can do testing on things that happen within that repo — pull requests, commits, anything you want to do in there. You can do testing at the level where builds happen: when you do a single package build in Koji, we can trigger tests on that. You can do testing at the level of the update, which is the abstraction we have for getting feedback on one or more package builds before they actually go into the distribution. And we can do testing of composes, which is when we build the whole of Fedora together and actually produce the images — installer images, containers, disk images, whatever — that we ship to users.

So, the background: manual testing, ye olde days. This covers up to about 2015, which will come up on a later slide. Back before we started getting better at this, we basically didn't do any testing at the dist-git level. Maintainers who were really keen might do some checking themselves — maybe they had some little script they ran — but there was nothing automated, and Fedora QA was not really doing anything at that level at all. For build testing, all you could really do was have a %check section in your spec file, which many packages do and is a good thing to have, but it's not really sufficient; usually it just runs the unit tests from upstream. Update testing: the whole point of Bodhi originally was to allow for testing and feedback on updates so that packages didn't just randomly go into the distribution and break stuff, but before we started doing automated testing, it was really just down to people doing it. So maybe if your package is really popular — if it's Firefox — you're going to get a lot of feedback. If your package is not that visible to users, even if it's really important, it may never get any feedback at all. It was very haphazard: maybe this build gets tested but that build doesn't, and what actually got tested from build to build was again just down to people. So it was very varied, very inconsistent.
It's also very difficult as a manual tester to test whether an update breaks other things, especially whether it breaks Fedora composes, so we had almost no testing of that. Compose testing was the thing we mostly focused on, I guess, and we were getting pretty decent at doing it manually. We had a whole process for it, which I'll cover soon, but it was very time-intensive. For anyone who worked on Fedora QA back then, my life was just booting up six VMs every day, doing install tests, and wishing I was dead. It was not much fun — incredibly boring — and even with the amount of work we put into it, as humans you can't test every compose. We would test some snapshots every cycle and then the release candidates, so we'd test maybe 15 or 20 composes a cycle, but there are hundreds each cycle, so we were not covering very much at all.

A little history of how validation testing worked — I'm a history nerd, so I love this. On the left here is the earliest recorded formal validation testing of Fedora I can find, which is Fedora Core 5. It's a wiki table, there were 26 tests in it, and two people did almost all of them. (As a "test" I'm just counting a row; I'm not counting the environments, because counting those is harder.) Then up to Fedora 21, which was about eight years later, we were still using wiki tables, but they're much shinier and more color-coded, and we had got up to 138 tests. Fedora 21 was the last release before we really started doing automated testing, so that was as far as we got with humans doing all the testing. That's about six times as much work as we were doing for Fedora Core 5, but it still wasn't enough. We weren't covering everything, we were missing bugs, we were taking too long to catch things — we would often only find something when we got to Beta, but it had been broken three months earlier, and tracing that back was a nightmare. Just as a note on the test counts, Fedora 39 now has 202 tests, so we've continued to scale the amount of testing we do that way. That was as far as we could really push manual testing; we were at our limit at that point. So we came to the conclusion that we needed automated testing.

What are the automated test systems that exist in Fedora right now? The key two — I'll cover other things very briefly later — are Fedora CI and OpenQA. I want to do a little bit of explanation here. One of the questions we often get is: why are there two automated test systems in Fedora? This has been a process and a history, and there's a lot of nitty-gritty in the background, but we've come to a pretty good understanding of why this is and why it makes sense. The point of Fedora CI is really to provide CI-type workflows, infrastructure, and processes to people operating in the Fedora environment. The people who work on Fedora CI don't wake up every day and ask, "is Fedora broken?" They wake up and ask, "are the tools and processes we're providing to Fedora people working? Are they good tools? How can we make them better?" That's what Fedora CI is about: providing CI services to people working in Fedora. OpenQA is there to do testing of Fedora. OpenQA is literally the automation of all the stuff the Fedora QA team was trying to do manually and really burning out on. That's the point of OpenQA.
OpenQA is there for us to be able to figure out whether Fedora is being its best possible Fedora. It's not a service we provide for other people to write tests of their thing in the Fedora environment. So with Fedora CI, the system is the point: Fedora CI is systems and processes and interconnections, and its job is to provide testing systems for all of you to use. With OpenQA, the system isn't the point. We just use OpenQA because it was a thing that existed and worked for the job. The point of OpenQA is helping us figure out: is Fedora broken? Who broke it? What broke? How do we fix it? OpenQA is specifically not about letting other people come in and run tests of their thing. We thought about going down that road several years ago and we specifically decided against it. You can contribute to OpenQA, but it has to be within the context of what OpenQA is there for. If you want to contribute something which helps us test whether Fedora is great, perfect; but if you want to test your little widget in the Fedora environment, that is what Fedora CI is for. So that's the distinction between the two.

OpenQA was started by the QA team in 2015 as a skunkworks project just to automate our own stuff. Fedora CI started in 2017, and it really came out of — we try to be transparent about the relationship between Red Hat and Fedora — an effort within Red Hat to reduce the distance between Fedora and RHEL when it came to automated testing, because how we do things in Fedora was completely different from how we were doing things in RHEL at the time. Fedora CI is part of a coordinated effort to make it possible to share more things across RHEL, CentOS, and Fedora. So a large part of Fedora CI is making it possible to take tests we have within Red Hat that were never applied to Fedora and apply them to CentOS and Fedora.

Miro will talk about Fedora CI after I've covered OpenQA. I'm going to go into the details of what OpenQA is and what we're doing with it. OpenQA's original strength — what it's really good at and what is really cool — is that it tests like a human. I'm not going to go into details of how it does this, but OpenQA is effectively running a virtual machine, looking at what's on the screen, looking for little areas of the screen that it expects to see, and clicking on them. It can also type things, so it can type commands and get the results out. So it's great for automating manual QA testing, because that's what we did most of the time: we spun up virtual machines and clicked on things. It's also great because it doesn't care about the operating system at all. You can use it to test Windows, Linux; you can use it to test firmware interfaces. All it needs is a VNC connection to the computer and a way to click on things. So it is really appropriate to what we're doing. And as I said, what we're trying to achieve with OpenQA is high-level functional testing. It is not there for unit testing. It's there to tell us: is Fedora broken? Is something that we care about — something that is part of what we want Fedora to be for people in the world — broken? That's what we're doing with it.

So what do we actually test? We run a standard set of tests — I think there's about 200 of them — on every Fedora compose. We also run a subset on CoreOS composes, IoT composes, and Cloud composes.
Anything that's a compose in Fedora gets tests run on it. We also run a subset of tests, plus some extra ones — about 50 to 60 — on critical path updates for every branch of Fedora: stable releases, Branched, and Rawhide. Any critical path update gets the tests run on it. If anyone doesn't know what critical path is, it's really just the set of packages that are most important, and this limitation is purely resource-based. I would love to test every update that goes out, but we just don't have the capacity for it.

This is what the OpenQA web UI looks like. If you're not part of the Fedora QA team you don't actually need to look at this very much, but I wanted to make this talk practical and really show what things look like. On the left-hand side here — this is just a subset — is a view of some of the tests that were run on a compose from a few days ago. These are most of the tests that ran on the Silverblue installer image, and down at the bottom you can see one of them failed, the rpm-ostree rebase test, which does what it sounds like: it tests rebasing an rpm-ostree-type install. On the right here — these are two different screens, I'm squishing them onto one slide for compactness — is what that failed test actually looks like, and you can see there's a bunch of screenshots. When one has a green outline, OpenQA saw what it wanted, or a command did what it was supposed to. When one has a red outline, it didn't see what it was expecting, and you can click on each of these and get a full view of the screen. You can see at the top here it's in emergency maintenance mode when it was expecting a booted system. So that's what you get with the OpenQA web UI, and this is a real failure: you can't rebase from Rawhide to 38 at the moment because of, I think, an SELinux thing.

What do we actually cover with the OpenQA compose tests? We cover 75% of the validation test suite, which is the set of tests that need to pass for a Fedora release to go out. The things that aren't covered are things that are very difficult to automate, or things we explicitly don't want to automate. We always want to have a human in the loop, so we always want a human to test that the images actually boot and install on a real computer, a real container, a real virtual machine, just in case the automated testing system missed something. I never want to release an image that no human has actually tried to boot. So we can't really do 100%, but we do our best. We also cover some stuff that's not release-blocking: we test Silverblue, which isn't technically release-blocking, but it's very important, so we want to test it.

A lot of the tests are install tests — we really exercise the installer a lot, and we install from a bunch of different images. We test different package sets: can you install GNOME, KDE, minimal, a bunch of different ones. Different partition layouts: Btrfs, XFS, ext4, LVM, LVM thin provisioning, a whole laundry list of layouts. Languages: we test English, French, Japanese, and Russian to cover different fonts and symbol styles, and also right-to-left, which is very important. We test different firmware types, so we do installs on UEFI and BIOS to make sure they both work. But we don't just do install tests. OpenQA started out mainly doing install tests, but these days we do a lot more.
The base tests are the core operations for any Fedora install. Can you install a package? Can you update a package? Does logging in work? Does logging out work? Does rebooting work? Is SELinux enforcing — just in case we mess that up some time; it hasn't happened yet, but it might. Can you start, stop, restart, enable, and disable services? All of that is tested. System logging: is logging working, can you get the journal? Upgrade tests: we test upgrades of the main flavors — minimal, KDE, Server, Workstation. We test them N-1 and N-2, so we test, say, 36 to 38 and 37 to 38, both of those. And we also test that a FreeIPA deployment can be upgraded — an entire deployment, server, replica, and client; we upgrade them all and make sure they're still working afterwards. That's one of the more complex tests we have.

Graphical desktop and application tests: we test GNOME and KDE, a lot of their core functionality — the user menu, all the stuff you expect to work on a desktop. We test that every installed application at least starts up; we're not trying to test them all functionally — that's a lot of work — we only test that they don't just crash or fail to launch at all. On GNOME, my colleague Lukas — I don't know if he's here — has been working on writing a bunch of GNOME application tests, so right now we test about 15 GNOME applications in quite a lot of detail. In the Maps application, for instance, it does a bunch of stuff: can you look up a place, can you plot a route? It's real, proper, detailed functional testing of the apps. We test desktop login, which isn't just logging in: can you log out, can you switch users, can you lock the screen, can you unlock the screen, can you reboot from the login menu, that kind of stuff. We test that desktop notifications work, which includes testing that on a live image you don't get update notifications — that's an important one. We test printing, using virtual printers, obviously, and we test updating and upgrading. So that's pretty intensive. All of those tests are run on Silverblue too, and we also run all of them after an upgrade: we upgrade Workstation and then run them all again to make sure they still work on an upgraded system.

We have a bunch of server functionality tests for the key features — FreeIPA, PostgreSQL, Cockpit. We do a bunch of functional testing of those: do they work, do they show what they're supposed to show? So for every single compose, we are testing all of this stuff — every time a nightly compose comes out, all of this gets tested.

Update test coverage: we don't run all of those tests on every update — again, capacity issues — but we run about 50 to 60 tests. This has been refined a little recently, because we were doing things like running all the GNOME tests on KDE updates, which was just idiotic, so I did a lot of work to split the critical path into groups, and that allows us to only run the appropriate tests. So not every update runs every test anymore, but there's a total of about 60. We run the tests I talked about on the last slide — the KDE, Workstation, and Server tests, not all of them, but most of the key ones — plus a subset of the base tests and the desktop tests. The FreeIPA, Cockpit, and database tests I think we run pretty much all of.
And then one thing that's really important and that I'm pretty proud of: for every single critical path update, we're building a network installer image, a GNOME live image, a KDE live image, and a Silverblue installer image — which was a lot of work — and making sure the build works, you can run an install, and the installed system works. What this is really testing is that the update doesn't break the compose. So we know that if we push this update, the compose process is still going to work, which is really key. And again, a key thing to note from earlier on: when we're testing an update to, say, libfoo, we're not testing whether this libfoo is the best libfoo it can possibly be. We're testing: does this libfoo break Fedora? That is always the goal for OpenQA — all of the testing is in the context of making sure Fedora is okay.

I mentioned this already, but the main limitation is really just capacity. I would love to run more tests. I would love to run all the tests on all the updates. I would love to go across arches, but we just don't have the machines. If someone wants to give us more machines, that's great. I'd also like to thank the Meta folks, because they're planning to hire a contractor to work on cloudifying OpenQA, which would be a great way to get more test resources. So I'm hopeful we'll be able to get somewhere with that soon.

Scale and success — just some perspective on what we're actually achieving with this. There are two OpenQA deployments, production and staging. In staging we test more arches; we have PowerPC in staging. Right now we run over 100 tests at a time on each instance, I think. We've run over 3 million tests since 2015 and discovered hundreds of bugs. 358 is just the bugs that are tagged "openqa" in Bugzilla, but that's a huge undercount, because I only started tagging after a while, and a lot of things we just fix — I'm a provenpackager, so I just fix stuff — or we file issues upstream. So it's a lot more than that. On a typical day — just a typical day, not a busy one — there'll be one or two Fedora nightlies, depending on whether a Branched release exists, there'll be three CoreOS or Cloud composes, and we'll test about 20 updates. On a busy day you can double or triple those numbers.

Some examples of recent bugs that OpenQA has caught. Right now there are three failures we know about from the EFI system partition size increase, which I've filed. Without this we might not have noticed them for another two months. Firefox just crashing on startup — which it did — so we caught that and got it untagged, so it didn't affect Rawhide users. The Arabic translation just disappearing from the installer; that was a fun one to catch. Notifying about updates when running live — that started happening and we caught it. And that's just the first page of search results; I just pulled up the first page. Another example that happened recently from update testing: a new util-linux build was mounting the root partition read-only, which obviously makes the whole thing not work. In the past, that would just have landed in Rawhide, and the next day people would have been like, "hey, my Rawhide system isn't working" — but instead we caught it, the util-linux build got untagged, and nobody saw it except the people running the test.

OpenQA resources — the first point here is that we see OpenQA, the testing, as a service we provide to Fedora.
We don't necessarily want packagers to have to debug failures themselves. So we don't just run the tests and run the system: we investigate the failures and we make them actionable. Ideally we fix them; if not, we at least turn them into a useful bug report so you don't have to look at the OpenQA results and try to figure it out yourself. That's the philosophy of OpenQA as we run it. If you need to contact us, we have a mailing list and a chat room. I should also mention the upstream site: OpenQA originated with the SUSE folks. It's a great system, we're really happy to have it, we collaborate with them — thanks to them for all the work they put into it — and that's the upstream site for it. There's a wiki page which explains the Fedora deployment. And this last point I added after the keynote, which I really loved: OpenQA is an open source service. Everything is open, not just the code, but all of the stuff we've got here. It's deployed via Fedora infrastructure Ansible scripts which are in a git repo you can contribute to, and you can even stand up a pet OpenQA deployment of your own using those Ansible scripts, and it should mostly work. So you don't have to contribute to OpenQA, but if you want to, you can. And I'm now going to hand off to Miro to talk about Fedora CI.

So, hey. If you want Adam to have less work, onboard to Fedora CI. Once you do that — you do your part to make sure your software is stable — Adam will have less work and he will be glad. Fedora CI is here so that you, as a contributor to Fedora, can do your part. You can stop your build — which sometimes breaks; it's software — before it enters Fedora. If it enters Fedora, it's on Adam. That's the story.

So what can you do? Fedora CI is community-maintained. It's very close now to what we do in RHEL, because it's done similarly: there is a Jenkins instance which calls Testing Farm, which is what my team takes care of. It's fairly stable and it works well. You can see the documentation at the docs site; there is also a nice quick start guide to get you going — really just copy-pasting stuff — and you can add whatever tests you want.

There are two places where Fedora CI runs tests. The first is dist-git pull requests. The tests are always referenced from the dist-git repository — you just drop some files in there — but the tests themselves don't need to live there; they can be linked from GitHub or wherever you want, thanks to TMT, which can share tests with upstream repositories and so on. The tests are maintained by you; they're your tests. Fedora is not gating your dist-git pull request: if you open a dist-git pull request and you have tests there, the results will be reported, but it's not gating — you can just merge it if you want, and then maybe you break Fedora. It can be made gating; I think Zuul has capabilities for that. So that's the first place: open a dist-git pull request, nothing goes into Fedora yet, the tests will run, you can validate the change and fix it before breaking Fedora. You can also run tests from other components. For example, I would love it if SELinux ran the Cockpit test suite, because Cockpit touches a lot of components; it would be great if that test suite ran before merging. Once the stuff gets to Rawhide, there is again no gating by default.
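To make the "just drop some files in dist-git" part concrete, here is a rough sketch of what the TMT metadata might look like. The file paths, script name, and package name are made up for illustration — check the Fedora CI quick start guide for the exact layout your package should use:

    # plans/ci.fmf -- a tmt plan in the package's dist-git repo (illustrative path)
    summary: Run the package tests in Fedora CI
    discover:
        # discover tests from fmf metadata; a "url" key here could point at an
        # upstream repo instead, so the tests don't have to live in dist-git
        how: fmf
    execute:
        how: tmt

    # tests/smoke.fmf -- a minimal test definition (hypothetical test and package names)
    summary: Basic smoke test for the package
    test: ./smoke.sh
    require:
      - mypackage

With something like this in place, opening a dist-git pull request is enough to get the tests run and reported, as described above.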
On Bodhi, after the production build is done, you can see the results. So that's the second place where Fedora CI runs. Also, the OSCI team, which is the second team running this stuff, runs some generic tests — things that make sure the basic sanity of your RPM is fine, that it installs, and so on. That's what I mentioned before. If you want to do it the really smart way, do it via Packit on GitHub — if you're lucky enough that your upstream is there. And if you want to start gating, you can enable gating in Fedora: your tests can be gating, but currently they are not by default — it's really on you to make them gating. That's it from me.

I do want to quickly highlight the Packit workflow, if people aren't familiar with it. If you really buy in, it's a cool system, because you go from your upstream project all the way to Fedora in one workflow. You do all your development upstream, all the packaging stuff happens automatically, and the tests get run on your upstream pull requests and on your spec file pull requests. It's a really nice integrated workflow, with all the testing in the background happening via Testing Farm, which is Fedora CI's backend. It's worth checking out — have a look at Packit.

I'm just going to blow through this really fast: other automated test systems. Koschei: any time your package's build dependencies change, it tries rebuilding it in Copr and tells you if it's broken, which is effectively a QA thing, so it's kind of cool. fedora-release-autotest is written by our colleague Lili on my team, and that runs tests on unusual hardware that we can only really reach via Red Hat's Beaker — Beaker is an internal Red Hat test farm, so it can test weird enterprise storage things like Fibre Channel over Ethernet and iSCSI, things we can't really test in OpenQA. That's really neat, because it automates tests which were really hard to get done before. relval, which is the silly thing I wrote for reporting results manually to the wiki, also runs the image size checks automatically when a new candidate compose comes out and files those results into the wiki, so that's kind of automated testing too. Fedora CoreOS has its own entire CI and release workflow, which is really cool, really integrated, and very modern — Fedora is kind of old and we're bolting all this stuff on afterwards, while Fedora CoreOS was invented much later, so they have a really nice cycle and do all their own CI. There's also Zuul-based CI, which gets tests run in Pagure for your project, and Packit, which I just mentioned.

Result delivery and reaction: testing is one thing, but you have to do something with the results. This goes back to that earlier slide about the four places testing can happen. Dist-git is the earliest place you can get your test results. Your package is in dist-git, we have the Pagure user interface on the front of that — the GitHub-style forge thing — and if you actually use pull requests for your package (which not all package maintainers do), then the tests that are in your repository will get automatically run and you'll get the results on your pull request, as Miro mentioned, and you can optionally have your pull request gated on those results if you want to.
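If you're curious what buying into the Packit workflow from a moment ago looks like, here is a rough sketch of a .packit.yaml. The package name and targets are invented for illustration, and the exact jobs you want depend on your project, so treat this as a sketch rather than a recipe:

    # .packit.yaml in the upstream repo (names and targets are illustrative)
    specfile_path: mypackage.spec
    upstream_package_name: mypackage
    downstream_package_name: mypackage
    jobs:
      # build the pull request in Copr so reviewers can try the package
      - job: copr_build
        trigger: pull_request
        targets:
          - fedora-rawhide
      # run the tmt tests via Testing Farm against that build
      - job: tests
        trigger: pull_request
        targets:
          - fedora-rawhide

The nice part is that the same tmt tests you keep for dist-git get reused on upstream pull requests, which is the "one workflow" idea described above.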
Dist-git pull requests are the earliest point you can get results, and if you're disciplined about using the pull request workflow, it's really useful to have all your test results there. Bodhi, the web UI of the update management system, is probably the main integration point for results — this is where you're most likely to actually see your results, I think, for a normal packager working in a normal way. On the Automated Tests tab in Bodhi you get a list of all the test results, and you get your Fedora CI tests and your OpenQA tests all together. The ones that have fedora-ci at the front come from Fedora CI; the ones that have update at the front come from OpenQA. Because the backends are all compatible, you get all the results in Bodhi.

We've improved this quite a lot recently. The first version of this talk had me apologizing for a lot of bugs in this view, but as you may have noticed, it's got better. We would have weird things where the sync between the backend and the frontend of Bodhi was off, so one would say your update was gated and the other would say it wasn't; hopefully that's been fixed. You now see a running state: when a test is queued or running, you will see that on the tab. Before, it would just say the result was missing, which was confusing, but now it actually tells you the test is running. So: new queued and running states, fewer bugs, and hopefully more consistency.

Right now, updates for stable releases and Branched releases are gated on most of the OpenQA tests, which means if one of those tests fails, your update cannot go stable. Rawhide is not officially gated yet — I'm probably going to turn that on after this talk; I wanted to do it during the talk, but we're running out of time — because FESCo has approved it, I believe, and that'll be a big thing. We've been shadow-gating Rawhide for a while, which means when we find a bug in an update for Rawhide, we untag it before it makes a compose, so effectively Rawhide has been gated for several months; we just weren't telling anyone about it. That's me and Kevin Fenzi, but it's been working great — it's made Rawhide way more stable.

Packages can configure additional gating requirements: if you want to gate on the tests that are in your repository, you can drop a gating.yaml file in your package repository and those tests will gate. There's a button for waiving bogus failures: if any test has failed and it's blocking the update, there's a button that says "waive failures". Please only click that if you're really sure the failure is a bogus one. There's another button for re-running the tests: if you think the result might be bogus, hit the re-run button and see. If it keeps failing, then it's probably a real problem, or a bug we need to figure out.

What do you do if you see a failure in Bodhi as a packager? If it's a test that you put into Fedora CI yourself, you should probably go ahead and debug it — that's your problem. If it's one of the generic tests — installability, rpmdeplint, something like that — which shouldn't gate the package by default, try to debug it; if you can't, contact the Fedora CI team and they will help out. For OpenQA failures, again, this is part of our full-service model: if you're keen and impatient you can try to diagnose the failure and figure out the details of OpenQA, but if not, just stay calm and wait — usually I refresh the OpenQA web interface about 12 times a day.
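As a rough illustration of the gating.yaml mentioned a moment ago, a minimal policy might look something like this. The test case name shown is the generic one for dist-git tests from the Fedora CI gating documentation; double-check the exact names and decision contexts that apply to your package before relying on this sketch:

    # gating.yaml in the package's dist-git repo (a sketch, not a drop-in file)
    --- !Policy
    product_versions:
      - fedora-*
    decision_context: bodhi_update_push_stable
    subject_type: koji_build
    rules:
      # gate the update on the package's own dist-git tests run by Fedora CI
      - !PassingTestCaseRule {test_case_name: fedora-ci.koji-build.tier0.functional}

With a file like this in place, a failing result from those tests will block the update from going stable until it passes or is waived.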
As I said, I'm always looking at those results, and any time I see an update failure I'm going to investigate it. I'm going to either fix it or file a bug for you, so you will get a report — you don't need to know how OpenQA works; you will just get told what's going wrong, and I will ask you to help fix it. If you need help, you can contact us on Fedora Chat, the mailing list, send me an email, carrier pigeon, whatever you like. And yeah: re-run the test if you're not sure whether the failure is genuine, waive the results if you're really sure it's not, and for problems in Bodhi itself you can contact CPE, the Fedora infrastructure team — or me, because I still work on Bodhi too, so I know how to get it done quickly.

Yeah, we still use the wiki. Twenty-something releases later, we still have tables on the wiki with all the results in them. The reason we do this is just that it's the only way we have to integrate automated and human results. We still need some manual testing for composes, so people put their results on the wiki page, and the crazy thing I wrote lets OpenQA file its results into the wiki as well. So when we're deciding whether to ship a Fedora release, we go and look at the wiki page and check that all the tests are covered and all the tests passed. That's why we still have the wiki.

I'm going to skip over this because we're short on time; I just wanted to say that there's a lot of stuff behind the scenes here, and it's all shared between Fedora CI and OpenQA. On the front end they are different systems, but the way they file results and the way they all talk to each other is integrated and shared.

I really like this slide: if you've been looking at cat pictures for the last 30 minutes but you want to say you came to this talk, take a picture of this slide — this is everything, this is the whole talk. Updates and composes are tested by OpenQA and by Fedora CI. We don't gate composes right now, but we review the results. We gate updates. You can gate your pull requests in dist-git. And Koschei catches FTBFS. That's what we've been talking about the whole time.

The future, again quickly. What are we doing with OpenQA? We want to do more tests — we're always writing more tests — cover more GNOME applications, and gate Rawhide updates; I would really like to turn that on. There are a lot of features that just need more hardware. A cool thing we're working on right now is doing bare-metal testing in OpenQA, which uses a Raspberry Pi-based KVM — it's a really cool project. More tailored update tests: because we now have this grouping thing, I'd like to, say, run all of the installer tests when we have an Anaconda update, so that would be cool. And maybe move it to the cloud.

Fedora CI plans — Miro, we are almost out of time. One thing: if you have STI tests, there is a nice migration guide; that's the most important thing. Otherwise I think Fedora CI is quite stable, and if not, you can reach me on chat.fedoraproject.org in the Fedora CI room — we are there to help you. And if you watch the recording of Miro's talk, he has a lot more about all this stuff in it.

Other plans — one thing I skated over, and the biggest challenge we have right now: lots of other things go into making a Fedora compose — the kickstarts, comps, workstation-ostree-config where all the immutable configuration lives — and when these things change, there is no testing. If you make a mistake when you're editing comps, the compose fails and we just don't know about it until it does. So I would love to do some work on testing those, but hey, we've got to have plans. And we made it through the slide deck!
Thank you, everyone, for sticking with us — by talking fast, I think we made it. Do we have five minutes for Q&A, or is that the whole slot? Okay, cool.

So the question was: can we also test Turkish? Yes, we could. Adding a new language just requires doing quite a few screenshots, but yeah, we could. We do have an issue tracker — I'll talk to you after the talk — os-autoinst-distri-fedora is the project for the tests, and if you file an issue there we can definitely look at doing that.

Cool, let's do one at a time. The question is: can we test suspend? Theoretically we could, because you can suspend a VM and resume it, but it's not very useful, because the problems with suspend and resume tend to show up on real hardware. That's one of the reasons we are looking into real hardware testing.

Sorry, I'm not sure what you mean — application crash files? Debugging? Yes, good question; I kind of skated over that, and Miro can talk about what CI does when there's an application crash. In OpenQA, when a test fails it runs a post-fail hook, so we upload a bunch of generic logs and stuff, but we also check for abrt crash dumps and coredumpctl crash dumps, and if we find one it gets uploaded. So there's a tab in the OpenQA web interface — logs and assets, I think — and if you click on that for a failed test you'll see a bunch of logs and any crash report that happened. One of the things I do when I see a failure is: oh, there was a crash — I grab it, I analyze it, and then I file you a bug with a proper backtrace. (I just want to add that maybe there could be a generic test like that for all packages — maybe, but then the test would have to exercise everything to see if it crashes.) Anyway, more questions? I think you had your hand up first.

Okay, great question. The question is: how long do the tests take to run? For Fedora CI it's complicated, because it all depends — it's all in the test definition. For OpenQA, the compose tests take about two hours to run completely, I believe. For an update it also takes about two hours in total, but most of that is the Silverblue test, which isn't gating, so for all the gating tests to finish takes a little over an hour, I think — mainly because of the live image build and install test, which takes a while. And yes, there's a lot of parallelization. The way OpenQA works is you have these things called worker hosts, which are systems that spin up virtual machines and run the tests in them, and it's configurable how many workers run on each worker host. We run on fairly powerful machines, so our main test boxes have 30 workers, which means at any one time each can be running up to 30 tests. That's how that works.

Alright — thank you so much!