Hi, everyone, and welcome to DevConf.cz 2022. I would like to welcome our next speaker, Dan Čermák. Welcome, Dan. If you have any questions for Dan during his talk, please use the Q&A section in Hopin. We'll get to the questions at the end of the talk.

Thanks for the introduction. Hi, everyone. Welcome to DevConf. Thanks for joining my talk about openQA and about testing Linux distributions and appliances. Great that you made it. First, a bit about me: I'm Dan Čermák, a software developer working at SUSE. I'm part of the developer engagement program, working on development tools, and currently I'm building containers. If you're active in the Fedora community: I'm a member of the i3 SIG, where we ship the i3 spin, I'm a package maintainer, and I was on FESCo for the Fedora 34 cycle. So if you say the magic words on Matrix, I shall emerge, answer, and disappear back into the void. Anyway, a few of my big hobbies are development tools and testing, which is also why I'm presenting openQA to you, and writing documentation, which I actually quite enjoy doing. If you'd prefer to also follow me on social media, there are a few links on the presentation slides, which I'll share at the end of the presentation.

Since we have only 20 minutes, let's dive right into what we're going to talk about. First, I'll give you a very short overview of what this openQA thing is, in case you might not have heard about it yet, along with a brief sales pitch. Then we'll take a look at how it is used to test Linux distributions, appliance builders, hardware, et cetera. And at the end, we'll have a Q&A.

So, what is this openQA thing, you might ask yourself? In case you have never heard of it, I'm going to give you the perfect explanation: it's a web application. Now I've helped you a lot. OK, so what is openQA actually? openQA is a test framework for systems under test.
Let me elaborate on that a little, because usually when you think about testing, you're thinking about testing individual components. For instance, you want to test a program, so you test that program or a certain component of it. openQA, on the other hand, is a test framework that's really designed to test a whole system. And by a whole system, I mean something like a real PC, or, if it were easily doable, a cell phone operating system, something like that. You give it a real thing, and it tests that this whole system is working as expected.

openQA is designed for user-oriented testing. What it can do really well is simulate user input: stuff like wiggling a mouse or pressing keys. It can also record the video output that comes out of your system under test, and it then uses OpenCV to match that output against expected parts of the screen. So essentially, it does what a human tester would do. You tell your QA person: OK, you go to the installer, you look for this icon, then you click it, then a dropdown should appear, and so on. And your QA person would then check: OK, there's this icon, I click it, I press these keys, and so on. This whole thing is really operating system agnostic. It's usually used to test Linux, but it also works on Windows, and you could even run it on a really stripped-down operating system, so it can do all kinds of things.

So why should you do this? Let me put on my salesperson hat, which I forgot at home. Sorry. So, without the salesperson hat: why should you use this? You want to do system-level testing. For instance, you have a Linux distribution, you have an installer, and you want to verify that the installer works every single time. Doing this by hand every single time is probably a very boring job.
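To make the screen-matching idea concrete: openQA's real implementation uses OpenCV's template matching, but the core concept can be sketched in a dependency-free toy version. Everything here, including the function name `match_region`, is illustrative, not openQA's actual code:

```python
def match_region(screen, template, threshold=0.95):
    """Slide `template` over `screen` and return the (x, y) position of
    the best match, or None if nothing scores above `threshold`.
    Both arguments are 2-D lists of grayscale pixel values (0-255)."""
    sh, sw = len(screen), len(screen[0])
    th, tw = len(template), len(template[0])
    best_score, best_pos = 0.0, None
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            # Similarity = 1 - normalized mean absolute pixel difference.
            diff = sum(
                abs(screen[y + dy][x + dx] - template[dy][dx])
                for dy in range(th)
                for dx in range(tw)
            )
            score = 1.0 - diff / (255.0 * th * tw)
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos if best_score >= threshold else None
```

The threshold is what makes this robust against minor rendering differences, which is also why openQA matches selected regions rather than demanding a pixel-perfect full screen.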
And your QA person will murder you in about two weeks. So you can use openQA to automate this. When should you use it? If you really care about user-centric testing. If you have a GUI application, or if you want to test something on your system that has a GUI, then openQA can really help you out, because it really moves a mouse and clicks things. So even if, for instance, your installer somehow creates an overlay due to some weird bug and you can't click anything anymore, openQA will catch this. Another thing: while openQA by default runs in virtual machines and tests virtual machines, it can run on bare metal, and it can test real bare metal machines. I'll show you an example later on where this is done for the Raspberry Pi. What it can also do: if you write individual tests, you can label these, you can attach bug reports to test results, and you can review full test suite runs. If you know how openSUSE works: every new Tumbleweed snapshot gets an openQA run, and after each openQA run, the release manager reviews it and says go or no-go. The same thing happens in Fedora with every compose of Fedora Rawhide. And last but not least, this is really battle-tested tech. We are not talking about some fancy new thing that I hacked together in my basement. This thing has been running for 10 years at openSUSE, at SUSE internally, and in Fedora. I've heard that it's been run in Red Hat as well, but I have no confirmation of that, so treat that as hearsay. This is really battle-tested tech, and it works really, really well.

Let's talk a little bit about the architecture of openQA. openQA itself is primarily a web page and a web API, which you can see in the upper left part of the picture. That's what you interact with as a user and as a test writer. It's also what serves and schedules tests, stores the test database, et cetera. So this thing has a REST API and a web UI.
The tests are then dispatched to individual workers as jobs. These can run on the same machine or on different machines; they can be close by or somewhere totally different, you just have to invest a little more work into the infrastructure. The actual testing is then done by a program called os-autoinst. This is a small binary, which usually connects via a serial line to the system under test. The system under test is by default a QEMU virtual machine, but it can be bare metal, and there are all kinds of other backends too. And then, if you want, you can make openQA simulate key presses and mouse movements, and you can use the video feed. But you don't have to: you can run pure console tests without any kind of video matching. The video matching, though, is usually where openQA really shines.

Here is a small screenshot of the web UI, i.e. what you would see if you click on an individual test. This is the installation test suite from openSUSE Tumbleweed; I think I took the screenshot maybe a year ago. You see the individual tests here, and every single screenshot is a moment when openQA was specifically looking for something. Now, this probably all feels very opaque and abstract. So how does this look in practice? The cool thing is, since openQA does actual user interaction, what you are currently seeing here is a short video of what openQA actually does during the installation of openSUSE Tumbleweed. I hope this works, since I have Hopin open and it looks like it's not really playing. In case it's not playing, I'm sorry about that. But what you would essentially see here is a very fast installation of openSUSE Tumbleweed. That's what openQA does for the openSUSE distribution on every single snapshot: it just runs the whole installer, so we can really be certain the installer still works.
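For a feel of what such a test looks like: real openQA tests are Perl modules built on the `testapi` functions (`send_key`, `type_string`, `assert_screen`, and friends). This Python stub only mimics their shape against a fake backend, so the class and function names here are illustrative, not openQA's API:

```python
class FakeBackend:
    """Stand-in for the os-autoinst side: instead of driving a real
    VM over a serial line, it just records the input events a test
    sends to the system under test."""
    def __init__(self):
        self.events = []

    def send_key(self, key):
        self.events.append(("key", key))

    def type_string(self, text):
        # A real backend would emit one key event per character.
        for ch in text:
            self.events.append(("key", ch))


def console_login_test(backend, user="root"):
    """Rough shape of a pure console test: type a user name, press
    enter.  A real openQA test would also wait for screen or serial
    matches between these steps."""
    backend.type_string(user)
    backend.send_key("ret")
    return backend.events
```

The point is that a test is just a scripted sequence of user actions plus expectations; os-autoinst is what turns those calls into actual input on the virtual or physical machine.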
It creates useful images along the way. The image recognition part itself works via so-called needles. This is a feature of openQA; you don't have to use it, and I won't go into too much technical detail, but what it does is what you see here as these small green rectangles. These are the image areas that openQA actually looks for. So it doesn't compare the whole screen; it just compares the parts of the screen that you tell it to look for. And that's really what a QA person would do themselves. If you show a QA person a screenshot of, in this case, Anaconda from Rawhide, and you tell them to click on the keyboard icon, then they don't really care that there's also a language support icon below it. They just need to find the keyboard icon and click it. That's the reasoning; matching entire screens would just be far too brittle.

Features that I haven't mentioned yet: openQA can also produce test artifacts. As a simple example, you run an installer and it creates a virtual disk image. openQA saves this image and can boot from it, which is perfect, because it means you can check that your installer actually works without writing it to a physical disk. It can manage these assets, so they won't clog up the hard drive of your openQA instance. You can have tests depend on each other: essentially, one test creates a virtual disk image, and other tests scheduled afterwards use this disk image. I mentioned this previously: you can tag and review test runs, you can restart jobs, you can group jobs, and you can group tests. And there's a plethora of backends. The default ones are QEMU and libvirt, but you can also run it on true bare metal, you can use IPMI, and there's also x3270, which is for s390x, if you want to test on IBM's architecture.

So, first part done. Second part: openQA in the wild. Where is it used?
I'll showcase a few very prominent users, starting with the place where it all started, and that's openSUSE. Please don't be shocked: this is a figure of the whole openSUSE release process, or a slightly simplified version of it. Since it contains a whole ton of information, we only care about this part, and we'll focus here specifically on the openSUSE Tumbleweed distribution, which is the rolling release. The development of the distribution happens in so-called development projects, where you submit your packages. Then they get into staging projects and trickle down into Factory, from which we take snapshots and create the actual openSUSE Tumbleweed releases, usually every few days or every single day, it depends. And for the staging projects, there's a bot. A staging project essentially works like this: you submit a few changes there, then all the packages in the staging project get rebuilt, the disk images get built, and then openQA runs on those. That essentially verifies that the submission did not break the essentials of the distribution. In practice, it looks like this: this is a screenshot from the Open Build Service, where a bot tells you which openQA tests succeeded and which tests failed. Essentially the same thing, as I described, also happens with openSUSE Leap, which is the stable, enterprise-y variant of openSUSE Tumbleweed; only there it's done with maintenance updates, but it's basically the same.

Fedora. You folks here might be more familiar with this one. So, the Fedora development workflow: we have our package sources in dist-git, and they get built in Koji. Nowadays, everything goes through Bodhi, the Fedora Update System, which creates updates for the stable branches and for Rawhide. And now the process splits up.
For Rawhide, every night, I think, Pungi creates a new compose, and provided the compose succeeds, the resulting images get taken and pushed into openQA, where all kinds of tests are run on them. And if you're subscribed to the development mailing list, you'll have seen one of those emails that get sent out from openQA. Then there are also the branched and stable releases, currently Fedora 35 and 34. There, openQA does not run on every update, but it does run for critical path updates: think of something like the kernel, and there are a few other critical path packages too. For those, Bodhi's test repositories are used. openQA takes the Fedora stable base image, installs the updates, and then runs the tests. And you can see the results of that right in Bodhi. This is a screenshot from Bodhi for a recent kernel update, and the green parts are actual openQA tests. They essentially verify that the kernel works in the expected cases.

OK, next one. This is a kind of unusual one: the Kiwi image builder. Since I guess many of you are not familiar with Kiwi: Kiwi is an image builder. It's kind of comparable to osbuild, and it does a kind of comparable job to Pungi. It's the default image builder in the Open Build Service, and it's used to create most of the images of the openSUSE distributions. I've been involved with it a little bit; it's a pretty nifty project. And since I see Neil in the chat: he's also involved, but he's involved in everything, so he doesn't count. Sorry about that, Neil. Anyway, the Kiwi release process used to work essentially like this: stuff was developed in Git, then we tagged a release, ran a few test images, and decided whether it was good to go. Yeah, that's not super ideal. So very quickly, a staging project was designed.
So essentially, on every tag, you push this version of Kiwi into a staging project, and you create tons of test images: live images, installation ISOs, and full disk images. The Open Build Service then rebuilds all of them every time, and you get a bunch of test images. Then you find out: do my images build? But that doesn't tell you whether they actually work. Currently, the staging project contains, I think, something like 50 or 60 images, and just booting all of them, just verifying that they boot, could keep you busy all week. So that's really a job for automation. And more or less the last year, we spent adding openQA for this. It's not fully automated yet, unfortunately, but we're working on that. The idea is that openQA takes all these produced test artifacts. When we say we want to make a new release, we take all those test artifacts and shove them into openQA. openQA boots all the live media and verifies that they boot; the installation images are started and install to a disk, and then we boot from that disk again and verify whether the disk that's been created really works. And it's already caught quite a few bugs.

I mentioned bare metal testing previously; it works with the Raspberry Pi. Sorry that I'm breezing through this right now, but I'm running out of time because I'm terrible at time management. This is great work that's been done by Guillaume Gardet from Arm, and it's really testing on a real Raspberry Pi. This has been running for over a year now on openqa.opensuse.org. It needs a few additional tricks: it uses a nice piece of hardware called the USB-SD-Mux, which is an SD card multiplexer. It allows you to have an SD card plugged into two devices at the same time and switch between them. So what happens is: you have your openQA worker, and it flashes a new disk image onto the SD card of the Raspberry Pi.
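The cycle the worker runs through can be sketched roughly as follows. The `usbsdmux` control tool and its `host`/`dut` arguments come with the multiplexer, but the device paths and the `power-switch` command here are placeholders; a real setup depends entirely on your multiplexer and power-control hardware:

```python
# Sketch of one flash-and-boot cycle for bare metal Raspberry Pi
# testing with an SD card multiplexer.  Device paths and the
# `power-switch` command are assumptions for illustration only.

def sd_mux_cycle(image, mux_dev="/dev/sg0", sd_dev="/dev/sdX",
                 run=lambda cmd: cmd):
    """Return the command sequence for one test cycle: power the Pi
    off, route the SD card to the worker, flash the image, route the
    card back to the Pi, and power it on again.  `run` defaults to a
    dry-run that just returns each command."""
    commands = [
        ["power-switch", "off"],                         # cut power to the Pi
        ["usbsdmux", mux_dev, "host"],                   # SD card -> worker
        ["dd", f"if={image}", f"of={sd_dev}", "bs=4M"],  # flash the image
        ["usbsdmux", mux_dev, "dut"],                    # SD card -> Pi
        ["power-switch", "on"],                          # boot the Pi
    ]
    return [run(cmd) for cmd in commands]
```

In a real worker you would pass something like `subprocess.run` as `run`; the dry-run default just makes the sequence inspectable.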
The worker and the Pi are connected via a serial line, and they communicate via the network. Then you need an option to power the Pi on and off. So essentially: you turn the Pi off, you flash a new image, you switch the multiplexer, you boot up the device, and you run all your tests. If they work, you're happy. If they don't and something really breaks, you just cut power to the Pi and everything's fine again. Cool. And people have now started taking this a step further. Oh, sorry, this is how it looks in practice, or how it looked in practice about a year ago, in Guillaume's test setup. As I was saying, Qubes OS is taking this a step further, and that's running true bare metal tests. This is all very much work in progress. What they're essentially doing is testing Qubes OS on real laptops. If you are curious about that, you can go to the slides; there's a link to a blog post pull request by Marek, who's driving this initiative. It's quite an interesting read if you want to find out all kinds of technical details: which kinds of difficulties you run into if you try to really grab the screen output, not just test connections via VNC, if you want to boot from physical hardware, and so on. One of the issues you run into with laptops is that you need to be able to remotely kill them. So, just the tiniest sneak peek at how Qubes OS does it: this is a servo, and it essentially presses the power button.

OK, since I'm running out of time: if you want to get in touch, you can find us on IRC and on Matrix. Here are a few links, you can find them in the slides, and with that, I'm happy to answer your questions.

Thank you very much for your presentation. We'll go straight to the questions. The first one is by David: is openQA looking at how to use public cloud instances for remote testing? I am thinking of something related to the IPA tools.
I think IPA has got a new name now. I think there have been ideas to use public cloud instances for that, but it's not super trivial. The idea has been floating around that if you have an openQA instance where you only need a lot of workers at burst times, you could spin them up on AWS or Linode or wherever you have your public cloud. But I think it's only been an idea so far, because it's actually not super trivial to pull off: openQA usually wants to have a serial connection to somewhere, and integration with these public cloud tools is not super trivial in an openQA context. It's definitely doable. I have myself added a very rough-around-the-edges backend for openQA that talks to Vagrant. So it's possible, but it's not there yet, if that answers your question.

Thank you. And there's a second question, by Jan: if the test is really a kind of state-of-the-world test of the whole system, how is that what I would want as an app developer? For system snapshot testing, installers and whatnot, it sounds great, but for a single app it sounds like the functionality might be a little out of my scope of interest.

That is correct. For a single application, it's not always perfect, because you get all the other system state as well. But, for instance, openQA has, as far as I know, started being used for GNOME development. So what you could do as an app developer, if you really want to run user-centric tests, is create some kind of known-good base image. You take stable Debian, stable CentOS, stable Fedora, openSUSE Leap, whatever you consider stable, and you use this baseline image. In openQA, you would have an initial test that shoves your application into this image and then runs your tests. But you'd really have to consider whether you want to do that, whether it's worth the effort, since setting up openQA is a non-trivial task. I don't want to downplay that.
So it's really not super simple to pull off. It's doable, but it's not super simple.

Thank you very much for your answers. Sadly, we're out of time, so there are a few more questions we won't get to, but Dan should be available in our Work Adventure after the presentation. If you want to ask him any questions, you can do it there. We'll post the link to the Work Adventure in the chat, so everyone who wants to ask you something can find you there.

Yeah, sure. So thanks, everyone.

Thank you once again for your interesting presentation.

Thanks for having me. See ya.