Okay, thank you for the introduction. First things first, I'll just take one quick shot of the audience. Since we have Brian and Dukie on the board, we're obligated to do so. Okay, so openQA. openQA is the next big thing, but before I blow it up, let's take a little history lesson, just to see the motivation for why openQA was created. This is very similar to how regular software was developed before some changes in the development process, and it's also very similar to how the distribution was developed. For half a year, you were messing with packages: packaging new stuff, adding patches, disabling some features, enabling others. Then you made a build, checked it, and when you had more time you repeated the cycle until you hit the alphas. You built the alphas and shipped them to QA. Usually QA wasn't even able to install them, so they just shipped them back until you fixed things, and so on. This made it very difficult for a release team to actually release proper versions, especially given milestones like alpha, beta and so on. With the increasing number of contributions we had in openSUSE, for example, it became so challenging that we needed to rethink the whole situation. We were inspired by how software development was improved by introducing unit testing, test-driven development and so on, so we inserted the QA step right into the development of the whole distribution. The difference between developing an individual project and a distribution is that in a regular project it's fairly easy to do unit testing; you can test individual parts. But when we're talking about a complete distribution, it's by definition at least an integration test. So for the distribution, we basically copied the QA from the final releases and milestones and put it into the development cycle itself.
Of course, even if we had enough manpower to test all these variously failing builds, it would be a pretty boring job for a QA engineer to go to work every day of the week and install the same release once again. So we definitely needed to automate. That brings another set of problems, because since it's a whole distribution, you have different bootloaders, different installation workflows depending on whether you want to install on one disk or more, RAIDs, encrypted disks and so on. And of course different desktop environments. So we needed a universal approach to this. We also needed to focus on the target user. The target user of a distribution is not another piece of software, it's a regular person, so we need to test that it's usable for them. The first consequence is that the automated system doing the testing needs to see what a regular QA person and a regular user see. So we take the VNC output of the virtual machines we were testing on, and we created a bunch of reference images. Our test engine regularly takes a screenshot, every 25 milliseconds or so, and compares it, depending on which stage of the test you're in, to decide whether the output is actually correct or not. Another benefit is that this works for the complete stack; you can even test a BIOS if you want to do that kind of thing. The next step was that since we already had a VNC connection, we started to use VNC for input too, so we don't need to install anything new on the system under test. Everything we test, we simulate via keyboard strokes. Initially we were using the QEMU monitor channel to simulate key presses on the PS/2 input, but there were problems with that: many dropped keys and small buffers, so it wasn't typing reliably. So we changed to VNC input.
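The screenshot-matching idea can be sketched like this. This is a toy illustration, not the real os-autoinst implementation (which uses proper image processing on "needle" cutouts); the function names, tolerance and threshold values here are all made up:

```python
# Toy sketch of fuzzy screenshot matching (hypothetical code, not the real
# os-autoinst implementation): compare only a small cutout of the captured
# frame against a reference cutout and accept it when enough pixels are
# close enough. Frames are grayscale rows of pixel values here.

def cutout(frame, x, y, w, h):
    """Extract a w x h region starting at (x, y)."""
    return [row[x:x + w] for row in frame[y:y + h]]

def similarity(region_a, region_b, tolerance=16):
    """Fraction of pixel pairs differing by at most `tolerance`."""
    pairs = [(a, b) for ra, rb in zip(region_a, region_b)
             for a, b in zip(ra, rb)]
    close = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return close / len(pairs)

# A 4x4 "screen" and a reference cutout for the 2x2 top-left corner.
screen = [[10, 12, 200, 200],
          [11, 13, 200, 200],
          [90, 90, 90, 90],
          [90, 90, 90, 90]]
reference = [[10, 10], [10, 10]]

score = similarity(cutout(screen, 0, 0, 2, 2), reference)
print(score >= 0.96)  # True: all four pixels are within tolerance
```

Comparing only a cutout, with a tolerance and a threshold rather than exact equality, is what makes the matching robust against minor rendering differences, and also what occasionally rejects screens that look identical to a human.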
It's good to know that once you're writing the tests, all you do in a test is type what you would type when using the machine yourself. When you combine all of this together, you get something like this. This is the reason this presentation is 18 megabytes in size. We also see here how the test engine sees it. By the way, it's two or three times faster than real life; that's because if you want to review the tests, you don't want to spend 20 minutes looking at a video. Another thing: I don't know if you can see it, but there are usually some artifacts in there. The artifacts are not from openQA itself; openQA is taking good screenshots. This is the kind of thing we can't yet catch using openQA: there is a bug in our Intel drivers. So yeah, that's the complete video. When we finish our testing we have a completely tested distribution, but when we ship it, there are still a few percent of users who can't install it due to hardware bugs we can't yet catch in our test environment. So this is the first part, the test engine of openQA. But for a distribution, that's not enough. As I was saying, we have plenty of different combinations: different DVDs, network installations and rescue images, all the different encrypted disks or RAIDs or whatever you need, and of course text-mode installation, graphical modes and so on. And you generally don't want to write the tests from scratch for each of these workflows. So openQA came up with an approach that may feel a bit unnatural when you first try to write a test. I will talk about it later, but basically, in our frontend where you configure it, we have several tables, called machines, test suites and media, where you put different variables, usually gathered in these logical groups.
In machines, you usually put how many disks the machine has, whether it uses special networking and so on; in test suites, whether you want to encrypt your disk, whether you want a separate home directory or just a plain installation; and in products, or media as it's called now, you put the media type you are testing. The openQA scheduler then takes all this information from these tables and generates all the possible combinations as test jobs, like here, each collecting all the settings from machines, test suites and products. So from a few variables it can generate tens of jobs, 70, even 100, whatever you have configured. When it's all properly set up, as for the openSUSE distribution, whenever a new build, a new image, is created, openQA automatically generates around 80 tests and runs them. And of course there is the last benefit of openQA: it's automated. Well, there is downtime when it crashes, but usually not; besides that, it always works. It also works in parallel, so you can add numerous workers. I just made up that number, but we experimented with up to 50 or 60 workers, so we were able to run 60 tests in parallel. The whole distribution you can test in about two hours, so it's really handy for this kind of work. The important thing to remember is that even for openSUSE, openQA is only one part of all this. The second part, which is driving the workflow, is the Open Build Service. You should really check it out, but that's another topic. So let's take a quick look at how the architecture is designed. This is a somewhat older picture, but I like the colors, so I use it all the time. We have the web UI node; on this node there is the web UI you see as a user, a REST gateway, the WebSocket server and the scheduler.
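Going back to how the scheduler expands those tables into jobs: the combinatorial idea can be sketched like this. The variable names resemble openQA's, but the values and the simple "later parts win" merge rule here are purely illustrative, not the scheduler's actual precedence logic:

```python
# Illustrative sketch of expanding machines x test suites x media into jobs.
# The real openQA scheduler merges several variable tables with its own
# precedence rules; this only demonstrates the combinatorial expansion.
from itertools import product

machines    = [{"MACHINE": "64bit"},
               {"MACHINE": "uefi", "UEFI": 1}]
test_suites = [{"TEST": "textmode", "VIDEOMODE": "text"},
               {"TEST": "kde"},
               {"TEST": "cryptlvm", "ENCRYPT": 1}]
media       = [{"DISTRI": "opensuse", "FLAVOR": "DVD"},
               {"DISTRI": "opensuse", "FLAVOR": "NET"}]

jobs = []
for medium, machine, suite in product(media, machines, test_suites):
    settings = {}
    for part in (medium, machine, suite):   # later parts win on collisions
        settings.update(part)
    jobs.append(settings)

print(len(jobs))  # 2 media x 2 machines x 3 suites = 12 jobs
```

This is why a handful of table rows is enough to fan out into the roughly 80 jobs mentioned for an openSUSE build.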
These are separate processes. You can use a different database, like MySQL, Postgres or SQLite; it works with those. This node also takes care of the scheduling from those templates and parameters I was talking about. The worker node can be on the same computer, or you can use remote workers. It's basically just a middleman between the test engine, which is os-autoinst, and openQA. os-autoinst is the actual thing that executes tests, collects the screenshots, compares them and generates results. With os-autoinst we can use different test backends. By default we use QEMU with KVM; that's also the most feature-complete backend we have. It basically manages a bunch of virtual machines itself. We have IPMI for real-hardware testing, but you need specialized hardware for it that actually supports the IPMI protocol. If you have that, you can eventually capture some hardware bugs. The problem is that with KVM you can run multiple workers, multiple jobs, on one machine, while for IPMI you need one machine per test; there is no parallel execution. Then there are the s390 and PowerPC backends; PowerVM is the latest addition, and that's basically only for the enterprise segment. Now, a brief look at how the tests themselves are written: in Perl. But don't be afraid, we're trying hard to hide that. We provide a test API, so you just call type_string with a string and it types it in. Usually in tests you don't need complex algorithms, so you can barely tell it's Perl. The tests are divided into two parts. There is a test loader, called main.pm, and when a test run starts, this main.pm is loaded. It gets all the variables, the combination of variables generated by the scheduler, and depending on these variables it includes various tests from the test subdirectories.
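openQA's loader and tests are written in Perl; purely to illustrate the dispatch logic, here is what a main.pm-style loader does, sketched in Python. All the module names below are hypothetical:

```python
# Python sketch of what a main.pm-style loader does (the real one is Perl):
# read the job variables the scheduler produced and decide which test
# modules from the subdirectories to schedule. All module names are made up.

def load_tests(vars):
    schedule = ["installation/bootloader", "installation/welcome"]
    if vars.get("ENCRYPT"):
        schedule.append("installation/encrypt_disk")
    schedule.append("installation/first_boot")
    if vars.get("DESKTOP") == "kde":
        schedule += ["x11/kde_startup", "x11/konsole"]
    elif vars.get("VIDEOMODE") == "text":
        schedule.append("console/text_login")
    return schedule

print(load_tests({"DESKTOP": "kde", "ENCRYPT": 1}))
```

The point is that one loader plus a set of variables replaces writing a separate test sequence for every workflow combination.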
In those directories you have the individual tests. Then the third part is the needles. These are the reference images with metadata, because we don't need to match complete screens all the time; we are usually interested only in small cutouts. Tomorrow at 9 a.m., unfortunately the first slot, in 8.113, there will be an openQA workshop where you'll have the opportunity to do a hands-on session and see how it's actually implemented. Now let's look at some features, why you should consider openQA and use it not only for distributions but for whatever applications you're working on. Someone even managed to run Android testing using openQA, so you can try it. The main selling point is quite powerful reporting. You already saw the video recording; it's done by default. Then you have all the screenshots, displayed in the order the tests were executed. Even the text output, when you look at the serial console, is included there as a picture. When everything is okay, the border is green; when not, the border is red. And when you click on one, you actually see the screenshots. There is a slider you can move left and right, and it will usually show you what the problem is. But sometimes, for example when a font changed or there are little color variations, it will look 100% the same to flawed humans like us, but openQA will say no, only 90%, and that's a fail. One reason for that is that os-autoinst, the test engine, downgrades the number of colors, to around 16 colors or something like that, to save resources. So something which looks quite similar in its original form may look completely different to the matching algorithm and may not match. We can also capture audio, so we have some audio tests.
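The audio checks work roughly as explained next: render the captured sound into the frequency domain and then compare the result like any other image. Here is a toy, stdlib-only version of the "make audio visible" step; the real pipeline draws a proper spectrogram image and runs the usual matching on it, while this sketch only finds the dominant frequency of a synthesized tone:

```python
# Toy sketch of the audio-check idea: transform captured samples to the
# frequency domain so the sound becomes comparable as data/pixels. A naive
# DFT (pure stdlib) locates the dominant frequency of a generated tone.
import cmath, math

def dft_magnitudes(samples):
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

rate, freq, n = 8000, 1000, 256
tone = [math.sin(2 * math.pi * freq * t / rate) for t in range(n)]
spectrum = dft_magnitudes(tone)
peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
print(round(peak_bin * rate / n))  # 1000: the 1 kHz tone is "visible"
```

Once audio is a picture (or, as here, a peak in a spectrum), the hard "is the sound right?" question reduces to the already-solved image comparison problem, which is exactly the conversion trick described below.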
And of course, as mathematicians and computer scientists do when presented with a problem, we first tried to find out whether there was a trivial, reliable solution, and since there wasn't, we converted it to a known problem. We capture the audio, convert it to a visible form using a Fourier transformation, and then we just compare images. That's how we test whether the audio is working or not. Quite neat.

Beyond that, you can upload anything you want from the test machines. The first four items are always there: the video, the vars (the generated variables, for inspection), the serial output of the machine, and the logging output of the test engine itself. Beyond those, you can select whatever you want: in the test you just call upload_logs with the path to the file, it will upload it, and you can then look at it during review to see whether everything is okay. The last one is assets. They basically work the same as logs; the only difference is that assets can be reused by other tests. There is also a list of the original media used for testing, so you can actually download them. For example, when you generate new images each day, you can go back and use the image from two days ago; you don't always have to use the latest one from the build service.

We also support complex test scenarios. This one is the chained relation, the dependent test: you specify that a test will run only when the first one has finished, and finished successfully. First, this preserves resources: for example, when the installation failed, you don't need to run the dependent tests again and again while it's broken, so you save time. And of course, you can mark the install job to say: I want the state of the machine at the moment the test finishes to be uploaded to the openQA server itself. Then the rest of the tests will start from the point where the last test finished, so you don't need to reinstall everything from scratch in each test. For example, here you can have "install KDE", and there "test KDE" with whatever you want, on Wayland, on X.

Next are multi-machine tests and their combinations. This is very useful when you have client/server setups and you can't have all the services on one machine. For example, when you are testing DHCP or some network stuff, you need test support for it. You can say that this one is the master, the parent, and that it should run in parallel with these other tests. The scheduler makes sure that all the tests are scheduled correctly, so tests are not left waiting on workers blocked by other jobs. It also ensures that, for example, when the parent fails some test and is terminated, all of the children are terminated too. As a side note, when you go through our GitHub commit log from when this feature was implemented, the log is full of chained, killed children and parents. It was actually pointed out to me that I should stop talking about children and parents in this context.

Related to this is networking, because when you want to test, for example, a DHCP server or networking stuff, you usually need to ensure that the network is separated and that you can use various configurations. By default, we are using QEMU user networking, and the individual QEMU machines can't talk to each other. They are isolated; they have access to the outside internet, but they can't communicate between themselves. The easiest one is tap networking; well, the easiest for us to implement, but probably the hardest to set up, since all the networking setup falls on the actual administrator of the openQA system.
We have VDE, Virtual Distributed Ethernet; I think it's a QEMU feature. The good thing is that the scheduler, or os-autoinst, will know, based on which group the tests are in, that it should interconnect them. It automatically creates a connection between the jobs when they run in a parallel configuration; it knows they should be connected. The same goes for Open vSwitch, but Open vSwitch is a little harder to configure, because it's a combination of tap networking and Open vSwitch which creates the interconnection, and running Open vSwitch adds another service to the worker system.

So, when you are developing, you of course hit bugs, and sometimes they are quite easy to see. But many times you need to extract some other logs and don't want to restart the test from scratch each time. So we have support for that: you can enable snapshotting while the test is running. The run is divided into individual test modules, and after each module passes, a new snapshot is stored in the QEMU image. It's important to know that this is only available for the QEMU backend; elsewhere you are out of luck. You can use the stored snapshots for two things. One is during test development: you can roll the test back and continue where you left off. Or you can take the QEMU image, boot it outside of the test environment and look for the files and other logs you are interested in. So you can either debug the issue itself or debug the test.

Another thing is interactive mode. We also enable remote VNC connections to the systems under test. When the test is running, there is an icon; when you click it, it will essentially pause the test execution. Then, under the test, you have the host name where it's running and the VNC port you should connect to.
You can connect to the machine, do your stuff, update needles and so on, and then, if everything works correctly, you continue. Sometimes this interactive mode is a little fragile, so you need to be patient with it.

One thing the administrator of a whole distribution release process may be interested in is integration with other services. We have a REST interface: you can query for test results, trigger new tests, download assets. There's quite a big API over REST, so you can hook it into other systems. In our case, in openSUSE, the Open Build Service is hooked to openQA: there are cron jobs checking the state, and when the DVD build is finished, it automatically triggers openQA to download it and run a completely new set of tests. We have background tasks which help with cleaning up old results, and, a feature added by Adam from Fedora, who is here, it can also automatically download the ISO in the background. And of course database support and external authentication. There is also one I didn't list here: fake authentication. That's mainly for test development, because it annoyed me, at least, to always have to log in again on the machine where I was developing openQA. With fake authentication you hit login and you are immediately logged in as administrator.

With infrastructure integration comes scalability. The funny thing is that originally the openQA web UI was a single-process, single-threaded daemon, so when you added more workers it stopped keeping up: you hit timeouts quite often and jobs were terminated because of it. So we switched to a prefork model, which is essentially multi-process. That's it for the web UI: you can scale it up when you have enough CPU power and memory.
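As an aside on that REST interface: driving it from a script can look roughly like this. The /isos and /jobs routes follow openQA's documented API, but the host name and parameter values are made up, and the API key/secret authentication headers are omitted; this sketch only builds the requests without sending them:

```python
# Sketch of driving openQA over its REST interface. Endpoint paths follow
# the openQA API docs; the host is hypothetical and authentication headers
# (API key/secret) are omitted. We only construct the requests here.
from urllib.parse import urlencode
from urllib.request import Request

BASE = "https://openqa.example.org/api/v1"   # hypothetical server

def trigger_iso(distri, version, flavor, arch, iso):
    """POST a new ISO: schedules all jobs matching that medium."""
    params = urlencode({"DISTRI": distri, "VERSION": version,
                        "FLAVOR": flavor, "ARCH": arch, "ISO": iso})
    return Request(f"{BASE}/isos", data=params.encode(), method="POST")

def query_jobs(**filters):
    """GET the job list, filtered e.g. by build or state."""
    return Request(f"{BASE}/jobs?{urlencode(filters)}")

req = query_jobs(build="0042", state="done")
print(req.full_url)
```

Hooking a build system to openQA is essentially these two calls: trigger on new media, then poll job results.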
For the web UI you can add more processes, similar to Apache workers: you just add more and hope it solves your issues. For the workers, you can scale up if you have resources, or you can add remote workers. For sizing: a worker usually needs two demanding processes, one being the actual QEMU running the test, the other the system grabbing all the output, comparing the images, generating the video and so on. So in my case, I usually reserve two CPU cores, or the corresponding hardware threads if we have hyper-threading, for one openQA worker. For an eight-core machine, we usually run four workers. The memory constraints aren't too high by today's standards, because by default openQA creates virtual machines with two gigabytes of RAM. So RAM is really not the issue here; CPU power is.

Okay. Now, lastly, contacts. If you are interested, we don't have a dedicated openQA mailing list or IRC channel; we hang out on the openSUSE Factory IRC channel and mailing list. We have Progress for our issues, but you can also use the issue tracker on GitHub. This whole thing is open source, including the tests for openSUSE; they are always open source, and it's all GPL version 2. You are free to try it and contribute back. So that's basically it. Yes? Okay. For now, it's only KVM/QEMU, because we are not using libvirt for management; os-autoinst, the test engine, directly executes the QEMU binary with all the parameters to construct what you want. There is svirt support now, but I don't know how mature it is; it was added quite recently, and so far I haven't seen any tests using it. The results page? Yes, there are various screenshots. I can maybe show you, if the network will work. Yeah, but I need to log back in.
This is annoying, but yeah, there is absolutely a way to see all the results in pictures. There is an overview page. Yeah, I know, but I'm trying to log into the network... okay, now I've got it. So, this is for the openSUSE tests. Let's check the latest Tumbleweed results; I hope the network holds. Can you see it better now? So, this is the overview for the latest build; this is the build number we have from the build service for openSUSE Tumbleweed. And these are all the test suites we run; they usually each test some combination of things. Let's see, I'll check the KDE one here. These are the green ones; no, there are actually two greens. The greener green means everything is okay, and the more yellowish green means we needed some workarounds to successfully pass all the tests. You can even add a note when you use a workaround. And then when you click, you have all the test modules which were performed, and you see what the results look like. You can see a workaround was used here, because the border is a different color. But this is for display, for a user. You can ask, using the REST API, for the general overall status of a job, but we don't have an XML file like, for example, the JUnit file Jenkins produces. We can work with Jenkins, though: you can run Jenkins jobs from openQA, and when the JUnit output is collected, we have functions which parse it and put the Jenkins results into a view similar to this one, so you see everything on one page. And then there are the logs and whatever else was collected. The log from the test engine usually looks like this: you see what was scheduled, all the tests.
Here is the actual command line which created the QEMU machine and the connection to VNC. And then through the tests you see the matching and so on. When you click, for example, on this welcome module, or let's take a more complex test case: you click on it and you get the source code of the test. The source code viewer isn't great, I would say, but you can use it to see the actual test. You see that it's based on some console test, and that console test has some other modules, helpers you can use. So far, in this source viewer, you can't click through to the underlying source; we are aware of that and have a bug report about it, but I don't know when it will be implemented. Yeah, it's Perl code. Yes, we have that in the test API. Okay, I'm out of time, but tomorrow, exactly, that's the right way to do it. Okay. Yeah, of course, there were two questions, so two squares. But I have three more, so three more questions. Okay, I'm out of time, I know. Thank you.