Good afternoon. My name is Adam Majer, or Adam Meyer; both pronunciations are correct, depending on where you're located. Today I would like to speak to you about OpenQA, or to put it more verbosely, how you can do a daily, fully tested release of a valid distribution.

So first of all, who am I? Who is this person in front of you? I've been a Debian developer for quite some time, since approximately 2001; I don't quite remember, it's been such a long time. Over the years I have maintained a few packages here and there. I packaged Ruby on Rails before it became popular, and I stopped maintaining it before it became unpopular. Currently I maintain Qt Creator, which is very nice; you would probably like to use it instead of Vi or Emacs if you do C++ development. I also maintain ISC Kea, which is the replacement for the current ISC DHCP server. For the last year I've been working at SUSE, which would explain all the green on the presentation and on myself. There I also maintain some packages, things like Node.js and Boost and some other small things. This is also my first DebConf. It's been a little bit eye-opening, because there is a very diverse crowd; a lot of people come from a lot of different backgrounds, not just developers. It's very encouraging about the community around Debian.

So, getting back to it: what is OpenQA? We all hear about OpenQA, but maybe you don't even know what it is. The best way to describe it is as a system-under-test monitoring framework. It's not an application testing framework; it tests a complete system under test. It's also a web application. It's not something that you generally want to run on your desktop; it doesn't monitor just one application. It's a web application, so it runs on a web server, more or less. And it's written in Perl. A lot of people still like Perl, and the people working on OpenQA really, really like Perl.
It uses Mojolicious as the web framework, if that is interesting to you; to me it is not. And the nice thing about OpenQA is that it can run multiple testing scenarios at the same time using multiple backends. Generally what is used is QEMU, plus VNC, for executing tests; this is the most common way it runs tests. But there are also other backends that can be used: IPMI, for bare-metal or direct hardware tests, if somebody is interested in these things; libvirt; or s390, because that has a different variant.

In graphical form, this is what OpenQA actually looks like. The green bits are the OpenQA bits, and the best way to understand it is to look at it from a bottom-up approach. At the bottom there is QEMU; you all know what QEMU is, it's a nice emulator for running virtual machines. Then there is os-autoinst, which is already in Debian. This runs the actual tests on the virtual machine, using VNC as the connection, for example. And that is where OpenQA comes in, with the green bits: the worker. There is one worker assigned to an instance of os-autoinst, and this worker assigns specific tests for that os-autoinst instance to run. The worker is pretty dumb, just like the rest of it, so it needs to be assigned work and managed by the actual web application. The web application manages the state, with a database, and it communicates with the worker over a websocket and a REST API. Generally it also holds things like the screenshots used for matching, on something like an NFS file system. And then the user who is actually monitoring whether the tests are working correctly or not uses the OpenQA web interface. You see all the nice things: everything is green, the world is nice, and you don't have to worry about it. When it actually stops working, that's when it gets interesting.

So why would you like to use OpenQA? Because OpenQA has a very user-centric testing model.
It doesn't hook into any special hooks anywhere. It literally interacts with the system under test just like a normal user would. It looks at the screen and tries to find the stuff on the screen that the tester decided would be interesting in that scenario; that could be the entire screen you want to match, or just a portion of it. It can type on the keyboard, and it can move the mouse and click it. Those are the things it does from the user-centric point of view. It can also interact with the system under test over a serial console. So those are the four things that it does.

And you may want to use OpenQA because it's already used quite extensively out there. For example, openSUSE has run over one million tests with it over the years. That's quite a lot: the time you would need to execute all of that by hand is almost 25 years. Sitting at a computer clicking the same thing for 25 years is not a very exciting job. And it actually has found, or maybe prevented, however you want to put it, nearly 1,000 bugs over the years. So instead of a user finding something, it found something and notified somebody about it.

So who uses OpenQA? Obviously SUSE is using it, on the Enterprise server and desktop products and the new CaaSP, the Container as a Service Platform, and all these things. So SUSE is definitely using it on the enterprise bits. Then there's openSUSE; there are two openSUSE distributions that are using it. Leap is the more traditional distribution, so it is like Debian stretch, with a regular release cycle. Leap is actually tied to the enterprise bits, but I don't want to talk about that, there's no time. The interesting bit is Tumbleweed. Tumbleweed is almost a daily distribution. And I say almost a daily distribution because otherwise it would be a daily distribution, with a new release every day, but it is not, because sometimes OpenQA will stop it.
There will be a test that causes the release to be prevented, so the user doesn't get a broken system. And I'm not talking about just something glitchy. There could be a scenario where you're doing an upgrade from a previous version to the next version, and the system doesn't boot because, oh well, there's a symbol missing. Or you're installing things and it stops because the partitioner is broken, or something in that vicinity. So it is almost a daily distribution because OpenQA prevents users from having a bad day.

And who else is using it? Well, Red Hat is using it. So it's not just SUSE; it's also Red Hat, the quote-unquote arch-enemy. And when should you be using OpenQA? These people are using it, but when should you use this system? Whenever you want to run your tests more than once and you expect the same results. So for example, install tests. Install tests are pretty boring: you always type the same things, you click the same things, and you expect the same answers. This is not something you want to do by hand, because it's boring. GUI application tests are also possible: you install the application inside the system under test, and then you can do everything with it that a user could, through OpenQA. Console applications, same thing: you can use the serial console for the text output, or you can use the screen if you would like, either one. And you should also maybe look at OpenQA if you're trying to reinvent the same thing that already exists in this form. If you're trying to reinvent the square wheel, don't redo something that already exists elsewhere and has had a lot of effort put into making it function properly. I know, you may think that you just need something simple that matches something here and there.
It's meant to be easy, but it cascades, and then you end up with special corner cases. You also want to use this because testing is repetitive, with an extremely high failure rate, and by failure I mean failure at finding something interesting. You do something once and you expect a result to happen; you do it again the same way and you get the result you expect. And this is very demotivating. I can't imagine a more demotivating job than doing the same thing over and over: yeah, I expected this to happen; yep, it happened. It's very boring. You want to be alerted only about interesting things, things that you don't expect to happen, not when everything is just following the script and working, which is the boring case.

So can it actually install Debian? Can we do OpenQA with Debian? This is OpenQA installing Debian. I put these tests together in just a few hours, and that was mostly just waiting for runs to finish and making some Debian-specific edits to the scripts. And yes, every single interaction is a test: it's an assert, where OpenQA has to match something and then click on something or type something. So, yes, we did it: there's the desktop, and there's the testing user. It can test Debian. And this is a screenshot of the console for the tests where everything is fine; it's not very informative. Each one of these green boxes is a test, and it's green because it passed. There are quite a few tests, but it's not difficult to make them.

So, for example, how do you define a test in OpenQA? The first file to look for is called main.pm. That's the main Perl file that gets loaded. You can define a test right there, but the best way is just to load other tests, using the API, which is just one line of typing each. For the Debian installation test, I just have three separate test modules, I guess.
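As a rough sketch of what such a main.pm schedule file can look like (the three module names below are hypothetical; os-autoinst provides the autotest::loadtest call, and this file only runs inside an os-autoinst worker, not standalone):

```
# main.pm -- minimal sketch of an os-autoinst schedule file.
# Assumes three hypothetical test modules under tests/.
use strict;
use warnings;
use autotest;

autotest::loadtest("tests/boot.pm");     # boot the installer ISO
autotest::loadtest("tests/install.pm");  # walk through the installer
autotest::loadtest("tests/desktop.pm");  # boot into the desktop, verify

1;
```

Each loadtest line just queues one Perl test module to run in order, which is why the speaker describes it as "just typing" one line per test.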
One for booting, one for installing, and one for booting into the desktop and checking whether everything's fine. So those are the three files you define; they are just file names, nothing special.

And how is the test for booting defined, for example? It's two lines of code. Every test in OpenQA is a module with a subroutine called run; nothing difficult there. And the API is very simple, because it interacts with the system under test through a very small footprint: it looks at the screen, types something, or moves the mouse and clicks. In this case, it looks at the screen with an assert_screen: it looks for the boot loader for up to 15 seconds. If it doesn't find it, it fails the test; if it finds it, it goes to the second line and sends the Return key code to boot it. That's it.

And this is a quick look at how you define an area of interest in a screenshot, which OpenQA calls a needle, because it finds the needle in the haystack. The green bit is the highlighted area at the bottom here, where it says Continue. This one is for finding the Continue button, and assert_and_click, which is a built-in piece of OpenQA, clicks on it automatically. That's the command; it's not complicated.

OpenQA and Debian. Most of OpenQA's dependencies are already in the Debian archive; there are just a few things missing, and OpenQA itself, the actual web application, could be packaged shortly. And the nice thing about DebConf is that, by accident, I met the person who has actually been working on these dependencies, and we've decided to collaborate on them. So there's Hideki right there. And if you would like to join us, help package these things, and make some tests to reduce the burden on our users, that would be very nice. You can catch us here at the conference; I'll be here today and tomorrow, but IRC is always fine, or email. And there are some resources.
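The two-line boot test described above can be sketched like this; assert_screen and send_key are the real os-autoinst testapi functions, but the needle tag "bootloader" and the file name are assumptions, and the module only runs inside an OpenQA worker:

```
# boot.pm -- sketch of an os-autoinst test module, assuming a needle
# tagged "bootloader" exists for the boot menu screenshot.
use base "basetest";
use strict;
use warnings;
use testapi;

sub run {
    # Fail the test unless a screen matching the "bootloader"
    # needle appears within 15 seconds.
    assert_screen "bootloader", 15;
    # Press Return to boot the highlighted entry.
    send_key "ret";
}

1;
```

A needle itself is a reference PNG plus a small metadata file (JSON in current OpenQA) listing tagged areas; the coordinates here are made up. An area of type "match" must match the screenshot, while an area of type "exclude" is ignored, which is how you mask out a region, such as a device name, that sometimes changes:

```
{
  "tags": ["bootloader"],
  "area": [
    { "xpos": 100, "ypos": 80,  "width": 300, "height": 120, "type": "match" },
    { "xpos": 150, "ypos": 100, "width": 120, "height": 20,  "type": "exclude" }
  ]
}
```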
I have some links there to the source code for all these tests for installing Debian. You can look at them; they're very simple. That matters, because if you look, for example, at Fedora's OpenQA instance, which is also linked here, or at the openSUSE tests, which are on GitHub and also linked here, those can get a little complicated, since they have various scenarios and upgrade paths, RAID scenarios for installation, upgrades, et cetera, et cetera. So just looking at the simple Debian installation scripts, from what you just saw of the installation, could be a better way to get started with OpenQA.

And I would also like to show you something. There it is. I reran these tests yesterday, and the interesting bit, I think it's big enough to see, is that the tests failed with the same ISO, and they failed here. This is the failure screen. You get your output, and you can see what it is matched against: that is the expected screenshot and the area where it is matching. That's what it expected to see, but what it actually saw during boot was, whoops, there's a virtual device with a name on it, and that caused it to fail right there. So you have a comparison of what it expected and what it saw. I didn't look into exactly why this happened, but this is what caused it to fail, and maybe this area needs to be excluded, because sometimes the device name appears and sometimes it doesn't, I don't know.

So I think I'm almost out of time. If you have questions, I guess I can take one or two. I'm not an expert on OpenQA; I'm just a basic user.

Q: Thank you so much for teasing us with OpenQA, because of course it's been interesting for Debian. I'm one of the guys trying not to reinvent the square wheel too much, but the last time I checked, in 2013, you guys were storing checksums and so on, so we couldn't just look at images and say which part changed, and why, and how, and so on. So I'm really interested in seeing this.
I'm wondering, is there any way not to do a byte-by-byte comparison? Is that something that could be explored?

A: It doesn't actually do byte-by-byte comparisons, because there were problems with that. I don't know what it did in the past; I only really know what it does now. What it does now is basically take the entire screenshot; the matching area is defined in, I think, a JSON or YAML file, so you can have multiple matching zones and you can have exclusion zones, so you can have a very complex matching area if you'd like. And the matching algorithm, I think, reduces the color space, and there is a fuzziness you can define on it. So it's not a byte-by-byte comparison at all anymore, at least.

Q: Okay, perfect. And I guess it might be pluggable, so one could exclude, say, the banner at the top, so when it gets updated we can just exclude that part and keep the test working.

A: Correct, correct. You can select which part of the screen you want to match, and you can select which part to exclude from a match. So instead of just having one rectangular matching area, you can have a very complicated one, where you have a rectangle that is matching and, inside that rectangle, an exclusion zone where you want the matching excluded; for example, for one part that changes but is not important.

Q: Okay, thank you, I'll leave the mic. I was wondering, especially about the graphical part: it looks like the graphical part could be interesting standalone, regardless of the worker and the master. Is that possible in the design at all, or am I just saying something stupid?

A: Which part, the graphical part?

Q: You were saying, I guess, that you need a worker to actually interact with the actual device, but does the design allow for other workers? For instance, where we have already invented the wheel, I guess, for the matching.
A: Well, the worker interacts with the backend, and the frontend is what you see; the frontend is where you define the screenshots and the matching areas. The backend bit is, for example, os-autoinst, and it in turn has different backends, so you can have QEMU as the backend, or IPMI, and all those things.

Q: Would the frontend also be useful outside of this whole QEMU frame?

A: Oh, you just want to use the frontend without the backend? I have no idea, but maybe you could talk to some of the people who work on these things; there are people you will find on the IRC channel who actually work on QA for their day job.

Q: In this system, where do the tests live? Because in Debian, in the end, we have a system that we call autopkgtest, where each package has its own tests inside itself. In this case you have an external repository for the tests?

A: Yes. If you go to the resources link, I can flip back to it, I guess. If you go to the resources area, there is a link to the GitHub repository, for example, for the openSUSE tests, and it includes the openSUSE tests for Tumbleweed and Leap and also the enterprise bits, all of it. It's just a GitHub repository. I am assuming that Red Hat has the same thing. And in each of the tests, you can also click on the test and get the source for the test. So for example here, where the tests failed, let's say the install test: if you click on it, you get the source code for the test and lots of assets; that's where the video is generated, and you have the whole installation script, all that step-by-step execution. So I think it's mostly a question of whether you wish to have different types of test suites and different scenarios, but yes, the tests are kept in one place for openSUSE. It's quite a lot of work.

We are finished, so thank you for your attention.