Welcome back everyone to the Fedora Leads and Linux Distribution Development track, and we're here today with Lukáš Růžička for a talk on automated testing in Fedora. So I'll hand it over to you. Hello, my name is Lukáš and I work in Fedora Quality Engineering, and today I would like to give you step-by-step guidance on how to set up openQA. I'm not going to go over the concept in much depth, because Adam already did that yesterday. So those who were interested basically know that openQA is an automated testing tool. Originally it was developed by SUSE, and it still is developed by SUSE, but Adam also contributes patches into it, so Fedora has some part in it now. It allows you to test various features of an operating system using a hands-on approach, as if a user were doing it. It basically creates and runs a virtual machine, boots it either from an ISO file or a qcow2 image, then performs various actions inside the virtual machine, compares the expectations to the real state, and evaluates the outcomes. The openQA architecture is basically this: there is a controller that does the scheduling, the web UI and job handling, and communicates with the database, and then there is a worker, or there are multiple workers, that do the actual testing. So you can have many workers, or you can have just one worker, it depends. For the local installation that we are going to talk about, we are going to use one worker, because I think it's enough to consume a lot of our memory. So first, we need to install openQA. That's a fairly straightforward process because everything is packaged in Fedora. So basically we use DNF to install the whole stack, specifically the packages openqa, openqa-httpd, openqa-worker, python3-jsonschema, and fedora-messaging if you want to consume the Fedora messages. But if you don't, you can omit the fedora-messaging package.
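As a concrete sketch, the install step described above boils down to a single DNF transaction (package names as they appear in current Fedora; drop fedora-messaging if you don't need the message bus):

```shell
# Install the whole openQA stack from the Fedora repositories.
# fedora-messaging is optional; it is only needed if you want
# to consume messages from the Fedora message bus.
sudo dnf install openqa openqa-httpd openqa-worker \
    python3-jsonschema fedora-messaging
```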
So starting with `dnf install openqa openqa-httpd` and the rest of the packages is a good start, and this will install the whole stack. You will need maybe two minutes to do it, which gives us 18 minutes left. Now, when everything is installed, we need to configure the httpd server. That's very simple too, because basically there are template configuration files provided by the openQA packages. So we only navigate to the /etc/httpd/conf.d directory, copy openqa.conf.template to openqa.conf, copy openqa-ssl.conf.template to openqa-ssl.conf, enable the httpd_can_network_connect boolean for SELinux, and restart httpd. Now that's basically the first part. There were times, maybe two years back, when the SELinux cooperation was not that great, so it was recommended to switch SELinux to permissive mode, but this is no longer required. You can operate in enforcing mode quite safely and without any issues. Then we need to configure the web UI. The configuration resides in the /etc/openqa/openqa.ini file, and basically we need to make two settings in the file. Under the global chapter, or global section, we find the branding, which Fedora recommends setting to plain, the other option being SUSE, I believe, and if you wanted a nice chameleon, you can use the SUSE branding. Otherwise it's totally the same, there is just the quite nice logo of the chameleon. And the download domains are fedoraproject.org. The authentication can have multiple modes, but for the local instance the fake authentication is good enough, which basically creates a demo user, and you will control the web UI using the demo account. Normally on the openQA production instance there is OpenID authentication, so you can use your FAS account to control the web UI if you have the rights and if you have the permissions to do so. But for this local instance, we are not going to need it. Then we will install and configure our database.
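As a sketch, the openqa.ini settings just described amount to something like this (section and key names follow the openQA configuration format; treat the exact layout as illustrative of the two changes discussed above):

```ini
# /etc/openqa/openqa.ini -- only the keys discussed in the talk
[global]
branding = plain
download_domains = fedoraproject.org

[auth]
# "Fake" creates the demo user for a local instance;
# the production instance uses OpenID instead.
method = Fake
```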
So, `dnf install postgresql-server`, and let's initialize the database with the `postgresql-setup --initdb` command. And that's basically it. Now openQA is ready to be run, and you can see that there is a set of several services that you should start, and you have two options basically. Either you enable them and make them start all the time, but because I don't want openQA to run all the time on my computer, I usually start them with a script. And I believe they should be started in this order; actually, I never started them in any other order, I always followed the guidance we have on the wiki page. So first you start postgresql and httpd, then openqa-gru, openqa-scheduler, openqa-websockets and the openQA web UI. This is enough for the web UI to start and to make further settings, but at this moment we are not able to do any testing yet. So let me start openQA for you. It's now installed. I am using the start-openqa.sh script, which basically starts everything in that order. And now you were able to see that it's unable to connect because the server wasn't running. But now when I hit F5, it still is not running. How come? I don't want to debug that now. So it might not... I don't know. Oh yeah, it shouldn't be. Great. So now you can see that openQA is running. That's the web UI. Now, yeah, it says http://localhost there. So you go there, open it, you see the web UI and you click login, and that immediately switches to logged in as demo, right? And now you find Manage API keys. And I don't want to show you my API keys, so I'll switch to the presentation here. And Manage API keys: you click Create to create the new keys, and you copy the key and the secret to some files I'm going to show you in a moment. There is an expiration checkbox, or radio button, that you can check or uncheck. I didn't know about it, or I didn't pay attention to it at first, and I was pretty much surprised after a year, because the expiration usually is one year.
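A small start script along the lines of what's described above might look like this (the script name and the exact unit list here follow the order given in the talk; the web UI unit name may differ slightly between openQA versions):

```shell
#!/bin/sh
# start-openqa.sh -- start the openQA services in the order given
# on the Fedora wiki page, instead of enabling them permanently.
for unit in postgresql httpd openqa-gru openqa-scheduler \
            openqa-websockets openqa-webui; do
    systemctl start "$unit" || echo "failed to start $unit" >&2
done
```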
So after one year I was quite surprised that it didn't want to start the tests, and it complained that there was no API key, and I was like, why, what's that? And then I realized that the key had expired. So for normal use you could create an infinite API key that will never expire, if you uncheck the expiration button. And then you edit /etc/openqa/client.conf, and under the [localhost] section you copy in the key and the secret. The secret is the second part of the key pair; it's quite visible in the web UI. And that's it. We have installed and set up openQA to work. And when we start the worker, `systemctl start openqa-worker@1`, this will start the first worker and connect it to openQA, and now we are ready to run the actual tests. And this can be done in 15 minutes if your network isn't extremely slow, which gives us five more minutes, and that's downloading the tests. Pardon? If I dial up? If you have dial-up, I don't think it's going to take you very much time installing openQA, but it's going to take you some big time downloading the tests. Would you say I'd be able to finish college between you downloading the tests and starting them? From my experience... the problem is, I don't know how dial-up connections work nowadays, but a year ago I had an LTE connection back at home, and it wasn't the fastest connection in the world. I got problems with downloading the test repository, because it has, I think, 14 gigabytes. Is it that much? Yeah. And the LTE connection would have dropouts occasionally, so git would complain about them and stop working. So this was a problematic thing to download, and once I had to head to the Red Hat office to download the repository. I know that you can use `git clone --depth 1` and you only download part of it, but it still is quite a lot because of the needles, and we are going to see what the needles are. So I'd say my 56k modem is not going to be done downloading anytime soon. Hello, instance.
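The client.conf edit described a moment ago looks roughly like this (the key and secret below are placeholders standing in for the values you copied from the web UI):

```ini
# /etc/openqa/client.conf
# Paste the API key and secret created under "Manage API keys".
[localhost]
key = 1234567890ABCDEF
secret = FEDCBA9876543210
```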
Actually, you don't need to download the tests to work with openQA, but in this talk I am assuming that you will, because, yeah, I will tell you why a little bit later. So basically you go to the Pagure repository and you download the tests into /var/lib/openqa/tests. You clone the repository with git, clone it as fedora, and change the ownership to geekotest. geekotest is the openQA user, so that it has the rights and permissions in those openQA test directories; it's needed to give it the permissions. And then we have the tests. We have the tests, you can see them on the right-hand side. But the tests are not loaded into openQA yet. So we will do it with the fifloader tool, and you can see that there are templates.fif.json and templates-updates.fif.json, which tell openQA a few pieces of information. They give openQA information about the available machines, about the available products, about the available profiles, and about the test suites that it can run. And we load it with the fifloader.py application. It's a tool written by Adam, and it works wonderfully, I must say. Because before we had this, the tests and the machines and the profiles had to be defined in YAML files through the web UI, and when you made the slightest mistake in the YAML file, it would complain and never work. And this fifloader changes the game totally, and now it's very easy. Well, from the user's perspective it's very easy, actually, because if you need to add another test, you can just take a look at how an existing test is defined, copy the JSON section, and that's it, right? Basically. So you load it: dash C means clean all, or clear all, and dash L means load, templates.fif.json. This will give you the basic Fedora tests. If you want to load updates testing, then you use the second template with updates. And that's it. 20 minutes and everything is set up and ready. So just some explanations.
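Put together, the steps above look something like this (the Pagure URL is where the Fedora openQA tests live; the fifloader flags follow the dash-C/dash-L description in the talk):

```shell
# Fetch the Fedora openQA tests and load the templates.
cd /var/lib/openqa/tests
sudo git clone https://pagure.io/fedora-qa/os-autoinst-distri-fedora.git fedora
sudo chown -R geekotest:geekotest fedora

cd fedora
# -c clears the existing machines/products/test suites,
# -l loads the definitions from the given template file.
./fifloader.py -c -l templates.fif.json
# for the updates-testing workflow, load the second template instead:
# ./fifloader.py -c -l templates-updates.fif.json
```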
We will work with machines, test groups, images, jobs, test suites, tests and needles. So a machine is a QEMU-based virtual machine, basically. There can be different backends, but we are not using anything else, just the QEMU-based virtual machines. And you set it up in the Machines section of the template file. So, for example, the UEFI x86_64 machine is defined like this: the architecture is 64-bit, the partition table type is GPT, then the QEMU CPU, the number of CPUs, the RAM, the VGA driver and so on. UEFI is 1, which means true in Perl. You also have the PFLASH code and PFLASH vars settings to define the proper QEMU virtual machine here in this machine section. You don't need to define anything else unless you need something very specific. Normally you would just use one of the available machines, which is either a BIOS or a UEFI machine. A product is something like a group of tests that run in the scope of a Fedora flavor. It can be, for example, Workstation, Server, or Everything, and then you put tests inside these groups. So when you run the Workstation product, it runs all the tests that are scheduled for Workstation, and it doesn't run the tests scheduled for Server, and so on. And you define this in the Products section of the template file. So, for example, the Fedora Workstation live product is defined like this: the distribution is fedora, which is the entire thing, the flavor's name is workstation-live-iso, and then you have settings with variables telling the system that the desktop should be GNOME, that it should use the install_default_upload test to actually create the installed Fedora image, the HDD size should be 20 gigabytes, it should run from the live image, the package set is default, and the test target is ISO. These are, however, user-made variables, so you can use them in the tests.
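A hedged sketch of what such Machines and Products entries look like in the fif JSON — the key names follow os-autoinst/openQA conventions, but treat the exact values here (CPU model, firmware paths, flavor and machine names) as illustrative rather than copied from the real Fedora templates:

```json
{
    "Machines": {
        "uefi": {
            "backend": "qemu",
            "settings": {
                "ARCH": "x86_64",
                "PART_TABLE_TYPE": "gpt",
                "QEMUCPU": "Nehalem",
                "QEMUCPUS": "2",
                "QEMURAM": "3072",
                "QEMUVGA": "virtio",
                "UEFI": "1",
                "UEFI_PFLASH_CODE": "/usr/share/edk2/ovmf/OVMF_CODE.fd",
                "UEFI_PFLASH_VARS": "/usr/share/edk2/ovmf/OVMF_VARS.fd"
            }
        }
    },
    "Products": {
        "fedora-Workstation-live-iso-x86_64-*": {
            "distri": "fedora",
            "flavor": "Workstation-live-iso",
            "settings": {
                "DESKTOP": "gnome",
                "DEPLOY_UPLOAD_TEST": "install_default_upload",
                "HDDSIZEGB": "20",
                "LIVE": "1",
                "PACKAGE_SET": "default",
                "TEST_TARGET": "ISO"
            }
        }
    }
}
```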
If your tests are structured without those variables, you don't need to define them, but our tests are structured around them, so we make distinctions in them according to the desktop type: some specifics are used for KDE, and some for GNOME too. So we usually have one test that can have various branches depending on what we need; some branches switch on for KDE and some other branches switch on for GNOME. This is how a profile is defined in the Profiles section. This basically tells us that the product Fedora Workstation-live-iso should run on a 64-bit machine which is not UEFI; the 64bit machine is a BIOS machine. These chunks are taken from the code, so this is exactly how you do it, and then you could define your own profile telling openQA that your product should run on a specific machine. And then there are test suites, and they define how a test or a group of tests will run, and they allow you to set test variables to control the tests. So basically you can use the variables that are defined in the machine, you can also use the variables that are defined in the product, and you can also use the variables that are defined in the test suite, and I believe the later you define a variable, the higher its precedence. So if you have a variable with the same name and a different value, the test suite value will override the machine value. Yeah? You can override that by prepending the variable name with a plus. It got very complicated, because over time we realized we need to sort of do things in different orders, but more or less there's an inheritance. It's also good to mention that in production the variables are pre-filled by the openQA scheduler and by the Fedora messages that are coming into it. When you run the tests locally, some of the variables are not pre-filled, so a test might break; then you need to fill in the variables, and we do it on the openqa-cli API command, which is the best bet I think, because you don't need to update the templates.
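As a sketch, a Profiles entry plus a test suite that uses it might look like this in the fif JSON — the names mirror the ones discussed in the talk, but the exact keys and the priority number are illustrative:

```json
{
    "Profiles": {
        "fedora-Workstation-live-iso-x86_64-64bit": {
            "product": "fedora-Workstation-live-iso-x86_64-*",
            "machine": "64bit"
        }
    },
    "TestSuites": {
        "desktop_terminal": {
            "profiles": {
                "fedora-Workstation-live-iso-x86_64-64bit": 20
            },
            "settings": {
                "BOOTFROM": "c",
                "HDD_1": "disk_%FLAVOR%_%MACHINE%.qcow2",
                "POSTINSTALL": "desktop_terminal"
            }
        }
    }
}
```

The HDD_1 value uses variable substitution, so the suite picks up whichever disk image the preceding install test uploaded for that flavor and machine.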
So what this means, basically, is that there is a desktop_terminal test that runs for Fedora Workstation live on x86_64 and ppc64, and that it boots from the hard drive, the hard drive being disk_, some flavor and machine variables, .qcow2. The flavor and machine will depend on those variables being set in the machine, for example, or in the other test that runs before, because, let me show you install_default_upload: the deploy upload test called install_default_upload will basically install Fedora and upload the installed image to openQA. Then this test would start after the deploy upload test, so after install_default_upload, and it would use the image created by install_default_upload, and now the POSTINSTALL variable says: just take desktop_terminal and run it there. There are several ways it could be done, but when there is a test that should follow the installation of the image, the POSTINSTALL variable is the cleanest way to do it, I think. And a needle is how openQA recognizes what is expected. We need to tell openQA what we want to see, so we define needles. Needles are PNG images with defined areas: I select some portion of the PNG, and openQA will look at it and try to compare it with what it sees inside the running virtual machine. If it finds it, it will do something about it; it might click on it, or it might just check that something like that is there, and if it is, then this tiny little test will pass, and if it doesn't see it, it will complain and fail. So you can, for example, check that there is a nice Fedora logo in the upper left corner using openQA, by defining a needle with that logo. And basically the needle is two files. There is a mistake here on the slides: it should be a PNG file with a screenshot, and a JSON file with the area definition plus some other info. Each needle consists of two pieces of information, the area description and the list of tags, and it looks like this, so you can see that
the tag is evince_about_shown, which says that the About window of the Evince application has been shown on the screen, and the specimen picture will be taken from evince_about_shown.png. The area starts at X position 445 and Y position 286, and it is 133 pixels wide and 146 pixels high, so it's almost a square, and the type of the needle is match. Match means visual comparison, and that's what works. Sometimes... I have heard, I have read in the documentation about the OCR needles, mythical OCR needles. I was told during Flock that they work, and that somebody at SUSE actually tests Battle for Wesnoth using the OCR needles, but it seems that the code needed to run it is still not merged, so you need to patch openQA, and I haven't had time to test it yet. When you want to write a test of your own — you don't have to, now you understand everything and you can locally run the whole Fedora testing stack — but if you want to write a test, that's a Perl script that defines what you do inside of the virtual machine and what you expect. So you basically define some mouse actions, some keyboard actions, some checks and evaluations, and you can also evaluate script outcomes. So you can test graphical user interfaces, but you can also test CLI commands, and all that will work. Tests have various statuses, such as passed, failed, softfailed, running and so on. If you want to create a test, you need to create a Perl module, put it in the tests directory of the openQA directory we created when cloning the repository, and you should use the libraries in the lib directory. You don't have to, but they're there, already created, already programmed; you don't want to reinvent the wheel, so you can check what commands Fedora has made for you and use them. For example, if you want to do some login, you can log into a console as root, for example, so you don't need to program all the stuff and all the typing and
all the checking; you simply use the login-to-console routine or something like that, or log into the graphical session, and it's in the library, so you can just take it and use it in your tests. And then you should probably study the test API, which is the description of the commands at open.qa/api/testapi. Then each of the tests should have a test header, which basically says what libraries or other packages it uses. So this will tell me that installedtest is used as a base. `use strict` is the Perl thing which keeps checking that your code is correct and that you don't do dirty stuff with it, because normally Perl doesn't check for... how do you say that? Yeah, if you define a variable, the namespace should be limited to a subroutine, for example, or to the entire package, and Perl normally doesn't check for it. Does it only support Perl, or other languages as well? The tests should be written... well, they are written in Perl, the whole thing is programmed in Perl, but apparently, according to the documentation, the tests can also be written in Python, but I have never tried it. Using Perl makes it dirty by default, doesn't it? Pardon?
Using Perl makes it dirty by default. I don't know... I don't know if Perl would have been our first choice, but the other alternative is to go and write our own whole thing, and I don't know, Perl couldn't possibly be as bad as having to go write your own framework from scratch. And Adam will know more. Yeah, just quickly: they have this crazy translation layer upstream, which I don't remember the details of how it works, but it uses some fairly janky stuff, and you can write a test in Python. I initially turned this off, I thought it was so hideous, but some internal team at Red Hat asked me to turn it back on, so it's now on in the Fedora packages. I haven't tried it myself, but it should work. Using Perl to write tests is not that terrible, because tests tend to use the functions from these libraries, which are very simple functions and quite well written. So most tests just tend to be strings of: type this, assert this screen, then type this, then assert screen. It's very formalized, so you're not writing ugly Perl most of the time; there are a few cases where you write ugly Perl. Yeah, so for a lot of these tests it's fairly simple, so it's fairly readable. Yes, Tim?
You can write good Perl, it is possible; the language doesn't care if it's readable or not, so it all depends on the person writing it, so the tests that they're talking about should be reasonably legible. For me, for example, doing this I saw Perl for the first time, but I somehow got used to it now and it's okay. But the truth is that sometimes we fight over readability with Adam, because he is better at Perl than me, so he thinks it's super readable; I think maybe not. Well, if you work in the sewer every day, you get used to it eventually. Yeah, so basically, if you `use strict`, it doesn't let you do variable definitions with wrong namespaces, for example. Then `use testapi` means you use the built-in openQA functions, and `use utils` means you use the basic Fedora library where most of the pre-programmed Fedora routines are placed. So testapi is the total basic: if you don't use testapi, you will not test anything, and if you don't use utils, you will need to do a lot of typing. Then the test file should have a subroutine called run; this is basically a function where everything you want to test is put. Anything outside the scope of the run subroutine will only be valid inside the test package. And then you add another subroutine, test_flags, where you can define what to expect: what it will do after the test finishes, what it will do when the test fails, or if you want it to fail. For example, the fatal flag tells it that if the test fails, then the whole test suite fails. But if you don't want that, because you have other tests inside the test suite that do not depend on the first one and you don't want to make it fatal, then you can set ignore_failure, and that means it will ignore the failure and continue. Sorry. You can set the test as a milestone, which means that after the test finishes, the state of the virtual machine will be uploaded to openQA again, and then the subsequent test will start off that milestone.
This is useful, for example, when you want to test an application and you don't want to keep starting it all the time, but you would like all the subsequent tests to start from a cleanly started application. So we start the application once, and while it's running, we upload the state to openQA, making it a milestone, and then we, for example, create a new file, and then I don't care what happens next, because the next subsequent test will return to the milestone and again start with a cleanly started application. Sometimes that's good, for example, when a test fails and the subsequent test would expect something that's not there because the previous test failed; I can fight that with this rollback, and always_rollback means: always return to the milestone. When you don't set any flags, then everything will be zero, I believe; if you don't set any flags, I believe it will roll back if a previous test module fails, but it won't die, because fatal's default is zero, I think. I always use at least one flag, so that I know what it should do, basically. And there is a test example for a desktop that looks like that, but I'm going to show you another test. These are the libraries that are currently available; it's probably self-explanatory: modularity.pm would be functions that we use when testing modularity, fedoradistribution would be functions specific to Fedora, cockpit, you know, you can expect what's in there. What makes a library a library, if you want to create one? It's another Perl package that starts with the package keyword, then you give it a name, for example package desktoptools, and you use base Exporter, and then you export the subroutines using, for example, our @EXPORT with start_gnome_software and install_application. This will enable the subroutines in this package to be used very easily, without having to call anything else, even in the test files. Without exporting them, you would have to call them like desktoptools::start_gnome_software, which is
not very convenient, so it's good to export those functions. Now let's take a look at how we can create a calculator test, a very simple application test for a calculator. The test will be placed in the tests directory of the repository, and we can start, for example, by touching it. Before we start to write it, we can register the test in the templates to make sure it will run in openQA, because normally, without the registration in the templates, it would not work, it would not start. Bad thing Sumantro is not here, because he's going to need this; hopefully he will see the recording. So the test must be registered in the templates, and you want to give it the architecture, the product and the variables. So basically you need to add a section to the TestSuites section that has the name of the test, calculator. You give it the profile where it should run; this one would run on the Workstation live 64-bit machine, so it won't run on the UEFI machines, just the BIOS machine, and it will take the pre-installed ISO, it will run the calculator test, and it will run it from the disk flavor machine qcow2 image. Whenever you make a change to the template file, you need to reload it into openQA, so you run fifloader with -c -l templates.fif.json. And when we want to run this test from a pre-installed image, which is also possible, we can replace some variables and say that HDD_1 is not something generic but a specific one, workstation.qcow2, and the user login is test and the user password is weakpassword. And now I am loading the tests using the entrypoint system, which allows me to put in a list of tests that I want to run, so I can start the login test, which logs me into the system, into the GNOME session, and then it runs the calculator test. And again I need to load it using fifloader. So the basic syntax of the test file is this; we have talked about it. There is one more thing I wanted to stress to you: each test module must end with the 1; because each
Perl module must return a true value, which is defined here on the line with the 1;. If you don't do it, it will complain and it will not run, and it's problematic; it bit me a couple of times in the beginning, so don't forget about the 1;. And then you can create a subroutine that will only be valid for the test itself. For example, you want to repeat something a couple of times and you don't want to repeat yourself, so you can define sub delete_result, for example, and say that this delete_result subroutine will always press the Escape key when called. Such a simple one, but then you can use delete_result instead of send_key 'esc', and if it's more complicated, then you can save some time typing the stuff again and again. You could theoretically take this subroutine and place it into a library if that makes sense to you. And then we want to start the application. In GNOME, normally we can hit the Super key, and we do it with send_key 'super'. We type the string 'calculator'; the max_interval 10 makes it type a little bit slowly. In Fedora you can find wrappers for typing strings, type_safely and type_very_safely, but I'm not using them here because I wanted to make it as generic as possible, using just the test API commands. So the max_interval makes the typing a little bit slower, so that the GUI has time to respond and the texts are really what they should be, because if you type too quickly, sometimes letters are dropped and then the texts are incorrect, and basically the tests fail because of the typos that are made by the engine. Then we send the Enter key, and we check that the calculator has started; assert_screen means: check that we see this particular thing on the screen. Then it's merely some clicking. assert_and_click means that it checks that the needle is there, that there is the widget that we want, and if it's there, it clicks on it. You can add the button parameter to the command, and you can define whether the button should be left, right or
middle; if you don't specify anything, it's left, so normal clicking is assert_and_click. Also, by default openQA will wait 30 seconds for the widget to appear, so if you think it should be there and you want to specifically check that it starts in 10 seconds, you can define a timeout parameter and make the timeout 10 seconds. If you leave it out, it's 30 seconds. For some tasks you might need longer, so you can make the timeout longer, 60 seconds, 120 seconds; for some installation purposes it's maybe 400 seconds; of course for upload tests this is quite long. So basically: click on button 5, click on button add, click on button 7, click on button equals, and check that the result has been shown. So that's one part of the test. Then we can multiply, but with no clicking, just using the keyboard: we type the string 12 times 15 with max_interval 10, we hit Return, or Enter, and we check that the result has been shown. We delete it again, and then we can switch to the keyboard mode using the Ctrl+Alt+K combo, send the Escape key, then a complicated string with brackets that should basically be very clever about what to calculate first, and we hit Enter and check that the result has been shown again. And that's it, we have the test. Now we have registered it, so we can start it, but I'm going to do it in a minute. So the openQA web UI will show us everything about the tests, but it can't be used to start the tests, actually, or at least I don't know how, so you use the openqa-cli command to make an API call. Basically it looks like openqa-cli api -X POST isos, and you pass ISO=file.iso, which is the ISO that you want to install. Just quickly, to make this part a bit less scary, possibly: there's a slightly higher-level runner you can use if you are okay with running official Fedora images, called fedora-openqa, which is the same thing the official scheduler uses, and with that you can just say, hey, schedule on this image from this compose, and it will do everything for you. But the
tradeoff is that it can only schedule official Fedora images, and it will need to download the image. I am using the openqa-cli way because I found it in the documentation the first time I was trying to run the tests, and I got used to it, of course, and then I keep those commands in a file, so I just uncomment the one I need and run it. You can also pass variables using this command. Some of them must be passed, like DISTRI, VERSION, FLAVOR, ARCH and BUILD; SUBVARIANT, DESKTOP and DEVELOPMENT are good for installation tests. So if you have a pre-installed image it's not that important, but if you want the test to install, then it must know whether it's installing a pre-release or Rawhide, whether it's being developed, because DEVELOPMENT, for example, checks that some parts are present during the installation, like the pre-release warning, and if it's not set, it pretends that no pre-release warning is shown. But for the pre-release, these variables should be passed. And then it is scheduled, and you can see it on the All Tests page. You can also see the tests that have already run: whether they passed, which is green, whether they soft-failed, which is yellow, or whether they failed, which is red, and you can click on the dot to explicitly see the details of the test. And then you see, for example, that this is probably from the... it looks like the KDE start-stop test suite, so ABRT started with some hiccup, Akregator started and finished okay, and so on. When you click on the icon of the image, you will see the screen that was recorded, and you will be able to see the area that was compared; if it is green, it was found, and the candidate needles and tags tell you: this was 100%, kmail_runs and some number, which means this image from the virtual machine resembled 100% what we expected. So that's fine. Normally, if you don't do anything, by default it tolerates 4%, so when 96% is still there, the
needle is taken as passed; if it's less than 96%, the needle is considered not found. You can of course lower the bar a little and make it 90%, or you could set it higher, because that might be what you need. And if there is an error, it's marked in red, and there are basically two red fields for every error; mostly there is just one error in one test, because then the test finishes, but there are two places where the needle was expected and not found. And the next one gives you some information. That information is a generic one, like: this test died, no candidate needle. And sometimes you can define your own strings, so you can get quite nice information about what happened in the test. And Tim, I was thinking whether, if we actually put some effort into describing those failures a little more exactly, that could be used to train the artificial intelligence to make the prediction a little bit better. I don't know. If the test failed, you can restart it from the web UI, from the test detail page, at any time; there is a restart button, and you can stop it while it's running by pressing the stop button, or you can just restart it while it's still running. Dealing with needles: you add a missing needle using the needle editor that is part of the web UI. You can define the area; you see the green area here, so this is the needle defined for the P button on the calculator. You can name it, and check the name or select the name, it's above this, not part of this, and in the upper right corner you can change the match level, so you can make it less than 96 or more than 96. So it's good that, if you don't want to deal with the needles elsewhere, you can just write the test without the needles, and you can create the needles during the first run of the test by using the developer mode, which can also be switched on in the web UI. I am going to show it to you when the test runs. I am sometimes using the needle application that I sort of wrote back when I thought that one totally needs
a needle editor that runs offline. It's quite good nowadays, because it can connect to a virtual machine: when you develop a test, you first try it manually in a virtual machine, so you can use the needle application to take screenshots out of that virtual machine, create the needles, and only then run OpenQA. That saves some time, because creating needles in OpenQA is, let's say, a little slow, and it gets tedious when there are lots of needles: you open the needle editor, edit the needle, save it, click to come back, wait for some time, the test continues past that needle and fails with another one, and you repeat the procedure. So when you feel like it, it's good to do this while developing the test, then load everything into OpenQA and only fix what doesn't work.

You could also use the Chancery application, a very simple editor that I wrote as a sort of exercise in Python. It's useful because it has pre-loaded testapi routines, so you don't need to go through the testapi documentation; you just select what you want and it gives you the snippet. It can help, although once you know the routines you don't really need it anymore, because then you just type them.

Okay, so: integration with Fedora. Of course, everything is based on Fedora, everything is supported on Fedora, everything is installable from Fedora. We use it on a daily basis, so the Fedora testing stack is up to date and should be working. We don't have many breakdowns, because our procedures don't allow merging anything that has not been reviewed, and Adam is a strict reviewer: his hawk eye will not let any problem pass into the production repository. So if you want to test something on Fedora using OpenQA, it's very easy to do, and as I said, you can set it up in 20 minutes. You can see the links here that you can follow.
These are here also for the sake of the recording, and I will upload the presentation to Sched, so you can take it from there. There is the OpenQA documentation, which is maintained by SUSE; the text this talk is based upon, which, when you open that link, gives you a step-by-step guide on how to install everything; and the repository of our OpenQA instance, which is used to hold the tests and the needles, is on Pagure here. Thank you for your attention... but we still have about ten minutes, I believe, so let me show you how a test really runs inside OpenQA.

First I need to show you /var/lib/openqa/share/factory, where the images are placed. You can see that there are lots of qcow2 images starting with a number followed by "disk_workstation_live_iso_64bit.qcow2". Remember the PRODUCT and FLAVOR variables that were in the test suite definition: the "disk workstation live iso 64bit" is that part, and each test creates its own image, where the number is the number of the job the image belongs to. While the image is there, you can repeat the test again and again, because we have something called an asset to start from; once you delete the underlying image, you can't restart the test anymore.
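As an aside, the needle editor discussed earlier ultimately produces two files per needle: a PNG screenshot and a JSON description. Here is a minimal sketch of such a JSON file; the field names follow the openQA needle format, but the coordinates, match level, and tag are invented for this calculator example:

```json
{
    "area": [
        {
            "xpos": 420,
            "ypos": 310,
            "width": 48,
            "height": 36,
            "type": "match",
            "match": 96
        }
    ],
    "properties": [],
    "tags": ["calculator_button_5"]
}
```

Lowering "match" to 90 loosens the comparison as described above; setting it to 100 demands an exact match.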
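Scheduling a job against those variables from the command line can be sketched roughly like this. The client script path is the one shipped by the openQA packages on Fedora; the variable values are the ones used in this demo and are illustrative, not required, and HDD_1 is my assumption for how the starting qcow2 image would be passed in:

```shell
#!/bin/sh
# Sketch of a local "run test" helper for openQA.
CLIENT=/usr/share/openqa/script/client

# When the test boots from a pre-installed qcow2 image, the ISO
# variable can be left out entirely.
ARGS="DISTRI=fedora VERSION=Rawhide FLAVOR=flock ARCH=x86_64"
ARGS="$ARGS BUILD=calculator_test SUBVARIANT=Workstation DESKTOP=gnome"
ARGS="$ARGS HDD_1=workstation.qcow2"

# Show the command that would be run; uncomment the last line to
# actually schedule the job on the local instance.
echo "$CLIENT isos post $ARGS"
# $CLIENT isos post $ARGS
```

In practice you keep several of these invocations in one script, commented out, and uncomment the scenario you need.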
That deletion happens to me on production: I come back after a couple of days and want to retry something, but the asset is already deleted, so it doesn't work. On the local machine, though, until you manually delete those assets, you can keep repeating the tests. You can see that there is workstation.qcow2, which will be used as the starting image. It's a pre-installed image, so I don't need to run an installation test first, because we don't want to waste time on a 15-minute Fedora installation. You can see that it's owned by geekotest here. You can also make it world-readable, and that will work too, but I always change the ownership to geekotest because I think it's cleaner that way; in this particular directory it's not that important.

Okay, then I go to my openqa directory, where I have the run-test script. You can see that there are lots of commands commented out, and I can select what I need, so it's easy to start; I like it, sort of. And I'm going to start... well, I actually realized that it totally doesn't matter what is in the ISO variable if you use a qcow2 image for the test; you could also leave it out in that case. So let's ignore the ISO variable. The DISTRI is fedora, VERSION is Rawhide, FLAVOR is flock, ARCH is x86_64, BUILD is calculator_test, SUBVARIANT is Workstation, and DESKTOP is GNOME, to be safe; although I don't think we're going to need the GNOME or Workstation variables, because we are not installing anything.

Now it tells me that one test suite started and zero failed. The job ID is 4158; the numbering starts from zero, so you know how many tests I have run on this particular installation. The product ID is 152, but that's not important. When I go to "All Tests", I can see the calculator test running at 0%; the progress bar is a little slow and changes in steps. When I click on it, I get a live stream of what the test is currently doing, so you can basically watch whether it does what you want it to
do. Here you have the developer mode: by clicking on it you switch it on, and you confirm that you want to control the test. The default is "fail on mismatch", as usual, which means that if a needle is not found, the test fails; leaving it like this behaves the same as having the developer mode off, so you can keep it on without it affecting anything. When you change it to "assert_screen timeout", then any time a needle is not found, it gives you the opportunity to open the editor and create the needle. This is how you can create needles while the test runs. We are not using it in our test, but sometimes you use check_screen instead of assert_screen, which just returns a true value if the needle has been found, and that is a specific thing: you can also make the developer mode stop on check_screen needles. The problem is that if a check_screen needle is, for example, missing on purpose and you switch that mode on, it will complain and push you to create the needle; once you forget that it's absent on purpose and you create it, you'll get into trouble later.

Now you can see that the calculator test has ended in the meantime, and it has passed, which is great. On the detail page I can see what the test actually was. Sometimes, if you forget to sync the repo, you might be running old tests: I'm not developing directly in the OpenQA repository, because that requires typing everything with sudo, so I develop on the side, and sometimes I forget to push or pull, and then the test does the same thing that should have already been corrected. So you can check that the test is what it should be, because you have the test script here. You can see what steps were taken and which needles were compared. For example, this one makes sure that the calculator has started, and it matched at 100%, which is great; sometimes, when the GUI changes, it might be 0%, and then you need to redo the needles. Then it checks for the button 5, again at 100%, and so on. You can also see the variables it uses to run the test, so sometimes,
when you have a Fedora stack running and the tests start failing in big numbers, probably something is wrong with the variables; you can compare the production variables with your own, set them correctly, and then it works like magic and suddenly the tests start passing.

You can also take a look at autoinst-log.txt, which is very important for failures and basically tells you everything that happens during the test run. You can see the blue line here: this is where graphical_wait_login starts, and this is what happens. It wanted to check for the needle (must match "login_screen"); at first it didn't find it, then "login_screen" timed out and wasn't found, but because it's a check_screen, that was probably fine; then it asserted the "login_screen" needle, found it after approximately 10 seconds, and continued. You can also use the diag routine to print messages into this autoinst-log.txt file, which can be a way to increase the readability of the log files if you need it. You can leave comments here, of course; on production you can put the bug number into a comment, and then it shows that the bug has already been created. So on production you might see little bug symbols just below or next to a failed test, and you know somebody has already filed the bug.

Also, which is interesting, you get a nice video of the whole process. It's quite a fast video, but it can be slowed down a bit using the Firefox menu: you can set the speed to 0.5. It's still quite fast at 0.5, but at least you can see something; you can also pause it, although it's very difficult to find the correct place. But it can be helpful too. So I think this is it, and if you have questions, you can ask.

With all the talk about needles, I'm missing either a thread or a haystack... or, to sum it up, I don't know, to be sincere. Maybe it's because of the haystack; maybe it's because it finds small portions
of a picture in a big image; maybe it's because of that.

Well, sometimes you do... you know, there is a strategy for how you might find a needle in a haystack, to carry on that analogy. Usually you're looking for a needle in a haystack: the haystack is given, the needle is given, and the question is what you use to find it, a magnifying glass or a magnet or whatever. But what you're doing here is defining the needle, which is put into the haystack, and then checking whether the needle is there. It's a weird terminology.

The haystack, as I understand it, is basically the PNG file; the needle is the little portion, and both are there; it's just that the one you expect might not be.

But since you define the needle, you can make it as big as you like; it could be the entire picture.

True, and then it's not a needle anymore, is it? Then it's a hammer or something bigger.

Okay, that's a very good point, which reminds me to tell you this: the size of the area actually matters, because the bigger it is, the more problematic it will be to match. Sometimes there is a tiny pixel glitch in the image of the virtual machine, so the needle won't be found: if it's too big and you expect 96%, you'll have trouble. So the best strategy is to keep the needles as small as possible; if you need to check for more, you can define more areas inside the one needle, but the smaller they are, the better for you. On the other hand, I once experienced a case where I wanted to check whether a button was lit or not. It looked basically the same, just a little darker shade when off and a lighter shade when on, and the classical 96% couldn't cover that, because in both cases the match would be above 96%, so it couldn't differentiate between off and on. I had to explicitly set it to 100%, and only then would it distinguish between the states. So sometimes it's a funny play with those needles.

My first comment was of course more like a joke, trying
to get my question in there. I totally understand what it's doing, and the way it works is very useful; it's just that sometimes you ask yourself these questions, like why did they use that name, why did they use that terminology. It doesn't make sense to me.

It doesn't make sense to me either, just like it doesn't make sense to me how a chameleon can look to one side and the other side at the same time.

Does it support accessibility testing?

Accessibility testing like... which accessibility do you mean?

The calculator test that you ran: if I do it in a high-contrast mode, or say a color-change mode, can I reuse the test?

You can reuse the test, but you have to recreate the needles. So you could basically have a couple of sets of needles: you could run the calculator test with high contrast, normal contrast, or large text, and it would work.

This is interesting, because this is why needles have the tag concept: you would have the exact same test logic and just three different needles which all match on the same tag. We use this a lot; we have lots of cases where different needles match on the same tag, because various conditions make the screen look different.

And just to add to Adam: if you run the calculator test and you have created the needles to support, for example, both high contrast and normal contrast, then it doesn't matter which variant of the test runs; both would pass, because the needles are already there. You don't have to tell it to use the high-contrast needles: if the test is high contrast and those needles are there alongside the normal-contrast needles, the test will pass.

Okay, I have a question about, let's say, the target audience for this. Who should be most interested in looking into this? Let's say I am a package maintainer and I have some graphical application in Fedora: should I try to install this, create a test and then submit it as a pull request to you? Or is it targeted
at me or not?

Well, I would like to say yes, but the question is whether we have the space and the resources for it. If it's a single test for a single application, I believe we would have the resources to do it; if it's a hundred applications, maybe we don't, because the time to run all the tests is scarce. So basically, if you are a packager who develops an application that is part of the installation or heavily used in Fedora, then I think it's good to make a pull request, and we could take it into our stack and test it.

There are different directions someone could go with this. As Lucas says, if you want to get a test in, before putting too much work into it, it's probably best to file an issue so we can discuss whether that's a test we would want to carry in the official instance. But you can also just stand up your own OpenQA instance, as Lucas has explained, and use it; you can do this kind of permanently, and there are cases of people doing that. There are other projects which also use OpenQA in a similar sort of way, like GNOME and some Debian stuff, so that's another way you could go with it. But yes, we do have resource constraints on the official instance.

I have, for example, talked to some Red Hat teams about OpenQA, and they said it looks great but it's too complicated to maintain. Actually, I don't think a local instance is too complicated to maintain, because it simply has to work: the basic thing is to install it and run it, that's it, and there is nothing to maintain, because it's maintained by Adam.

It does take you 20 minutes to get your initial instance running, but after that it will kind of sit there and work. You can have your pet instance sitting there, not use it for six months, and if you come back, update your system and try to run a test, it will probably be okay. Once you've done it once and figured out how to write a test and add it into the templates, it all gets a lot less overwhelming. So it's a little bit of initial setup and then a sort
of plateau.

There is one problem that can actually arise after an upgrade, and that's the PostgreSQL server: sometimes it gets updated and you need to upgrade the database by running a specific command. If you don't do it, PostgreSQL won't start, and then OpenQA won't start either. But that happens rarely, maybe once or twice.

And one more question: if I understand correctly, I guess this is most interesting to teams working on some bigger projects, or to maintainers of some high-profile applications, for example LibreOffice or something, who would be interested in getting their tests into the Fedora production instance; or perhaps to some passionate maintainers who want to run their own local instance. Is that correct?

That, or anyone in the community who wants to help us create the tests for the Fedora stack. Thank you very much.
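To make the PostgreSQL note above concrete: after a Fedora release upgrade bumps PostgreSQL to a new major version, the data directory has to be migrated before the services will start again. A sketch of the recovery, run as root, under the assumption that postgresql-setup (from Fedora's postgresql-server package) and the openQA service names from the Fedora packaging are in use:

```shell
# Stop everything that holds the database, migrate the data directory
# to the new PostgreSQL major version, then bring the stack back up.
systemctl stop openqa-webui openqa-scheduler postgresql
postgresql-setup --upgrade
systemctl start postgresql openqa-scheduler openqa-webui
```

If the migration is skipped, PostgreSQL refuses to start against the old data directory, and OpenQA fails with it, exactly as described above.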