One thing: there is a contest you can join at the end of the conference, I think on Friday, where you can win some nice prizes, like some boards and stuff like that. You can find a description either in the booklets or on the whiteboards. And let me pass the word to Andrei Hovecek from SUSE, a QA developer who works on openQA; this will be his workshop.

Okay. Good morning at this unholy hour. Because this is a workshop, I imagined that you would all be participating hands-on, but of course we don't have enough machines for that, so it's fine. I'll just pass around some USB sticks; on them you'll find the virtual machine image for openQA, so we can all work on the same setup. [audience setup discussion] There is one file on the stick: the disk image with openQA preinstalled, plus the definition file for the machine, if you want to run it. You can of course put it into a different directory, but then you will need to edit the definition file accordingly.

I will cover only the first two parts; I will skip the installation and the administration, but you will get all the slides. These are from our internal training, so they're a little broader than what you need today.

First, as you probably saw in my presentation yesterday, this is a slightly cleaner picture of the architecture of openQA. We have an administration node with a UI and API; it's built on the Mojolicious framework.
If you know Mojolicious, it's a Perl-based framework that looks like Rails; it tries to imitate that workflow, but it's its own system. It provides the UI for the browser and, of course, the REST API for external applications and for the workers. Then we have a standalone scheduler and a WebSocket server, which is not displayed here but is also a separate process, and an external database. On the worker node we have the worker, which is a middleman between the test engine, os-autoinst, and the web application. The dashed line from os-autoinst to the UI means that in some situations os-autoinst can talk to the web UI directly, uploading results and asking for things without going through the worker. That's used mainly in multi-machine tests, when you are asking for the state of child jobs and so on.

The purpose is the same as in the integration-testing presentation: single- and multi-machine testing, where you write the tests as if you were controlling the machine yourself. In a typical scenario, when you are developing a new test, you just do the whole installation by hand on one machine, take notes on what you do — which keystrokes you press, where you click — and then write it down using the test API.

So this is how the tests basically look. We have one main test loader, called main.pm, and — I don't know how familiar you are with Perl — main.pm is not actually executed, it's only required, so all the variable handling and the commands are not in any subroutine; they're in the main section. When it's required, it just goes and evaluates all the variables using the get_var helpers, and then loads whatever tests are enabled. These variables, accessed via get_var, are the variables it gets from the scheduler; we'll see that later.
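The loader behaviour just described — main.pm is only required, it evaluates the scheduler's variables, and then decides which test modules to load — can be sketched like this. This is a Python toy model with invented variable and module names, not the real Perl code:

```python
# Toy model of a main.pm-style loader: read scheduler-provided variables,
# then decide which test modules go into the schedule. All names invented.

VARS = {"DESKTOP": "gnome", "INSTALLONLY": "0"}   # would come from the scheduler

def get_var(name, default=None):
    """Look up a job variable, like the get_var helper does."""
    return VARS.get(name, default)

def build_schedule():
    schedule = ["installation/welcome", "installation/partitioning"]
    if get_var("INSTALLONLY") != "1":              # skip the rest for install-only runs
        schedule.append("console/zypper_up")
        if get_var("DESKTOP") == "gnome":          # desktop-specific modules
            schedule.append("x11/gnome_terminal")
    return schedule

print(build_schedule())
```

The point is only the shape: no subroutine wraps this logic, it just runs top to bottom when the loader is required, and the resulting list is what the engine executes.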
And then the tests themselves — every test has a run subroutine; I don't know how I missed it on the slide, because it's there. The subroutine is called run, and the test engine executes that part of the code.

The other thing is our needles; I'll show how they are created. They're basically screenshots with some metadata where you write tags, so you can have many different screenshots with the same tag. When you are matching a screenshot in a test, you just give the tag name and the test engine will try each and every screenshot with that tag until one matches.

Our test API is a set of helpers. We try to keep the test API itself distribution- and OS-agnostic, so in theory you could even test Windows systems using openQA. On openSUSE we actually have tests which exercise dual booting. They don't do anything inside Windows itself, but at least they boot it and check that it still works, to ensure that when you install Tumbleweed on a dual-boot machine, it won't screw up your Windows installation.

Now, to clarify the naming, because I usually speak about tests and then somehow switch to jobs: the test is only the code. If you want a comparison — job is to test as process is to code. When you execute some binary, it becomes a process; a job is the test while it's running, the test is just the code. So when the scheduler generates all the things you want to test, it actually generates jobs; it generates them from the test code. That's just to be consistent with the naming.

Another concept is the variables. The variables are divided into several categories. You can actually choose whatever category you want and put everything in one of them — it's not that you have to put hardware-related things into the machines table. But we divide them to keep it logical.
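A rough sketch of how I think of these variable categories: each table contributes a set of key/value settings, and the scheduler merges them into one flat set of job settings. The precedence shown here (test suite overriding machine overriding medium) is an assumption for illustration; the variable names are typical but treat the details as hypothetical:

```python
# Assumed merge of variable categories into one job-settings dict.
# Later categories override earlier ones in this toy model.

medium     = {"DISTRI": "opensuse", "VERSION": "Tumbleweed", "FLAVOR": "DVD", "ARCH": "x86_64"}
machine    = {"QEMUCPU": "qemu64", "WORKER_CLASS": "qemu_x86_64"}
test_suite = {"DESKTOP": "gnome", "WORKER_CLASS": "tap"}   # overrides the machine's class here

def merge_settings(*categories):
    settings = {}
    for category in categories:
        settings.update(category)   # last writer wins
    return settings

job_settings = merge_settings(medium, machine, test_suite)
print(job_settings["WORKER_CLASS"])  # tap
```

Whatever the real precedence rules are, the job ends up with a single flat set of variables that get_var reads.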
So it's more logical to have the variables which influence the creation of the virtual machines in the machines table, the ones which influence the tests or the job flow in the test suites, and the medium types, which influence, for example, the installation method.

Then the job groups. Initially it was all done automatically, but we changed it a little so that it doesn't generate every combination; instead you have job groups where you specify, for the machines defined in the tables, which specific test suites to run. So you don't always have to schedule every possible combination. You just have the job group, you define the machines — like 64-bit x86, 32-bit, or ARM — then you have the test suites, and then you just enable different machines for different combinations, and it will generate the proper jobs.

I can show you how we use it in the real world — I just hope the Wi-Fi will be working. [network troubles] In the meantime: when you boot the machine from the stick, it will automatically start the web UI and Apache, which we use as a proxy, because our web UI always runs on localhost on some high port. When you open it, it's completely empty. You will initially be logged out, but we support various authentication methods, and on this image the fake authentication is enabled. So when you just click "log in", you'll be automatically logged in as a demo user. And in the admin section there are the tables.
They are empty here because it's a pristine installation, but basically you can add all the variables you want. This is how it looks in recent versions: in the job groups, once you have created a new job group and you have some machines and so on defined, it will allow you to create the connections.

[to an attendee] Did you manage to start it? Okay. I was hoping we would write some tests together, but that's it; this way I can only show you how we do it. I will continue with the presentation.

So, in the machine tables you see the backend. If you don't specify it, it will default to QEMU, but you can also have backends like svirt or IPMI, for PowerPC or bare-metal hardware testing. The worker class — I will get to it — is a good way to distinguish when you have some specific configuration on some hardware or for some virtual machines; you can use the worker class to limit some tests to run only on that set of workers. We usually use this to specify, for example, different networking setups.

In the test suites you see, as the name suggests, the logical things that you expect to influence the job flow. And the assets tables are generated automatically when you schedule a new ISO or some new build, because when you schedule it you need to pass some mandatory options: the architecture, the flavor, the version, and the distri. The distri is for when you have the tests organized in different directories — the distris are directories. So when you have tests for, say, Windows and tests for some Linux system, you put them in different directories, and then you use the distri to differentiate between them.
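The job-group idea from a moment ago — scheduling only the machine/test-suite combinations you explicitly enabled, instead of the full cross product — can be sketched as follows. This is my own toy model, with invented machine and suite names:

```python
# Hypothetical sketch of job-group scheduling: only pairs ticked in the
# job group become jobs, not every machine x test-suite combination.

machines    = ["x86_64", "i586", "aarch64"]
test_suites = ["textmode", "gnome", "rescue"]

# the pairs you would enable in the job-group table
job_group = {
    ("x86_64", "textmode"),
    ("x86_64", "gnome"),
    ("aarch64", "textmode"),
}

def generate_jobs(machines, suites, enabled):
    return [f"{suite}@{machine}"
            for machine in machines
            for suite in suites
            if (machine, suite) in enabled]

jobs = generate_jobs(machines, test_suites, job_group)
print(jobs)  # 3 jobs instead of the 9 possible combinations
```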
And job groups: when you create a new job group, it will show you the available architectures from the medium, because after you create the new job group there is a button to associate the asset, the medium, with that job group. You can add your mediums to one job group. Yeah, it's a little confusing — I know that when I teach this at work, almost no one gets it the first time. But once somebody gets used to it, it makes good sense.

These are the links where you can look at the tests; they're all at the top. I think this link is still correct — like for Fedora. [Adam: Yes, that's correct.] So there are the Fedora tests, you can see. This one you probably can't reach, because it's our internal instance, but you can reach the openSUSE one. And the tests — we actually share the tests between SLES and openSUSE. The Fedora needles are in the Fedora repository; they have a different layout for this.

[setting up the VM] The second command, after you define the virtual machine with virsh define on the XML file, is to start it. Also, if you have virt-manager, you can open it from there.

I'm logged in here; this is the actual openSUSE instance. See, this is updates and so on — these are the actual job groups. When you create a job group, it appears in the overview. That's also the purpose of the groups: to organize the jobs so you have a good overview of what's where.

How are we doing on GitHub? It's really strange that the network is not working for me... here we are. So I will show you main.pm, which is the test loader for the openSUSE tests. It's quite long, but hopefully I can show you the layout in a moment. In the actual layout we have the tests themselves.
We have a lib directory: when you want to create some new library or helpers for the tests, you put it in lib, and it's automatically appended to the Perl search path for modules. The data directory is where you put static data you need for tests. For example, when you want to download some data onto the testing machine, you put it in data, and then our test API functions which help download stuff to the testing machine pick it up automatically from the data directory.

[Adam:] I can say, if anyone is a little overwhelmed with all this — it is nice, we found when we started out, that you don't need to know all of these things. It's quite easy to set up one or two tests, and all the combinatorial variables you can just work out by trial and error until you get through it. Then you get two or three tests running, and a lot of this stuff is just what you need when you get bigger, and you kind of figure it out. It makes sense as you get more tests built, but you don't need to know it all to start out.

Yeah, that's true. I have too many Firefox windows open now... oh, got the machine running. Perfect. Okay, I will just finish how to run the tests, and then we can see how to write them.

So, as Adam was saying, you can run the jobs in two different ways. When you want to use all the combinations from the tables, you run it in what I call the ISO-centric approach: you get a new ISO, a new image from the build system, and with our API you basically tell openQA "I have this new image — schedule all the jobs you can find for it."
And then we have a jobs endpoint where you can create just one job, and you don't even need anything in those tables.

For the ISOs: you put the image into the directory openQA creates for it. This directory is usually shared, because we need some common shared storage between the web UI and the remote workers. So it's typically exported as NFS, or whatever you want to use, as long as it's a regular file system with shared storage. Then you create a POST request to the isos route with the mandatory variables, and it will create the jobs and return, in JSON format, how many tests it actually created and the IDs of those jobs.

On the other hand, for individual tests you use the jobs route with a POST command. It doesn't consult anything from the tables — it completely ignores them — so you need to specify all the variables yourself.

There is one middle ground between these two: when you use the isos POST command and specify the TEST variable with the name of the test suite you want to schedule, it will take all the settings from the tables, but schedule only the one test you specified. The reason we did this is multi-machine testing. With the jobs route you create only one job at a time, and it's very hard to create a multi-machine test that way, because in the variables you say "I want this test to run together with the test suite of this name", but internally, when the tests are created as jobs, all those links between them are transformed into job IDs.
And so, when you want to hack around and use the jobs route for a multi-machine test, you have to create one job, take the returned ID, then create the next one and add a special variable — _PARALLEL_JOBS, or _START_AFTER_JOBS — with the ID of the previously created job. It's all really hacky. That's why there is the middle ground with the isos POST: when you specify only one test name, it will also detect that there are dependent or parallel tests and schedule them too. So there are basically three ways to run it.

When jobs are already running, or finished, you also have ways to restart them — or, as we call it, clone them — either using the UI (but you need to be logged in with at least operator authentication level to do that) or from the command line; the same applies there. The client script — there's really no magic in it. The argument is the path of the REST route; the client only reads its configuration so it knows on which host name openQA is running, and the API key used for authentication. The whole REST API lives under /api/v1/, so the client will prepend that to build the complete REST query, and then you choose the method — GET, PUT, POST, DELETE, whatever — in the usual REST fashion.
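What the client does with the path argument can be sketched in a few lines. This is a hypothetical Python model with invented names (the real client is a separate script and reads host and key from its config file); the only point is the /api/v1/ prefixing:

```python
# Toy model of the client's request building: prepend the REST prefix
# to the path given on the command line. Host and key would come from
# the client's configuration file in reality.

API_PREFIX = "/api/v1/"

def build_request(host, path, method="post", params=None):
    url = f"https://{host}{API_PREFIX}{path.lstrip('/')}"
    return {"method": method.upper(), "url": url, "params": params or {}}

req = build_request(
    "openqa.example.org", "isos", "post",
    {"DISTRI": "opensuse", "VERSION": "Tumbleweed", "FLAVOR": "DVD", "ARCH": "x86_64"},
)
print(req["url"])  # https://openqa.example.org/api/v1/isos
```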
There is the clone_job helper, which can clone jobs. What clone_job can also do: for example, when you have your own instance running and you want to clone jobs from a different openQA instance, you can tell clone_job "from the openQA running at this URL, clone this job to my openQA instance running at a different URL". It actually asks the remote openQA installation which assets the job needs, downloads them, and pushes them over — so you can clone across openQA installations.

Okay, so the next section: how to write a test. I'll start with the easiest case, which is when you have a failing test and you only need to update the screenshots. This slide is again reiterating, just so we remember it. These are the three cases I will cover. A graphical mismatch — that's when, for example, new branding comes in and you only need to change the screenshots. A flow change — you need to put some new commands in; for example, a new or changed license means adding new steps. And a completely new distribution — that's the hardest path, because for the first two you usually don't need to mess with main.pm, but when you are writing completely new tests you do. The problem is also that for the openSUSE tests we have so many helpers and so many base classes that if you then try to write a test from scratch, you will find that you're missing a lot of infrastructure. So it's sometimes unnecessarily hard to begin a completely new test — but sometimes you have to.

Before we can fix anything, we need to see where the problem is. A job can be passed, failed, or soft-failed. Passed and failed are self-explanatory; soft-failed is when we
during the test detected that we need a workaround — we know there is a bug, but we know how to work around it and continue. So when we apply the workaround, we can mark it; that basically says "a workaround was used", so it passed, but not cleanly. On this slide I still use the old colors, but this color is more greenish than yellow now.

You may also see these icons — there can be three of them. The first marks the importance of the module: if an "important" module fails, it will not abort the whole job, but even if everything else passes, the overall result will be failed. A little harsher is the "fatal" flag: if a fatal module fails, it aborts the whole test and doesn't even try to continue. And the "milestone" icon: when a milestone module passes, it will create a snapshot of the current state — always, whether you define any other variables or not. The effect is that when some later module fails, important or not, the test engine will automatically reload from the last good snapshot and try to continue from there. So it sometimes manages to get back to a working state and see whether the rest of the test will work.

On the needles: when you select the failed test, you will see all the needles — the green ones passed, the red ones failed. All needles have their tags, and this is basically the algorithm: the engine tries all the different screenshots with the same tag, and it takes the first one which fits. Part of the metadata is the matching area, because you generally don't want to match the whole screen — for example, when a clock is changing, it won't match
anywhere, and it won't match at all if you are comparing the whole screen. So we just specify the region we want to be matched, and that's part of the needle.

I will just quickly check how we are doing on time... okay. I wanted to show you how our needles look; let me open this one and stress-test the Wi-Fi in the background.

So, this is updating the needles. We have some fancy things like the needle editor, which looks a bit rough, but you basically only need to learn a small workflow for it. You can see the screenshot view; the names are generated automatically. Here are the tags — the screen isn't very readable, it seems — and for the tags you either pick ones that are already defined (you can usually copy them from existing needles) or add your own. Here is the workaround property, which will mark the test as soft-failed if this needle matches. And this is the way you update: you just select the right screenshot — these are the already existing screenshots, and the first one is the one which was captured in this job, so that's the most up to date. You can then specify new matching areas, or copy matching areas from some other needle: when I select this one, it means I copy the matching area from that needle, and the same when I select the tags from a needle — I just copy them and don't have to invent my own. And then you just save.

openQA can be configured to automatically push these changes to a Git repository, if you keep the needles in their own repository; for the openSUSE tests we use this function. So when you try to help and update some failing needles, you save, it's automatically pushed and automatically deployed on the system — it's quite easy.

And how the matching works is that you don't have to worry
about the exact location where you put the matching region — you had it on the previous slide, at the bottom, exactly. It starts where you expect the region to be, but if it's not there, it makes a spiral motion, expanding the search area, and when it's hitting the timeout it tries the whole screen and just tries to find it somewhere. So it's reusable: when you are checking, for example, for some text from a log using a screenshot, you don't have to worry that sometimes it's at the end of the screen and sometimes at the top.

So, how to update needles versus test code: we usually start from the failing module and the failing needle. There is the test code viewer, which I showed you. You can use various tools to find where the problem is — either the needles, or the videos, or autoinst-log, the log from the test engine — and if there is a test code error, you just need to update the module.

[Adam:] I have to go to a different talk, but — we run openQA on Fedora, and we'd love to have help. If anyone is inspired by this, you can get in touch with me anytime and I'll walk you through helping with our openQA, and any questions you have about deploying it on Fedora, because we have a package for Fedora too. I'm Adam Williamson, for anyone who doesn't know me; I'm around at the conference and you can Google me. Thank you.

Oh, I can show you the actual code — I don't need GitHub for that, right? So, here it is. This is how our loader looks. Basically, these are some helpers at the top.
This part is for cleanup. Basically, it always starts by asking whether various variables are set or not; we also have helpers you can use for that. This is just reading settings — you can get variables, and you can set or modify them during this phase; that's part of the test API. The documentation for the test API is in the test engine code; it's plain old Perl-based POD documentation, and you can explore it.

[Q:] Those aren't passwords in clear text? Do you somehow encrypt them before pushing to Git? [A:] No, not these. They are only used in our test machines. They are visible on GitHub too, and we even have some SSH private keys there, but those are only used for testing; I know you can get some statistics about it, but they're never used anywhere else.

At the end, the loader looks like this: just load all the tests you want. The important thing is that this code is executed before the virtual machine is running, so you don't have access to anything — you can't check anything regarding the tested system. Also, don't use any sleeps here and so on, because it blocks things.

And for the actual tests, you can see, for example, the upgrade of the system. zypper dup — if you know zypper, it's like apt-get, it's for installing packages and so on, and it can also do a distribution upgrade; that's what "zypper dup" is. Now, you see, it's Perl. This is the tricky line — the "use base" with our install base class: it means we are not using the base test shipped with the test engine itself, but our own helper class, which already provides some functions used here. Then we use the test API. And this is the main run subroutine. We have some pre-compiled regexes here, plain Perl. And when it says script_run, what it actually does is type this command, using the virtual keyboard, on the tested machine.
But there is another benefit: script_run knows when the script is finished. It automatically appends a redirection to the serial console with the exit code of the command, so when you then wait on the serial output, you are guaranteed that this command has finished. script_run itself doesn't check whether it was successful or not; for that we have assert_script_run, which fails the test if the command fails. One more thing: script_run, even though it's part of the test API, uses Linux-specific things, so for example you can't use script_run on a Windows machine. There you need to use type_string, which just types the string — or override it and create your own helpers. This is all object-oriented Perl, in the most basic way (Perl has some strange ways of doing object-oriented stuff), so you can override some functions and inherit from them.

Then wait_serial waits for, and collects, the serial output. After each call it sets the position — think of it as sitting at the end of the file which is gathering the serial output. Between one call and the next, it sees only what was appended to this file after the previous call, so it doesn't match any of the previous content. You can be quite sure that anything matched between these two calls is new output and not old.

These are the test flags: you can set whether a failure should abort the test suite or not. This is still — well, not documented, but best practice: you don't actually need to set both fatal and important, because currently a fatal test is implicitly an important one. Also, many tests won't have these test flags at all; even where the important flag matters, it's usually set in the base test which we are inheriting from.
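The three flags from a few slides back can be summarized in a toy model. This is my own simplification, not the real engine: "important" fails the overall job without stopping it, "fatal" aborts immediately, and "milestone" snapshots the state so a later failure can roll back to the last good point:

```python
# Toy scheduler for module flags: important / fatal / milestone.
# A real openQA job is far more involved; this only models the semantics.

def run_modules(modules):
    overall, snapshot, log = "passed", None, []
    for mod in modules:
        ok = mod["result"]
        log.append((mod["name"], "passed" if ok else "failed"))
        if ok:
            if "milestone" in mod["flags"]:
                snapshot = mod["name"]          # remember the last good state
            continue
        if "fatal" in mod["flags"]:
            return "failed", log                # abort the whole job at once
        if "important" in mod["flags"]:
            overall = "failed"                  # job fails, but keeps going
        if snapshot:
            log.append(("rollback", snapshot))  # resume from the snapshot

    return overall, log

result, log = run_modules([
    {"name": "boot",     "flags": {"fatal", "milestone"}, "result": True},
    {"name": "firefox",  "flags": {"important"},          "result": False},
    {"name": "shutdown", "flags": set(),                  "result": True},
])
print(result)  # failed
```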
You see that when we are writing these calls, you just look at the steps and write them as you would expect when using the machine yourself.

Oh, here we have some results partially loaded — this is from yesterday. This is how you check the results. The good thing is that when you wait for serial output, it will show, next to the screenshot, the thing it captured on the serial line. Yeah, that's the network problem once again.

So, for a test flow change, you just update the test itself. And from scratch — when you create tests from scratch, you need to write the whole test loader yourself, as I was saying. Also, when I was preparing this and we were trying to write a test from scratch, it turned out we use so many helpers — even our test loader has so many helpers — that once you get used to them, for example when you start working on the Fedora tests, you will also use many helpers, and when you then try to do a completely new test, it's actually sometimes very misleading. To learn the basics, you just need to be aware that both the Fedora people and the openSUSE people use so many helpers that you have to watch out for that.

This is the test structure I already talked about. One more thing: in the file I was showing, we just use loadtest everywhere, because we have a helper which adds it, so you don't need to prefix it with the module name — which is probably a bad thing, because we should export it as a symbol from autotest, but no one has fixed that yet. In this source, where it is, it's correct. This is again the loader, again reiterating: the machine is not running, so you can't check anything except the variables, and that's it. And that's the loader for a short test.
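Going back to script_run and wait_serial for a moment, the two tricks — appending an exit-code marker to the serial console, and only matching serial output produced since the previous call — can be modelled like this. Device name, marker text, and class names are all assumptions for illustration:

```python
# Conceptual model of script_run + wait_serial. Simplified and hypothetical:
# the real implementation lives in the os-autoinst test API.

SERIAL = "/dev/ttyS0"  # assumed serial device on the tested machine

def script_run_command(cmd, marker="script-done"):
    # what would be typed on the virtual keyboard: run the command, then
    # write a marker carrying its exit code to the serial console
    return f'{cmd}; echo "{marker}-$?-" > {SERIAL}'

class SerialLog:
    def __init__(self):
        self.buffer = ""
        self.pos = 0   # position after the last successful wait_serial

    def append(self, text):
        self.buffer += text

    def wait_serial(self, pattern):
        new = self.buffer[self.pos:]          # only content since last call
        if pattern in new:
            self.pos = len(self.buffer)       # advance past everything seen
            return True
        return False

log = SerialLog()
log.append("script-done-0-\n")
assert log.wait_serial("script-done-0-")      # matches the fresh output
assert not log.wait_serial("script-done-0-")  # old output never re-matches
print(script_run_command("zypper -n dup"))
```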
You don't need to use strict here, because we enable it by default in our base test, and all the tests which inherit from it automatically have strict and warnings enabled — something like a modern Perl module.

And then, for the test API itself: assert_screen is for checking the screenshots; this argument is the tag of the needle we want to check against. x11_start_program — you know what it does, it starts the application, but it actually expects that you are in a graphical environment; again, highly Unix-specific. The same goes for become_root. Of these, only the assert_screen with a tag is universal, so you can use it on very different, even different OSes; the rest are very Unix-specific. Again, the only mandatory thing is the run subroutine; the test flags are all optional. Well, you do need to inherit from at least the base test.

The base test — I can show you; this is the beauty of open source, I can show you all the sources. So this is the test engine, and this is the base test that every test is based on. You can even get statistics in the tests if you want. And some helpers are defined here. This is run_tests — it just runs the tests. The good thing is that we have hooks which run around the test: in the test itself you can put a pre_run_hook subroutine, which will run before the test; then, of course, run itself runs; then there is a post_run hook; and when it fails, a post_fail_hook. You can put all of them in your test modules and they are executed together.

That's what I wanted to show — except the test API. There is also the distribution class, which holds the low-level stuff: the test API has low-level things like type_string, and then x11_start_program, ensure_installed, and so on sit at the distribution level.
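The distribution idea — the test API delegating low-level actions to a distribution object, which you can replace to support another OS — can be sketched like this. Class names and the commands typed are assumptions; the real classes are Perl, inside the test engine:

```python
# Toy model of the distribution override: a Windows distribution
# reimplements script_run on top of the lowest-level type_string.

class Distribution:
    def __init__(self):
        self.typed = []

    def type_string(self, text):
        self.typed.append(text)   # lowest-level primitive: press the keys

    def script_run(self, cmd):
        # Linux default in this sketch: rely on a shell plus a serial console
        self.type_string(cmd + "; echo done > /dev/ttyS0\n")

class WindowsDistribution(Distribution):
    def script_run(self, cmd):
        # no /dev/ttyS0 on Windows -- just type the command into the shell
        self.type_string(cmd + "\r\n")

dist = WindowsDistribution()
dist.script_run("dir")
print(dist.typed)  # ['dir\r\n']
```

Once the loader installs the right distribution object, test modules keep calling the same API and get the overridden behaviour.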
It checks if you have a custom distribution class which inherits from this base distribution. And this is what you would want to override: for example, if you were preparing to test Windows, you would create a new Windows distribution class that inherits from this one and overrides the helpers. Then you can use it through the test API, but you need to override it first. And in the loader — I didn't show you, but at the beginning of the loader we set the distribution. So, basically, in the lib directory of the test distribution you put your distribution class — for Windows you would put a Windows one there — and then in the loader you just set the distribution, and that instructs the test API so that all the overridden helpers come into effect. This is what you would need for OSes other than Linux. Even in our case we get our own distribution class; I don't even know all the overrides we have here, but it initializes some things. Here is another override, ensure_installed: it checks whether the package is installed, and I think it only works in X11, and so on. So you have many places to put helpers and overrides. — I have a question about the tests. Say you already have some tests — you test that OpenOffice starts — and you need to add a new test: when you open the help, something pops up at the bottom. How do you identify the area for this new test? You don't have screenshots for it yet — how do you add it, since there is only save_screenshot or check_screen, and you don't have a needle to match against? — Right, save_screenshot doesn't check anything. You write the test so that it gets into that situation, then you just wait a few seconds and save the screenshot.
This saved screenshot will then appear in the results, and you can create a new needle from it. — Okay, so I can do this from the web page? — Yeah, you can do it from the web page. Or you can create the needle by hand: take a screenshot of the machine yourself, add the JSON metadata and put it there. But it's easier to just put in a save_screenshot call; it will save the image and you then convert it to a needle. — Okay, one more question. You have prepared all these tests for some distribution. Everything is fine, but now you need to test, say, a high-contrast theme, so the colors change and also the fonts. Do you need to rework everything, or only go through the website and change the needles? — Only through the website. That's the usual procedure for new branding: the workflow stays the same, you don't need any new commands, but you need to completely replace the screenshots. You just reuse the same tag, so you don't need to touch anything that calls them. And because the needle name is auto-generated, you don't overwrite the old one — so you can test the old version and the new version, and it will just work. Okay, the test API. I have somewhere a generated list of all the commands we have — generated from the regular POD to HTML. This is everything we have, in categories. So, save_screenshot, as we were talking about. Then handling failures and workarounds: you can either set the workaround flag on the needle itself, or mark a workaround in the code — I don't know why it's listed in that category, it's probably misplaced, but you can mark it anywhere, if you know what's happening. assert_screen: the argument can be an array of tags, or a single tag name, and it will try to find one that matches. check_screen is the same as assert_screen, but without failing.
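Going back to the needle creation mentioned a moment ago: the JSON metadata stored next to a needle's screenshot is roughly of this shape — a hedged sketch, with made-up coordinates and tag:

```json
{
  "tags": ["openoffice-help-popup"],
  "area": [
    { "x": 100, "y": 450, "w": 220, "h": 40, "type": "match" }
  ]
}
```

The web UI's needle editor writes this file for you when you create a needle from a saved screenshot, which is why the save_screenshot workflow is the easy path.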
The good thing is that both assert_screen and check_screen return the needle object which matched, so you can see which tag it was. And you don't even need to save it: we automatically store the last matched object, so you can check later which tag matched without keeping a special variable around. This is how you can click without actually specifying coordinates: assert_and_click will find the needle and its matched area, calculate the middle point of the matched area and click on it. So if you want to just click OK, use this one — or the double-click variant. This one, for detecting a still screen, is very important for handling video: whether the screen is still changing or not. These are for access to variables, so you can influence the test or job workflow — not very interesting, but intuitive. Then the serial helpers: wait_serial, which just waits for and checks the serial output of the machine, and script_run, script_output and the other helpers. The important thing is that, as I said, script_run waits until the program ends; when you don't want that — for example because you need to check for some serial output yourself — you need to disable the timeout, or use type_string instead of script_run, and so on. It takes a little while to get used to all of this API, but at least we now have a list of it — which wasn't the case even two weeks ago. — You had no documentation? — Well, at first it was a bit obsolete, because nobody bothered to update the PODs, and it was all over the place. I actually spent a week collecting the documentation, putting it in the right order and updating all the docstrings. Before that, when you wanted to know which test API we offer, you just had to go read the code — and even some of our QA guys had no idea.
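The screen and serial helpers just described might be combined like this — a sketch only; the needle tags are invented, and `$serialdev` is the serial device name exported by the test API:

```perl
use base 'basetest';
use testapi;

sub run {
    # check_screen returns the matched needle (or undef) instead of dying,
    # so it works for optional pop-ups.
    if (check_screen('license-popup', 5)) {
        # assert_and_click clicks the middle of the matched area -- no
        # coordinates needed.
        assert_and_click 'license-ok-button';
    }
    # script_run normally waits for the command to finish; with timeout 0
    # it does not wait, so we can watch the serial line ourselves.
    script_run "make check; echo done-\$? > /dev/$serialdev", 0;
    die 'command failed' unless wait_serial 'done-0';
}

1;
```

The `done-$?` marker trick is one common way to turn a command's exit status into something wait_serial can match.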
The thing is that many people also don't know Perl that well. When you are new to a project, the first thing you do is copy and paste some existing test and adapt it — and that sometimes brings bad code style with it. So in the last week or two, while preparing all these presentations, I was basically updating the documentation as well. You see we have some different helpers, for example to shut off the machine — but of course not all of them work on all backends. This one usually works only for QEMU; maybe others too, I don't know, but it's backend-dependent. If you call it on a backend which doesn't support it, it will crash the test; you need to be aware of that. wait_idle is currently being phased out — we are gradually stopping using it — because on QEMU it actually looks at how much the virtual CPU is utilized, and when that drops below some threshold it says okay, we are idle; but on the other backends it just sleeps. It stops the execution of the test for, I think, 19 seconds by default — I don't know why 19 exactly, but that's what's in the code. That's why we are dropping it and waiting for serial output instead. Then there are the things to upload logs and to fetch data, if you need some data file for the test: uploading logs, uploading assets. Keyboard support: send_key for individual keys — nothing unusual there. And this is a nice hack, or helper: when you have, for example, a list of menu entries, say in a boot menu, and you want to select the third or fourth one, you don't want to put send_key in a for loop — and the positions can even change. For example, when we have a rollback entry, say for Btrfs, it's always in a different place. So we just have a needle of the correct entry selected, with the proper name, and we tell it to press Down until the needle matches: it will automatically iterate, pressing Down again and again until it matches.
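That press-until-it-matches pattern boils down to a single call — a sketch, with a made-up needle tag; the two trailing numbers are the maximum number of key presses and the per-check timeout:

```perl
use testapi;

# Press Down at most 10 times, re-checking the screen after each press
# (2s each), until the needle for the wanted entry matches.
send_key_until_needlematch 'boot-menu-wanted-entry', 'down', 10, 2;
send_key 'ret';   # confirm the now-selected entry
```

Because the loop stops on a needle match rather than a fixed count, it keeps working even when the entry's position in the menu changes between builds.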
So it's sometimes useful. Then type_string, and type_password, which is the same but the password is hidden from the log. Then mouse support: you can move the mouse. We have a mouse_hide helper, and it's actually quite important to use: when the mouse cursor is somewhere on the screen and it isn't in your needle, it can happen that the cursor shows up at the wrong moment, and even the small cursor can make the needle fail to match. So we use it quite often; the only thing it does is move the mouse somewhere off the screen, so it won't be in the needles. Yeah, then the console handling. This is stuff which, again, you can override, but the idea is that this part is not available from the test API — the console definitions are only in the distribution class. That's mainly so that the writer of a test doesn't mess with the system configuration of the testing setup. And you can add hooks to it: when we have the root console, it knows it's always tty2, so when you call select_console for the root console it will automatically press the key combination for tty2; and when the console is activated, the hook automatically logs you in as root. So now you don't need the become_root-style helpers — you just say "I need a root console" and it switches to it in text mode, "I need a graphical console" and it switches to it if there is one. The user console is tty4, so you don't need to watch whether there is a login prompt and whether you need to log in: it's already logged in. And the last one is audio support: you can record the audio, as you probably saw yesterday, and play it in the web UI just to hear it — but the check itself just compares images, even for audio. If you want it, this is all generated from
the testapi.pm, and you can see for each function what it's about and what happens there. This is probably what you want. Another important thing: by default, when the job runs and you don't set the result explicitly, it will not pass — I think the module ends up in an unknown state, but the overall job will be regarded as failed. So you need to explicitly set the result to okay for it to actually be okay; you don't need to do that for failures. And nowadays we just die — not literally, but in the test — because with die you can add a reason why the test died, and it will show in the web UI: "the test died because of this". Yeah, recording failures — we already went through that. Then there are the various logs and files the test produced. It depends whether you are debugging the test suite itself or the SUT, the system under test. For debugging the tests, the important log is the output of the test engine, where you see which needles were matched, what was typed, and so on — it's quite chatty, so you can usually find what's wrong. For debugging an actual problem with the tested system we have these helper variables. Normally, when the test is finished, all the data is uploaded and the disk image is deleted; if you specify the keep-HDD variable, the image will be preserved, and you can then copy it out and do whatever you want with it. Make-snapshots basically overrides the milestone flag and applies it to everything: after each successful test module it will create a snapshot. Then you can use the skip-to variable, which — when you restart the job, for example using the clone-job helper — will start the test from the milestone you specified. That can be used either for debugging the SUT or when you are developing a test, so you don't need to do the installation step again and so on,
so you just concentrate on the one module you are interested in. There is one drawback though — for example, the consoles. The consoles are set up when the test starts, and skipping jumps over that setup; so when you then try to select the root console and you skipped the initial phase, it either won't know the console, or you won't be logged in, and so on. One needs to be careful not to skip over the initialization. It usually works fine when you are testing the X11 stuff, because that is usually in the correct state anyway. Now we're almost at the end, with job dependencies. For a chained dependency you set start-after-test as a variable in the test suite, and it creates a chained dependency: the first test runs first, and the second test will be scheduled only after the first one has successfully finished. For parallel tests, the scheduler ensures that all of these jobs run together. The thing to remember is that the scheduler checks the state of the parent job, and you need to write the parent so that it waits until the children finish — because when the parent job stops or finishes, successfully or not, while the children jobs are still running, the scheduler will stop the children jobs — just kill all of them. Besides the test API I was showing you, we also have mmapi, which is exactly for multi-machine openQA. Here you see what the API exports: you can ask in what state the children are, and from a child you can ask for the parent, and there is the call to wait for the children — it just waits, but you need to be aware that the parent job needs to keep running. For synchronization it uses locks, which you have in the lock API: you can create a mutex and lock a mutex, and here it is.
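The wait-for-children and mutex pieces fit together like this — a sketch of the usual barrier pattern, with a made-up mutex name:

```perl
# Parent job's test module (sketch)
use base 'basetest';
use mmapi;    # multi-machine helpers: wait_for_children, get_children, ...
use lockapi;  # lock helpers: mutex_create, mutex_lock, mutex_unlock

sub run {
    # ... bring up the service the children depend on ...
    mutex_create 'service_ready';   # creating the mutex signals readiness
    wait_for_children;              # keep the parent alive until children end
}

# A child job would do the mirror image:
#   mutex_lock 'service_ready';     # blocks until the parent has created it
#   mutex_unlock 'service_ready';
#   ... run the actual client-side test ...

1;
```

The wait_for_children call at the end is exactly the safeguard mentioned above: without it, a parent that finishes early takes all its running children down with it.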
All these mutexes are named: you just pass a name when you create the mutex, and then you can ask for it by name. The important thing is that they are always scoped to the job group they run in, so when you use the same names in different tests, they don't interfere — you can safely pick any name and reuse it. It could still be improved, but so far this has been good enough for our tests. Then job assets. As I said, the difference between logs and assets is that assets are stored and can be reused by different tests. There is a difference between store-HDD and publish-HDD — between storing an asset and publishing it. There is also the API call upload_asset, where you can specify whether you want the asset to be private or public. The difference is that for private assets the job ID is prepended as a prefix to the asset name, so the asset is only reused within the job group and not outside of it — that guarantees you don't overwrite it from some other test. If you use a public asset, it keeps just the name you gave it, and every test that runs and uses the same name will overwrite it; you need to be aware of that. The store-HDD and publish-HDD variables are used as test variables and trigger the upload of the disk image of the machine which finished; and then of course you have the upload_asset API call for individual files. These variables are for the images. And that's a nice idea: for example, when you test something, you can publish the image and somebody else — some different tool of yours — can download it. I think we use it with a Jenkins cloud tool: you install up to some state, publish the asset, and then Jenkins is triggered on this image and does its own tests — so you can actually plug openQA nicely into your infrastructure. Yeah, that's basically it from my side. — Are there packages? — There are none from us for other distributions, but they managed to package it for Fedora, so I know it's doable — it's not SUSE-specific,
so you can package it — I see no reason why it shouldn't work. Yeah, there is also a Docker image. — And you don't have a CPAN module? — No. Adam, or someone, already yelled at us about the PODs. We actually do some nasty stuff in Perl sometimes, but so far no one has complained. And yeah, it's all built in the Open Build Service. We currently build it only for SUSE distributions, but we can enable others if there is interest — I think CentOS is there too — but so far no one has approached us about that, and the Fedora guys just do it themselves. — I haven't found this documentation online yet. — Which documentation? — The test API. — Yeah, currently you have to generate it yourself. Well, the plan — I didn't have time to finish it — is that this documentation will be generated and then published with openQA, because the test engine, where the test API lives, is os-autoinst, but all the documentation about how to write tests and work with it is in openQA. So in a week or two I expect to have some automatic publishing from openQA. And there is also a place where these presentations are: I will upload my yesterday's talk there, and this one we're recording now is there as well, and I think there is also a talk from the other guys of the openQA team — so you can see basically all the presentations we did. Okay, thank you for the workshop. — Thanks, see you.