So, hello. Welcome to my presentation about the testing of Fedora and how we test Fedora now. My name is Petr Schindler. I'm part of the Fedora QA team at Red Hat in Brno. I'd like to say a few words about the tools we use and the ways we test Fedora to make sure that Fedora is not broken, most of the time. So, Fedora QA is the team whose goal is to make sure that Fedora works at least somehow: that it boots, doesn't break your computer and doesn't delete your data. If you want to know more, there is a QA page on the Fedora wiki. As I said, our primary objective is to ensure the quality of Fedora and the releases we produce. So, what do we do? Partly we try to automate as much as we can, so we develop some automation tools. We also use tools from other distributions, such as openQA, and we work on supporting tools like the Blocker Bugs app and other things that are less visible to people outside the team. And of course we do manual testing. We test the release composes and we test packages: we test updates before they land, for users who don't want untested stuff on their computers. Part of our work is also communication with other testers, with developers and with the community in general. We are quite a small team for such a big task as testing a whole release. Inside Red Hat about 8 people work on it: two in the USA, one in Canada (he is very active, he works for 10 people), four in the Czech Republic, one in China, and in India we have a manager and a few interns. But there is also the community, which helps us tremendously. I hope that since you came to this presentation you want to help us too, and you can join us, because we need a lot of help from the community. There are a lot of packages and we need to test updates for all of them, so everything is helpful. So our work is partly manual testing.
We need some general Linux knowledge about almost everything in the system, from the kernel to how GNOME works, and we have to know how to debug problems. And because we write automation tools, we also have to know some coding: mostly Python, of course some Bash, and the openQA tool is written in Perl, so that too. For communication we mainly use IRC channels (on Freenode there is one for Fedora testing) and mailing lists, and we use conferences to talk to people in person. Communication with the community is key to our work, because without the community we would be a small team and couldn't do proper testing of the whole system. So what automation tools do we use? A lot of the work is focused on Taskotron, a tool for automated task execution. Then we have openQA. It's not a tool we wrote; it comes from openSUSE. We just run it on Fedora and it works really nicely, and we don't have to maintain it because openSUSE does, which is a big help. And there is also something new, not from our team, which is used for testing newly built RPMs. It's called Koschei. I won't talk about it much, but there is a nice page about Koschei on the Fedora wiki where you can read about what it does, how it works, where you can see the results and what you can do with them. So what is Taskotron? Taskotron is an automated task execution framework. It listens to messages on fedmsg, and when it gets a message that is important to us or to some task, it triggers a run in which Buildbot executes the task, which is mostly some test: for example tasks that run rpmlint or rpmgrill on the RPM, or checks like upgradepath and depcheck. Mostly, when a package is built, we make sure that the package or RPM is correct, that it can be installed, and that it doesn't break your computer or your system. You can also read about it at taskotron.fedoraproject.org.
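The trigger idea just described (listen to the message bus, schedule the right checks for relevant events) can be sketched roughly like this. This is a minimal illustration, not Taskotron's real configuration; the topic strings and the rule table are invented for the example, only the check names (rpmlint, rpmgrill, depcheck, upgradepath) come from the talk.

```python
# Illustrative sketch of a Taskotron-like trigger: map incoming bus
# message topics to the automated checks that should run for them.
# Topic names and the rule table are simplified assumptions.

TRIGGER_RULES = {
    "buildsys.build.state.change": ["rpmlint", "rpmgrill"],
    "bodhi.update.request.testing": ["depcheck", "upgradepath"],
}

def checks_for_message(topic, completed=True):
    """Return the checks to schedule for a message, or [] if irrelevant."""
    if not completed:              # only react to finished builds/updates
        return []
    return TRIGGER_RULES.get(topic, [])
```

In the real system the scheduled checks would then be handed to Buildbot for execution; here the function just returns their names.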
There is a very nice, newly designed page where you can see what it does and what you can do with it. There is some documentation and links to the tasks that were run, so if you have some package, you can look there to see which tests were run and look at the results. There will be a talk about Taskotron, about how you can write your own tests and run your own tasks in Taskotron. It will be on Friday; the name is "Taskotron: Create Automated Package Specific Tests" and it will be given by Kamil Páral, who is also from our team. So if you want to know how Taskotron works and how to create and run your own tests, go to that presentation. OpenQA, as I said, is made by openSUSE. In the past we had to do all the testing manually; we had no automated tests. If I show you the matrices we had to fill in, with all the test cases we had to run, it's a really long list. Every line is a test case we had to run manually, and it took something like two days of work for four people. Yeah, it was really hard to test. And usually there was a release compose that had to be tested in one day, because there was a Go/No-Go meeting and we had to decide whether it was okay to release it or not. So that wasn't a pleasant time; we had to work really hard, and it's quite boring to run installation tests a hundred times a day. So Josef Skládanka and Jan Sedlák came up with the idea of trying something that could automate the tests. And openSUSE, which I think uses Anaconda for installation too, had a tool that was used to test the installation. So they tried to run it on Fedora. They said, okay, let's try it: this week we will work on running this on Fedora, and if it works, we can use it; if not, let's find something else. And it works, and it works quite well. If you look at those matrices, there are results from Coconut. That's our project Coconut, which is our openQA instance. You can tell by the logo that it's supposed to be a robot.
Yeah, it's automated testing, and it covers a lot of tests that used to take us a lot of time. You can see that it's almost everything in the installation now, so it helped a lot. Now, when there is a release, almost everything gets tested by openQA, and we have to test only some hardware specific things: for example, that the DVDs or live images boot from USB, things that should be tested on hardware and not in a virtual machine, because openQA of course uses virtual machines; I believe QEMU is used. And the great thing is that it's made by openSUSE. They maintain the code and we just have our own instance for Fedora, so if we hit some bug, we just file a ticket and they will probably fix it. It uses OpenCV to match patterns on the screen: it takes a screenshot, you have some predefined reference images or parts of screenshots, and it searches for them in the screenshot it made during the installation. If it finds the pattern, it can click or use the keyboard; it uses VNC for this. So you just define the steps, like wait for this image to appear, then click somewhere, and it goes through the installation, taking screenshots. And then it has a reasonably nice front end where you can see the results. These are results from one test run of some build: the green runs passed, the red ones failed. You can look at what failed, see the screenshots, what it tried to find and where it failed, so you can see what happened, and it's really helpful. Writing tests for it isn't hard either. There are some more complicated things that aren't easy to write, but once you get into it, it's not so hard, and it still saves a lot of time. If you want to know more about openQA, there is a talk by Jan Sedlák and Josef Skládanka, today at half past four, called "OS level testing with OpenQA". So go there and see how it works and what it can do.
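The matching step described above (find a small reference image, a "needle", inside the current screenshot, then act on it) can be illustrated with a toy version. Real openQA does fuzzy matching with OpenCV on actual screenshots; this exact-match sketch over small integer grids just shows the sliding-window idea.

```python
# Toy illustration of openQA-style needle matching: slide a small
# "needle" grid over a larger "screenshot" grid and report the first
# position where every pixel matches. Real openQA uses OpenCV and
# tolerant matching; this exact version only demonstrates the idea.

def find_needle(screen, needle):
    """Return (row, col) of the first exact match, or None."""
    sh, sw = len(screen), len(screen[0])
    nh, nw = len(needle), len(needle[0])
    for r in range(sh - nh + 1):
        for c in range(sw - nw + 1):
            if all(screen[r + i][c + j] == needle[i][j]
                   for i in range(nh) for j in range(nw)):
                return (r, c)
    return None
```

Once a needle is located, a real test would send a click or keystrokes over VNC at that position; a test step is essentially "wait until this needle appears, then act".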
Okay, that was everything about automation and what we automate right now. We also have other responsibilities; we still have to test things manually. There are composes made every day now, but from time to time there are more important composes, which for example contain a new Anaconda or something, and for those, new testing matrices are created and we have to test as many test cases as we can. We also have to test during the release; I will talk about that a bit later. Another thing we should test is updates. That's something we don't have the capacity to test completely, because there are a lot of updates in the system and we don't understand every package; we don't always know what a package does. So that's an area where the community helps a lot, and it would be great if that community were much bigger. So, test matrices, the matrices for testing composes. There are several types. We test installation; that's the longest matrix you've seen. There is the base functionality of the system: it boots, it looks fine, logging in works, SELinux is working and enabled, services are working and so on. There is also a matrix for desktops, with test cases for our desktops like GNOME and KDE; those are our primary, release blocking desktop environments. There are also test cases for others like Cinnamon and Xfce. Those are not release blocking, but they are there so that if you use them, you can test them and fill in results; that way, for example, a developer can see what works and what doesn't. And there is another matrix for Server, because of the, I'm not sure of the correct name, products: there are Workstation, Server and Cloud, so we test Server and Cloud too, and there are matrices for those as well. As I said, we have those matrices only for some composes, the more important ones which contain some big changes.
We also create matrices for release composes, the ones we would like to release as Alpha, Beta or the final release. When a new release cycle starts, the new Fedora is branched from Rawhide; for example, now Fedora 25 is branched and the testing of the future Fedora 25 starts. Right now we are in the pre-Alpha phase, which means we have some criteria that should be met for Alpha, like: it can be installed locally on one disk, and that's about all. If there are no bugs that break those criteria, we can release Alpha; then we move to the Beta phase, where the criteria are stricter. And at the end there are the final criteria, which are the strictest. Everything should work: installing to disks over the network, for example using iSCSI, has to work, and the logo and background have to be right. There are lots of small things that should work, and when everything is okay, when we have tested everything and there are no bugs important enough to block the release, then we say go, and if everything else is okay too, the release goes out. We test both on hardware and, mostly, in virtual machines. There are tests that can be run in a virtual machine but whose results could differ from the results on real hardware. As I said, booting from CD or USB, for example, is quite important, because it has happened in the past that we released an image that didn't boot when it was burned to a CD, so we try to make sure that won't happen again. If there are bugs that break some criteria, we call them blocker bugs. There are meetings where we discuss whether a particular bug is a blocker or not. If there is a blocker, the release, say the Alpha compose, cannot go out, and we have to wait for the fix. And as I said, openQA helps us a lot with this. For now it covers just the installation tests and the base tests, things like checking that services run, but it helps a lot. That's the matrices.
During the release cycle of a new Fedora we have events called test days. Those are days focused on testing one feature. If there is some new feature in Fedora, for example a new GNOME or anything you can imagine, and it has to be properly tested, or the developers want it tested more thoroughly than we could manage with our small numbers, then a test day event is set up. The great thing about it is that it's focused on one day, so everyone can come, with test cases prepared just for this one feature, and the developers are on IRC, so when you have some problem you can communicate with them. It helps to debug those bugs together with the developers and testers. And usually a lot of people come, or sometimes not a lot, but more than normally, because normally it's usually just the four or so of us who test. More people come on test days, so the feature gets more coverage, and problems get solved more quickly because the developers focus on just this day: they have the data and can debug it right then, and that helps a lot. One of those test days was, for example, the Workstation graphical upgrade test day. You can see that a lot of people came and tested; those are the names of the testers, and there are the problems they found. For example, when Kamil came to this test day he found a lot of bugs, just because we were focused on one thing. So it's a really good opportunity for developers to get their new feature tested. And of course, for example when there is a GNOME test day, you as a tester can come, see the new features and try them before they are released. From time to time I find something new that I didn't know worked in GNOME, for example, I don't know, screen recording or something. So you can find new stuff you didn't know about. It's good to check whether there is a test day that would be interesting to you.
For example, a very good test day is usually the power management one, because you can bring your own laptop and test how it works, and if it doesn't work you can talk to the developers about what you can do to make the battery life longer. So yeah. Okay, blocker bugs. As I said, we have those criteria for releases. This is what the Alpha release criteria look like: there is a list of things that must work, like requirements on the installer, which is Anaconda: it must run, it must be able to use a remote package source, and just basic things, like the disk layout, where it has to be able to use one disk, and that's all; you don't need anything else in Alpha. So there are those criteria, and if we find a bug that fails them, we discuss it in our really long blocker bug meeting, and if we say it's a blocker, it's put on the page. That's the Blocker Bugs app, where you can propose bugs. If you find some bug and you think it's a blocker, that it's important to have it fixed, you can propose it there. All you need is to be logged in with your FAS account, and then you just enter the bug ID, which milestone it breaks, Alpha or Final, and whether it's a blocker or a freeze exception. A freeze exception is something not important enough to block the release, but when the bug is fixed we consider it important enough to pull into the compose even after the freeze has happened. The freeze is the moment after which no updates are put into the compose except updates that fix some blocker or freeze exception. Freeze exceptions are, for example, bugs like glitches in the graphics, things that aren't important enough to block the release but are really annoying. If you propose a bug, it will appear here as a proposed bug; then we discuss those at the blocker bug meeting and decide whether it's a blocker or not. Of course, you can join the blocker bug meeting too, so you can say what you think about the bug.
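The decision process just described (a proposed bug either violates a release criterion and blocks the milestone, or is worth pulling in during the freeze, or is rejected) can be sketched as a small function. This is only an illustration of the flow; the criterion strings are invented examples, and the real decision is made by people in the blocker review meeting, not by code.

```python
# Hedged sketch of blocker review outcomes for one milestone.
# The criterion names are made up for the example; real criteria live
# on the Fedora wiki and the decision is made in a meeting.

ALPHA_CRITERIA = {"installer-must-run", "install-to-one-local-disk"}

def classify_proposal(violated_criterion=None, worth_fixing_in_freeze=False):
    """Classify a proposed bug the way a blocker review might."""
    if violated_criterion in ALPHA_CRITERIA:
        return "AcceptedBlocker"          # release waits for the fix
    if worth_fixing_in_freeze:
        return "AcceptedFreezeException"  # fix may land despite the freeze
    return "Rejected"
```

A Beta or Final milestone would use a larger, stricter criteria set, which matches the talk's point that the criteria tighten as the release approaches.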
There is also a list of accepted blockers; those are bugs that have to be fixed before, for example, the current Alpha release can go out. And the good thing about it is that developers see that their bug is a blocker bug, and they know they should fix it really quickly because we will be waiting on their fix. That's also why Fedora sometimes gets delays. Almost always, actually. When the release is delayed, for example by 8 weeks, it doesn't mean that we just fell out of schedule; it means that we want to have all those bugs fixed and we wait for that. We don't usually say, okay, let it be, maybe it breaks stuff and deletes all the users' data and kills all those kittens; we don't do that, usually, mostly. We just want those bugs fixed properly. Sometimes it's not so optimistic, because for example when a feature has only one developer, and he has to do something else, or it's a lot of work because he's alone on 20 important packages, then we want the bug fixed at least somehow, so from time to time something doesn't work quite correctly, but at least it won't break your stuff, your computer or your data. And we wait for that. So if you see some delay, that's probably because Fedora QA said there are problems.
The most time consuming part is software updates testing. Updates usually don't go directly to the distribution, to the release; they go to the updates-testing repository first. There they should be tested, and they collect karma, which is the way you say, yeah, I think this works: you give karma to the update, and when it has enough karma it is pushed to stable and all users get it. That's how we ensure that updates won't break the system. But that's the ideal case: there are a lot of updates for a lot of packages, which means not every one gets tested properly. So there are rules, for example, that if it's not a critical update to some important package, it can be pushed to stable after 5 days of sitting in updates-testing. So yeah, we wait; if someone tests it, that's great, and if not, okay, we push it anyway. It would be nice if that didn't happen; more testing would be useful. Updates exist only for stable releases and the soon-to-be-stable release. For example, now F25 is starting to get some updates, or rather updates-testing content, because during this phase things go from updates-testing directly to stable: until the release, whatever passes updates-testing is pushed into the release itself. There is the updates-testing repository, which is usually not enabled but you can enable it, and there is the updates repository, which holds updates on top of the starting point of the release. For example, when F25 gets released, everything at that moment will be in the stable Fedora repository, and everything after the moment we release goes to the updates repository. So after the GA?
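The push policy described above (enough positive karma sends an update to stable, negative karma holds it back, and a non-critical update can time out into stable anyway) can be sketched like this. The thresholds are taken from the talk (5 days for non-critical updates) and a commonly used karma threshold of 3 is assumed for the example; the real values are configured per update in Bodhi.

```python
# Sketch of a Bodhi-like "can this update go stable?" decision.
# stable_karma=3 and min_days=5 are assumed example thresholds;
# real updates configure their own in Bodhi.

def can_push_stable(karma, days_in_testing, critical=False,
                    stable_karma=3, min_days=5):
    if karma < 0:                 # negative feedback blocks the push
        return False
    if karma >= stable_karma:     # enough testers said it works
        return True
    # a non-critical update may go stable on time alone, untested
    return not critical and days_in_testing >= min_days
```

The last branch is exactly the "okay, we push it anyway" case from the talk: the update was never confirmed working, it just waited long enough, which is why more karma from testers genuinely helps.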
Yes. During the phase of releasing and testing the new release, for example F25, updates go from updates-testing directly to stable, which will become the release, the Fedora repository; but after GA they go to the updates repository instead. So the Fedora repository holds the original state of the system, which was tested most thoroughly, and then you can install the updates that are in the updates repository. And there is also Rawhide, which has no updates-testing; there it's like testing directly on your system. The system for updates is Bodhi. It looks like this: you can see what's new, who tested the most updates, which updates are the newest, and so on. You can also search: for example, if you believe that some package doesn't work, you can find all its updates and see their status, which tells you which repository each is in. "Testing" means it's in the updates-testing repository and not in stable, and "stable" usually means the updates repository, but it could also mean the Fedora 25 repository itself. So you can look at an update, see the update ID, and there is a link to Koji where you can see the packages. There are comments from testers, and you can give karma: if you think it works, you give karma of plus one, or you can say that it doesn't work, which means minus one. You can download those updates too, and if you find out that an update breaks your system, or the system doesn't start after the update, you can give negative karma, and it won't be pushed to stable; a new update has to be created instead. You only need a FAS account, and then you can test and provide karma, and you will get a badge for it. So if you want a new badge and you don't have it yet, you can give karma to two updates. You can get multiple badges; there are ones for, say, one karma, 100 karma, 1000 karma. So if you provide karma, you get badges, and everyone wants badges; it's great. There are also test cases; not every package has them, but some do,
so you can find out how to test a package, or how the developers want it tested, by looking there. Plans for the future: as I said, openQA now tests only the installation, so we would like to test the desktop too. One way to do it is to use behave and dogtail, which is behavior-driven testing; that's what desktop QE in Red Hat does. Another way would be to use openQA, but there would be a lot of problems with changing fonts and everything, because it's a real problem for openQA when small things change; it makes the image recognition hard. We would also like to have a Beaker instance in Fedora in the future, so people who have tests in Beaker would be able to run them in Fedora too. We also want better test coverage in automation, and there is still a lot of work on Taskotron. Yeah, there are a lot of things. So, how can you help us? You can join us: if you visit the wiki page QA/Join, you will find out what you can do and how to join. You can talk to us on the IRC channel; it's #fedora-qa, not testing, as I said. You can subscribe to the test mailing list. Add the QA calendar: if you know Fedocal, we have our calendar too; right now it only has meetings and test days, so if you want to know about test days, just add our QA Fedora calendar, and you can join in on the test days you find there. Use ABRT: when something breaks, you will usually see a pop-up message that something failed, so try to at least report it with ABRT, or click to report the problem; you can report it to Bugzilla through ABRT, or you can use Bugzilla directly. And of course, if there are packages you use and you would like them to stay functional, you can provide karma; it would be great to have more testers there. You can also try Rawhide; that's a really good place to find bugs, and it's really good for those who like new technologies. If you want to know how the future Fedora will look, that's the place to look. And there are some
links. There is actually a talk on Rawhide in two hours, called Living on the Edge, so you can have a look. Okay, so that's all from me. Do you have some questions?

Currently we do localization QA in a manual way, which depends a lot on the localization itself. Is there any possibility of automated localization QA, localization in openQA?

What exactly do you mean by localization QA?

At this stage the manual QA test focuses on picking up incomplete translation strings as well as bad translations in terms of quality, and the quality side takes more time. Incomplete translations can probably be picked up by an automated QA test, but can't the translation coverage be seen in the translation system, so you have an overall picture? For example, there are test days for localization, and they try to find bugs where something is not set up properly in the translation system, so a translation doesn't show, or where the string is simply not translatable because it is not marked as translatable on the developer side, so we cannot provide a translation. Either way, it has to go back to be fixed by the translator or by the developer.

I have to say I am not really sure how we could automate that. I can imagine it, but it would be a really long way and really hard work, because you would have to take a picture, for example, find the text and check whether it's in the correct language and so on. With our current tools it probably wouldn't be possible.

I'm not in QA, so I can't tell exactly how, but it can be done in one command against the build.

Then it could probably be run in Taskotron, if you already have some tool that is able to detect some of these errors. For example, we could run it on a new Koji build; if you can run the tool on the RPMs, maybe we could build something like this: 100% translated
into Japanese, so many strings translated, so many untranslated. We can look at it later. Other questions?

[Partly inaudible question, apparently about running openQA on other platforms or architectures.]

I'm not really sure, because openQA uses QEMU in a very central way, but I don't know; if you are able to run that, it should also be possible, but I don't really know if somebody has built it. It's an open source project, so if you want to know something about openQA, the openSUSE people would probably know more than we do; they are the right people to ask. But I think it's possible. We do not directly work on openQA itself; we have other guys who do, so you can ask them directly. In QA we just focus on the primary architectures, so I don't really know what the state of openQA is there.

With integration tasks such as this project, do you hit any technical issues with these openQA tasks?

Yeah, we don't submit failed results automatically, because we have to look at them first in case the problem is with openQA itself. The confidence level of those tests is not high enough, so we only submit passes automatically, and if there is a failure we have to look manually at whether it is a real failure or something else. And there are some issues.