So hello, everyone, thanks for joining my presentation so late in the day. I'm here to talk today about testing your GitHub pull requests with Packit on Fedora or CentOS Stream. So let's start. What will be the agenda? I'm going to do a brief introduction, then I will tell you something about Testing Farm, which I'm working on with a couple of folks in Red Hat, how you can enable it for your project, and what you can do with it. A few words about myself: I work as an engineer at Red Hat on testing. If you want to find me, you can find me on IRC, or that is my email if you want to reach out. So, introduction. I'm not going to talk so much about Packit itself. Who knows Packit? Okay, most of the folks. If you don't know Packit, basically it's automation tooling that tries to integrate upstream open source projects into Fedora and later also into RHEL and CentOS Stream. I'm just going to tell you the things about Packit that are important for this talk. Packit as a Service runs as a set of containerized services; there is also packit, the command-line tool, which does the actual work. And then there is the GitHub application, which is what matters here: this presentation is about Packit as a Service, the GitHub application, which reacts when you open a pull request against your project.
So, Testing Farm. We are creating a testing system as a service in Red Hat that will basically behave as a service that others can use, that other CI systems can use. This is not something new: it is something we have already been running in production as one of the CI systems for RHEL, and it is running quite stable. But now we are creating a service out of it that other services can easily integrate, and we are dropping some of the features that the internal CI system has. Because it falls under the Packit umbrella, it is also referred to as the Packit testing system. Together with Fedora we also settled on a common test metadata format, which I will talk about in a moment. For the infrastructure we use the CentOS OpenShift infrastructure for running the VMs: we run privileged containers, and inside those we run QEMU/KVM VMs where we run the workloads.
The Testing Farm service currently has four main components: the API, which services use to submit testing requests; the console, where users usually interact and can look up things like test logs and results; the artifact storage; and the CI workers, which really run the testing and test the stuff that is defined in the metadata. The code we currently have on GitLab, but it's moving; it will change as we open source a lot of the stuff that we have downstream. So Testing Farm, and this testing, is available to everybody using Packit as a service. Currently we have hard-coded the VM specs, something that you can't change right now, but in the future this will be changeable via the metadata specification. Currently we run VMs with 2 GB memory and a 4 GB hard drive, which people can use. We can currently support only single-host tests. So all the tests that run here are single-host, but they can be run in parallel; I will show you the multiple plans that we can define there. The current test execution is defined via the metadata specification, so to run tests you need to add some metadata describing what should run. In the future we want a very easy way where you don't need to use this metadata specification: you will just provide one script to Testing Farm via Packit and it will run it, if you have your tests available like that. We want to make the onboarding to testing as easy as possible. Currently we test Packit itself, and I will show you the workflow: Packit builds Copr builds, which we then install on the Fedora releases. For each Copr build we find the latest nightly build of that operating system, boot it, install the Copr build, and then run the tests on it.
Recently we added CentOS Stream, even though the qcow2 image that we have there is quite old. I just realized that there are new qcow2 images already generated by the CentOS team, so we will be updating that so it always looks up the latest CentOS Stream image, which it then boots and runs the tests on. So, yeah, the same as with the latest qcow2 images for Fedora. So how easy is it to enable Testing Farm in Packit? First of all, you need the GitHub application; that is a requirement, as there is currently no other integration (for example, GitLab). So you install the Packit-as-a-Service application via the GitHub Marketplace. This is how it looks; I have it already installed, that's why it says "edit your plan", because I have already installed it for my project. After you install it, you go and configure Packit. There is documentation for how to enable it, and for enabling the testing it is important to add this one job here, which basically says that it should run tests on pull requests and run against all Fedora targets. This specification of targets determines how many test check runs there will be on the pull request, so you will see for which Fedora release the tests ran. For the targets you can use different values: fedora-development, fedora-stable, or just one concrete target. So if you want to build and test your PR only for one target, for example only for Rawhide, you can do that. Currently we execute all the targets sequentially, but in the future we will run them in parallel to speed things up. Once it's done, this is how the feedback on the pull request looks. I didn't add any tests here yet, so the first test that is enabled by default for all your projects is basically that the Copr build that has been built from the PR will be installed on the latest nightly image of Fedora or CentOS Stream. That is the first test, and it is always done.
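As a rough sketch of what that job looks like in the project's Packit configuration (the exact schema can differ between Packit versions, so treat this as illustrative, not authoritative):

```yaml
# .packit.yaml – minimal sketch of enabling tests on pull requests,
# as described in the talk; key names may differ in newer Packit releases
jobs:
  - job: tests
    trigger: pull_request
    metadata:
      targets:
        - fedora-all        # run against all Fedora targets
        # or pick a single concrete target instead, e.g.:
        # - fedora-rawhide
```

With a target list like `fedora-all`, one test check run appears on the pull request per resolved Fedora release.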
So here, after enabling the configuration, it just installs the Copr build. This is how our console currently looks; this is where you can investigate and see how your tests ran. We plan to improve this experience a little bit, it's not very nice, but as you can see here, for this pull request it looked up the latest nightly image (from a few days back, when I took this screenshot), booted the image, and installed the Copr build on the machine. That was the first, very simple test that will be enabled for basically all Packit projects. So, a few words about the metadata specification. In RHEL we have quite a lot of test coverage downstream, but there are difficulties in moving this test coverage upstream to projects, where the upstream could also benefit from it. So we came up with a metadata specification that can deal with all the complexities that we have in RHEL now. The engineers in RHEL can have an easy way forward to the metadata specification: they can convert the old metadata, which lives in various legacy systems that are not even upstream and is scattered all around the place, and have one concise place where they store this metadata. And this is not only test metadata and test-related metadata like runtime and such things, but also CI metadata. We would also like to have one place where you configure the gating properties; currently all this information is scattered across various places or hard-coded in the CI systems. We would like one concise place where you can configure all of this. So that is our vision: unified test and CI metadata for the components of our operating system, done the same way in upstream projects, Fedora, and RHEL. That means we can easily move tests between RHEL, Fedora, and upstream projects. So that is the vision we would like to get to: testing as configuration should be consistent, and the concise test metadata format
should be flexible enough to cover future extensions, so the test metadata should be prepared so that you can extend it to various executions. If you haven't heard about the Flexible Metadata Format (FMF), it's like YAML on steroids. It has some nice features that let you organize your data in a very concise way. Here is the structure: when your project grows, you can split the metadata into various files, and there is inheritance, so you can define some properties only on the upper level and the metadata on the lower levels automatically gets them. We had a presentation at Flock 2019 if you are more interested in FMF. I'm going to skip this slide, as I don't have enough time, but basically we have some metadata levels defined in FMF that relate to the test definitions, and I will show you the examples. As I said, at Flock 2019 we enabled this for Packit; at DevConf 2020 this is now going into production for one of the components as the first start, and we will be onboarding other components. Later on we can begin, for Fedora, to move a lot of the test coverage that we have for RHEL. That is the main goal of why we are doing this: to be able to open source a lot of test coverage that has been created over the years for RHEL. That will be at Flock 2020, and I hope we will have a presentation about it there. The test management tool (tmt) is a new tool that we have been creating so you can interact with the metadata. I will show you, for example with tmt itself, where we are dogfooding our own metadata for the tmt tool, how you can easily discover what tests are available for your component. This landed in Fedora very recently; you can install it by installing the tmt package. Then you have a command with which you can convert the old metadata (that is the RHEL metadata, and that works only for our engineers), and you can easily initialize the metadata from a base template,
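To illustrate the inheritance just described, here is a hypothetical FMF file (the node names and values are made up for the example): attributes defined at the top level are inherited by every child node, and a child can still override them.

```yaml
# main.fmf – shared attributes written once at the upper level
duration: 10m
require:
  - make

# child nodes (tests) inherit duration and require automatically
/smoke:
    summary: Quick smoke test
    test: ./smoke.sh

/full:
    summary: Full test battery
    test: ./full.sh
    duration: 1h      # a lower level can override an inherited value
```

As the project grows, the same tree can be split across multiple files and directories without repeating the shared properties.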
so you can generate this metadata tree; you don't need to write it or copy it over, and you can discover all the metadata that is there. We also want an easy way to execute the tests on your localhost. That has been one of the big pains: there is always a problem executing the stuff on your localhost. So we will have one command with which you run the metadata-defined tests the same way in CI and on your localhost. The runner that we are creating is already used for RHEL and will later be used for Packit; for Packit we have slightly different code there now, but the runner will also be used in CI, so basically the same code will execute the stuff on your localhost and the same way in CI. So let me show you: this is the tmt GitHub project. If I run the tmt command, I can discover that there are two tests defined in this repository, and three plans. And we have another level of metadata, which we call stories. We also envision that people currently don't have very good ways to describe the test coverage of their component. When you ask how a component is covered with tests, you can't answer that question; it's hard, because you don't have any way to map the tests to features. So this is where we are looking at another level of metadata, where you can map stories to the plans or tests which cover them. So if I'm talking about stories: Petr, who is working on this, has defined a lot of stories here, so you can see that we are mapping user stories, defining concrete user stories, and even defining where each one is implemented, and you can point it to a test. So you can even look at how your stories are covered by looking at the coverage. Just a second. So that's why I call it a test management tool: it's more than just about running tests, it's really an open source way to manage the test coverage of your component in a very concise way, directly in git. The same way we have
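A hypothetical story definition in this style could look like the following sketch, with references pointing at the code and tests that cover it (the node names here are invented for illustration and are not from the tmt repository):

```yaml
# story.fmf – sketch of a story-level metadata node
story:
    As a developer I want to run my tests on localhost
    the same way they run in CI.
implemented: /tmt/steps/execute
verified: /tests/execute/local
```

Because the story, its implementation, and its verifying tests are linked in one tree, tooling (for example, listing stories with `tmt story ls`) can then report which stories are covered and which are not.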
tests here, and I can try to view them. These two tests that we have here are very easy: each just runs one script in a specific path in the repository. And this one here also has the duration: basically, if the test runs longer than that amount of time, it will automatically be killed, so that's the maximum execution time. Then we have test plans, which is the FMF data where we envision you could configure running a set of tests according to some filters. In our case we have a lot of tiered tests, where one tier can run for 5 minutes and another for 30 minutes, and you want to run only the tier 1 tests for the gating itself, but in later stages run the whole battery of tests. So you can create different plans here, and here you will also be able to change the properties of the VM infrastructure that the CI system brings up, so you can change the memory, the hard drive, or things like that. Do we still have time? Yeah. So here are some examples from the wild, from a simple project people use now. This is the level 2 metadata, where people define two plans, something like a smoke plan and a full plan, which inherit from the upper level, so you write the common part only once. That common part just runs an Ansible playbook that prepares the test machine, and here are two tests that are simple shell executions, as you would know from GitLab CI or anywhere else: you just write commands which are asserted on a zero return code. Do you have any questions? ... Can you repeat it?
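Putting the pieces together, a simple test definition and a tier-filtered gating plan might look like this sketch (the paths, tier values, and script names are illustrative, and the exact plan keys may differ between tmt versions):

```yaml
# /tests/main.fmf – one test: a script, a path, a maximum duration
/pull-request:
    summary: Check basic pull request handling
    test: ./test.sh
    path: /tests/pull-request
    duration: 30m        # the test is killed after this time
    tier: 1

# /plans/gating.fmf – run only the quick tier 1 tests for gating
summary: Tier 1 tests used for gating
discover:
    how: fmf
    filter: "tier: 1"
execute:
    how: tmt
```

A second plan with a different filter (or no filter at all) can then run the full battery in later stages without touching the test definitions themselves.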
So the question was whether the stories and the coverage status are somehow parsed or need human interaction. Currently you basically write there that this story is covered by a test, and when you make that link, it works. And for the results investigation: this is not about that, this is just about defining the test coverage and the test plans. Result viewing is not currently included there, but tmt, the localhost executor, will of course be able to run the tests and also provide some reporting. The main idea for us is to be able to move all the coverage that can be moved from RHEL to Fedora. I wouldn't consider it a general-purpose CI system where you can define all the phases as you wish. Here we are very opinionated, I would say, about how we envision things to be run. We currently have only about five phases, which are strict: there is the prepare phase, the execute phase, before that the provision phase, then reporting, and such things. So we are quite opinionated about how the test metadata should look; we are not giving people complete freedom. You can see on this project that I have linked there that they are using Zuul, they are using this thing to test, and they are also using Travis. It depends on what your use case is, and currently this is only for simple use cases, I would say. One benefit, as Tomas said here, is that you can still use it as a replacement for Travis, because you can do some very similar things with it. That's true. One benefit that we see is that we will be providing VMs for the execution to everybody using Packit, and that's usually not available in Travis if you don't have a subscription, I guess. And that's definitely a good question: I would expect Zuul to be integrated with our runner to run those tests, that's what I would expect, so we will need to talk about it. The question was if we plan to integrate with Zuul, and
currently no. We hope... we really like Zuul and how it manages the full pull request workflow, so we hope that we could use our runner to run those tests; that is how we run it for CI. We are focusing mostly on post-merge testing currently, also for RHEL. Internally for RHEL we don't have a good pull request workflow, so it would be nice if we could also use Zuul for things like that, but it's very far away on the road. What about GitLab integration, is it possible? Ask the Packit guys tomorrow. I think it is coming, but I'm not sure about the priority, so I will ask this question at tomorrow's talk for the most relevant information. Yes, so currently not so many people are using it; we are doing quite well using CentOS, but we also have access to AWS, which is sponsored for us by Amazon for Fedora testing, because we are supporting Fedora with this. So I think we will be able to scale this out. The question was whether it is possible to run Docker inside these tests: basically you have a VM with full root access, and you can do whatever you want. Of course, we are watching you. So the question is how we actually mitigate any misuse of this testing system. Currently we are looking at how to avoid it, but we don't have anything concrete; right now we are watching very closely, manually, what people are doing there. We already have monitoring in place, so if something is there we see the spikes, and if you are doing something nasty we will definitely look at what load you are running there. I would even say this platform is not really the best fit for load testing; it is more about the functional tests and integration tests that you would want to run there. So if you start running your miners, you might run them for a small, brief amount of time... best intentions, but yeah, out in the wild it can be dangerous. Yeah, I am out of time, thanks very much.