So, welcome to the CI with FMF talk. What is it about? The main goal is to make the life of the developer easier when enabling tests, as easy as possible, and also to make open sourcing tests very simple. A few thoughts about the agenda: after a short introduction I will give some short info about the FMF format and about Level 1 and Level 2 metadata, then we will go through a couple of real-life examples, and finally show a demo of how this could work in the future; we have a simple proof of concept working there.

Now the introduction, who we are, that's this slide. My name is Petr Šplíchal, I'm from the OSCI team; this is Miroslav Vadkerti, also from OSCI and also from the testing team; and then we have František Šumšal, who is from the Plumbers team and works on systemd testing.

Let's start with the motivation. There is repeated feedback that if you want to enable tests in CI, it's kind of awkward and complicated: there are multiple files in multiple locations, with different extensions and names, that need to be created to enable tests. Gating, for example, is configured in a completely separate file from the one where you enable the tests themselves. We also internally keep metadata which are necessary for test execution, and they are stored in different places, scattered all around, so it's very hard to take them and open source them to enable the testing as early as possible. What we are looking for: the common use cases should be super simple, you just place one file with a couple of lines and everything is enabled. We want a consistent way of storing all test execution metadata, which should also allow us to open source tests more easily and run them earlier. The format should be concise, easily readable, and also flexible and extensible for the future. So, a bit about FMF: what is this Flexible Metadata Format?
This is something we had been looking for for some time, an answer to the question of how to efficiently store all test-execution-related metadata in a sensible way, and we found that FMF could be that answer. It allows us to save everything in Git, nicely in plain text, under version control. It's based on YAML, so it's a concise, human- and machine-readable format, and it adds a couple of nice features like hierarchy, inheritance and elasticity, which I will talk about now. It's available in Fedora, so you can directly install it and start experimenting with it.

Now the features. The simple use case should be as simple as possible: as you see, summary, contact, test (how to run it), duration, everything very simple, and for a single test it could look just like this. Then the hierarchy: there is virtual hierarchy support, so if you have, say, five tests and you don't want to create five files with the metadata, you can create a single file, define the virtual hierarchy there, and keep everything in one place. Then inheritance: it quite often happens that you have a bunch of tests where the contact person, the duration, the requirements or the relevant component is common to all of them. Instead of repeating and duplicating this information, you can say that this is the common information and it should be inherited: so where the previous slide looked like this, you can write it like this instead, with the common parts at the top, and the objects under them in the hierarchy inherit those attributes.
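To make this concrete, a single-test metadata file of the kind described here might look roughly like this (the attribute names are the ones mentioned in the talk; the contact and the command are made-up placeholders):

```yaml
# hypothetical single-test FMF file
summary: Quick smoke test of the component
contact: Jane Doe <jdoe@example.com>   # placeholder contact
test: ./runtest.sh --smoke             # placeholder command
duration: 5m
```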
This also allows you to create something we call virtual test cases: you have one implementation, one shell script, and then you run a short version, for example a quick Tier 1 test which can be part of gating, and also some deeper test which you run outside of gating, but where you can still see the results and make sure it passed. Elasticity: usually you start with a couple of lines and a couple of tests, but as time goes by the file grows and becomes hard to maintain. FMF allows you, when you need it, to separate the information into different directories or files and keep it nicely structured along the directory layout.

If you want to try it, just dnf install it, pip install it, or take it from GitHub and start experimenting; the package itself contains a couple of examples, so you can just go to the directory and play with them. It has a CLI with quite intuitive subcommands, I would say: fmf will list all the objects present in the directory where you currently are; you can filter for the objects you are interested in, say "just show me all the test cases stored here", or do some filtering based on tags or, say, the relevant component, but I will not dive into that here. There is also a Python module, so if you want to teach your application to support this kind of metadata, you can just read the metadata tree using fmf.Tree, climb the whole tree, and filter only the objects you are interested in. So that was a little bit about FMF, and now to Level 1 metadata. When we were thinking about this, after some time of discussion we ended up with this separation into Level 1 and Level 2 metadata.
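The hierarchy and inheritance just described can be illustrated with a small stdlib-only sketch. This is not the real fmf API (which provides fmf.Tree and its climb method); it is a simplified model of how child nodes inherit and override parent attributes when the tree is walked. All names and values here are made up:

```python
def climb(tree, inherited=None, name=""):
    """Yield (name, merged data) for every leaf of a nested dict tree.

    Keys starting with '/' are child nodes (the virtual hierarchy);
    all other keys are metadata attributes. Children inherit parent
    attributes and may override them, as fmf does.
    """
    data = {k: v for k, v in tree.items() if not k.startswith("/")}
    children = {k: v for k, v in tree.items() if k.startswith("/")}
    merged = dict(inherited or {})
    merged.update(data)
    if not children:
        yield name, merged
        return
    for key, subtree in children.items():
        yield from climb(subtree, merged, name + key)

# A single file with virtual hierarchy: common attributes at the top,
# two virtual test cases inheriting the contact, one overriding duration.
tree = {
    "contact": "jdoe@example.com",   # placeholder contact
    "duration": "5m",
    "/smoke": {"summary": "Quick smoke test"},
    "/full": {"summary": "Full feature test", "duration": "1h"},
}

for name, data in climb(tree):
    print(name, data["contact"], data["duration"])
```

Here /smoke inherits both the contact and the duration, while /full keeps the inherited contact but overrides the duration with its own value.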
Level 1 is what we now call the data which are necessary for test execution and closely related to individual test cases; you would usually want to place this information as close to the test code, or even to the source code, as possible. In the fedora-ci/metadata project on Pagure we have proposed, or refined so far, a couple of attributes for this, like summary, description, contact, test, path, environment; you have seen some of them in the examples, and for each of them we have a short description, motivation, user stories and examples, so that it's clear what the attribute should be used for.

Now, how it was before. Internally we have metadata for running beakerlib tests in Makefiles. It looks like this, it started like that some 10 years ago, and we still keep creating these files when creating new tests; I think this doesn't deserve any further comment. Another file, PURPOSE, is where you would place a more detailed description of the test. Then we have another bunch of metadata in our test case management system: some of the data are duplicated, some are additional, and there were fields we were missing; it was only possible to add an attribute through the tool, so we invented a workaround we call the "structured field". So you see: four places where we keep information about a test case. How could it look instead? A single file. As you see: key-value pairs, some extensions, and of course the support for inheritance I mentioned before, so in fact this example would probably look like this, because the rest of the data are shared across the component's tests and you would not want to repeat them again and again. So that was Level 1; and what is Level 2 metadata?
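A sketch of how those four scattered places (Makefile metadata, PURPOSE, the test case management system, the structured field) could collapse into one Level 1 file; the attribute names follow the ones discussed above, all values are illustrative placeholders:

```yaml
summary: Sanity check of the service start-up
description: |
    Start the service, verify it is running,
    and check that it serves a basic request.
contact: Jane Doe <jdoe@example.com>   # placeholder
component: httpd                       # placeholder
test: ./runtest.sh                     # placeholder script
path: /tests/sanity                    # placeholder
duration: 15m
environment:
    MODE: minimal                      # placeholder variable
```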
Level 2 metadata are additional information for the execution of one or more test cases: things like how the environment for testing should be prepared, which set of test cases is relevant for gating and which is not, which frameworks should be used, and similar. Like L1, this is defined in the fedora-ci/metadata project, under L2, so you will find the details there. A couple of words about the concepts.

An artifact is the thing you are going to test, and we would like to enable users to say: for a pull request I would like to run these tests, for a build I would like to run those tests, and when there is an update, which maybe contains multiple packages, I would like to run a different set of tests. In the previous presentation there was a question about updates with multiple packages, side tags and so on; this could be one of the options, that you say: when it is an update with multiple packages, do a different type of testing, some integration testing, something like that.

A test set is basically a group of tests; it can have a summary and then the definition. Here you see the structure: for the pull-request artifact you have one test set called "pep" (check pep8) and then some linting; for the build artifact, when the build is finished, you could run a smoke test which, for example, runs fast and which you would enable in gating, and some feature tests which can take a long time and which you would not enable in gating.
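The pull-request and build test sets just described could be sketched like this (the structure mirrors the talk; key names such as `gate` are illustrative, not a finalized L2 schema):

```yaml
/pull-request:
    /pep:
        summary: Check pep8 compliance
    /lint:
        summary: Static linting
/build:
    /smoke:
        summary: Quick smoke test
        gate: true        # fast, so enabled in gating
    /features:
        summary: Long-running feature tests
        gate: false       # slow, kept out of gating
```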
So, the test steps. The idea is to separate the steps in the configuration so that it's possible to select only the steps you are interested in. We have user stories like: as a developer I would like to check out my component repo and see what tests will be run, but currently that's not really possible; there is tests.yml, a playbook, other playbooks, and if I run it, it takes an hour and I still don't find out. Instead, you would say: just run the discover step and show me the tests, and you get the tests. Or another example: you would like to do quick testing on your localhost, so you say: skip the provision part, skip the prepare part, just do discover and execute. And each of the steps can have multiple implementations. Discover is to detect which tests should be run. Provision prepares the machine for testing: take it from Beaker, from OpenStack, or say "test on my local machine". Prepare covers additional steps for setting up the box, like installing packages and starting services. Execute is the step for the test execution itself. Report could in the future offer some possibilities to configure notifications, how the reporting should be done and where the messages should be sent. Finish is for cleanup tasks.

The current situation is that if you want to enable tests, and enable them in gating, you create the tests.yml file, and already here you see implementation details which are not that interesting for a developer, like "hosts: localhost" (what does that mean?) and "tags: classic", currently the only possibility. And if you run into more complex situations, the tests.yml file can look much more complicated, but that's how it is currently. Then there was a request to be able to configure the amount of RAM for the tests, so as one proof of concept of how FMF could be used we added this provision.fmf file, which
can contain the information about the memory. But it was just a proof of concept, we didn't put much more thought into it, so there is this syntax coming from standard-inventory-qcow2, the "m" and so on; it's not very intuitive. And for gating.yaml: copy and paste, once or multiple times if you want to enable multiple gates, and it looks like this. Here is our vision of how this could look in the future: you would have provision with memory: 3 GB, execute with the binary (the same one as in the tests.yml example), and then you say that this test set should be enabled for the gate on push to testing. Everything in one file. So that's the first part, and I will hand over to Franta to say something about the examples.

OK, so the Apache one, quite a simple example. Let's say you have a component and you just want to run a set of shell commands. You add the test set (that's the first line), you add the execute step, then the implementation, which is shell, and the list of commands, and that's it. In this way you should be able to create a simple integration test: install the package, start the service, create a web page, check that it runs. These, I don't know, seven lines could be everything; or if you have a smoke test like "--test something", it could be just four lines to enable the test. If any of these commands fails, the whole thing basically blows up. Actually, you would probably want something like this instead: install the packages in the prepare step, and let the framework install the actual RPM which is to be tested, instead of replacing it with dnf commands or something like that.

Now the systemd example. In systemd we managed to collect hundreds of regression tests, for many customer scenarios and so on, and usually
these scenarios depend on upstream features which were backported into RHEL, and it actually makes sense to run these tests upstream first, to check that they are not broken by future updates. So I wanted to upstream the whole test suite, for example into Fedora, but I wasn't able to, because much of the metadata is scattered around the internal infrastructure, and the metadata are vital to run these tests; even if I upstreamed the tests, I couldn't run them, for example, in Fedora. Thanks to fmf we managed to move this metadata into separate files which live in the Git repository together with the tests themselves. This is just an example, a tiny fraction of the test suite; we collected the metadata for hardware requirements, dependencies and so on into the files, so everything is in the one Git repository and you don't have to hunt for it around the infrastructure.

So this is the structure of the test repo, and this is the config in the upstream repository. There is the prepare phase, which installs all the test dependencies which usually lived in the Makefile; but the dependencies are actually infrastructure-dependent, you have different dependencies per infrastructure, so it doesn't make sense to keep them in the test metadata themselves, because the tests don't know where they will run. Then we have the discover phase, which allows you to filter the tests depending on the tier tag, the distribution and so on. And for historical reasons the systemd tests are written in beakerlib, so you can tell it how to run the test suite. This is the most basic config, but for example in RHEL we do several waves of testing, called tiers, and for the tiers the only row which changes is the filter itself, so it wouldn't make sense to copy the whole config over, every phase again and again. Instead you can write a common config which has all the common data in one place and then just override
the specific phases to contain different tier tags, different distro tags and so on. This should make the data duplication less common, I guess. And thanks to fmf and the work Mirek did, we managed to run the internal test suite, or at least its Fedora part, in Fedora, so hopefully in the following months we will manage to upstream the whole test suite and run it at least in Fedora and CentOS CI.

Sorry for the quick pace; our talk slot was cut and I got 25 minutes instead of 50, so we need to be very quick. Let me show you how we managed to run the tests on Copr builds, on Packit builds. If you were at the previous presentation you saw Packit, the tool the guys presented here for integrating upstream projects directly with Fedora; part of it, the Packit service, basically fires Copr builds. So we take those Copr builds, run the tests on a real VM with the Fedora matching the Copr chroot, and report the results back directly to GitHub.

So, Packit service. We have a testing system, and we will provide it as testing-system-as-a-service; it's not really an announcement here, I'll just say its name, it's called Testing Farm. It basically provides a service which Packit uses to contact us once the Copr builds are built, to do this testing. We do the testing on a VM: we run on CentOS CI OpenShift, we spin up a VM there and install the packages according to the Copr chroot. So if you are building for three Copr chroots, we spin up three VMs, run the testing and report back to Packit. Very shortly, how does it look? This is the simple shell executor, the one Petr was showing here: a very simple shell test that basically runs these four commands and asserts on each line. We have an example Packit configuration here, so this
is how you actually configure Packit to run those tests. The copr_build job, I don't know if it was shown in the previous presentation, is how you tell Packit that it should do Copr builds; then you add an additional tests job, and after the Copr builds are built, Packit instructs Testing Farm to run the testing on those Copr chroots. So here is a PR. This is all on Packit staging, the service is not yet in production; we'll work on it in the next weeks to get it done. It's a simple pull request, just an empty one; all the metadata is already stored in the repository, as I showed you. And here are the checks: you can see that Packit staging built the RPM successfully, and then there were three different test runs, on Fedora 29, Fedora 30 and Rawhide. If I go from the details tab to the Testing Farm console, I can see the output of Testing Farm. We try to be very concise, there is no middle layer in between, so you know what is going on. As you can see, it has been installing the Copr builds from the Copr chroot, then downloading the tests, which are placed directly in that repo (we actually clone the PR), and here at the end you can see the commands which have been run: installing httpd, systemctl start httpd, the echo foo, and the curl. So this is actually the test, and after it is done we contact Packit back, sending the results to its REST API.

For systemd it's very similar: it is the fmf test Franta was showing, the packit.yaml is completely the same, no magic there. And here we mocked the results so that they fail, to give you an idea how it would look if it fails, even if it fails due to the testing
infrastructure itself. So here only one test was run, and it failed; you can go again to the console, and then to the artifacts, to inspect the test output, and here it is actually an infrastructure error. We try to be concise here too and return some reasonable error: "error installing Copr build". What could that mean? You can go back to the Testing Farm console and investigate: here the Packit Copr repo is enabled, here it is installed, and here at the end I find that this pull request actually builds an older build of systemd, so it conflicts with the newer one already on the system we are testing. Most probably you need to rebase the pull request onto the latest sources.

That's it, very shortly. One very important thing, of course: you want the testing system that does all this to be available to users. So we will expose it as a single tool which basically uses a container, so you don't need to install a myriad of hundreds of packages; we'll provide it as a container. Here you can run this command already, and it will run basically the hello-world test, the first one, which tests httpd, on your localhost, and it will do basically the same thing as I showed you on the production instance which reports to Packit. You won't be running docker directly here; we'll make it available as a tool, so you don't need to specify all these things, and since it's a container there will need to be some additional magic once the tool is more mature. Currently you need to run it in privileged mode, because we run a VM inside the container; that's simply how it is, if you want to reproduce this you need privileged mode. And podman unfortunately doesn't work, because there is a bug
open, and I think the fix is not yet released; it needs KVM in the container. The second example doesn't work because I broke it, but it would actually run the tests that are placed in your local directory: you just create the tests, and with one simple command you are able to run and execute them. That's where we are heading. More announcements about Testing Farm will be coming; we don't have the announcement yet, I'm just mentioning it here. Does it make sense?

Yes, so, about needing root: of course we can run tests for example in podman, so if you use a podman container directly to run the tests, you don't need root; but if you want to run them on a full-fledged VM, you need some access, right? And the local command is not exactly run as root, but yes, it effectively runs as root. Of course we can support that too: the tool actually already supports connecting to an existing instance, but then it expects that you provisioned it in a certain way, because we need to connect to it somehow, or you supply all the information, the key and everything; that's definitely supportable.

Well, we would like to have the framework open source and free, so that it could be used in different places and for testing different distros. Ideally we would have the very same, identical way of enabling tests, test sets and gating in Fedora and in RHEL as well. Plus, if I understand correctly, we also have an RFE for fmf to be able to have some default configuration in a metadata tree and then have somewhere, for example downstream,
another fmf tree which you would just merge with the upstream one. So you would say: everything from the configuration in Fedora is copied, or inherited, and you just override some internal information, so you still have almost a single source of truth. So, for example, if I configure a test in Fedora and want to reuse it downstream with only my own part changed? Yes, the idea is that you have just a single configuration, you reference the other one which should be inherited, and then you have just a single line with the internal change, for example. Definitely we will need to work that out, or have a discussion about how exactly it should look.

And with these things, I completely agree that gating is separate and depends on the context of the product. The Level 1 metadata are shared, for example if you are testing a single master branch, but Level 2 metadata are usually branched for specific distros, and you want different configurations for different branches, so naturally I would say these configs live in different branches. But if needed, to remove some additional duplication even there, I think it would be possible to introduce these remote references too, though it's not strictly necessary. Any other questions? I think we are out of time, it's the coffee break, but if you want to ask something... Yes; as I mentioned, this is the prototype we have now for GitHub pull requests, and we want to add this also to the normal Fedora CI, as a possibility to define testing differently. And yes, internally we are working on moving away from all the legacy stuff, getting rid of it, but we have to do it continuously and slowly, because it's a lot of tests accumulated over 10 years. Thanks very much.
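Purely as an illustration of the remote-reference idea discussed at the end (the syntax here is hypothetical; the RFE was not finalized at the time of the talk), a downstream tree might reference the upstream one and override a single line:

```yaml
# downstream fmf tree (hypothetical syntax and placeholder URL)
inherit: https://src.fedoraproject.org/rpms/component
discover:
    filter: "distro: rhel-8"    # the single downstream change
```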