Hello everyone, welcome to our next talk. Here I would like to present Amador, with his talk about testing framework internals.

Okay, thank you. Hello everyone, good afternoon, and thank you for coming by. My name is Amador Pahim. I work for Red Hat as a software engineer, inside the virtualization team, on the testing framework team. I'm here today to talk a little bit about a testing framework that we are investing a lot of effort in. In this presentation specifically we will be approaching the testing framework from the inside, looking at the internal APIs and the internal layers. The goal here is to show you how we approached some challenges that we had during the development of this testing framework, and also to show you how the framework that we are using in our team, and in some teams inside and outside Red Hat, works from the inside, so you can get to know the project, know how it works, and maybe get involved in some way, either using it or helping to develop it. So let's get started.

The testing framework we will talk about here is called Avocado. Do you know the Avocado testing framework already? Yeah, some of you; that's nice, more or less. Okay. What is Avocado? Avocado is a set of tools and libraries whose goal is to help with automated testing. Native tests are written in Python (actually, Avocado itself is written in Python), but any executable can serve as a test for Avocado, so it is a generic testing framework. It was mainly designed to provide a common ground, a bridge, between QA and developers, so that both groups, both teams, can have access to the same tools and the same way of testing the software they are working on. That was probably the main goal of the project since the beginning.

One of the components of this testing framework is the test runner itself. With the test runner you get automated logging, sysinfo collection, a nice user interface, and multiple output formats.
So with this test runner you can have a lot of features without having to use the Avocado API itself; the test runner is powerful by itself, without needing the API. But we do have the APIs, we have the libraries. Here is an example of a test that was written using the avocado Test API and some avocado utils as well. You can see that we are importing the Test API; here we have our test class, our setup phase and our test phase, and here we are already using some features from the Test API itself. We are also using some features from the utils library. This is just a quick example, so you can see what an Avocado test written with the Avocado API looks like.

Well, some other components. Another important part of Avocado are the plugins, and we have a bunch of them; almost everything in Avocado is a plugin. We have the Avocado-VT plugin, which is a wrapper plugin to run the virt tests, tests that were used in virt-test and autotest before, inside Avocado. We have the HTML plugin, the remote runner, a bunch of them. Avocado is modular, so you can create plugins for almost everything, plug them into Avocado and use them.

So let's talk about the internals of Avocado, the internal APIs, the internal layers. This is the general architecture of Avocado itself. Here we have a job, and from this job we will interact with all of these layers. The goal in this presentation is to go through every layer we have here, until we have the final test results being published by the results plugins.

The first one is the test resolver. In Avocado, the first step executed by a job is calling this test resolver. The test resolver is in charge of receiving the test references. If you take a look at this avocado command line here, you can see that we are telling this job, this Avocado execution, what our references are.
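To make the API example from the slide concrete, here is a minimal sketch of an instrumented test in that spirit. The `Test` base class below is a small stand-in so the sketch runs even where avocado is not installed; in a real test you would write `from avocado import Test`, and the class name and parameter key here are made up for illustration.

```python
import time

class Test:
    """Stand-in for avocado.Test, used only to keep this sketch
    self-contained; a real test would do `from avocado import Test`."""
    def __init__(self, params=None):
        self.params = params or {}

class SleepTest(Test):
    """Sketch of an instrumented test: a setup phase and a test phase."""

    def setUp(self):
        # In real avocado this value would come from the varianter
        # via self.params.get("sleep_length", ...)
        self.length = float(self.params.get("sleep_length", 0.01))

    def test(self):
        time.sleep(self.length)

test = SleepTest(params={"sleep_length": 0.01})
test.setUp()
test.test()
print("PASS")
```

The point of the structure is that the framework, not the author, drives the setUp/test lifecycle; the author only fills in the phases.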
Here they are files, but in Avocado we call them test references, and out of these test references the goal of the test resolver is to create a test suite that will be used in the next steps.

How can the resolver take these test references and transform them into a test suite? The job will call an internal method called make_test_suite. The steps here are: load the plugins. In this resolver we have several different plugins that can resolve a reference differently, depending on the needs that a given plugin has. The main plugin we have is the file loader; I will talk a little bit about how the file loader transforms a test reference into something that is useful afterwards. The test resolver will receive just the list of references that were given on the command line; this is all the test resolver gets from the job.

There is also a filter phase. Currently we can filter tests by tags: inside your test method you can create docstrings with specific tags, and these tags can be used to filter tests, to include only a subset of tests in your job. But the main point here is to create the test suite that can be used back by the job itself.

And this is how the test suite looks: this is the return from the test resolver to the job itself. Out of those test references up there we created this test suite. What is present in the test suite? We have the class name, we have the method name, the test function itself, the module path, and also the name that is used to identify that test inside Avocado. If you take a look, we have only three files, three references up here, and we have four elements in this test suite. That is possible because of this test module here, my_test_01.
We actually had two tests defined, two test methods, inside that test class. So we have the same reference for the same module, but with different functions, different method names, coming from this one file. This is the product of the test resolver that will be received back by the job.

Now, talking about our file loader: how can the file loader identify whether a test is using the Avocado API or not? Well, we do not preload the code itself; we just do some static analysis on the code, and based on that analysis we can determine whether the code is using the Avocado API. So we have some rules. If the test module has this "from avocado import Test" and we have a class that inherits from the avocado Test API, then it is probably an instrumented test, which means that it will be using the Avocado API. A different way to identify an Avocado instrumented test: if you are using just "import avocado", we will look for a class that looks like this. Or, if you are not inheriting directly from avocado, you have to flag that your class is actually using the Avocado API. And if it is not an Avocado instrumented test and it is executable, then we consider it a simple test, which means that it will just be executed as a process in the system where you are running your tests.

And this is an example of the list command. The list command will just trigger the test resolver to discover what tests are present in the test references that you are passing there. Okay, once we have the test suite.
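The detection rules just described (static analysis, the module is parsed but never imported or executed) can be sketched with Python's standard ast module. The helper below is only an illustration of the idea, not avocado's actual loader code, and it covers only the "from avocado import Test" case:

```python
import ast

def looks_instrumented(source):
    """Return True if the source both imports Test from avocado and
    defines a class inheriting from Test. Purely static: the module
    is parsed with ast, never imported or executed."""
    tree = ast.parse(source)
    imports_test = any(
        isinstance(node, ast.ImportFrom) and node.module == "avocado"
        and any(alias.name == "Test" for alias in node.names)
        for node in ast.walk(tree))
    inherits_test = any(
        isinstance(node, ast.ClassDef)
        and any(getattr(base, "id", None) == "Test"       # class Foo(Test)
                or getattr(base, "attr", None) == "Test"  # class Foo(avocado.Test)
                for base in node.bases)
        for node in ast.walk(tree))
    return imports_test and inherits_test

instrumented = (
    "from avocado import Test\n"
    "\n"
    "class MyTest(Test):\n"
    "    def test(self):\n"
    "        pass\n"
)
print(looks_instrumented(instrumented))      # True
print(looks_instrumented("print('hello')"))  # False
```

Parsing instead of importing matters because a test module may have dependencies that are not installed on the machine that is merely listing tests.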
It is time to call the varianter. The varianter in Avocado is a feature that allows you to create different variants of the same test. The goal here is to execute a given test as many times as you have different sets of parameters to inject into that test, so you can test different scenarios with the same code. The varianter uses YAML files: you can define in your YAML files the parameters that you want to feed your tests with, and then, when the time comes to call the test runner, you will have multiple combinations of parameters to inject into the test. So you do not need to write the same test multiple times; you just write it once and create different variants of parameters that will be injected into that test.

The job calls the varianter, and this module will parse the YAML files, possibly injecting data (that is an advanced feature of the varianter, injecting data into the resulting tree), and it will return the varianter object containing the multiplex tree, which we call the mux tree. It is this object that contains a tree with all the variants that can be used by the tests afterwards.

Okay, this is how a YAML file that provides parameters to Avocado looks. You can define, for example, cflags with three different values here, and you put this !mux flag here to tell the varianter that it has to actually create different combinations out of all of these parameters. So the first time your test is executed, it will have access to one combination of all those parameters; the second time, the test will have access to a different set of parameters out of this file. So in this example we have two times two times two times three different combinations.
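The multiplication just described can be sketched with itertools.product. The parameter names and values below are made up to mirror the 2 x 2 x 2 x 3 example from the talk; this is not the varianter's real data model, just the combinatorics:

```python
import itertools

# Hypothetical parameter axes: 2 * 2 * 2 * 3 values
params = {
    "cflags": ["-O2", "-O3"],
    "arch": ["x86_64", "aarch64"],
    "init": ["systemd", "sysvinit"],
    "fs": ["ext4", "xfs", "btrfs"],
}

# One variant per combination of one value picked from each axis
variants = [dict(zip(params, combo))
            for combo in itertools.product(*params.values())]

print(len(variants))   # 24
print(variants[0])     # first variant: one value from every axis
```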
So we have 24 different variants that will be used to feed your test, which will be executed 24 times. Here are just the first three variants created out of that file; there is not enough space to show them all, but you can see that in the first variant you have these four values for those keys, in the second variant only this one value changes here, and in the third variant we have the same systemd value as in the first one, but now with a different value for this other key, and so on. This is just to show that out of that file we create these variants, and any test given on the command line together with that YAML file will be executed 24 times, each time having access to a different set of parameters.

Okay, the next step for the job is the job data recorder. The goal of this feature in Avocado is to keep data, machine-readable data actually, that can be used by different features after the job finishes. Right now it is used only by the replay and diff features. Those are two features of Avocado that use this machine-readable job data to create something else, to provide something else to the users. For example, with diff you can compare two different jobs in several aspects. I have here an example of the diff feature being used: I am passing two job IDs. It could be the job results directories as well; you do not have to remember or keep the job IDs, you can just give the path to one job results directory and the path to another. It will compare several aspects of those two jobs using the data that was recorded by the job data recorder. This is just an example of two jobs with a different total time, different test results, different variants, and differences in the sysinfo information as well. We consider each of these parts a section, so you can show only one section or another, depending on the information you are looking for.

Okay, so after the job data recorder.
We have the test runner. At this point we already have the test suite from the resolver, and we already have the mux tree from the varianter, and we will merge them together and create the test runs, the tests themselves. The job will call make_test_runner, and the test runner will be selected; by default we use the local test runner, which means that your tests will be executed locally, on the same machine where the command line is being executed. But we also have remote, VM and Docker test runners, and those are the test runners that can execute your set of tests, your entire job, on a different host, not on the same host that you are using to type the command line.

The job will call the test runner: after selecting which test runner will be used, we just call the run_suite method, and run_suite will receive the test suite, as I said before, and the mux tree object, so the test runner can merge them together. The test runner will loop over the test suite, and for every element present in the test suite (remember the test suite, that big structure from before), for every element that we have there, we will loop over all the variants that we have in the multiplex tree, merge them together, and pass the result to a test class, so the test class can actually execute the test from there.

And this is what the test class gets from the test runner. You might notice that we have here some information coming from the test suite, and we also have the parameters coming from the specific variant that we are using at that point. The test class will receive this structure, and based on it, it will load the module, look for the test class, look for the test method name, and execute the test itself. We call all of this information here, together, the test factory.
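The loop just described, every suite entry combined with every variant, can be sketched like this; plain dicts stand in for avocado's real suite and variant objects, and all the names are illustrative:

```python
# Two test methods resolved from one module (the suite), and two
# variants from the mux tree.
suite = [
    {"module": "my_test_01", "klass": "MyTest", "method": "test_a"},
    {"module": "my_test_01", "klass": "MyTest", "method": "test_b"},
]
variants = [{"cflags": "-O2"}, {"cflags": "-O3"}]

# The runner pairs every suite entry with every variant; each pairing
# is one "test factory", i.e. one actual test execution.
factories = [dict(entry, params=variant)
             for entry in suite
             for variant in variants]

print(len(factories))          # 4 executions from 2 tests x 2 variants
print(factories[0]["params"])  # the variant merged into the first entry
```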
That is the test factory for an instrumented test class, and this is the test factory for a simple test. It is simpler than the previous one because it is not using the Avocado API. What we need here are the parameters, because even simple tests can have access to those parameters; we need the name, of course, and the base log directory, in the same way; and then only the simple test class, which has already been instantiated since the resolver phase. This instance of the simple test class will just execute the command itself: whatever command you defined as a simple test will just be executed, and that is it.

Making what I just said more explicit: for an instrumented test we will create a fork, so the tests are executed in a different process, not in the same process as the Avocado main process. This gives us isolated results for the test itself. At this point we will load the modules of the test that are specified by the test writer, and we will run the setUp, the test method, and then the tearDown. This is well-known behaviour, like unittest has, something like that. For the test writer, this is all you need to create; all of that other stuff happens internally, behind the scenes, in Avocado. If it is a simple test, not using the Avocado API, the simple test class will just call avocado.utils.process and use the run method to execute the command. In either case, it will be a different process.
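That isolation can be sketched with the standard library: run the setUp/test/tearDown phases in a forked child process and report the status back over a queue. This only illustrates the idea; avocado's actual runner also handles timeouts, logging, result directories and more, and the test class here is a dummy:

```python
import multiprocessing

class DummyTest:
    """Tiny stand-in test with the three well-known phases."""
    def setUp(self):
        self.value = 41
    def test(self):
        assert self.value + 1 == 42
    def tearDown(self):
        pass

def run_phases(queue):
    """Executed in the child process: a crash in the test cannot
    take the main process down with it."""
    status = "PASS"
    try:
        t = DummyTest()
        t.setUp()
        t.test()
        t.tearDown()
    except Exception:
        status = "FAIL"
    queue.put(status)

ctx = multiprocessing.get_context("fork")   # fork, as on Linux
queue = ctx.Queue()
child = ctx.Process(target=run_phases, args=(queue,))
child.start()
child.join()
result = queue.get()
print(result)
```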
It will not be executed in the same Avocado main process.

Okay, and how can test writers have access to the parameters that were given in the YAML files and came all the way through the stack until arriving at the test itself? If you are using instrumented tests, you can use the API, self.params.get(), telling it which key you are looking for, and this will return the value of that key as specified in your YAML file. If you are using simple tests, you can still have access to those variables, but in this case you have to use environment variables: for that process we will create environment variables with the same names as the parameters that were created in your YAML file. So even simple tests can have access to the parameters specified in the YAML file.

Okay, any questions so far? Okay, the question is: there is a remote execution in Avocado, and is that something like Ansible? No, it is different, very different actually. What we have right now is a remote test runner. If you specify on the Avocado command line that you want a remote execution, then instead of the local test runner we will call the remote test runner, and this remote runner will connect to a remote machine. The difference lies in the fact that you have to have Avocado installed on the remote machine, and you have to have your tests available on the remote machine. We will execute everything remotely, collect the results, merge them with the local job results, and show you the results. It is not the same shell; it is a remote execution. But the point here is that we trigger this remote process in the test runner selection phase, because you requested the remote execution on the command line; we identify that and we say, okay, so we have to use not the local test runner but the remote one, and that triggers the process.

Sorry? No, because you would be interrupting the job.
Yes, so you can write a test that waits for that, but Avocado itself will not handle that, because we are waiting for that test to finish, and if you reboot the machine the test will not pick up from where it stopped.

Yes? No, not right now. The question is: can we run tests in parallel? No, right now we have only sequential test execution. I will talk about that later, but we have some plans for this year to change some of the architecture of Avocado, some improvements that will eventually allow us to run tests in parallel; but right now they are sequential.

Yes? The question is whether the tests must be using the Avocado API, or whether they can be anything else, like a bash script. As I said before, all the tests that are not using the Avocado API we call simple tests, and they will just be executed as they are. We will not load the test class, analyze what is in there, or make the Avocado test API available to it; but still we will run it, collect the test exit code, see if it worked or not, and show the results in our result formats. So it is perfectly possible to execute anything from Avocado; you can execute it as if it were any binary. We will not understand any API used inside that test, but we can run it if it is executable. You can also specify an external runner, like /bin/bash, to execute your test with. Yeah, and if you have, let us say, something else that understands that test, you can use it as the external test runner. Thank you.

Okay, let us move on. But if you are using the test API, what do you get from it? If you are using the Avocado Python API to write your tests, what are the advantages?
Here are some examples of features from the test API. Apart from the params that are accessible from the test, you also have some logging facilities, capabilities to log what your test is doing as it moves along. You can even tweak the user interface to show different output based on the test that you are executing: you can replace the default user interface and show only the log messages specified inside your test, or you can merge them together and show both the user interface and the log messages that you are creating inside your test.

We have some special methods to tell Avocado that a test actually failed: inside your test function, inside your test method, you can call self.fail() if a given condition is reached, so Avocado will know from that point on that the test failed. It will not appear as passed; it will be reported as failed in every result format. The same goes for error, and you have self.skip(), so you can skip that test from being executed.

We have this utility here called fetch_asset, which helps you to get assets from the network, from the internet, from some URL, or from some local NFS share on your network. We even cache those assets, so you do not have to download them every time you execute your test; if you put an expiry time on an asset, we will re-download it once it has expired.

We have references to some important directories that are available in Avocado during the execution, and we also have the runner queue, so we can communicate all the way from the test directly to the test runner. Through the runner queue we can, for example, tell the test runner that after this test is executed, it should please execute, call, a given function.
That function can be defined somewhere else. So those are just some of the features that are available in the test API, and you can benefit from them if you use the API.

Okay, and how will the job, back there, know the results of the tests that are executed by the test runner? How do we propagate the results back up? If they are simple tests, it is all about receiving the exit code: if it is zero, then we consider that the test actually passed. If it is an instrumented test, then we check that there is no exception in the setup phase, no exception in the test method execution, and no exception in the teardown phase; then we consider that the test passed. This information, the final test status, is returned to the test runner. The test runner receives the test statuses and builds a set of the failures that happened in the tests during the execution, and we report back to the job only this summary of what happened in the tests; so the job actually gets a summary from the runner.

And this is how the summary looks for this job. For instance, we have here a pass, a failed test (/bin/false), and a sleep of ten minutes, and we have a job timeout of five seconds. These three tests up here will be executed within the five seconds, but this sleep-ten-minutes test will be interrupted, because it takes ten minutes and we only have five seconds. So these two entries here contribute the "fail" that the test runner reports to the job, and this one here, as it was the main thing responsible for the job being interrupted, will be reported as interrupted and will contribute this information here. And notice that we are using a set, which means that several tests failing in the same job will result in only this summary, only one "fail" flag for the job. Then, with this information, Avocado knows that it has to exit with exit code 9.
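The aggregation described here behaves like a set of failure kinds mapped onto a bit-field exit code. A sketch of that idea, with the flag values chosen to match the "failed plus interrupted equals 9" example from the talk (the names and the status strings are illustrative, not avocado's real constants):

```python
# Bit flags for the job exit code; 0x1 | 0x8 == 9, matching the talk.
EXIT_TESTS_FAILED    = 0x1
EXIT_JOB_INTERRUPTED = 0x8

# Statuses reported by the individual tests of one job.
statuses = ["PASS", "FAIL", "PASS", "INTERRUPTED"]

# The runner keeps a *set*, so several failing tests collapse into one
# "FAIL" entry in the summary handed back to the job.
summary = {s for s in statuses if s != "PASS"}

exit_code = 0
if "FAIL" in summary:
    exit_code |= EXIT_TESTS_FAILED
if "INTERRUPTED" in summary:
    exit_code |= EXIT_JOB_INTERRUPTED

print(sorted(summary))   # ['FAIL', 'INTERRUPTED']
print(exit_code)         # 9
```

Because the code is a bit field, a caller can test individual conditions with a bitwise AND instead of comparing against one opaque number.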
Exit code 9 means that we have some tests that failed and that we had a job interruption because of the job timeout. So our exit code is quite informative: you can get more information than just pass or fail out of the exit code.

Okay, and the final layer here is the results formats. We have a result class and we have a bunch of plugins, so you can create your results in different formats, depending on how you are going to integrate your job with your CI tools or anything else you want. Some of those plugins wait for the final job results to generate the report in their format and then provide you that report, because they cannot be incremental; they have to be created all at once. So you have to wait a little bit while the job executes, and then you receive the final results in that format. And depending on the plugin, like the TAP plugin, you can see the results coming from the job as the tests are running.

This is just a screenshot of an integration between Avocado and Jenkins. Here we are propagating the xUnit results from the job back to Jenkins, so Jenkins can show you all the tests that failed; you have the backtrace here, the details. We can see that 392 tests were executed, with five failures, and here are the details. And you look for these details in the Jenkins job itself; you do not have to go through the Avocado logs, because we are propagating the xUnit results back.

Okay, so what is next? We had some meetings already this year to define the future of Avocado, and we have a strong focus this year on the job API.
So you will be able to create your jobs not only from the command line; you will be able to create your jobs using a Python API, to define, to create, to load your test runner, load your resolvers and everything. This will unlock a lot of the power that we know Avocado already has inside, but that will only be exposed with this job API. There are also varianter enhancements, which is ongoing work, and also remote execution enhancements, actually something that comes up every time: multi-host test execution. This is under heavy work and review right now. Anyway, if there is something that interests you, you can go to our Trello board and vote for your features; you can also report issues or request things on GitHub.

We have this release management information: right now the current version is 45. We have tarballs and RPMs for Fedora and Enterprise Linux. We have a long-term supported version, which is 36.3, supported for 18 months, with a new LTS every year or so. We have a Fedora package, already in Fedora; you can install Avocado by running dnf install python2-avocado (the version there is 43), and on pip we always have the avocado-framework package with the very current version. We have a three-week release cycle, and we are three full-time engineers; that's right, two of them are right there. And we have contributions.
This is from the git log: we have contributions from all of those domains here already, and I would like to see more domains contributing to the project as well. And here is some community information. That is it; if you have questions, now is the time.

Okay, the question is whether we have any plans to run Avocado as a daemon remotely, so that you could trigger jobs remotely from that point. We never discussed that as a feature to be implemented; actually, I have never seen requests in that regard. But yeah, we would happily discuss it and consider whether it would be feasible and also useful according to the project's direction. But no, we do not have anything in that regard. Anything else?

Okay, the question is: I showed an example of a kernel build test, and do we have some library to use that in a virtual machine? No, we do not have such a thing. That library is from avocado.utils, and it is something to make it easier to download, configure and build your kernel without having to go outside your test. But other than that, after it builds, the goal would be to use the kernel to test something; we do not have more than that. After the kernel is ready and built, the library ends there. But I mean, you can always go there and make it better, right? Okay, thank you. Anything else?

The question is: does Avocado take care of the target environment preparation before running the test? No, it does not. Avocado is started in an environment that is expected to be ready for the Avocado execution already.
So you have to prepare the environment. Of course, as we are discussing the multi-host implementation, what exactly will be ready on a remote system to execute a test that was defined locally and sent over the network is under discussion right now. But what we have right now is: when you define the test locally and say that it will be executed remotely, we expect the remote host to be ready for that test execution.

Not remotely, no. We have a different module, a different runner class, called remote, and this runner class will connect to the remote host using SSH. It expects Avocado to be installed remotely, and it expects the test, the file, the module itself, to be available remotely as well; then the test can be executed remotely and we collect the results back. We have some Python dependencies, but other than that we use Fabric, so yeah, it is quite easy to get the system running.

Okay, that is it. Anything else? Okay, thank you, everyone.