Well, Jan and Petr... sorry, Jan Ščotka and Petr Hráček — we will present MetaTest Family. A short intro first: over the next, let's say, 10-15 minutes we will show you what it is about, and later on we will run a real test and show you how it works. What is it used for? MetaTest Family is used for testing modules, RPMs, containers, and multi-host setups. Basically, MTF is not a CI by itself — it is your job to set up your CI server and things like that. I will not talk much about the future; in the future there will be OpenShift support. MetaTest Family, MTF for short, is able to prepare the environment for your testing — for Docker, RPM, and nspawn — or you are responsible for preparing your environment and we will only run the tests. At the end of the presentation we will show you where you can download the config file and how to test. Well, if you would like to try testing your container, image, module, whatever you want, then we have a Copr repository; the installation steps are mentioned there. MTF is of course also packaged in Fedora, but that package is a bit outdated. And now, how to use MTF? Easily. First of all you prepare the environment: run the environment-setup command with MODULE=docker — MODULE says what kind of backend you would like to test against, and you will see these sections in the example config file. Once the environment is prepared, the tests can all run against the container. Then just run the tests — simple: MODULE is docker, rpm, or nspawn; MTF uses Avocado as the test runner; and you run all the Python files, which contain the actual tests. At the end, once it is done, you just clean your environment, and that's all. If you have already prepared your environment — you have your Docker container, module, whatever you want — and you would like to test only that, just run this one command. As easy as possible for you; don't worry about setup and all that. It's boring.
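The prepare/run/clean workflow just described boils down to three command lines. As a sketch — the command names `mtf-env-set` and `mtf-env-clean` and the `MODULE` variable follow the talk, but exact spelling and flags may differ between MTF versions:

```python
# Sketch of the MTF workflow described above, as the shell command lines
# you would run. Command names (mtf-env-set, mtf-env-clean) and the MODULE
# variable follow the talk; exact flags may differ between MTF versions.

def mtf_commands(module="docker", tests="./*.py"):
    """Return the three command lines for one MTF testing session."""
    env = f"MODULE={module}"
    return [
        f"{env} mtf-env-set",          # 1. prepare the environment (e.g. pull the image)
        f"{env} avocado run {tests}",  # 2. run every Python test file through Avocado
        f"{env} mtf-env-clean",        # 3. tear the environment down again
    ]

for cmd in mtf_commands():
    print(cmd)
```

Swapping `MODULE=docker` for `MODULE=rpm` or `MODULE=nspawn` is the only change needed to retarget the same tests at a different backend.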
Well, this should be a workshop, and we would like to give you a chance to test your own module, container, whatever you want. Why is it called MetaTest Family? I forgot to mention that at the beginning. Langdon mentioned that the original name was Modularity Testing Framework. It was fine, cool, but later on we decided we should have some nice name which does not belong directly to Modularity and everything around it — and the first letters should still be MTF. I am proud to have proposed MetaTest Family, and he selected that one. — I don't know, how did you come up with so many MTF expansions? — Yeah, we came up with maybe 20 variants. The main idea was to keep the same abbreviation as Modularity Testing Framework, because it began as a framework for modules, but it is not actually tied to testing modules — you can test normal artifacts too, not just modular ones. Well, this is important: we created an Etherpad for this workshop; you can see it here, hopefully — I will try to show it. First of all, if you want to test your container or module — like the Node.js one Tomas mentioned — install it via these commands and download the config.yaml file, which I will show you. It looks like this. Personally, I really hate this format, but: it references the modulemd file for the module, the structure of the document says which packages you need, and at the end there is the container which will be used for the testing — either on your local system or on Docker Hub. And as you can see, there are various module types. One is docker — that's why Petr showed MODULE=docker — and then you can use rpm or nspawn, which share one section in this file. A bigger config, the one used for our memcached module, is there too.
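To make the structure just described concrete, here is a minimal sketch of what such a config.yaml might look like. The field names and values below are illustrative assumptions patterned on the talk (a modulemd reference, required packages, a per-backend `module` section, and test commands), not a verbatim copy of the real memcached config:

```yaml
# Illustrative sketch only -- field names are assumptions based on the talk.
name: memcached
modulemd-url: https://example.org/memcached.yaml   # the modulemd file for this module
packages:
  rpms:
    - memcached              # what the tested artifact needs installed
module:
  docker:                    # used when MODULE=docker
    container: docker.io/some-namespace/memcached   # local image or Docker Hub
  rpm: {}                    # MODULE=rpm and MODULE=nspawn share this section
test:
  smoke:
    - 'memcached --help'     # commands to run inside the tested artifact
```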
It describes how to run the container — or, if you have a module, how to run the module — and everything around that. And at the end there are two possibilities for how to run tests. One: state directly in the YAML file what you want to test, either inside the container or from the host. The other possibility is to write your own Python files containing the tests. It looks like this — simple, just import the module testing framework and write whatever you want to test. Okay, that's all, and I will show you how it works on a real case. Well. It is good to mention that it's possible to use this framework for white-box testing and also for black-box testing, so it depends what you prefer. For example, the base-runtime module does not provide any functionality or services outside the module, so there it's only possible to do white-box testing. Vice versa, memcached provides a service on a port, so you can test it from outside and look at it as a black box. Okay. I've prepared a directory, flock-2017-memcached. Inside there is the config.yaml file, downloaded from the link mentioned in the Etherpad, and next to it a simple smoke test checking whether memcached works. Well, let's set up the environment via the command, with MODULE=docker. It will be running the container — actually, I see, it does not run the container itself; that is done during the testing, but it pulls our container from Docker Hub. Okay. Now let's start the testing itself: again, Avocado as the runner, and all Python files — nowadays only one. And if you don't know Avocado: it's a framework developed by Red Hat which should replace Autotest, which is old and very big, and it's based on unittest, so this test looks like a normal unit test — whatever you prefer. Well, you see that MTF tests memcached inside the container, the tests pass, and that's all.
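The Python test file just mentioned has roughly this shape. With MTF installed, the base class comes from the framework itself (`from moduleframework import module_framework`, subclassing its `AvocadoTest`); since that package is not assumed here, the sketch uses a tiny stand-in base so it runs on its own — the `start()`/`run()` call pattern is the part to take away, the rest is assumption:

```python
# Shape of an MTF test file. With MTF installed you would write instead:
#   from moduleframework import module_framework
#   class SmokeTest(module_framework.AvocadoTest): ...
# The stand-in base below only exists to make this sketch self-contained.
import subprocess

class AvocadoTest:                      # stand-in for MTF's real base class
    def start(self):
        pass                            # in MTF: start the container/module under test
    def run(self, cmd):
        # in MTF: run `cmd` INSIDE the tested artifact; here: run it locally
        return subprocess.run(cmd, shell=True, capture_output=True, text=True)

class SmokeTest(AvocadoTest):
    def test_service_answers(self):
        self.start()
        result = self.run("echo hello")  # stand-in for a real probe, e.g. of memcached
        assert result.returncode == 0
        assert result.stdout.strip() == "hello"

SmokeTest().test_service_answers()
```

In a real MTF run, Avocado discovers these test methods itself; you never instantiate the class by hand as the last line does here.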
And at the end, once the testing is done, call the clean command so that your environment is not littered. And that's all — it's your turn. — I tried it here; it kind of works, but there are two missing dependencies in the RPM package, a couple of Python packages, so I just wanted to report it. — Yeah. Yeah, sure, perfect. The last slide is there. I guess this was already reported — by Slavek Kabrda, I think — and in PyPI it is already fixed, but not yet in the RPM. But feel free to report it. That's all; that's really how it works, and now you can test your modules and containers. — I think it's also important to note that even though you have to run it through Avocado, it is actually testing-framework agnostic, which is part of the goal here: it just gives you a universal way to run the tests, and the framework on the inside can be whatever you want. It lets other people run your tests without having to know how. — We had several meetings with Stef Walter about CI testing, and MTF will be one part of this central CI. Two hours ago, I guess, there was an Ansible workshop where we tried to plug MTF into the CI via Ansible, and it was working. And in case you would like to do some non-trivial configuration: in our repository there is documentation, and the config file is commented with explanations, so it will tell you what these sections mean, and you can use it as a source of further information. Of course we are trying to support users who try out MTF: we have a Read the Docs site where all the steps are described — installation, how to use it, the whole API and so on. — So the tests are supposed to be in the same Git repository as the module, or could it be a separate repository? — Yeah, really good question. For modules, the tests should be inside the module's dist-git.
For containers, they should be in the docker namespace, separately, because you are testing something quite different; and in the pipeline there should be something like: once the module is finished, it runs the tests. This is currently covered by Taskotron, and it is working — once you build, we try to test. Of course, there is no gating rule yet: if the tests fail, it does not block the container build. But in the pipeline this should be covered, like Stef mentioned — it has to pass, this should not fail, and so on — and only once the module is built and tested properly can the container be built. Yeah. This has to be included in our framework, but that is a really long way off. You raised a good question, and it leads me to the main purpose of this project: to have one common way to test, and to separate the setup from the tests. Actually, these namespaces for tests are a little bit of a problem, because in the ideal case the tests would be identical — the same copy in the dist-git repo for the module and the same copy for the container. It's not ideal, but that is how it was decided. We were asking for a tests namespace in dist-git, but it was decided that it will not be there, so that's a bit of a pity from our point of view. — I think the general consensus was that people would rather keep the tests close to the thing being tested. — Yeah, it makes sense from that point of view. Ideally we can handle it with symlinks, or maybe an Ansible-based solution — for example, for a container we would say in an Ansible file: look for the tests in the module's directory in dist-git. So it can be solved that way. And if you want to see lots of examples of how to write tests, there is an examples directory in the project, and you can get inspired by that. — Like how to copy files into the... — Yeah, there are several examples: copying files, escaping, simple tests, how to skip tests.
You can show it. — Can I write the tests in any language? — Good question. No — nowadays we support only Python and Bash. — Okay, so it's not like it would execute everything in the directory? — It would be possible. The thing is that we set up the environment, and for example I wrote the helper for Bash that handles these aspects — how to get the artifact under test. So if you write something similar for another language, then yes; or it's also possible to reuse this Bash helper from your preferred language for that purpose. It's not clean, it's a little bit hacky, but it will work. If you write a test in Bash, you get only, let's say, one result per whole script — you are not able to track that there are several use cases or several tests, like in Python. This matters because we are talking with the SCL folks, who use Bash: there you have only two possibilities, fail or pass, while Python gives you several individual tests. We can show a Bash test — open the shell test in the examples directory and I'll describe it. The important part is that there is a special helper, the framework command wrapper, and you have to call its commands to set up the environment and start the server; then you can do whatever you want, and then you have to call teardown. It's basically a shim from Bash to the Python calls. — So the setup command is, like, download the image? — Yeah, or start the Docker daemon, or whatever is needed. It is mandatory to call this command, and then you can do whatever you want; and with the run command you execute a command inside the container. So it runs inside. But this Bash helper is a little bit tricky, because it's really just a wrapper around the Python side.
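The Bash flow just described — mandatory setup, commands run inside the container, mandatory teardown — looks roughly like this. The three function names below are illustrative stand-ins (the real helper ships with MTF and its names may differ), stubbed here so the sketch runs on its own:

```shell
#!/bin/sh
# Shape of an MTF Bash test. Function names are illustrative stand-ins for
# MTF's real Bash helper; stubbed so this sketch is self-contained.
mtf_setup()    { echo "setup: prepare environment, start the container"; }
mtf_run()      { echo "run inside container: $*"; }
mtf_teardown() { echo "teardown: stop and remove the container"; }

mtf_setup                      # mandatory first call
mtf_run memcached --help       # any number of probes in between...
mtf_teardown                   # ...and a mandatory teardown at the end
# Note: the whole script yields ONE pass/fail result, unlike Python,
# where each test method is reported separately.
```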
And in the near future we are planning to test container images based on modules in an OpenShift environment, because many folks said: okay, we have container one and container two, which should be connected, shared volumes, and we would like to test whether it works in OpenShift. Everybody is saying OpenShift, OpenShift, Docker, Docker — and MTF should go this way. In the near future we would like to support: okay, you already have an environment, let's just test it, because that is a totally different case. And I can show another reason why we decided to rename it to MetaTest Family: there are also examples for multi-operating-system testing, where you can schedule more machines and have them cooperate. I'm unable to scroll... So it's very simple: you just initialize these machines with some repos — there are various versions of Fedora there, I think Rawhide, 25 and 26. You have to define your own setup and teardown sections, and then you have one test which starts these three machines and runs just one command, but it could be something more complicated where they cooperate together somehow. And for the SCL guys we wrote an example for broker testing; those tests are not based on or intended for containers built from modules, but for plain containers — it's more or less the same. So there is some config, there are some simple tests, and you can run commands inside these containers. Or, as I've shown, across various versions of Fedora. In the previous example there is also an example of multi-host testing of Docker containers. It's similar: in the setup section you enable two instances, you initialize them, and then you just start the containers. And for that it's important to change the start action from what is in the config. So it's very simple: you just redefine the config for the Docker container and then call start for the second instance.
And you can actually see in this test that the MySQL command `select 1` works on the default port, 3306, and a second command tests that the second container works on port 3307. In this case it's important that if you define your own setUp, you have to call the parent setUp, and it's also important to call tearDown for these containers — because if you don't, the containers will not be killed. They keep running, and when you are debugging your tests there can end up being so many containers that it sometimes killed my computer when I forgot that. — Okay, can you scroll up? If I make a syntax error in the multiple-instance test, will it still call the tearDown? — I think so, but I'm not sure — it's not really down to our framework, it's down to Avocado, or maybe unittest. I'm not even sure the class will load if there is a syntax error. But if it runs, then yes, tearDown is called in every case, including a failed one. That's also why we had to add some clever logic to work around various issues — for example, when Docker is unable to stop a container, I try to terminate it and wait until it has really terminated, and similar. — And are the containers running across all the test cases? Like, the container is started once and all the test cases run in that container? — No, no, no. Good point. It follows the unittest world: setUp and tearDown are called before and after each test, so you get a clean environment every time. But in the documentation there are various environment variables, and it's possible to change this behavior — for example to reuse the old container. I don't recommend it, though, because then the environment is not clean, and that may cause side effects when your tests run in various suites and sequences.
But this is basically useful when, let's say, you have a LAMP stack with several combinations and you would like to test whether it works with httpd 2.4, 2.2, PostgreSQL — several combinations. That is the use case where you define one test case and run it in multiple scenarios. This was first requested by the SSSD folks — it means we are able to schedule a whole matrix and test whether everything works. And as you can imagine, you can combine RHEL, Fedora, whatever you want. It's also good to mention that the main purpose was to have the same tests for all these kinds of artifacts. But it's still possible, for example, to say a class should test only the containers: you can derive your test classes from ContainerAvocadoTest rather than AvocadoTest, and then if you select a different module type, the test will not run — it will just be skipped. — So how do you define that container one is MySQL, container two is httpd and so on? Because I saw that they are just docker one, docker two. — Sorry? — Yeah, but empty. — What you describe is a little bit tricky, but it's still possible. It's similar to... let me show it here. It's similar to this: I'm redefining the start action, but you can also redefine the container action, which determines what you are testing. If you redefine that, whatever you want must also be redefined in the setUp section, because the container name is used in that setUp function, so it has to be there. But it's possible. We know that somebody may want to do this kind of big orchestration, but our framework is not really ready for those purposes yet. But yeah, we will work on that when there is more time. — ... — No, no. Good question, but no. If somebody wants this functionality, we can add it as soon as possible, in case there is a big demand... I would prefer... no, never: send the pull request. — But that means never.
Not personally at you, of course. But as I said, it's possible — it's actually similar to this start action — but it's a hackish solution, so it's not the preferred one, to work around what is not implemented there. You are right, though, that some examples like these should be in our Git repo: how to set up multi-host testing, how to set up testing with several connected containers — like, there is httpd, there is PHP, and connect them. That would be nice. There is documentation, and we will extend our documentation. Okay. Any questions, or flame wars for tonight? — [inaudible question] — In the examples directory there are lots of examples for various services, so please try them and you will see what happens. And maybe we can also show you the CI in Travis — what we run after a pull request — and you can see there how it works, and it is really fast. Well, as you mentioned — this was your question — once the module is built, MTF automatically triggers on that build and runs the tests written by the developer. You can see, for example, there is some python3 bootstrap module — and what is tested? Nothing? Okay — no, no, there is the module linting. While it opens — we'll wait in parallel — the module linting is separated from the tests themselves, to make clear when, for example, a module does not have any tests of its own. This is a good point, because you can see that this is a test run for a module, and several tests are skipped, because those are Dockerfile linting, and of course that does not make sense here. Basically MTF runs all the checks — module linting, Dockerfile linting — and we select what is used for each artifact: module, OpenShift, whatever. We also added support for testing a Compose. And next there is RPM validation.
That was originally another project, I think written by Stephen Gallagher; we've taken it into our project and we run it as part of the module checks in Taskotron. It still has the old name. Actually, we have a bit of trouble with Taskotron, because there are several issues. The network there is slow, so sometimes it fails because it's unable to download packages from Koji, or sometimes it's unable to contact PDC to find module dependencies and it times out. For example, here you can see that for the bind module the command `microdnf install bind` fails for some reason — I think microdnf should not even be used there; DNF, I guess, right? But it fails. Yeah. Actually, there were some changes in Avocado and this log is not available anymore, but it should be — it worked well a few weeks ago; maybe they changed its location. Here's a real test: it checks whether the S2I script exists in the module, and there is the full output from Avocado — there were some changes. Basically it tests whether the S2I script exists in the module. And we can show you our automation. For Travis we had to add support for an Ubuntu-based distro, because Travis is Ubuntu-based. So, actually, it works, and Travis tests all our pull requests — at least the Docker section. If you change something in the nspawn or RPM sections, it will not be tested, because on Travis there is no systemd — it's Ubuntu Trusty — so we are not able to use nspawn containers there for nspawn testing, or RPMs for RPM testing, because there is no RPM or DNF. But we will work on full support for Debian-based distros for testing too. And here you can see what it looks like: it installs some dependencies — these dependencies are a little different than on Fedora, so they are listed explicitly there — then it does a make install to install the package from the Git repository, and then just the testing target for Travis.
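The Travis setup just described — install the Debian-side dependencies, `make install` from the Git checkout, then the test target — would look something like the sketch below; treat it as an illustration of the flow, not the project's actual .travis.yml:

```yaml
# Illustrative sketch of the Travis CI flow described above.
language: python
dist: trusty                  # Travis runs Ubuntu Trusty: no systemd, rpm, or dnf,
                              # so only the Docker-backed tests can run here
services:
  - docker
install:
  - sudo apt-get update       # Debian-side dependencies differ from Fedora's
  - sudo make install         # install MTF from the Git checkout
script:
  - make test                 # the testing target run on every pull request
```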
And these tests come from the examples — from a testing module that is meant to exercise lots of features, with examples of everything we've added. So there are actually 24 tests and everything passed — which is perfect. — Passed, or is it still ticking? — Yeah, it's still typing... trust me, I'm done. Okay, okay. So you can see that 28 tests passed, zero errors and failures, four are skipped, and two tests are cancelled. There are two different mechanisms for skipping or cancelling tests. If you decide to skip a whole test class, you have to skip it in the setUp function — and this is the one case when tearDown will not be called; that is how Avocado defines it. With cancel, you decide inside the test function, while your test is running, to cancel yourself, and those tests are marked as cancelled. Do you have any questions for us, or shall we discuss it over a beer? So, even if nobody has any questions — we still have 15 minutes. We can stop the recording and talk about the framework. — I don't know if there are any other frameworks like this; this is the only information I have about it, so I have no idea if it is good or bad — I just don't know what else can do this. — Maybe it is also a good point that it was my fault to use the word framework at first, because a lot of people hate the word framework — I don't know why, but maybe because a lot of people use it in various ways. From that point of view we renamed it to family, because it suggests you can write tests for the whole family of artifacts you would like to test. — I would still say framework for some time, because the first step will be to change SCL to use our framework: as I mentioned before, their tests finish with just pass or fail, with no finer-grained results, and we would like to improve that — this failed, this passed, and so on. That is our aim.
We are making progress, but those folks asked us: why should I rewrite our Bash scripts in Python? At first I thought, okay, I don't know — but after discussion with our team we decided: you currently have one script per SCL image, and we have several ways to track individual tests, so let's split it into more tests. We have to find more benefits we can offer them. But there is also a bigger idea, a step forward: why rewrite these tests at all? You can have multiple test suites written in various ways — that's exactly how it works in Cockpit, and it's why, for example, there are these Ansible standard test roles: you can schedule whatever you want, however you want, and not be dependent on one way of writing things. For example, it was my first experience when I came to the Cockpit team as a QA engineer: they have their test suite, written with lots of workarounds and hacks, and I wanted to rewrite it to use, for example, pure Selenium for the web testing. After I had rewritten two tests, it had already taken a very long time, because they use PhantomJS directly — a command-line, headless browser, so to speak. So I decided with Stef that it's a better way to have more test suites: there is their old one, and we've added a Selenium one, and it gives us more benefits — one part is tested by the PhantomJS browser, and in our suite there is Selenium, and we test with Chrome and Firefox; it's also possible to use Internet Explorer on Windows, I guess. MTF will be, let's say, solid once the container images are based on the modules and everything is integrated together in Factory 2.0 and the pipeline — this is the first step, I don't know if the first or second — and the next step is to integrate it, or test, in OpenShift. And we will write several blog posts on how to use it. Well, let's say we have five minutes left. I will take a lucky picture — no, no — maybe a selfie with us? Please come here, yeah, please, here, here, here...
And now that we have all the attendees: my short thank-you — thank you very much. We are looking forward to your issues and pull requests, and write your tests!