I'm pleased to introduce Rafael Martins from Red Hat.

Thank you. Hi, everyone. First of all, thanks for coming. I'm Rafael Martins, from Red Hat. I work on the integration team for oVirt, where we basically do all the releases and maintain all the installation scripts for oVirt. I have also been a Gentoo developer for more than six years, so most of my background is in distributing and packaging software. This talk is about showing you how we use some tools in oVirt to improve the way we test and release software.

Let's start by describing the problem we are actually trying to solve. Virtualization products are complex, in the sense that they always have lots of integrations: we need to integrate with storage, with networking, and so on. When you get a bug, or a new feature to implement, that requires you to rebuild the environment to test it, you need to reproduce that environment, and sometimes building it can take a really long time. Personally, I have already spent more than one day just reproducing the environment to fix a bug. Most of my work on this topic, and the plugin I wrote for Lago, came about because I was really tired of spending so much time reproducing bugs and wanted something automated. So the objective of this talk is to show how you can do the same for your project, if you find it viable, and to show how we do it for oVirt. I will use oVirt as the use case for the talk.

How many people in the room already use oVirt, or at least know what it is? oVirt is basically a virtualization management system. We have a web application that we call oVirt Engine where you can create data centers, clusters, virtual machines and so on, and it manages the VMs for you. It's based on KVM and libvirt, technologies that are well known in the open source and Linux community. And this is what the oVirt Engine looks like, for those that don't know it. This is a screenshot from one of our internal instances at Red Hat that we use for testing. It's not the biggest deployment we know of, but it's a fairly large deployment with three data centers. This is the main dashboard; we have options and screens to change just about everything on the machines.

Now let's talk about the solution we found to make testing, deployment, and patch verification easier: Lago, the Lago project. It's basically a test framework for virtualization that can build environments for you using libvirt and KVM, so you can build environments with as many machines as you want, and it lets you integrate those machines and do the tasks you need through plugins written in Python. It's highly extensible: almost all of the layers of Lago can be extended by Python plugins, and the core itself carries only the very basic functionality needed to work.

Now let's talk about how we use Lago in oVirt. All the code that talks to oVirt, and everything oVirt-related, lives in a separate plugin called oVirt Lago; Lago itself doesn't carry any code that talks to oVirt. In the same way that we have oVirt Lago for Lago, you can have a specific plugin for another product or project and use it with Lago. Lago is just responsible for orchestrating the machines and calling the plugins to do the job.
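Just to give a rough idea of what driving the Lago core looks like, here is a minimal sketch of the generic workflow, written from memory: the sub-commands are the basic Lago verbs, but the default LagoInitFile behaviour and the VM name "engine" are assumptions made for this example.

    # Minimal sketch of the generic Lago workflow (from memory; the init file
    # handling and the "engine" VM name are assumptions for this example).
    lago init           # read the declarative LagoInitFile describing the machines
    lago start          # create the networks and boot the VMs on libvirt/KVM
    lago status         # list the machines and their state
    lago shell engine   # open a shell on the VM named "engine"
    lago stop           # shut the environment down when you are done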
The first problem we solved with it was automated testing, because as release managers we need to be able to trust the software we are releasing. We need frequent tests to make sure that nothing big is broken, and one of our solutions for that is the automated test suite we have. It is run by Jenkins frequently, and with it we can make sure that the code is ready to release and that the main flows we support are not broken. For example, we have Lago creating one engine and two hosts, all virtualized by Lago using the oVirt Lago plugin: Lago on the outside just manages the VMs, and the oVirt Lago plugin creates everything that is needed. This is automated. We have scripts and repository descriptions to cache the RPM files: Lago can create a local repository that is used to install packages quickly, without downloading from the internet every time. With this we can run the end-to-end tests in something like 20 to 25 minutes, depending on what we are doing, with a really big test suite.

The good thing about system tests, and about having automated tests in general, is that you can quickly spot big breakages or big issues in your project. If you have a patch that really breaks the basic flows, it may still pass unit tests, but it won't pass the system tests, because they are actually creating the VMs, actually trying to add hosts, trying to add storage, so it won't pass. The suite is very well maintained; we have a lot of people writing tests for it and keeping an eye on it to make sure everything is fine. And a last point that is important for what I will tell you later: the virtual machines are left running after the tests finish, so after the test suite runs you can still log in to the machines and do something else, like run some commands, read some logs, or do whatever else you need to verify your patch.

This is a screenshot of our Jenkins instance. We have lots of jobs for all the supported oVirt versions. We can look at it and quickly see that, for example, on this day we had some issue on 3.6 that was breaking most of the builds, and we can look at the logs, see what's wrong, and talk with the person that sent the patch to fix it, which is what matters most.

Now I'll get to the real point of this talk, which is manual testing. Automated tests are good and necessary, but we also need manual tests, and we wanted to improve our workflow when testing patches. We have several jobs on Jenkins that can build custom RPMs from a Gerrit patch. We use Gerrit in oVirt, and you can easily build RPMs from any Gerrit patch and use those RPMs with the oVirt system tests to create an environment with the new functionality you implemented or the fix you are working on. An important point is that developers can do this on their own laptops. The process is a bit memory hungry, because we are running some heavy stuff, but it's definitely possible to run it on a laptop, build your RPMs, and just test them. And a lot of the time, for simpler patches, you don't even need any manual intervention: the automated tests may already cover the whole flow your patch affects, so sometimes you won't need to do any manual testing at all.
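To make that concrete, here is roughly what running the suite on your own machine and then poking at the leftover VMs looks like. The run_suite.sh script and the suite and directory names are from my memory of the ovirt-system-tests repository and may not match exactly.

    # Rough sketch, run from a checkout of the ovirt-system-tests repository
    # (script, suite, and directory names are assumptions and may differ).
    ./run_suite.sh basic-suite-master   # build the Lago environment and run the tests
    cd deployment-basic-suite-master    # the Lago working directory the suite created
    lago status                         # the engine and host VMs are still running
    lago shell engine                   # log in, read logs, run whatever you need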
But like anything in software, we had to accept some trade-offs to have good automated tests in the oVirt system tests. We had to make choices that sometimes, how can I say it, make manual testing a bit harder than developers would like. For example, when we use the oVirt system tests to test a custom patch, you always need to run the full test suite. If your patch is correct, that's fine: you run it once and you are done. But if your patch has issues, you need to iterate, and every time you change the patch you run the full test suite again until it works. Also, if the patch changes the behavior the tests rely on, for example if it adds a new installation option or changes the behavior of some part of the system, it will break the tests, and to be able to use the oVirt system tests you need to fix the tests too. This is not a bad thing, because it means the oVirt system tests keep working: the developer has to fix the tests, and effectively ends up writing tests. So it's not really a downside, but it costs time, because instead of just fixing your patch you still need to fix the tests as well. Still, it's a good thing for the project. And sometimes the environment deployed by the automated tests is not enough for your test: you may need more hosts, or more engines, and in that case the environment is not really what you wanted.

To solve part of this problem, I created a Lago plugin, the oVirt patch verifier. It is very simple: it just creates VMs for you based on what you define on the command line. You have a declarative way to say how you want your machines to be created, and it will create them, link them together, and get them ready for use. The good thing about it is that you don't need to run the full test suite, and you can have as many machines as you want. But, like the oVirt system tests, it also has some downsides, some trade-offs you have to accept. So it's your job to decide which of the tools is better for you, or to extend Lago, create your own plugin, and solve your own problem.

Here are some common examples. The first one is the deploy itself. We are creating one engine machine with 8 gigabytes of RAM and two hosts, and we are using a custom patch from Gerrit, built by Jenkins. This command will create three VMs, one for the oVirt Engine and two for oVirt hosts, using the RPMs from the build I mentioned. After that, I have to run the command to deploy the engine itself. That command runs engine-setup, the script that Mighty maintains, which actually deploys the engine, installs the RPMs, and makes sure that everything is running. Or, if the automated setup is not good enough for you, for example if your patch adds a new option to the install script, you can still log in to a shell on the machine and run engine-setup manually, with whatever options you need, and do any other manual testing you want too.
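Since the exact command line isn't reproduced here, take the following only as an illustrative sketch of the idea of declaring the machines and the custom RPM source on the command line: the sub-command and flag names are made up for the example and are not the real interface of the patch verifier plugin, and the engine-setup verb is from memory.

    # Illustrative only -- the sub-command and flags below are NOT the real
    # interface of the patch verifier plugin, just a sketch of the idea.
    lago ovirt-pv create \
        --engine memory=8192 \
        --host host0 --host host1 \
        --custom-repo "http://jenkins.ovirt.org/job/<your-build>/"  # RPMs built from the Gerrit patch
    lago ovirt engine-setup   # then run the automated engine-setup step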
I prepared a demo to show this process. I should say that the video is not complete here; I cut some parts of it to make it quicker to explain, but I will focus on how long the tasks took. Let me open it. This is the command that we will run. It will create three VMs: one is the oVirt Engine, which will be named engine2, and the other two are hosts, each with one gigabyte of RAM. This is just an example. Nobody would create a host with one gigabyte of RAM in real life, that would be crazy, but this is the environment we are creating to test. For example, if you just want to see whether the host is correctly attached to the engine, you don't need to create a big engine for that; this is more than enough. So it's creating the network, bootstrapping the VMs, and then we start deploying. Those steps are all based on scripts or Python code from Lago itself or from the oVirt Lago plugin; all the steps marked as doing something oVirt-related come from the oVirt Lago plugin, not from Lago itself.

Now it's actually deploying: it is running shell scripts that install RPMs, import things from the repository, and so on. This is one of the steps that takes the most time, because it downloads a lot from the internet. One of the trade-offs of the oVirt patch verifier plugin is that it's not really optimized for downloading: we can't maintain a fixed list of the RPM packages needed for testing because it changes a lot, so it usually downloads a lot of stuff that isn't needed. That's the price you pay for being automated. The deployment took almost five minutes, and now I actually run the engine-setup step. It also takes some time, because it actually installs the RPMs. This command uses a pre-created answer file that works fine for most cases, but sometimes it may not; if you have a custom answer file, which is the configuration file for engine-setup, you can pass it to this command and it will be used instead of the default one. So it ran engine-setup, which took three minutes and 38 seconds, and now it is adding the hosts to the engine automatically for you.

It's important to say that everything is based on templates. In this case, since I ran the default commands, it uses the el7 template, which is CentOS 7, for Lago. We also have templates for Fedora; they won't work with oVirt, but if you have some other project and you create a template for another distribution, it will work too, as long as your plugin is able to install packages for that distribution.

Now I open the interface to show that it works. Authenticating, and the engine is running. The two hosts I created are there, and it is currently installing them as expected. I have two here, but I could have as many as I wanted without any issue. Doing this manually to test a patch takes a lot of time, so this really saves a lot of time for us: I got from zero to the system running in less than 10 minutes, which is really good progress.

As with everything in software, and I already started commenting on this before, having it fully automated makes it hard to cache. Also, the plugin's engine-setup step can't set up more than one engine automatically; but if you create more than one engine from the command line and do the setup manually, running engine-setup on both engines, it is possible and it will work. You just lose the bonus of having the hosts configured automatically when you have more than one engine, but you can still do it.
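For that manual path, assuming the engine VM is named engine in your environment and that you keep your own answer file on it (both assumptions for this example), the flow is roughly the following; --config-append is the standard engine-setup way to feed it a pre-filled answer file.

    # Manual engine-setup, e.g. to test a new install option
    # ("engine" and the answer file path are assumptions for the example).
    lago shell engine
    # ...then, inside the engine VM:
    engine-setup --config-append=/root/my-answers.conf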
An important point of this talk is to show that this is not just for oVirt. We want other people to use it, and the tool was designed to be as agnostic as possible from the start; that's why there is no oVirt-related code in the core of Lago. So it can be used for several purposes, like testing other virtualization managers, testing appliances, or doing end-to-end testing of software that runs as a virtual machine. All of that is possible with Lago by writing plugins.

Another topic: we frequently get questions about why you should use Lago instead of some other project that is already popular in the community. The most common question is about Vagrant. When we started Lago, Vagrant wasn't in the good state it is in now, so it was not really an option at that point. But personally, I see Vagrant more as a tool for developing code than a tool for testing code: if I have a website and I need Postgres and MongoDB and things like that, I have a Vagrantfile that builds that environment for me so I can run my website without configuring everything by hand. I don't really see it as a tool for building virtualized environments with lots of machines integrating with each other, although it's possible to do.

The other tool we frequently get asked about is Avocado. But Avocado is more targeted at testing the hypervisor itself; it sits at a different level than Lago. It is mostly used to test KVM and QEMU, while Vagrant and Lago are more about testing the full environment. Still, if you want, you can write a Lago plugin that uses Avocado as the test runner instead of the default one, and use Avocado to run your tests. The same is true for Vagrant: it is possible to write a Lago plugin that uses Vagrant to provision the VMs instead of libvirt. So if you really want to use those tools, it's possible to integrate them and benefit from the best of both worlds.

The other option is LAVA, Linaro's validation framework. It is very similar to Lago, but it is targeted at testing on real hardware, like ARM and Intel devices: it runs the tests on the real device, not in virtual machines. As far as I know it's possible to use virtual machines as the target device for LAVA, but that's not really common in its community.

That's it. Before I finish the talk, I have a few comments. For those of you who don't know Lago or oVirt, we have a stand here at the conference with some demos and things, and you can grab some gadgets. Also, I was talking with some developers and some of them are using Gentoo; Gentoo also has a stand here, so you can come by that booth if you want. You can visit both, and if you want to talk with me later, I should be around one of those stands. Also, the slides are on the website of the conference. And that's it. If you have any questions, feel free to ask.