So, hi everyone, thanks for coming. I'm Rafael, a software engineer at Red Hat. I work here in the Brno office, and I'm a member of the oVirt integration team; among other things, we handle the oVirt releases, the oVirt installers and so on. The main goal of this talk is to show how we use Lago in our project and how it can be used by other virtualization projects to improve development and testing.

I'll start by describing the problem. The main problem is that virtualization products are usually big. They typically have lots of integrations with other components: different storage strategies, different authentication backends, networking and so on. When you need to test a patch or track down a bug, it's often hard to reproduce the environment you need to reproduce the bug or to verify that the patch is correct. Lots of people like to use VM snapshots for this, but sometimes a bug is too specific to a particular setup, and it's not possible to just rely on snapshots. And setup takes time: to get oVirt running from scratch, you need to set up lots of VMs and integrate all of them together. You usually spend a lot of time doing that, so it's something that should be automated.

Beyond that, the release manager also needs automated tests to make sure the code is ready to be released. In oVirt, for example, we do at least one release per week, so having automated tests to make sure that nothing got broken in the meantime is a necessity; running manual tests every week is not really feasible. We need to make sure that at least the basics work before we do a new release, even if it's just a release candidate rather than a stable release.

I'll use oVirt as the use case for Lago, since that's the main topic of the presentation. How many people here know oVirt, or RHV, or something similar? Nice, lots of people. Basically, oVirt is an application to manage multiple virtual data centers using remote network storage like NFS, iSCSI and Gluster. It's powered by KVM, the virtualization technology most people use on Linux, so it should not be anything new. It also supports several authentication backends: LDAP, internal authentication, integration with Active Directory and others.

This is a screenshot of the dashboard of one of our internal oVirt instances. You can see it's a fairly large deployment; not huge, but not a trivial one either: three data centers, lots of memory, lots of VMs and hosts. This is how the oVirt engine looks in the browser.

So, the solution we found for improving testing and development is the Lago project. Lago is a virtualization framework that helps you create environments and virtual machines, so you can automate the deployment that I said at the start was hard to do. It's based on libvirt and KVM, and the good thing is that Lago is not really tied to oVirt. I'll talk more about that, but it has a Python API for plugins, and most of the functionality is implemented as plugins; even the libvirt support for creating VMs is a plugin in Lago.
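To give an idea of what that looks like in practice, here is a minimal sketch of driving Lago from Python, close to the example in Lago's own documentation, assuming a Lago version that ships the Python SDK (lago.sdk); the file path and the VM name are assumptions:

```python
# Minimal sketch: bring up an environment described by a LagoInitFile and
# run a command inside one of its VMs. Assumes a Lago version with the
# Python SDK; 'LagoInitFile' and the VM name 'vm-01' are placeholders.
from lago import sdk

env = sdk.init(config='LagoInitFile',      # declarative YAML env definition
               workdir='/tmp/lago-example')
env.start()                                # boots the VMs via the libvirt plugin

res = env.get_vms()['vm-01'].ssh(['hostname'])
print(res.out)

env.destroy()                              # tears the whole environment down
```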
Since even the provider is a plugin, you could potentially write a plugin to start VMs using some other backend.

So this is how we use Lago for oVirt. We implemented the oVirt Lago plugin, which does all the integration with oVirt services and starts the oVirt-related pieces; Lago itself just manages the virtual machines and makes sure everything works, and all the oVirt-related work is done by the oVirt Lago plugin. For example, the plugin can pre-fetch RPMs into a cache for the VMs, and it creates a local repository that the VMs install from, which speeds up installation. We also have test libraries to help with engine-related tasks, like calling the engine API and collecting logs; I'll show a rough sketch of that kind of API call in a moment. So we can say that Lago is totally isolated from the oVirt Lago plugin, and you could take Lago and make it support any other product you want without too much difficulty.

To solve the problem of automated testing, we have oVirt System Tests, which is basically a big test suite covering the main flows we have in oVirt. It runs in Jenkins and does the basic setups: creating an engine and hosts and integrating them, adding storage, configuring the network, creating VMs, and making sure it all works, with Lago managing all of that and the environment under test. It's very good because it gives us really quick visibility of breakages: if some patch breaks the integration of the system, we will quickly know about it through Jenkins. And because the visibility is so high, most people look at it and keep improving the test cases; we quickly realized it was worth keeping the suite as up to date as possible. On the other hand, we can't properly cover most corner cases in this suite: it's a big product with lots of moving parts, and it would not be possible to cover everything. Also, the virtual machines are not destroyed automatically after the run, which is useful for manual testing; I'll come back to that.

This is our Jenkins instance. I took the screenshot on a day when it was not looking great, but it's a good example. We have tests for most of our supported versions, 3.6, 4.0 and 4.1; we are not really doing 3.6 releases at this point, but it's still tested. And it's easy to see if something is broken, when it broke and for how many days, and to go through the logs and see what the issue is.

Now about manual testing. As I said before, oVirt System Tests won't destroy the VMs after the execution. So if you want to test something and the oVirt System Tests environment is good enough for you, you can just run the test suite, and when it's done, access the VMs and do your job there without any issues. Our Jenkins instance can also build RPMs from Gerrit patches (we use Gerrit to manage our development), and you can ask oVirt System Tests to build the VMs using those custom RPMs. This way you can test your patch without having to set everything up by hand. Developers can also build the RPMs on their local machines, but it's easier to use Jenkins, since it's all integrated already and you don't need to set up a local environment to build the RPMs.
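Earlier I mentioned the test libraries that help with engine-related tasks; to make that concrete, here is a rough sketch of the kind of engine API call they wrap, using the standalone ovirt-engine-sdk4 Python bindings directly (the URL and credentials are placeholders, and this is not the actual test-library code):

```python
# Rough sketch of the kind of engine API call the test libraries wrap,
# using ovirt-engine-sdk4 directly; URL and credentials are placeholders.
import ovirtsdk4 as sdk
from ovirtsdk4 import types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder URL
    username='admin@internal',
    password='password',   # placeholder
    insecure=True,         # the test engine uses a self-signed certificate
)

# A typical assertion: every host attached by the suite came up properly.
hosts = connection.system_service().hosts_service().list()
assert all(host.status == types.HostStatus.UP for host in hosts)

connection.close()
```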
But there are downsides to using the system tests for this: you need to run the whole suite every time, and that takes time, because the test suite is extensive. And if the patch you are working on changes behavior (for example, if I write a patch for the setup script that adds a new option to the flow), it will break the tests, and I will need to go there and update them. That's not really a bad thing; it's actually a good thing, because it forces developers who are making behavior changes to look at the system tests and update them beforehand. Using the system tests this way is a real benefit, because it helps keep the tests updated. But if you need to do several iterations while testing, running the whole suite every time takes a lot of time, so it's not always handy.

So I'll give this as an example of how you can use Lago to integrate with things by writing plugins. To solve the problem of having to run the tests every time, we created a plugin that just creates the environment without running the tests, so you get the environment faster. Also, in the system tests we always create the same VMs: most of the tests create one engine and two hosts, or one engine and one host. Sometimes you need more hosts, or a more specific setup, and that's not possible with the system tests without editing JSON and so on. So we created a command-line tool that gives you a declarative way to say you want, for example, two engines and two hosts, with a given amount of memory. It's all based on Lago: it uses the oVirt Lago plugin to interact with oVirt (it could use some other plugin to interact with other services, if one existed) and the Lago API to create everything. It also has the benefit that it gives you an environment that is ready to use: it will create NFS storage if you want, and it will attach the hosts to the engine. So you can build any kind of test automation you want on top of Lago.

Here are some usage examples. The first one is a deploy: it creates an engine and two hosts using RPMs that were generated by Jenkins. We have a very simple syntax to define the VM properties, like the name and the memory; there are other parameters too, for example to use Fedora instead of CentOS as the distribution. It's very flexible, and you can create as many hosts as you want and they will all be attached. We have another command, engine-setup, which will automatically configure the engine and connect the hosts to it; but sometimes you don't want that, sometimes you want to configure everything manually. In that case, you just connect to the engine machine using Lago, run engine-setup by hand, and do whatever you want. At that point you have everything at hand, so it's very easy to configure things manually.

I have a small demo of this working. This is the command that we run: it will create an engine with the default parameters, which is four gigabytes of memory, and two hosts with one gigabyte of memory each. I cut some parts of the demo to make it faster, but it's creating the network, bootstrapping the VMs and so on: it creates the VMs, starts them, creates the network, gets everything ready, and then deploys the engine and the two hosts I asked for. Those steps, the deploy steps for the environment, are all defined in the oVirt Lago plugin; they are not defined in Lago itself, so you could have the exact same kind of tasks for, say, OpenStack or something else. So now everything is deployed.
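Conceptually, what that deploy command does is expand a tiny declarative spec into a full Lago environment. A purely hypothetical sketch of that expansion follows; the spec builder, names and defaults are all illustrative, and the real tool also wires in the Jenkins-built RPM repositories and runs the deploy steps:

```python
# Purely hypothetical sketch: expand "N engines, M hosts" into a LagoInitFile
# and bring it up. The builder, names and defaults are illustrative; the real
# tool also configures the custom RPM repositories and runs the deploy steps.
import os
import tempfile

import yaml
from lago import sdk

TEMPLATE = 'el7.4-base'  # assumption: must exist in your template repository

def domain(memory):
    """One VM definition: memory, a NIC on the shared net, a root disk."""
    return {
        'memory': memory,
        'nics': [{'net': 'mgmt'}],
        'disks': [{'template_name': TEMPLATE, 'type': 'template',
                   'name': 'root', 'dev': 'sda', 'format': 'qcow2'}],
    }

def build_spec(engines=1, hosts=2, engine_memory=4096, host_memory=1024):
    domains = {'engine-%d' % i: domain(engine_memory) for i in range(engines)}
    domains.update(('host-%d' % i, domain(host_memory)) for i in range(hosts))
    return {'domains': domains, 'nets': {'mgmt': {'type': 'nat'}}}

workdir = tempfile.mkdtemp(prefix='ovirt-env-')
init_file = os.path.join(workdir, 'LagoInitFile')
with open(init_file, 'w') as f:
    yaml.safe_dump(build_spec(), f)

env = sdk.init(config=init_file, workdir=os.path.join(workdir, '.lago'))
env.start()  # after this, engine-setup and host deployment would follow
```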
The next step is to set up the engine. This is again done by the oVirt Lago plugin, but it can be extended. It copies the answer file and runs engine-setup; after that it configures the hosts, meaning it adds them to the engine, and then it adds the storage to the engine as well. The engine setup took about three minutes, and, I forgot to show it before, the deploy of the VMs took about five minutes. After that, the engine is reachable in the browser right away; the certificate is self-signed, so it needs to be accepted. So after roughly ten minutes you have the engine running with your custom patches and the hosts being installed, exactly as you asked for, and you are ready to test your patch and see if everything is okay. Speaking as someone who works on the setup scripts, I have sometimes spent a full day just trying to build an environment to test something, so this is really handy.

But there are some downsides to this approach. To make things fully automated, we don't do what oVirt System Tests does by default, which is caching the RPMs from the repositories and building a local repository for the VMs to use; partly because we sometimes want different distributions, testing with Fedora and CentOS at the same time. The thing is, in oVirt System Tests there is a big, manually written definition of the repositories, so we can, for example, avoid fetching and caching all of EPEL, which saves a lot of space. When everything is automated, we just download the release RPM from the mirrors and install from there; we can't benefit from that curated cache, because it needs manual maintenance. So we have the downside that it always downloads everything. I think this can be improved, but it's already much better than what we had in the past, so at least for me it's not a big concern.

Also, right now we can't deploy more than one engine, at least if you want to use the automated engine setup, simply because the tool needs to know which engine to attach each host to. If you don't use the automated setup and configure things manually, you can deploy as many engines as you want. Another thing to note is that the oVirt Lago plugin knows whether a VM is an engine or a host. We implemented it this way because it's important for the tests to know which is which; Lago itself doesn't know that. For Lago they are all just VMs, but for the oVirt Lago plugin they are different and handled differently.

As I've been saying throughout this talk, Lago is not tied to any product, not even oVirt, so you can use it for whatever you want: to test virtualization managers similar to oVirt, to test appliances, or even if you just want to build a VM environment for some work that's not related to a virtualization product at all, but where you need lots of VMs on the same network, easily. That last case can even be done without plugins.
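For that plugin-less case, the whole environment is one declarative file. A rough sketch of a LagoInitFile describing two plain VMs on a shared NAT network (the template name is an assumption; it has to exist in your template repository):

```yaml
# Rough LagoInitFile sketch: two plain VMs on one shared NAT network.
domains:
  vm-01:
    memory: 2048
    nics:
      - net: net-01
    disks:
      - template_name: el7.4-base   # assumption: depends on your template repo
        type: template
        name: root
        dev: sda
        format: qcow2
  vm-02:
    memory: 2048
    nics:
      - net: net-01
    disks:
      - template_name: el7.4-base
        type: template
        name: root
        dev: sda
        format: qcow2
nets:
  net-01:
    type: nat
```

With a file like that in the current directory, `lago init`, `lago start` and `lago shell vm-01` are enough to get working, networked VMs.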
Also, there's a question that comes up frequently: why should you use Lago instead of some other well-known solutions on the market? The first and most obvious comparison is Vagrant, which is a very popular tool. Most people like it because it's really easy: you keep VM definitions in a Git repository, call vagrant up, and it works. But the way I see it, Vagrant is targeted more at development: you need a VM to help you write your code, not really to run your code. I think it's a different phase of the process; I see it being used more to create the development environment than the testing environment, I would say. But it's still a good alternative, and it would actually be possible to write a Lago plugin that uses Vagrant as the provider of VMs. I don't know if someone is working on that; I heard about it some time ago, but I'm not sure. Since you can change the provider, it would be possible to generate the definitions and just build the VMs with Vagrant.

Some people ask about Avocado. One of the Avocado folks is here; he gave a talk about it yesterday. Well, it's a different thing: Avocado is more of a test runner; it doesn't really build the environment. In Lago we have our own test runner, based on Python unit tests, but it would also be possible to write a Lago plugin that runs tests using Avocado instead of our internal test runner. Actually, to be more specific, the test runner we use today is not even part of Lago: it's implemented in the oVirt Lago plugin. So basically you are free to do whatever you want; if you were implementing a plugin for OpenStack, for example, you could just use Avocado for testing if that's what you wanted.
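To make that "swap the backend" idea concrete, here is a purely illustrative skeleton of a Vagrant-backed provider plugin. The import path, base class, method names and entry-point group are all assumptions about Lago's plugin API, so check the real interfaces in the Lago source before writing one:

```python
# Purely illustrative skeleton of a Lago VM provider backed by Vagrant
# instead of libvirt. The import path, base class, method names and
# entry-point group are assumptions about Lago's plugin API.
import subprocess

from lago.plugins.vm import VMProviderPlugin  # assumed import path


class VagrantProvider(VMProviderPlugin):
    """Starts and stops Lago-defined VMs through the Vagrant CLI."""

    def start(self, *args, **kwargs):
        # A real implementation would first render the Lago domain spec
        # into a Vagrantfile inside the workdir.
        subprocess.check_call(['vagrant', 'up'])

    def stop(self, *args, **kwargs):
        subprocess.check_call(['vagrant', 'halt'])


# Registration would go through setuptools entry points, e.g. in setup.py:
#   entry_points={
#       'lago.plugins.vm': ['vagrant = myplugin:VagrantProvider'],
#   }
# where both the group name and the module path are assumptions.
```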
As a last example, there is LAVA. I don't know how many people know it; I worked with it in the past. It's from Linaro: the Linaro Automated Validation Architecture. It's actually very similar to Lago, but it's targeted at running tests on real devices, ARM or Intel boards. It was not really designed to run tests on virtual machines; the design and the architecture are quite similar to Lago's, but it's for real hardware rather than VMs.

That's it. If you have any questions, or if you want to point out something I got wrong, now is your opportunity.

Out of the box, no. It's possible to do, but I think it needs some code; not really hard, but it needs some code. Anything else?

The question was whether there are plans to support other distributions in Lago, beyond the Red Hat-supported ones. Yes: there is work to port it to Debian, and I'm a Gentoo developer, so I'm writing an overlay to run Lago on Gentoo, because I want to use it there; that's outside of work, it's not really part of my job. Lago itself can run on almost any system that has libvirt, as it is now. You may hit some issues if you have something other than SELinux; if you use something like AppArmor, you may need some manual configuration, but it is possible to run it. The thing is that most of the code we have today deploys systems on CentOS and Fedora, RPM-based distributions, basically. So if you want to do some testing using debs, for example, you would need to write a plugin for that, but it's totally possible; we are not tied to Fedora, CentOS or RHEL.

More questions? He asked about nested virtualization: we use it to run VMs inside the VMs that Lago creates, something like an inception, so how do we make sure this does not affect our test results? Basically, we don't have a way to make sure of that; I'm not aware of anything in that direction. But because of hardware restrictions and so on, almost all developers of oVirt and related services are using nested virtualization, so I would say that running oVirt with nested virtualization is, at least for developers, a more common case than running on bare metal. I don't think there's even a way to detect, from oVirt, that we are running on nested virtualization and not on bare metal. So no, we don't have a way to detect that, but I don't think it affects the results in the end.

More questions? Then thank you. Here you have my contact details; feel free to reach out. I forgot to put it on the slide, but we also have a Lago channel on Freenode, called #lago; you can just go there and talk to us. Thanks.