Hello everyone, I am Parth Dhanjal, I work with Red Hat India, and today I am going to talk about oVirt system tests using Lago. Before beginning, I would just like to cover what Lago is, after which I will explain a little about Red Hat Hyperconverged Infrastructure, on top of which we run the oVirt system tests, and how we recreate the hyperconverged infrastructure using Lago. So what is Lago? Lago is an ad hoc virtualization framework which can help you build virtualized environments on your laptop or your server. This can be for any use case, and it uses libvirt to create VMs. You can use this framework to create the setup you need very easily, be it for deployment, development, automated testing, regression testing, or CI pipelines. We can divide testing into three broad categories: unit tests, which isolate the smallest piece of functionality to test; functional tests, which test a specific component of the system; and system tests, which, as the name implies, test the whole system end to end, from deployment all the way to checking whether we are getting the expected results or not. Lago helps us create environments on top of which we can run unit and functional tests; it is essentially a system-testing tool on which we can also run unit and functional tests. Setting up a Lago environment is rather simple. You define your requirements through a Lago init file, which is written in YAML format. The init file contains two sections, domains and nets. You define the virtual machines or hosts you wish to replicate under domains, and the network topology under nets. When you add a NIC to a VM, the network it attaches to must be defined under the nets part of the YAML file, and all the virtual machines need to be defined under domains. We can define a Lago init file as per our requirement and simply run Lago to create that environment.
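As a rough illustration, a minimal Lago init file might look like the sketch below. The field names follow the LagoInitFile format as I recall it, and the template and network names are made up for the example, so treat this as a sketch rather than a reference:

```yaml
domains:
  engine:
    memory: 4096
    nics:
      - net: lago-net             # every NIC must reference a net defined below
    disks:
      - template_name: el7-base   # hypothetical base image template
        type: template
        name: root
        dev: sda
        format: qcow2
  host-0:
    memory: 2048
    nics:
      - net: lago-net
    disks:
      - template_name: el7-base
        type: template
        name: root
        dev: sda
        format: qcow2

nets:
  lago-net:
    type: nat                     # network topology shared by the VMs above
```

Running Lago against such a file builds the whole environment in one go.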
The environment is saved on disk as a qcow2 image, and you can start, stop, pause, or destroy these images as per your need or once your testing is completed. Now, coming to Red Hat Hyperconverged Infrastructure, or RHHI for short. An example of system tests on Lago is perfectly encompassed by the oVirt system tests for RHHI. An RHHI setup requires a minimum of three hosts with nested-virtualization capabilities, so that it can run VMs on top of those three hosts. Now, to test this setup, rather than using three different physical machines, we first use Lago to recreate the required hosts, and on top of those three VMs we recreate the virtual machines which would have run on the hosts; instead, they run within another virtual machine. We can simply run this from a laptop or a small server, which ensures that we do not spend a lot of resources on testing and instead keep our valuable resources available for development. So an RHHI setup basically needs three VMs, and on one of those VMs we have a hosted-engine (oVirt engine) VM against which we can run tests. We generally have Gluster storage at the bottom, with oVirt on top. After setting up a mock RHHI environment, we can run tests on the recreated systems. oVirt system tests is a framework written in Python which helps automate these tests and cover multiple cases. It uses the Python SDK provided by oVirt to automate the test cases and complete them with minimum hardware requirements. The tests are broadly divided into two categories: the bootstrap tests, which are the tests required during setup or which form the baseline, and on top of those the functional tests, which verify the expected results once a setup is completed. We can also automate this whole process using oVirt system tests; we can just define a bash file, which here is represented by control.sh.
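OST runs the bootstrap tests first and the functional tests on top of them, stopping at the first failure and reporting the exact error. That flow can be sketched in plain Python; this is only an illustration of the idea, and the function names are hypothetical, not the actual OST code:

```python
import traceback

def run_phase(name, tests):
    """Run one test phase in order; stop at the first failure."""
    for test in tests:
        try:
            test()
        except Exception:
            # report which test failed and the exact error it hit
            return f"{name}: {test.__name__} FAILED\n{traceback.format_exc()}"
    return None

def run_suite(bootstrap_tests, functional_tests):
    """Bootstrap tests establish the baseline; functional tests run on top."""
    for phase, tests in (("bootstrap", bootstrap_tests),
                         ("functional", functional_tests)):
        error = run_phase(phase, tests)
        if error:
            return error
    return "all tests passed"

# hypothetical placeholder checks, standing in for real oVirt SDK calls
def hosts_are_up():
    pass

def add_blank_vm():
    pass

print(run_suite([hosts_are_up], [add_blank_vm]))  # prints: all tests passed
```

The key design point mirrored here is that a failing bootstrap test aborts the run before any functional test is attempted, since the baseline environment cannot be trusted.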
Within this control.sh we can also define a reposync file, which ensures that you download only the needed packages and save space by excluding the packages which are not required, so you do not waste bandwidth downloading packages every time you test. The reposync file downloads the packages and stores them on your local system or server, so that you do not have to download them again and again, but rather only once before starting the tests; in case you happen to delete them, they will simply be downloaded again. After downloading the repos to your local system and deploying the VMs, you can copy the repos to the VMs. As discussed earlier, the Lago init file is used to define your VMs and network topology. Then, coming to how RHHI specifically is set up: first we create three VMs, and after that we set up passwordless SSH, because we use Ansible in the background to automate the RHHI deployment, so we need a passwordless SSH setup between all three hosts and the engine. Then, using Ansible, we deploy the setup on top of these three VMs: first it creates the Gluster storage setup, then it deploys the oVirt hosted engine, and then it adds the hosted-engine VM to one of the host VMs. Once the setup is complete and the pre-checks are done, it is followed by the bootstrap tests, which are run by oVirt system tests, or OST for short. Once the bootstrap tests are completed, it runs the functionality tests, and once those are done it gives us an all-tests-pass sign, or else it tells us where a test is failing and exactly what error it is facing. We can also use Jenkins CI to pipeline these tests so they run automatically at a regular interval or every time a patch is pushed. So we have our Jenkins setup for oVirt. This covers multiple projects, not just RHHI; it covers RHV testing, hosted-engine testing, and other components as well.
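As a sketch of what such a Jenkins job can look like, here is a minimal declarative pipeline with a daily cron trigger, a custom-repo parameter, and an email on every result. The schedule, names, and addresses are all hypothetical, not the actual oVirt CI configuration:

```groovy
pipeline {
    agent any
    triggers {
        cron('H 2 * * *')   // run once a day at a fixed time
    }
    parameters {
        // custom repos or a Gerrit patch to test against
        string(name: 'CUSTOM_REPOS', defaultValue: '',
               description: 'Extra repos for this run')
    }
    stages {
        stage('OST') {
            steps {
                sh './run_suite.sh hc-basic-suite-master'
            }
        }
    }
    post {
        always {
            // notify everybody whether the run fails or passes
            mail to: 'team@example.org',
                 subject: "OST ${currentBuild.currentResult}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```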
So here you can see at the bottom that the last four runs had failed, and we were trying to debug those failures; before that it was passing regularly. Here we can also see that we have multiple parameters: we can build with custom repos, or we can simply rebuild the last run. And if you look at the times, you can see that it is triggered every day at a particular time, and whenever a run completes it triggers an email to everybody, whether it fails or passes. As I was mentioning, Jenkins runs can be done with custom parameters as well, in case you wish to test a particular patch on an updated custom repo: you can directly mention your Gerrit patch over here, provide the right path, and it will test that patch. You can create custom repos and add them over here as well. So thank you so much for your time and patience. In case you wish to read more about Lago or OST, you can follow these two links. If you wish to contribute towards oVirt system tests, you can continue to this link, or this link, which will basically guide you on how to get started with oVirt. You can check out the oVirt dashboard over here, and you can read about Red Hat Hyperconverged Infrastructure over here. So now I'm going to show a small demo of how to deploy OST using Lago, specifically how to deploy the hyperconverged infrastructure using Lago.
So, I am on a server on which I have installed the required packages for OST, and right now I am starting a mock session so that it is easier to deploy inside mock; you do not have to do this, but if you feel like it you can use a mock session. Right now, as you can see, mock is installing EPEL 7, and it is going to take some time, so let's skip ahead to once it is done, and I will show you how to deploy OST in the simplest way. So now, as you can see, mock is complete and running. We'll just go to the directory where we have our tests and run the command run_suite.sh followed by the suite we wish to run. As I am planning to run hc-basic-suite-master, I am going to deploy that and simply press enter, and the shell script will do its work. First it will check whether an environment already exists; if there is a pre-existing environment, it will ask you to clean it, and if not, it will look for the repos which have been downloaded locally. Here you can see the Lago init file is available, and in its description you can see how we have described three different machines. After that you can see that it is downloading the required packages, and now it is creating disks; for each VM you can see what has passed, and here we can see that the engine VM is a success, so it is now just deploying all the VMs. This will again take a little while; in case it is the first run, it might take even longer, since it has to download the required repos, store them on your local system first, and then copy them to the VMs after they are created. Even though I already have the repos, it will still take time to spin up the VM images, so I will stop the recording here for a while, start it again once my environment is deployed, and show you in case something important comes up in between. Now we will do a quick code walkthrough. You can get the project from gerrit.ovirt.org; among the projects, you can look for
ovirt-system-tests, and you can clone the project from there. After cloning the project, I will particularly look at hc-basic-suite-master; in case you feel like going through something else, the other suites are based on the same concept. So first, here is the most important file, which is control.sh. As you can see, it first copies the config file, then copies the repo file into the VMs, and then deploys our environment on top of that. Under here you can see that it is copying from your local run directory to the VMs' /tmp, and from here you can again see that it takes the reposync file from your local system and copies it into the VM's /etc/yum.repos.d. Now it sets up passwordless SSH, which was the initial step required to give Ansible root access; then it runs the automated deployment through Ansible itself, and after that it starts running the test suite we have. To run that we need to install pip, libguestfs, and a few other things, which are mentioned in the extra RPMs list; from there we just start running the tests listed in one of the files, which I will show you. It goes through each test; in case any test fails, the run stops there and reports that it has failed. Under execute_playbook we have defined how we are going to automate the Ansible deployment using a shell script, which provides the path under which the hyperconverged Ansible deployment files are available, and we have provided a path to the playbook; we have saved a copy of the playbook in this folder, which we copy to the VMs before execution. And you can see this is the host inventory file, where host0 and other variables are replaced according to the deployment. This is the Lago init file, in which you can see that first we have defined the domains, and under domains the first is the engine, and under the engine we have
defined a NIC, where we have provided the network interface for it. Then we have defined three hosts, as that is what we require for our deployment, and for those we have likewise defined a network. Then there is a file required by the hosted-engine deployment. This is the repo file which we save: we download these repos locally on our system and then copy them over; these are the various repo files we need. And this is the setup required before the Ansible run, so that there are no interruptions: here we are installing the basic packages needed on the host, which are the gluster-ansible roles, the oVirt hosted-engine setup, the oVirt Ansible repositories, and the engine setup. You can enable them through these repos: you can include them in the repo file and then install them from here. Here we have the templates available: since we are running hc-basic-suite-master on top of 8.2, we use the 8.2 base. Similarly, you can have a look at the other suites; there is a basic suite available for running just the engine, and an he-basic suite is also available. Now, the major tests for the hyperconverged basic suite are divided into two parts. One is the bootstrap tests: these are the tests done to check that the deployment has been successful and there are no errors in it, so we perform basic functionality tests first. Then we have the basic sanity tests, which perform other tests, day-two tests as we like to call them: once your deployment is successful, we have to create new VMs, add new networks, whatever is required to maintain the setup. These tests are defined in these two files, which are basically Python files that use Python libraries and the oVirt SDK API to run the tests. As you can see from some of the test examples here, we check that the hosts are available, and we check that the hosts are in the data
center, and then we try to restart the engine and wait for it to come back. Next we try to install cockpit-ovirt, which is basically a UI plugin, try to add a storage domain, and then try to add different kinds of storage domains here. All of these tests are run over here at the end through the test list; there is a particular instance in the control file which runs these tests. As you can see, we are collecting the test list, picking the tests from each suite's test scenarios, and running each test; once a test passes, we move forward. So these are the tests which we perform first, followed by these tests over here. A few of the tests in the basic sanity suite are: adding a new blank VM, adding disks to a VM, adding a direct LUN, taking snapshots, adding VM templates, running VMs, migrating VMs, live-merging snapshots, and so on. Other than that, we have a vars file which basically defines how many hosts we need. If you look at our Lago file, we do not define each host separately, as the hosts are similar and the only difference between them is the host name and the IP assigned to them; so we have just defined a small variable over here which counts our hosts, and in case we ever wish to do a six-node deployment, we can change this and do a six-node deployment as well. So now that my environment is deployed, I'll just quickly run through everything that has happened. As you can see, this is the Ansible log; since we are not discussing that, I will not deep dive into it, and maybe just show one part or two. I think we last left off at bootstrapping the three VMs; then we initialize: we have saved the nets, we have saved the VMs, we have saved the environment, and now we are initializing the same. We have started the VMs over here, and then we have copied the files; as you can see, we are copying the vars file, we are copying the repos
over here to all three VMs. Now it outputs the Lago init file again, and from here it starts the deployment process. Since the deployment is for RHHI, we have added the public SSH key to the one host from which we are deploying, so that there are no Ansible issues. And now you can see the start of the Ansible log, where we first go through some pre-flight checks: whether there is enough space on /var/log, whether there is space on /var, and past a certain size we check the block size; there are many such checks. Then we deploy Gluster, and once Gluster is successfully deployed, after a bunch of steps, we go on and deploy the hosted engine. The hosted engine and Gluster use two different Ansible roles: Gluster is deployed using gluster-ansible, and the oVirt engine is deployed using ovirt-ansible. Now these are the oVirt engine Ansible processes; I'll skip through them and go directly to the success of our deployment here. After successfully creating the deployment, we do some testing to ensure that the deployed environment was created properly and there are no problems with it. We run the bootstrap tests first; as you can see, all the tests have passed successfully, and then we collect the details and related logs in a file. Then we run the basic sanity tests, after which we again collect the logs, and once all the tests have passed we get a success message; in case something fails, we get a traceback for it, and we also get the related logs. We can always check our deployment using virsh list: you can see that we have three VMs running, and there is another VM inside one of these VMs, so we are using nested virtualization. Let's try to log into a VM and check a few things; we'll just see whether Gluster is deployed and how the hosted engine is running. We use virsh console; the password and login are defined in the files.
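To recap, the manual check I just did on the host boils down to a couple of libvirt commands (the domain name here is illustrative; the actual names come from the Lago environment):

```shell
virsh list            # the three Lago host VMs; the hosted engine runs nested inside one of them
virsh console host-0  # attach to a host's serial console; credentials are defined in the suite files
```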
Let's quickly run gluster volume status: you can see that there are three hosts running three different volumes, all the volumes are up, and the hosts are connected. And let's take a hosted-engine VM status: now we can see that the VM is up and running on host 0 right now. So we can also interact with the setup; it is not that we have to run the tests automatically, we can do the steps manually as well. In case you want the web UI, if you are running it on your local machine you can access it on port 8443, and in case you are running on a remote machine, like I am, you can always SSH-tunnel into it. So here is the command to tunnel: ssh forwarding local port 8443, followed by the IP address of your engine VM, or wherever your UI is deployed, and the port on which it is deployed; for me the port was 443, so I have mentioned it here, along with the user name and the machine on which it is deployed. Once this is successfully connected, you can log on to the machine using this particular URL, which is https://localhost:8443. So thank you for your time and thank you for listening to my presentation; I hope I was able to give you some insight into Lago and OST. If you have any queries, please do get in touch with me, and again, thanks a lot for your time. So thank you very much, Parth. Would you like to join us for some questions and answers? Thank you for an excellent presentation. I don't hear any audio from you, Parth; I think you're muted. Excellent. So, does anyone from the audience have any questions? David made a couple of comments, so thank you for all the links, Parth; that will be very helpful to those in the audience, and I'm going to place a link to your video in the chat as well. Did you want to make any comments? In case anybody found it confusing at any point, I think it was a little confusing because my demo was not very well rehearsed; I sort of did it in a hurry. So in case somebody
wishes to get a deep dive, they can contact me or find my email ID, which I'll just paste in the chat, and reach out to me. And in case you'd like to contribute towards oVirt system tests, or contribute towards the cockpit-ovirt UI as well, in case somebody is interested in contributing towards either of those, reach out to me and I can help you get started. Yeah, and thank you for your contact information; all that information is great, and it will definitely help people get going. And oVirt obviously is a very important project, so there's a ton of folks that are depending on oVirt and the KVM back end, and it's extremely important to the open-source community in general. One of the most interesting things about oVirt and KVM is that you do something a lot of the other virtualizers can't, which is cross-processor virtualization: so when you're prototyping new solutions like RISC-V or Arm, whatever you're doing, you can virtualize those very effectively on x86 if that's what you have. So yeah, I'm a big fan of oVirt, as I'm sure you can tell. Yeah, but I don't have much experience with oVirt myself. I see. Excellent, well, thank you very much then; we certainly appreciate your time, and that was an excellent presentation. Thank you.