Hello and welcome to my talk, Know Your QEMU and KVM Test Frameworks. My name is Thomas Huth, I am working for Red Hat, and I have been part of the QEMU/KVM community for a couple of years now. Since I am also involved in some of the various test efforts here, and since the KVM virtualization stack is a rather huge and complex stack, I thought it might be a good idea to give a quick overview of the various available test frameworks in this area, for newcomers but also for established developers who are not aware of these frameworks yet.

In this short talk I will focus on the test frameworks for KVM and QEMU only, that means the KVM selftests, the kvm-unit-tests, and the test suites that are part of the QEMU repository. There are of course more test suites out there that address additional needs, like the tests from libvirt or Avocado-VT, but due to time constraints I am focusing only on the first three here.

Let's get started with the lowest layer first, the KVM selftests. The KVM selftests are a framework written in C, designed for testing the KVM part in the kernel directly at the ioctl API level. The framework is part of the Linux git repository and provides some library functions for directly creating virtual machines and virtual CPUs in a test program. That means there is no typical user space application like QEMU involved here. Another very interesting thing to know about the KVM selftests is that the framework also contains an ELF loader which allows you to load the binary code of the host test program into the guest, too, so that both the test on the host and the code that runs within the guest can reside in one source file.

Let's have a closer look at the KVM selftests now. Assuming that you already have a checked-out Linux kernel git repository, you can find the KVM selftests in the folder tools/testing/selftests/kvm. To compile the selftests, simply run make there. To run a test, you can simply run the test executable directly.
For example, let's run the dirty_log_test. The files here in the main directory are the files that are common between all architectures. There are of course also some tests that are specific to the various architectures. For example, in the x86 folder you can find the tests for the x86 architecture.

Now let's have a look here at one of the rather simple tests, for example the cr4_cpuid_sync_test. If you scroll down here a little bit, you can see a function called guest_code. This is indeed the code that is run in the guest. It modifies a register and then calls a macro called GUEST_SYNC, which transfers the control flow back to the hypervisor. The main function, on the other hand, runs on the host instead. If you scroll down here a little bit, you can see some interesting parts. It creates a virtual machine using this guest_code function pointer, and then, in a while loop, it runs the guest code. This vcpu_run function only returns once the guest hits this macro that transfers the control flow back to the hypervisor. Then, when the guest has finished, you can test some conditions with some test macros and check whether the hypercall, called a "ucall" in this test framework, returned the right values. At the end, you just clean up and return.

The next framework that I'd like to talk about are the kvm-unit-tests. This test framework is almost as old as KVM itself, as it has been used for testing the very first implementations of KVM already. The tests, which are written in C, are mainly very low-level CPU and device tests compiled into standalone binaries, or you could also say mini-kernels, which then get run via the -kernel option of QEMU. To ease the development of the tests, the framework provides a simple libc and some other library functions for the test kernels. If you want to have a closer look, you can clone the repository from GitLab or visit the wiki page on linux-kvm.org for some more information.
To use the kvm-unit-tests, you basically just have to clone the repository, run the configure script, and then run make. If you have a cross-compiler installed, you can also compile the tests for other architectures like Arm or s390x by passing the --arch option and some other options to the configure script. To run the tests, you can use the provided run_tests.sh script, which takes the test descriptions from a file called unittests.cfg. Alternatively, you can also run the tests manually by directly starting QEMU like this. This runs the sieve test, and if you want to make sure that the tests can terminate properly at the end, you should also specify the isa-debug-exit device on the command line.

Let's see these tests in action now. To use the kvm-unit-tests, run configure first, then you can compile them with make. This creates the binaries in the corresponding folder of the architecture. To run the tests, simply use the run_tests.sh script. Since this takes a while, I'm interrupting it here. You can also run single tests by specifying their name. To have a look at the available test definitions, have a look at the unittests.cfg file. There's a description of the options at the beginning, and then you can see the single test definitions. This is especially useful if you want to run one of the tests directly with QEMU. For example, I already prepared a command line here for running the xsave.flat file in QEMU. That way you can see the serial console output of the test, which reports PASS or FAIL for each single subtest in the file.

Let's have a look at that file now to see what the source code looks like. So there's some inline assembly magic here, of course, and here you can see what generated the serial output from the test run before. You can see that you can use the typical libc functions like printf, and there are also some other library functions like this report function, which is used to print these nice PASS or FAIL lines.
The entry point to the test can be found at the very end. It's a simple main function. So, all in all, writing a KVM unit test is very similar to writing a normal C program.

The next set of tests that I'd like to talk about can be found in the QEMU repository itself. The QEMU repository contains multiple different test frameworks that target different areas. You can get some help on how to run the different test suites by running make check-help. The easiest way to run most of them in one go is to simply run make check. Now let's have a closer look at some of these frameworks.

The first set of QEMU tests that I'd like to mention are the unit tests. These reside in the tests/unit folder and are the typical unit tests that you might know from other projects already. That means they are written in the same language as the QEMU binary itself, that means C, and the test code is linked with certain code from the QEMU binary to exercise it. The main target of these tests are the library functions from the util folder in the source repository, but there are also some unit tests which test other parts of QEMU. You can run these tests with make check-unit.

Let's have a closer look at these unit tests now. To run all the unit tests, simply type make check-unit. Since this takes a while, I'm interrupting it here now, too. You can also run single unit tests by simply executing the binary directly. For example, the UUID test can be run like this. Now let's have a look at how that works in detail. For example, the UUID code that is linked into the QEMU binary can be found in this file here, and you can see these are typical library functions for generating a UUID or comparing a UUID with the null UUID. The test is exactly exercising these functions. So if you now have a look at the test source code, which can be found in the tests/unit/test-uuid.c file, you can see here that there are some predefined UUIDs, and, for example, here's the test that checks whether a UUID is null.
So we define some UUIDs, and then it simply runs the library functions and asserts that they return the right values. The main function at the end is the entry point here, too. And as you can see, this test uses the g_test functions from the GLib framework. Most of the other unit tests work similarly.

The second set of QEMU tests that I'd like to talk about are the so-called iotests. These reside in the tests/qemu-iotests folder and are a mixed bag of various bash and Python scripts that are used to test the block layer of QEMU. You can run a subset of the tests with make check-block, but for running all of them with more fine-grained control, you should go to the tests/qemu-iotests directory and run the check script there directly.

Let's try that out. To run the iotests, you best go to the iotests directory where QEMU has been built. Here you can find the check script, and if you run it with -h, you get a nice help text about the available options. For example, if I want to run the tests with the raw image format, I can do so by running it with -raw. Since this takes a while, I'm interrupting it here now. Let's have a look at the sources. Most of the iotests are still named by a number only. Some newer tests have real names, but the majority is still just a number. So, for example, if you have a look at test 051, you can see that this is a simple bash script. If you scroll down here a little bit, you can see that this test runs QEMU in various ways, and at the end, the output is captured and compared to a reference output in the .out file. If the output of the test run does not match this reference output, then the test fails. Some other tests, like test 310, for example, are written in Python instead.

Another major test framework in the QEMU repository is the so-called qtest framework. The related files can be found in the tests/qtest folder.
This framework, which is also written in C, can be used to exercise devices in QEMU directly. To do this, the tests start a QEMU binary with the so-called qtest accelerator, which replaces the CPU of the emulated system with an interface for the test binary. That way, the tests can read and write guest memory, trigger interrupts, advance the clock of the emulated system, and do some other tricks that would not be possible otherwise. To visualize this, here's a picture of how QEMU is normally run. The QEMU binary emulates the guest environment with the CPU, the memory, and the devices. Now, if a qtest runs, it replaces the CPU with this qtest accelerator that also provides an interface for the external qtest program. That way, the test code can read and write memory or trigger I/O actions in the device, just like the normal CPU of the emulated system would do it.

So how does this look in action? To run the qtests, simply run make check-qtest in the folder where QEMU has been built. Since this takes a while, I'm interrupting it here now again. You can also run individual tests, but in that case, you have to set the QTEST_QEMU_BINARY environment variable first to point to the corresponding QEMU binary, in this case the x86 binary. And I want to run the q35 test here.

Let's have a look at the corresponding source code, which can be found in tests/qtest. As you can see here, it's also based on the GLib testing framework. You have the g_test_init function here and the g_test_run at the end, but the tests themselves are added with a qtest_add wrapper function. A qtest subtest basically looks like this: you start with qtest_init in this case, and this starts a QEMU with the -M q35 machine. Then you can do various things like accessing the memory, or also more complex stuff like PCI config space accesses and the like. At the end, qtest_quit shuts the QEMU instance down again.
The last framework that I want to present here today are the avocado-based tests in the QEMU repository. These can be found in the tests/acceptance folder. Note that the name "acceptance" is a little bit unfortunate here, and there have been discussions already about whether the folder should be renamed, but up to QEMU 6.1 this has not been done yet. The tests here are written in Python and use the so-called Avocado test framework for various actions. The Python scripts can be used to run QEMU in many ways. For example, some of them download a Linux kernel or another test kernel from the Internet and then check whether it can be started successfully. There are even some tests that take a snapshot of the emulated graphical display and then use OCR or some other image processing tricks to look for the expected output on the screen. The tests can be run with make check-acceptance. Since the tests require a working Internet connection to download the test kernels, this test suite is not part of the common make check step, so please make sure to run it separately if you'd like to use it.

Now let's have a look at some of the avocado-based tests. The avocado-based QEMU tests, which by the way should not be confused with Avocado-VT or other avocado-based test suites, can be run with make check-acceptance on the command line. This will download the images from the Internet first, and as you might guess, this can take quite a while. So I'm interrupting it here now, and we are having a look at the sources instead. A nice example are the boot_linux_console tests. These download a kernel from the Internet. So, for example, here in the x86 test, it downloads a Linux kernel from the Internet, sets up the command line and other stuff, launches the VM, and then simply waits for a pattern on the serial console to check whether the kernel has been started correctly. Quite simple, isn't it? A more complex example is the machine_m68k_nextcube test.
That test, for example, uses the Tesseract OCR tool to analyze the frame buffer for strings, so you can also check whether the output on the video screen is the right one or not. And since we are in Python, you can use all kinds of fancy Python libraries to check your test runs. So this is quite a powerful framework for testing complex scenarios.

Besides all the frameworks that I have presented so far, there are some more test suites in the QEMU repository that I'd like to at least mention here. There are benchmark tests for checking the performance of certain functions, there are some tests for checking the QAPI schema of QEMU, and there are tests for verifying the CPU emulation parts of QEMU, like the softfloat tests, the TCG tests, and the decodetree tests. Additionally, as I already mentioned earlier, there are more external test suites out there, like the Avocado-VT tests, but these certainly deserve their own presentation, and for some of them, presentations by other people are already available.

As for my own presentation, I've got to stop here now. I hope I was able to give you a good overview of at least some of the basic KVM and QEMU test frameworks and that you enjoyed my talk. Please let me know in case you've got any questions. Thank you for listening, and enjoy the rest of the conference.