Hello, I'm Drew Jones, and this presentation is about testing targets other than KVM with kvm-unit-tests. To broaden the testing domain further, we also present building the unit tests as EFI apps. The presentation is organized as follows: a quick introduction to kvm-unit-tests, followed by a quick status of the non-KVM targets we can already test. Then we present the motivation, current status, and implementation of building unit tests as EFI apps. Finally, we wrap up with a summary of the main points.

So what is kvm-unit-tests? Well, as the name suggests, it's a test framework for testing KVM. How does it do that? We run tiny guest operating systems which, when they generate traps or exits to KVM, let us test for specific behaviors. What does a unit test look like to a test developer? It looks quite familiar: like a typical C program that starts in main. The API, in many cases, is also familiar, such as when we mimic kernel function names, irq_enable() for example. We also have some libc functions implemented in order to make the framework easy to adopt. One thing to keep in mind is that we are not in user mode when we enter this main; we are in kernel mode. If you want to learn more about kvm-unit-tests, there's a write-up on the KVM web pages, linked below. And immediately following this presentation, Eric Auger will present on kvm-unit-tests as well as another KVM test framework.

This diagram is just to make sure we're clear on where kvm-unit-tests runs: exactly the same place as a normal KVM VM.

So what's the current status of testing non-KVM targets? Well, because kvm-unit-tests uses QEMU for its KVM userspace, other QEMU accelerators are a natural target. TCG has probably always worked to some degree, and it has been not only another test target but a common way to develop tests cross-architecture before moving them to KVM to see if they work on KVM and real hardware as well.
Also, Hypervisor.framework and the Windows Hypervisor Platform are supported for testing. Beyond QEMU accelerators, other hypervisors can already be tested, for example the z/VM and LPAR hypervisors on s390x. If you want to learn more about those, you can check the KVM Forum presentation from last year on that topic. Finally, just over a year ago, VMware contributors posted over 50 patches to kvm-unit-tests in order to enable the tests to run on bare metal and on VMware. The approach used there is to launch the test from GRUB. They also use a lesser-known feature of kvm-unit-tests: environment variables. As stated before, a kvm-unit-test looks just like a userspace C program with a main, where you have command-line arguments you can parse, and it also has an environment. So you can use getenv() to check for environment variables, which is a nice way to configure tests for different environments without having to recompile them.

So what's the motivation for building these unit tests as EFI apps? Well, as we want to expand the domain of targets we can test on, choosing something like an EFI app makes sense: the very reason EFI exists is to help operating systems and second-stage boot loaders be portable, so it can also make kvm-unit-tests portable — they are tiny operating systems, after all. Also, similar to the GRUB approach, we can use environment variables where necessary to avoid having to recompile the tests for different targets, since EFI also supports environment variables. It's a relatively easy target to choose as well, letting us bring up a new target quickly. And we get a couple of other benefits from this particular choice. For example, similar to GRUB, being able to launch the test from firmware or from the boot loader lets us remove a large amount of the stack needed when we want to develop or test the tests themselves.
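The environment-variable idea described above can be sketched as follows. This is a minimal illustration, and the variable name NR_CPUS_EXPECTED is made up for the example; it is not a variable kvm-unit-tests actually defines:

```c
/* Sketch: configure a test through its environment instead of
 * recompiling it.  NR_CPUS_EXPECTED is a hypothetical variable name,
 * used only for illustration. */
#include <stdio.h>
#include <stdlib.h>

/* Return the expected CPU count, falling back to a default when the
 * environment variable is unset. */
static int expected_cpus(void)
{
    const char *val = getenv("NR_CPUS_EXPECTED");
    return val ? atoi(val) : 1;   /* default: a single CPU */
}
```

The same test binary can then be pointed at different environments (QEMU, GRUB, bare metal) just by changing the variable, with no rebuild.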
So when emulators are used instead of hardware, which is sometimes the case when developing kvm-unit-tests — actual tests for KVM — we should be able to make the test-development cycle quicker, because we won't need to boot an entire Linux operating system and then start up the KVM userspace; we'll go straight from firmware to test. And if you're thinking that maybe not all emulators or models will support EDK2 in order to do that, have no fear: U-Boot also supports launching EFI apps. Also, perhaps a lesser benefit, since I'm sure UEFI has several unit-testing frameworks already and probably doesn't need kvm-unit-tests, but we can now also test these EFI implementations with kvm-unit-tests if we can run on them. So it's yet another target.

OK, here's our diagram from before. On the left, it's basically the same; it just shows that in order to run the unit test in the typical virt stack, we now also need to add the virtual machine firmware. For AArch64 we refer to that as AAVMF, and for x86 we refer to it as OVMF. Both of them exist and are supported and developed, so we can do that. Alternatively, we can cut out the entire virt stack and go straight from the hardware or emulator into firmware which supports EFI, and then straight into the test.

So what's the current status of this EFI app building? Well, nothing is merged; nothing is even posted yet. But I do have a proof-of-concept patch set available in my GitHub repository, on an EFI target branch. You can compile unit tests as EFI apps by simply enabling the EFI target configure switch and running make. Then, when you move these EFI apps to the FAT filesystem — the EFI filesystem for the target, for example QEMU with AAVMF — you can launch them. Currently the patch series is only for AArch64, but I intend to expand it to x86 as well in the near future.
Anyway, they work great on QEMU, but that's not super exciting, because we could already run the tests on QEMU without needing firmware at all. So the work in progress is to get them to also run directly on bare metal, and I've already started that work, testing with an AMD Seattle. And as I said, x86 is in the queue. Naturally I'll start with OVMF over QEMU as the first target, but a quick second stop will be OVMF over VirtualBox, because VirtualBox also supports OVMF.

OK, so the rest of the talk is about the implementation — some of the details. This is an outline of the remaining slides. First, we'll talk about what needs to be added to the framework in order to support building as EFI apps. Then, what needs to be removed — actually, nothing is really removed, but some things are compiled in different ways or bypassed. And then some other changes that are needed in order to start supporting multiple targets.

So what do we add? Well, the main thing we add is a dependency on GNU-EFI, which is an EFI development environment that uses the GNU toolchain. Linking with GNU-EFI to create EFI apps this way is a bit of an odd build process: you start by compiling and linking the app as a shared library, then objcopy select sections to create the EFI binary. One thing to know about GNU-EFI apps is that they all start in an efi_main function, as opposed to main, which you need to write yourself for your app. In our case, for kvm-unit-tests, we would like to have just a single efi_main implementation that works for all architectures and for all tests — that's a goal. Also, the point of efi_main for our purposes is to do the startup setup before launching the unit test, which is the main function. In other words, efi_main will do test preparation, test startup, and then call main.
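The efi_main-then-main flow described above can be sketched roughly like this. The types and names here are simplified stand-ins, not the real GNU-EFI signatures (which take an EFI_HANDLE and an EFI_SYSTEM_TABLE pointer), and test_main stands in for the test's normal main:

```c
/* Sketch of a single efi_main wrapper: prepare, set up, then hand
 * control to the test's entry point.  struct efi_boot_info and all
 * function names here are simplified stand-ins for illustration. */
#include <stdio.h>

/* Hypothetical per-run state gathered from firmware before the test. */
struct efi_boot_info {
    int argc;
    char **argv;
};

/* Stand-in for the test's normal entry point (main in the real tree);
 * unchanged across targets. */
static int test_main(int argc, char **argv)
{
    printf("running test with %d arg(s)\n", argc);
    return 0; /* test passed */
}

/* Stand-in for the shared setup path both targets converge on: would
 * parse boot info, set up memory, devices, environment, etc. */
static void setup(struct efi_boot_info *info)
{
    (void)info;
}

/* efi_main: test preparation, test startup, then call the test. */
int efi_main(struct efi_boot_info *info)
{
    setup(info);
    return test_main(info->argc, info->argv);
}
```

The key design point is that everything EFI-specific stays in efi_main and setup, so the test itself is identical on every target.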
Another thing we can do, now that we're running on an EFI target, is exit from the unit test in a different way. One thing that's a bit odd about how kvm-unit-tests works when running over QEMU and KVM is that there's no easy way to exit a running VM at the VM's own time of choosing without using some sort of power management, which originally wasn't implemented for kvm-unit-tests; it is now, for at least ARM and PowerPC. But we also want to be able to hand back a status code to the shell that launched the test, so even then it's insufficient to just implement power management. We use a thing called a testdev, which allows you to pass a status code and tell QEMU it's time to quit. Well, we can no longer rely on that, because there's obviously no testdev when running directly on hardware. We can still have a testdev when running over QEMU, sure, but now we prefer to use the UEFI runtime services to exit.

I should also point out, on the second-to-last bullet, that I bring up ExitBootServices as something else we need to do after preparing to launch the test. This is a UEFI call which basically says: OK, thank you, UEFI, for getting our app launched, but now please get out of the way. Some of the things UEFI sets up while running the app, such as timer events — interrupts being delivered — would interfere with the test, of course, so we need to call ExitBootServices in order for those things to go away. The runtime services, another UEFI facility, are left in place for us to use when we want to exit the test.

OK, so what gets removed or bypassed? Well, our own linker script, for example: the kvm-unit-tests default, or original, linker script can no longer be used, because GNU-EFI provides one. So we're swapping that out. This also means any symbols in our linker script need to be either avoided or perhaps renamed. In the case of the AArch64 proof of concept, I did a little of both.
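To make the testdev-style exit concrete, here is a sketch of QEMU's "exit with status" mechanism, using x86's isa-debug-exit device as the example: writing a value to its I/O port makes QEMU exit with status (value << 1) | 1. The port I/O is mocked here so the logic can run anywhere; on real hardware the write would be an out instruction:

```c
/* Sketch of exiting QEMU with a status code via the isa-debug-exit
 * device.  The I/O write is mocked for illustration; real code would
 * use an out instruction.  QEMU's shell-visible exit status becomes
 * (value << 1) | 1. */
#include <stdint.h>

#define DEBUG_EXIT_PORT 0xf4   /* isa-debug-exit default iobase */

static uint32_t last_port, last_val;   /* mock: record the I/O write */

/* Mock of the port write; on x86 this would be outl. */
static void outl(uint32_t val, uint16_t port)
{
    last_port = port;
    last_val = val;
}

/* Ask QEMU to terminate, handing a status code back to the shell. */
static void qemu_debug_exit(uint32_t code)
{
    outl(code, DEBUG_EXIT_PORT);
}
```

On the EFI target the same "exit with status" intent would instead go through the UEFI runtime services, which survive ExitBootServices precisely so they can be used at times like this.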
I avoided most of them, but there was actually no reason not to just rename etext to _etext, which is the more common name for that symbol anyway. Another goal for this port is to take all of the assumptions, all of the references, that the original linker script creates and shove them into as little space as possible — just into the startup code in our init section is where I want those to live. That way, when we build as an EFI app, we just need to have that init section #ifdef'd out, and we can continue from where we would have left off, in the same way for both targets. So all the common initialization, between being an EFI app and being a traditional target build — a QEMU target — can be shoved into a setup function written in C, which can be called from efi_main.

The next slide makes this a little clearer by illustrating it. On the left, you see the flow for the original, or default, target; on the right, the flow for the EFI app target. We've rearranged a little who does the relocating of the unit test and where the stack is set up: on the EFI app side, we rely on the UEFI loader to do that for us. We also currently have the preparation of the command-line arguments, the environment variables, and the memory map all kind of squeezed into setup on the original target, but on the EFI app target a lot of that is done in efi_main now, using UEFI calls, so we can pass that information into setup instead on that side. Beyond that, the setups should ideally be the same, and a definite goal is that the unit test stays the same. We shouldn't need different code running in the unit test depending on whether or not we're an EFI app. Unit test developers should be able to focus on just what they want to test, and it should run on all targets.
And I already talked about how exit can differ — using a testdev or otherwise on the original target, and using the runtime service call on the EFI app target — but the flow should still be the same.

This slide talks a little about how those differences we saw on the previous slide actually play out. The top line is just about the relocating, and there's not much to say about it. As for getting the device tree or other boot-time information: on the original target that comes from the DT — DT for AArch64 and PowerPC, and multiboot info for x86. It's actually good that we already taught kvm-unit-tests to look at the DT for these things, because when we switch to bare metal with the EFI app target, all we need to do is provide the bare-metal, actual-hardware DT rather than the QEMU machine-model DT. So we can already get the information; we just have to get it a little differently. On the EFI side, we actually need to read the DTB from the EFI filesystem, whereas on the default side a pointer to the DT is provided to the unit test directly by QEMU.

Command-line arguments come from slightly different places too: we fish them out of the DT on the original targets, but we need to get them from UEFI on the EFI target. And environment variables — here's where they live on the original target: they actually live in an initrd; that's how we provide them to the unit test. On the EFI app target, well, UEFI supports environment variables, so we get them with the native UEFI service call. The memory map, on the original target side, is something you can extract from the DT or from multiboot info. But we've assumed we were going to be running on QEMU for so long, as that was the main target, that some of it is hard-coded.
And so all of that needs to change, to use the memory map we can get from UEFI. Obviously, for bare metal we can't just have hard-coded memory map addresses, but even if we were using only the DT to get those addresses, it wouldn't be enough, because UEFI reserves some regions of its own for the runtime services, and we want to make sure we're aware of those. So we need to pass that map in to the unit test, and the kvm-unit-tests framework needs to be able to handle it.

The last bullet is interesting. We call setup from both paths — our goal was to get the flows to synchronize again after startup — but there's still a difference. And that difference is that when we come into setup on the EFI app target side, the MMU is on and some of the devices have been initialized, whereas on the original targets that's not the case. This can cause some issues, based on assumptions we have in our startup and setup code, and we have to work around those. That brings us to the next slide.

So, from the point where the paths should synchronize — where both targets should have the same code, which is a goal, because we don't want lots of "if EFI target then this, else that" — we need to remember that we've started with the MMU on. So there might need to be at least a way to disable it, in order to allow it to be re-initialized along the same path. That's quite possible to do without any trouble, because while the MMU is on, it's actually just using an identity map. So, as long as we clean the caches when we disable the MMU, we should be okay to just turn it off and run on physical memory — it will have the same addresses — and then turn it on in any way we want, again the same way we would for the original target. But also, some of the devices have been in use at this point.
So they're already initialized, and they may need to be reset before we init them, which is new, because on the original QEMU/KVM target we always assumed they were fresh and ready for us to start poking and using. And then there are other device driver issues. I didn't have any problem writing to the UART right away with the EFI app target, but I could only write 32 characters, because the driver is so simple — it just writes to the data register and nothing else — so we would fill the FIFO and then just get stuck. So now I've added some FIFO handling to the driver, while still keeping it as simple as possible, which, I should point out, is a main goal of kvm-unit-tests: we don't want to write another operating system; we have Linux for that. We want it to be so simple that developers can be confident their unit test is doing just what they want it to do, and nothing but what they want it to do, and also so they can jump in and contribute quickly.

I already talked about bullet three. Another kind of difference between the bare-metal world and the KVM target world is that sometimes you need a carriage return; we could make that configurable, of course. Other things: we already talked about the MMU, and the last bullet's there, but for anything else we find where we really do need a different path — we can't have perfectly synchronized paths — we could possibly do a better job by making these things dependent on environment variables. Another tool we have in kvm-unit-tests is the auxinfo structure. auxinfo isn't quite as nice as environment variables, though, because you need to recompile when you use it; it's a compile-time setup. Well, you can use the structure even after you've compiled, but then you need to write to it, which would be yet another path, so it doesn't help.
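The FIFO handling just described amounts to polling a "transmit FIFO full" flag before each write to the data register. Here is a sketch of that loop; the register name and bit position follow the PL011 UART commonly found on AArch64 systems, but the hardware access is mocked with a small simulated FIFO so the logic can run anywhere, so treat the details as illustrative rather than as the real driver:

```c
/* Sketch: wait for FIFO space before writing the UART data register,
 * instead of writing blindly and getting stuck once the FIFO fills.
 * The register access below is simulated; the flag layout loosely
 * follows the PL011's flag register. */
#include <stdint.h>

#define UART_FR_TXFF (1u << 5)   /* flag register: TX FIFO full */
#define FIFO_DEPTH   32          /* matches the 32 characters observed */

static unsigned int fifo_level;  /* mock hardware state */
static unsigned int chars_sent;

/* Mock: read the UART flag register. */
static uint32_t uart_read_fr(void)
{
    return fifo_level >= FIFO_DEPTH ? UART_FR_TXFF : 0;
}

/* Mock: write the UART data register (pushes into the FIFO). */
static void uart_write_dr(char c)
{
    (void)c;
    ++fifo_level;
    ++chars_sent;
}

/* Mock: the hardware draining the FIFO over time. */
static void fifo_drain(void)
{
    fifo_level = 0;
}

/* The driver change: poll for FIFO space, then write. */
static void uart_putchar(char c)
{
    while (uart_read_fr() & UART_FR_TXFF)
        fifo_drain();            /* real hardware: just busy-wait */
    uart_write_dr(c);
}
```

This keeps the driver nearly as simple as before, in line with the project's keep-it-simple goal, while removing the 32-character limit.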
So I've talked about most of these problems already — needing to reset the devices, or to get our information from different places. For x86, this may require parsing ACPI; I'm not sure yet. I forgot to mention one problem I still have on AArch64, though: in order to use the EFI memory map we get from UEFI, we need to be able to use it at the granularity at which it's given to us. Currently the implementation for AArch64 unit tests only uses 64K pages, so this memory map, which doesn't have 64K alignments, is no good for us at the moment. So I need to rework some of the memory management setup framework for AArch64 in order to allow 4K pages as well.

So, to wrap up: kvm-unit-tests is already testing more than KVM, and if we add an EFI app build target to kvm-unit-tests, we can further expand the test targets we can run on, since that will give us a portable unit test. One of the other benefits of being able to run a unit test directly from firmware is that we will be able to write tests — even for KVM, not just for bare metal — faster, because we won't need to boot all of Linux and run the KVM userspace on top of an emulator when we don't have the hardware available to do otherwise. The proof of concept for AArch64 is pretty far along, but unfortunately not all the tests are running on bare metal yet — work in progress. Also work in progress, although I haven't really started much yet, is the x86 work. I expect some different challenges there, and I actually hope that all the work done by VMware — removing assumptions about KVM being the target in order to run on bare metal and VMware — will allow the tests to run on bare metal more easily than in my experience with AArch64. Thank you. If I understand correctly, there will be some question-and-answer time reserved after this presentation is made available.
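The granularity problem mentioned above comes down to alignment: a region can only be mapped with 64K pages if its base and size are 64K-aligned, while UEFI only guarantees 4K granularity. A small illustration, with a hypothetical helper name not taken from the kvm-unit-tests tree:

```c
/* Sketch: a memory-map region is only mappable at a given page size
 * if its base address and size are multiples of that page size.
 * region_mappable() is a hypothetical helper for illustration. */
#include <stdbool.h>
#include <stdint.h>

#define SZ_4K  0x1000ull
#define SZ_64K 0x10000ull

/* True when [base, base + size) can be mapped with pages of pagesz. */
static bool region_mappable(uint64_t base, uint64_t size, uint64_t pagesz)
{
    return (base % pagesz) == 0 && (size % pagesz) == 0;
}
```

A 4K-aligned UEFI region that is not 64K-aligned fails this check at 64K page size, which is why the AArch64 setup code needs 4K page support.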
So please ask away.