Hi, everyone. Thank you for joining my talk, Fuzzing Linux with Xen. My name is Tamas K. Lengyel, and I'm a senior security researcher at Intel. I also maintain a variety of open source tools, such as the Xen hypervisor, LibVMI, and DRAKVUF, and I participate in the Honeynet Project, where we usually run Google Summer of Code projects during the summer, developing open source tools to fight malware.

What this talk is about: we had the task of fuzzing the device-facing input points of several Linux kernel modules and drivers, and we had to build new tools to get it done. We open-sourced them, we found a bunch of bugs, and we fixed them. I will talk about those, but really the point of this talk is to show you how we did it so you can go out and do it yourself.

To start, let's talk a little bit about feedback fuzzers. They are not just about feeding random input to your target: they use feedback as a mechanism to better exercise your target code. They do that by collecting an execution log, called the coverage, while the fuzzer is running, and comparing execution from run to run to determine whether the fuzzer was able to discover some new code that hasn't been seen before. The idea is simply that if you've discovered a new code region that hasn't been exercised before, it's worthwhile to focus on it, because the chance of finding new bugs is higher in code that hasn't been exercised as much.

Obviously, what feedback fuzzers need the most is determinism. If the target code behaves radically differently from one run to the next, the fuzzer might get stuck focusing on inputs that don't actually lead anywhere, because it will think it's opening up new code paths when it is, in fact, just noise. Garbage in, garbage out.

VM forking is meant to address that shortcoming. It is effectively a way to add determinism to kernel code execution. If you think about the kernel, it's pretty nondeterministic: you have interrupts firing all the time, you have multiple threads and scheduling. It's about as far away from deterministic execution as you can get. VM forking allows you to split the runtime of a VM into multiple VMs and populate the memory of these fork VMs from the parent. To make this as fast as possible, when the fork performs a memory access that is a read or an execute, you can populate the corresponding page table entries in the fork with entries shared with the parent; you only have to deduplicate an entire page of memory when the fork writes to it. To get even better speed, once a fork VM is set up, you can simply reset its state: copy the vCPU registers from the parent again and throw away the deduplicated, copied pages, but keep the shared entries in place. That gets you the best performance.

If we look at numbers, running these operations in a tight loop you can create about 1,300 forks per second, and a reset is about 9,000 resets per second. These numbers are fairly OK for fuzzing. Obviously you will not actually see these numbers, because they are the theoretical maximum when you are doing nothing but resetting the VM; between those resets you actually want to run your target code.

A couple of other building blocks are worth mentioning here to really understand what we will be doing. Most importantly, Xen's introspection subsystem.
This is what I've been working on for the last 10 years. It allows you to really peek into the runtime execution state of a guest: you can read and write its memory and translate virtual addresses, but it also allows you to pause the vCPUs of the VM on various hardware events and get a notification of those events in a regular user space application in dom0, which makes the development of introspection tools quite convenient. You can get notifications for CPUID, breakpoints, single-stepping, EPT faults, and a bunch of other things.

The other really cool feature that just got upstreamed into Xen is called VM trace. We did this in collaboration with CERT Polska and Citrix. It is an effective way of turning on Intel Processor Trace to record the execution of a full VM from dom0: the CPU itself stores enough information about the execution of the VM, at low overhead, that the log can later be decoded to reconstruct what the VM did. This is what we will be using to collect the coverage information.

So let's look at the full flow of how the fuzzing setup works on Xen. We start from the parent VM: you boot up your regular VM, and eventually the target code is reached. You compile the target code with a magic CPUID in place that signals to the fuzzer that this is the point where you want to start fuzzing. When that CPUID executes, the fuzzer finds it and creates a fork, which we call the sink VM. In that sink VM, we look up the virtual addresses of the kernel functions that usually get called when something bad is about to happen in the kernel: a panic is happening, or KASAN or UBSAN, the built-in error detection systems in the Linux kernel, trip. We add a breakpoint to the entry of all of those functions, and then we create another fork. This is what we call the fuzz VM, and this is where we actually perform the fuzzing.

This works by taking the input generated by the fuzzer, in this case AFL (American Fuzzy Lop), writing it straight into the VM's memory, unpausing the VM, and seeing what happens. If we catch a breakpoint, it's going to be at one of those entry addresses we breakpointed earlier: great, we just found a crash that we report back to AFL. If we catch the magic CPUID again, that is the end harness. So you have a start harness and an end harness, and if you hit the end harness, nothing bad happened. If we catch neither, we report a timeout. Afterwards, we take the Intel Processor Trace log, decode it, and use it to report the coverage back to AFL, so it understands whether something new happened while fuzzing that VM. Then we just reset the state of the fork and go to the next input from AFL.
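To make that flow a bit more concrete, here is a minimal sketch of the per-input loop the dom0-side tool drives. Every name in it (write_guest_memory, wait_for_event, reset_fork, and so on) is a hypothetical stand-in used only to illustrate the sequence just described; it is not the actual kfx API.

```c
#include <stdint.h>
#include <stddef.h>

enum event { EVENT_BREAKPOINT, EVENT_MAGIC_CPUID, EVENT_TIMEOUT };

/* Hypothetical dom0-side helpers, declared only so the sketch is complete. */
extern void write_guest_memory(uint32_t domid, uint64_t va, const void *buf, size_t len);
extern void start_ipt(uint32_t domid);                     /* VM trace / Intel PT on  */
extern void unpause_vm(uint32_t domid);
extern enum event wait_for_event(uint32_t domid, unsigned timeout_ms);
extern void report_crash(const void *input, size_t len);
extern void report_coverage(uint32_t domid);               /* decode the PT log for AFL */
extern void reset_fork(uint32_t domid);                    /* back to the parent's state */

void fuzz_one(uint32_t fork_domid, uint64_t target_va, size_t target_size,
              const void *input, size_t len)
{
    if (len > target_size)
        len = target_size;
    write_guest_memory(fork_domid, target_va, input, len); /* inject AFL's input */
    start_ipt(fork_domid);
    unpause_vm(fork_domid);

    switch (wait_for_event(fork_domid, 1000)) {
    case EVENT_BREAKPOINT:          /* panic/KASAN/UBSAN entry point was hit */
        report_crash(input, len);
        break;
    case EVENT_MAGIC_CPUID:         /* end harness reached: nothing bad happened */
        report_coverage(fork_domid);
        break;
    default:                        /* neither: report a timeout */
        break;
    }
    reset_fork(fork_domid);         /* drop CoW pages, restore vCPU state, next input */
}
```

The real tool of course also has to deal with breakpoint handling, AFL's coverage map, and so on; the sketch only captures the ordering of the steps described above.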
Let's take a look at a demo of how this actually looks in practice. I am creating an Ubuntu 20.04 VM, and I will be booting an Ubuntu Linux 5.10 kernel that has the harness already compiled into it, booting with KASLR and PTI disabled just to make debugging easier later on. What we will be fuzzing is a USB driver in that kernel: I have a thumb drive attached to a USB 3 hub, and I fire up the kfx fuzzer to listen for when that magic CPUID, which here is called the magic mark, is executed.

So now it's just listening for that CPUID to happen. The VM finishes booting, I log in, and I initiate some interaction with that USB thumb drive that will trigger the harness I have pre-compiled in there. I ran fdisk, and you see that fdisk never returned; it never finished. That is because the VM is now paused: kfx caught that CPUID, and we have the information about where the target buffer is and what the target size is that we want to fuzz. We can go to that virtual address and read out that memory to be used as the seed for the fuzzer. This is what the kernel was just about to process normally while executing fdisk, and we will start mutating from that structure to see what can happen if that input is malformed or malicious. We'll be using AFL++ here; the fuzzer is up and running, and we are opening up a bunch of paths, as you can see. In less than a minute, there is already a crash found. We'll go into the details of what this crash is about, but this is actually a real bug in the Linux kernel that was discovered just like that.

At this point you're probably wondering: okay, what the hell did we just fuzz, and what is the bug? And you're right. There is more to fuzzing than just running the fuzzer. In this engagement we discovered that the biggest pain point is not running the fuzzer. Once the fuzzer is up and running, it's great: you can go take a walk, grab a coffee; you don't really have much to do, it's all automated. The real pain points are performing the analysis, figuring out what to fuzz in the first place, and then, once the fuzzer finds something, getting enough information out about the crash so that you can create a report or fix the bug. So how do we do all of those steps?

Let's start with analysis. What we were fuzzing there is DMA. DMA is memory that the kernel makes accessible to an external device, to facilitate fast IO operations: with shared memory you get better speed. The way this works is that the device has direct access to that memory, so it doesn't have to go through the CPU and the MMU to read or write it. There is the IOMMU, but the IOMMU only restricts access to other pages; pages that are explicitly made accessible to the device by the kernel will be allowed through. So the IOMMU is not going to protect you against a malicious device placing random stuff in a DMA page that it is allowed to access.

So we figured, okay, let's take a look at the Linux source code and see where DMA memory is getting accessed. It shouldn't be too bad, right? With a system call, when the kernel receives some buffer from user space, the first thing it does is copy it into kernel memory and do its processing there. We figured that's how DMA works as well: the kernel should copy the DMA memory into an internal buffer first and go from there. But boy, were we wrong. It turns out that the kernel accesses DMA memory all over the place. There is no single function that copies data in from DMA; once DMA memory is established, the kernel can access it, and does access it, all over the place. So even just figuring out where Linux reads from DMA is not trivial. What we did was look through the source code for hints of when the kernel might be doing a DMA read.

That's quite painful, because just by looking at the source code you don't necessarily know whether some pointer is DMA memory or not. So we looked for things like the IOMMU cookie, but the best indicator was actually the endianness conversion functions, the big-endian and little-endian to CPU helpers. Those are a pretty clear indication that the data being read might not be in the byte order the CPU expects, which usually means there was some cross-communication with an external device.
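As a concrete illustration of that hint, here is a kernel-style snippet with a made-up descriptor. It is not taken from any real driver; it just shows the kind of pattern we were grepping for.

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Hypothetical device-written descriptor layout, purely for illustration. */
struct demo_rx_desc {
	__le32 status;
	__le16 length;
	__le16 flags;
};

static inline u32 demo_desc_status(const struct demo_rx_desc *desc)
{
	/* An le32_to_cpu() on ring memory is the grep-able hint: the bytes were
	 * written by the device in its own byte order, so this is very likely
	 * a read from DMA-accessible memory. */
	return le32_to_cpu(desc->status);
}
```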
Then we took the output from ftrace, a built-in subsystem in Linux that lets you trace the execution of the kernel internally, and cross-referenced it with what we found and believed to be DMA accesses, to see whether those functions actually get called while the kernel is running. Because we did find a bunch of these accesses, but many of them were in functions that never actually executed at runtime, and those are not really good targets to fuzz: if you can't get the code to execute, you can't fuzz it. So this was not great.

We also decided to just be old school and dig through the spec, to maybe get a better understanding of what's going on, because the kernel code is not the easiest thing to read. Looking at the spec itself, you can find pictures like this that are immensely helpful for trying to understand what the hell is going on. Obviously this subsystem, as you can see, is quite complex. But really the biggest boost we got for our engagement was simply discovering the names of the rings the subsystem uses for device-to-kernel communication: the event ring, the transfer rings, and the command ring. Just knowing those names, we were able to grep for whether there is a variable called event ring and see where it is being accessed.

What we found is this location where, yes, there is what's called the event ring, and this is a function that gets called from the interrupt handler. What happens is that the device, or the USB hub, places some data on this ring and sends an interrupt, and then the kernel goes and processes whatever structure the device sent and dequeues it from that ring page, which is DMA-accessible. So what we fuzzed, what the harness setup was, is just after that structure is dequeued from the DMA page: we have the harness start, which transfers the information about where the pointer is and what the size of the structure is. Then we have a couple of points where we want to stop the fuzzer and go to the next iteration; effectively, whenever this function would return, we want to stop. We want to fuzz everything in this function and everything it calls.

As for what those harness functions actually look like, they are really just a CPUID instruction where we stuff the magic information into registers that the user space tool in dom0 can receive. You can effectively think of them as hypercalls.
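Here is a sketch of what such a harness can look like in C. The magic leaf value and the register layout below are assumptions made for illustration; the exact convention the kfx harness uses may differ.

```c
#define HARNESS_MAGIC 0x13371337u        /* placeholder magic leaf, an assumption */

static inline void harness_cpuid(void *buf, unsigned long size)
{
	unsigned int eax = HARNESS_MAGIC, ebx, ecx, edx;

	/* CPUID always causes a VM exit, so the dom0 tool observes it and can
	 * read the buffer address and size back out of the guest registers. */
	asm volatile("cpuid"
		     : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
		     : "S"((unsigned long)buf), "D"(size)
		     : "memory");
}
```

The start harness would be called with the address and size of the structure that was just dequeued from the DMA ring, and the end harness (for example with NULL and 0) on every return path once processing is done.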
All right, so once we found that bug, what's the next step? VM forks are a little special on Xen in that they are not fully functional VMs. You can turn them into fully functional VMs, but for fuzzing there's obviously no point. Because of that, there's a little bit of pain in figuring out what actually happened in them: you don't get to just log in and gather the logs, because there's no network, no disk, no console. There is no IO into VM forks; they are literally just running with CPU and memory.

But fortunately, the dmesg buffer that the Linux kernel uses to store information about runtime events and errors and whatnot is just sitting in RAM, so we can go and carve it out. The way we do that is with gdbsx, which has been shipping with Xen for over a decade at this point; it's really just a minimal GDB bridge. If you build the kernel with debug information and frame pointers, you can access the kernel state using just GDB.

So let's take a look at how this works. This is where we were: we just found the bug, and we want to figure out what happened, what the bug is. We take kfx and re-execute, but instead of taking the input from AFL, we just use the file that AFL found. We inject it into a VM fork and see what happens with the debug output, and we see that ubsan_prologue tripped. ubsan_prologue is the function that gets called when the kernel starts to construct a UBSAN report, so we want to stop the VM fork after it has actually finished printing the UBSAN report into the dmesg buffer: we will stop at ubsan_epilogue. At that point, we can really just attach the debugger and read out the dmesg buffer. So I fire up gdbsx, attach it to that domain, go into the source folder where I compiled that kernel, load up the symbol file for the kernel, attach to the bridge, and print the dmesg buffer using lx-dmesg. And there we go: right at the bottom of the dmesg buffer we see the report that UBSAN generated for the bug the fuzzer just found. This is an array-index-out-of-bounds error in xhci-ring. Awesome.

So this is pretty much how you triage the errors that you find with the fuzzer. For most cases this has been perfectly sufficient: we have the source line we have to take a look at, and usually it's pretty straightforward to see where the bug is coming from. But not all the time. Sometimes the bug triggers in code that's far away from the driver we are actually fuzzing: there is some call chain from the point we are fuzzing that reaches some deep layer of the kernel, and that's where the bug happens. Figuring out what's going on there is a little more difficult.

So let's look at triaging beyond the basics. Here is the harness that we used to fuzz the igb network driver (the Intel Gigabit Ethernet driver); these are network drivers that receive packets. Here we have the interrupt handler that runs when a packet is received: the kernel goes and reads this Rx descriptor that the device places on the ring, which has information about, for example, the size of the packet that was just received. So this is not the packet itself; this is metadata about the packet that the device itself constructs, and we want to fuzz that. What we do is jump in just after that Rx descriptor was received from the ring. We start fuzzing there, and we want to stop fuzzing when the loop wraps around. We also have a harness stop for when the loop breaks out; that's not shown here.
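Roughly, the harness placement looks like the sketch below, reusing the hypothetical harness_cpuid() and demo_rx_desc from the earlier snippets. ring_next_desc, desc_done, and process_rx_desc are made-up stand-ins for the real igb code paths, shown only to illustrate where the start and stop points go.

```c
struct demo_ring;                                          /* hypothetical ring type */

extern struct demo_rx_desc *ring_next_desc(struct demo_ring *ring);
extern bool desc_done(const struct demo_rx_desc *desc);
extern void process_rx_desc(struct demo_ring *ring, struct demo_rx_desc *desc);

static void demo_rx_irq_clean(struct demo_ring *ring)
{
	struct demo_rx_desc *desc;

	while ((desc = ring_next_desc(ring)) && desc_done(desc)) {
		/* the device just wrote this descriptor into DMA memory */
		harness_cpuid(desc, sizeof(*desc));        /* start harness */

		process_rx_desc(ring, desc);               /* what we actually fuzz */

		harness_cpuid(NULL, 0);                    /* loop wrapped around: stop */
	}
	harness_cpuid(NULL, 0);                            /* loop broke out: also stop */
}
```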
So using this harness, we found the following bug: you get a KASAN null-pointer dereference in a function called gro_pull_from_frag0. We also get a helpful stack trace, where we see kasan_report, and just before it a memcpy; so gro_pull_from_frag0 calls memcpy. All right, let's take a look at that function. It turns out this is not in igb itself: this is in net/core/dev.c.

So this is some deep layer of the Linux networking stack, where it receives this sk_buff structure and does a memcpy from one place to another. We have no clue what those are, but this memcpy obviously trips a null-pointer dereference, so either the source or the destination is corrupt, and it got corrupted because the fuzzer found a way to corrupt it. At this point, the idea I had was: all right, let's see which one of these pointers is the culprit. We want to stop the execution of a VM fork right at that memcpy, so we can look at the state of the VM, at the registers that hold those pointers, and see which one is null.

The way we do this has a couple of steps, and we need a couple of bits of information. First of all, we want to figure out the address of kasan_report, because that's the point we want to single-step the VM up to: just before kasan_report is reached, we'll obviously have the memcpy. So we just execute the VM with that crashing input, we see that kasan_report is indeed tripped, and we see the virtual address of that function. At this point we create a VM fork and place the buffer that we know will trip kasan_report into its memory. We use this tool called rwmem to write the contents of that file into the target buffer, which allows us to execute this VM fork up to the crash and record what happened. Obviously we could use Processor Trace for this as well, but I found single-stepping to be just as effective and a little less convoluted.

So now we have the VM fork set up with the crashing input injected into its memory. We use the tool called stepper, which does MTF (monitor trap flag) single-stepping, to go all the way and stop when the virtual address of kasan_report is reached, and we pipe its output into a file. If you take a look at what this file contains, it's effectively just the disassembly of each instruction that was executed, ending at kasan_report. There is a ton of instructions in there, and just staring at that is not all that helpful for the task at hand. But what we can take from it is the instruction pointers that were observed and translate them using the kernel's debug symbols with addr2line, which gets us the source line each of those instructions corresponds to. So if you look at the decoded log, you see each instruction pointer and which source line it corresponds to. At the bottom of this file we see immediately: all right, that's where gro_pull_from_frag0 is, and there is the memcpy that trips the null-pointer dereference. So I take the last instruction that is still inside the memcpy before kasan_report trips, and I want to re-execute the VM and stop at that instruction to see what the register state is. Again, I just create a fork, use rwmem the same way as before, just changing the domain ID, and now I stop on this address, which is the memcpy's address. I don't actually need to save the output this time, because I know where it's going. Now this domain, ID 61, is paused at that memcpy, so I can just go and take a look at the register state, take a look at the source pointer.
RSI is the register that holds the source pointer in this case, and RDI holds the destination. Oh well, it kind of looks like both the source and the destination of that memcpy are null pointers, so they are both corrupted. This approach did not really yield anything we could use to figure out what went wrong, since it looks like the entire sk_buff is just bogus.

So what else can we do? Well, the idea is: if we can't figure out what went wrong right at the memcpy, at the location where the bug trips, we can compare the execution that leads to kasan_report with a normal execution. We can take the normal input that was used as the seed. We know this is the input the kernel would have executed with normally, and it does not cause a crash: we see that it reaches the harness-finish signal. So we just create a fork from that, use stepper to go all the way to the end harness, exactly as before, stop on this address, and save the output to a log file. We take the instruction pointers from this log file too, decode them with addr2line, and save that as well. Now we have the decoded log for both the execution that goes to kasan_report and the one that goes to the end harness, and we can just diff them. The very first line in this diff is where the execution diverges from the normal one, and we have the source line, so we go straight there and look at the code. And bam: this is the first line that only executes with the input the fuzzer found. It turns out that the sk_buff is constructed by the driver and passed to those deeper-layer kernel subsystems, but the way it gets constructed here is based on information that came from that Rx descriptor, and that information is bogus. So obviously what needs to happen is that even if the Rx descriptor says this bit is set, there needs to be a little more sanity checking in place before the sk_buff is manipulated. If you look at the latest kernel code, you will find that this has been fixed: it was effectively just a missing sanity check on data coming from DMA.

All right, let's look at a couple more bugs just for fun. Can you spot the bug here? How about this one? If you haven't noticed yet, the theme of these bugs is pretty much the same: you get some DMA-sourced input that is used without validation for whatever the kernel decides. In this case, for example, the slot ID is derived from DMA memory and is used as an array index; well, what can go wrong there, right? (The sketch after this paragraph shows that pattern reduced to its essentials.) So yeah, we found nine null-pointer dereferences and three array-index-out-of-bounds bugs. We found some infinite loops in the interrupt handlers, and during boot the kernel can also trip with a user-memory-access, which is not great. These all pretty much stem from the same problem: the kernel does not treat DMA memory as a security boundary. DMA memory is treated as trusted, and consequently all of these devices are treated as trusted. When you are talking about USB devices, well, it's not great that every USB device is trusted like that.
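Reduced to its essentials, the recurring pattern looks something like this kernel-style sketch; every name here is hypothetical, and it is not the actual xHCI code.

```c
#define DEMO_MAX_SLOTS 64

struct demo_dev  { int state; };
struct demo_host { struct demo_dev *devs[DEMO_MAX_SLOTS]; };

struct demo_event {                     /* hypothetical DMA-resident event */
	__le32 flags;
	u8     slot_id;
};

static void demo_handle_event(struct demo_host *host, const struct demo_event *ev)
{
	u8 slot_id = ev->slot_id;       /* device-controlled value, read from DMA */

	/* Missing here: something like
	 *     if (slot_id >= DEMO_MAX_SLOTS || !host->devs[slot_id])
	 *             return;
	 * Without it, the device picks the array index, and we dereference
	 * whatever pointer happens to live there (often NULL). */
	host->devs[slot_id]->state = 1;
}
```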
Another problem case we wanted to look at is whether these kernel code paths perform double fetches. A double fetch is effectively a race condition, a time-of-check to time-of-use problem: even if the kernel did perform some sanity checks on DMA-accessible memory, by the time those checks finish, the data might have changed underneath, because the device has access to that same memory the whole time. So if the device wins the race, you may finish your security checks and the data is still corrupt. Obviously we wanted to detect when that happens. The idea was: let's remove the EPT permissions from DMA pages and keep a record of when DMA pages are accessed. If we get a page fault where the kernel is reading from DMA at the same page and the same offset twice in a row, that's the strictest definition of a double fetch; we can detect that and report it to AFL as a crash, so we can go look at the code and see whether the double fetch is a security concern.

We thought it would be rare, but it turns out it happens all over the place. Some kernel drivers treat DMA memory as totally trusted, so they just keep going back and fetching the same memory left and right. So far we haven't found a strictly-speaking security issue, because it turns out the same byte is being fetched but different bits are used from that byte, so it hasn't looked dangerous yet. But obviously this practice of treating DMA memory like this is bad, and it needs to change, though we've received considerable pushback from kernel maintainers for various reasons: performance, fear of regressions. Ultimately, to close this class of bugs, DMA memory should really be treated the same way user space memory is: every piece of DMA memory should get copied into a local buffer before being used by the kernel, and that's absolutely not the case today.
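To make the class of bug concrete, here is a hedged kernel-style sketch of a DMA double fetch, reusing the hypothetical demo_rx_desc from earlier, next to the single-fetch pattern that avoids it.

```c
#define DEMO_MAX_LEN 256

/* Racy: desc sits in DMA-accessible memory the whole time, so the device can
 * rewrite desc->length between the check (fetch #1) and the use (fetch #2). */
static void demo_copy_racy(const struct demo_rx_desc *desc,
			   const u8 *dma_buf, u8 *dst)
{
	if (le16_to_cpu(desc->length) > DEMO_MAX_LEN)       /* fetch #1: the check */
		return;

	memcpy(dst, dma_buf, le16_to_cpu(desc->length));    /* fetch #2: the use */
}

/* Safer: fetch once into private memory, validate the local copy, and only
 * ever use the local copy afterwards. */
static void demo_copy_safe(const struct demo_rx_desc *desc,
			   const u8 *dma_buf, u8 *dst)
{
	u16 len = le16_to_cpu(READ_ONCE(desc->length));     /* single fetch */

	if (len > DEMO_MAX_LEN)
		return;
	memcpy(dst, dma_buf, len);
}
```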
All right, so we found a bunch of bugs, we fixed them, mission accomplished, right? Not so fast. As you recall, the way we found the DMA input points was just by reading the source code and doing some experiments with ftrace, but there was this lingering feeling of: did we really discover all the DMA input points? What data do we have to back that up? We had a bunch of people look at the code, which gave us some confidence, but we couldn't put a number on it. We also got bogged down just documenting all the bugs we found, and at some point it became unproductive to keep staring at the code because it was just annoying. So let's do better. A tool called dma_monitor was added to the project as a standalone EPT-fault monitoring tool. This came after the double-fetch detection code was added, and the idea is: if we can already detect when DMA is being accessed to catch double fetches, we can use the same approach to detect when DMA is being accessed at all. We can really just trace who is accessing DMA, and from where, using EPT faults. The only thing we need to know is where the DMA pages are. Fortunately, the Linux kernel has its own internal DMA API that all kernel modules should be using to set up DMA for devices, and in it there is a function that is used to allocate memory for DMA: dma_alloc_attrs.

We can hook that function with a breakpoint through the hypervisor, and also hook the return address when the function finishes, and that gets us the virtual address of every page the kernel uses for DMA. Then we can just remove the EPT permissions for all of those pages on the fly, which effectively gives us a way to log every code site that reads from DMA as the kernel is running.

So let's take a look at how this works in practice. I'm booting up the same VM, and on the right I'm firing up dma_monitor; I just tell it what the domain is and where the debug JSON of the kernel is. dma_alloc_attrs is hooked as the kernel is booting, and then pretty much immediately we start to see a ton of DMA accesses happening while the kernel is still booting. As you can see, there are quite a few pages allocated for DMA, and we can grep through that log for the accesses that are just reads. We can take the instruction pointer of each, sort them, and keep the unique ones. There is still a ton of them, but we can feed them through addr2line to get an explicit list of all the places the kernel touched DMA from. This is quite a few places, but at least now we have an explicit list that we need to go through and look at, to see whether the data being read from DMA at each location is complex enough to warrant fuzzing. Which is awesome: we didn't have to look at the source code to figure out where to start. This is miles better than what we were doing before, because we just have the list we have to look at, instead of having to keep parsing everything in the kernel to see whether it's a DMA access or not.

There were still some corner cases with dma_monitor, even though it's way better than what we were doing before. Sometimes the DMA access the kernel does is just reading something from DMA, stashing it into some structure, and returning, and then the kernel goes away and does something else. So we're like, okay, is that data going to get used after the DMA access? Nothing at the access site warranted fuzzing, but that data is now sitting in private kernel memory, and it can still be potentially malicious. So where is it getting used, and is it safe? We had no idea. We didn't want to go back to reading the source code, because it's very hard to follow that kind of data lifecycle in the kernel; it's error-prone, manual, and annoying.

That's where the next tool idea came from, which we call full-VM taint analysis. The goal is to track tainted data propagation in the kernel. We know where the data is coming from: we have the source, the DMA access. So we want to taint that address and track what the kernel does with the data, where the data lands, and how it affects the execution of the kernel. We use VM trace, aka Intel Processor Trace, to record the execution of the kernel with very low overhead, and after some time we replay the recorded instruction stream through the taint engine of the Triton DBA framework. That's a separate open source project.
It's a really awesome project, and with it integrated, it tells us which instruction pointers get tainted by the data we just read from DMA: all the locations where the control flow of the kernel depends on tainted data.

So let's take a look at this as well. Here is a VM fork that I know will perform a DMA access on this page. I fire up dma_monitor on it, pause it, and yes, we see there is a single DMA access to that page, where something was read out of DMA and stored somewhere in the kernel. At this point we don't know where else that data gets used, so the idea is to use vmtaint to figure that out. We use vmtaint to save the state first; this saves the stack and the registers of the starting point into a file, which we need for the taint engine. We create another fork, start collecting the processor trace buffer, pipe that into a file, and unpause the VM fork. Now it's running, and its execution is being recorded into that buffer. We let it run for a second or two, pause it, and then we can start decoding the processor trace and feeding it through the taint engine, piping it into the taint.log file, and take a look at what it has found so far while it's still processing. Right off the bat, we see where that mov copied the data and which register got tainted, and from there what else got tainted during the execution of the kernel. And there we go: we can see all the different instruction pointers that got tainted from just that single DMA access. If you do this for the boot of the kernel, you can really check the full lifecycle of DMA-sourced data through the execution of the kernel without ever having to open up the kernel source, and it gives you right away all the locations where the control flow might depend on tainted data. You go take a look; if it looks complex enough, you put a harness around it and you can start the fuzzer.

This code is released, as is everything else: most of the code is upstream in Xen, but these tools you can grab from GitHub. There are also a couple of goodies I wanted to mention. This one is pretty new: some of the targets we wanted to fuzz were kind of difficult to get working in a Xen VM, so we came up with a way of transplanting the state of a system from one hypervisor to another. You can take a snapshot on QEMU/KVM or another hypervisor and load it up on Xen, because VM forks really only need the CPU state and the memory of your target to be fuzzable. So you can use those other hypervisors to take a snapshot, load it up on Xen, and fuzz away.

A couple of things we want to work on next, or are already working on. Top of the list is automation: putting an end-to-end automated fuzzing system together is what everyone is asking about, so that's absolutely something we are looking at. It would also be pretty awesome to capture system state using Intel DCI, which is a USB 3-based debug connection you can attach to a bare-metal system to capture the full system state. This would allow us to fuzz essentially any code that runs on any system, including BIOS and SMM code, so this would be pretty cool. Another idea we have is extending the tooling to ring-0-mode targets and adding nested virtualization support so we can fuzz hypervisors.
Obviously, with Intel DCI we would be able to capture hypervisor state as well, so nested virtualization might not strictly be a requirement, but it would still be cool to have. And there are a couple of things I didn't cover in this talk that are already possible using VM forking and the tools that are available open source: fuzzing other operating systems, fuzzing Xen itself, user space binaries are absolutely something you can fuzz with this system, black-box binaries, and even malware. So if you're looking for ideas, here are a couple of things that are already possible.

So thank you, that was my talk. If you have any questions or comments, please reach out. Thanks go to a whole bunch of people who made this work possible; this was not a single person's job, this was large teams working on it. So thanks, everyone, for your involvement, and absolutely to the open source community for releasing all the tools that make rapid security development like this possible. I hope you found some cool information in this talk. The goal here is to get you to go out and fuzz the kernel, because we found some bugs, but you can bet there are more to be found. Thank you; looking forward to your questions.