Hi everyone, welcome to this talk on VM forking and hypervisor-based fuzzing with Xen. My name is Tamas K. Lengyel, I'm a senior security researcher at Intel, the maintainer of Xen's introspection subsystem, and also of LibVMI, which is a hypervisor-agnostic introspection library. My background is primarily malware research and black-box binary analysis. Today we'll walk through a bit of intro and motivation for why we are doing this, what VM introspection is, and then how VM forking works. Then we'll look at various fuzzing scenarios on Xen: how to set up harnessing and coverage tracing, how to fuzz kernel modules of devices that are passed through to the VM using PCI passthrough, and also how to detect double fetches.

Briefly, why we are looking into using VM forking for fuzzing: fuzzing is a time-tested approach to software validation. It's conceptually straightforward — you generate random inputs to your target code and see if anything bad happens. In practice, of course, this can be quite difficult depending on what you want to fuzz. If we are talking about fuzzing the kernel, you have to answer the following questions. How do you get coverage-tracing information for the kernel? How do you recover your system fast enough for fuzzing to be effective? If you are fuzzing your system live and it crashes and takes a minute to reboot, your fuzzing is not going to be very fast. How do you ensure the system is in the proper state between fuzzing iterations? Ideally you want to start each iteration from the same starting point, otherwise your bugs might not be reproducible. How do you fuzz kernel-internal interfaces? Fuzzing well-established APIs and ABIs, like the system call interface, may be comparatively simple, but kernel-internal interfaces can be quite challenging. And we might want to detect more than just the crash conditions where the kernel panics — we might be interested in detecting other scenarios, like double fetches.

Now, there are already kernel fuzzers out there — these are just a select few that I'm familiar with — so one might ask why make another one. Simply put, our motivation for making this new system was that the existing kernel fuzzers are all very tightly coupled to their use case. For example, syzkaller is designed to fuzz the system call interface, and it does a really good job at it; but if we want to detect other things within the kernel, or fuzz interfaces that are not the system call interface, it's not necessarily the best tool for the job. As for the other systems, we really wanted to build on something that's upstream, or at least on APIs that are as stable as possible, just to cut down on the time it takes to debug when things don't work the way we want them to. Xen's VMI subsystem is experimental, but it has been upstream for a long time and has been tested very well, so we thought it was a good idea to build on that — and it allows a type of flexibility that the other systems didn't.

So, introspection. This is the idea of looking at the runtime state of a virtual machine from an external perspective, in this case using the hypervisor. You can think of this as a combination of live kernel debugging and memory forensics, where we have access to the entire memory footprint of the VM, but also the hardware state. Using that information, we can reconstruct what happens within the VM. We can also pause the VM and look at the state of the vCPUs and the hardware.
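For a flavor of what introspection looks like in code, here is a minimal, hedged sketch using LibVMI's C API — it assumes a running Xen domain named "ubuntu20" (an arbitrary example name) and trims error handling for brevity:

```c
#include <stdio.h>
#include <inttypes.h>
#include <libvmi/libvmi.h>

int main(void)
{
    vmi_instance_t vmi = NULL;

    /* Attach to a running domain by name ("ubuntu20" is just an example). */
    if (VMI_FAILURE == vmi_init_complete(&vmi, "ubuntu20", VMI_INIT_DOMAINNAME,
                                         NULL, VMI_CONFIG_GLOBAL_FILE_ENTRY,
                                         NULL, NULL))
        return 1;

    vmi_pause_vm(vmi);                      /* freeze the guest */

    uint64_t rip = 0, bytes = 0;
    vmi_get_vcpureg(vmi, &rip, RIP, 0);     /* hardware state: vCPU 0's RIP */
    vmi_read_64_va(vmi, rip, 0, &bytes);    /* memory state: 8 bytes at RIP */
    printf("vCPU0 RIP=0x%" PRIx64 ", code there: 0x%" PRIx64 "\n", rip, bytes);

    vmi_resume_vm(vmi);
    vmi_destroy(vmi);
    return 0;
}
```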
And there is a bunch of events you can configure via the hypervisor to trap and pause the system — EPT faults, breakpoints, CPUID, single-stepping — plus several others, but these four are the ones we will actually use quite heavily in our fuzzing exercises.

As for VM forking: we wanted a way to restore VMs to a starting point quickly after each fuzzing iteration. First we looked at simply restoring the entire system from a save file, but this can be quite slow — even from a fast SSD or tmpfs it can take up to two seconds to restore the VM, which is obviously not ideal for fuzzing. Furthermore, Xen already had a subsystem that meant we didn't have to start from scratch when implementing VM forking: the memory-sharing subsystem, where VMs can share their underlying memory with each other and only have to allocate memory when their memory footprints diverge. We can use this to create forks in a fast and lightweight manner.

The idea is that we create a VM with an empty EPT — so effectively the VM has no memory — and just specify its parent, a fully booted, fully functional VM. From this parent we copy the vCPU state into the fork so that its hardware state is equivalent to the parent's. When you resume or start this fork, it immediately faults into Xen, simply because it has no memory. In the EPT fault handler within Xen, we can then populate the fork's page tables on the fly, depending on the type of access that triggered the fault. For read and execute accesses, we populate the fork's page tables with shared entries — meaning we don't actually have to duplicate the full 4K page, we just point the fork's entry at the parent's memory. But if it was a write access, we deduplicate the page completely and assign that new memory to the fork; a pseudocode sketch of this copy-on-write handling follows below.

This is a little different from forking on Linux, or from what kernel same-page merging does with KVM, in that there is currently no way for the parent VM to continue executing once you create a fork. That's primarily because it wasn't needed for our use case — for fuzzing, a paused parent is perfectly acceptable — and we didn't really want to spend the extra time to implement a full domain split, though it would technically be possible. The nice thing about VM forking, though, is that forks can be further forked. So effectively you can have a long chain of forks, each in a different state of execution, going back to the initial starting point. You can think of these as in-memory snapshots of the VM's execution at various stages.

Another nice thing about these VM forks is that we can run them with only CPU and memory. That means we can execute code without having to spin up QEMU to emulate the backend devices — disks, network devices — for the VM, which speeds things up quite significantly. Starting up QEMU can be quite slow, and QEMU doesn't have a reset function right now, so not having to start QEMU at all is a performance benefit for us. And furthermore, if we have no devices, we can also disable interrupts for these fork VMs, which makes the execution of the code a lot more deterministic.
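Here is a hedged pseudocode sketch of that copy-on-write fault handling — it captures the idea described above, not Xen's actual implementation, and every helper name in it is made up:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Pseudocode: called when a fork with an empty EPT faults on a guest
 * frame (gfn). Populate the fork's EPT lazily, based on access type. */
void handle_fork_ept_fault(struct vm *fork, uint64_t gfn, bool is_write)
{
    struct vm *parent = fork->parent;   /* paused, fully booted VM */

    if (!is_write) {
        /* Read/execute access: share the parent's page. Nothing is
         * copied; the fork's EPT entry just points at parent memory. */
        ept_map_shared(fork, gfn, parent_page(parent, gfn));
    } else {
        /* Write access: deduplicate. Allocate a fresh 4K page, copy
         * the parent's contents, and give the fork its own copy. */
        void *page = alloc_page();
        memcpy(page, parent_page(parent, gfn), 4096);
        ept_map_private(fork, gfn, page);
    }
}
```

A fork reset then amounts to freeing every privately mapped page and re-copying the vCPU state from the parent, which is why it's so much cheaper than creating a fork from scratch.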
Technically it is possible to launch QEMU for VM forks, and then these forks will act and look like fully functional VMs. The patches for that are implemented, but they have not yet been merged upstream — they're available if anyone is interested in taking a look — but for fuzzing it's not actually required.

Resetting a VM fork is a nice performance optimization. Again, creating a new VM fork for each fuzzing iteration would be possible, but it actually turns out to be a lot faster to just reset an existing fork, depending on how much memory got deduplicated for it. We can simply throw away the duplicated memory of a fork and re-copy the vCPU state from the parent VM. In a lot of cases this is a lot faster than creating a full fork, simply because you don't have to allocate the metadata structures for a new VM that track things like the domain ID and a couple of other things — resetting a VM has lower overhead. As for the speed we get under normal fuzzing iterations: we can do about 1,300 fork creations per second, but fork resets can go up to about 9,000 per second. These were just measurements I made on my own laptop; they can of course vary based on your CPU and memory speed.

Now, when you do fuzzing, one of the first steps is that you need to harness your code. Harnessing means the fuzzer needs to know where the target code starts and where it stops. Currently these harnesses need to be manually placed into your target code; the harness needs to be something that traps to the hypervisor and should not have any side effect on the code itself. Furthermore, the code you are trying to fuzz needs to execute normally between these start and stop points, and it needs to consume some type of input that we will actually be fuzzing. What we use for harnessing is a simple CPUID instruction, which always traps to the hypervisor — this is architecturally required, so CPUID executed in a VM always causes a VM exit — and we specify a magic CPUID leaf as our marker for fuzzing. That's the magic number you see in the snippet on the left (a hedged reconstruction follows below). The CPUID instruction also has no side effect on the code, so you can really place a call to this harness function anywhere in your code. The only requirement before you call your harness is to display the information about the buffer or memory location you want to fuzz. For example, here on the right, we want to fuzz the test function in the middle, so we have a start harness before it and a stop harness after it. You just need to insert a printk to display where the input buffer that the test function receives is located in memory — whether that's on the stack or the heap doesn't really matter; we just need to know the virtual address of that variable before it is passed to that function.

So the steps are effectively this. The parent VM prints the information about the target to the virtual serial console so we can actually see it. The parent VM will trap to the VMM on the first CPUID instruction when the harness executes; we detect that it's the start signal using that magic leaf number, and then increment the instruction pointer of the VM so that the vCPU sits just after that CPUID. We won't actually be executing the CPUID during fuzzing — we will only catch the stop CPUID at the end.
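A minimal sketch of such a harness, assuming the fuzzer is configured to watch for the same magic leaf — the value 0x13371337 here is purely illustrative, not necessarily the one KF/x uses:

```c
#define HARNESS_LEAF 0x13371337u   /* hypothetical magic CPUID leaf */

static inline void harness(void)
{
    unsigned int a = HARNESS_LEAF, b = 0, c = 0, d = 0;
    /* CPUID is architecturally guaranteed to VM-exit, and its only
     * guest-visible effect is clobbering these four registers. */
    asm volatile("cpuid" : "+a"(a), "=b"(b), "=c"(c), "=d"(d));
}
```

In the target, the pattern described above then looks roughly like: printk the input buffer's address, call harness() as the start marker, call the code under test, call harness() again as the stop marker.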
So, coverage tracing for fuzzers is the idea that instead of just randomly generating inputs, the fuzzer learns whether the input it just passed to your program has actually opened up new code paths. This is very useful for tuning the mutations the fuzzer makes to your input, so that it smartly prioritizes inputs that exercise newly discovered regions of the code. Obtaining a coverage trace — seeing all the basic blocks executed in your code — normally means all the branches need to be instrumented. With AFL you would normally use a special version of Clang that compiles your code with those hooks in place. We don't want to do that: we want to be able to collect the coverage-tracing information without having to recompile the entire kernel.

The way this is implemented right now uses breakpoints and single-stepping. From the hypervisor's perspective, we can read and write the VM fork's memory, which means we can place breakpoints into the code regions the VM is currently executing — and breakpoints are also something that can be configured to trap to the hypervisor. So we read and disassemble the code from the start point — the first harness in the code — and find the next control-flow instruction. We replace it with a breakpoint and let the VM run by resuming the vCPU. When that breakpoint traps, we remove the breakpoint and enable single-stepping; the single-step also traps to the hypervisor, and we repeat the process. That way we can follow the code as it's executing (a pseudocode sketch follows below). This also works in nested setups, which is quite convenient to test with. The only downside is that it adds significant overhead, since every one of those breakpoints and single-steps has to transition to the hypervisor and back to the VM.

As for detecting crashes, we don't just want to detect panics within the VM. The way we detect interesting points is by breakpointing the kernel functions that handle various events. Panics are obviously the most important target, but we may want to detect other types of events as well. For example, oops_begin is a function that gets called whenever something goes wrong within the kernel but the kernel can handle the situation and continue executing — it doesn't actually crash, but it's definitely something we would like to be notified of. We would also usually be interested in any page fault that happens in kernel-internal code, which for the most part should be rare. And you can extend this list to include other things: it would be entirely possible to hook the KASAN and UBSAN report functions, so that if your kernel is compiled with those sanitizers in place, we can detect when they trigger as well.
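A hedged pseudocode sketch of that breakpoint-and-single-step coverage loop — event-handler and helper names here are invented, not KF/x's actual code:

```c
#include <stdint.h>

/* Pseudocode: disassemble forward from `start`, plant an INT3 (0xCC)
 * on the next control-flow instruction, and resume the fork's vCPU. */
void trace_from(uint64_t start)
{
    uint64_t next_cf = find_next_control_flow(start); /* via disassembler */
    plant_breakpoint(next_cf);                        /* write 0xCC */
    resume_vcpu();
}

/* Breakpoint VM-exit: record coverage, un-plant, step over the branch. */
void on_breakpoint(uint64_t rip)
{
    record_edge(rip);                 /* feed AFL's coverage bitmap */
    restore_original_byte(rip);       /* remove the breakpoint */
    enable_singlestep();              /* e.g. via the monitor trap flag */
    resume_vcpu();
}

/* Single-step VM-exit: the branch has executed; repeat from here. */
void on_singlestep(uint64_t rip)
{
    disable_singlestep();
    trace_from(rip);
}
```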
So, putting all of these things together, the three steps to set up fuzzing are: we set up the parent VM, trapping on the first call to the harness; we create a fork from this VM, and this is where we breakpoint the sink points — the kernel-internal handlers for the various types of events we want to get notified of when things go bad; and then we create a second fork from the first, breakpointed one, which is what we actually use for execution and for collecting the coverage trace.

So here I have an Ubuntu VM running, and we will load a kernel module inside it that we'll use to test the fuzzing system. What we see here is that it has two character arrays, "deadbeef" and "notbeef", it has the harness function, and a couple of dummy path functions that we just use to test the coverage tracing. Inside there is a memcmp that checks whether those two character arrays are equal or not, and if they are equal, we do a null-pointer dereference — the pointer y is set to NULL — so this triggers an oops/page fault, and we want to catch it when it happens. Obviously, if you load this kernel module normally, nothing happens: the strings are hard-coded, so they would never be the same. But those strings live in memory, and we can fuzz the underlying memory of that buffer — and that's how we will find the bug we planted in this kernel module. Here again we see the harness before and after the call to the test function, and the printk where I display the pointers of those two buffers, so we can pass that information to the fuzzer when it executes. (A hedged reconstruction of this module follows below.)

Now on the right I have the VM running, attached to its serial console. I'm just going to compile this kernel module and load it into Ubuntu, and this is the output we would see normally — again, nothing really happens, it just displays the information from the printk, so we know what those addresses are.

So now we start with the first step of the fuzzing exercise: setting up the parent VM. We do that using the KF/x tool from the Kernel Fuzzer for Xen Project, where we specify that we are doing the setup step, the domain's name, and a JSON file with the kernel's debug information. We see that the tool is now waiting for the harness: it's listening for CPUID events and checking whether the magic value is observed. When I insert the kernel module, we see that this event was caught by KF/x and that the parent is ready — meaning the parent is paused with its vCPU sitting just after that CPUID. This is now ready to be fuzzed; we just need to know the address of the buffer we want to fuzz.

We're going to use AFL — American Fuzzy Lop — for fuzzing, and for that we need an input seed. Here I already have a seed specified that is just one ASCII character away from "notbeef", which will make that memcmp succeed; we're just going to use the fuzzer to stumble upon the matching input, and we start from this seed only so the exercise finishes fast enough for the demo to be meaningful. Now we copy the same command-line arguments as before and pass them to AFL. With AFL you specify the input folder that has your seed, the output folder to store the crashing inputs when detected, -X for Xen mode, and we add some extra memory for AFL. Then we run KF/x without the setup step: we specify the address we want to fuzz, which is the first character buffer in memory; we specify that the input will be coming from AFL (@@ is the magic marker for AFL); and we specify how many bytes we actually want to inject into the VM. Technically, with the hypervisor, you could overwrite the entire kernel memory space, so you want to be careful how much memory you overwrite here. We know the character array was size eight, so that's what we want to fuzz.
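For reference, here is a hedged reconstruction of what this planted-bug demo module looks like — names and the magic CPUID leaf are illustrative approximations of the module shown in the demo:

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>

static char input[]  = "deadbeef";   /* the buffer we fuzz in guest memory */
static char target[] = "notbeef";    /* hard-coded comparison string */

static void harness(void)
{
    unsigned int a = 0x13371337u, b = 0, c = 0, d = 0;  /* magic leaf */
    asm volatile("cpuid" : "+a"(a), "=b"(b), "=c"(c), "=d"(d));
}

static noinline void test(const char *buf)
{
    if (!memcmp(buf, target, sizeof(target) - 1)) {
        int *y = NULL;
        *y = 1;                      /* planted bug: NULL deref -> oops */
    }
}

static int __init testmod_init(void)
{
    pr_info("fuzz buffer @ %px, size %zu\n", input, sizeof(input) - 1);
    harness();                       /* start marker */
    test(input);
    harness();                       /* stop marker */
    return 0;
}

static void __exit testmod_exit(void) { }

module_init(testmod_init);
module_exit(testmod_exit);
MODULE_LICENSE("GPL");
```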
So now you see AFL starting up. It's running at about 1,400 executions per second using the VMI breakpoint/single-step coverage-tracing mechanism. We see that it discovered all those dummy paths the kernel module had, and it already found the crash condition that makes that memcmp evaluate true. We can check what that is: we go into the crash folder, and we see that the crash happens when the injected input is "notbeef".

There's an alternative method for collecting coverage-tracing information: Intel Processor Trace. The idea here is that the breakpoint-and-single-step dance is expensive, so we can instead have the processor itself collect the information we need to determine what code was executed. The way Processor Trace works is that you designate a memory region — up to 4 GB in length — as the processor-trace buffer, and the processor itself records into it while the VM is executing. The format of that buffer is quite clever — it's very smartly packed — but that also means that to reconstruct the coverage information we need to parse this buffer, and the existing parser libraries were not really designed for fast decoding, which would limit the fuzzing speed we could achieve. Fortunately, a new library was just released by the open-source community in this space, libxdc, which offers very fast processor-trace buffer decoding, and we are integrating with it to get better speeds than with the breakpoint-and-single-step mechanism. The only downsides of the processor-trace-based mechanism are that it is not available in nested setups, and you can only trace code within a single address space — so you can't fuzz code that crosses the kernel/user-space boundary or switches between processes.

Here we'll see a demo of how this looks. Again, we have the same test module loaded as before, and we start the fuzzer just as before, except at the very end of the command line we specify --ptcov, which activates KF/x's processor-trace-based coverage decoding feature. Now we see the fuzzer running significantly faster than before: where it was about 1,400 executions per second, with processor-trace-based decoding we are up to about 4,000 executions per second — a significant performance improvement. Processor-trace-based coverage is not yet upstream in Xen, but it will be available in Xen 4.15.

Now, what if we can't recompile our target to add the harness code? Everything so far works fine if you have the source code and can place those harnesses in and recompile your target, but oftentimes you don't have that luxury. Fortunately, as we've already seen, breakpoints can be trapped to the hypervisor, so we can simply use a debugger to set breakpoints and use those breakpoints as our harness. The idea is that we run GDB and use it to set a breakpoint before and after the target code we want to fuzz. The only downside is that when you place a breakpoint into your target code, you are actually overwriting the code that was originally there.
So the fuzzer needs to know the first byte that GDB overwrote with the breakpoint. Once that information is available, KF/x just replaces the breakpoint with the original content, and fuzzing works exactly the same way as before.

What we have here now is a test program that runs in user space. It's extremely trivial: it has a character array that just contains the string "world", a printf that displays "hello world", then a test function that checks whether the first character of the buffer is a capital E — and if it is, it creates a segfault condition, effectively a page fault — and then another printf after. If you run this program, obviously nothing bad happens; you just see the hello-world printfs. (A hedged reconstruction of this program follows below.)

So what we want to do is start GDB on this program and place the breakpoints so that we can fuzz that middle section of the code. We place a breakpoint at the printf function — we just say "break printf" — and we can start running the program. The address shown at first, 0x1050, is not the correct virtual address yet, but once the program starts running, the printf function is located by GDB automatically and breakpointed properly. Now we're sitting at the first printf call in this program, so we can start poking around its memory to find out where things are. We actually want to start fuzzing when this printf returns, so we go up one stack frame and place a breakpoint at the point we return to in main. With that, we have technically fully harnessed this program for fuzzing: we have a breakpoint in main just after the first printf, and printf itself is still breakpointed at the start. The only things we still need to determine are the virtual address of the character buffer we want to fuzz, and the original byte that GDB overwrote in main, so that the fuzzer can put it back when fuzzing begins. In the command-line arguments I've already prepared here, that 0x0f value comes from just examining the memory at that address in GDB: 0x0f is the last byte displayed, but it's actually the first byte in memory — it's just shown in reverse order due to endianness.

So now I start KF/x with the breakpoint harness type, and I have the address of the character buffer we want to fuzz. We just continue the execution of the program: we see the first "hello world" printed out, and now the program is sitting just after the printf, ready to be fuzzed — the VM is paused. We know the address of the buffer we want to fuzz, so we can pass this full information to AFL, and we use a seed of "xworld" to start fuzzing from — again, just to reach the crashing condition fast enough for this demo to work. We start from the same point as above: we specify the input folder, the output folder, that we're running in Xen mode, and we add extra memory. We are not in setup mode anymore; we keep the harness breakpoints there along with the start byte, and we specify the address we want to fuzz — where the character buffer is in the program's memory. We get the input from AFL, and we tell it to limit the input to the first four bytes of that buffer.
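A hedged reconstruction of that user-space demo target — the exact source isn't shown in full, but it behaves like this:

```c
#include <stdio.h>

static char buffer[] = "world";

static void test(const char *buf)
{
    if (buf[0] == 'E') {            /* planted bug */
        int *p = NULL;
        *p = 1;                     /* segfault, i.e. a page fault */
    }
}

int main(void)
{
    printf("hello %s\n", buffer);   /* "break printf" lands here; the
                                       second breakpoint goes in main,
                                       right after this call returns */
    test(buffer);                   /* the section we actually fuzz */
    printf("hello %s\n", buffer);
    return 0;
}
```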
So now AFL is starting up, and immediately we see that it found the crashing condition — obviously the seed was really close to it. In the output folder we can see the crashing input it found: "Eworld". Obviously it's only the first ASCII character that matters here, but this shows that you really can use a regular debugger to place your harness; you don't have to compile it into your code.

Another nice feature of KF/x and Xen is that we can use devices with PCI passthrough. We do this because a lot of the time your target code — especially when we're talking about kernel modules — won't fully initialize unless it sees a real device available inside your VM, and we want to make sure the kernel module can get into the same state as if it were running on a real system. So we attach a PCI passthrough device to the parent VM, which causes the kernel module in the parent VM to fully initialize and activate the device. Everything else is the same — we fork and harness the kernel module the same way — but the parent VM is the only VM that actually has access to the device. So a VM fork can't corrupt the physical device attached to its parent, but that also means the fork can't talk to that device: any type of device communication is out of scope. We can read memory that was MMIO-accessible to the kernel module, but anything in the kernel module that tries to talk to the device — writes to MMIO regions, or DMA — is obviously not going to get any response. The device is simply not available to the VM fork.

What we'll look at here is a demo where we fuzz the i915 kernel module, which drives the integrated graphics of Intel CPUs. We see that the PCI device is 00:02.0, and we will pass this device through to a VM. We see that in dom0 the i915 kernel module is currently loaded, so we're first going to make this device assignable. This is done with the Xen toolstack, xl: we just say "xl pci-assignable-add". Afterwards we can verify that the i915 kernel module is no longer using the device in dom0, which means the device is not assigned to dom0 anymore. Then we can just pass it through to the VM — in this case the Ubuntu 20 VM we're using — and here at the bottom of the config we tell QEMU not to emulate any type of VGA device and enable graphics passthrough for the PCI device we just made assignable.

I'm starting this VM up now and connecting to its virtual console. While it boots, I'll show you the harness that we placed into the i915 kernel module — the module that will get loaded when this VM boots; it's already installed in there. The harnessing, as I said, is effectively the same as we saw for the test module: you see the harness function with the CPUID and the magic number, and then we have the printk that displays the information about this ioctl handler here on the right. When the VM boots, this ioctl actually gets called a bunch of times; it receives a buffer from user space and then passes it on to a function that does some type of parsing with it. So we want to fuzz that buffer received from user space.
We can log into the VM on the right and verify that this is actually coming from the i915 kernel module: we see that the module is loaded and active, and that the device actually shows up inside the VM as we expected. So now we repeat the same steps as before: we start the setup step and wait for the first occurrence of that ioctl being called. In this case I got super lucky — the call happened just as I started KF/x's setup mode. So now this VM is sitting there just after the first harness finished executing. We see that the buffer received from user space has size 1680, so we want to read that memory out of the VM and use it as the seed. There is a tool called rwmem that comes with KF/x that can do just that: we specify the domain we want to read the memory from, the file we want to write it to, and how much memory to read from that address — and we see that it dumped that memory out of the VM into that file. You can take a look at the content of the file: this is the actual memory content of that buffer sitting in kernel memory space, and this is what we use to seed our fuzzer. This is the buffer we start mutating from, to see what type of execution we can trigger by feeding variations of this input to the ioctl.

Again, the setup step is about the same, and the fuzzer invocation is about the same as well: we just replace the address, making sure to prefix it with 0x, and we'll use processor-trace-based coverage for this one. And we can start fuzzing this ioctl right away. So now this is running, fuzzing that buffer, creating VM forks in the background and resetting them, and we get quite OK speed. It does go up and down, but we also see new paths getting opened up in the execution of that kernel module. Depending on what kind of code we open up — if that code is compute-heavy, we will actually see the execution speed go down, simply because it takes longer for that code to finish executing and get us back to the stop harness. But that's to be expected when you're fuzzing such a large buffer and there's a lot of processing happening, and we have been opening up quite a lot of different execution paths within this ioctl handler.

Now, detecting double fetches was a hard problem that we wanted to see whether we could integrate into the fuzzer. The nice thing about using the hypervisor for this type of fuzzing is that we can really define any condition as a "crash" condition to be notified of when it occurs, and just feed that back to AFL to record which input triggered it. Normally, detecting double-fetch conditions is quite difficult, because just reviewing the source code might not reveal that you have one — sometimes double fetches get introduced by the compiler itself. A double fetch is, of course, the condition where you read the same input from a memory location twice in a row. If you read from a memory location that is writable by, say, an external device via a DMA page, or by user space on another CPU, then you might run into time-of-check-to-time-of-use errors. So detecting when your code performs double fetches is very important. And with the hypervisor, we already hook into the second-level page-fault handling, so we can detect double fetches using EPT. The idea is that if you know the address of a page you suspect is being double-fetched from — say you know it's a DMA page, so a double fetch from it could be a problem — you can just remove the read/write permissions from that page and detect when it's being accessed. And if the same offset in the page is accessed twice in a row, that's the definition of a double fetch. We can do that easily using the hypervisor on Xen; a sketch follows below.
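A hedged pseudocode sketch of that EPT-based double-fetch detection — helper names are invented, and this is the idea rather than KF/x's actual implementation:

```c
#include <stdint.h>
#include <stdbool.h>

static uint64_t watched_gfn;                 /* suspected DMA page */
static uint64_t last_read_offset = ~0ULL;

void arm_doublefetch_watch(uint64_t gfn)
{
    watched_gfn = gfn;
    ept_set_access(gfn, EPT_ACCESS_NONE);    /* every access now faults */
}

/* Pseudocode: called from the hypervisor's EPT-violation handler. */
void on_ept_violation(uint64_t gpa, uint64_t rip, bool is_read)
{
    uint64_t offset = gpa & 0xFFF;           /* offset within the 4K page */

    /* Two reads in a row from the same offset: a double fetch. */
    if (is_read && offset == last_read_offset)
        report_crash("double fetch: offset 0x%lx, rip 0x%lx", offset, rip);
    last_read_offset = is_read ? offset : ~0ULL;

    /* Let the faulting access through, then re-arm the watch:
     * restore access, single-step one instruction, drop access again. */
    ept_set_access(watched_gfn, EPT_ACCESS_RW);
    singlestep_once();
    ept_set_access(watched_gfn, EPT_ACCESS_NONE);
}
```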
So here I have the same VM as before, and we have another test kernel module that we'll use for this, called doublefetch. It has a single hard-coded character buffer, and the test function below gets called if the first two characters of that string are mutated to "NO". If that happens, it dereferences the memory location passed to it — a DMA page that was allocated just above — twice in a row from the same offset, which is exactly what we want to detect as a double fetch. In the printk we display where the string is and where the DMA page is.

So now we start the setup mode, insert the kernel module, and collect the information necessary: the string's address and the DMA page's address. We use both of these during fuzzing, specifying that we want to watch that DMA page's memory address. The input we start from is the hard-coded content of the buffer — this is the string we will start mutating from, and if the first two characters become "NO", we will see the double fetch happen. So again, we start AFL with the same setup steps as before: we just replace the address with the new address we see here, the input comes from AFL, and we limit the input to the first two characters, since that's all we want to fuzz in this case. We use processor-trace-based coverage, and then we say detect double fetch on that DMA page address. So this is all ready to go.

Now AFL is running, and it has already stumbled upon the crash condition. We can take a look at the string it found to be causing the issue: we see that it starts with "NO", which is obviously the crashing condition in the code, so we know it found the right input. But just to verify, we can run KF/x without AFL and give it that crashing input directly. If we add the debug option to KF/x, we can see a little bit of what's happening behind the scenes: it does a single execution of the code, and we see that the result is indeed reported as a crash. Inside the logs we actually see the EPT events happening, along with the memory that was accessed when each EPT event triggered — because we removed the permissions from that page. And we see that the memory was indeed accessed at that location, as a read access, twice in a row, from two different instruction-pointer locations. So we do see that this indeed detected the double-fetch condition we were interested in. (A hedged reconstruction of the doublefetch module follows below.)
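A hedged reconstruction of the doublefetch demo module — the buffer contents, names, and the allocation (a plain DMA-zone page standing in for a real DMA mapping) are approximations:

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/gfp.h>

static char input[] = "XXbeef";      /* hard-coded buffer we mutate */
static volatile char *dma_page;      /* stand-in for a real DMA mapping */

static void harness(void)
{
    unsigned int a = 0x13371337u, b = 0, c = 0, d = 0;  /* magic leaf */
    asm volatile("cpuid" : "+a"(a), "=b"(b), "=c"(c), "=d"(d));
}

static noinline void test(volatile char *p)
{
    char first  = p[0];              /* fetch #1 */
    char second = p[0];              /* fetch #2: same offset, back to back */
    if (first != second)             /* volatile keeps both reads in place */
        pr_warn("TOCTOU: value changed between fetches\n");
}

static int __init df_init(void)
{
    dma_page = (volatile char *)__get_free_page(GFP_KERNEL | GFP_DMA);
    pr_info("string @ %px, dma page @ %px\n", input, dma_page);
    harness();                       /* start marker */
    if (input[0] == 'N' && input[1] == 'O')
        test(dma_page);              /* double fetch only on "NO..." input */
    harness();                       /* stop marker */
    return 0;
}

static void __exit df_exit(void)
{
    free_page((unsigned long)dma_page);
}

module_init(df_init);
module_exit(df_exit);
MODULE_LICENSE("GPL");
```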
So, all of this code is released as open source under the MIT license. The VM forking feature is already upstream in Xen and was released just this summer. And the Kernel Fuzzer for Xen Project, KF/x, is available on the Intel GitHub page at github.com/intel/kernel-fuzzer-for-xen-project. I hope you will check it out, and I hope it will be useful for security validation of various tools in the future. So thank you for attending this talk. If you have any questions or comments, please reach out. I would also like to give special thanks to the following people for their significant help — they are awesome people. Thank you guys for helping with this; this project wouldn't be the same without your help.