Thank you, everyone. My presentation is about bhyve and GPU passthrough. First, let me introduce myself: my name is Corvin Köhne. I'm a software developer, mainly developing for x86 and working on hypervisor technologies, and I work for Beckhoff Automation. Beckhoff does industrial automation and PC-based control — everything for industrial automation, from panels and industrial PCs to I/O, controllers, and drives. We also provide software.

We're interested in GPU passthrough because, since our founding, we've been using Windows as the operating system and integrating our software into Windows. Now we want to move on and stop integrating our software into Windows. We decided to use FreeBSD, and we're now integrating our software into FreeBSD. But our customers are familiar with Windows and want Windows on their machines. So we use the bhyve hypervisor to start a Windows VM and give the user a machine that looks like a Windows machine, but actually runs FreeBSD with a hypervisor underneath.

OK, let's move on to the interesting part. I'll start with a short live demonstration of how this GPU passthrough works. Then I'll cover what's new, because this presentation is a follow-up to my presentation at the 2021 vendor summit. And at the end, I'll give some instructions so all of you can test GPU passthrough yourselves.

For my live demonstration, I'm using a system with a Core i7 CPU — it looks like this device — and I'm passing its integrated graphics through to a Windows VM. OK, so let's move on with the live demonstration. For the demonstration I've connected a KVM switch to the device, and I've also prepared an SSH session to it. First, we can look at all the PCI devices with pciconf. There we find our VGA device, so our GPU. And because we also want to control this VM, we're also interested in the USB controller, the XHCI device.
We want to pass through both of these devices. First, we have to load the vmm kernel module. Once that's done, we have to make sure that no other driver uses our GPU and the USB device. There's a special driver for this in FreeBSD called ppt. We can use devctl to detach the existing drivers from the devices — we're interested in the PCI devices at slot 2 and slot 20, so we call devctl detach for both. After that, we use devctl to set the ppt driver on both devices. Now we're done; if we check pciconf again, we see that the GPU and the USB device are attached to the ppt driver.

And now we can just start bhyve. We add some flags to the bhyve call: some CPUs, some memory. We always need the host bridge at slot 0, then we need a disk — I've prepared the Windows disk. Next we add the GPU as a passthru device. There's something special about this GPU, which I'll talk about later: there's a rom option, and with it you can add an expansion ROM to the GPU. Here I'm using the GOP driver — more on that later in my talk. Then we need the USB device, the LPC device, and the bootrom, which is a normal UEFI firmware. [Brief interruption while the video feed is restored.] If you're familiar with bhyve, there's nothing special about this call except the rom option, which I'll come back to.

Now you can see that on my device the hypervisor starts up and you immediately get graphical output from the VM, and the Windows VM starts. It takes a while. Where is this device at the moment? At the moment it's at our company, where we have a small rack.
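The host-side preparation just described can be sketched as a short shell session. This is a sketch of the procedure, not the exact demo commands: the PCI selectors (pci0:0:2:0 for the integrated GPU, pci0:0:20:0 for the XHCI controller) are from my demo machine and will differ on yours.

```shell
# Load the bhyve kernel module.
kldload vmm

# Detach whatever drivers currently own the iGPU (slot 2) and the
# XHCI USB controller (slot 20) ...
devctl detach pci0:0:2:0
devctl detach pci0:0:20:0

# ... and hand both devices to the ppt pass-through driver.
devctl set driver pci0:0:2:0 ppt
devctl set driver pci0:0:20:0 ppt

# Verify: both devices should now show up attached to ppt.
pciconf -l | grep ppt
```

These commands require root on a FreeBSD host with the matching hardware, so they are only a template to adapt.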
We've connected the KVM switch to it so that we can see what's happening on the device. And if you check the Task Manager, you'll also see that the graphics card is detected by Windows. There are also some benchmark tools; I've installed one of them to show you that you can even run accelerated graphics, because here you can run different benchmarks, including DirectX 12 — and DirectX 12 wouldn't work without accelerated graphics. So we can just run this test, for example. As you can see, the test starts and it works, so we're using accelerated graphics here. Let's wait a moment until it really starts. OK, there it is. It's a bit laggy, but I think that's more an issue with the network connection than with the GPU — as you saw, there was an FPS counter showing around 400 FPS. Thank you.

OK, let's continue with the presentation. I've shown you that it works for Windows, and I've also tested it with Linux. For example, here's an Ubuntu VM, and there we can also see that we're running in a bhyve VM, but the device is detected and the driver is attached — this works too. I also ran the complete benchmark suite I showed you. For the 2D benchmark, the bhyve VM is on the left side and a native system on the right side; if you compare both numbers, you see that you get about 80% of the full GPU performance in the VM. It's very similar for 3D graphics — even a bit better, because you get about 90% of the native GPU performance.

OK, as I mentioned earlier, this is the follow-up to the vendor summit in 2021. The state back then was: dedicated AMD graphics worked on the CURRENT branch, but integrated graphics didn't work. Dedicated Intel graphics — I don't know, because I never tested them. And with my patches on top of upstream bhyve, on our system, for example, the integrated Intel GPU works, as you have seen.
OK, let's start with dedicated AMD graphics cards. They only require standard PCI pass-through, but there's an exception for Linux and BSD guests, because the driver there needs the video BIOS. Exposing the expansion ROM is part of the PCI specification, but it wasn't implemented in bhyve. I've added this support, and it's upstreamed and available in 13.1. But there's one issue, and it's not bhyve-related — it's OVMF, i.e. boot-ROM-related. The video BIOS is a bit special: the firmware has to shadow the video BIOS, and the operating system searches for that shadow copy. If OVMF doesn't create it, the guest OS won't find the video BIOS and can't use it. So it's supported on the bhyve side, but sadly not on the firmware side yet. That's the current state for AMD graphics.

For integrated AMD graphics, I don't know what's required. I've tried it and haven't had success yet. I've also tried to pass them through on other hypervisors like QEMU, which often has more features, but even on QEMU I wasn't able to pass through the integrated graphics of my devices. So I don't know what's missing there.

The next devices are the dedicated Intel graphics cards, but as I mentioned earlier, I haven't tested them yet — I don't know whether they work or not. We'll have to see.

The integrated Intel graphics work on Linux and BSD guests in upstream, but not for Windows. If you want to know what's missing: the problem is that the integrated graphics have some non-standard PCI resources, and you have to handle them. The Linux and BSD drivers don't seem to care — maybe they don't use them — but the Windows driver does. First of all, there's the so-called graphics stolen memory: we have to make sure this memory is allocated and, in some way, assigned to the device. And there's also the so-called OpRegion, which you also have to allocate and assign.
But you also have to use the host OpRegion, because it contains some information about the configuration that should be used for the graphics card, and this has to match the host configuration, otherwise it doesn't work correctly.

The last common graphics cards are the dedicated ones from Nvidia. I've tried them a bit but haven't worked much on it, so that's still work in progress.

OK, so looking at how we've progressed: at the vendor summit, only dedicated AMD graphics cards worked, on CURRENT. Now they work in 13.1 — with the one exception of Linux and BSD guests, where it doesn't work yet. And the integrated Intel graphics card works for Linux and BSD guests.

OK. Now let's move on to how to test this on your own. I'll start with the GOP driver, which I mentioned in my live demonstration. When you boot up your machine, first the UEFI firmware starts. Then the UEFI tries to find and start the bootloader, and the bootloader tries to find and boot the final operating system. This can take some time, and as all of you know, on normal systems you can enter your firmware setup and modify settings and so on. Responsible for this graphical output is the so-called GOP driver: at some point in the UEFI stage, the GOP driver gets started, and it's responsible for graphical output. Then, at some point when the OS starts, the operating system's driver takes over. So it's not required to use the GOP driver, but if you want graphical output while you're still in the boot stage, you have to use it.

Of course, the question now is how you get the GOP driver. AMD and Nvidia make dedicated graphics cards, so they have to ship the GOP driver with their video BIOS, which is somewhere on the card, and there are some ways to dump this video BIOS. But for integrated Intel graphics, there's no really common way.
Even Intel, with their own ACRN hypervisor, say that you have to ask your board manufacturer for the GOP driver. As for AMD and Nvidia cards: I'm not aware of any method to dump the video BIOS on FreeBSD, but on Linux, for example, you can use sysfs to dump it, and on Windows there's a tool called GPU-Z which has an option to save the video BIOS to a file. There are also some video BIOS files available online, but you have to take care to use the same version — if there's a mismatch between the version on the host system and in the guest, well, I haven't tried it, but I can imagine it could get you into trouble. Adding it is relatively simple; I did it in the demonstration: you just add the rom option and append the path to your GOP driver.

OK, let's continue with specific instructions for the different kinds of graphics cards. If you want to test GPU pass-through with AMD cards, just use the passthru option like you would for any other device. Maybe one hint here: you need the operating system driver if you don't use the GOP driver, and when you install a fresh Windows VM there's mostly no driver yet. So first create the VM with VNC — or, if you already have a VM, set up remote desktop so you can connect remotely — then attach the graphics card and install its driver.

For Linux and BSD guests, it doesn't work on upstream, so you need some patches to OVMF. At Beckhoff, we have a GitHub organization with an EDK2 fork, and I have a branch there — the name combines my handle and the release date of the EDK2 version — that you can use to build your own OVMF that works for GPU pass-through. I also have an open patch on Phabricator.
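The Linux sysfs dump mentioned above can be sketched as follows. This assumes the commonly documented sysfs `rom` attribute; the PCI address 0000:01:00.0 is an example — substitute the address of your card as reported by lspci.

```shell
# Find the PCI address of the graphics card.
lspci | grep -i vga

# The rom attribute only returns data while reading is enabled.
cd /sys/bus/pci/devices/0000:01:00.0
echo 1 > rom              # enable reads of the expansion ROM
cat rom > /tmp/vbios.rom  # dump the video BIOS to a file
echo 0 > rom              # disable reads again
```

This must be run as root on the Linux host that owns the card, so it is a template rather than something to copy verbatim.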
There's also a binary build available, so you don't have to compile OVMF yourself — you can just download it. Then you attach your modified OVMF via the bootrom option, and of course you append the rom option to your passthru line.

OK, and the steps for integrated Intel graphics are very similar: you also have to use the patched OVMF, but you have to rebuild bhyve as well. Again, you have the option of either using my Phabricator patch, or using our GitHub repository to get the same bhyve version I'm using. But there are some things to take care of if you want to start such a VM. First, you have to use the ACPI tables built by bhyve: normally, if you use a bootrom, the ACPI tables from EDK2 are used, but they don't match, so the Windows driver won't be able to use the GPU device. You also have to take care of the slot where you attach the GPU: because it's an integrated GPU, Intel always attaches it at slot 2, and many drivers assume the integrated graphics is at slot 2 and don't check any other devices. There are also some issues on Windows if you don't use slot 31 for the LPC device. And lastly, you have to use a bootrom — things like bhyveload or grub2-bhyve won't work with this GPU pass-through. And of course it's not required, but if you want output during boot, you have to attach the GOP driver.

The Linux and BSD guest side is a bit easier, because you can just attach the Intel GPU as a passthru device like you normally would, and this also works on upstream.

So, as a short summary: if you want to test all these things, first take a look at my Phabricator patch. Then you can also check our Beckhoff GitHub repositories for FreeBSD and EDK2. And if you have questions, you can also mail me. OK, so thank you all for your attention.
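Putting the constraints above together, a bhyve invocation for an Intel iGPU might be sketched like this. The host device addresses (0/2/0 and 0/20/0), the disk image path, the GOP ROM path, and the OVMF path are all examples from a setup like my demo, not authoritative values.

```shell
# -A asks bhyve to build the ACPI tables itself, instead of the
# guest relying on the (mismatching) tables from EDK2.
bhyve -c 2 -m 4G -w -H -A \
  -s 0,hostbridge \
  -s 2,passthru,0/2/0,rom=/vm/intel_gop.rom \
  -s 3,nvme,/vm/windows/disk.img \
  -s 4,passthru,0/20/0 \
  -s 31,lpc \
  -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
  windows-vm
# Guest slot 2 for the iGPU and slot 31 for the LPC device are the
# constraints from the talk; the rom= option loads the GOP driver.
```

Note that this relies on the patched bhyve and OVMF described above; a stock installation won't accept this setup for an integrated Intel GPU.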
[Session host:] If you have any questions, come up to the microphone. I'm sure there are lots of questions, but I'm here, so I'll ask the first one: what's blocking this from landing? What do we need to do to get it into the tree?

Yeah, it's a complicated topic and it's also a bigger patch, so it's a bit more complicated to merge all of this stuff. Also, I think on QEMU the whole GPU pass-through story with integrated Intel devices is complicated too — they don't like to merge some of the patches, because Intel does something a bit strange: since it's an integrated GPU, it's always there, so they can include drivers in the system firmware instead of providing a proper video BIOS for the integrated GPU, and so on. So it's a bit hard. I think even the Linux community doesn't want to merge some of those patches.

OK, are there any other questions? Yes. [Audience: Great talk — it's amazing that you started with a live demo.] Thank you very much.

[Question about the slot numbers.] Slot 2 is important because, as I mentioned, some drivers use slot 2 for the Intel GPU — it's a fixed value and they don't scan the whole PCI bus. But slot 20 is not important; that was just a choice on my side. Maybe I can take a step back here: I've marked it — I don't know if you can see that this 2 is green. This 31 is important, and this 2 is important, for it to work properly in Windows VMs. I'm not sure exactly which drivers; I always use slot 2 so that I don't have any issues, so I haven't tested it much. I know that the GOP driver has problems if the GPU isn't attached at slot 2; I don't know if the operating system driver has problems. But I always attach it at slot 2 and then I don't have any problems.

[Question:] How are your users connecting to those workstations? Are they physically connected, or is it via VNC?
Well, you mean how I managed the live demonstration? [No — how do the end users that use these as their Windows workstations access them? Do they have a shell on there, or VNC, or is it a physical thing?] No — with GPU pass-through, you can just use it directly on the device. Let me go back to the first slide showing our devices. On the left side is a panel PC, and this is the use case for our GPU pass-through: we sell this device to a customer, run FreeBSD on it for our runtime, give the user the display output from Windows, and pass through the GPU. So the GPU is passed through to the Windows VM, and the end user actually works in Windows, while the real-time software keeps running in the background. [OK. Thank you.]

[Question: can the startup be automated?] Yes, you can just use cron with an @reboot entry: run this bhyve command. Then when your machine boots up, you see the FreeBSD screen for a short moment, and then the VM starts up and you get your VM screen.

[Question: can you switch back to the host — shut down the Windows VM and get the display back on FreeBSD?] Not yet. But this is not a problem of bhyve; this is a problem of the Intel driver. If you attach the Intel GPU driver, everything is fine. If you detach it, everything is fine. If you attach it a second time, the system crashes. And the problem is: normally you boot up, you have your Intel driver, then you detach it to use the GPU for your bhyve VM — and when you're done, you would attach it a second time, and then your system crashes. I've also talked to someone who is working on the drm-kmod port, and he's also aware of this issue, but there's no solution yet.

[Is that a bug in the Intel driver?] So I'm not sure what the problem is.
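The @reboot autostart mentioned in the answer above can be sketched as a crontab fragment. The wrapper-script path is hypothetical, not something from the talk.

```shell
# root's crontab (crontab -e): start the Windows VM once at every boot.
# /usr/local/sbin/start-windows-vm is a hypothetical wrapper script that
# loads vmm, moves the devices to the ppt driver, and runs the bhyve
# command from the demo.
@reboot /usr/local/sbin/start-windows-vm
```

This is a config fragment for cron, not an executable script on its own.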
I once took a look at the core dump, and it looks like there was an issue with driver naming: the Intel driver tries to attach a device with the same name a second time, and then it crashes. [Audience: Yes, it probably creates its child device twice, so the second attach fails.] Great, thank you very much.