I guess we can start. So, welcome to my talk about GPU passthrough with bhyve. Before I start with my presentation, let me introduce myself. I'm Corvin Köhne, a software developer working for Beckhoff Automation. I'm focusing on x86 and hypervisor technologies, and I've been a FreeBSD src committer since 2022. At Beckhoff we do industrial automation, so we have a product line ranging from industrial PCs over IoT terminals to motor drives, and all of it is controlled by our control software. And that's the motivation for GPU passthrough with bhyve: we have been using Windows for more than 25 years, and our control software, which is called TwinCAT, is integrated into Windows and runs on Windows. But now we want to run our control system on FreeBSD, while our customers are used to Windows, want a Windows interface, and want to run their Windows tools. So we would like to have a machine where a Windows virtual machine provides the user interface and, underneath it, FreeBSD runs our control software.

For my talk, I would like to start with a live demonstration, then shortly discuss the current state, and then discuss how to use GPU passthrough so that everyone else can test it. At the end, there is time for a short Q&A.

Okay, so let's start with the live demonstration. For that I have a system which is connected to a KVM switch, and I'm also connected to a shell over SSH to control the system. The first thing we can check is that this system is running FreeBSD 14. As it's currently not released, it's an alpha version of FreeBSD, but as you will see, GPU passthrough already works on FreeBSD 14. It's not fully working out of the box, though, because one thing you have to do is update the UEFI firmware used by bhyve. The firmware is in the ports tree under sysutils/edk2, and I have a public branch in our Git repository, github.com/Beckhoff/FreeBSD-ports, which updates the edk2 port to fully enable GPU passthrough.

Okay, so before we can start GPU passthrough, it's like any other PCI passthrough: we have to detach the drivers. If we look at our PCI devices, we have the VGA PCI device, our graphics card, at slot 2. So as the first step, just as if you were doing normal PCI passthrough, you detach the driver; I'm detaching it from the USB controller and the graphics device. As the next step, you attach the ppt driver, which is required for PCI passthrough; I'm doing that for both devices now. And then I've prepared a small script to run the GPU passthrough. The most important line is here, where you add the graphics card, at slot 2, to the bhyve command line. There are some options which are required, but I will talk about them later.
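Roughly, the host-side commands in that preparation look like the following sketch. The PCI selectors are examples from this machine, pci0:0:2:0 for the graphics device and pci0:0:20:0 for the USB controller, so substitute whatever addresses pciconf prints for your hardware:

    # find the graphics card among the PCI devices (look for the VGA-class device)
    pciconf -lv
    # detach the current drivers from the GPU and the USB controller
    devctl detach pci0:0:2:0
    devctl detach pci0:0:20:0
    # attach the ppt(4) driver, which is required for PCI passthrough
    devctl set driver pci0:0:2:0 ppt
    devctl set driver pci0:0:20:0 ppt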
So if we now start the VM, you can see on the KVM switch that, first of all, the UEFI firmware starts up. It's a Windows virtual machine, so we have to wait a moment until it boots. Okay, one moment. Okay, so here we are, here's the Windows desktop. And now, to check that we're really running on the graphics card, we can for example start a graphical load; I've picked the FurMark benchmark. If we start the stress test (I'm not sure if it's large enough to read) you should see here that our frame rate is around 200 frames per second, which wouldn't be possible with an emulated graphics card. So as you can see, it works. In preparation for my talk, I also took a screenshot of this, and as you can see here, we have 200 frames per second. So the GPU passthrough is working.

Okay, so let's continue with what is supported on FreeBSD at the moment. The short answer is that we support AMD graphics cards and Intel graphics cards, but not NVIDIA graphics cards. But there are still some issues. For example, many NVIDIA graphics cards have a hardware issue where the card can't be reset properly. This means that if you start up your virtual machine, it works once, but if you reboot your VM, it won't work properly anymore. And on Intel, as I already said, you have to update the edk2 port, but the patches are already online and I'm working on getting them merged.

A bit longer answer distinguishes integrated and dedicated graphics cards for AMD and Intel. I've tested on a Ryzen V1000 system whether GPU passthrough works with integrated AMD graphics, but it seems to fail, and it even fails on QEMU, so it looks like other hypervisors have the same issue. For Intel dedicated graphics cards, I don't know what the current state is, because I've never tested them and never heard that somebody tried; I don't even know if other hypervisors support them, so we will have to see what the state there is.

Okay, so let's continue with the how-to, in case one of you would like to use GPU passthrough. The good news is that it's basically the same as PCI passthrough, which is really nice, because I think most bhyve users have already used PCI passthrough and know how it works. But it's not always that easy, because GPU passthrough often comes with some constraints which you have to take care of. If you do that, it's like any other PCI passthrough. Those constraints are: a ROM file, a GOP driver, the fw_cfg interface, and a proper bhyve configuration. To make this clearer, I will now explain what a ROM, a GOP driver, and fw_cfg are.

So let's start with the ROM. It's a driver stored in a chip on the hardware itself. If you have a graphics card, there's a small flash chip on it, and when you plug the graphics card into your system, the UEFI firmware reads this flash chip and executes the driver. It's responsible for initializing the hardware, because you can't build a UEFI BIOS which is capable of initializing any kind of hardware, especially complex hardware like GPUs or some complex NICs. The ROM can also add a UEFI runtime driver; for example, if you want to PXE boot from a NIC, or if you want graphics output, you need the ROM driver.

And that leads to the question: what is a GOP driver? It's basically the graphics driver for the UEFI. When a system boots, you first go through the UEFI stage, then the boot loader stage, and then the OS stage. To get graphical output in the early stages of the boot, you need the GOP driver; at some later stage, the OS driver takes over driving the GPU. And the GOP driver is mostly included in the ROM of the graphics card.
So the GOP driver is not strictly required for GPU passthrough, but it's really useful, because when you're installing an operating system, the installer mostly doesn't have a graphics driver for your GPU; without a GOP driver you can't see anything, so you can't install the operating system. And, for example, if you want to use the loader or GRUB menu of your virtual machine, you also need the GOP driver.

Okay, and last but not least, fw_cfg. It's basically just an interface which is used for host and guest communication. It was developed by QEMU and is used to pass some information from the hypervisor to the guest. It's mainly used by the firmware, and that's why it's called firmware configuration. For example, you can pass ACPI tables to the guest, or you can specify a boot order, and do many more things with it.

Okay, so now that we know the basics, let me start with AMD GPU passthrough, which is a bit simpler and easier than Intel GPU passthrough. As the first step, you have to extract the ROM from your graphics card. On Linux, you can use sysfs to extract it; on Windows, there's the GPU-Z tool; and unfortunately, on FreeBSD it's not supported yet, or at least I'm not aware of any method for it. So if you have a graphics card, somehow extract the ROM on another system, and then you can continue.

In the next step, you have to call bhyve. Let's start with a basic bhyve command which just adds a disk image, sets some flags, and so on. As the first step, we have to generate ACPI tables, because if the ACPI tables don't match your hardware, the GPU passthrough won't work properly; so you have to add the -A flag. Then you always have to boot with UEFI. The reason is that, to make the ROM usable by the operating system, it has to be executed by the UEFI firmware; so we have to use the UEFI boot ROM. You also have to specify the fw_cfg option, because fw_cfg is what allows the ACPI tables to be passed properly to the guest. And as the last step, you just add your passthru device as you normally would, and don't forget the rom option there, pointing at the path of the ROM you extracted in the first step. That's it for AMD.

Okay, let's go on to Intel. The first step is the same: extract the ROM. But on Intel it's a bit difficult, because sometimes it's not possible. There's also the ACRN hypervisor, which is a hypervisor maintained by Intel, and there they say you should ask your mainboard vendor or Intel to receive a GOP driver. So you have to try whether that works; maybe you're lucky, maybe not, I don't know. The ROM file is not strictly required, but it's really helpful, so if you can get it, that's good news.

Okay, so the next step: you again call bhyve, and you again start with a basic bhyve command. You also have to generate ACPI tables, use a UEFI boot ROM, use fw_cfg, and add the passthru device. And now it starts to get a bit more complicated, because on Intel systems the GPU is always connected to slot 2, and some drivers expect the device to be connected to slot 2. So it's really required that you assign the device to slot 2; otherwise, some drivers refuse to work. Then you can also add the ROM file, similar to AMD. What Intel GPUs additionally require is an LPC device, which is mostly part of a bhyve configuration anyway. Here it's also important to have it at slot 31, because otherwise some drivers just fail. And what is also important is that the LPC bridge has the same PCI IDs as your host system, because some drivers check the PCI IDs of the LPC device to identify the platform they're running on, and if they decide that the platform is not supported, they refuse to work. That's why we have config options in bhyve to match the host's PCI IDs. The sketches below put all of these pieces together.
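To make the extraction step concrete: on Linux, reading the ROM through sysfs looks roughly like this, run as root. The PCI address 0000:01:00.0 is a placeholder for wherever your GPU sits:

    # enable access to the card's expansion ROM
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
    # copy the ROM contents into a file
    cat /sys/bus/pci/devices/0000:01:00.0/rom > gpu.rom
    # disable ROM access again
    echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom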
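For AMD, a minimal sketch of the resulting bhyve invocation could look like this. The CPU and memory sizes, the disk image, the host PCI address 1/0/0, the ROM path, and the VM name are all examples, and the exact fw_cfg spelling may differ between bhyve versions, so double-check bhyve(8):

    # -A generates ACPI tables for the guest; bootrom boots the VM via UEFI,
    # and fwcfg=qemu lets bhyve hand those ACPI tables to the guest firmware.
    bhyve -A -H -P \
        -c 2 -m 4G \
        -s 0,hostbridge \
        -s 2,passthru,1/0/0,rom=/path/to/gpu.rom \
        -s 3,nvme,disk.img \
        -s 31,lpc \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        -l fwcfg,qemu \
        vmname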
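And the Intel variant, under the same assumptions. The important differences are that the passed-through GPU must sit at guest slot 2 (here the host iGPU at 0/2/0) and the LPC bridge at slot 31; on top of that, bhyve's options for mirroring the host's LPC PCI IDs have to be set, whose exact names I'm not reproducing here, so check bhyve(8) for your version:

    # GPU pinned at guest slot 2 and LPC bridge at slot 31, as the Intel
    # graphics drivers expect; remember to also match the host's LPC PCI IDs.
    bhyve -A -H -P \
        -c 2 -m 4G \
        -s 0,hostbridge \
        -s 2,passthru,0/2/0,rom=/path/to/igd.rom \
        -s 3,nvme,disk.img \
        -s 31,lpc \
        -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
        -l fwcfg,qemu \
        vmname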
Okay, and then that's it. So, thanks for your attention. Michael?

Q: What is the state of OVMF support in the firmware?
A: Yes, so I've already posted some patches on Phabricator to solve these issues with Intel GPU passthrough. And as I showed in my live demonstration, I have a public branch from which you can just fetch the ports tree and build your edk2. So hopefully we can merge this soon, and then it will be available on 14.

Q: Is the reset problem the only issue you've hit on NVIDIA? What are the issues with NVIDIA?
A: What I found out on NVIDIA is that you have to add some quirks to bhyve. If you look into the QEMU source code, they are doing the same. For example, the config space of the PCI device is mirrored into MMIO space, so you have to trap this special MMIO region for NVIDIA. I've tested it a bit, but not much, and I'm not sure what exactly is failing currently. It could also be that legacy PCI interrupts are not supported by bhyve yet, and they are required by some graphics cards. So it requires some further investigation.

Q: Can these options be set with the vm-bhyve package?
A: Sorry? Oh, let me repeat: the question was whether we can set these options with the vm-bhyve package. I don't think so, because you need really fine-grained options for that, and I'm not quite sure whether vm-bhyve already supports the new option format. So it would require some patches to vm-bhyve.

Q: Are you working on something like virtio-gpu, where the VM uses the host's graphics hardware for rendering? I know there is some work in that area; is this related?
A: So, this approach passes the GPU completely to the guest: the guest has full access to the device. There are some technologies, like SR-IOV, to split a device and assign it to multiple guests. What I showed is for one guest only; SR-IOV would be for multiple guests, for example. I'm not sure about the current state in bhyve, but if bhyve supports SR-IOV for other devices, like NICs, it should support it for GPUs too. And besides that, for Intel there are two technologies called GVT-g and GVT-s to somehow share the graphics card, but I'm not currently working on that.

Okay, so any other questions? Then, thank you for your attention.