I guess it's time. Welcome. Let me start with a couple of questions to wake you up. Please raise your hand if you've ever run a Windows VM on Linux. Excellent. And please raise your hand if you've ever run a Windows VM on Linux using KVM/QEMU. Or if you didn't know what it was, then it was probably KVM/QEMU. Good, good. That's actually a majority of the audience. So welcome to my talk. My name is Ladi, I'm a software engineer on the virtualization team at Red Hat, and I focus on Windows guests, which is the topic of this presentation.

The presentation has roughly three parts. In the first one, I'll talk about virtualization in general to explain the unique challenges with closed-source guest operating systems like Microsoft Windows. In the second part, I'll go into detail about the Windows guest support that exists today, tell you what's new, and if we have time, do a demo or two. And in the last part, we'll look at the development process, where you get the sources and how you build them, and at the roadmap, what's coming next for Windows guests.

The first picture shouldn't surprise you. This is a very high-level picture, my mental picture of virtualization. You have the virtual hardware, or the hypervisor, at the bottom, with virtual firmware and the guest operating system sitting on top of that. The arrows are meant to be interfaces, standard interfaces. For the sake of this presentation you can assume we're talking about a PC; an x86 PC would be a good thing to imagine. So these could be things like PCI, ACPI, IDE, and these could be BIOS calls, and this could be the BIOS.

So how would you go about building all this? Let's say you build the hypervisor, you build the virtual hardware, you build the guest operating system. You look at the specification, you open the PCI spec, you implement it in your hypervisor and expose it exactly the way the spec says. Then you do the same thing in the operating system: you read the spec, you write your driver for the device, and then you run it and it works, right? Except it probably doesn't. And why? If you saw John's talk in the morning, you have an idea of what the obstacles might be. The spec is one thing and the reality is slightly different.

So you'll find yourself debugging and iterating and fixing this whole stack for quite some time, because you will find things like devices where the operating system expects the device to behave a certain way that is not in the spec, but all the actual hardware behaves that way. So you say, okay, I have to implement it that way, because it makes sense. Or, on the other hand, maybe in your operating system you'll find out that you rely on specific timing that real hardware provides, but your virtual hardware doesn't have these strong guarantees, because the VM may be scheduled for a moment and then not running. So you'll fix the operating system. You'll be fixing both sides, and it may end up looking like this.

This is an artist's impression, so don't read too much into it. The operating system here decided to completely bypass the firmware and go directly to hardware.
There's an impedance mismatch here, which is corrected with a transformer of some sort. In general, what you will end up doing is: if it's a bug in the virtual hardware or in the hypervisor, you just fix the bug. If it's a bug in the operating system, you fix your operating system. If it's something in between and you're not quite sure, then you follow your engineering sense. You'll probably say, okay, my hypervisor should support as many guest operating systems as possible, so I'll do whatever maximizes those chances, and the same the other way around.

So now you've built your system and it works fine. But it still has one problem: it's going to be slow. And why is it going to be slow? If you were here two hours ago for Radim's talk, you saw how virtualization works at the hardware level, and you know that VM exits are expensive. And real hardware, unfortunately, tends to cause many VM exits. For example IDE, which you may also have seen in the talk this morning: to read data from disk, you basically read it word by word. There's an I/O operation for every two bytes of data read from the disk. That's insane. For two bytes, there's going to be a VM exit that will cost you something like 800 or 1000 CPU cycles, then you return that little bit of data and get back into the VM, and this goes on and on. A single 512-byte sector already means on the order of 256 exits, so hundreds of thousands of cycles of overhead before any useful work gets done. It's going to be slow.

So what do you do at this point? You're probably going to do something like this. You'll say: real hardware is no good, most of it is legacy, designed in the dark ages of computing. I'll design something brand new. So some of this hardware doesn't actually exist in the real world; it's something I made up, and I also built matching support into my guest operating system so it can talk to it. And it will be very fast, which you can tell by the number of arrows, because it's very hot, right?

So far so good. This is virtualization in general. In our case, of course, the guest operating system is Microsoft Windows and the hypervisor is KVM. The problem is that you will most likely not be able to fix bugs in Windows, because you are not Microsoft. The best you can do is report those bugs. They may listen to you, they may not, they may fix it in a future release, but you are very limited. So one aspect of running Windows as a guest operating system is that you end up implementing a lot of workarounds and seemingly stupid things in the layers you are in control of, which are the virtual hardware and the virtual firmware. You may find a lot of comments in KVM saying "because this is what Windows needs". It doesn't make any sense there, but there has to be a comment, because otherwise somebody would delete the code, since it doesn't make any sense.

So that's functionality. We do a lot of reverse engineering and a lot of black-box observation, because we don't have the sources. But it's fine, it works. That's, I guess, the "good" in the title of the talk. It works. How about performance? How about paravirtualization?
Luckily, Windows does have a mature and reasonable kernel driver model. You can build drivers and plug them into your guest operating system in a way very similar to how you load kernel modules in Linux. As long as they're built for the right version of Windows, they effectively become part of the Windows kernel. So you can build drivers for your paravirtual devices.

Now, one slide on virtio, which has also been mentioned here today. Just briefly, this is the paravirtualization technology that we happen to use in QEMU. The important thing here is this "one port or memory I/O", or actually it should be "at most one port or memory I/O". You can see the intent is to minimize VM exits, just like I mentioned before. Instead of exiting for each byte, you exit only once per unit of something, where "something" is defined by whatever you're trying to do, by what kind of device it is. And virtio uses this concept of virtqueues, which serve as a transport between the guest and the host.

So now I can finally show you the name of the package, because now you'll understand why it's named that way. The package containing the Windows guest support is called virtio-win, where virtio is the protocol and win stands for Windows. It has a bunch of drivers, all of them drivers for paravirtual hardware. It also has a guest agent, which is a piece of code that runs inside the Windows VM and provides services like changing passwords, setting and getting the time, or accessing files in the guest. It's not essential; you can do without it. Currently we're at version 130.

This will be an overview of the individual drivers in the package, starting with the driver called NetKVM. That's a historical name; in the virtio spec I believe the device is called just "network". This is the driver for the paravirtual network device. To reinforce my point about performance, here is a simple benchmark I did on my laptop. Nothing scientific, it wasn't meant to be super precise, but you can tell that roughly you get twice the performance of real hardware. These two are real network cards emulated by QEMU, and this is the paravirtual one. When you get to larger packets, it's roughly twice as fast.

Some more details about this driver. Here's how you would run QEMU if you want this device exposed. The dots mean you're supposed to supply your backend here; you're supposed to say what it's connected to, like an actual network adapter, or some kind of virtual interface, or whatever. On the right side is a screenshot from the Windows Device Manager, so this is what it's going to look like in Windows once you have the driver for this device installed. It's just an Ethernet adapter, nothing special, and it uses the virtqueues to move data back and forth, to send and receive packets.

One interesting feature, new in version 126, which is from last September or October I believe, is RSC. The name comes from Microsoft; it's supported on Windows 8 and newer, and it is one way of doing, basically, segmentation offload.
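Just to make that command line concrete, here's a minimal sketch with a simple user-mode backend filled in for the dots; the disk image name, the netdev id, and the MAC address are made up, and in practice your backend would more likely be a tap device or a bridge:

    # virtio-net NIC (NetKVM driver in the guest) with a user-mode network backend
    qemu-system-x86_64 -enable-kvm -m 4G -drive file=win10.qcow2 \
        -netdev user,id=net0 \
        -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56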
So the adapter, even though it's an Ethernet adapter and its MTU is 1500 or whatever for Ethernet, can indicate larger segments for TCP and UDP, so you don't waste as much time on overhead like processing headers.

Once your network is fast, what do you need next? You probably need storage. So I also benchmarked the storage driver. This time I used IDE, AHCI, and megasas, which are all hardware emulated by QEMU, and this is the paravirtual device and driver. Oh, by the way, these slide titles correspond to directory names in the ISO with the drivers. You'll see that later, but just so you know where these names come from, because I don't think you will find them anywhere else; for historical reasons, these are the names of the Windows guest drivers. So again, it's significantly faster.

These slides all have a similar layout: you see how you configure the device in QEMU, you get a Device Manager screenshot, and it shows up as a SCSI adapter. This one actually says "pass-through", because all it really does is pass SCSI commands through to QEMU. Windows issues a SCSI command, the driver puts it on the virtqueue, and it goes pretty much unchanged to the host, which processes it. Simple. What's new in 126 is multi-queue: you can have more than one of these virtqueues running in parallel, and if you have enough CPUs you can get better performance out of that.

We have another storage driver, because there's another storage device in QEMU. This one doesn't have pass-through in the name, because it's not pass-through; it actually does some transformation. It's very similar, only the protocol is different; it's a custom protocol defined in the virtio spec. The SCSI command comes in from Windows, the driver transforms it into something else and sends it over virtio to the host. Simple. And by the way, this one is supported on Windows XP and newer, as opposed to vioscsi, which only works on Vista and later. So if you really have to run XP and you want faster storage, you have to use this driver and this device. I'll show a rough command line for the storage devices in a moment.

The balloon driver. The concept of ballooning, for those of you who don't know it, and I'm assuming most of you do, so just quickly: your host may want to reclaim some memory from the guest. Let's say the guest is running with four gigabytes of RAM and the host decides it wants two gigabytes back while the guest is running. How do you do that? It's a nice trick. You make the guest allocate two gigabytes in its own memory space, promise not to use them for anything, and send the addresses of the allocated pages back to the host. The host then knows those pages are back in its ownership. It can be fragile, especially on Windows, because there's no guarantee that the driver will be loaded. A malicious user can unload the driver, in which case the balloon kind of bursts and you can get into trouble. But it's a nice trick.
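As promised, here is a rough sketch of the storage command lines, with the balloon device thrown in as well; the image names, the ids, and the num_queues value are just examples, and num_queues is only there to show where the multi-queue knob lives:

    # virtio-scsi HBA (vioscsi driver in the guest) with one data disk attached,
    # the system disk on virtio-blk (viostor driver), and the memory balloon device
    qemu-system-x86_64 -enable-kvm -m 4G \
        -device virtio-scsi-pci,id=scsi0,num_queues=4 \
        -drive file=data.qcow2,if=none,id=drive0 \
        -device scsi-hd,drive=drive0,bus=scsi0.0 \
        -drive file=win10.qcow2,if=virtio \
        -device virtio-balloon-pci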
There's also a serial port device and driver. This is just a bi-directional pipe between the guest and the host. On the host side you can connect it to whatever you want; the three dots here are a specification of what the pipe is connected to. It can be a socket, it can be a file, it can be whatever; on the Linux side there's a bunch of options. On the Windows side it will be exposed as a file, and the file is kind of hidden in the Windows kernel object namespace, which is somewhat obscure. It's similar to Linux having devices in /dev, but it isn't as easily accessible; you have to jump through some hoops to get to the file in Windows. Once you have it, though, you open it, you write to it and the data appears on the other side, you read from it and you get the data from the other side. It's just a pipe. And by the way, this is what the guest agent, and not only the QEMU guest agent but I guess most guest agents built for Windows, uses to communicate with the host.

RNG is a paravirtual random number generator, or rather a random number source, because it doesn't actually generate random numbers. The idea is that you have a random number device on the host, preferably a hardware RNG, which generates high-quality entropy, and you want to have that available in the guest. For Windows we have a driver which plugs into the Windows crypto stack, so Windows gets the ability to pull these random numbers from the host automatically. It's really neat. I'll show a quick command-line sketch for the serial port and the RNG in a moment.

Input. Input is a brand new driver, and the device isn't old either. There are four ways you can configure this; you don't need all of them, just one of the lines. This is not super clear, so I'll mention it explicitly: you either do tablet, mouse, or keyboard, which are devices emulated by QEMU, or you do the last one, which is pass-through, where you assign a hardware device connected to your host to the guest. Now, what is this good for? Why wouldn't you just use USB or something? There may be scenarios where you don't want USB or PS/2 or anything else, and you really want virtio as the transport for your input device. And the nice part, which unfortunately doesn't quite apply to Windows, is the pass-through. If you have some obscure joystick, say, and you want to make it available in the guest, you can do it with the pass-through command line, except that it uses the evdev protocol, which is Linux-specific. So what ends up happening is that on the host side the device will very likely be an HID device, there's an evdev layer built on top of that on the host, the virtqueue carries these evdev events, and then in the guest, because Windows doesn't understand evdev, HID is built on top of that again. So you have this double translation, and the more obscure the device is, the more likely it is that something gets lost in translation, because the mapping is not one-to-one. But for completeness, we do have this driver.

And I think this is, almost there, the last driver slide: the video driver. There are actually two drivers, because the video stack in Windows changed; Windows 7 was the last one to support XDDM-style drivers, so we have drivers for both models. The new one, the WDDM driver for Windows 8 and newer, is not that feature-rich. It's really just a framebuffer and doesn't do any kind of acceleration, so I personally use RDP with my Windows 8 or Windows 10 guests, because it's faster than that. By the way, I think this is the reason the demo in the GStreamer talk was done on Windows 7, if I'm not mistaken: the driver there is more capable.

And there's a bunch of smaller bits that are also part of the package.
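Going back to the serial port and the RNG for a second, a sketch of those two might look like this; the socket path and the port name are made up, and /dev/urandom is just one possible entropy source on the host:

    # virtio RNG fed from the host's /dev/urandom, plus a virtio-serial port
    # backed by a UNIX socket on the host; the port shows up as a named
    # channel inside the Windows guest
    qemu-system-x86_64 -enable-kvm -m 4G -drive file=win10.qcow2,if=virtio \
        -object rng-random,id=rng0,filename=/dev/urandom \
        -device virtio-rng-pci,rng=rng0 \
        -device virtio-serial-pci \
        -chardev socket,id=ch0,path=/tmp/win-channel.sock,server,nowait \
        -device virtserialport,chardev=ch0,name=org.example.port.0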
One of those smaller bits I'll mention briefly is the floppy image. It may surprise you why there is a floppy image in the package, and it's because of Windows XP, which technically is still supported. When you install Windows XP on a system with a virtio disk, you need to load the virtio driver as part of the installation, and the XP installer is very simple: the only storage device it can access is a floppy drive. That's why there's a floppy image. It's probably not that useful anymore and may be removed soon.

Okay, let's do a demo. I just installed Windows 10, 64-bit, in virt-manager on my Fedora machine. It's freshly installed and working fine. I'll shut it down and we'll look at the devices. One thing you'll notice is that you have an IDE disk and an RTL8139 NIC. This is exactly what I told you you shouldn't be doing, because it's slow. Remember those charts: these are the slow options, and we want the fast options. Fast is good. So let's do virtio for the NIC, and let's do virtio for the disk. I'll save it and start the guest. What do you think is going to happen? I heard somebody say blue screen; you are absolutely correct, there's going to be a blue screen. Why? Because we don't have a driver for that device. The problem is that it's our boot drive; we're booting from it, and we made it a virtio drive. The BIOS knows how to talk virtio to the drive, so it kind of looks like it's booting the OS, but the OS doesn't have the driver, so it crashes.

So I'll show you a trick. It's a very simple trick, but surprisingly many people don't know about it. I still want to run my guest with a virtio disk, because it's faster. What I'm going to do is switch this back to IDE temporarily, and add another disk. The storage can be small, like one gigabyte, and it will be virtio. So I have one virtio disk and one IDE disk. Now I can run my guest again. No, we fixed it, we know we fixed it. This time it's going to boot, because the boot drive is IDE again, but it will give us the option to install the driver for the virtio disk.

Let's look at the Device Manager. We have a bunch of devices with no drivers, but my guess is that it's this SCSI controller. I'm going to say update driver, and now you have two options. I'll first do it the hard way: I'll say I know which driver I want, and I'll keep saying no, I want to customize this, I want to load it exactly from my driver disk. Oh, I should have mentioned this: if you look at the VM configuration, there's a CD-ROM and I have this ISO plugged in. The ISO is the important part of the virtio-win package; it's what you get when you install the package, and you're supposed to do exactly this, just mount it and make it available for the guest to use. So I know that I'm installing viostor, actually, this time, and I know that I'm on Windows 10, 64-bit, and this is the driver. I'll say fine, I trust it. Now that I have the driver, I can go back here, remove the extra disk, because I no longer need it, and switch this guy to virtio. And it's going to work, and see, it's faster. Did you see how fast that was? Because of virtio.
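If you're doing this with plain QEMU instead of virt-manager, the same trick looks roughly like this; the image names are made up, and the idea is just to keep the boot disk on IDE, add a small dummy disk on virtio, and attach the virtio-win ISO so Windows can pull the driver from it:

    # boot disk stays on IDE for now, a small dummy disk sits on virtio,
    # and the virtio-win ISO is attached as a CD-ROM for the driver files
    qemu-system-x86_64 -enable-kvm -m 4G \
        -drive file=win10.qcow2,if=ide \
        -drive file=dummy.qcow2,if=virtio \
        -cdrom virtio-win.iso
    # once the driver is installed, drop the dummy disk and switch the
    # boot disk to -drive file=win10.qcow2,if=virtio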
And now I'll show you the other way, the easier way, but there's a catch, something you have to be aware of. I also switched the NIC over to virtio, so that's going to be this Ethernet controller, probably. I'll go to the driver, say update. I don't think the first option will work, but I can just say D:, because that's my CD-ROM, and say include subfolders, and with a little bit of luck Windows will find the driver and install it. Okay, done. Now, the one thing to be aware of: prior to version 126, this was not reliable, and Windows could pick up the wrong driver, the wrong version of the driver, maybe for a different Windows version, which would cause problems. So just be aware of that. If something is not working fine, go back and be explicit about where you're loading the driver from; go directly into that Win8 or Win7 directory or whatever and say, this is the driver I want, don't make the choice for me, I know what I'm doing.

Okay, so that was one demo. I have another one, which is kind of bleeding edge. This is a Windows Server 2016 running in a VM, and it has the Hyper-V role enabled. And I have Windows XP installed in a VM inside it. I just launched it, and now I have Windows XP in a VM, in a VM. For this you'll need Linux kernel 4.10, which I believe is still in RC, and you'll need QEMU 2.7 or newer, so this is really something that was enabled recently. This is nested virtualization. If you saw Radim's talk, he said it was slow; yes, it is slow. Why would you want to do this? It may be useful for testing, and you can package your VMs together with their hypervisor and move the whole thing somewhere else, so that's one use case. We also heard about new security features in Windows, where Windows basically does what Radim showed as well, virtualizing itself to harden its kernel security. So nested virtualization is something that may actually be required in the future to run Windows guests on QEMU/KVM. I'll show the host-side setup for this at the end of this part.

So, back to the presentation. This is the final, third part. The source code of the paravirtual drivers lives on GitHub; feel free to open a pull request or an issue, we'll be happy to work with you. The QXL driver, the newer video driver, lives here, and the guest agent, which I only mentioned briefly, is actually part of the QEMU tree.

We build the drivers with Visual Studio 2015. I think it may in theory be possible to build Windows kernel drivers using MinGW or some other open source or free compiler, but for practical reasons we choose to use Visual Studio 2015 running on Windows. The good thing is that you can use the free edition; I just checked, you can download the free Community edition of Visual Studio and build these drivers with it. What you have to be aware of is that the binaries as you build them are not signed, and Windows will not load unsigned drivers, except for some very, very old versions. But look at MSDN, there are good tutorials on how to handle that: you can create your own self-signed certificate, sign your drivers, and they will be loaded in Windows.
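And here is the host-side setup I promised for the nested demo, assuming an Intel machine; on AMD it's the kvm_amd module and the svm flag instead, and the image name is made up:

    # check whether nested virtualization is enabled for KVM on Intel
    cat /sys/module/kvm_intel/parameters/nested
    # enable it (reload the module while no VMs are running)
    sudo modprobe -r kvm_intel
    sudo modprobe kvm_intel nested=1
    # start the Windows Server 2016 guest with the host CPU model so it sees
    # VMX and can enable the Hyper-V role
    qemu-system-x86_64 -enable-kvm -cpu host -m 8G -drive file=ws2016.qcow2,if=virtio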
What's coming next for Windows guest support? Performance is an ongoing project; there's been a lot of work done on the NetKVM network driver, and that work is still continuing. viostor, the other storage driver, is going to get multi-queue support as well, something the vioscsi driver already has. For all of you gamers, good news: virtio-gpu. The work has started, and it's still in the early stages of development, but this would be a video device with full 2D and 3D acceleration, so it would be perfect for playing Windows games on your Linux host. 9p is a protocol for file sharing; it's about convenience, you don't have to set up Samba on your host or any of that, just very convenient file sharing. virtio-vsock is a relatively new virtio device. It's similar to the serial device, virtio-serial, in that it's a bi-directional communication channel, but serial has more of a stream or file semantics, whereas virtio-vsock is more like a socket; the idea is that it basically adds another socket address family. At some point we may want to port the drivers to ARM, if ARM turns out to be a thing, really, and if Windows on ARM, like server Windows on ARM, also materializes. And the last one is very interesting. This work was started by a company called Virtuozzo, and the idea is that instead of adding third-party drivers, the virtio drivers, to Windows to make it compatible with QEMU, you would change QEMU to also expose VMBus, which is the Hyper-V equivalent of virtio. If you exposed VMBus devices from QEMU, you would be able to run Windows unchanged. You wouldn't run into these blue screens, because Windows already has these drivers in it; that's what it uses when it runs on Hyper-V. So that's more of a long-term project.

I think this is the end. If you want to contribute, please do. We have very few outside contributions at this point; it's mostly Red Hat, a little bit of Virtuozzo, a little bit of Google, but the project is mostly run by Red Hat. So please get in touch, send pull requests, talk to us. This is my email address, this is my blog where I occasionally post something related to Windows guests, and again, the most important thing is the project on GitHub. And I'll be happy to take questions.

Yep. So the question is, which one is better, vioscsi or viostor? As far as I know, performance-wise it's about the same. viostor is also supported on XP, a minor point, so it has wider support, and it's also better tested, because it has existed for a longer time in QEMU. So basically it doesn't matter, it's up to you. If you're creating a new VM, I would go for vioscsi because it's newer, but there's no huge selling point for either; it's really a wash.

Oh, that's a really good point. The comment was that the SCSI one supports the discard, or TRIM, command, which is very important for SSDs.

Yep. So the question is, would it make sense to have a virtio implementation of FUSE? I'm not sure, I'll have to think it through; let's talk offline after the talk.

Yes. So the question is, is there a benchmark comparing different Windows versions? I don't think there are good benchmarks available, and I wouldn't expect the performance to be different between client versions of Windows and server versions of Windows.
It's probably going to be about the same.

The question is, is it still possible to provide a driver disk when installing Windows 10? Yes, it is still possible, and starting with Windows Vista it can be a CD, so it can be exactly the same ISO that I used when installing the driver on the live system; it works for installation as well. There are also ways to completely eliminate this step from Windows setup. For example, you can build a custom Windows image which already has the virtio drivers in it. There are tools available for that, I think it's called something like the Windows Assessment and Deployment Kit: tools provided by Microsoft where the input is the vanilla Windows ISO, you add your drivers, and you get out another ISO which already has these drivers in it. You install from that media and you see the disks just as if they were IDE disks.

And the question is, why are these drivers not part of Windows, why aren't they in-box? It's a good question, and it's something we would like to look at. Step one would actually be to have them on Windows Update, so you wouldn't need the ISO unless you want your own special version. Most users would just get them from Windows Update; they would say "search for this driver automatically" or something and get it from there. That's something that should actually be coming pretty soon; it's not that hard to do.

Yeah. So the question is, is there collaboration with Microsoft on this specifically? There are no contributions from Microsoft. We do have a partnership of some sort where we, in theory, can get support when we run into issues, because sometimes something just doesn't work the way it's documented, or the documentation is not enough. So in theory we have that support; in practice it hasn't worked that well, and we end up going through the usual Microsoft support that you use when you have an MSDN subscription. So yes, sadly, there's room for improvement here.

So the question is audio. I don't think there's a paravirtual audio device, and the one that QEMU exposes is AC97, I believe. That one is supported by an in-box Windows driver, so Windows supports it out of the box. Let's talk about it after the talk, I'm curious what it is; it didn't work well for me when I tried it.

More questions? So the question is, with virtio-gpu, do you have to dedicate your actual video card to one guest? I don't think that's the case. I think the idea is that it will be a shared resource used by multiple guests.

Okay, no more questions. Thank you.