And even more specifically, I work on the weird architectures. That means if it's not x86, I'm probably going to be assigned that bug. As for the scope of this presentation, we're going to be talking, as I said, about the 64-bit ARM architecture, also known as AArch64. We're going to be focusing on the KVM virtualization stack, which I will assume you are mostly familiar with, and on the improvements made over the past year or so. We will be looking at a selection of user-visible changes, because there have been so many improvements on all layers of the stack but there's only so much time in this presentation, so we're going to just pick a few. And they will be at the libvirt level, because that's what I work on, so that's the stuff I know best. You will notice during the presentation that I go into an increasing level of detail, matching my increasing involvement with each specific feature. So we'll be skipping over a lot of improvements; there have been so many, but we will just focus on a few. And if you were attracted to this talk because of the pun in the title, there are not going to be any more puns, so I'm sorry for that. I wanted to put a lot of puns in there, but it didn't happen.

So, the goals we had when we started working on this. One was to get rid of the limitations: there were some limitations in AArch64 virtualization, and we didn't want them. We wanted to move towards a world where AArch64 and x86 are basically at feature parity. And we wanted to increase the user friendliness, which from the libvirt point of view mostly means that if you are using AArch64 or x86, you shouldn't have to think too much about it. You should just be able to take your existing skills: if you know anything about virtualization on x86, you should be able to start right away on AArch64 without thinking too much about it.

So the first of the features we're talking about is the PMU. PMU stands for Performance Monitoring Unit, and it's something that allows you to look into the performance counters of the hardware and figure out whether your application is doing too many cache misses, or things like that. When we started, on x86 you could just put this snippet into your configuration and you got a virtual PMU you could look into. On AArch64 you had nothing, because that was not possible. Actually, PMU support was implemented in QEMU, but there was no way to control it: depending on the machine type it could be on or it could be off, and you had no way of deciding as a user. So that was our problem. The solution to this problem was fairly easy, in a way; it was fairly easy for me, because I didn't have to write a single line of code for it. The support that was there for x86 just worked for AArch64. For me it was easy, but it was that easy only because we had pushed for a consistent name and behavior on the QEMU side. So if you need any proof that being consistent is good, this is one example. If we compare x86 with AArch64 again, they are now exactly the same, which is exactly what we wanted.
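The snippet being described is along these lines; this is a minimal sketch using the standard libvirt <pmu> feature element rather than the exact slide contents:

    <features>
      <!-- expose a virtual PMU to the guest; state='off' disables it explicitly -->
      <pmu state='on'/>
    </features>

The same element works unchanged whether the guest is x86 or AArch64, which is the consistency being talked about here.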
Next up, video cards. Again, we start with the comparison: on x86 you could get a video card with the model set to virtio, so it's going to be a virtio video card. On AArch64 you got nothing, again, so you were stuck with the serial console or VNC or whatever. So what are the issues? We have two models of virtio-backed video cards in QEMU: one is virtio-gpu, the other one is virtio-vga. The only difference between them is that virtio-vga provides a legacy VGA framebuffer, whereas virtio-gpu doesn't have it. So virtio-vga has more stuff, and we would want to use that, but unfortunately it cannot work on AArch64 due to cache coherency issues. I will not go into details; that's what they told me, it's cache coherency issues. We just cannot have a legacy VGA framebuffer on AArch64. So the solution, of course, is to use virtio-gpu instead.

To make that happen, there were some changes needed. The first one was to create an EDK2 driver so the firmware can put stuff on the screen and you can see your UEFI menu. Then we had to fix Xorg, because Xorg considered any GPU without a legacy VGA framebuffer to be a secondary device, and as such it required configuration for it to work. It has now been patched so that if the only GPU you have doesn't have the legacy framebuffer, it will be considered the primary one, and things will start right up. And by the way, this was also useful for people using physical AArch64 setups, so this was pushed by virtualization but actually helped people running actual AArch64 machines as well. And then we had to change libvirt so it would use virtio-gpu instead of virtio-vga on AArch64. We want to stick with virtio-vga for all the other architectures because, as we said, it has all the same features and more; the legacy VGA framebuffer doesn't hurt when you can use it. And now this is what you have in your x86 guest and this is what you have in your AArch64 guest: exactly the same XML, very user-friendly.
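As a rough sketch of that XML (the slides themselves aren't reproduced in the transcript), the video device element looks something like this:

    <video>
      <!-- libvirt resolves the generic 'virtio' model to virtio-vga on x86
           and to virtio-gpu on AArch64, as described above -->
      <model type='virtio'/>
    </video>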
Then we're going to talk about interrupt controllers, a very fundamental part of any architecture, or maybe not. On AArch64 we have the GIC, or "gick", I'm not sure how it's pronounced, but it's the Generic Interrupt Controller, and it's basically the same thing as the APIC on x86. This is something that cannot be quite the same on x86 and AArch64: on x86 you have your APIC, while on AArch64 you have your GIC, but you used to have to add it manually, which is not very friendly to the user. The reason you had to add it manually is that there are two versions of the GIC, v2 and v3, and they are not compatible with one another. In theory you could have a host that can run both v2 and v3 guests, but in practice, most of the time, if your host is v3 you're going to want v3 guests; otherwise they will just not run. So you have to be mindful of that, but we want the user not to have to worry about it. So QEMU added a new query-gic-capabilities QMP command, and through that command libvirt can figure out what versions the binary supports and pick one or the other automatically for the user. If we now compare again, we have the APIC on x86, and on AArch64 we have the same XML as before, but now the GIC has been added automatically for you by libvirt. So you don't have to worry about it for one second, unless you want to: of course, you can override that choice if you have very good reasons to do so.
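A minimal sketch of the element libvirt now adds on its own; the version shown is only an example, since libvirt picks whichever version the host and the QEMU binary actually support, as reported by query-gic-capabilities:

    <features>
      <!-- added automatically by libvirt; set it by hand only to override the choice -->
      <gic version='3'/>
    </features>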
And then the last bit is addresses. Here we have a virtio serial controller, and we can see that it's using a PCI address; this is on x86. On AArch64, instead, we have this virtio-mmio address type. The virtio-mmio address type has several limitations. The first one is that it is slower than it should be. The second one is that there is a limit on the number of devices you can plug in using virtio-mmio; I think it's something between 6 and 12, a very small number. Another limitation is that it does not support hotplug, which is a pretty big one. And also, it's just plain weird: it doesn't match what you have on x86, so it looks weird if you're just coming to AArch64. Luckily we have virtio-pci, which solves all of those issues: it's between 10% and 60% faster, and it fixes all the other problems I mentioned. So of course we want to use it. And we also want to use PCI Express rather than legacy PCI, because unlike x86, AArch64 is new enough that we don't really have any legacy to worry about. We should just avoid all the backwards-compatibility stuff and go for the newer standard, which is supported natively by the architecture.

The problem is that when we started looking into this, it was still not a well-understood field; PCI Express was not very well understood. The situation was that on x86 we had the i440FX machine type, the classic PC machine type, which only supports legacy PCI. Still on x86, we have the Q35 machine type, which supports PCI Express natively, but we were not really using it: at the libvirt level we were using legacy PCI devices for compatibility. That was something we wanted to address. PowerPC64, on the other hand, is doing its own thing, as usual. So we identified the opportunity to share our efforts between AArch64, for the virt machine type, and x86, for the Q35 machine type, and we set out to solve both issues at the same time. The first problem we had was: what should a legacy-free PCIe topology look like? What kind of controllers should we use? We basically had no idea. It turns out there are lots of caveats, so many of them. It starts with what kind of device you can plug into what kind of controller, and what kind of device you should plug into which kind of controller. Maybe you plug a device into a controller and it works, but it works only due to QEMU quirks, or QEMU being more relaxed than actual hardware. Or maybe it works just fine, but you lose hot-pluggability. Lots of quirks and caveats we had to learn about. Luckily, we know folks who know this kind of stuff very well, and that's the QEMU developers, because they have gone through the specifications and have actually implemented virtual hardware that behaves like real hardware. So we got into a discussion with them. It went on for what seemed like forever, but at the end of the day we managed to collect all the QEMU recommendations in a document. This document is now part of qemu.git, so it's a living document that we can update whenever we feel there are better ways to do what we are doing. And most importantly, we can use it as a blueprint: we could update libvirt to follow the recommendations outlined by the QEMU folks, and that's what we did. And now you have the situation where this is Q35 and this is AArch64, and they are exactly the same.

Now, maybe I've been lying a bit up until now, because I told you that that was the situation before: all the x86 examples up until this point were actually taken after we implemented all these changes. But anyway, the point is that if you're working on x86, you take your exact same skills, you bring them to AArch64, and everything just works. As for the availability of all the stuff we talked about: well, it's upstream, and everything is part of a released version, so if you get the latest QEMU and the latest libvirt, everything is available.
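To make the address and topology discussion concrete, here is a hand-written sketch, not taken from the slides, of a legacy-free PCIe arrangement in domain XML: a root port behind the PCIe root complex, with a virtio device given a plain PCI address instead of type='virtio-mmio'. The pcie-root-port controller model used here is the generic root port mentioned as future work a little further on; with the libvirt releases current at the time of the talk, the same slot would have been filled by the Intel ioh3420 model:

    <controller type='pci' index='0' model='pcie-root'/>
    <!-- one root port per hotpluggable PCI Express device -->
    <controller type='pci' index='1' model='pcie-root-port'/>
    <controller type='virtio-serial' index='0'>
      <!-- a plain PCI address on the bus provided by the root port above -->
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>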
It's not in downstream distributions yet, unless you want to use Fedora Rawhide, but it's there. When it comes to future work, of course there's going to be a lot of it, but in the short term the thing we want to work on is generic PCIe root ports. Because as it stands right now, you create an AArch64 guest, you look into it, and all the root ports are going to be marked as an Intel something-something root port, which is just weird. So we want to have generic PCIe root ports that we can use for whatever architecture, without any brand on them, and that we can possibly extend in the future if we feel the need, because we don't necessarily need to match the exact behavior of existing hardware. And beyond that, whatever you want to do: it's an open source project, so just get involved, jump on the mailing list or IRC, tell us what direction you would like to see our work go in, and we'll work on it.

So now it's time for a demo. And again, I've kind of been lying, because it turns out this was a demo all along: this has been running the entire time on an AArch64 guest. Thank you. We can see that we have all the hardware events, because the PMU has been enabled. And also, everything here is using PCI addresses: we have the virtio Ethernet and so on, the virtio GPU is here, and we have the PCI bridges, which are, well, this is a preview, these are the generic ones I was talking about. I'm running very unreleased software, but these are the PCI bridges, the generic ones, the non-Intel ones, and they work just fine. And, I mean, yeah, this is full GNOME. Maybe I'll make it bigger. lshw, like this? No, just the same thing. OK, we'll try. I can try. No, I don't think so, because I'm connecting through Wi-Fi and the system is remote, but it has a fairly decent connection, so it should take just a bit. Well, OK, it's happening. Oh, I don't have vi mode enabled, that's a shame. I can run that as root, actually, so it's going to be more comprehensive. So there's a bunch of hardware, I guess. You can look at the recording later, or I can provide it to you if you give me your email, whatever. Yeah, go ahead, take the picture. Perfect. So that was the demo.

Conclusions. Well, we have mostly achieved our initial goals. Of course, x86 feature parity is kind of a work in progress, and also a moving target, but I'm pretty happy. Virtualization on AArch64 is better than ever; that's the takeaway. So go play with it. There are bugs: report them, we will fix them. I'm a minute early, so, questions?

So the question is, are there plans to port this back into CentOS 7? I don't know. All of this stuff is upstream, so as soon as the downstream projects pick up a new upstream version, they will get it whether they like it or not, really. So I guess, yes. Other questions? He's asking about the list of actual physical hardware you can buy. Yeah, that's a tricky subject. I know there are some things you can get, some development boards, stuff like that. I don't shop for the stuff, though; my company thankfully provides it for me, so I never have to bother with that, sorry. There was a question there. Are you asking what the underlying hardware is, the host hardware that I'm running this guest on? Right. So this specifically is a Mustang development board, the one I'm running the demo on. But it can be any number of other 64-bit ARM machines; they have to, of course, have the virtualization extensions in order to make it fast. And I'm using a development board from, I think, a couple of years ago, and it does the trick. So, most hardware, I guess.

So the question is, do you need any patches for the guest OS? No. Up until quite recently, guest support was an issue when it came to virtio-pci, so we had some resistance to switching to it. But as of the second half of 2016, every major Linux distribution ships it out of the box, except Debian, where we are waiting for the next release, which is going to have it. So that is not an issue. The only thing is that the Xorg patches, I believe, are fairly new, so if you want to run a graphical desktop without having to do any configuration, Fedora 25 works out of the box; other distributions, I'm not sure. But mostly, no, you can just pick any AArch64 distribution and run it as a guest. James was asking something, and then he decided not to. OK, cool. To connect remotely? So the question is what software I'm using to connect remotely. It's virt-manager. I can show you, I can prove it to you. There you go, this is virt-manager. And that guy is the Doctor; don't make him angry. I'm using VNC because reasons. I mean, technically SPICE should work just fine, but you need to have SPICE support in both the host and guest systems, and I didn't want to fiddle too much with this stuff, so I used VNC. VNC is not the best, but it works. OK, so if there are no more questions, I would say: thank you.