Hello, everybody. My name is George Dunlap, and I work for Citrix on the xenproject.org team. Today we're going to be talking about how to secure your cloud with Xen's advanced security features.

Xen is an open source, enterprise-grade, type 1 hypervisor that was designed for the cloud before it was called the cloud. The XenoServer project at the University of Cambridge, back in the early 2000s, envisioned a world where anybody could rent out their CPU space to arbitrary other people across the internet, and they designed Xen specifically with that idea in mind, to be a very lightweight but very secure system. Xen has a lot of advanced security features, but many of them are not, or cannot be, enabled by default. And a number of them seem, when you first look at them, quite complicated, even though in fact they're not that difficult to implement. So my goal for this talk is to give you some tools to think about security in Xen, so that you will know some of the security features that Xen has and be equipped with the basic knowledge to get them working.

In order to accomplish those goals, we're going to cover these things. We're going to give a brief overview of the Xen architecture. I'm going to give you a brief introduction to the principles of security analysis. Then we're going to consider some attack surfaces and some Xen features that we can use to make those surfaces more secure. The features we're going to talk about include driver domains, PVGrub, and stub domains; we're going to compare PV versus HVM; and we'll look at the FLASK example policy. I've found that using man pages is a bit like using a dictionary: a dictionary can show you the spelling of a word if you already have a pretty good idea of how it's spelled. A man page is very similar; you have to have an idea of what the overall solution looks like before the man page makes any sense. So I just want to give you a brief overview of what each solution looks like, so that when you come to the man pages and the documentation, it's easier for you to get a handle on things.

Right, so let's start with the Xen architecture. Xen is a type 1 hypervisor, which means it runs directly on the hardware, not next to or on top of another operating system. It is a microkernel-style hypervisor, so it only controls the CPU, the memory, and the interrupts, and it offloads the hardware drivers to guest VMs. An instance of a running VM in Xen is called a domain, and the first domain that Xen starts when it boots up is called domain zero. Domain zero typically contains the hardware drivers to drive the hardware, as well as the toolstack that allows you to create, destroy, and manage all the VMs in the system. Typically this would be Linux, but it can be any operating system which has been ported to Xen: NetBSD is currently ported to Xen, and back when OpenSolaris was still being maintained, OpenSolaris could actually be run as dom0 as well.

Xen has two different kinds of guests: PV guests, which are para-virtualized, and HVM guests, which are fully virtualized. Para-virtualized guests were Xen's major contribution to the field of virtualization. Back when Xen was invented, the hardware virtualization extensions had not been implemented by Intel and AMD yet; they hadn't been published.
And so virtualizing the x86 architecture was very difficult, because there are a number of instructions that behave differently depending on whether you are in ring zero, ring one, or ring three. The state of the art at that time, before para-virtualization, was something called binary translation, which was incredibly complicated and still, in many cases, very slow. Xen's contribution was to say: what we're doing here is we have a piece of software, the hypervisor, talking to another piece of software, the operating system, but we're using an interface defined by the hardware. What if we made a software interface, got rid of all the things that were hard to virtualize, let the guest operating system know that it was running on a hypervisor, and gave it a software interface? So they ripped out anything that was difficult to virtualize on x86 and replaced it with a software interface, and the result was a very fast, slimmed-down hypervisor with very good performance.

One of the things that's important for this talk is how they did the device drivers. Rather than trying to emulate some kind of device, they came up with what's called the split driver model. The piece of software which provides access to the hardware, that is, the virtual hardware, network or block, is called the back end; and the piece of software which consumes it is called the front end. These use shared rings to communicate with each other. So we have netback and blkback, which run in domain zero and provide the service, and we have netfront and blkfront, which run in the guest domain and consume the service.

Of course, only a few years after Xen was made public, Intel and AMD published their virtualization extensions, which Xen can now use for a new kind of domain, called a fully virtualized or HVM domain, where you can run unmodified guest operating systems. The HVM extensions allow you to virtualize the processor, but you still need an emulated motherboard, emulated disk and network controllers, and so on. To do that, rather than reinvent the wheel, we do what KVM does and use QEMU to provide the device model for these various devices, and that QEMU typically runs inside domain zero.
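To make the split driver model a bit more concrete, here is roughly what it looks like from the toolstack side. This is a minimal sketch of a guest config with placeholder names and values; the point is just that each disk and vif line creates a front end in the guest, and by default the matching back end runs in domain zero.

    # Minimal PV guest config (illustrative values only)
    name   = "guest1"
    memory = 1024
    vcpus  = 2
    # Each entry creates a blkfront/netfront in the guest; with no backend
    # specified, the corresponding blkback/netback runs in domain 0.
    disk = [ '/dev/vg0/guest1,raw,xvda,rw' ]
    vif  = [ 'bridge=xenbr0' ]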
Okay. So, security. The first thing you need to do when you're thinking about security is to define your threat model. What is it you're actually trying to protect against? What do you assume your attacker is able to do? In our example, we're going to assume the attacker can access the guest network, to send packets to it, and we're going to assume that he controls one guest operating system. That may be because the attacker has compromised one of your customers' operating systems, or it may be because the attacker is one of your customers. He controls one guest operating system, and he's going to try to break out of it and attack the other operating systems.

Unfortunately, vulnerabilities, the ability to break into other things, come from bugs, and we have not figured out how to write bug-free software yet. If there's a bug, an exploitable vulnerability that allows someone to gain more access than you intended them to have, then an attacker can break in. So it's not a matter of secure or not secure; it's a matter of more secure or less secure, where "more secure" means there's a lower probability that there is a bug, and "less secure" means there's a higher probability that there is a bug.

So to compare two things, to say whether one is more secure than the other, we can ask a couple of different questions. The first question is: how much of the code is accessible? If we assume that each line of code has a very small probability of having an exploitable vulnerability in it, then the more lines of code an attacker can light up, the higher the probability that he's going to find, in one of those lines of code, some way to attack and escalate his privileges. The next question to ask is: what is the interface like? Is it really complicated, like this one, or is it really simple, like this one? The more complicated the interface, the more likely it is that the programmer is going to get it wrong, and there's going to be some exploitable vulnerability that an attacker can use to break into your system.

The next thing to think about is something called defense in depth. Imagine you have a castle with just one wall. Someone attacking it will break in at the weakest point of that wall, and once they get across the wall, they basically have the run of the whole place. So a lot of castles you'll see have two walls, so that after you do all the work to actually break through the one wall, you're still not done, and you have to break through yet another wall. This picture here is Minas Tirith, Tolkien's fictional city that had seven walls. The way this applies to software is: rather than the attacker being able to cross one boundary, with one bug, and then have everything they want in the system, you want it to be the case that they have to cross several boundaries, because each time they cross a boundary into a new part of the software, they need yet another bug. And the chance of having two exploitable bugs is exponentially smaller than the chance of having only one. We'll see how this applies in some of our examples.

Okay, so this is the example we're going to analyze. We're going to have two networks, a control network and a guest network. You have to have an IOMMU with interrupt remapping, so either any AMD system with an IOMMU, or Intel VT-d version 2. And we're going to start with the default configuration: the network drivers are going to be in dom0, the PV guests are going to use PyGrub (I'll describe what PyGrub is in a minute), and the HVM guests are going to have QEMU, the device model, running in domain zero.
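To make that starting configuration concrete, here is roughly what the two kinds of guest config look like before any of the hardening below is applied. Again, these are placeholder values; they show the defaults that the rest of the talk changes one piece at a time.

    # PV guest, default setup: PyGrub runs in dom0 and parses the guest's
    # grub.conf off the guest disk.
    bootloader = "pygrub"

    # HVM guest, default setup: the QEMU device model runs in dom0.
    builder = "hvm"
    vif = [ 'bridge=xenbr0, model=rtl8139' ]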
Let's take a look at our first attack surface: the network path. How might someone break in here? There may be bugs in the hardware network driver. There may be bugs in the bridging or the filtering code. And there may be bugs in netback, via the ring protocol. Netback is a very simple interface, so there's a relatively low chance of a bug there; but iptables and the bridging code are very, very complicated, and there's a non-negligible chance that there is some kind of exploitable bug somewhere in there. And if they do break in, what does that buy you? It buys you control of the domain zero kernel. All of these things run in domain zero, at kernel level, and because the domain zero kernel is trusted, it has the access it needs to control the entire rest of the system. So basically, it gives you control of the whole system.

And I want to point out that this is not just a problem for Xen. Any virtualization technology that has these things running in a privileged space, which is most of them, KVM, Hyper-V, VMware, is going to have the same basic problem. So the problem is not unique to Xen, but the solution is. The solution is something called driver domains. A driver domain is an unprivileged VM which is given access to one piece of hardware and then provides that access to the guests. So now, if you manage to break out of the rogue domain into the driver domain, all an exploit buys you is control of this one VM. Now, that does give you a further base to make further attacks. If the domain you came from was an HVM domain, now you have a PV domain that you can try to attack through. Now you have control of all the guest network traffic. You have control of the physical NIC, so you can try to break into Xen or into dom0 through the IOMMU, or you can try to break into the netfront of another domain. But the point is that you have to have yet another exploitable bug. This is the whole idea of defense in depth: in order to actually get something you want, you have to have two exploitable bugs instead of one, and so the system should be much more secure.

Driver domains are fairly simple to set up. There are a lot of individual steps, but it's not too complicated; it's very similar to setting up the networking in domain zero. You begin by creating a VM with the appropriate drivers, and the same distro that you used for domain zero should be just fine. You make sure that you have installed the Xen-related hotplug scripts, and the easiest way to do that is just to install the same Xen package in the driver domain as you did in domain zero. Then you give the VM access to the physical NIC via PCI pass-through. That, again, takes a number of steps, but it's a straightforward, well-understood process. Then you configure the network topology in the driver domain just like you would for domain zero, and you have a driver domain. To use it, you simply configure the guest vif to use the new domain. So for instance, if you've named your domain domnet, then you add backend=domnet to the vif declaration, like this, and then that vif will use that back end. There's a lot more information, including step-by-step instructions and links to how to set up PCI pass-through and all that, on this wiki page. And at the end, I'm going to have a link to a page that has sub-links to all of the wiki pages on all of these things.
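Concretely, the two pieces of configuration might look roughly like this. The domain name, PCI address, and bridge name are just example values.

    # Driver domain (e.g. domnet.cfg): an ordinary unprivileged VM which is
    # handed the guest-network NIC via PCI pass-through. The bridging and
    # filtering are then configured inside this VM, just as you would have
    # done in domain 0.
    name = "domnet"
    pci  = [ '01:00.0' ]

    # Guest config: point the vif at the driver domain instead of dom0.
    vif = [ 'bridge=xenbr0, backend=domnet' ]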
So the next attack surface is PyGrub. What is PyGrub? PyGrub is a grub implementation for PV guests. It's a Python program which runs in domain zero. It will read the guest file system, parse the guest's grub.conf, and present a menu. Then, based on the results of that menu, or the defaults in the grub.conf, it will pass the kernel image and the initrd to the domain builder, which will unpack those and put them in the domain's memory, ready to execute, and start the domain. So if we assume that the attacker controls the guest disk, how can he break in? There may be bugs in the file system parser in PyGrub. There may be bugs in the menu parser. There may be bugs in the domain builder, in the part that parses the kernel or the initrd image.

And again, if he breaks in, what does that buy him? Well, it buys him control of domain zero privileged user space. And because this user space needs to be able to read and write guest memory, it basically gives you control over more or less the whole system. One way you can mitigate this problem, which might be called a best practice, is fixed kernels: rather than using PyGrub, you only pass a known-good kernel in from domain zero. This completely removes this avenue of attack on the domain builder and the rest of the system. However, it has some disadvantages. To begin with, now you as the host admin have to keep up with all the kernel security updates and so on, which is kind of a pain in the neck. And furthermore, it's a lot less flexible, because now a guest admin can't pass in kernel parameters, can't use custom kernels, or anything like that. So this is not quite so good.

So there's another feature Xen has that lets you get the best of both worlds, which is called PVGrub. PVGrub is Mini-OS plus a PV port of grub that runs inside the guest context. If any of you are familiar with OSv, Mini-OS is a similar idea: it's a very small operating system that runs only on Xen, doesn't have user/kernel mode separation, provides really basic libc functionality, and only runs a single application. So PVGrub is essentially the PV equivalent of HVM's BIOS plus grub: you're reading the guest disk, executing the menu, and loading the kernel all in the PV guest's context. And so now an exploit only buys you control of your own VM.

And PVGrub is a really simple setup. The main thing is just to make sure that you actually have the PVGrub image. It's called pvgrub plus the architecture, which would be either x86_32 or x86_64, and it normally lives here. It's included in the Fedora Xen packages; unfortunately, it's not included in the Debian-based Xen packages, including downstreams like Ubuntu, so if you're using those, you'll have to build the image yourself. But once you have the image, you put it in the right place, and then you just use it as the kernel in the guest config, like this. And then when you start the VM, it'll load up PVGrub, and PVGrub will then kexec the kernel that you choose from the menu. And there's a wiki page here where you can find links to more information.
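As a sketch of what those two options look like in a guest config: the file names below are placeholders, and the exact PVGrub image name and location vary by distro and Xen version.

    # Option 1: fixed kernels. Dom0 supplies a known-good kernel directly, so
    # nothing in dom0 ever parses guest-controlled boot data.
    kernel  = "/boot/guest-vmlinuz"
    ramdisk = "/boot/guest-initrd.img"
    extra   = "root=/dev/xvda1 ro"

    # Option 2: PVGrub. The menu parsing and kernel loading happen inside the
    # guest's own context (image name is typically pv-grub-x86_32.gz or
    # pv-grub-x86_64.gz, under the Xen boot directory).
    kernel = "/usr/lib/xen/boot/pv-grub-x86_64.gz"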
OK, so the next attack surface we're going to consider is QEMU, the device model. How might someone break in? There may be bugs in the network interface card emulator, and there may be bugs in the emulation of the other virtual devices, so an attacker in the guest operating system may be able to break into QEMU that way. And if they do, again, because QEMU has to be able to read and write the guest's memory, and it's domain zero privileged user space, it gets you pretty much control of the whole system. And this is not actually a hypothetical scenario: as part of the preparation for this talk, I went back over the security vulnerabilities that we've published in the last two years for Xen, and there have been three exploitable bugs in the QEMU emulators. Now, none of them have been in devices which are on by default. Two of them were in the e1000 network card emulator, whereas the default is one of the RTL emulators, and one of them was in a SCSI emulator, which is, again, not on by default.

But the point is that this is possible, and it may happen; it's a risk. And this is not a problem that is unique to Xen: anyone that's using QEMU, like KVM, would also be susceptible to these bugs, and anyone who is doing device emulation, whether Hyper-V or VMware, may have the same kind of risks. So the problem is not unique to Xen, but the solution is. The solution is a security feature called QEMU stub domains. A stub domain is a small service domain that runs just one application; again, it runs on Mini-OS. With QEMU stub domains, each VM has a small domain that runs alongside it and just runs QEMU on its behalf. And now, an exploit buys you control of the stub domain VM, which, since it uses the PV interfaces, does actually get you a few more interfaces to attack, but not all that much. So again, it's this idea of defense in depth: in order to break through the system, you now need two exploitable vulnerabilities instead of just one.

QEMU stub domains are also a fairly simple setup; it's the same basic idea. You need to make sure that you have the image first. It's called ioemu plus the architecture, and it lives in pretty much the same place: it's included in the Fedora Xen packages, but not the Debian packages. Once you know that you have it, and it's in the right place where the toolstack knows where to find it, you just need to specify stub domains in your guest config, like this. And you can find much more information about stub domains, including links to how to build the image if you're running Debian, on this wiki page.
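In the guest config, turning this on is roughly a one-line change. A minimal sketch, assuming the ioemu stub-domain image is installed where the toolstack expects it:

    # HVM guest config: run the device model in a stub domain instead of dom0.
    builder = "hvm"
    device_model_stubdomain_override = 1
    # On Xen versions of this era, stub domains use the traditional device model:
    device_model_version = "qemu-xen-traditional"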
So next, we're going to consider Xen itself as an attack surface. What interface does Xen have for HVM guests? We have the HVM hypercalls, which are a smaller subset of the PV hypercalls. We have instruction emulation: all of the MMIO that is done on behalf of QEMU is emulated by Xen, and if you're not running with HAP, you may be running shadow page tables, and that involves emulation as well. Xen also emulates a number of platform devices which need to be emulated for performance reasons: the APIC, the HPET, and the PIT. And additionally, it may have features enabled like nested virtualization, which allows things like the Windows XP compatibility mode to be run in Windows 7. As far as PV guests go, they only have the PV hypercalls; there are a few more hypercalls than for HVM guests, but a much more limited set of instruction emulation, and no emulated platform devices. However, as I did a survey of the vulnerabilities, there is one thing that PV guests have which HVM guests don't, which is that they share the address space with the hypervisor. And this means that a lot of bugs which would not be exploitable from HVM are exploitable from PV. So as it turns out, when I classified all the different bugs into denial of service, host crash, information leakage, and exploitable privilege escalation, PV and HVM basically look statistically very similar: the numbers are all very low, and there's only a difference of one or two in each category. However, as we said, HVM domains do have QEMU. So if you can't use stub domains for whatever reason, then you should use PV VMs; but if you are using stub domains, then either PV or HVM should be about equally secure.

So the next feature we're going to talk about is the FLASK example policy. What is FLASK? The Xen Security Module, XSM, is a set of hooks that allows you to build a plug-in security module, just like the Linux Security Module. And FLASK is a framework for XSM which was developed by the NSA. Basically, it's the Xen equivalent of SELinux. In fact, it uses basically the same concepts as SELinux, and it uses the same user space tools to compile the policies and so on. What it allows you to do is set a policy to restrict the hypercalls which can be made by a guest, to disable or enable particular hypercalls. At a basic level, it allows you to restrict the hypercalls to those needed by a particular guest. At a more advanced level, it allows you to grant fine-grained privileges and break things down into really small sub-domains. Some systems, XenClient XT for instance, use XSM to break things down to a really small level of granularity, so that the system looks much more like the Minas Tirith picture: if you control this VM here and you want to get to something over here, you have to break through several layers, several different walls, to get where you want to go.

Now, that level of usage of FLASK is far beyond what I can cover in this talk. However, there is something called the FLASK example policy, which is very straightforward to use. It contains example roles for domU, dom0, stub domains, driver domains, and so on. And basically, what it does is kind of like this picture here. If you're wondering what this picture is, it's a picture that I found on Lifehacker. The problem it's trying to solve is this: you have a complicated audio-visual system with several different remotes, each with a whole bunch of buttons on it. Maybe you want to go away for the weekend and leave your parents with the kids, and your parents just want to watch TV, but in trying to get the whole thing to work they press all these random buttons and misconfigure your system, and now nothing works, and you have to spend an hour sorting things out again. So the suggestion was: you figure out what it is they need to do. They want to turn the TV on and off, they want to adjust the volume, they want to change the channel; that's all my parents would want to be able to do. Then you put construction paper over the remote and cover every button that they don't need to use. First of all, it helps them, because they only see the buttons that need to be used; and it helps you, because there's much less chance they're going to do something that misconfigures your system and that you'll have to sort out later.

The FLASK example policy is basically like that. You have a policy that says: dom0 needs to use these things, but a domU, a normal VM, doesn't need to use all these hypercalls. This restricts the amount of code that an attacker can light up, which reduces the probability that they're going to be able to find an exploitable bug in the parts that they can actually access. So, basically, how to set up FLASK: you build Xen with XSM enabled, you compile the example policy, and then you add the appropriate label to the guest config files, so seclabel equals the name of the label, or the stub domain label equals the name of the label. Obviously there's a lot more to this than I can cover in this talk, but there's a how-to with all of this on this wiki page.
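That last step looks roughly like this in a guest config. A minimal sketch: the label names below are the sort of thing the example policy defines, but check the labels in the policy you actually built, and note that exact option names can vary between Xen versions.

    # Guest config: attach an XSM/FLASK label from the example policy.
    seclabel = 'system_u:system_r:domU_t'

    # If the guest uses a QEMU stub domain, that domain gets its own label too:
    device_model_stubdomain_seclabel = 'system_u:system_r:dm_dom_t'

    # While testing, FLASK can be run in permissive mode (for example via the
    # hypervisor command line: flask_enforcing=0 on older Xen, flask=permissive
    # on newer) so that denials are logged rather than enforced.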
Oh, sorry, and the last thing. Every time I gave a practice talk, I forgot about this slide: the 4.3 policy is not extensively tested. The NSA tests it, and we test it intermittently, but it's not part of our regular regression testing yet; that's one of the things we're going to be working on. So if you decide to use the XSM example policy, we strongly suggest that you put it in permissive (warning) mode to begin with, and run it for a couple of weeks to make sure that nothing you actually need gets filtered out.

OK. Right, so we have given you an overview of the Xen architecture, we've given you a brief introduction to the principles of security analysis, and we've considered some attack surfaces in Xen and some security features that we can use to make them more secure. These include driver domains, PVGrub, stub domains, PV versus HVM, and the FLASK example policy. So hopefully I've given you tools to think about security in Xen, you know some of the security features Xen has, and you are equipped with the knowledge to get them working. And with that, I will take any questions.

Sorry. Yeah? What is the relation between them, what can be accessed? Like, the back end can be accessed, up to what? It was about ten slides back. No, no, yeah. Can you repeat the question? Oh, sorry. Let me see if I can understand it. So you're asking about this? Yeah. Yes. Oh, yes, yes. So the question is about the interfaces I mentioned here, the PV interfaces. All I'm saying is that HVM guests and PV guests have access to slightly different interfaces in Xen. Most of it is exactly the same, but there are a handful of hypercalls that PV guests can make that HVM guests can't make, and conversely, there are a handful of hypercalls that HVM guests can make which PV guests can't make. So hypothetically, let's say that there was an exploitable bug in one of the hypercalls that only PV guests have access to, and hypothetically say that you managed to gain access to an HVM domain, but not a PV domain. So now what I'm saying is: I'm an attacker, and the only exploit I've found into Xen is through this PV hypercall, but the only guest that I have is an HVM guest. So I have one exploitable bug, but I can't get to it. Now, if I can find a bug in the device model, if I have a second exploitable bug, I exploit the device model, and then I can exploit the PV interface. So I'm being really a bit pedantic here. The number of PV interfaces that are accessible to PV guests and not to HVM guests is very small. So, like you said, there's a possibility there, but it's not that much. Does that answer your question? OK.

Yeah. Yeah, I think so. So the question was: is it possible to have something which is fairly secure, but easily maintained, for people who are not security experts? And that's kind of the point of this talk, actually: to say that most of these things that I've covered here are actually fairly simple to set up. The main thing is that turning something like driver domains on by default would require a bunch of coordination with the distribution. Does that make sense? And it would be very complicated, because you'd have to set it up so that when you install Xen, it would suddenly create an entirely new VM for you, and it would automatically move over the network configuration that you had done in dom0 and all that other stuff. It's just a fairly complicated thing to set up.
So it would be very easy for someone to create a Xen-based distro that had all this stuff on by default. You could make a Debian or Ubuntu or Fedora spin that had all these things enabled by default. And in fact, XenServer is going in that direction. If you search for Project Windsor, I think that's XenServer's Project Windsor, you'll see that XenServer itself is working towards what they call disaggregation. Again, one of the things we've seen a lot of times here is that when everything is running in domain zero, any exploit gets you control of the whole system; so XenServer is working towards breaking that up into little tiny chunks, so that if you break into one of those little chunks, it doesn't actually buy you that much. And we expect, sometime in the next, let me just say maybe the next year and a half, two years possibly, that XenServer may have this kind of thing. And XenServer is now fully open source, so you could use that. Or it would be fairly easy for someone to take Debian or Fedora or one of these things and make a spin that has this stuff on by default, or to take an existing distro and just do a couple of little tweaks, like I've shown here, and make it quite a bit more secure. Does that answer your question? All right.

Do these security features have other kinds of side effects, in terms of performance or capacity? For the most part, the disaggregation stuff, the stuff of putting things in separate domains, makes things more scalable and makes them faster. If you have all of the QEMU stuff running in domain zero, that means domain zero has to have access to a large number of CPUs, and that means all of its locks have to be shared across all of those CPUs. So if you're running a big system with 128 cores, you may need 32 or 64 cores for domain zero to be able to handle everything, and that creates a lot of lock contention. Whereas if you break things down into individual stub domains, each stub domain is its own little tiny thing, and you don't have to have locks across the whole system to do a lot of that processing anymore. The same kind of thing happens for a lot of the other components, like the network: now you can have several driver domains, maybe each with its own separate network card, and because they're separate kernels, they don't have the same degree of spinlock contention and so on. Does that answer your question? There are some other people ahead of you, so.

So there was concern, and some proof of concept, that an attacker can get from some code into the Xen hypervisor? Mm, oops, sorry. Okay. So you mean, the question was about the code that is in the hypervisor. Are you talking about getting into the software repository that people would then compile and install, or, as an attacker, so, looking at this example here, you would have to attack through the Xen hypervisor interface into the Xen hypervisor itself; is that what you're asking? Yeah, so that's kind of what I was just discussing here.
So there is obviously a risk that there are going to be exploitable bugs in the interface. One of the things we do try to do is make things very simple, and one of the things you can do is use the FLASK example policy, because then, again, we can't promise there are never going to be any bugs, systems have bugs in them, but if we reduce the number of things an attacker has access to, then it reduces the amount of code that might have a bug in it, which we hope will make an exploit less probable. Is that what you were asking? Okay. I think he was ahead of you.

With driver domains, I think it must negatively influence performance, because you have another layer between dom0 and the virtual machine? So the thing is, dom0 doesn't actually have to be involved; that's the point of this kind of system. From Xen's perspective, the driver domain and dom0 are exactly the same: it's just a domain which happens to have access to the hardware, and so the performance of the driver domain should be basically the same as the performance of dom0. If you look at the arrows here, the domains' netfronts are talking to netback, which goes to the bridge, through iptables, to the network driver, down to the NIC, and dom0 is not involved. Oh yes, yeah, as he said, the notable distinction is exclusive access versus shared access. The driver domain controls the guest network card directly, so it's issuing the MMIOs, and domain zero is now not talking to that network card at all. To do that, this is why I said we have two networks, a control network and a guest network: dom0 is still controlling the control network, where you tell it to start VMs and things like that, but the guest network is on a physically different card, which is given to the driver domain, and dom0 doesn't have anything to do with it anymore. Do you have a follow-on? Yeah?

Another question: the new security feature, like SELinux for Xen, XSM, is it enabled in Xen 4.2 in Fedora? How can it be enabled in Xen 4.2? I'm not sure; I suspect you have to build it yourself. Does anyone know if Fedora has XSM enabled? Okay, I suspect not; I suspect you'll have to rebuild it, yeah. So, do you have another?

Just a question on the other side: is this approach feature-agnostic? Like, you have a driver domain and the stub domain, so, for example, from the perspective of Xen live migration of guests, how does that work? Is it fully supported? Yes, it is. So there are no constraints imposed from the security perspective? Yeah, let me double-check this, yeah. So, no, because as far as the back end goes, whenever you do a migration, if you're using this split driver model, you have to have a way to go and tell the guest, okay, disconnect from the network interfaces, and then you migrate it, and then you say, okay, now connect up again. And it happens that domain zero almost always has netback in it, right? But the whole system, from the very beginning, was set up so that it doesn't have to be domain zero; it can be anything. And so when the guest wakes up on the remote side and the toolstack says, okay, now connect, it just connects to the place that the toolstack tells it to, and everything just works. Okay, so do we also need to migrate the QEMU? In this case, the stub domain, you mean?
Yeah, the stub domain. Right, so no. In the case of the stub domain, the state you need is in QEMU. So what you do, whether or not you're running a stub domain, whether QEMU is in domain zero or in the stub domain, is you talk to QEMU and say, now dump your state, and QEMU will pickle it up and put it into a binary blob for you. Then you ship that over to the other side, you make a new QEMU, and then you say, okay, QEMU, here's your state, read it in, and it'll load it back up. And so, in theory, I think this is the case, though I haven't actually tried it, it should be possible to migrate from not using a stub domain to using a stub domain, or vice versa, because in both cases you're destroying the old QEMU and making a new one, and as long as the new one gets the same state, it should be fine. That doesn't answer your question, right? Lars? Yes, it's one of the things we would like to do. I forget; it has to do with discussions with the Debian maintainer about what should be in and what shouldn't be in, and part of the thing with the stub domain is that it has yet an extra copy of PVGrub, or an extra copy of QEMU, apart from the normal QEMU that comes with Debian. I think that's the issue, and so it's one of our things to do, to try and work that out and get it sorted. I'm not sure what the state of that is. That wasn't a great answer, but any other questions? Okay, well, thank you very much.