So my name is Guerney Hunt. I work at IBM Research with a group of people, some of whom have already spoken, like Mimi. I used to work with Dave, who moved on to perhaps greener pastures at GE, and others that you've seen. What I'm going to be talking about is a product that's about to come out from IBM. IBM is obviously changing, because we're talking about things before you can buy them, though not telling you when you'll be able to buy them. But it's close enough that we're going to talk about it. This work is derived from work that we did under a contract to the US and Canadian governments; that's what all the fancy words on the left say, which our contract requires us to put on all derived publications, of which this is one. I'm only going to speak from my own view, although it represents fairly closely what's going on, and I'll answer any question I can or tell you I can't answer it. So basically I'm going to go over four things: an introduction to the Protected Execution Facility, an overview of the architecture, some lower-level details, and then a quick summary. The team that's working on this is IBM Research, IBM Cognitive Systems, otherwise known as Power, and the Linux Technology Center, some of whose people have contributed to this. Our objective is to deliver this technology for OpenPOWER and Power systems, and all the software and firmware associated with it either is already open source or will be open sourced. The basic idea is that we have a model similar to the one AMD introduced, if you heard their talk earlier in the conference. Basically, it's becoming increasingly difficult to guarantee the security of systems that are in a virtualized environment.
So we've come up with our approach to giving you a way of having a virtual machine on top of a hypervisor without necessarily having to trust the hypervisor. When we say that, we're not saying that hypervisors are bad guys. We're basically saying that our customers are asking for it: as we try to move more and more people to cloud computing, the customers we look at want to use cloud computing, but they don't want to give their stuff to the cloud provider, or the cloud provider's administrator, or anyone else that has access to the cloud system. So our objectives are: introduce something we call secure virtual machines; protect the secure virtual machine against attack; protect the confidentiality and integrity of the secure virtual machine; and integrate with the trusted computing tooling. What we're doing is dependent on the secure and trusted boot stuff that Nina and Al talked about earlier in the conference. We innovate in our approach to enable secrets to be embedded inside of the secure virtual machine. And we get secure virtual machines by converting existing virtual machines into secure virtual machines with new tooling that's yet to be open sourced. The whole idea is that instead of having to trust all of the hypervisor, you have to trust Hostboot and OPAL, which were talked about in the secure boot stack, plus a new piece of code called the protected execution ultravisor, or ultravisor for short — and that's it, plus the hardware. So we're relying on the open source ecosystem. We're not limiting the amount of protected memory you can put in a machine, and the way we do this has no impact on the applications you run in the virtual machine. These slides will be in the posted deck. Okay, so this is our quick overview. We've got our hardware down at the bottom.
We've got this protected execution ultravisor on top of it, which is firmware. We have Linux KVM on top of that, and we have both normal virtual machines and secure virtual machines running on top of Linux KVM. As the little red arrows indicate, you can't access a secure virtual machine's memory from a normal virtual machine or from Linux KVM; the system doesn't allow that. So how's it going to work? Well, first I should tell you that we still allow the hypervisor to do pretty much everything it normally does for a normal virtual machine, for the secure virtual machine as well, and I'll talk in a minute about how we make that work, although it may sound familiar. To create a secure virtual machine, you start with a regular virtual machine. You develop your applications, whatever you want to do, while it's a regular virtual machine, and then you run the tooling to convert it into a secure virtual machine. You have to collect the public keys of the authorized machines: one of the things we're introducing here is that every machine will have a public-private key pair. The private key of the machine will remain inside the TPM, and the public key will be available to the owner of the machine. We make it possible for you to authorize the same virtual machine on more than one of your machines, or to have every machine you own use a different key — that's your choice. We do that not by anything we've created, but by exploiting the capabilities of TPM 2.0. Our tooling confirms that your file system is encrypted. The gotcha here is that you'll get only the protections that the encryption scheme you use allows. We recommend that you use an encrypted file system that gives you integrity protection, of which there is one for Linux, but we will support any encrypted file system you use.
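As a rough illustration of the integrity information the tooling records and the ultravisor later checks at ESM time, here is a toy Python sketch. The function names and the use of plain SHA-256 over whole components are my assumptions for illustration, not the actual tooling's format.

```python
import hashlib

def build_verification_info(components):
    # Tooling side: record a reference hash for each component the
    # ultravisor must later check (kernel, initramfs, RTAS, ...).
    return {name: hashlib.sha256(blob).hexdigest()
            for name, blob in components.items()}

def check_integrity(components, verification_info):
    # Ultravisor side: every component must hash to the value recorded
    # when the SVM was created, or the transition to secure mode fails.
    for name, blob in components.items():
        if hashlib.sha256(blob).hexdigest() != verification_info.get(name):
            return False
    return True
```

In the real system this verification information travels inside the encrypted ESM blob, so the hypervisor can neither read nor forge it.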
I think in this talk I talk about using dm-crypt, but dm-crypt by itself doesn't have integrity protection; there are encrypted file system setups that do. The tooling builds some integrity information and outputs an SVM; I'll give you the format of it in just a little bit. To let you know what's going on: it starts like any normal VM, as a normal virtual machine, nothing special. During the boot process, the SVM executes an Enter Secure Mode (ESM) syscall instruction, which is a new instruction we inserted into the instruction set. That instruction is actually executed right at the end of prom_init. The ultravisor receives the ESM instruction, and it points to some encrypted information included with the secure virtual machine that enables the ultravisor to check the integrity of the secure virtual machine. Okay, so what does it do? It grabs the entire blob of the secure virtual machine that's in memory at that point and moves it off to secure memory, which cannot be referenced by the hypervisor. It moves the whole thing over, including the secure blob. Once it gets to secure memory, it opens up the secure blob, assuming it has permission; if it doesn't have permission, it fails right there. It then opens up the integrity information and checks the integrity of everything that it moved over, to make sure that it's the same as it was when the user created the secure virtual machine. If it is, it shifts the secure virtual machine into secure mode, which I'll explain in a minute, and then it resumes execution and the machine starts to run. Now, there's an important point: during execution of the secure virtual machine, the ultravisor receives all interrupts from the SVM. It saves the SVM's state and only reflects the information required to process the interrupt.
Basically, the ultravisor sits between the secure virtual machine and the hypervisor, and interrupts come into the ultravisor that are intended for the hypervisor, or asynchronous interrupts that occur while the SVM is running. What the ultravisor does is save off all of the state of the secure virtual machine — all the registers, floating point, vector registers, the whole nine yards — put it into a structure that's in secure memory, and put into the registers only the state to be reflected. If it's an asynchronous interrupt, it then reflects the asynchronous interrupt to the hypervisor — in our case, Linux KVM — and when the hypervisor's done, it has to make a call back (there are some patches out for this) and we will restart the secure virtual machine. If the interrupt is because of an H call coming out of the SVM, intended for the hypervisor to do some work for it, we do exactly the same thing, except we leave in the registers reflected to the hypervisor all of the state that the hypervisor needs to do its job — otherwise it couldn't do its job — and then it comes back, and we restore the saved state. Now we'll go to a slightly lower level and give more details of the architecture. Okay, so the base principles are the following. Previously in Power, we had hypervisor mode, supervisor mode, and problem state. Now we have ultravisor mode, hypervisor mode, supervisor mode, and problem state. Ultravisor is at the bottom because it owns the machine, so it's the highest privilege mode in the machine at this point. This allows us to minimize the trusted base, because you only have to trust the stuff below the hypervisor. And with the TPM, you can find out that this is in fact actually running our stuff on a real machine, if you're doing remote attestation and you're concerned and you're on a remote cloud provider.
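The save-and-reflect flow just described can be sketched as follows. The register names, the subset of registers exposed for an H call, and the dictionary-based save area are all hypothetical stand-ins for the real secure-memory state structure, not the actual ultravisor interface.

```python
SAVE_AREA = {}  # stands in for per-SVM state kept in secure memory

# Hypothetical subset of registers a hypercall legitimately needs to see.
HCALL_ARG_REGS = {"r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10"}

def reflect_to_hypervisor(svm_id, regs, is_hcall):
    # Save the SVM's entire register state into secure memory.
    SAVE_AREA[svm_id] = dict(regs)
    # Hand the hypervisor a scrubbed copy: everything zeroed, then only
    # the hcall argument registers restored when the SVM asked for work.
    visible = {r: 0 for r in regs}
    if is_hcall:
        for r in HCALL_ARG_REGS & regs.keys():
            visible[r] = regs[r]
    return visible

def resume_svm(svm_id):
    # The return path: restore the authoritative state saved above.
    return SAVE_AREA.pop(svm_id)
```

For an asynchronous interrupt the hypervisor sees no SVM register contents at all; for an H call it sees exactly the arguments and nothing else.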
We introduced this notion of secure memory, which is only accessible by secure virtual machines and the ultravisor. Another way to say it: there's a new bit in the MSR, the machine state register, that says you're in secure mode, and you can only reference something that's in secure memory if that bit is on. The only thing that can turn that bit on is the ultravisor. So for those of you who are really into hardware architectures, you realize that the machine therefore has to boot with secure mode turned on. It comes into Hostboot and OPAL, if you're familiar with our stack, with secure mode on. At the correct point OPAL loads the ultravisor, and when the ultravisor goes back to OPAL, the secure mode bit is off; then OPAL starts the booting of the kernel. Just to put it in perspective with what you saw about secure boot on Power: we introduce ultravisor mode and we enable secure virtual machines. So here's our overview, and we'll work through this in a couple more slides in a little more detail. As I told you, at the bottom we're relying on the TPM, where we embed our private key, and on whoever we get our RAM from. We have the CPU with the Protected Execution Facility modifications, and attached to it the secure memory associated with those modifications. We have normal memory that anything in the CPU can read, including things running in secure mode. We have new firmware, which we call the protected execution ultravisor. We run the Linux KVM hypervisor, and on top of that we have virtual machines, either secure virtual machines or normal virtual machines. They both come from whatever storage the system has, and we create a new tool where you supply the public keys so you can convert a machine from one to the other. Protected Execution Facility refers to the changes made to our OpenPOWER architecture, and in particular, each machine has a public-private key pair.
Protected execution ultravisor refers to the firmware, which will be open source but is not open source yet. I said all of this already, so I won't say it again. Starting from the bottom up: the private key remains in the TPM. There's a function of the TPM, for those who are familiar, where you can say, give me a public-private key pair, keep the private key, and give me the public key. That's what we use, so the TPM generates the key; the ultravisor does not. For those of you who are familiar with the kernel, the TPM device driver remains in Linux KVM. Therefore, we're going to have a TSS inside of the ultravisor so that we can appropriately share the device driver with the VMs that are running up above. However, again exploiting the trusted computing base, we can set up a secure channel to the TPM because we have a shared secret, and we can therefore talk through Linux KVM to the TPM to do the work that we need to do as the ultravisor. So the only attack you can get is a denial of service; you can't get a loss of secrets, because the information we're passing back and forth will be properly encrypted. The hardware separates the memory into normal memory and secure memory, and after boot, only SVMs and the ultravisor run in secure memory or can reference secure memory. When an ESM call is received, if the calling SVM has not been modified, the ultravisor will transition it into secure mode. So what's going on with the hypervisor? We're slightly more privileged, and it has to be para-virtualized to run with the ultravisor. In our research project, we built a version of this architecture where the hypervisor did not have to know that the ultravisor was there — in other words, we virtualized the hypervisor. We did that just because — why not? We had a small team, it made it easy to do, and we didn't have to have hypervisor skills. We got it up and running, and we could run virtual machines. But the performance would have kicked us out of the market, because it was really bad.
I won't give you the numbers unless you really need to know them. We tried it, we did it, we know how to do it — and the performance sucks mud. That pushed us into a para-virtualization model. In the model that we're running now, the impact on normal virtual machines is nearly zero, even when they're booting. The overhead for secure virtual machines is a single-digit percentage, and a small single digit, by our projections; we're still working to get the final numbers on the actual hardware. So if the hypervisor needs to update the partition-scoped page table, it will have to ask the ultravisor, and if it wants to return to an SVM, it will have to ask the ultravisor to complete the return. And we're going to use HMM to help manage the secure memory; I think those patches are already pushed out. At the VM level, they run on the same hardware, and we use GRUB. We built a prototype using Petitboot, then figured out how to use GRUB, so we set Petitboot aside for now. SVMs and VMs both get services from the hypervisor, as I indicated earlier. The ultravisor sanitizes everything that goes from the secure virtual machine to the hypervisor. An SVM can share unprotected memory with the hypervisor — it has to. If you couldn't have what we call normal memory in a secure virtual machine, the secure virtual machine wouldn't be able to communicate with anybody, because whenever the hypervisor can look at the secure virtual machine's memory, it only sees it encrypted. Say you're sending a packet off to some other remote system: it's been encrypted by the ultravisor, and the remote system doesn't have the key, so it can't do anything with it. So that says that in order to do IO into and out of the SVM, you've got to do bounce buffering, and you have to bounce buffer through memory that the hypervisor and its subsystems can reference. We do not point the hypervisor at secure memory, because it can't reference it. When I say it can't reference it, I'll be blunt.
The hardware will not allow it. We went through every single subsystem in the Power chip, and we analyzed them deeply to determine whether or not they could be made secure by our definition of secure. Every subsystem that we couldn't make secure in the first round cannot reference secure memory; if it tries, you'll get a machine fault. So if you have to move data from secure virtual machines to normal virtual machines, you have to bounce buffer. That's the cost. We create SVMs with new tooling, and the secure VM starts executing as a normal VM, as I've said over and over. But it executes this ESM instruction, which is a level-3 syscall instruction in Power — there used to be syscall levels one and two, and now we've introduced level three, which goes up to the ultravisor. Revocation: I talked about this work at LSS in Vancouver, but I didn't talk about revocation there, and it came up in the questions, so I'll talk about it briefly now. Revocation means disabling an SVM from executing on a machine where it was previously authorized. For all intents and purposes, the SVM is encrypted — I'll get to the format in a minute; just think of it as an encrypted blob. Obviously it's not completely encrypted, otherwise it couldn't start as a normal virtual machine, but think of it that way for now. Consider the case where the SVM is authorized for multiple machines — I told you you had to have all the public keys. So I can build this SVM object, and inside its encrypted blob I can embed keys for more than one machine. With that, when the SVM starts up, the ultravisor has to look in that list — and it's fairly quick — to say, which one of these am I supposed to be able to decrypt? If it doesn't see its identity in there, it fails. If it sees its identity in there, it uses the TPM to decrypt that blob, looks at the results, and then proceeds with execution.
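That per-machine authorization lookup can be modeled roughly like this. Textbook RSA with tiny fixed primes stands in for the TPM-held machine key pair — never use such a construction for real — and every name here is a hypothetical illustration, not IBM's actual blob format.

```python
def make_machine_key(p, q, e=65537):
    # Toy textbook-RSA stand-in for a machine's TPM key pair.
    # The private exponent d never leaves the (simulated) TPM.
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), d  # (public key, private key)

def build_esm_key_list(sym_key, authorized_publics):
    # Tooling side: wrap the SVM's symmetric key once per authorized
    # machine, indexed by that machine's public modulus.
    return {n: pow(sym_key, e, n) for (n, e) in authorized_publics}

def ultravisor_unwrap(key_list, public, private_d):
    # Ultravisor side at SVM start: is my identity in the list?
    n, _e = public
    if n not in key_list:
        raise PermissionError("this machine is not authorized")
    return pow(key_list[n], private_d, n)  # TPM decrypts the wrapped key
```

An SVM authorized for machines A and B simply carries two wrapped copies of the same symmetric key; machine C finds no entry for itself and fails before anything is decrypted.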
So if you're revoking, the question is: do I want to revoke it on every machine that I've authorized it for, or do I want to revoke it on a single machine? Those are the two questions that we're looking at and thinking about. But who is revoking the access? There are really three choices: the user of the SVM, the creator of the SVM, or the owner of the infrastructure. Those are the three primary parties who would want some revocation. And should revocation be reversible? All right. For users: nobody's forcing a user to use an SVM. If they happen to have one and they don't want it anymore, they can just erase it. That's simple — a complete revocation. It's not reversible, but it works. For the creator of the SVM, we have a model that exists today in software, called the license server model, which gives the creator the ability to grant and remove licenses on an SVM, and even to make revocation reversible: if I revoke your license because you didn't pay me my fee, and you then pay the fee, I can reinstate it. You can write your SVM to call home — and you can do this because the SVM is essentially encrypted, so you can embed secrets in it — or call wherever, and ask: am I still authorized to run? If you're authorized to run, you'll run, and if you're not, you won't. And we have integrity protection on our stuff, so if somebody starts twiddling bits trying to break it, it won't run anyway. So that revocation is handled. What's not handled gracefully is the infrastructure owner deciding: I don't want that SVM to run. He can kill it, but he kills them all, because he changes his public-private key set, and that gets rid of all of them. We realize that's fairly heavy-handed, so we're looking at various alternatives, all of which will involve some form of a revocation list, and which will therefore probably not be reversible, depending on which way we go.
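The license-server style of creator revocation might look roughly like this. The embedded-secret HMAC scheme, the identifiers, and the server logic are all illustrative assumptions on my part, not the product's protocol; the point is only that a secret embedded in the encrypted SVM lets the SVM authenticate its call-home query.

```python
import hashlib
import hmac

# A secret embedded in the (encrypted, integrity-protected) SVM image;
# only the genuine SVM can present it, since tampered images won't run.
EMBEDDED_SECRET = b"example-embedded-secret"

def svm_sign(svm_id):
    # SVM side: authenticate the "am I still licensed?" query.
    return hmac.new(EMBEDDED_SECRET, svm_id.encode(), hashlib.sha256).hexdigest()

# License-server side: the creator grants and removes licenses at will,
# which is what makes this form of revocation reversible.
LICENSES = {"svm-42"}

def server_authorize(svm_id, tag):
    genuine = hmac.compare_digest(tag, svm_sign(svm_id))
    return genuine and svm_id in LICENSES
```

A revoked SVM simply refuses to proceed when the server answers no, and it can be reinstated later by adding the license back.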
If you have comments or questions on that, we can talk about it later. In the first release, we'll not support suspend and resume, or migration. That's to make it easier to get the first release out. We'll also not support over-committed SVM memory in the first release. Our architecture and design allow the hypervisor to page SVM memory; we're just deferring that until we get it running, and then we'll put it in. And in the first release we will not support dedicated devices for SVMs. That's because in the timeframe in which we had to design the hardware, we couldn't figure out how to make it work well, so we said, nope. So you can't dedicate a device to a secure virtual machine. What can you do? You can do all the virtual IO you want. We have the virtual IO subsystem for virtual machines; it works well, and we're modifying it to do the bounce buffering — that's a modification to the kernel — and all the virtual IO devices will be supported. We understand that's a limitation. It may not be acceptable to some customers, and in future releases those are things that we will consider and address. It also does not support transactional memory currently, so if an application uses transactional memory and runs in an SVM, it will crash. A few more lower-level details. All right, here are the contents of the ESM blob. In this case, I've illustrated one that's encrypted for three different machines, A, B, and C, and there's the symmetric key that decrypts the blue thing, wrapped — encrypted — under the public key of each of those machines. The verification information contains integrity information for the kernel, the initramfs, and the RTAS. It may contain some symmetric key blobs, and it does contain the passphrase for the encrypted file system. Remember I said that your main disk is encrypted: our tooling makes sure you've encrypted it one way or another.
If you haven't encrypted it, we'll encrypt it with whatever we decide is the default encryption method and stick the passphrase in there. This passphrase is stuck in this blob and is held inside of the ultravisor. So one of the mods we have to make to the kernel is to use an ultra call to get the passphrase, so that you can mount your disk and see the contents. Otherwise, you can do whatever you want with it, but it will all be encrypted. Since it's locked up in here — this came up at KVM Forum, which is concurrent with this conference — you can't, as KVM, play around with the SVM and emulate it, because as KVM you won't be running in secure mode and you won't be able to get the passphrase for the disk out of the blob that's associated with the secure virtual machine. So how do we boot? This picture refers to booting a virtual machine on top of Linux KVM. SLOF is not part of our design; it's what people normally use. So what happens is SLOF eventually passes control to GRUB, which is in the PReP partition. It is not encrypted — that's why we can start unencrypted. GRUB goes and eventually decides, okay, which one of these kernels do I want to boot? It does its thing. You could, in an SVM, have multiple kernels that you wanted to boot; you don't have to have just one. When we had design choices, we erred on the side of flexibility in this case. So you could have just one, or you could have more than one. In any case, they'll all be zImages, and GRUB boots the zImage just fine. So you'll get your GRUB menu, you'll have your default, and you'll have however much time you've allowed yourself to switch to a different one if you want to. Inside of that zImage is the blob, so you start booting that. That gives you a /boot file system, which is unencrypted, but we have integrity information on the things that we need to know about, which we'll check. And then you have the real root file system, which is encrypted.
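The passphrase gating described above can be sketched as a simple check on the caller's secure-mode bit. The bit position, the class name, and the method name are hypothetical; the real interface is an ultra call, not a Python method.

```python
MSR_S = 1 << 41  # hypothetical position of the secure-mode (S) bit

class Ultravisor:
    """Holds the file-system passphrase recovered from the ESM blob;
    it lives only in secure memory."""

    def __init__(self, passphrase):
        self._passphrase = passphrase

    def uv_get_passphrase(self, caller_msr):
        # Only a caller already running in secure mode may retrieve the
        # passphrase; KVM emulating the SVM never has the S bit set,
        # so it can never mount the encrypted root file system.
        if not caller_msr & MSR_S:
            raise PermissionError("ultra call rejected: caller not in secure mode")
        return self._passphrase
```

This is why emulating the SVM under an unmodified KVM goes nowhere: the emulator sees only ciphertext on disk and has no path to the passphrase.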
And I've explained what we do there. So we start as a normal virtual machine. At the end of prom_init, we say: switch me to secure mode. You get copied into secure memory. We search for the properly wrapped symmetric key after we copy you into secure memory; if it's not found, execution fails. If it's found, we decrypt the verification information; if decryption fails, verification fails. We use the verification information to confirm the integrity of the kernel, the initramfs, and the RTAS information. If all that is successful — integrity means the bits have not been modified — then we pass control to the SVM in secure mode, and the passphrase is available through the ultra call. The ESM ultra call is the only one you can make from a normal virtual machine; that makes things easier. And we have some ultra calls for the hypervisor, and we check to make sure that if you're coming into the ultravisor, you're coming in from the right location. All right. So we're going to make this open source, and in each case I'm walking through the things that we think we're going to have to change in order to make this work. We had to change prom_init so that this notion of letting you run through prom_init unencrypted works, as long as prom_init doesn't make changes that would cause the integrity verification to fail. Originally it did; these patches change that, so it doesn't do that anymore. There's a wrapper, and we've made some changes to let the ESM blob be added to the zImage. We're probably going to have to make some form of a change to GRUB — not to the core of GRUB, but to either some of the scripts or some of the things around it. Those changes have not been pushed up, but they will be shortly. This is a summary of the ultra calls available from the SVM to the ultravisor.
Or from the hypervisor to the ultravisor: read and write SCOM. SCOMs are registers in the Power architecture that the hypervisor and other entities use to control the configuration of the hardware. A good part of the machine can be reconfigured using the SCOM registers, and for those of you who are hardware people, you realize this means you could bypass almost any security feature you wanted to. Which is why you can't write a SCOM register without the ultravisor looking at it and deciding: do I like this? If it thinks it's okay, it goes through. If it doesn't, you get told you were successful, but it doesn't happen. Anything that we don't like about your SCOM request will be blocked without telling you — I guess we're concerned about security. You can page in and page out, and you can write the partition table. UV return is the call the hypervisor uses to tell the ultravisor: I want to start this secure virtual machine. We were originally going to not let the hypervisor know whether the machine was a secure virtual machine or not; we decided to change that, since we didn't think it could really hurt, so we make it explicit. Registering and unregistering memory slots is about getting and sharing memory. There's terminate — SVM terminate, I think, comes out of the SVM. Share and unshare: in Power we have something called the VPA, and a couple of other things that we have to share with the hypervisor; the shared pages are handled by the share page and unshare page calls, and the memory slots are the other things. And we have the ESM call. We also have some things so that secure virtual machines can share memory with each other, but they're not on this list right now. For KVM, there are some special H calls we needed.
We needed start, finish, terminate, page in, page out, and then TPM com — we need to add this H call so the ultravisor can reflect TPM com to the hypervisor when it needs to talk to the TPM, and the response comes back to the ultravisor. The ultravisor reflects it to the hypervisor as if it's coming from the SVM, but it's really the ultravisor that's doing it. We have to modify HMM because we want to use HMM — we don't want to write a bunch of code; we're trying to exploit as much of what's in Linux right now as we can. So we've got a proposal out there that will allow HMM to manage moving memory between secure and insecure. Of course, the ultravisor will get in and encrypt or decrypt as necessary as those moves happen. Oh, I've got virtio here. We have a set of initial patches out for virtio. This is the set that was designed, but we're debating a simpler set of patches for virtio that relies on the DMA ops structure that's in virtio, which might be simpler than the ones that are out there, and we're working with the virtio maintainer to zero in on exactly the right set of patches. The sorts of things that we've been modifying, and looking at modifying, are similar to the things AMD has been modifying and looking at modifying. And we have a set of changes out for the VPA, which is a Power-specific thing. Now, to get a little more into the hardware, this is what we changed at a high level. We added an address bit that indicates whether memory is secure or not secure. It's a high-order address bit, much higher than any machine will populate for a while. We added an MSR S bit that indicates the processor is running secure. Because we have that, these are the states you're in if you're in secure mode, these are the states you're in if you're not secure, and we have one that's reserved for future capabilities that we're looking at.
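The interaction of the secure address bit and the MSR S bit can be modeled as a single hardware rule. Both bit positions below are hypothetical placeholders, not the architected ones; the point is the access-check logic.

```python
SECURE_BIT = 1 << 52  # hypothetical high-order "secure memory" address bit
MSR_S = 1 << 41       # hypothetical secure-mode bit in the MSR

def hw_access(addr, msr):
    # Hardware rule: a reference to an address with the secure bit set
    # is permitted only when the thread runs with the MSR S bit on.
    # Anything else -- the hypervisor, a normal VM, an unvetted
    # subsystem -- takes a machine fault instead.
    if addr & SECURE_BIT and not (msr & MSR_S):
        raise MemoryError("machine fault: secure memory reference")
    return "ok"
```

Normal memory is readable from either mode, which is exactly why the bounce buffers for IO have to live there.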
We added a bunch of new registers, like SMFCTRL, which tells the system whether SMF is enabled; the hardware is designed so it comes up with SMF enabled. If SMF gets disabled, it cannot be re-enabled without rebooting the machine — once it's turned off, it's off for the duration. We added a bunch of ultravisor-specific registers — URMOR, USRR0, USRR1, USPRG0, USPRG1 — which are similar to the special registers that we make available for hypervisors on Power. We added a new instruction, URFID, which the ultravisor can use, which allows it to return to something and flip on the S bit — a convenient feature. And, as I mentioned earlier, all the hypercalls go to the ultravisor, all the interrupts go to the ultravisor when you're in secure mode, and we reflect stuff to the hypervisor. Just a quick summary. We protect you from the hypervisor and other software and system admins. The security domain is the VM — at rest, in transit, or executing. We don't change the applications that run in an SVM. We have some new Kconfig options. KVM must be para-virtualized; those patches are being put together. Your secure memory has integrity and confidentiality protection, you can embed secrets, we're limited only by the available memory, and all of it will be open source. There's a bunch of links to papers and things that are related to this. Do you have any questions? Okay, so one of the things that we have in SGX is the ability for an enclave to prove that it runs inside a secure enclave and not inside an emulated environment — is there such a thing in this system? So that if you have an enclave running on a cloud... Yeah, I know what you're talking about. Basically what we're going to do is exploit the TPM for that, so that you can know that you're actually talking to a system that has a real TPM on an IBM platform, and then you can use remote attestation for that feature.
It's similar to the attestation that you have in the enclave, because what they've done is put in an attestation capability so that the enclave can attest to the user. We're just going to exploit the attestation from the TPM — it's actually equivalent to a hardware chip in a way. Yeah, so we're just exploiting the attestation in the TPM. Questions? You're saying that everything will be open source — does that mean the ultravisor also? The ultravisor will be open source. Okay, great. Could you please — [inaudible] — tell us if there are any key differences from the ARM monitor for the ultravisor? From what? The ARM monitor? I'm not that familiar with the ARM monitor, so let's take that question offline. Does the Power system have protection against somebody installing something on the random access memory bus? Is the secondary memory encrypted — is there protection against hardware tampering? Okay, so we do not have hardware encryption on our memory right now. It's fairly difficult to tamper with the hardware memory, but that's something that will be coming; it won't be in the first release. And probably, for Power and OpenPOWER, when we do hardware encryption, it's going to be sort of like what you see in Z-series — if you're familiar with that, we now have pervasive encryption in Z-series. So the hardware encryption that protects the memory will be orthogonal, and the ultravisor will still have its own independent encryption. Monty Wiseman, General Electric. You mentioned that the VMs are going to have a tunnel down to the physical TPM, so they're going to be sharing the same physical TPM — is that the proposal? No, we don't have the VMs share the physical TPM. The ultravisor has access to the physical TPM, as the hypervisor has access to the physical TPM.
The operating system that boots on the hardware has a device driver for the physical TPM, and it has a TSS in it that allows the virtual machines above it to share access. So I guess if a virtual machine is written to access the physical TPM, yes, we'll all be using the same one. But we're not bypassing anything — there's only one device driver in the entire system; we're not putting another one in. So, to kind of get to where Yoriko was going, then: the TPM's key hierarchy works very well for virtualization and sharing, but the PCRs do not. So how are you able — if you have these multiple VMs, how are you going to be able to attest to launching each one of these VMs, and attest to what operating system launched? And kind of a follow-up to that: if they turn on IMA, how do they all share the same physical TPM? It seems like they have to go to a vTPM. Okay, I'm not sure I've got all of your question. The ultravisor gets booted as part of the firmware, and it actually won't start if the PCRs are not in the correct state. If the PCRs are not in the correct state — in other words, you're trying to boot something else — the secret for that machine is locked, permanently. The only other thing the ultravisor needs the TPM for is using the private key to decrypt things. That's it; we don't ever reference any other PCRs or anything. As we're coming up, we have a fairly fancy policy that allows us to set a password for access to the private key. That password is created when the firmware is booting up, before you get to the boot of the base operating system, and as part of creating that password, the PCRs that it's based on are extended, so nobody else can do it.
And then the ultravisor has that password, and every time it wants to start an SVM, it asks the TPM to decrypt the blob and hand it back, and then keeps going. So we don't care about the PCRs or anything else after that. That's all we use the TPM for: holding our secret and decryption, and that's it. Okay, so you really do have vTPMs for individual VMs? Yeah, the vTPM is up in KVM. Oh, I see. We're not taking that away, and we're not using the vTPM ourselves. So each VM has its own vTPM? Every VM, including SVMs, would have its own vTPM. Exactly. But I think that probably addresses Yoriko's question as well. Okay. I think we have to wrap up — if there are more questions, there is a break right now, so you can come talk to the speaker. So let's thank the speaker.