So, we're going to be talking this afternoon about the AMD SEV-SNP feature, and specifically giving an update on the open source development that's been going on around it. My name is David. I'm going to start things off with a bit of an overview of what the feature is and some of the important flows. Then I'll hand things over to Brijesh here, who is going to talk about some of the patches that he's been working on and has sent upstream related to SNP. So the SNP feature is the latest evolution of the AMD SEV, or Secure Encrypted Virtualization, technology. If you've been to past security summits, you've probably had to endure me talking about it. SNP is the newest version, and it is a substantial security update to the SEV feature. In particular, it offers stronger protection against various kinds of hypervisor-based attacks, and it builds on top of the previous versions of the feature. Initially we had the SEV feature, which provided memory confidentiality through memory encryption; then the ES feature, which added register state protection; and now SNP is the third generation. The initial SEV and SEV-ES features are currently supported upstream; they've been supported in hardware since the first-generation EPYC server processors. SNP is the newest version. It premiered in the third-generation EPYC processors, which became available, I believe, in March of this year. And this is our entry into confidential computing, this is our confidential computing solution, and it can be used in a variety of use cases; you can think of public cloud as a primary one. As far as the threat model goes, it's very similar to other confidential computing technologies. The trusted components include the AMD-provided silicon, so that's the CPU hardware, as well as the AMD-signed firmware. And then of course we trust the operating system within the guest VM itself to do its best to protect itself.
Everything else is untrusted: the hypervisor, the other VMs in the system, external devices, et cetera. The threat model goes into specifics of what we are and are not protecting against with this generation of the technology. I talked more about this in my presentation a couple of years ago, and we also have a white paper that goes into it in more detail. As a summary, though, there are a few big categories. We have confidentiality protection, which is primarily handled through the AES engine in our memory controller, so we encrypt the guest memory. We have integrity protection, which prevents the hypervisor from manipulating or corrupting guest memory, and I'll talk a bit about how that works. We have some physical access protection for things like cold boot attacks, which is again accomplished with memory encryption. There are also some new controls for protecting against malicious interrupt injection, bad CPUID values, and certain side channels, including speculative side channels. SNP as it stands today does not protect against things like availability attacks on the guest (the hypervisor retains control of scheduling and system resources), more advanced physical attacks such as active voltage tampering, or other types of side channels such as monitoring page faults over time and things like that. So I'm going to talk about a few things in particular that will be relevant to the patches Brijesh is going to discuss later. One of those is integrity, which is really the big new thing with SNP, and it is enforced in hardware through a new structure called the Reverse Map Table, or RMP. This is a large memory structure that's allocated at boot. There is one entry in it for every 4K page of assignable DRAM in the system; the entries are 16 bytes each, and they indicate the owner of each page. That owner could be the hypervisor, which is the default.
It could be a specific guest at a specific guest physical address. Or it could be what's listed here as firmware, meaning the page is reserved for use by the AMD secure processor, which helps manage the life cycle of these guests. The RMP's primary job is to enforce writability in order to enforce integrity, so it makes sure that only the correct entity is able to write to those pages. That checking is done in hardware, both in the CPU and in the IOMMU, so device DMA goes through the same sort of security checks. A violation of the RMP results in a page fault or nested page fault, depending on the circumstances, or an IOMMU page fault in the DMA case. The RMP table itself can only be manipulated through special x86 instructions that are designed to enforce the security properties. As an example, I want to talk about a concept called page validation, which is about how we get pages into the guest address space, and it uses some of those new x86 instructions. By default, pages start in the hypervisor state, so they are accessible to the hypervisor and cannot be written by the guest. The hypervisor first assigns the page using an instruction called RMPUPDATE. That changes the RMP table to switch over the ownership, but the page is not yet accessible by the guest. The guest then has to accept the page, doing something called validation using the PVALIDATE instruction, which transitions the page from the guest-invalid state you see here over to the guest-valid state. Once the page is in the guest-valid state, the guest is able to read and write it normally. I should mention that this is specifically for private memory in the guest, memory that the guest does not want to share with the hypervisor. We also, of course, support shared pages, which are accessible by both and are used for things like DMA and hypervisor communication. Those pages stay in the hypervisor state.
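[Editor's note: the two-step assign-then-validate flow described above can be sketched as a small state machine. This is an illustrative toy model, not the real instruction semantics; the class and function names are invented, and the real checks are done in hardware against the 16-byte RMP entries.]

```python
# Toy model of the SNP page ownership states described in the talk:
# pages start hypervisor-owned, RMPUPDATE (hypervisor) assigns them to a
# guest as "guest-invalid", and PVALIDATE (guest) moves them to
# "guest-valid". Only guest-valid pages are writable as guest-private.

HYPERVISOR, GUEST_INVALID, GUEST_VALID = "hypervisor", "guest-invalid", "guest-valid"

class ToyRMP:
    def __init__(self, num_pages):
        # One entry per 4K page; the default owner is the hypervisor.
        self.entries = [HYPERVISOR] * num_pages

    def rmpupdate_assign(self, spa):
        # Hypervisor assigns a page to the guest; not yet usable by it.
        assert self.entries[spa] == HYPERVISOR, "page already assigned"
        self.entries[spa] = GUEST_INVALID

    def pvalidate(self, spa):
        # Guest accepts the page; only legal from the guest-invalid state.
        if self.entries[spa] != GUEST_INVALID:
            return False  # models a PVALIDATE failure / no-op
        self.entries[spa] = GUEST_VALID
        return True

    def guest_write_ok(self, spa):
        # Hardware permits guest-private writes only to guest-valid pages.
        return self.entries[spa] == GUEST_VALID

rmp = ToyRMP(4)
assert not rmp.pvalidate(0)       # validating before assignment fails
rmp.rmpupdate_assign(0)
assert not rmp.guest_write_ok(0)  # assigned, but not yet validated
assert rmp.pvalidate(0)
assert rmp.guest_write_ok(0)      # now readable/writable by the guest
```

The point of the two steps is that neither side alone can make a page guest-valid: the hypervisor cannot forge validation, and the guest cannot validate a page the hypervisor never assigned.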
When the guest accesses memory, it can access it either as encrypted or unencrypted using something called the C-bit. If it accesses a page unencrypted, that should be a hypervisor-state page; if it accesses it encrypted, it needs to be a guest-valid page. This two-step page validation sequence is very important for security. There was a discussion about this on the upstream mailing list recently: page validation ensures that there is only one GPA-to-SPA translation that's valid at any given time for a given GPA. So it's very important that the guest OS does not execute PVALIDATE multiple times on the same GPA, because that could potentially create multiple GPA-to-SPA mappings that a malicious hypervisor could switch between at will. The guest must track which pages it has validated and make sure that it never double-validates a page. The next thing I want to talk a little bit about is the launch flow, since that's of course how we get started. The launch starts with a plaintext image, which I think is rather common in this area, and the hypervisor works with the AMD secure processor to get the guest going. The AMD secure processor, for those who aren't aware, is our little embedded security subsystem; there's an ARM core in there and some crypto hardware. There's an API for it, and we have the specs on our website, with the various commands that the hypervisor can call. The first thing the hypervisor does is allocate what we call a guest context page, which contains information about the VM. Then it calls an activate function, which generates a random encryption key and assigns it to that specific VM. The next step is to go through the launch process.
So there's a launch start command that creates the context, then the hypervisor can call launch update multiple times to add memory pages into the guest, and finally a launch finish command that closes the context. The initial memory image, in the case of the open source patches, typically consists of the OVMF image as well as the CPU register state, which would be the reset state for the VM, and a few other things too. Going into a bit more detail, there are actually six different types of pages that you can add as part of the initial VM state. The most common type is the normal page, which is just a standard data or instruction page. When this is added, both the contents and the metadata of the page are measured and included in something called the launch measurement. The contents of the page, that's obviously the data. The metadata is basically the entry in that RMP table: the GPA where the page is placed, as well as information about the type of page, in this case the fact that it's a normal page. As opposed to something like the next type, which is a VMSA page. The VMSA, or virtual machine save area, is a special page of memory that holds the CPU register contents for the VM. These contents are arbitrary, so you can specify whatever reset state you want; in the case of Linux we typically use the standard x86 reset image. You can almost think of it as a runnable page: this is a page that can be targeted by the VMRUN instruction, and it's marked in a special way in the RMP. Next, we have a zero page, which is really just a shortcut if you need a lot of pages of zeros. We also have an unmeasured page type. I don't believe this is currently used in our patches, but it's a page that can be used to supply information from the hypervisor to the guest where the information itself is not measured. However, the metadata is measured.
So the fact that you have an unmeasured page at a specific location in your address space is part of the measurement; the contents are not. This can be used to pass host-specific information. The final two types of pages I'm going to talk about in a little more detail. These are special pages that the security processor helps create. The first one is the secrets page. This is a new concept; actually, both of these are new concepts in the SNP architecture. Typically there will be one secrets page in a VM. The guest physical address of this page needs to be at a known location, so the guest is built knowing that it is going to have a secrets page at a certain address, and of course that is measured as part of the metadata. There's a structure in our spec that defines what's in there: some information such as the actual family/model/stepping of the silicon, which can be important for some purposes, and some fields to help with live migration. But really the most important things in this page are a set of keys called the VMPCKs, or Virtual Machine Platform Communication Keys. These are AES-GCM keys that are used for secure communication between the guest VM and the AMD secure processor. We have a mechanism we call a guest request, where if the guest wants something like an attestation report, or certain types of key derivation to get sealing keys or things of that nature, it can construct an encrypted message and send it through the hypervisor to the security processor. The security processor, of course, has the same keys, and it can use those to verify that the message is authentic and provide a secure response. So these keys are very important. I'll talk about attestation in just a little bit. The last type of special page that we support in the architecture is a CPUID page. This is a page that is filled in initially by the hypervisor.
The security processor then filters that CPUID page, verifying that the hypervisor does not violate any security properties, as we define them, when supplying CPUID information. CPUID information in some cases is completely harmless; the values really don't matter. There are other cases where we are concerned there could be security issues in the guest if it were to get bad information. An example would be something like the size of the XSAVE area: if incorrect size information were provided, it could lead to a buffer overflow in the guest or something like that. So the hypervisor fills in this table with information for the various CPUID leaves it thinks the guest is going to need, and if those do not pass the security policy, we fail the launch update. What that means is that if the guest is actually running and has made it through launch update and launch finish, then all the information in that page is safe, and the guest can consult the page for any CPUID needs; it doesn't need to go back to the hypervisor for CPUID emulation. The specific rules used to enforce the security are documented in something we call a PPR, the Processor Programming Reference. That is a public document on our website, and there is a table in it with information on each leaf and, in some cases, each field, describing what security checks apply. Because each generation of processor we make has different CPUID functions, this information lives in a processor-specific document. So if you look up the PPR for family 19h, which is our third-generation EPYC, you'll find the security policy table there, with a few different types of common checks that are used depending on the field. All right, so at this point we have created the launch context and done launch update to install pages into the guest. The final step is launch finish, where we can supply an ID block.
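[Editor's note: going back to the CPUID page for a moment, the "consult the page instead of exiting to the hypervisor" idea can be sketched as a toy lookup table. The entry layout here, a simple (leaf, subleaf) to register-tuple map, is invented for illustration; the real format is defined in the SNP firmware ABI specification.]

```python
# Toy sketch of an SNP guest serving CPUID from the PSP-filtered CPUID
# page instead of taking a hypervisor exit. If launch succeeded, every
# entry already passed the PPR policy checks, so the values are trusted.

class CpuidPage:
    def __init__(self, entries):
        # entries: {(leaf, subleaf): (eax, ebx, ecx, edx)}, filled in by
        # the hypervisor and vetted by the security processor at launch.
        self.entries = dict(entries)

    def lookup(self, leaf, subleaf=0):
        try:
            return self.entries[(leaf, subleaf)]
        except KeyError:
            # A leaf the hypervisor never supplied at launch.
            raise LookupError(f"leaf {leaf:#x} not provided at launch")

# Leaf 0 with the "AuthenticAMD" vendor string in EBX/EDX/ECX.
page = CpuidPage({(0x0, 0): (0x10, 0x68747541, 0x444D4163, 0x69746E65)})
eax, ebx, ecx, edx = page.lookup(0x0)
assert eax == 0x10
```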
The ID block is another new concept in the SNP architecture. It is created and signed in advance by the guest owner, and it contains the expected measurement that we are going to verify as part of launch finish. It also contains some information about policy, as well as various unique parameters that can be used in key derivation. One of the most important things is that it is signed by the guest owner using their private key. That's important because the signature can then be used to distinguish my guest from your guest from somebody else's. Technically this is an optional part of the SNP architecture; you can launch a VM without it. However, if you do supply an ID block, it is used in both key derivation and attestation. Brijesh is going to talk in a bit more detail about attestation and some ways it can be used, but I want to give you the high level first. During runtime, one of these VMs can ask for an attestation report, and it does that using those VMPCKs, the AES-GCM keys we talked about a minute ago. It creates a message saying, I want an attestation report and I want this data to be included in it. That encrypted blob goes through the hypervisor to the security processor, which turns around and provides a report. The report contains the identity information from that ID block, the VM-supplied data, and some other platform information, and it is all signed with something called a Versioned Chip Endorsement Key, or VCEK, which is a chip-unique key that is specific to the TCB versions running on that platform. That signed report is given back to the guest, which can then supply it to anyone it wants. The remote party can verify the report by combining it with a certificate from AMD: if you go to our website, we have something called a KDS, a key distribution server, and you can supply a specific part number along with the TCB versions running on that part.
We will then give you a certificate, assuming of course it's an authentic part, signed by an AMD root key, which you can use to verify attestation reports from that box. An important thing is that the certificate is valid for all the attestation reports coming from that box at that version, so this is not something you need to fetch every single time; it's more of an at-provisioning-time thing. You use that to verify that the attestation report you get back is legitimate. The data in the attestation report can then be used to securely communicate with the guest. One example, shown here, is that the guest generates a public/private key pair and asks for an attestation report that contains a hash of the public half of that key. That way, when the guest gives the key and the attestation report to a third party, that party can verify the report and know that the key is in fact associated with that guest, and it can then use the key to encrypt communication that it knows only that guest will be able to observe. That's not the only way to use attestation, but it's one of the key ways we see. Attestation is a very important part of this architecture, because until you do the attestation you really don't have assurance that the guest is running in a secure environment, at TCB versions that you consider acceptable. So I know that's a very quick whirlwind tour through the basics of SNP. I'm going to hand things over to Brijesh, who will talk through an update on the development status. Thank you, David. My name is Brijesh Singh. I've been working on SNP enablement; I'm part of the Linux kernel team at AMD. I did the SEV and SEV-ES work, and now SNP. I'll walk through the things we have been submitting upstream and get your feedback.
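[Editor's note: the key-binding attestation pattern David just described, a hash of the guest's public key carried in the report's user-data field, can be modeled with a short sketch. HMAC is used here only as a self-contained stand-in; the real report is signed with the VCEK using ECDSA P-384, and verification chains up to the AMD root via the KDS certificate.]

```python
# Toy model of binding a guest key pair to an attestation report:
# the guest requests a report over the report_data (hash of its public
# key), the PSP signs the report, and a remote party verifies both the
# signature and the key binding before trusting the key.

import hashlib, hmac

PSP_KEY = b"toy-vcek-stand-in"  # placeholder for the chip-unique VCEK

def psp_attest(report_data):
    # The PSP builds a report containing the guest-supplied data and
    # signs it (HMAC here; ECDSA with the VCEK in reality).
    report = b"SNP-REPORT|" + report_data
    return report, hmac.new(PSP_KEY, report, hashlib.sha384).digest()

def verify(report, sig, expected_pubkey):
    # Relying party: check the signature, then check that the report
    # binds the public key the guest presented alongside it.
    good_sig = hmac.compare_digest(
        sig, hmac.new(PSP_KEY, report, hashlib.sha384).digest())
    bound = report.endswith(hashlib.sha384(expected_pubkey).digest())
    return good_sig and bound

guest_pubkey = b"guest-ephemeral-public-key"
report, sig = psp_attest(hashlib.sha384(guest_pubkey).digest())
assert verify(report, sig, guest_pubkey)        # genuine key: accepted
assert not verify(report, sig, b"attacker-key") # swapped key: rejected
```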
The very first thing, if you followed the SEV and SEV-ES enablement, is that we introduced a bunch of ioctls in the KVM driver to create and manage encrypted virtual machines. In the SEV case there were ioctls for SEV initialization, launch start, launch update, and so on. The same concept carries over to SNP. KVM will have new commands, starting with SNP init, which is used to initialize the context and allocate the ASID, because one of the things with an encrypted guest is that it needs to use a specific ASID range. So with init we allocate the ASID, and then the launch start command that David just talked about can be called by the VMM with all its parameters; KVM takes those parameters and passes them down to the security processor. Launch update is the same: it takes the parameters given by QEMU, or whichever VMM, and passes them down. While passing these parameters down, KVM also does some pre-processing. For example, the VMM gives KVM guest memory, basically the guest BIOS, placed in the guest's virtual address space; KVM goes and finds the corresponding system physical addresses and calls the PSP firmware commands to encrypt those pages. Finally there is launch finish, which takes the remaining parameters, like the ID block, and passes them down as well. One of the new things in the SNP architecture, as David mentioned, is that we allow the guest to communicate with the PSP through guest message requests. So one of the things we have added is control over how many guest requests can come from a guest, because a malicious guest could just start issuing a bunch of guest requests to the PSP and create a denial of service. With these ioctls, we can limit the number of requests.
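[Editor's note: the ioctl sequence just described can be summarized as a toy model in plain Python. The class, the ASID range, and the method names are invented for illustration; the real interface is a set of commands issued through KVM's memory-encryption ioctl, with the PSP firmware doing the actual encryption and measurement.]

```python
# Toy walk-through of the SNP launch ioctl flow: SNP init allocates an
# ASID from an SNP-capable range, launch start creates the guest
# context, launch update hands pages to the firmware to encrypt and
# measure, and launch finish closes the context.

class ToyKvmSnp:
    SNP_ASID_RANGE = range(1, 16)  # hypothetical SNP-only ASID pool

    def __init__(self):
        self.asid = None
        self.context_open = False
        self.measured_pages = []

    def snp_init(self):
        self.asid = min(self.SNP_ASID_RANGE)  # grab a free SNP ASID
        return self.asid

    def launch_start(self, policy):
        assert self.asid is not None, "SNP init must run first"
        self.policy = policy
        self.context_open = True

    def launch_update(self, gpa, data):
        assert self.context_open, "launch start must run first"
        # KVM would resolve the GPA to a system physical address and ask
        # the PSP to encrypt and measure the page; we just record it.
        self.measured_pages.append((gpa, data))

    def launch_finish(self):
        self.context_open = False  # PSP finalizes the launch measurement

vm = ToyKvmSnp()
vm.snp_init()
vm.launch_start(policy=0)
vm.launch_update(0xFFFF0000, b"OVMF image ...")
vm.launch_finish()
assert vm.asid == 1 and not vm.context_open
```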
For the QEMU support, what we did in the past is add a new object called sev-guest: you take the current QEMU command line you use for launching a virtual machine, you add the sev-guest object, and it automatically becomes an encrypted guest. For SNP, all you do is use a sev-snp-guest object instead of sev-guest, and that makes it an SNP guest. There are a few new ioctls also added for system-wide configuration, such as platform status. This ioctl gives status information about the firmware, for example what version of firmware is being used and other details; you can find all the output fields in the SNP specification. And there are ioctls to get or set the configuration used for the attestation report. Next is the GHCB version. If you followed the SEV development: starting with SEV-ES, the registers are encrypted, and in SNP the registers are also encrypted. What that means is that whenever the guest does something which causes a world switch, the registers are encrypted and the hypervisor is no longer able to access them. To help with that, architecturally there is a notification from the hardware to the guest telling it that a non-automatic exit is happening, and the guest can then choose which register state it wants to expose to the hypervisor. All of this contract is defined through the GHCB, the Guest-Hypervisor Communication Block specification. This specification was developed during the SEV-ES development, and we have now extended it with a few more non-automatic exits to support SNP. There are a bunch of them, as you can see; I won't go through each one, just some important ones. One of the most important is the page state change.
As David was saying, one of the things we need to do before the guest accesses a private page is validate that page, and validation is a two-step process: first you transition the page in the RMP table, then you call PVALIDATE. The page state change VMGEXIT can be used by the guest to ask the hypervisor to add a particular page into the RMP table as a guest private page. That's the page state change exit. Then there is a new VMGEXIT which can be used to query the features supported by the hypervisor, because in SNP there are multiple features available, and it's possible that a hypervisor implements certain features while others are not yet available. Through the hypervisor feature VMGEXIT, a guest can query the capabilities of the hypervisor and act accordingly. Another one is the guest message request: this is the VMGEXIT a guest can use to issue a request to the PSP for an attestation report, key derivation, and so on. And then there are a bunch of other VMGEXITs, mainly to support restricted interrupt injection; those are AP creation and the doorbell page. You can find all the detailed information in the GHCB specification; the link is there. So let me go through page validation, because this is one of the very important parts of SNP. If you look at the specification, there is a VMGEXIT called SNP page state change, and it takes a structure; there is this structure here on the slide. The first thing the guest does is issue the VMGEXIT asking to add a particular GPA as a private GPA in the RMP table. It issues the request, the request comes to the hypervisor, the hypervisor finds the system physical address for that page, uses the RMPUPDATE instruction to add that page to the RMP table, and then resumes the guest.
The guest then calls the PVALIDATE instruction as the final step to take private ownership of the page. Through this VMGEXIT we can actually batch multiple requests at a time; you can go all the way up to 253 entries. One of the fields in the page state change request is the page size, so the guest can say it wants to add a range of GPAs as a 2MB page. But this is just a hint to the hypervisor: the hypervisor may choose to add those pages as 4K, because if it's not able to back them with a 2MB page it may fall back to 4K. In that case, when the guest goes and tries to validate the page as 2MB, PVALIDATE will return an error, something like a size mismatch. That's a hint to the guest that the page size it requested was not honored by the hypervisor, and the guest can then use the smaller 4K size to validate that range. This process keeps going until page validation is complete. Now, for the integrity checks in the RMP table that David was talking about, two types of RMP faults can happen: an RMP fault can occur during a page walk from the host side, or a nested page fault can happen when the page walk is happening from the guest side. The first one I want to cover is how we deal with an RMP violation fault when the host is trying to access a guest page. That page could sometimes be a private page, or it could be a shared page.
The strategy we have taken is that when a page gets added to the RMP table, we unmap that page from the direct map, so that the Linux kernel should never access those pages, because if it did it would take a page fault and there is no way to recover from that fault. That's our proactive way of making sure the page is not present in the direct mapping. But user space can still go and try to access it. So when user space accesses guest memory and we see a host RMP violation, there are two cases. If it is a write access, then the process has no legitimate reason to write to a guest private page, and we send a SIGBUS signal to kill that process. If it is a read access, that's fine; we'll never see a fault for reads. Another thing that comes up is the host backing page size. There are situations where, for example, the VMM allocates a 2MB backing page, the guest issues a page state transition saying this page needs to be added as 2MB in the RMP table, and we add it as 2MB. But some time later the guest can make one of the sub-pages within that 2MB range a shared page and ask the hypervisor to access it; for example, it could be a DMA page in the middle of that range. When the hypervisor attempts to access that page, if the hypervisor's mapping is not the same page size as what is in the RMP, it's going to cause an RMP violation. To solve that problem we split the pages on demand: when we see the fault, if the fault is because of a page size mismatch, we go and split those pages. That's the strategy for the host. In the case of the VM, if the guest is trying to access a page and that causes an RMP check violation, we get a nested page fault with some additional information about what caused the RMP check failure.
There are a few new bits added to the nested page fault error code. For example, one is the RMP bit, which tells you the nested page fault happened due to an RMP check failure. There is another bit which tells you whether the page was being accessed as encrypted, meaning with the C-bit equal to 1, or as shared. And there can be a size mismatch hint: consider the case where the guest is trying to PVALIDATE a page as a 4K page but the page was backed as 2MB; that's a size mismatch on the hypervisor side, and we get those hints. Whenever we see a nested page fault due to an RMP check failure, we take some corrective action to fix it; one of the actions is to call RMPUPDATE or PSMASH and the other such instructions to resolve the fault. So, there are multiple ways page validation can be approached. As we said, page validation is one of the key things: all memory pages need to be validated before being accessed. There are two approaches we can take. One is to pre-validate the entire memory before it gets accessed. In the current implementation, that's what we do: the guest OVMF BIOS pre-validates the entire memory before the Linux kernel is loaded. The way this works is that first the hypervisor puts some data, the guest BIOS, into the guest memory space using the launch update commands. When the launch update command is issued, it copies the data into the guest memory space and also validates those particular pages. So there are a few pages which are pre-validated before the virtual machine even boots.
When the virtual machine starts booting, the OVMF BIOS takes over, and the very first thing it does, as soon as it knows how much memory is on the system, is use the page state change request to add all those pages as private pages and then call the PVALIDATE instruction to validate the entire memory. From there onwards, all the pages are pre-validated and the guest kernel does not have to worry about validation. But if the guest kernel makes a page shared, it needs to issue a page state transition to invalidate the page, basically changing the page state to shared in the RMP table. Another approach we could take is lazy validation, which is on-demand validation: whenever a page gets accessed, that's when you validate it. In this model, the guest BIOS would validate only the memory it needs during its own execution. Then, in the latest UEFI specification there is a new memory type called unaccepted memory; the guest BIOS can build an unaccepted memory table and pass it down to the guest OS. The guest OS can parse that table and validate the regions that were not previously validated. While doing so, the guest OS, and in this case the guest BIOS as well, needs to ensure it is not double-validating any of that memory, so it needs to maintain that information itself. Another interesting problem can come up if the guest OS wants to do a kexec: if memory was previously validated by the guest OS, you need to pass that information down to the next kexec'd kernel so that the new kernel does not go and double-validate those pages.
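[Editor's note: the lazy-validation scheme above can be sketched as a toy accept-on-first-touch model. The class and the "unaccepted ranges" representation are invented for illustration; the real mechanism uses the UEFI unaccepted memory type and PVALIDATE, and the bookkeeping would also have to be handed across kexec.]

```python
# Toy model of lazy validation: the guest BIOS validates only what it
# needs and records the rest as unaccepted; the guest OS accepts pages
# on first touch and keeps a record so no page is validated twice
# (double PVALIDATE is exactly what the security argument forbids).

class LazyAcceptor:
    def __init__(self, unaccepted_ranges):
        self.unaccepted = set()
        for start, end in unaccepted_ranges:  # page-frame ranges from the table
            self.unaccepted.update(range(start, end))
        self.validated = set()  # bookkeeping against double validation

    def touch(self, pfn):
        # Accept (PVALIDATE) on first access; later touches are no-ops.
        if pfn in self.unaccepted and pfn not in self.validated:
            self.validated.add(pfn)  # models the PVALIDATE call
            return "validated"
        return "already-valid"

acceptor = LazyAcceptor([(100, 104)])
assert acceptor.touch(100) == "validated"
assert acceptor.touch(100) == "already-valid"  # guard against double PVALIDATE
```

The trade-off is boot time versus run-time cost: pre-validation pays the whole cost up front, while this model spreads it across first accesses.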
The next thing we added in the kernel is the guest attestation driver. As David talked about, one of the things SNP provides is a method by which a guest can talk to the PSP, and to talk to the PSP it needs to issue the VMGEXIT specified in the GHCB specification. For that we wrote a driver called the sev-guest driver. It provides a few ioctls to user space, and user space can call those ioctls to send a request down to the PSP. From the user interface point of view it's very simple: you just open the device and issue an ioctl to it, and the kernel driver constructs the entire packet, because when you send a request down to the PSP, the request needs to be encrypted with one of the keys provided in the secrets page. The sev-guest driver takes care of encrypting the packet, sending it down to the PSP, receiving the response, decrypting it, verifying that the message decrypted successfully, and then passing the report up to user space. So through it you can do get report, key derivation, or a third one called get extended report. The idea behind the extended report is the same as the regular report, but it also provides additional certificates, which can be configured system-wide by the hypervisor; that configuration is done through one of the ioctls I talked about at the very start of the presentation. At AMD, our colleagues Jesse and Liam have been developing an example application which makes use of this attestation driver to query the report and then verify and validate the entire chain. What they have been doing is extending the Multipass package to first create an SNP guest; that's the first change. The Multipass client, running on the guest owner's side, sends a request down to the Multipass daemon, and the Multipass daemon creates an SNP guest.
While they send the request, they also pass along the guest owner's public key. During the launch flow, one of the things you can provide is user data while creating the launch context. So what the Multipass daemon does is take the user-provided data, in this particular case the guest owner's SSH public key, use it during the launch context creation, and boot the virtual machine. Once the virtual machine is booted, it uses the attestation driver to query the report. The report contains the measurement and other additional information, including the host data that was passed in during launch context creation. They take that data and give it back to the Multipass client, which verifies the entire certificate chain. Once the entire chain has been verified, they verify the measurement and the host data and make sure that it is actually the guest they wanted to launch. While doing so, they also add the key to the SSH known_hosts file, so that the guest owner can now easily connect to the VM.

Just to summarize everything for SEV and SEV-ES: SEV guest support landed in kernel 4.15, the hypervisor support went into the 4.16 kernel, QEMU 2.12 is where SEV support landed, and libvirt 4.5. From a distribution point of view, I think Ubuntu 18.04 has the SEV guest support, and later versions of Ubuntu and Fedora have support for the SEV hypervisor side; RHEL has good support for this as well. SEV-ES support recently landed, and one thing we are working on right now is live migration support for SEV and SEV-ES.

So let me quickly go to SEV-SNP. For SEV-SNP I have been submitting the patches for almost four to five months. We have been getting very good feedback from the community, and so far we have
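The guest-owner check described above can be sketched roughly as follows. This is a conceptual illustration, not the actual Multipass implementation: the field names, the use of a SHA-256 digest of the SSH public key as the launch host data, and the elision of signature/certificate-chain verification are all assumptions for clarity.

```python
import hashlib

def verify_snp_report(report: dict, ssh_pubkey: bytes, expected_measurement: bytes) -> bool:
    """Conceptual guest-owner check on a parsed attestation report.
    (Verifying the report signature against the certificate chain, which
    the Multipass client does first, is elided here.)"""
    # The data supplied at launch context creation is echoed back in the
    # signed report; here we assume it was a digest of the SSH public key.
    if report["host_data"] != hashlib.sha256(ssh_pubkey).digest():
        return False
    # The measurement covers the initial contents of the guest image.
    if report["measurement"] != expected_measurement:
        return False
    # Only now would the client add the guest to SSH known_hosts.
    return True

pubkey = b"ssh-ed25519 AAAA... owner@host"
good = {"host_data": hashlib.sha256(pubkey).digest(), "measurement": b"m" * 48}
print(verify_snp_report(good, pubkey, b"m" * 48))  # True
```

The point of the flow is that the hypervisor cannot forge this binding: the report is signed by the PSP, so a matching measurement and host data prove the client is talking to the guest it actually launched.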
reached version 5, and we are feeling pretty confident that things are going in the right direction. For the guest BIOS we have been contributing to OVMF; this slide says version 6, but actually I just recently submitted version 8, so we are getting pretty close over there. I think the guest BIOS patches are in very decent shape. Because SNP is a very complicated feature, what we are doing is base enablement first, and then we will keep adding support as we progress. What we have support for right now is the guest driver, which you can use to carry out attestation, and then the thing David talked about, the CPUID page: the guest kernel as well as the guest BIOS use the CPUID page to get CPUID values instead of going to the hypervisor. In the current support we go with the pre-validation approach, where OVMF validates the entire guest memory, and it supports multiple vCPUs.

Where we will be focusing after the base enablement is restricted interrupt injection. One of the optional features SNP provides is the ability to partially disable the interrupt injection interface from the hypervisor; the hypervisor can then use a newly defined exception vector called #HV to send an event or interrupt to the guest. The GHCB specification has good information about how all of those things can be achieved. Then we will also be working on lazy validation, where we can speed up the boot time, and after that on live migration support.

Another thing I'll just touch on real quick is vTPM support. One of the things the SNP architecture provides is VMPL, virtual machine privilege levels. The idea here is that you can divide the guest address space into multiple privilege levels. So for a virtual TPM, one thing we could do is run a separate piece of code at VMPL 0, which is called the SVSM, the Secure VM Service Module. This comes from a specification proposed by Microsoft: they have proposed a specification called SVSM that defines a set of services an SVSM can provide. The reason this model works is that when you divide the address space into privilege levels, there are certain instructions a lower VMPL cannot execute. For example, a lower VMPL cannot execute PVALIDATE; for that it has to go to VMPL 0, and how it calls into VMPL 0 is what the specification defines.

I think I'm running out of time, so just to conclude: all the SNP patches are available on GitHub, you can download them from there, and if you have a 3rd generation EPYC processor then this should all work. And please give us your feedback on the mailing list. Thank you.
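The VMPL split described above can be modeled conceptually like this. This is a toy sketch of the idea only, under the assumption that privileged operations such as PVALIDATE are forwarded from a lower VMPL to a service module at VMPL 0; the real call interface and service set are defined by the SVSM specification, and all names here are illustrative.

```python
class Svsm:
    """Toy service module running at VMPL 0."""
    def __init__(self):
        self.validated = set()  # guest physical addresses validated so far

    def handle(self, caller_vmpl: int, op: str, gpa: int) -> str:
        # Requests come from lower (less privileged, higher-numbered) VMPLs.
        assert caller_vmpl > 0, "VMPL 0 does not call itself"
        if op == "pvalidate":
            # Only VMPL 0 may execute PVALIDATE, so the SVSM performs the
            # validation on behalf of the lower-privilege caller.
            self.validated.add(gpa)
            return "ok"
        return "unsupported"

class GuestAtVmpl1:
    """Toy guest OS running at VMPL 1."""
    def __init__(self, svsm: Svsm):
        self.svsm = svsm

    def validate_page(self, gpa: int) -> str:
        # The VMPL 1 guest cannot run PVALIDATE itself; it must call into
        # the VMPL 0 service module.
        return self.svsm.handle(caller_vmpl=1, op="pvalidate", gpa=gpa)

svsm = Svsm()
guest = GuestAtVmpl1(svsm)
print(guest.validate_page(0x1000))  # ok
```

The design point is that anything running at VMPL 0, such as a virtual TPM inside the SVSM, is protected from the rest of the guest by hardware privilege levels rather than by the hypervisor.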