All right, well, thank you very much and good morning. As mentioned, we're going to be talking about the Enarx project, which is built on top of AMD's Secure Encrypted Virtualization, or SEV, technology. I'm going to kick things off with some background on what SEV is and some of its capabilities around lifecycle flows, including attestation. Then I'll hand things over to Nathaniel, who will talk more about Enarx specifically and how it utilizes those capabilities. One of the things we've been doing at AMD, which you may have heard me talk about at previous conferences, is building a hardware AES-128 engine into our memory controllers. This is present on all of our Zen-based processors from the last couple of years. This encryption engine can transparently encrypt and decrypt DRAM traffic as it leaves the boundary of the SoC. We have two different features that utilize this engine. The first we call Secure Memory Encryption, or SME. It uses a single key generated randomly at boot time and can encrypt some or all of physical memory, primarily to protect against things like cold boot attacks. It can be enabled in the BIOS, or via a Linux kernel command line parameter. The second feature, which is the one we're focusing on today, is Secure Encrypted Virtualization, or SEV. This feature uses multiple encryption keys and assigns one key to each virtual machine in order to cryptographically isolate their memory both from each other and from the hypervisor. With both of these features, the software on the CPUs is never aware of the actual encryption keys.
The keys are generated randomly, held in special hardware registers, and maintained by what we call the AMD Secure Processor, which you'll sometimes hear referred to as the PSP, the Platform Security Processor. The AMD Secure Processor is a dedicated hardware subsystem that exists in the SoC. It is anchored by an ARM Cortex-A5 core and has some dedicated hardware resources, like a private SRAM and some crypto capabilities. It is responsible for managing encryption keys and performing various lifecycle tasks related to virtual machines. In a typical configuration, the AMD Secure Processor forms the root of trust, and it exposes a publicly documented API containing functions related to VM startup, VM migration, attestation, and so on. Those functions are called by the hypervisor. The hypervisor is responsible for performing this interaction, for allocating system resources such as memory and physical CPUs, and of course for scheduling the virtual machines to run. The guest operating system inside the virtual machine is a trusted part of this architecture, and it is responsible for dividing its memory between its private memory, which is encrypted with its own key, and its shared memory, which is visible to the hypervisor. In a typical SEV system, the vast majority of guest memory is encrypted, thus protecting the data being actively worked on, except for a few pages like the software IOTLB that are used for DMA operations to the hypervisor and other outside entities. The applications inside the virtual machine are not modified at all; one of the benefits of this architecture is that the enlightenment happens at the operating system level. I should say that this is the standard virtual machine model. It's not the only model, as we'll talk about here as well. In a typical setup, we have support for this with KVM and QEMU, and the support is upstream.
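The guest's split between private (encrypted) and shared pages can be sketched as a toy model. On real hardware the guest OS sets or clears an encryption flag, the "C-bit", in each page-table entry, and the memory controller encrypts any page whose C-bit is set with that VM's key. The bit position and the page-table layout below are illustrative assumptions, not the real format (hardware reports the actual C-bit position via CPUID):

```python
# Toy model of SEV's per-page encryption choice. The guest OS controls the
# C-bit in each page-table entry; the memory controller encrypts pages with
# the bit set. Bit 47 is an illustrative assumption -- real hardware reports
# the C-bit position via CPUID.
C_BIT = 1 << 47

def make_pte(phys_addr: int, private: bool) -> int:
    """Build a toy page-table entry: physical address plus optional C-bit."""
    return phys_addr | (C_BIT if private else 0)

def is_encrypted(pte: int) -> bool:
    return bool(pte & C_BIT)

# The guest marks most memory private, but leaves the software IOTLB
# bounce buffer shared so DMA to the hypervisor still works.
private_page = make_pte(0x1000, private=True)
shared_dma_page = make_pte(0x2000, private=False)
```

The key point the model captures is that the choice is the guest's: the hypervisor never decides which pages are plaintext.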
We have a GitHub page on our website with instructions for all the major Linux distros now, so if you're interested in setting up a VM with this enabled, we have instructions on how to do that. I should mention that this technology first came out a couple of years ago with our first-generation EPYC processors. Only our server processors support this multiple-key encryption mode; the single-key mode is supported by all of our products. The first-generation processors support a maximum of 15 keys, so you can only have 15 different virtual machines running at the same time. We recently, about two weeks ago, released our second-generation server processors. Those support up to 509 keys, so you can have substantially more guests running at the same time. So the SEV feature protects the memory of the guests, but there are other things to protect as well. One of the big ones is the register state of the virtual machine. We created a second, optional feature called SEV-ES, or SEV with Encrypted State, that is designed to protect the register state of the virtual machine across world switches. In particular, this involves hardware changes such that all of the guest's register state is swapped as one atomic operation, whereas previously this took a number of instructions on x86. All of that register state is protected using the guest's encryption key. Because of that, special flows have to happen whenever any kind of virtualization support is required around things like MMIO or instruction emulation. So there is a new exception: whenever the guest does something like an MSR access that requires hypervisor support, an exception is now thrown inside the guest, and a handler is invoked which communicates with the hypervisor in order to resolve the situation. This protocol uses something we call a Guest-Hypervisor Communication Block, or GHCB, which is an unencrypted page.
The idea is that the guest chooses what register state it wants to expose, as opposed to letting the hypervisor see absolutely everything inside the guest. The protocol for this is documented publicly, again on our web page. This feature is newer as far as open source support goes. We have patches on our GitHub that support it today, which you're welcome to try out. They haven't gone upstream yet, but we're expecting to do that relatively soon. So combined, SEV and SEV-ES don't protect against all possible attacks on a VM, but they do reduce the attack surface substantially. Both features can protect the memory of the virtual machine against things like a scrape attack, such as someone with root access using dd to dump the guest's memory. They can also protect against things like some VM escapes or side channel vulnerabilities that result in reading memory as the hypervisor. Because the hardware is aware of what mode it's operating in, it ensures that the hypervisor is only able to see the ciphertext of the virtual machine. Similarly, we have DMA protection: if you plug in a device, devices are not allowed to access memory using the guest encryption keys, so they can only see the ciphertext as well. Protection of the VM register state, as I mentioned, was added in the SEV-ES feature; that's really the primary difference between the two. Both features can protect against what we call offline physical attacks, things like cold boot attacks, again because that memory is encrypted and the encryption key is stored inside the SoC hardware, not anywhere on the DRAM chip itself. And we can also protect against certain kinds of boot-time attacks, such as an integrity attack on the initial image of the VM as it's starting up, or what's listed here as a counterfeit machine attack.
That could mean either attempting to start the virtual machine on hardware that isn't real, or just not turning on the security features the customer is expecting. Both of these cases are handled through the attestation flow, which I'll talk about in a second. Before I get into the attestation flow, let me walk through how a typical VM is started. When the hypervisor wants to run one of these SEV VMs, it first asks the secure processor to generate an encryption key. This is a random key, and it's assigned to a specific slot in the key table in the memory controller. After that, the hypervisor places the initial image of the guest, unencrypted, into memory. This initial image is assumed to be something like a guest BIOS; again, this is speaking in the context of a traditional virtual machine, and it's not the only way this can be done. This initial image is not expected to contain any secrets. The hypervisor then calls the secure processor, asking it to encrypt this image with the VM's encryption key. As it does this, the secure processor computes a cryptographic measurement of the contents it is encrypting. At the end, the hypervisor closes the context, and the secure processor generates an integrity hash over what it has measured. This is the point at which the attestation protocol happens. It allows the owner of the guest, which would be, say, the cloud customer in a typical cloud computing scenario, to determine whether they like the way the VM was started, and if so, they have the ability to inject some sort of secret material. Diving into that a little further: this slide makes things look more complicated than they really are.
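The "measure what you encrypt" step above can be sketched as follows. This is a hedged stand-in, not the real SEV launch flow: the actual secure processor folds the digest into an HMAC along with policy and version fields, whereas this toy just hashes the initial image pages in load order so a tampered image produces a different measurement:

```python
import hashlib

# Toy sketch of the launch measurement: the secure processor hashes the
# initial (secret-free) guest image as it encrypts it, so the guest owner
# can later check that exactly this image, in exactly this order, was
# loaded. Plain SHA-256 here is an illustrative simplification.
def measure_image(pages: list) -> str:
    digest = hashlib.sha256()
    for page in pages:          # pages are measured in load order
        digest.update(page)
    return digest.hexdigest()

bios_image = [b"guest BIOS page 0", b"guest BIOS page 1"]
m_good = measure_image(bios_image)
m_tampered = measure_image([b"bootkit page 0", b"guest BIOS page 1"])
```

Any change to any page, or to the order of the pages, changes the measurement, which is what lets the guest owner reject a modified initial image.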
Essentially, in order to support this attestation and secret injection protocol, there is a Diffie-Hellman exchange that occurs between the AMD Secure Processor running on the physical box and the guest owner, which again would be you if you're the cloud customer. This is done using what we call the Platform Diffie-Hellman key, or PDH. This key is signed by a number of other keys. To simplify, there are really two chains of trust going on here. The first traces back to what's called the OCA, the Owner Certificate Authority. This is something the owner of the box, like the cloud provider, can install in order to demonstrate ownership of the platform. It can assure you that your VM has started up in the data center you expect, and it's primarily used for enforcing things like migration policies. It's an optional feature. The second chain of trust goes back to AMD, and this proves that you're running on authentic AMD hardware. This goes through fuses: every part we manufacture has a unique key, which we call the Chip Endorsement Key, and that key is eventually signed by an AMD root key. If you go on our website, we have an interface where you can get the certificate chain for any AMD EPYC part: you put in the unique identifier of the part and you get the certificate chain. We also have our public key available on the website, which allows you to verify this entire chain when the Diffie-Hellman exchange occurs. Putting this all together: when the VM starts up, the hypervisor starts off by getting this PDH, the Platform Diffie-Hellman key, from the secure processor. It also gathers the various certificates that, again, we post on our website, and sends them to the guest owner. The guest owner verifies that the certificate chain is correct, and if everything checks out, they generate their own Diffie-Hellman key and send that back to the system.
This is then provided to the secure processor during the launch process. The hypervisor continues by installing the initial pages for the VM and encrypting and measuring them. When it's finished, it gets what we call the launch measurement from the AMD Secure Processor. This measurement is an HMAC that covers the measurement of the pages, as well as information about the policy under which the guest was started, like whether it had debug enabled, and some other version information. This gets sent over to the guest owner. I should mention there are some nonces in here to ensure freshness, which I've glossed over, but there are details in the spec. The guest owner verifies that this HMAC is correct and that it contains the expected information. If so, they can turn around and encrypt a secret. The secret could be something like a disk decryption key to allow the VM to continue booting, or some other sort of root key for the system. That is sent over this encrypted Diffie-Hellman channel to the hypervisor, which passes it in turn to the secure processor, which is able to decrypt it and inject it into the initial image of the VM. At that point, the VM can be run. If this whole process succeeds, somewhere in the VM's encrypted memory image, at a presumably well-known location, it will be able to find the secret, which it can then use to continue the boot process. If the attestation process fails for some reason, or if the hypervisor just chooses not to do it at all, then presumably the secret will not exist and the boot cannot continue. So that's the basics of SEV and the basics of the attestation flow. At this point, I'm going to hand things over to Nathaniel, and he's going to talk about how this is used with Enarx and the cool things it can do. Awesome, thank you, David. I'm going to switch some slides here for a moment.
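The guest owner's side of this check can be sketched in a few lines. This is a hedged model, not the real SEV wire format: the field layout, the 4-byte policy encoding, and the key name `tik` are all illustrative assumptions. What it does capture is the logic: recompute the HMAC with the key derived from the Diffie-Hellman exchange, compare in constant time, and only then release the secret:

```python
import hmac, hashlib, os

# Illustrative guest-owner verification. Real SEV HMACs a specific
# structure of image digest, launch policy, versions, and nonce; the
# message layout below is a toy stand-in.
def launch_measurement(tik: bytes, image_digest: bytes,
                       policy: int, nonce: bytes) -> bytes:
    msg = image_digest + policy.to_bytes(4, "little") + nonce
    return hmac.new(tik, msg, hashlib.sha256).digest()

tik = os.urandom(32)               # integrity key from the DH exchange (assumed)
nonce = os.urandom(16)             # freshness
expected_digest = hashlib.sha256(b"known-good guest BIOS").digest()
policy = 0x0001                    # e.g. "debug disabled" (illustrative)

# The platform reports its measurement; the owner recomputes and compares.
reported = launch_measurement(tik, expected_digest, policy, nonce)
owner_ok = hmac.compare_digest(
    reported, launch_measurement(tik, expected_digest, policy, nonce))

# Only on success does the owner encrypt and send the secret.
secret = b"disk-decryption-key" if owner_ok else None
```

Note that a different policy value (say, debug enabled) changes the HMAC, so the owner cannot be fooled about how the guest was launched.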
So, okay, I'm Nathaniel McCallum from Red Hat, and I'm going to be talking about the Enarx project. I'm going to go pretty quickly through these slides; we have a lot of material to cover, but I want to make sure we get it all in. The most important thing, of course, is our website, which is apparently very small on the screen. That should be a little better: enarx.io. So let's quickly outline the problem space at a very high level. We all need privacy and integrity in the cloud. This is just a small set of examples: if you have data, if you have algorithms, that qualifies you for needing privacy in the cloud. The difficulty, of course, is that we have a fairly complex stack. This is your traditional virtualization stack, where you have everything from the CPU at the bottom up through BIOS, firmware, hypervisor, bootloader, kernel, user space, middleware, and application. In order to have a trusted application, at least as we currently deploy software, we actually have to trust the entirety of this stack. But unfortunately, you don't have control over all of the stack. Typically, you're only the person writing the application at the top. The different colors here highlight where, at least traditionally, the different trust relationships came from. So in virtualization, you could buy an operating system like RHEL and have support for it, and that would give you at least one single point of trust for a large portion of the stack. But this gets much worse with containers, because there are a lot more layers and they come from a lot more places, so the interactions are pretty bad. There's a fantastic xkcd comic which basically gives us the entire stack and shows compromises at pretty much every layer. One way we could try to address this is to measure the entirety of the stack, but that becomes somewhat fragile because the pieces are changing out all the time.
So what we want to do is use these trusted execution environments as a way to remove a bunch of the stack, so that we just don't have to trust it at all. We want to trust basically just the CPU at the bottom, the middleware including Enarx and anything Enarx includes, anything you use in your application, and then finally the application itself. Let's look at trusted execution environments at a very high level. The basic thing they provide is this: there's a host and there's a tenant. The host attests to the tenant; it gives the tenant some kind of cryptographic attestation, which typically includes things like a Diffie-Hellman public key so that you can set up a session key with the actual hardware. It also typically includes a hardware root of trust, and there could be more than one; in the example of SEV you have two, one for the platform owner and one for AMD itself. Finally, you have a measurement of the trusted execution environment, and this proves cryptographically to the tenant that the environment they're about to inject code and data into is not compromised; it's in an okay state. Once the tenant is assured of this through attestation, the tenant can deliver code and data over an encrypted channel into the trusted environment, and they get things like memory encryption, integrity protection, their own independent hardware random number generator, et cetera. In the industry right now, we're seeing the development of basically two different approaches to trusted execution. On the left-hand side we have the process-based model. This is where you draw a line right down through the middle of a process, and there's a secure portion of the process and an insecure portion. The two major public examples of this are Intel SGX and RISC-V's Sanctum.
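The session-key setup mentioned above is just a Diffie-Hellman agreement between tenant and platform. Here's a minimal classic-DH sketch of the idea; real deployments use authenticated elliptic-curve DH bound to the certificate chains, and the prime below is a small Mersenne prime chosen purely for illustration, nowhere near a secure group:

```python
import secrets, hashlib

# Toy Diffie-Hellman: tenant and TEE platform each combine their own
# private key with the other side's public key and arrive at the same
# shared secret, which is hashed down into a session key.
P = 2**127 - 1   # illustrative prime only -- NOT a secure DH group
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

tenant_priv, tenant_pub = keypair()
platform_priv, platform_pub = keypair()

tenant_secret = pow(platform_pub, tenant_priv, P)
platform_secret = pow(tenant_pub, platform_priv, P)
session_key = hashlib.sha256(str(tenant_secret).encode()).digest()
```

The point for the TEE flow is that both sides derive the same key without it ever crossing the wire, so the host in the middle relays the public values but never learns the session key.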
These have several problems, at least from the perspective of being able to develop on them today. First, Intel SGX is not upstream, nor is it even stable yet in terms of the patches going upstream, so you really can't develop on it today, at least not in any way that scales. RISC-V's Sanctum, of course, doesn't have any hardware yet, so that doesn't work either. On the right-hand side we have the VM-based model. Instead of having the security boundary go down through the middle of the process, it sits at the virtual machine boundary. That's an already existing boundary that we know and trust; we've deployed it very widely, everyone knows how it works, and basically we're just going to increase the security of that layer by adding things like encryption and attestation. There are three public models for this. AMD SEV, which my esteemed colleague has already demonstrated. IBM PEF: I believe it was last year at Security Summit that Guerney from IBM gave a talk about what they're doing with PEF. The hardware for that isn't released yet, so it's also not available to develop on today. Intel also has a technology called MKTME, which is currently problematic because it has no attestation. However, they've filed a patent for adding attestation to MKTME, so you can probably guess what's coming next. There are two things I need to highlight as not being TEEs, and those are TrustZone and the TPM. TrustZone is not a TEE because it's really more of a set of utilities for building a TEE; it also has very significant hardware constraints, and the problem of key management is not really solved by TrustZone, so we're excluding it for our purposes because it can't really scale. The other thing is the TPM. We all like TPMs, everyone has them in their laptops, but a TPM is simply not a trusted execution environment.
So we've got a lot of models here, and it's probably no surprise why we started with AMD SEV: it's the first one for which we have both hardware available and fully upstream support in the Linux kernel. However, we fundamentally have this problem: we have a bunch of these technologies coming out, they all provide vastly different ways to approach the problem, and it's really not clear what developers are supposed to do in order to embrace this technology. So this is why I want to introduce the Enarx project. The goal of the Enarx project is precisely to make it so that you don't have to worry about these technologies. But even further, we want you to not write your application to any technology, including Enarx. Enarx is not an application platform, as you'll see in a minute. What we do is basically this: on the bottom left we have process-based keeps — keeps are what we call our trusted execution environments — and this includes SGX and Sanctum. On the bottom right we have VM-based keeps, and Enarx is planning to work in both of those cases. Basically, we put a WebAssembly JIT as the very first thing that runs inside these trusted execution environments. On top of that, we use the WebAssembly System Interface, WASI, which is not yet finalized but is going through an ongoing W3C-affiliated standardization process. On top of this, you can build language bindings like libc, and then you just write your application and deploy it. The important bit here is that we're implementing standards. There's no magical Enarx stuff that you need to do to your application. You take an application, you compile it in a certain way, and Enarx takes it from there. So, for example, here is the deployment model. You'll notice that I've put at the top here that Enarx is not a development framework; it is a deployment framework. The first thing you do, of course, when you're writing an application is choose your language and tools.
You don't really have to think about Enarx at this stage. You just develop your application, and when you're done you compile it to WebAssembly using the standards that are already available. Then, at the time you want to deploy that application, that's when Enarx gets involved. You take the application, you pass it to Enarx, you choose a host, you give it the instance configuration, and then Enarx takes responsibility for deploying it into a secure environment on your behalf. This has very significant benefits, because since we're using WebAssembly, you can deploy the same binary on any of the environments. So imagine you get a vulnerability in one implementation: you just change your deployment configuration and shift immediately to something else. You don't change the binary; you just keep running the same binary, deployed to a different host. Enarx is also going to insist on best practices, which will be on by default and very hard to turn off, and we will yell at you loudly if you try to turn them off. This includes things like: we do not allow plain-text networking, full stop. You can't really do plain-text networking if you don't want your traffic to be observed, so we enforce TLS. We have a project called CypherPipe for this; it's in the process of being slightly rewritten, so we actually have two projects, one called TLSSock and one called CypherPipe, but you can see both on our GitHub. We also plan for no plain-text persisted data: whenever you write data, you get block encryption on the host side, and from the guest side you just see files. We will also insist on an independent random number generator inside the keep, and we will review all APIs that are available to the host to make sure they don't leak data.
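The "host sees encrypted blocks, guest sees files" idea can be modeled in a few lines. This is a hedged toy, not Enarx's actual storage design: the class name is invented for illustration, and the single-digest XOR keystream (which limits a block to 32 bytes here) stands in for real block encryption:

```python
import hashlib, os

# Toy model of "no plain-text persisted data": the keep encrypts every
# block before it reaches the host, so the host-visible store holds only
# ciphertext. XOR with a per-block keystream stands in for real block
# encryption; blocks are limited to 32 bytes in this sketch.
class EncryptedBlockStore:
    def __init__(self, key: bytes):
        self.key = key
        self.blocks = {}        # what the HOST sees: ciphertext only

    def _mask(self, idx: int, data: bytes) -> bytes:
        ks = hashlib.sha256(self.key + idx.to_bytes(8, "big")).digest()
        return bytes(a ^ b for a, b in zip(data, ks))

    def write(self, idx: int, data: bytes):   # guest-side view: plaintext
        self.blocks[idx] = self._mask(idx, data)

    def read(self, idx: int) -> bytes:
        return self._mask(idx, self.blocks[idx])

store = EncryptedBlockStore(os.urandom(32))
store.write(0, b"secret file contents")
```

The key never leaves the keep, so compromising the host's storage yields only ciphertext.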
This, by the way, is why you really can't put containers into these kinds of environments: containers can just call any syscall they want, which means you're basically carrying a pot of soup to the dinner table in a colander. So we're going to give you a demo. As I said, Enarx is not fully production ready, but we do have some portions of it working. What you're about to see is a client, who is going to be the tenant, deploying some code to run on a server. On the server side we have AMD hardware with AMD firmware. The first thing we're going to do is the attestation handshake. This is what gives us the cryptographic validation that we're talking to real AMD hardware. The next thing we're going to do is use the session key derived from that attestation in order to deliver code and data into the secure VM. And then we're going to execute it. Keep in mind while we're doing this that what's important here is not what we are doing, but how we are doing it. We're going to take two numbers, three and four, and add them together, and the output is seven, which I believe is cut off the screen, unfortunately. Yeah, you can see it there the second time, apparently. So it does output the correct number; I think we can all agree three plus four is seven. But again, the most important thing here is how it's being done. Notice that we ran the command twice; we'll talk about that in just a second. But first I want to walk through the steps of what we did. The first thing we did was the client retrieved a certificate chain from the server. This is the certificate chain that David just talked about. Then we validated that certificate chain, which means we now know that we are talking to authentic AMD hardware. And having validated the chain, we say: chain okay.
We can now generate session keys and an execution policy, which we deliver to the host to start the VM. These are validated by the firmware, and the session keys are decrypted by the firmware, so the host can't see them. The next thing we do is have the server start up a virtual machine. This virtual machine is empty, and by virtual machine here I don't mean an operating system; think of it like a virtual CPU. That's all we're talking about. There's nothing in it, and by nothing I mean nothing: no instructions. So we start it up, we measure the empty virtual machine, and we send the measurement back to the client. The client validates the measurement: okay, yes, this is in fact an empty virtual machine; you haven't injected a bootkit in here that you can use to grab data. So now the client has fully attested the server and is ready to deploy code. It has session keys, so the next thing it does is encrypt its code and data and send them directly to the server. This is the point at which I want you to notice that we did this twice. This is exactly the same code and exactly the same input, but notice that the ciphertexts are different, because we have perfect forward secrecy on the delivery of this code. Finally, the server receives this encrypted code and hands it over to the firmware. The firmware injects the encrypted code into the VM, the VM is launched, our code runs, we get the number seven as output, and this all takes about 50 milliseconds. We can't exit full screen. So, as we said, we walked you through exactly what was going to happen, and that's exactly what we did. We did the attestation with the host firmware, we got a session key, we delivered code and data all the way into the secure VM, and we ran the code. And we did so without the host being able to see the code we injected, the data we injected, or what was happening while the code was running. So this is what we call the Enarx keep. This is what we're trying to go for here.
We want to be able to deliver encrypted code and data into some sort of trusted execution environment, which we call a keep, and we want to be able to run it there. However, we need your help. Note that what was actually used in this demo was hand-coded assembly; we don't have WebAssembly running yet, though we do have a good start on that. We have a demos repo that shows some other demos, and we have pull requests against the demos repo, but we would really love to have you participate. This is, of course, open source; everything that Red Hat does is open source. It's licensed under Apache 2.0. Everything we're writing is in Rust, which gives us the great memory safety properties that Rust provides. You can of course see our website at enarx.io and our code at github.com/enarx. So I think I've left enough time for questions here. I'm going to have a mic run around, and you can ask questions of either myself or David. If you ask a question, you get a sticker. Oh, do I get the sticker first? Thank you. Even if it's a bad question. Hi. Yeah, bad questions are great. I'm Mike Halcrow with Google. I was curious about the Asylo project. Asylo is an open source framework and SDK for the development of applications for trusted execution environments, and I was wondering if you could compare and contrast with Asylo. In fact, we anticipated this question, and we have a slide for it. Currently there are two other major initiatives in the open source world. The first one is to just write to the hardware-specific SDK. This is not what you get with the virtual machine based technologies; with virtual machine technology you just get a virtual machine, you don't get an SDK to develop on. You only get an SDK in the process model. So you can write directly to the hardware vendor's SDK. Unfortunately, that gives you hardware lock-in: you're literally writing your whole application to that SDK, and you can only run on that one piece of hardware.
And what happens if that one piece of hardware, the only one you can run on because you developed to its SDK, gets a vulnerability? You're stuck. So we don't think that's a great option. Option number two is to create some sort of abstraction framework over the top of these, and there have been two attempts at this. One is the Asylo project from Google; the other is the Open Enclave project from Microsoft. These basically only work with the process-based model, not VMs. It's really great that people are thinking about abstracting this, but we think it's not great for several reasons. First, we don't want you to develop your application to it. We want you to develop your application like normal and just deploy into these environments; we don't want to change the way you develop your application. Secondly, both projects only provide abstraction over SGX and TrustZone. But does anybody remember what I said TrustZone was not? TrustZone is not a TEE. It's a framework for building TEEs, and it has significant problems with scalability; Arm is very aware of both of these issues. So because TrustZone is not a TEE, we end up with an abstraction layer over a single technology, which I don't think provides a lot of value. We end up with implicit hardware lock-in even though we've written to an abstracted interface. The other downside, of course, is that these abstraction layers only work with the process-based model, so you don't get to capture any of the work being done in the secure VM space, which is precisely why Enarx wants to build on both. So I hope that answers your question. We'll be very happy to answer further questions in the hallway track. Yeah, but you only get one sticker. Any other questions? So, you're running in a virtual machine. What's the kernel that's running in the virtual machine? Is it a kind of unikernel, or a microkernel?
Yeah, so we actually haven't decided yet. We're currently investigating an approach which I call the monokernel. Basically, we want to provide an integrated hypervisor, kernel, and application, all in one binary. In terms of where we actually get that kernel code from, that hasn't been decided. We are probably not going to use Linux for this, because we want to keep everything licensed under Apache 2.0, and also because we need a significant number of kernel components to be reusable in the process-based model as well. So we need something more modular. We need, for example, a file system that can work both in the kernel and in SGX, because we only want to expose block devices. Having a kernel like Linux could work well in the virtualization case, but then we'd have to reinvent the entire world for the process-based case. So we're looking at several similar technologies. We haven't made a decision yet; I'm having a meeting on Thursday to do a deep dive, hammer it out, and try to make a decision and move forward. Thanks for the presentation. I have two questions. The first: how are you loading this encrypted code and data? Are you injecting the key and then loading the code and data from an encrypted disk image, or...? It depends. So, no, there's not an encrypted disk image. The code is delivered at runtime, and precisely how that workflow works depends on the hardware; the different hardware technologies have different approaches for attesting what's in the trusted execution environment. That question was for SEV. So for the demo, we used the launch-secret functionality of SEV and launched the code directly into an empty virtual machine. I believe that functionality is going away at some point and we're going to get slightly different functionality. Is that right?
Yeah, there might be some changes in the future, but in the current technology it's literally the code bytes that are the secret that is injected, and then it's just executed from there. It's not the only way it could be done, but that's what's done today. Yeah, thank you. And another question: are you planning to use the Intel SGX Protected Code Loader? I'm sorry, I missed the second one. You mentioned Intel SGX, which is not available yet, but are you planning to... Intel SGX has something called the Protected Code Loader. So I heard something about Protected Code Loader; I'm not sure exactly what you're referring to there, but we are working on SGX. We actually have a pull request for a demo doing attestation, basically a little bit less than what you already saw on SEV. There's a pull request from one of my team members, and we're hopefully merging that this week. So we are looking at other technologies. We have no intention of locking anyone into a technology. The whole point of this architecture is to make it very, very easy to work across a variety of technologies and not have to develop your application to one specific technology. Anyone else? James, how are we doing for time? 10 minutes left. Oh, 10 minutes. Come on, I talked very fast. Okay, go ahead. So to launch this application, we seem to be going through the attestation framework. What's the overhead that happens before you can actually launch? The overhead for the attestation model: in the case of SEV, we were looking at 50 milliseconds, and that was all local, so there was no network. You would need to add some network latency as well, and you're also probably going to deliver a larger application than that. So it's really hard to predict, and it's going to be basically linear with the size of your application.
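The launch flow just described, where the code bytes themselves are the injected secret, can be sketched as a simulation. This is not the real SEV firmware API: actual SEV uses LAUNCH_START/LAUNCH_MEASURE/LAUNCH_SECRET firmware commands and transport keys derived from a Diffie-Hellman exchange, while the HMAC measurement and keystream XOR below are simplified stand-ins for illustration only:

```python
# Simplified simulation of an SEV-style launch: the tenant verifies a
# launch measurement of the (empty) guest, then wraps the code bytes as
# a "launch secret" for injection. Keys and crypto here are stand-ins,
# not the real firmware interface.
import hashlib
import hmac
import os

def measure(guest_image: bytes, nonce: bytes, tik: bytes) -> bytes:
    # Firmware-side: a MAC over guest contents keyed with the transport
    # integrity key (TIK), so the tenant can check nothing was tampered.
    return hmac.new(tik, nonce + guest_image, hashlib.sha256).digest()

def wrap_secret(code: bytes, tek: bytes) -> bytes:
    # Tenant-side: encrypt the payload under the transport encryption key.
    # (Real SEV uses AES; a keystream XOR keeps this sketch dependency-free.
    # XOR is its own inverse, so the same function also unwraps.)
    stream = hashlib.sha256(tek).digest()
    ks = (stream * (len(code) // len(stream) + 1))[: len(code)]
    return bytes(a ^ b for a, b in zip(code, ks))

# --- handshake ---
tik, tek = os.urandom(32), os.urandom(32)      # shared via key exchange
guest = b""                                    # empty VM, as in the demo
nonce = os.urandom(16)

expected = measure(guest, nonce, tik)          # tenant computes locally
reported = measure(guest, nonce, tik)          # firmware reports this
assert hmac.compare_digest(expected, reported) # attestation check passes

payload = b"\x90\x90\xc3"                      # the code bytes ARE the secret
blob = wrap_secret(payload, tek)               # only this ciphertext crosses
assert wrap_secret(blob, tek) == payload       # firmware unwraps and injects
```

Only after the measurement check passes does the tenant release the wrapped secret, which is why a compromised hypervisor never sees the plaintext code.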
We would probably implement some sort of caching framework as well, I would imagine, although there's a tension there with delivering perfect forward secrecy. This is Anshil from Amazon, a question for David. Are we not worried about side channel attacks, like page-fault-based attacks or cache-based attacks, when you're encrypting the entire VM with SEV? You're asking about side channel attacks like Prime+Probe and page-fault-based attacks. Those are currently not part of the threat model of SEV, and they're kind of not part of the standard x86 threat model either. We don't have any way of preventing you from writing code that might be vulnerable to a Prime+Probe-based attack. So right, there's nothing in this generation that we provide specifically for that, other than the guidance: please don't write your crypto libraries to use memory accesses or branches dependent on secrets, and things like that. Beyond that, we don't have any special capabilities. I should also note that in the way NRX is designed, you're going to basically get a block of native code that's all delivered by NRX, and we would hope for that to be side channel resistant; anything that has problems, we would want to invest in mitigations. In fact, I already have an open bug about developing mitigations for side channels. They do both have different problems. We can detect some of that at compile time based upon your compile-time target, because you're going to be building NRX for SGX or for SEV. In SGX, CPUID is actually not an available instruction, although if you look at the Intel SGX SDK, they currently proxy to the host to virtualize the CPUID instruction, which means we're now trusting the host. I don't really know why that's there. But yeah, we're going to have to consider at some point exactly how we do that. I don't have a clear answer for you.
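The guidance above, not writing crypto code whose behavior depends on secrets, can be shown in miniature. This sketch assumes a hypothetical token check; `hmac.compare_digest` is the standard library's constant-time comparison:

```python
# The "don't depend on secrets" guidance in miniature: a naive byte
# comparison leaks timing, while a constant-time compare does not.
import hmac

def check_token_naive(supplied: bytes, secret: bytes) -> bool:
    # BAD: `==` can short-circuit at the first mismatching byte, so
    # response time correlates with how much of the secret matched,
    # letting an attacker recover it byte by byte.
    return supplied == secret

def check_token_ct(supplied: bytes, secret: bytes) -> bool:
    # GOOD: compare_digest examines the input independent of where
    # (or whether) a mismatch occurs.
    return hmac.compare_digest(supplied, secret)

secret = b"s3cr3t-token-value"
assert check_token_ct(secret, secret)
assert not check_token_ct(b"wrong-token-value!", secret)
```

The same principle extends to table lookups and branches: any secret-dependent access pattern is observable through Prime+Probe-style cache probing, regardless of memory encryption.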
And I'd say also that if you have particular concerns about particular attacks, then when you are deploying you can say, well, I have concerns about attacks on this platform, so I won't allow deployment on that platform. So you have that option at deployment time. Yeah, a big point of this is, right, you have your application, and it's the same binary no matter what platform you're launching it on. And so if you consider a particular platform to be broken, just disable it and launch somewhere else. So I'm a bit curious about why and how you feel good about WebAssembly, given the current state of its... Okay, so let me ask this question: what is the largest decentralized computing network in the world? Anybody want to guess? JavaScript. It's JavaScript, by many orders of magnitude, right? I'm currently participating in the standards work for WebAssembly to make sure that we get what we need, but I think it is clear that WebAssembly is the only thing we have that could potentially dethrone JavaScript as the largest decentralized computing network in the world. And so I think it's a very good fit. Also, very conveniently, the WebAssembly System Interface covers, as a system API, basically exactly what we can do inside trusted execution environments. So it's a very natural fit in terms of capabilities. Did you have a specific concern? Okay. Mighty Weiss of NGE Research. In the earlier slide, you were eliminating all the middle stuff from your TCB. I'm going to argue there are probably some use cases for having that attested as well. Yes. Okay, so. Yeah, so they protect different things, right? I understand. And I agree with you, but is there any notion of binding? The thing that attests to that stuff in the middle you crossed out would be, for example, a TPM. It would be really nice, when I talk to this application, to have some binding between the TPM that's on the platform, which has keys for those other things it can attest to in this middle ground.
So, you know, binding (and maybe this is more a question for AMD than for you): is there any notion of binding these two sets of attestation keys, so that I can answer the question, are these two things on the same physical platform? No, and intentionally no. Attestation of the platform protects the platform deployer. Attestation of the tenant application in the trusted execution environment protects the tenant. They simply protect different parties. If you want to do attestation of the entire stack, there are projects for that, like Keylime, right? But trying to link them together fundamentally undoes the radical portability we are trying to accomplish, because now you've tied your application to something very specific that the cloud service provider is giving you. And we want our environments to be homogeneous precisely so that you can move somewhere else, right? Red Hat is very, very against lock-in, and we would see that as a very clear opportunity for lock-in. I'm going to try to take people who haven't already asked questions. Gentleman. Just a quick question: is there any way that you can detect, say, a hostile hypervisor, given that you're handling all this stuff going back and forth? Yes. I can't really say more. Yeah. Patent time. You might have already answered this, but I might have missed it: where does WebAssembly actually fit into the stack here? Like, I understand how the attestation process works, and I understand how, once you've actually developed a sense of trust for your VM, you inject code. So essentially, NRX is going to be the bit that you're attesting, and NRX is going to include a WebAssembly JIT. So once you've attested, NRX is just waiting for you to deliver WebAssembly code that it will JIT. Okay. And so you will then have encrypted code and data that you deliver to the NRX runtime.
The NRX runtime will handle that data in a way that's appropriate, and will just-in-time compile the WebAssembly to the native platform and then execute it. I'll hand over to James. I think we're out of time. Sorry. We're very, very happy to be around; Nathaniel is around for the rest of the day, and I'm around for the rest of the week. There's a BoF later today, right? Oh yeah, there's a BoF, specifically on Open Enclave stuff, that Microsoft is running. We're happy to be involved in that if the Microsoft folks are happy. And we have stickers, so you can ask this guy for stickers if you want one. I think that BoF is a separate one. Okay.
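As an aside, the deploy-time option discussed earlier (disallowing deployment on any platform you consider broken) amounts to a simple policy filter over candidate backends, precisely because the same binary runs everywhere. A hypothetical sketch; host names, backend labels, and the function itself are invented for illustration:

```python
# Hypothetical deploy-time policy: since the application binary is
# identical on every platform, excluding a backend you distrust is
# just a filter over the fleet. All names here are invented.
ALL_BACKENDS = {"sev", "sgx"}      # TEE technologies a host may offer

def eligible_hosts(hosts: dict, denylist: set) -> list:
    """Return hosts whose TEE backend is known and not currently denied."""
    return sorted(h for h, backend in hosts.items()
                  if backend in ALL_BACKENDS and backend not in denylist)

fleet = {"host-a": "sgx", "host-b": "sev", "host-c": "sev"}

# Normally any host is eligible...
assert eligible_hosts(fleet, denylist=set()) == ["host-a", "host-b", "host-c"]

# ...but if you consider, say, SGX broken this week, disable it and
# launch somewhere else; the application binary itself doesn't change.
assert eligible_hosts(fleet, denylist={"sgx"}) == ["host-b", "host-c"]
```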