So, for the next talk, I have Jo Van Bulck and Fritz Alder from the University of Leuven in Belgium, and David Oswald, a professor for cybersecurity in Birmingham. They are here to talk about trusted execution environments, which you probably know from Intel and so on. And you should probably not trust them all the way, because it's software and it has its flaws. So they're talking about ramming enclave gates, which is always good: a systematic vulnerability assessment of TEE shielding runtimes. Please go on with the talk.
Hi, everyone. Welcome to our talk. I'm Jo from the imec-DistriNet research group at Leuven, and joining me today are Fritz, also from Leuven, and David from the University of Birmingham. We have this very exciting topic to talk about: ramming enclave gates. But before we dive into that, I think most of you will not know what enclaves are, let alone what TEEs are. So let me first start with an analogy. Enclaves are essentially a sort of secure fortress in the processor, in the CPU: an encrypted memory region that is exclusively accessible from the inside. And what we know from the long history of fortress attacks and defenses, of course, is that when you cannot take a fortress because the walls are high and strong, you typically aim for the gates. That's the weakest point in any fortress defense. And that's exactly the idea of this research; it turns out to apply to enclaves as well. We have been ramming the enclave gates: we have been attacking the input-output interface of the enclave. A very simple idea, but with very drastic consequences, I dare say. So this is the summary of our research: over 40 interface sanitization vulnerabilities that we found in over eight widely used open-source enclave projects. We will go into a bit of detail on that in the rest of the slides.
Also nice to mention here: this resulted in two academic papers to date, over seven CVEs, and altogether quite some lengthy responsible disclosure and embargo periods. Okay. So I guess we should talk about why we need such enclave fortresses anyway. If you look at a traditional operating system or computer architecture, you have a very large trusted computing base. For instance, on the laptop that you most likely use to watch this talk, you trust the kernel, you trust maybe a hypervisor if you have one, and the whole hardware underneath the system: CPU, memory, maybe the hard drive, the trusted platform module, and the likes. The problem is that with such a large TCB, trusted computing base, you can have vulnerabilities basically everywhere, and also malware hiding in all these parts. So the idea of enclaved execution, as we find it for instance in Intel SGX, which is built into most recent Intel processors, is that you take most of the software stack between an actual application, here the enclave app, and the actual CPU out of the TCB. Now you only really trust the CPU. And of course you trust your own code, but you don't have to trust the OS anymore. SGX, for instance, promises to protect against an attacker who has achieved root in the operating system, and, depending on who you ask, even against a malicious cloud provider. Imagine you run your application in the cloud: you can still run your code in a trusted way with hardware-level isolation, you have attestation and so on, and you no longer really have to trust even the administrator. The problem is, of course, that attack surface remains. Previous attacks, and I think some of them will also be presented at this remote congress, have targeted vulnerabilities in the microarchitecture of the CPU. So you're attacking basically the hardware level.
You had Foreshadow, you had microarchitectural data sampling, Spectre and LVI and the likes. But what less attention has been paid to, and what we'll talk about more in this presentation, is the software level inside the enclave. I hinted that there is some software that you trust, and now we look in more detail into what actually is in such an enclave from the software side. So, can an attacker exploit classical software vulnerabilities in the enclave? Yes, David, that's quite an interesting approach, right? Let's aim for the software. So we have to understand what the software landscape out there looks like for these SGX enclaves and TEEs in general. And that's what we did. We started with an analysis, and you see some screenshots here. This is actually a growing open-source ecosystem: many, many of these runtimes, library operating systems, SDKs. Before we dive into the details, I want to pause at the common factor that all of them share: what is the idea of these enclave development environments? What any trusted execution environment gives you is this notion of a secure enclave oasis in a hostile environment, right? You can do secure computations in the green box while the outside world is burning. But as with any defense mechanism, as I said earlier, the devil is in the details, and typically at the gates. So how do you mediate between that untrusted world, where the desert is on fire, and the secure oasis in the enclave? The intuition here is that you need some sort of intermediary software layer, which is what we call a shielding runtime. It makes a secure bridge to go from the untrusted world to the enclave and back. And that's what we are interested in: to see what kind of security checks you need to do there.
It's quite a beautiful picture you have on the right, the fertile enclave, and on the left the hostile desert, and we make this secure bridge in between. What we are interested in is: what if it goes wrong? What if your bridge itself is flawed? To answer that question, we look at that yellow box, and we ask what kind of sanitizations, what kind of security checks, you need to apply when you go from the outside to the inside, and back from the inside to the outside. One of the key contributions that we have built up in the past two years of this research, I think, is that that yellow box can be subdivided into two subsequent layers. The first one is the ABI, the application binary interface: very low-level CPU state. The second one is what we call the API, the application programming interface: the kind of state that is already visible at the programming-language level. In the remainder of this presentation, we will guide you through some relevant vulnerabilities at both these layers to give you an understanding of what this means. First, Fritz will guide you through the exciting low-level landscape of the ABI. Yeah, exactly. You just said it: it's the CPU state, and it's the application binary interface. But let's take a look at what this actually means. It means, basically, that the attacker controls the CPU register contents, and we have to be careful. On every enclave entry and on every enclave exit, we need to perform some tasks so that the enclave and the trusted runtime have a well-initialized CPU state, and the compiler can work with the calling conventions that it expects. So these are basically the key parts: we need to initialize the CPU registers when entering the enclave and scrub them when exiting the enclave. We can't just take anything the attacker gives us as a given; we have to initialize it to something proper. And we looked at multiple TEE runtimes and multiple TEEs.
And we found a lot of vulnerabilities in this ABI layer. One key insight of this analysis is that a lot of these vulnerabilities happen on complex instruction set processors, so on CISC processors, and basically on the Intel SGX TEE. We also looked at some RISC processors, and of course it's not yet representative, but it's immediately visible that the complex x86 ABI seems to have a much larger attack surface than the simpler RISC designs. So let's take a look at one example of this more complex design: the x86 string instructions, which are controlled by the direction flag. There is a special x86 rep prefix that basically allows you to perform string memory operations. If you do a memset on a buffer, this will be compiled into a rep string instruction. The idea is that the buffer is walked from left to right and written over by the memset. But the direction flag also allows you to go through it from right to left, so backwards. Let's not dwell on why this was a good idea or why this is needed, but it is definitely possible to just set the direction flag to one and run over this buffer backwards. And what we found is that the System V ABI actually says that this flag must be clear, meaning set to forward, on function entry and return, and that compilers expect this to be the case. So let's take a look at what happens when we do this in an enclave. When our trusted application performs a memset on a buffer, then on normal entry, with the direction flag clear, we just walk the buffer from front to back. You can see here, it runs correctly from front to back. But if the attacker enters the enclave with the direction flag set to one, so set to run backwards, this now means that from the start of our buffer, from where the pointer points right now, it actually runs backwards.
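To make the direction-flag hazard concrete, here is a small Python model of my own (not the actual runtime code, and the memory layout is invented for the sketch) of how a rep-stosb-style memset walks memory: with the flag clear it fills the intended buffer, but with the flag set the same loop walks downwards and clobbers whatever lives below the buffer, such as a key.

```python
def rep_stosb(mem, ptr, value, count, df=0):
    """Model of an x86 `rep stosb` memset: store `value` into mem
    starting at index ptr, `count` times, stepping +1 when the
    direction flag (df) is clear and -1 when it is set."""
    step = -1 if df else 1
    for _ in range(count):
        mem[ptr] = value
        ptr += step
    return mem

# Invented layout: an 8-byte "key" right below an 8-byte buffer at offset 8.
LAYOUT = b"KEYSKEYSBUFFERZZ"

clean = rep_stosb(bytearray(LAYOUT), 8, 0x41, 8, df=0)  # honest entry
evil = rep_stosb(bytearray(LAYOUT), 8, 0x41, 8, df=1)   # DF poisoned by attacker
```

With df=0 only the buffer bytes change; with df=1 the write walks backwards from the buffer start and overwrites the key bytes below it, which is exactly the corruption the enclave sanitization (a `cld` on entry) must rule out.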
So that's a problem, and definitely something we don't want in our trusted applications, because, as you can imagine, it allows you to overwrite keys that sit in the memory locations below the buffer, and it allows you to read things out. That's definitely not something that is useful. When we reported this, it actually got a CVE assigned with a base score of High, as you can see here on the next slide. And you may think, okay, that's one instance, and you just have to think of all the flags to sanitize and all the flags to check. But wait, of course there's always more. As we found out, there's also the floating-point unit, which comes with a whole lot of other registers and a whole lot of other things to exploit. I will spare you all the details, but for this presentation, just know that there is the older x87 FPU and the newer SSE extension that does vectorized floating-point operations. So there is the FPU control word for the old one, and the MXCSR register for the newer instructions. The x87 FPU is older, but it's still used, for example, for extended precision, like long double variables. So 'old' and 'new' don't really apply here, because both are still relevant. And that's kind of the thing with x86 and x87: old, archaic things that you could say are outdated are still relevant, still used nowadays. And again, if you look at the System V ABI, we see that these control bits are callee-saved, so they are preserved across function calls. The idea here, which to some degree holds merit, is that these are global state that you can set, and that is carried within one application: an application can set some global state and keep that state across all its usage. But the problem, as you can see here, is that our enclave is basically part of one application, and we don't want the attacker to have control over the global state within our trusted application.
So what happens if these settings are preserved across calls? Well, for a normal user, let's say we just do some calculation inside the enclave, like 2.1 times 3.4, which nicely calculates to 7.14 as a long double. That's nice. But what happens if the attacker enters the enclave with corrupted precision and rounding modes for the FPU? Well, then we actually get a different result. We get distorted results with lower precision and different rounding modes; it's actually rounding down here whenever the precision is exceeded. And this is something we don't want, right? The developer expects a certain precision, long double precision, but the attacker can reduce it to a much shorter precision. We reported this, and we also found this issue in Microsoft Open Enclave; that's why it's marked as it is here. What we found interesting is that the Intel SGX SDK, which was vulnerable, patched this with an xrstor instruction, which completely restores the extended state to a known value, while Open Enclave only restored the specific register that was affected, the MXCSR register. Let me just skip over the next few slides here, because I want to give you the idea that this was not enough. We found out that even if you restore that specific register, there is still another data register that you can mark as in use before entering the enclave, with which the attacker can make any x87 floating-point calculation result in not-a-number. And this is silent: this is not programming-language specific, this is not developer specific. This is a silent ABI issue where the calculations just come out as not-a-number. So we reported this as well. And now, thankfully, all enclave runtimes use the full xrstor instruction to fully restore this extended state. It took two CVEs, but now, luckily, they all perform this nice full restore.
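The x87 precision and rounding controls have no direct Python equivalent, but the decimal module lets us sketch the same idea (purely an analogy of my own, not the actual FPU mechanics): the caller degrades global precision and rounding state, and an unsuspecting computation of 2.1 × 3.4 silently returns a truncated result instead of 7.14.

```python
from decimal import Decimal, getcontext, ROUND_DOWN, ROUND_HALF_EVEN

def enclave_multiply(a, b):
    # The "trusted" computation: it assumes sane global FP settings.
    return Decimal(a) * Decimal(b)

# Honest caller: default-style precision, round-to-nearest.
getcontext().prec = 28
getcontext().rounding = ROUND_HALF_EVEN
good = enclave_multiply("2.1", "3.4")  # full-precision 7.14

# Malicious caller: poisons the *global* context before the call,
# analogous to entering an enclave with a corrupted FPU control word.
getcontext().prec = 2
getcontext().rounding = ROUND_DOWN
bad = enclave_multiply("2.1", "3.4")   # silently truncated result
```

The trusted function is identical in both calls; only the caller-controlled global state differs, which is exactly why the runtime must restore that state on every enclave entry.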
I don't want to go into the full details of the case studies that we did, so let me just give you the ideas. We looked at these issues and wanted to know whether they are merely a theoretical difficulty or a real threat. And we found that we can use the FPU state as a side channel to deduce secrets. For example, the attacker can use these registers to unmask floating-point exceptions that are then triggered inside the enclave by some input-dependent multiplication. We found that this side channel, given an input-dependent multiplication, can actually be used to perform a binary search on the input space, and we can retrieve the multiplication secret in a deterministic number of steps. So even though we just flip a single mask bit, we can retrieve a secret in deterministic steps. And just so you know there's more you can do: we can also attack machine learning in the enclave. As we said, you can nicely run it inside the TEE, inside the cloud, and that's great for machine learning. So let's do handwritten digit recognition. In the model we look at, we have two users: one user provides a machine learning model and the other user provides some input, and everything is protected with enclaves, right? So everything is secure. But we actually found that we can poison these FPU registers and degrade the accuracy of this machine learning model, from all digits being predicted correctly down to just 8% of digits predicted correctly. Actually, all digits were just predicted as the same number, which basically made the machine learning model useless. There's more we did: we can also attack Blender, causing slight differences between rendered images. But that is just to show you how small these things are.
These are tricky, intricate things that can go wrong very fast at the ABI level once you start playing around with it. So this was the CPU state, and now we will talk more about the application programming interface, which I think more of you will be comfortable with. Yeah, thank you, Fritz. We take quite a simple example. Let's assume that we load a standard Unix binary into such an enclave; there are frameworks that can do that, such as Graphene. What I want to illustrate with this example is that it is actually very important to check where pointers come from, because the enclave partitions the memory into untrusted memory and enclave memory, and they live in a shared address space. The problem is as follows. Assume we have an echo binary that just prints its input, and we give it an argument string that normally, when everything is fine, points to some string, say a 'hello world', located in untrusted memory. If everything runs as it should, the enclave will get the pointer to untrusted memory and will just print that string. But the enclave actually also has access to its own trusted memory. So if you don't check this pointer and the attacker passes a pointer to a secret that lives in enclave memory, what will happen? Well, the enclave will fetch it from there and will just print it. Suddenly you have turned this into a memory disclosure vulnerability. We can see that in action here for the framework named Graphene that I mentioned. We have a very simple hello-world binary, we run it with a couple of command-line arguments, and now, on the untrusted side, we change a memory address to point into enclave memory. As you can see, normally it should print 'your test', but actually it prints a super-secret enclave string that lives inside the memory space of the enclave.
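The missing sanitization can be sketched as a simple range check (a sketch with invented names and an assumed enclave address range; real SDKs expose similar helpers for this purpose): before dereferencing an untrusted pointer, the shielding runtime must verify that the whole [ptr, ptr+size) range lies outside enclave memory.

```python
# Assumed enclave address range, for the sketch only.
ENCLAVE_BASE = 0x10_0000
ENCLAVE_END = 0x20_0000

def is_outside_enclave(ptr, size):
    """True iff [ptr, ptr+size) does not overlap enclave memory at all."""
    return ptr + size <= ENCLAVE_BASE or ptr >= ENCLAVE_END

def echo(ptr, size, read_mem):
    # Shielding-runtime wrapper: reject pointers into trusted memory
    # instead of happily acting as a confused deputy.
    if not is_outside_enclave(ptr, size):
        raise ValueError("pointer (partially) inside enclave memory")
    return read_mem(ptr, size)
```

Note that the check must cover the entire range: a pointer that starts just below the enclave but whose buffer straddles the base address must be rejected too.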
These kinds of vulnerabilities are quite well known from user-to-kernel research and other settings, and they are called confused deputies. The deputy has a gun, so to speak: it can read enclave memory, and it suddenly does something it is not supposed to do, because it didn't check where the memory belongs. So this vulnerability seems quite trivial to solve: you simply always check where pointers come from. But as Jo will tell you, it's often not quite that easy. Yes, David, that's quite insightful, right? We should check all of the pointers. So that's what we did: we checked all of the pointer checks, and we noticed that Intel has a very interesting algorithm to check these things. Of course, the Intel code is high quality; they checked all of the pointers. But you have to do something special for strings, right? We're talking about the C programming language here, so strings are null-terminated: they end with a null byte, and you can use the function strlen to compute the length of a string. Let's see how they check whether a string lies completely outside of enclave memory. The first step is to compute the length of the untrusted string, and then you check whether the string, from start to end, lies completely outside of the enclave. That sounds legit, right? If not, you reject the string. So this works beautifully. Let's see, however, how it behaves when we pass an illegal string. We are not going to pass a 'hello world' string outside of the enclave, but some string 'secret one' that lies within the enclave. The first step will be that the enclave starts computing the length of that string that lies within the enclave, right? And that already sounds fishy.
But then, luckily, everything turns out okay, because it will then detect that this should never have been done, that the string lies inside the enclave, so it will reject the ecall, the call into the enclave. So that's fine. But some of you who know side channels know that this is exciting, right? Because the enclave did some computation it was never supposed to do, and the length of that computation depends on the number of non-zero bytes inside the enclave. What we have here is a side channel where the enclave will always return false, but the time it takes to return false depends on the position of the first zero byte inside that secret enclave memory block. So that's what we found; we were excited and we said, okay, simple timing channel, let's code that up. We did, and you can see a graph here, and it turns out it's not as easy as it seems. I can tell you that the blue curve is for a string of length one and the red one is for a string of length two, but there is no way you can see that from the graph, because x86 processors are lightning fast, so that one single increment instruction completely dissolves into the pipeline. You will not see it by measuring execution time. So we need something different, and, well, we are smart, so we read papers. In the literature, one of the very common attacks on SGX, which Intel also describes, is that you can see which memory pages, 4K memory blocks, are being accessed while the enclave executes, because you control the operating system and the paging machinery. That's what we tried, and we thought this is a nice side channel, and there we were, scratching our heads, looking at that code: a very simple for loop that fits entirely within one page, and a very short string that fits entirely within one page. So just having access at the granularity of 4K memory regions is not going to help us here, because both the code and the data fit on a single page.
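The flawed check order and the resulting oracle can be modeled in a few lines of Python (a deliberate simplification of my own, not the actual SDK code): the length is computed before the range check, so an attacker who can count the loop iterations learns the position of the first zero byte even though the call is always rejected.

```python
def check_string_outside(mem, ptr, enclave_range):
    """Model of the flawed check: strlen first, range check second.
    Returns (accepted, steps), where steps is the number of loop
    iterations an instruction-counting attacker could observe."""
    steps = 0
    length = 0
    while mem[ptr + length] != 0:  # strlen over an attacker-chosen pointer
        length += 1
        steps += 1
    lo, hi = enclave_range
    accepted = ptr + length + 1 <= lo or ptr >= hi
    return accepted, steps

# Pretend this whole buffer is enclave memory, holding two secret strings.
enclave = list(b"secret1\x00") + list(b"hi\x00")
```

Pointing the "untrusted" string at offset 0 versus offset 8 gives the same rejection but different step counts, and that difference is exactly the secret string's length leaking out.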
This is essentially what we call the temporal resolution of the side channel; it is not accurate enough. So we need another trick, and here we have been working on quite an exciting framework. It uses interrupts, and it's called SGX-Step. It's a completely open-source framework on GitHub, and what it allows you to do, essentially, is to execute an enclave one step at a time, hence the name. It allows you to interleave the execution of the enclave with attacker code after every single instruction. The way we pulled it off is highly technical: we have a Linux kernel driver and a little library operating system running in user space, but that's a bit out of scope. What matters is that we can interrupt an enclave after every single instruction. Let's see what we can do with that. What we can essentially do here is execute that for loop, with all those x86 increment instructions, one at a time, and after every interrupt we simply check whether the enclave accessed the string residing at our target memory location. Another way to think about it is that we take the execution of the enclave, break it up into individual steps, and then just count the steps, and hence we have a deterministic, high-resolution timing channel. In other words, we have an oracle that tells you where all the zero bytes are in the enclave. I don't know if that's useful, actually, David? So it turns out it is. I mean, people who are into exploitation already know that it's good to know whether a zero is somewhere in memory or not. And we'll now do one example: we'll break AES-NI, which is the hardware acceleration for AES in Intel processors. Now, AES-NI actually operates only on registers, and Jo just said he can only run this oracle on pointers, on memory. But there's another trick that comes into play here: whenever the enclave is interrupted, it will store its current register state somewhere in memory, in the so-called SSA frame.
So we can interrupt the enclave, make it write its registers to SSA memory, and then we can run the zero-byte oracle on that SSA memory. And what we figure out is where a zero is, if there is any zero, in the AES state. I don't want to go into the gory details of AES, but what we basically do is find out when there is a zero in the state before the last round of AES. That zero will go through the S-box and be XORed with a key byte, and that gives us a ciphertext byte. But we know the ciphertext byte. So we can go backwards: we can compute from the zero up to the S-box output, and from the ciphertext back to that XOR, and that way we directly compute one key byte. We repeat the whole thing 16 times, until we have found a zero in every byte position of the state before the last round, and that way we get the whole final round key. And for those who know AES: if you have one round key, you have the whole key; you can run the key schedule backwards to the original key. It sounds complicated, but it's actually a very fast attack when you see it running. Here is SGX-Step doing this attack, and as you can see, within a couple of seconds and maybe 520 invocations of AES, we get the full key. That's actually quite impressive, especially because one of the points of AES-NI is that you don't put anything in memory; but there is this interaction with SGX which allows you to put stuff into memory after all. So I want to wrap up here. We have found various other attacks, both in research code and in production code, such as the Intel SDK and the Microsoft SDK, and they basically cover the whole range of vulnerabilities that we have often seen already in user-to-kernel research, but there are also some interesting new kinds of vulnerabilities due to some of the aspects we explained. There was also a problem with ocalls.
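The key-byte recovery step can be sketched numerically (my own illustration, not the code from the talk). In the AES final round there is no MixColumns, so, ignoring the ShiftRows byte permutation, each ciphertext byte is c = SBox(s) XOR k. Once the zero-byte oracle says a pre-final-round state byte s was 0, and since SBox(0) = 0x63, the round-key byte falls out directly as k = c XOR 0x63.

```python
SBOX_0 = 0x63  # the AES S-box maps input 0x00 to 0x63

def last_round_byte(state_byte, key_byte, sbox):
    # AES final round per byte: SubBytes, then AddRoundKey (no MixColumns).
    return sbox[state_byte] ^ key_byte

def recover_key_byte(ciphertext_byte):
    # Valid exactly when the oracle said the pre-final-round state byte was 0.
    return ciphertext_byte ^ SBOX_0

# Tiny demo with only the S-box entry we need and an arbitrary key byte:
sbox = {0x00: SBOX_0}
k = 0x4F
c = last_round_byte(0x00, k, sbox)
```

Repeating this for all 16 byte positions yields the final round key, from which the AES key schedule can be run backwards to the master key, as described above.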
That is when the enclave calls into untrusted code, which is used when you want to, for instance, emulate system calls and so on. If some kind of wrong result is returned there, you can again go out of bounds, and these issues were actually quite widespread. And finally, we also found some issues with padding, with leakage in the padding; I don't want to go into details. I think we have learned a lesson here that we also know from the real world, and that is: it's important to wash your hands. It's also important to sanitize enclave state, to check pointers and so on. So one of the takeaway messages really is that to build enclaves securely, yes, you need to fix all the hardware issues, but you also need to write safe code, and for enclaves that means you have to do proper ABI and API sanitization. And that's quite a difficult task, actually, as we've seen in this presentation. There is quite a large attack surface due to the attacker model, especially for Intel SGX, where you can interrupt after every instruction and so on. And I think, from a research perspective, there is really a need for a more principled approach than just bug hunting. If you want, maybe we can learn something from the user-to-kernel analogy, which I invoked a couple of times: we can learn what an enclave should do from what we know about what a kernel should do, but there are also quite important differences that need to be taken into account. As we said, all our code is open source; you can find it at the GitHub links below, and you can of course also ask questions after you have watched this talk. So thank you very much.
Hello. So, back again, here are the questions. Nice to see you live. We have no questions yet, so you can put questions in the IRC below if you have any. In the meantime, oh, let me close this up, I'll ask you some questions. How did you come to this topic, and how did you meet? Hmm.
Well, that's actually interesting. I think this research has been building up over the years. Some of the vulnerabilities from our initial paper I actually already started to collect in my master's thesis. And we didn't really see the big picture until I met David and his colleagues from Birmingham at an event in London, the RISE conference. Then we started to collaborate on this and to look at it a bit more systematically. So I started with this whole list of vulnerabilities, and then with David we turned it into a more systematic analysis. And that was sort of a Pandora's box, I dare say. From that moment on, the same kinds of errors kept repeating. And then Fritz, who recently joined our team in Leuven, started working with us on more of this low-level CPU state, and that's a Pandora's box in itself, I would say. Especially one of the lessons, as we say there, is that x86 is extremely complex, and it turns out that almost all of that complexity, I would say, can potentially be abused by adversaries. So it's more like a fractal within a fractal within a fractal: you open a box and you get more and more questions out of it. In a way, I think it's fair to say this research is not the final answer; it is rather an attempt to give a systematic way of looking at a probably never-ending attacker-defender race.
So there is a question from the internet: are there any other circumstances where AES-NI is writing its registers into memory, or is this exclusive to SGX? Should I repeat? I do not understand the question either. Well, I think the question is that this AES attack that David presented depends, of course, on having a memory disclosure attack to read the register contents, and we pull that off using SGX-Step to forcibly write the register contents into memory. So that is definitely SGX specific.
However, I would say one of the lessons from, let's say, the past five years of SGX research is that these things often generalize beyond SGX, at least the general concept: the insight that CPU registers end up in memory one way or another, sooner or later. I think that also applies to operating systems, right? If you can somehow force an operating system to context-switch between two applications, it will also have to dump the register contents temporarily into memory. So if you had something similar to what we have here in an operating system kernel, you could potentially mount a similar attack. But maybe David wants to say something about operating systems there as well. No, not really. I think one thing that helps with SGX is that you have very precise control, as you explained, with the interrupts and so on, because you have root outside the enclave. So you can single-step, essentially, the whole enclave, whereas interrupting the operating system, or some other process, repeatedly at exactly the point you want tends to be harder just by design. But of course, on a context switch the CPU has to save its register set somewhere, and then it will end up in memory in some situations, probably not as controlled as it is for SGX.
So there is the question: what about CPU architectures other than Intel? Did you test those? Maybe I can go into this. Well, Intel SGX is, let's say, the largest one, with the largest software base and the most enclave shielding runtimes that we could look at, right? But there are, of course, others. For example, there is this embedded TEE that we developed ourselves some years ago; it's called Sancus. And of course, for these there are similar issues: you always need a software layer to enter the enclave and to interact with it. And I think you and David, in earlier work, also found issues in our TEE.
So it's not just Intel and related projects that mess up there, of course. But what we definitely found is that it's easier to think of all the edge cases for simpler designs, like RISC-V or simpler RISC designs in general, than for this complex x86 architecture. Right now, there are not that many alternatives besides Intel SGX; it has the advantage and the disadvantage of being the first widely deployed one, let's say. But as soon as others start to grow and simpler designs become more common, I think we will see that it's easier to fix all the edge cases for simpler designs.
Okay, so: what is the reasonable alternative to TEEs, or is there any? Do you want to take that, or should I? Well, we can probably both give our perspective. The question to start with, of course, is: do we need an alternative, or do we need to find more systematic ways to sanitize the software layers? That's, I think, one part of the answer: we don't necessarily have to throw away TEEs because we have problems with them; we can also look at how to solve those problems. But apart from that, there is some exciting research. Maybe David also wants to say a bit more about, for instance, capabilities, which in a way are not so different from TEEs. When you have hardware support for capabilities, like the CHERI project in Cambridge, which essentially associates metadata with a pointer, metadata like permission checks, then, at least for a subclass of the issues we talked about, the pointer poisoning attacks, you could catch those natively with hardware support. But that's a very high-level idea; maybe David wants to say something. Yeah, so I think the alternative to TEEs is relevant whenever you want to partition your system into parts, which is, I think, a good idea, and everybody is now doing that, also in how we build online services and so on.
So TEEs are one such mechanism that we have become quite used to, from mobile phones, or maybe even from something like a banking card, which is also a protected environment for a very simple job. But the problem starts when you throw a lot of functionality into the TEE: as we saw, the trusted computing base becomes more and more complex and you get traditional bugs. So really, the question is whether you need an alternative or just a better way of approaching how you partition software. And as you mentioned, there are some things you can do architecturally: you can change, or extend, the way we build architectures, for instance with these capabilities, and then start to isolate components within one software project; say, in your web server you isolate the TLS stack or something like this. And also, thanks to the people noticing the secret password here; it's obviously only for decoration purposes, to give people something to watch. But it's not fundamentally broken, is it? SGX? Yeah, SGX, or TEEs in general; I mean, there are many of them. I think you cannot say fundamentally broken, but for SGX it has... The question I had was specifically about SGX at that point, because Signal uses it, the MobileCoin cryptocurrency uses it, and so on and so forth. Is that fundamentally broken, or would you rather say... So I guess it depends on what you call fundamental, right? In the past we have also worked on what I would call full breaches of SGX, but they have been fixed, right? And it's actually quite a beautiful instance of research having short-term industry impact: you find a vulnerability, then the vendor has to devise a fix, whose details they often do not reveal, and they often work around the problem, right? And then later, because we are talking, of course, about hardware roots of trust, you need new processors to really get a fundamental fix for the problem, and in the meantime you have temporary workarounds.
So I would say, for instance, for a company like Signal using SGX: it does not give you security by default, right? You need to think about the software, which is what we focused on in this talk, and you also need to think about all the hardware and microcode patches, or newer processors, to take care of all the known vulnerabilities. And of course the question always remains whether there are vulnerabilities that we don't know of yet, but that holds for any secure system, I guess. But maybe David also wants to say something about some of his latest work there, which is a bit interesting. Yeah, so my answer to this question would be: it depends on your threat model, really. Some people use SGX as a way to remove the trust in the cloud provider. So you say, as in Signal, I move all this functionality that is hosted on some cloud provider into an SGX enclave, and then I don't have to trust the cloud provider anymore, because SGX also has some form of protection against physical access. But recently we actually published another attack which shows that if you have hardware access to an SGX processor, you can inject faults into the processor by playing with the undervolting interface. With hardware access, you really solder a couple of wires onto the mainboard, onto the bus to the voltage regulator, and then you can do voltage glitching, as some people might know from other embedded contexts. That way you can essentially flip bits in the enclave and inject all kinds of evil effects, which can then be used further to get keys out or maybe hijack control flow or something. So it depends on your threat model. I still wouldn't say that SGX is completely pointless; it's better than not having it at all, I think. But you cannot have complete protection against somebody who has physical access to your server. So I have to close this talk. It's a bummer.
I would love to ask all the questions that are flowing in here. But one very, very fast answer, please: what is that password in your background? I explained it; it's of course just a joke. So I'll say it again, because some people seem to have taken it seriously: it was such an empty whiteboard, so I put a password there. Unfortunately, it's not fully visible in the stream. Okay. So thank you, Jo Van Bulck, Fritz Alder, David Oswald. Thank you for that nice talk, and now we make the transition to the Herald News Show.