Right, so let's go. So: TEEs, and why open source is vital. It's a smaller section, but actually this is a very exciting, fun bit, and we're going to talk about this as well. So, what is trust? I want to get moving, so I'm not going to hang around on this. I've spent a lot of time talking about trust and thinking about trust over the last 20 years, and this is the definition that I've come up with over the years: trust is the assurance that one entity holds that another will perform particular actions according to specific expectations. We'll come back to this in a bit. When you search on Pixabay for "trust" you get this picture, which is, I think, an African ground squirrel being trusted not to bite the child's nose off. This is a bad example of a trust relationship. Okay, so let's think about what you can trust. This is a simplified standard stack, right? We all know what this looks like; everyone happy with this? It's a representation of a stack. Good. Okay, so you have a problem, so you write the application, right? So you've got some trust: you did that. But it's sitting on top of middleware, and you didn't write the middleware, and there's every opportunity that someone may have got something wrong, or done something wrong on purpose, or just done bad things. And that sits on top of userspace. No one ever hacks anything in userspace, right? That never happens, so that's okay... well, we'd better not trust that. But the kernel, we can trust that? Hmm. We have, over the years, come across some problems with the kernel. Luckily, there's the bootloader... you can see where I'm going with this, right? The hypervisor? Entirely safe. Firmware? Right, okay. So we have a bit of a problem. Assuming that you wrote the application and you trust that, what do we do if we actually want to run stuff on a computer?
Which I guess most of us are interested in doing. It would be nice to find a way around some of these problems, and the CPU vendors realised they had a problem. It's a turtle problem. Who knows the turtle story? Oh, I get to tell the turtle story, right? Okay. So this is told about a variety of people, but it's someone like George Bernard Shaw, in the early years of the 20th century. In those days, eminent persons, usually men, scientists, used to go around lecturing to people about science and philosophy and how the world worked. And George Bernard Shaw, or whoever it was, went to a packed auditorium, I think in the north of England, and explained about cosmology and evolution and how things work, you know, the dinosaurs and all of this, and it went down very well: huge applause at the end. And at the end he was receiving questions, and an old lady came up to him and said, "That was fascinating, Mr. Shaw, but of course it's all rubbish." And he said, "Well, what do you mean, it's all rubbish?" And she said, "Well, everybody knows that the earth sits on top of a turtle." He said, "Aha. But what does the turtle sit on?" She said, "Don't be stupid. It's turtles all the way down." And the moral of the story, as used by security people, is that in the end you have to have a bottom turtle: a turtle that you can trust and that you can build stuff on. That's what the CPU vendors realised fairly recently, and so they created TEEs. TEEs are Trusted Execution Environments.
They are chip-level instructions which basically allow you to run an application in memory which nobody else can see: not the kernel, not the hypervisor, not someone with sudo access. None of these people, none of these entities. And this gives us something we can work on. So, some examples that are in the public domain: AMD have one called SEV, which stands for Secure Something Something; Intel has something called SGX, which stands for Software Guard Extensions; IBM has something called PEF, which stands for Protected Something Something. I'm sorry, look them up; this is not the interesting bit of the talk. Okay, we'll get into that. So we have a definition of what a TEE is: we call it a hardware-based technique for securing sensitive data. So this is key, right? Anything that you don't trust to be on the public cloud, or frankly on any old system where Joe Random Hacker can look at it: your patent data, or your payroll data, or your financial data, or your health data. So: securing sensitive data, and algorithms. Okay, this is not just stuff at rest. We've typically in the past thought of securing and encrypting data at rest, basically on storage, and in transit, basically over the network. But this is the third type: this is for data in process, while it's being processed. So, algorithms as well as the data, in such a way that even the kernel or the root user or the hypervisor can't see what's going on. So that's our definition of a TEE. It does cut out a couple of technologies: an HSM is not a TEE; a TPM is not a TEE. Can you emulate them in various ways with a TEE? Yes. Would you want to? They perform different functions, generally; we could have a talk about that another time. Right, so: Arm TrustZone, not a TEE. TPM, not a TEE. Using TEEs: do you have any sensitive data? No: you have sensitive data. Do you have any sensitive algorithms? Well, maybe you don't, but I'm sure that your banks and your healthcare providers do. Do you trust your cloud provider with them?
Are you happy to hand over all of... thank you for shaking your head at the back. You're quite right: you should not trust your cloud provider with those things. But you can trust your sysadmins, can't you? No? Okay, fine. And do you trust the entirety of your software stack? Are you sure that all of those things I had up there have not been compromised and are entirely safe to be trusted? "No? No, do be quiet, Mr. Jones." There's a troublemaker in the front; he's got a beard as well. So if you answered no to any of these, then you need TEEs. And I did a pretty picture for the video: all of these things are things you might want to put in a TEE. Now, open source. If you're going to use a TEE in open source, there's a bunch of things we need, right? We need userspace support; we need kernel-space support. Once we've got those, we actually need applications that use them. Just saying "we've got a set of hardware-based, CPU-based instructions and now you can program them" is great if you love assembly, but not many of us write assembly for fun anymore; there was a time. We need to get them into DevOps and hybrid clouds so people actually write stuff, use them, and deploy them. And here's the problem: if we don't make them, the proprietary people will. You know, Linux is winning at the moment; open source is winning at the moment. But if the enterprises out there see ways of keeping their stuff safe in proprietary software but not in open source software, they will use the proprietary stuff. So we need to up our game and do some stuff. And this is where my beautiful assistant takes over. Unluckily, all I've got is a Nathaniel, so I'll let him do it. Right, go. "Can you hear me? Oh, there we go."
Okay, so my name is Nathaniel and I'm the technical lead on the Enarx project, and the Enarx project's goal is to build precisely what Mike was talking about. You can see we have a website, enarx.io, pretty easy to find. Yes, there are stickers at the Red Hat booth; I think Mike might even have some as well, so you can get stickers. I know that's the most important part of the talk. So basically we're just going to summarize precisely what Mike talked about, which is that we have this stack, and each different colour represents a different trust relationship in a traditional stack, right? The big section usually is in red, because that kind of stuff comes from an operating system vendor (Red Hat, of course, ships this stack), and so at least you have one trust relationship there. This is the way people have traditionally mitigated their risk surrounding trust. But we still have a bunch of other people you have to trust as well, like the firmware and the BIOS and the CPU, and they all need to be trustworthy for your application to be trustworthy. So this is what happens when we do containers. Containers actually make the problem much worse in terms of trust. Now, I'm not saying containers are horrible and you shouldn't use them; there are lots of good reasons to use containers. But they do increase the complexity of our trust relationships, because typically you now have a userspace that's coming within your container, which is not part of the container host, and so we now have different trust relationships there. And xkcd gives us a really lovely illustration of this problem: they talk about the modern tech stack, and it's all compromised. This is a fantastic comic, by the way; if you don't know what xkcd is, you should absolutely go spend at least eight hours of your life going through it. They allow use of their pictures for non-commercial use in this sort of way, so we did check.
Yes. Okay, so we're going to give you a demo of some early functionality in the Enarx project. This is not the totality of what we are building, but it is a first step towards that goal. So, what you're going to see here... I'm going to start and say what the Enarx project is for. The Enarx project aims to make it easy for you to write and deploy stuff into a TEE. But we're not an application framework, okay? You're not going to write something specifically to Enarx; you're going to write using standards, and you're going to deploy those standards using Enarx. So what you're going to see today is not that, unfortunately. We're not there yet, but we're getting there. What you're going to see is two entities: a client and a server. In this case the server is a cloud host of some kind, and the client is the tenant that wants to run some code on that host, but doesn't want to disclose any information about what is happening inside their execution to that host. So go ahead to the next one. The first thing that's going to happen is that the tenant is going to attest the AMD firmware. Notice that this goes through the host; we're actually talking to the firmware on the box. It's true that the host is proxying that information, but that information does have integrity protection, so the host can't modify it and do malicious things. I'm going to stop a bit here and say: what is attestation, and why do we care? What we care about is that the AMD firmware is going to create one of these trusted execution environments, using the chip instructions, in a way that is provably correct cryptographically. We can be cryptographically confident (you can never be sure of anything, but we can be cryptographically confident) that it's a proper AMD chip, and that it has created, as expected, a proper TEE: a secure VM that we can use. Thank you. And this is happening directly with the hardware, right?
So we get that attestation, and we validate that attestation cryptographically. We now know we're talking to genuine AMD parts; we know that we trust those parts to create this environment for us exactly in the way that we expect. And then we're going to inject code into that secure VM that's going to be launched. So we're going to deliver code and data, encrypted, into the VM, because one of the things that we got from the attestation handshake was a lovely cryptographic key, which means that we can wrap our code and our data in such a way that nobody can look at it, including nobody on the host. It goes straight into the secure VM, where it's decrypted. So we've encrypted it with that special key, and now we're happy that we've delivered something into the host that the host can't see. Absolutely. So next up we're going to do the demo; we're going to play a video demo here. We're going to run this command two times with exactly the same inputs, okay, but we're going to get slightly different behaviour, and I want you to pay attention to that slightly different behaviour. So go ahead and play. Please bear with my associate here; he does have a hard time with computers. "It is that complicated. How do I get it to work? Do I do that?" "I think you press the play button." There you go. So, yeah, we're going to run this twice with the same inputs. We're going to take the numbers three and four, we're going to add them together, and we're going to produce the number seven, which is the correct output. Can we pause that for a second? Sorry. "Yeah, it's very small. Can we make it bigger?" "Yeah, do that. On the bottom, full screen." Okay, let's run it from the beginning again. "It's still hard to read, but it is true, so it's fine. I've seen it happening; it's okay."
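The handshake just described (walk a certificate chain back to a root you already trust, and only then accept keys from the platform) can be sketched in miniature. This is a deliberately toy model: hash-linked records stand in for signed certificates, and the names `PINNED_ROOT` and `verify_chain` are illustrative, not AMD's real SEV protocol.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The tenant ships with this value baked in, like AMD's root key.
PINNED_ROOT = digest(b"AMD-root-key")

def verify_chain(chain):
    """chain is a list of (data, issuer_digest) from root to leaf.
    Each link must be vouched for by its parent, and the first link
    must chain to the pinned root."""
    expected = PINNED_ROOT
    for data, issuer_digest in chain:
        if issuer_digest != expected:
            return False
        expected = digest(data)
    return True

# Build a well-formed chain: root -> platform key -> VM endorsement key.
chain = []
parent = PINNED_ROOT
for data in [b"platform-key", b"vm-endorsement-key"]:
    chain.append((data, parent))
    parent = digest(data)

assert verify_chain(chain)
# Tampering anywhere breaks the walk back to the pinned root:
bad = [(b"evil-platform-key", chain[0][1]), chain[1]]
assert not verify_chain(bad)
```

The real chain uses asymmetric signatures rooted in a key fused into the CPU at manufacture, but the shape of the check is the same: every link vouched for by its parent, and the first link pinned in the verifier.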
You can trust us: it works. So, notice that we're using time, because we want to measure how long this takes. We're also going to run the command with two inputs, two numbers: the number three and the number four. We're going to add those two numbers together, and if my lovely assistant would point out the number seven as output... I think we all agree that three plus four is seven. Anybody disagree? Excellent, okay. So the interesting thing that happens here is not what we do, but how we do it, okay? Anyone can add three and four together. Instead, what's happening is that we are first fetching from the server a certificate chain. This is coming from the hardware, and one of the certificates in that chain has its private key burned into the CPU; there's no other way to access it. So we have the root certificate from AMD, and we can validate the entire chain, which means that we now know that this is an AMD CPU and the chain is okay. The next thing that happens is that the client is going to craft its execution policy. This is what it's going to give to the firmware to say: this is how I want you to make this secure environment, and these are the things I allow you to do. It's also going to calculate session keys. It's important to note that the session keys are generated on the client. We don't have to trust the server for entropy; the client is fully in charge of all entropy for the encryption. So next, the server starts up the secure VM, and it provides a measurement of this VM. This VM is empty, and by "empty" I don't mean it's like a blank Linux distribution; I mean there is not a single instruction, not a single CPU instruction, in it yet.
We've literally measured emptiness, which should be the same across all executions, right? So the client gets this measurement from the server and validates it. We now know cryptographically that the secure VM does not have any contents whatsoever. This means that there's no bootkit installed, there are no other malicious instructions installed. So we know, provably, through cryptography, that it is in fact empty. At this point we are going to take our code, which is some handcrafted assembly adding the numbers three and four together, and we're going to encrypt it using the session keys. And that's the actual code: it's encrypted, so you don't know what the code is, but that's the actual code. We're going to send that to the server; the server hands it, of course, to the firmware; the firmware decrypts it for us and injects it into the VM; and then finally we launch the VM and out comes our output: the number seven. Okay, so I'm going to ask you a question, which is: can any of you guess why we chose the numbers three and four? Anyone else want to guess? It's because Nathaniel couldn't be bothered to work out how to display two digits. When we got to here, we knew that three and four add up to less than ten, and as we're working in binary, he was happy with that. It's basically because Nathaniel is lazy. You are very welcome to write a patch, and we will merge it. Dust off those assembly skills, right? So anyway, like I said at the beginning, the interesting thing here is not what we do, but how we do it. Notice also that the execution time for this is 50 milliseconds, so this is something that we can do relatively quickly. It is not like when you think of VMs typically, where you start the VM and then, a minute or two later, it comes up. We are not talking about that here; we're talking about a very rapid execution. But let's run the program again. Before we do that, I want people to memorize this string, please. Okay.
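The "measuring emptiness" step can be illustrated with a toy launch measurement: a hash over the initial guest memory. Real SEV measurements are computed and signed by the AMD firmware; this sketch (page counts and contents are made up) just shows why an empty VM measures identically every time, and why a single smuggled byte is detectable.

```python
import hashlib

PAGE_SIZE = 4096

def measure(memory_pages) -> str:
    """Toy launch measurement: hash the initial contents of guest memory.
    In real SEV the firmware computes and signs this; here we only
    illustrate why the result is reproducible and tamper-evident."""
    h = hashlib.sha256()
    for page in memory_pages:
        h.update(page)
    return h.hexdigest()

empty_vm = [bytes(PAGE_SIZE)] * 4          # four zeroed pages, no instructions
expected_empty = measure([bytes(PAGE_SIZE)] * 4)

# The same emptiness measures the same on every launch...
assert measure(empty_vm) == expected_empty
# ...and a single smuggled instruction changes the measurement.
bootkit_vm = [b"\x90" + bytes(PAGE_SIZE - 1)] + [bytes(PAGE_SIZE)] * 3
assert measure(bootkit_vm) != expected_empty
```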
"The first four digits? The second four digits? No? Okay, fine." It's important, though. All right, we're going to do exactly the same thing again. So you would expect that if we ran the command with exactly the same parameters (the code hasn't changed and the inputs have not changed), then what we send should be exactly the same. But notice that the packet we actually deliver to the server is completely different. This is because we have perfect forward secrecy. This means that we are now resistant to statistics-based attacks on the code, right? Nobody's going to be able to do statistical analysis and determine what the code is, all other things being equal. That's why we run it twice. Notice that our time is also predictable: we're right around 50 milliseconds of execution. So that's the end of the demo. And to be clear: we do not expect to have you write assembly. That is not at all what our project is about; this is simply where we are in our development process. So, where we are. What you just saw is this: we did the attestation; we got that certificate chain; we validated that the hardware was who it claims to be; we then delivered our code and data, encrypted, to the firmware, which injected it into the VM, ran our code, and we got our output. Okay. So we are calling these Keeps. Each different technology has different terminology for its sort of secure environment; Enarx has a term, "Keep", which describes all of these environments as operated by Enarx. This is currently AMD-only, but we are very close to getting other technologies working. For example, we are now attesting SGX.
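The forward-secrecy behaviour (same code, same inputs, completely different bytes on the wire each run) comes from generating fresh session keys for every session. Here's a toy illustration. The hash-based XOR keystream below is emphatically not a real cipher and not what the demo uses; only the key-freshness point carries over.

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream from a counter-hashed key, for illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; the same call also decrypts.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

code = b"add r0, 3, 4"   # stand-in for the demo's handcrafted payload

# Each run negotiates fresh session keys, so identical inputs
# produce different bytes on the wire...
k1, k2 = secrets.token_bytes(32), secrets.token_bytes(32)
wire1, wire2 = encrypt(k1, code), encrypt(k2, code)
assert wire1 != wire2
# ...yet each decrypts (XOR is its own inverse) to the same code.
assert encrypt(k1, wire1) == code and encrypt(k2, wire2) == code
```

This is why statistical analysis of repeated deliveries tells an observer nothing about the payload: the ciphertext changes even when the plaintext does not.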
We haven't delivered code into SGX yet, but we are now able to attest it. Let me go a bit further, just to explain about Enarx. One of the things about Enarx is that it intends to abstract away all the nasty things that you don't really need or want to know as a developer about your CPU architecture, right? You don't care whether you're running on Arm or PowerPC or Intel or AMD; you shouldn't care, you're far too high up at that level. We want you to be able to write microservices or applications in C, C++, Java, Rust, Perl, Python (one of my favourites), Haskell... and for it just to work. So we're abstracting all that stuff away: you say "I want a Keep; I want to know my code is going to run safely in it; make it happen." That is the point of Enarx, and as my wonderful assistant said, we're not going to make you write assembly. So, yes, that cues you up for the thing he said he wanted to talk about. "Okay, we're talking about that today? Well, okay, let's do it." So our actual goal is not to run assembly, but WebAssembly. How many of you are familiar with what WebAssembly is? Anybody know what JavaScript is? Okay. So JavaScript is slowly being eclipsed in the browser by WebAssembly. It allows you to write in your own language; you compile to WebAssembly, and you distribute that code to a browser; every single browser supports it, right? Think about it this way: the largest decentralized computing network in the world is JavaScript, and JavaScript is being eclipsed by WebAssembly. So we're on good technological ground here. We want you to be able to develop in whatever language you want; we're going to deliver that application into a Keep, and you'll be able to execute it securely using your own application development framework. You're not developing to Enarx.
You're not going to be locked into Enarx: you develop using standard APIs, and they all run inside Enarx. So if we're successful, this is what we've done: we've removed all of these middle layers. You still have to trust the CPU, because you're executing code; you have to trust the CPU. What do you think is going to execute your instructions if you don't trust the CPU? But that doesn't mean you have to trust all of the motherboard; it doesn't mean you need to trust the memory buses and all of that. None of that. Yep. So you're trusting the CPU and, you know, the microcoded firmware for the CPU, but notice it's not a different trust relationship: you're still trusting the CPU and the firmware from a single vendor. Then you're trusting any middleware you include in your application, because, well, you have to include middleware in your application. And then, finally, your application itself. But the entire middle layer has been removed, and our plan is to make Enarx small and easily auditable, and of course completely open source. I had to answer a question from a lawyer yesterday about an article I was writing about Enarx, in which I said it will always be open source. And she said, "Will it always be? Are you sure you can say that?" Yes. Yes, always. Always open source. Apache 2.0, for those people who are wondering.
Yeah. So in this current code, I do want to be completely transparent, there are some attack vectors, and I want to disclose those. There are some possible hypervisor attacks on the guest on current hardware. The hardware we ran this on was the Naples generation of AMD SEV, and some of these have actually now been mitigated in the next version of the hardware; I'm now talking about AMD's Rome chips, which were just announced. The first two attacks are roughly the same thing, which is that the hypervisor has full access to read and to write the guest CPU registers. This means the hypervisor can see quite a bit of what's happening inside the VM and can tinker with it, which is potentially just as dangerous. This is solved by SEV-ES, which is no longer just future AMD roadmap; it has now been announced, so in the Rome generation of chips this is no longer a problem. We still don't have full software support for these features in the chips, but AMD is working to bring that upstream, so we should shortly have a resolution to this problem. Number three is that there are attacks involving moving memory pages. The hypervisor could convince the guest to write some information to a page here, then move the page somewhere else and have it decrypted in a different way; that is a possible attack from the hypervisor. And the fourth was a vulnerability which we disclosed to AMD: the code that's executing allows the host (excuse me) to select what location in memory the encrypted code is decrypted to, and we see a potential for attack at that point.
However, our code is not vulnerable to that, because of the way we have designed it; but it is a potential problem. Both of these problems AMD is working on, and if you would like more information on that you can talk with AMD, but we definitely expect a future without these problems. So, as we said before, this is AMD at the moment; the demo you've seen is on AMD. But as Nathaniel mentioned, we are very close to having similar functionality on SGX. I'm not going to suggest Lily is lazy; it's just that she decided to finish her thesis instead, and thought that was more important. So we'll get there; we're nice and close. So yeah, we've got a really great team of developers working on all of these different platforms. Well, we have a great but very small team of developers, and we're always looking for people with an interest in doing this sort of stuff. So if you have expertise and an interest, please get in touch; you can go there and find out all about the project. Sorry, big plug there. Yes, thank you. So, where to go to find out more: it's on GitHub. And this is really funny if you're a cryptographer (it's a really funny name): I have a blog called Alice, Eve and Bob. "Mike likes to imagine himself clever, so just engage him." "I like to think I'm having to convince other people that I am." Then we have LinkedIn, and I'm on Twitter; and he refuses to admit he's on Twitter, I think. That is it. Yeah, okay. We'll make the slides available. This is going to be big; people really care about this stuff. The industry is beginning to realize this is really important, and that we need to be able to make this happen. So please keep an eye out, and as I said, we're very interested to talk to people. So we're open for questions; we've got about 10-15 minutes. Yeah. "I will talk into Mike's lapel." "I will talk into this microphone."
"Are there any limitations in terms of, like, the size of the code you could send to the TEE?" Today, yes. In the future... well, all of the limitations are basically limitations that are imposed by the hardware manufacturers, and as you would imagine, hardware manufacturers are looking to increase those limits. They're different for the different hardware vendors, and they've all said they're going to make changes; what exactly those changes are, I either can't remember or can't tell you under NDA. So I can give you some public information. Currently on Intel chips there is a limit of 96 MB of resident memory for SGX. That number will disappear very soon and will become a bigger number. On AMD, the current limitation on Rome is that there are 512 key slots, and I think 510 or 509 of them are usable for VMs, so you could have 509 different instances. This number will also be replaced in the future by a larger number. And then IBM has announced PEF, and PEF has some similar restrictions; I don't remember what they are off the top of my head, but those will also be getting bigger as time goes on. So hopefully you'll notice the trend here: they're starting small and getting bigger. Oh, it's at the back here. Sorry, she was first.
Yeah. "My question is: why do you say Arm TrustZone is not a TEE? Because it basically allows parallel execution rather than execution in the main OS?" Yeah, so basically TrustZone allows you to build a separate operating system rather than a separate child of an operating system. It's sort of a tool that you can use to build a TEE, but it isn't one itself; it's more about monitoring what the other operating system is doing than about doing a separate execution. And we don't want you to build that, right? When TrustZone gives you the ability to build that, it also means that everyone who builds it has a separate trust level, which doesn't really help us reduce the number of parties we need to trust. It also has extremely tight constraints. If you want to know more about this, Patrick Ruderweich is the guy to ask; I believe the limit for resident memory is only 10 MB, and there's really not a whole lot you can do in 10 MB. And that 10 MB is the entirety of the parallel operating system, not just your application. We spoke to Arm, and they said: yeah, don't use TrustZone to do this stuff. "What was the encryption algorithm used?" Yeah, so it's dependent on the manufacturer; we basically have to use their attestation process. We hide that from you, so you don't need to worry. But yes, AES-256 in the AMD attestation. And all of the algorithms that are used in these attestation processes are NIST-compliant; the hardware vendors know that they need to sell this eventually to people that are bound by NIST requirements, and so basically everything is NIST-compliant. "The solutions you've described sound perfect for deterministic compute. What about applications that want to do I/O or make syscalls or anything, you know, more interesting?" So let me take up the syscalls one first. We are actively disallowing anyone
to make syscalls out, right? We want to enforce a two-way barrier in the technology. Any time that you don't do that (which is precisely what SGX does by default), all you've done is create the perfect place to put malware. If the guest in this case, the tenant, can do malicious things on the host without being detected, we consider that a very bad workflow, and we don't want to enable it. So in terms of making syscalls, we are not going to allow raw access to that. On the other hand, we are going to allow things like networking, and we are going to allow storage at some point in the future; we're not there yet. But when we do offer things like persistent storage, the only thing that the host will see is going to be a block device, and there's going to be block-level encryption on top of it. Then, inside the guest, you will see a file system that you can do file-system I/O on, but the host will never see that file system. Another thing we are enforcing is that all connections out (let's talk about network connections out) are, at the moment, encrypted. There are going to be no plain sockets; it's going to be TLS-encrypted. It's just the way it works. You're going to deliver your application, usually bundled in a certain way with a certificate for that particular application, and then you can create TLS sockets using that certificate; but you will not be allowed to just do raw TCP or UDP. We may allow that as a secondary mode, where you get to keep all the bits if you lose your security, right? But the point is, we want you to be able to write an application without doing special magic steps, and we remove whole classes of attacks. We know there is no perfect security.
We're not pretending there is. But what we want to do is make it more difficult for you to do the wrong thing: to let developers who are not experts create things which have the highest levels of security we can provide, without making it easy for them to mess up. So that is what it is. They don't have to worry about the attestation; they don't need to worry about the encrypting; it just goes in, it just happens. This is an application deployment framework; that's what it's about. Sorry. "Can you talk about the variations in the different enclave technologies, which ones are more and less vulnerable? I know that people say SGX actually stands for 'Stuff Gets eXposed', or something else instead of 'stuff'. But I know there's wide variation there, and I thought it'd be interesting to hear about it." Yeah, so generally speaking, in the industry there are two essential models. One is what we're calling the process model, and the other one is the virtual-machine model. In the process model, you're basically going to carve out a specific section of your process; that section is going to be isolated and encrypted in certain ways according to that technology. But there's no two-way barrier: the stuff that's inside of the keep (not in Enarx, but inside of the secure area) can talk out and mess with the system. This would include Intel SGX. But we believe the market is actually converging on the secure virtualization model. It appears that everyone is either working on, or has already released, a secure virtualization technology, so we think the market is going to converge on that. The process model has some inherent weaknesses in that it's a completely new security model, right? We all sort of understand user separation; we understand process separation; we understand VM separation. These are all existing barriers that we are using every day.
We're deploying them every day, at very large scale, and they're well understood. The problem with the process model is that it comes along and puts a barrier right in the middle of a process, which we've never done before. So even if it can work, we have a long way to go before we really feel confident in it as a community. I think the much shorter path to acceptance is building secure VMs, because all we're doing with secure VMs is increasing the strength of an already well-known barrier. I want to stress again that when we say VM, we don't mean a thing with a whole operating system in it. We just mean, literally, an empty VM into which we decide what to stuff. It is not a virtual operating system; think virtual CPU.

Any other questions? We've got a few minutes, and I have stickers.

So when you're running code in a TEE, what does the kernel see? How does the memory management work? Does it see something like a process?

Well, the answer is that it sort of depends on the technology, whether it's process-based or VM-based. In the case of a secure VM, the way hardware virtualization like KVM already works is that you start up the CPU in a particular mode, and then there's a bunch of things that the CPU handles itself without ever exiting to the kernel. When it does exit to the kernel, we call that a VM exit, and it allows the kernel to do things like emulate hardware. In Enarx, it's our intention to develop our own custom hypervisor for this, not QEMU, and we already have, as you've seen in the demo. This is because we really don't want to emulate a lot of hardware; we want to have as few VM exits as possible. And there are some really interesting questions that have not been decided yet in terms of exactly how things fit: what does the unikernel look like, exactly what do you base that on? That's the stage the project is at, which is why
it's a great time to be involved, because there's lots of interesting stuff to do. Everything is also currently coded in Rust, so we rely on the memory protections that the Rust language gives us, which means we can have a higher degree of confidence in the code itself.

Yep, please. So when your code is executing, it runs on a CPU, and when you're done it returns to the user. Who runs on that CPU after you, and are you worried about what they can see after you've run?

It runs on a virtual CPU that's encrypted, and when your code is done running, that CPU goes away.

But the physical CPU, right? On the hardware it runs on, let's say, CPU 17, and then you're done and somebody else gets CPU 17 after you.

Yeah. All they get is encrypted garbage.

Are you worried about a pattern, someone watching the encrypted garbage, or will that change every time?

So, statistical analysis of ciphertext is explicitly not within our threat model. We are essentially trying to reduce things to the point where the only thing you can really do is statistical analysis, which nobody is really succeeding at. And clearly, you know, it would be possible for the chip vendors to have mis-implemented the instructions and all that sort of stuff, so they could get it wrong, and when it comes down to it, we can only do so much. We have to accept that threat model and feed back whatever we can to the chip vendors. But this is one of the strengths of Enarx, right? Because if you have an application that you've written, you've compiled it to WebAssembly, you're deploying it with Enarx, and all of a sudden you discover that chip vendor X has a huge vulnerability, you just redeploy on a different manufacturer, right?
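Stepping back to the VM-exit flow mentioned in the earlier answer: the hardware runs the guest until something needs host help, the hypervisor services that exit, and the guest resumes, so wanting "as few VM exits as possible" means minimizing those round trips. A rough Rust sketch under heavy assumptions (the exit reasons, the `run_guest` stub, and the loop shape are all illustrative, not Enarx's actual hypervisor API):

```rust
// Rough sketch of a hypervisor's VM-exit loop. Real code drives the
// vCPU through the kernel (e.g. KVM) and is far more involved; every
// name here is illustrative.

/// Reasons the CPU hands control back to the host.
#[derive(Debug, PartialEq)]
enum VmExit {
    Io { port: u16, value: u8 }, // guest touched an emulated device
    Halt,                        // guest executed HLT: we're done
}

/// Stand-in for "run the vCPU until the next exit": here we just
/// replay a scripted sequence of exits.
fn run_guest(script: &mut Vec<VmExit>) -> VmExit {
    script.remove(0)
}

/// The hypervisor loop: keep re-entering the guest and servicing
/// each exit until it halts. Returns how many exits were handled.
fn vcpu_loop(mut script: Vec<VmExit>) -> u32 {
    let mut exits = 0;
    loop {
        exits += 1;
        match run_guest(&mut script) {
            VmExit::Io { port, value } => {
                // Emulate the device access, then resume the guest.
                println!("emulating I/O: port {port:#x} <- {value:#x}");
            }
            VmExit::Halt => return exits,
        }
    }
}

fn main() {
    let script = vec![
        VmExit::Io { port: 0x3f8, value: b'A' },
        VmExit::Halt,
    ];
    // Two exits: one emulated I/O access, then the halt.
    assert_eq!(vcpu_loop(script), 2);
}
```

Every pass through the `match` is one guest-to-host transition, which is exactly the cost a minimal hypervisor with little emulated hardware tries to avoid.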
It's literally just deployment configuration in our model. So if you find something you're worried about, if you find you can't trust a CPU, just don't use that kind of CPU anymore.

Are there any RISC-V designs that include these features?

There is an initial proposal from RISC-V called Sanctum; you can Google that and you'll find the details of the proposal. The Sanctum project is a process-based model, similar to what Intel SGX does. I really wish they would focus on the secure VM model instead; we think that's a better technology. But we do plan to support it eventually; it's just not high on our priority list right now.

So, finally, to wrap up: this does not make for perfectly secure computing. Nothing makes for perfectly secure computing. What it does is reduce the number of trust relationships you need to maintain, and allow you to have greater assurance and cryptographic confidence in the bits that you do need to trust, either because they're from the chip vendor, or because they're open source, written in languages we have a fair amount of trust in, which can be audited both at the code and compiler level and at the application level that we're providing. WebAssembly is key here, because even if your code does something wrong, WebAssembly always faults in a predictable way. So even in the failure case, one of the benefits WebAssembly gives us is a constant, known mode of failure.

We'd love to talk to you more about this. You know how to get in touch with us, and we'll be around for the next couple of days. I've got stickers. Go to enarx.io: you can file issues there, you can look at the wiki, you can submit code; that'd be great. So thank you very much indeed for your time, and we appreciate your coming along. Thank you.