Right, well, welcome, folks, to this session on Enarx, an open source project which we're very proud to be presenting. I'm Mike Bursell, one of the co-founders of the project, and with me is Nathaniel. Nathaniel? Hello, everyone — welcome from the lake. Yeah, Nathaniel is on PTO, he's actually taking some leave, so he's dialled in from the lake. So if you hear the lapping of water and feel very jealous, that's what it is. I encourage you to make the slides as big as possible; we're going to have some demo slides in a bit with fairly small text. We will walk you through them, of course, but if you can make them as big as possible — double-click on the screen and that should help. Right, so let's carry on. First of all, a little bit of an overview of Enarx. Nathaniel, please tell me if the slides aren't moving on; I assume they are. So just a five-bullet overview to give you a basic intro — we want to focus on the technical here, but we want you to know a little bit about the project first. The first thing is that Enarx uses TEEs, trusted execution environments. Some examples are SGX from Intel and SEV from AMD, and Intel recently announced TDX. What these do is isolate workloads, and we'll talk about that in a minute. So we want to help you deploy confidential, sensitive workloads using TEEs. Another really important part of the project is that we want to make it really easy both to develop and to deploy your workloads — usability is really, really important to us. We have chosen some very strong design principles in terms of security: we want to make it as difficult as possible to do the wrong thing. We can't always force you to do the right thing, but we want to make it difficult to do the wrong thing and easy to do the right thing. Cloud native, absolutely — we very much hope to be integrating with things like OpenShift and Kubernetes.
And this is of course all open source. I want to be very clear: this is a project that is not production-ready yet, but we will have some exciting news to announce in the course of this presentation. It's part of the Confidential Computing Consortium, a Linux Foundation project to encourage the use of TEEs and open source. So that's enough to be getting on with. Let's talk a bit about isolation — I want to set the scene. Let's assume you're putting stuff in the cloud — public cloud, private cloud, whatever. You, the tenant, are on the right, and you've got some workloads you want to deploy onto a host in that cloud. You've got your workload, which is in yellow, and there's another workload from another tenant. And of course there's the host OS, the hypervisor, and all the other bits and pieces a host includes. So we've got three types of isolation. The first type is workload-from-workload isolation. You do not want somebody else's workload to be able to interfere with yours — stop it running, look inside it, or change information inside it. That's a bad thing. Luckily, we know how to do this; we've been doing it for a long time. I don't just mean us — I mean the security community, with things like VMs, containers, cgroups, all those sorts of things. They do a great job of stopping this sort of thing. So type one we pretty much know how to do. Type two is protecting the host from the workload. If your workload is compromised or malicious, we really don't want it to be able to interact maliciously with the host, because it could alter what's happening on the host, or maybe even break out into the other workload — this is sometimes known as breakout. And again, this is something the community has been doing pretty well for 10, 15, 20 years.
We know how to do this: VMs, cgroups, SELinux — all that cool stuff Dan Walsh was talking about a bit earlier is how we do this sort of thing. So this is good. "What about the third type of isolation?" I hear you cry. Well, that is protecting the workload from the host, and this is a lot more difficult. What if the host is compromised? What if it's malicious? If you've got a sensitive workload — and by sensitive workload I mean anything from customer data, credit card data, your CEO's pay packet, some Hadoop machine learning, pharmaceutical information, firewall rules, cryptographic keys, all those sorts of things — what if you want to protect that from the host? This is a lot more difficult. The answer the industry — the chip vendors — has come up with is something called TEEs, trusted execution environments. These are chip-level instructions which allow you to set up a special execution environment where, basically, all the pages are encrypted, so the rest of the host can't see them. The only time they're unencrypted is when they're actually being acted on by the CPU, or whatever is providing the environment. TEEs: trusted execution environments. So that's all good, and this is what you need if you're going to run sensitive workloads. There are lots of sectors who can't deploy many of their workloads to the cloud because their regulators are unhappy about it — healthcare, finance, government, whatever. But also, what about vulnerable hosts? If you're at the edge, maybe you've got machines which are vulnerable to physical tampering or dodgy networks. So you really want to be able to start making use of this, and this is what the Enarx project aims to let you do. So, Nathaniel, why don't you talk a bit about this slide? Sure — thanks, Mike. So the key bit here is that TEEs allow us to actively distrust the middle part of the stack.
In other words, we don't have to include those layers within our threat model, because they're external and we're protected from them by the TEEs. So our principles are pretty simple. We don't trust the host. We don't trust the host operator. We don't trust the host owner. And we make sure, as part of our software deployment, that all the hardware we're using is cryptographically verified, and all of the software in use is audited and cryptographically verified. So pretty strong principles all around. Enarx is well suited to microservices — this is our bread and butter, what we'd like to be able to tackle: putting microservices inside these Keeps, which is what we call our protected areas. It's also well suited to sensitive data or algorithms. These are generally things you want as microservices anyway, because you want to isolate that data from the rest of your workflow. We really, really want easy development integration — usable security is incredibly important to us — and that's also why we want simple deployment. And everything we do is standards-based: this is not "write your application to some new proprietary API"; this is "deploy using existing standards". Cool. So TEEs sound great — everyone wants them. But we do have some problems. The first is that each is a different platform. If you want to deploy to an Intel SGX TEE, you're going to have to develop and deploy in a different way than for the others, because they're just different platforms, and that's tricky and difficult. Many of them require you to write your code against a specific SDK, and we don't think that's good — we don't want you to have to code only in C or C++, for instance, or only in Java. The other thing we've not talked about yet is attestation. Attestation is how you prove that your workload is actually executing in one of these TEEs — because it would be very easy for a malicious host to set something up and claim otherwise.
A malicious host could say, "this is definitely a TEE, you're safe, you can put your workload in here," but actually be doing so maliciously. So the TEEs provide attestation models, but they're all very different, and you shouldn't, as an app developer, need to understand all of this stuff — it's orthogonal to what you're trying to do, which is just write an application. So that's really quite tricky. Then, of course, there are different vendors. We've already seen some vulnerabilities in some of these implementations. How do you know which to track, which are important to you, which are going to affect you? You just want to deploy workloads. So it was with these things in mind that we decided to come up with the Enarx project. "For which technology do I build my application?" The answer, of course, is Enarx. So why don't you talk briefly through this as well, and then we'll get on to the demos in a couple of slides. Sure — thanks, Mike. This is essentially what our architecture looks like. As I mentioned before, we call our constrained areas Keeps, and basically we have a separate Keep implementation for each hardware technology. These deal with the specific CPU instructions and make sure everything is set up properly. Our goal is to normalize this: you can see that immediately above the hardware-specific layer, we jump straight into WebAssembly, which is a normalized standard, and all of our applications run on top of this. You don't have to worry about everything that's under the covers. Looking from the WebAssembly layer and above, we have WASI, the WebAssembly System Interface. This is an upcoming standard from the WebAssembly community that — you can think of it this way: what POSIX is to Unix syscalls, WASI is to WebAssembly. It gives you system calls that let you do interesting things.
And then above this, of course, we have all our language bindings. This is not something provided by Enarx; this would be whatever tooling is supplied with whatever language you're coding in. For example, this would just be the Rust target if you were programming in Rust: you'd install the Rust target for WebAssembly, compile your application for it, and you're off to the races. And then, of course, your application is on top. The key bit here is that we want to normalize things so that they look the same as early in the stack as possible, and above that you get a standard environment to develop your application. What you want is to develop an application and have it run on all of these — you don't have to recompile it for the different types of hardware you're running it on; you want to be running the same application, the same binary. So: it's demo time. We've got a few demos, which are fairly short. I'll go through the first two or three and then hand over to Nathaniel for some of the more detailed ones. Just to be clear, some of the sequences are shortened — that's because compiling takes a while, so we sped the compiling up. Apologies for that, but you didn't need to spend ages watching it. However, this is real: this is real code that you can use right now. It's checked into our repositories and we really want you to. So let's just remind ourselves: this is the setup we've got. We want to protect our workload from the host OS and the more general environment of the host. So let's start our first demo — I'll make it as big as I can; apologies if it's not huge. First of all, we clone our demo code. This is pretty simple. And we look at it. What this is, is a very simple program which generates a random number, and it doesn't print the random number out until you press Enter. So you run it, you press Enter, and there we are — we have a random number.
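The demo program itself isn't shown in the recording, but its shape can be sketched roughly like this. This is an illustrative stand-in, not the actual code from the demo repository: the real demo presumably uses a proper RNG crate, whereas this sketch derives a value from the clock so it stays dependency-free, and the name `generate_secret` is ours.

```rust
use std::io::{self, Read};
use std::time::{SystemTime, UNIX_EPOCH};

// Produce a pseudo-random "secret" without external crates.
// (Clock-seeded, for illustration only -- not a real RNG.)
fn generate_secret() -> u32 {
    let nanos = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .subsec_nanos();
    // A simple mixing step so the value isn't obviously the clock.
    nanos.wrapping_mul(2_654_435_761) % 100_000
}

fn main() {
    let secret = generate_secret();
    // The secret now sits in this process's memory, but we don't
    // reveal it until the user presses Enter.
    println!("Secret generated. Press Enter to reveal it...");
    let mut buf = [0u8; 1];
    let _ = io::stdin().read(&mut buf);
    println!("The secret is: {}", secret);
}
```

For demo two, the same source is compiled for WASI instead of the native target — at the time of the talk the Rust target was `wasm32-wasi` (newer toolchains name it `wasm32-wasip1`), installed with `rustup target add` and built with `cargo build --target wasm32-wasi`.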
So that's the very basic thing. What you've just seen, in fact, is this: we've got a secret generator as a workload in a standard binary. Nothing particularly exciting, but we want to show you step by step. So let's go to demo two, which goes just a little bit further. We're doing the same thing — cloning the demo so we know what it looks like — and, just to prove it's the same code (I'm sure you can remember exactly what it was), this time we're going to run it in a different way. We run it first as you'd expect, and we have a secret. We talked about being able to compile to a different target, and this is how you compile your Rust application to WASI, to WebAssembly. It's literally this simple: you use that target — assuming you've got your environment set up — and you have now created a .wasm file, which is the binary. We're going to run it in a second. Here we go. And you can compile to WebAssembly from lots of different languages: C, C++, Go, Rust, Python, Haskell, C#, lots of different things. OK — we've just built it, and what we're going to do now is not just run it directly. We're going to run it via a loader, which we call the wasm loader, and which is part of Enarx — this is all part of how we run things together. So we run this binary through the wasm loader, and hopefully it's going to come up with a number. Yep, and then we press Enter and there it is. What we've done, let me show you, is exactly the same: we've got a secret generator, but now it's running not as a standard binary but as a wasm binary, via a keep loader, which is a piece that we've written. Right, I'm going to take you through the third demo briefly, and then we'll pass over to Nathaniel. You may have noticed, when we looked at the original clone, that there was something called secret search.
And this is an evil, evil program which, once it has the process ID of your running demo, tries to find the secret: it scans the process's memory for something that looks like the secret. It's doing it now — it's going to look through the pages associated with that PID and see if it can find a secret. Can it? Dot, dot, dot... Ah, it's found something it thinks looks like a secret. Let's have a look on the left — and it has indeed found the secret. So what we've just seen is our secret generator, and the secret search was able to look inside it. That makes us very sad; we do not like this. This basically shows that, in normal usage, there is no isolation of type three: there's no workload-from-host isolation. Right, so let's talk through demo four — this is where it starts getting really interesting. Sure. In demo four, we're basically going to do the same thing we've been showing all along: the ability to scan the memory and find the secret. One important thing to note is the reason we don't reveal the secret until you press Enter: that pause stands in for your application's security controls. The vulnerability we're seeing here is that the host can, through memory scanning, bypass the security controls of your application and access the secret. And this is part of everything we do in the cloud today — this is just the normal way we operate. We want to see if we can do something a little different that gives us a stronger guarantee of security from that third form of isolation. So — I can't actually see what's going on on the screen here, Mike, because... What we're running here is pretty much the same as before: we're running it with nil, which means we're not making any attempt to improve things. And sadly, as before, it was able to find the secret. What we're going to do now is run it in KVM. So do you want to talk about that?
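The secret-search tool isn't walked through in detail, but on Linux a host-side memory scanner can be sketched along these lines. This is an illustrative sketch, not the actual demo code: it scans its own process via `/proc/self` so the example is self-contained, whereas the real attack would open `/proc/<pid>/maps` and `/proc/<pid>/mem` of the victim, which requires ptrace-level privileges on the host. The function name `scan_memory_for` and the size cap are our choices.

```rust
use std::fs::{self, File};
use std::io::{Read, Seek, SeekFrom};

// Walk a process's readable+writable mappings (heap, stacks) and
// search them for a known byte pattern.
fn scan_memory_for(needle: &[u8]) -> std::io::Result<bool> {
    let maps = fs::read_to_string("/proc/self/maps")?;
    let mut mem = File::open("/proc/self/mem")?;

    for line in maps.lines() {
        // Each line looks like: "55e0...-55e1... rw-p 00000000 ...".
        let mut parts = line.split_whitespace();
        let range = parts.next().unwrap_or("");
        let perms = parts.next().unwrap_or("");
        if !perms.starts_with("rw") {
            continue; // only scan regions we can read and that hold data
        }
        let (start, end) = match range.split_once('-') {
            Some((s, e)) => (
                u64::from_str_radix(s, 16).unwrap_or(0),
                u64::from_str_radix(e, 16).unwrap_or(0),
            ),
            None => continue,
        };
        // Skip malformed or absurdly large regions (arbitrary 1 GiB cap).
        if end <= start || end - start > (1 << 30) {
            continue;
        }
        let mut buf = vec![0u8; (end - start) as usize];
        // Some special regions can't be read even when marked rw.
        if mem.seek(SeekFrom::Start(start)).is_err()
            || mem.read_exact(&mut buf).is_err()
        {
            continue;
        }
        if buf.windows(needle.len()).any(|w| w == needle) {
            return Ok(true);
        }
    }
    Ok(false)
}

fn main() {
    // Plant a recognizable "secret" on the heap, then find it.
    let secret = String::from("ENARX-DEMO-SECRET-42");
    let found = scan_memory_for(secret.as_bytes()).unwrap();
    println!("secret found in memory: {}", found);
}
```

This is exactly the kind of access SEV defeats (the pages read back as ciphertext) and SGX blocks outright (the read faults), as the later demos show.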
Sure. In the first example, with the nil backend, we basically run this as a normal process, and we don't get any protections. Now we're running it in KVM, and we're going to see if KVM provides us any additional protection. Right now, on the left-hand side, you can see that that wasm code is running inside a super-tiny micro virtual machine — and it's just plain KVM. It's found the secret, Nathaniel. Yeah, it has. And the reason for this, of course, is that ordinary virtual machines don't provide this kind of isolation from the host. So what we're going to do now is turn on SEV mode: we're going to execute this using the SEV Keep. This is the technology from AMD which allows you to create an encrypted virtual machine, and we're going to use this encrypted virtual machine to do exactly what we just did in the KVM case — but this time we're really hoping that we don't find any secret in the VM. So it's running. It's looking... it is looking... and it's just returned. It doesn't seem to have found anything. Yep — and it didn't find anything this time, because all of the pages for this virtual machine are encrypted and cannot be decrypted by the host. This is guaranteed by the CPU and the CPU's firmware. We've run it again, just in case we wrote the code badly, and it didn't find it the second time either. And there we are — we've shown it on the left, to prove that there actually was a secret. So what we've seen — I'm not going to show all three cases, because you've already seen the nil case — is that KVM did not provide protection, but as soon as we put the workload in an SEV Keep, we did get protection: the secret search could not look inside it. So this is pretty cool, right? We're happy now on the left-hand side, but still very unhappy on the right-hand side. So we have one more demo to show you, and we're going to see something very, very similar.
But this time we're on a different machine. I'm having problems here — this is not displaying very well; it's very difficult to read on my screen, and I suspect you're having the same problem. Do you want to just talk through what it's doing anyway? I will, yes. So this time we're not running on an AMD box; we're running on an Intel box, and Intel CPUs have a different technology called SGX. We want to run the same application inside SGX, and we're showing it incrementally here as well — I believe this video does it incrementally, where we start off... It does, yep. So it starts off with the nil Keep, and we're able to find the secret. Then we show it again, executing in KVM on this Intel CPU — that's what it's doing right now; we're doing our secret search against KVM, and we should still find the secret in this case. Yep, it's just about to find it, I think. I should say that this presentation, with just me talking over it, is available if you go to our booth, and we'll be putting it on the Enarx YouTube channel if you want to look at it there as well. So we did find it. Now we're doing a new one — an SEV one, I think? SGX, yeah. And it's core dumped. Yep. In this case, the hardware profile of SGX is slightly different from SEV's: SEV will allow the host to read all of the pages, but it can't decrypt them, whereas SGX does not even allow access to the pages. So what happens on the right-hand side is that when we attempt to scan those memory pages, we actually get a core dump, and we can't find the secret at all. And it's just core dumped again, which is good news for us. I'm really sorry the screen resolution is terrible — I don't know what's going on here, but OK. And we're just going to see... yeah, there was a secret. So that's good; that all showed very nicely. So in this case we've seen something very similar to before: KVM it could look inside; SGX it couldn't.
And again, we're happy. So we have just shown you a wasm file — a WebAssembly binary — running in a Keep, in a trusted execution environment, on AMD and on Intel. This is a really, really big step forward for the project. What I'm going to do now is hand over to Nathaniel for a bit of an architectural view. We want to leave space for questions, but we have loads of time; we're doing well. So here's a chance, Nathaniel, to talk about the specific details of the different types of Keep. Sure. This example is roughly what everything looks like on all of our CPU architectures — in this case, it's Intel and SGX. We have, of course, our CPU at the bottom. This is our root of trust, and we can validate that root of trust using attestation, which proves to us cryptographically that it is a real CPU made by Intel and that it has set up our environment properly. Above that, we have the host kernel and the Enarx Keep loader. These two bits are actively distrusted — they exist outside the trusted domain — but both of these layers are silicon-architecture-dependent: you're going to have a kernel built for the Intel instruction set, for example, and a loader built specifically for SGX. The same is true of the shim. The shim is a layer which adapts the specific hardware technology so that it can support a static PIE binary. Basically, everything from the shim layer up looks exactly the same, both to Enarx and to the application, whereas everything from the shim layer down is architecture-dependent. So once you — go ahead. Yeah, so this is the same example on an AMD CPU. You'll notice that things are slightly different, but the layers are the same, and once we get up into the WebAssembly and WASI layer, everything is the same from that point on. One important bit to note is that Enarx plans to distribute those four layers: the loader, the shim, and the WebAssembly and WASI layers.
The application is your responsibility, the kernel is of course the cloud host's responsibility, and the CPU is the CPU vendor's responsibility. Absolutely. Now, we've talked about Intel SGX and AMD SEV — those are the technologies available at the moment — and we absolutely plan to support others as they become available. Intel has announced something called TDX, which we've already said we plan to support once we have hardware, and as and when other chip vendors come out with similar technologies, we very much intend to support them too — we talk very closely with all of the major silicon vendors. One key thing we didn't call out in the demo is that the binary we created was the same: the compiled wasm binary is exactly the same. So if you have a cloud, or a bunch of clouds, with a variety of different chip architectures, you don't care. You, as the tenant, don't care, because Enarx will deploy to whichever chip architectures are available and fit your security policy. You don't need to worry about it, because as far as you're concerned, your binary — your application — will see the same underlying environment, thanks to the abstraction we perform. So here we've got a brief, very simplified way of thinking about the different pieces. On the right, we've got a client — you might be using a CLI, or an orchestrator like Kubernetes, OpenShift, or OpenStack — and there's an Enarx client agent. The Enarx client agent knows how to talk to the pieces on the host, and the client agent is trusted. The Enarx host agent involves things like the loader, which, as we've said, isn't trusted, but which helps load the workload into the Keep — although it never sees the workload itself. The workload — data and code — is always encrypted until it actually runs in the Keep. So the host agent can't interfere and can't see anything; the worst it could do is refuse to load the workload.
And you'd know about that, because your application wouldn't be running. I've got a slightly more complex view of this — do you want to talk around this, Nathaniel? It's off my phone, I'm afraid. OK, fine — that's the problem you have with being at the lake. This just goes into a bit more detail, and if anyone has any specific questions about the architecture, we can come back to this slide if we need to discuss things further. One final thing: we've talked about low-level stuff — we're in syscall territory, low-level runtime territory, kernels and all those sorts of things — but actually, as a developer, we don't want you to need to care about any of this. We see Enarx as a deployment framework, not a development framework. You should be able to choose whatever language you want, as long as it can compile to WebAssembly — and many do; over 40 already. You develop your application, you compile it, and then you've got something that's ready to run. You can think of it as the equivalent of a virtual machine image, a container image, or a JAR file: something you're ready to ship and deploy. The stuff in orange is one set of steps; then you hand that over and do the second set of steps, which is the deployment side, where you choose the host, it's configured, and then it runs. So, in the example, you might be using dev tooling to do your development and compile to WebAssembly — whether that's Emacs or VS Code or Eclipse Che or whatever — and it spits out a wasm file at the end. Then, whenever you're ready, you deploy — you might be using OpenShift alongside the Enarx pieces — into IBM Cloud, Azure, AWS, Alibaba, whatever it may be. You can deploy where you want.
We separate out these two stages, because one of the joys of the cloud — and of cloud-native thinking — is that you shouldn't need to think about the different parts at the same time. So, where are we? Why don't you talk about these design principles a bit, Nathaniel? Sure. As Mike mentioned at the beginning of the talk, we have some very strong design principles, and these are essentially our ten commitments. Number one is that we want a minimal trusted computing base: the amount of code required inside the Keep in order to make your application run should be as small as possible. This protects you, the tenant, from compromises, and it also allows greater density for your deployments. Number two is that we want a minimum number of trust relationships — that is, who do you have to trust in order to deploy an application? From the Enarx perspective, you have two, and basically no more. You need to trust the CPU and the CPU's firmware — we consider that one trust entity, since it comes from the same provider — and you need to trust the Enarx code base. Everything else is either a library you're bringing into your application or internal to your own application process; you don't have to trust many other parties to get your application running securely. Third, we care about deploy-time portability: once you've compiled and validated your application, you can take that binary and deploy it across a number of CPU targets without going through any sort of redevelopment cycle. Number four, we want the network stack to be outside the TCB, and the reason is that, historically, quite a number of vulnerabilities have occurred in network stacks.
In fact, if you look at the Linux kernel, for example — and really this applies to all kernels — it's very often the network stack that produces the worst kinds of vulnerabilities, things like Heartbleed. So we want that outside the TCB. Five: we want security at rest, in transit, and in use — we want to be protecting your data at all times, and this should be on by default. This means that when your application is actually running, it's protected through the hardware isolation; your data should also be protected in transit, via enforcement of encrypted channels like TLS, and at rest, with disk encryption. Six is auditability: we want our code base to be easy to read and to audit, so that people can look into it and trust that we're doing what we say we're doing. Seven, of course, as always: we're open source — that's never a question. Eight, we place heavy emphasis on open standards — again, you don't have to develop to a proprietary API; we just want to use standards. Nine, we emphasize memory safety — this is precisely why we chose WebAssembly and Rust as our implementation tools. And ten, we're committed to ensuring that we will never place any backdoors in the project. I think that one's pretty self-explanatory. Yep, absolutely. So, last slide before questions: we are an open project, and it's not just the code. The wiki is open; our design documents are open; issues and PRs are all open; our chat is open, hosted by Rocket.Chat — thank you very much indeed; and our CI/CD resources are open, provided by Packet — thank you very much indeed. We have stand-ups every day, Monday through Friday; anyone can turn up to those, and you're very welcome.
And we implement the Contributor Covenant code of conduct, because we believe very strongly in diversity and inclusion in the project. Here's some information: the booth is open — there should be some folks from the project there if you're interested in chatting — or you can just join us on our chat at any point: chat.enarx.dev. It's pretty easy to remember. It's all Apache 2.0 licensed, and it's all in Rust. So with that, I'm going to stop sharing, and I'm going to see if we have any questions from anybody; we'll try to answer those if we can. I don't see any questions in the chat, so we can wait a bit to see if anything comes in. Nathaniel, is there anything we should be talking about that we haven't? We didn't rush through, but there's only so much we can get onto the slides — anything you think we should have mentioned? Just that we'd love to have people show up and be a part of it, really. I think that's the most important thing. We're a pretty friendly group of people, and there are some really challenging technical problems to solve, so we think it's an interesting project to be a part of. Indeed it is. I'm going to put a couple of links into the chat. One is the booth, so if anyone wants to join us at the booth, they're very welcome to do so. Another is our chat, which is there. And last but not least is enarx.dev, which will lead you to our code and, in fact, all of the other information you might need. So, yeah, as I said, there are some folks willing, ready, and able to take your questions, or you're always welcome to join us in the chat. And we have a mentoring system as well: if you want to get involved in the project and you're a bit scared by the size of the code base, or not sure how best to contribute — because it's not all about coding; there are lots of other things to be done as well —
— then you can raise a GitHub issue as a mentoring request, and we will find somebody to come and work with you to try to get you involved in the project. That seems to have worked very well so far. We had a new person turn up today who came to a talk earlier in the week that she was interested in, and she is hopefully going to be filing a mentoring request. It's always nice to have new people like that. I think that's great — I really like the open source culture, the open standards, the mentoring and so on. So anyone who wants to get in touch: there are links in the chat; do get in touch with these people. I think they're doing amazing work, and I really like the idea of Enarx — the third type of isolation that you talked about. I'm not from a security background, but learning about that was really fun, so thank you for sharing all the cool work that you did. You're very welcome, and thank you for your attention, everyone. Is there another session today in the track, or is this the last one? This is the last one, so we're done for the day after this. We have a comedy show and a show-and-tell party after this, so feel free to join that — it's going to be fun. We hope to see you again. In my time zone, I think I'm probably going to bed. So thank you, everybody, for your attention and for coming along, and we look forward to speaking to you and meeting you on the project. Thanks a lot. I'm going to leave now. Thank you. Thank you.