Okay, we'll start off with the most important business of the day. There are stickers coming around; everyone will get one, so no need to fight, there does not need to be a rampage over the stickers, plenty to go around. My name is Nathaniel McCallum, CTO of Profian, and we're going to be talking about WASI networking: in particular, what drove us to propose the addition of sock_accept to WASI, what the most recent developments in this space are, and the unique ways in which the Enarx project is using them.

I think a lot of people know the answer to this question, but just in case there's somebody in the room who doesn't, has seen this acronym all over the talks today, and is still abundantly confused: WASI is actually a pretty simple concept. We have our WebAssembly code running, we have some native code underneath it, and we need an interface between the two so they can talk to each other. We could always do this with custom APIs, but custom APIs aren't great for building communities, and they aren't great for scaling code. We want something standardized that can give an excellent experience on every language platform out of the box.

We'll go to the next slide and get a little bit of history here. WASI actually started under a different name, CloudABI, which, along with a few other inspirations, began in 2016. After we released the WebAssembly MVP in 2017, people started to think about this more systematically, and CloudABI developed into what we today call WASI, a subgroup within the W3C WebAssembly Community Group that is actually working on this standard. At that point CloudABI was deprecated, so everybody is basically trying to use WASI today, with some degree of success, and some of the more recent developments are the ones I mentioned.
Basically, we're trying to drive an effort on modularization, which is really important because there are a lot of environments that could use WebAssembly but may not be able to expose all of the interfaces that could be available under WASI. We want to divide the WASI specification into multiple modules so that platforms can support only the APIs they are able to support. And specifically, we added the sock_accept call this last year to snapshot 1, and we're going to talk a little bit more about that in a moment.

So what about the WASI snapshot we're all basically trying to run on today? It has a lot of niceties to it: it contains a bunch of interfaces, things like clocks, file systems, networking, arguments, et cetera. However, it's really not modular, as I was saying before. You sort of have to buy into the whole thing, and one example of where this doesn't work is the Enarx project. Although we are working on file system support, as of today, in our latest release, there is no file system support at all, so if you attempt to call any of the file system APIs, you'll simply get an error. That will magically disappear in a future release, when we ship transparently encrypted file systems, so look for things to get better in that regard. But we still need to divide WASI into multiple modules so that platforms can advertise support for different feature sets. For a long time, modularization has been blocked on interface types, and more lately on streams, which is currently under active development thanks to many of the people in this room. We're looking forward to this glorious future, which will arrive any day now.
So under WASI snapshot 0, if we look just at the networking calls, snapshot 0 had a variety of interfaces, but there were really only three directly having to do with sockets: being able to receive data, being able to send data, and being able to shut down a stream. It's a remarkably simple API, right? The question is what we can do with this. Now, there are actually two other interfaces that sneak in under the guise of networking. The first is poll_oneoff, which allows you to receive a notification when there is I/O ready to be performed on a given file descriptor. And of course, there's the non-block flag as well, which allows you to put a file descriptor into non-blocking mode. That's what lets you get to this poll-based event model: a read won't block if there's not enough data available, and so forth.

That basically summarizes what snapshot 0 was in terms of networking, and you should immediately notice a problem: there is no way to create a new socket here. The runtime could create a set of sockets and hand them over to the WebAssembly application, and you could operate on those sockets: you could read and write on those streams, you could close them, you could wait for I/O, but that's really all you could do. So we really need the ability to create new sockets, but the problem is: on what capability do we do this? For those of you who are not familiar with capabilities, we need a brief introduction to capability-based security. Each WASI call has a capability context. Look at two calls: open and openat. WASI has no open call, because open is global; it operates on a global context. openat, by contrast, operates on a directory context. So you can say: within this directory, open a path.
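A quick aside before we go deeper into capabilities: the notification pattern that the non-block flag and poll_oneoff give you can be sketched with ordinary sockets. This is a hedged, host-side sketch in plain std Rust, not WASI itself; a crude sleep-and-retry loop stands in for poll_oneoff, which would block until readiness:

```rust
use std::io;
use std::net::{TcpListener, TcpStream};
use std::thread;
use std::time::Duration;

// Non-blocking mode plus a poll loop: "notification"-style I/O.
pub fn notification_demo() -> io::Result<(bool, bool)> {
    let listener = TcpListener::bind("127.0.0.1:0")?;
    // The analog of setting the WASI non-block fdflag on the descriptor.
    listener.set_nonblocking(true)?;

    // No client has connected yet, so accept must return immediately with
    // WouldBlock (EAGAIN on POSIX) instead of blocking.
    let got_wouldblock =
        matches!(listener.accept(), Err(ref e) if e.kind() == io::ErrorKind::WouldBlock);

    let addr = listener.local_addr()?;
    let client = thread::spawn(move || TcpStream::connect(addr));

    // Crude stand-in for poll_oneoff: retry until the listener is ready.
    let mut accepted = false;
    for _ in 0..200 {
        match listener.accept() {
            Ok(_conn) => {
                accepted = true;
                break;
            }
            // Not ready yet: this is the "call me again later" signal.
            Err(e) if e.kind() == io::ErrorKind::WouldBlock => {
                thread::sleep(Duration::from_millis(5));
            }
            Err(e) => return Err(e),
        }
    }
    client.join().expect("client thread")?;
    Ok((got_wouldblock, accepted))
}

fn main() {
    let (wouldblock, accepted) = notification_demo().expect("demo");
    assert!(wouldblock, "first accept should return WouldBlock");
    assert!(accepted, "poll loop should eventually accept");
    println!("notification mode ok");
}
```

The WouldBlock error kind here is Rust's spelling of the EAGAIN that non-blocking mode hands back on WASI and POSIX alike.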
And this actually goes all the way back to the 1960s, when we first started getting memory controllers in our hardware. We invented the concept of processes, where a process can have a separate address space: every time you call fork, you basically get a different address space for the new process. The end result of having a different process space is that one application can't muck with another application's memory, and that's a really great feature. The problem is that nobody ever thought to extend this notion of privacy, of multiple views (today we would call them namespaces in the Linux kernel), to all of the other resources available on the system.

So, for example, while you had a private view of memory, you had a global view of the file system. Everybody saw the same files on the disk, and the only access control you had was whether you had permission to access a given file. It was the same thing with networking: you would typically have a set of network interfaces on a Linux system, and if you had access to one of them, you had access to all of them, plus or minus.

Capabilities provide us a low-cost alternative to OS namespacing. There's been a significant amount of effort here, of course: basically all of containers is built on top of operating system namespacing, where you can create different namespaces for network interfaces and file systems and give people separate views of those. Capability-based security instead says: we're not going to have any APIs that don't have a context to them. What this means is that we can always create private views of the resources on the system, because every API receives a context. So what we want, basically, is a system where there are no global resources, and the runtime can always indicate which resources a particular WebAssembly executable has.
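To make the capability idea concrete, here's a minimal sketch using only the standard library. The names Dir and open_at are hypothetical, chosen to mirror openat; a production design, such as Dan Gohman's cap-std crate, holds an O_DIRECTORY descriptor and uses openat(2) rather than the path prefixing used here:

```rust
use std::fs::{self, File};
use std::io::{self, Read};
use std::path::{Component, Path, PathBuf};

// A hypothetical capability handle: holding a `Dir` is the only way to open
// files, and only files beneath its root. There is no global open at all.
struct Dir {
    root: PathBuf,
}

impl Dir {
    fn new(root: PathBuf) -> Self {
        Dir { root }
    }

    // The openat analog: open `rel` within this directory context only.
    fn open_at(&self, rel: &Path) -> io::Result<File> {
        // Absolute paths and parent escapes would leave the capability.
        if rel.is_absolute() || rel.components().any(|c| matches!(c, Component::ParentDir)) {
            return Err(io::Error::new(
                io::ErrorKind::PermissionDenied,
                "path escapes the directory capability",
            ));
        }
        File::open(self.root.join(rel))
    }
}

pub fn capability_demo() -> io::Result<(String, bool)> {
    let tmp = std::env::temp_dir().join("wasi_cap_demo");
    fs::create_dir_all(&tmp)?;
    fs::write(tmp.join("greeting.txt"), "hello")?;

    // The "runtime" grants exactly one directory capability.
    let dir = Dir::new(tmp.clone());
    let mut text = String::new();
    dir.open_at(Path::new("greeting.txt"))?.read_to_string(&mut text)?;
    let escape_denied = dir.open_at(Path::new("../etc/passwd")).is_err();
    Ok((text, escape_denied))
}

fn main() {
    let (text, escape_denied) = capability_demo().expect("demo");
    assert_eq!(text, "hello");
    assert!(escape_denied);
    println!("capability demo ok");
}
```

The point of the sketch is the shape of the API: every operation takes a context, so the runtime can hand out exactly the views it wants and nothing more.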
So this poses particular challenges for the global APIs we all know and love, particularly in the networking world, where you're typically used to having a global view of network addresses and operating in that global context. When we're trying to do capability-based security in WASI, that's not exactly a great fit, and there's a tension there. And it's not just Berkeley sockets; it's file systems too. We've solved that pretty efficiently using openat, for example. But even today, for file systems, if you look at the Rust standard library and compare it with openat: openat takes a directory file descriptor, and Rust doesn't expose in its standard library any primitive for operating on open directories. So even though the underlying operating system does provide openat, the Rust standard library provides no way to actually access it.

The fundamental situation we found ourselves in with snapshot 0 was that there was no way to create new sockets. The runtime could create sockets ahead of time and hand them to the application; you could read and write on them, you could close them, but that was it. If you wanted to do more, you were out of luck. Fortunately, we were able to rev this to snapshot 1. In snapshot 1, we got most of the same stuff, but we at Profian felt pretty constrained by not being able to accept any incoming connections, so we proposed the addition of sock_accept as an API. And I think it's great: everyone was pretty enthused about this, and we were able to move really quickly. Profian sponsored the addition of the entire networking stack into the Rust standard library, and we also provided patches to wasi-libc. A number of people here have taken this up as well; Microsoft, for example, has done this in .NET, and it's great to see. What it means is that we do now have the ability to accept incoming sockets.
And the reason we can do this is that when you pre-create a listening socket in the Berkeley sockets API, that listening socket already provides a context. We are not violating capability-based security here; we are simply using the listening socket as that context. So it was pretty easy to add this.

As I mentioned, this has been implemented in a variety of places, and it has been a lot of work. Profian has done some of it, but others have done it as well, so thanks to everyone who has contributed. What we see in main right now is that wasi-libc has sock_accept support. This means anybody who is consuming wasi-libc as their interface to WASI automatically gets sock_accept as part of it, which includes a bunch of the dynamic languages like Python, Ruby, and so forth. In the Rust world, we added support for networking to the standard library; this is available in nightly. We also added support to mio for poll_oneoff, which it previously couldn't support, so mio now supports non-blocking I/O on WASI. We currently have somebody working on getting Tokio up and running, so we would really like to see the entirety of the Tokio framework working, and we're also evaluating async-std. If anyone in this room is interested in collaborating on these, we would love to have your collaboration. This is work that really benefits everybody, so we'd love to make a good showing of it.

As we look beyond WASI snapshot 1, however, a variety of things still have to happen in order for us to make forward progress. Fortunately, we have pretty mature interface types at this point, and the tooling is rapidly maturing in this area. We are also starting to get the streams definition to be somewhat mature; I'm hoping that this will accelerate in the coming days as people show more and more interest in it. I think it's pretty clear.
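Before moving on, the pattern sock_accept enables can be sketched in a few lines: the listening socket is the capability, and every accepted stream inherits its context. This is a hedged, host-side sketch in plain std Rust; in an actual WASI build, the TcpListener would come from a descriptor the runtime preopened rather than from bind:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Accept a single connection and echo one message back. On WASI, the
// listener is the capability context, and accept is the sock_accept analog.
fn accept_and_echo(listener: &TcpListener) -> std::io::Result<()> {
    let (mut stream, _peer) = listener.accept()?; // sock_accept analog
    let mut buf = [0u8; 64];
    let n = stream.read(&mut buf)?;
    stream.write_all(&buf[..n])?; // echo it back, then drop (close) the stream
    Ok(())
}

pub fn echo_roundtrip() -> std::io::Result<String> {
    // Stand-in for a runtime-preopened listener.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Client side, on another thread.
    let client = thread::spawn(move || -> std::io::Result<String> {
        let mut stream = TcpStream::connect(addr)?;
        stream.write_all(b"ping")?;
        let mut out = String::new();
        stream.read_to_string(&mut out)?; // reads until the server closes
        Ok(out)
    });

    accept_and_echo(&listener)?;
    client.join().expect("client thread")
}

fn main() {
    assert_eq!(echo_roundtrip().expect("echo"), "ping");
    println!("echo ok");
}
```

Nothing here is Enarx-specific; it's the ordinary accept loop that the standard-library and wasi-libc work above makes available to WASI guests.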
There were at least four talks, I think, that mentioned that their biggest pain point in WebAssembly today was networking, so it seems to me that there's a pretty broad consensus that this is something we need to pay attention to. We really need to target three different scenarios. Right now, there's a lot of work being done on the last one, but I want to talk about what these three are, the subtle differences between them, and why I think we need to adopt all of them.

The first is blocking. Blocking is the old Berkeley sockets model that we've known and loved since time immemorial: if you create a socket and do a connect, you're going to wait until that connection completes before the function returns, and the same goes for reading, writing, and so forth.

Non-blocking was a mode added on top of this, where you set the non-blocking flag on the socket. Then, if there is no I/O available to be performed and you do a read, for example, the function returns immediately with an EAGAIN error, saying that you need to call this function again when there's actually I/O available. This is combined with a polling function of some kind; in WASI, that's poll_oneoff. poll_oneoff will block regardless of the state of the non-blocking flag, and when it returns, it gives you an indication that there is I/O ready to be performed. Then you can call the non-blocking read, and instead of receiving that EAGAIN error, you receive some of the data that was available on that connection. We might call this (I got this term from Dan Gohman) a notification mode.

Async is different, but very subtly so. Async is where we indicate to the kernel or the runtime that we want to perform some I/O.
That function immediately returns, and then we can call another function later that blocks and returns only when the I/O is complete. So the distinction between non-blocking and async is that non-blocking gives you a notification that I/O is available and then you perform a non-blocking read, whereas with async you give an indication that you want to do a read and then call a function that blocks until all of the data is available. Notification versus completion: thank you, Dan Gohman, for that great phrase.

We still also need to port existing tooling over, and I know Dan is working on that furiously. There's also a new networking proposal that has been put forward, and it is fairly reminiscent of what we know from traditional Berkeley sockets. But that may actually pose some problems, and you'll see why when we get to the Enarx demo in a moment. One of the things it does is expose all of the lower-level protocols, and one of the questions is: do we actually want to expose all of those lower-level protocols, or do we really just want to say, I have this named thing, maybe it's an outgoing connection, maybe it's an incoming connection, and I want to perform operations on it, but all the details of what that thing actually is may be hidden by the runtime? You'll see why this is important in a moment. Spoiler alert: we do transparent TLS in Enarx, so when you create sockets, you automatically get a TLS socket, not TCP; we don't allow the use of plain TCP at all. So exposing the protocols provides some challenges, for example for TLS. If we're just going to wrap the bare Berkeley sockets API and expose all the underlying protocols, it also means that we are going to have difficulties with multi-layer policy.
For example, let's say, in a world where you're not doing transparent TLS like Enarx does, you want to do TCP operations but you also want to do TLS operations. How do you control the policy over which is allowed to which hosts? It becomes a fairly complex problem to figure out what the actual interactions between those things are, and the reason is that TLS is layered on top of TCP. So now, on every packet, you have to analyze: if TLS isn't allowed, is this packet I'm receiving on TCP actually a TLS packet? And if it is, then I have to evaluate it against my policy. We're forcing everyone into deep packet inspection, which is probably not a place where we want to be. So we really need some good thinking about this, and really, this is just an invitation to participate. I know there are a lot of people in this room who really care about WASI and care about networking, so this is a really good opportunity to contribute to this discussion and help us create a design that looks really good. By the way, all credit goes to the author of the proposal; it's a very thorough proposal, and I'm not knocking him at all. It's really just a matter of finding whatever best fits the needs of the community.

So we actually have a demo today, and I want to demo what you can create in a sock_accept-enabled world. Everything you're going to see is running on the most recent release of Enarx, 0.5, which was last week. We're going to show an application called Cryptle. Cryptle is a clone of everyone's favorite game, Wordle, except it runs in an encrypted environment. First we're going to show it running on Enarx, just so you can get a feel for what the application does. Then we're going to attack Cryptle on Wasmtime. And I'm not singling out Wasmtime as the bad guy here, okay? Wasmtime is fantastic; we use Wasmtime internally. What I am trying to show by using Wasmtime here is
that we're going to take the same exact WebAssembly binary that we ran in Wasmtime, deploy that binary using Enarx, and get a bunch of other protections for free. So we're going to show an attack on Cryptle using Wasmtime, do an attack retrospective, analyze why the attack worked and what we could do to stop it, and then try the same attack on Enarx.

I need to pause here for a moment, because a huge thank-you needs to go out to Harald Hoyer, Richard Zak, Roman, who's here, and Nick, who's also here. Nick, wherever you are: you all put in a tremendous amount of work on this demo, and I'm just really pleased to work with you all, so thank you very much. By the way, Harald was supposed to be giving this talk today, but his wife is expecting, so if you know Harald, send him congratulations.

All right, hopefully this video is going to come up here. Go go gadget internet. This is why you record the video, so you don't have problems, and then of course you have problems with the video. Oh, there we go. So we have this game, Cryptle, which is basically a multiplayer Wordle demo, and you can guess some words on the left. One of the things that's different about Cryptle compared to the normal Wordle game is that in normal Wordle, the word list is actually in the client, not on the server, so anyone who is good at inspecting things in the browser console can figure out what the word is. We wanted to do something more secure: we want the word to actually be chosen on the server side. And more than that, we wanted to allow multiple players to guess, and we wanted them to see when they guess other players' words. So this is not a super competitive game; it's just a game for a little bit of fun. So we have three players here, all playing the Cryptle game, and you can see, oh, we got
words: we got three letters there. Now we're going to guess "world," and see, we actually guessed one of the other players' words, so it showed up in a special color. And finally, we're going to play as the third player here and do the same thing, just guessing letters.

While this is playing, I'm going to make a brief PR announcement. We released 0.5 last week; we now have support for running Enarx in unencrypted mode on both macOS and your favorite Raspberry Pi. This is in preparation, by the way, for Arm Realms, which has been publicly announced, so stay tuned for news in that regard.

So we've seen our application here, and we've guessed another word, and we can more or less see who the winners were. This, by the way, was running on Enarx, on the latest release. Now we're going to skip ahead and show the application running on Wasmtime. The text is probably a little small; hopefully you can see it. We're going to do a cargo build of this Rust crate, and the crate is just the Cryptle crate; there'll be a URL for the demo later if you'd like to see it. So we've run it in Wasmtime, and it's now listening on a socket. But now we are an attacker who has managed to gain root access on the server, and we're trying desperately to get this most prized Wordle word. What we want to do is scan the memory of the application for any of the words that are in the dictionary, because we want to find out what the word is, basically bypassing the guessing rules. By the way, you should understand that the guessing rules in Wordle are really just the access controls of your application. By accessing this host, you're going to see here in red, we found words that are in the word list, and
so as we scan the memory of the application, we pick up, I think, three words in this particular instance. Yeah, there's "youth," and there's one more. So although Wasmtime has performed spectacularly, we are performing an attack that is out of scope for the security model of Wasmtime. Again, Wasmtime is not to blame; if we were running this in Enarx in debug mode, you'd see exactly the same thing. You would be able to access the memory and bypass it.

So the question is: why did this attack succeed? The fundamental problem is that there are three different forms of workload isolation. Type one is protecting one workload from another. Type two is protecting the host from a malicious workload. Both of those we can actually do pretty well today, right? There are lots of companies doing this at scale, so this is not a problem. The problem is that until confidential computing, we didn't really have any protection for the third type of isolation: protecting a particular workload from the host. Currently, the host has access to read all the memory of the application, can tamper with the application while it's running, and so forth. And this is fine, right? As long as you trust your CSP and all of their sysadmins and the whole hardware, software, and firmware stack. Fortunately, it's not millions of lines of code... oh wait, yeah, it is. And you have to trust that none of it is compromised, whether because they simply failed to secure something or through a supply chain attack on the actual operating system, both now and in the future. And that's of course if your CFO and your board and your auditor and your regulator all agree with you too. So this is a pretty high list of criteria in order to be able to trust it, and it's something we just sort of accept in the industry today. We accept it because we aren't aware that there's another way to operate, and that's because the hardware simply hasn't been
available. But not all clouds are good actors. So the question is what makes Enarx different, and the answer is that we use confidential computing. Confidential computing is a new set of hardware technologies that have come out from all of our favorite CPU manufacturers, for example Intel and AMD, and Arm has also announced Armv9 Realms. Basically, this allows you to create an application or a virtual machine within which the memory pages are actually encrypted. So while the application is running, even though the host can scan the memory of a normal application, if you've set up this special confidential application correctly, the host won't be able to peek at it or tamper with it.

So we use trusted execution environments, which are based on CPU hardware; we encrypt the workloads; and we provide two things that are really important. The Enarx project will not implement support for a TEE platform if it does not provide these two properties: we want integrity and we want confidentiality. In other words, no peeking and no tweaking.

Basically, you start off with your workload here, and you want to put the workload on the host somehow. But the problem is: how do you actually know that the workload you are attempting to deploy to that host is in fact the workload that gets deployed? We should be thinking of this as a certain kind of supply chain attack. We tend to think of supply chain attacks as everything north of us. I grew up in upstate New York, and if you ask any New Yorker where upstate New York is, they will reply: well, it's whatever is north of me. If you live in New York City, upstate New York is anything north of New York City; if you live in Albany, well then, upstate New York is anything north of Albany. The same thing applies here: downstream from you is also a supply chain
attack. So what we want to do is create this TEE and create a measurement of the application, in this case the Enarx runtime, and then offload that measurement, signed by the hardware, to an attestation service. And the attestation service must not be in your cloud, because your cloud provider can't prove to you that they set up the environment correctly; you need an independent source of trust. So we offload the measurements to an attestation service, and the attestation service proves to you cryptographically that the environment that was set up has those two properties, confidentiality and integrity.

But what we actually want to do is something more than that, because we want to create an empty Keep. There are several systems out there today that try to do something like this, but they deploy the application immediately into an untrusted system. And what about the algorithms of that application? What if it's a risk model and you're an insurer, or what if it's an AI model, or any of these types of code that need to be protected, of which there are quite a few today? So what we want to do is bring up an empty Keep, what we call an undifferentiated Keep. It contains only the Enarx runtime, and that's what we measure. Then the attestation service validates this for us and provides a certificate identifying the workload that gets deployed in that Keep. Once the Keep has the certificate, it can then fetch an application from the Drawbridge. You can think of the Drawbridge as something like an attestation-aware Docker Hub: it contains all the software that you're going to be deploying, and it will only release that software if you perform a successful attestation to the Steward.

So we can show this same exact demo on Enarx. Now, one of the things that's not immediately obvious here is that when we ran on Wasmtime, our sockets were unencrypted. Well, what's going to happen when we run this time on
Enarx? We're going to do exactly the same thing, deploy exactly the same binary, but we're going to do the attestation, get a certificate that identifies the workload, and then we can do transparent TLS on everything that's involved. Coming soon will be transparent, encrypted file systems as well, so anything that you persist to disk is always encrypted. The point is that once data or code enters the system, it never leaves unencrypted unless you do something seriously, seriously wrong. We can't make it impossible to do the wrong thing, but we can make it hard.

So here's the example. We're going to kill the Wasmtime instance that was previously running, and we're going to do the same thing, starting it using Enarx instead of Wasmtime. We have this configuration file, Enarx.toml, that identifies the environment and which Steward to contact for the certificate. This file is going to come from the Drawbridge in the future, but what we're going to do right now is upload the files to the Drawbridge: we upload the Wasm file and we upload the Enarx.toml, and once those are both in the Drawbridge, they are ready to be deployed. The upload has completed, and now we just do an enarx deploy, giving the URL of the particular application. Normally we'd have a shorter slug here, more similar to the Docker style, but we're specifying a full URL because this Drawbridge is running locally and unencrypted. Support for unencrypted Drawbridge will go away; it's just currently there to support this latest release. It's taking a moment here to actually start. It's a little bit slower to bring up because we have to bring up the hardware environment, do a bunch of cryptography, contact the Steward, do our attestation, receive our certificate, and set up all of our sockets. But once that's done... there we go. Now we've switched to the scanning page. The application
is running, and we are going to do the same scanning attack that we saw before, this time scanning Enarx instead of Wasmtime. We do the same memory dump, just with a different PID; this time it's the PID for Enarx. And you'll notice that we've not found any words, and the reason for this is that all of the memory is encrypted. On different hardware platforms this varies: this demo was on Intel SGX, and we can also do this on AMD SEV-SNP, which is the latest Milan generation. If you do this on SNP, you'll actually get a denial, because the hardware denies access to the memory inside the encrypted VM.

If you'd like to find out more about the Enarx project: as I said, we just released 0.5 last week, and we have releases coming every four weeks now. It's a release train, so if you want to contribute, please come hop on the train with us. You can go to the website, enarx.dev; we have a blog, we have GitHub, and we have chat as well. We're a friendly bunch of people, so come along and help us build a better future. Oh, by the way, Profian is hiring. If you're a crack ninja wizard at ops stuff, and you like performance and low-level hardware work or cryptography, if any of that intersection interests you, we're doing really, really cool work and we have a great team of people, so come check us out.

Oh yeah, there is one more thing. Today we're announcing the Cryptle Hack Challenge. Basically, we want to see what your elite skills are; we want to know if you can actually hack the Enarx runtime. Now, I do have to give a little bit of a caveat here: although Enarx is very close to production, we're nearing production capability, this is still a pre-production release. But we want you to help us find the issues before real attackers do, so we want to see if you can break it. There's going to be cake, and by cake we mean prizes; this will include some hardware and some cash. Basically, if you have
an attack, you submit the attack to us, we will run the attack on the server, and we're going to livestream the whole thing, so we'll see whether your attack succeeds or fails. If it succeeds, you win a prize, and all of the winners will be announced at Black Hat. We're going to be doing this in two phases, and the first phase will be announced at Black Hat, so come along and show us your stuff.

Questions? Yes, that is correct. So please open your phones, go to GitHub, star the project, and win a free t-shirt. Yep, enarx/enarx. Any other questions? Thank you very much. Oh, you have a question? They have not been put up on YouTube, but we can put them on YouTube, yes. Let me repeat the question for the stream: will the demo be available on YouTube? Yes, we can put the demo on YouTube. Nick, can you take a note to make sure it's on YouTube? Nick's going to take care of it. Thank you very much, everybody.