All right, welcome to Wasm in the Wild West, a practical application tale. I'd like to introduce you to your outlaws for the day. I'm from Colorado and we are definitely cowboy-ish here. I'm Matt Butcher. I lead an open source team at Microsoft. I have worked on Helm, Kubernetes, various PaaS systems, OpenStack, and this WebAssembly world is my latest foray into the wide world of cloud. You can reach me at any of these social media handles. And I'm Taylor Thomas. I also live in what used to be the Wild West, but that's not why we're theming it this way. I'm a Krustlet and Helm core maintainer emeritus. I've been doing Kubernetes things since around Kubernetes 1.2 and Docker 0.7, so I've been in the cloud native space for a while as well. I came to Rust by way of Go, which kind of makes sense given the Kubernetes background. I have a problem with consistency in my social media handles, given when they were created, and I'm always afraid to change them because we all know what that can do. So yeah, that's us. Let's get straight into things and, to whet your whistle, talk a little bit about the projects we're going to be referencing. We'll be focusing mostly on Wasm, but we wanted to give these as a backdrop so you know where we're coming from. Both Matt and I have worked together on all of these projects, along with the other members of our team. The first one is Krustlet. With Krustlet, we are running WebAssembly workloads in Kubernetes. It allows you to take any compatible Wasm binary and run it inside of Kubernetes just like you would a normal container. It's not running in a container; it's actually running as a straight Wasm binary. Then Bindle is an aggregate object storage engine. We've given another talk on this in case you want to check it out.
This allows us to store all of the Wasm things together. And last is Wagi, a name we kind of took from all of the dogs on our team. It is an easy way to write cloud-side Wasm as an HTTP handler. So those are the three projects we've come from with all of this knowledge, as we discuss how we've used Wasm and what it's like using it in what really is the Wild West right now. So let's explore how we got there. Our team meets, under normal circumstances, about once or twice a year. We're distributed most of the time, but we like to get together, brainstorm about cool, interesting ideas, and go through a number of exercises to see how we're doing. Well, in 2019 we all got together in Vancouver, or rather on Vancouver Island, and had our typical offsite meeting. But somewhere along the way we got sidetracked from our usual Kubernetes, container, virtual machine, Helm kinds of conversation and got going on WebAssembly. We were talking about the interesting aspects of WebAssembly's runtime and how, while it was intended to run in a browser, it actually had a lot of potential for running on the cloud side and creating the kind of isolated runtime that we need in multi-tenant cloud situations. As soon as that sentence made it out into the room, we just started talking and talking. It was after dinner; we must have spent hours sitting at a table in the restaurant talking through all these scenarios, with ideas flying fast and furiously. It was a lot of fun. We finished up our meeting, all went back home, and a month or so later we decided to give it a shot. The first idea was to try building a Kubernetes kubelet that could execute WebAssembly. We wrote it in Rust, so we named it Krustlet, because kubelet plus Rust seemed funny to us.
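To make the Wagi model from earlier a bit more concrete: Wagi follows CGI-like conventions, so a handler reads request metadata from environment variables and the body from stdin, then writes HTTP headers, a blank line, and the response body to stdout. Here's a minimal sketch of that shape (the route and response text are our own illustration, not from any real project):

```rust
// Minimal Wagi-style handler. Wagi follows CGI conventions: request metadata
// arrives in environment variables, the request body on stdin, and the module
// writes HTTP headers, a blank line, then the body to stdout.
fn response_body(path: &str) -> String {
    format!("Hello from Wasm! You requested {}", path)
}

fn main() {
    // PATH_INFO is a standard CGI variable; default to "/" when it is absent.
    let path = std::env::var("PATH_INFO").unwrap_or_else(|_| "/".to_string());

    println!("Content-Type: text/plain");
    println!(); // a blank line ends the headers
    println!("{}", response_body(&path));
}
```

Compiled with `cargo build --target wasm32-wasi`, the resulting module can be mapped to a route by the Wagi host; the same code also runs natively, which makes local testing easy.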
It turned out not to be too hard to write a small amount of code that could connect to a Kubernetes cluster and advertise itself as a Kubernetes node, but when it got a request, instead of pulling and executing containers, it was pulling and executing WebAssembly modules. So this was our first heist. We were pretty proud of ourselves. We got enough of this done to show off to a few people, and of course it started piquing more and more people's interest. And what happens is that your cute little proof of concept suddenly runs into actual production use cases, and people say, can it do this? Well, no. Can it do that? Well, no. So then we had to upgrade from a heist to an all-out great-train-robbery style thing, and we went from a proof of concept to an MVP, a minimum viable product. Any time you go from a nice little demo to something that can actually execute production workloads, you're going to encounter some growing pains. We'll talk a little more about that here, and also in our talk about Krustlet at the Rust Day event. But we got going on it. It was very exciting; the team got together and produced something really cool, and more and more ideas started to percolate up. We wrote Bindle, then we wrote Wagi, then an HTTP client library, then we started writing Yo-Wasm, and more and more projects that got us deeply entrenched in the WebAssembly ecosystem and deeply, deeply passionate about the tools and the potential we were seeing there. So that tells the story of why we care about what we're talking about here. We want to talk about what we've learned and what some of the gaps are, because what we've noticed is that many times at these Wasm conferences, everyone is looking into the clouds, which is a good thing. That's what we want here.
This is such an exciting area, but we sometimes forget about the people actually trying to implement this in the Wild West. So we wanted to talk about the things people on the ground are running into right now. First off, we have this saying: it's never too late to check your cinch. If you've never ridden horses before: there's a saddle that sits on top, which is obvious, but underneath there's a cinch that runs through. And you always want to check it, for two reasons: either you'll cinch it too tight and your horse will pass out on top of you, or you'll have it too loose and it'll slide to the side or right off and you'll fall. Either way you end up under a horse. So this is our "what did we learn in this whole experiment" section. First off is our horse and pony show; we're going to do a little bit of comparison here. One of the things that happens when people talk about Wasm is they hold up a finger and say, haven't we tried that before? And yes, everything kind of gets reinvented every ten years; we know that. But let's talk about some of the most common comparisons, starting with the Java VM. The Java VM is very tried and true, and infinitely tweakable and configurable; people have built careers on tuning the JVM. However, it is, to put it nicely, bulky. It takes a lot of resources and it's very large. It's also limited to a single language ecosystem. Granted, you have Java, you have Kotlin, you have all the different things that run on top of the JVM, which is great, but you're still limited to that ecosystem. And it's also a bit of a leaky abstraction, in that the Java libraries encompass everything from files to networking, with no real way to say: when the guest module asks for a file, the host module can fulfill it this way.
Of course, there have been many tools written to do that, but one of the things we liked about WebAssembly compared to Java was that WebAssembly provided some of that for free, right out of the box, without, as Taylor put it, the bulkiness of Java. Containers were another one we were fairly passionate about; we've done a lot of work in the container ecosystem. Containers are more lightweight than Java in a sense, particularly single-purpose, carefully crafted containers; you can get fairly small ones. And you can run things written in just about any programming language, provided it meets the operational constraints needed to execute in a container. But cross-platform is hard with containers. There have been many, many attempts to do it in different ways: running Linux containers on Windows, running Windows containers on Linux, multi-architecture containers, and so on. But a lot of those require extra build steps or other specialized tooling, and oftentimes they also add constraints on what the developer has to do to make things run in the expected ecosystem. And that brings us to the actual Wasm runtimes. In comparison to the Java VM and to containers, Wasm runtimes are very lightweight, both the runtimes and the binaries. We're going to talk a little more about that in a second, but the binaries can be very, very small, especially in comparison to a container, even a well-made one. I'm not talking about the nightmare containers that are multiple gigabytes; even well-made containers are still bigger. The other thing is that Wasm is sandboxed by default. For us, that's very important, because the guest is completely locked down unless it is granted specific things called capabilities that let it access files or the network or whatever it might be.
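The capability idea is easy to sketch in plain code: the host holds a list of resources it has explicitly granted, and the guest's requests are checked against it, loosely mirroring how WASI preopens directories. This is an illustration of the idea only (the `Host` type and paths are ours, not a real WASI API); the real capability plumbing is much richer:

```rust
// Sketch of capability-style gating: the guest can only reach resources the
// host explicitly granted. (Illustration only; not the actual WASI API.)
use std::collections::HashSet;

struct Host {
    // Directories the host has chosen to expose to the guest.
    granted_dirs: HashSet<String>,
}

impl Host {
    fn open_file(&self, path: &str) -> Result<String, String> {
        // Fulfill the guest's request only if it falls under a granted directory.
        if self.granted_dirs.iter().any(|d| path.starts_with(d.as_str())) {
            Ok(format!("opened {}", path))
        } else {
            Err(format!("capability denied for {}", path))
        }
    }
}

fn main() {
    let host = Host {
        granted_dirs: ["/sandbox".to_string()].into_iter().collect(),
    };
    println!("{:?}", host.open_file("/sandbox/data.txt")); // granted
    println!("{:?}", host.open_file("/etc/passwd")); // denied: never granted
}
```

The key property is that denial is the default: anything not granted up front simply does not exist from the guest's point of view.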
And Wasm runtimes are the closest thing we've seen in a while to truly cross-platform. I can build a binary on my Mac and have it run on my Windows machine, on my Raspberry Pi, on a Linux server in the cloud, with no changes; it's just the same binary. There's also the idea of being able to compile from any language without additional steps. With Docker, developers may have to learn the intricacies of all their dependencies, which in one sense is good, because in the old-fashioned way of doing things developers would just kind of shovel it over to the ops people. But as devs, that's not their specialty, and there are little nuances people don't know about unless they've done it for a long time or it is their specialty. You have to know about all of that with Dockerfiles and building a Docker image. With a Wasm runtime, as it stands, you can just compile to a Wasm binary. There are no special steps except specifying a target; the details obviously depend on the language, but you compile straight to it. One of the interesting things about WebAssembly is that it's already a W3C standard, and there are already many, many runtimes available that all consistently implement the same spec, with different pros and cons in how they execute it. The same WebAssembly module will run on, say, Wasm3, Wasmtime, Wasmer, and so on. Which brought us to our first choice, right, Taylor? Oh yes, the language. We all know as developers that languages generally cause flame wars. We're trying to avoid that here; we just want to explain why we decided the way we did. For us, it came down to a question of Rust or JavaScript. That doesn't mean other languages can't do it; we're actually going to talk about that too.
The idea was that JavaScript was the first language to do Wasm in a, quote-unquote, real way. It's obviously flexible and very popular; as the creator of JavaScript said, always bet on JavaScript ("and Wasm," he actually added). There are good reasons to use JavaScript, and it's very accessible to people, but even with Node, it's not as good for systems-level development as other languages. Rust has the disadvantage of being newer, with a still-evolving ecosystem, and therefore not as popular. And even if it were as popular, it's definitely not as accessible as JavaScript. Most of the tooling on the bleeding edge of the Wasm space, though, is written in Rust, and it's amazing for systems-level development, with really, really strict and good safety guarantees. So we chose Rust, because even though it was less popular and we'd have a little harder time getting people hooked into it, it was much better for building the things we wanted to build with Wasm. In the end, even though we chose Rust and use it in our major projects, we still do a lot with AssemblyScript and Wasm3 and other languages and runtimes; we just found that for our main projects we would stick to Rust, and that doesn't represent any hard feelings toward any of the others. One of the really interesting things we've been learning as we've been doing WebAssembly is the long path to optimization, and this is a virtuous long path. The compiler is mainly just going to transform the source code to WebAssembly's bytecode. When I was playing around with Swift, my first Swift application, when I compiled it to WebAssembly, was about 9.8 MB, which was considerably larger than most of the other languages I had compiled.
So I went about investigating the different ways we could optimize this, and there are many tools out there. Simply running the wasm-opt command line tool reduced my binary from 9.8 MB to 4.3 MB, by stripping out pieces of the WebAssembly code that weren't actually called and doing some other optimization passes to compress down symbols and things like that. So instantly I cut my binary size by just over half, just by running a tool. Then we started looking at various other techniques out there. There are things like preloading the binary into the runtime, which can drop startup times from maybe a hundred milliseconds to just nanoseconds, a tiny fraction of the time. Running in a runtime with a JIT optimizer means we can get even faster execution. And we're even experimenting now with ahead-of-time compilers: when we know, for example, that we're running in a PaaS kind of circumstance and the end user has uploaded a WebAssembly binary, we know the user's intent is to execute that binary on the platform they uploaded it to, so we can do ahead-of-time compilation and achieve further optimization and even faster startup. And there are some amazing tools being developed out there. One of our current favorites is called Wizer. It can run the initialization code in your WebAssembly module, basically get it all started up, and then re-freeze it as a WebAssembly module, so that the next time you run it, it's already initialized and you're diving straight in. There are tools like this that are just mind-boggling, but that's the kind of possibility WebAssembly presents, because of the way the format works and because of the flexibility of the runtimes and their multiple implementations.
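For reference, both of those tools are single-command invocations, assuming Binaryen's wasm-opt and the Wizer CLI are installed; the file names here are hypothetical:

```shell
# Shrink a module with Binaryen's wasm-opt: -Oz optimizes aggressively for
# size, stripping unreachable code and compressing names.
wasm-opt -Oz app.wasm -o app.opt.wasm

# Pre-run the module's initialization with Wizer and snapshot the result, so
# startup work happens once at build time instead of on every launch.
wizer app.wasm -o app.wizened.wasm
```

The two compose: a common flow is to wizen first, then size-optimize the snapshotted module.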
This leads into the idea that there are many different ways to execute a WebAssembly module. It's not merely a matter of how you optimize it before starting it up; WebAssembly is structured in such a way that you can run it in a flat-out interpreter, which executes as it reads through the WebAssembly file. This is excellent if you're working in an embedded space and only want to allocate very small chunks of memory as you execute. Wasm3, which is a great interpreter if you're interested in running on Arduinos or other limited-resource devices, uses this interpretation model. But when we're optimizing for speed, we often want to go one step further and look at a JIT-style compiler, which compiles the bytecode into an intermediate representation as it reads it in. It might take more memory and consume more CPU and other resources as it's ramping up, but it tends to execute much faster. And then, as I mentioned on the last slide, we're now looking at ahead-of-time compilers. Wasmtime, I think as of 0.26, includes an ahead-of-time compiler, so you can pre-compile some of this, which speeds things up even further, at the cost of needing storage and of binding the result to one particular runtime, or one particular family of runtimes. But again, if you're willing to make that trade-off for speed, ahead-of-time compilation might be the fastest option. The thing we really appreciate about all this is that the developer never has to be concerned with whether it's running in an interpreter, a JIT environment, an ahead-of-time-compiled environment, or whatnot. The developer simply writes their code and compiles it; something else can optimize, something else can tweak and tune, something else can run it, and the developer can be blissfully unaware of all that.
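To picture the in-place interpretation model, an interpreter is essentially a decode-and-execute loop over the bytecode, touching only a small operand stack as it goes; a JIT or AOT compiler instead translates that same bytecode to machine code before running it. Here's a toy stack machine in the interpreter shape (our own illustration; not real Wasm opcodes or semantics):

```rust
// Toy stack-machine interpreter: decode and execute each instruction as it is
// read, the same basic shape as an in-place Wasm interpreter like Wasm3.
// (Illustration only; real Wasm has a far richer instruction set.)
#[derive(Clone, Copy)]
enum Op {
    Push(i64),
    Add,
    Mul,
}

fn interpret(code: &[Op]) -> Option<i64> {
    let mut stack: Vec<i64> = Vec::new();
    for op in code {
        match *op {
            Op::Push(n) => stack.push(n),
            Op::Add => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a + b);
            }
            Op::Mul => {
                let (b, a) = (stack.pop()?, stack.pop()?);
                stack.push(a * b);
            }
        }
    }
    stack.pop()
}

fn main() {
    // Evaluates (2 + 3) * 4.
    let program = [Op::Push(2), Op::Push(3), Op::Add, Op::Push(4), Op::Mul];
    println!("{:?}", interpret(&program)); // prints Some(20)
}
```

The appeal for constrained devices is visible even in the toy: memory use is bounded by the operand stack, and no machine code is ever generated.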
We like that, because it tends to take operational decisions off the developer's plate while giving the operations staff all kinds of options for how they'd like to execute something. The last thing in our learnings section is the WASI spec. This is really important to us: we consider WASI to be the future of Wasm. That isn't meant as a technical declaration so much as a community declaration; we'll talk about that a little. It gives us a common interface that any Wasm project can then build on: POSIX-like or libc-like APIs, common abstractions for passing data. Now, it is still very, very much in flux, but progress is being made. We're adding streams, and we're at least starting experiments with something called nanoprocesses. All these things are very important, but for us the most important thing is being able to do this in a way that allows people to extend and build on top of a great foundation, which leads into some of the things we're going to talk about next. Our next topic we're calling "keep your butt in the saddle." My wife does horsemanship, and that's a phrase you hear all the time. It sounds obvious, but sometimes you forget to keep yourself in the saddle, and if you get out of the saddle or out of place, your horse will ride poorly and you can fall off. So this is our discussion of how we can avoid the gaps. To kick it off, I want to talk about something very close to that last point about the WASI spec: community fracturing. This is a potential gap that is very serious for us. In general, there's an unevenness across all the different Wasm implementations. There are various runtimes and protocols: we have Wasmtime, Wasmer, waPC, and wasmCloud. There are WITX bindings.
There are other projects like Suborbital, and the thing is, each project has its own buy-in: you have to use their custom libraries and their custom tooling. And really, there's no major community or foundation to use as a gathering place, or a watering hole, shall we say, given the theme of this talk; there's no community place for us to just talk these things over. Where we want to be, and this is meant to show the gap, is having common specifications with various implementations that people can build around, less lock-in to custom libraries, and a better community, collaboration, meeting, and foundation space. Our opinion in all of this is that we're better together. We know there are going to be competitors working together to build this, but this foundation doesn't have to be something we compete over, or a place where we create competing standards. And the thing we worry about, as we've observed things, is that the community is trying to do everything right now and fracturing, which will cause lots of problems in the future. We would prefer to be where that right column describes. In a moment we'll come back and talk a little about where we see WASI's role in this ecosystem, because we do believe the WASI specification may be the key to tying this all together. But before we go there, we want to talk about another thing that is very near and dear to our hearts: the developer experience. We feel that at the end of the day, developers are going to use the tools they feel most comfortable with. And the tools we use as developers today could use improvement; they can always use improvement, right?
So we want to be part of the story of improving the average experience of creating code that can run in this kind of cloud native ecosystem. Where we are today, we see a lot of work to bootstrap environments and to get objects imported into WebAssembly runtimes. We see a lot of work with custom bindings and annotations, each of which matches its own specific little platform, like we talked about on the last slide. A lot of bespoke toolchains, where you need these four things and you need to execute them in this order, but if you're writing for that platform, you need these six things and you need to execute them like this. A lot of stuff that just becomes developer cognitive overhead; you've got to do all of these different steps before you can run your application. My boss, Brendan Burns, was recently saying: I just wanted to create my first WebAssembly module, and I had to go through so many steps just to create Hello World that it was really frustrating, even though it was so exciting to then have this binary I could use in all these different environments. That's the kind of experience we want to alleviate. And what we're seeing is the very beginning of an emergence of sets of tools that will be able to do this. We've worked on one called Yo-Wasm, a Yeoman generator for WebAssembly, which walks you through the process: yes, I want a C project that compiles to WebAssembly, I want GitHub Actions, and I want to push my resulting artifact here. Then it scaffolds everything out and sets up VS Code so that you have a nice, pretty environment.
We're very excited about the wash CLI for wasmCloud, because that ecosystem has gone from nine steps between development and running a test down to a nice little CLI experience, where you can run the CLI interactively and it helps you bootstrap everything, get it running, and instrument your running process. So we are seeing things start to move in the right direction, but we want to go even further. We want the kind of one-step experience where you say, all right, build and deploy this thing, and it builds it, signs it, pushes it, and the next thing you know you're checking out the results of your work to see if it all looks good. We do think there's room in here for code generation, particularly as we start seeing multiple-language support, where you might want to very quickly say: I want the same functionality in C that I had over here in Rust, or over there in AssemblyScript. And finally, we want to be able to compile a WebAssembly binary as easily as a normal binary. Some environments, like Rust, make this trivially easy, while in others the compilation process still needs a little more tweaking before it feels as native and quick as it does in Rust or maybe AssemblyScript. Now, as Matt mentioned, we're going to talk a little more about WASI. So where are we with WASI? In case we hadn't spelled it out yet, WASI is the WebAssembly System Interface; it's the main spec we were mentioning. Where we are right now: networking is a mishmash of stopgap solutions, and part of that mishmash is something we wrote, our WASI experimental HTTP project, which gets us some sort of HTTP support. There's a current, ongoing streams-versus-POSIX debate, which is the main topic of discussion inside the WASI subgroup meetings in the W3C WebAssembly group.
Also, for concurrent tasks and nanoprocesses, there are some solid design ideas, but they're still just ideas; there are only a few very, very experimental implementations. Where we want to be, and this is "in a year," because ideally we'd want everything in place in a wonderful 1.0 spec, is a working streams implementation that allows for flexible extensions on top, for I/O, networking, all those things, and an initial nanoprocesses implementation. That's where we'd like to be in about a year. The streams and I/O work happening under that streams idea is what really opens us up to writing guest modules that rely on resources outside of themselves, with very convenient APIs. Then we can very quickly write things like key-value storage drivers, database drivers, file system implementations, and so on. If we can get there in a year, then we feel we've really unblocked the key bit of potential currently locked up in the WebAssembly ecosystem, which accounts for a lot of what we're seeing as fracturing. We're optimistic that it could re-level the playing field and get everybody excited about building together, instead of each separate project having to build its own thing. We'll whisk through these last two points, since they're relatively quick, so we can finish up with why we're excited about all this. One of them is guest languages. There are really only about five, and of those, if we're being honest, the ones people use in production come down to three: Rust, C, and Swift. The others can build Wasm for the Web, but they don't really build Wasm for WASI-compatible things.
Where we'd like to be in a year, again, is to have first-class, native-level support in C#, Python, Go, and Java, the big names in enterprise development, so people have an easy target. And then there's the one Perl person, but you know. Oh yeah, everyone wants Perl. We love Perl. The last thing we have is storing and sharing. Right now, to store a WebAssembly binary we use OCI; there's a wasm-to-oci tool that lets you store a module inside a container registry. We have our OCI distribution crate that we've written in Rust. There are other object stores people have used, and other companies, like Glue, have created their own custom registries. But really, where we want to be is better OCI support. We know we can never escape needing OCI support in specific places in the cloud native ecosystem, but we'd also like something like Bindle, our aggregate object storage engine. Obviously we're biased toward it, but it's along the lines of how we'd like this to work in the future; you can check it out by clicking the link there. So those are the gaps we've seen, but we really want to close with our excitement about all this. Yeah, the things that got us excited when we first looked at this in Vancouver a couple of years ago are still the things we're really excited about today. We see here the potential for a secure, lightweight, portable language VM that ultimately, with just a little more work, will deliver a really, really solid developer story.
We know we all need to work together, and it's exciting to see the community starting to form ad hoc, with common patterns emerging. Largely, the people in the WebAssembly ecosystem all know each other; we've all done this before, and we're all going: okay, we know that if we start working together and sharing information, it's going to make everything easier for us and for our users, and ultimately that translates to an ecosystem that's inviting and approachable for developers new to this space. And there's a gap here that we can fill, because this nice portability with a very small footprint just doesn't exist yet, and that's such a big hole in the cloud native ecosystem; filling it, combined with that developer experience, would make a lot of people very, very happy. This is also our call to action: help us gather the wagons. This conference is a good first step; it lets us talk about these things with people who want the features we've discussed. Our invitation is to join this community and help out, so that we can get Wasm to the point where it's accessible for everyone and we can get these features. Yeah, and there are places like the WebAssembly Discord server, which is a great place to get connected with people, as are the issue queues on the various projects. And of course, you're more than welcome to reach out to Taylor and me and the other people on our team, because we are passionate about getting this community rallied, the wagons circled, and doing some really interesting, cool things over the next few years as we see this ecosystem go from some fancy ideas to something we think is going to be a lovely experience for developers and a powerful experience for operators.
So thank you very much for coming today and please again, reach out to us if you have questions, comments or just want to get connected on this stuff.