I'm Surma. I am a developer advocate for the open web platform and work with the Chrome team for Google in London. And I have the great pleasure today to talk about one of my newly found passions, which is WebAssembly. You have probably heard of it a little bit, and you can contact me on Twitter if you have any questions in the future. And later on, my colleague Deepti from the Wasm engineering team is going to talk a bit about the future of WebAssembly. Before we start, though, I wanted to bring us all onto the same page, because WebAssembly is often associated very tightly with C++. So much so that a lot of people think it is all about C++, when in fact WebAssembly is so much more than that. Many of the demos that you can find online are about C++ and Emscripten. And that makes sense, because Emscripten is an absolutely amazing tool. But it is very important for web developers to realize that it's not just C++, and that WebAssembly by itself is actually a really useful tool to have in your back pocket. And that's what I want to talk about in this talk. I want to show some other languages that support WebAssembly, and how you can use WebAssembly maybe without learning a new language. And then, as I said, Deepti is going to talk a bit about what the future of WebAssembly might hold. So to just make sure everybody knows what we're here for: this is WebAssembly.org, where we explain what WebAssembly is. It is a stack-based virtual machine. And if you don't know what a stack-based virtual machine is, that is absolutely OK. What is important, though, is that you realize it is a virtual machine, meaning it's a processor that doesn't really exist, but something that has been designed to easily compile to a lot of actual, real architectures. And that is called portability. So it is a virtual machine designed to prioritize portability.
So when you write some code in whatever language and you compile it to WebAssembly, that code will get compiled to the instruction set of that virtual machine. And then those instructions get stored in a binary format, usually in a .wasm file. And because that virtual machine is designed to easily compile to real processors, this file can now be ingested by a runtime, which in our context here is most likely the browser. The browser can turn that .wasm file into actual machine code for the actual machine the browser is running on, and then execute that code. And from the very start, WebAssembly was designed to make that process secure. So yes, you are running code on your bare metal, but it's not an insecure thing. We already talked about WebAssembly at the last I/O, actually quite a bit. It's a technology that's growing super quickly and actually also maturing at a very impressive pace. And we talked about how some big companies are using WebAssembly to bring their existing products, which they probably wrote in C++, for example, to the web. So for example, there was AutoCAD, who had been working on AutoCAD for years, and it's a well-known product. But now they put in the effort of compiling it to WebAssembly, and suddenly it was running in the browser, which is kind of mind-blowing when you think about it. Another example would be the Unity game engine or the Unreal game engine, which now support WebAssembly. These game engines often already have a kind of abstraction built in, because you build your game and then you compile it for a PlayStation or an Xbox or other systems. But now WebAssembly is just another target, and what is impressive is that the browser and WebAssembly are able to deliver the performance necessary to run these kinds of games. And I find that amazing. And these things are going to continue to happen.
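To make that loading step concrete, here is a minimal sketch of the runtime side in JavaScript. The bytes below are a hand-assembled .wasm module that exports a single add function; in a real app you would load a compiled .wasm file, for example with WebAssembly.instantiateStreaming, instead of inlining bytes like this.

```javascript
// A minimal, hand-assembled WebAssembly module:
// (func (export "add") (param i32 i32) (result i32))
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

// The runtime compiles the portable bytecode into real machine code...
const module = new WebAssembly.Module(wasmBytes);
// ...and instantiates it so we can call its exports from JavaScript.
const instance = new WebAssembly.Instance(module, {});
console.log(instance.exports.add(2, 3)); // 5
```

In the browser, `WebAssembly.instantiateStreaming(fetch('module.wasm'))` does the same job while the file is still downloading.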
So you have already seen in the web keynote that my colleague Paul Lewis built a Perception Toolkit, which helps you build kind of immersive experiences, and they wanted to build links into the real world in the form of QR codes and image detection. Now, QR codes can already be detected by browsers using the Shape Detection API, but not every browser has that implemented. So what they're doing is they're using the Shape Detection API, but if it's not available, they have cross-compiled ZXing ("Zebra Crossing"), a QR code library, to WebAssembly, and they load it on demand to fill the gap if they find it. And image detection isn't on the web at all, so they built that themselves and used WebAssembly to give a new capability to the browser. The UI toolkit Qt announced that they're also supporting WebAssembly now. So that kind of means that you can now actually take an old Qt app and compile it to WebAssembly, and then you have this weird window-in-a-browser-tab experience, which is not ideal, but it just shows that it works. But Qt is a very powerful and generic UI library, so there are loads of demos on the website at the bottom where they actually built a kind of good and native-looking UI using Qt and WebAssembly. So if you don't know that much about WebAssembly, you might be asking, how did they do that? And the answer in these cases is Emscripten. Emscripten's goal is to be a drop-in replacement for your C or C++ compiler, and instead of compiling to your machine's code, it spits out WebAssembly. And they really try to be a drop-in replacement: whatever code you wrote to run on a system should just magically run on the web. And that's a very important distinction to make, because Emscripten does a lot of heavy lifting behind the scenes to make that happen. And I think the fact that it works so well in these scenarios is why it is so tightly associated with WebAssembly currently, because originally Emscripten was an asm.js compiler.
asm.js was an idea by Mozilla, where they wrote a compiler that takes C code and turns it into JavaScript. So what you see on the right is asm.js. It's just normal JavaScript, and every browser that can execute JavaScript can execute asm.js. But the plan was to give some browsers an understanding of asm.js so that they have a dedicated fast path to make these kinds of programs run faster. So you have a chunk of memory, and you have some variables, and suddenly your C++ code can run in your JavaScript engine. But C and C++ often use other APIs, like file opening and maybe OpenGL. So Emscripten made that work by using WebGL to pretend to be OpenGL and by emulating a file system so you can pretend to be working on real files. So they're basically emulating an entire POSIX operating system to make code run on the web that was never written for the web. They hacked it to make it happen. So when WebAssembly came along, Emscripten just added a new output format, but kept all the work they had put into that emulation. So Emscripten was basically able to take all the experience they had with making POSIX code work on the web and apply it to WebAssembly. That's how they were able to deliver, extremely fast, impressive and actually mature demos and tools around WebAssembly. And they deserve a lot of credit for taking everybody along with them and leveling the playing field for all the other languages that have come along since. And I think that's why WebAssembly is so tightly associated with C++: because of that quick maturity of Emscripten. But what about web developers? How about you who might be working at a web agency, or maybe even as a freelance developer? How can WebAssembly be useful to you? Do you have to learn C++? Spoiler alert: no. When you are a web developer and you think, oh, I should learn C++ so I can use WebAssembly, many people end up like this. Because what even is C++ when you know JavaScript? And fun fact, it works the other way around as well.
When I see a C++ developer see or write JavaScript for the first time, they make the exact same face. And I'm not saying that because one language is better than the other, but just because they require such drastically different mindsets to write. I have written both professionally, and whenever I switch, I twitch a little bit. It takes some time. It's just very different to think about. What I'm saying is, there was so far no incentive for a web developer to learn C++, and so the number of people who are comfortable in both worlds is fairly small. And as a result, WebAssembly seemed like a very niche technology when, in fact, it's actually a really useful tool to have in your back pocket. And so what I want to talk about are the two main use cases that I usually see when I think about WebAssembly. On the one hand, I want to talk about the surgical replacement of tiny modules in your JavaScript app, the hot paths, the bottlenecks, with WebAssembly. And I want to talk about the myth that WebAssembly is faster than JavaScript. But first, I want to talk about the other facet: ecosystems. It might seem a bit weird, because nobody will probably disagree when I say that the JavaScript ecosystem is pretty huge. I mean, just look at npm. It's massive. But it's just a fact that JavaScript is not the first choice for every topic, while other languages might be. So sometimes you're faced with a problem, and you're looking for libraries to solve that problem, and you find them in C or in Rust, but not in JavaScript. So you can either sit down and write your own JavaScript port, or, your new option, you can tap into another language's ecosystem using WebAssembly. And that's exactly what we did with Squoosh.
Squoosh is an image compression app that runs completely in the browser and offline, with no server side. You can drop in images, compress them with different codecs, and then visually inspect how these different codecs affect the visual quality of your image. Now, the browser already offers that, as you may know, because with Canvas you can decide which image format you want to encode an image to, and you even get control over the quality. But it turns out that the browser's codecs are optimized for compression speed rather than compression quality or visual quality. So that was a bit lackluster, to be honest. And also, you're kind of bound to the codecs that the browser supports. Until recently, only Chrome could encode to WebP and none of the other browsers could. So that wasn't enough for us. And so we googled a bit, and we found some JPEG encoders written in JavaScript, but they were kind of weird. And we didn't find a single encoder in JavaScript for WebP, so we thought we had to look at something else. And we found loads of encoders in C and C++. So: WebAssembly. What we did is we compiled, for example, the library called MozJPEG to WebAssembly, loaded it in the browser, and replaced the browser's JPEG encoder with our own. And that actually got us much smaller images at the same visual quality setting, which is kind of cool. But not only that, it also allowed us to expose loads of expert options that the library had, but that the browser obviously didn't expose. Things like configuring the chroma subsampling or different quantization algorithms are not only valuable for squeezing the last couple of bytes out of an image, but also just as a learning tool, to see how these options actually affect your image visually and affect the file size. The point here really is that we took an old piece of code. MozJPEG goes back to, like, 1991.
It was definitely not written with the web in mind, but we're using it on the web anyway, and using it to improve the web platform. And we used Emscripten. So with Emscripten, to show you how that works, I usually find myself in a two-step process. The first step is compiling the library into something that you can link against later. Image codecs often make use of threads and SIMD, because image compression is a highly parallelizable task. But neither JavaScript nor WebAssembly has support for threads or SIMD just yet. Deepti will talk a bit later about what is coming on that front. But for now, we disable threads and SIMD to make sure we don't run into any problems. In the second step, you have to write a piece of what I call bridge code. This is the function that I want to call from JavaScript later on. So it takes the image, it takes the image dimensions, and then it uses MozJPEG to compress it and returns a typed array buffer, which contains the JPEG image. And once you have written this bridge code, you call emcc, the Emscripten compiler, with your C++ file and the library file, and link it all together. And provided I didn't make any mistakes, we get this output: a JavaScript file that sets everything up for us, and the WebAssembly file. Now, here is something to keep in mind. Because Emscripten is a drop-in replacement and does all this emulation and heavy lifting for you, it is always a good idea to keep an eye on your file sizes, because things like file system emulation and API tunneling are code that needs to get shipped. So if you use a lot of C APIs, these files can become quite big, especially the JavaScript file. We have been working with the Emscripten team quite intensely to help them keep it at a minimum, but there's only so much you can do if you want to be a drop-in replacement. So keep an eye on your file sizes.
Another example of WebAssembly in Squoosh is image scaling, because it turns out that there are many ways to make an image bigger or smaller, with many different visual effects and visual outputs. With a browser, if you just use the browser to scale an image, you get what you get. It will probably be fast. It will probably look good. But sometimes having control over the different variants of scaling an image can really make a big visual impact. So in this video, you can see me switching back and forth between the Lanczos3 algorithm and whatever the browser does. And you can see that with Lanczos3, I actually have a linear RGB color space conversion; I get a much more realistic perception of brightness in this picture. So in this case, it's actually a really valuable piece of code to have running. Now, these image scaling algorithms that we are using in Squoosh we actually took from the Rust ecosystem. Mozilla has been heavily investing in the Rust ecosystem, and their team writes WebAssembly tools for Rust. But the community also abstracts those away into generic tools. One of these tools is wasm-pack, which really takes you by the hand and turns your Rust code into WebAssembly modules that are neatly wrapped as JavaScript modules and really small. And I think it's really fun to play around with. So with Rust, it's the same kind of principle. We have a library, and we want to write our little bridge code. In this case, the resize function is what I want to call from JavaScript. It takes the image, my input size, and the output size, and then I just return the resized image afterwards. And then you use wasm-pack to turn all of that into a WebAssembly module that you can use. Now, the size comparison is not quite fair, because it's a different library and it's a smaller library, so don't compare it byte by byte.
But on average, Rust tends to generate much smaller glue code, which kind of makes sense, because Rust doesn't do any of the POSIX file system emulation. You can't use a file function in Rust and expect it to work, because it doesn't do file system emulation. There are some crates that you can pull in if you want to have that, but it's much more of an opt-in approach. So the bottom line is that in Squoosh we are using at least four different libraries from two different languages that have nothing to do with the web, but we still proceeded to use them on the web. And that's really what I want you to take home from this entire thing: if you find a gap in the web platform that has been filled many times in another language, but not on the web or not in JavaScript, WebAssembly might be your tool. But now let's talk about the surgical replacement of hot paths in your JavaScript, and the myth that WebAssembly is faster than JavaScript. This is really important to me, and that's why I came up with this really far-fetched visual metaphor. Both JavaScript and WebAssembly have the same peak performance. They are equally fast. But it is much easier to stay on the fast path with WebAssembly than it is with JavaScript. Or, the other way around: it is way too easy to unknowingly and unintentionally end up on a slow path in your JavaScript engine, in a way that doesn't happen in the WebAssembly engine. Now, that being said, WebAssembly is looking into shipping threads and SIMD, things that JavaScript will never get access to. So once those ship, WebAssembly will have a chance to actually outperform JavaScript quite a bit. But at the current state of things, the peak performance is the same. To understand how this whole falling off the fast path happens, let's talk a bit about V8, Chrome's JavaScript and WebAssembly engine. JavaScript files and WebAssembly files have two different entry points to the engine. JavaScript files get passed to Ignition, which is V8's interpreter.
So it reads the JavaScript file as text, interprets it, and runs it. While it's running it, it collects analytics data about how the code is behaving. And that is then used by TurboFan, the optimizing compiler, to generate machine code. WebAssembly, on the other hand, gets passed to Liftoff, the streaming WebAssembly compiler. And once that compiler is done, TurboFan kicks in and generates optimized code. Now, there are some differences here. The first obvious difference is that the first stage has a different name and a different logo. But there's also a conceptual difference: Ignition is an interpreter, and Liftoff is a compiler that generates machine code. It would be an overgeneralization to say that machine code is always faster than interpreted code, but on average it's probably true. So here's already the first difference in terms of speed perception. But more important is this difference: for JavaScript, the optimizing compiler only kicks in eventually. The code has to run and be observed before it can be optimized, because certain assumptions are made from those observations. Machine code is generated, and then the machine code is running. But once those assumptions don't hold anymore, the engine has to fall back to the interpreter, because it can't guarantee that the machine code does the right thing anymore. And that's called a deopt, a deoptimization. With WebAssembly, TurboFan always kicks in right after the Liftoff compiler, and you always stay on the TurboFan output. You always stay on the fast path, and you can never get deopted. And I think that's where the misconception comes from that WebAssembly is faster: it is just much easier to get deopted in JavaScript, and you cannot get deopted in WebAssembly. And Nick Fitzgerald from the Rust WebAssembly team actually did a really nice benchmark, where he wrote a benchmark in both JavaScript and WebAssembly. JavaScript is red, WebAssembly is blue. And he ran it in different browsers.
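To illustrate one common way of falling off the fast path in JavaScript, here is a sketch (illustrative of typical engine behavior, not V8's exact heuristics): a hot function that always sees objects of the same shape can stay optimized, while feeding it differently shaped objects can invalidate the compiler's assumptions.

```javascript
// A hot function: the engine optimizes it based on the object shapes it observes.
function norm(p) {
  return p.x * p.x + p.y * p.y;
}

// Monomorphic call site: every object has the same shape {x, y},
// so the generated machine code's assumptions keep holding.
let sum = 0;
for (let i = 0; i < 100000; i++) {
  sum += norm({ x: i, y: i });
}

// These calls still return correct results, but the objects have different
// shapes (different property order, an extra property), which can break the
// optimizer's assumptions and cause a deopt back to slower code.
norm({ y: 1, x: 2 });
norm({ x: 1, y: 2, z: 3 });

console.log(norm({ x: 3, y: 4 })); // 25
```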
And what you can see here is: yes, OK, WebAssembly is faster, but the main takeaway really is that JavaScript has this spread. It is kind of unpredictable in how long it takes, while WebAssembly is spot on, always the same time, even across browsers. And I think that is really the key line I would like you to take home with you: WebAssembly delivers more predictable performance than JavaScript. And that's actually a story I can tell from Squoosh as well. We wanted to rotate an image. So we thought, OK, let's use Canvas, but we couldn't, because Canvas is on the main thread. OffscreenCanvas was barely in Chrome at that point, so we actually ended up writing a piece of JavaScript by hand to just reorder the pixels to rotate the image. And it worked really well. It was very fast. But it turns out, the more we tested in other browsers, the weirder it became. So in this test case, we are rotating a 4K-by-4K image. And this is not about comparing browsers; this is about comparing JavaScript. The fastest browser took 400 milliseconds. The slowest browser took eight seconds, even off the main thread. That's way too long for a user pressing a button to rotate an image. And so what you can see here is that we clearly stayed on the fast path in one browser, but we fell off the fast path in another. And that wasn't necessarily a browser that is usually slow; some browsers just optimize differently. And so we wrote our rotate code in WebAssembly, or rather in a couple of languages that compile to WebAssembly, to compare how that performs. And what you can see here is that pretty much all the WebAssembly languages land somewhere around the 500-millisecond mark. I would call that predictable. I mean, there's still a bit of variance, but nothing compared to the variance of JavaScript. And that's a logarithmic scale.
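The hand-written pixel-reordering approach can be sketched like this (a simplified 90-degree clockwise rotation over RGBA data; Squoosh's actual implementation differs):

```javascript
// Rotate RGBA pixel data 90 degrees clockwise by reordering pixels.
// `src` is a Uint8ClampedArray of length width * height * 4 (as from ImageData).
function rotate90(src, width, height) {
  const dst = new Uint8ClampedArray(src.length);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // Pixel (x, y) moves to (height - 1 - y, x) in the rotated image,
      // whose dimensions are height x width.
      const srcIdx = (y * width + x) * 4;
      const dstIdx = (x * height + (height - 1 - y)) * 4;
      dst[dstIdx] = src[srcIdx];         // R
      dst[dstIdx + 1] = src[srcIdx + 1]; // G
      dst[dstIdx + 2] = src[srcIdx + 2]; // B
      dst[dstIdx + 3] = src[srcIdx + 3]; // A
    }
  }
  return dst;
}
```

Exactly this kind of tight, typed-array loop is what one engine kept on its fast path and another did not.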
And also what you can see here, as I just noticed, is that the head-to-head performance, the peak performance, of WebAssembly and JavaScript is pretty much the same. Now, if you look at the graph, you might be wondering what AssemblyScript is. And if you haven't heard of it, I'm really excited about this, because AssemblyScript really brings me back to the title of my talk, which is WebAssembly for web developers. AssemblyScript is a TypeScript-to-WebAssembly compiler. Now, that might mislead you, because you can't just throw your existing TypeScript at this compiler and get WebAssembly out of it. In WebAssembly, you don't have the DOM APIs, so you can't just use the same code. What they're using is the TypeScript syntax with a different type library. So that means you don't have to learn a new language to write WebAssembly, which I think is kind of amazing. So that's what it looks like. It's like TypeScript, just with a couple of minute differences, in that something like i32 is not a type that JavaScript has, but it is a type that WebAssembly has. And then there are these built-in functions, like load and store, that put values into memory or read them from memory. And the AssemblyScript compiler turns those into WebAssembly modules. So you are now able to write WebAssembly without learning a new language and harness all these benefits that WebAssembly might offer you. And I think that's kind of powerful. Something to keep in mind is that, unlike TypeScript, WebAssembly doesn't have a garbage collector, at least not yet. Deepti will talk about this a bit more later. So at least for now, you have to do memory management yourself. AssemblyScript offers a couple of memory management modules that you can just pull in, and then you have to do C-style allocations. It's something to get used to, but it's very much usable right now. And once WebAssembly does get garbage collection, it could get even better.
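A tiny AssemblyScript sketch of the idea (it needs the AssemblyScript compiler, asc, to build; the function here is illustrative, not code from Squoosh):

```typescript
// AssemblyScript: TypeScript syntax, but WebAssembly types.
// i32 and usize are WebAssembly value types, not JavaScript ones.
export function sumBytes(ptr: usize, len: i32): i32 {
  let sum: i32 = 0;
  for (let i: i32 = 0; i < len; i++) {
    // load<u8> reads one byte directly from the module's linear memory.
    sum += load<u8>(ptr + i);
  }
  return sum;
}
```

Compiled with something like `asc module.ts -o module.wasm`, this becomes a WebAssembly module you can instantiate from JavaScript like any other.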
One thing, though, full disclosure: AssemblyScript is a fairly young and small project. It has a group of extremely passionate people behind it, and it has a couple of sponsors, but nothing compared to Mozilla behind Rust or Emscripten. All that being said, it is absolutely usable and very enjoyable. My colleague Aaron Turner wrote an entire emulator in AssemblyScript. So if you're interested in that, you should look him up on GitHub and take a look at the code. Now, one thing that I want to make sure I say out loud is: at the current state of affairs, putting everything in Wasm is not a good idea. JavaScript and WebAssembly are not opponents. There is synergy between them. Use them together. One doesn't replace the other. Debugging is going to be harder, and code splitting is much harder with WebAssembly currently than it is with JavaScript. And you have to call back and forth. It's just not going to be a great experience. I had some people tweet at me that they want to write their web components in C++. I don't know why they would want to do that, but apparently they do, and I wouldn't recommend it. What I would like to say is: use WebAssembly for the right things. Do performance audits. Do measurements. Where are your bottlenecks? And see if WebAssembly can help you. Did you find a gap in the platform that you can fill from a different language? Again, WebAssembly is your tool. But now, to talk a bit about the future of WebAssembly and its upcoming features, I would like to welcome Deepti to the stage. Thanks, Surma. Hi, everyone. I'm Deepti. I'm a software engineer on the Chrome team, and I work on standardizing WebAssembly features as well as implementing them in V8. So most of what you've seen in this presentation so far has landed and shipped in all major browsers, which is the MVP, or minimum viable product, of WebAssembly.
And we've been working hard on adding capabilities to make sure that we get closer and closer to native performance. The MVP itself unlocks a whole set of new applications on the web, but it is not the end goal, and there are a lot of exciting new features that the community group and the implementers are working to enable. The first of these is the WebAssembly threads proposal. The threading proposal introduces primitives for parallel computation. Concretely, that means it introduces the concept of a linear memory shared between threads, and semantics for atomic instructions. Now, why is this necessary? There are many existing libraries written in C or C++ that use pthreads, and those can be compiled to Wasm and run in multi-threaded mode, allowing different threads to work on the same data in parallel. Aside from just enabling new capabilities for applications that benefit from multi-threaded execution, you would see performance scale with the number of threads. The threading proposal builds on primitives that already exist in the web platform. The web has support for multi-threaded execution using web workers, and that's exactly what's used to introduce multi-threaded execution to WebAssembly. The downside of web workers is that they don't share mutable data between them. Instead, they rely on message passing for communication, through postMessage. So each of these WebAssembly threads runs in a web worker, but their shared WebAssembly memory allows them to work on the same data, making them run at close to native speed. The shared linear memory here is built on the JavaScript SharedArrayBuffer. So if you look at this diagram, each of these threads is running in a web worker and can have a WebAssembly instance that's instantiated with the same linear memory. This means that the instances operate on the shared memory but have their own separate execution stacks.
So the API to create a WebAssembly memory remains almost the same. If you look at the first line there, you create a WebAssembly memory with a shared flag and a mandatory maximum. This creates a SharedArrayBuffer underneath with the initial size we've specified, which is one page of memory. Now, with all of these threads operating on the same memory, how do we ensure that the data stays consistent? Atomic modifications allow us to perform some level of synchronization. When a thread performs an atomic operation, the other threads see it as happening instantaneously. But full synchronization often requires actually blocking a thread until another is finished executing. So the proposal has an example of a mutex implementation, and I pulled out how you would use it from a JavaScript host. If you look at it closely, there are subtle differences between what you would do in a worker versus what you would do on the main thread. On the main thread, the tryLockMutex method is called, which tries to lock a mutex at the given address. It returns one if the mutex is successfully locked, or zero otherwise. And on the worker thread, it will lock a mutex at the given address, retrying until it's successful. The reason it is this way is that on the web, you can't actually block the main thread, and this is something that's useful to keep in mind when using the threading primitives. So what is the current status of this proposal? The proposal itself is fairly stable, but there's ongoing work to formalize the memory model that's used by the SharedArrayBuffer. The thing I'm really excited about here is that this has shipped in Chrome 74 and is on by default. Surma mentioned Qt earlier in the presentation, and Qt uses full thread support. So this is something that you can use in your applications today.
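A sketch of what this looks like from JavaScript (the function names mirror the mutex example in the threads proposal, but this simplified version is illustrative, not the proposal's exact code):

```javascript
// A shared WebAssembly memory: one page (64 KiB), backed by a SharedArrayBuffer.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 1, shared: true });
const i32 = new Int32Array(memory.buffer);

// Non-blocking lock, safe on the main thread: atomically swap 0 -> 1.
// Returns 1 if the mutex at index `addr` was acquired, 0 otherwise.
function tryLockMutex(addr) {
  return Atomics.compareExchange(i32, addr, 0, 1) === 0 ? 1 : 0;
}

// Blocking lock for worker threads: retry (and sleep) until acquired.
// Atomics.wait throws on the main thread, which is not allowed to block.
function lockMutex(addr) {
  while (Atomics.compareExchange(i32, addr, 0, 1) !== 0) {
    Atomics.wait(i32, addr, 1); // sleep until another thread notifies us
  }
}

function unlockMutex(addr) {
  Atomics.store(i32, addr, 0);
  Atomics.notify(i32, addr, 1); // wake one waiter
}
```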
Since the shared memory primitive here is the JavaScript SharedArrayBuffer, and that's temporarily disabled in some browsers, WebAssembly threads are not currently available by default in all browsers. But you can still try this out in Firefox Nightly behind a flag. One of the goals of WebAssembly is to be a low-level abstraction over modern hardware. This is especially true of the WebAssembly SIMD proposal. SIMD is short for Single Instruction, Multiple Data: it lets one instruction operate on multiple data items at the same time. Most modern CPUs support some subset of vector operations, so this proposal is trying to take advantage of capabilities that already exist in the hardware you use every day. The challenge is to find a subset that is well supported on most architectures but still performant. Currently, this subset is limited to 128-bit SIMD. There are a couple of different ways to use the SIMD proposal. With auto-vectorization, you can pass in a flag to enable SIMD while compiling, and the compiler will auto-vectorize your program for you. On the other hand, many SIMD use cases are niche, highly tuned for performance, and use hand-written assembly. Those would instead use Clang built-ins, or intrinsics that generate machine code tuned for performance. Now, SIMD can be used for a large variety of applications. You can use it for image, audio, or video codecs, applications like Google Earth and Photoshop, or even machine learning applications on the web. We've had a lot of interest in WebML and SIMD collaborations as well. So let's take a closer look at how this data is operated on. Here you see a simple example of an add instruction on an array. Let's say this is an array of integer values. On the left side is what a scalar operation would look like, where you add each pair of numbers and store the result.
The vector version of this would just boil down to one hardware instruction, for example a padd or a vpadd on some Intel architectures. So SIMD operations work by allowing multiple pieces of data to be packed into one data word, and the instruction acts on each piece of data. This is useful for cases where the same operation has to be performed on large amounts of data. Take image processing, for example. Say you want to compress an image in Squoosh, or reduce the amount of color in an image by half in Photoshop. SIMD operations would make this a lot more performant. So we've talked about making use of underlying hardware capabilities to make your applications performant. Now let's look at what happens on the other side: what are we doing for better interop with the host? One of the proposals being implemented by multiple browsers is the reference types proposal. With the reference types proposal, WebAssembly code can pass around arbitrary JavaScript values using the anyref value type. These values are opaque to WebAssembly, but by importing JavaScript built-in functions, WebAssembly modules can perform many fundamental JavaScript operations without actually requiring JavaScript glue code. At a high level, the WebAssembly Table object is a structure that stores function references. The reference types proposal also adds some table instructions for manipulating tables from inside Wasm. The neat thing is that the reference types proposal is setting the stage for really useful future proposals: efficient interop with the host through the Web IDL bindings proposal, exception references for exception handling, and a smoother path to garbage collection. And I'll be talking about all of this in the next few slides. So a proposal that our team will be focusing on in the near future is the Web IDL bindings proposal.
Web IDL is an interface definition language, and it's used to define the interfaces that are implemented in the web platform. We touched on this a little bit with reference types. The basic idea is that this proposal adds a new mechanism to WebAssembly for avoiding unnecessary overhead when calling, or being called through, a Web IDL interface. The Web IDL bindings proposal would allow compilers to optimize calls from WebAssembly into existing web APIs in browser environments today, as well as other APIs that may use Web IDL in the future. So let's take a closer look at this. Today, when you have a Wasm function, it calls into JavaScript through the JS API. The JS API goes through a binding layer that facilitates communication between the JS API and the web APIs for DOM access. This adds a lot of glue code and additional overhead. The goal of the Web IDL bindings proposal is to reduce this overhead and optimize calls from WebAssembly into existing web APIs. Effectively, these calls would not have to go through the JS API, and the bindings would be optimized to reduce the overhead, so you would have streamlined calls between WebAssembly and web APIs. Now, as we've discussed, porting languages like C, C++, and Rust to WebAssembly is very well supported, and there's a lot of ongoing work to bring different classes of other languages to the web. One such feature is garbage collection, which is necessary for efficient support of high-level languages. That means faster execution and smaller modules. And outside of C and C++, this is really a requirement for being able to support the vast majority of modern languages. This is also a large and open-ended problem, but we've been making progress by carving out smaller proposals and homing in on the exact design constraints. For example, the current Wasm design explicitly forbids tail-call optimizations.
In the future, we want to enable correct and efficient implementations of languages that require tail calls, such as functional languages like Haskell. V8 already has an implementation for this, and it is actually moving along quite well. For full C++ support, we need exception handling. In a web environment, exception handling can be emulated using JavaScript, which can provide the correct semantics but really isn't fast. So post-MVP, WebAssembly will gain support for zero-cost exception handling, and this is being actively worked on as well. We're also working on a number of other proposals, so feel free to check out the future features documentation on the WebAssembly GitHub page. The other thing I want to emphasize is that a lot of these are in the design phase. If you're interested in participating, all of the development is done in an open community group, so contributions are always welcome. We also talked about performance. So if you have performance bottlenecks and you've used Wasm to alleviate some of them, we'd love to hear from you. Surma and I are going to be hanging out here and later in the web sandbox. So if you have questions, please come find us there, or obviously find us online. Thank you.