Great, so let's talk a bit about the history of WebAssembly. So I think you've heard about it briefly in a previous talk, but just to set up what we're talking about. So the web has two general-purpose languages, JavaScript and WebAssembly. WebAssembly was added much, much later than JavaScript, and it's an interesting story, I think. And we added it because it does different things than JavaScript, in particular the stuff noted on the slide here. It's a binary format, so it's much smaller, more compact. It's a compilation target, so you typically don't write it by hand, you compile to it. And it's designed to be fast from first principles, essentially. So this talk is about how we came to standardize something like that, and specifically WebAssembly. Maybe the place to start is way back in 2008. Back then, suddenly JavaScript started to become fast. Before that, it was just an interpreted language, so it was quite slow, but all you did was write little scripts on pages, so it was fine. But in 2008 Chrome launched, with the V8 JIT, and Firefox and Safari launched JITs as well. So by the end of that year, there was a big speedup, and the arrow here tries to show that for many, many years afterwards constant speedups kept happening. So these were very exciting years in the JavaScript space. But back in the 2008, 2009 time frame, there was still a question of how fast it would get. It was starting to speed up, but if we want to run something like a game or a photo editor, something that requires a lot of heavy computation, will it be fast enough? I think it was very reasonable to be skeptical at that time. Maybe JavaScript won't be fast enough, so maybe we need something else. So one possibility for something else is Native Client, which we'll talk a bunch about now. So Native Client basically defines a safe subset of a particular CPU's machine code.
So you can actually run normal code for the CPU, normal machine code, but it's in a safe subset that you know is not going to cause any problems. This was started in 2008 at Google by Brad Chen, Bennet Yee, and David Sehr. And it was quickly announced publicly and open sourced, which is great, because if we want something to be part of the web, it makes sense to work on it in the open. It made a lot of quick progress: in 2009 there was a research paper, and it graduated from research into a plan towards becoming a product. So this is some data on speed. The key thing is the column on the right, which shows the extra overhead of running in Native Client as opposed to running the same code natively. Maybe it's a little hard to see the numbers, but essentially on all the benchmarks it's single-digit overhead. So you're running maybe 5% slower than the full possible maximum speed. That's very, very low overhead, very impressive that you can do this in a safe way. And safety is really a super high priority here. I mean, we just had a talk about safety and security on the web, so the timing is good. You're running content that people send over the network, you don't know what it is, you have to be very careful. NaCl had two layers of sandboxing. One is what you see in application code: by running in a subset that you can prove is safe, that's one layer of sandboxing. The other layer is that all that code can do is communicate with the NaCl runtime, and all of that is in a process, again as mentioned in the previous talk, and that process is sandboxed as well, so you have a second layer of sandboxing. One issue that you might have been thinking about is portability. I said it's specific to a CPU architecture, and obviously that's not great, because we don't want to ship just native x86 or native ARM, we need to ship something portable.
You need to be able to view a website no matter what CPU you have. So of course this was not a surprise, and in 2010 Portable Native Client, PNaCl, was announced. It basically replaces that native machine code with something else, in this case LLVM IR; LLVM is a compiler, and IR is the internal representation in that compiler. So you can imagine that you compile the code halfway, then you ship that over the network, and then you finish the compilation on the user's machine, where you know the architecture of their CPU. That's the model: on the client you finish the compilation down to NaCl, so it's the same as before, the same sandboxing, the same safety. There was some initial controversy with PNaCl over the use of LLVM IR, because LLVM IR was actually not portable and had undefined behavior, but over time these issues were fixed. OK, something worth spending time on is the fact that NaCl ran in a separate process. As already mentioned, this is great for security, but I think it had a large impact on what happened with the technology overall, because it prevented calling web APIs. You're in a separate process from JavaScript and the DOM, so you can't just call into the normal things that JavaScript would; what you can call is a plugin API. Initially this was NPAPI, the Netscape plugin API, the thing that was used by Flash and Java back in the day. Later a new plugin API was made, the Pepper plugin API, PPAPI; so Pepper, salt, et cetera. And this worked. However, a criticism was that it overlaps with existing web APIs: the web already had APIs for rendering and audio and networking, and this was basically a bunch of new APIs that do similar things.
Another criticism was that the APIs weren't fully specified. There was an implementation of Pepper in Chrome, and docs, but other browsers were a little worried about whether the docs were enough of a spec, whether there were corner cases where, if they tried to implement it, they'd run into issues. It's a little hard to say how valid those concerns were, but these are things people were worried about at the time. OK, so some broader context here. Again, this is around 2008, 2010, et cetera. This is about the time when the web is trying to move away from plugins, so web devs are being told: don't use Flash, don't use Java, just use HTML, JavaScript, et cetera. And that's good in general. For Native Client, this was sort of a problem, in that it began as a plugin; again, it used NPAPI or PPAPI. It was later integrated into the browser, at which point it's not a plugin, it's just part of the Chrome platform. But I think that by running in a separate process, by having a new set of APIs very different from the existing web APIs, it still felt more like Flash and Java to other people. And all this had an impact on adoption of the technology. So early NaCl ran in other browsers using NPAPI, the same way that Flash ran in other browsers: it just used a plugin API. But in 2011, Mozilla and Opera announced that they opposed NaCl and Pepper. And from that point, NaCl only ran in Chrome, essentially. So adoption was an issue. Still, the technology made huge, huge strides over the years; a few milestones are mentioned here on the slide. In 2010, Unity was ported to NaCl. Unity is a massively important game engine, extremely popular. In 2011, Mono was ported, which is a CLR implementation; it's also very important for games, among other things. Also in 2011, there was a lot of noise around Bastion and a bunch of other popular games at the time being ported to NaCl.
A big milestone is 2013, when Chrome decided that PNaCl was safe enough and stable enough to be shipped on the web, that is, to be enabled for web content to use. Until then, it was just on the Chrome Web Store, which is very limited. And the idea was: well, we'll ship it, web devs will use it, and then we'll show how useful the technology is, and other browsers will welcome it. That was the hope. So that launched in 2013. And of course, over the years the technology just kept getting better and better. For example, in 2016 Subzero was launched, as the slide says, which is basically a super-fast compiler for PNaCl. This addresses the issue that, as I said, you finish compiling the code on the user's machine, and for a very large project that can be slow. Subzero was very fast, faster than building with -O0. OK, so we've talked a bunch about NaCl. Meanwhile, other events were happening in a parallel world. There's another path that involves things like asm.js and Emscripten, and this is the path that I was involved in. Maybe a good place to start is 2010, when I gave up on a startup that you never heard of, and I moved to work on Firefox for Android, in the US. And that was fun, at Mozilla. In my spare time, I kept tinkering with the game engine from the startup. The startup was kind of like this game engine that could run on the web, and you would download games and stuff. So I tinkered with the game engine for fun. It was open source, and I wanted it to run on the web, because I like the web. But the question was: how? Remember, this is 2010. NaCl was quite new, PNaCl was just announced, and it wasn't clear what the adoption of these technologies would be. So it wasn't clear what would be a good way of running a large C++ codebase on the web. And I was thinking: well, JavaScript is fast now, so let's maybe compile to JavaScript. Now, JavaScript has no explicit types. You just write var, and that's it.
But I was thinking, when you compile C++ to JavaScript, it would maybe preserve the types that were there originally, and maybe a way to look at it is the super simple code here. In C++, we write int x = 10, then x++. Obviously it's an integer, and it stays an integer. In JavaScript, you just replace the int with a var, kind of drop the types, and it's valid JavaScript. And you can imagine that the compiler would actually see: well, this begins as 10, begins as an integer, and all the operations on it preserve the fact that it's an integer, so it can be optimized. I did some experiments in this area, in particular using PyPy, which is pretty good at compiling large amounts of code with no types and inferring those types, but I don't have time to get into the details. Anyway, I convinced at least myself that this could work, so I bet on JavaScript and started writing a compiler from C++ to JavaScript. I called it Emscripten, because it turned LLVM IR into JavaScript. It's kind of a Simpsons reference, the episode with "cromulent", if anyone knows; if not, you can look it up. And this was a fun weekend-and-evenings side project for me, just tinkering, getting code to run. It was just fun. Around a year later, it became more than fun; it became part of my job. By that time, Emscripten could compile quite a lot of things, like Python and Doom, which kind of showed that this could work. So we're compiling to JavaScript, which was never intended as a compiler target, but it turns out you can compile stuff like C++ to it. Of course, there had been other compilers, like GWT from Java, but I think this was the first for C++. That year, I gave a talk at JSConf EU about it, and I published a paper on the Relooper algorithm that it uses. Very, very briefly: a compiler typically has a very low-level view of control flow, but JavaScript, of course, has ifs and loops, so you need to somehow bridge that gap.
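To give a feel for that gap, here is a hypothetical sketch in plain JavaScript (not actual Emscripten output; the function names are mine). A compiler sees a function as a graph of basic blocks connected by jumps. The naive way to express that in JavaScript, which a compiler can always fall back to, is a label variable plus a switch inside a loop; what the Relooper tries to do instead is recover the structured loops and ifs that JavaScript engines optimize well:

```javascript
// The same tiny function rendered two ways. Source logic: sum the integers
// 0..n-1, i.e. a loop with a header block, a body block, and an exit block.

// Naive "emulated goto" rendering: one switch case per basic block, and a
// label variable instead of jumps. Always possible, but JS VMs have a hard
// time optimizing this shape.
function sumEmulatedGoto(n) {
  var label = 0, i = 0, sum = 0;
  while (true) {
    switch (label) {
      case 0: // entry block
        i = 0; sum = 0; label = 1; break;
      case 1: // loop header: branch on i < n
        label = (i < n) ? 2 : 3; break;
      case 2: // loop body
        sum = sum + i; i = i + 1; label = 1; break;
      case 3: // exit block
        return sum;
    }
  }
}

// What the Relooper aims to recover: the same control-flow graph expressed
// with JavaScript's structured loops, which engines optimize much better.
function sumStructured(n) {
  var sum = 0;
  for (var i = 0; i < n; i++) sum = sum + i;
  return sum;
}
```

Both functions compute the same results; the difference is only in the control-flow shape the VM has to optimize.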
Anyway, so I came up with something that handles that. And later in the year, Mozilla basically gave me the option to join the research team and work on this full-time, as my day job. So I was super happy to say yes to that. And why did they do that? I think because they understood that native code on the web matters: we want to run games and other high-performance things. There were worries about NaCl, like the ones I mentioned earlier. They felt that compiling to JavaScript was just the safe option. It felt web-friendly. It uses web APIs normally. JavaScript already runs in all browsers; it doesn't need a plugin. And of course, the belief was: well, JavaScript is just getting faster and faster, and Emscripten shows this can work, so we'll just make JavaScript fast and everything will be great. Well, it turns out not everything will be great, because normal JavaScript speedups are actually not quite enough to get to native speed. The issue, of course, is that JavaScript is compiled just in time, and JIT optimizations are kind of unpredictable; they're not consistent from browser to browser, or even between different versions of the same browser. So in 2013, Luke Wagner, David Herman, and I made up something called asm.js, which is a subset of JavaScript. This is a little sample of asm.js. If you know JavaScript and you know what this does, what might look odd are these "| 0"s. The | (bitwise or) operator in JavaScript converts both inputs to 32-bit integers, and an or with zero changes no bits. So writing "| 0" is a very concise way to say: turn this thing into a 32-bit integer. And therefore, when you see a function like this, the inputs are turned into integers, and the output of the add is turned into an integer. So the JavaScript VM, if it thinks about it, can easily see: well, we're not writing the types, but these coercions annotate the types for me, and I can optimize these as integers.
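To make those "| 0" coercions concrete, here's a small sketch in plain JavaScript (the add function is a made-up example in the asm.js style, not the actual slide code):

```javascript
// "x | 0" applies ToInt32: the value is truncated toward zero and wrapped
// into the signed 32-bit range, so the result is always an int32.
console.log(3.7 | 0);             // prints 3 (fraction dropped)
console.log(-3.7 | 0);            // prints -3 (truncates toward zero)
console.log(Math.pow(2, 31) | 0); // prints -2147483648 (wraps to signed 32-bit)

// In an asm.js-style function, coercing each parameter and the return value
// tells the VM that every value involved is a 32-bit integer.
function add(x, y) {
  x = x | 0;          // annotate: x is an int32
  y = y | 0;          // annotate: y is an int32
  return (x + y) | 0; // annotate: the result is an int32
}
```

The coercions are no-ops on values that are already int32, which is why the code still behaves correctly when run as ordinary JavaScript.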
And in fact, we added "use asm", kind of a convention that if the browser sees that string, it can use the asm.js type system, which formally specifies how this works, and it can do ahead-of-time (AOT) compilation if it wants to. When it does that, it literally sees the type of every variable, and it can generate very efficient code. Here's some data from the first launch of asm.js in Firefox. The key thing to look at: in orange on the bottom, in each of the three benchmarks, is native, which is normalized to 1, and lower values are better. And you can see that the blue bars, Firefox without asm.js, are something like 10 or 12 times slower, 5 times slower. With asm.js, it gets to about 2 times slower than native. So it reaches about half of native speed, which is a pretty big jump, using this AOT approach. It got a bunch faster later, but to be fair, it never quite got to the speed of NaCl. It did get into the right ballpark, though. It did have some serious issues, however. One issue is startup speed. It has to be parsed, because it's JavaScript: it arrives as text in the browser, and the browser needs to parse it and figure out what to do with it. That's never going to be as fast as an actually efficient binary format. There's no way to solve this, not with something that is JavaScript. The benefit of being JavaScript is that it runs everywhere, but the downside is that it's text. And also, it's a hack. Maybe a fun hack, but it's a hack. Those "| 0" things are weird; if you get the slides later, there's a link to a very smart blog post about that. So it's weird, and a lot of people didn't like it. I can definitely understand why they didn't like it. But what I want to say more about on this slide is that, aside from being kind of weird, being a hack also had serious downsides. We added things like Math.imul and Math.fround to the JavaScript language to make it a better compilation target.
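Here's a sketch of what such a module looks like (a simplified, hypothetical example; the module and function names are mine, and I'm not claiming it passes the asm.js validator, but either way it runs as ordinary JavaScript):

```javascript
function MyAsmModule(stdlib) {
  "use asm"; // the marker: an engine that sees this may apply the asm.js type system and AOT-compile
  var imul = stdlib.Math.imul;     // exact 32-bit integer multiply
  var fround = stdlib.Math.fround; // round a number to 32-bit float precision

  function mulInt(x, y) {
    x = x | 0;
    y = y | 0;
    // Plain x * y goes through 64-bit doubles and loses precision for large
    // int32 values; Math.imul wraps exactly like a C int32 multiply.
    return imul(x, y) | 0;
  }

  function addFloat(x, y) {
    x = fround(x);
    y = fround(y);
    // fround marks these as 32-bit floats, letting an AOT compiler emit
    // single-precision hardware instructions.
    return fround(x + y);
  }

  return { mulInt: mulInt, addFloat: addFloat };
}

var mod = MyAsmModule({ Math: Math });
```

For example, mod.mulInt(0x7fffffff, 2) gives -2, wrapping the way a C int32 multiply would, where a plain double multiply would give 4294967294.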
So those two things help with low-level math. And the standards bodies were happy to work with us, because this actually helps normal JavaScript too, in some ways. But you can imagine that if we tried to get closer and closer to native speed, we would eventually need more hackish things from a language that was never designed for this. So you can imagine the standards bodies would, understandably, eventually not be so happy with this. This was also a problem for us in writing the asm.js type system. It got harder and harder to add more complicated things, because asm.js has to run as JavaScript, since a browser might not use the asm.js type system, but it also has to be easy, when doing AOT compilation, to see what it's supposed to be doing and optimize it. So there's a tension there. For example, we added memory growth, which means that you can adjust the size of memory over time, and it had this convoluted pattern. It was so complicated and annoying that we ended up removing it from Firefox, and we removed it from the spec. So the hacks had a cost. Despite that, asm.js did quite well for a few years. As I think I said, it was announced in 2013. Unreal Engine was ported that year; Unreal is another very important game engine, used in Fortnite today, for example. Benchmarks started to take a look at asm.js around that time; for example, Octane had a sub-benchmark that used it. In 2014, the Unity game engine that we mentioned earlier announced support for asm.js and Emscripten. And in 2015, two more browsers joined Firefox in doing the AOT path: Edge and Chrome. OK, so we've talked for a while about Native Client and for a while about asm.js and Emscripten, as two separate, parallel paths. We've talked about what each has done.
But of course, they're not really that separate, in that they competed and cooperated and interacted in various ways. So let's talk a bit more about that. Obviously, they competed; I don't really have much to say about that. They both hoped to be the future of native code on the web. What I think is more interesting, and maybe less obvious, so I want to spend more time on it, is that there was increasing cooperation too. In 2013, we started to have lunch and dinner meetups between the Google and Mozilla tools teams. This was just a lot of fun, and it also helped us work together on things and get to know each other. We started to share code between the two projects. Emscripten started to adopt the PNaCl legalization passes, the code in LLVM that handles some crucial things. We started to share a target triple in LLVM, the definition of the target machine we compile to. And there was also pepper.js, which basically let you compile a Native Client application to asm.js using Emscripten. You could take the same codebase and compile it either to Native Client, well, to PNaCl at the time, or to asm.js. And there was more and more cooperation on the browser and VM side too. In 2014, the VM people, that is, the people working on the JavaScript engines, talked more about asm.js, because they were each optimizing it in their own way. But they started to talk more about: hey, how should we optimize this? What's the future in this area? In particular, I want to mention Luke Wagner from Mozilla and Ben Titzer from Google, who I know talked quite a lot about this. And the idea of a bytecode, a more proper solution, kept coming up. When you're talking about asm.js and how to optimize it, it's natural to think about how to do better in this space. And in 2015, Mozilla, Google, and others met during GDC, the Game Developers Conference, to talk more about what we called at the time "WebAsm", and the plans got increasingly concrete.
And in fact, we announced WebAssembly that year, 2015. I think it was around April that we came up with the name; we found some emails that mention it, replacing some other names that were thrown around. And then we spent a few weeks working behind the scenes. At this point, we had agreement between all the major browsers that we were going to standardize a new thing called WebAssembly, and we prepared a website and a process, et cetera. And then we launched it; that is, we announced publicly that we were starting the process of spec'ing and designing this thing. That was in June 2015. OK, so what changed in 2015? By which I mean: for many, many years, people had asked for native code on the web. Why don't browsers just ship a VM, ship a bytecode, let us use whatever language we want? Why are we forced to use just JavaScript? These were very common complaints from web developers, and developers in general, and I think it's a reasonable question. Why did browsers finally end up doing this at all? And why in 2015, not earlier, not later? At least my theory is what we've kind of mentioned already. Browsers had to optimize asm.js: it runs as JavaScript in your browser, so you're already running it, and there's so much competition on JavaScript speed that you have to run it fast. So you're very motivated to optimize asm.js and to use the asm.js AOT path, et cetera. But once you're optimizing asm.js, you realize how much of a hack it is. You see all the downsides, and you naturally want to do something better, something cleaner, a more proper solution. And there was an increase in collaboration between browsers, as I mentioned. I think just that we were all talking together more, working together on smaller things, helped us finally get to the point where we could work together on standardizing and do a big thing.
And I think that kind of shows in WebAssembly, that is, in the history that we had these two paths that eventually ended up working together. Just like NaCl, WebAssembly is not based on JavaScript. It's a new design, it's proper, it's binary. This is very much the right thing, and it's something NaCl very much got right. Like asm.js, it runs in the same process as JavaScript and uses the normal web APIs. So I think that's something that asm.js got right. This also shows in our toolchain. We use Emscripten, which basically had the right JavaScript and web API model; it compiled to asm.js before, and now we just made it compile to WebAssembly. It's basically calling the same APIs, et cetera, so we could keep using it. And Emscripten uses the new LLVM WebAssembly backend, which was written in collaboration between tools people from Mozilla and Google; the people that used to work on asm.js and used to work on PNaCl are working together on this stuff. I think I have one or two more slides, just to very briefly mention a few more milestones along the way here. As I mentioned, we announced WebAssembly in 2015. Before the end of the year, we could already start to run the work-in-progress version of it in our toolchain, in Emscripten. In 2016, we showed multiple browsers running a bunch of large demos, like AngryBots, which proved that we were all in sync and working together effectively. We reached consensus on Wasm in 2017, and all major browsers shipped support for it in a stable version by November, so you could say Wasm 1.0 arrived by the end of that year. We also published a paper on Wasm at PLDI, and we announced the deprecation of PNaCl in favor of WebAssembly. You could also say asm.js is effectively deprecated, since we don't really need it; in fact, in Emscripten, we plan to stop emitting it and just emit Wasm.
OK, so getting back to this picture: these two paths would seem to go on forever but never converge. Well, it turns out they do converge, just past the hill, where you can't see it in the picture. And they converge to WebAssembly, obviously. And that's it. Thank you.