Hello, everybody. All right, let me make sure this is set up. OK. Hi, my name is Seth Thompson, and I'm a product manager on the web platform team. I work on V8 and DevTools and WebAssembly. And today, I'm going to talk a bit about JavaScript and V8 and WebAssembly, too. So what is V8? Well, V8 is an engine. It's part of Chrome, and it runs JavaScript in web pages. So the V8 team has a simple mission. It's right here. It's to speed up real-world performance for modern JavaScript and enable developers to build a faster future web. So really, there's just two things we care about. We care about making the JavaScript that's already on the web fast, and then we care about making it easier to write performant JavaScript. And that's things like our language efforts and standardizing new parts of the ECMAScript language. So what do we mean by the real-world web? Well, the real-world web is different for everybody. It's whatever sites you are most likely to visit, whether it's The Onion, The New York Times, or pictures of cats. These are the things that people browse most frequently. So in order to benchmark V8 and come up with a workload to drive our optimization strategies, we wanted to make sure we understood what this real-world web was and what its performance characteristics were. So we set out to pick a variety of sites. Internally, we just call them the top 25 sites. They're not necessarily the top 25 in the Alexa ranking, but they're meant to be representative of a variety of different types of sites, different content, and, most importantly, sites from around the world. Fun fact: internally, we actually use Taylor Swift's Twitter page as the Twitter example. So there's a variety of sites we use. And how we ended up here, where we're actually measuring the performance of V8 against real web pages, is not entirely obvious.
It took us a while to get to this stage where we're actually using real websites to measure performance. Here's a simplified history of how we measured performance on V8. In the first era, at the very beginning, most JavaScript VMs used micro benchmarks to measure the performance of code. Now, micro benchmarks test individual language features in isolation. So you might run a loop to push an element to an array 10,000 times. That's a micro benchmark. It just tests array push. Now, this sufficed for a while, but we quickly realized that measuring things in isolation, in a vacuum, didn't necessarily yield performance optimizations that translated to real applications or real, bulkier code. So the next era was really static test suites, things like Octane. You might recognize the name of that benchmark. Now, Octane is more representative than a micro benchmark because it includes real code. Octane includes a Game Boy emulator, actually, and a ray tracer. So this is real code that people have written. But benchmarks like Octane are static. We haven't updated Octane since we released it. And in addition, not all websites run ray tracers. As I mentioned earlier, Taylor Swift's Twitter page does not have a ray tracer embedded on it. So this third era was necessary because we realized that, in addition to the previous workloads we used to measure performance, we needed something that was even closer to what users found when they browsed every day. Another way to think about this is that the sophistication of your performance measurement strategy is probably inversely correlated with the distance from your target. So if you're starting out on a performance problem and you are four or five or eight times slower than where you need to be, a quick and dirty workload like a micro benchmark could be all you need to close the gap a bit.
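The array-push micro benchmark described above might look something like this sketch (the function name, timing approach, and iteration count are illustrative, not V8's actual harness):

```javascript
// Minimal micro benchmark sketch: it measures only Array.prototype.push
// in isolation, which is exactly why results like this can mislead.
function benchArrayPush(iterations) {
  const arr = [];
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    arr.push(i);
  }
  return { elapsedMs: Date.now() - start, length: arr.length };
}

const result = benchArrayPush(10000);
console.log(`pushed ${result.length} elements in ${result.elapsedMs}ms`);
```

A loop like this says something about one operation, but nothing about parsing, garbage collection, or the mix of operations a real page performs.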
But as soon as you start getting closer and closer to your target, or closer and closer to the optimal performance of something, the importance of how representative your workload is rises drastically. And that's why we find ourselves moving beyond the era where micro benchmarks suffice, and even static test suites like Octane, and trying to measure against real websites. So what sort of insight do we glean from measuring websites? Well, we instrumented V8 so that we could see where it spent time. This is where it spent time in the V8 engine when loading the following websites. We run more websites as well, but this was just a sampling. And you can see here that we have a really detailed insight into what parts of the compiler and what parts of the VM contribute to the load time of pages like this. And this helped us change some assumptions we made about where we should spend our time. You can see here that a lot of websites run compiled JavaScript, or at least spend a portion of the actual load time running compiled JavaScript. And there's a bunch of other parts of the V8 engine that contribute even more to the total page load. So this was an insight that came from moving beyond static benchmarks and looking at actual websites, and it has really driven our optimization in the last year or so. So I'm happy to announce, actually, that we've improved the median page load for these top 25 websites we measure by 5%, which in some sense seems like a small number. But if you think about how long Chrome has been optimizing page loads, and if you multiply that savings across every time somebody around the world accesses one of these sites, 5% is huge. It's an incredibly large savings. And the improvements on some pages were even higher than that median. Facebook, YouTube, Wikipedia, Instagram, and Discourse all saw big performance improvements.
And most of these came from improvements in the runtime and in the parser, not necessarily our optimizing compilers. Now, I mentioned that static test suites were not as representative as real web pages, but sometimes they can be useful. And in fact, we look at all of these types of workloads when we're working. And the Speedometer benchmark, even though it's a static test suite, actually maps quite closely onto these real-world websites, because what it's doing is measuring the performance of frameworks, things like React, Angular, Ember. And we were happy to see that the performance work we did for the real websites carried over to Speedometer. And so in the last six months, we've improved Speedometer performance by 15% to 25%, depending on the device. So that's performance, but there are many aspects to making JavaScript run efficiently on a computer or a phone. And in particular, the memory consumption of V8 matters a lot, especially on low-end devices. These are devices with less than 512 megabytes of memory. And in fact, the number of these devices out in the wild is incredibly large. So we focused on reducing the memory footprint on these devices in particular, and actually reduced Chrome's overall memory consumption by 35% by tweaking the heuristics we use and the trade-offs we make in heap size and zone memory. So that's a big number. Another brief engine update: I've talked previously about the different parts of V8. There's an optimizing compiler. There's a baseline compiler. Well, we have been hard at work introducing an interpreter to V8. Now, you might be thinking: an interpreter? Isn't that supposed to be slower than an optimizing compiler or a JIT compiler? Well, an interpreter has a benefit, which is that it has a lower memory footprint than an optimizing compiler, which has to compile all JavaScript into native code. And in addition, the interpreter is a simpler piece of software.
So it's easier to make optimizations with a simplified execution pipeline. And we can also choose. We don't have to use the interpreter always. We can choose how we tier up to full speed. So adding the Ignition interpreter actually brought sizable memory savings. But in addition, it allowed us to make some improvements which made ES6 generators almost three times faster. So we're excited that this has shown real wins already. So let's talk about ES6 features, or ES2015 and beyond. V8 supports ES6, ES7, and parts of ES2017, or things that haven't actually been formalized into an ECMAScript standard yet. And this slide doesn't properly capture just how large an update to the JavaScript language these iterations were. So if you've been programming in ES5 and below, it's definitely time to take a look at the powerful, idiomatic features that these language updates bring. And if you look at the Kangax compatibility table, a lot of browsers have support for these features. So if you're using Babel or another transpiler, it may be time to start considering whether you can turn off transpilation of some features and whether your users have support for them. I'd also like to announce, and this is something that we've been spending a lot more time on, how V8 embeds in Node. I think Paul talked a lot yesterday about how many developers are full-stack developers. And the browser JavaScript environment is just half of the JavaScript environments that you care about. So as of Node 7, all of these features are available by default, with the exception of async await, which will be in the next major Node version. And we've also been working on the performance of individual JavaScript features. And you can see here that we've been working on things like spread, generators, for-of over arrays, and destructuring.
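To put a few of those features side by side, here is a tiny illustrative snippet using generators, spread, destructuring, and for-of together:

```javascript
// A generator function: produces values lazily via yield.
function* counter(limit) {
  for (let i = 1; i <= limit; i++) yield i;
}

// Spread works on any iterable, including a generator.
const nums = [...counter(3)];      // [1, 2, 3]

// Array destructuring with a rest element.
const [first, ...rest] = nums;     // first = 1, rest = [2, 3]

// for-of iterates values directly, no index bookkeeping.
let sum = 0;
for (const n of nums) sum += n;    // sum = 6

console.log(first, rest, sum);
```

None of this needs transpilation in current engines; it is exactly the kind of idiomatic code whose performance the team describes optimizing.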
And all of these are getting closer and closer to optimal performance, as well as built-ins like Object.create, Function.prototype.bind, and Array.prototype.push. These are things which are used all the time in framework-heavy code and otherwise. But I wanna focus on one feature in particular which we launched recently. This is async await. And this has been one of the most frequently requested and most eagerly anticipated of all the language features on the previous slide. And that's because this code should look very familiar to you. JavaScript is inherently asynchronous. And it's very, very easy to end up in callback soup. So async and await are primitives which, when combined with promises, can turn code like this into code like this, which looks like the synchronous code you might write in another environment. So async and await bring readability, they bring debuggability, and they make it a lot easier to write asynchronous code in a style that maps more onto the synchronous way you might think about what your application is trying to do. So I'd like to give a brief demo right now. I'm actually working on a little site that looks kind of like Pinterest. So let me make these a little bit bigger here. Okay. And what I want my site to do is, let me see if I can make this source even bigger there. Okay. Can people read that? Maybe a little bit. So what I'm trying to do is I want a site that loads on a mobile device or on a poor network very quickly, with small thumbnails of the images before it loads the full-res versions. So I've gone into my network tab in DevTools, and I've throttled the network down to 4G, which actually is still quite fast, but I think it'll be enough to see the effect we're going for here. And when I refresh now, you can see that I'm loading in these blurred thumbnails, and then, as the full images get downloaded, I'm replacing them in the page.
Now, for the sake of this demo, to show you async await, I've used the Fetch API, which definitely is overkill and is quite contrived, but what demo is not a little bit contrived? So you can see here I'm fetching a thumbnail, and fetch is an API which returns a promise. So traditionally, the way that you would handle this is using a .then. I'm decoding the response into a binary blob, and I'm taking that blob and creating a URL, which I insert into some HTML and insert into the page. Then, after that's done, I am fetching the full version of the image: calling the async Fetch API, getting a promise back, converting the response to a blob, and then updating the source of my image tag, removing the thumb class, which is the thing that blurred it. So you can see here a pretty simple structure; I'm just inserting them into this grid I have. So this code is not terribly hard to understand, but it seems pretty verbose for what I'm doing, and it's not quite clear to me that waiting for each of these previous then bodies to finish before executing the next ones is quite what I want to do. And I think that by converting this to async await, we may be able to make some optimizations. So how do you use async await? Well, await is a keyword which waits for a promise to resolve before the statement ends, but it doesn't block the thread. The browser has an event loop, and the reason that JavaScript relies so heavily on asynchronous code is that the browser doesn't block on IO. So it's very important to remember that even though your code might look synchronous, and async await helps it do that, under the hood, waiting for, say, the promise returned by the Fetch API to resolve doesn't block the main thread. And that's really the power of async await. So how do we do this? Well, I'm gonna do one thing first, and that is I'm gonna take these snippets of code that are creating these HTML tags.
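For reference, the .then-chained version being described might be sketched like this. fetchThumb, fetchFull, insertThumbnail, and updateThumbnail are hypothetical stand-ins for the demo's fetch calls and DOM updates, so the sketch runs anywhere:

```javascript
// Stand-ins for the demo's network calls: each resolves to a fake "blob".
function fetchThumb(name) { return Promise.resolve(`thumb:${name}`); }
function fetchFull(name)  { return Promise.resolve(`full:${name}`); }

// Stand-ins for the demo's DOM updates; they record what happened.
const inserted = [];
function insertThumbnail(name, blob) { inserted.push(`insert ${blob}`); }
function updateThumbnail(name, blob) { inserted.push(`update ${blob}`); }

// The promise-chain structure: each .then waits for the previous step.
function loadImage(name) {
  return fetchThumb(name)
    .then(blob => insertThumbnail(name, blob))
    .then(() => fetchFull(name))
    .then(blob => updateThumbnail(name, blob));
}

loadImage('cat').then(() => console.log(inserted.join(', ')));
// → insert thumb:cat, update full:cat
```

It works, but the sequencing logic is buried in the chain, which is exactly what the async await conversion cleans up.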
I'm just gonna make them their own function. So let's do that. This will be insertThumbnail, and I'm gonna take in my image, which is the string there, the name, and use that as an ID on the HTML, so I'll take that as an input, and then it takes a blob too. So let's see, that should work there. And then I can replace this with insertThumbnail and the image blob. Great. And I'm gonna do the same for this other code, which takes my full-resolution image and replaces the source of a particular image tag. So let's make a function of that, say updateThumbnail. And that also takes the image name, so it can find the HTML element, and the blob which it inserts there. Okay. So that makes things a little bit easier to see already. Let's make sure that things still work. Yeah, they do. Okay, perfect. So the first thing I'm gonna do here is: if the await keyword appears in a function, there needs to be some indication to the browser that the function has awaits inside of it, or that it is following this other model of programming. So we use the async keyword to indicate that a function is gonna have awaits inside of it. Usually the async keyword just goes before the function keyword. But in this case, because we're using ES6 arrow notation here, you can also put the async keyword right in front of the parameters to your arrow function. So this is an async function now. And instead of a promise chain here, I'm gonna go ahead and write const response, and this is the first of two responses, await fetch. So what does this do? Well, what this does, when this line executes, is: the Fetch API returns a promise immediately, but the promise doesn't resolve until the network request for the image goes through. This line will actually await the resolution of that promise before assigning the result to the response constant. But, as I mentioned earlier, it won't block the main thread while it does this.
So the code is as asynchronous as it was before, but to a reader it's a single line of sort-of-synchronous code. So this is incredibly powerful. To continue down the line, we do the exact same thing with all of those then chains. So let's go ahead and quickly write this out, and I'm gonna take my blob here. Actually, I'm gonna call it thumb, just so we can remember that this is a thumbnail. And this is awaiting response one's blob. Great. And after that, then I can just insert my thumbnail with image, the name of my image, and the thumb blob. Okay, so already I've turned all of this code into these couple lines, which just look better and make a lot more sense when you're reading through things. Okay, let me just quickly do the exact same thing. I'm gonna copy-paste this, and this is my second response. And this is for the full image, and I'm getting back the full blob from this, which I'm inserting. Rather than inserting, I'm updating my thumbnail. Okay, so let's quickly see if this does what we expect. And it does, okay. So you can see here that we've removed all of this code with the then chains and the then callbacks and turned it into six lines of what looks like synchronous code but is really, as I mentioned, asynchronous under the hood. Now, there's one extra thing to show here. In my promise chain, I caught errors at the very end. The way to do that in async and await code is just as you do it in synchronous code: you insert a try block around what you want to run without errors, and you catch any errors if they come out the other side. So that looks and reads just so much cleaner than before. All right, let's make sure that runs, and we've successfully converted promise-based then code to async and await. So you can see how powerful this is. But by converting things to async and await, we also can potentially better see a problem with this code.
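Put together with the try/catch just described, the converted function might look like this sketch. The fetch call and the DOM helpers are stubbed (fakeFetch, insertThumbnail, updateThumbnail are hypothetical names) so the sketch is self-contained:

```javascript
// Stub for fetch(): resolves to a response-like object with a .blob() method.
function fakeFetch(url) {
  return Promise.resolve({ blob: () => Promise.resolve(`blob:${url}`) });
}

// Stubs for the demo's DOM helpers; they record what happened.
const log = [];
function insertThumbnail(name, blob) { log.push(`insert ${blob}`); }
function updateThumbnail(name, blob) { log.push(`update ${blob}`); }

// async arrow function: the async keyword goes before the parameters.
const loadImage = async (name) => {
  try {
    const response1 = await fakeFetch(`${name}-thumb.jpg`);
    const thumbBlob = await response1.blob();
    insertThumbnail(name, thumbBlob);

    const response2 = await fakeFetch(`${name}.jpg`);
    const fullBlob = await response2.blob();
    updateThumbnail(name, fullBlob);
  } catch (err) {
    // Errors from any awaited promise land here, just like synchronous code.
    console.error('image load failed', err);
  }
};
```

Six lines of straight-line logic replace the whole .then chain, and one try/catch replaces the trailing .catch.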
And that is, if we go to the network tab, we can see that all of my mini thumbnails here, those are these first 11 lines, all finish before my full resolutions get fetched. Now, why is that? Well, if we look at the code, it's because we're waiting on the first fetch to finish before we kick off the next fetch. So that's a problem. Actually, I might as well be fetching both resources at the same time, rather than waiting for one to finish before I kick off the next one. So this is one place where async await has a little bit of a quirk, and that is that what you really wanna do here is kick off the fetches and keep track of the promises that they return up here, and then await the resolution of each promise in the proper place in this synchronous order, because I do fundamentally need to insert the thumbnail before I update it. But if we do things like this, instead of awaiting them inline, then the fetch actually kicks off right after those first two lines complete, and I'm not blocking or waiting on the resolution of that promise until the right time inline. So let's make sure this still works. It does, and you can see now, it's hard to see from the waterfall, but I believe that kicking off the first full-resolution fetch no longer blocks on finishing the download of the thumbnail. So that's an important thing to remember: sometimes you want to kick off the promise before you await its resolution. Either way, I hope you can see that this code is a lot easier to reason about and understand than the previous code. And so if you're using promises at all, you should absolutely take a look at updating your code to take advantage of async await. And async await is supported in Chrome Canary and will be in an upcoming release, the next release. So let's go back to the slides. Okay, so async await: a very, very cool feature, and it just really simplifies the writing of code.
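The fix described above can be sketched like this: start both fetches immediately, then await each result only at the point where it's needed. As before, fakeFetch and the DOM helpers are hypothetical stand-ins for the demo's fetch() calls and DOM updates:

```javascript
// Stub for fetch(): resolves to a response-like object with a .blob() method.
function fakeFetch(url) {
  return Promise.resolve({ blob: () => Promise.resolve(`blob:${url}`) });
}
const log = [];
function insertThumbnail(name, blob) { log.push(`insert ${blob}`); }
function updateThumbnail(name, blob) { log.push(`update ${blob}`); }

const loadImage = async (name) => {
  // Both requests are kicked off here, in parallel: calling the
  // fetch function starts the work; await is deferred until later.
  const thumbPromise = fakeFetch(`${name}-thumb.jpg`);
  const fullPromise = fakeFetch(`${name}.jpg`);

  // We still consume the results in order, because the thumbnail must
  // be inserted before the full-resolution image replaces it.
  const thumbBlob = await (await thumbPromise).blob();
  insertThumbnail(name, thumbBlob);

  const fullBlob = await (await fullPromise).blob();
  updateThumbnail(name, fullBlob);
};
```

The key point is that a promise starts running when it is created, not when it is awaited, so moving the await later overlaps the two downloads without changing the order of the DOM updates.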
So we talked a lot about JavaScript, but I want to move now to something that's a bit more on the horizon but promises to be a really, really powerful tool for you as developers. So WebAssembly is a cross-browser, plugin-free, low-level language that makes it easy to run native code on the web. There have been previous efforts to do something like this: Flash, and you might even remember Java applets in the far distant past. But each of those was encumbered by particular difficulties, and WebAssembly is the first cross-browser, open-standards-based solution. So we're really excited that we're off to a good start in the standardization of WebAssembly. Now, what are the benefits of WebAssembly? Well, WebAssembly is a binary format, so it's much smaller over the wire than a textual equivalent. You may be familiar with asm.js, which tries to achieve some of the same goals, but by using a backwards-compatible JavaScript syntax. WebAssembly is small from the start because it's its own binary format, and it's fast. In WebAssembly, you're dealing with memory and performing operations on numbers, int32s, int64s, float32s, float64s, in memory, rather than dealing with higher-level JavaScript primitives, which have their own runtime associated with them. So WebAssembly is fast, with near-native performance. And as I mentioned, it's low-level, so it's possible to compile C and C++ to WebAssembly. And we're really excited about this because we think WebAssembly will unlock a new class of high-powered apps. If you thought it wasn't possible to run a Photoshop equivalent in the browser, that's definitely something WebAssembly would be capable of running. So WebAssembly is designed to enable high-performance computing and unlock not just games, but media applications and other things which traditionally have taken too much raw compute power to run effectively in JavaScript.
So we have a demo on webassembly.org of the AngryBots game running. This is actually a Unity game that was compiled to WebAssembly. And you can see that the performance is a lot smoother than even the asm.js equivalent. I definitely encourage you to check it out. And WebAssembly is already implemented behind a flag in Chrome, Firefox, and Edge, so this demo runs in more than one browser. But today I want to talk about how you, as a JavaScript developer, might take advantage of WebAssembly without digging into C and C++ or bringing out a compiler to compile WebAssembly. And that's because we think WebAssembly is also going to unlock really interesting functionality which might just live in a library. So you might be writing a traditional JavaScript web app and just use a WebAssembly module to perform some functionality which is computationally intensive. If you're writing a progressive web app that encodes and decodes JPEGs, you could just offload the encoding and decoding of the JPEG file to a WebAssembly module and write all the rest of the application, the user interface and whatever features you have, in JavaScript. So WebAssembly was created from the beginning to play well with JavaScript and interact right alongside it. So just to give you a brief demo of this: this is far from an AngryBots game, but I wanted to show you that, as a JavaScript developer, if you receive a WebAssembly module, you don't have to really understand native code or C or C++ to still use it. So I've got an HTML page here that right now is just using the WebAssembly API. Because it's exposed to JavaScript, the way you start and run WebAssembly is by calling a JavaScript API. So it's gonna load this WebAssembly module and instantiate it, which is what we have to do to run it. So let me quickly switch over to my server and let's take a look at what this looks like.
Okay, so briefly, before I forget, the other important thing here is to see what this WebAssembly binary is. Well, it's incredibly tiny. This is the binary representation, or the textual dump, of the WebAssembly module. That's all there is. And this WebAssembly module does something very small: it just adds two numbers together. So, as I mentioned, this isn't a big game demo, but it should still be interesting to see how this interacts with JavaScript and HTML. So let's go to our new index here. So what we've done right now, and I'm using async and await actually, because all these functions are asynchronous, is: we fetch the module, we turn it into an array buffer, and we use the WebAssembly API to compile it into a module and instantiate the module. You can think of a module sort of like an ES6 class, and the WebAssembly instance as an instantiation of that class. The module can be instantiated more than once, and you might wanna use it in different ways. But regardless, the instance is the JavaScript object that we're gonna use to invoke this WebAssembly module. So let's actually break here and see what we get. So instance is, oh, let me just actually log it out here and we can see it directly. Oh, I don't know. Okay. So what we get out here is, all right, let's go through what we're loading here. Is our server on the right page? Yes. I'm not calling load. That is, wow. Wow, thank you. I really, yeah. I told you this talk was about advanced JavaScript, but it goes beyond advanced to expert-level JavaScript. Thank you. Okay, so we're gonna load this, and we might then see some results. Hopefully, I mean, you can never be sure here. Okay. And what we get out is an object which represents our WebAssembly instance. Wow. Okay. Now, WebAssembly instances in JavaScript have a simple thing on them. They have exports, because a WebAssembly module can export a bunch of different functions.
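The fetch, compile, and instantiate flow just described can be sketched end to end. The bytes below are a hand-encoded minimal module exporting an addTwo(i32, i32) → i32 function, standing in for the demo's tiny .wasm file; in a real page you'd fetch() the file and pass its arrayBuffer() instead of a literal byte array:

```javascript
// A minimal WebAssembly module in the wasm binary format:
// magic + version, then type, function, export, and code sections.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type 0: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // function 0 uses type 0
  0x07, 0x0a, 0x01, 0x06, 0x61, 0x64, 0x64, 0x54, 0x77, 0x6f, // export "addTwo"
  0x00, 0x00,                                                 // ...as function index 0
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, // body: local.get 0,
  0x0b                                                        // local.get 1, i32.add, end
]);

async function run() {
  // Compile the bytes into a module, then instantiate it, exactly as
  // described above: module is like a class, instance is an object of it.
  const module = await WebAssembly.compile(bytes);
  const instance = await WebAssembly.instantiate(module);
  // instance.exports exposes the module's exported functions as
  // callable JavaScript functions backed by native code.
  return instance.exports.addTwo(40, 2);
}

run().then(result => console.log(result)); // → 42
```

The same WebAssembly.compile / WebAssembly.instantiate calls work whether the bytes come from an inline array, a fetch() response, or anywhere else that yields a BufferSource.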
So we see here, we get this object, and it has an exports property, and addTwo is a value that it exports. So addTwo is the WebAssembly function. Now, the important thing to note here is that addTwo is actually native code. Again, adding two numbers isn't a particularly impressive example, but you can see that from JavaScript, in an environment you're very familiar with, you can easily get access to something like a native function, and we can call it just like any other. So let's take this, and I mentioned that all async functions return promises, so we'll have to use a .then here, but we're gonna take our instance and we're gonna call instance.exports.addTwo. Let's take two numbers and let's log these to the console. Okay. And let's see what happens. Ah, I keep forgetting I didn't map my persistence over here. Let's do that one more time, and this is instance.exports.addTwo. Let's see what happens. Missing argument after argument list. Did I close this up correctly? We get 84. So you can see that even if you're not a WebAssembly developer, or even if you're not compiling these modules yourself, you can imagine a day in which, instead of downloading an npm module which is JavaScript, or provides a JavaScript implementation of something like a ZIP encoder or an image decoder or a PDF renderer, you could just rely on a WebAssembly version, a WebAssembly implementation of the same thing, and use it with no knowledge of how it was created. So WebAssembly is currently in a browser preview period, and we've announced that the roadmap is to collect developer feedback during this browser preview and launch WebAssembly, on by default, in browsers in early 2017. So WebAssembly is really close, and it's gonna be really powerful, not just for native developers, but for developers making any range of progressive web apps with some computationally intensive code. So thank you very much.
That was my talk and if there's one thing to take away, it's that V8 is continually investing not just in the performance of JavaScript but in unlocking new capabilities and improving the ergonomics of writing web applications in the first place. So thank you very much. My name is Seth Thompson. Thank you.