Hello. How is everyone? Hi, my name is Seth Thompson, and I'm a product manager on the V8 team in Chrome. V8 is the engine that runs JavaScript in Chrome, and our mission is quite simple: we want to speed up real-world performance for modern JavaScript, and we want to enable developers to build a faster future web.

There are two parts of this mission that are important. The first is that the JavaScript V8 is optimized for is the JavaScript that you, as developers, are actually writing: JavaScript that includes new language features as they're introduced, new patterns of application development, new idioms. The second is that everything we do as an engine, from participating in the TC39 standards committee to developing tools and giving guidance, goes toward a faster future web. We'll talk about all parts of that in a little bit.

But first, I wanted to start with some fundamentals of a JavaScript engine. Specifically, V8 is a just-in-time compiler, or JIT. What this means is that when JavaScript is sent to the browser, the browser has to execute that code immediately. To guarantee maximal performance, the engine wants to transform the JavaScript into native machine code. But because it's doing this as soon as you load the page, it has to do it just in time, at runtime. And there are some fundamental trade-offs at play here. I'd like to shed some light on what we mean when we say that an engine runs JavaScript fast, because there are a lot of different ways to run JavaScript.

The first fundamental trade-off is that, in general, the more optimization an engine performs, the faster the machine code it generates, so the faster that code can potentially run, but the longer the initial delay. Remember, all of this compilation and optimization happens after you load the page, when the browser sees the JavaScript for the first time. So the trade-off is peak speed once the program is running versus the initial delay at startup. The second trade-off is that, in general, in a JIT, the more optimizations an engine performs, the more memory the engine consumes. So any time someone says their engine is five times faster, or 5% faster, you should ask: faster in what dimension? Where does that number sit in this trade-off space?

Let's examine this in a little more depth. Here are the constraints as I've laid them out. Generally, an engine can have fast startup or high peak performance, and it makes that decision for each function it executes: it can run a function immediately, or it can optimize it first and run it faster, paying the cost of those optimizations up front. Separately, an engine can have a low memory footprint. Think of an interpreter, which uses very little memory, but at the cost of maximum speed. So memory and speed are also a trade-off.

So let's say I wrote a web page, and all it did was run one line of JavaScript: a call to foo.
Now, we don't know exactly what foo is doing here, but I would be willing to bet that a JavaScript interpreter, which you might have a visceral sense of as something slow, can execute one function much faster than an optimizing JavaScript compiler, which takes the foo function, turns it into native code, and then performs multiple optimization passes over that native code before it can even execute it. All of this happens when you load a page or start executing a JavaScript file. To put that into context on our little chart: if you knew you were only executing one function, you would want to optimize for fast startup, not peak performance, because any time spent making foo fast to run many times would cost more than simply running it once.

So what if this exact same function is run 10,000 times? Does the trade-off change? In other words, if we know foo has to be fast and will be called 10,000 times, then it's worth taking that initial startup delay to optimize our native code for foo, because we'll amortize that startup cost over the next 10,000 executions. For a code pattern like this, you want to optimize your compilation of foo for peak performance. And if this is a desktop browser, you can rest assured there's enough memory to run lots of optimization passes.

But what if this exact same code runs on a low-memory mobile device, say an Android device with low RAM? If taking the memory to run multiple optimization passes and generate a lot of machine code is the difference between your device running comfortably and closing a bunch of background tabs under memory pressure, you might want to sacrifice peak JavaScript performance for a low memory footprint, so you can keep more tabs open in your browser. So I would argue that on a mobile device, although you would ideally like peak performance, you have another constraint: you'd prefer low memory usage.

And finally, what if that same code is in a file on a server, run by Node.js? In this case, your server starts up once and then keeps running, receiving requests from your users. Here, you don't really care about startup cost at all; you'd like the engine to take as long as it needs to optimize this function, because once the Node app is up and running, you want each request served as fast as possible. But if this is an IoT device, maybe you're running Node on something that's also memory constrained, and you might again have to sacrifice some peak performance for a low memory footprint.

The reason I go through all of these examples is to show that the same single function of JavaScript has many different optimal execution strategies depending on the context: the device it's running on, whether it's on the server or the client, and how much memory there is.
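To make the trade-off concrete, here's a minimal sketch of the two workloads described above (foo's body is a hypothetical stand-in for real work):

```js
// A placeholder function; the body stands in for whatever work foo really does.
function foo() {
  let total = 0;
  for (let i = 0; i < 100; i++) {
    total += i;
  }
  return total;
}

// Workload 1: called exactly once. Fast startup wins, because any time the
// engine spends optimizing foo is pure overhead for a single call.
foo();

// Workload 2: called 10,000 times. Here the one-time cost of optimizing foo
// is amortized over every subsequent call, so peak performance wins.
for (let i = 0; i < 10000; i++) {
  foo();
}
```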
So as we're developing V8, we want to put together an engine that spans that entire trade-off space and uses heuristics to decide whether it should be tuned for fast startup, peak performance, or low memory. Over the last year, and actually for the past two to three years, V8 has been working on an entirely new execution pipeline, which means the compilers V8 previously used have been completely replaced.

Let me quickly walk you through the history, just to give you a sense of how much machinery there is behind a JavaScript engine and how many moving parts there are. In 2008, V8 started with a simple code generator that produced semi-optimized machine code. In 2010, we added an optimizing compiler, Crankshaft. Remember, an optimizing compiler takes more time up front, because it's computing optimization passes, but then generates code that is very fast when run multiple times. In 2015, we realized that our first optimizing compiler wasn't extensible enough. It didn't support the full JavaScript language, and we knew we needed something for new patterns of JavaScript, things like asm.js and eventually WebAssembly. So we created a second optimizing compiler. Then we added an interpreter; I'll talk about all of these in a second. And finally we get to the present day, where that original code generator and Crankshaft have been removed from V8 entirely. So today, two parts of our engine are completely new.

What are those parts? The first part of the all-new V8 is TurboFan. TurboFan is an optimizing compiler, and we've been working on it for over three years. As an optimizing compiler, it's designed to squeeze the most possible performance out of the machine code it generates. It was also designed to be extensible from the beginning, so we were able to implement all of ES2015, the newest JavaScript features, in TurboFan, as well as follow-on features from ES2016 and ES2017. And TurboFan supports the entire language, so JavaScript constructs like try, catch, and finally can be optimized for peak performance, where historically they weren't. What all of this means for you as a developer is that the new V8 has fewer performance cliffs. You're less likely to run a single function, have it be fast, make a change, and suddenly wonder why it's slow. That's because the engine now supports a more diverse set of workloads. To recap: TurboFan is optimized for peak performance through multiple optimization passes, even if that takes memory.

But we've also added Ignition, because we know we need to serve the use cases where initial execution has to be fast and the memory footprint low. Ignition is an interpreter, and contrary to popular belief, it's not necessarily slow when it's used at the right time. Ignition compiles JavaScript to a compact bytecode, which it then executes, and we've noticed that it's particularly beneficial for loading script-heavy pages fast. It's also integrated with TurboFan to make adaptive optimization simpler. This means that if we start executing a function with Ignition, we can watch whether it's called often and use a heuristic to decide that the function should be sent to TurboFan and optimized for maximum performance. So Ignition is optimized for the other end of the spectrum.
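As a rough illustration of that Ignition-to-TurboFan handoff, you can ask V8 to log when it optimizes a function. This is just a sketch, assuming a Node build that forwards V8 flags such as --trace-opt:

```js
// hot.js: run with `node --trace-opt hot.js`
// square() starts out running in the Ignition interpreter; once the loop has
// called it enough times, V8's heuristics mark it hot and hand it to the
// optimizing compiler, which shows up in the --trace-opt log output.
function square(n) {
  return n * n;
}

let sum = 0;
for (let i = 0; i < 1e6; i++) {
  sum += square(i);
}
console.log(sum);
```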
And when you put these two things together, you end up with an all-new V8 that can target multiple places along the spectrum of low memory versus high memory and fast startup versus peak performance, depending on the workload, the heuristics we observe as your code executes, the device your code runs on, and whether it's embedded in Node or in Chrome. This means real-world JavaScript is faster, the engine can run in a much lower memory footprint, there are fewer performance cliffs, it's a more well-rounded engine, and it's better tuned for Node.js than our previous configuration.

Finally, there's a third new part of V8, which we're calling Orinoco. Orinoco is a new, mostly parallel and concurrent, compacting garbage collector. Our previous garbage collector was not always parallel, and Orinoco expands our ability to perform garbage collection across multiple threads, making for shorter pauses when we clean up an application's memory.

So all of these things come together into this new package. I've mentioned many of the dimensions along which you can compare JavaScript performance, but I want to talk a bit about how we benchmark JavaScript, how we tell whether we're actually getting faster at real code. V8 has started measuring the performance of real page loads. We have a system that can record user actions, so we can set up a benchmark that loads a page, scrolls through it, potentially watches a video or reads a news article, and we can then run benchmarks against these simulations. And we're happy that after optimizing for these real-world web pages, we saw a 25% improvement on the Speedometer benchmark, depending on the platform, over the course of the last year. So by optimizing real web pages, we were able to deliver improvements on a benchmark like Speedometer.

But not all benchmarks are good. In fact, if we could choose, we would always just measure real web pages. The reason we use something like Speedometer is that it runs in multiple browsers, so you can compare between engines. Here's Speedometer performance over the last year. One of the downsides of traditional benchmarks, the kind you run in a browser tab to compare engines rather than real-world simulations, is that they're not always emblematic of the types of JavaScript you're actually writing. At the very beginning, we talked about four different contexts where your code might run, such as a server or a low-memory device. The Octane benchmark was tuned only to exercise the peak performance of a compiler. We believe that chasing Octane, optimizing in particular for it, led engines down a path where they over-optimized for peak performance and under-optimized for things like low memory usage and fast startup. So this year, we announced that we retired Octane, because we felt it was no longer yielding the right decisions for engine optimizations.

I mentioned Speedometer earlier as something that better approximates real-world websites. The reason it approximates them better is that it includes TodoMVC applications, implementations of the same to-do application across many frameworks. Speedometer includes Angular and React, and we've worked with WebKit, who originally implemented Speedometer, to add even more frameworks.
So I'm excited to announce that Speedometer 2 has just been committed to the WebKit code base, and it expands the frameworks it tests. It now tests Angular 2 rather than Angular 1; it adds Preact, Vue.js, and Inferno; it adds ES2015 code; it uses code built with bundlers like webpack; and it updates all of these frameworks to their latest versions. While no benchmark is a perfect approximation of real-world code, we hope Speedometer 2 will be a better way to compare engines across browsers. Those are the frameworks and bundlers that are included. Speedometer 2 is coming soon to WebKit, and you'll be able to find it on browserbench.org.

One of the things I mentioned in that past section, when I was talking about important parts of a performance story, is ES2015. ES2015 and newer features are the latest versions of JavaScript. ES2015 features are things like promises and the rest and spread operators, and array iteration becomes a lot easier with ES2015. When ES2015 was initially implemented, there was a slowdown on ES2015 code, because engines take a long time to optimize particular code patterns and make them fast. So when ES2015 was first introduced, it was actually a lot slower than the equivalent ES5 code.

Over the last year, we've been using a tool called Six-Speed, which compares ES2015 code to its transpiled version, the ES5 equivalent of accomplishing the same action. For example, an arrow function is compared to an anonymous function. We've been using this tool to target the biggest performance differences between ES2015 features and their transpiled equivalents. We worked on optimizing for-of, so using the for-of keyword is now as fast as writing a simple JavaScript loop with var. We improved Object.assign, which shows up everywhere, especially in React and Redux code. We improved iteration and destructuring, and we also improved the performance of spread calls. By doing this, we drastically reduced the slowdown of native ES2015 relative to ES5: over roughly the past six months, we went from average ES2015 code being almost three times slower than ES5 code to the present, where we've almost reached parity. What this means is that there are fewer and fewer reasons not to write ES2015 natively when you can, when your users' browsers support it or on Node on the server.

I also wanted to highlight a couple of language features that got special attention, because they're so useful but had a lot of performance left on the table. Generators are now two and a half times faster. And async/await, a very useful idiom for turning promise-based .then-style code into something that reads more synchronously, is four and a half times faster than it was six months ago. That's a big deal. Underlying all of this is our promise implementation, and for a while, native promises were actually slower than promises from a library like Bluebird. I'm also happy to announce that over the past year, we've improved promise speed by four times, so native promises can now be used in real-world code without worrying about their performance impact.
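To give a flavor of what Six-Speed measures, here's a sketch of the same iteration written natively in ES2015 and as a hand-written ES5 equivalent, plus the async/await idiom mentioned above (fetchUser is a hypothetical promise-returning function):

```js
// Native ES2015: for-of iteration, now roughly as fast as the ES5 loop below.
const items = [1, 2, 3];
let total = 0;
for (const item of items) {
  total += item;
}

// Hand-written ES5 equivalent: an indexed loop with var.
var total5 = 0;
for (var i = 0; i < items.length; i++) {
  total5 += items[i];
}

// fetchUser is hypothetical; assume it returns a Promise.
function fetchUser() {
  return Promise.resolve({ name: 'Ada' });
}

// async/await: promise-based .then-style code rewritten to read synchronously.
async function greet() {
  const user = await fetchUser();
  return `Hello, ${user.name}`;
}

greet().then(message => console.log(message)); // "Hello, Ada"
```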
So I've talked a lot about language features and the different places you can run JavaScript, and one of those environments is Node.js. Node.js is a server-side JavaScript runtime, and Node embeds the V8 engine. Over the past year, we've invested more in the Node community than we ever have previously. V8 is now represented on the Node core technical committee, and you can find us on GitHub, on the Node repository, working through issues that come up under the V8 engine label. These are things like regressions, where somebody notices that their Node code slowed down for some reason. We're working with the Node team on releases, making sure that as soon as a new version of V8 is available, it can be upstreamed into Node.js as quickly as possible and tested for release.

We've also worked on performance optimizations specifically for Node. In addition to exposing JavaScript features, Node has a rather large standard library, and some of the APIs in that standard library, things like buffers, have needed targeted performance work. We shipped a faster instanceof, fixed a Buffer.length regression, added support for long argument lists, and in general made sure that let and const are as fast as their var equivalents. We also know that certain libraries are used throughout the Node ecosystem; the through library, which is used for creating streams, is one we spent time optimizing to make sure streams in Node are fast. All of this is summarized by what we call the Acme Air benchmark, which starts up a Node server and sends thousands of requests to it. It involves a database; it's a big app. That benchmark showed a 10% improvement when we launched TurboFan and Ignition. It goes to show that the work we did to make our engine more well-rounded did, in fact, yield faster Node performance as well. We're really excited about that.

And V8 is part of Chrome, so when you're debugging JavaScript in Chrome, you can use DevTools. Over the past year and a half or two, we've been working on making DevTools support Node.js, and I'm happy to announce that it's now easier than ever. I'm going to show a little demo of where we are right now. Traditionally, when you're writing a Node application, it's difficult to debug things; it's not quite as simple as it is on the client side, where you can pop open the inspector and navigate around your web page. For this demo, I'd like to go through a Node command line interface called emoj. It's a really cool open source program I just found. When you run it and type any sentence, say "hello", it comes back with a bunch of emoji that correspond to the sentence. For this demo, I want to figure out exactly how it's implemented.

To debug any Node application with DevTools, all you have to do is pass the --inspect flag to Node. This opens up a debug port that can communicate with a DevTools instance, and you can debug in DevTools. Previously, you had to paste a relatively long URL into Chrome to make this work, but now, if you go to chrome://inspect (or about:inspect), Chrome will automatically detect any running Node instances on your computer.
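Here's that workflow in brief, as a minimal sketch (index.js is a stand-in for any script, such as emoj's entry point):

```js
// index.js: a stand-in for any Node script you want to debug.
//
// 1. Start Node with the V8 inspector enabled:
//      node --inspect index.js
//    This opens a debug port that DevTools can attach to.
// 2. In Chrome, go to chrome://inspect; running Node instances are detected
//    automatically. Click "inspect" to open a dedicated DevTools window.
setInterval(() => {
  console.log('still running...'); // try setting a breakpoint on this line
}, 1000);
```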
So let's go ahead and click Inspect. We get a dedicated window that opens up, and here we can see our CLI. If I look into my sources, I see the file I just executed, right here. What's really powerful about this is that I can come in and use any of the debugging features introduced in the last six or so months. One really powerful one is inline breakpoints. Here's what I'm going to do: I'm going to set a breakpoint on this line of code, the code that fetches those emoji from the server it uses to perform, I think, some machine learning. What I really want to know is what the server is returning, and I think it's returning an array right around here, in the middle of the line. If I just broke on this line normally, it would break at the beginning of the line, and if I then stepped forward, the fetch would already have completed. What I really want is to break right here, and with inline breakpoints, a really cool DevTools feature, I can do just that. And now I can use this feature to debug Node code as well.

With that breakpoint in place, let me try this again. I'll run "hello", and I get paused on the breakpoint. I'll show a couple of quick new DevTools features. One is that this call stack supports asynchronous code execution: you can see that it traces execution through a variety of async functions, and it can trace promises being resolved. But I want to come down to the scope here, where I can see that the array the server returns is actually 10 emoji long. So now I can see what this code is doing: it's slicing that array and returning only the top seven results. So relatively easily, just by passing the --inspect flag, opening up Chrome, and clicking to connect to Node, I can jump into the execution of a Node program and use all of the DevTools features to debug it. I focused on the debugger today, but the JavaScript CPU profiler is available for Node, the memory profiler is available, and so is the console. I can check process.versions.v8: yes, this is Node, running this version of V8.

All of this is immensely useful, and it's just one of the new features we have for debugging Node with DevTools. I won't demo it now, but I'll briefly mention that we also have a new dedicated window for Node. You can actually close Chrome, open this Node debugger, add in the port, and it will stay connected to whatever Node instance you're running, even if you're running multiple Node scripts at the same time. So yes, that's exciting stuff. OK, let's go back to the slides briefly.

In addition to the inline breakpoints I just talked about and this integration of Node and DevTools, the V8 team has worked with the DevTools team to support a number of really useful features for writing JavaScript applications. One of the themes of Google I/O this year on the web track has been optimizing the performance of progressive web apps and JavaScript by simply shipping less code. If you're using a bundler like Browserify or webpack and you're requiring many modules from npm, it's very easy to end up in a situation where your app bundles, ships across the network, parses, and starts up way more JavaScript than you actually need, simply because the bundler included an entire library even if you only used one function from it.
So I'd like to do another demo here and show how DevTools can help you find this situation and fix it if you're working on an app with lots of dependencies. I'll briefly show you that I've got an application here, and I'm serving it with a little server that watches my code, sends it through Browserify, and recompiles it. Fairly basic stuff. I can show you that I'm requiring lodash as a library, along with a bunch of other dependencies. Here's the app: it's a GitHub repo formatter. If I type in "Hello IO 2017", it gives me a little slug that I can add into GitHub. It's kind of useful.

Let's say I want to look at the performance of this. Browserify turned my JavaScript into a single file, a bundle, and it's actually quite large, about 6,000 lines. Now, I could go and manually figure out exactly which parts of those 6,000 lines I actually use, but instead I'm going to use a feature called DevTools Coverage. You might be familiar with coverage from test coverage, where you run a coverage tool to figure out whether your tests exercise all parts of your code. This type of coverage tool is a little different: it measures, of the JavaScript the browser saw, how much was actually executed, and how much was just dead weight, dependencies that weren't used.

You can press Escape to bring up the drawer in DevTools, and go to the Coverage tab right here. This is new, so you'll have to do this in Chrome Canary, and if it's not down here, it'll be under More Tools, then Coverage. This panel allows us to load our page and check which JavaScript is actually executed. We hit Record, then Refresh, and type something in. Now, if we isolate just the bundle file, you can see that we've only used about a third of the JavaScript we're shipping. What's more, this tool lets you look at the source and see which functions were called and which weren't. There's some green here, which is code that ran, but there's also a lot of red, and I know exactly what it is. All of this red code is functions from the lodash library that I didn't use, because I'm actually only using one function: the one that turns the string "Hello IO 2017" into what they call kebab case. Since I'm only using one lodash function, rather than loading the entire lodash library, I can load just the kebab-case module. In other words, I can trim my dependencies down to exactly what I need. When I save that change, my process recompiles my bundle.
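The dependency trim from the demo looks roughly like this; lodash ships each method as its own module, so you can require just the one you need:

```js
// Before: requiring all of lodash pulls the whole library into the bundle,
// even though only one function is used.
const _ = require('lodash');
console.log(_.kebabCase('Hello IO 2017')); // "hello-io-2017"

// After: require only the kebabCase module; the bundler now includes just
// that function and its internal helpers, shrinking the bundle.
const kebabCase = require('lodash/kebabCase');
console.log(kebabCase('Hello IO 2017')); // "hello-io-2017"
```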
If we record and reload, we can see that our bundle is now only 1,000 lines rather than 6,000. And after clearing the earlier recording and re-running it, the percentage of code that's loaded but never executed is now much, much smaller. This was a trivial toy example, but you can imagine running this on a very big code base, where it's not easy to know which dependencies, or which parts of dependencies, are actually used. This should help you make sure that anything you ship to a client for a particular page or route is just what's needed for the functionality you need. So that is code coverage in DevTools. Let's go ahead and go back to the slides.

If you're familiar with the performance panel in DevTools, that's another way to instrument and investigate the performance of your app, and the DevTools documentation has a number of really good tutorials for using it. A couple of other things we've added include line-by-line profiling; I think that was the first of these features. In addition to seeing which code was executed via coverage, you can record a performance profile, look back at your sources, and see in the gutter how long each line took to execute. You can use that to identify a particular bottleneck in your source. We've also got code coverage, which just launched, and async debugging, which I showed a bit of in the Node demo; async debugging has gotten a lot simpler recently. So V8 and DevTools are constantly working to make sure that debugging, and making fast applications in the first place, gets easier and easier.

There's one last thing I wanted to touch on. This has mainly been a talk about JavaScript, but WebAssembly is a really exciting new technology that V8 has recently added support for. WebAssembly, if you're not familiar with it, is a new language for the web. Specifically, it's a low-level language designed to execute at near-native speed. WebAssembly is ideally suited for the kind of library you might otherwise write in C if you were in a different environment than the web. For the first time, you can use WebAssembly to compile C and C++ programs and run applications, like a graphics-intensive game or a video editor, that previously would have been constrained by the performance of a dynamic language like JavaScript.

So we're excited to announce that WebAssembly is supported in V8 and Chrome. And I think one of the most amazing parts about WebAssembly is that it's not just a Chrome technology. In fact, when I see this slide, the thing I'm most excited about is the other browsers here. WebAssembly has also launched in Firefox, and it's currently in preview builds of Edge and WebKit. So WebAssembly is poised to become a cross-browser solution for running native code, and it's the first new language introduced on the web with this sort of cross-browser support. It doesn't require any plugins, and it uses the regular web platform APIs you're all familiar with. We launched it in Chrome 57, and as I mentioned, you can compile C and C++ to WebAssembly with Emscripten. Already, we're beginning to see some incredible demos. This is a demo of the Unreal Engine running in a web browser, and on the WebAssembly website we also have a Unity game, as well as a bunch of community projects that have already started being created. Just the other day, I saw a video editor running real-time video effects at 30 frames per second. If you're interested in learning more about WebAssembly, I encourage you to check out the I/O recording of the talk that I believe happened yesterday by Alex Danilo. And coming soon for WebAssembly, we'll have more startup performance optimizations and features that enable things like multithreading and more advanced native code capabilities.
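On the JavaScript side, loading a compiled module takes only a few lines of regular web platform code. A minimal sketch, assuming add.wasm is a compiled module that exports an add function (both names are hypothetical):

```js
// Fetch a compiled WebAssembly binary and instantiate it from JavaScript.
fetch('add.wasm')
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes))
  .then(({ instance }) => {
    console.log(instance.exports.add(2, 3)); // expect 5
  });
```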
So to summarize, to pull this all together: if there's one thing to take away from this talk, it's that V8, and really JavaScript engines in general, need to be well-rounded engines. They need to run lots of different types of code fast, in lots of different environments, and the constraints of those different environments change the types of code that need to run fast. V8 today, with the Ignition interpreter and the TurboFan optimizing compiler, is well equipped to run code at both ends of these spectrums. And I think with WebAssembly, this diagram can be expanded a bit. WebAssembly works alongside JavaScript and will allow developers to push that peak-performance angle even farther. It simply expands the range of code an engine can run fast in the browser.

So that's my talk for today. If you're interested in some of the things you heard today and care about the types of optimizations we're doing, or the language features JavaScript offers, I encourage you to go take our survey. We put it up at bit.ly slash v8lang survey. It lists a bunch of upcoming features and proposals and asks you to rate how exciting they are to you, and it will help us decide what we come on stage and talk about next year. That's all for today. Thank you very much. My name is Seth Thompson.