Hi. Thanks for coming, everybody. My name is Seth Thompson, and I'm a product manager on the Chrome web platform team. I'm here today to talk to you about the V8 JavaScript engine. In particular, we're going to spend a little bit of time talking about V8 and how it's used in Chrome by real users, and also how it's used in other contexts, for example in embedders like Node.js on the server. We'll spend some time on how the JavaScript language itself is changing, and then talk about two exciting new projects on the horizon. So let's get started.

First, I'd like to go over V8's mission. This is the mission that drives the work we do to speed up Chrome and V8, and it reads: speed up real-world performance for modern JavaScript, and enable developers to build a faster future web. There are two parts to this that are equally important. The first is that as we make the web and browsing with Chrome faster, we make sure we're making the right performance enhancements for the JavaScript that's out there today, as it's currently written. The second part is just as important: for those of you in the audience who are building new websites, we want to empower you with the right tooling, the right language features, new additions to the language, and new resources, so that you can write faster apps to begin with. Both of those parts together will ensure that the web is fast and the ecosystem stays healthy. So that's what drives us, and I think it's important, which is why I wanted to start with it.

Now let's talk about this notion of real-world performance. What is it? What performance isn't real-world? The important part about this focus is that it reflects websites as they exist and as real users browse them in the wild. To see why that matters, we're going to take a quick detour into some JavaScript history. This is a chart of how JavaScript has been measured and benchmarked in the past. In the early days, engines used microbenchmarks to determine whether they were getting faster version to version, or whether one engine was faster than another. Microbenchmarks served their purpose, but they were highly tuned for very specific language features and, in some cases, fairly artificial. You may have heard of the SunSpider benchmark. Well, SunSpider has code that runs bit-mask operations 10,000 times in a hot loop, or measures regex string replacement, again 10,000 times, in isolation from other features. That's useful for finding things like regressions, but it's not necessarily representative of the code that actually runs in the browser on real applications. The second big era of measurement was the introduction of static test suites and benchmarks like Octane from the V8 team, or Speedometer and JetStream. These are all names you've probably heard. The goal with these newer benchmarks was to include code that was more reflective of real applications. Octane, for example, contains the code that decodes PDFs in Chrome, PDF.js. It also includes a Game Boy emulator. So these are real applications; they're no longer just testing specific features. But at the end of the day, these were still static test suites. Since 2012, when we introduced Octane, we've really only made bug fixes, and the web has changed so much in the last four and a half years.
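To make that microbenchmark point concrete, here is a sketch of the kind of hot loop being described. This is illustrative, not SunSpider's actual code: one operation hammered thousands of times, in isolation from everything else a real page does.

```js
// A SunSpider-style microbenchmark (illustrative, not the actual
// benchmark code): tight bitwise work in a hot loop, and nothing else.
function bitMaskBenchmark() {
  let result = 0;
  for (let i = 0; i < 10000; i++) {
    result = (result ^ i) & 0xffff; // one narrow operation, repeated
  }
  return result;
}

console.log(bitMaskBenchmark());
```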
In retrospect, Octane is really more a measure of computationally expensive peak performance. So I'm excited, because I think we're on the verge of a new era: the third era of measurement will be measuring real web pages as they're accessed by real users. There's no substitute for the real thing. And I should note why you have to be good at measuring JavaScript and the performance of your engine: every new feature you add and every optimization you make is driven by being able to measure whether you're actually faster than before.

So at the beginning of the year, the V8 team took a hard look at which sites were most commonly browsed across the web. We instrumented Chrome to record special traces of V8's JavaScript execution when browsing sites you'd be familiar with: Google, Tumblr, Twitter, TechCrunch, Flickr, and international sites too. Everything under the sun that users were actually browsing. And we were able to compile that data into a really amazing new visualization, something we hadn't done before. The details are pretty hard to read, but the important part is that along the side we have a list of the top 25 websites. These are things you're familiar with: Reddit, Wikipedia, Yandex from Russia, Amazon, Taobao. We measured how V8 runs a page load, and in particular where it spends its time. You can see that we measure the execution of the JavaScript code itself, but also things like garbage collection, API calls to the web platform, and the compilation and parsing of the textual JavaScript before we can even start running it. All of these are broken out. They're not all the same, but there are patterns in the way these websites use JavaScript.

Now here's the crazy part. We compared that to what Octane measures, and Octane is not necessarily reflective of these top sites. Octane, you can see, spends very little time in parsing and compiling and much more time running already-optimized JavaScript code. Octane may be useful, and it is useful; it's reflective of things like decoding a PDF or running a Game Boy emulator. But for the long tail of sites that are more attuned to everyday browsing, there's a huge difference in the types of code and in the way V8 executes that JavaScript. This was a really important conclusion for us: going forward, we need to constantly measure, at scale, the web as it exists today for users. And I have to stress that we're at the dawn of a really new era; we've only just managed to put together some of this initial data. But already, in the last couple of months, we've used it to figure out which JavaScript built-ins appear most commonly on real web pages. We were able to say not just that a particular built-in was used most often, but that it was used enough on a given page to contribute substantially to that page's load time. And we used that knowledge to optimize things like Object.assign, Object.keys, and the in operator. I have to stress that the individual optimizations are not the important part. The important part is that we have data to rank these and say: let's target an optimization for Object.assign before we write one for Object.keys, because it will help the most web pages soonest. So this is really exciting.
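To make those built-ins concrete, here are illustrative sketches of the kinds of call sites that dominate real pages. These are sketches of common patterns, not V8 internals or the actual traced page code:

```js
// Object.assign: shallow-merging options objects, a pattern that
// shows up constantly in framework and library code.
const defaults = { theme: 'light', lang: 'en' };
const userPrefs = { lang: 'de' };
const settings = Object.assign({}, defaults, userPrefs);

// Object.keys: enumerating an object's own properties.
for (const key of Object.keys(settings)) {
  console.log(key, settings[key]);
}

// The `in` operator: property and feature checks.
if ('theme' in settings) {
  console.log('theme is configured');
}
```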
And I think the real benefits of this way of measuring JavaScript are yet to come. OK, so now let's talk about the bread and butter of what V8 has been working on for the past year: architecture changes we've shipped or are about to ship. JavaScript is a dynamic language, and in the process of running JavaScript you end up creating a lot of objects. There's an inherent trade-off between the ability to run fast and the consumption of memory on the system, in the browser, or on your mobile phone, and at all times you have to make sure you're not consuming too much memory. So garbage collection is important. By implementing something like Orinoco, our new mostly parallel and concurrent garbage collector, we're able to get higher garbage-collection throughput and lower memory overall, and to do it with less jank. Jank matters because garbage collection runs on Chrome's main thread, so any time spent cleaning up unused objects on the heap results in a small pause in JavaScript execution. In the worst cases, this can mean a visible pause in an animation playing on the page. But we're excited to announce that with this new architecture, the median garbage collection pause, the 50th percentile on Windows devices in the wild, is already down to 4.4 milliseconds. So short. And that's four times better than just four Chrome releases ago. So we're excited that Orinoco can bring even more improvement in reducing the pause times that impact JavaScript execution.

Now, there's another important aspect to garbage collection. On the rendering side of Chrome, when you're trying to get a smooth page or a smooth browsing experience, think rendering a video, playing a game, or even infinite scroll on a mobile device, where you're scrolling quickly and elements are shooting by on the screen very smoothly. To achieve that, you need 60 frames per second, and that gives Chrome 16 milliseconds between frames to compute each frame, render it, and paint it to the screen. So the process looks something like this. At the bottom you can see the ticks that represent the actual frames we need to hit for a smooth browsing experience. As Chrome renders these frames and JavaScript computes them, memory naturally climbs as you create JavaScript objects, which then become unused. Before this idle-time improvement, when memory reached a certain threshold, garbage collection just automatically kicked in and brought the memory down. But because it's on the main thread and blocks JavaScript execution, running a garbage collection operation at an inopportune moment creates a missed frame. You can see it here: it just so happens that memory reached the threshold right before Chrome was about to paint a frame, and garbage collection kicked in and disrupted the painting of that frame. Well, the renderer in Chrome now exposes a lot more information about how long it's taking to compute frames and when it actually has idle time before it needs to paint the next one. And by piping information from that rendering scheduler into the V8 garbage collector, we're able to schedule much more intelligently when we perform garbage collection.
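That scheduling happens inside the engine, but the same frame-idle signal is exposed to page authors through the requestIdleCallback API, so you can apply the same trick to your own non-urgent work. A minimal sketch of the pattern, assuming a browser that supports the API (Chrome 47 and later):

```js
// The engine schedules GC chunks into idle time automatically; a page
// can schedule its own low-priority work the same way.
const tasks = [/* small, chunked units of non-urgent work (functions) */];

function doIdleWork(deadline) {
  // timeRemaining() reports how long until the renderer needs the
  // main thread back to produce the next frame.
  while (deadline.timeRemaining() > 1 && tasks.length > 0) {
    tasks.shift()(); // run one small chunk, then re-check the budget
  }
  if (tasks.length > 0) {
    requestIdleCallback(doIdleWork); // resume in a later idle period
  }
}

requestIdleCallback(doIdleWork);
```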
And with this new knowledge, we can even perform more garbage collection than before, in smaller chunks hidden in the idle time between frames. An improvement like this keeps overall memory lower while keeping the renderer able to hit 60 frames per second. And we've seen the statistical improvements as well. On real websites, we measured infinite scroll across sites like ESPN and Yahoo, and on certain pages we were able to completely hide garbage collection in the idle time between when frames were rendered to the screen. And for long-running pages, imagine you open Gmail and it sits in a tab on your desktop for hours at least, but often weeks. In situations like that, where the page runs for a really long time, this proactive scheduling of garbage collection during idle time, when you're not doing anything with the page, allows us to reclaim up to 50% of the heap and lower the memory usage of a long-running page. Both of these are really exciting, and they contribute to a smoothness in the browsing experience, which is really important.

OK, another big architectural change is an upcoming interpreter. V8 has historically had an architecture built entirely of compilers: we take JavaScript code, parse it, and compile it to native code. This allows for truly blazing-fast speeds, but the parsing and compilation incur some time before you can execute the newly compiled code, and compiling JavaScript into machine code also produces machine code, which takes up space in memory. So the interpreter is designed for two things: to improve startup by lessening the time we usually spend compiling, and to lower the total memory usage of V8 and Chrome, which is particularly important on low-memory devices. There are mobile phones that often have only 512 megabytes of memory, and every last bit of memory savings we can squeeze out really improves the user experience. In fact, we've already measured a 5 to 15% reduction in Chrome's total memory usage when we enable the interpreter. But you may be thinking: aren't interpreters slow? Here's a diagram showing how these trade-offs can work. This is an example page, and different pages, as you saw from the earlier screen, run JavaScript differently. But in certain cases it can actually be beneficial, and take less time overall, to use an interpreter that has slightly slower JavaScript execution but is faster at parsing and compiling. So V8 will now have a third tier in its architecture: we're adding this interpreter, which we're calling Ignition, on the left-hand side, alongside our baseline JIT and our optimizing compiler.

OK, so let's talk about something that probably impacts the work you do more directly: the JavaScript you're writing. ES6 and ES7, two major language updates. ECMAScript is the language specification that defines JavaScript, its syntax and semantics, and it's written and updated by a standards body called TC39. Every so often they release a new version of JavaScript, ES X, and put out the spec text. ES6 was such a big update to the language, with so many new features, that the spec text describing the features was over twice as long as the previous version's.
So there's just a massive number of features in there. Any one of these could take up a whole lecture, but briefly: there are classes, to make object-oriented code easier to write; arrow functions; destructuring; default, rest, and spread parameters; template strings for simplified templating; let and const, with new scoping; iterators and for-of, which make it easier to iterate over arrays, for example; generators; new dictionary objects: Map, Set, WeakMap, WeakSet. We have proxies and Reflect, which let you intercept things like property access and provide custom behavior; symbols; you can now subclass built-ins, so you can extend the String object, for example; Unicode regular expressions; and promises, which make asynchronous code a lot easier to write. Just a massive number of features, and we've been chipping away at their implementation. In Chrome 30 through 39 we implemented these features, in Chrome 45 we implemented more, and so on up to the present. But we didn't just work on ES6 features. ES7 was a language update released earlier this year, and it includes fewer features, since it came only a year after the previous update, but it has Array.prototype.includes, an exponentiation operator, and a bunch of spec updates and changes to the previous version. We've implemented features here, too.

So we're excited to announce that as of Chrome 52, V8 will natively support ECMAScript 6 and ECMAScript 7 (ES2016). It's a huge accomplishment; the team has been working really hard. I should mention that one of the most anticipated ES6 features is modules. Modules require integration with the DOM, on the HTML side of the platform, and that side of the integration is still being worked on, so modules don't work out of the box yet; but as soon as the HTML integration lands, they will. And tail call optimization was also originally specified in ES6, but it's under active discussion at the standards committee, and there's a proposal to change the syntax of how it operates, so that isn't included either. But at the end of the day, this means that with Chrome Canary today, and with Chrome 52 when it hits stable, you can write and ship ES6 applications.

And we're not stopping there. ES.next is the colloquial term for future features that are still being standardized, and coming soon is async/await. We've actually landed a prototype implementation in V8, and we're hoping to ship it very soon to make dealing with asynchronous code a lot easier. And if you follow the npm ecosystem, there was the whole left-pad hassle, but we're really excited to announce that JavaScript will support left pad out of the box with the padStart and padEnd methods, and we've landed a prototype of that too, so that should really improve things. Yeah. But you're probably using ES6 features already, through a transpiler like Babel or Traceur. Why does it matter that the browser supports these natively at all if we can just transpile them? Well, there are four important reasons. First, polyfills and transpilers can't replicate all features; there are certain features that simply can't be polyfilled with existing ES5 code and require native browser support. Proxies are one example. Second, there's less overhead. On the left here is a Fibonacci sequence using an iterator in ES6, and next to it is the transpiled code that allows it to run in older browsers.
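The slide's code isn't reproduced in this transcript, but a sketch of what an ES6 iterator-based Fibonacci looks like, the compact left-hand side of that comparison, might be:

```js
// Roughly the kind of ES6 code the slide refers to (illustrative,
// not the actual slide code): an infinite Fibonacci sequence as a
// generator, consumed with for-of.
function* fibonacci() {
  let [prev, curr] = [0, 1];          // destructuring assignment
  while (true) {
    yield curr;
    [prev, curr] = [curr, prev + curr];
  }
}

for (const n of fibonacci()) {
  if (n > 100) break;
  console.log(`fib: ${n}`);           // template string
}
```

A transpiler has to rewrite the generator into a state machine with runtime helpers, which is where the size and parse-time overhead discussed next comes from.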
I mean, there's just so much overhead there, which you have to ship to your users and which the parser has to parse before you can even start executing. So there are benefits to enabling this natively on the overhead side. Third, debugging is a lot easier. Even with something like source maps, if the browser is running ES5 code and you're trying to step through and debug it, there's a mental context switch you have to perform between the ES5 the browser is executing and the ES6 your source is written in. Debugging is easier when Chrome supports these features natively. And fourth, these features are so new that we haven't had time to optimize them yet, but over time it's much more feasible for the browser to optimize native features than for you to write fast versions in userland code. Something like array iteration is much faster if done natively in the browser, and with time, native ES6 will be much faster than the transpiled versions.

So with that, let me quickly show you how cool it is to run this stuff natively. I'm going to switch to a demo here. And yes, I think we're ready. This is TodoMVC. I don't know if you're familiar with it: a project that shows the same to-do application written in many different flavors of JavaScript. And we have a version that's been around for many months, written in ES6, which you can download today. But in their installation instructions, they say you should compile the ES6 to ES5 using Browserify and Babel. Well, I don't want to do that; I want to do this more efficiently. So I've got the source here, and you can see, oops, let me quickly open it up here, you can see that this is really ES6 source. You've got imports and exports; you've got classes and arrow functions. I'm just going to run this through one step. Now, this is not transpilation. This is something called Rollup, a tool that takes import and export statements, concatenates the files, and puts everything in the global namespace. The reason I have to do this is, as I mentioned, that modules are still not supported natively. But the result of that process is still exactly the same ES6 code, and this we can run natively in the browser. Let's check it out. OK, so hopefully this works. Oh, that's not the same demo; this is on port 8080. Yes, I think this is my to-do application. Nope, that's supposed to be milk. But it's running, it's working, and this is the coolest part: if I open the inspector and look at my Sources panel, say I had to debug something or step through it, I'm actually seeing the ES6 natively in DevTools. There are no source maps, there's nothing; this is appearing right as it is. Yeah, it's really very helpful when you're debugging something. OK, let me jump back to slides quickly, and we'll continue with something else.

OK, now we've been talking a bit about debugging and a bit about DevTools. One of the things that's most frustrating to me when I'm debugging is running into a 500 Internal Server Error. You've probably run into this yourself. You're in the front end, in a particular mental context, in a particular codebase, working on your front-end app. You hit an API, and it returns a 500 Internal Server Error. And to debug the logic of the back end, you've got to totally switch contexts. You often have to move to a completely separate codebase. You have to kill your existing app process.
You've probably got to start up a debugger that's different from the one in DevTools. Then you might fix the problem, but you've got to restart your app, and only then are you finally back in the front end. This cycle is just such a hassle to move back and forth through. So: DevTools debugs JavaScript via V8, and V8 powers Node.js. We thought, why is it so hard to debug Node.js using DevTools? There do exist some UI solutions for debugging Node.js, but none of them is a proper, out-of-the-box integration between DevTools and Node.js. So as of this week, I'm happy to announce that we've submitted a pull request to Node core to add proper DevTools support to Node.js. We're crossing our fingers that the review goes well and that this can be merged into Node core. What it promises is the ability to use the same binary you're already using to run your app, with one additional command-line flag, and suddenly you can introspect your app using DevTools in any Chrome window, with all of the DevTools features available immediately after they ship. It's so much faster, and there's just so much less machinery between the two environments.

So let me give a quick demo of this as well. OK. I'm working on an app here. It's a progressive web app called Weatherwonder. And I have a relative timestamp here that I just know is wrong: it says 46 years ago. I know my weather data was updated more recently than that, so something's wrong. My normal experience here, and you've probably done this many times, is to inspect the element and try to figure out where that string is coming from. In this case, I can save some time: I know it's coming from a request to a Node.js back end that I'm running, and it's getting a JSON response. So I can verify here that the timestamp coming back from the back end already has this error in the string. Now, because this is normally beyond the reach of DevTools, in the past I'd have to completely switch context, go through that hassle I mentioned, try to figure out why the string is being returned as the wrong time, and then come back into the front end to see it updated. So here's something cool. I've got gulp running my front end, so that's just the Sass and the HTML of the app. But I've also got a simple Node process running my back end; it's a single API. Now, I built that pull request into a special binary, called node2, but this is otherwise a normal Node installation. And if I just start it up again, but this time pass in an inspect flag, that's it, just one flag, I get back a URL that I can paste into any Chrome window. So let's do that in Chrome Canary, to get some of the latest DevTools features. And just like that, we can see our Node application running in DevTools.

So let's take a moment to see what we've got. This is the Sources panel. It contains all of the debugging you're used to currently doing, stepping over things. But it also contains some of the more recent features DevTools has shipped, things like async stack traces, to debug across asynchronous contexts, something like setTimeout. It contains inline values, and blackboxing, for when you're trying to debug your code but don't want to dive deep into library code. And all of these features just work out of the box.
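For reference, the workflow being demonstrated boils down to one flag. The sketch below shows it as this work eventually landed in Node core; at the time of the talk it was still a pull request, so treat the exact flag and output as illustrative:

```js
// server.js: an ordinary Node app; nothing in the code itself
// needs to change to make it debuggable.
const http = require('http');

http.createServer((req, res) => {
  res.end('hello from the back end\n');
}).listen(3000);

// Start it with the inspect flag (shown as it later landed in Node
// core; it was still under review when this talk was given):
//
//   $ node --inspect server.js
//   Debugger listening on port 9229.
//   To start debugging, open the following URL in Chrome: ...
//
// Paste the printed URL into a Chrome window, and the full DevTools
// front end (breakpoints, console, CPU profiler) attaches to the
// running Node process.
```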
We've also got a console, which is really useful if you want to interact with your app live. Just to prove that this is Node.js running in DevTools, let's take a look at our global object. This is Node's global object. Off of global is process, and off of process is the version: v7.0.0. This is bleeding-edge Node. So this really is an interactive DevTools console looking at my Node.js main context. And we've also got profiling. The same profiling tools you use on the front end to figure out where you're spending time in JavaScript, you can use on the back end. You can record a JavaScript CPU profile, and it will actually profile your Node.js code. There was an amazing demo of this just a couple of days ago at I/O, where it was used to profile ESLint, and they found a 50% increase in speed just using this CPU profiling view. So it's incredibly powerful, and I think these tools will let you introspect your app in totally new ways.

But let's quickly try to fix this bug. Now, that particular JSON was coming from this route right here, city.json in my Node application. So, just like I would on the front end, I'm just going to set a breakpoint and try to hit that functionality again. If I refresh my route, you'll notice it's not responding; it's still got that circle spinner there. Well, that's because I've hit the breakpoint. I'm debugging my back end. And I can use all of the tools you're familiar with. I can step over, and we can see inline values. I can see my city came back correctly; I've got a response that didn't get caught in that if block. And I'm going to continue down to this timestamp, just continue to here, and let's see what we've got. Now, the nice thing about the console is that it shows me the context of the spot where I stopped at the breakpoint. So if I type timestamp, that's undefined, because we haven't reached that line yet. But currently.time: there's a timestamp. And I've got my moment library, a Node library I'm using to convert a timestamp into a relative time, and that's just a regular object, so I can take a look and introspect it. Now, one thing I think might be the problem here is that the timestamp I'm getting from the weather forecasting API is coming back as a Unix timestamp, I believe. But moment's a JavaScript library, so it's expecting things in the form of a JavaScript timestamp. Well, look at those two numbers. The JavaScript timestamp is longer. That's because JavaScript timestamps are measured in milliseconds, while a Unix timestamp is seconds since the Unix epoch. So this is a perfect example of something that would normally be so difficult to track down using different debuggers across different environments. But by putting the front-end and the back-end environments both into DevTools, I can really quickly see: oh, hang on, this response from my back end is in a different format. So I believe the fix here is just to multiply the value coming back from the API by 1,000, to convert seconds to milliseconds.

But here's the other cool part. Instead of having to go over to my source, make the fix, and restart my Node process, DevTools already includes LiveEdit. You can modify JavaScript on the fly. So I'm going to go here and just type in, right on this screen, multiply by 1,000. I hit Save, and that's it. It hot-swaps the new code, and I don't have to restart the process at all.
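A sketch of the bug and the fix, not the app's actual code: the weather API returns a Unix timestamp in seconds, while moment, like JavaScript's Date, expects milliseconds.

```js
const moment = require('moment');

const apiTime = 1463702400;        // seconds since the Unix epoch

moment(apiTime).fromNow();         // wrong: read as milliseconds, which
                                   // lands in early 1970: "46 years ago"
moment(apiTime * 1000).fromNow();  // the live-edited fix: seconds -> ms
moment.unix(apiTime).fromNow();    // equivalent: moment's seconds helper
```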
So I'm going to continue past that old request and remove the breakpoint. Now let's see: without any other changes, did we fix our front end? Let's just hit refresh there. Look at that: immediately, "a few seconds ago." So the speed at which you can iterate here is just so much better, and the full power of DevTools is exposed to Node.js. We're really excited about working more closely with the Node community, continuing to improve a tool like this, and hopefully shipping it by default as well. So thank you. Let's go back to slides.

OK. So this last thing is totally separate from JavaScript. It's actually a completely new language: WebAssembly. This is something I'm excited to tell you a little bit about. It's a cross-browser, low-level language, designed to run native C and C++ code in a tiny binary format. So why is this important? Well, there are a lot of applications out there on the web, things like games, but also media transcoding, full editing suites, scientific computing. Really, the high end of performance, which you might typically think of as being confined to the desktop; these should all be possible to build on the web. But currently, JavaScript isn't the best solution for delivering native-like speed. So WebAssembly is a project run and designed in a community group that includes all the major browser vendors: Apple, Google, Mozilla, and Microsoft. We're currently designing the format, but it's not just in the design phase. I should also mention that it's an open standard, and it's plugin-free. WebAssembly will give you a native-like code environment that can access the same web platform APIs you're already familiar with; it exposes no new permissions. It's view-source enabled, so you'll keep one of the best parts of the web, where you can just click and look at an app's source from DevTools. It's a familiar toolchain: you'll be able to compile to WebAssembly from LLVM, using an out-of-the-box LLVM distribution. It interoperates with JavaScript, so there's no separate world and no attempt to replace JavaScript; these two are meant to live together. It runs in the same browser sandbox, so it's a secure environment for this code. And it's compatible with something you might already be using, which is asm.js. Now, the main benefit over asm.js is that WebAssembly is a binary format. Instead of shipping your application compiled to textual JavaScript, which can get very large, you ship a binary that's smaller in size and much, much faster to decode.

And I'm really excited to announce that this isn't just a Chrome project; other browsers have embraced it as well. Firefox has already shipped an experimental implementation, and Microsoft Edge has announced that they have an internal build with an implementation running too. You can go to their blog and check out a video. So this is really historic, in that a new language has been embraced so quickly and across all of these different platforms. So let me give a really cool demo of WebAssembly. This is actually the first time any WebAssembly application has been shown live, and this is a demo you can access too, if you get a recent version of Chrome Canary or Firefox Nightly and enable a flag. You can go to the WebAssembly GitHub page, click on the demo, and let's see if we get a native game running. Look at that. This is WebAssembly. Look at how smooth this is.
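For context, here is roughly what loading a module like this from JavaScript looks like. At the time of this talk the JS API was experimental and behind a flag; this sketch uses the shape the API later standardized on, and the file name and export here are hypothetical:

```js
// Fetch the compact binary module, compile and instantiate it, then
// call into it from JavaScript. The second argument to instantiate()
// is the set of JS imports the module can call back into.
fetch('game.wasm')
  .then(response => response.arrayBuffer())
  .then(bytes => WebAssembly.instantiate(bytes, {}))
  .then(({ instance }) => {
    // Exports are plain functions callable from JavaScript; the two
    // languages are designed to live together.
    instance.exports.main();
  });
```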
Now, one thing I want to point out is that this isn't necessarily a cutting-edge game, but it is a game that exists today as asm.js. And with just one flip of a flag on the compiler, we were able to target WebAssembly. So this isn't some whole new platform you have to learn. It's really easy to target if you're already building with Emscripten and making asm.js games. But here's where WebAssembly really shines. I'm presenting on a MacBook Air here, and it's actually a MacBook Air from 2012. So it's a pretty old machine, and you can see that this game is running full screen, and it's so smooth. But not only that: WebAssembly is so efficient and so fast at what it does that I've been able to run the exact same game in Firefox, in a background window, for the entire length of my presentation. Both of these have been running in the background on my 2012 MacBook Air the whole time, and you can see how much more smoothly it runs than existing technologies. So we're really excited to see WebAssembly mature. Yeah. And I'll go back to slides now.

OK. So that concludes this presentation, but I just want to thank you for coming out. If you have more questions about these features, please do reach out to the V8 team. We have a Google Group, and we also have a lot more instructional videos about the way the web platform is changing in general, from I/O, at the YouTube link there. So thank you so much. My name is Seth Thompson, and I appreciate you coming out.