My name is Thomas Nattestad. I'm the product manager for the V8 team on Chrome, as well as for WebAssembly. Today I'm happy to share with you a little bit about what the V8 team has been working on over the last year, some of the things we plan to tackle next, and a few recommendations about how to write your JavaScript.

V8, for those of you who don't know, is the JavaScript engine that powers all JavaScript execution, not just in Chrome but in other places such as Node.js. To briefly provide a sense of scale: every single day, V8 spends more than 31,000 years of processing time executing JavaScript written by developers such as yourselves. That means that when the V8 team makes even a 1% improvement to the performance of the engine, it translates into more than 300 years of saved user time every single day. Most importantly, though, we've just recently gotten a new Twitter account at @v8js, and I encourage anybody who hasn't already followed it to do so.

To start off, I want to introduce V8's mission statement: to speed up real-world performance for modern JavaScript, and to enable developers to build a faster future web. I want to begin by focusing on one seemingly innocuous phrase within this mission statement: real world. The real-world web is the one that each and every one of us uses every single day, but that also means the real-world web is different for everybody. Part of the job of the V8 team is to make sure that the investments and improvements we make in the engine actually translate into real benefits for end users. Toward that end, I want to talk to you about our work on real-world benchmarking.

To start with a very brief history of benchmarking: when modern JavaScript engines were first getting off the ground, we were able to use what are called microbenchmarks, which essentially test each language feature in isolation. These were great for picking the low-hanging fruit of engine optimization. But as engines became more and more performant, we had to make various trade-offs in their performance, and it became much more important to have a holistic, representative sample of the JavaScript people were actually executing. Toward that end, we started using what we call static test suites, which are essentially collected JavaScript workloads that you can run your engine against to get back results.

To give an update on two very useful static test suites that we pay a lot of attention to on the V8 team: Speedometer and ARES-6. Speedometer is a benchmark that implements a real to-do MVC app in a variety of frameworks, such as Angular, Backbone, Ember, and others, and then runs that app through its paces, adding to-do items and removing them. On this one, I'm happy to say that we are 22% faster than just a year ago. In addition, we pay attention to ARES-6, which aims to be more forward-looking: it tests new ES2015 features and beyond, such as classes and for-of, as well as more future-looking JavaScript workloads. We've only been tracking this benchmark for about half a year, but in that time we've already managed to improve it by 40%. So really great results from the V8 team there.
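To make the distinction concrete, here is a minimal sketch of the shape a microbenchmark takes; this is an illustrative toy, not a test from any actual suite. It times a single language feature in isolation, which is exactly why it says nothing about how features interact in a real application.

```js
// A toy microbenchmark: time one language feature in isolation.
// Illustrative only; not taken from any real benchmark suite.
const start = Date.now();
let sum = 0;
for (let i = 0; i < 1e6; i++) {
  // The "feature under test": Array.prototype.map with an arrow function.
  sum += [1, 2, 3].map(x => x * 2)[0];
}
console.log(`map + arrow function: ${Date.now() - start}ms (sum: ${sum})`);
```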
To return to our timeline: static test suites have been really useful, and they've been great at empowering the team to optimize the engine. But we think we can still do better. Returning to our mission statement, we ultimately care about that real-world aspect, and what better representation of the real world is there than actual real-world web pages? On the V8 team, we have a set of real pages that we use; you heard them briefly mentioned in the keynote. We call them the top 25. They're snapshots of some of the most popular pages on the web, with some diversity thrown in for size, scale, and functionality. The list includes very useful sites such as LinkedIn, Instagram, and, most importantly, Taylor Swift's Twitter page. On these top 25 sites, I'm happy to say we've improved by as much as 5%, which, at the scale those pages operate at, translates into massive user savings.

In addition to the loading stories for these real-world pages, we also have interaction stories. To highlight just one: this is reddit.com, which is particularly close to my heart, where we were able to speed things up by as much as 10%. Among the real-world user-interaction stories is Imgur media browsing, where we actually interact with the media on Imgur and benchmark against that real-world scenario. There, we've gotten 24% faster.

I want to return to the mission statement and look at a second piece of it: performance. Specifically, I want to discuss the work the team has been doing to speed up the core of the V8 engine. To start this discussion, let me introduce an enemy of us all: jank. We've all experienced jank, but to describe what it is, have a look at this nice, smoothly swinging pendulum, courtesy of Lin Clark, and then compare it to this horrible atrocity, where we've dropped just a single frame of that pendulum. You can really see the stutter on the right-hand swing. It's extremely infuriating, and we've been working hard to get rid of it.

One of the things that causes jank on web pages is the garbage collector kicking in. The main thread, which is responsible for painting content to the screen, is also responsible for a lot of other things, including executing JavaScript and doing garbage collection. In this example, memory grows over time until the garbage collection system has to kick in and actually collect the garbage, and that can cause the frame skipping you see along the bottom. To combat this, we started what we call Project Orinoco, a mostly parallel and concurrent garbage collection system. To explain what that actually means, let's return to our example and look at the first phase of Orinoco: splitting that very large chunk of garbage collection into smaller pieces, so that we can better utilize the idle time that exists between frames, do the garbage collection then, and prevent large blocks of garbage collection from causing frame skipping. But on systems with very little idle time, this can still occasionally cause frames to be skipped.
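V8 schedules these incremental steps through the browser's internal idle-task machinery, but the same scheduling idea is visible to page authors through the requestIdleCallback API. The sketch below is an analogy to make the concept concrete, not V8's actual implementation; buildWorkQueue is a hypothetical placeholder.

```js
// Splitting one big chunk of work into idle-time slices: the same
// scheduling idea Orinoco applies to garbage collection internally.
const tasks = buildWorkQueue(); // hypothetical: an array of small work functions

function onIdle(deadline) {
  // Only do work while the current frame still has idle time left.
  while (deadline.timeRemaining() > 0 && tasks.length > 0) {
    tasks.shift()(); // run one small unit of work
  }
  if (tasks.length > 0) {
    requestIdleCallback(onIdle); // pick up again in the next idle period
  }
}

requestIdleCallback(onIdle);
```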
We've now taken Orinoco to the next phase and started moving garbage collection off the main thread. We still leave some small slivers of garbage collection on the main thread, but we're able to take the majority of it off the main thread to better utilize the multiple cores that modern machines ship with. This has had measurable impact, and I'm happy to say we now do as much as 56% less garbage collection on the main thread.

To take a step back for the next piece of this discussion, I want to go over two classes of trade-offs that we often have to consider when designing the engine. The first is the trade-off between fast startup, getting going with your JavaScript execution quickly, and longer-term peak performance. The second is between keeping a low memory footprint and optimizing your functions using all of the available memory. You might ask: can't I just have both? Luckily, with V8's new architecture, that's exactly what you get.

V8 has just recently completed the transition from its old architecture to its new one. To go over the pieces very briefly: we have Ignition, an interpreter, which makes it extremely fast to get going with some amount of execution very quickly, while taking up a very low memory footprint. That makes it perfect for functions and code that you execute just once or twice. On top of Ignition we have TurboFan, our optimizing compiler. Its job is to take the functions that get executed many times, use the free time we have after startup and the memory we have available, and optimize those functions maximally, generating the peak performance that's necessary for really great longer-term experiences.

To explain what a monumental shift this has been for the team, I want to go over the re-architecture very briefly. Here you see a diagram where we start with our source file and put it through the JavaScript parser. In the old architecture, we would feed it into what we called Full-codegen, which generated unoptimized code. When we then wanted to optimize it with our old optimizing compiler, Crankshaft, we actually had to go all the way back to the JavaScript source file, re-parse it, and feed the result into Crankshaft. With the new Ignition and TurboFan architecture, however, Ignition generates an intermediate bytecode format, and TurboFan optimizes directly from that bytecode, without having to re-parse the file.

I want to show off this commit, because it's one of my favorites. It's the commit where we removed 130,000 lines of old architecture code and replaced them with just two lines to connect the new plumbing. We were all holding our breath wondering, is it going to work? And it did, so we were very happy.

One of the big advantages of this new architecture is that we're taking a new approach in how prescriptive we are about how you should write your JavaScript. In the past, we had various patterns and anti-patterns that we recommended you follow if you wanted your JavaScript to run performantly.
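If you're curious what Ignition's bytecode looks like, V8 exposes it behind debugging flags, and since Node 8.3 ships this pipeline, you can inspect it from Node as well. Treat the exact flags as an assumption on my part; they're internal V8 options and can change between versions.

```js
// square.js: a function small enough that its bytecode is readable.
function square(n) {
  return n * n;
}
console.log(square(4));

// Assumed invocation (V8 debug flags, available via Node 8.3+):
//   node --print-bytecode --print-bytecode-filter=square square.js
// This prints Ignition bytecodes such as Ldar, Mul, and Return:
// the intermediate format that TurboFan optimizes from directly,
// with no re-parse of the source.
```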
But that's really a sad way to have a language, because you're basically saying: here are all these amazing features, but don't use any of them if you actually want your code to run well. With this new architecture, we're finally able to say: go out there, write the idiomatic JavaScript that makes the most sense for your application, and trust that the V8 team will pay attention to how you're writing it and optimize accordingly. We've really gone from a list of don'ts, like don't use let/const, don't use try/catch, to saying: please do use all of the features available to you in the language to write the idiomatic JavaScript that makes the most sense to you.

To highlight just one example: in older versions of Chrome, there were various patterns around try/catch. We didn't really recommend that you use it, but there was a weird function-wrapping trick, the one you see here on the left, that could get you a 27% improvement over the more idiomatic version on the right. I'm happy to say that in Chrome 60, both of these implementations now run equally fast, and in fact both run substantially faster than they would have in, for example, Chrome 51. I'll sketch both patterns in a moment.

To once again return to our mission statement, I want to focus next on another word: modern. For the V8 engine, modern means speeding up the bleeding edge of JavaScript. We do this by actively participating in TC39, the standards body responsible for ECMAScript, or ES, and by putting additional work into optimizing the language features that come out of that process. We track all of these features and their speed, and we've gotten very fond of graphs like this one, where you see a feature executing in around 500 milliseconds, and then we're able to make some performance improvements and get it down to just 200 for the same workload. You also have graphs like this one, where things creep up for a few commits, and then, boom, we land the optimization and really improve things. And here's my personal favorite, where we took a workload, this is tagged template strings, that took 250 milliseconds and reduced it to around 11 or 12 milliseconds. Now, I'll be the first to admit that not every graph for these features looks like this, but the hope is that over time the V8 team will tackle more and more features, make more and more of their graphs look like this, and really speed up these new features. We write about a lot of these optimizations in our blog, which you can find at v8project.blogspot.com, if you're interested in the lower-level details of how these features actually get fast.

Beyond the fact that we keep making them faster, another inherent advantage of ES2015 features is their size. When you compare ES2015 code to the same code transpiled with something like Babel, the transpiled result can be significantly larger. To highlight one relatively extreme example, here you see eight simple lines of code: defining a class, extending it, and setting a symbol-named property on it. And then you can see what happens when you transpile that code with Babel: it generates many more lines, with a variety of runtime checks and helpers in there.
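The slides aren't reproduced in this transcript, so here's a hedged reconstruction of the try/catch comparison I mentioned a moment ago; the function names and bodies are mine. On top is the Crankshaft-era workaround, hoisting the hot code out so the function containing the try/catch stays trivially small; below it is the idiomatic version that Chrome 60 runs just as fast.

```js
// Old workaround: keep hot code out of the function with try/catch,
// because the old optimizing compiler wouldn't optimize such functions.
function sumInner(items) {
  let total = 0;
  for (const item of items) total += item.value;
  return total;
}
function sumWrapped(items) {
  try {
    return sumInner(items);
  } catch (e) {
    return 0;
  }
}

// Idiomatic version: just write the try/catch where it belongs.
function sum(items) {
  try {
    let total = 0;
    for (const item of items) total += item.value;
    return total;
  } catch (e) {
    return 0;
  }
}
```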
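The eight-line class example isn't captured here either; a plausible reconstruction, assuming Symbol.toStringTag as the symbol-named property, is below. Run through a Babel ES5 preset, these few lines expand into dozens, pulling in runtime helpers such as _classCallCheck and _inherits.

```js
// Roughly the shape of the slide's example: define a class,
// extend it, and set a symbol-named property on it.
class Animal {
  constructor(name) {
    this.name = name;
  }
}

class Dog extends Animal {
  get [Symbol.toStringTag]() {
    return 'Dog';
  }
}
```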
And it's really cool that we can do this transpilation, but the result is ultimately larger. To highlight one real-world example: this is from a coworker's blog, where he ran the experiment of taking his entire blog, written in ES2015, and transpiling it with Babel. What he found was that the overall size in kilobytes was about half with the plain ES2015 version. On top of that, he found that the parse time of that JavaScript was less than half, saving users time.

One last point I want to make about why it's great to use ES2015 features where you can is a little more subtle: when you write less idiomatic JavaScript, it's actually harder for the engine and for engine developers to optimize that code. To explain that a little further, let's return to the same example we were looking at before. As an engine developer, you look at the version on the left and you can reason about what it is you're trying to get done: you're trying to extend a class and set a property. Understandable. But when we look at the version on the right, it's going to be much, much harder for us to figure out what you're ultimately trying to accomplish. From the perspective of an engine developer, transpiling your ES2015 code can actually act as a kind of data loss that hides the meaning of what you're trying to accomplish, and it can ultimately reduce the level of optimization we're able to achieve.

With all these reasons, you might be starting to wonder: should you still be transpiling with Babel? This question actually came up yesterday during the leadership panel, and Alex Russell answered it by basically saying that it depends: it depends on your users and on what you're trying to do. I'm in the very fortunate position of not having to disagree with Alex Russell, as I don't think that would go over well for me. Instead, I want to expand on his answer by giving you a little detail on how you can actually configure this. So, should you still be using Babel? On the V8 team, we actually think Babel is awesome. In fact, Babel is so awesome that it lets you relatively easily configure it to transpile your ES2015 code, not transpile it, or transpile it intelligently.

To show off just how easy this is, I'm going to try to do it live in a demo, fingers crossed, see what happens. I'll go ahead and sign in, and if we could switch the screens, perfect. Here you see a very simple little test application that I have set up using webpack and Babel. I'm in my package.json file, where I have my beautiful v8-demo name, I know, very creative, and then just a couple of dev dependencies like Babel, http-server, and webpack, along with my two scripts that start webpack and the HTTP server. Pretty simple stuff. index.html is equally simple: we basically just load our bundled JavaScript file, which gets produced from this single source file. That file uses the fetch API to fetch pets.json, does a little bit of nice string templating using new ES2015 template literals along with these nice arrow functions, and then adds the result to a DOM element. And in case you wanted to see it, this is our pets.json, a very simple structure.
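The demo's source isn't captured in the transcript, but a minimal sketch consistent with the description might look like the following; the file names, JSON shape, and element id are my assumptions.

```js
// src/index.js: fetch pets.json, build markup with ES2015 template
// literals and arrow functions, and add it to a DOM element.
fetch('pets.json')
  .then(response => response.json())
  .then(pets => {
    const markup = pets
      .map(pet => `<li>${pet.name} the ${pet.type}</li>`)
      .join('');
    document.querySelector('#pets').innerHTML = `<ul>${markup}</ul>`;
  });

// pets.json could be as simple as:
// [{ "name": "Spot", "type": "dog" }, { "name": "Mittens", "type": "cat" }]
```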
Great. When I actually want to run this, I can say npm run webpack, and that generates our files using the configuration we have set up in our simple webpack.config. Then I just run npm run start, which starts our server. Let's switch over here, reload, excellent. Now we're seeing that page load with my adorable little pets. And you can see under Sources that we are currently transpiling: all of that nice syntactic sugar has been removed, and we're using some try/catches, potentially unnecessarily.

Now, to show off how easy it is to configure this: there's something called babel-preset-env, an npm module that integrates with Babel very easily and lets you decide what to transpile based not just on the browsers you're targeting, but on some intelligent settings as well. To show just how easy this is to install, I'll go ahead and run npm install --save-dev babel-preset-env, and it fetches that package, fingers crossed, excellent. Now we have babel-preset-env here in my package.json. Beautiful. Then I just need to change this one line of configuration to stop transpiling ES2015. I'm going to cheat slightly by copy-pasting from a blog post that I have here, to save you all the time of watching me type this out, and I'll indent it slightly more accurately. Okay, that's still a mess, but you get the idea. Basically, you give it a set of browser targets that you're actually aiming for; in this case, these are the ones that support all of the ES2015 features. It pays attention to that when you run npm run webpack again, which builds using those settings. Now, when we go in and look at our bundled app.js, we see the nice ES2015 features and the smaller code size as well. Just to make sure we see it in production as well, we'll switch back, and boom, there it is in the Sources panel too. All right, thank you.

So that was just one demo highlighting how you could turn off ES2015 transpilation altogether, but there are other options as well. You can give babel-preset-env different conditions, such as "> 1%, last 2 versions", and it will check against a table of supported ES2015 functionality and intelligently transpile only the things it needs to. That's really great, because you can go home, set these settings, and then, as adoption grows and browser upgrades roll out, you'll automatically start transpiling less and less. Beyond that, there's even more you can do: there's a way to have your transpiled and non-transpiled code living together, and there's an excellent blog post about it by one of my colleagues, the one I just stole that piece of code from. You can find it at philipwalton.com/articles/deploying-es2015-code-in-production-today, or you can just search for "deploying ES2015 code" and it's the first link on Google, Yahoo, et cetera, whichever you care to use.
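For reference, here's roughly what that wiring looks like; a hedged sketch for the Babel 6 / babel-preset-env setup the demo uses, with illustrative browser queries rather than the exact strings from the demo.

```js
// webpack.config.js: babel-loader wired up with babel-preset-env.
const path = require('path');

module.exports = {
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'app.js',
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            presets: [
              ['env', {
                // Give the preset browser targets and it transpiles
                // only what those browsers are missing. Targeting
                // only full-ES2015 browsers lets ES2015 syntax pass
                // through untouched; a query like this one transpiles
                // less and less as browser adoption grows.
                targets: {
                  browsers: ['> 1%', 'last 2 versions'],
                },
              }],
            ],
          },
        },
      },
    ],
  },
};
```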
That post is really great because it uses the module/nomodule trick that some of you might have seen, to have the browser conditionally load whichever version it needs.

To return to our mission statement one last time, I promise, I want to highlight that last word: web. You'll notice we don't say build a faster future Chrome, or even a faster future browser. We say a faster future web, very intentionally, because we care about the entire holistic experience. For V8, that means we love Node. Over the last year and a half to two years, we've been getting a lot more involved with the Node community. Given Node's growing popularity, it's likely that many of you have used Node directly, and even those of you who haven't have likely used some kind of Node-based tool, such as the ones we were just looking at, webpack and Babel, but also tools like TypeScript. We've been engaging a lot with the community, and it's really borne some great results. One of the structural changes we've made, which we announced in the keynote, is that you can no longer land a commit in V8 if it breaks Node.js. That really makes Node.js a first-class citizen within V8, which we're extremely happy about.

In addition, we've been putting together an early version of a benchmarking tool set that benchmarks the tools developers such as yourselves use every single day. It includes things like TypeScript, Babel, and webpack, the ones we just looked at. We recently got this together, and over just the last three versions of Chrome that we've been able to test, we've already gotten 38% faster on that tooling benchmark. That ultimately means that every time you kick off your build process, you see the results more quickly and can iterate faster, bringing you closer to that future web we all want to get to.

As one last note on the results of this collaboration: we were able to land the new architecture I discussed earlier in Node 8.3, which is going into LTS, long-term support. The results have been really great. We have a very active Twitter community that has been tweeting enthusiastic results at us. Here you can see the dip on the bottom right, where they were able to use the new version of Node with the new architecture. You also have people doing benchmarking, others deploying to their systems, and people overall just being very enthusiastic and showing their excitement, which we love and encourage you all to continue.

And that basically brings me to the end of my talk. As a product manager, I love giving action items, so let me leave you with two recommendations to go home and consider. First, always write the idiomatic JavaScript that makes the most sense to you; don't follow the patterns and anti-patterns you might have heard about before, because we're doing away with those. Second, go back and seriously consider how much of your ES2015 code you really need to be transpiling, and whether you can start transpiling more selectively to gain some of the awesome benefits of ES2015 code. That's the end of my talk.
Thank you very much for paying attention. Thank you.