Hi, everybody. My name is Seth Thompson, and I am a PM on the Chrome V8 team. But I'm here to represent the whole team. And it's a big team. We have folks in Munich, San Francisco, Mountain View, New York, and contributors from around the world. So this stuff is really due to their efforts. OK, let's get started. So V8 shares the same name as a race car engine. And I don't know much about cars, or even mechanical engines. But I am fascinated by Formula One for two reasons. The first reason is it's just so fast. The cars are going at such speeds around the track that I read that, if they could actually get enough runway, they generate enough downforce to drive on the ceiling. That's crazy. So I think we picked the right name for suggesting unparalleled speed. But the second thing about Formula One that I think is more pertinent to this talk is the fact that every year, the cars that the racers drive undergo drastic changes to all sorts of parts of the car: the engine, the brakes, the shape of the car, the steering. I mean, here's just a picture over the years of how much each car has changed. And you can imagine how different they must be to drive around that track. The track is the same, and the driver is often the same. But the cars that they're working with have different performance characteristics. Year to year, a team might change the tire tread. That means increased traction. That means that the driver has to corner in a different way than they did with tires that were smoother. They might upgrade the V6 engine to a V8. This creates more power and torque, but it changes the performance characteristics once again. A team might change the turning radius to accommodate a particular track. I think Monaco has such tight turns that they need to change how the steering works to actually get the car around them. And the driver has to accommodate for that on the fly.
And the shape of the car, the body of it, changes in ways that fundamentally affect the aerodynamics. Just a small modification to the tip or the tail can feel different to a driver going around the track. So before I joined Google, I thought that the V8 engine, which turns dynamic JavaScript into native machine code, was done when they first announced it. I mean, that was such a major improvement in speed. It really ran almost as fast as native code for a variety of applications. I assumed the team packed up, gave each other high fives, went home happy. They were done. But I couldn't have been more wrong. Ever since then, the team has been working on an amazing set of features, things like adding new support for language semantics and syntax as ECMAScript and JavaScript change. They change the heuristics that they tune for optimization, things like code caching, inlining, type inference. There's a bunch of hacks in there to really squeeze out extra performance. And the results are dramatic. The Chrome of today, with the V8 of today, is a turbocharged version of the original V8. So the engine really has changed. And unlike a car, which gets a new release maybe yearly, V8 gets an upgrade every six weeks as a new version of Chrome and Chromium is released. So this talk is about the engine and the driver working together: V8 in Chrome, and you, the developer, writing code for V8, working together to make the fastest possible web apps. So the talk is in two parts. The first part is what we're doing on our end to make all the JavaScript currently out there faster, improvements in the engine to eke out extra performance. And the second part is how you as a developer can best take advantage of these changes and adapt to machinery that's evolving underneath you.
Because ultimately the goal is to write JavaScript that's fast today, but that remains fast as it stays out there on the web and is browsed via new browsers with continual improvements. So let's start with what V8 has been working on, what the team has been working on in the past year. I'm just going to highlight four big areas. We released a new optimizing compiler. We drastically improved garbage collection by making it smarter and more intelligent. We've added support for ECMAScript 2015, aka ES6. And we have this incubation project, which is really cool, which is a svelte interpreter. And I'll talk more about why we did that in a second. So let's start with TurboFan. TurboFan is the code name for a new optimizing compiler. Now V8 as an engine, I mentioned, compiles dynamic JavaScript to native code. But it actually has a bunch of different compilers, and it chooses which one to use based on the code itself: how long a function is, how expensive the function is, what language features a function you're writing might use, and especially how hot the function is. The more you run a particular function in JavaScript, the more likely we are to pull out the big guns and use a compiler which takes longer to compile the code but results in much improved optimizations. So why did we write a new optimizing compiler? Well, we wanted to start from the ground up with a new approach. TurboFan represents the JavaScript AST in an interesting new form called a sea of nodes. It reduces loads and property accesses to really low-level memory operations in this giant soup of nodes, so that we have more freedom in scheduling which accesses and which loads come first and last. The scheduling freedom is unparalleled and allows us to architect new and improved optimizations. But what does this mean for you? Well, TurboFan is designed from day one to support ECMAScript 2015.
So it's true, for some ECMAScript features, even if we supported them syntactically in our older compiler, we couldn't use all of the optimization techniques that, let's say, another compiler used, because the support was spotty. But TurboFan will support all of ES2015 and beyond, and it will make it fast. This is really exciting. TurboFan uses static type information for the first time. Previously, we inferred types as a function ran. We made guesses about what the type was. But with TurboFan, when there's static type information, we can leverage it from the beginning. So I'm very proud to announce that we now fully optimize, or will fully optimize, asm.js with TurboFan. This is a big improvement. And soon, when the WebAssembly language spec is finished, we'll be able to optimize that as well. So rather than guessing at what types you're using in JavaScript, if you use asm.js to compile C or C++ code, we can run it like native code, because we know from the beginning what types are being used. So TurboFan is great, but this next improvement, I think, is one users will really feel. Idle-time garbage collection is a method of freeing memory in the browser more intelligently. Now, I'm sure you use Chrome. And you know that when you have a lot of tabs open, the memory consumption is non-trivial. So all of these optimizations make the browser feel so much lighter. So what's going on? Well, Blink, the rendering engine in Chrome, released a scheduling API recently. This is amazing because for the first time, a single API has global knowledge of what tasks are happening in the browser. That is, when clicks are coming in, when a scroll event is happening, when the page is actually being rendered to the screen for a frame, and of course, when V8 is busy performing JavaScript execution on the main thread. So with this new scheduler API, we have a sense of what's going on.
Now, I'd like to show a quick diagram here that explains why jank happens in the browser, why something that should be a smooth animation feels kind of jerky and stuttery. Well, for smooth animation, you need 60 frames per second. So the bottom of this diagram has markers for frames going forward in time. And in between those frames, the browser's doing work. It's running JavaScript to figure out what to paint for the next frame. And you can see here it's often variable. Some frames have a lot of JavaScript work, and some have idle time. Now, as the JavaScript is running, it's creating new objects. The memory profile is increasing. So V8 has to have a garbage collector, which comes along at some point and says, oh, these JavaScript objects aren't being used. We can free them up and lower the memory profile. But normally, it waits until memory use hits a certain point and then schedules garbage collection. It does this because it doesn't know any better. You can see here how big that garbage collection chunk is. And if it comes at the wrong time, it may postpone execution of the JavaScript for the next frame, and we miss the frame. This is a janky, disrupted animation. With the scheduler API, though, V8 now knows when these idle spots are happening. So we can proactively schedule garbage collection as smaller chunks, before we really need to garbage collect. So you can see here we've got some free time in that second frame. So we'll go ahead and garbage collect and reduce the profile a bit. Having this global knowledge of idle times allows us to make garbage collection happen essentially when you'll least notice it, resulting in smoother animation. Here is a demo of the Oort Online 3D benchmark. And you can see after these improvements just how much smoother this is. In fact, the video that's playing in Keynote right now is not representing this correctly.
If you try this in your browser on the right, it's even smoother than it looks here. And it's faster, too. You can see how much further along we are in the benchmark than we were before. And again, this happens because we know more intelligently when to schedule garbage collection in between frames. Now, there's another improvement that this knowledge of when the browser is idle brings. V8 has this heap of dynamic JavaScript objects. This is what makes the memory profile so big, what takes up space in your memory. And often a page creates a bunch of objects when it initially loads. So you can see here this graph of a page creating a bunch of objects and increasing memory. Right after the page loads, we increase the heap size to that limit. But then garbage collection kicks in. We reduce the memory footprint a bit. And then I background the Gmail tab. So let's say I go away. I'm not using it actively. And the page goes idle. We can see that the creation of new JavaScript objects kind of tails off. And we don't have much memory consumption. But before these improvements, that heap size had still been raised. And we were taking up more memory than we needed. Well, now, Blink knows when you background the Gmail tab, when you're not really interacting with the page. And we can use that time in the background to clean up after ourselves, shrink the heap, and just give back a bunch of memory. So here's another demo. They're flipped this time, so be careful. On the right is Chrome 43, the old version. On the left is Chrome 45, after we implemented these improvements. And you can see when you initially load the page, memory consumption is pretty high, about 150 megabytes on either side. But after both tabs go idle, only in the new version does Chrome realize, hey, we're idle. We can free some memory.
And quickly, I think you'll see it happen soon here, we save a bunch of memory as we shrink the heap. Boom, it's down to 80 megabytes right now. So it's about half of the memory that the older version of Chrome took. This is huge. And it's really out there in the wild right now. And your users will feel that Chrome is lighter and snappier because of this. So that's garbage collection improvements. But here's one that, as developers, you might be particularly excited about. We are spending time right now implementing ES2015 in V8. So ECMAScript is the specification that determines what the JavaScript language is. And this last year, they announced a new version with a bunch of really cool features. We've got promises, proxies, arrow functions. V8 has shipped a bunch of these features already. So you can go out today and use classes, a newer syntax and semantics for better object-oriented JavaScript programming. We've shipped arrow functions. Not only are they shorter to write, since you don't have to write those anonymous functions anymore, but it's much easier to deal with the lexical binding of this. So the this in this case refers to the correct thing. Whereas previously, when you wrote this longhand, you might have had to bind the context of that setInterval callback to the outer context. And we've shipped spread and rest operators. These just make it so much easier to write functions that take in something like an array of arguments. And we've staged a bunch more features. These are just two highlights, default parameters and destructuring assignment. But we've got a bunch more in the pipeline. We are really committed to delivering ES2015 in the browser. And like I mentioned, because we're doing this in TurboFan, it's not just that we support these features, but we're really committed to optimizing them and making them fast too. So here's a project which is new. And it's a bit of an experiment, but it's pretty exciting. Designing V8 is all about trade-offs.
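To make those ES2015 features concrete, here is a small sketch. The Timer class, sum, and greet are made-up names for illustration, not code from the talk's slides:

```javascript
// Arrow functions bind `this` lexically, so a callback inside a
// method sees the instance without `var self = this` or .bind(this).
class Timer {
  constructor(label) {
    this.label = label;
  }
  tick() {
    // `this` inside the arrow still refers to the Timer instance.
    return [1, 2, 3].map(() => this.label);
  }
}

// Rest parameters collect arguments into a real array; spread
// expands an array back into individual arguments.
function sum(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}

// Default parameters and destructuring assignment, two of the
// staged features mentioned above.
function greet({ name = 'world' } = {}) {
  return `hello, ${name}`;
}

console.log(new Timer('t').tick()); // ['t', 't', 't']
console.log(sum(...[1, 2, 3]));     // 6
console.log(greet({ name: 'V8' })); // hello, V8
```

All of these run today in current versions of Chrome and node without transpilation.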
As you try to go faster, you grapple with increased memory use and more computationally expensive CPU operations. But as Tal mentioned yesterday, increasingly a bunch of users on the web are browsing from what we call svelte devices, Android phones or phones in general that have drastically reduced memory profiles. So while you may have an iPhone or an Android Nexus in your pocket with 3 to 5 gigabytes of memory, many of our users, in fact a huge number of them, are browsing with just 512 megabytes of memory. This makes it really tough to perform optimizing compilation for JavaScript. So Ignition is an experiment to trade off a bit of speed, it's true. An interpreter is slightly slower than compiling code to native. But it has a drastically reduced memory footprint. So already, based on our experiments, this project, Ignition, has shown that we can create a bytecode from your JavaScript AST that's three to four times smaller than the unoptimized code we would normally run off the bat in a browser. And it really will be noticeable for users on mobile devices that just are not as beefy as some of the things that Chrome or V8 were originally built for. So we're really excited about this. But what's next? Briefly, I want to talk about frameworks. V8's motto is to make all the JavaScript on the web fast. But how many of you use frameworks? When you're starting up a new website, is the first thing you do to find a framework? Usually it is. People use React, Angular, Ember, Polymer. All of these frameworks perform in different ways. They're often updating DOM objects rapidly. They might have a virtual DOM implementation. Some of them use an immutable pattern where they create a new object every time the object changes. And some of these APIs are designed to be ergonomic and really flexible and easy to use. But under the hood, they turn into things that we call polymorphic functions, which are bad for performance.
So V8 historically has made these types of applications with frameworks fast just by making all JavaScript fast. But next year, we really want to dig down into the specific usage patterns that modern frameworks are using and optimize for them in particular. So we want to reduce the boot time of frameworks. And we want to turn the patterns that they all use into really optimized code. So this is something that historically, I think, we haven't spent much time looking at. But we're really excited to tackle this. And we've been working internally on some advanced benchmarks which use real websites and are able to load them deterministically for our testing. So we'll be testing on the web, on real websites that, of course, are using frameworks. So this is what V8's been working on and what we'll keep working on. But there's another half to the equation. And that's the code that you as developers are writing. Obviously, you know to use efficient algorithms and to choose efficient data structures. But as Paul showed just earlier, and as you've seen at this entire conference, there are times when you're optimizing for something like RAIL and you dig into DevTools and you see, just in the flame graph, JavaScript execution taking up a massive amount of time. And sometimes you can't do less. So the question is, how can you write that same JavaScript in a way that V8 can make faster? So I've shown a bit how the V8 engine is like a Formula One car. It's this incredibly powerful machinery that allows you to move at blinding speeds. But it can be finicky sometimes. And more importantly, over time, its performance characteristics and affordances change as we evolve the engine and make changes to it. So I'd like to think about how drivers in Formula One actually adapt to changing machinery, like an F1 car. So one of my favorite drivers is Ayrton Senna, the three-time world champion.
I actually showed this deck to my team and got 40 emails back with different names that I should mention instead. But I'm going to talk about Ayrton Senna. And I'd like to imagine, what if he were a JavaScript developer? How would he apply this knowledge of adapting to a new car to the browser, and adapt to new versions of V8? I'm going to give four tips that I think are inspired by the sorts of things that drivers do. The first tip is: understand how modern engines work. Now, Ayrton Senna didn't have a mechanical engineering degree, and you shouldn't need to read the source of V8 either. But having, at a high level, a sense of what's going on under the hood is really useful. So I'd like to share two things which are fundamental enough that they're actually shared architecture across all of the JavaScript engines out there on the web. Because really, you want to write code for everything. So this includes SpiderMonkey in Firefox, JavaScriptCore in Safari, and Chakra in Edge. Look at this code right here. It's pretty simple. I'm just creating a Point class and instantiating a new object three times over. Well, I mentioned that the V8 engine is turning this dynamic code into native machine code. Because remember, in JavaScript, your objects can have any number of properties. And you can change the shape of them at any time. Under the hood, V8 needs to turn that into native machine code. And it does this in a way that you can imagine as creating structs, C-like structs, for every object shape it sees. So when it sees this Point, it infers, as this code is being run, that the object is not a dynamic dictionary, but actually a struct that looks something like this. It has two properties, x and y, and they're both ints. So when you create these three points, it knows that the shape of the objects you're creating is the same. And it optimizes further. So these three points share the same layout under the hood. We call that a hidden class internally.
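The Point example on the slide would look something like this (a reconstruction; the exact slide code isn't in the transcript):

```javascript
// All three points are constructed the same way, so V8 can give
// them one shared hidden class -- roughly a C-like struct { x, y }
// under the hood -- instead of treating each as a dynamic dictionary.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

const p1 = new Point(1, 2);
const p2 = new Point(3, 4);
const p3 = new Point(5, 6); // same shape: shares p1 and p2's hidden class
```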
But in JavaScript, you can dynamically set properties. So if I modify my third point to add a new attribute, a new property, z, under the hood, we have to go back. We notice this. We still support it, obviously, because it's valid JavaScript. But we have to undo that optimization where we said all points share the same hidden class. And we create a new hidden class. It looks something like this. It has an extra attribute, z. And this is really bad over time. Because we have a bunch of other optimizations that rely on there being the smallest number of hidden classes possible. Here's a quick example. And let me just show you how to fix this. You should always declare all of your properties in the object constructor. Because V8 can look at that and say, aha, all points will share these properties. And I can optimize for that from the beginning. So this is a much more effective pattern. I'll say that one more time. Declare all properties in object constructors. Don't dynamically add or remove them. Here's an example of why this is so important. Say you have code like this, a function twice. This function just adds the input to itself and returns it. In JavaScript, you could throw an integer at this, and it would add them using math. You could throw a string at this, and it would concatenate them. You could throw a bunch of different types at this. So when V8 sees this code here, it's a really hot for loop. And it runs twice 10,000 times. 9,999 of those times, the input to twice is a string. And it sees that. So it says, aha, we're going to optimize twice. It's going to be optimized for strings. Hooray. But then on the last iteration, you send an integer down the chute. And it says, well, we tried to do the special optimization, but it broke. So we're going to have to deopt to a slower version.
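That hot loop might look something like this (a reconstruction of the slide's example; the names and the exact loop are assumed):

```javascript
// `twice` works on any type: numbers add, strings concatenate.
function twice(v) {
  return v + v;
}

// Polymorphic call site: 9,999 strings, then one number on the last
// pass. V8 optimizes twice() for strings, then has to deoptimize
// when the number arrives.
for (let i = 0; i < 10000; i++) {
  const input = i < 9999 ? 'str' : 5;
  twice(input);
}

// Monomorphic version: one shape of argument per call site keeps
// the optimized code valid for the whole loop.
for (let i = 0; i < 10000; i++) {
  twice('str');
}
```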
So when I talk about integers and strings, what I really mean is these hidden classes that I was talking about before. JavaScript doesn't have static types, but we think of objects as having particular shapes. So it's important that when you call a function, you use the same shape of arguments and returns, so that V8 can turn it into a highly optimized machine code version and consistently use it without deoptimizing. This is called monomorphism. That's just a fancy word for it. Now, I mentioned two high-level parts of the architecture. These are unlikely to change because they're so core to how all modern JavaScript engines work. But I don't expect you, and in fact it's an anti-pattern, to go memorize all of the other heuristics that V8 and JSC and SpiderMonkey and Chakra use. Because they're changing so fast. So in general, it's a bad idea to memorize rules, and it's a much better idea to use the suite of tools that Chrome and Chromium and others provide you. So for example, the V8 engine has a headless version called d8. And if you use it, you can pass in certain flags and introspect what's going on under the hood. There's some information out there on the web if you want to do that. Paul and others today, all the Pauls, have been showing you how to use DevTools and find flame graphs where you can dig into which functions are taking the most time. In fact, on the CPU profile page, you can see those same deoptimizations that I was talking about a couple of slides earlier. You can see them because there's a little warning sign next to the function in DevTools on the CPU profile page. And there are a couple more interesting tools out there if you really want to dig into the nitty-gritty. Something called IRHydra, which actually shows, for any JavaScript, how we turn it into native machine code. But I want to quickly give a brief demo of a little tip that you can use today, probably without installing anything extra.
So I mentioned d8, which is the headless version of V8. But I actually don't have it installed. And if you're not a Chromium developer, you probably don't have it installed either. But I do have node installed, and node embeds V8. In particular, node exposes all of the V8 options as flags that you can pass to node when you execute something. Now, before I show you this demo, I want to make one important caveat. It's important to check the version of node you're using, because you want to be on the same version of V8 as the one in your browser. So if I run this, I can actually see what version of V8 is included. It's 4.6. That's the same as Chrome version 46. That's what I'm running in the browser. So check that the versions are the same. With node installed, I can pass in a flag called --trace-deopt. (This works for d8 too, but you probably already have node.) What this does is execute a JavaScript file and log when those horrible deoptimizations that I talked about happen. So I'm going to run it on the code I showed in the slide. Remember, this is the twice function, which has a polymorphic call site. You pass in a string for a while and then an integer. If I run this, node, with V8 under the hood, will tell you. Now, this prints a bunch of low-level assembly, but in particular, this first line here is really important. That's the one that we want to look for. You can actually grep for just the word deopt. This will print out when a particular function deopts. So you can see here it says, if you read it across, the JS function twice was deoptimized. You can imagine doing this on a bigger piece of code, let's say something that has 4,000 lines, and counting how many deopts happen. This is a good way to figure out, just by squinting at it, how optimized your code is, or how much room there is for improvement. It takes a second to run. 12.
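On the command line, the demo just described looks something like this. The file path and its contents are a reconstruction, not the actual demo files from the talk; --trace-deopt is a real V8 flag that node passes through:

```shell
# Check which V8 your node embeds, to compare against your Chrome version.
node -p 'process.versions.v8'

# A tiny polymorphic test file (reconstruction of the talk's example).
cat > /tmp/polymorphic.js <<'EOF'
function twice(v) { return v + v; }
for (let i = 0; i < 10000; i++) {
  twice(i < 9999 ? 'str' : 5); // strings, then one number
}
EOF

# --trace-deopt logs every deoptimization; grep and count them.
node --trace-deopt /tmp/polymorphic.js | grep -c deopt || true
```

A higher count on a large codebase suggests more room for shape-related improvement, though the exact log format varies between V8 versions.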
So I know I can compare this to another body of code, or change the code a bit and see if it deopts less. But let's apply that knowledge to this example. So I mentioned that polymorphic calls are bad, where you pass two types into the same function. So what if we remove this if statement and just pass in a? I actually have it over here. It's called monomorphic. So in this version, we do the same thing. We have this really hot loop. But we're only passing in one type of object, one shape of object. It's a string. And this time, V8 can optimize this, not hit those deopts, and not have to revert to a slower version. And you can see here it runs without a problem, because there's no more deoptimization. So this is just a quick tip. Start with DevTools and the flame graph. But if you really want to look under the hood, you can use node. OK, so that was tip two: use tools, don't memorize rules. What are some other things you can do? Well, definitely stay up to date. You can bet that Senna understood when the car got a new brake or when the engine was updated. He was talking with his engineers every day. Well, we have a really easy channel for you to figure out what's going on in V8, under the hood and with developer-facing improvements. It's the V8 blog at v8project.blogspot.com. So read this and see what's going on. You can also read the changelog in our source tree, although that covers more fundamental architectural changes and doesn't get updated quite as frequently. So the blog is a good place to start. And I really want to impart that V8 is constantly changing. And we're constantly making improvements that are driven by the types of code that we see out there in the wild, that you as developers are writing. And we can move faster with this feedback loop by just communicating more.
So if you've got code and you're not sure why it's slow, you've looked at some of the dev tools, you've dug in and tried to remove the things that are obvious, but it's still slow: file a bug. Because in V8 and all the modern JavaScript engines, reasonable code should run reasonably fast. Engage with the team on communication channels like Stack Overflow. And in particular, you can send mail to v8-users at googlegroups.com. Talk with us. Show us new frameworks and libraries. If you are using the latest and greatest framework and for whatever reason you're seeing a long boot time or a bunch of deoptimizations when you shouldn't be, show us that framework, show us that library. It helps us stay up to date with what developers are actually using. And the biggest help is if you contribute benchmarks of real-world apps for us to test against. Because the way we figure out if we're making good performance changes to V8 is by running benchmarks against it. And we want those benchmarks to be as close as possible to real apps. It's best if they actually are real-world apps. So send us stuff that you use to test speed, and we can work to optimize it. And finally, two anti-patterns. I mentioned these earlier. Don't memorize hard-and-fast rules. There's some information floating around on the web about how V8 doesn't optimize try-catch statements. Well, it's true. It's true today. But actually, we're about to ship optimized versions of try-catch. So these characteristics change. And if you just used a rule that you read one time on Twitter or on the internet, you'll miss it when V8 finally ships the improvement. And actually, Firefox has already shipped it. So these things are changing so fast, you can't rely on rules. You've got to rely on tooling to figure out what's changed. And lastly, don't use micro-benchmarks to make design decisions.
When I talk about benchmarking and profiling and running a performance audit, it's very important that it's on your actual app and not some slimmed-down version. It doesn't help to write a four-line JavaScript snippet and try to figure out which way of iterating over an array is fastest. Because I can guarantee you that when that same array iteration happens in a 50,000-line real-world production app, a bunch of factors, the context, will be different. And you can't guarantee that V8 will perform the same. So don't use micro-benchmarks. And lastly, to bring it full circle, one of the things that so struck me about Senna and these F1 drivers is how closely they work with the team that built the car. They constantly talked. And the team really was just as essential to winning races as Senna was as a driver. So you are Senna. You're the developer. You are the driver. And the V8 engineers are the team working to make the machinery that runs your code faster. So let's work together. Help us help you reach that checkered flag. Thank you very much. You can contact me at sethtompson.google.com and send mail to v8-users at googlegroups.com. I look forward to hearing from you and sharing more about the exciting things that V8's working on. Thank you.