Hello, my name is Shu, and I work on the JavaScript specification as well as the V8 project.

My name is Leszek, and I work as a performance engineer on the V8 VM. So Shu, what's new in JavaScript these days?

Yeah, a whole bunch of stuff has happened since last year, and you might recognize some of the features we're going to talk about today from our colleagues' 2019 Google I/O talk, because language features take a while to be standardized and to ship in browsers. The ones we're going to talk about today have shipped. So let's start with the fun stuff. Like I said, we've either shipped or are about to ship quite a few syntax features that should make web devs' lives easier. For this talk, we'll be focusing on two features that make dealing with optional values easier. So Leszek, have you ever written code that dealt with configuration?

Oh yeah, definitely, always using a hash map for those things.

Yeah. So I'm writing this new chat app, something of a strength for Google engineers. I made some network parameters configurable, which I keep in this map of configurations called config that you see on the screen. But the network configuration is optional, because it isn't always set by the user, and it has sub-configurations like the server and the port, and maybe those aren't set by the user either. Handling that kind of optionality is kind of a pain. Currently, folks do this with logical AND, like you see on the screen.

Oh yeah, that's pretty commonplace.

Yeah. For those chains of property accesses, where at any point some property in the middle could turn out to be undefined, we added this feature called optional chaining. It's easier to show you on the screen than to talk you through it. The optional chaining feature is the question mark and the dot instead of a plain dot.

Oh, I see. So if netConfig is undefined, then netConfig.server is undefined, and .addr is undefined, and so forth?

Yeah, almost. It's a little bit more relaxed than
that. It checks whether it's undefined or null, and specifically we call the set of things that are undefined or null "nullish". So in this case, if netConfig is nullish, the whole optional chain is undefined. If netConfig isn't nullish but netConfig.server is nullish, then again, the whole thing is undefined. You get the idea: if nothing is nullish, then eventually you get the whole property, the most nested property access.

Cool, that's much easier to read.

Yeah, I think so too. Now, another common pattern when dealing with configurations is default values, and sometimes folks use logical OR for this, like you see on the screen.

Oh yeah, I've definitely read that before.

Yeah, and it usually works fine, but sometimes it doesn't, and it's really surprising when it doesn't. Suppose I add a configuration for enabling compression to the server. Do you spot the bug?

Oh yeah, right: what if you actually explicitly disable compression? If it's false, then false || true is still going to be true.

Yeah, exactly. If enableCompression is false, then false || true, like you said, is true. So what we really want to test here is not whether something is truthy, which is what logical OR tests for, but whether something is absent or present, and we're already familiar with that concept: that's nullish. So we introduced this new syntax feature called nullish coalescing, which is the two question marks, and it does exactly what you want here for default values: it tests whether the left-hand side is nullish. If the left-hand side is nullish, it evaluates to the right-hand side. If the left-hand side is not nullish, it evaluates to the left-hand side. So in this case, enableCompression ?? true will still get you false, because false is not null or undefined. But if enableCompression wasn't present, if it's undefined, then you get the default value of true.

That's pretty cool. Where can I use it?
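The two patterns just described can be sketched together. This is a minimal reconstruction of the config example from the talk, not the exact slide code; the names `config`, `net`, `server`, `addr`, and `enableCompression` are assumptions.

```javascript
// Hypothetical configuration: the user never set the network
// options, and explicitly disabled compression on the server.
const config = {
  name: 'chat-app',
  server: { enableCompression: false },
};

// Old pattern: chain logical ANDs to guard each access.
const addrOld = config.net && config.net.server && config.net.server.addr;

// Optional chaining: if anything before a `?.` is nullish
// (null or undefined), the whole chain evaluates to undefined.
const addrNew = config.net?.server?.addr;

// Logical OR treats every falsy value as "absent", so an
// explicit `false` is wrongly replaced by the default.
const compressionOr = config.server.enableCompression || true; // true (bug)

// Nullish coalescing only falls back when the left-hand side is
// null or undefined, so the explicit `false` survives.
const compressionNullish = config.server.enableCompression ?? true; // false
```

Both operators short-circuit, so `config.net?.server` never even evaluates `.server` when `net` is nullish.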
You can use both optional chaining and nullish coalescing in Chrome stable today. Now, enough talking from me; that was just a taste of the new features. You can find more on our website, and we'll show you the link later.

But you know, we add all these new syntax features, and I'm worried that supporting them all will slow down the parser. V8 is known to be fast, and I don't want to do anything to make it slower.

You know what, that's a fair concern to have. When we shipped ES6 back in 2015, we actually saw a big parsing performance drop. This was measured with Octane's CodeLoad benchmark, and we had this big drop during the implementation phase. But actually, nowadays parsing speed doesn't matter as much as you might think.

Oh really, not anymore? I thought parsing was pretty expensive.

Well, it's still not cheap, but in the past year we've worked a lot to move parsing off of the main thread, and to be able to parse scripts while they're still downloading. So imagine that Chrome sees a script like this: the HTML parser gets up to it, sees the script, and then has to pause the HTML parsing, download the script, parse it, execute it, and only then can it continue parsing the HTML.
I know that isn't strictly true, because of optimizations like preloading.

No, you're absolutely right, this isn't actually the whole truth. The download of the script happens a lot earlier if there's a link preload, or if the pre-parser finds the script earlier. And if the download moves off of the main thread and earlier in time, the parse and execute can move earlier too. But the thing is, the parsing itself can happen on a separate thread as well. It's really only the execute that has to happen on the main thread. In particular, if a script is marked as async, you can keep processing the HTML up until the parsing of the async script has actually finished and it needs to execute. And we've had support for this basically forever, but it's been very limited: we've only been able to concurrently parse one script at a time, and we've only been able to do this for async scripts.

Yeah, how come it's been so limited?

Honestly, just historical technical reasons, which don't really hold anymore. So one of the first things we did was move everything from this dedicated thread into our global thread pool, which meant these parses could happen at the same time, in parallel. Another thing we changed was to have synchronous scripts also use this off-thread parsing functionality.

I'm kind of confused. You said synchronous scripts, but what's the point of parsing synchronous scripts on another thread? Isn't the whole point of non-async scripts that they block the main thread?

Well, that is the point for the execute. But for the parsing, even though we're parsing on a different thread, the main thread is free, and that means it can do other things. It means the user can scroll, the user can type, and we can execute other JavaScript, like onclick handlers. So it's actually very useful to have this empty space here on the main thread.

Ah, okay.
I see, so this is the difference between improving interactivity versus just improving the loading time.

Right, but we can improve loading as well. Because the parsing is happening on a separate thread, we can actually move it earlier: we can start parsing when the download starts, and then as data comes in from the network, we can feed it into the parser. And then the actual parse time doesn't matter as much as you might think. All we need is for the parser to be faster than the network.

Really? I mean, networks are already pretty fast.

Not always, but usually they are.

Fair enough. And you know, caches are even faster. So we can't completely ignore parser performance. We have invested a lot into improving single-threaded parser performance as well. Starting in 2018, we put in a big effort, put some of our best engineers on it, and we've actually had very good results in improving parser performance just through programming optimization.

Yeah, up and to the right, that's the kind of graph I like to see. Really fascinating stuff. I've learned quite a bit in just the past five minutes about making parsing and compiling faster, and about web app performance in general. And you got me thinking about this other big chunk of web app performance, which is memory. I was doing this thing the other day with my chat app. I got it basically working, and I was trying to measure the performance of the packets I was getting from the server. I wrote this little moving average class to compute the moving average of the latency of the packets I was getting from the WebSocket. You see there that I add a message listener, and basically all it does is accumulate events into the events array, and I use that later in this compute function, which I don't show, to actually compute the moving average. And the way I use it is I have this component where, when I start measuring and I want to see the live statistics of the moving average of the latency, I make
a new instance of it, and then when I stop, I null it out, because I don't want to keep all the events I accumulated in memory. I know that V8 garbage-collects memory that can no longer be reached, and as long as the moving average is reachable through the movingAverage property on the moving average component, the garbage collector cannot collect it, which is why I null it out.

That makes a lot of sense to me.

Yeah, and I thought this would work fine. But then what happened was, you know, it's a chat app; I kept it open for a while, and I opened the memory pane. I see that every once in a while a GC happens and collects some memory, and the memory goes down a little bit, but it's pretty clear that the trend is up and to the right. This is one of those graphs where up and to the right is actually bad. What this was basically showing me is a memory leak: every time a GC happened, it wasn't able to collect all the memory, so I just kept accumulating more and more, and eventually, if I'd kept the chat app open for another day or so, my computer would have run out of memory.

A memory leak? But the GC only collects objects that you can no longer reach, and you null out your moving average field, so the garbage collector should be able to reclaim its memory, shouldn't it?
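The class being described might look like the following sketch. The class name, the `events` array, and the socket wiring are reconstructions from the talk, not the actual slide code, using the standard `EventTarget` listener interface.

```javascript
// Sketch of the leaky pattern: the socket strongly holds the
// listener, and the listener closes over `this`, so the whole
// MovingAverage instance stays reachable through the socket
// even after the component nulls out its own reference.
class MovingAverage {
  constructor(socket) {
    this.socket = socket;
    this.events = [];
    this.listener = (ev) => { this.events.push(ev); };
    socket.addEventListener('message', this.listener);
  }

  compute(n) {
    // ...moving average over the last n events (elided in the talk)...
  }

  // The manual escape hatch: unregister the listener so the
  // instance can actually be collected.
  dispose() {
    this.socket.removeEventListener('message', this.listener);
  }
}
```

Nulling out the component's own reference is not enough on its own: without calling `dispose()`, the socket's listener list still reaches the instance.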
Yeah, so it's a common mistake, but it's still pretty subtle. I'm sure the more seasoned web developers would have spotted it right away. What's going on is that the WebSocket is holding on to all the event listeners strongly, which means that until they are explicitly removed, everything that is reachable via an event listener is also considered reachable, and thus not collectable by the garbage collector. So you see that use of this.events.push inside the event listener: as long as that use is in there, the whole moving average instance is reachable from within the event listener, and thus not garbage-collectable. So even when I nulled it out in the moving average component, it was still considered alive by the garbage collector. To deal with this, folks often use what's called the disposable pattern, where I have a method, called dispose, that manually removes the event listener. That's kind of annoying, and to use it, before I null the instance out in the moving average component, I'd have to remember to manually call .dispose().

What is this, C++? You have to manually manage your memory? I thought the whole point of garbage collection was that you don't have to bother with that sort of stuff.

Yeah, exactly, and it's so easy to forget, too. And this is all because the event listeners can't be garbage-collected until you manually remove them. So what if there was a way to actually tell the engine:
don't let me keep you from garbage-collecting this thing, even though it's reachable? Then you wouldn't have to remember to manually call dispose, or even need the disposable pattern at all. And it turns out there's a new standard feature in JavaScript that lets you do exactly this: weak refs.

All right, and before we go into that, I have to give a quick disclaimer. Weak refs are an advanced feature that's hard to use correctly, because garbage collection is unpredictable. It's very different from browser to browser, and even different from run to run of the same browser. Because of that unpredictability, we didn't add weak refs to the web for many years, and you will hopefully never run into a memory leak or a bug that legitimately needs them. But on the rare occasion that you actually do legitimately need a weak ref, finally you can use it and fix your problem at the root. All right, back to the main programming.

So how am I using weak refs here to solve the previous problem? I still have this event listener, but now, instead of directly registering that event listener function with the socket, I wrap it in a WeakRef. The function is what's called the target of the WeakRef. And inside the actual event listener, I deref the WeakRef and I call the function. This indirection basically means that the function that is actually holding the moving average component alive, via this.events.push, is no longer kept from being garbage-collected, because it is only weakly held inside a WeakRef.

Okay, what does weakRef.deref() return?
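The wrapping just described might be sketched as follows. The helper name and the socket wiring are assumptions; `WeakRef` itself and its `deref()` method are the standard API.

```javascript
// Register `listener` with the socket through a WeakRef, so the
// socket no longer keeps the listener (and everything it closes
// over) alive. Returns the strongly held wrapper.
function addWeakListener(socket, listener) {
  const weakRef = new WeakRef(listener);
  const wrapper = (ev) => {
    // deref() returns the target, or undefined once the target
    // has been garbage-collected; `?.()` then skips the call.
    weakRef.deref()?.(ev);
  };
  socket.addEventListener('message', wrapper);
  return wrapper;
}
```

While something else (like the moving average instance) still strongly references `listener`, events keep flowing; once nothing does, a future GC may collect it, and the wrapper quietly becomes a no-op.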
I see you're using the optional chaining function call syntax here.

Yeah, good eye. That was not an example we showed earlier, but like optional chaining for property access, you can also optionally chain a function call. So if it's undefined, you don't end up making the call, and the whole thing is undefined. That also suggests that deref, once the target has actually been collected, returns undefined. To recap: you have to manually call deref because we're no longer preventing the garbage collector from collecting the event listener, since it's wrapped up in a WeakRef. So every time you want to access it, you have to manually deref, and if the garbage collector has collected it, then deref returns undefined.

Okay, so in this case the listener is reachable via this.listener, and once a particular moving average instance isn't reachable, because the component nulled it out, then the whole thing can be collected.

Right, exactly. Because we're no longer accidentally keeping the moving average alive via the event listener, we can go back to what I naively thought would work in the first place: when I no longer need all the data in the moving average instance, I just null it out and let the GC do its thing.

Okay. No wait, hold on. But now you've got this strongly held wrapper listener on the actual socket, the one that's calling through the WeakRef.

Yeah, that's a good point. You know, I thought you wouldn't spot that, but that's exactly right. Even with this WeakRef indirection, I still have this extra event listener. Remember, the socket still strongly holds on to all of its event listeners until I unregister them. So what do I do there?
There is a companion feature to weak refs, called FinalizationRegistry, that lets me do exactly the thing that's needed, which is: I want the garbage collector to tell me when it has collected something, so that I can perform some action at the point that an object has been collected, or in GC parlance, finalized. On this slide, what you see is that I make a FinalizationRegistry, and when I add the new event listener, I also register it with the FinalizationRegistry. Meaning, when the inner listener, the thing that actually does the this.events.push, is collected (and remember, it's collectible now because it's held in a WeakRef), the registry is going to run this function that I passed it, asynchronously, to remove the event listener, cleaning up all the excess memory. Now, again, this is an advanced feature, and hopefully you'll never need it.

So it doesn't actually pass the object itself into the finalizer?

That's a good observation. You see that the things that actually get passed to the finalizer are some other values. The object whose finalization you're observing has already been collected, so you don't get it back. In this case, the things we need to perform the finalization action, to unregister, are the socket and the wrapper listener, and that's what gets passed to your finalization callback.

All right, that makes sense.

Yeah, and like I said, this is an advanced feature, and this example is pretty dense. I recommend you follow the link on the screen there to read our full explainer for the feature on the v8.dev website.

So with all of that working, I opened up the memory panel again.
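Putting the pieces together, the cleanup described above can be sketched like this. The wiring is again a reconstruction; `FinalizationRegistry` and its `register(target, heldValues)` signature are the standard API.

```javascript
// When a registered target is garbage-collected, the registry
// later invokes this callback with the held values supplied at
// registration time; the target itself is already gone.
const registry = new FinalizationRegistry(({ socket, wrapper }) => {
  socket.removeEventListener('message', wrapper);
});

function addWeakListener(socket, listener) {
  const weakRef = new WeakRef(listener);
  const wrapper = (ev) => weakRef.deref()?.(ev);
  socket.addEventListener('message', wrapper);
  // Held values: everything needed to unregister the wrapper
  // once `listener` has been finalized.
  registry.register(listener, { socket, wrapper });
  return wrapper;
}
```

Because finalization timing is unspecified, the callback may run long after collection, or not at all if the program exits first; treat it as best-effort cleanup, never as program logic.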
I kept my chat app open for a while, starting and stopping the latency measurement, and now I see that every time a GC does happen, it's able to reclaim basically all of the memory, and over a longer period of time I'm no longer accumulating memory. So yeah, it looks like I fixed the leak.

GC seems pretty tricky. I've collected garbage before, and I'm not particularly deterministic about it myself.

Yes, garbage collection is not predictable; it's non-deterministic, so don't depend on it always running. And that's why we keep saying that weak refs and FinalizationRegistry are an advanced feature.

And that's a good point, too. Given the unpredictability of the garbage collector, are there other things the engine does to make apps slimmer?

Actually, yeah, V8 is doing a lot of work to reduce its memory consumption. There have actually been two major projects that landed last year which focused on this: pointer compression and V8 Lite, and I can talk about both of them very quickly. So, pointer compression first. You've probably heard that machines are 32-bit or 64-bit. On 32-bit machines we have 32-bit pointers; on 64-bit machines we have 64-bit pointers. And the point of the pointer width is that 32-bit pointers can reference up to 4 gigabytes of memory, while 64-bit pointers can reference up to 18 exabytes of memory, which is quite a lot more. And Chrome wants to run in 64-bit mode so that it can access more than 4 gigabytes of memory.

Yeah, Chrome definitely needs more than 4 gigs.
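The address-space figures quoted here are just powers of two; a quick check of the arithmetic:

```javascript
// How much memory each pointer width can address.
const bytes32 = 2n ** 32n; // 4,294,967,296 bytes = 4 GiB
const bytes64 = 2n ** 64n; // 18,446,744,073,709,551,616 bytes
const exabyte = 10n ** 18n;
const exabytes64 = bytes64 / exabyte; // roughly 18 exabytes, as quoted
```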
Yeah, all right, we've all seen the same memes, and, you know, fair enough: if you've got a hundred tabs open with a thousand images, and they're playing games and playing music, it's going to use up memory. But not necessarily each individual tab, and not necessarily each individual V8 instance. The key observation of pointer compression is that we can probably restrict each V8 instance to less than 4 gigabytes. And if we can restrict it to less than 4 gigabytes, then we can pre-allocate a 4-gigabyte area for it and force all objects to be allocated inside that area. Now, instead of referencing those objects by a 64-bit pointer, we can reference them by an offset. Under pointer compression, you take your 64-bit pointer and split it in half, into a base and an offset. The base is the start of that 4-gigabyte allocation area, and the offset is the offset within it. Then you only have to store the offset on objects, which means your pointers are now half the size.

So you get back to 32-bit-sized pointers. I'm guessing it wasn't just as easy as that?

It definitely wasn't. It was a whole journey, and there's a whole blog post describing that journey, which was very exciting. But as a little spoiler, I can tell you that on typical websites, we reduced memory by about 40 percent.

Yeah, those are some very impressive numbers, 40 percent. But what if a web app or a Node.js program really wants to use more than the 4 gigs? Are you constraining apps to only have 4 gigs of memory?

Well, kind of, but also not really. First of all, with pointer compression those objects are a lot smaller, so you can fit a lot more of them into that 4-gigabyte allocation area. And also, this 4-gigabyte limit only applies to a single V8 instance's JavaScript object heap.
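The base-plus-offset split can be illustrated with a toy model. All addresses here are made up for illustration; real V8 stores the compressed offsets in object fields and keeps the base handy rather than recomputing it like this.

```javascript
// Toy model of pointer compression: every object lives inside a
// pre-allocated 4 GiB "cage", so a full 64-bit pointer is the
// cage base plus a 32-bit offset, and only the offset is stored.
const CAGE_BASE = 0x00007f0000000000n; // arbitrary example base
const FOUR_GIB = 1n << 32n;

function compress(fullPointer) {
  const offset = fullPointer - CAGE_BASE;
  if (offset < 0n || offset >= FOUR_GIB) {
    throw new RangeError('pointer outside the cage');
  }
  return Number(offset); // fits in 32 bits: half the storage
}

function decompress(offset) {
  return CAGE_BASE + BigInt(offset);
}
```

A round trip recovers the original pointer: `decompress(compress(p)) === p` for any `p` inside the cage, while anything outside it is rejected.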
So, for example, typed arrays have their own external memory backing, so they're not included. Wasm instances have their own 4-gigabyte allocation area, so those are separate. Even other V8 instances, inside of web workers and on other tabs, have their own 4-gigabyte allocation areas.

So you're only restricting one V8 instance, not all of them.

Right. The other big project last year was V8 Lite, and this was a really interesting one, because we thought to ourselves: what would happen if we just gave up on performance and tried to only improve memory? How far could we actually get? This was for memory-constrained devices, where V8 just couldn't run at all without the memory it needed.

Yeah, that's an interesting thought experiment. I guess if you run slowly, that's better than not being able to run at all because you're out of memory.

Right, absolutely. The approach we took was to look at typical websites and look at what kinds of things were actually taking up memory there. Forty-ish percent was user data; there's not really much we can do about that. Projects like pointer compression are going to reduce it by a lot, but we can't really apply targeted optimizations that reduce the amount of data that users create. And there was this big bucket of "other", because there's always a big bucket called "other", and we couldn't reduce that with targeted optimizations either. But we did look at some of the top users of memory, and we decided to try to target those.

Right, so right off the bat, if you're not worried about performance at all, you don't need to optimize code. That makes sense to me.
Absolutely. And if you don't need to optimize code, you don't really need to collect type feedback either, because that's just storing the data we need for optimization; it's only used for performance. Even the bytecode that we generate: you don't have to store that. You can just compile it on the fly whenever you need it and get rid of it afterwards.

Sounds a little drastic to me, though. Bytecode is the unoptimized code, and if you're getting rid of even that, it sounds like you're giving up more than just a little bit of performance.

Yeah, the first prototypes of V8 Lite were pretty slow. But then we realized that we could get a lot of these gains without actually sacrificing performance at all, just by being a little bit lazier.

I'm something of an expert at being lazy myself.

Yeah, I'm pretty good at being lazy too. But as an expert in laziness, you know that being lazy doesn't mean not doing anything at all; it means not doing anything until you're really required to. So we took the same approach with V8. Let's talk about those feedback vectors I mentioned previously, the type feedback. You're not going to get much benefit from type feedback if you only run a function once or twice; it only starts benefiting you after you've run it tens or hundreds of times. So we can delay creating the type feedback until we've already had a couple of runs of the function. That takes care of some of those feedback vectors, but not all of them. Same thing with source positions: we only need those for printing line numbers in exception stack traces, or for stack traces in DevTools. So if we delay calculating those until later, we save a lot of space as well. Even bytecode: we have this capability of getting rid of bytecode that we don't need, so we can throw away old bytecode, keep around the bytecode we're still using, and save a little bit of memory there. And there were a bunch of tiny projects targeting these top
memory users, which are described in this blog post in a lot more detail. But again, spoiler alert: they reduced memory by 10 to 30 percent on typical websites.

Nice.

So there's actually been a lot more going on in V8 in the last year; we only really had time to talk about a couple of projects. I recommend you visit our blog, where we post about new versions of V8 and talk about exciting new things. It's a great read, and we look forward to seeing you there.

Thank you very much to all the viewers who joined us for this whirlwind tour of what's new in the JavaScript language, and of the new developments in the engine itself that make running JavaScript both faster and lighter on memory. We definitely didn't have time to go into all the new features that were added to JavaScript, so please give our blog a read. Thank you very much.

Thanks, everyone.