Hello, everyone. I'm Starth, and I'll be speaking about memory leaks in JavaScript. I'll start by giving you a reason to listen to this talk: if you don't want this to happen to you, ever, that's probably a good reason to stay.

So what are memory leaks? The memory lifecycle for most programming languages is quite similar: you allocate some memory, get some job done, compute, calculate, and when you're done, you release that memory so it can be used for something else. You typically have a leak when you're unable to release that memory. It clogs up, accumulates, and you eventually get a hang or a crash. Now, a lot of you probably think, "I just do some DOM manipulation, I have some event listeners, and JavaScript ends there for me, so I don't think I have leaks." That's not necessarily true, because leaks hide in the simplest of data structures and the simplest of code, as I'll be showing you. No, you don't have to be doing groundbreaking, complex stuff to have a leak.

How did this come about? Let's rewind about ten years, to when the amount of time a user spent on a page wasn't that high. That's significant because the moment you switch pages, with a reload or a refresh, any leak you might have had is released. Over time, Ajax use became mainstream, we got single-page applications, and suddenly a user is on one page for a very, very long time. So if you do have leaks, they have a chance to gain significance, accumulate, and you can actually start feeling the effect on the page. And for the past five years, mobile environments have boomed, and mobile environments are resource-crunched as it is, so the impact of a leak there is even more severe.

I often get asked: when do I need this? I have tons of other stuff to do; do I have to look at it now? What can buy me time? So here's the thing about leaks.
The behavior of a system that is low on memory cannot be predicted. Sometimes your file descriptors will flake out. Sometimes your DB connections will break. Maybe your graphics-intensive rendering won't be right, or there will be lag when scrolling. You don't know how it will show up. The end result is that you end up killing the user experience, and as front-end developers, that is something we do not do. Experience is something you do not want to kill.

And think of it this way, too: the very time your leaks are going to have an effect is the very time you do not want them to. The moment you're getting popular, the moment more people are coming to your service, to your web page, the longer they stay, that's exactly when they're going to start feeling it. And as non-tech folks, they're not going to realize what is happening. They're just going to feel that something is off. You might hang, you might crash, and they're going to end up tweeting mean things about you. The internet can be quite mean.

So where are these leaks coming from? Obviously you don't put them there, because you don't want your code to leak. The last known major browser bug was in IE 6 and 7, something known as the circular reference leak. Of course, having said that, I'm not vouching for any other Internet Explorer after that either. But that's IE. The major source of leaks is usually the developer's own code, and that's exactly what I'm going to show you: what does code that leaks look like?

Let's quickly have a look at garbage collection in JavaScript. JavaScript garbage collection happens automatically; you don't call upon it. At certain intervals, the JavaScript engine goes, "hey, it's been a while since we've done any garbage collecting." Or sometimes, if there's a memory crunch, that's also when it decides it's time to garbage collect.
This relies on the concept of reachability. Have a look at this graph. This is a mark-and-sweep algorithm: everything reachable starting from the root won't be garbage collected; everything else will be. In this particular case, that would be nodes 9 and 10. And because everything else is reachable starting from the root node, which is node 1 in this case, a typical leak would be, say, node number 8: something you're never going to use again, so ideally you don't want it to be there, but it is still being referenced, so it's not going to be garbage collected. Why is this significant? It's significant when you have multiple such instances in your code, and perhaps this is a real-world object with lots of properties and lots of functions, quite significant in size.

It's important to know that garbage collection is not part of the spec, which in turn means that each browser implements it differently. That does not mean that a leak in Safari won't be a leak in Chrome, because mark-and-sweep, the algorithm I showed you, is pretty much the basis of all modern browsers. Its implementation varies, of course: some have something called a scavenger, there's incremental collection, there's generational collection. But the nitty-gritty only changes the numbers slightly; a leak in Chrome is going to be a leak in Firefox, is going to be a leak in Opera.

This is something known as a sawtooth graph. What I've done is plot memory usage versus time. Just as I said, your memory usage increases over time, then there's a point where the garbage collector thinks, "hey, we need more memory," and it is able to reclaim everything. This is an ideal scenario, of course, but this is pretty much how it goes.
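To make the reachability idea concrete, here is a minimal sketch (the `cache` name and sizes are hypothetical, not from the talk): an object we never use again stays alive, like node 8 in the graph, simply because something reachable from the root still references it.

```javascript
// A lingering reference keeps an otherwise-useless object reachable.
const cache = [];

function process() {
  // A reasonably large object we only need once.
  const bigObject = { payload: new Array(1000).fill('x') };
  cache.push(bigObject); // still reachable from the root via `cache`
  return bigObject.payload.length;
}

const size = process();
// We never touch bigObject again, but because `cache` holds it,
// mark-and-sweep still marks it live and cannot reclaim it.
```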
The collector says, "yes, I'm able to reclaim everything," then there's more usage and another reclaim, and this is a very healthy graph. This, on the other hand, is very typical of a leak. Take a look at the part at the bottom, right over here and over here. With each collection cycle there is a growing gap at the bottom, which says that the amount of memory you can reclaim is decreasing. So some small segment of your code is creating some sort of object somewhere that every cycle is unable to collect, and no doubt you're heading towards a memory shortage.

Now, we all know that kind of person, right? Who, when they learn a new word, uses it everywhere, all the time, without context. Don't be that person. This is not a leak. No doubt we're headed towards memory bloat here, but as you can see, every time the garbage collector runs, there's nothing it's able to reclaim. That means this is a scenario which is simply memory-intensive. Think of an infinite scroll where you're loading high-definition images into memory. This is more a case of bad design, bad architecture, than of a memory leak. The point I'm trying to make is that it's quite contextual, and every boom in memory usage is not a leak.

That's enough talk. What I'll do now is walk you through creating a memory leak. I have really small code snippets; we'll try to make a memory leak, and we'll use Chrome's dev tools to actually analyze it and see the memory traces.

So his question is: if you have an infinite scroll with memory-intensive usage, is there nothing you can do about it? Nothing, in the sense that it's something you should not be doing ideally, because you're not supposed to dump so much into memory. If you look at Facebook, whenever you open a video or a picture, it opens on a separate layer, and every time you click it, that separate layer is loaded separately.
So even though you have infinite scroll there, that layer is garbage collected once you press escape. It's not something you're supposed to do in the first place, but if it is done, it's not a leak; there's nothing to fix about it as such.

For those who don't know, I'll quickly run through closures. This is a closure: an outer function returns an inner function, and the inner function has access to hugeString because of the way scoping works in JavaScript. The fun part is that even after the outer function has returned and its context no longer exists, hugeString still exists; the inner function has access to hugeString even after the context of the outer function is over.

I'll be using the Chrome memory profiler now; do have a look. I go to the Timeline tab over here, uncheck everything except memory, and press record. Have a look at this: I have a function run that is executed every half a second. Inside it, I have another closure, doSomethingWithStr, and there is a string str of a million elements. Of course, this is exaggerated just to prove a point. I don't really expect a leak here, because every time run is executed, nothing is holding on to its context, so I expect everything to be garbage collected. Let's see how Chrome handles this.

I stop recording. This at the bottom here is your time frame; we've analyzed this for roughly 40 seconds. On the left side here is the memory graph. We started from a mere 1.5 MB and peaked at 19.7 MB. This blue thing you can see all the way across is the memory usage pattern, and below it are the memory peaks. Every time there's a peak, which I assume is the str of a million elements, the garbage collector runs and is successfully able to clean it up.
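The first snippet from the demo looks roughly like this (names and sizes are approximated from the talk; the real demo drives run with something like setInterval(run, 500)):

```javascript
// First snippet: a closure over a huge string, but nothing outlives run().
function run() {
  const str = new Array(1000000).join('*'); // roughly a million '*' characters

  function doSomethingWithStr() {
    return str.length; // closes over str
  }

  return doSomethingWithStr();
}

// In the talk this runs every half second via setInterval. Each call's
// str and closure become unreachable as soon as run() returns, so the
// garbage collector reclaims them: no leak.
const len = run();
```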
As I said, there's nothing holding on to the context there, so no surprises here. Let's have a look at another snippet. We'll repeat the exercise: Timeline, uncheck everything but memory, record. What I'm doing here is similar: a million-element string, with run executing every half a second. My closure here, logit, is running once every one-tenth of a second. So even after run completes executing, logit still exists, and because it's a closure it has access to str, which is why str should also exist even after run has completed. For every copy of run that gets created, I'd expect a separate str and a separate logit kept in memory, because logit is continuously running, once every one-tenth of a second.

Again, 18.7 MB, and surprisingly no leaks here. So why, when we have a copy of that in memory, do I see no leaks? Chrome is smart enough, in fact most modern browser engines are smart enough, to realize: "hey, logit does have access to str, but it's not really using it. It's not doing anything with it. So let me not keep it in memory." So even though a new copy of str is created every half a second, it is also marked for garbage collection, and this is precisely why you see no growth: every time there's a peak, it is collected, and there is no bloat or leak in the graph as such.

Similar exercise: Timeline, uncheck everything, record. What I'm doing now, if this is visible, is just a combination of the first two. I have a million-element string, str. I have one closure, doSomethingWithStr, that refers to str. I have logit, which outlives run but does not refer to str. doSomethingWithStr does refer to str, but its context is over the moment run finishes executing; it doesn't live on. So again, no reason for str to be kept in memory. Let's see how this works out. And we have ballooned to 180 MB in roughly, what, 40 seconds?
And this graph, as you can see, is increasing. Some is collected here, but it keeps growing: slowly, slowly it's increasing, and over time you are headed towards a memory shortage. So why this weird behavior, when the previous two seemed pretty straightforward and this is just a combination of them?

Understanding how closures work in memory is all it takes; it is nothing complicated. Both these closures, or rather any n number of closures in that scope, have access to str. And the point is that all of them, no matter when and where they refer to str, should get the same value. Even if one function mutates or updates str, the next function referring to it should get the latest updated value. The way engines implement this is by keeping the variables in a shared dictionary of sorts, the lexical environment. The moment even one of the closures refers to str, an entry for str is made in that environment, and that environment is then available to every closure in the scope, whether or not it uses str. So doSomethingWithStr doesn't exist after run returns; it is garbage collected. logit never touches str. But str's entry in the environment has been made, which is why every time run executes there is a new copy of str, again a million elements, kept alive. And this is exactly what leads to a leak. Not what you expected, perhaps; really simple stuff, but this is typically how the use of closures can lead to memory leaks.

To fix this, all you need to do is write, right about here, str = null. That works because logit is not using str, and doSomethingWithStr is done with str once it has used it. So once you null it here, you have curbed your leak.

That's about it for the code. There's another small snippet I'd like to show you. All I did was go to jQuery's GitHub repo and search for "memory leak," and this is just one of the snippets I got.
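A sketch of the leaky third snippet with the fix applied (names approximated from the talk; in the real demo, run and logit are each driven by their own setInterval):

```javascript
// Third snippet with the fix: null out str once its only user is done.
let lastLogit = null;

function run() {
  let str = new Array(1000000).join('*'); // roughly a million characters

  function doSomethingWithStr() {
    return str.length; // this reference puts str into the shared environment
  }

  // logit never uses str, but it shares the environment that now holds str,
  // and it stays alive (via setInterval in the talk), keeping str alive too.
  lastLogit = function logit() {
    return 'tick';
  };

  const used = doSomethingWithStr();
  str = null; // the fix: the million-element string becomes collectible
  return used;
}

const n = run();
```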
This is an exercise I highly encourage you to go through yourself: you can see the fixes they have made over time in production, live code, to curb leaks. So, how many of you use front-end frameworks? No kidding, there is actually a front-end library called fart.js that emits farting sounds when you scroll, because silent scrolling is so mainstream. But regardless of what your poison is, whether it's Ember, Backbone, or React: are you leak-free? No doubt any framework, including Node.js, that you may be using has been battle-tested, so the framework itself is not causing leaks. However, your implementation on top of these tools is not necessarily leak-free. Using these tools, there are good chances that you have leaks if you're doing the same kind of stuff I showed you earlier.

Which brings us to the next question: is this a possibility, or is it just theory? It is pretty much achievable, and I would urge all of you towards it; this is what a coder should aspire to. When you have memory leaks in mind, the quality of your code is inherently bound to improve, and the thought process behind the execution of any idea you have also improves. It has to be better; that's how it works.

A word on memory optimization. Imagine you have a code snippet that executes using 100 MB, and you rewrite it smartly, in a better manner, to come down to 25 MB. The 75 MB you have gained is not curbing a memory leak; it is optimization. Of course, the 75 MB you gained does give any leak you might have had a longer lifespan. But these are two very distinct, separate things and should be viewed that way.

As for the way forward: just like the Chrome Timeline tool I showed you, there is also Chrome's profiler, where you can actually take snapshots of the heap. You can see what kind of objects are open and which functions are taking up the most memory.
So get to know your tools. You have similar tools on the server side: the V8 profiler, node-inspector, and many others. This translates into a few points. Learn to let go: objects that are no longer required, learn to delete them, to de-reference them. The way to go about that is to understand hoisting and how it works in JavaScript, and to understand scope: JavaScript has function scope, not block scope. ECMAScript 6 has something known as let, which is a new kind of scoping; understand how that works. Unbind event listeners. Event listeners also cause a lot of leaks, because they end up acting like closures, especially listeners bound to DOM elements that don't even exist anymore. Make it a good practice to unbind those kinds of listeners. And closures: almost everything can be written using a closure, but that doesn't mean it has to be. Try to understand what warrants the use of a closure, and don't just stuff one in wherever you can, randomly.

There is no rule of thumb, so to say, no fixed way of interpreting things; it is all about context. So measure your memory traces, ask whether it's a leak or just bad design or bad architecture, fix it if required, and repeat.

There are some really interesting case studies on memory leaks. Walmart had a Node.js leak that they have documented. The snippets I showed you are from Meteor.js, which has blogged about them excellently. For those of you supporting IE 6 and 7: that circular reference leak was also really well blogged about. And of course, pick up your favorite JavaScript library, check their git commits, check their logs, and see the fixes they have made against memory leaks.

I'll be taking questions now. Again, it's too contextual: just because you are multiplying objects doesn't mean you have a leak, but that doesn't mean you can skip managing them explicitly and stay leak-free either.
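The listener point can be sketched like this. I've used a minimal hand-rolled registry so the idea is runtime-agnostic; in the browser, the same reasoning applies to addEventListener / removeEventListener on DOM elements (all names here are illustrative, not from the talk):

```javascript
// A bound handler keeps its closure, and any data it captured, alive.
const listeners = new Set();

function bind(handler)   { listeners.add(handler); }
function unbind(handler) { listeners.delete(handler); }

function attach() {
  const bigData = new Array(100000).fill('payload'); // captured by the handler
  const handler = () => bigData.length;
  bind(handler);
  return handler;
}

const h = attach();
// While h sits in `listeners`, bigData cannot be collected, even if the
// thing it was attached to is long gone.
unbind(h); // unbinding drops the registry's reference; bigData can now go
const remaining = listeners.size;
```

This is why the talk recommends unbinding listeners on DOM elements that no longer exist: the element may be detached, but the registered handler, and everything in its closure, stays reachable until you remove it.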
So the general rule of thumb is that there is no fixed way of saying "because I'm doing this, I'm going to have a leak." It depends on any number of other factors in how you've actually implemented or executed a particular thing.

So his question is: is it right to say that memory leaks will not crash you, just slow you down? To a limit they will slow you down, and eventually, if the browser does run out of memory, your tab, just like I showed you, will crash. And if this is on the server side, you'll have separate issues as well: your module might shut down, your program might start acting weird. As I said, the behavior of a system low on memory can't be predicted. So no, that's not entirely true; depending on what kind of crunch you have, you may eventually shut down.

Yes, in that particular case, the example I showed you, which is the case Meteor.js faced, they just nulled it at the end, because only one closure was using it, and only once. The rest of the closures were just repeating and not using it. So once that use was done, they nulled it, which worked for that program.

As for tooling: you have these kinds of dev tools; there are such things on the server side as well. I'm not aware of that particular one; I'll look it up and get back to you. But yes, there is: in Chrome you have tools like these profiles, where you can take snapshots of the heap, which will show you exactly which function is eating more memory, which objects are open and not being garbage collected, and you can pretty easily trace it down to the line or the function that is causing it. Do look it up; a guy did this on Facebook.
Of course, that bug doesn't exist anymore, but he really breaks it down for you: without any prior knowledge, he just used the Chrome dev tools and an open Facebook window and tracked down the function that was eating a lot of memory.

I used to face a similar issue, but that was because of the number of connections the browser was able to support. The connections themselves do not hold much data, and these crashes are typically caused by data hoarding, by having a lot of data in memory. Most likely what caused your issue was the number of open connections a window can support. Maybe; I'm still not sure, because I faced a similar problem as well. I'm not aware of the specifics there; connections are a totally different domain, but I can pretty much assure you it's not a memory-related thing.

Yep, so this is the second snippet. What is the difference between the second and the third snippet? In the second snippet, logit is continuously running, so I would have expected str to remain too, because logit stays in memory. But the garbage collector is smart enough to realize that logit is not referencing str, not using it in any way, so str is marked for garbage collection. In the third case, logit is still not referring to str; however, the other closure is, which is why an entry for str is now made in a kind of shared dictionary, known as the lexical environment. The lexical environment needs to be stored because all the closures have access to it, and any closure referencing str should get the same value. Does that answer your question?

In this particular case, yes: once this is done executing, we are not using str anymore. So the wise thing here, the way the Meteor people fixed it, is by nulling str.
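The difference between the second and third snippets can be put side by side (names and sizes approximated from the talk; the real demos keep logit alive with setInterval, which the module-level variables stand in for here):

```javascript
// Second snippet: logit outlives run2() but never mentions str,
// so the engine doesn't capture str for it — no leak.
let logit2 = null;
function run2() {
  const str = new Array(1000000).join('*');
  logit2 = function logit() { return 'tick'; }; // no reference to str
}

// Third snippet: a sibling closure references str, so str lands in the
// shared lexical environment that the surviving logit also keeps alive —
// every call to run3() pins a fresh million-character string.
let logit3 = null;
function run3() {
  const str = new Array(1000000).join('*');
  function doSomethingWithStr() { return str.length; } // forces the capture
  logit3 = function logit() { return 'tick'; };        // shares that environment
  return doSomethingWithStr();
}

run2();
const len3 = run3();
```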
This is not a universal solution. If, let's say, you have five or ten such closures and two of them are using str, one of them repeatedly, then this solution will not work; then you have to question why the closure is designed like this in the first place.

I mean, if you're not deleting views that you're no longer using, yes, that falls under the simpler category. Of course, you have to check whether Backbone itself has a mechanism for that; if it doesn't, then it has to be done by hand.

No, but then your object is not... so in this particular case, the str binding itself will remain in memory, because it is part of the lexical environment of the closures, but it's no longer eating up any memory. When you set str to null, the entry for str is not garbage collected, but the million-element string it pointed to can be, so it's not causing any significant damage anymore.

I'm sorry, could you speak up? There's a lot of noise. I'm sorry, I heard almost none of that; maybe you can catch me later. I think we're out of time. Any other questions? Do catch me, I'll be around for a bit. Thanks, guys, thanks a lot.