Hello, my name is Steve Cezina. I live in Colorado and I'm a UX engineer at CrowdStrike. This is a talk about memory leaks, specifically memory leaks in JavaScript single-page apps, like the kind of apps that we write with Ember. I'm going to talk about how we, as a community, can put an end to them.

First, I want to make sure we're all on the same page by defining some terms and concepts. First is the heap. Every physical device has a certain amount of RAM. The device allocates this memory to the browser, which then allocates memory to the JavaScript engine. So the heap is what we call this finite chunk of memory that is dedicated to the browser tab running our app. This determines how much memory our apps have to work with, and it varies by browser, OS, and device.

So how do our apps use this memory? Well, every value in JavaScript is automatically stored in the heap as your program runs. This includes objects, functions, arrays, strings, numbers, booleans, and DOM elements. These things all take up memory while they are in use by our app. DevTools provides us with the heap snapshot tool for visualizing the memory allocated in the heap at any given moment. You can see here that after capturing a heap snapshot of this chess app, we can see the heap size measured in bytes, and we can search the snapshot to see various classes from the app that are currently present in the heap.

Although we can see what's in the heap, we have basically no control over memory management. Memory is automatically allocated by the JavaScript engine. So if memory is automatically allocated, that means memory must also be automatically freed up. Otherwise, we would eventually run out of memory. Garbage collection, or GC, is the process used by the JavaScript engine to free up memory. It does this by removing objects from the heap once they are no longer referenced, and it does this as your app is running.
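To make the allocation story concrete, here's a small sketch in plain JavaScript. The names are hypothetical, loosely inspired by the chess app on the slide; the point is just that ordinary code allocates heap memory, and that objects become eligible for collection once nothing references them.

```javascript
// Hypothetical example: every value created here lives in the heap.
function buildBoard() {
  const squares = new Array(64).fill(null); // array plus its 64 slots
  return { squares, turn: "white" };        // a plain object referencing them
}

let game = buildBoard(); // `game` keeps the board state reachable
game = buildBoard();     // the first board is now unreferenced, so the
                         // garbage collector is free to reclaim it later
```

We never ask for that memory back explicitly; dropping the last reference is all it takes.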
As soon as the JavaScript engine decides it needs to reclaim memory, it will briefly halt the execution of your program to run a GC. We can see this visualized here using another DevTools feature called the memory allocation timeline. As we use the app, we can see memory that is allocated represented by the blue bars, and memory that is reclaimed by the garbage collector represented by the gray bars. Normally, we don't have to think about this process. It just works. Well, most of the time. The rest of this talk is about the times where we do have to think about it.

All right. So if the JavaScript engine automatically reclaims memory, how does it know what it can reclaim and what our program still needs? Well, GC roots are objects that the JavaScript engine uses to determine what can safely be reclaimed. Each JavaScript environment has its own set of GC root objects, but you'll probably recognize them. Node.js has the global object, the browser has the window object and the document DOM tree, and DevTools also creates a GC root when it is active. These GC roots are separate, but they're all part of the same heap. When a GC needs to run, the garbage collector begins at each GC root and recursively traverses every object reachable from it, marking everything it finds during this traversal. Then it sweeps back through the heap and reclaims anything that wasn't marked.

So you can imagine we might end up in a situation where we have object references that hang around unintentionally, preventing garbage collection from reclaiming objects that our program is done using. This situation is called a memory leak. When this happens, the leaked memory stays around in the heap indefinitely until the object's dominator is garbage collected. Dominator is just a scary word for some other object in the heap that's still holding a reference to the leaked object.
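The mark-and-sweep idea described above can be sketched in a few lines of plain JavaScript. This is a toy illustration only; real engines like V8 use generational, incremental collectors that are far more sophisticated.

```javascript
// Toy mark-and-sweep: `heap` is an array of objects, each with an optional
// `refs` array pointing at other objects; `roots` are the GC roots.
function collectGarbage(heap, roots) {
  const marked = new Set();
  function mark(obj) {
    if (obj == null || marked.has(obj)) return; // already visited
    marked.add(obj);
    (obj.refs ?? []).forEach(mark);             // traverse references
  }
  roots.forEach(mark);                          // mark phase: start at the roots
  return heap.filter((obj) => marked.has(obj)); // sweep phase: keep only marked
}
```

Anything not reachable from a root, even objects that reference each other in a cycle, gets swept.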
Now, imagine that the dominator is a long-lived object like an Ember service or controller, or worse, one of our GC roots like the window object. Since this dominator will stick around for the life of the app, so will the leaked object. If a memory leak is exercised over and over, say a leaky component being rendered many times, the heap will fill up with these objects that we're no longer using, reducing the amount of memory available for our program to use. This is a problem because as the heap grows, GC will begin to take longer to execute and it will begin to run more frequently. Longer, more frequent GCs will cause poor perceived performance for the users of our apps because more time is spent paused for GC and less time is spent running our code. Given a bad enough leak, the heap can also run out of memory, causing the app to crash. And all of this varies based on the device, browser, and OS. Devices with lower overall memory or lower CPU speeds will experience memory issues much sooner, and mobile browsers will often aggressively kill tabs that begin using too much memory.

So hopefully you have a basic understanding of how the browser manages memory and how things can go wrong. Now I'm going to share a story about memory leaks and me. Before CrowdStrike, I worked on a very ambitious Ember app. It was the kind of app that users leave open in a tab for days on end. The app had a lot going on. Hundreds, maybe thousands of components on screen at a time, drag and drop, keyboard shortcuts, in-app video recording, real-time data updates. These sorts of apps are sometimes called productivity apps or workspace apps, and they can be especially sensitive to memory issues. Apps like this often deal with lots of data and components, meaning lots of objects on the heap, and long sessions, meaning lots of time for memory issues to compound.

So out of the blue one day, our customer support team began receiving reports from users that the app was randomly crashing.
The crashes were unpredictable and there were no clear reproduction steps. At first, it was just a few customers reporting this, but after a few weeks, the volume of complaints grew. Users were experiencing multiple crashes per day, often losing work that they were in the middle of. They began to perceive the app as unreliable and unstable. We even began to see people complaining about this issue on our NPS surveys, and our NPS scores were being dragged down. NPS is a measure of customer satisfaction, and everyone in the company paid attention to it.

As you can imagine, fixing this issue became a top priority for engineering. There was high pressure to fix the problem, but no reproduction steps, no clear indication of what the problem was or whether it was even a problem with our app. This was a recipe for disaster. We had a vague idea that this might be a memory-related issue, but it just as easily could have been an issue with an add-on, with Ember, or even with Chrome. And no one on our team, including me, was an expert in memory. I was like Ed here. I knew some of those words, but I had never faced down an app-breaking memory leak before.

But we had to do something, so our team decided to dedicate two developers, I was one of them, for a full sprint to do whatever it might take to fix the issue. So we read up on memory leaks. This repo, memory leaks examples, became our guide. If you're not already familiar, it's a great resource and goes into more detail on how to actually find and fix memory leaks than I have time for today. So without reproduction steps or a clear root cause, we set out to fix every memory leak we could find in our app in hopes that that would fix the problem. We captured countless heap snapshots. We recorded countless memory allocation timelines. We would run a test. We would open DevTools, capture a heap snapshot. Then we'd search the snapshot to see if that part of our code had caused a leak.
Then we'd fix whatever code we suspected was causing the leak. And then we would do it over and over again. This is a very manual, slow process. And heap snapshots are definitely not easy to interpret and make sense of. And once you find the leaky code, it's not always immediately obvious what's even causing the memory leak. But we were making progress in fixing leaks.

So one sprint turned into several. We merged PR after PR, hoping that each memory leak fix would be the one to finally fix our problem. But every time we fixed a leak, we would uncover another. This process led us to find and fix memory leaks in some of the add-ons we used. We even found a leak that was introduced by a recent upgrade of Ember itself, which fortunately had already been fixed in a newer release, so we just had to upgrade. But after all of this, we were still getting reports of the crashes. We had ruled out third-party browser extensions as a cause. We had ruled out low-end devices as a cause. We even watched customers use our app over Zoom, hoping to see it happen, but it never did. And we even thought about just giving up and engaging a consultant.

Eventually though, we realized that continuing without a reliable reproduction and root cause would keep getting us nowhere. So our last idea was to install a session recording tool in our app in hopes that we could capture a recording of a session that crashed and finally get something to go on. So a couple of weeks go by and nothing. No crash sessions had been captured, and we had stopped receiving complaints from customers about this issue. Months after we started fixing memory leaks and weeks after our last memory leak PR had merged, the problem just mysteriously went away. We suspect the issue was probably a bug in Chrome that had been patched in a recent release.

So what's the point of all this? The point is that memory management, when it goes wrong, is hard to deal with.
All of the memory leaks that we fixed were very subtle. They were easily introduced by well-meaning, talented developers. They snuck through PR reviews and QA. They were present in our app, our add-ons, and even in Ember itself. And we as a community don't have a good way to detect memory leaks: to find them, fix them, much less prevent them in the first place. And until a user complains that the app is slow or randomly crashes, we might not even know that there is a memory leak in our code.

So what do we do about it? Well, we start by using Ember. Ember happens to give us a lot of tools to avoid memory issues. And although I happened to run into a framework-level memory leak, they are actually very rare and the core team cares deeply about preventing them. In fact, Ember is one of our best tools for avoiding memory leaks. There are entire classes of memory leaks that Ember developers just don't even have to worry about.

For example, Ember's declarative template rendering system means that we pretty much don't ever have to think about leaking DOM nodes. The elements we render in our templates are automatically torn down by Ember once we are done with them. And we never have to think about it. These days, Ember isn't unique here. Other frameworks have the same benefit, but this hasn't always been the case in past eras of JavaScript development. Similarly, Ember's on modifier, used for adding event handlers in our templates, automatically takes care of removing event listeners. Forgetting to remove event listeners is one of the most common sources of memory leaks, and Ember handles this for us. And even when using our own custom modifiers to interact directly with the DOM, we are given a nice API to clean up after ourselves. So good memory management hygiene is built into Ember's common constructs like components, routes, controllers, templates, and services.
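To illustrate why forgotten listeners leak, and what the cleanup Ember performs for us looks like, here's a hedged sketch in plain JavaScript. It uses a generic EventTarget in place of window, and the class and method names are hypothetical; this is not Ember's actual implementation, just the pattern it automates.

```javascript
const bus = new EventTarget(); // stands in for a long-lived target like `window`

class Dropdown {
  constructor() {
    this.isOpen = true;
    this.onScroll = () => { this.isOpen = false; };
    // The bus now holds a reference to onScroll, whose closure holds `this`.
    // If the listener is never removed, this Dropdown can never be collected.
    bus.addEventListener("scroll", this.onScroll);
  }
  // The cleanup step that Ember's on modifier (and modifier teardown hooks)
  // perform for us automatically:
  teardown() {
    bus.removeEventListener("scroll", this.onScroll);
  }
}
```

Every render of a component like this without the matching teardown strands one more instance in the heap, retained by the event target.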
And the same memory management tools that Ember uses internally are now available to us via the Destroyables API, introduced in version 3.22. Destroyables give us a robust set of APIs for managing object life cycles and cleaning up objects once we're done using them. So Ember does a lot for us when it comes to memory management, probably more than most of us realize.

But as we saw earlier, there are still opportunities to introduce memory leaks in our code. Here are some of the most common causes. We won't be covering these in detail, but it's likely that you have one or two of these occurring in your app right now. Once these issues find their way into our code, we're pretty much on our own as developers when it comes to dealing with them. Manually capturing allocation timelines and sifting through heap snapshots is basically the state of the art for finding and fixing memory leaks in our apps, assuming that we even know we have a leak in the first place. Like pretty much any other bug, memory leaks are much easier to introduce than they are to find and fix. And it's best if we can just avoid introducing them in the first place. That's why we have tools like automated testing, TypeScript, and linters. These tools have eliminated entire classes of bugs from our code bases. But this sadly doesn't exist for memory leaks. We often leave it up to our users to tell us that we have a memory leak, although they don't know that's what the issue is. We can do better.

So out of frustration with the experience I just shared, and after some long thoughts in the shower, I realized that all of these problems could be solved. Thanks to Ember's strong conventions around testing and some open source packages from the Chrome team, I realized that we have the technology. So I wrote an Ember add-on called Ember CLI Memory Leak Detector that solves some of these challenges. It works by recording a heap snapshot after your test suite finishes running and the application has been torn down.
The heap snapshot is searched for any retained instances of ES classes defined in the host app or add-on. If it finds any retained classes in the heap snapshot, we know we have a memory leak: the test suite fails and a report of the retained classes is logged to the console output. This tool can be used after a full test run via ember test, for example in a CI environment, ensuring that you never introduce a new memory leak into your code base. It can also run on smaller sets of tests via ember test --filter, or repeatedly while you're actively writing code via ember test --server. This means you can quickly tell if a test module or even an individual test exercises leaky code. This dramatically speeds up the developer feedback cycle when you're fixing memory leaks. The process of running a test, opening DevTools, capturing a heap snapshot, and searching it for your code now takes seconds instead of minutes. And it's integrated right into Ember's test harness.

Since the add-on tells you which classes have leaked, you are able to narrow down to a set of test modules, then a set of tests, then a single test, and then you just change code in the leaky class until the test passes. It's like test-driven development for memory leaks.

I'm really excited about this tool and the potential it has to benefit the Ember ecosystem in a few ways. First, it gives every Ember developer, including you, the tools to find and fix memory leaks quickly without needing to be an expert in memory allocation or heap snapshots. Second, it gives add-on authors and users of add-ons a shared way to understand, report, and fix memory leaks in add-ons. And third, it can prevent Ember developers from introducing new memory leaks into their apps. I would love to eventually get this add-on added to the default blueprint for new Ember apps.
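Conceptually, the check the add-on performs amounts to intersecting the class names defined in your app with the constructor names still present in the post-teardown snapshot. The sketch below is a simplified illustration of that idea, not the add-on's actual implementation, and the function name is hypothetical.

```javascript
// Hypothetical sketch: after the app is torn down, any app-defined class
// still present in the heap snapshot indicates a leak.
function findLeakedClasses(appClassNames, snapshotConstructorNames) {
  const retained = new Set(snapshotConstructorNames);
  return appClassNames.filter((name) => retained.has(name));
}
```

An empty result means a clean run; anything else fails the suite and names the leaky classes.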
Between Ember's strong memory hygiene and Ember CLI memory leak detector, the Ember community has the tools to work toward a future where every Ember app is free of memory leaks. There's still work to do here, so let me know if you're interested in helping out. Before I end, I wanna say thanks to Alex Ford and to TrueCoach for making development of this add-on possible. Thank you.