So, we're going to talk a little bit about performance and what performance actually means: how do you profile it, how do you make decisions about it, how do you do the science of profiling web applications. I love this quote from Sir David Attenborough that basically says, you have to steer a course between not appalling people with your opinions, but at the same time you can't mislead them if there are actually problems. Crying wolf in enterprise software is a big problem because it can come back and bite you pretty hard. So I'm Landon, Landon Noss, I'm the Kermit guy on Twitter, you may have seen the avatar at some point. That's me, that's who I am. I've been writing Ember now since 1.2, or maybe a little bit earlier than that. I work for Sony here, I actually work down in Orange County, I'm a former Gaikai guy, so I work on PS Now, PlayStation Now. We're effectively the Netflix for games. You can hop into our app, click a title, launch it, stream it, it's pretty rad, you can do it from almost anywhere. PS Now was really important from Sony's perspective as a technical thing. It was the first time that Sony had ever done HTML, DOM, and JS development at large. For instance, the PS4 store, if you've ever used a PlayStation 4, is actually powered by WebGL. And PS Now was kind of the first time we said, hey, what if we just did this in the DOM, and what if we did it in Ember? And we did. We launched 7/7 last year, and we were the first Ember app Sony put in production. It was the first time our team had ever written Ember, and it was the first thing that I worked on when I started a couple years ago. The PS4 is basically this incredibly, cosmically powerful system, of which web developers get just an itty-bitty living space. If you've ever seen the stats for the PS4, if you care: the browser is effectively Safari 7, although that's relatively recent. It's an 8-core Jaguar AMD CPU, you get 8 gigs of memory, and a Radeon-based GPU that's pretty much custom.
But the browser doesn't really get any of that. We get one core, and we get a very, very, very small fraction of memory for dealing with textures. Anything from doing transforms to opacity creates a layer, and that layer creates a texture. That texture has to go into memory, and we only get a small fraction of what you typically get on the web. Typically, on the web in modern Chrome, you get half a gig of texture memory. That can help you build really cool websites, but it's not really great for performance on a really resource-constrained device like a smart TV and, shockingly, a game console. So I'm going to talk a little bit about science in this artisanal environment. The act of performance tuning works effectively like this. At any point, you can come through and say, I have this scenario. Something is slow, something is wrong, something is bad. It can be an easy solve; you can immediately get to a solution from that point. Sometimes it takes a little bit more effort, and you have to have some insight into the problem, or you make some clever observation about the problem, and that can lead you to the solution. But if that doesn't, you have to form some hypothesis about why this is happening: you're painting too often, you're painting too much per frame, you're not using rAF, things like that. But generally speaking, the sweet spot is effectively right here: you need to be able to understand your tools, you need to be able to analyze what your tools are saying to you, and you need to be able to make smart refactorings on top of that information, and just continuously loop in that process until you solve the problem. Realistically, though, this is more like what happens when you're actively developing. Something wrong happens, you have to think about it for a little bit, you make some guesses, you do some profiling, you kind of yell at whoever said this was a problem. What do you mean, slow?
What does slow actually mean? And then you enter the kind of Bermuda Triangle here, where many developers have been lost to the ages forever, of just getting stuck on a problem and basically saying, why don't we just not ship that feature? So the easiest way to think about it is that simpler theories are preferable to more complex theories, because they're easy to test. You ask a simple question, you get a simple answer, and that helps you. Relatively speaking, the simplest explanation for a particular performance problem is usually the correct one. You're painting too much, you're forcing an invalid layout, you're not using transforms, you're not using opacity, you're thrashing the DOM, whatever it happens to be, except in those small cases where that's completely untrue and it's something like a browser bug, or a core bug in the run loop, who knows, it could be anything. There are some things that can really help you in your analysis. A lot of people use Chrome; I personally use Chrome a lot. It has some hidden features that are really helpful for UI developers. They're hidden away as DevTools experiments; you can get there from Chrome flags. The important ones, for me anyway, are custom themes, because I need a dark environment or I can't think right, and the layers panel, which we'll get to a little bit later. Really the only two things that are necessary for Ember developers are turning on the promise inspector, because we use promises everywhere, and being able to step into async code, which basically means that backburner no longer eats every promise and every exception; you can actually trace where an exception actually came from in a call stack. The other thing that's helpful is the perspectives UI. However, I haven't used that in forever, and I don't know why it's in that screenshot, so I'm getting ahead of myself. The theme I use in a lot of these screenshots is Zero Dark 30. You can actually theme your inspector now, which is pretty rad.
You have dark themes, Monokai, et cetera. They're pretty helpful. So back to the whole sciency thing. What is a slow frame when it comes to web performance? I know this is probably pretty hard to see for those in the nosebleeds, but this is a timeline view in the Chrome profiler. And there are a couple of really, really important things on display here, specifically how long it takes for your site to start painting, which is kind of the first major performance metric you should look at: how quickly do you start painting. The second is effectively how quickly all of your resources parse and evaluate, and then how long it takes before you get to the point where your DOM is ready, or its load event happens. This site in particular is Google, and Google.com loads relatively quickly. We get to first paint within 400 milliseconds, roughly. DOM content loaded happens just after that. And then from there to DOM load, they do some evaluation and then a lot of lazy loading after that. Basically, the top panel in the timeline view is your scripting, loading, rendering, and painting chart. It's kind of a rollup of where your profiler is spending most of its time evaluating its content. The blue bar at the top is effectively what your heap looks like. If you've ever done any kind of computer science, your heap is basically just what memory is getting allocated, for what, and when. Those little drops you'll see are when garbage collection might happen; it might just be that you nulled some properties and that eliminated a lot of stuff in your heap. Next up is your frame timing panel, where there be dragons, and we'll get to those a little bit later. And then your call stack flame chart. Out of everyone in the audience, just hands up real quick if you know what a call stack actually is. A decent amount of hands up, that's great. Another example: Facebook. As we all know, they kind of work with React.
React's profile looks a lot different from something like an Ember app, or just a straight-up JavaScript application like the Google homepage. They hit first paint, again, about as quickly as Google does, which is great, but the time between load and DOM content loaded is a lot longer, and then the time between DOM content loaded and the document being ready is very large. So what does FastBoot look like? FastBoot's really interesting. You get the first paint, again, in about the same amount of time that you would on other sites, but you get to DOM content loaded a hell of a lot faster than you typically would with a non-FastBoot application. About 650-ish, 690-ish milliseconds. And then you're completely loaded, and you'll see that these spikes along here, these guys, are actually just render frames being run. So that's what your idle effectively looks like. And that's less than a second, and that's awesome. That's about as good as you can realistically get. Another example is Bustle. I don't know if anyone knows what Bustle is; it's a content website, a news website, written in Ember. It got popular for a hot minute because it does a lot of interesting things under the hood. And again, their load profile is also pretty damn good. They're under two seconds. Their first paint takes a little bit longer; however, this is actually a completely unfair test, because the first chunk of that stuff is the unload from the previous page. So that's my fault. Again, these markers that I keep pointing to when I look at these charts: the first is basically, how long does it take the DOM to get to a point where it can start painting pixels to the screen? It's the first really important metric of how much work you're doing before the browser can actually start showing you pixels. The next one is DOM content loaded.
All that basically means is: I've evaluated everything that you said is in your document. I parsed it; I may have had to evaluate it, or put it in a VM, or compile it in some fashion. And then when that's finished, we get to an idle state, which, again, is what you see from those bars, at least in this case; it's where you would typically see something like a jQuery ready event, or effectively the document is ready so you can do stuff with it. Again, the same thing painted over Google and Facebook, and that's it for now. So another cool thing about Ember internals, and I'm a big fan of using the framework for what's there: Ember Instrumentation. It's not a very well publicized API, and that's kind of a shame, because it's great. It allows you to wrap any code that you would normally execute in an instrument handler, much like you would do with PubSub, for instance. You pass some particular payload to instrumentation, and then some callback that, as soon as it gets executed, will run your normal code. So in this case, we're in a model hook: you would fetch data, you would call your store, as you normally would in any other case, and then elsewhere in your application you can actually subscribe to when that instrumentation fires and get some timing around how long it actually takes to load a particular model hook. Or, if you're setting up a bunch of very complex components and you want to make sure the timing is within a particular delta, you can use instrumentation to do that sort of thing for free. It's built in; it's been around for a while. So it's really helpful for figuring out, there's a weird part of my code that does something I don't understand; we can create some instrumentation around that. Another example: if you have an observer that does a lot of work, you can actually create wildcards.
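To make that instrument/subscribe shape concrete, here's a framework-free sketch of the pattern, including the wildcard subscription. The event names and the implementation are mine, for illustration; in a real app you'd use Ember's own `Ember.Instrumentation.instrument` and `Ember.Instrumentation.subscribe` rather than rolling this yourself.

```javascript
// Minimal sketch of an instrument/subscribe timing API, in the spirit of
// Ember.Instrumentation. Names and payloads here are illustrative only.
const subscribers = [];

function subscribe(pattern, handler) {
  // A pattern like "model.*" subscribes to every event in that namespace.
  const re = new RegExp('^' + pattern.replace('.', '\\.').replace('*', '.*') + '$');
  subscribers.push({ re, handler });
}

function instrument(name, payload, callback) {
  const start = Date.now();
  const result = callback(); // run the wrapped work as normal
  const duration = Date.now() - start;
  subscribers
    .filter(s => s.re.test(name))
    .forEach(s => s.handler(name, duration, payload));
  return result;
}
```

Subscribing to `model.*` and then wrapping a model hook's fetch in `instrument('model.loadCart', …)` gives you per-hook timings without touching the hook's own logic.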
So if you have rigorous namespacing, you can say anything that's an observer is in the observer namespace; if you want to make sure your observers aren't going crazy, or you're not recomputing too many computed properties, or whatever, you can design an instrumentation system that helps you diagnose this stuff. So, what I was saying earlier: there are dragons involved when you get into an idle state in the browser. Chrome will lie to you, although bending the truth is probably a more apt way to put it. What requestAnimationFrame is really doing is just executing some work. Whether or not that call stack actually has a bunch of work in it is not totally relevant, but you'll notice the frame timing the Chrome profiler is telling me is that this frame took 450 milliseconds. Yet we can see right here that these little spines effectively coming down from the top, that's rAF, that's requestAnimationFrame. It's executing quickly; you're getting high frame rates. What's important about this is that even though requestAnimationFrame feels like a callback, it's not. It's an internal API into the DOM that's doing some really important work behind the scenes, and it does not necessarily mean frame rate. So if you're getting slow frames, it has nothing to do with rAF; rAF helps. What's more important is that it schedules work to get done. And as soon as that scheduling is finished, it'll figure out what the layout needs to be and paint any changes between the previous state and the current state. A lot of folks really just don't know that much about requestAnimationFrame and the internals of how it works. But what you're basically accessing here is a scheduler. And that's the important thing to understand: any time you run code on the scheduler, it involves a little bit of work in terms of layout and a little bit of work in terms of painting, sometimes a lot of work.
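One practical consequence of rAF being a scheduler is that you should batch your mutations so they all land in one layout/paint pass. Here's a runnable sketch of that batching pattern; in a browser, `flushFrame` would be the `requestAnimationFrame` callback, but it's called by hand here so the sketch is self-contained.

```javascript
// Sketch: treat rAF as a scheduler. Queue mutations and flush them once
// per frame, so the browser does one layout/paint instead of one per call.
const writeQueue = [];
let frameScheduled = false;

function scheduleWrite(fn) {
  writeQueue.push(fn);
  if (!frameScheduled) {
    frameScheduled = true;
    // In a browser you would do: requestAnimationFrame(flushFrame);
  }
}

function flushFrame() {
  frameScheduled = false;
  const work = writeQueue.splice(0); // take everything queued this frame
  work.forEach(fn => fn());          // one batch, one layout, one paint
  return work.length;
}
```

The point isn't the queue itself; it's that rAF gives you a natural boundary where all of a frame's mutations can be applied together before the engine lays out and paints.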
So that's just one little caveat to understand: the timing you see in the frame timing window is not necessarily your frame rate. Your frame rate looks a little bit more... sorry, I'm skipping ahead a bit for time, and I thought that slide was here, but it's not, so excuse me. Anyway, layout always precedes paints. Paints are where your frame rate can drop. And that's the most important thing I want to convey to you: the amount of time you spend working in JavaScript has very little to do with how much work the browser is actually doing to show pixels to your users. It's more about how often you're painting, why you're painting, how often you're invalidating your layout, things of that nature. So I just want to make sure that that's relatively clear. Again, keep in mind, if you're doing this with rigor, any hypothesis you create around why a performance bottleneck is happening has to get measured. You need to be able to provide data to back up your hypothesis. Are you just guessing? Guessing is fine, but when it comes to fixing problems, it's not good enough. So measure, measure as much as you can, except for where I typoed "measure" on the slide, which I left in, whoops. All right, so load time. The most important thing for load time is really two things: don't do too much, and if you have to do something, do it as late as you possibly can. If you don't, you run into a kind of blood-spurting-artery murder of blocking your entire execution graph, because you're saying, well, I'm going to load everything asynchronously, but then I'm going to process everything as soon as it arrives, and who knows what needs to happen there. Maybe you need to iterate an array. Maybe you need to figure out some properties based on an object. Maybe you're signing the user in, right? These are important things that most apps have to do, but they do them synchronously after asynchronously requesting that data.
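One way to avoid that sync-after-async trap is to process a fetched payload in small chunks, yielding back to the browser between them. A hedged sketch of the idea follows; the chunk size and the injectable `scheduleNext` hook are my choices, and in a browser you'd pass `setTimeout` or `requestIdleCallback` as the scheduler.

```javascript
// Sketch: process a big payload in chunks so no single synchronous run
// blocks the browser. `scheduleNext` is injectable so the sketch runs
// anywhere; in a browser, pass setTimeout or requestIdleCallback.
function processInChunks(items, handleItem, done, scheduleNext, chunkSize = 50) {
  let index = 0;
  function step() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      handleItem(items[index]); // a small, bounded slice of synchronous work
    }
    if (index < items.length) {
      scheduleNext(step); // yield to the browser, then continue
    } else {
      done();
    }
  }
  step();
}
```

Each `step` does a bounded amount of synchronous work, so the browser gets a chance to lay out and paint between chunks instead of freezing for the whole payload.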
Synchronous code will block the browser, period. There's no free lunch. If you have to do some work that involves the CPU in any way, that will block the DOM, so do it later, or defer it. Make it asynchronous. There's nothing wrong with the additional overhead on the call stack, because that's relatively well optimized, and the JIT will make sure that, if it's a real hot code path, it gets optimized very quickly. So, small little takeaway there. Another thing: progressive enhancement is a collaboration between user experience and your technical implementation. There's no one without the other. You have to work with design to figure out what it means to be progressively enhanced, versus just, we're going to load our app, make ten XHR calls, throw it at the user, and not care. All right, moving on. As developers, we must work out our design for the user's benefit. This is really, really important. The user doesn't care that you can load your JavaScript vendor file in 10 seconds, sorry, back that up, in 10 milliseconds, but then it takes 300 milliseconds to evaluate it because it's been minified or whatever. All they care about is that the UI is fluid and fast and loads quickly. So one strategy you can use here is to just not depend on data to render your UI. Just don't. There's no reason for it. You know what your UI needs to look like, but for whatever reason, you're relying on loading that data before you can show any UI, and that's crazy. You should Greek it. Greeking is basically the practice of showing a... that slide actually comes through like crap on the screen, I apologize about that. But if you've ever used Facebook's mobile app, they do a really good job of saying, hey, we've loaded, we're here. But hang on, we've got some stuff to do before we can show you the real content; we want to make sure you understand that you're loading, it's great, it's fine.
Greeking, which is the term I arbitrarily decided on for this, basically means: you're already mocking data in some fashion in your application, more than likely, either through Mirage or fixture data or what have you. So why not put that to work for your user's benefit? Why not use it to mock out the actual UI they're going to see later? Rendering as fast as possible like this is the easiest, simplest, fastest, and most impactful thing you can do to tell your user: we actually give a shit about loading this as fast as we possibly can. Because if you do this right, you can show that UI almost immediately after DOM content loaded, which is roughly beneath that second-to-glass milestone Paul Irish has talked about for so many years. The good news is you can do this with built-ins. The unless helper in Handlebars: hands up if you've ever used that in a project. Lots of hands, that's great. You basically just have to wrap a little bit of logic around a particular data service or something like that, which flips a property and says, when the data is loaded, just swap it out. Maybe you have to do some re-rendering of those components; it's not a huge deal, you can probably work around that. But by and large, this is something you can do today without an add-on. So the easiest way to think about how to architect your component layer, your data layer, and your rendering is to think about delegates. If you've ever written any Cocoa or mobile applications, or you've ever done certain flavors of C++, delegate patterns should be relatively familiar. In Ember's case, you generate a component that does no UI work; it's just a div, effectively. You can style it if you want to, but try not to actually create any elements inside of it. And the idea here is that the delegate will yield whatever the model needs to be for its views to get rendered.
So in the case of the shopping cart, the cart delegate might just pass in products, and that's all you care about, and that's fine. But you can mock products relatively quickly and say, okay, when didRender fires on the cart delegate, it goes and fetches some data from whatever API it needs to, comes back up to the delegate and says, okay, I've loaded now, that's great. All my UI is now totally rendered; I don't have to do any additional work. I just have to set some property on the delegate, and my views will either re-render or update with whatever information they've got. It looks kind of like this. In my cart delegate component, I just have some injection, some service that's attached to the cart delegate. I fetch, and then I say, okay, here's my cart model, and I've set that, hey, this is loaded now, and I have a computed property that basically says whether or not I'm finished loading. I'm sure there's a better or more elegant way to do this, but I'm operating on like three hours of sleep after a very long month, so excuse the brevity. The template looks kind of like this, and there's a lot to unwind about it, but the important thing is that you can actually yield a property just as you would in an each loop iterator, right? So each cart.mocks as |product| is basically the same thing using yield, except whatever you're yielding as the cart object just gets passed to everything. It's effectively a closed-over value that you can just apply to your templates. In this case, you just say, unless it did finish loading, iterate through whatever the mocks are and render whatever that UI needs to be. Maybe there's some additional work to make sure there's an animation saying, hey, we're loading, we're loading; but otherwise, as soon as that property flips, just re-render the UI as fast as you can, and realistically Handlebars will handle that in less than 100 milliseconds.
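A framework-free sketch of that delegate shape might look like this. Every name here (`createCartDelegate`, `didFinishLoading`, `visibleItems`) is made up for illustration; in Ember, the flag would be a component property driving an `{{unless}}` block in the template.

```javascript
// Sketch of the delegate pattern without Ember: show mocks immediately,
// flip to real products when the fetch lands. All names are illustrative.
function createCartDelegate(fetchProducts) {
  const delegate = {
    mocks: [{ name: 'loading' }, { name: 'loading' }],
    products: null,
    didFinishLoading: false,
    // What the template reads: mocks until the real data has landed.
    visibleItems() {
      return delegate.didFinishLoading ? delegate.products : delegate.mocks;
    },
    load() {
      fetchProducts(items => {
        delegate.products = items;
        delegate.didFinishLoading = true; // flips the {{unless}} branch
      });
    },
  };
  return delegate;
}
```

The views never block on data: they render the mocks instantly, and the only thing the fetch does is flip one property that swaps the real products in.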
So you might have a small flash, but again, it's up to you to style that UI in a way that makes sense to your user. Whoops. So what I'm basically arguing is: just do less in the model hook, in the route layer, right? Do as little as possible; touch the route only for things that require coordination across multiple facets of your application. Don't even worry about using the model hook, and don't worry about the afterModel and beforeModel hooks unless absolutely necessary. Skip the model if you possibly can. Just make small, independent data services that let you fetch and handle your data as easily as possible, so that your components can be smart about how fast they render, if it's really important to you to get something onto the glass as fast as you can. Bonus points: if you're really clever, you could probably handle this entire thing as a mixin that you just apply to a component, which handles whatever the API of the service is to get that data. You don't even have to think about it anymore; it's just handled for you. But I'll leave that as an exercise for those in the audience. How are we doing on time? Okay, I've got one more thing, and then, whoa. All right. So, from a rendering perspective, paints and layouts are your worst enemy. In Firefox, they call those reflows; you may have heard the terms layout and reflow used interchangeably. They effectively mean that you've invalidated the style of your document in some fashion, and it has to get revalidated, reparsed, what have you. Any time you invalidate a layout, it requires painting something to the DOM based on the newly validated layout. Basically everything you touch triggers some form of invalidation in some capacity. It could be that you trigger a new paint; it could be that you have to do some kind of style sheet update. You might have to update some rectangle somewhere. Even the act of measuring a rectangle can cause a reflow.
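That measure-causes-reflow trap is why interleaving reads and writes is so expensive: every write dirties layout, and the next read forces a synchronous reflow. A sketch of the usual fix, batching all reads before all writes, follows; the element shape and callbacks here are stand-ins, not real DOM.

```javascript
// Sketch: avoid forced synchronous layout by doing every read first and
// then every write, instead of measure/mutate per element. The `read`
// and `write` callbacks stand in for things like offsetWidth and style.
function batchReadWrite(elements, read, write) {
  const measurements = elements.map(el => read(el)); // all reads: one layout at most
  measurements.forEach((value, i) => write(elements[i], value)); // then all writes
  return measurements;
}
```

With real DOM, the read phase might collect `el.offsetWidth` for every element, and the write phase would then set styles, so the browser recalculates layout once instead of once per element.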
So there are a lot of things you've really got to look out for in terms of, is this something that's realistically affecting performance? You should measure it. And there's a resource here, CSS Triggers, which I'll bring up super fast. Nope. Basically, it gives you a list of properties, per browser, and whether each one affects layout, painting, or compositing. We'll get to compositing in a second. And the list is vast: effectively everything, apart from transforms, opacity, and I think one other property, but anyway. So one other thing you can do from the comfort of your own couch is turn on some of the more advanced rendering profiling tools that you get out of the box with Chrome today. This isn't in Canary; this is just part of DevTools. Say you're working on a really cool little website here that lets you browse a store of music or something like that, and as you go through, you say, well, there's some jank in this UI. Bring up the DevTools. I get this as my normal environment: just hit escape and go to the rendering tab. PS, if you don't see it, just click the little drop-down here and go to Rendering. You can do a lot of things. For instance, you can say, I want to show when something paints, I want to see what layers I currently have, and I want to know what my frames per second realistically look like. That... I'm full screen, so I'm not sure how that's going to work. There it is. So now we have a slightly strange-looking thing. Nope, not the right one. You'll notice that these green boxes here are basically the DOM saying, hey, I need to paint this; this is painting right now. And I don't know why this other stuff is not showing itself properly. There we go. So you'll notice up in the corner we've got a solid-ish 59.9 frames per second. We can actually see the rasterizer and the compositor doing work in that profile as we move around the application.
We also notice that painting happens as the highlight moves from box to box, or as we scroll down. Your job as a developer, when it comes to optimizing performance for the user, is to minimize the amount of painting you're doing. And if you do need to paint, see if you can do it as a compositing operation instead. So what does that mean? It means you don't animate properties like offset top or margins or paddings. Those are layout tools; they aren't animation tools. Everyone kind of learned that back when jQuery animate was a thing. What's important nowadays, as devices become more and more powerful, is minimizing the amount of memory you use, which means eliminating as many layers from your UI as possible. So, just real quick, hands up: who knows what the magic-bullet fix is when it comes to performance? Magic bullet, anyone? So you know, that's great, and you got it. The magic bullet is basically just saying, hey, I'm going to take this element and translate its Z property by zero. So I'm going to do effectively nothing, but just make that a layer, which effectively says: make that a texture, send it to the GPU, but don't do a damn thing to it. That's crazy. Sometimes it's absolutely required; you have to have that extra little bit of memory as an optimization. But you should not do that anymore unless you absolutely have to. And you can effectively see how much memory pressure you're applying to the GPU. Again, you have a lot of it on modern desktops, but on mobile, you have a fraction of that. So it's important to understand that you need to animate and design your UIs for what your user would expect. You wouldn't open up Facebook and expect a million different transitions and animations and stuff like that, because they can't afford those paint operations. It's not that they don't want to; browsers just can't handle it on mobile, and that's their more important vector.
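To get a feel for that memory pressure, you can estimate a layer's texture cost by hand: each promoted layer is rasterized to an RGBA bitmap, roughly four bytes per physical pixel. A back-of-the-envelope sketch follows; the function name is mine, and real compositors add tiling and padding overhead this ignores.

```javascript
// Rough cost of promoting an element to its own compositor layer:
// width x height in physical pixels x 4 bytes (RGBA). Real engines tile
// and pad textures, so treat this as a floor, not an exact figure.
function layerBytes(cssWidth, cssHeight, devicePixelRatio = 1) {
  const w = cssWidth * devicePixelRatio;
  const h = cssHeight * devicePixelRatio;
  return w * h * 4;
}

// A single full-screen 1080p layer already costs about 8 MB of texture
// memory, which is why sprinkling translateZ(0) everywhere hurts on
// constrained devices like a console browser.
const fullScreen = layerBytes(1920, 1080); // 8294400 bytes, ~7.9 MiB
```

A handful of full-screen layers would blow through the small texture budget described earlier, so every translateZ(0) should earn its keep.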
So just kind of keep that in mind as you build what you build. Let's see. We're about done here. Again, I just want to reiterate: the timeline profiling tool is about why an operation is doing what it's doing. It has nothing to do with frame rate, and that's a really important distinction to make when someone says, this page is acting slow. It's like, okay, well, are you dropping frames? Maybe. Are you doing too much work on load? Maybe. Use the right tool for the job when you collect that data. It might be just looking at the timeline and seeing a gigantic pyramid of death on DOM content ready; you're probably pressuring the DOM too much by saying all of my vendor files get included in the head, or I've got a gigantic style sheet that has to get parsed and evaluated, or my vendor file includes three different versions of jQuery and three.js and two different versions of Ember, because I've got no idea. So tend that garden as best you can. Frames per second, however, is a completely different beast and has nothing to do with the profiler itself. It's strictly how fast the compositing engine in whatever browser you're working in can put pixels on the screen, and those tools are completely separate from the timeline, or any other profiling tool for that matter. So, last thing, and then I'm done. I promise, I swear to God, I'll get you guys out of here. How fast does a user give up on a page? Do we have any guesses? Anyone at all? Five seconds. Five seconds, maybe. So hella fast is the only real acceptable answer. According to some data that KISSmetrics built around loading time in a survey they did: 3% of people, and I don't even understand how this is real, bail at less than a second. Less than a second, and they're just like, I'm going back to my home screen, I'm closing the tab. Fuck it, it's not worth it. Four seconds, and you lose a quarter of your audience.
That should give some gravity to what we're all trying to do here. Four seconds is nothing, and that's a quarter of your audience. So this is legitimately a serious thing you should really pay attention to. Eight seconds, and almost half have bailed on your site; with good reason, eight seconds is a long time to stare at a blank screen. After 10 seconds, you've lost half, maybe even more. And that's all I've got for you, thanks. Questions, sure? All right, who's got anything? I can dance, I swear to God. Yeah, what's up? [Inaudible question about ember-concurrency.] Yeah, definitely. The one caveat I would add is that it's great to offload that to ember-concurrency, and I think ember-concurrency, by the way, is awesome, but what comes back as the result of that task, and how that task is executed, is just as important. One in the back, yeah? Can we see a nice little dance? Uh, no. [Audience:] Uh, I figure, in some ways, as you get more tasks coming in, you say how you want them to interact: drop it on the floor, wait for it to finish, all that kind of thing. You talked about a situation where you're like, get me all this stuff, and it comes back, and the browser's like, whoa... You've gotta parse all that, yeah, yeah. [Audience:] Is there any mechanism you can provide for determining priority, or how you can deal with that stuff, something like that? So I actually had a really amazing conversation with, and I'm sorry, I'm blanking on his actual name, but he's runspired in the performance channel on the Ember Slack, and the original question I was trying to present to him was: is there anything we can do where the run loop can help save us from some of that asynchronous craziness? You can make a million asynchronous requests as easily as you want to, but a lot of times I've seen developers just kind of forget, or not care, about what they do when that callback actually fires.
And my thought was, well, can we use the scheduling aspect of the run loop to help with that? That conversation effectively went nowhere; however, he gave a really striking and amazing breakdown of how rAF works and how some of the frame timing stuff works. To answer your question: I don't think so. I don't know. Given the way the DOM API actually works in terms of the scheduler and the callback queue, probably not. However, there's nothing stopping you from writing a wrapping class around XHR and just doing it yourself. So I'd recommend something like that. Thanks. [Inaudible question.] Yeah, they didn't. It's important because, again, what we're writing runs on a video game console sitting in your living room. So the most important thing, in that respect, to the user is that their games run fast and flawlessly. So by limiting ourselves on the back end, where we're just kind of an ancillary process and they don't actually care about that experience, we didn't need those resources to deliver a really great UI. We kind of have to work within the constraints of that system. Like writers always talk about: if you work within a form, your best work kind of comes out of that form, because you're given those limitations to work within. So achieving greatness, as corny as that sounds, within a very strict framework can be really rewarding, but very hard. So it's not a personally imposed limitation. It's just what we got from the hardware team. They said, this is what you've got. And we were like, all right, well, let's do it. Anyone else? Cool. Thanks for coming.
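That "wrap XHR yourself" suggestion might look something like the sketch below: a tiny queue that caps in-flight requests and starts the highest-priority pending work first. Everything here is hypothetical, with `transport` standing in for XHR or fetch.

```javascript
// Hypothetical sketch of wrapping XHR with a priority queue: at most
// `maxInFlight` requests run at once, and the highest-priority pending
// request is always started next. `transport(url, cb)` stands in for
// a real XHR/fetch call.
function createRequestQueue(transport, maxInFlight = 2) {
  const pending = [];
  let inFlight = 0;

  function pump() {
    while (inFlight < maxInFlight && pending.length) {
      pending.sort((a, b) => b.priority - a.priority); // highest priority first
      const job = pending.shift();
      inFlight++;
      transport(job.url, result => {
        inFlight--;
        job.done(result);
        pump(); // a slot opened up; start the next-highest-priority job
      });
    }
  }

  return {
    request(url, priority, done) {
      pending.push({ url, priority, done });
      pump();
    },
  };
}
```

With something like this in front of your data layer, a visible-content request can jump ahead of a prefetch even though both were issued "asynchronously as easy as you want to."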