The slides are a pinned tweet and they're also available over here. So, we are spoiling ourselves with faster websites. What is our generation's perception? If you respond in less than 100 milliseconds, it's considered almost instantaneous. Within one second, it's not bad. Beyond that, the user's mind starts to wander, and if you're taking more than 10 seconds, the user is never coming back. So at the end of the day, it all depends on what the user feels, what the user perceives, and that is what I'll be talking about: perceived performance.

These are the key points. I'll be talking about the RAIL performance model, then about passive event listeners, then the new CSS contain property, and lastly about Speed Index.

So this is the RAIL performance model. It was put forward by the Google developers, and it stands for Response, Animation, Idle and Load: R-A-I-L. It sets guidelines in the areas which are important for perceived performance. Here we have the guidelines; I'll explain each of them in detail, so no need to get worried about so much text on the screen.

Let's start with response. Respond to clicks in less than 100 milliseconds. This can be anything: a response to clicks, toggling form controls, etc. And you can show anything — a simple state change or a colour change. If you're taking more than 500 milliseconds, show something. This can be anything, like Material Design's ripple effect, or you can show intermediate state transitions. Like in this example, we have Housing Go's search page. When the user taps on a property, we already have some of the details with us, like the name of the property, the price, etc. So we show that information up front and make the network request in parallel, so that when the response comes up, we show the rest. The user does not get a sense of waiting, and they can start viewing the data as it comes in.
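The optimistic detail-view pattern just described might be sketched like this (a minimal sketch; `openDetailView`, `fetchDetails`, and `render` are hypothetical names, not Housing Go's actual code):

```javascript
// Hypothetical sketch: render the summary data we already have, then
// fetch the remaining details in parallel and re-render when they arrive.
function openDetailView(summary, fetchDetails, render) {
  // Paint immediately with the fields we already know (name, price, ...).
  render({ ...summary, detailsLoaded: false });
  // Kick off the network request in parallel; merge when it resolves.
  return fetchDetails(summary.id).then(details =>
    render({ ...summary, ...details, detailsLoaded: true })
  );
}

// Usage: the first render happens synchronously, before the fetch resolves,
// so the user sees the known data with zero perceived delay.
const frames = [];
openDetailView(
  { id: 42, name: 'Green Acres', price: '1.2 Cr' },
  id => Promise.resolve({ bedrooms: 3 }),
  state => frames.push(state)
);
```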
And the last option, if you can't show any of that: show a loading spinner. But make sure that it really is your last option.

Next up, another section in the response category is the response to swipes. Here you want 60 FPS; to get 60 FPS — 1,000 divided by 60 — you have to respond in less than 16 milliseconds. So how do you get that? Avoid heavy scroll listeners, or if you can, just don't add them at all. Or you can rate-limit your listeners using debounce or throttle. Debounce means you wait for X milliseconds for your event to stop firing, and then you run your function. In the example, the B and C events happened very close to each other, but we waited for some milliseconds after C before actually running our function. Throttle is proper rate limiting: your function is run at most once every X milliseconds. In this example, X and Y happened immediately after A, but we had already run our function once in that time window, so we did not run it again.

If you can't do any of these, your last option is passive event listeners. It's a new feature which was recently launched. It's available in Chrome and, I guess, Opera, and it's coming to Firefox this month, in Firefox 49. So what are passive event listeners? When you scroll on a div with your mouse, if you want to cancel that scroll, you call preventDefault on that event. A classic example is Google Maps: when you scroll on the map, it calls preventDefault so that it can zoom into the map instead. Now, the browser does not know in advance whether your listener is going to call preventDefault or not, so it has to run your function first in order to find out — which causes sluggishness if your function is slow. So what is the option? "I solemnly swear that I won't preventDefault in my scroll listeners."
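Going back to rate limiting for a moment, the debounce and throttle behaviour described above can be sketched like this (minimal hypothetical helpers for illustration; in practice you'd likely reach for lodash's production-grade `debounce`/`throttle`):

```javascript
// debounce: wait until the event has stopped firing for `wait` ms,
// then run the function once.
function debounce(fn, wait) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

// throttle: run the function at most once every `wait` ms
// (leading-edge only, for brevity).
function throttle(fn, wait) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      fn.apply(this, args);
    }
  };
}
```

You would then attach the wrapped function, e.g. `window.addEventListener('scroll', throttle(onScroll, 100))`, so the heavy work runs at most ten times a second no matter how fast scroll events fire.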
So through passive event listeners, you tell the browser that your function will not call preventDefault — does not need to prevent the default — and it can go ahead and paint the next frame. Here's the syntax. The third parameter is where you used to send the capture true/false flag. That has now changed to an options object, and you can send a `passive: true` key to tell the browser that this is a passive listener. If you still want capture, you can add a `capture: true/false` key as well.

How do you determine whether the browser has this feature? You simply add a test listener and pass in an options object with a getter on the `passive` key. If the browser reads the `passive` key, you know it supports passive event listeners.

So here's an example. Over here I have the timeline enabled in DevTools, and what I'll be doing is scrolling. You can see that the frame rate drops while I'm scrolling. Let me give you an example with preventDefault — you see I cannot scroll. Now I add a passive event listener: the frame rate is a constant 60 FPS. Cool. So your browser renders the next paint before it calls your function. And if you do call preventDefault inside a passive listener, what does it fire? "Unable to preventDefault inside passive event listener invocation." Cool.

Alternatively, if you do want to prevent the default scrolling, you can add the `touch-action: none` property, and if you have a horizontal carousel where you want to pan on the X axis, you can use `pan-x`.

Another area where scrolling performance gets degraded is infinite lists. What you can do is make sure that the number of DOM elements remains constant, and you get smooth scrolling. Over here we are using react-virtualized to implement the infinite list.

Okay, so, some tips for React users — sorry again for React. So now you know that React is a virtual-DOM diffing library.
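Before moving on to React — the getter-based feature detection described above can be sketched like this (written against a generic event target so the idea can be exercised outside a browser; in a real page you would pass `window`):

```javascript
// Feature-detect passive listeners by handing addEventListener an options
// object whose `passive` property is a getter: a browser that understands
// the options-object form will read the key, while an older browser treats
// the third argument as a boolean and never touches it.
function supportsPassive(target) {
  let supported = false;
  try {
    const opts = Object.defineProperty({}, 'passive', {
      get() { supported = true; return false; }
    });
    target.addEventListener('test', null, opts);
    target.removeEventListener('test', null, opts);
  } catch (e) { /* very old browsers may throw on the options object */ }
  return supported;
}

// In a browser you would then register your listener like:
//   el.addEventListener('touchstart', onTouch,
//     supportsPassive(window) ? { passive: true } : false);
```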
What it does is keep the entire virtual DOM in memory. At the time of patching, it has the old virtual DOM, it generates the new virtual DOM from the new state, and then it compares the two. So at the time of diffing, you have twice that amount of virtual DOM in memory. Just after your patching process completes, a huge amount of memory gets freed up, and that might trigger a garbage collection. So just when you wanted to animate the scroll, the garbage collector says "stop, I've got a GC to run", and you drop some animation frames.

So what you can do is avoid creating new objects in your render function. Over here, we are creating a new onClick listener — a new function — every time. Similarly, bind creates a new function; you should not be doing this in render either. What you can do instead is bind your listener to your instance in the constructor, so that only one instance of that function exists.

Another trick you can use is to hoist up your static content — content which is independent of your props or state — so that only one instance of that particular virtual DOM subtree is generated. But this does reduce readability: you have to scroll up and down. "What is my hoisted title variable? Oh, it is my awesome title." So what do you do? No worries, there's a Babel plugin just for you. If you're using React, you must be using Babel to transpile, so you can add transform-react-constant-elements and it will automatically hoist your constant elements for you. Heck, you can even add the React preset, which automatically adds all of these optimisation plugins for you.

Next up, define shouldComponentUpdate. If you return that your component has not changed, React will skip the render process entirely.

Next in the RAIL performance model, we have animation.
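Before moving on: the shouldComponentUpdate check just mentioned typically boils down to a shallow comparison of props and state — essentially what React.PureComponent does for you. A minimal sketch (the `shallowEqual` helper here is hypothetical, not React's internal one):

```javascript
// Shallow equality: same keys, and each value identical by reference.
// This is the kind of comparison a typical shouldComponentUpdate relies on.
function shallowEqual(a, b) {
  if (a === b) return true;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(key => Object.is(a[key], b[key]));
}

// In a component (sketch):
//   shouldComponentUpdate(nextProps, nextState) {
//     return !shallowEqual(this.props, nextProps) ||
//            !shallowEqual(this.state, nextState);
//   }
```

This is also why creating a new function or object inside render defeats the check: the fresh reference never compares equal to the previous one, so the component re-renders every time.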
So again, we saw that to get 60 FPS, you have only 16 milliseconds per frame. We saw the render pipeline yesterday; just a quick recap. If you have JavaScript, your JavaScript applies your attributes and changes your styles. Then style is computed: the browser takes your DOM and your CSS and works out which rules apply to which elements. Then layout happens, which is sort of like wireframing: the various elements are positioned on the screen. Then paint happens, which paints each of those elements, and then composite happens, where the browser takes those painted layers and stitches them together.

These are the properties which trigger different parts of the pipeline, and the higher up you start, the longer it's going to take. For example, if you modify something like margin or height, you change the position of the element, which might change the position of the elements below it too — so that causes a re-layout. If you modify properties like color or background-color, the browser only needs to paint again; the element is in the same position, so layout does not need to happen. And if you animate properties like transform and opacity, the browser takes the already-painted layers and just stitches them together in the composite step. So you aim to start as low in the pipeline as possible, so that only that particular step happens.

So what do you do? There are four things a browser can animate cheaply: transform — translate, scale, rotate — and opacity. You may need to add translateZ(0) or translate3d(0, 0, 0) to push elements onto their own layers, so that they are painted individually and can be reused during compositing. But translateZ(0) is a hacky way — what if you want to put an actual value there? Instead, you can use will-change: transform.
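As a quick illustration, a compositor-friendly animation might look like this (a sketch; the `.card` class and values are hypothetical):

```css
/* Animate only compositor-friendly properties. */
.card {
  transition: transform 200ms ease-out, opacity 200ms ease-out;
  /* Hint that transform will change, so the browser can promote the
     element to its own layer ahead of time instead of translateZ(0). */
  will-change: transform;
}
.card.is-hidden {
  /* transform + opacity: composite-only — no layout, no repaint. */
  transform: translateX(100%);
  opacity: 0;
}
```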
will-change also pushes elements onto a different layer, but make sure that you don't push everything onto its own layer — that will slow down your render pipeline, because it creates lots of small GPU layers for all those elements. To determine which property triggers which stage of the render pipeline, you can check out csstriggers.com. It gives you the entire list — what, say, the float property triggers in Blink, in Gecko, whether it causes layout again, et cetera. You can check out all the CSS properties over there.

Next up, let's talk about the new CSS contain property. This was introduced in Chrome 52 and Opera 40, and it's coming soon to Firefox. It has huge developer support. So what is it? If you define contain: layout, you tell the browser that the descendants of this element do not affect the layout of any other element. For example, over here we have a child div whose position does not affect the untouched div outside. So layout of the entire document does not need to happen: if a descendant of that element changes, only the layout of that element is done again. This is the biggest performance benefit you get from the CSS contain property.

Next up, contain: paint. With this, you're telling the browser that no descendant will display outside the containing element's bounds — so it's sort of like overflow: hidden. Also, position: absolute and position: fixed elements are normally positioned relative to their first non-statically positioned ancestor; if there is a closer ancestor with contain: paint, they will be positioned relative to that ancestor instead. So it becomes a containing block, and it becomes a new stacking context as well. Normally, z-index works only for elements which are not positioned static.
But inside contain: paint, z-index will work too. We saw earlier that will-change pushes elements onto a different layer; similarly, contain: paint also pushes the element onto its own layer. So, simply put, contain: paint is equivalent to overflow: hidden, plus a non-static position — if you've given absolute or fixed it will take that, otherwise it still won't behave as static — plus will-change: transform to push the element onto its own layer.

Contain: size. With contain: size, you tell the browser that your element's size is defined and won't change: if a descendant grows big — say it has a lot of text — your element's size will not be affected by that. But if you don't define the element's size, it will be rendered 0 × 0, so make sure you've defined the dimensions if you're using contain: size.

Contain: style doesn't bring much of a performance benefit. With it, you're telling the browser that styles applied in a descendant will not apply outside that descendant. For CSS — cascading style sheets, where styles flow top-down — this does not actually give much containment, but it can be helpful for CSS counters, where a counter is used across various elements. It is in no way related to shadow-DOM-like scoped styling.

We have shorthands as well. contain: strict is a shorthand for all four; it's useful when the dimensions are set, because contain: size requires that your dimensions are set. contain: content is a shorthand for layout plus style plus paint — everything except size — so it doesn't need dimensions. This is what you should be using by default: it's not as restrictive as contain: strict, and you can easily use it in your application.

So here's an example of the performance improvement. Before adding contain, the layout root was the entire document and five nodes needed layout.
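For reference, the containment values discussed above might be applied like this (a sketch; the class names are hypothetical):

```css
/* A self-contained widget: layout + style + paint, but not size —
   usually the safe default, since it needs no explicit dimensions. */
.widget {
  contain: content;
}

/* contain: strict adds size containment, so explicit dimensions are
   required — otherwise the element renders at 0 x 0. */
.widget--fixed-size {
  contain: strict;
  width: 300px;
  height: 200px;
}
```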
After adding contain, the layout root is the div.box and only two nodes need layout. You can see that the time taken by the layout process decreased as well.

Next up, JavaScript animations. Avoid animating using JS if you can express simple animations in CSS instead. We saw the render pipeline earlier. JS animations start right at the top: on every frame your JS is executed and your styles are computed; if you've optimised well, layout and paint won't happen, but composite will. With CSS animations, your browser knows this is the starting point, this is the end point, these are the elements which are animating, and this is the duration — so it can optimise the layers accordingly. With JS animations, on every frame the browser does not know what is animating. So what might get triggered is Update Layer Tree. What's Update Layer Tree? It's where the browser thinks "my current split of DOM elements into layers might not be optimal, let me try again." So it computes the layers again — and it can do this on every frame — and this process is proportional to your DOM size.

Let me take an example over here. This is a pull request on GitHub; the contents of the pull request are not important, it's just a really huge page. What I'll do is open my timeline, hit record, and scroll. Stop. Oh my god. Okay — on every frame my event is triggered and Update Layer Tree happens. I'm just taking the example of scroll; this can be an animation as well. So what's happening over here: this is my scroll event. Assume that this is your requestAnimationFrame or something animating on every frame.
What GitHub is doing here, for that sticky bar that you see, is aligning it to the centre. On every scroll event it computes my element's width, divides it by two, and subtracts something to work out the offset needed to centre it, and then — it's a position: fixed element — it sets its `left` to some particular value in pixels. So the browser thinks "he's animating, he's changed something, my layers might not be efficient, let me run Update Layer Tree", and you can see that it took 28.28 milliseconds — a lot more than your 16-millisecond budget.

You saw that my scrolling was actually still quite smooth — that's because GitHub has started using passive event listeners. If this were not passive, you could really have seen it: a few months ago, before passive event listeners launched, GitHub on huge PRs was completely unbearable to use on Chrome on the Mac. I don't know whether this problem exists on Chrome on Windows as well; I'm not sure. On Firefox this runs smoothly, but on Chrome it doesn't — and it may not clearly be a bug on either side. GitHub can say, "It runs well on Firefox; why not on Chrome?" Chrome can say, "Why are you setting the same value on the style again and again? Why can't you keep a variable with the last value, compare it with the new value, and skip setting it if it hasn't changed?" So either way, it's unclear whose bug it is. You can check this out right now — the link is over here.

Also, for JS animations, avoid setTimeout and setInterval for your callbacks. The problem is that you don't know what frame rate to target: your users may be on various devices, and you don't know how powerful each device is going to be. If you overshoot, several callbacks might run in the same frame and you might drop frames; if you undershoot and set your frame rate too low, you might not be using the device to its full potential. So use requestAnimationFrame, or a library which uses it for you: it gives you a callback on the next animation frame. We saw this in yesterday's talk as well.

Okay, next up in the RAIL performance model is idle. What is idle? Idle is the time after your page has loaded, while the user is scanning through your page. During idle time you would want to perform your heavy computations, before the user has started interacting with your site. Make sure that you do this in chunks of 50 milliseconds, so that when the user begins to interact, they are not left waiting on some future resource while trying to use your current page.

Again, PRPL. PRPL is Polymer's performance philosophy. It stands for: Push your critical resources, Render your initial route, Pre-cache your remaining routes, and Lazy-load the remaining routes. Essentially, during idle time, what you'll be doing is the pre-cache and the lazy-load. There is a new feature coming up in Chrome, requestIdleCallback: it calls you when the browser is free, so you don't need to work out yourself whether the user's browser is idle. This is a new spec which is coming up; Facebook and GitHub have started using it. It's currently available only in Chrome, but it's coming soon to other browsers as well.

Next up, load. The guideline set by RAIL is to load in one second, and on Indian 3G connections that's very difficult for the first load. So what you need to do is aim for at least your repeat users to have the page load in one second. There are a lot of areas where you can optimise your load time, and explaining each one of them is outside the scope of my talk. I'll just talk about one important metric, Speed Index, which matters for perceived performance.

So let's take a look at these two examples: over here we have Gmail, over there we have Amazon. I need to play this. Which one is faster? Obviously Amazon, because as and when things were ready, Amazon started rendering them, whereas Gmail needed to load everything for your inbox before it showed anything. So what is Speed Index? The average time at which the visible parts of the page are displayed, expressed in milliseconds, and dependent on the size of the viewport. How do you measure it? You take a video during load time and you check, frame by frame, which pixels have reached their final state. Then you plot the visual completeness of the two websites on a graph: on the x-axis we have time, on the y-axis we have visual completeness. With Gmail, you saw that initially the screen was mostly white space compared to its final state: it was 18% visually complete in less than one second, but then it took 11 seconds to reach 90% visual completeness. Amazon, as soon as the initial content was ready, showed it, and was 85% complete in 4.5 seconds, then slowly loaded the rest. So what is Speed Index? It's the area above that curve, and the lower the Speed Index, the better. A Speed Index of less than 1,000 is what you should aim for; if that's hard on first load, at least make sure it's less than 1,000 for repeat users. Speed Index depends on network conditions and screen size, so when you compare two Speed Index scores, make sure you maintain your test conditions — WebPageTest allows you to do that. Just a quick example: we ran WebPageTest on our website and — check it out — I don't have network again, sorry.
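The area-above-the-curve definition can be sketched as a small computation (a sketch; the sample format is hypothetical — WebPageTest derives these completeness values frame by frame from the load-time video):

```javascript
// Speed Index sketch: integrate (1 - visual completeness) over time.
// `samples` is a list of { time (ms), completeness (0..1) } pairs,
// assumed sorted by time, starting at t = 0, last sample fully complete.
function speedIndex(samples) {
  let area = 0;
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].time - samples[i - 1].time;
    // Area above the curve for this interval, using the completeness at
    // the start of the interval (step-wise, matching frame-by-frame data).
    area += dt * (1 - samples[i - 1].completeness);
  }
  return area;
}
```

For example, a page that stays at 0% until 500 ms and then snaps to 100% scores 500, while one that reaches 85% at 500 ms and 100% at 1000 ms scores 575 — rewarding pages that show most of their content early, exactly the Amazon-versus-Gmail difference above.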
Okay — when you run WebPageTest on your website, it will show Speed Index as a metric; I just wanted to show that. I'm at the end of my talk, so again, the summary. Response: tap to paint in less than 100 milliseconds, swipe to paint in less than 16 milliseconds to achieve your 60 FPS. Animation: each frame in 16 milliseconds. Idle: use your idle time, proactively schedule your work, and make sure the deferred chunks complete within 50 milliseconds. Load: make sure the page is ready to use in one second, at least for repeat users, and aim for a low Speed Index. And that's it.

[Audience] Excuse me, can you show the first slide again?

Okay — the slides are also pinned on my Twitter handle, as easy as HK 110, if that's simpler to write. Thank you. Any questions?

[Audience] Nice talk. You need not be sorry for React — it's also JS anyway. My question is: have you tried the PRPL pattern? What's your opinion about it?

The PRPL pattern — we are using it, but we are not using Polymer as such. On the Housing PWA, once the page is loaded, our service worker prefetches the other pages — the components for the other pages as well. For the P part, push, we are not using HTTP push right now, but we are trying for it. It's not specific to Polymer as such. But yes, PRPL, we are using it. Thank you.

[Audience] You've talked a lot about the technical aspects of creating performant apps. What about the psychological aspect? Have you done any work on what kinds of animations and user interactions make the app appear faster, as opposed to actually being faster?

No, I have not. Essentially they should look performant — as a technical point, at least, you should not have jank or sluggishness. But for how your motion should look, there is Google's Material Design, which explains in detail how your animations should behave. If you are going from one page to another, make sure that your animations are fluid and look as if they flow in that order; you can follow Google's Material Design for that.

[Before the next question] I want to show you an example of contain — just give me some time, I'll connect. What's the password? So over here we have an unoptimised example. Let me just open the timeline and hit record. Okay, so I am animating over here; this is the unoptimised example. Let's take a look — it was not that slow, but sorry, I can't actually see it. Okay, here's the example: term reader.github.io, css-contain. There are two examples, an unoptimised one and an optimised one. You can run a timeline over both of them. You will see that the time taken for layout does not decrease that much if you have a very fast laptop, but you can see the number of nodes dropping: the layout root becomes the element on which you have set contain, and the number of nodes that need layout drops as well.

[Audience] One of the reasons for using pure JS animation over CSS is interruption — CSS animations are not interruptible, so that becomes a use case for pure JS animations. But at the same time, our use cases are not complex enough that we need a library like react-motion. So how do you go about interruptibility in JS animations — I mean, performantly doing JS animations when you want them to be interruptible?

Okay, I guess you could remove the element from the DOM and, on the next animation frame, add it again with the final state — that is one thing that you can use, I guess. Other than that, I'm not so sure.

[Audience] Hi, would you give a comment on Velocity.js, if you have ever used it for DOM animations?

Can you repeat that? Hello, do you hear me? — Yeah, would you give a comment on Velocity.js for animations? — So, I have not compared JavaScript libraries, but we saw a flash talk yesterday on GreenSock, and I really like that library. I come from a Flash background, so I used GreenSock earlier when I was working with Flash, and it's very easy to pick up in JavaScript. And for ads: you need to load your ad very fast, and what Google Ads does is restrict you to certain libraries, so that you are using the browser cache for any libraries that you use in your ad. For animation, Google Ads recommends GreenSock as one of the libraries you can use.

[Audience] Would you explain passive event listeners again?

Okay. So we saw the example earlier — this one over here. We have a scroll listener added, and it's a very slow scroll listener. Every time I scroll, my frame rate drops — do you see that, over there? On every frame, the browser first runs your function and then it paints, and that causes your frame rate to drop. Through passive event listeners, you tell the browser: first paint, and then run my function. That gives you 60 FPS.

— No, it all depends on what you are doing. You can do anything; if you are modifying some CSS attribute, you might cause layout again. We saw GitHub's example: we were scrolling on that really huge PR, and for that sticky behaviour it had added a scroll listener which computes things and sticks that bar to the top of the page. It all depends on what your listener does: GitHub's listener causes Update Layer Tree; if your listener only does something that causes composite to occur, it might be fast as well. But through passive event listeners, you are telling the browser: call my listener later, render the frame first. Yes, that's it.

So, suppose there are no more questions — then we can get the next speaker on stage. Thanks a lot, Aziz. Aziz will be