Hi everyone, good morning. As Arun said, I'm here to present a case study based on our experience improving the performance of a React app. More specifically, we'll talk about the client-side rendering, or CSR, part of React performance, and we'll look at some of the low-hanging fruit: the kind of gains you can make in a few weeks that bring notable improvement to all your speed metrics. Before we get into it, a little about myself. I'm Punit Sethi. I've been working on software performance for a decade, from measuring performance to profiling, tuning and optimizing. For the last couple of years I've been running Tezify, where we do front-end performance optimization, and we have a couple of products specifically around tracking and monitoring web performance. Okay, so on to our case study. Here's a little about the React app we optimized. It was React 16 — I mention the version because we used some React 16-specific features, which we'll talk about. Webpack 4 was used for builds; again, we used some presets and plugins specific to webpack 4. All of the CSS was CSS-in-JS via styled-components, and most of the images were SVGs, so this was not an image-heavy app. I mention this to highlight that most of our page weight was JavaScript, and that is what we focused on optimizing in this particular app. And 80% of the traffic for this app came from mobile devices in India, so we kept that in mind when measuring performance before and after the optimization.
Before we get into the meat of the optimizations and how we went about them, let's look at the performance goals we set, and why we set them. This is to help you understand how we kept the optimization exercise focused by sticking to goals that were measurable. This was the performance goal we had in mind: reduce the speed index for a session's first visit, for the top three entry routes, measured on a Moto G4 — a relatively lower-end mobile device — on a fast-3G network, from a Mumbai location. So we knew exactly which speed metric we cared about and under what conditions we were measuring it. And why did we pick these? Speed index came out of a discussion between dev and business about the single metric that matters most. It wasn't the only metric we cared about, but it was the topmost; we'll look at another one later. We looked at the session's first visit because single-page apps are all about the first hit, where your SPA code gets loaded — later hits are always a lot faster. We looked at the top three entry routes because they formed 90% of our entries; that figure came from Google Analytics. We picked the Moto G4 because, like I said, the majority of traffic came from mobile devices, and it's a relatively lower-end device: if you improve for it, you improve for a lot of other mobile devices too. Fast 3G was based on our experience — all of us use Jio 4G, but in terms of speed it is equivalent to a fast-3G network. And Mumbai, because all of our traffic was coming from India.
We took these parameters and used WebPageTest to set them up and take our measurements. Like I said, we also kept an eye on the time-to-interactive measurement besides the speed index, and we did not want the speed index of the other routes to degrade while we focused on the three I mentioned. So that's how we settled on the goal we would constantly watch as we optimized the app. Then we took a baseline. These were the three routes we were tracking: register, view-scorecard and purchase. Under all the conditions from the last couple of slides — Mumbai location, fast 3G, Moto G4 — we measured the timings using WebPageTest: 8 seconds, 11.5 and 8.5 before we started. So we knew where we stood on timing. The second thing we captured as a baseline was the amount of JavaScript loading on these three routes. Like I mentioned, the majority of what got loaded when a route was hit was JavaScript, so we measured the JavaScript loaded before the speed-index event when someone hit, for example, the registration route: 478.5 KB gzipped. All these numbers are gzipped sizes; most of the numbers ahead are gzipped too, and wherever that isn't the case I'll say so explicitly. You can get these numbers from Chrome's DevTools. And thirdly, we got a good view of what our JS bundles looked like through webpack-bundle-analyzer.
For folks who have used webpack-bundle-analyzer: I think it's one of the best tools to get a quick view of the low-hanging fruit in an optimization exercise. It tells you what constitutes that 478 KB we just saw, and then you start questioning: why do I see this here? I don't use this on this route — and so on. So we took note of these three things before we started our work. Then came the optimizations — the meat of what we did — but it's important to realize that all the pre-work is necessary to keep the exercise goal-oriented and thus bring in measurable gains. Here's the list of optimizations, the "what" of what we did: we identified and removed unused code; we identified and removed duplicate libraries; we did code splitting; we did some dynamic library loading; we used a lot of plugins and presets, but one worth mentioning here is babel-preset-env; and we did some optimizations beyond the app code, which we'll also look at. The slides ahead cover how we did each of these and what gain came from each.

The first one: unused code. Very simply, this is about not loading code that is never used. It makes sense, right? Any JavaScript within your bundle is getting downloaded, parsed and compiled even if it is never executed, and these three steps have a substantial cost on mobile devices. It may not be apparent when you're testing on your desktop during dev work, but connect an actual mobile device, set up remote debugging in DevTools, and you will see the impact. So how did we identify and remove this code? Essentially using webpack-bundle-analyzer. With that we reduced our vendor bundle by 80 KB gzipped, around 20-odd percent. Let's look at a few examples of the changes we made. The first is a real low-hanging fruit in optimization circles, something you should look at right away. Before talking about the code snippet, let me show you the webpack-bundle-analyzer view. This is how our bundle looked when we started, and you can see what I've highlighted in red: Moment.js was loading all its locales. I could clearly see Russian, "af" and a lot of other locales, and we knew our app was English-only, so we were certain we weren't using them. I think this was 50 KB gzipped — a massive amount of JS loading needlessly. To remove it, we used a webpack plugin, moment-locales-webpack-plugin. It goes into webpack.config.js, and we told it to keep only a single locale, English. This brought in around a 40 to 50 KB gzipped reduction in our bundle. A very low-hanging fruit, and one of the first things anyone using Moment should do: make sure you're not shipping locales — if, of course, your app isn't internationalized. The second thing is a smaller gain, but in every app I've worked on I've seen redux-logger end up in the production JS bundle.
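The slide's code isn't in the transcript, but a minimal sketch of the moment-locales-webpack-plugin setup just described might look like this (the surrounding config is illustrative):

```javascript
// webpack.config.js — with no options, the plugin strips every
// Moment.js locale except 'en', which is built into moment itself.
const MomentLocalesPlugin = require('moment-locales-webpack-plugin');

module.exports = {
  // ...rest of the webpack 4 config...
  plugins: [
    new MomentLocalesPlugin(),
  ],
};
```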
But we all know redux-logger is there for dev-related debugging. The code snippet you're seeing here is probably present in most apps — certainly in every app I've worked with: you do a static import of redux-logger, and then you use it only if the environment is non-production, because you just want to log Redux state changes while debugging. What happens in this case is that the redux-logger library is always included, in production and in dev, but it is only ever used in dev. We never use it in production, so we don't want that static import. What we did — and this is the recommended way to go — is use a dynamic import. We moved the import of redux-logger inside a function, loadReduxLogger, where we import it and do everything related to it, and we make that call only when the environment is non-production. This ensures we're not even importing redux-logger when the environment is production. So that's another thing to keep in mind: remove this kind of unused code that may end up in your production JS bundles.

redux-form, again, was one of these. The way we did imports from redux-form was not via ES modules. When we looked at our webpack-bundle-analyzer view, the redux-form library was relatively big, and we weren't using that many controls. That made us analyze why we were seeing so many redux-form controls in our JS bundle that we weren't even using — and we realized that, like I said, we were not using the ES-module form of the redux-form imports.
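Going back to the redux-logger change for a moment — a sketch of the before/after pattern described above (the loadReduxLogger name follows the talk; the store wiring is illustrative):

```javascript
// Before: a static import — redux-logger lands in the production
// bundle even though it is only ever used in development.
//   import logger from 'redux-logger';

// After: a dynamic import() inside a function. Webpack puts
// redux-logger in a separate chunk that is only fetched when this
// function actually runs.
async function loadReduxLogger(store) {
  const { createLogger } = await import('redux-logger');
  // Attach createLogger() to the store's middleware here —
  // the exact hookup is app-specific.
}

if (process.env.NODE_ENV !== 'production') {
  loadReduxLogger(store); // never called — or imported — in production
}
```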
Now, rather than manually changing all those imports, we decided to use babel-plugin-transform-imports. What this plugin does is rewrite those imports — from the plain redux-form import to its ES-module equivalent. Together with setting preventFullImport to true, this makes sure you're only importing the controls actually used on that particular route or screen. I think this was somewhere between a 15 and 20 KB gzipped reduction. The next one is lazy but common. Our app started from react-boilerplate on GitHub — like everyone, you pick either CRA or some boilerplate and build on top of it. The boilerplate already had redux-saga imports and skeleton code in place, and it had all the internationalization code in place. We were actually using redux-thunk; we weren't using saga at all, but it was left there — the "who would touch something they don't understand" scenario. So when you pick up a boilerplate, make sure you get rid of the parts you are not using. In our webpack-bundle-analyzer view we could see react-intl — the internationalization library, pretty small, just so you know — and redux-saga. We saw that and knew we weren't using saga or doing any internationalization, so we went ahead, removed those imports and deleted the skeleton code. So those are some of the ways we removed unused code.
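Returning to babel-plugin-transform-imports — a sketch of the configuration for redux-form described above (the `redux-form/es/${member}` target is an assumption about how the app mapped member imports to per-module files):

```javascript
// babel config — rewrite member imports of redux-form to deep
// ES-module imports, and forbid whole-library imports.
module.exports = {
  plugins: [
    ['transform-imports', {
      'redux-form': {
        // e.g.  import { Field } from 'redux-form'
        //  ->   import Field from 'redux-form/es/Field'
        transform: 'redux-form/es/${member}',
        preventFullImport: true,
      },
    }],
  ],
};
```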
The way to do it is to look at the bundle-analyzer view and ask yourself: where is this being used — is it being used at all? The second kind of optimization was looking for duplicate libraries. Those of you with a seasoned eye for the bundle analyzer will have already noticed that in our vendor bundle, Immutable.js appears twice. For a performance freak, something like this is criminal: you're including the same thing twice — double the size, double all the costs I talked about. So we went to analyze why this was happening. It typically happens when there's a version mismatch in the dependencies. I don't remember which was which, but I think we had 3.8.2 in our own dependencies while one of the libraries we used had 4.0 as its dependency, and so we ended up with two copies of Immutable.js taking up space in our vendor bundle. The way out was to change our code to use the same version as that library. That got rid of one of the two copies — a 17 KB gzipped reduction. Next, code splitting. Removing unused code and duplicate libraries is all very lucrative because you get rid of stuff you don't want, but there's only so much you can do there, and then comes code splitting. Code splitting is essentially about avoiding loading components until they are needed. A lot of times we load everything right at the beginning, when we may not need some of it at all — or may need it a lot later. That's the waste code splitting tries to eliminate. The way to identify these opportunities is to look for conditional rendering of components; we'll look at an example from our case to see how. In our case, we lazy-loaded four components.
We benefited from it at four different places, for a total reduction of 90 KB gzipped — pretty decent, I think 23%, nearly a fourth of the size. So let's look at one of the examples of the code splitting we did, to understand how it can be used. This is how our code looked before code splitting, for our scorecard view. There's a scorecard view where the scorecard appears for a particular user. However, if the user is in a certain stage — "update" here, meaning the user hasn't filled in all their data — we want to show a form called InfoUpdateForm. That's what this sets up: if the stage is "update", show the InfoUpdateForm; otherwise don't. However, the import is again static, which means the scorecard JS bundle always includes the InfoUpdateForm code, whether or not the form is ever displayed to the user. That's exactly what we want to avoid — even more so in this case, because we found we actually showed this form only around two to three percent of the time; a user not having filled in all their data was a pretty rare case. So the other 98% of users were loading that code needlessly. How did we code split it? We used the React.lazy and Suspense capability that came in with React 16. We changed the static import we saw earlier to React.lazy, wrapping the import inside lazy. What this does is download that component's JS only when the condition becomes true; the component and its dependencies are no longer part of the scorecard JS and get loaded separately.
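A sketch of the scorecard change described above (the component and stage names follow the talk's description; the JSX details are illustrative):

```javascript
// Before: static import — InfoUpdateForm is always in the scorecard
// bundle, even though only ~2% of users ever see it.
//   import InfoUpdateForm from './InfoUpdateForm';

import React, { lazy, Suspense } from 'react';

// After: React.lazy wraps a dynamic import(); webpack emits a separate
// chunk that is fetched only when the component actually renders.
const InfoUpdateForm = lazy(() => import('./InfoUpdateForm'));

function Scorecard({ stage }) {
  return (
    <div>
      {/* ...scorecard content... */}
      {stage === 'update' && (
        // Suspense specifies what to show while the chunk downloads.
        <Suspense fallback={<div>Loading…</div>}>
          <InfoUpdateForm />
        </Suspense>
      )}
    </div>
  );
}
```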
What Suspense does is give you a mechanism to specify what to display while that dynamic download is happening. So what we wanted, and what we achieved, is that when a user sees the scorecard, the InfoUpdateForm and its dependencies do not load unless the condition is true. React.lazy, like I said, is used to do a dynamic import while — just for dev convenience — letting you use the component like a regular import, and Suspense specifies the fallback content to show while the lazily loaded content is loading. For folks working on earlier versions of React, react-loadable and loadable-components are the alternatives; they are pretty feature-rich, so even if you cannot use React.lazy and Suspense, you can still do code splitting. One more point about identifying when to code split: looking for conditional rendering, as we discussed, isn't enough — you need to actually do the split and measure how much benefit it brings. You can, in fact, increase the total amount of JS that loads if you don't get this right. I'll try to show that with this diagram. Assume there's a JS bundle with three components and all their dependencies. Say I see that component 2 is rendered conditionally, like I showed, and I code split it. What will happen is that this bundle drops component 2 but not the code for library 3, because library 3 is also needed by component 1 — while the separate bundle that gets formed contains component 2 plus library 3.
So you want to analyze how much gain — or lack of gain — you're actually making by code splitting, because a lot of times the dependencies are not as simple and clear as in this diagram. If I had code split component 3 instead, it would have been relatively simple: component 3 and library 4 would have been removed from the bundle and loaded separately. The best way is to code split, then actually hit that route and check in DevTools what JS is being loaded and how big it is, and look at it in webpack-bundle-analyzer. Okay, so that's code splitting. Another thing we did is dynamic library loading. Our app shows a customer-support icon, and when the user clicks it, they can chat with our customer support. We were using a third-party library for this, around 52 KB gzipped with all its dependencies. This is how we used it earlier: we built a small wrapper, a chat client, which we imported in, say, our scorecard route, and once the scorecard component was loaded we would do all the chat initialization. What we realized is that we needed the customer-support chat to be initialized only after the app is displayed — only once the speed-index event has happened. However, with the chat code bundled into our overall vendor bundle, we were loading it before the speed-index event, which was unnecessary. So we consciously delayed loading this library: we removed the static import you were seeing here and used setTimeout — the good old setTimeout — to load it four seconds after the scorecard loads.
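A sketch of the delayed load described above (the wrapper module name and the `init` call are hypothetical; the dynamic `import()` is what lets webpack put the chat library in its own chunk):

```javascript
// Inside the scorecard component, e.g. in componentDidMount/useEffect.
// The chat wrapper and the ~52 KB third-party library behind it are
// only fetched 4 seconds after the scorecard has rendered.
const CHAT_LOAD_DELAY_MS = 4000;

function scheduleChatClientLoad() {
  setTimeout(async () => {
    const chatClient = await import('./chatClient'); // hypothetical wrapper
    chatClient.init(); // hypothetical initialization entry point
  }, CHAT_LOAD_DELAY_MS);
}
```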
That four-second figure was agreed upon after talking to a lot of people, but what we ensured is that the chat client and its dependencies load four seconds later, giving our actual app a breather to render, become functional and become interactive. Changes like this — stepping back and asking "why is this being loaded right now?" — can definitely bring insights into how to load such libraries optimally. Next, babel-preset-env. Like I said, there are a lot of plugins and presets you can use; I'm mentioning babel-preset-env because it's relatively powerful. We all understand that a lot of polyfill code becomes part of our JS bundle when we do builds, but we don't always need all of it. What babel-preset-env does is let us specify which target browser versions our build should run on, and thereby avoid a lot of needless polyfills. In our case we couldn't agree on which browsers to support and not support until after this exercise, so we didn't do that part; but we did use an interesting feature of babel-preset-env: useBuiltIns. useBuiltIns essentially lets you control where the polyfill code goes in your JS bundles. Typically it all sits in the initial vendor bundle, even though a lot of the JS needing those polyfills sits in later routes. Setting useBuiltIns to "usage" ensures that the polyfill for a piece of code is only part of that code's particular chunk and is not loaded up front. Just this one change reduced our vendor bundle by 20 KB gzipped — we didn't want to load a polyfill before the code that actually needs it.
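A sketch of the babel-preset-env setting described above (the corejs value is an assumption; with `useBuiltIns: 'usage'`, polyfill imports are injected per file only where a feature is actually used):

```javascript
// babel config — polyfills follow the chunks that need them instead
// of all landing in the initial vendor bundle.
module.exports = {
  presets: [
    ['@babel/preset-env', {
      useBuiltIns: 'usage', // experimental at the time of the talk
      corejs: 3,            // assumption: which core-js supplies polyfills
    }],
  ],
};
```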
You don't want its polyfill to be there up front — that's what this did. However, an important disclaimer: the useBuiltIns "usage" flag was experimental at the time, so we did thorough functional testing to ensure we weren't breaking anything by using it. Beyond the app code, a couple of things — I'll walk through these quickly. One was improved HTTP caching. This was interesting. We had our HTTP cache-header expiry at around a month, because we pushed to production every fortnight and figured we'd never need assets cached longer than that. However, the way splitChunks in webpack 4 works, it tries to keep your vendor bundle stable — unless newer third-party libraries come into your code, the vendor bundle stays the same for a longer duration. So our vendor bundle actually lasted far longer than a fortnight, and we changed its cache expiry to a year. With this change the vendor bundle stays in cache longer, which helps returning users. We also brought in Brotli compression, which allowed us to reduce bundle sizes by a further 20 to 25%. Now, quickly, the performance gains we achieved. I mentioned the initial sizes; the first two graphs here are gzip-to-gzip comparisons, and with Brotli in place on top, I think we reduced the overall JS size by around 50%. And these are the timings for the three routes, measured again under all the same conditions as before: a 30 to 40% gain across the three routes.
And this is the speed-index number here. So, yeah, that's about it from my side. Questions? I don't know if we have time for questions. [Host] Thanks, Tejesh. Thanks for the talk. Any questions?