Okay, so the problem statement, and a note first: all of the fundamentals I'm going to talk about we use in both Myntra and Jabong, but this particular talk takes Jabong as a case study, so the examples are primarily from Jabong. Some of the problems we had: first, a desktop-first responsive design. Back in 2014 or 2015, when we built the new version of the Jabong web application, those were the days when responsive design was very popular, and we took a desktop-first approach because desktop traffic and desktop revenue were higher than mobile at the time. That's how the implementation was done: a lot of desktop components were imported into mobile, and that's where the suboptimal performance of the mobile application came from. And since this application was built in 2014-2015, a lot of the UI components we built had become outdated by 2018 or 2019. There was definitely a gap between the design language consumers now understand across the internet and where we were, so there was a need to change that too. The last one is the trend: in India, more people have mobile phones than desktop systems. So there was definitely a need to look at the whole problem from a different perspective and think mobile-first rather than desktop-first. Today I'm primarily going to talk about the performance optimization and not cover the rest. Why did we need a PWA? I'm sure most of us here know about PWAs, and I'm just going to recap, because a PWA is not really a technology; it's a way of building an application. It's a term for an application built with a collection of methods and techniques, and that's what we call a PWA.
So the primary thinking behind building a PWA: given the space we are in, we have a responsibility, and we really wanted our consumers to decide which platform they want to shop on. What typically happens in many organizations is that users get routed from the mobile web to the mobile application whether they like it or not; the simplest solution is to ask users to go to the app, give them some rewards, offer some more lucrative deals, and ask them to shop in the mobile app. But that was not how we thought about it, because we wanted the user to decide where they want to shop. We did some user research, and it came out that some users really want to use the website rather than the app. So we got to the point of: let's build a mobile web experience that is world-class and high-performance. It should be fast; integrated, so you have the icon on your home screen; and reliable, so it works across network conditions and on any kind of hardware, including low-end devices. That was the solutioning part, and that's where we decided to build a PWA. Some of the early decisions we made: first, the people. I think that's very important, because these are the ones who are going to build the solution, so having the right people on board matters. Not people who want to use a particular framework just because they really want to use it; we wanted people who understand that technology is a tool to solve consumer problems. Second, domain knowledge and consumer behavior. Until we have context about the business, it's really hard to solve the problem at the best level.
That's where domain knowledge and consumer behavior helped us a lot in deciding the architecture and the technology choices; I'm going to cover that in the next slide. Microfrontends: I think everybody's aware of microfrontends and why you use them, and that's the reason we started building with microfrontends. Then there's one decision I would call somewhat arbitrary: at some point we decided to keep the JavaScript size under 20 KB. And eventually we achieved that. Technology choices: we are using Preact, and this is where the business context comes in. On any e-commerce platform, 70 to 80% of the time you just scroll or navigate through pages. We really don't need a very heavy or very sophisticated diffing algorithm or virtual DOM for that; it's just navigation and scrolling. You just need a small library that solves the view part, that can render and reflect any changes that come to the state. So we thought Preact was far better suited, because it's much smaller and its memory footprint is also pretty low. Then we thought about Redux. I think we had a good discussion today about having Redux or not having Redux; we chose not to use it. We started building one simple library to help us with state management, and eventually ended up building a complete framework that we use to build the entire application. And functional programming, which I'm going to cover as well. There's one fundamental thing I found during my development days, which I call the framework trap: whenever we use one particular framework, we get trapped into it.
And moving from one framework to another is always very difficult; no organization easily lets you think through going from A to B. So we wanted to build something that is not tightly coupled to a framework. Now the strategy: by first principles. By the way, I'm going to talk about a lot of principles and fundamentals in this talk, and these fundamentals are not coupled to the front end; you can apply them to any problem you solve. By first principles, a website is a collection of web pages, and a web page is a collection of rendering elements: JavaScript, CSS, fonts and images. If you start optimizing these small chunks, you are sorted; you'll be able to optimize everything. So that's where we focused. I'm going to cover mostly the rendering part, JavaScript, and part of images; CSS and fonts I'm not going to touch in today's talk. Another principle is "put first things first." This is one of the things the most successful people in the world do pretty well to optimize their own schedule and become highly effective. We used it to prioritize optimizing first contentful paint and first meaningful paint; I'll show you the numbers later. How did we do that? Hybrid rendering: ship only what the user needs upfront, only that JavaScript, CSS and HTML content, nothing else. Reduced initial payload, which is again the same idea. Inlining critical CSS; I think it came to just one KB of CSS that we needed on the home page. Server push, which definitely helped pull the FMP forward. And preloading critical requests. So this is what I'm going to talk about. Let me first talk about hybrid rendering. The first thing you can see is that above the fold is server-side rendered.
Even the hero image, the big banner image, doesn't need JavaScript to be rendered; its URL was resolved on the server, and at least 70% of the main content was rendered entirely server-side, with no dependency on JavaScript at all. There are some images below that were lazy-loaded and do require some JavaScript, but 70% of the page we were able to render from the server, and that's where we got a good optimization on the FMP. The second part is below the fold, which is client-side rendered, and that only happens when the user shows an intent to see it. We don't even fetch the JavaScript resources upfront, in order to ensure the main thread is available for the user to take action. At this point, the server-rendered page is completely interactive and the user can act on it. Only once they scroll towards the bottom do we fetch the resources and render the below-the-fold content. This is a very simple implementation of how we built this page: we built one utility component for on-demand loading. It's basically a reactive component that only takes action when a scroll happens, so the below-the-fold content renders only on scroll. That's how we implemented it in code. Adding to that, below the fold you can see clusters of resources being fetched; these resources are fetched only when the user scrolls. And I'll talk about using service worker pre-caching to ensure this happens very quickly.
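As a rough sketch of the on-demand loading idea described above (not Jabong's actual component; the names here are hypothetical): attach a one-shot scroll listener, and only when the user shows scroll intent do you fetch and render the below-the-fold chunk. The target is injectable, so in the browser you would pass `window`.

```javascript
// Minimal "load on scroll intent" helper. `target` is anything with
// addEventListener/removeEventListener (window in the browser); `load` is the
// function that fetches and renders the below-the-fold content. It fires once.
function onScrollIntent(target, load) {
  let done = false;
  function handler() {
    if (done) return;
    done = true;
    target.removeEventListener('scroll', handler);
    load(); // fetch JS/data for below-the-fold content only now
  }
  target.addEventListener('scroll', handler);
  return handler; // returned only to make the sketch easy to test
}
```

Until the first scroll, the main thread stays free for the server-rendered, already-interactive page; the deferred chunk costs nothing.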
Server push. This is helpful because, without server push, once your HTML is loaded it gets parsed and only then starts sending requests to the server to fetch the JavaScript and the rest of the resources. By applying server push, we were able to have the resources pushed by the server before the client even requested them. This really helps in getting the JavaScript dependencies resolved very fast, even before your client starts asking for that content. Multiplexing is a default HTTP/2 feature where the same connection is used to fetch multiple resources from the server. Preload critical requests: I'm sure most people are doing this, but the most interesting part for us was preloading the hero banner. As I said, we resolved its URL on the server side, so no JavaScript was required to fetch this particular image, and we preloaded it as well. That's why the FMP became very quick: everything required to render the page was already there. One disclaimer about preload: choose the resources you preload very wisely, because it consumes your network bandwidth and the hardware capabilities of the mobile phone. You can't preload most of your resources; be very choosy about which resources you are going to preload. Here is the difference on slow 3G: before, it used to take 14 seconds to complete the page; after the preload, you can see that by the ninth second we had the hero banner on the page, and this is on slow 3G. Okay, now JavaScript optimization, primarily in three categories. One is reducing JavaScript boot-up time, which is about the initial load of the JavaScript.
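The preload approach described above can be expressed as plain resource hints in the document head. This is illustrative only; the talk doesn't show Jabong's actual markup or URLs:

```html
<!-- Hypothetical paths: preload the server-resolved hero image and the
     critical home-page script so they are fetched before the parser
     discovers them -->
<link rel="preload" as="image" href="/banners/hero.jpg">
<link rel="preload" as="script" href="/js/home.bundle.js">
```

Each hint competes for bandwidth with everything else on the page, which is exactly why the talk warns to preload only a handful of truly critical resources.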
Second, code splitting and pre-caching with the service worker; I'll talk about code splitting at different levels, the route level and the component level, and optimizing that with the service worker. And third, JavaScript runtime performance optimization, where I'll cover the RAIL model; I'm sure you're aware of RAIL, and I'll talk about how we implemented it. Some code snippets I'd like to share. This is what a typical Preact component looks like for us. You can see it's a pretty declarative piece, and this is what we call a smart component: there's no markup at all. It's a simple pure function. It doesn't even know it's going to be part of Preact; it just takes input and produces output. A function is just functioning without knowing which environment it will execute in, so you can run the same function in the browser as well as in a Node.js environment. Actions: these are again pure functions. They aren't aware of the environment or which particular utility they belong to, because everything required for execution goes in as input and they just return output. Event handlers: again pure functions. These are typical DOM event handlers, where the event comes in as input along with the entire props; a pure function again, taking input and returning output. This is where the 1.js utility comes into the picture. We went with a subscription model: any smart component can subscribe to properties, and when that component renders it gets the values of those properties from the global state store. Similarly, if any changes are made to those properties by the actions, then just like in React or Preact, the same will happen here.
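A minimal sketch of such a subscription store, assuming nothing about the internal 1.js utility beyond what's described here (components subscribe to specific keys of a global state; actions write new values; only subscribers of changed keys are notified):

```javascript
// Toy subscription-model store. Real usage would wire `callback` to a
// component's re-render; here it is just a plain function.
function createStore(initialState) {
  const state = { ...initialState };
  const subs = []; // each entry: { keys, callback }
  return {
    get: (key) => state[key],
    subscribe(keys, callback) {
      subs.push({ keys, callback });
    },
    // `set` is what an action calls with the new values it computed
    set(patch) {
      const changed = Object.keys(patch).filter((k) => state[k] !== patch[k]);
      changed.forEach((k) => { state[k] = patch[k]; });
      // notify only subscribers interested in a changed key
      subs.forEach(({ keys, callback }) => {
        if (keys.some((k) => changed.includes(k))) callback(state);
      });
    },
  };
}
```

Because the store, actions, and components are all plain functions over plain data, none of them depend on Preact itself, which is the framework-trap point the talk keeps returning to.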
The components re-render to reflect the store changes. And then there's the JSX. Why did we do it this way? Because there were always a lot of confusing questions: what is a smart component versus a DOM component, what is a presentational component versus your business or logic component? Here you can see a very clear boundary between them. And by the way, these functions can run anywhere because they have no dependencies; they're pure JavaScript, nothing else. If I took this JSX through a Babel preset, a transpiler for another target, I could run this entire thing outside the Preact ecosystem too. That's what I meant about not wanting to be trapped in one framework. Now, code splitting. This is the home page: one JavaScript file coming down. It was 15 KB; now it's close to 18 KB. Nothing else. You click on one of the banners, and all the JavaScript resources required for that particular page become available. You can even see some of the scripts required for GA. We usually get a lot of issues from analytics and third-party scripts, so we optimized that too: we're not including all the Google Analytics events in the bundle; those bundles also come on demand. To ensure this happens very quickly in real time, we have all the important JavaScript pre-cached using the service worker. The user clicks on a product, they get to the product page, and again the JavaScript required to render that particular page is fetched from the server. That's how we achieved route-level and component-level code splitting.
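The route-level splitting with caching described above can be sketched as follows. The route names and loaders here are hypothetical; in production each loader would be a dynamic `import()` (for example `() => import('./product-page.js')`) and the cache would additionally be backed by the service worker's pre-cache:

```javascript
// Tiny route-level code-split loader: each route's chunk is fetched on first
// visit and reused afterwards. A plain Map stands in for the real cache.
function createRouter(loaders) {
  const cache = new Map();
  return function loadRoute(route) {
    if (!cache.has(route)) {
      cache.set(route, loaders[route]()); // fetch the chunk only on demand
    }
    return cache.get(route); // repeat visits reuse the cached chunk
  };
}
```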
I can show you part of how we achieved this. These are the above-the-fold components, which are rendered upfront anyway, SSR or not. The route-level splitting covers most of it, but these are purely component-level code splits. Why? Because these are the components below the fold; unless the user shows intent to see that content, we really don't need to fetch it. You can compare it with this: you go to a merchant asking for one thing, and along with that item they hand you another three or four items, saying, sir, you might need these in the future. That is exactly what we do to the consumer when we over-ship: we think they might need it, and probably they won't. So wait; unless it's demanded, don't do it. Here's the difference after code splitting: the vendor bundle became 8.3 KB, where earlier it was 88.88 KB, and overall JavaScript went from 218 KB to 122 KB. Why is it still 122? Because third-party JavaScript is still there. We removed everything we could from the in-house JavaScript and eventually ended up making the complete page interactive in 18 KB. Now let's talk about runtime performance. As I said, we implemented the RAIL model very heavily. To remove all the long tasks from our codebase, we used setTimeout and promises to defer the execution of JavaScript functions that aren't required for responding to the user's action. The target was to respond to the user within 100 ms; that's what the guideline says, because if you don't respond within 100 ms, people perceive it as lag.
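The defer-with-setTimeout pattern just described can be sketched like this (a simplified illustration, not the actual Jabong code): do only the work needed to answer the user now, and push everything else off the current task so the main thread is free again well inside the 100 ms window.

```javascript
// Respond to a user action within budget: run the urgent part synchronously,
// schedule the rest for later. `schedule` defaults to setTimeout but is
// injectable so the sketch can be exercised outside a browser.
function respond(urgent, deferred, schedule = (fn) => setTimeout(fn, 0)) {
  const result = urgent();  // e.g. toggle the wishlist icon immediately
  schedule(deferred);       // e.g. recompute recommendations off the hot path
  return result;
}
```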
That's where perceived performance takes a hit, so we optimized heavily to achieve that 100 ms target. For visual changes, any animation we were doing, the JavaScript execution was made part of requestAnimationFrame, to ensure it gets the entire frame. To achieve a very smooth, app-like experience you need to hit 60 frames per second, which means you get only about 10 milliseconds to execute the JavaScript responsible for an animation, so it's important that this work gets the whole frame. Now a very interesting part. If you've followed the talks from CDS, they talked heavily about web workers, and that's what we were doing. All the instrumentation: every consumer website captures a lot of data and does a lot of analytics on user data, so they can understand the consumer better and do a lot of personalization. But the user doesn't know that; why should they care that you need instrumentation or that kind of data? Doing that kind of computation during a user action is something you can avoid, and that's what we did: all the instrumentation and analytics were moved to a web worker. Let the worker take care of what we need, and let the main thread, the UI thread, serve the consumer. Keeping the main thread available for the user goes a long way toward better perceived performance. Memoization: we already had a talk about memoization, so everybody should be clear on it by now. Calling any DOM API has an adverse impact on how the browser functions.
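The DOM-read caching this leads into can be sketched generically. This is an assumption-laden sketch, not the talk's actual utility: `read` would be something like `() => window.innerHeight` in the browser, and `invalidate` would be wired to the resize event.

```javascript
// Cache an expensive read (e.g. a layout-triggering DOM API) so it is
// computed once and reused until explicitly invalidated.
function memoizeRead(read) {
  let cached;
  let valid = false;
  const get = () => {
    if (!valid) { cached = read(); valid = true; } // hit the DOM only once
    return cached;
  };
  get.invalidate = () => { valid = false; };       // e.g. on window resize
  return get;
}
```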
If I read window.innerHeight, the browser calculates it in real time, which can cause a reflow. To avoid doing that multiple times, every time you call a DOM API, cache the value; the next time, you don't need to compute it in real time and can reuse it. Every such call to a DOM API was memoized, cached somewhere, to ensure there's no impact on perceived performance. Next is image optimization, and this last part is one of the most important. Alongside lazy loading, we used the right image formats, and there are a bunch of tools that let you serve images at the right height and width. We also reduced the JavaScript that had to execute in order to form the URL of an image, which helped a lot. Using the right formats for the images also helped a lot; the impact was really huge, hard to even put into a percentage. You can see it's now 56.6 KB compared to 954 KB. This is what I meant about desktop-first design: by focusing on desktop, we always compromised on what we could do better for users on mobile phones. These are very simple, very basic hygiene things you're supposed to do in order to serve the right image dimensions. Fast 3G: if you compare the end product, we were able to reduce page completion from 18 seconds to 3.5 seconds. Slow 3G had been 42 seconds; you can imagine nobody is going to wait that long, users were already gone by then. But here you can see some of the other things we did. With the text rendered, even at three seconds the user can see that something is coming, so we started engaging users at three seconds, and they have context: content that tells them what image is going to appear.
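The "form the image URL with less JavaScript" idea can be reduced to a tiny pure function. This is hypothetical: the talk doesn't show Jabong's real URL scheme, so the host and the `w`/`h`/`q` query parameters here are made up. The point is simply requesting exactly the rendered dimensions instead of shipping a desktop-sized image to a phone.

```javascript
// Build a resized-image URL for a (hypothetical) image CDN. Keeping this a
// one-line pure function is itself part of the optimization: almost no JS
// has to run before the request can be issued.
function imageUrl(path, { width, height, quality = 80 }) {
  return `https://img.example.com${path}?w=${width}&h=${height}&q=${quality}`;
}
```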
So if a user clicks at the third second, they'll be able to navigate to the right page. And visually complete went from 40 seconds to 11 seconds. This was the Lighthouse score earlier; we thought it was pretty bad, but when I looked at other websites, we were still in the orange, we were okay. But getting from there to here, to 100, was something we never planned. It just happened over the course of doing the smallest of small things. So I would suggest the score should never be the goal; it's just the outcome of the effort you make. That's what we did, we eventually ended up there, and I think we're happy about it. All right, so what do you do once you've done a lot of hard work to achieve a certain level of performance? You need to make sure you don't have to do the same thing all over again after two years. How do you build a culture, how do you make performance part of the family? There are still developers who might think: why the hell do I need to think about performance? We've already achieved it, right? The attitude becomes: we really don't need to care, we'll look at it again after the release, or next month. So we had to put some regulations in place, an odd-even kind of rule, so people have to adopt it whether they want to or not. Once it becomes a habit, we're good. That's the kind of thing we did. We set a performance budget, particularly with Lighthouse. The 20 KB benchmark was always there: your build will fail if your bundle size goes beyond 20 KB, and that's how we're still able to maintain it. That's what I was saying: 100 was not a goal, but you still have a performance budget, and based on that we decided how far we could go.
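A build-failing budget gate like the one just described can be as small as this (a sketch under the talk's 20 KB figure, not the actual CI script; in a real pipeline the sizes would come from `fs.statSync` on the build output and a non-empty result would `process.exit(1)`):

```javascript
// Return the names of bundles that exceed the performance budget.
// An empty array means the build may proceed.
function checkBudget(bundles, budgetBytes = 20 * 1024) {
  return Object.entries(bundles)
    .filter(([, size]) => size > budgetBytes)
    .map(([name]) => name);
}
```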
If we're moving past 20 KB, then we need to negotiate a trade-off with the product manager, or with the internal engineering team, about what we can push to the bottom of the page or drop. That's how maintaining the performance budget works. It was also very important to have jobs where every day you can see the score; we had Jenkins running a couple of times a day, showing the results, and if it's beyond the accepted level it just marks it critical. And testing: release criteria were also important. You can't go to production until you meet all of these criteria. These are the regulations we started with. After some time, once people had seen the results, they became habituated: they started doing the performance work and talking about performance. That's where we did well in accepting performance as a first-class citizen. Okay, so lessons. By the way, we did all of this in 45 days; we pushed the Jabong PWA into production in 45 days. But what next? The end result I've shown was not all done in those 45 days. That's the part of the story where you can shine, but eventually it took another eight to twelve weeks to completely get this project done. The TTI optimization was not easy at all; it took a lot of time and a lot of energy, and we were completely drained getting it done. The util trap. This is another one of those things: we generally create a util file and put all the utility functions there. Eventually what happens? You have 500 functions in there, and your bundle size is going up again.
So what we did: every utility function goes into its own file, so no file has more than one function, and you import exactly the function, the file, you need. That's how we solved the util trap. Analytics and third-party JavaScript: the nightmare is still a nightmare. I don't know when we'll be able to solve it, but we're still fighting it, and possibly we will. Polyfills: again, something we do unconsciously. Most people use core-js to make sure they support Chrome 42 and a long tail of older browsers, for backward compatibility. That's where we generally make mistakes. Because our budget was 20 KB, we were able to spot things that people with a higher budget cannot: core-js could be around 10 KB or a little less, and that's half of our total budget. So we spotted it, removed the polyfills completely, went back to MDN, picked the functions we needed from there, put them into individual files, and ended up with all of it in possibly not even kilobytes, just bytes. That's how we learned not to use some of these libraries without thinking. Side notes: the ability to push code to production daily was one of the great things, rather than having back-and-forth discussions about whether we should do something or not. Just do it, push to production, see the result, and then claim whether what you did works or not. Sometimes we spend a lot of time discussing rather than taking action; doing this, we allowed developers to take decisions and produce outcomes. Ask experts for help; you can ask me. For this project we actually worked with the Google folks a lot.
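The per-function polyfill approach described above might look like this. The function is a standalone, slightly simplified rendition of the logic behind the well-known `Array.prototype.includes` polyfill (negative `fromIndex` is clamped to 0 here, unlike the spec); the real file would install it behind a feature check like `if (!Array.prototype.includes) { ... }` so modern browsers pay nothing:

```javascript
// One small file per missing feature, instead of all of core-js.
// Array.prototype.includes logic as a standalone helper.
function arrayIncludes(arr, search, fromIndex = 0) {
  for (let i = Math.max(fromIndex, 0); i < arr.length; i++) {
    const el = arr[i];
    // NaN !== NaN, so the spec requires comparing it explicitly
    if (el === search || (Number.isNaN(el) && Number.isNaN(search))) return true;
  }
  return false;
}
```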
Preloading the hero banner was one of their suggestions. Using a web worker for the non-consumer-facing work was also a recommendation that came from the Google folks. We spent something like three to six months with them getting all of these things done. So don't shy away; that's what the community is for. You should always ask for help whenever you need it; there's no problem with that. The two-hours rule: this is something that helped us, and that's why I'm sharing it. We had to ship in 45 days, so we wanted to track progress every two hours, ensuring people were not blocked. Sometimes you just push for things, and as I said earlier, that's where it helps: people were ready with the things they really wanted to ask about, and we had a check-in point every two hours, daily. We are still learning, definitely. Showing all of this doesn't mean we've done a perfect job; there are plenty of things we still need to learn in order to serve every kind of consumer and bring the web to everyone. So here's the summary: everything we did, on one slide. I didn't talk about passive listeners, the Network Information API for serving different images for different network categories, or the IntersectionObserver we're using; those are all here too. We did all of these things in order to achieve that number. All right. That's it. Thank you.