abof.com is Aditya Birla's online fashion retail brand, and if you open abof.com right now on your mobile it looks something like this. Before my team and I stepped in, abof.com was your regular old website. It was powered by IBM WebSphere Commerce, which is a traditional web framework in the sense that it renders some HTML on the servers, sends it down the pipe to the user, and then on the front end some JavaScript is added on to enhance the user experience. WebSphere Commerce uses the Dojo Toolkit to build that JavaScript. But because abof.com was a very fast-growing website, and a lot of teams had worked on it over the preceding months, there was also a lot of jQuery code present on the front end. And because of this mismatch of technologies, with different frameworks being used on the front end, they ended up with a huge front-end payload, which degraded their front-end performance quite a lot despite the fact that their back end was blazing fast. So let's look at some numbers. When we stepped in earlier this year, around February or March, it took about 20 seconds before something showed up on the screen if you went to abof.com on a 3G connection. That's a lot for an e-commerce website. It means most of the visitors who come to your website on a slower connection are just going to bounce off. By "we", I mean Alaris Prime and some folks from ActiveSphere, who are one of the sponsors today. We rewrote the entire website as a single-page application using React. Once again, let's take a look at some numbers. But before we do that, I'm going to define a metric that we use to measure performance. This metric is called time to first paint.
To understand time to first paint, I'm going to show you a video. Let's quickly look at how a page on abof.com loads. This is the page you get if you go to women's dresses on abof.com. At around three seconds after you start loading the page, something gets painted to the screen, and that's what I mean by time to first paint: the time from when your user makes a request to your website to the time when they actually see something on the screen. Time to first paint is a really important metric, especially on e-commerce websites, because it decides whether your user is going to stay on your website, wait for your products to load, and actually interact with it, or get bored and bounce off. So throughout this talk, and in fact throughout this entire project at abof, time to first paint is what we looked to optimize. Our goal was always to get something on the user's screen as quickly as possible. Now let's look at abof's numbers before and after our teams stepped in. When we stepped in, abof.com would take about 20 seconds to load on a 3G network on a mobile phone. After we optimized the website and rewrote it in React, we got that time down to somewhere between 2.2 and 2.6 seconds. That's about an order of magnitude improvement. And if you look at page load time, which is the time it takes to load the entire page with all the assets, all the images, all your CSS, all your analytics, everything, that was initially around 27 seconds; after we delivered the React rewrite, it came out to about 9.4 seconds. So once again, that's about three times faster.
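If you want to measure time to first paint on your own pages, one option in current browsers is the Paint Timing API, which exposes `first-paint` entries via `performance.getEntriesByType('paint')`. This is a sketch of my own, not the tooling we used at abof; the helper just picks the first-paint entry out of a list of entries, so the logic can be exercised outside a browser.

```javascript
// Sketch: reading time-to-first-paint from Paint Timing API entries.
// (Illustrative helper, not abof's measurement setup.)
function firstPaintTime(paintEntries) {
  const entry = paintEntries.find((e) => e.name === 'first-paint');
  return entry ? entry.startTime : null; // milliseconds since navigation start
}

// In a live page you would feed it the real entries:
//   firstPaintTime(performance.getEntriesByType('paint'));
```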
And here's a quick comparison video with a bunch of other e-commerce websites in India. These are slow-motion videos taken on a 3G network. The video is a little cut off here, but in the top-left corner you'll see abof.com loading, and along the bottom you'll see Flipkart, Amazon, Jabong, and Koovs as they start loading. So I'm just going to play this video real quick. Once again, at around three seconds, you see that abof.com has something on the screen while nothing else has loaded yet. Let's continue the video for a bit. At around four seconds, Jabong loads and Flipkart is still loading. And at around nine seconds, you'll see that abof has finished loading while everything else is still loading; Flipkart is at 13 seconds now. If you want to take a look at this comparison video, you can go to that URL; it's available on YouTube. This was taken about three months ago. I think Flipkart recently pushed some optimizations that have actually made them really fast, and Ankit talked about it in the morning. So that's really great; I'm glad everyone's paying attention to their performance. All right, let's talk about how we went about optimizing abof's performance. abof was a ground-up rewrite in React, and while doing this rewrite we followed three main principles throughout our development process. Principle number one: be impatient. You want to start painting to the user's screen as early and as quickly as possible. You want to get something there, and you want your user to start interacting with your website as quickly as possible. Principle number two: be lazy, because we are in India and we have to optimize our websites for low-end Android devices. We're talking Moto Gs, but we're also talking devices by Micromax and Lava, white-label Chinese devices.
These devices don't always have the latest web browsers or the fastest processors, and they're also on flaky 3G or 2G networks, or on what's jokingly called Lie-Fi, which is a Wi-Fi connection that's actually really, really slow. So on the device itself, you want to do as little work as possible. Principle number three: set quotas for everything you do, for every computing resource you have at hand. Whether it's the number of network requests, your bandwidth, CPU time, memory, or the number of DOM nodes the device can handle, you need to set baseline quotas for everything. A good strategy for doing this is to begin with the kind of devices you're targeting. Once again, let's say a typical abof customer uses a Moto G to visit the website, over a 3G connection. So you already have two constants here: a mobile device and a network type. Using these, you can set up some limits that you don't want to hit during development. You could say that your payload size is never going to go above, say, 100 kilobytes; that's all the JavaScript you're ever going to load. Or you could say that the number of DOM nodes you put on the device is never going to exceed, say, 5,000. And once you set these quotas, it's important during the development process to keep testing to see that you're never shooting above them. So that was principle number three. Now I'm going to quickly talk about our tech stack. As I mentioned before, this entire project was powered by React and the React ecosystem. But by and large, the tech stack doesn't matter: if you're building a front-end app, the principles I'm going to talk about are going to be more or less the same, and more or less applicable across frameworks. Still, let's take a look at the stack we used. We evaluated four front-end frameworks across five criteria while we were doing the project.
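A quota is only useful if something fails loudly when you cross it, so a natural place to check quotas is the build. Here is a minimal sketch of such a check; the budget numbers and names are illustrative, not abof's actual figures or tooling.

```javascript
// Sketch: a build-time quota check (hypothetical budgets, not abof's real ones).
// Returns a list of violations so a build script can fail when a quota is hit.
const BUDGETS = {
  jsBytes: 100 * 1024, // total JavaScript payload: 100 KB
  requests: 30,        // network requests on first load
  domNodes: 5000,      // DOM nodes a low-end device should handle
};

function checkBudgets(measured, budgets = BUDGETS) {
  return Object.keys(budgets)
    .filter((key) => measured[key] > budgets[key])
    .map((key) => `${key}: ${measured[key]} exceeds quota ${budgets[key]}`);
}
```

In a CI step you might gather `measured` from your bundler's stats output and abort the build whenever `checkBudgets` returns a non-empty list.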
The main competitors were React and Angular, and the five criteria were rendering speed, payload size, availability of tooling, whether we could render on the server or not, and community size. React pretty much checked most of our boxes. Angular 2 failed mainly on payload size and the availability of tooling; the framework was pretty immature at the time. Vue and Riot are newcomers to the front-end framework game, and our main issue with them was that there wasn't enough of a community for us to pick them up. But as I said, the framework by and large doesn't matter, so if you'd like to talk frameworks, I'd be more than happy to answer your questions afterwards. For now, let's move on. When we picked React, we also bought into the React ecosystem. We used Babel to compile our ES6 (and some ES7) code down to ES5, Gulp to automate our tasks, and Webpack for bundling and optimizing our code. Webpack is a really, really powerful tool; if you haven't checked it out yet and haven't used it in a project, I highly encourage you to try it. In fact, if there's one tool out of all of this that you should learn, it's Webpack. We used Redux for managing our data, plus a bunch of other small libraries, but more or less we stuck to what you see on the screen right now. We actually hand-rolled a lot of code for abof instead of using an available library, because in a lot of cases third-party libraries give you more features than you actually need, and you end up shipping more code to your user's device than you need to. So remember: be lazy. Now let's take a look at how the entire abof.com website is structured. And look at that, let's look at this really bad diagram.
The interesting story about this diagram is that while preparing this presentation I was looking through my emails, and I saw that we had actually sent this diagram to the folks at abof at the beginning of the project to explain what we were going to do. So this is our professionalism on display right here. Okay, let's look at what happens when you load abof.com on your mobile device. If you follow the arrows labeled with the Roman numeral 1, you'll see that your client makes a request to a load balancer, and the load balancer forwards the request to a Node.js server in the top-right corner, powered by Koa.js, which is a web framework similar to Express. That Node.js server is running your entire React app as a universal application on the back end. A universal application is a front-end application that can also run on the back end, on Node, without actually pulling up a web browser. So the Node.js server is running all of abof.com, everything you see on the front end, inside of Node. As soon as a request hits the Node.js server, it renders some initial HTML for your user to see and sends it back to the load balancer. To render this HTML, follow the arrow down to WCS, which is IBM WebSphere Commerce, and which comes with a REST API. To put together this initial HTML page, the Node.js server makes a request to WCS, collects all the data it needs, puts together the HTML page, renders it on the back end, and sends it back to the load balancer, which delivers it to your client. Once the page is loaded on your client, all further requests, if you look at Roman numeral 2 here, to build the page, load more products, filter, search, or whatever, go directly through the load balancer to WCS. The Node.js server is no longer in the picture.
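At a very high level, that first request could be sketched like this. All the names here are hypothetical: in the real setup `renderApp` would be something like `ReactDOMServer.renderToString` over the component tree, and `fetchInitialData` would call the WCS REST API. Both are injected here so the flow itself stays framework-agnostic.

```javascript
// Sketch of the server-side render handler (Koa-style ctx assumed; all
// names hypothetical, not abof's actual code).
async function handleRender(ctx, { fetchInitialData, renderApp }) {
  // 1. Collect the data the page needs from the commerce back end (WCS).
  const initialState = await fetchInitialData(ctx.path);
  // 2. Render the component tree to an HTML string on the server.
  const appHtml = renderApp(initialState);
  // 3. Embed the state so the client-side app can take over without refetching.
  ctx.type = 'text/html';
  ctx.body =
    '<!doctype html><html><body>' +
    `<div id="root">${appHtml}</div>` +
    `<script>window.__INITIAL_STATE__=${JSON.stringify(initialState)}</script>` +
    '<script src="/bundle.js"></script>' +
    '</body></html>';
}
```

Because the handler only depends on what's injected, the same shape works behind any load balancer or CDN that caches the resulting HTML.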
The good thing about this entire architecture is that the initial response that gets sent back from Node.js is cached in the load balancer, and most of the time it in fact gets cached in our CDN. So most of the time, if you go to abof.com, you're not actually hitting our infrastructure at all; you're just pulling down some HTML that's stored on a CDN. Only when you start looking at page number two, for example, do we actually make a REST request to the abof.com API and start loading further products. So for abof.com we have a very specific process that builds up a product listing page step by step, and throughout this presentation I'm going to use that URL as an example. If you want to follow along, you can load it up in your browser with your DevTools open. Let's take a quick look at these three stages visually before we start digging deeper into them. Stage one is when we render the HTML on the server, on Node.js, and send it down the pipe. This is our "be impatient" principle in effect. If you go to abof.com and turn off all JavaScript, you'll see something that looks like this. This is the initial HTML that gets rendered on the server and sent back. You can see that it has a header, some filters, some sorting information, and a list of products. If you scroll down, you'll see there are about 12 products on this page. There are no images, and there's no interactivity because there's no JavaScript: you can't actually use a filter, you can't sort by popularity, you can't do anything. But because this page loads so fast, even on a slow 3G connection it comes down in about three seconds, your users are going to start engaging with the website pretty much immediately. And this gives your user some very specific feedback.
It says: okay, something is happening, this website is loading, you have come to the right URL. And this feedback is very important when you want to prevent your user from leaving your website. You'll hear the term perceived speed; this is part of improving the perceived speed of a website. Stage two is loading the JavaScript. Once the JavaScript loads, it adds interactivity to the website, then goes on to load images and account information, for example your username and email. If you added something to your favorites, this is when that gets loaded; if you have something in your cart, this is when that gets loaded too. When the JavaScript loads, your page is going to look something like this. Now you see we have a hamburger menu in the top-left corner, we have filters, we have sorting by popularity, and we have all the product images loaded. If you're paying very close attention, you'll see that we've also loaded the font; that's also something we do using JavaScript. But there's still some information that hasn't loaded, and you're only going to observe this particular screen on a really, really slow connection; maybe throttle your DevTools to GPRS or 2G to see it. When you open the hamburger menu, you'll see that the top area says "Hi, guest". We still haven't loaded the user's account information; we still don't know who our user is. The reason we defer loading this account information and leave it out of the initial payload is that you want to keep the initial payload really small. We don't want to ship the user anything that's not necessary. This is secondary information that the user is probably going to interact with five, ten, maybe even thirty seconds after the page has loaded, so we don't really need to load it immediately. Once again: be lazy, in effect.
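Deferring secondary data can be as simple as scheduling the fetch after the first render instead of blocking on it. This is a sketch with hypothetical names (`fetchAccountInfo`, `updateHeader`); the scheduler is injected so you could swap in `requestIdleCallback` where available.

```javascript
// Sketch: defer secondary data (account info, favourites, cart badge) until
// after the first render. Hypothetical names, not abof's actual code.
function loadSecondaryData(fetchAccountInfo, updateHeader, schedule = setTimeout) {
  return new Promise((resolve) => {
    schedule(async () => {
      const account = await fetchAccountInfo(); // e.g. GET /api/account
      updateHeader(account);                    // swap "Hi, guest" for the real name
      resolve(account);
    }, 0);
  });
}
```

The point of the promise is only that callers (and tests) can find out when the deferred work actually finished; the page itself never waits on it.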
If you wait a couple of seconds, you'll see that the user's account information is now loaded. Instead of saying "Hi, guest", it says "My abof", and clicking that will take you to your user profile. If you look at the top right, you'll see a heart icon with a badge saying three; that means the user has added three things to favorites. There's also a cart icon. There's nothing in the cart yet, but if you had added something to your cart, you'd see a badge there as well. So this is information that gets pulled in after the initial page has loaded, after all the images are loaded, and after all our React components have been initialized. The third and final stage of loading is when we load everything else, everything that's an add-on, for example third-party scripts. abof uses a third-party service called MeTail to add some additional functionality to the website, so this is when that third-party script gets fetched. This is when Google Analytics gets fetched, along with anything else that isn't core to the user experience. Once again, you'll only observe this stage if you throttle your Chrome DevTools to a really slow speed. Down at the bottom left there's a gray bar saying "see how it fits"; that's the MeTail script being loaded. And that's how an abof page is built up, step by step, by sending down the essential information first and pulling in all the auxiliary information the page needs later on. Now let's dig deeper into each of these stages and look at what's happening behind the scenes, starting with the HTML. To begin with, why load the initial HTML first? Why go through the pain of setting up a Node.js service somewhere in the back end and getting it to render your website? The reason is that your browser will start painting to the screen as soon as it receives a little bit of HTML and CSS.
So if you want to optimize your time to first paint, which is what we wanted to do for abof, the best strategy is to send a small chunk of HTML and CSS down to the user as soon as possible, because you want to improve the perceived speed. You want to give your users the perception of speed, even if your website is not actually fast. Sending down this initial chunk of HTML makes it feel like something is happening. It assures the user that they've typed in the correct URL, that their network is not down, that their mobile phone is still working, and it keeps the user around for the later stages of loading. And here's something really interesting. This is a small video of a website loading in Chrome, starting from a new tab page. Let's quickly take a look at what happens when you type a URL into the URL bar and hit enter. Okay, let's look at that once more. As soon as the URL is typed, the loading indicator spins one way, and as soon as the browser receives the first byte of HTML from the server, the spinner changes. This is how loading indicators work in most web browsers, whether mobile or desktop. So what you ideally want to do is make that loading indicator go clockwise and dark as quickly as possible, because that's the point at which a user knows the website is loading correctly. This is something we actually optimized for: we wanted our loading indicators to turn clockwise fast, not stay anti-clockwise and slow. So for abof.com, we render the first 12 products on the server and send them down the wire. Depending on what kind of website you have, whether it's an e-commerce website or a web app, this number is going to vary for you. For a lot of websites, loading, say, 50 products is what works for them.
For some websites, loading one or two products works best. This is something you'll have to A/B test and figure out for yourself. Now that we've looked at the why of sending HTML down the wire initially, let's take a look at how we do it. This is a term you might have heard: universal rendering, or isomorphic rendering. It basically means that a web application that's intended to run in the user's browser on the front end is designed in such a way that it can also run on the back end, without a browser, without a DOM, inside Node.js. And instead of rendering to the DOM, it can produce a string of HTML that can then be sent down the wire. That's universal rendering. If you're not familiar with React, a bit of React knowledge is required for the rest of this talk, but I'm going to explain everything as quickly and as well as I can. A React application is basically a tree of UI components, and this is what the tree looks like for abof.com. In fact, if you open abof.com in Chrome and look at the React DevTools, this is the tree you'll see. Each component inside this tree looks something like this. This is an example from the actual abof.com source code; I hope nobody sues me. This is a component called Badge, and as you can see, a component is basically a function that takes in some parameters and returns JSX; on line nine of the slide there's a span tag. JSX produces a virtual representation of what your UI is supposed to look like. So when you render this component using React, this code doesn't actually produce anything in your browser by itself; it doesn't give you a DOM. It just produces an object hierarchy telling me that a badge is made up of a span, and that the span has certain properties. Take a look at this screenshot: in the top right, next to the heart icon, there's a three. That's essentially what the Badge component is rendering.
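The abof Badge itself is proprietary, but the idea it illustrates can be shown with a simplified stand-in: a component is just a function from props to a description of the UI. Real JSX compiles to `React.createElement` calls, whose output has essentially this `{ type, props }` shape (plus React-internal fields); this model skips those internals on purpose.

```javascript
// Simplified model of a Badge-like component: a function from props to a
// virtual description of a <span>. (Illustrative stand-in, not React's
// actual element objects or abof's code.)
function Badge({ count }) {
  return {
    type: 'span',
    props: { className: 'badge', children: String(count) },
  };
}
```

Calling `Badge({ count: 3 })` gives you a plain object describing a span; it's a separate library, like ReactDOM, that turns such descriptions into actual DOM nodes.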
There's a library called ReactDOM that takes this virtual representation of a badge, or any other component, or in fact your entire tree of components, and puts it in the browser. There's another library called React Native that takes the same tree, the same virtual representation of your UI, and puts it on a phone. And in the same way, ReactDOM comes with a utility function called renderToString, which accepts a UI tree as an argument and gives you a string of HTML. This is how we render a React app on the back end. Because a React app is completely divorced from the browser's DOM and the browser's APIs, we can run it anywhere there's a JavaScript runtime. So we run it on Node, use the renderToString function that ReactDOM gives us to render our component tree to HTML, and send it down to the user. It's a little more involved than that, but it's something I'd be happy to talk about after the talk, and I can show you exactly how we set this up with Node.js and Koa.js. For now, let's talk about stage two, which is loading the JavaScript and then loading all the auxiliary information. Once again, why do we do this? Why load auxiliary information like account information or the user's bag later on? Why not just send it down with the initial HTML? Well, this is why. When somebody comes to a web page listing dresses or jeans, they want to look at clothing, and they want to purchase whatever you're selling. They don't want to look at their account information or their favorites. This is all auxiliary information that's not needed at the moment the user arrives, and every bit of auxiliary information you add to your web page adds some extra cost. So you need to prioritize your user's intent.
For every web page you build, for every web page you're optimizing, you need to figure out: what is the one thing the user is coming to this page for? What are they doing? Because of this, it's not very useful to load all the user's account and profile information on first load, so we load it asynchronously afterwards, after the React components have been rendered, after the entire page is functional and the user can already interact with the website. An interesting scenario came up after we built abof.com. The first stage of the project was building the listing pages and the homepage. The second stage was optimizing the checkout flow: when you go to your cart and go through the steps to actually order a product. At that stage our priorities were completely reversed, because for a checkout flow you cannot proceed unless you have the user's details at hand immediately. You need to show the user what their cart looks like, what their addresses are, even what their username is. So in that case we began by loading the account information and the cart information, and then we loaded everything else. Once again, whenever you're optimizing your web app, you have to figure out what the user's intent is. Now, at a very high level, let's take a look at how we load the JavaScript. When we render things on the server, we render the first 12 products and send them back to the user. We know that page number one of our product listing has been loaded, we know that 12 products have been loaded, and we have the details of all those 12 products in the browser somewhere. So how does React, when it runs on the front end after all the JavaScript has loaded, know all of this?
How does React know the state of the web page? How does it know what has already been rendered to the browser and what still needs to be rendered? Let's take a look at that at a very high level. First, React takes over. React runs, it knows the root element where our app is being rendered, and it takes over that root element. As for how it does that: if you open abof.com in your DevTools and look at the Elements panel, you'll see a script tag containing the huge chunk of JSON you can see on the screen. If you format it, you'll see that this is what it looks like. This is a variable called initial state. The details of everything that happened on the server while rendering the initial HTML are present inside it somewhere. You can see there's information about the cart, information about the environment, the entire product listing. We know what our breadcrumbs look like, we know how many products we have, we know everything. This is the information React uses to take over the page, essentially. And this is where you need to know a little bit about Redux. This is something called a Redux reducer, and it basically takes care of our application state; it's the magic sauce. This particular one is a reducer for one of our navigation menus. As you can see, there's an initial state that's completely empty in the beginning, on line nine of the slide; this is where the app starts. And this is our menu component. If you look at lines 31, 32, and 33, you'll see there's a static function in there called prefetch. Once again, this static function has nothing to do with the browser; it's just a function that makes a REST call.
So on the server, when we're pre-rendering, we call this prefetch function on any component that needs data fetched. The prefetch function fills in the initial state, which gets serialized as JSON and sent down the wire with the page on the initial render, and that's what lets React take over the page. Once React has taken over and is running, it fills in the blanks: it asynchronously pulls in product images, account information, loved products, bag items, and so on, whatever extra information it needs to fully render the page. Our components are designed in such a way that if a component's initial state doesn't contain the values it needs to render itself, it makes a REST request. So if we want to pre-render a component on the server, all we need to do is define a prefetch function on that component; that function gets called on the server, and the initial state gets filled in. When the component is then rendered on the front end, it won't make the request it would otherwise need to get its data, because it already has it. But if you leave that initial state blank, which is to say you leave out the prefetch function, the component will make the required request on the front end and fill in its own data. Okay, stage three: everything else, which includes third-party scripts, analytics, and whatever else you might want to add on to the web page. Once again, why do we delay loading all these things? Well, progressive enhancement is a loaded term; it means a lot of things. It generally refers to adding an optional, extra feature to a web page after it has loaded.
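The static-prefetch convention described above can be sketched as two small functions. All names here (`prefetch`, `stateKey`, `resolvePrefetches`, `needsClientFetch`) are hypothetical; the real abof implementation wires this into React components and Redux reducers, but the contract is the same: server-side, run every declared prefetch and merge the results into the initial state; client-side, skip the fetch when your slice of state is already there.

```javascript
// Sketch of the static-prefetch convention (hypothetical names).
// Server side: run each component's static prefetch() and merge the results
// into the initial state that gets serialized into the page.
async function resolvePrefetches(components, path) {
  const initialState = {};
  for (const component of components) {
    if (typeof component.prefetch === 'function') {
      // prefetch is just a REST call; nothing browser-specific in here.
      Object.assign(initialState, await component.prefetch(path));
    }
  }
  return initialState;
}

// Client side: a component only fetches if its slice of state is missing.
function needsClientFetch(component, state) {
  return !(component.stateKey in state);
}
```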
But in our case, we considered things like analytics to be optional features as well. Analytics are really important to your business, and it's really important to measure your users, but honestly, to a user, and especially to a user who's already on a limited device and a limited connection, analytics is just a roadblock: it slows down your loading process. So you need to have some respect for your users and for the kind of devices and networks they're on, and progressively load whatever third-party analytics you need to load. In our case, only after everything is done loading, after React has loaded and all our components have fetched their data, after the entire website is functional in all respects, do we go ahead and make a request to load the Google Analytics script that starts collecting data on the page. And only at that point do we start loading the third-party service we use, MeTail. But analytics are important, and I want to take a couple of minutes to talk about them. When we built abof and shipped an initial version of the code, we realized we were losing a lot of analytics events, and that was because we were deferring our analytics script. We had completely forgotten that a ton of analytics events happen on the front end while the page is loading, like your DOMContentLoaded event, for example, and all these events have triggers attached to them inside Google Analytics that let the business team measure things. So what was happening was that we were losing a ton of analytics events. The way we solved this was: while the website was loading, if our analytics script wasn't present yet and analytics events were happening, we simply queued them up.
Now, this required some extra work with the business teams to figure out what kind of events they were interested in, but it was totally worth it, because we sped up the loading time by quite a lot. To do this, we built a Redux middleware for queuing all our analytics events. I'm going to run through this very quickly because we're running low on time. This is a Redux middleware that checks whether an analytics script has loaded, and if there's no analytics script on the page when a particular Redux action happens, it just puts that action in a queue. If you look at line 16 on the slide, you'll see there's a captureAnalyticsEvent function that's called for every single event happening on the page. Over time, as your page builds up, you end up with 20 or 30 analytics events inside an array, and, if you look at line nine up there, as soon as your Google Analytics script loads, all these events get pushed into Google Analytics. That's how we made sure all the analytics were being captured while the analytics script was still being deferred and loaded as late as possible. All right. So that's how a page on abof.com loads, and this three-step process is why everything is so fast. It's a general strategy we've applied to all our other projects: break up the loading of a page into primary data, which is the data you want to show to the user immediately; secondary data, which might be useful but which the user will interact with later on; and third-party scripts, add-ons, and progressive enhancement. Whenever we build something, we make sure the primary data is visible to the user as quickly as possible. And that's why abof.com is so fast.
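The queuing middleware they describe might look something like this. The names (`createAnalyticsMiddleware`, `sendToAnalytics`) are hypothetical, not abof's actual code; a Redux middleware is just a curried function `store => next => action`, so the buffering logic needs nothing from Redux itself.

```javascript
// Sketch of an analytics-queuing Redux middleware (hypothetical names).
// Actions are buffered until the analytics script is available, then
// flushed in order.
function createAnalyticsMiddleware(isAnalyticsLoaded, sendToAnalytics) {
  const queue = [];

  function flush() {
    while (queue.length > 0) sendToAnalytics(queue.shift());
  }

  const middleware = () => (next) => (action) => {
    if (isAnalyticsLoaded()) {
      flush();                  // drain anything buffered earlier
      sendToAnalytics(action);
    } else {
      queue.push(action);       // script not here yet: remember the event
    }
    return next(action);        // never block the rest of the app
  };
  middleware.flush = flush;     // can also be called from the script's onload
  return middleware;
}
```

In a real app, `sendToAnalytics` would translate the Redux action into whatever call the analytics library expects, such as a `ga('send', …)` invocation.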
There's something I haven't covered yet in this talk, and it's a talk of its own: app performance. We know how to load an app really fast and how to get the time to first paint as low as possible, but we also need to keep the app fast, especially for abof.com, which is an infinitely scrolling grid of products. As soon as you reach page 10 and you have 120 products loaded on your screen, things start to get slow, and that was something we spent a lot of time optimizing, especially on slow devices. That is something I would really love to talk to you about, so if you want to talk about app performance, you can meet me outside. As for testing load times, we've already had talks about this, but I'll just tell you the tools we used: WebPageTest, PageSpeed Insights, and simply loading the page on different mobile phones, on different networks and Wi-Fi connections, to see how it behaves. And if you've been to 12th Main in Indiranagar on a Friday night, you know how bad networks get. So you should probably go there with your phone, get a beer, open whatever website you're testing, and try it out, because that's the best way to test performance. For testing app performance: test on real devices, that's the best thing to do. Questions? Yeah. Hi. Thanks for the great talk. I'm a little concerned, because are you saying that time to first paint is the only thing you considered when you say "fastest e-commerce website"? It seems like your time to first interaction is actually fairly slow, and you don't even get the images in until you've loaded all your JavaScript. So yeah, that's a great question. For this talk, I concentrated mainly on time to first paint. The time to first interaction would be something around a second after time to first paint, because the JavaScript payload is so small.
It's about that much. On a 3G network that loads reasonably fast, and at that point, because you have the first 12 products loaded and a lot of data already on the page, time to first interactivity is pretty low. The React code mainly pulls in the images and populates the menu. So that's a trade-off we made, essentially. Yeah. Did you face a problem of memory leaks? We did, yes. So how did you go about it? Let me tell you what we did, and then how one could do it better. What we did was make sure that, while building the React code, we were not re-rendering when we didn't need to. There's a method called shouldComponentUpdate which tells React whether a particular component should re-render or not, and for all our heavier components, especially the infinite scrolling grid, we returned false from that method unless we actually wanted a re-render. Another thing we did was keep the number of DOM nodes as low as possible. But if you are building an infinite scrolling grid, and this is something I would have done differently, you probably want to recycle your DOM elements. That is to say, instead of creating new elements for every single product you load, have maybe 20 DOM elements to represent all your products, and as the user scrolls through the page, reuse those DOM elements, so the number of DOM elements never grows. Besides the DOM, we didn't have any JavaScript memory leaks, because the JavaScript portions of the website are not very complex. Yeah. Okay, the question is: we're using something called Visual Website Optimizer, why are we using that? I think that's an SEO tool, so that's something we did not add.
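The recycling idea described above mostly comes down to a bit of index arithmetic. This is a hypothetical illustration, not abof's code: a fixed pool of `poolSize` DOM slots is reassigned to whichever products are in view, so the DOM never grows as the user scrolls. Constant row height is assumed for brevity.

```javascript
// Map the current scroll position to (slot, product) assignments for a fixed
// pool of reusable DOM elements. The same slot is reused every `poolSize`
// products, so the element count stays bounded no matter how far you scroll.
function visibleAssignments(scrollTop, rowHeight, poolSize, totalProducts) {
  const firstVisible = Math.floor(scrollTop / rowHeight);
  const assignments = [];
  for (let i = 0; i < poolSize; i++) {
    const productIndex = firstVisible + i;
    if (productIndex >= totalProducts) break;
    assignments.push({ slot: productIndex % poolSize, productIndex });
  }
  return assignments;
}
```

On each scroll event you would diff the new assignments against the previous ones and rewrite only the slots whose product changed; a production implementation would also handle variable row heights and some overscan above and below the viewport.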
I think that's a business tool the folks at abof use to figure out the hotspots on the page where people are interacting, things like that. I think it's a company based out of Delhi, right? Yeah. Can you hear me? Yeah, I can hear you. My question is on the queuing of the Google Analytics events. You said you built your own middleware to queue the events, but Google Analytics itself lets you queue events. So why did you choose to do it on your own? That's a really good question. A lot of these events that normally happen on the front end actually happened, in the case of abof, on the backend, because we were rendering the first 12 products on the server side, so there were a couple of events that needed to be handled on the server. Now, this is something that could be fixed with a different analytics strategy, but the business team at abof at that time could not overhaul their analytics strategy, so we had to be backward compatible with everything the old website had. Essentially, one of their page load events became, for us, time to page render on the backend. So all those events had to be queued up on the backend, then sent down the wire in the initial state to the front end, and then we put them in the Google Analytics queue. This was kind of a hacky solution to a problem that could have been avoided, but we had to be backwards compatible, essentially. Thanks. Yeah, so you have the static HTML rendered from the server first, and then React takes over and replaces that. Exactly. Which, if the state is the same, is going to be exactly the same. But even in my boilerplate React application, I often get warnings saying that the server render is different from the client render, and that huge bunch of red text saying React is rendering it again.
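That backward-compatibility hack can be sketched as two halves, one on each side of the wire. The names here (`analyticsQueue`, `embedServerEvents`, `flushServerEvents`) are hypothetical, since the actual abof code isn't shown: events raised during the server render ride along inside the serialized initial state, and the client drains them into GA once the script is available.

```javascript
// Server side: events raised during the server render are collected and
// embedded in the initial state that gets serialized into the HTML response.
function embedServerEvents(initialState, serverEvents) {
  return { ...initialState, analyticsQueue: serverEvents.slice() };
}

// Client side: once the analytics script is available, drain the embedded
// queue into it, then clear it so events are never reported twice.
function flushServerEvents(state, sendToGa) {
  (state.analyticsQueue || []).forEach(sendToGa);
  return { ...state, analyticsQueue: [] };
}
```
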
So have you experienced that, and how do you make sure it doesn't happen? I have experienced that, but only during development. If you actually run it in production, this sort of thing usually doesn't happen. I think the reason is that it happens when you change the structure of the markup returned from one of your components very drastically. For example, if one of your components returns a span, and suddenly that span is replaced by another component that creates a completely different tree, I think that's when this particular behavior shows up. But that happens when you are rapidly changing code; it doesn't happen if your code isn't changing. So for the most part, you don't have any visual difference between what's rendered in the initial static HTML and what comes after React takes over, right? Right. It's perfectly the same. Yeah. Okay, thanks. Yeah.