Good morning, all of you, in nice and early today — traffic as usual; even if you leave home at 7 am, you're still going to hit traffic in Bangalore. All right, I hope all of you have had your breakfast and you're nice and awake. So, let's start. My name is Abhinav Rastogi. I'm a lead engineer at Flipkart. I've been working on the Flipkart Lite mobile site for the past year, and we have been working on the new desktop site. I'm going to be talking about that: a case study of how we rebuilt www.flipkart.com, the desktop site, and what the next generation of the web looks like — where we are going, the technologies available to us, and what we can do with them.

So, we recently went through, let me say, a face lift — a switch from what you see on the left to what you see on the right. As you might have seen, the previous design was rolled out almost three years ago. There's a lot of stuff that goes on behind the scenes: what you see in the screenshot is the design overhaul that has happened, but internally, the tech stack has been completely rewritten. So this is what the tech stack looks like right now. It's mostly the most common frontend stack these days: we are using React, React Router, Redux, and Webpack for bundling it all together; the server side is Express; Mocha and Karma for our test cases; PM2 to run our applications; Handlebars for templating; and fetch and Promises as general browser technologies. All of this is pretty standard and well talked about now, and there will be sessions throughout today and tomorrow going into how these technologies work, what the alternatives are, and how you're not really restricted to these.

So, where it all began was essentially Flipkart Lite.
That is where we experimented for a long time and figured out how to put these technologies to their best use — things like service workers, the concept of application shells, and progressive web apps in general as they are defined. I'm talking about capabilities like being offline-first or network-resilient, instant feedback, being always available, things like that. Flipkart Lite is where it all started, and we had tons of learnings from working on it. We had a lot of great conversations with folks at a lot of good companies, and we spent a lot of time on research. What you see here is the new face of Flipkart Lite — essentially all the features you see on the mobile app, loading in an instant in your browser. Again, this is being rolled out as we speak, and all of you will have access to it very soon.

Now, the interesting part here is that most companies, most websites, have a desktop version first and then build a mobile version from that — a lite version, a mobile version, or a responsive version. It usually works like that. I haven't heard of a lot of cases where you develop the mobile site first and then migrate to a desktop. How many people have heard of that kind of situation? Two... like five, six, ten people. Not bad. All right. So what happened here is that when we started working on it, we learned there are some significant differences between how mobile works and how desktop works. And it's not just the tech stack — the requirements are different, the user behavior is different, the form factor is different. That's the design part of it. Then there's how a user behaves, what time users log on: our metrics show that people use desktops more during the day and mobiles more during the evening or the night. People don't usually use desktops when they're in bed. These kinds of behavioral differences come up.
Device capabilities are obviously different. Mobile phones are getting stronger, more powerful every day, but desktops are still more powerful, especially when it comes to multi-threading and raw processing power. Browser fragmentation and distribution are very different: on mobile, Chrome is not the majority — you still have UC Browser, which doesn't handle a lot of modern JavaScript well, and it more or less dominates the market here in India, so you really have to solve for that. Network conditions are obviously one of the biggest factors. For desktops — not a perfectly safe assumption, but you can be on the safer side — people are mostly on decent connections, and they're either offline or online; it's usually not that flaky unless you're on a shared Wi-Fi in a public place. On mobile, however, it's vastly different: your network connection can drop at any time.

Right, so this is what a typical single-page app looks like — keeping all those differences in mind, this is what you can develop. I won't go into all the details. How a typical SPA works is that you serve an empty HTML from the server, the client downloads the JS bundle referenced in that HTML, you show loaders, you make API calls to fetch the data, and you render it. On subsequent navigations on the client — you click on a link — you already have the JS and the templates, so you immediately show the next screen's loader, and so on and so forth: you make API calls and render. That's how SPAs have been working, and there will be other talks that go into detail on how this works. I'm here to talk about what happened with Flipkart in general: we already had Flipkart Lite in place, we had the knowledge of how this works, and we were wondering what else we could do to make it better. So yeah, a typical SPA has a lot of pros.
It's pretty easy to implement now — the technologies I mentioned, React, Angular, all of these, make it pretty straightforward. Navigations are fast: pretty much instant once you load the first page. Server processing is very cheap: all you have to do is serve one static HTML file, and one Node process can serve thousands of requests per second. You don't even get too much traffic, because only the first request of a session comes to your server; subsequent requests directly contact your API. So essentially it's really good — a good model in general.

This is what it generally looks like on a timeline: you have an HTML which is pretty small, but once the HTML is downloaded and parsed, the CSS download starts, your JS download starts, your JS parses — that's when your client-side code kicks in — then you make API calls, you get the data, and you get your first render. Now this obviously doesn't look too good: the first render is all the way over on this side of the screen. So the cons are: obviously you have an empty page for a really long time — you have rendered nothing until that point. An interesting issue that comes up specifically with single-page apps is that your SEO gets jeopardized: the first HTML you serve is pretty much empty, so unless a crawler can execute JavaScript and wait for you to make your API calls and render, it won't be able to crawl your content. And obviously the JS bundle is huge, and it takes a long time. A good metric to keep in mind here: it generally takes around one millisecond to parse one KB of JavaScript. And remember, you're probably serving gzipped content — your bundle might look like 50 KB, but ungzipped it's 500 KB, and the time it takes to parse is 500 milliseconds. So apart from the download, it will take half a second just to parse the JavaScript before it even executes.
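To make that starting point concrete, the empty initial document a typical SPA serves looks roughly like this (file names are illustrative, not Flipkart's actual assets):

```html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="/app.css">
</head>
<body>
  <!-- Nothing to paint: this div stays empty until app.js downloads,
       parses, makes its API calls, and renders into it -->
  <div id="root"></div>
  <script src="/app.js"></script>
</body>
</html>
```

Everything the user sees is gated on that one script tag, which is why the first render sits so far to the right on the timeline.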
Keep that in mind. So what do you do to improve on that? You come up with new methods. What we did with Flipkart Lite was come up with the concept of app shells, which essentially means the initial HTML you serve from the server is not blank: it contains the loading state of your page. Obviously this needs to be context-sensitive — if you request the home page, the shell needs to look like a home page; for a listing page or a product page, it needs to look like those. How we solved that was build-time rendering. I've talked about it earlier: since you know at build time that there's no dynamic content in the shell, you can essentially compile it to HTML at build time, and it just becomes static HTML that you serve at runtime. The biggest win here: no more empty page. So what happens now is that your first paint happens the moment your HTML and CSS are ready. The HTML size is not huge — it's pretty small, it's just your loaders — and rendering is blocked only on the CSS download, which is again sequential after the HTML. So your first paint comes here, which is much better than all the way over there. That's the first upgrade.

Now the obvious problem is that this is still not meaningful. It's better than an empty page, but it's still just the loading state; the user can't really do anything with it. If you're on a slow connection, you're staring at a loader for like five seconds, which is pretty bad. So to solve that, one approach we tried was: why not try server rendering? Even if you're using a modern client-side stack, what's preventing you from running it on Node? You can render your full page on the server for the first load, everything works pretty much the same, and the client re-renders into the page when it has the data. So your first paint becomes meaningful and your SEO gets solved.
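As a rough sketch of that server-rendering flow (all names here are illustrative, not Flipkart's actual code):

```javascript
// Server-rendering sketch: render the full page markup on the server so the
// first paint is meaningful, and embed the fetched data so the client can
// take over without repeating the API call.
function renderFullPage(data, renderApp) {
  var appHtml = renderApp(data); // the expensive part: full markup on the server
  return (
    '<!DOCTYPE html><html><body>' +
    '<div id="root">' + appHtml + '</div>' +
    // hand the same data to the client (real code must escape "</script>"
    // sequences inside the JSON before inlining it)
    '<script>window.__INITIAL_DATA__ = ' + JSON.stringify(data) + '</script>' +
    '<script src="/app.js"></script>' +
    '</body></html>'
  );
}
```

In practice `renderApp` would be something like React's renderToString; the shape of the output is the point here.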
That's the biggest benefit: the HTML generated from the server contains all the content that you want the crawler to see. And this is what it looks like. Now observe: you add waiting time before your HTML, because your server needs to make API calls and render the data. You have to keep the costs in mind — of rendering or templating, whatever you're doing, of making the API calls and connections. Then the HTML download and parsing happen, then the CSS downloads, and your first paint moves here. You can see your first paint has again moved a lot to the right — and these diagrams are not to scale; I purposely avoided adding timelines here because I'll get to that. And this is where the page becomes interactive, which is again a long way out.

So the cons: you get a significant increase in server load — that's the biggest one. Rendering a page on the server using client-side technologies means you need something like ReactDOM's renderToString, or you need PhantomJS or Rhino or something that mocks out the DOM on the server. It gets pretty costly, so there's a significant increase in server load. Your HTML download size also increases enormously: the loading state was what, like 10 KB of HTML that fits in one chunk, so the moment you get a response you're done — but this HTML can be 800 KB, one MB of pure HTML depending on the page, and it keeps downloading forever. Your response times increase because your server is processing a lot, and your JS file is still huge, so the page doesn't become interactive until all of this is ready. That's too many cons, and it's not approachable.

So the next upgrade, the next thing we tried, was the concept of universal apps: you render only SEO-critical content on the server, the client continues to work the way it was, and most modern libraries can reconcile the changes.
By SEO-critical content I mean things like the title of the page and the meta tags — at least you get some content out in the page. What happens here is that you render just the header, footer, and meta tags in your server response and load everything else on the client. The pros, obviously: your server load becomes much smaller — you're not making all the API calls, you're not rendering that much HTML on the server. The SEO is not as good as full server rendering, but it's still better than doing nothing at all on the server. You still load quickly, you can still show loaders in the first paint, and you get a big boost in performance. The cons, obviously, are that your first paint is still slow, and you're still trying to run client-side code on your server. For example, at Flipkart we're heavily into service-oriented architecture, so even SEO for us is a service: we give it the URL of a page and it tells us the meta tags, the page type, and things like that, plus a huge piece of SEO text that goes into the footer. So that cost is still there. You still don't have any organic content in the first paint — you still have loaders — and the JS file is still huge, so you don't get much benefit there.

So the next upgrade: you're doing all this anyway, so for minimal extra cost you might as well render the first fold. We're getting closer to full server rendering here, but bear with me — you render just the first fold of the page. What that looks like is essentially this: instead of showing loaders, you show some content on the screen. You know what's going to be in the first fold for like 90% of your audience; you can gather metrics on what viewport sizes your users have.

Now, an interesting pitfall we ran into: the moment you render this kind of HTML — typically your HTML is your body tag, a div which contains all the server-rendered markup, and then your script tags — if that markup has images in it, img tags with src attributes, those images start downloading immediately. They won't necessarily block your other resources, but they are going to contend for the network, and if those images are large — for example, these banners here are pretty huge — you're going to delay loading your JavaScript even further. So an interesting thing we did here was the concept of progressive images. It has been popular for some time — Medium, specifically, has been doing this really well for a long time. You load much smaller, lower-resolution images first, and then you replace them using JavaScript once the higher-resolution images are downloaded — you start that download later, of course. The browser scales the small image up, so it just looks blurred, and in most browsers you barely notice the swap. The benefit you get is huge: this image here could very well be 300 kilobytes at full resolution, while the small one is hardly 300 bytes.

Along with this, the client side is still pretty fast; it's the server that's making the API calls now, and there's a benefit to that. Making API calls on the server is generally better, because you're making them server-to-server, ideally within your own data center: you get rid of all the network latencies and connection hiccups, and the server can have a fixed hosts entry for your API server so you don't even need a DNS resolution. You get organic content in the first paint. And the best part: you've already made the API calls to get all this data on the server, so you might as well send it to the client — why does the client need to make those API calls again? You can just append the data at the end of your HTML, after the script tag or something, and your script can read from that JSON itself. The client doesn't need to open a new connection, and the API server doesn't need to process the request a second time.
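The progressive-image swap described above can be sketched like this (the `loadImage` parameter is my abstraction over `new Image()` + `onload`, so the logic isn't tied to the DOM):

```javascript
// Progressive images: the page ships with a tiny low-resolution placeholder
// (a few hundred bytes, scaled up and blurry), and JavaScript swaps in the
// full-resolution file only after it has finished downloading.
function upgradeImage(img, fullSrc, loadImage) {
  loadImage(fullSrc, function () {
    // the big file is now in the browser cache, so this swap paints instantly
    img.src = fullSrc;
  });
}

// In a browser, loadImage would be something like:
//   function loadImage(src, done) {
//     var probe = new Image();
//     probe.onload = done;
//     probe.src = src;
//   }
```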
So the one big problem at this point was still that your JavaScript is huge and everything gets blocked on it. The interesting part we worked on here was splitting the JS bundle into different chunks based on the page you're on. Once you think about it, it's really a no-brainer — that's how the jQuery days used to work: every page had its own JavaScript for whatever was on that page. You don't load the checkout JavaScript on the homepage. So you only load the JavaScript required for the current page, and subsequent JavaScript resources can be loaded when needed. It's pretty obvious once you think about it.

This is what it looks like for us. The old one was a single app bundle, which was like 360 KB — huge. Once you split it based on routes — homepage, product list page, offers — these are the sizes: you need only 32 kilobytes of JavaScript to render the homepage. This looks pretty fancy, and believe me, it's pretty easy. How do you break up the JS bundle? You have to define split points in your application. Your code will have like 200 components bound together through requires, and these tools actually make it easy to split the code: all you have to do is replace your import statements with require.ensure. That's pretty much it. That's really it.

import is statically analyzed. So when you import a file at the top, where the component imports occur, it becomes a hard dependency of that component — the component cannot render without it. For example, your parent component, which renders your entire page, will obviously be importing all the pages, because based on the route it has to be able to render any of them. If you import all of them, all of them become hard dependencies of just your shell, your layout, so essentially everything gets bundled into your initial entry chunk, which is app.js. Instead, if you use require.ensure, it's essentially an asynchronous way of requiring files — it's literally what the name means: it just ensures the file is there when it's required. Webpack will automatically download that file when it's needed, and React Router has hooks that handle this really well. I have a code sample.

This is what I was talking about — your typical routes file looks like this: you import your page at the top, and when you define your routes you say, if you're on /about, render this page as the component (or if you're using the JSX format, you pass component={AboutPage}). Pretty straightforward, like one or two lines of code. On this route, all you have to do is this: instead of just passing component, you implement getComponent with a require.ensure of that page, and you call back with that component. When Webpack sees this require.ensure, it will not bundle that file into the entry chunk; when the browser executes this route, that's when it downloads the file. And if this looks too ugly, there's a loader for this in Webpack — there's an app for that, there's a loader for that — it's called bundle-loader. You do a lazy require of the page, and the syntax ends up looking pretty much exactly the same as before: you pass the component and it becomes that lazily-required component.

This is what you get in the timeline after that: you wait for the API calls and everything to happen on the server, the HTML downloads, the CSS downloads, and once the first chunk is ready, you get a full render at this point, with content. If you look carefully, this chunk is much smaller — pretty much the same size as your CSS — and you get your first render here. This is not to scale, by the way: earlier, the JavaScript bar went all the way out here, and so did your first render.
So that's the benefit you get here, and all your JS downloads in parallel. Why it downloads like that is because this is what your typical HTML now looks like: you have a stylesheet at the top, you have your server-rendered HTML here, and you have a bunch of JavaScript files at the bottom — earlier it was just vendor and app.js. By the way, I hope everyone is splitting their code into vendor and app bundles for long-term caching; you really need to do this. The vendor bundle is served once and cached, so you don't have to download React, the renderer, again and again, and your application code can be cached separately.

There are some interesting gotchas with this approach. You need hashing for caching: like I said, if you're serving your vendor and app JS and you do a code deployment tomorrow, you need to be able to tell users' browsers to download the new file. For that you need hashing, so you add hashes to your file names — you take the content hash of the file and append it to the file name. If the file changes, the hash changes, the file URL changes, the browser requests it afresh — and so you can essentially cache all of those files indefinitely.

Now — and I'm talking specifically about Webpack here, because that's what we use — when you split your JS bundle into smaller chunks, Webpack needs to know that if I hit /about, I need to load the about page's JS: which bundle each module is in, and what the hashed file name is. So Webpack adds a lookup table to your entry chunk, because the entry chunk is guaranteed to be there — it's the first chunk loaded for any page, it's typically where all your common modules are, and this lookup table lives there. Now, if you make one change on the product page, its hash changes, which means your lookup table changes, which means your entry chunk changes — the chunk that holds all your common modules, which have not actually changed. Following me so far? So what happens is that your entry chunk changes and you essentially end up invalidating all your caches; you lose long-term caching.

You need to take care of this. Webpack lets you add a manifest plugin here, which pulls the lookup table out into a separate file — a small file, 100 or 200 bytes — and only that file changes when any chunk changes. So we actually turned this gotcha into a big pro: now all your bundles and chunks are separate, and if you change a piece of code on, say, the product page or the checkout page, only those JS files need to be re-downloaded by the client. If the client doesn't go to the order summary page, they don't download any new JS for it; unless the JS has changed, everything renders pretty much from cache. It's getting fast.

But this is still not fast enough, right? There's a lot more you can do. So PRPL is what we tried next. That's a pattern that has been popularized by the Polymer folks — you'll see it in all the Polymer apps — but it's a pattern in general, so you can apply the knowledge anywhere. PRPL is an acronym: you Push critical resources for the initial route, you Render the initial route, you Pre-cache the remaining routes, and you Lazy-load the remaining routes on demand. So initially, when you load the home page, you download only the home page; then, while the user is on the home page, you can push more content. If you're using HTTP/2 and you support push, well and good; if you're not using HTTP/2, I'll get to that. You render the initial route, and then you lazy-load other pages as needed.

So this is what preloading looks like: all your script tags stay at the end, as before, and you add link rel=preload tags. Now, how many people are aware of link
rel=preload? All right, impressive. I'm sure there are other talks which will go deeper into this, so I'll do a quick recap. link rel=preload essentially tells the browser to download these files in parallel with the HTML, without blocking, but not to evaluate or execute them. So these files download along with the CSS, in whatever order, but they won't execute until their actual script tags appear, and then they execute in that order — you can download in one order and execute in another. It's really cool.

Along with that: if you don't have HTTP/2 — which is the case with us; it's an infrastructure challenge, and we're still on HTTP/1.1 as of now — there's a way to mimic push. What push gives you is that your resource downloads aren't blocked on your HTML downloading, and you can get that with HTTP/1.1 too: all you need to do is send your HTML response with chunked transfer encoding. Since some parts of your HTML are independent of the page content — your header and footer, your CSS and JavaScript bundle URLs are the same no matter which page you're on — push those out as soon as you get a request. You don't need to first figure out which APIs and which data you need to render the page; you send the static part first so the resource downloads can start, and then you make smart use of preload, prefetch, and defer. Preload is what I just showed you; prefetch and defer are other options that behave differently, and again, there will be other talks detailing those.

This is how we do streaming using Express. Right in your route handler, you set the Content-Type explicitly — if you use res.render it's set automatically, but with res.write you need to do it yourself. Then you write your HTML head, everything up to the opening body tag, which has all your link preloads and your style tags, and you flush it to the user right there. The HTML stream opens, and since this part is all static, you've responded with the head of your HTML milliseconds after getting the request. Then you do your whole server rendering — this could be templating, renderToString, whatever — and you write that, then you write your script tags and you close the response. That's what very basic streaming looks like.

And this is what it looks like for the client on the timeline. The HTML is the same blue chunk that was there before. The HTML is still taking, let's say, 500 milliseconds to download, but now your CSS can start downloading at like 50 milliseconds after the request. How cool is that? And none of this blocks the HTML: if you do a preload, nothing blocks the HTML from rendering or parsing; the HTML continues to work the way it did, but now you're downloading the resources embedded in it in parallel. The HTML is downloading and parsing over this whole stretch, and if you look carefully at the timeline, you'll see everything happening in parallel. The first paint happens here, as soon as some part of the HTML is ready, and your full-page render happens once the HTML is complete — that's when you get the last script tag and it has executed. The JavaScript executes here, once the script tag is seen, and you can lazy-load the chunks you don't need on that page later, so it doesn't block interaction with the page.

So the pros, obviously: your CSS and JavaScript can start downloading immediately, it's all happening in parallel, and you make use of that otherwise idle network time; your page is ready to render as soon as the HTML is ready. It's pretty fast. The con, obviously, is that your server now needs to keep a connection open with the client. And there's a reverse-proxy caveat here that I only figured out after implementing this.
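That basic streaming handler, sketched (assuming Express; `renderBody` stands in for the real API calls and templating, and the markup strings are illustrative):

```javascript
// Chunked streaming sketch: flush the static head immediately, then write
// the rendered body when it's ready. `head` ends at the opening <body> tag
// and carries the style/link-preload tags; `scripts` closes the document.
function streamPage(res, head, renderBody, scripts) {
  res.set('Content-Type', 'text/html'); // res.write() won't set this for you
  res.write(head);                      // goes out milliseconds after the request
  renderBody(function (bodyHtml) {      // the slow part: API calls + rendering
    res.write(bodyHtml);
    res.write(scripts);                 // <script> tags + </body></html>
    res.end();
  });
}
```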
If your Express server is fronted by nginx as a reverse proxy, nginx will buffer that stream up: once Express closes the stream, nginx sends the entire HTML to the client in one go. Why? You lose all the streaming benefits. nginx does this for a reason, though: buffering makes your Express server immune to client latencies. If a client is really slow — say on a 50 KBps connection from the old ages — they're going to take forever to download the HTML. Like I said, this can be a 500 KB HTML, and at 50 KBps that's easily going to take 10 seconds. Without nginx buffering, your Express server has to keep the connection open with the slow client for all that time instead of handing the whole response off to nginx and moving on — and Express, running on single-threaded Node, is now tied up with that slow client. So for those 10 seconds, your Express server is held back from serving other requests. That's both the benefit of, and the issue with, buffering in nginx. It's very easy to turn off — one statement in your nginx config: `proxy_buffering off;`. Another con to note here is that the client can only download a limited number of resources at a time, so don't make everything parallel.

So this is what it looks like in real life. This is a screenshot from the production Flipkart site, and this is what I was talking about earlier: the blue part is the HTML, and everything else happens after the HTML is complete. This is your first paint, at around 2.5 seconds, and this is your complete render — DOM complete — at 3.5 seconds. If you stream it, what happens is: these blue chunks you see here are the HTML downloading, and the light blue is parsing — this is from Chrome DevTools on Chrome Canary, with network requests enabled in the timeline, so you can see how the network requests play out. Now your CSS is downloading and your JS is downloading in parallel, the JS is split into chunks, and your first paint is at 1,000 milliseconds. Your first paint with meaningful content is at one second, which is the
benchmark we had set for ourselves. And your DOM complete also happens faster, because now everything is happening in parallel: essentially, you move this whole chunk to the left with this one small trick.

An interesting thing here: the browser reports the first paint event here — but then what is this earlier activity? How did these images start downloading, without preload, if the HTML wasn't there yet? What we noticed — you can also capture screenshots in this view — is that visually, your first paint actually happens here, but the browser tracks it here. What this led us to believe is that the browser waits for certain conditions to be met before it considers the page painted, but what matters to the user is simply: visually, is it there or not? So if you're working at this cutting edge of optimization, using all these techniques — PRPL and the rest — you can't really depend on things like DOMContentLoaded or the window onload event to track your performance. You have to start tracking custom metrics, and you can use things like requestAnimationFrame and the User Timing API for that. For example, there's performance.mark as part of the standard now: window.performance.mark marks a point on the timeline. The beautiful part is that this is a standardized API, which means tools like WebPageTest understand these points just by virtue of you calling it — WebPageTest.org, SpeedCurve, Dareboost, all these tools that do synthetic testing. Just by putting this one line in your code, before your HTML, that point shows up on their timelines; you don't have to do anything else at all. And you get custom metrics: you can use performance.measure to track how much time was spent in API calls, how much in rendering. These are the things you should track — all of these custom metrics.
That's really important. So, the key takeaways: you have to design solutions differently for mobile and desktop. You have to treat performance as a feature — that's what we do — not just a side effect or byproduct of what you're building. Biggest thing: you can't optimize what you can't measure, so you have to measure, again and again, every single thing. You can split JS bundles into smaller chunks, and you can preload those chunks smartly, but don't block your main thread on it. And real-user monitoring is important; synthetic testing doesn't always work as you expect. So that's all — thank you. We have time for Q&A. That's my Twitter handle, and the slides are available at this URL.

Q: Just something real quick — in all the measurements and graphs, did you throttle them to a certain network?

A: Yes — the graph that I showed here, the real one, is actually throttled at 3G.

Q: In all these measurements and optimizations, how does latency figure in? Are you only relying on the speed of the network?

A: It's definitely considered. Like I said, by virtue of making API calls on the server, you get rid of a lot of the latencies you were going to run into. Another bunch of latencies is just setting up connections with your CDN or your server, and for that there are other tags: you can do DNS prefetching, or a preconnect. I talked about link rel=preload; there's also rel=preconnect, which will set up the connection for that resource but download it only when it's actually requested. We're doing all of that, and it removes most of the latency from these requests. But yes, they're considered — and when you throttle in Chrome, it also adds latency to the connection, apart from lowering the speed.

Q: When you started working on Flipkart Lite, what drove the choice of React over other JS frameworks?

A: Sorry, can you repeat that?

Q: Choosing React over other JS frameworks — what were the major goals you
What were the major goals you wanted to achieve?

A: Alright, here is how we approached choosing React. We took this decision really long back, almost one and a half to two years ago, before we started work on Flipkart Lite. I'll tell you the procedure we went through. We started building our own rendering library: we built an entire component system, a dependency system, all of that. That is when we realized that React was what suited our needs best; it was what our library was starting to look like, and React, being maintained by Facebook, has a ton of features that we could not have built or maintained ourselves with a small team. That is the approach we usually take: we figure out what we need to build, because only when you try to build something yourself can you tell what your exact requirements are. Things like Angular have factories and services and so on, which are really good for some use cases, but not for ours. I'm not saying build everything in-house; use ready-made tools where applicable. But you need to understand your requirements very clearly. Specifically for us, comparing React with Angular: React is more of just a rendering engine, a view layer, whereas Angular comes with a lot more; it is an entire MVVM architecture. We wanted more flexibility: we didn't need two-way binding, we needed one-way binding, and a lot of such things. Combined with Redux, we could build a very customized architecture for how we make API calls, how we fetch data, and how we cache it. That is how we took the decision.

Q: [inaudible]
A: So the only request going to the Node server here is this HTML; all of the other requests go to the CDN, and they are all different URLs. CDNs are meant to take this kind of load: they are built with a lot of caching and a lot of resilience to many requests from the same client. We also keep connections open: if you make one request, the connection remains open for a few seconds, so it doesn't have to be set up again; you can optimize that. But yes, definitely: if you split your resources into these multiple chunks, then instead of making five requests you are now making ten, and of course your server will suffer if it gets too many requests, especially if you are using Express and Node, which, like I said, is single-threaded. CDNs, on the other hand, are built to take millions of requests per second, so they can handle this load quite easily.

Q: To improve the performance of this single-page application, have you considered a multi-app architecture?
A: I'm not aware of multi-app architecture; let's discuss it outside, you can explain what it is and we can talk more about it. Any more questions?

Q: Great insights. You mentioned that for server-side rendering you are using ReactDOMServer's renderToString, which is a synchronous call. How do you scale the number of requests per second?
A: Good question. Yes, server rendering with ReactDOM has a significant cost. The reason I said full-page rendering on the server is costly is that, experimentally (these are not published numbers), we have seen render time increase exponentially as you increase the DOM size. We found a sweet spot where rendering does not take a whole second but finishes within a few milliseconds, while still producing some content. You cannot work around the blocking model: renderToString will block the thread. You could spawn a worker and hand the rendering off to it.
You then get the result back, but there is still a significant overhead in just the cross-thread communication. So yes, that is a cost we live with, and that is where you actually scale horizontally: add more machines if you cannot render that many strings on one.

Q: We have built a small library that does caching on top of React rendering: basically, if your component is a pure render component, then based on its props you can just cache its markup.
A: Yeah, definitely. There are a bunch of libraries, for React and for Webpack, that let you cache pure components quite easily. An interesting insight we learned there: in React you can define stateless functional components as pure functions, but those are actually not cached by React. What is actually better is to define a class, with createClass or extends Component, and implement shouldComponentUpdate with a shallow compare that returns false when the props have not changed; that is actually much faster. React does not memoize pure functional components. That is something we looked into, and it gives us a lot of benefit.

Okay, we have lots of questions for Abhinav, but we are out of time. Thanks a lot, Abhinav; I think that was a great start to the conference. Thank you.
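As a footnote to that last exchange, the props-based caching idea can be sketched in plain JavaScript (shallowEqual, memoizeRender, and renderPrice are hypothetical names; a real implementation would hook into React's lifecycle, e.g. shouldComponentUpdate, rather than wrap a render function directly):

```javascript
// Shallow equality over props: same keys, identical values.
function shallowEqual(a, b) {
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  if (ka.length !== kb.length) return false;
  return ka.every((k) => a[k] === b[k]);
}

// Cache the last render result, keyed by shallow props equality.
// Only valid for "pure" renders: same props must always give same output.
function memoizeRender(renderFn) {
  let lastProps = null;
  let lastResult = null;
  return function (props) {
    if (lastProps && shallowEqual(lastProps, props)) {
      return lastResult; // cache hit: skip the expensive render
    }
    lastProps = props;
    lastResult = renderFn(props);
    return lastResult;
  };
}

// Hypothetical pure "component":
const renderPrice = memoizeRender(({ amount }) => `<span>₹${amount}</span>`);

console.log(renderPrice({ amount: 499 })); // "<span>₹499</span>" (rendered)
console.log(renderPrice({ amount: 499 })); // "<span>₹499</span>" (cached)
```

This is the same trade the answer describes: a shallow compare is much cheaper than re-rendering, so for pure components the comparison almost always pays for itself.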