Hi, so today I'm going to be speaking about some quick wins for page speed. Actually, that's the talk title I submitted, but the real title of my talk is web performance for lazy people like me. Let me say a bit about myself. My name is Tejas. I make terrible jokes; you'll find a lot of them, so feel free to laugh and make me not feel so bad. I love Clojure and I love JavaScript, and I work at Quintype; those are my credentials over there. I've listed this as a maybe beginner-to-intermediate talk. There have been a lot of talks at this year's JSFoo and last year's about things like progressive web apps, but I wanted to get back to some of the basics of what you can do to make your app faster. In particular, I was looking for the quicker things you can do without rewriting your entire application or porting it from one language to another. What are some of the smaller things we can actually do that will take us a long way? So, like most great talks, I'm going to start with an important quote: "If debugging is the process of removing software bugs, then programming must be the process of putting them in." That's from Edsger Dijkstra. You might remember him from Dijkstra's algorithm if you studied computer science. So the first question is: why should I care? Let me answer this in the context of what I do, which is that we build the software behind a lot of India's most popular news sites. You might not have heard of us, but we power many news sites behind the scenes. The digital news space is incredibly competitive, and bounce rates are incredibly high, maybe worse than in most industries. Just imagine how you interact with news: you're probably scrolling through your Facebook feed, you see something interesting, you click on it, and a moment later you've forgotten all about it and you're on to the next thing.
And because we deal with a lot of news, many of our readers are from very rural parts of the country. We've worked with customers whose main users are in areas where the network connection is terrible and the devices they're working with are also pretty low-end. Yeah, so most of you would have gotten bored by now. This is what the experience is like for a lot of people: they come to a big white screen and they don't see anything. I held on that slide for about four seconds, and I know that in most of your heads you were thinking, okay, what's going on, did something go wrong over here? Around 60% of your users will actually drop off if your page load takes four seconds or more; different reports give different numbers for this. And this is the first reason you should care about your page load speed. Unfortunately, it's pretty brutal: for almost every 150 or 200 milliseconds either way, you'll see the number of people who stick around and actually consume your content change. The second reason, and this is an update Google pushed in July 2018, is that Google now uses your page load time as a signal in its ranking algorithm, so they give you a benefit if your page speed is good and penalize you if it takes a long time to load your page. I don't know how many of you have noticed the new section in the Google PageSpeed Insights site recently, which uses the Chrome User Experience Report to tell you how fast or slow your page is for actual users. This is not an estimate; it's actual users and the time it took for them to experience your site. And like a lot of things, page speed is probably governed by the Pareto principle, otherwise known as the 80-20 rule: you can probably do about 20% of the work and get about 80% of the benefits.
After that, it comes down to squeezing that last bit of performance out. But we can do a lot of really quick things first. Before we go much further, I'm going to mention that a lot of things are not in scope, simply because they've been covered elsewhere or I think there's enough material out there on them. I'll put my slides up, so if any of these look very interesting to you, you can always look at the slides and figure out what to do from there. I'm also not covering making everything a progressive web app with isomorphic rendering, in part because this has already been covered. And also, and this is actually a true story, we spent a lot of time converting our entire site to a progressive web app, but we still didn't get that sub-second page load we really wanted. So let me introduce you to the seven-step program for a fast page load. My first step is to build a chart. Here's one I made earlier. This one is mine; please don't just blindly copy my browser compatibility table. What you really want to do is segment your browsers according to how much you really care about them and how much effort you're willing to put in. For me, "the good", where I'm optimizing for speed, is the last five versions of Chrome, plus Firefox, Edge, Safari and Vivaldi. These are the browsers I really, really care about, and in these I'm going to spend a lot of time benchmarking my performance. "The bad" is browsers where I want it to work, but it's okay if it's a little slow. As the previous speaker mentioned, you see UC Browser up there because it's a pretty big browser in India; we care a lot that the site works on it, but we're okay if it's a little bit slow. And the last category is browsers you just don't care about: I'm not going to spend any time trying to figure out how to get this to work on those older browsers. Obviously, this should be very much influenced by your actual business, right?
I'm not going to make a blanket statement that IE6 is not worth covering or anything like that. Look at your analytics, look at your actual users, figure out what they are really using, and be ruthless in deciding what you should support. I'll give a very simple example: something as simple as CSS Grid or Flexbox is covered by a lot of browsers, but maybe not by a lot of the older ones. Your development team gets to choose: do I want to spend about 3x the amount of time writing CSS to handle these old browsers, or do I want to cover the smaller set? And that's something to continuously keep in mind: supporting older devices costs you actual money, both in coding for the lowest common denominator and in dev effort. It's a decision you should be really pragmatic about. My second and most important step in this program is to profile. Which leads me to step number three: profile. And for those in the back, that's profile. And just in case the people on the balcony missed it: profile. Step six is also, well, no, it's not profile, it's make a small change, but then profile, because that's the most important step over here. Why should you actually profile your stuff? There are really two reasons. The first is that the speaker of today's talk, that's me, likes to make a lot of claims that you should not believe. There are no silver bullets out there. To paraphrase Leo Tolstoy: all fast pages are alike; every slow page is slow in its own way. My problems are not your problems. I've optimized for a certain set of constraints; you may not care about those, and you may have a totally different target audience. So please profile; that's the most important thing. The next question we want to answer is: what are you going to profile? Unfortunately, there are a lot of things you actually can profile.
And sometimes these are very orthogonal concerns: in order to optimize for one of them, you may end up sacrificing another. It's very unfortunate, but these are, again, decisions you'll need to make in a very practical manner for your business. As an example, you might care about the first-time page load versus subsequent loads, which means jumping from one page to another or even coming back a few days later to a different page on your site. Again, this will be very business- and domain-contextual. For example, we had a high bounce rate, so we care a lot about the first-time page load, and subsequent loads are icing on the cake. Then there's time to first contentful paint versus time to interactive, and what's above the fold versus the time the entire page is completely loaded. You're going to hear "above the fold" a lot, so this is what the fold is: typically, on your mobile device, whatever is visible on the first screen, plus maybe, I'd argue, 20 or 30% below the bottom of that, is what everyone refers to as above the fold. So now we know what we're going to profile, and ideally you should have targets for each of these, like: I want my first load under a second, and subsequent loads under maybe half a second. How? Most people don't realize this, but the best tools to profile your site are in the browser itself. In this example, what I've actually done is clicked on More tools and then Remote devices. If you're building for an audience that's primarily on a mobile device, your best practice should be to actually profile on that device. Not a lot of people know this, but with a simple USB cable you can attach your desktop Chrome to Chrome running inside Android, and you can do a very similar thing between Safari and iOS. This panel will show you a lot of details, which I won't get into.
This is actually a picture of me sitting in an area with a terrible internet connection, trying to figure out why things are slow. The next tool I want to show off is Google Lighthouse. Just raise your hand if you've seen this. Okay, that's about half the audience. Lighthouse showed up maybe last year or the year before that, and it shows up in the Audits tab in Chrome DevTools. It's actually one of the most powerful tools for debugging your page speed; they have releases almost every single month, and it just gets better and better. You can run it outside of Chrome as well, and we're going to talk about that in a bit. It runs your page in an actual browser, simulates your network, and tells you how long things took to load. The last question you want to ask is how to profile your bundle, and this is one tool I found very, very useful: it's called the webpack bundle visualizer. What it does is load up your webpack bundle and show you where your different hotspots are. In my example, I can see that my app.js is 350 KB, out of which lodash was 48 kilobytes. This brings me to a myth in today's community, which is that serving large assets with gzip is fine. It's really not. Compression reduces network transfer, but not the processing the device has to do. And India is actually in a very unique space here, because whereas in various other countries the adoption of higher-end smartphones has been a lot faster, in India the network, especially in the suburbs or slightly rural areas, has improved thanks to a lot of work by the various telecom companies. But people are still not buying high-end handsets.
So you actually have a huge number of people who have a decent-ish network but terrible low-end smartphones, which brings me to: sometimes the best JavaScript is none at all. A lot of people here do React, and everyone thinks about React for the front end: binding, rendering onto a container. But a fun read I had is that the BBC actually uses React only for server-side rendering. They do server-side rendering without any client-side rendering at all, just because they felt their site didn't demand that amount of dynamism, even though they enjoyed the framework. So, enough being philosophical; let's get on to the actual quick wins this talk is about. The first thing I'm going to say is that you can actually tell the browser what's important on your site. Most sites, or most pages rather, will look like some variation of this: you'll open your HTML and you'll have some important CSS, maybe some images that are below the fold, then a bloated third-party JavaScript that your users wait forever for every time it loads, and finally your app.js, or whatever you actually need for interactivity. Your browser is smart, but you can make it a lot smarter. This brings me to the first tip, which is link headers. What the HTTP spec allows you to do, and you can also embed this in your DOM, is tell your browser that certain resources are important. So in the first line, I say: app.js, please preload it. What the browser does in this case is make the network request to download this JavaScript ahead of time. In fact, there's a very good chance it will start fetching the resource as soon as it sees this header.
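The preload hint can live in your markup or be sent as an HTTP response header; here's a minimal sketch, with placeholder paths:

```html
<!-- In the document head: -->
<link rel="preload" href="/app.js" as="script">
<link rel="preload" href="/data.json" as="fetch" crossorigin>

<!-- The equivalent HTTP response header form:
     Link: </app.js>; rel=preload; as=script -->
```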
This means that by the time you actually include the JavaScript on your page, it will probably already be in the memory or disk cache, or wherever your browser is holding it for you, and it doesn't need to wait for the network at that point. You can do this with JSON, with fetch data, as well: you can send that off to preload too. And one thing that's sort of interesting, which not many people know, is that if your CDN supports it, it will actually do an HTTP/2 push for this. For people not familiar with HTTP/2 push, what that means is: when I ask for this page, even before I ask for the JavaScript, the CDN will just give it to me. Again, this is great if you're optimizing for the first page load, but it carries the risk that on the second page load you might get a duplicate, so try to benchmark and see if this is actually worth it. In our experience, we found it's been really great. The next tip I'm coming to is to inline your CSS, especially the important parts. True story: I think I spent at least five years trying to figure out how to do critical CSS the right way. I've tried passing it through PhantomJS and every single method that's out there. But one day, just as an experiment, I took my entire CSS and pushed it into the head. I didn't even chunk it by page; I just put the entire CSS over there. It turns out it actually worked, because I was downloading the entire CSS anyway, and from there you can just improve on it. This does, of course, improve your first-time page load, but since you're duplicating it on every single page, your subsequent loads may suffer a bit. You have other ways of optimizing that, though, including a PWA or PJAX or any other way of loading your pages without duplicating it.
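The "just inline it" experiment above amounts to something like this, where a build step dumps the compiled stylesheet straight into the head (the rules shown are placeholders):

```html
<head>
  <style>
    /* entire contents of main.css, injected here at build time */
    body { margin: 0; font-family: sans-serif; }
    .headline { font-size: 2rem; }
  </style>
</head>
```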
Just from a security point of view, if you have a Content Security Policy, remember that you'll need to allow those inlined styles, for example with a hash or a nonce. But in general, coming back to the principle of not making the browser work so much, trimming down your CSS is probably the best thing you can do to reduce things a lot. A problem that I've always had, or my team has always had, is that we'll start by writing an application and it will be great and performant, but then as requirements change, people just forget to delete the old code. I've seen many, many people have this problem across different projects, and CSS in particular, because there's no real way to figure out whether a block is unused, will just sit around forever and blow things up. What we found is that CSS Modules was a great way to slowly fix this for us. I'm not talking about CSS-in-JS here, even though they're sort of similar. What I'm talking about is: over here you can see that in header.js, I'm importing a style and then using that style in this component. In essence, if I stop using this component, its CSS goes away from my CSS bundle. Which means I can use my regular webpack warnings, hey, this class is not being used, you didn't really want to import it, as a way of also deleting old CSS. Another hidden feature a lot of people don't know about is that Chrome has added a Coverage tab, I think about three months ago, which tells you what percentage of your CSS and JavaScript is actually being used. You can see over there, in the circled bit, that this is pretty bad on one of my sites: it's about 64% unused. I think our internal target is to get it down to about 30%; then we'll know it's not worth optimizing anymore at that point.
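A sketch of the CSS Modules pattern described above; it assumes a webpack build with css-loader's modules option enabled, and the file and class names are placeholders (this needs a bundler, so it won't run standalone):

```javascript
// header.css contains:  .title { font-weight: bold; }
// header.js:
import styles from './header.css';

export function Header() {
  // styles.title is a generated, unique class name. If Header stops being
  // imported anywhere, the bundler can drop its CSS from the bundle too.
  return `<h1 class="${styles.title}">Hello</h1>`;
}
```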
And the most important piece of CSS that comes through, I would argue, is the font. This is one place where I think everyone falters when building their site. Fonts are great because they really help push the branding of your site, but unfortunately, when you don't deal with them correctly, they end up being pretty slow. So the first thing I want to say is: please choose your FOIT or FOUT strategy very carefully. What does that mean? FOIT is a flash of invisible text, and FOUT is a flash of unstyled text; invisible versus unstyled, that's the difference. Speaking in English: while your font is loading, do you want all the text to be invisible, and then have the font suddenly show up? Or do you want to see a system font first, and then have everything turn into the font you were trying to load? Those are the two strategies, and you use the font-display property to choose between these two behaviors. But with this comes a secondary problem that may not be really obvious. By default, every single font that's loaded will cause a re-render of your page. Which means if you're loading four fonts, you have the initial render and then four renders after that. And by four fonts, I mean probably two fonts in two different weights, a normal and a bold, which causes four renders on your page. Typically this looks like an earthquake; I don't know if any of you have seen it, but your screen moves a little bit left and right as the fonts get loaded one at a time. There's actually a JavaScript library that helps with this, called FontFaceObserver. What you do is say: give me a callback when all of my fonts are done.
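The font-display choice looks like this in CSS; a sketch, with a placeholder font name and path:

```css
@font-face {
  font-family: "MyFont";                            /* placeholder */
  src: url("/fonts/myfont.woff2") format("woff2");  /* placeholder */
  /* swap  = FOUT: show a fallback font immediately, swap when loaded.
     block = FOIT-like: hide the text briefly while the font downloads. */
  font-display: swap;
}
```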
Or, for example, you can say: when two fonts are loaded, or four, whatever; you have different options like that. And you can control this with CSS: you'd have a class that applies all your fonts at once, which forces the browser to do a single re-render instead of four. So let's come to the heaviest part of your site. I know this has been covered, including in the talk just before this one, but: images. Images are actually the single largest contributor to page weight; about 55% of your page is probably images. We had a great talk by Rahul from ImageKit yesterday, but yes, definitely use a service such as imgix, ImageKit, Thumbor or Cloudflare Polish to make sure you're serving the right image for the right device, and of course these serve WebP where supported. Which is all great, but I want to call out probably the most important mistake that people who aren't familiar with HTML5 make: please do not choose your image in JavaScript. It's going to be ridiculously slow and it's just not worth it. Your image tag has long supported, and browser support for this is pretty good, an attribute called srcset. What I'm doing over here is saying: this image's src is large.jpg, and the srcset is a list of images along with the widths they correspond to. So small.jpg is 200 pixels wide, medium.jpg is 400 pixels. I think most people stop there, but please remember to put in the third piece that goes along with this: the sizes attribute, which helps your browser figure out which size it should actually ask for. What my sizes value says is: if the minimum width is 1024 pixels, which is what I call a desktop, then this image will occupy 25% of my browser screen; that's the vw, viewport width. And on a mobile device, it's going to be a big image: it takes 100%.
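Putting the srcset and sizes explanation above into markup (file names and breakpoints are illustrative):

```html
<img src="large.jpg"
     srcset="small.jpg 200w, medium.jpg 400w, large.jpg 1000w"
     sizes="(min-width: 1024px) 25vw, 100vw"
     alt="A responsive image">
```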
So what your browser actually does is say: I'm on a 13-inch MacBook, that's 1440 pixels, and this image takes 25% of that, whatever number that is, so I'll pick the image that's slightly bigger than that. And maybe, depending on the network, it may choose a smaller one, but the browser handles that for you. This is actually very, very well supported, as long as all these images have the same aspect ratio. If you're using different aspect ratios, there's another tool for this called the picture tag, which you can look up. And of course, lazy load your images, with IntersectionObserver if you can. The caveat I'm calling out here is: lazy load whatever is below the fold. If you lazy load what's above the fold, you're back to things being slow again. I'm going to go through this quickly, because the previous speaker did a great job of covering IntersectionObserver. Over here I have an image. Its src is just an empty one-by-one-pixel GIF, hard-coded on the page, and it has a data-src, which is the actual image. I create an IntersectionObserver and say: trigger my callback when the image comes within 100 pixels below the bottom of the screen. I observe every image that has a data-src, and for each one, on line 15, you'll see that I just set the target's src to its data-src. Now this is really great because I'm able to use IntersectionObserver, which is a native technology, and I can use a polyfill for it on older browsers. Polyfills are the greatest, because they let you use tomorrow's technology today. For people who aren't familiar, this is what a polyfill is: over here I'm trying to do new Promise(something or other), and Promise is not supported in this browser. I'm sure everyone knows the solution to this, right?
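The lazy-loading walkthrough above looks roughly like this as code; the selectors and the 100-pixel margin are illustrative, and NodeList.forEach assumes a reasonably modern browser:

```html
<img src="data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
     data-src="actual-image.jpg" alt="">
<script>
  var observer = new IntersectionObserver(function (entries) {
    entries.forEach(function (entry) {
      if (!entry.isIntersecting) return;
      var img = entry.target;
      img.src = img.getAttribute('data-src'); // swap in the real image
      observer.unobserve(img);                // only load it once
    });
  }, { rootMargin: '100px' }); // start loading 100px before it's visible

  document.querySelectorAll('img[data-src]').forEach(function (img) {
    observer.observe(img);
  });
</script>
```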
I import a promise polyfill, and just for the heck of it, from the previous example, I'll include Babel and fetch and maybe a few more polyfills. The tradeoff is that doing this blindly will probably end up with something like this, where you can see that babel-polyfill is sitting at about 20% of my bundle. And that's simply not ideal, because I don't want to make my faster browsers slow just because some old browser out there doesn't support promises. Which brings me to my next hack: a conditional load. Over here, on line 2, I check whether fetch is defined and window.Promise is defined. If they aren't, I include the entire polyfill bundle and then load app.js; otherwise I just load app.js directly. I'm sure there's a webpack plugin that will do this for you, and better, but it's frankly just a few lines of code, so we weren't too bothered with it. This supports our overall goal: it's really fast and has no performance penalty on the browsers in my "good" list, and UC Browser, I think, doesn't support promises, so it will load the polyfill. In fact, every time you load a library, you should check your bundle size, not just for polyfills. I keep having this debate with different people on my team who tell me things like: hey, when we added that, the memory of Chrome just went up by maybe 15 or 16 MB; should you care about it? My response is that 16 MB on a server is maybe not that big a deal, but 16 MB of RAM across 225,000 concurrent users means you're wasting about three and a half terabytes of RAM globally. So my argument is that it's morally wrong for us to willingly waste RAM. So, speaking of libraries, let's play a game. When I say "it's 2018", you say "we don't need it anymore". Okay. So I'm going to start with lodash. It's 2018, and... yep. So let me get started.
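Going back to the conditional polyfill load for a moment, the check itself can be sketched like this; the script-loading helper is left out, since only the detection logic is the point:

```javascript
// Returns true when the environment is missing fetch or Promise and
// therefore needs the polyfill bundle loaded before app.js.
function needsPolyfills(global) {
  return typeof global.fetch !== 'function' ||
         typeof global.Promise !== 'function';
}

// In the page: if (needsPolyfills(window)) load polyfills.js first,
// otherwise load app.js directly.
console.log(needsPolyfills({}));                                          // true
console.log(needsPolyfills({ fetch: function () {}, Promise: Promise })); // false
```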
A lot of you will know this already: _.map(array, someFunction) can just be rewritten as array.map(someFunction). You'll see in the discussion of the two or three libraries I'm going to cover that I'm not saying bad things about these libraries at all. I don't think web development would even have been possible without lodash over the last few years, but browsers have slowly picked up and brought this functionality in-house, which makes a lot of the use cases for these libraries more esoteric, or only relevant for more obscure browsers that don't support the native versions. So with that in mind, don't do this: even if you do need just one function from lodash, say omit, don't import underscore from 'lodash'. The reason is that it pulls in the entire library, which can be quite huge. Instead, do this: you can use a totally different tiny library such as object.omit, or, if you really want to use lodash, you can import omit from 'lodash/omit', which is right now my preferred way of doing it. And in webpack, doing import { omit } from 'lodash' should actually reduce it and import only that one function, thanks to webpack's tree-shaking abilities. I couldn't get that last one to work myself, so I'm sticking with the third option, but frankly, removing global imports of the entire lodash library brought our bundle size down by about 30 or 40 KB, maybe a little more. The next library I want to cover is jQuery. So I'm going to say it's 2018, and you'll say... yeah, that's right. With jQuery I'm going to be a bit more nuanced, because jQuery, as everyone knows, does a lot of things. It's a really powerful library that's been around for a long time.
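Before moving on, to make the lodash point concrete, here are native stand-ins for two common calls; this omit is a hand-rolled illustration, not lodash's implementation:

```javascript
// _.map(arr, fn) is just:
function map(arr, fn) {
  return arr.map(fn);
}

// A minimal _.omit(obj, keys) equivalent:
function omit(obj, keys) {
  var out = {};
  Object.keys(obj).forEach(function (k) {
    if (keys.indexOf(k) === -1) out[k] = obj[k];
  });
  return out;
}

console.log(map([1, 2, 3], function (x) { return x * 2; })); // [ 2, 4, 6 ]
console.log(omit({ a: 1, b: 2, c: 3 }, ['b']));              // { a: 1, c: 3 }
```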
But what I found in a lot of my own web apps is that I'd have most of my app written in React or Vue or whatever else the framework is today, and I'd just reach for jQuery for a few small things, like fetching some JSON, or looking up an element in the DOM and binding something to it or rendering React into it. It turns out you can actually get away without jQuery at all for some of these simpler use cases. So here's an example, a bit of code with jQuery: I'm wiring up a link, and when I click on it, I add a CSS class, hide it over 500 milliseconds, and do a $.getJSON. The equivalent without jQuery is very similar: I use document.querySelector or document.querySelectorAll to fetch my elements, I do .classList.add to add my class, I use a CSS transition to handle the 500-millisecond shrinking, and of course we have fetch instead of the jQuery JSON APIs. Once again, I'm going to caveat this: I know there's a huge wealth of libraries out there built on jQuery, and I'm not going to pretend you can get rid of all of them overnight, but it's still a starting point for you to think about. jQuery came in at a point where browser support for querySelector and querySelectorAll was pretty pathetic and everyone implemented things in their own way, but as we've moved on to 2018, things have gotten a lot better. The last library I'm going to cover, though maybe not spend much time on because we're out of time, is Moment. I love Moment a lot; it's great for date manipulation and anything to do with times and dates. But I found that if I'm only using it for something like printing out the current date, you can do that with much smaller libraries as well.
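The jQuery-to-vanilla translation just described, as a sketch (class names and the endpoint are placeholders):

```html
<a href="#" class="dismiss">Dismiss</a>
<style>
  .dismiss { opacity: 1; transition: opacity 500ms; }
  .dismiss.hidden { opacity: 0; }
</style>
<script>
  // $('.dismiss').on('click', ...) becomes:
  document.querySelectorAll('.dismiss').forEach(function (link) {
    link.addEventListener('click', function (e) {
      e.preventDefault();
      link.classList.add('hidden'); // the CSS transition does the 500ms fade
    });
  });

  // $.getJSON('/api/items', cb) becomes:
  fetch('/api/items')
    .then(function (res) { return res.json(); })
    .then(function (items) { console.log(items); });
</script>
```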
The reason is that Moment comes pre-bundled with a bunch of locales. So if you just import moment and don't do anything to prevent this, it comes with Dutch support, German support, almost every language you could think of, which ended up bloating things quite a bit. So, in conclusion, some final thoughts. Although we were a PWA, that wasn't nearly enough to get us to a sub-second page load. In fact, you'll probably find that progressive web apps are all optimized for subsequent page loads and do very little for your first-time page load. Using a lot of these tricks, as well as some of the PWA stuff, we brought our first contentful paint down from about 3.5 seconds to about 900 milliseconds, and saw somewhere between a 35 and 50 percent increase in traffic. And this should be a continuous journey. I've seen a lot of teams thinking about a perfect day, where we'll make everything slow for a month and then one day we'll clean it all up. That's never a great idea, because that day never shows up. Every day should be performance day. And the last thing I'll say is that the easiest way to get that to work is to measure your important metrics as part of CI. Lighthouse is now available as an npm module; it can launch Chrome in headless mode and generate its report as JSON. It's very easy to run this every minute, or every hour, or whatever your capabilities are. In fact, there are even services out there that will do this for you; there's one called Treo.sh, I think, which will run Lighthouse against your site every minute, tell you what your speed is, and alert you when things are bad. In closing, I wanted to link to some great resources that I found. The first is the Chrome developers' series called HTTP 203, as in the status code for Non-Authoritative Information. This one's really great: they publish a video every few days, and with every single video I'm like, wow, I didn't know about that.
I think last week they did a great video on some of the new browser features that let you measure performance in the browser; that's the Performance API, window.performance, and there's a great amount of stuff in it. I of course showed you the webpack bundle visualizer already, and the last thing I wanted to show you is cssstats.com, which will actually go through your site and tell you: these are how many fonts you've used, these are how many sizes you've used, these are how many colors you've used, and give you a bunch of stats. And as I have three seconds left on the clock, I'm done. Yeah, thank you.

Before asking questions, I just have a couple of announcements. There is a BOF on how to travel the world as a remote developer, freelancing and so on. The ReFresh tracks are also going on, like designing for voice and VR interfaces. Please take the feedback forms and fill them in; the feedback forms are at the end of each row. And then there's a break until 4:20, during which you can continue questions. So if there are any questions, raise your hand.

Hi. I just wanted to know: how is above-the-fold rendering done? What I understand is, if a user opens a website on his laptop, the screen height he's currently seeing is rendered server-side, and the rest of it is rendered client-side, right?

Yes.

So I just wanted to know how this is done. Where is the logic for this written?

Okay, so I think this comes down to, well, what really is responsive design. Obviously there are two approaches to this.
One is, if you're relying on your CDN or something to do a lot of HTTP caching, then what we like to do is give the same HTML back to both mobile and desktop, and then adapt it in CSS with media queries and so on. You can take a different approach if you don't depend on caching very much. But to answer your specific question, at least what we do is the server side will render whatever is above the fold for both mobile and desktop. I think we've been kind of lucky there; the number of resources is drastically different, like you might see five images on mobile versus say 12 or 17 on desktop, but we found that the penalty for that hasn't been too high, even though mobile only uses five or six of those 17 resources. Nice talk, thank you. So what I wanted to know is, we always talk about lazy loading your images, but there are times when the image is more than just a placeholder. Say I want a parallax-type style, where my styles scroll based on where I am in the browser. But if I have not loaded the image, that space will be empty. How do you handle that part? Right. So that's why, and I think the previous talk also covered this pretty well, the earlier way of doing this was you would use something like jQuery, which would listen on scroll and then call, for example, getBoundingClientRect to figure out which images are coming into view, and that was actually really, really slow.
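For context, the older scroll-listener pattern being described looks roughly like this. This is a sketch; `data-src` is just a common convention for holding the real image URL, not something from the talk's slides:

```javascript
// Pure helper: is an element's top edge within (viewport height + buffer)?
function isNearViewport(top, viewportHeight, buffer) {
  return top < viewportHeight + buffer;
}

// The slow part: every scroll event queries layout for every image,
// and each getBoundingClientRect() call can force a reflow.
function onScroll() {
  document.querySelectorAll('img[data-src]').forEach(img => {
    const { top } = img.getBoundingClientRect(); // forces layout
    if (isNearViewport(top, window.innerHeight, 300)) {
      img.src = img.dataset.src;
      img.removeAttribute('data-src');
    }
  });
}

// Guarded so the snippet doesn't blow up outside a browser.
if (typeof window !== 'undefined') {
  window.addEventListener('scroll', onScroll);
}
```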
The reason being that, when you start debugging with the Chrome profiler, you'll see something called reflows a lot, which is what's happening when the browser tries to figure out what's on the screen and reshape everything. Even trying to figure out the position of an image with getBoundingClientRect actually causes a reflow, and that was one of the reasons why this approach would be really, really slow. If you look at the IntersectionObserver API, and I'm just going to go back to that specific slide, it actually gives you a very easy way to deal with this, right? So if you see line number six, it actually tells the browser to start loading when the image is 100 pixels or 300 pixels below my screen. So that should give you a sufficient buffer, that's one. And two is, because IntersectionObserver itself comes pre-bundled with your browser, there's no question of waiting for any external library to load before this works. There's two things. I'm not sure about the support of IntersectionObserver yet. Yep. So on my screen, it's actually reasonably well supported. I believe Safari, I mean mobile Safari and desktop Safari, also has support. UC Browser is the only one that, if I remember, you need to worry about. Edge supports IntersectionObserver if I'm not mistaken, and I'll need to check the chart. A counter-question on that. When we say lazy load, what happens is that, if you see, your browser doesn't load your page from the top. So if a user has scrolled to, say, 50 percent of my screen and then he refreshes the page, the page actually loads at that 50 percent mark. Won't lazy load fail in those situations? Are you saying you're looking at the page?
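The IntersectionObserver version of the same idea, as a sketch (the 300-pixel `rootMargin` mirrors the buffer from the slide; `data-src` is again an assumed convention, and the exact slide code isn't reproduced here):

```javascript
// Swap the placeholder URL for the real one.
function loadImage(img) {
  img.src = img.dataset.src;
  return img;
}

// Guarded so the snippet is safe to load outside a browser.
if (typeof IntersectionObserver !== 'undefined') {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        loadImage(entry.target);
        obs.unobserve(entry.target); // each image only needs to load once
      }
    }
  }, {
    // Start fetching 300px before the image enters the viewport.
    rootMargin: '0px 0px 300px 0px',
  });

  document.querySelectorAll('img[data-src]').forEach(img => observer.observe(img));
}
```

Because the browser does the intersection bookkeeping internally, there are no layout queries from your code on every scroll, which is what made the old approach slow.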
Well, let's say my page was three times the screen size. Okay. I've scrolled to the second half of the page. Yes. When I refresh the page, the browser automatically does it for me: instead of loading from the start of the page, it loads from the middle of the page where I last scrolled to. Sure. So won't lazy load prevent what's visible there from being preloaded? Well, let me think about that. What I will say is that IntersectionObserver is very, very light. So frankly, even if you are at 50 percent of the page or wherever, IntersectionObserver will figure out pretty much instantly that you're at that 50 percent mark and will immediately trigger the callbacks for those images. And if you see the way IntersectionObserver actually works, it says 20 pixels above the viewport, 100 pixels below; that's my example over here. So it won't load all the images that it took for you to get to that point. Whatever's in the first half of the page, it won't load that; it'll just load whatever's around the second half. I'm not sure if it solves that specific use case, where somehow you loaded the page at the 50 percent mark. Yes, if you don't lazy load the image, then for that micro-benchmark it might be a bit faster. But I think overall what we've seen, including on UC Browser, is that it's a pretty seamless experience, especially if you polyfill this in advance.
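On the polyfill point, a conditional load could be sketched like this. The script path below is a placeholder (the W3C polyfill is published on npm as `intersection-observer`; host a copy wherever suits your setup):

```javascript
// Pure feature check, so the decision itself is testable.
function needsIntersectionObserverPolyfill(globalObj) {
  return !('IntersectionObserver' in globalObj);
}

// In the browser, only fetch the polyfill for browsers that need it.
if (typeof window !== 'undefined' && needsIntersectionObserverPolyfill(window)) {
  const script = document.createElement('script');
  script.src = '/js/intersection-observer-polyfill.js'; // placeholder path
  document.head.appendChild(script);
}
```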