Today I'll be talking about the Chrome User Experience Report, which is a data set released by Google. Let's get started. I'm Inian. I spoke about Dexecure already, so I'll skip this part.

Traditionally, there are two types of measurement you can do on your website. The first is synthetic measurement: you test the performance of your website by loading it in a controlled environment, say from a data center, and you can control factors like loading it on a simulated 3G connection, loading it on a mobile device, and so on. The industry-standard tool for this kind of analysis is webpagetest.org, which gives you a pretty exhaustive way to measure the load times of your website.

The second way of measuring the performance of your website is called real user monitoring (RUM). Compared to the first approach, this is where you use live performance data from your users to find performance bottlenecks. The advantage is that it happens on your production website, so it is more representative of what your users actually experience than synthetic measurement.

Both have their set of use cases. For example, you can use synthetic monitoring as part of your CI/CD pipeline to prevent performance regressions from getting deployed, while real user monitoring can find, say, weird bugs that occur only in the Samsung browser.

There has been a project called HTTP Archive for more than ten years now, run by a group of folks at Google. What it does is crawl the top half a million websites every single month, using webpagetest.org (synthetic measurement) to collect metrics like page load time, speed index, and a lot of other performance metrics, and put all of that in BigQuery.
Using this, people have been doing a lot of analysis on how the web behaves at large, so we can answer questions like: what was the average size of a web page last month, and how has it been growing over time? Until recently, there wasn't a way to do this with real user monitoring data, and that's what is now possible with a new data set called CrUX, the Chrome User Experience Report.

The Chrome User Experience Report is basically HTTP Archive for real user monitoring data. It has data for more than 3.2 million websites (I think this increased again last month), and it's all available in BigQuery, where you can slice and dice the data along various dimensions. The way the data is collected is that people have opted into data sharing as they browse in Google Chrome. The data is anonymized, aggregated across all these users, and put in BigQuery for us to analyze. The data is grouped by origin, so you can't get data for individual pages like you can in HTTP Archive; it's grouped per website.

What metrics are available? These are the four metrics the Chrome User Experience Report currently gives you: time to first paint, time to first contentful paint, DOM content loaded, and onload, which is the time to load the page (an event fired by the browser).

Let me briefly explain the difference between first paint and first contentful paint. First paint is when anything at all is painted by the browser. Some websites, as they are rendering, just flash white, and the time to first paint is recorded at that point. You can see that in a lot of cases this isn't very useful: it's not that interesting to know when the background of your page changed. You're looking for something more meaningful.
So they defined a new metric called first contentful paint, which tries to give you something more meaningful: when is something from the DOM actually painted, whether that's the first text on the screen, the first image, the first canvas element, and so on. That gives you a much better signal that the page has started loading. This is the metric we'll be using throughout the talk; I just wanted to give an introduction to it.

You can query these metrics (onload, first contentful paint, first paint, DOM content loaded) across different dimensions: the kind of device the user is on and the kind of network the user is on. The network classification uses the Network Information API, which is only supported in Chrome, but the data set is Chrome-only anyway, so it doesn't matter. You can even get data for offline users, and you can slice and dice by country as well.

This is how the table actually looks. The kind of questions you can ask with this data set is pretty unique. You can't ask, "What's the page load time of google.com?" Since this is real user monitoring data that has been anonymized and grouped together, the questions you can ask look like this: for google.com, for people on a 4G connection and on a tablet, what fraction of users have their page load time within, say, 0 to 200 milliseconds? So you have to frame everything in terms of what fraction of users have a given experience; you can't even say that 100 people had a particular experience. If you want to know exactly how to write the BigQuery queries themselves, we have written an interactive blog post on how to get started.
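To make the shape of the data concrete, here's a hedged sketch in Python of how those per-origin histogram rows can be sliced. The schema is simplified and the density values are made up for illustration; the real data lives in BigQuery and has more dimensions.

```python
# Simplified, invented CrUX-style rows. Each row:
# (origin, device, connection, fcp_bin_start_ms, fcp_bin_end_ms, density)
# where density is the fraction of page loads falling in that FCP bin.
rows = [
    ("https://example.com", "tablet", "4G",    0,  200, 0.10),
    ("https://example.com", "tablet", "4G",  200, 1000, 0.45),
    ("https://example.com", "tablet", "4G", 1000, 3000, 0.35),
    ("https://example.com", "tablet", "4G", 3000, None, 0.10),  # open-ended last bin
    ("https://example.com", "phone",  "4G",    0,  200, 0.05),
]

def fraction_of_users(rows, origin, device, connection, start_ms, end_ms):
    """Fraction of loads whose FCP fell inside [start_ms, end_ms] for one slice."""
    return sum(
        density
        for (o, d, c, lo, hi, density) in rows
        if o == origin and d == device and c == connection
        and lo >= start_ms and hi is not None and hi <= end_ms
    )

frac = fraction_of_users(rows, "https://example.com", "tablet", "4G", 0, 200)
print(frac)  # 0.1, i.e. 10% of tablet/4G loads had FCP within 0-200 ms
```

Note that, just like in the talk, the only question you can answer is "what fraction of users", never "how many users": the densities sum to 1 per slice and the absolute counts are never exposed.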
In this talk, though, I'll be focusing on the actual results we got instead of going through the queries we used to get them. Sorry for one more definition, but while analyzing this report, we found it was easier to define a composite metric, the Site Experience Benchmark (SEB). Instead of asking how many people have a load time of less than one second, SEB gives you the fraction of users who have a first contentful paint of less than one second. The higher the SEB, the better: a higher SEB means more users have their first contentful paint within one second, so the site is faster in terms of first contentful paint. This is the metric we'll be using in our experiments.

Let's start with something very simple. Everyone knows that sites load more slowly on mobile than on desktop, right? Using this data, you can actually quantify how much slower your website is going to be on a mobile device. What we did is very simple: we took 50 websites and computed the SEB metric, the fraction of users with a first contentful paint of less than one second, and we found that, of course, desktop consistently has a higher SEB than mobile. So we can make a statement like this: SEB goes down 36% when going from desktop to mobile, which means that for a given website, at the one-second mark, 36% fewer users have had their first contentful paint on mobile compared to desktop. You can quantify a lot of things like this to figure out how much slower your users' first contentful paint is going to be, and you can use data like this to make decisions as you develop different applications.
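The SEB computation itself is just a sum over the histogram bins that end at or below one second. Here's a minimal sketch; the bin densities below are invented to roughly reproduce the desktop-versus-mobile gap described in the talk, not real CrUX numbers.

```python
# SEB (Site Experience Benchmark): fraction of loads with FCP under one second.

def seb(bins):
    """bins: (start_ms, end_ms, density) histogram rows; end_ms None = open-ended."""
    return sum(d for (lo, hi, d) in bins if hi is not None and hi <= 1000)

# Invented histograms for one origin, split by form factor.
desktop_bins = [(0, 200, 0.20), (200, 1000, 0.40), (1000, 3000, 0.30), (3000, None, 0.10)]
mobile_bins  = [(0, 200, 0.08), (200, 1000, 0.30), (1000, 3000, 0.42), (3000, None, 0.20)]

seb_desktop = seb(desktop_bins)                    # ~0.60
seb_mobile = seb(mobile_bins)                      # ~0.38
drop = (seb_desktop - seb_mobile) / seb_desktop    # ~0.37: SEB drops ~37% on mobile
```

With these made-up numbers, the relative drop comes out close to the 36% figure quoted in the talk, which is exactly the kind of statement the SEB metric lets you make.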
For example, say you are developing a new website and you want to figure out whether to build a single-page app or a multi-page app. There are a lot of factors that go into a decision like this, not just performance, but let me focus on performance and give you a lens through which you can look at such a question. To answer it, we took the top 50 websites that weren't using any single-page app framework and 50 websites that were using a framework like Angular or React, and we plotted the SEB for each of them.

A quick detail: single-page apps load a lot more of their content upfront, which means the initial load, and hence the first contentful paint, will be slower, but future interactions will be faster because the code for them has already been loaded. In multi-page apps, only the code for the particular page you are on is loaded. Of course, these definitions are getting blurrier over time with preloading and so on, but let's stick to this definition.

We found that using a single-page app framework affects first contentful paint by quite a lot. On average, for the sites we tested, 225% more users had their first contentful paint completed within one second if the site was not using a framework. We can run the same experiment for just desktop users, where the number is 162%, and for mobile, where we find this plays an even bigger role: 486% more users have their first contentful paint completed within one second if the user is on a mobile device and the site doesn't use a framework. So this answers questions like: if a lot of your users are coming through mobile devices, it's probably good to not use a framework, or to server-render.
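As a rough illustration of how those "X% more users" numbers fall out of two SEB values, here's a sketch. The cohort SEBs are invented; they're just chosen so the arithmetic lands near the 225% figure from the talk.

```python
# Turning two cohort-level SEB values into the "X% more users" phrasing.

def pct_more_users(seb_a, seb_b):
    """How many percent more users hit FCP < 1s under condition A than B."""
    return (seb_a / seb_b - 1.0) * 100.0

seb_no_framework = 0.39   # made-up median SEB of 50 sites without an SPA framework
seb_framework = 0.12      # made-up median SEB of 50 sites using one

print(pct_more_users(seb_no_framework, seb_framework))  # ~225
```

The same helper covers the desktop-only (162%) and mobile-only (486%) comparisons: only the two input SEB values change.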
The call is up to you, but using this you can estimate how much impact different kinds of users are going to see on their performance in different conditions.

No talk is complete without a framework comparison, I guess, so, with a set of conditions applied, let's figure out which is the fastest framework in terms of first contentful paint. And I say "fastest" carefully, because there are different ways to measure performance. What do you mean by fastest? Here, I'm looking at first contentful paint. You could also look at onload, or you could ask: my web page has 1000 DOM nodes and I want to update some of them, which framework handles that update best? It depends.

Traditionally, the way people answer such questions is to build the same website using multiple frameworks, say a Hacker News clone or a todo application. The same app gets built with AngularJS, React, Preact, and so on, and then they run synthetic monitoring tools like WebPageTest and say, my todo application built in React was 10% faster than the one built with, say, Ember.

Let's now look at how to use RUM data to analyze this. The main advantage is that it provides a more representative view of what users actually see when loading these websites, so it gives you a much more holistic picture when evaluating different frameworks. There are some disadvantages too, because we are not exactly comparing apples to apples: we are comparing one set of websites built with a particular framework against a different set of websites built with a different framework. But it's still probably okay, because we are only looking at first contentful paint, which is one of the very first events that happens on the page.
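The cohort comparison described above can be sketched like this. The per-site SEB values are invented, and the ranking simply orders frameworks by the median SEB of their cohort, which is one reasonable way to summarize 50 noisy per-site numbers.

```python
from statistics import median

# Hypothetical per-site SEB values for top-50 cohorts (only 5 shown per framework).
cohorts = {
    "Vue":     [0.41, 0.37, 0.45, 0.33, 0.40],
    "React":   [0.36, 0.30, 0.39, 0.28, 0.35],
    "Angular": [0.22, 0.18, 0.27, 0.20, 0.25],
    "Ember":   [0.12, 0.09, 0.20, 0.11, 0.15],
}

# Rank frameworks fastest-to-slowest by the median SEB of their cohort.
ranking = sorted(cohorts, key=lambda fw: median(cohorts[fw]), reverse=True)
print(ranking)
```

Using the median rather than the mean keeps one outlier site (say, one shipping an unminified bundle) from dragging its whole framework's score around.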
So what we did is compare the top 50 websites using each framework: the top 50 Angular websites, the top 50 React websites, and so on. We can assume these sites aren't making any major blunders, like shipping the development version of the bundle to production or not minifying it. So the first contentful paint should be a pretty good representation of the size of the framework and how fast the browser is able to download and execute its code. We took 50 websites from each of four frameworks and plotted the SEB metric for each, and we actually found a pretty consistent result: Vue was the fastest, closely followed by React, and Angular and Ember were slower than those. So if you are just looking at first contentful paint, Vue is the winner.

Again, we can do the same thing for desktop and mobile, and you can find interesting things by dividing the data along these dimensions. For example, quite a few sites built on Ember have, for some reason, very few users with first contentful paint completed within one second. I don't know if these websites are doing something incorrectly; I haven't dug more into the data, but it might be worth looking at whether the way Ember apps are built makes it hard to have a fast first contentful paint. I might be wrong, but this is the data we got for the sites we tested.

Another decision you might face: say you don't care about all that, you want to use the latest framework because it's all the hotness. Now you want to decide whether to do server-side rendering or leave it as client-side rendering. To go into the definitions: usually single-page apps are client-side rendered, which means the HTML is generated on the client side.
The server just ships a JavaScript bundle to the client; the bundle needs to be downloaded, parsed, and executed, and only then is the HTML generated and shown in the browser. That's client-side rendering. Obviously, you can imagine that since there are a lot of steps involved before content shows up on the page, it can take quite a bit of time. The possible solution, if you want to optimize first contentful paint, is server-side rendering: you generate the HTML on the server itself, where it's probably closer to your database, and your server is obviously going to be much faster than your users' laptops and mobile devices. So you generate the HTML on the server and send it to the client, and once the client-side JavaScript bundle loads, it takes over.

What we did, again, is take 50 websites that were only client-side rendered and 50 websites that were server-side rendered. As expected, we found the server-side rendered websites had a much faster first contentful paint: more people had a first contentful paint within one second if the site was server-side rendered. You can split this by desktop and mobile too, and again the effects of bad performance are much more pronounced on mobile devices.

There's an interesting blog post written by Paul Lewis a few years ago where he did the same experiment, but using synthetic monitoring data, and he reached much the same conclusion. Another interesting thing he found is that server-side rendering is not a silver bullet; all your problems are not going to go away. The time to first contentful paint improves, but another metric, called time to interactive, becomes worse.
Time to interactive is when you can actually start interacting with the website, typing, scrolling, and so on; it basically means the main thread is not blocked on any script execution and is free to handle your actions. Server-side rendering actually makes this worse. We could have verified this with the Chrome User Experience Report as well, but right now time to interactive is not among the four metrics it exposes. Hopefully, in the future, we can do a large-scale analysis of this metric for server-side versus client-side rendering too, and then make better decisions about whether to server-side render.

Another interesting thing that comes up whenever we talk about this data set is: "I want to know how fast my competitor's site is loading." This data set basically lets you sort of peek into the Google Analytics, the RUM data, of your competitors. We found this question was interesting not just to developers but also to business people, who want to see how fast their website is compared to their next competitor. So instead of making you write BigQuery queries to process all the data, we built a simple website where you just enter two URLs. Let me see if I can quickly bring that up.

You can go to this website, crux.dexecure.com, and enter any two URLs from the more than 3.2 million websites indexed by the data set. The compare view is the more interesting one. Say you want to compare, across all countries, Lazada and Zalora. The light blue bars represent the fraction of people who have their first contentful paint completed.
You can see that Lazada is doing much better than Zalora. Within one second, 66% of Lazada's users have something on the screen, which is a good thing, and 8% have the page fully loaded, which is amazing. Zalora, in the first second, is not so good by comparison: only 28% of users have something on the screen, and only 3% have the page loaded. Using this website, you can also filter: say I'm just interested in Singapore, or just interested in mobile. You can play around with the data in a much easier way. We also update the site every time Google releases new data each month, so you can see how your website is performing over time: have I become faster or slower compared to my competitors? You can also just look at individual websites, say the WeWork website, and find out what percentage of people, 13% here, have something on the screen. So as you make improvements, this is a nice way to get insights into your own data as well as your competitors' data. That's one other thing you can do with this data set.

Another thing is to run analysis across the web, figuring out what trends are happening at large, similar to what people have been doing with HTTP Archive. One very simple question: I know 3G is slower than 4G, but for google.com itself, we can compare how much faster 4G is than 3G. For the same website, just by being on a 4G connection, 15% more users have their time to first contentful paint completed within one second. I know that's quite a mouthful; the SEB definition may not be that obvious the first time you hear it.

You can also compare 3G with 3G, because 3G itself is not a very fixed standard: there are no minimum speed requirements or anything like that mandated for it.
So what we did was take one particular website, Google, and look at how it behaved on a 3G connection for users in different countries. For example, we found that the SEB of Google in Venezuela is 30% lower than in Sweden. Again, we are not comparing different types of connections here; we are comparing the same website on the same, quote-unquote, 3G connection, yet the user experience is quite different. So even asking a question like "How does my website look for a 3G user?" might be sort of meaningless, because the experience varies quite a bit depending on where the 3G is from. We did a similar thing for 4G and, again, got a different number.

One of the last experiments we did: big companies often have two kinds of websites, a global one, which is usually the .com, and a local version, say the .com.sg, and usually in Singapore you would use the local version. We wanted to figure out whether, for users in Singapore, google.com.sg was actually faster than google.com, which is what you would usually expect. We did this for Amazon, and except for one country (I think BR, Brazil), the country-specific website was indeed faster, so that's good; on average, it was 18% faster. We did the same for Google and found the data was sort of consistent, but not exactly: for a lot of countries, using google.com was faster than using the local version of the website. I'm not sure exactly why that is. We plotted only the countries where the local version was slower, and found that most were in Africa and South America, and China had a massive difference.
So if you are in China, you should not be using google.com.cn, if you only care about performance, of course. I don't know if it's some Great Firewall thing, but this is interesting data you can dig into; probably someone from Google would be able to figure out why the .com actually loads faster than the local versions of these websites.

Let me finish with some gotchas to keep in mind while analyzing this data set. First, the data is aggregated across the entire origin, not individual pages. So if you have one or two very popular pages that load very slowly, they can skew the data for your entire website. Second, there is data for both the HTTP and HTTPS versions of a site, kept separate in the data set. Ideally, your HTTP version should just be a redirect to the HTTPS version, so there's no point analyzing the HTTP version: the Chrome User Experience Report actually measures the redirect as the load. You'll see something like 99% of users completing the load within one second, which doesn't make sense as a page load, but makes sense because it's just a redirect. So be careful about that. Third, you don't have visibility into the number of users. Even though you might see that 10% of users had some experience, you don't know if that's 10% of a thousand or 10% of a million, so take any insight you get from this with a grain of salt. And finally, even though this is real user monitoring, keep in mind that this is only for users of the Chrome browser, and Chrome is not equal to the web; people using different browsers might be seeing something else.
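The HTTP/HTTPS gotcha in particular is easy to trip over when looking things up by domain. Here's a hedged sketch of the idea; the data dictionary and its SEB values are invented, and the helper just generates the two origin keys you'd want to check.

```python
# CrUX keys data by full origin, so "http://example.com" and
# "https://example.com" are separate rows. If the HTTP site only redirects
# to HTTPS, its numbers mostly measure the redirect, not a real page load.

def crux_origins(domain):
    """Candidate origin keys to check for a bare domain, HTTPS first."""
    return [f"https://{domain}", f"http://{domain}"]

# Invented lookup table standing in for query results.
data = {
    "https://example.com": {"seb": 0.42},
    "http://example.com":  {"seb": 0.99},  # suspiciously high: likely just a redirect
}

for origin in crux_origins("example.com"):
    row = data.get(origin)
    if row:
        print(origin, row["seb"])
```

A near-perfect SEB on the HTTP origin is the tell described above: treat the HTTPS origin as the real measurement and ignore the redirect.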
So before you base all your decisions on just this, keep in mind that this is data only for Chrome users, over the past month, aggregated across the pages of your website. That's it. If you have any questions, you can ping me on Twitter at @everconfusedguy, and we write more blog posts on performance research like this at dexecure.com/blog. Thank you.