So what I'll do is start off with a small assumption. Let's take for granted that you are not the site owner or part of the team that actually built the site. How do you look at a site and figure out what the areas of improvement are, what you should watch out for, what's working great, and so on? One of the first things I do is, if possible, ask the site owners what the backend stack is. And sometimes, I think you would have first-hand experience in this, you'll get a mixed bag of responses. Sometimes you get all the information; sometimes the information is partial. A good tool to get you started with this, by the way, and I think it's always been my first stop, is BuiltWith. So for this session, we asked our friends and had a couple of opt-in sites where we were able to do some audits and analysis and show you how you can do this for your own site. So for example, if you are looking at the Newslaundry site and you don't know what the backend stack is, you can go to builtwith.com and see what stack it's using. It'll give you some good information. For example, you know that Newslaundry has Google Analytics. What that means is you can ask the site owner, hey, can I get some data on end user performance, the real user metrics? You know that it's at least being measured and there is a possibility of getting that information. Similarly, there's OneSignal, there's something to do with Facebook, there's Google Tag Manager, and I would also check things like Twitter and jQuery. I would always take this with a pinch of salt, because sometimes BuiltWith is accurate and sometimes it's not. But it gives you a good starting point. You know, for example, that this site is using Cloudflare.
So these are the things it quickly gives you, and I will talk about how you can get into these aspects in detail. Quick question on this, Satya. I have used this tool often, and people get really surprised that such a tool exists out there which can read my technology. Can you briefly explain how this tool works internally? Very briefly, not in depth. And should it be a cause of concern for any business owner if someone is able to find out these things? Also, as a side note, did the light just go off at your end? Yes, it has. There's a backup that will kick in in a minute. So, to start off, it's not a concern. Anything that shows up on BuiltWith is what's actually loaded on your page, and between that, the kind of JavaScript being used and the servers you're connecting to, those are essentially the signals this tool uses to figure out what the tech stack is. I can actually walk you through some of them. For Google Analytics, there will be a JavaScript snippet which you've included. The tool is able to read it on your pages, specifically on the URL that we put in, which was the homepage of Newslaundry. So it's able to say, okay, fine, Google Analytics is there. And it's able to get more in-depth details on what kind of analytics you're using simply by looking at which JavaScripts load and what function calls you're making. Similarly with Facebook and all of that. OneSignal, again: you would include either the OneSignal SDK service worker or a JavaScript, and that shows up because you've included some component which, even with just a Chrome browser, I could open up the DevTools and find if I knew exactly what I was looking for. Tag Manager, all of this works the same way. Everything that shows up on the front end is self-explanatory. But what about things like GoDaddy DNS?
That's because your site's name servers are hosted with GoDaddy, and I can get that information by just looking up the site. There is a record, the NS record, which will tell you what DNS provider you're using. Broadly, this is all public information, right? So essentially it boils down to this: everything it shows is public information. There is no hacking going on here. No, it's all based on signals that show up on your site, and it's public. Okay, so once you get a sense of the different tech stacks, the next thing you want to do is try and figure out what the site is. You might want to go to the homepage itself, for example newslaundry.com, and figure out what exactly the content of the site is. That's important because if you don't know what kind of site it is, it's hard to figure out what you should be looking for. The other thing you want to keep in mind, and this is important for different kinds of sites, is that most sites will have page templates. This is especially true for publications or blogs. Even in e-commerce, you have a homepage where you list all your top items, then you have a product page and a category page. That concept is common across different classes of websites. In a blog, you have your homepage, your categories, and your posts; in news publications, it's something similar. Even for sites that don't fall into these categories, as a site owner or developer you know the different templates you've used. That's important, and we'll use the different templates as a good starting point. Now, the way I went about identifying the templates here is this: the homepage is definitely a template, because in most cases it's unique. You can go to Reports, which shows up as a link on the homepage. When I go to it, it has a list of stories, so that seems unique. And if I go into one of these stories, it's likely going to have a single story.
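The signature matching described above, known scripts, CDN hostnames, and SDK URLs appearing in the page source, can be sketched in a few lines. This is a toy illustration, not BuiltWith's actual detection database; the signature table below is invented for the example.

```python
# A minimal sketch of how a BuiltWith-style tool can match technology
# signatures in a page's HTML. The signature table is illustrative only.
SIGNATURES = {
    "Google Analytics": ["google-analytics.com/analytics.js", "gtag("],
    "Google Tag Manager": ["googletagmanager.com/gtm.js"],
    "OneSignal": ["cdn.onesignal.com"],
    "jQuery": ["jquery"],
}

def detect_stack(html: str) -> list[str]:
    """Return the technologies whose signatures appear in the page source."""
    page = html.lower()
    return [tech for tech, patterns in SIGNATURES.items()
            if any(p.lower() in page for p in patterns)]
```

The DNS side works the same way in spirit: a public NS lookup (for example with `dig NS example.com`) reveals the DNS provider, no access to the site required.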
You can see that each of these has a different page template. So we'll use this as a starting point. Now, what you want to do is start off with WebPageTest and do a visual comparison. Now, the first thing I would like to do... Just wait a minute there. What is WebPageTest? Okay, that's a good question. WebPageTest is a tool that I use to load up pages and figure out how a page gets loaded in a Chrome or Firefox browser from different locations, and to dig deeper into what's showing up, what it means, and how to interpret the data. So it's a very visual tool which helps break down the performance and some of the other key data points for any URL that's publicly accessible. And why should we trust this tool? There are hundreds of them on the market. Sure. In fact, you don't have to use WebPageTest at all. WebPageTest is just one that makes it easy for you. It's freely available and it has multiple agents across the globe. You can in fact use the developer tools in Mozilla Firefox and Google Chrome. That works as well, except it works on your local browser. You would want to simulate what it looks like when you're making requests from different geographies. It doesn't always matter much, but if you have any geo-based logic on your site, or if you don't want to test from your local machine, or you want to share the results with somebody else, you would use something like WebPageTest. Very common use cases. Yeah. And the important thing to note about WebPageTest is that it's an open source offering. You don't have to use the publicly available WebPageTest instances. You can deploy your own private instance in Amazon or GCP and run as many tests as you want. In fact, that's one of the easiest ways to run bulk tests. If you want to run hundreds of thousands of tests, that's the way to do it, because the public instances will not work for you at that scale.
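For the bulk-testing case mentioned above, tests are usually submitted programmatically through WebPageTest's HTTP API (`runtest.php`) rather than the web form. Here is a sketch that only builds the request URL; the default host and the location label are assumptions for illustration, and valid location labels should be listed from your instance's `/getLocations.php`.

```python
from urllib.parse import urlencode

def build_test_request(page_url: str, api_key: str,
                       host: str = "https://www.webpagetest.org",
                       location: str = "Mumbai_IN:Chrome.3G",
                       runs: int = 3) -> str:
    """Build (but do not send) a WebPageTest runtest.php request URL."""
    params = {
        "url": page_url,      # page to test
        "k": api_key,         # API key (often not needed on private instances)
        "location": location, # agent location, browser, connectivity profile
        "runs": runs,         # repeat runs so you can take the median
        "f": "json",          # machine-readable response
    }
    return f"{host}/runtest.php?{urlencode(params)}"
```

Pointing `host` at a private instance in AWS or GCP is how you would drive hundreds or thousands of tests from a script.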
Okay, so what I was saying is, once you identify the different categories, the first thing you want to do is a visual comparison between your different pages. You either take the different categories and run a visual comparison, or do something a little more interesting. You know that for a website today, your top browser is Chrome and, depending on the region, Safari or Firefox, or a Chromium-based browser which behaves very similarly to Google Chrome, and you would like to see how your page loads up. Now, when you actually submit a test, it takes about a minute, minute and a half, to complete and show the results. So I've done these tests ahead of time, and essentially once the test is complete, the URL changes to something like this, which identifies the test. Can you just show, before skipping this, how to initiate the test once, and what is to be filled into which fields? Are we doing the advanced testing or the visual comparison? So I think in almost all cases the inputs are the same. Here you can add a label; you can just say, I want to test in Firefox. And what options does it give there? So there is a label, there is a URL, and, what are the test configurations available? Sure. Okay, so when you're doing the visual comparison, you'll notice that there is an option to add a label. So you can put in the Newslaundry page and label it homepage. And as we were discussing earlier, we're going to do the comparison between that and Reports. That's a category, so it's a category template. So we put that in, and we want to pick a story as well, so I'm just picking the first story. I would say don't start that, or actually you can start the test. You'll have the older one cached, right? I hope this will not kill the cache of the older one.
Yeah, the configuration available when you're doing a visual comparison, straight off the bat, is just whether you want to test it on mobile 4G, desktop, or a throttled 3G connection. So it's basically a combination of the type of device and the speed of the connection, essentially. Yeah, you can go ahead and kick off the test. You'll see that, as I was saying, it has to finish all of these tests. Now, we can let that run, but what I usually do is, if I had to do the same thing, I don't use the visual comparison option. I use the advanced testing and select the test location. This is a site which is more relevant in India, and right now I'm based out of Bangalore, so I want results relevant for me. I select the location as Mumbai, India. The Mumbai, India WebPageTest location offers me Chrome and Firefox; it does not offer Safari. There are some locations, I think, which offer Opera. Okay, so essentially in the advanced testing we now have far more options. Not only are we choosing between device and connection speed, but now you can decide which location the test is going to run from and which browser the test is going to run on. Plus, there is the device angle and the speed angle, right, the other two. Yes, there's the device angle and speed, and speed itself is broken down into latency and throughput. For example, if you look at... But tell me, Satya, what's wrong with running the tests on your own computer? Why do we need so many combinations or options? No, you can. By that I mean, why do we need to go 2G, 3G, 4G, and all those things? Why are so many options relevant, essentially? Okay, so whenever you're building sites, I think it's important to understand that right now the faster internet connections are available to only a fraction of your end users.
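The device and connection options discussed above correspond to traffic-shaping profiles: a capped downlink/uplink throughput plus an added round-trip latency. The numbers below are the commonly cited WebPageTest presets, but treat them as approximations and verify against your own instance's connectivity settings; the load-time estimate is a deliberately crude lower bound.

```python
# Commonly cited WebPageTest traffic-shaping presets (approximate):
#            (down_kbps, up_kbps, rtt_ms)
PROFILES = {
    "Cable":  (5000, 1000, 28),
    "DSL":    (1500, 384, 50),
    "3G":     (1600, 768, 300),
    "3GFast": (1600, 768, 150),
    "4G":     (9000, 9000, 170),
}

def min_load_time_ms(total_kb: float, round_trips: int, profile: str) -> float:
    """Crude lower bound on load time: raw transfer time plus latency cost.
    Ignores TCP slow start, request parallelism, and server think time."""
    down_kbps, _, rtt_ms = PROFILES[profile]
    transfer_ms = (total_kb * 8) / down_kbps * 1000
    return transfer_ms + round_trips * rtt_ms
```

Even this toy model shows why throttled profiles matter: the same 200 KB page that is near-instant on Cable pays seconds of latency tax on 3G.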
Even if you cater to a niche that's affluent and expected to have good internet connections, even in that case you can expect the last mile connectivity to be choppy, or you should at least build for it. You can always assume that there will be failures, or that the last mile connectivity is poor, because either your 4G is bad, or you're in a bad signal area, or, even if it's Wi-Fi, the Wi-Fi is not all that reliable. So I think it's important for site owners to build for the worst case. And here, at least when you're doing this type of analysis, it's important to understand how your site will load for a user who's opening it on a 4G or 3G connection. Because you might be sitting in Delhi or Bangalore, you might have a 200 Mbps connection, and your site just opens up in a second or two. But that's not really representative. As a site creator myself, I can completely vouch for the fact that all approvals happen on clients' own computers and on their own internet connections. And most client internet connections and computers are obviously coming from really privileged spaces. At that point it's really catering to maybe 1% of the users, and how the other 99% of users experience the website is something that often never passes through a client's awareness. Which is why this is something that can even help a client, in case they are a little tech savvy, or they can ask the developers to make sure to test across all these variations as well and maybe create a report out of it. Yeah, yeah. Look, I'll be very candid about it. When you're looking at end user performance, WebPageTest is not the kind of data you should be looking at. WebPageTest is a tool that should be used primarily in the developer's pipeline, that is, when they're optimizing performance or tweaking the website.
Or when you're looking to understand what the ballpark is: is my site doing really well, do I need to make improvements, or where do I stand compared with the competition? Those are the three scenarios where you should really use this. And all of these are guidelines only. Take them with a lot of salt, because none of these results will be exact. If you really need to figure out how end users perceive your site, you need to collect real user monitoring data, or RUM data, as we call it. This is data from end users' browsers as they visit your site. That is the right data set to be looking at for performance. And I'll be walking you through some data sets that are publicly available after we go over some parts of WebPageTest. I think I have derailed you from where you were going. So if you want to generate the first result, should we do that before looking at the advanced ones? Sure. What I'm going to do, in the interest of time, is quickly skip to some pre-baked results so that we can skip past setting up the test and waiting for it. If you have questions on how to set up an advanced test, put them in the chat or as a question, and I can answer that particular question. Okay, so like I said, you can do this comparison test either between different browsers or between different page templates. And that's important because you need to understand how your site performs on different browsers. Is there a single browser on which your site behaves differently? That's important because you might be testing on Chrome all the time, but if something is wrong on Firefox, you will never know. And WebPageTest gives you that quick view into what your site looks like in different browsers. And it's actually easy to visualize as well.
So in this test, we have Firefox in the top row and Chrome in the second row, and we are looking at the homepage. Now, straight off the bat, we can see that we are looking at DOM complete, which means that all the resources have been loaded, loaded in the sense that all the resources have been downloaded to the browser, and the DOM tree of the page has been completed. And for that event to trigger, on WebPageTest at least, there is a slight difference: Chrome finishes in 5.4 seconds and Firefox takes 6.3 seconds. Now, that in itself tells me nothing. So there are a few things you can do. You can look at what exactly is getting loaded. Each of these browsers has different ways in which it loads content, especially if your site uses HTTP/2 connections or, going forward, HTTP/3; the priorities of different resources differ, and you can actually visualize a lot of that in these tests. When you're looking at these tests individually, for example the Firefox test, you can open that up separately and figure out how each of the resources on the site is being loaded. For example, I think this section of the page, you can... Before you dive into this, can I ask you to just explain this visualization for us once? Just an overview before we get into the individual requests. Sure. What you're seeing here is a waterfall of all the requests that actually happen in the browser. When a browser makes a request, what happens is that your HTML page gets downloaded first. The next thing that happens is that the head section of your HTML is parsed. The browser figures out all the links in your head section, which is mostly blocking CSS, critical JavaScript, and maybe a favicon in an ideal scenario. Those are the three things that usually exist in the head section.
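That head-parsing step, where the browser discovers the CSS and script links in the `<head>`, can be sketched with a tiny parser. This is a toy illustration of the discovery process only; real browsers use a far more sophisticated preload scanner.

```python
from html.parser import HTMLParser

class HeadResources(HTMLParser):
    """Collect resources referenced from <head>, in document order, roughly
    as a browser discovers them while parsing. Illustrative only."""
    def __init__(self):
        super().__init__()
        self.in_head = False
        self.resources = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "head":
            self.in_head = True
        elif self.in_head and tag == "link" and "href" in a:
            self.resources.append(a["href"])   # stylesheets, icons, preloads
        elif self.in_head and tag == "script" and "src" in a:
            self.resources.append(a["src"])    # blocking or deferred scripts

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False
```

Feeding it a page's HTML yields exactly the set of requests you would expect at the top of the waterfall, before the body starts parsing.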
And once that's downloaded, the body of the HTML is parsed, and as and when it encounters different resources, they get downloaded and executed. Over here, what we're seeing is something similar. And this visualization is color coded, right? And there's a legend on top. Basically, the blue means it's HTML. As you can see, there are only two pieces of HTML that get loaded. And the orange ones are JavaScript. As you can see, the Newslaundry site relies heavily on JavaScript; there are a bunch of JavaScripts that get downloaded and executed. That tells you a couple of things. Maybe the site is built with Node.js, and I think BuiltWith told me it's Express.js, so that's likely to be true. That's where you start connecting the dots. And the important thing to note about some of these Node.js sites, or sites built around JavaScript, is that there's an app.js or a JS bundle that comes in which subsequently loads or constructs the rest of the DOM tree. I'll walk you through a couple of things I've run into where, when the first HTML comes in, there's actually nothing in the DOM tree; once the JavaScripts come in, they construct the rest of the DOM tree, which means that some of these tools can give you misleading results. It's important to keep that in mind and work around it. So where can you see where the DOM tree is getting created in this? Is that something the visualization shows? Yes, so the way to look at this visualization is that the first request is the HTML, because we know that the first resource is always the HTML. Everything that comes after that is color coded, and it all goes into the construction of both the CSS object model and the document object model.
So I think all of these resources are primarily JavaScript, and I think there's a OneSignal SDK of sorts, and then there are fonts. All of these get downloaded; they're needed to construct either the CSS object model or the document object model. And once the DOM is complete, your page is fully constructed. But you don't have to wait until the DOM is fully constructed; browsers are able to start rendering, depending on how you've laid out the different resources on your page. That's why it's important to get your head section right. If you have your critical CSS and JavaScript in there, the browser is able to download them straight off the bat and start rendering content on the page. So one of the things that I always try to look at in a result like this is two or three things, and it would be great if you could tell us a little bit more about them. One is the number of resources being requested, or the distinct number of requests being made. Are they all intentional? Are they all things that we designed for and created, or are any of them just happening without our knowledge at all? That generally means it's something we have completely overlooked, or we have accidentally included something, or some script is further calling in some other script, and there is potential scope for optimization there. So one thing is just auditing all the requests being made: are they the ones we actually intended to load? And also looking at the count of resources getting loaded. The other thing I take a look at, if you just scroll down a bit, right at the top. No, the other way, scroll down. Yeah, these numbers: the first view, when it starts rendering, what the speed index is, and so on. These are certain numbers we also take a look at. So would you comment on these two points, the number of resources and these numbers?
Yeah, so I just opened up the Chrome version of the same test. Chrome has some additional metrics which make it more interesting for this conversation. So when we ran this WebPageTest, we did two things. One is we assume that your browser has never visited the site and is visiting it for the very first time: how does the page load at that point? As you can imagine, when you're visiting for the absolute first time, every piece of content has to be fetched from the network; nothing is available in the browser cache. That usually means it's going to take a little longer. That's called the first view. And in the test itself, when I was running it, there's something called the repeat view, which asks: once you've visited the site, what happens when you go to the same site again? And I'll just give you the high level view. For the first view, it takes about 6.7 seconds for the site to load. For the repeat view, it's two seconds. What that means is that the delta of about four and a half seconds between the two is the time required to fetch content which is now locally cached in the browser and is not going to change. These are things like your images, CSS, and JavaScript; they don't change frequently in most situations. There are situations where content is dynamic and may not be cached in the browser, but we'll leave that as an outlier. In most cases it can be cached, and your browser never has to make that request again unless you clear the browser cache or go to an incognito window. But that gives you an understanding of what page performance looks like once a user has come into your site and is navigating between your different templates. That's important. For example, I go to a publishing site, a newspaper, and, let's say, click on the first story, and the story opens up.
It's important for you to use common resources so that subsequent pages open up much faster. So all of these are optimizations. Speaking of which, when you're looking at these different optimizations, there are a few things you have to keep in mind. We spoke about planning for bad connections straight off. Always plan for bad connections. It's important for you to look at milestone timings and resource timings, and keep those in mind to figure out what you, as a developer or site owner, should focus on. Milestone timings and resource timings are, like Sovik mentioned, what's in the summary section of the WebPageTest result. It tells you what the first byte time is. In this case, it's two and a half seconds. There's no absolute measure of whether that's good or bad, but in this case two and a half seconds seems a little on the higher side. One thing you have to keep in mind, though, is that this is a WebPageTest run. It is not a real measure of the performance of your website; these numbers don't reflect what happens in the real world. So we'll get to that. It will also not be consistent: a second run might give a completely different set of numbers. Yes. Not wildly off, but there are reasons why they will differ; it would be good to discuss that later in the session. Yes. In fact, I've seen it vary by over 300% between runs just a couple of minutes apart. So take all of this data with a pinch of salt. These are good guidelines, but not good absolute numbers. But let's assume that these numbers, on a relative scale, are fairly true, and I can walk you through some of the examples. For example, if the first byte is at 2.4 seconds and your start render is 400 milliseconds after that, that tells you it takes about 400 milliseconds after the first byte for content to start rendering.
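Because individual runs can swing by 300% as described above, a common mitigation is to run several tests and compare medians rather than single results; WebPageTest itself reports a median run for multi-run tests. A sketch, with invented sample data:

```python
from statistics import median

def median_metric(runs: list[dict], metric: str) -> float:
    """Median of a timing metric (in ms) across several test runs.
    The run dictionaries here are invented sample data, not a real
    WebPageTest result format."""
    return median(run[metric] for run in runs)
```

For example, three runs with first byte times of 2400 ms, 900 ms, and 1400 ms yield a median of 1400 ms, a far more stable number to track over time than any single run.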
Your largest contentful paint, which is a Core Web Vital that Google is actively tracking for SEO purposes, is four and a half seconds. You might want to focus on that. Cumulative layout shift measures whether, once the DOM is constructed, there are visual shifts on the page that are too jarring for the end user. Google has actively made some changes which focus on cumulative layout shift, and in this case there's hardly any shifting in the layout. Blocking time, again, is the time taken by all the resources that are render-blocking, which means that until that particular script or CSS is fully fetched, parsed, and processed, nothing will render on the page. That's why it's important to get your critical CSS and JavaScript into the head section itself. Just for the benefit of the audience here, I want to add that resources can be added to a page in two different manners. In one manner, all resources must load first, and only then will the page start showing up. In the other, the page starts showing up and the non-critical things keep loading slowly in the background. So this total blocking resource time, you want it to be as low as possible. And if there is any scope for cutting down a resource and making it load asynchronously or in a deferred manner, that should be the way to go. And is there any benchmark for these numbers, Satya? Can you provide any benchmark or direction for these numbers, or what makes them turn green or red in this tool? So that's a good question. These web vitals are purely driven by Google, and Google gives you guidelines. What I'll do is walk you through something called the Chrome User Experience Report.
And that will give you a good spread of how Google perceives these numbers. They keep tweaking how these different metrics affect your PageSpeed scores and, from Google's perspective, how effective or how fast your site is. They tweak the weightage of a lot of these metrics, and it often changes with time. I don't know if the exact weightage is public information, so I'll get back to you on that. Okay, there's a question coming in from YouTube, Satya: wanted to confirm, does this run and create reports for only the landing page or for the whole app? Or does one need to go to specific URLs in order to get reports for each of the pages? Okay, that's a good question. This will only create a report for the URL you submit, in this case the homepage, newslaundry.com, and nothing else. And it's important to know that sometimes you might have an infinite scrolling webpage, but that's predicated on being able to pass an input and scroll to a particular point before subsequent content loads. WebPageTest will not provide any of those inputs by default, so it's just going to fetch and execute. Okay, so that gives you a little bit of an idea of the waterfall itself. You can do a full-blown performance audit to understand what you should focus on using WebPageTest and your DevTools. These are two powerful options, and both give you information that's specific and sometimes more easily consumable in one or the other. I like WebPageTest because it's web-based and easily shareable. And outside of the waterfall, there are a few other things you need to pay attention to in the WebPageTest tool. The second one that I quickly want to draw your attention to is the connection view. The connection view essentially tells you how many connections your site really makes.
And an easy way to look at this is: how much third-party content, or how many unique subdomains, get loaded on your site, and how many HTTP connections the browser has to make to fetch all the resources. In this case, there are about 25. I would say 25 is not too bad, not small either; it's somewhere in the middle. I've seen sites with a lot more connections, and I've seen sites with fewer as well. But what's important is that in this timeline, if you look at the connections for Newslaundry, a lot of the content actually comes from www.newslaundry.com or assettype.com. The site uses Quintype, and assettype.com is a domain belonging to Quintype. Between newslaundry.com and assettype.com, almost all resources get downloaded; the rest of it is all third-party content. So that's a good indication of how your site is constructed and how the resources are coming in. Again, if you want to dig a little deeper, you can figure out what goes in before start render. We saw two HTML requests in the waterfall, and what was that? Yeah. And that shows up in the before-start-render section of the request details as well, so you can actually figure out what this is. Since this is HTML, it should render some text. And this is how you basically dig down and figure out what exactly is happening if you've not built the site yourself. If you've built it, you probably already know. If there are two HTMLs coming in, it must be an iframe or something, right? Yes, which in turn pulls in the second one. Yeah. Because ordinarily you will never see two HTMLs; under usual circumstances you should never see two HTMLs, at least for the most part, I don't. Okay. And some of the other things are in the performance review. It's just a quick visual table which tells you how you're using connections. Now, we saw in the earlier connection section that connections are getting reused, right?
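The connection view boils down to a simple tally: how many distinct hosts does the page talk to, and how many requests go to each? Given the request URLs from a waterfall export, that tally is a one-liner (the example URLs below are made up for illustration):

```python
from urllib.parse import urlparse
from collections import Counter

def hosts_by_request_count(request_urls: list[str]) -> Counter:
    """Count requests per hostname, mirroring what the connection view
    shows: first-party hosts should dominate, the long tail is third-party."""
    return Counter(urlparse(u).netloc for u in request_urls)
```

The size of the resulting counter is the number of unique hosts (roughly the number of connections on HTTP/2, where one connection per host is typical), and the per-host counts show where your bytes actually come from.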
Now, this site is set up the right way, but let's say there was some issue with the way the headers are set, and a Connection: close directive was getting passed with every resource. What you would see is, one, that keep-alive shows as not set, and two, in the connection view you'd see multiple connections in this table. You would see a row for every resource, which means that a new connection is made to fetch each single resource. So this view is also good for figuring out whether your connections are being used efficiently. So in the place where we see multiple connections, to one of the Google domains, rows seven to eleven: does that mean there is a problem there, or are those different domains? No, those are OCSP requests. I've seen them a lot in Firefox; in Chrome, I don't see them. Let me explain this call. It's a special call, and you can ignore it for the most part. You've not included it as part of your site at all; it's a call that the browser generates. Yes, it's to verify that the SSL certificate your site presents has not been revoked. So it's a good call, but you can safely ignore it. The ones you want to look at are the others. Okay. The other thing is that connection reuse is important, and gzip compression. That's a low hanging fruit that, if you miss it, you could pay a big performance penalty for. I can quickly walk you through what this is. Take this resource: we saw that this particular site has a lot of JavaScript. Now, this JavaScript is compressed, in the sense that it's gzip compressed when it's being transferred. So the bytes downloaded is 10 KB, but it gets uncompressed by the browser and it's 41 KB uncompressed. It's important to get that compression ratio on the wire; otherwise you're transferring more content, and on slower connections that makes the situation much worse.
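The 10 KB-on-the-wire versus 41 KB-uncompressed win described above is easy to reproduce: text assets like HTML, CSS, JS, and JSON are highly repetitive, so gzip typically shrinks them severalfold. The payload below is synthetic, just to demonstrate the effect:

```python
import gzip

# Synthetic, repetitive "JavaScript" payload standing in for a real bundle.
payload = b'function track(event){console.log("event",event);}' * 200

compressed = gzip.compress(payload)
ratio = len(payload) / len(compressed)  # how many times smaller on the wire
```

On a throttled 3G profile, every kilobyte saved on the wire is transfer time saved, which is why enabling compression is called the lowest hanging fruit.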
And this is probably the lowest-hanging fruit: you should always enable gzip compression for text content. Some font types, the web fonts, are compressed by default, so you don't have to enable it for those, but the other text content, HTML, JSON, XML and so on, should all be compressed. Images are a different matter; image compression is a topic in itself. What I'd like to show you is a neat little utility available on WebPageTest called Image Analysis. It goes through all the images on your site and tells you what they would look like if you optimized them. In this particular analysis you'll see it only picks up around six images, and that's a small gotcha I wanted to quickly share. The Image Analysis tool fetches the HTML, parses the HTML for image links directly, and tries to optimize those images. But the way this site is built, the HTML loads first and then a JavaScript pulls in the images, which is why they don't get picked up. Basically lazy loading. There are different ways to do lazy loading, but these are elements specifically loaded by JavaScript. Okay, we can go over one of the other examples where you can see image optimization. The other low-hanging fruit is caching in the browser. Compression, image optimization and browser caching: at least compression, connection keep-alive and browser caching are all driven by HTTP headers. You must set the right Cache-Control headers so that resources are cached by the browser. We spoke about first view and repeat view. You want to make sure that in the repeat view, if a resource is supposed to be available in the browser cache, it is, and you don't make an unnecessary request for it. And WebPageTest gives you a good indication of where you stand on all of these.
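Since keep-alive, compression and caching are all just HTTP headers, they're easy to sanity-check programmatically. A rough sketch of the caching part (the parsing is simplified; strictly speaking, `no-cache` still allows caching but forces revalidation):

```python
def cache_freshness(headers: dict) -> int:
    """Seconds a browser may serve this response from cache without
    revalidating, based on the Cache-Control header (0 = refetch/revalidate).

    Simplified: treats no-cache like no-store and ignores Expires/ETag.
    """
    cc = headers.get("Cache-Control", "").lower()
    directives = [d.strip() for d in cc.split(",") if d.strip()]
    if "no-store" in directives or "no-cache" in directives:
        return 0
    for d in directives:
        if d.startswith("max-age="):
            return int(d.split("=", 1)[1])
    return 0  # no explicit freshness: assume revalidation every time

print(cache_freshness({"Cache-Control": "public, max-age=31536000"}))  # 31536000
print(cache_freshness({"Cache-Control": "no-store"}))                  # 0
```

A year-long `max-age` on fingerprinted static assets is exactly the kind of thing that makes the repeat view fast: the browser never re-requests them.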
You can ignore the security score, but for first byte it's going to tell you that your first byte time is a little high and maybe you have to pay attention to it. Do explain what first byte is; this is something I place a lot of emphasis on. Okay. So, time to first byte: when you go to any site, you're making a request. First the DNS resolution happens, then a TCP connection is established and the SSL handshake takes place. You make a request for that particular page or resource, and the server sends the content back. Now, the content can come back in a single response, or segmented and sent in multiple packets. The important thing is when the first packet arrives at your browser: you're essentially measuring the latency for the request to go out from your browser and for the server to respond. It's a good indication of network-related issues between the end user and the server. That's time to first byte. And I would also add the page generation speed. Essentially, if your time to first byte is up, there are possibly two issues. Either the network isn't fast enough, or the backend of your server isn't optimized enough to generate the page: maybe you're not caching internally in the server itself, so every time PHP kicks in, the database connections kick in, and it all takes a lot of time. The second part is the time the bytes take to travel to the server. You can eliminate much of that variable by picking the right server location while you're running the WebPageTest. But it's important to remember that while you can change server locations in WebPageTest, your end users are fixed and your actual application is fixed in its data center, so it's important to optimize for that.
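In Navigation Timing terms, TTFB is simply the gap between the start of the navigation and `responseStart`, the moment that first packet lands. A toy calculation with made-up millisecond timestamps, in the order the browser records them:

```python
# Hypothetical millisecond marks from a single page navigation:
timings = {
    "navigationStart": 0,
    "domainLookupEnd": 40,    # DNS resolution done
    "connectEnd": 160,        # TCP + TLS handshakes done
    "requestStart": 161,      # request goes out
    "responseStart": 480,     # first byte of the response arrives
}

ttfb = timings["responseStart"] - timings["navigationStart"]
server_wait = timings["responseStart"] - timings["requestStart"]
print(f"TTFB: {ttfb} ms, of which ~{server_wait} ms is "
      f"round trip plus page generation")  # TTFB: 480 ms, of which ~319 ms ...
```

Splitting the total this way shows why both fixes mentioned above matter: `connectEnd - navigationStart` is pure network setup (helped by server location and CDNs), while `responseStart - requestStart` is dominated by backend generation time.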
It's a critical metric that everyone should pay attention to and optimize. The other thing WebPageTest tells you is "CDN detected": if you're using a content delivery network, it's able to detect it. I think we spoke about this in the last session; it looks at the DNS and the way the DNS resolution happens to detect whether there's a CDN. I'll quickly move on to some of the interesting ones. If you're a website owner, sometimes you don't know, and it works both ways: if you're part of the dev team, your ops team or your marketing team might include third-party content you're not sure of or don't know about, and you want to figure out what all the third-party content is and how it affects your page performance. WebPageTest gives you a good indication of that. It first lists out all the third-party content that goes into rendering a page, and it gives you both the number of requests to the different domains and how much page weight each of those domains contributes. A good example: we saw that this site had a lot of JavaScript and images coming in from assettype.com, and that accounted for over one third of the website, both in bytes and in number of requests. So that's a good indication. There was another pie chart, the content breakdown. Yes, this one, but there was another pie chart you looked at briefly which would break things down between images, JavaScript and the different types of resources; that was on some other page, right? Yes, that's the content breakdown itself, at the bottom of that, over here. You can see that this particular site is about 50% JavaScript. We can look at some of the other sites; let me quickly pull up the other sites we've run tests on. But none of these measures are necessarily good or bad.
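The per-domain page-weight table WebPageTest shows can be approximated from any list of (URL, transfer size) pairs. A sketch with invented numbers, just to mirror the "over one third from assettype" observation:

```python
from collections import defaultdict
from urllib.parse import urlparse

def bytes_by_domain(resources):
    """Aggregate transferred bytes per hostname; return (host, share) pairs."""
    totals = defaultdict(int)
    for url, size in resources:
        totals[urlparse(url).hostname] += size
    grand = sum(totals.values())
    return sorted(((h, b / grand) for h, b in totals.items()),
                  key=lambda p: p[1], reverse=True)

# Made-up sizes, not actual measurements from the site discussed:
resources = [
    ("https://www.newslaundry.com/", 120_000),
    ("https://images.assettype.com/a.jpg", 300_000),
    ("https://images.assettype.com/b.js", 180_000),
    ("https://fonts.gstatic.com/f.woff2", 60_000),
    ("https://www.youtube.com/embed.js", 540_000),
]
for host, share in bytes_by_domain(resources):
    print(f"{host:28s} {share:5.1%}")
```

Sorting by share makes the heaviest contributor obvious at a glance, which is exactly how you'd spot a single third party dominating your page weight.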
I would say you benchmark yourself the first time and then see how you can keep optimizing over a period. And like I said, every site owner or technical team picks a stack, and sometimes your stack drives what your content breakdown is, or how the resources are delivered. Sometimes you might use a third party: that could be WordPress, it could be another CMS, and that platform drives how your sites are rendered and loaded. So you have only a little control over it, but it's important to understand how it's set up so that you can make good decisions based on the tests. I would somewhat disagree with the claim that you have little control over that. As long as you're aware of the implications, especially with technologies that are self-hosted, and I'd say even today a large number of sites still run self-hosted, you have control over the entire stack. It's like saying, I'm building my site on top of a certain platform: your platform choice can be based on some of these metrics in the first place, and then there's room within the platforms too. I don't consider any of these technologies to be inherently underperforming, no matter which CMS or which backend technology you choose. The reason you've heard of them is that they've already stood the test of quality, with a lot of great implementations out in the wild. So they can be tweaked to be performant. But sometimes the awareness is lacking, which is why you don't dig deeper. And that is where these tools come in and make it easy to ask: why is it like this? Yeah, I agree with that. And because of the lack of awareness, you tend to misconfigure things or not use them the right way, and that aggravates the problem.
Okay, so another important thing you might run into is third-party content, like the different domains we saw. You're back on the first site, okay, go on. Yes, I'm just picking any of these tests to illustrate the point. Now, these sites have third-party content. You brought up a great point: if it's a self-hosted solution, you always have control over it, and I agree with that. But it's a well-known fact that you have no control over how Facebook or Google run their services. Yeah, absolutely. The only control you have is to not include the third-party content, and right now browsers like Safari are saying they'll block many of these things for you. And it's important to understand what really happens to your site when one of the third-party providers or services you use goes down. This can be even simple things: for example, if you're using jQuery on your site and it's critical for your site to function, but you're not serving jQuery from your own domain or your own server and instead include it from some other service, you might want to understand the implications, or measure the risk. WebPageTest actually gives you a good way to do that. In the last screen itself it was very interesting: just two external resources were taking up so many bytes in that last report. Wait, one was the fonts and the other some YouTube embed, probably; if you add them up, that's a big share of the byte size that gets added. A lot. The percentage breakdown in the pie chart is where I was picking that up from. Yes, about 32% is the YouTube embed, and the fonts, once you read the colour coding correctly, turn out to be about 7%.
But it's important to understand what the implications are. You can assume that the big providers try to ensure their services are always up and running, but the internet is prone to failure. Things can go wrong, and as a business you want to make sure you understand this. There's an audience question, Satya. I just want to add one caveat to your point that you can always trust a big organization to eventually incorporate all the performance benchmarks: big organizations are also slow, slow to adapt to change and slow to make certain changes. You'll often, funnily, find that many of the recommendations a company like Google makes, it is itself unable to follow on its own websites. The people setting the benchmarks are different from the people adopting them, and it's hard for a large product to adopt them. So yes, eventually they'll get it right, but sometimes you can stay ahead of the curve by not including some of those external resources at all. The question from the audience, again from Lawrence: what kind of strategy or tools do you follow or use? Because in most cases landing pages are lightweight, while something like a dashboard is usually heavy in terms of JavaScript payload and other assets. So what kind of strategy? I assume the audience member is asking: as a ratio of JS versus other assets, what would be a good measure?
Is there a benchmark for what your content breakdown, the MIME-type breakdown, should ideally look like? So, in theory the ideal response would be a single HTML with inline CSS and JavaScript, but that's not feasible: it's not maintainable, you don't want to put everything in the same page, you cannot reuse it across pages, and so on. There are a lot of considerations that go into it. One of the patterns we used to follow a couple of years back was: if it's a small JavaScript or CSS file, inline it. With HTTP/2, that consideration has significantly gone down. You don't really care as much about having too many requests, because HTTP/2 multiplexes well: you no longer have the old limitation on the number of parallel requests a browser can make, and so on. A lot of improvements have come in; from a protocol perspective there are continuous advancements being made, and as developers we increasingly don't have to worry about some of those things. Having said that, from a performance standpoint you can still tweak a few things. If you're using JavaScript, try to do some amount of code splitting, and ensure that only what the page really needs gets loaded on that page. The same applies to CSS: if you refactor it well and split it up the right way, it's still modular, you can include it in different pages as and when it's needed, and whatever parts of the code you're not using, you don't include. I think that's a better measure of how to go about it than setting a fixed goal for how much should be JavaScript versus CSS. Souvik can probably talk more about this.
A lot of what you can do in JavaScript can also be done in CSS, especially with animation; there's more than one way to do it and more than one way to display the content. So I don't think there is a one-size-fits-all. Two more questions, Satya. Lawrence also asks: what if you want to monitor and test a specific part of an application? So if you have an application and you want to monitor only a small part of it, how do you do that? It's a great question. Application monitoring essentially falls into two big buckets. You might start by monitoring your backend infrastructure: measuring your server times, how your servers are rendering, how they reach their different data sources. That's a topic in itself. But if you focus on the front end, as in measuring a portion of your application from a browser standpoint, you can go as granular as a single URL if you want to. Provided it's public, right? Yes, provided it's public. Let me just take an example, using this site. Let's make some broad assumptions: say this media section is rendered by calling a separate API, call it a load-media API, and you want to measure the performance of that first. You would start by monitoring those endpoints and monitoring the resource timings. There are RUM tools which offer custom timers; there are lots of RUM tools on the market, including open-source options, which allow you to set custom timers. These custom timers work well with the timing APIs the browser offers, so it's consistent across different browsers. And you will be able to monitor specific sections of the website, if that was the question.
But if the question is more about measuring microservices, the approach is a little different. All right, okay. Another question from the audience: Meera asks, is there a way to test the performance of pages with forms or uploads? Yes, there is; essentially you're making a POST call. By the way, Meera, since I see you're in Zoom, in case you want to ask the question directly to Satya and discuss it quickly, feel free to raise your hand or unmute yourself. Satya, you can continue answering the question. Sure. If the question is specifically how you test a form submit on WebPageTest: WebPageTest is actually very flexible, you can script it. When you enter a URL, you're essentially making a GET call, but you can make a POST call too. There's documentation available, which is also in the link, on how to submit the specific parameters in the right format and so on; you're essentially playing with the advanced options available in WebPageTest. But that's just one tool. Typically you would use a synthetic testing tool that allows you to make POST calls, which is mostly the case for forms, or PUT calls if it's a custom form creating a new record, and so on. Tools are available. My recommendation: if you truly want to test the performance of a form submit, run a distributed test with more than one user, because that's what production use cases tend to look like. You're never going to have a single user only; they're usually distributed, and you need to figure out how those requests behave at different points in time.
So I would say: run the same test iteratively, at a five-minute, 15-minute or 60-minute interval, over something like a three-week span, to get a sense of what a release looks like. If you have faster releases, just shorten that cycle, but the principle holds true: keep running those tests from different agents at a fixed interval. Okay Satya, a quick time check at this point. I don't know what else you plan to cover; there is also a RUM-related question that I'm not taking up immediately. Whenever you enter that segment, I'd like to start with that question. If there are other things you want to cover at this point, just let me know. So, I was talking about single points of failure; let me close that off. We were talking about how almost all sites have some third-party content. WebPageTest allows you to simulate single points of failure; in this case let me show you what I've done, I've simulated a failure. Also tell us how you did what you've just done: how do you create a single point of failure? Okay, so when you're starting a test, there is a box for single point of failure (SPOF). Now for this site, I think this was Newslaundry, we had a copy of the test as well. What you're essentially doing is finding and identifying all the third-party domains a particular website uses, and asking: what happens if, say, facebook.com goes down? I have plugins on Facebook, probably for login or tracking. What happens when they go down? Will my site work? Will there be issues loading it? Essentially, you just enter those different host names in the SPOF text area and run the test.
And what you're doing is excluding a particular set of domain names, single or multiple, and making them fail: any request to those domains will not succeed. And then, what happens? You're simulating the scenario we saw on another site, where resources were being called from about 25 different domains. If any one or two of them fail, because you don't have control over those external resources, then what happens? That's the what-if scenario. Okay, so what happens then? In this case, we saw that when you fail the Google and Facebook domains for the various tracking plugins and Google Analytics, and you fail the fonts, the site still visually loads. But if you look at the nav timings for that particular test from a browser standpoint, document complete takes 49 seconds, and that will be reflected in your metrics as well. It's important to know that even in case of failure, visually this site looks just fine, and you can see that by going through the filmstrip. It manages to load the entire thing. Yeah, there's no significant difference; there's a slight shift in when the site starts rendering content, but that difference is so small that I couldn't say whether it's caused by the failure. But why would the load time increase to 49 seconds? Why does that happen? Okay, so typically, when a browser fails to fetch some content, and this happens for all content, it will automatically retry that particular connection, that particular request. Before the retry kicks in, the first request has to fail, and for the first request to fail takes a while. That's essentially what the simulation does: it fails the request, but not immediately. It has to wait for the connection timeout, and that's when the failure happens.
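The arithmetic behind that 49-second document complete is worth spelling out: each blocked request sits through a full connection timeout before it fails, and retries multiply the wait. The numbers below are illustrative, not Chrome's actual timeout or retry values:

```python
# Rough model of why a failed third-party host inflates "document complete".
connect_timeout_s = 15   # assumed time for one connection attempt to time out
retries = 2              # assumed retries after the first failed attempt
blocked_in_series = 1    # e.g. one render-blocking script from the dead host

stall = connect_timeout_s * (1 + retries) * blocked_in_series
print(f"worst-case stall: {stall} s before the browser gives up")  # 45 s
```

With these assumed values a single unreachable host already stalls the page for 45 seconds, which is the same order of magnitude as the 49 seconds observed in the SPOF test.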
All of this adds up cumulatively. If the browser makes three retries, you're essentially waiting for three timeouts before it gives up. That's what typically happens, and you can actually see it when there is a failure. So you're saying that on many sites, a failure might leave even the visual rendering in a broken state? Yes, that is something you need to test for, just to make sure you're mitigating the risks. Okay. And we spoke a lot about RUM data. Should I start with the question on RUM? Venki Shetty asks: what is the recommended way to measure real-time web performance as experienced by end users across devices and networks? That's a great starting question for RUM. Okay. Let's hold off on the real-time aspect, because I think that's a logistical challenge. Essentially, for any given website, you want to measure what your end users are actually experiencing on your website. You can simulate it, you can run back-to-back tests all day long, but those are synthetic tests. By that I mean your test agents, the nodes running the tests, sit in data centers with very, very good-quality internet connections: no disruptions, no congestion on those networks. Real networks have issues. I had a hiccup at the beginning of this session: my internet reset, so I switched to a different connection, and Zoom kept working just fine. Similarly, you have to make sure that under poor conditions your site also works well. The way to gather this data is by getting the navigation timings from the browsers. As a standard, all browsers today expose the Navigation Timing APIs, which real user monitoring tools plug into to gather the data, and it's consistent across browsers.
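Once those timing beacons reach your server, the useful aggregation is percentiles rather than averages, since a handful of very slow users would otherwise skew everything. A minimal nearest-rank sketch over made-up TTFB samples:

```python
def percentile(values, p):
    """Nearest-rank percentile (0 < p <= 100) of a list of samples."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(p / 100 * len(ordered)) - 1))
    return ordered[k]

# Hypothetical TTFB samples (ms) beaconed from real users:
samples = [120, 450, 380, 2200, 310, 95, 640, 510, 1800, 275]
print("median:", percentile(samples, 50), "ms")  # median: 380 ms
print("p90:   ", percentile(samples, 90), "ms")  # p90:    1800 ms
```

Note how the median looks healthy while the p90 exposes the users on poor connections, which is exactly the population synthetic tests from data centers never show you.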
I think that standardization has been a good thing for all of us. What you want to do is collect that RUM data and, depending on the size of your site, possibly sample it: if your site isn't very heavily visited, you can potentially keep all the data. Google Analytics, for example, samples at around 1%, and even a 1% sample is statistically very relevant, especially in large demographics like India. With millions of users coming in, 1% is a lot of data, and it gives you a good indication. A good segue from that is the next thing I was going to talk about: the Chrome User Experience dataset, also called CrUX. Quick time check before you get into this: we would like to close by 12:30. If anyone has any questions, feel free to drop your comments on YouTube or here so we can take the final ones. I assume this is the final segment Satya will be covering in this talk. Go on, Satya. Yeah. At the end of the session we can also see if we want to do a continuation at some point. Sure. As in right after this, or in a follow-up session? No, as a follow-up session. Okay, sounds great. So as Satya enters this last segment: if any of you feel there should be further, more advanced sessions digging into the specifics, or there are specific things you want us to cover in greater detail, because this is a very general overview as you can see, and there's only so much you can cover in an hour or an hour and a half, please drop your thoughts and comments here as well as on hasgeek.com slash content web. I'll announce this once more at the end. So Satya, please continue. Yeah. So, a good source of data.
If you're not already using a RUM tool on your website, whether that's Google Analytics or a similar tool that measures end-user performance, something like mPulse, or Boomerang which is open source, or something else: in the absence of such tools, there's still data available. Users on Google Chrome can opt in to send data to Google, and Google makes that dataset publicly available. It's called the Chrome User Experience Report, and it's free to use. Almost all sites eventually make it into some sample, depending on who's sending data back to Google, and this is public information. What I can do is put up the steps for creating a CrUX report for your own site on the Hasgeek page; it takes only a few minutes. I've created some pre-made dashboards for this site. What this CrUX report shows you is similar to what PageSpeed Insights gives you, but it's not simulated and it's not running from a single agent: it's information gathered from actual end users, and it has some historical data as well. To quickly run through some interesting data points I found: look at time to first byte. You might want to track time to first byte historically. Is my site doing better or worse over time? Through accumulation of technical debt, is my site progressively getting slower because I've added plugins and third-party content over time and never cleaned them up? This gives you a good indication. Time to first byte, like Souvik mentioned, covers both the network part and the server generating the individual pages. Over time, if your servers are getting loaded, you would see time to first byte going downhill. In this case it's doing fairly well, and I think it's consistent. But like I said, this is a very, very small sample, so take all of this data with a pinch of salt.
Now, if you look at DOM content loaded, again you'll see that compared to October 2019 there's been a great improvement: DOM load is under 1.5 seconds as of July for about 50% of the sampled users, which is great. In October it was about 20%. There are some other interesting data points as well. This one is really striking: first paint. When you look at this dataset, you can almost tell there was a significant change. I don't know what it was, but something happened between December and January and was then rolled out fully, and it improved the first paint time on the site. That means some template changed, or the way content was rendered on that page changed. So the RUM data is really critical for understanding what kind of end users are coming in and what the performance on your site is. For example, if you want to understand where your end users are coming from and what to focus on: on this site, you can see that 93% of the users in this CrUX dataset are coming from mobile devices. That means you have to focus on your mobile website more than on your desktop website. Things like that are actionable; you can figure out where to spend your time and resources, provided your site is live and has a significant sample size. These datasets are very actionable, but they're not a replacement for RUM tools. You should have your own RUM tool; your Google Analytics data would probably differ a little from this. While CrUX is a good place to start, RUM tools will give you the insights to make actionable decisions. Those are some of the things I wanted to cover. And as Souvik mentioned, a lot of these topics I've just skimmed through, because many of them are fairly advanced. Earlier we mentioned WebPageTest and Chrome DevTools; each of those in itself is a very vast topic.
So if you have any specific questions, post them and we can probably take them up. Right, thanks a lot, Satya. Satya has promised, on the video and even before it, that he'll share the steps for producing a Chrome User Experience report like this for your own website. In my opinion this is great, because not everyone has the knowledge, the technical skills, or even the bandwidth to set up their own RUM tools. Especially because with RUM tools you not only have to do the monitoring, you also have to do the data collection and then figure out a way to process it. There are more and more tools coming up which can make your life easier, but whenever you set one up, you'll still need months of data before you can see a report like this; whereas because CrUX has historical data, you can see trends of how you're getting better or worse over a period of time. This was a very insightful session, at least for me, Satya, and I'm sure it was for the audience as well. If there are any questions, audience, feel free to shoot over the next minute or so, then we can take them; otherwise we'll close. One last question before we close, Satya: what aspects of performance did we not touch upon at all in today's conversation? Do you think we left aside certain things that were beyond the focus of the tools we discussed? One I can definitely name is server-side performance, the actual page generation bits. And that is one session I can try to plan for the future: beyond the things that can be measured from outside the system, what can you do internally on the server for performance? Anything else you can think of? Okay, I briefly spoke about this, but a lot of the headings or overviews WebPageTest gives you are actually vast topics in themselves.
For example, optimizing JavaScript and CSS is a topic on its own. Images are a topic on their own; there's a lot of optimization you can do. We didn't talk about how the sites we looked at in this session have each handled CSS, JavaScript and images differently. They all have different approaches, and all of them are right in their own ways, so there's no one-size-fits-all. You just need to ensure you're getting the end result you're looking for. Another thing you could do is figure out what the critical resources for your site are and how you can load them ahead of time. You mentioned lazy loading; you can also tweak the priority of some requests in a lot of browsers, though I wouldn't say all. Safari has some challenges, but on Chrome you can definitely play around with it, and on Firefox too. So there are capabilities available that you should definitely use if they give you performance benefits. And beyond that, measurement itself: it may not require a session of its own, but it's worth talking about the right way to measure, how you measure on an ongoing basis, take effective action, and validate that it made an impact on your site. You have to look at all of these holistically. I don't think there is a silver bullet for performance. You just have to figure out the one thing that's going to give you the maximum benefit and go after it.