Hello and welcome to the pre-Christmas JavaScript SEO Office Hours. My name is Martin Splitt. I am in the Search Relations team at Google, and I am really happy to answer any questions about JavaScript and, well, Google Search. Today we have a few questions submitted through YouTube, and we have a few people in the audience live today as well. We do these roughly fortnightly, so every 14 days, roughly, I am doing these sessions. There is a thread in our YouTube channel's community section where you can ask your questions, and I'm commenting with a Hangout link in there so that you can join the live recording if you wish.

Right. So there are a few questions, very few questions, on JavaScript, actually, but a few questions that touch roughly on JavaScript. So I'll happily have a look at those.

Gregory Scott asks: from my understanding, the speed ranking factor for Core Web Vitals would be based entirely on field data and not lab data. Is this correct? Yes. At the time of this recording, that is absolutely correct. What that means, if I'm not wrong, he continues, is that the guidance from the lab tests, like Lighthouse, web.dev, PageSpeed Insights, may deliver inflated scores, or deflated, actually — it might give you very different scores. That is correct. Which are pointless, the question goes on. Ah, I disagree. They are not pointless. They are indicative. So they are not necessarily the accurate values that you would see in field data, but they give you a hint on where there might be issues. It does not cover everything, obviously: while lab data might show everything is fine, field data might show that, actually, things are not very fine. That's why it's important to also measure with field data. But lab data usually is a good canary in the coal mine. It gives you a feeling for where you are and where potential problems lie.
So they are not pointless, but they are not the truth. It's really, really hard to say what is the truth when it comes to website performance, because it really depends on lots of factors.

Then he gives an example from WordPress, where changing plugins gave the option to remove the version string from the files, which inflated performance scores but was practically useless for field data. Then you know that that's not useful; that's gaming the metrics in that case.

So the main question: does this mean that the delayed-JS function in some of the more popular caching plugins for WordPress lately is just that, a trick for the lab data scores that won't have any practical meaning for the field data? Measure. That's not a general thing — delaying JavaScript to a later point might be useful, it might not. You have to measure whether that's a problem for people on your website or not. I would not trust the lab data 100%. Again, the lab data gives you rough guidance. It gives you a tracer bullet to figure out where problems might be and debug these problems relatively interactively, which you really can't do with field data. So it has its purpose, but obviously there are things that can game the metrics, and that's a little pointless, because if the users are still experiencing a slow website, that's going to reflect in the field data. So be a little careful with that. But if your measurements show that the field data does not benefit from that function, then that's probably an indicator that it is metric gaming rather than an actual improvement for users.

Tobias Merz is asking: hi there, we have a problem with our cumulative layout shift. We implemented a sticky nav without position: sticky, since it's not supported in all browsers. Our solution is more like Bootstrap Affix. I don't know what that is — I know Bootstrap, but I have no idea what Affix is.
So we listen to the scroll event and set the position to fixed, top zero, when the nav leaves the top, to prevent a jump or layout shift. I have a wrapper around the menu with an explicit height, therefore the wrapper still has the same height even when the nav gets position: fixed. So there's no layout shift. Everything seems to work fine and it looks good for the user, but the CLS is counting up every time, and a lot. Why is that so?

Without having a look at the specific example, that's really hard to say. I could imagine figuring out a way to take the nav out of the normal flow, like position: absolute, but I was thinking position: fixed does that as well. I would definitely consider asking on the JavaScript Sites working group mailing list at bit.ly/js-sites-wg with an example URL to look at, or maybe the webmaster forum, or feel free to go to Stack Overflow as well with these kinds of questions. And again, provide a clear example — you can build a fake example that just shows the behavior — because without something to actually investigate or look at, it's really, really hard to give an answer to that question.

And there's one more question from Gregory Scott around the Core Web Vitals. Hold on, no, I answered that one. Oh, right, it sounds like it's the same question, but no, it's not the same question. It starts with the same introduction, but then it's: does it make sense for an informational site to block all countries out there except a few bigger ones in order to get the best average field score for speed? I know I can do that. The question is, if I have, for example, low return on investment from countries with slow internet connections and a large population, does it make sense to cut them off from accessing my website, since their slow connection hurts my field scores? No. That's thinking that is laser-focused on the Core Web Vitals, and that's really, really, really risky.
A, because people from these countries, if they want to access your website, they will — through a proxy, or what's called a VPN, which really is mostly a proxy in most cases. And then the speed is even slower, so that's not helping. The other thing is, Core Web Vitals and page experience is one ranking factor out of hundreds of ranking factors. So you should not overestimate the power of this ranking factor. It is important; it is not the most important. And I think if you have useful information and you can get this information to people and get some ROI, you should probably do that. Because again, there are hundreds of ranking factors. Speed is not the only thing. If speed were the only thing, then a blank website would rank really well, because it's really, really fast, right? That's not the point. Fast is an important quality signal, but there are other quality signals that really, really matter too. So I would not do that. Also, that implies a more complex setup, which usually invites more problems. I would not do that. I don't think that's a reasonable thing to do here.

And then we have a third question from Gregory. I don't know if you know this, but WordPress decided to reinvent itself with a new editor called Gutenberg. Yes, I know that. They're using web components in there. With the installation of the plugin, you can get more current features that they intend to implement. It's basically turning the whole page into blocks. I'm not sure if these blocks only happen when you edit the page, or if that's also in the output of the pages — I would have to look into that as well. One page of 2K words and images could actually turn into 20 or 30 blocks or more, as each new paragraph is considered a new block. What I noticed in the last changelogs is that they are considering splitting styles so that each block has its own style CSS. This means that loading one page could result in loading 30 or more style CSS files.
I would assume that they don't do that when they deliver the actual page, but maybe I'm wrong. That's a question for the Gutenberg team. I would ask them and check what they think, because I'm pretty sure they're not doing something just for the sake of doing it. I'm pretty sure they have some reason behind it, and I think performance is on their radar. The way you describe it sounds like a performance issue, or a potential performance issue, and I'm pretty sure they don't do that just for the fun of it. Ask them — everything is on GitHub. You can ask in the Gutenberg repository why they are doing that, and what's offsetting the performance hit from that. I'm pretty sure they have something to say to this.

Then we have a question: "Olá, John Mueller." Not quite — close enough, though. Martin here. Hi Rafael. I'm from Brazil. So when I run a test in PageSpeed Insights, the laboratory data is generated regarding my website; it's made for Brazilian users, but it's hosted in the US with a CDN. Is the data from a Brazilian server or a US server? I think it matters, but what really matters is the field data. Yeah, the field data is actually what matters. PageSpeed Insights probably generates it from a US server. If your server is hosted in the US and PageSpeed is hosted in the US, then they'll likely have a better connection than users coming from Brazil. Even then, Brazil to the US shouldn't be too bad; if the server were in Australia, I would be more worried. But yes, the field data is what you should really consider. Again, the lab data gives you a rough tracer bullet for where problems might be, but it might not show all the problems, because again, there's the disparity between the two, field data and lab data.

And then that's it from the — no, there's one more YouTube question, from Mustafa, who is also in the call, I know.
Regarding the crawl stats report: what should I do about a huge drop in the crawl stats report from 300K requests to 50 to 70K requests? I checked the performance on the search result clicks and the sessions in GA, and both look normal. Then my answer is: that doesn't seem to be a problem. The amount of crawling that happens is no indication of quality or ranking or whatever. The crawling we do is based on lots of things. As long as your crawl stats report doesn't show anything scary, as in your server actually generating lots of errors or your server being really slow to respond, you should be fine. And thankfully you gave us a sample URL, so I can actually take a look, and I shall do that here.

As far as I can tell from the coverage, nothing really has changed. So it's not that we are not seeing something, and, as you say, the performance hasn't dropped. So that's not a problem, really. As far as I can tell, the crawling is going up again, with 200K crawl requests recently, on Sunday — so that's three days ago. I do see a drop from 300K to 41K-ish, but that in itself is not a problem. It just means we crawl less, but that has no meaning for anything — we don't think it's less quality. We just decided, oh, most of the pages probably haven't updated, we don't need to crawl as much, we only need to crawl a few 10,000 pages, like 40,000 or 50,000. And now we are probably seeing that we need to adjust again, so we are upping the crawl budget to 200K. Maybe it goes further up, maybe it goes further down. It doesn't really matter. As long as you're not seeing any issues in the performance or in the coverage, I wouldn't be worried about that. Also, the increase in crawling could come from the fact that I think we switched the site to mobile-first indexing recently. So it might be that we are re-crawling with the Googlebot smartphone crawler now; that is why the crawling goes up, and then it will probably go back down again.
If we establish that we don't need to do as much crawling, that's just it. Again, as long as the performance is fine, as long as the coverage is fine, there's nothing to worry about.

Right. And with that, we are through the YouTube questions, and I'm very happy to take audience questions now. Now it's your turn, everybody. Paweł.

Hi, everybody. I've got a question, I hope not a silly one. So we've got this server-side rendered website built with Gatsby and React. And what I noticed is that the website is really fast for a user — from the point of view of user experience, it's a really fast website. But when I look into Google tools like Lighthouse, I noticed that the site got much slower in Lighthouse after installing Tag Manager. And I know that Tag Manager code is performed asynchronously. Sorry for my English. So I just wonder, where does it come from? I feel that it's related somehow, but I can't figure out the reason, and I'm just looking for some kind of explanation, I think.

So, as I said, the lab data and the field data are not necessarily the same. If you have the Chrome UX Report available for the site — if it has enough data points that it shows up in there — then I would look into that. Search Console has it under the Core Web Vitals report. If those look fine, then I wouldn't worry too much. For Lighthouse and WebPageTest and all of the others, like PageSpeed Insights, I know that they have a really hard task, which is to generate a score from a lot of different metrics. Now, these metrics are not all in the Core Web Vitals — there are more metrics than the Core Web Vitals; the Core Web Vitals are just a part of what these tools report on. And to generate a score, they have to mix these metrics somehow. The mix, at least for Lighthouse, is documented somewhere in their documentation, and you can find out how they weigh the different metrics to generate the score in the end.
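To make that mixing concrete, here is a minimal sketch of how such a blended score works. The weights below are illustrative — roughly in the spirit of what Lighthouse's scoring documentation describes for v6 — not the authoritative values, and the metric names are just placeholders for this example:

```javascript
// Sketch: blend per-metric scores (each normalized to 0..1) into a single
// performance score. Weights are illustrative, not Lighthouse's real ones.
const WEIGHTS = {
  firstContentfulPaint: 0.15,
  speedIndex: 0.15,
  largestContentfulPaint: 0.25,
  timeToInteractive: 0.15,
  totalBlockingTime: 0.25,
  cumulativeLayoutShift: 0.05,
};

function performanceScore(metricScores) {
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100); // 0..100, like the report
}

// A heavy tag that only hurts total blocking time still drags the score,
// even if everything the user actually sees is perfect:
const labScore = performanceScore({
  firstContentfulPaint: 1,
  speedIndex: 1,
  largestContentfulPaint: 1,
  timeToInteractive: 1,
  totalBlockingTime: 0.2, // tanked by long main-thread tasks
  cumulativeLayoutShift: 1,
});
// → 80
```

This is exactly why a background-heavy script can lower the lab score while users perceive nothing: one heavily weighted metric is enough to move the blended number.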
And depending on how that score is generated, you might see interesting fluctuations. If Google Tag Manager does something that is relatively heavy but does it in the background, that might not actually affect what users perceive or notice — and that's great — but it might affect one of the metrics that these tools are looking at. Whatever that metric is: it could be total blocking time, it could be time to DOMContentLoaded, it could be various different things. It could be the parse time. It could be that Tag Manager detects these tools and then loads slower, or something. And then that mixes into the scores, and that's why you'd get a lower score. As long as the real user metrics look good, you shouldn't worry too much about that.

Again, the lab data testing tools — Lighthouse, PageSpeed Insights, WebPageTest — give you an indication of where to look. But then you still have to interpret these metrics. A Lighthouse score of 100 doesn't mean anything unless you actually understand that the metrics you care about are really looking fine. A Lighthouse score of 50 might mean something. It might mean, oh, we should look into this. But it might also just be, oh yeah, this is measuring things that we don't really worry or care that much about, so that's not a big problem. Luckily, web.dev has lots of information on how these metrics work and what they mean, so that you can make a more educated guess about what you should do about the scores you are seeing.

But when you see these discrepancies — and I've heard of a few offenders: Google Ads, apparently, is a thing where that can happen, Google Tag Manager is a thing where that can happen, certain YouTube embeds have this issue — even Google properties are not excluded from that, because the testing tools don't care where requests are made to or coming from. So you might see differing values between the real user metrics and the lab data.
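As a side note on what "real user metrics look good" means in practice: each Core Web Vital has documented good / needs improvement / poor thresholds, and Google has documented that the field assessment looks at the 75th percentile of real visits rather than an average or a single lab run. A small sketch of that arithmetic, with the threshold values as documented on web.dev at the time:

```javascript
// Rate one field measurement against the documented CWV thresholds.
// LCP and FID in milliseconds, CLS unitless (values per web.dev docs).
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 },
  FID: { good: 100, poor: 300 },
  CLS: { good: 0.1, poor: 0.25 },
};

function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}

// Field assessments use a high percentile of visits (Google documents
// the 75th percentile), so slow tails count, not just the average.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// e.g. rate("LCP", percentile(lcpSamplesMs, 75))
```

In the browser, you would feed this from a real-user-monitoring source; Google's web-vitals library exposes callbacks for exactly these metrics.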
If you have real user metrics available, that's what I would look into. Field data — sorry, lab data — is interesting to debug things and get a feeling for where you are going, but I wouldn't worry too much if there is an issue with just the lab data values.

Okay, thank you. I think it's clear. And this is what you said: I noticed that the tool shows me some information about total blocking time, so that might be one of the factors, yeah? So if I understand correctly, the important thing is how users really experience the website, yeah? Yes, that is correct. Okay, thank you. You're welcome.

And this seems to be the theme of today, because most questions are circling around that. As a general note, I want to say that measuring user experience in terms of responsiveness and speed of the site is ridiculously complicated and has lots of caveats that are really, really hard to quantify. The goal really is to capture something that is a very human interaction: how quickly does the site respond? How quickly does it show the content? How much of my CPU does it spin up? How much of my battery does it consume? We try to put all of this into a number, and that's always hit and miss. Oftentimes it works; in some edge cases, it doesn't, and you get reports that don't make sense.

And then there was the question earlier with the CLS, where they're like, we don't think that should be the case. If you see CLS values being very high in Lighthouse, for instance, I would report that to the Lighthouse team with a sample URL so that they can take a look at it, because maybe it's just a glitch in the way that CLS is calculated. That is absolutely possible, because it is surprisingly hard to get these values accurately. And it's also hard to find metrics that fit — that's the whole reason for the Core Web Vitals.
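For context on why CLS in particular is tricky to compute: web.dev defines each layout shift's score as impact fraction times distance fraction — how much of the viewport the unstable elements affect, times how far they moved relative to the viewport — and CLS accumulates those shifts (the exact accumulation rules have been refined over time). A rough, deliberately simplified one-dimensional sketch of that arithmetic:

```javascript
// Simplified sketch of the layout shift score from web.dev:
// score = impactFraction * distanceFraction (1-D, heights only).
function layoutShiftScore(shift, viewportHeight) {
  // impactFraction: share of the viewport covered by the unstable
  // element in its old and new positions combined.
  const impactFraction = shift.impactRegionHeight / viewportHeight;
  // distanceFraction: greatest move distance relative to the viewport.
  const distanceFraction = shift.moveDistance / viewportHeight;
  return impactFraction * distanceFraction;
}

function cumulativeLayoutShift(shifts, viewportHeight) {
  return shifts.reduce((sum, s) => sum + layoutShiftScore(s, viewportHeight), 0);
}

// Example: a 300px-tall element in an 800px viewport moves down 100px,
// so its impact region spans 300 + 100 = 400px.
const shiftScore = layoutShiftScore(
  { impactRegionHeight: 400, moveDistance: 100 },
  800
);
// → 0.5 * 0.125 = 0.0625; two such shifts already exceed the 0.1 "good" mark
```

Small per-frame movements like a scroll-driven "sticky" nav can rack these scores up quickly, which may be exactly what the earlier sticky-nav question was seeing.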
It's not like, oh, we wanted to make something up so that we are busy. No, it's that we had lots of metrics, and the web has changed, and the way that users interact with the web has changed, and the metrics no longer fit how people were actually experiencing the web. So we had to come up with new metrics. That used to happen relatively randomly, which pissed people off — understandably, if you are optimizing for the metrics that you are given, and then the metrics change, and now your optimization work looks like nothing has happened, or it actually has made things worse. They're like, what the hell? So the Core Web Vitals give you a more structured and time-slotted way, because we are considering updating them roughly once a year and not at random times throughout the year. It gives you a more predictable set of goals to look into, even though we understand — and hopefully you understand — these Core Web Vitals aren't perfect either, because there will be websites that are really fast for users but don't look great in Core Web Vitals. That's why these metrics keep evolving and keep improving, but getting a perfect score for what people are experiencing on websites is really, really hard.

All right, any further audience questions? Again, you can use the chat, or you can use the microphone, and you can use the raise-hand feature. Dave.

Hi. Since we're on a bit of a performance bent today: is there a massive benefit to streaming the response? Particularly for something like SSR — server-side rendered, maybe hydrated later — which tends to compile it all and send it all across. I can see there are a few frameworks that are trying to push towards streaming that initial response. It looks extremely complex in certain parts, and it seems to have some drawbacks, but is it really worth pushing towards, or do you think it's kind of a case-by-case basis, so there's no real rule?
I think it's probably a case-by-case basis, but generally speaking, it does make sense if you think about it. With server-side rendering — the big challenge is with real server-side rendering, not static pre-rendering or whatever — the request comes in, a program runs, generates the DOM or generates the HTML, and then sends the HTML over the wire in one big block. So basically what happens is the request comes in, and from the browser's perspective nothing happens for a while, and then it all comes over. If we could shorten the amount of "nothing happens for a while" from the browser's perspective, I think that would be good, because then the browser can start to build the DOM as the data arrives. That has always been the case with static websites: you have a file on disk, you have the browser requesting it, and basically immediately, as the file is opened, the data is streamed over the network, so it arrives bit by bit in the browser — not "nothing happens for a long while and then the entire thing starts streaming in."

That being said — and you already hinted at this — streaming an HTML response that is generated by any kind of program, and I'm not even saying JavaScript specifically, this could be a Java program, this could be PHP or Perl, Python, doesn't matter, C++ if you really want to have the pain of that — having that stream the data as it generates it is probably a lot of complexity. And usually, where there's complexity, there are bugs and edge cases that are surprisingly hard to cover. So I would not be surprised if that is not worthwhile, especially if the time that the program runs is in the sub-second range. If your PHP script or Node.js script takes like 0.5 seconds, then I don't think it's worth the hassle. If it runs for 10 seconds, definitely worthwhile.
Definitely worthwhile streaming the data — and 10 seconds is just a very extreme number, to make sure we are talking very extreme numbers here. The reality is somewhere in the middle: in the sub-second range you probably don't gain much, with everything that takes longer than a second you can probably see benefits, and with everything longer than two seconds you very likely see benefits. But again: what's the complexity that you invite into your code? Is it worth it? Can we make it so simple that it doesn't generate so much overhead that, A, the program runs slower, and B, problems happen that we then need to mitigate with even more code? I think that's an interesting avenue to explore. I haven't seen that much exploration in the wild yet, but I think it's an interesting thing to try out and see if you can actually gain significantly.

I remember I did something like that with WebGL. I was exploring whether a certain 3D graphics format — glTF, basically like JPEG for 3D graphics — could be streamed, and what you would gain from it. The gains we saw were huge — we are talking gains of 20 to 30 seconds on a 3G connection — but the complexity in creating these models in a way that they would be streamable was so big that it would take tens of hours of a 3D artist's time, with specific tools that are really, really expensive, to actually generate these models. So we decided it's not worthwhile. And we might well run into a situation like this with streaming for a server-side rendering framework, but I think it's worthwhile to try out. We should probably try it out as a community. That's a cool idea. Thanks for the question. You're welcome.

Oh, to be honest, yes, you can simply open an issue on github.com/GoogleChrome/lighthouse — I think that's their GitHub repository anyway — and they have an issue tracker, and you can say, "Lighthouse reports CLS here and we don't understand why." Is it Lighthouse? No, it's not.
Rendertron is a completely different thing from Lighthouse. You can search the issues there and either add to an issue that already exists, or you can file a new issue, report it, and you'll get an answer. Even if the answer is "no, that's working as intended," that gives you a hint that it's not the tooling — that it's something you are not seeing there. But I would not be surprised if there is a glitch.

All right, for the questions — it's funny, because the recording will be sent in two things: I'll get a video and I'll get a text file with it. Always funny when that happens. We do not have further audience questions, so I would like to say thank you very, very much to everyone who submitted a question through YouTube, and thanks to everyone who joined today. I hope that you're all safe and healthy, that you have a fantastic holiday time. Enjoy, and see you in the new year. Thanks, Martin. Merry Christmas, everyone. Bye-bye. Thanks, bye. Thank you very much. Bye-bye. Happy new year.