All right. Hello and welcome, everybody, to the JavaScript SEO Office Hours. After a bit of a break, I'm back, and I hope to do these now on a bi-weekly basis, or fortnightly basis, as the British would probably say. So every 14 days there will be an opportunity, and I'll try to alternate between time zones so that we have both APAC-friendly hours as well as EMEA- and Americas-friendly hours. Sweet. So let's go through the questions that were submitted on YouTube.

Black Cat Goh asks: how does Google's web rendering service handle CORS, so cross-origin resource sharing? Does it inherit the policies of the Chromium version in use? Yes. It is also bound by the CORS principles and APIs and headers. That means that if you have CORS requests, so cross-origin requests, requests to another subdomain or another origin, you want to make sure that these are allowed as per the specification as well.

Then we have Montsef asking two questions, actually. The first one: is it safe from an SEO perspective that Nuxt.js for Vue stores the duplicated data in a script tag? Yeah, that's fine. That's not an issue. And a related question about Vue server-side rendering, hydration, and SEO: is it better to do lazy hydration, which means hydrate the content, or refill the content that you sent from the server, just before the user reaches the components, to decrease time to interactive? It's also a user experience question, and I think it would be good to lazy hydrate. But if that causes problems, or if you notice in the testing tools that your content doesn't show up in the rendered HTML, then that means that lazy hydration does not work with Googlebot very well. I haven't really tested that. Try it out, and if it does not work, then I would not hydrate lazily. But hypothetically, the content should be there from the server-side rendered version anyway, so you should be fine. But definitely test everything.
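To illustrate the CORS point above, here is a minimal sketch of the server side of a cross-origin request in plain Node.js. The origin list and function name are hypothetical, and this is not how Google's renderer works internally; it only shows the header that a spec-compliant renderer, Googlebot's included, looks for before using a cross-origin resource.

```javascript
// Minimal sketch: decide which Access-Control-Allow-Origin header (if any)
// to send back for a cross-origin request. ALLOWED_ORIGINS and
// corsHeadersFor are made-up names for illustration.
const ALLOWED_ORIGINS = new Set([
  'https://www.example.com',
  'https://static.example.com',
]);

function corsHeadersFor(requestOrigin) {
  // Echo the origin back only if it is allowed. With no CORS header in the
  // response, a spec-compliant browser or renderer blocks the resource.
  if (ALLOWED_ORIGINS.has(requestOrigin)) {
    return { 'Access-Control-Allow-Origin': requestOrigin };
  }
  return {};
}
```

If a script, font, or API response your page depends on lives on another origin and the response lacks such a header, Googlebot's renderer drops it just like a regular browser would.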
And also, things can change over time, so testing is always the best possible strategy here.

Also, if there is a popular, sorry, Kayo asks: if there is a popular resource, like a popular JavaScript library from a WordPress plugin or something from Node.js, can Google cache it in a single place to render pages from different domains? So in the WordPress plugin example, let's say you have a local link to my JavaScript resource and that's broken. Can Google still render the web page, because it's a popular resource and therefore cached from other pages? No, we cannot do that, for the simple reason that that would allow cache poisoning, and that's a very, very risky vulnerability. What you can do is include them from CDNs. If you trust these CDNs or public places where these might be hosted, then you can use those versions instead of your own version, and we might be able to use them from the cache.

On the other hand, you don't really have to worry about it, because popular or not, our cache is very aggressive. We are generally over-caching rather than under-caching. So if something is broken momentarily, that normally isn't a problem, because we're not fetching it in every render anyway. But there is no such thing as: oh yeah, jQuery is broken on your site, so we'll just fetch it from somewhere else. The only situation where I could see that as potentially feasible is if you have a subresource, what's it called, a subresource integrity token in your code. And even then it is tricky, because there are so many different versions. So we are not implementing that, really. But again, don't worry about it. Normally we are over-caching rather than under-caching, so even if you have a momentarily broken link to a resource on your site, that's not necessarily a problem. Also, we retry renders if that is necessary. So don't worry too much about that.

So the follow-up question is: from what I understand, the caching is more or less domain-based.
So Google probably will cache everything my domain needs, even if I request the resource from a CDN or from somewhere else? Well, no, because then we would potentially be able to cache that on that domain as well. OK, yeah, that makes sense. So that makes that possible. And I believe the follow-up question that will come with that is: how does that affect crawl budget and everything? So let's say I fetch the resource from a different source, from a different CDN. Does that impact my crawl budget or their crawl budget? I'm not 100% sure, to be honest, because I haven't really thought about this. But I think it does not affect your crawl budget, or at least it doesn't do so directly. But then again, it might do so indirectly. I wouldn't worry about it too much, because I assume that you're not having thousands of resources that you need to download. I hope you don't. I hope so, too. Thanks. You're welcome.

Hi, Martin. A follow-up question about that resource availability rate. Yes. So I see in a lot of URL inspection tests a high level of the resources not being available. That leads me to ask: does Google do a fresh fetch for each resource when you do a URL inspection test? Or is that really representative of the availability rate? You're probably mostly riffing off of the "Other error" that you might see in the testing tools, right? Is that what you were getting? Or does that look different in the testing tools? What I'm referring to is when you go into the More Info tab and you look at resource availability. On a page with 150 resources, I've got some rather complex folks I've worked with who have 104 of those missing, unavailable. And some of them, I've seen, were dependencies. So their jQuery resource didn't respond, that was unavailable, and then the five scripts executing off of that were also listed as unavailable, because of domino effects.
So I'm wondering if those tests are actually representative of what Googlebot is seeing, or just an isolated, individual "we try to fetch everything fresh". Right. So, OK, the general answer to that is: unlike the actual indexing runs, the test, for the obvious reasons, actually does run a fresh fetch, because we want to give you the opportunity to test the latest version of everything that you've got. That means we are definitely bypassing the cache. So the testing tools do bypass the cache, which is not something that happens in the actual crawls and the actual indexing runs. I know that that sometimes causes a few confusing moments.

But when you say resource availability, you mean basically you go into the URL inspection tool, for instance, you go to page resources, and then you see some resources can't be loaded, right? Yes, so there is the screenshot, the rendered HTML, the More Info tab. It says X out of X resources unavailable, and you look through those. That's what I'm curious about: if Google has a proper copy of that resource at any given point, it sounds like that will behave differently than the test. You can double-check this. If you look at the crawled page instead of the live test that you do, you would see what that looks like in terms of the page resources. And not everything that is in there is necessarily a problem. Sometimes "Other error" just means that we didn't deem this resource to be very important. That happens specifically with images, which are usually skipped in fetching, because we don't really have to fetch the images for some of the stuff, most of the time. So you're treating images as static, so you're not going to worry about grabbing them again? Yeah, kind of. I mean, we are updating the images. It's just that for the indexing run, that doesn't necessarily get reported well, because the image index is a separate entity anyway. OK.
So generally speaking, you shouldn't need to worry too much about a resource not showing up in the live test, because we are skipping the cache there. But you still want to make sure that you have a look at the crawled page, to check that all the content you expect shows up there. If it doesn't show up and you see a related resource issue, let's say some content is drawn from an API endpoint that is fetched by your app.js file, and that app.js file shows up as failing to load in the crawled page report. So again, you basically click View crawled page, and then you go to More Info, Page resources. If something shows up there as an error, and you see the matching content missing in the rendered HTML, so the content that should be loaded by this resource is missing, then that's a strong hint that something is going wrong in terms of us not being able to cache it, or maybe we cached it in the past and couldn't refresh the cache, or the cache has expired. But generally speaking, yes, the live test tools, all of them, so that's the AMP test, the rich results test, the mobile friendly test, and the live test in Google Search Console, they skip the cache. That's correct. Thank you. You're welcome.

All right, I see that we are through the YouTube questions, and that means that I'm opening up the questions to the live audience. All right, Martin, I have another question then. Sure. This is based on something I've been seeing on the forum. Let's say I want to have a lighter version of my website for mobile, and I don't want to show all the resources on the mobile page. I don't know, maybe I don't want to show a carousel because it takes too much space. One way to do that is to use CSS display: none. But the resource will be fetched anyway, right? Yeah, I think so as well, but I'm not sure. So do you have any suggestions on what to do? Server-side rendering? What would be the best thing to do?
There's a bunch of different ways of going about having a mobile-friendly version of your site without loading all the resources. One way, if it's a component that is dynamically inserted, and it sounds like that if you say additional resources are being fetched, is that you could do conditional fetching based on the media selector. So you can say: oh, this is a small screen, and on small screens this carousel does not make sense. That's one way of doing it. That is possible.

You can also just server-side render. You would still have to find a way to adapt things, but if you server-side render, then you should not necessarily have to fetch much in terms of resources, because these resources will be inlined into the page. At least the HTML will be there. It will still load the images, of course. What you can do, especially if it's an image carousel specifically, is lazy load the images. And then display: none would mean that the images aren't being loaded, because they are never in the viewport. That is an option for this as well.

So you have different options to make your design responsive to the different screen sizes, and you have to try them out to see what works best for your approach. Because, from a search perspective, it doesn't really matter which one you go with. But from a user experience point of view, as well as from an implementation point of view, those will very likely be the bigger bits: understanding what works best and what doesn't work so great. Some might be easier. For instance, CSS display: none is really easy to implement, I guess, whereas a dynamic switch depending on the media query might not be as easy to implement, depending on the solutions being used. But it might be more effective, as it would take out the content entirely if the screen size doesn't warrant displaying it. Yeah.
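The "conditional fetching based on the media selector" idea above can be sketched like this. The breakpoint and function names are assumptions for illustration; in a browser the check would go through `window.matchMedia` and a dynamic import, shown in the comment.

```javascript
// Sketch: decide from the viewport width whether an optional component
// (like the carousel) should be fetched at all. CAROUSEL_MIN_WIDTH and
// shouldLoadCarousel are hypothetical names.
const CAROUSEL_MIN_WIDTH = 768; // assumed breakpoint in CSS pixels

function shouldLoadCarousel(viewportWidth) {
  return viewportWidth >= CAROUSEL_MIN_WIDTH;
}

// In the browser this maps onto window.matchMedia, e.g.:
//   if (window.matchMedia('(min-width: 768px)').matches) {
//     import('./carousel.js').then((m) => m.mountCarousel());
//   }
// On small screens the carousel script and its images are never requested,
// unlike display: none, which hides the markup but still fetches resources.
```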
So, and of course, you are suggesting that this should be dynamic based on the screen size, not on user agent or any other way. Don't base it on user agent. Yeah. Yeah. I don't think I've seen many people using this with user agent lately. But then I was thinking about the Core Web Vitals scores: when they are testing, at least in Lighthouse, they are testing for mobile and for desktop, and the only thing that's changing there is the screen size when they are doing all this. Screen size, yeah. Yeah. OK, makes sense. Thanks. You're welcome. Good to have so many questions in the first one after the long break. More questions? Anyone else having a question?

Hi, yeah. Is there any difference between the mobile friendly testing tool, the rich results test, and the URL inspection test? Big question. I know there is, but in terms of the time I've been using them. I've specifically had someone come into the forum with a very weird case of a site that would render the HTML in the mobile friendly testing tool, but not in the rich results test. And from what they were reporting to us, it wouldn't in the URL inspection tool either. And apparently, that seemed to reflect more what they were actually getting. But is there anything around that? The only thing I could think of for why the mobile friendly test was possibly working is that there was something about the viewport, because it attempts to paint, but I couldn't really see anything. But there's nothing too different about those other than that, I suppose. No. So internally, it's a very good question. Actually, I think I saw the thread. It's a React page, or it's suspected that it's React, and I don't think it's React. Yeah, I think that's a different one. Yeah, I did see one the other day, a React thread, but that page was basically just incredibly slow. It was legitimately slow. Right. So these things do pop up multiple times, I see. Interesting.
So internally, all of these are called the single URL inspection tools, or SWEET, or SUIT, depending on pronunciation. Yeah, I think SUIT is how you properly pronounce it. So SUIT is the internal acronym, and it all goes to the same pipeline. Obviously, there are a few differences. For instance, the rich results test allows you to specify if you want mobile or not, whereas the mobile friendly test always opts for the mobile route. The URL inspection tool goes for the mobile route if the page is mobile-first indexed; if it is not, then it goes for the desktop route, and you can't switch that. So there might be small differences, especially if it's a non-MFI page, so not a mobile-first indexed page: it might go through the desktop route, and then you can get different results than in the rich results test and the mobile friendly test, because those go the mobile route instead. So there are a few slightly different code paths that could hypothetically be triggered, but it is the same infrastructure.

And so far, as you say, most of the times I've seen this, it was with intermittent errors on which resources are fetched: because we are bypassing the cache, different resources might time out or might fail to complete their requests in time, and then you might get different results in one tool. Even in the same tool. In the thread that I looked at, at least, they used the same tools multiple times. And they got: yeah, the URL inspection tool doesn't work, but the mobile friendly test does, and the rich results test also doesn't work. And then the next time they posted, it was: the URL inspection tool doesn't work, the mobile friendly test doesn't either, but the rich results test does. It was basically different combinations of the three where it worked and it didn't.
And I'm like, well, if you're running it through the same three tools multiple times and you get different results, then that tells you that there's something intermittently wrong, and not necessarily a fundamental issue. It's really hard to do that. So I found a way around this in one little case. It must be a very edge case, because it was consistent, too. It was consistent with no page resource loading errors at all. It consistently failed in the rich results test and worked in the mobile friendly test. Is that the one that you escalated recently? Yes, I think I might have escalated it. Wait, because that is an issue on my desk to find out. I just didn't have the time yet. But it looks like it might happen either tomorrow or on Friday that I can look into it. So that's fun. That's timely.

So yeah, just for the general population and public out there: if you are seeing a fundamental issue that can't be solved in the Webmaster Forum, then sometimes these issues get escalated through Googlers who work on the forum, and then they are being investigated. Please don't send me direct messages or something, because I can't investigate every page out there. But if it looks like this might be something where the public channels fail to support your case, then we might look into it as a potentially larger general issue. And if that is true, then we will fix that larger issue. But if it's just your site, then we'll find a way to give you more public feedback, so that the next person running into the exact same situation sees that forum thread and can use it as a reference point. So that's what I'll be doing later this week as well. It's an interesting case, a curious case. When I saw it, I was like: hmm, interesting, it goes through the same pipeline, how is this different? We'll find out. Thanks for the question. That's a really good question. Hi. Welcome. More questions? OK, so I'll count down from five for your questions. One more. There we go.
Oh, multiple people jump on. All right, go, James, go. So, in thinking about how to approach understanding rendering and resource availability better, aside from doing the manual breakdown of the rendered HTML and associating it with the script that creates it, is there another way to figure out if a resource has been persistently unavailable to Google, or if the script which generates content is problematic? That's not really easy to do, especially over time, as the crawled page is basically just a snapshot. I mean, in the end, if you see your page, for instance, dropping out of the index every now and then and dropping back in, that's something you probably want to investigate in general, whether that comes from intermittent resource issues. Generally, these issues are really, really rare, because, as I said, we have a very aggressive cache. So even if your server gives us the script once and then doesn't give it out for the next month, you should not see intermittent resource issues. This is more a problem with the live tests than a real issue.

Is there an index coverage type that we should look for, pages moving from indexed and submitted, or whatever, to something else? Not really. As long as it is indexed and you want it indexed, then I think that's fine. I don't think you can monitor for anything specific there, except for it landing in the error category or in the excluded category. If a page that you really care about lands in the excluded category every now and then, that's probably something you want to look into. It doesn't mean that it's a rendering problem, though. Would crawl anomaly be the closest excluded type to indicate that? Not really, either, because a crawl anomaly normally is not a rendering issue but an issue in crawling already. It can also be pretty much anything; that's a really broad category.
I wouldn't say there is a clear category, or candidates, where you would need to worry about rendering more than others. I couldn't think of any, because rendering is more or less a transparent part of indexing. So it's not that we report a very specific error back, because we are retrying on errors. So again, normally you should never run into this problem in real life, where a resource is temporarily not available to us. Unless, of course, you 404 a resource, and then the page drops out because it's now basically empty or barren of content. But I wouldn't know how you would spot this in Search Console patterns. Thank you. You're welcome. I'm sorry that I don't have a better answer for this. Probably "crawled, currently not indexed" might point at that problem. Sometimes it might also affect the canonical: if the canonical in the HTML looks different from what is in the rendered version, then that might come out as "alternate page with proper canonical tag" or "duplicate without user-selected canonical". But I wouldn't say that these are inherently signals of a rendering problem. They might be, but they don't have to be. That's as good as I can do at this point. Damn.

The live test in the URL inspection tool uses the same pipeline as the testing tools, like the rich results test? Yeah. Yeah, and it doesn't use the cache when you do the live test, right? Yeah. OK. But I have another question, related to lazy loading, native lazy loading as an attribute of images. I was looking at Can I Use, and I noticed that it doesn't run on Safari yet; it's an experimental feature on Safari. So maybe if I implement that on a web page, it could cause some slowness in real-user measurements, right? I know that Googlebot uses an evergreen Chrome, but if I'm looking at the field measurements in Search Console for speed and things like that, maybe using native lazy loading we will slow down.
We would expect to slow down for iOS users or macOS users, something like that. I mean, it wouldn't. Yeah, it wouldn't. Yeah, exactly. If you have a fallback, then that's the best you can do. If you don't have a fallback, then it wouldn't slow it down, but it would be slower than in Chrome, where native lazy loading is implemented. Yeah, that's true. Yeah. So it should appear in the field testing results in Search Console. What do you think? I think we are mostly sourcing them from the Chrome UX Report, so I don't think they show up, unless iOS Safari does play into this, because when you're running Chrome on iOS, you are effectively running Safari. But I'm not sure if that is the case. I'm not sure if we can somehow, I don't know. It might be. All right. In any case, test on the real thing, right? Definitely test. OK.

All right. More questions? Five, four, three, two, one. Awesome. That was fantastic. Thank you so much, everyone, for submitting your questions on YouTube, as well as everyone who joined tonight, or this morning, or this evening, or this afternoon, depending on where you all are based. It has been huge fun. And again, as I said at the beginning, we will do these every two weeks. So definitely keep your eyes peeled for the post for the next episode's question time, where you can post your questions, and the link to the Hangout if you want to join the next edition. I was saying next week's Hangout, but it's not a week, it's the week after next week's Hangout. Wow. OK, it's quite late here, I'm sorry. In that case, thank you very much, everyone who is watching. Have a great time. Stay safe, stay healthy, and bye-bye. Bye-bye. Bye, Martin. Bye, Martin. Thank you. Thank you very much for joining.