Hello, and welcome to the JavaScript SEO Office Hours. Today, December 9th, 2020. I don't know, today has been the longest month of the year, or the longest year of the month, I'm not sure. Great to have you all here. We will go through some of the submitted questions from YouTube, and then I'll hand the word over to the audience. If you have been to one of these Office Hours before, or if you're watching the recording and wonder what this is and how you can participate: every two weeks, I post a thread on the community tab of our YouTube channel where you can submit questions. And if you sort the comments by newest first, you'll also see me posting the Hangout link, in case you want to join the live recordings.

We have a few questions today. One comes from Rafael, who asks: I'm facing a big problem with PageSpeed Insights and Google AdSense. Pages with AdSense do badly in the metrics, so yellow or red. The pages without are green and yellow. Is there any way to delay the AdSense JavaScript in the plugin called WP Rocket?

I don't know if there is a specific option for this. I know that this is hiding the issue from the PageSpeed metrics. However, I think if you have a way to insert a delay, that also makes it a little better for the user, because the website becomes interactive and visible and the content is there, and then, hopefully, you load the AdSense code afterwards. That should be a better user experience. And if it works, that's great. But for AdSense, I would ask the AdSense folks, because I'm not that familiar with the AdSense code base or the way that their stuff works. So that's that from my side, and I have no suggestions to work around this issue. I would bring it up with AdSense support, because I think it's important that our AdSense product also makes good on the promise of good user experience. So that's that. But I'm not on the AdSense team.
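Since the question is about delaying third-party JavaScript until the page is interactive, here is a minimal sketch of that general pattern in plain JavaScript. To be clear, this is not the WP Rocket feature and not an official AdSense recommendation; the `createDeferredLoader` helper and the event list are my own illustration, and only the `adsbygoogle.js` URL is the real AdSense loader:

```javascript
// Sketch: delay loading a third-party script (e.g. the AdSense loader)
// until the first user interaction, so it doesn't compete with the
// initial paint and interactivity.

function createDeferredLoader(loadFn) {
  // Returns an object whose trigger() calls loadFn exactly once.
  let loaded = false;
  return {
    trigger() {
      if (loaded) return false;
      loaded = true;
      loadFn();
      return true;
    },
  };
}

// Browser wiring (illustrative): inject the script tag on first interaction.
if (typeof document !== "undefined") {
  const loader = createDeferredLoader(() => {
    const s = document.createElement("script");
    s.src = "https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js";
    s.async = true;
    document.head.appendChild(s);
  });
  ["scroll", "click", "touchstart", "keydown"].forEach((evt) =>
    window.addEventListener(evt, () => loader.trigger(), {
      once: true,
      passive: true,
    })
  );
}
```

Whether this plays well with AdSense reporting and ad fill is exactly the kind of question to take to AdSense support, as mentioned above.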
I can't really speak for them.

Ricardo is asking about the Intersection Observer. He says: from what I understand, and correct me if I'm wrong, Googlebot renders the page using a very tall viewport. How tall is that, by the way? Good question. For this reason, the suggestion for lazy loading images is to take advantage of the Intersection Observer, among other things. But what if my page is really tall, much taller than the viewport used by Googlebot? Is everything not included in the viewport, and therefore not rendered, ignored by Googlebot?

That's a really good question. And the reason why, and I know that Asaf, who is also on the call, has a question about the viewport as well, the reason why I don't want to speak too much about the viewport is that it's an implementation detail that can change at any point in time without notice. You should make your website work in a way that allows access to all content without having to be on a very specific viewport. Now, that being said, regarding how tall it is: it is as tall as it needs to be, within certain limits. I know that's a fantastic response. But what we do, and again, that's an implementation detail that can change at any point in time, is this: we don't scroll, because that's surprisingly expensive and glitchy. What we do instead is we expand the viewport, and when we see that there's new content being loaded, we expand the viewport further. We can do that for quite a while; at some point, due to memory constraints, we might not do it any further. So I would recommend making sure that you have a way to either split the content up so that you can access it via a specific URL, or making sure that the individual bits and pieces that you care about are available under different URLs and are submitted through the sitemap, or use the Intersection Observer. But don't expect everything to be in the page unless it is a reasonably small amount.
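The Intersection Observer pattern mentioned above can be sketched like this. The `hydrateImage` helper name and the `data-src` convention are illustrative, not a Google API; the point is that Googlebot's expanding viewport fires the observer callbacks without real scrolling:

```javascript
// Sketch: lazy-loading images with IntersectionObserver, the pattern
// recommended above for Googlebot-friendly lazy loading.

// Pure helper: promote the placeholder data-src to a real src, once.
function hydrateImage(img) {
  if (img.dataset && img.dataset.src && !img.src) {
    img.src = img.dataset.src;
    return true;
  }
  return false;
}

// Browser wiring: load each <img data-src="..."> when it nears the viewport.
if (typeof IntersectionObserver !== "undefined") {
  const io = new IntersectionObserver((entries, observer) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        hydrateImage(entry.target);
        observer.unobserve(entry.target); // each image loads only once
      }
    }
  });
  document.querySelectorAll("img[data-src]").forEach((img) => io.observe(img));
}
```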
So I would say, if you need, let's say, a million pixels, then you're probably looking at something where eventually we would cut off. I picked a million pixels arbitrarily. I'm not saying we can't go further than one million pixels, and I'm also not saying we are going to one million pixels. What I'm saying is that there is a certain amount of wiggle room where the Intersection Observer will still work. And I think we might actually fire all Intersection Observers unless we see that it stops creating new content. But I can't guarantee that that's the case, and I'm not sure what the limitations are or where the heuristics kick in. So again, as long as these items are just basically links to other pages that have the actual content, that's not a problem, because then you can submit those individually through sitemaps, for instance, or offer a paginated version of the infinite scroll. But infinite scroll is definitely a challenging and interesting use case. Normally, the Intersection Observer should get you pretty far.

So that's that, which guides me gently into Asaf's question. I know you have a question about the desktop crawling viewport, don't you?

Right, yeah.

So feel free to... do you want me to read it out? I can also read out your question. All right, so Asaf is asking about Googlebot desktop. He has a site with a three-column layout, and one of the columns is triggered only when the viewport is at least 1,200 pixels wide. You haven't seen it being rendered, either in the screenshot or in the code itself. What's the threshold for getting the content rendered? You guess the threshold is 1,024 pixels, if that's the correct dimension, and you think it's aligned with most desktop cases. 1,024 is not really a desktop size, more of a tablet size these days.

Yeah, so as I said, I don't actually know the dimensions, because I don't care, and it shouldn't matter as much. I know that there is an option... hypothetically, we could even do what we do for vertical layouts, but horizontally.
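The "paginated version of the infinite scroll" suggestion above can be sketched as follows. The `?page=N` URL scheme, the `#load-more` sentinel, and the assumption that the server returns an HTML fragment of items are all illustrative choices, not anything Google prescribes; the idea is that plain paginated links stay crawlable, and JavaScript only enhances them:

```javascript
// Sketch: infinite scroll backed by crawlable paginated URLs.
// Assumption: the server renders ?page=N pages that link to each other,
// and for fetch requests returns just the HTML fragment of new items.

// Pure helper: compute the next page's URL so the client-side loader
// and the plain pagination links agree.
function nextPageUrl(current) {
  const u = new URL(current);
  const page = parseInt(u.searchParams.get("page") || "1", 10);
  u.searchParams.set("page", String(page + 1));
  return u.toString();
}

// Browser enhancement: watch a sentinel at the end of the list and fetch
// the next page when it comes into view. Googlebot's expanding viewport
// fires this observer without real scrolling.
if (typeof IntersectionObserver !== "undefined") {
  const sentinel = document.querySelector("#load-more");
  if (sentinel) {
    new IntersectionObserver(async (entries) => {
      if (entries.some((e) => e.isIntersecting)) {
        const next = nextPageUrl(location.href);
        const fragment = await (await fetch(next)).text();
        document.querySelector("#items").insertAdjacentHTML("beforeend", fragment);
        history.pushState(null, "", next); // keep the URL shareable
      }
    }).observe(sentinel);
  }
}
```

Submitting each `?page=N` URL through the sitemap, as mentioned above, then gives Googlebot a path to every item even if the observer never fires.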
We can expand horizontally, but as far as I'm aware, Googlebot doesn't make use of that.

What's the default width of the viewport?

It should actually be... I think the default starting width and height is 10,000 pixels in each direction. So I'm surprised that you seem to be seeing 1,024 pixels. That's a very specific number, and I'm not sure where that comes from.

So basically, we ran around 20 tests, and we hit it at 1,024, and that brought in the left column. And you could see the rendered HTML included the content only when you were broader than 1,024.

Right, that's interesting, because it should be broader.

And also, we want to prevent having the 1,024, because we don't want to show it for a tablet.

An interesting question there is: why? If it's a matter of not showing it, or showing it differently, so if it's a proper responsive design, then that shouldn't be a problem. Then it should always be in the DOM somewhere. Is that the case? Or is it only being loaded into the DOM if the viewport has a certain size?

OK. So if you stretch the Chrome browser and reach the 1,200, it's showing up, but if you reduce it...

But when you say showing up, does that mean it loads additional content at that moment? Right. Well, what's the use case here? Why is this content not available to someone on a smaller-screen device?

Because we want to differentiate between tablet users and desktop users. We don't want to have this column on the tablet, because it looks too busy.

Right, I understand what you mean. It looks too busy, it looks too full. Interesting. The thing that I'm surprised by, though, is that it's 1,024. That's a very odd number that I wouldn't have expected.

And yeah, in the report, 1,024 as well. And it is quite arbitrary.
Yeah, as far as the default number is concerned, I would have to double-check what we're actually using in indexing, because it's a configuration that goes through WRS, and I'm not exactly sure what the configuration is. But again, it's also an implementation detail. And the question really is: if this content is important for you, or important enough that Google needs to see it, how can you include the content in a way that doesn't make the design too busy? And there's lots of ways of doing that, right?

So basically, we did have the functionality where we take this unit and embed it in the shrunk version. We're talking about the author bio. So yeah, we do want it to get crawled by Google. But we have some kind of solution. I'm just wondering what the threshold is.

Seems to be 1,024, even though I would not rely on that. As I said, this can change at any time in either direction, because we are not guaranteeing this. This is one of the cases where there is a gap in the documentation, and the gap in the documentation has a reason. The reason is that you shouldn't rely on it, and you can't rely on it.

And the second question that I wrote there is: is that the case for the desktop viewport, like 1,024?

So I think this one seems to be, which surprises me, because I know that at least for vertical, we do expand. I'm not 100% sure if we expand horizontally; I don't think we do. So you would normally see taller viewports, as in more height of the viewport, depending on how large or how tall your content is. That should generally work. Not 100% sure what's happening with the width. I'm pretty sure that is fixed, and I'm not sure what it is fixed to, probably 1,024. If you say that that's what your test concludes, I have no reason to doubt that.

Yeah, like after 20 tests, we reached that number, right?

I wouldn't be surprised if that's what's happening. I'm actually trying to search the source code right now to find that specific part.
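One way to do what Martin suggests here, keeping the third column in the page for Googlebot while hiding it on tablets, is to always ship it in the server-rendered DOM and toggle only its visibility with CSS. The class names, the 1,200-pixel breakpoint, and the `isWideViewport` helper below are illustrative, not anything Google prescribes:

```javascript
// Sketch: content stays in the DOM at every viewport width; only its
// visibility changes. The CSS alone is enough, no JavaScript required:
//
//   .author-bio-column { display: none; }
//   @media (min-width: 1200px) { .author-bio-column { display: block; } }

// If some behavior really must know about the breakpoint, derive it from
// a pure check instead of conditionally injecting the DOM node:
function isWideViewport(widthPx, breakpointPx = 1200) {
  return widthPx >= breakpointPx;
}

// Browser wiring (illustrative): toggle a layout class, never the content.
if (typeof window !== "undefined") {
  const apply = () =>
    document.body.classList.toggle("wide-layout", isWideViewport(window.innerWidth));
  window.addEventListener("resize", apply);
  apply();
}
```

With this approach, the rendering viewport's exact width stops mattering for indexing, which is the point Martin keeps returning to.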
Ooh, what's happening out there? Let's see if I can find the... ah, no, I clicked on the wrong button, and now I have the ugly version again. Damn it. The new interface is fascinating, let's put it that way. And it no longer has the old files, which is unfortunate, because that was useful. I am pretty sure... oh, hold on. Actually, I can try it here. Aha. I can try it by actually just passing the request through the rendering system, and then I'll see what comes out. But 1,024 is a good possibility.

You also have the question about the noarchive tag inserted by Helmet. That should be fine; that should be picked up. If it's not, then I would argue that that's maybe a glitch in the cache, and then that's a problem, because the cache is not really maintained anymore. The noarchive tag, if I remember correctly, is pretty much just there so that you don't end up in the cache thing, like "view cached page". It should be fine. If it isn't, if you see that it behaves differently, then there's a good possibility that the cache feature picks the page up at an earlier stage in the pipeline, which is a bit of a tricky thing, because the cache feature itself is not actively maintained anymore. It just is there because it kind of works, and it would be a shame to stop supporting it.

Probably you're not rendering the page?

It's possible. I mean, we are always rendering the page. What's possible is that the cache service kicks in right after crawling, which would then basically mean that we are not getting the rendered version. We are rendering the page; we're just not getting the rendered version for the cache, because the cache actually pulls out data before the rendering happens, which is a bit unfortunate, if that's the problem.

The AMP team, regarding the AMP HTML tag, they say that they don't rely on JavaScript.

Because they don't have to. Yeah. So maybe with noarchive it's the same. That is very much possible.
But I would be surprised, because I know that the bit that parses meta tags (and AMP is not a meta tag), the meta tags in non-AMP pages, runs before and after rendering. The question is: when does the cache build its entry? When does the cache get populated? It is possible that the cache populates immediately with the fetch reply from the crawler; so, immediately after crawling, it pulls itself a copy. It is also possible that it only pulls itself a copy once things get into the index. I could see that being reasonable as well. And that would basically mean that you would not see problems with the JavaScript-injected variation of this. But again, it is possible that we are pulling it earlier in the process, and then you would probably see that it's not picking this one up. And this is actually really tricky to debug, because it might also just be race conditions, because a lot of things happen in parallel. So sometimes we might see the rendered version, sometimes we might not. That's something that we see with the cache in general: sometimes we see the rendered version in the cache, but most of the time we don't. And I have the feeling that's because the cache tries to extract information as early as possible from the document, and that's very likely to end up being before rendering.

So I wouldn't rely on the noarchive tag being inserted from JavaScript. Generally, I would always try to get the meta tags consistent before you need JavaScript, or at least leave them out if you can, and only add them with JavaScript if you can't make them consistent between the server-sent version and the JavaScript-rendered version.

Excuse me. Bless you. Thank you.

So currently, our main crawler is desktop. So we did have this tag rendered on desktop. But when we move to mobile-first, we don't have a solution yet. But sure, it'll be fine.

I mean, on the other hand, what's the big problem with the cache? I'm not sure most people are even aware that it still exists.
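To illustrate the advice above about keeping robots meta tags out of JavaScript where possible: the preferred form is a plain server-rendered tag, and the JS-injected version is the fallback that pre-render consumers (like the cache feature discussed here) may miss. The `robotsMetaContent` helper is my own illustration; the tag and directive names are the standard robots meta conventions:

```javascript
// Preferred: emit the tag in the server-sent HTML, e.g.
//   <meta name="robots" content="noarchive">

// Pure helper: robots directives combine into one comma-separated
// content value.
function robotsMetaContent(directives) {
  return directives.join(", ");
}

// JS-injected fallback (last resort; anything that reads the page
// before rendering may never see this tag):
if (typeof document !== "undefined") {
  const meta = document.createElement("meta");
  meta.name = "robots";
  meta.content = robotsMetaContent(["noarchive"]);
  document.head.appendChild(meta);
}
```

If a tag must be injected client-side, keeping it identical to whatever the server would have sent avoids the inconsistency Martin warns about.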
I only use it when the website is down.

Why don't we want our pages to be cached? It's a secret. OK. Everything's secret. Oh, my. Awesome. We're a publisher, and you know... Right. Yeah. OK, I see where this is going. All right, OK, I understand that. Right.

Oh, my. The data marketplace thing is broken; I can't actually access the output from Raffia right now. That's unfortunate. Yeah, so I would test it. But I think you have a 50-50 chance; I think it's a race condition kind of situation.

Got it. Thank you very much for both of the answers.

Doing my best. Sorry. All right. Do we have any other questions? We have a few more minutes before I have to head out for today. All right, there are no further questions. I'd like to thank everyone who joined today's recording. It has been a huge pleasure. I wish you all a lovely, lovely time. Stay safe, stay healthy. The next Office Hours is going to be in two weeks. I'll get this video uploaded to the channel as soon as possible, and then I'll post the thread for the next fortnightly edition of the JavaScript SEO Office Hours on the YouTube channel. Thanks a lot. Have a lovely evening, or a lovely morning, or afternoon, whatever it is where you are. It has been a pleasure. Stay safe, stay healthy. Bye-bye.