Hello, and welcome to another JavaScript SEO Office Hours Hangout. I'm pretty happy to see that there's a few people here. I'm trying to do these hangouts weekly now with alternating time zones: basically an APAC-friendly time zone and a more North America-friendly time zone. North and South America, actually. The point being, I'm really happy to see a few people in the Hangout today as well. We have very few submitted questions, so I'll start with those, and then we'll take questions from those who are with me in the Hangout.

The first question that we got is: "Do JavaScript single-page apps require hard-coded links?" I think that's what they mean; what does "hard-coded links" mean? "We built a 17,000-page e-commerce website and Google cannot crawl the pages. They will not index them." The thing here is, I'm not sure what you mean by "web developers need to know". Web developers do know, because it's in our documentation. As long as you generate proper links, and I mean A tags with an href that has a URL that we can crawl, your discovery will not be impacted. Links are important for discovery of content and for allowing us to understand the structure of your site. With 17,000 pages, as far as we're concerned, that's not a huge site, so that should not be a problem. You can also use a sitemap to submit the URLs, but that's an optional, additional measure of making sure that we can discover pages. Sitemaps lack the site structure, so having links is very, very important to allow us to understand your site structure. But I would reiterate: if you use JavaScript to generate the links, if you use client-side rendering and that generates all your HTML, that's not a problem, as long as they are proper links.
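To make "proper links" concrete, here is a minimal sketch; the URLs and the handler name are made up. Only the first form gives Googlebot a URL to crawl, and it counts whether the markup was in the initial HTML or generated client-side by JavaScript:

```html
<!-- Crawlable: a real <a> tag with a resolvable URL in href -->
<a href="/products/blue-widget">Blue widget</a>

<!-- Not crawlable: there is no URL for the crawler to follow -->
<a onclick="goTo('blue-widget')">Blue widget</a>
<span class="link" data-target="/products/blue-widget">Blue widget</span>
```

The non-crawlable forms may work fine for users, but discovery then depends entirely on the page's scripts running and on someone clicking.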
And actually, on this channel later today, which is April 29th, I believe, we have an additional video: the Webmaster Conference Lightning Talk on JavaScript and links. Very basically said: as long as you generate proper HTML links, we'll be able to crawl them. The HTML links have to have crawlable URLs, and there should not be any other technical issues that prevent us from rendering or indexing the page. You can test that with the URL Inspection tool, the Mobile-Friendly Test, or the Rich Results Test; wherever you see the rendered HTML. If your links are in the rendered HTML, then we'll see the links, and we can crawl and potentially index the pages that are linked. And you have to make sure that there are no other technical problems, like an accidental noindex or robots.txt blocking of certain URLs. If that's not the case, and you're not preventing us from crawling, rendering, and indexing, then everything will be fine. If you want to elaborate on the question, that's fantastic as well: you can send it in with more details for the next JavaScript SEO office hours via the post on the youtube.com/GoogleWebmasters community tab.

The second question we got is: "How do I do a client-side 301 redirect? I have a static website, and I want to do a client-side 301 redirect. How do I do this in a way that does not affect SEO?" A redirect always affects SEO, because what it does, especially if it's a 301 or 302, is tell us: this page no longer has content; instead, you should go over there, and then it just takes us somewhere else. So there's always an effect in terms of SEO, because the page that we originally indexed no longer has content; it now just takes us somewhere else. A 301 redirect means an HTTP status 301, a permanent redirect.
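To make the options concrete (the URLs here are made up): the two client-side stand-ins discussed next look like this, and neither one sends an actual 301 status code.

```html
<!-- Client-side option 1: meta refresh; also works for crawlers
     that don't execute JavaScript -->
<meta http-equiv="refresh" content="0; url=https://example.com/new-page">

<!-- Client-side option 2: JavaScript redirect; requires the crawler
     to execute JavaScript (Google Search handles this) -->
<script>
  window.location.href = "https://example.com/new-page";
</script>
```

A real 301 has to come from the server, for example a single line in an Apache .htaccess file: `Redirect 301 /old-page https://example.com/new-page`.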
You can't do that client-side, because an HTTP status is sent from the server. If you do have access to or control over your web server, you can configure it, in .htaccess files or whatever you're using to configure your web server, to respond with a 301 redirect to the other URL whenever the old URL is requested. That's how 301 redirects work. If you can't do that and you only have access to the client side of things, meaning JavaScript, then you can use window.location.href and give it the new URL. That's also a redirect; not a 301 redirect, but a JavaScript redirect. That also works, and it has no obvious downsides. It doesn't really matter to us, but it requires that the crawlers you care about understand JavaScript. If a crawler does not understand JavaScript, then that won't work. You can additionally use a meta refresh tag in your HTML; that would probably also work for search engines or crawlers that don't understand JavaScript. But for Google Search, it doesn't really matter: you can use a JavaScript redirect, or a server-side redirect if you want an actual 301. A client-side 301 redirect is by definition impossible, because the 301 is an HTTP status code that comes from the server. So if you're not changing your server, you can't really have a 301 redirect on the client side. And those were the two questions that were submitted on YouTube, as far as I can tell. So, any questions from the audience? Now is your time.

Hi, Martin, can you hear me? So I'm using a framework that does bundling, so you've got one big JavaScript file. The core framework part of it is one meg; the whole bundle is 1.7 megs. And what I'm seeing is that it seems to be hit and miss whether Google can actually render that page.
It seems to kind of go: that's just too big, I don't want to bother with that anymore. Sometimes it works, sometimes it doesn't. The company who make the framework are working on a way to shrink it down, but at the moment, are there any other ways to try and get around it? Maybe not bundling, so it's lots of smaller files instead?

I wouldn't do that. So the thing is, I'm guessing when you say sometimes it works, sometimes it doesn't, you're seeing that in the testing tools, right? Yeah, I'm seeing the errors going: failed to load file, failed to load file. Let me guess, it says "other error", is that what you're getting? Yeah. So here's the good news and the bad news. The good news is you don't have to worry too much about that, because it comes from the fact that the testing tools are a lot less patient than the actual indexing and rendering infrastructure. We usually avoid caching in the testing tools because we want to give you the result for the latest version that you actually have on your server. That comes with a downside: as the testing tools use the same infrastructure that Googlebot would use, our fetches might get scheduled a little later, and that is relatively independent of the file size. We might just be scheduled for fetching later, and then the tools say: we can't wait an hour, or half an hour, or 20 minutes, or 10 minutes; we need the result now. So they basically just time out and give you this unfortunate "other error". That's not great. We are aware of that and are trying to figure out what we can do to alleviate the issue a little bit. But generally speaking, the indexing infrastructure is a lot more patient.
That being said, while you shouldn't exactly get worried about it, it is annoying when testing, but it wouldn't affect the indexing and rendering performance or anything like that. Now, 1.0 megabytes plus 1.7 megabytes is 2.7 megabytes; that doesn't fit on a floppy disk. That's always how I think about these things: look, if your thing requires 14 megabytes transferred over the wire, that means you're literally shipping 10 floppy disks. I played entire, fantastic games that came on 10 floppy disks. So why is the website that large? Besides that, the bundle size is definitely not fantastic for user experience, because you're downloading a lot of stuff that you probably don't need. And that's something that, as you said, your framework provider can help you with. There are ways of doing things, depending a little bit on what the framework is and what the tooling looks like. You might have the option to do what's called tree shaking: if you are in charge of the application code, you can usually use tools like Webpack or Rollup to analyze which of the framework's dependencies are actually being used and remove the rest from the bundle, so that everything you don't use from the framework in your application code does not have to be shipped, which is super helpful. But that really depends on the framework, the tooling setup you can get to, and how much control you have over the application code you are using. So tree shaking is a great way of reducing bundle size. I would not unbundle, for the simple reason that then you have a lot of network round trips, and the browser has a maximum number of simultaneous network connections that it opens to a single host.
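If the build happens to use Webpack, a minimal sketch of turning tree shaking on could look like the config below. The file paths are assumptions; `mode: 'production'` enables the used-exports analysis plus dead-code elimination, and tree shaking also relies on the code using ES module import/export syntax.

```javascript
// webpack.config.js: a minimal sketch, not a complete build setup.
module.exports = {
  mode: 'production',           // enables tree shaking and minification
  entry: './src/app.js',        // hypothetical entry point
  output: { filename: 'bundle.js' },
  optimization: {
    // Mark exports that nothing imports so the minifier can drop them.
    usedExports: true,
  },
};
```

Libraries can additionally declare `"sideEffects": false` in their package.json, which tells the bundler that whole unused modules are safe to drop.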
So let's say you have it all on a CDN, maybe cdn.example.com/main.js, app.js, bundle.js, framework.part1, framework.part2, framework.part3. Then the requests would have to be staggered, because at some point it's too many to do at the same time, and that would also slow things down. So bundling is actually a good idea. That being said, most frameworks also have an option to do what's called code splitting. I know that sounds paradoxical now, because on one hand you want a bundle so that you don't have too many separate requests, but then again, you don't want one gigantic bundle, because then you are basically downloading the entire application code for every single page that you're getting. Even though you can cache that, it's still a lot of code to download for the first visit. So what people tend to do is split the bundle a little bit: say our home page or our landing page needs this much code, and then there's all the other code. On your first visit, we download the framework bundle, and the framework bundle gets cached, because we need that on every page anyway. Then we have a bundle only for the home page. And in the background, we load the other bits and pieces, or if you jump to the next thing in between, then we download the next bundle. That way you're using the cache in the most optimal way, and you're breaking things down into reasonable bundles. But completely unbundling is not the greatest thing to do, because of the number of HTTP requests it creates. HTTP/2, or H2, makes that a little better, because it can multiplex over the same connection. But A, Googlebot does not support HTTP/2 yet.
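As a sketch of what route-based code splitting looks like in application code (the route name and module path are made up): a bundler turns each dynamic `import()` into its own chunk, which is fetched once on first use and reused afterwards.

```javascript
// Minimal sketch of route-based code splitting, assuming a bundler
// (Webpack, Rollup, etc.) that emits each dynamic import() as a chunk.
const loadedChunks = new Map();

// Load a route's chunk at most once; repeat visits reuse the cached promise.
async function loadRoute(route, importer) {
  if (!loadedChunks.has(route)) {
    loadedChunks.set(route, importer());
  }
  return loadedChunks.get(route);
}

// Usage (in the app): the bundler emits home.js as a separate chunk
// that is only downloaded when this line first runs.
// loadRoute('home', () => import('./routes/home.js'));
```

The framework bundle stays shared and cached, while each route only pays for its own chunk on first visit.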
And B, not everyone has an HTTP/2-compatible browser version, probably, and there might be other issues: especially on high-latency and packet-loss-affected connections, which mobile connections basically are, HTTP/2 can also be a bit of a performance bottleneck. So try to figure out if your application code and/or framework provider support tree shaking. That's the biggest factor that will reduce the size, and maybe also code splitting along the routes. If you search for tree shaking and code splitting, you will find a bunch of articles on how different frameworks do that. And I'm pretty sure, if you have a commercial provider behind the framework that you are using, that they should be interested in getting that shipped as well.

Yeah, they are aware of it. I think they pretty much said tree shaking's out of the question with their design. Oh, OK. So they're looking at the idea of code splitting. It's a very GUI-type framework, like grids and charts, so there's a lot of stuff in there. I think they're going to give you checkboxes: do you want charts, do you want this, do you want that? Ah, yeah, OK, so you get to pick that way. Sounds like the approach that jQuery took, I think. Yeah, it's actually got jQuery under the hood as well. In fact, you've got a choice of something like five different frameworks that it sits on top of, hence this massive thing. Another symptom I saw: I'm actually creating structured data using the package, and what I'm seeing in Search Console is that the structured data appears in the Enhancements section, then disappears and comes back, implying the same intermittent thing that I'm thinking of. That is hard to debug. That's an interesting point.
I'm not exactly sure where and when we extract the information for the Enhancements report. It could potentially be that we are pulling it from the wrong spot in the pipeline. A short-term workaround, while your framework provider figures out how to get that solved, is potentially considering dynamic rendering, because then you would work around the JavaScript bundle issue. But I'm not sure that's a good idea, because dynamic rendering, A, is a workaround, B, is not an insignificant investment, I would say, and C, is additional complexity. So you have to think very carefully about whether that's worth it. I would say, as long as you're not seeing general issues with impressions or performance in Search, I would probably not change much, because additional complexity is just risky: you're importing risk to fix something that has no visible effect. If you are seeing issues with performance in Search, then maybe it's worth thinking about while your framework provider figures out how to reduce the bundle size. But generally speaking, the indexing pipeline is quite patient with situations like this. And 2.7 megabytes is not a size where I would start to cry and scream. If you had said 10 megabytes, I'd be like, well, now we're definitely in hot water, but 2.7 megabytes shouldn't be a problem on our side, except for the testing tools.

Yeah, and I'm trying to get as much of the content and so on done server-side, so that it's not reliant on the JavaScript that builds all the fancy tools; the content comes straight out. That's also always an option: to say, OK, we try to get everything that is content and landing pages as static as possible, and we only load the framework for the pages that are really dynamic. That's a solution that a bunch of people have successfully implemented as well. Cool. Awesome. You're welcome. Next question.
Hi, I had a question. Yes. I just checked last week, and even though I think you said that the new evergreen Googlebot has been working 100% since the beginning of this year, I checked my access logs, and 80% of the Googlebot requests I get still show a user agent with Chrome 41. Is that something to be expected?

No, that's not something to be expected. I would check if the IPs come from Google, because as far as I'm aware, we are not using the Chrome 41 string anymore. We do have different user agent strings, that's for sure, but the Chrome 41 string is unusual. I would not expect any service from our side to still use that one since April 2019, so that's like a year ago. If you're seeing that, check the IPs: if you do a reverse DNS lookup, it should resolve into a Google namespace. If it doesn't, then it's a fake Googlebot. That happens; some services pretend to be Googlebot, but they're not Googlebot, and they haven't updated their user agent yet. If you are seeing a Chrome 41 Googlebot actually coming from Google, then it would be some service, but I'm not aware of any service that would still be using that string. I know that we still have some services, like Google Read Aloud, for instance, that use a completely different user agent, and I know that we can still crawl without stating the Chrome version: there are a few Googlebot strings documented on our website, and one does not have version information, and that still happens. But Chrome 41 would surprise me if we're still using that one.

Yeah, that surprised me too, because we are a medium-sized website, but we can have around a million requests from Googlebot every day, or even more, up to 10 million. So I checked, and at least three quarters were using this old version. I don't remember if it was 41 or 42, but it was not the 70- or 80-something that's the current version. I will check again and let you know.
Yeah, I mean, fundamentally, just make sure that it is an actual Google IP address. If it does come from a Google address, it wouldn't be the regular Search crawl; it would be something strange, and I would have to figure out what exactly that would be. But I don't think we are still using this string for anything in Search, not that I'm aware of. I think there are a few services that used to use that string that weren't actually rendering, and then it doesn't really matter, if it's not rendering. I would have to research this a little bit, but generally you can rely on Googlebot being evergreen. OK, thank you. For Search, at least; I don't want to comment on other Google products. Yeah, I checked because we were adding our first client-side rendered page, and I wanted to make sure that Google actually rendered everything. It should still render it. I could imagine that maybe another Google product, like Google Ads, if you use Google AdSense or AdWords, might still use the old string. That is possible, but not Google Search. Thank you. You're welcome.

All right, this one comes through the chat. Hi, Maxim: infinite scroll in Google-friendly ways. Yes. Oh, you have a link for me to check, that's nice. So let's have a look. I'm not looking at the screenshot; I'm basically trying to figure out how the page is supposed to behave. So, OK, let me actually open the website first and then see if I can find content that would be loaded by infinite scroll. Actually, you know what, I can probably share my screen real quick to show you what I'm doing, because it's probably a little boring to just hear me go blah, blah, blah. I want to share a specific Chrome tab. Actually, if I share a specific Chrome tab, then you're not seeing when I switch tabs, so that's also not going to work.
OK, in that case, I will just share a Chrome window. It says "a window", that's good. I think I want this window to be shared. And now you should probably see yourselves, which is odd, but here we go. OK, so here we have the website, and we can see that it's quite long already, and it does seem to be loading additional content as I scroll. Let's see; I'll try to grab something from as far down here as possible. OK, now that definitely loaded additional content. So, are we seeing this headline? I'm not sure what it means; my Russian is not very good, but I'll try my best. OK, now we go to the Mobile-Friendly Test, and in the rendered HTML I'm looking for these characters. And I'm not sure if that jump meant that it actually found something or not. So we are not seeing this part here, that's for sure. Are we seeing something else? Let's try this one. Nope, wrong window, actually wrong tab. Yeah, so we can definitely see up to here, and it looks like we couldn't see the other string that I was trying. So you get a feeling for what we are having in the content and what we are not having in the content. If you are exposing these articles under their own URLs, then that's not a problem; that's generally not an issue. And it looks like some part of the infinite scroll and lazy loading did work, and some of it might not have worked. You can also check here: if you are using JavaScript to load additional content, then you would also see whether we are doing the actual API call to fetch the additional content. All of this looks like it's not grabbing additional content, but maybe I'm just missing something here. So you want to make sure that we are loading all the content and that the content is actually visible in the rendered HTML. If it is in the rendered HTML, that's fine. Having multiple H1s is not an indication of a problem, and it's not an issue on our end either.
So that's a question of how you structure your content. If I search for H1 headlines, then we see we have one, two, three... no, hold on, we are jumping between things. We have two; these are two independent headlines, Saint Laurent and something about industry. So those are the two headlines that we definitely see. If you are looking for more than these two H1 tags, then we're not seeing them. OK, I'm searching for a random Cyrillic phrase; let's try this again. And if I go in here and inspect (come on, inspection tool, be nice) and ask for all the H1 tags, we also get two. So as far as I can tell, that seems to be fine. There are two H1 tags; we're seeing them in the browser, and we're seeing them in Google. So we are seeing your H1 tags, and again, having multiple H1 tags is not a sign of a problem; that's acceptable as far as we can tell. OK, now stop presenting. So you would have to check that everything in the content that you care about is actually in the rendered HTML; if it is, you have implemented this successfully. You can use IntersectionObserver to implement infinite scroll with lazy loading. You can make sure that you have endpoints where your server creates different pages, so that you give us pagination as well; that's a way of implementing this. You can submit the different pages in sitemaps, which also gives us a hint that there's more content that we might not have seen on the homepage. So there are definitely ways of doing this, and the fact that there's two H1 tags does not mean there's a problem per se. All right, I hope that answers the question, Maxim. More questions?

What's the double dollar you did in the console there? Oh, that's a helper.
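Putting those pieces together, here is a hedged sketch; the element ids and the ?page=N URL shape are assumptions for illustration. The idea is that users get lazy loading via IntersectionObserver, while crawlers, which don't scroll, can reach the same content through real paginated URLs that the server can render and that can also go in the sitemap.

```javascript
// Build the paginated URL for a given page number.
function nextPageUrl(base, page) {
  const url = new URL(base);
  url.searchParams.set('page', String(page));
  return url.toString();
}

// Browser side (sketch, not run here): lazy-load for users by watching
// a sentinel element at the end of the article list.
// let page = 1;
// const sentinel = document.querySelector('#load-more-sentinel');
// new IntersectionObserver(async ([entry]) => {
//   if (!entry.isIntersecting) return;
//   page += 1;
//   const res = await fetch(nextPageUrl(location.href, page));
//   document.querySelector('#articles')
//     .insertAdjacentHTML('beforeend', await res.text());
//   // Give each loaded state its own shareable, indexable URL.
//   history.pushState(null, '', nextPageUrl(location.href, page));
// }).observe(sentinel);
```

Because the same ?page=N URLs exist server-side, every article stays reachable even without any scrolling or script execution.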
Yeah, so Chrome DevTools has these. It doesn't always work; it stops working when the website has jQuery installed, or another library that overwrites it. But by default, if you go to any static HTML page, say example.com, the DevTools give you a bunch of helpers, which is super nice. Actually, I'll show you real quick again; I want to share this window. So if I go to example.com, because that's a really nice and simple website, and I open the inspector, then a few things happen in the console automatically. I have dollar ($), which is short for document.querySelector; that's just a lot to type. So whenever I do a quick debug, I try this first, and I can ask for H1 elements, and as I do that, you see I get an overview of what the page looks like, which is quite cool. Oops, that's not what I wanted. You can also select an element here, then go to the console and type dollar zero ($0), which is the element that you selected in the Elements tab, which is also quite handy, because then you can do all sorts of things, like setting its text content to "hello", right? So you can very quickly debug with these helpers in the developer tools. And double dollar ($$) is just the shorthand for document.querySelectorAll, which is also a lot to type. It gives you all elements that match a certain selector; in this case, I think we have two paragraphs, for instance, and then you can inspect what kind of paragraphs you actually have here, and work with that. So what I did earlier was basically just document.querySelectorAll('h1'), to see how many H1 elements, and which H1 elements, we actually have on the page, which is a cool way of quickly debugging that. If you're interested, search for "dev tips" or "dev tools tips" or something like that; there's a GDE called Umar Hansa who has an entire course on developer tools tips and tricks.
There are also other articles that explain all these helpers that the developer tools in Chrome provide you with. It's quite nice; there's a bunch of stuff you can do in DevTools if you ever need to debug things. OK, more questions?

It does, yeah. Maxim, it looks like two articles get rendered at once, but that matches what the browser does: when I opened it in the browser, I got the same number of H1 tags. So to me, it looks like Googlebot does exactly what the browser does there. Maybe we don't render as many articles, because there might be something in the infinite scroll implementation where we don't follow through with all the articles that you get when you scroll in an actual browser, because Googlebot doesn't scroll, but we definitely load a bunch of content. So as far as I can tell, the content seems to be there. And the fact that we rendered two articles is not a problem for Google; especially if these articles have links to their own pages, then we would probably consider the individual pages as well and might rank those in Search rather than the home page. So that's not per se an issue. I mean, for instance, e-commerce shops might have an entire category page with lots of products; that's not a problem. You just don't have products, you have articles; that per se is not an issue. OK, more questions?

Yeah, I have a little question. We have a problem again with some little things: Google Search Console tells us that one search page is duplicate content of another URL, but both pages have separate content, and this content is loaded by JavaScript, by Ajax. So I don't know what we are doing wrong. Or is the problem that the pages are too slow and Googlebot reads them later? It's not all pages, it's some pages. Have you tried rendering them through one of our testing tools and looking at the rendered HTML? Yes. And they look completely different in the testing tools? Yes.
I can send you the links if you like. That would be fantastic to have a look, because this sounds like a... So these are the two links? Stab und Führungsschienen (rod and guide rails). OK, so if I have a look at this, I'll try to render the results in a little bit, OK? Let me go to the Mobile-Friendly Test, for instance. So I'm guessing the other search pages don't have that problem; it's not everything? Not everything. We have, let me not lie, over 500 pages, and only about 20 pages have the problem.

OK, so what happened is, somehow, and I don't know exactly how, we have seen similar content, let's put it that way, and decided that one thing is a duplicate of the other thing. And then we have selected one, I think the Stab one, as the canonical. Especially because you're using JavaScript, it is possible that there was a problem with fetching the JavaScript at some point, and that's why we decided that. Imagine that for some reason we couldn't fetch the JavaScript, we couldn't load the results, or there were empty results for a while. Then we go: OK, this is an empty search results page, this is an empty search results page, and this is an empty search results page; they're all the same, so we're just picking one to keep, and the other ones we drop. If that was the case, if a glitch led to that, we would probably eventually recover. You can also resubmit them for indexing, and then we might revisit the decision. Actually, we have John in the call. I've seen you, John; you did not escape my attention. Is there anything else you would do to fix this dedup issue, if it's an issue? Sorry, I didn't pay attention. I'm so sorry; I didn't want to call you out on this one, but this is a common problem people have.
So maybe John has something that I forgot to mention. So, assuming that you have multiple pages loading search results; not search results, rather a catalog with different categories, and for the different categories in the catalog you're using JavaScript to load the products; and we have decided that they're all duplicates of each other and have selected one as the dupe cluster leader. I think that sounds like we had an issue rendering at some point; maybe we couldn't fetch the JavaScript or something. And now it works and gives you the different content. So I guess eventually we would figure that out, or you can request indexing to fix it, right? Yeah, for a relatively small number of pages. I guess it depends on where the issue came from. What I've sometimes seen is that we learn that particular URL parameters are irrelevant, and if we learn something like that, it tends to take quite a long time for that to be unlearned again. So if you think that's the case (if you have multiple URLs that all use the same parameter name, and you're noticing that different parameter values all lead to the same canonical), then one thing you could do is work around Google and change the parameter name. If you have something like q=, just take qu= or s= or some other letter instead. Then essentially we'll look at that and see: oh, it's a new type of URL, we don't know about this parameter, and we will need to analyze whether there is actually unique content per parameter value here. If we find that there is, then we will index it normally. Whereas if you keep that parameter, and we've learned that the parameter value is irrelevant, then it's going to take a really long time for us to reconsider that. OK.
And I noticed that you have a language parameter in the URL, and I'm guessing the language parameter doesn't make that much of a difference, so maybe that's where we fell over. That's a possibility. We give Google the link without the language parameter, so Google only gets the text search with question mark, q equals Stab, for example. I had copied this from my browser, and I have it set to German, so that's why you got the language parameter.

But I have a follow-up question to this. We want to change something on our server and add Rendertron. If we add Rendertron and some of these URLs get re-crawled, do we have the problem again, or does it not matter anymore once the Rendertron server redirects Google to the Rendertron-rendered version? I hope you know what I mean. I know what you mean. So Rendertron, like dynamic rendering in general, is a workaround, and I would only take on this additional complexity if I really had to, and I say that as the Rendertron maintainer. So I would not do it without a really good reason, and if this is the reason why you'd add Rendertron, I wouldn't; I would do what John just said and reconsider, maybe rename the parameters or something. But assuming that you set up Rendertron correctly and it doesn't fail to generate static HTML for Googlebot, then it won't cause this problem again. It might cause different problems, depending on what the rendered output from Rendertron looks like, but this specific problem, that we can't load your JavaScript, would not happen again, because Rendertron strips the JavaScript from the page. The main reason we're adding Rendertron is for social sharing. Yeah, OK, that is the point; I 100% understand that, I have been in that position. So if you use it for the social bots anyway, you can use it for Googlebot too, and maybe there's no problem. Sure, sure, that's a possibility.
Just make sure to make Rendertron use a cache, because Rendertron makes things a lot slower if you're not using caching. And clear the cache every now and then. Okay, cool. Perfect, nice. Thank you. You're welcome.

Any other questions? That's really weird, I saw a question being submitted on YouTube and now it's gone again, or at least I'm not seeing it anymore. So someone submitted something to YouTube: hi, I have noticed a significant drop in the FAQ listings in the past two months; here's a screenshot. That's why I shouldn't just read things out as I go along, because that's not a JavaScript question; that's a question for the regular office hours. But basically they're saying they added FAQ markup to their page and they're seeing a significant drop in the FAQ listings in the past two months. The fact that you add the markup does not guarantee that you get rich results in search. It can be that another page has better content for these specific questions. It might be that our algorithms have decided to just not show FAQ rich results for the queries that you have markup for. It doesn't affect rankings. It does affect visibility, in the sense that you don't get the rich result snippet if you're not showing up with rich results. And also, the screenshot that you shared doesn't look like a super unusual, large drop. It went up a little bit, it went down a little bit. That seems to me to be the regular change in algorithms over time. I wouldn't worry too much about that.

Okay, now coming back to JavaScript questions, anyone? No? We still have like 12 minutes. Could you explain a bit more about how Rendertron works? Is it a wrapper around Puppeteer, that type of thing? Yes, Rendertron is a wrapper around Puppeteer, in fact, and it has two components.
One component is universally useful if you want to use Rendertron, and one is specifically made for Express.js web servers. So the way that Rendertron works, I'll just show you, because we have documentation on this that has diagrams, which is nicer than me explaining by waving my hands in the air in front of me. Here we go, in our search documentation, in the guides, the implement dynamic rendering guide. Basically, Rendertron is one way to implement dynamic rendering. Dynamic rendering means your server looks at the user agent that comes in. If it's a user's browser, like Firefox, Edge, Opera, or regular Chrome, then you just serve your JavaScript and let the browser produce the page. That's the first part of the diagram, where your server sees a regular user requesting the information, so it serves the single page application, or whatever JavaScript the page needs to actually fetch the content, to load more content in, or to populate search results on a category page. Whereas if it detects a user agent that is from a crawler, like Googlebot, Twitter, Facebook, any of the social media bots, then instead of just sending the response back, it proxies to a rendering solution. The rendering solution takes that URL, opens it in a browser, waits until the page has finished rendering, makes a snapshot of the HTML, and sends the static HTML back to the web server, and the web server then sends that back to the bot or crawler that requested it originally. Rendertron is a thin wrapper around Puppeteer. It basically creates two endpoints, one for a screenshot and one for rendering. We have a demo instance as well, which even has a web interface, just for the fun of it.
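The user-agent check at the heart of dynamic rendering can be sketched like this. The list of bot tokens here is illustrative and incomplete, not an official list:

```javascript
// Sketch of the dynamic rendering decision: requests from known crawlers
// get proxied to the renderer; regular browsers get the normal JavaScript
// app. The token list is illustrative, not exhaustive or official.
const BOT_TOKENS = [
  'googlebot',
  'bingbot',
  'twitterbot',
  'facebookexternalhit',
  'linkedinbot',
];

function isBotUserAgent(userAgent) {
  const ua = (userAgent || '').toLowerCase();
  return BOT_TOKENS.some(token => ua.includes(token));
}
```

In a web server, this predicate decides per request whether to serve the client-side app directly or to hand the URL to the rendering service first.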
So I can go to a website that requires JavaScript to do a bunch of stuff, and then I can, for instance, take a screenshot. Okay, I can't take a screenshot, the demo instance does not do screenshots, fine. Wow, okay, the demo instance doesn't do anything. Ah, no, I know what the problem is, my certificate expired, but that's okay because it's an experiment page anyway. So here we go. This page uses a bunch of JavaScript to actually do all these tests, but if I check the page here for any script tags, we see that there is no JavaScript on this page. So this is just static HTML rendered by headless Chromium through Puppeteer, and that's the main component: you have a server which gets a URL and then produces static HTML that you can send back. The second component is the Express.js middleware. If you are using Express.js as your web server or application server, you can use the Rendertron middleware to integrate Rendertron into your web server. If you are using Apache, nginx, IIS, or any of the other servers, then you would basically configure it like a reverse proxy. That's Rendertron.

So does it only execute script tags? Or, for example, would it be a problem if it ran a Google Analytics reference, that sort of thing? What do you mean? Say you've got a script tag that fires a page view in Google Analytics. Rendertron would then execute that JavaScript and generate the page view, but it strips the JavaScript entirely out of the HTML that it serves back. So you would not get Google Analytics on the HTML that Rendertron generates. Which is not great if that's what you want. A few people have asked for an option to keep scripts in, and I'm like: dynamic rendering's entire idea is to remove every bit of JavaScript from a page. So I don't think Rendertron is the right tool for that.
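Putting these pieces together, the bot-facing side of such a reverse proxy essentially rewrites the crawler's request into a call to Rendertron's render endpoint and, per the caching advice earlier, caches the rendered HTML. A rough sketch, assuming Rendertron's GET /render/&lt;url&gt; route; the cache is a deliberately naive in-memory map, and fetchFn is injected so the sketch stays self-contained (in a real server it would be an HTTP client calling Rendertron):

```javascript
// Sketch: build the Rendertron render URL for a page, and cache rendered
// HTML with a TTL, since re-rendering on every bot request is slow.
function rendertronUrl(rendertronBase, pageUrl) {
  // Rendertron exposes GET /render/<url>; pass the page URL as an encoded path segment.
  return rendertronBase.replace(/\/$/, '') + '/render/' + encodeURIComponent(pageUrl);
}

function createRenderCache(fetchFn, ttlMs) {
  const cache = new Map(); // pageUrl -> { html, expires }
  return function renderedHtml(rendertronBase, pageUrl, now = Date.now()) {
    const hit = cache.get(pageUrl);
    if (hit && hit.expires > now) return hit.html; // serve cached snapshot
    const html = fetchFn(rendertronUrl(rendertronBase, pageUrl));
    cache.set(pageUrl, { html, expires: now + ttlMs });
    return html;
  };
}
```

The TTL is the "clear the cache every now and then" part: pages re-render after it expires, so content updates still reach crawlers.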
What you were describing sounds more like you want server-side rendering. Right. All right. We have time for two, maybe three more questions. Maybe one, if it's one that makes me talk for a longer time. Oh, there's something coming in via chat: a last question on that problem with infinite scroll, if that's okay. Yeah, sure. We are afraid that when we fix it, we should expect serious fluctuations in search results. When we search for the description of an article that loads into the infinite scroll, Google says that there are seven or more articles with this word. Does that mean that Googlebot interpreted our articles incorrectly all this time? I mean, Googlebot saw twice as much text as the original.

I'm not 100% sure I understand where you're going with this. But, wait, where's that link taking me? Let's click on that link. Ah, okay. I think what we are seeing here are different pages with this text in them. But I'm not 100% sure whether that's a problem or not, to be honest. If you change the way that your content and your site structure look, then yes, that will cause ranking fluctuations, because we are effectively seeing changes in all the pages that we have in the index. We might see new pages if you change your URL structure at the same time. That will cause fluctuations, yes. If what you are seeing right now in terms of search performance is what you expect to see, then I don't think that's a problem that needs to be fixed. If you see problems with the way that search works for your site, then that's something you want to fix. Don't fix what isn't broken. As far as I can tell, and correct me if I'm wrong, from what you have submitted through the chat so far, we are seeing all your content, we are ranking your content, and your content shows up in search results. Sounds to me like that's what you want.
And I wouldn't fix this if it isn't a problem. If you are seeing that we are not picking up some of your content, or you notice that your content doesn't perform very well, then you can consider reconfiguring your site. But I wouldn't change anything without being 100% sure that what you are seeing right now in terms of search performance is not what you expect to see. And just the fact that we see the same content for the same words on multiple URLs does not mean that it's a problem. It may just mean that reporting is a little harder, because your content might be spread across multiple URLs and multiple different pages. But if it works the way that you want it to work, and you get the clicks and impressions that you're looking for, don't fix it, because it doesn't seem broken. You're welcome.

Okay, one more question, anyone? I think maybe we have submissions through YouTube as well. Yeah: is there some tool that I can use to track page speed with JavaScript automatically in a dashboard? I believe there's a bunch of tools that do things like performance budgets. You can run Lighthouse in an automated fashion. There's Lighthouse CI, I think it's called, a tool that you can set up to run, say, twice a day, and then you can snapshot the scores and see how the score changes over time. I know that that has been built. I'm pretty sure there are also paid services that do something similar, but I don't have a list of them right now. I would just search for web performance budget tracking, or Lighthouse over time, or something like that, because I'm pretty sure it has been done before and it's probably well documented at this point as well. So Lighthouse is a good tool to track performance over time. I think you can probably also somehow do that with WebPageTest, but I'm not so sure about that. And there are definitely paid solutions out there that do that as well.
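None of this prescribes a specific product, but the "snapshot the scores and watch them over time" idea is simple to sketch. Assuming you already run Lighthouse on a schedule and collect its 0-100 performance scores (the data shape below is made up for illustration), a hypothetical budget check might look like:

```javascript
// Sketch: given a history of Lighthouse performance scores collected by a
// scheduled run, flag the runs that fall below a performance budget so a
// dashboard or CI job can alert on them. Data shape is illustrative.
function budgetFailures(history, budget) {
  return history
    .filter(run => run.score < budget)
    .map(run => ({
      date: run.date,
      score: run.score,
      shortfall: budget - run.score, // how far below budget this run landed
    }));
}
```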
All right, now one last question from the audience. Make it an easy one. Can I ask one that, well, has JavaScript involved? Sure. So I'm using Puppeteer to audit clients' websites, so it's rendering the page and things like that. And as I mentioned before, I noticed that it runs Google Analytics scripts, which I found quite powerful, because I can actually find out if a client has Google Analytics tagged three or four times, things like that. Whereas Googlebot uses robots.txt to stop itself doing that. I'm wondering, since it's a bot, really, should we be blocking it from triggering things like Google Analytics?

I think that is up to you. On one hand, if the client's marketing or analytics or data science departments complain that they get visits in Analytics that look weird, they can filter them out, or you can probably give them something that they can filter by in Google Analytics. You don't have to block Google Analytics just because you are a bot; that is your decision in the end. If you don't want to accidentally skew things, well, if you are requesting the entire website once for an audit, I don't think that's going to make a big difference, unless the website has five visits per month anyway. Then they're going to be like: we have ten visits, that's a 100% increase! And then they find out that's actually you botting the website. But normally it doesn't really matter that much. And I remember that at least the last time I worked with a data science department, they were like: oh yeah, we're removing everyone who works here from Google Analytics, and we also remove all the bots that look weird. You can maybe set the user agent to something that is easily spottable. Yeah, I do that. Then that's an easy filtering question. Maybe Google Analytics knows about it anyhow. Probably.
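The two options discussed, blocking analytics outright or making the bot easy to filter, could be sketched like this. The blocked host list and user agent string are illustrative, not a complete or official list:

```javascript
// Sketch: decide whether an audit crawler should block an outgoing request
// so the crawl does not fire analytics page views. Host list is illustrative.
const BLOCKED_ANALYTICS_HOSTS = new Set([
  'www.google-analytics.com',
  'analytics.google.com',
  'www.googletagmanager.com',
]);

function shouldBlockRequest(requestUrl) {
  try {
    return BLOCKED_ANALYTICS_HOSTS.has(new URL(requestUrl).hostname);
  } catch (e) {
    return false; // not a valid absolute URL; let it through
  }
}

// An easily spottable user agent, so analytics teams can filter the bot
// out even when its requests are not blocked. The name is hypothetical.
const AUDIT_USER_AGENT = 'example-audit-bot/1.0 (+https://example.com/bot-info)';
```

With Puppeteer you would wire this in via `page.setUserAgent(AUDIT_USER_AGENT)` and `page.setRequestInterception(true)`, then in the `request` event handler call `request.abort()` when `shouldBlockRequest(request.url())` is true and `request.continue()` otherwise.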
If you have a really weird user agent, I'm pretty sure that Google Analytics will be like: interesting, let's put that over here. All right. But in the end, it's your decision whether you want to run Analytics or not. Okay, excellent. In which case, thank you all so much for your questions and for joining. Also, thanks to everyone who submitted a question through YouTube. The next JavaScript SEO office hours is in a week's time. It is also already posted on youtube.com/GoogleWebmasters/community. If you search for the post that says JavaScript SEO office hours May 6, I believe that's the next one, you can submit your questions there. I'll make sure that one is also recorded, like this one. And I wish you all a fantastic time. Stay safe, stay healthy, and see you soon again. Bye. Cheers. Cheers. Okay, stop the recording.