Hello, and welcome to a new edition of the JavaScript SEO Office Hours. Today I have 13 people here in the Hangouts. We have a few questions on YouTube, so stay tuned for this episode, so to speak. I'm not even sure if this is an episode. I think it's an episodical thing. It's a series of Hangouts, but I'm never sure if it's an edition or an episode, or what I would call it. Anyway, whatever, I'm digressing. There are a bunch of questions on YouTube. Let's start with the YouTube questions before we go into the audience questions.

So: showing different content to an existing user based on JS cookies, will this hurt my SEO? For example, user A is new. For this user, we show one promotion, like "get started". And user B is already an existing user. For this user, we show the latest promotions. Does this impact my SEO? Kind of, yes, it does. I mean, it doesn't really, but it kind of does, in the sense that Googlebot does not crawl with cookies set. So we would only see the content that a new user would see. I would suggest having different landing pages and then actually exposing both of these through links to Googlebot, so people can also find the latest promotions if you care about that. That would allow you to show the same content to both users and Googlebot; you would just move people to different starting URLs. For instance, you would redirect people if a cookie is present. That is fine, and it wouldn't hurt your SEO per se.

Next question: can a JavaScript front-end technology change cause a drop in Google ranking? We have a few million visitors per month and followed Google's guidelines when we switched from Angular and PHP, so server-side rendering, to Vue and Nuxt with server-side rendering. We lost rankings immediately, about minus 20% traffic. Our overall Lighthouse score went from 40 to 85. So do you think it's possible for a site to drop because of the change in JavaScript? Bonus info: we did notice that other sites have been hacked and that the hackers have made subfolders on these pages with our entire site copied and smart canonicals to spam shop pages. I'm not sure what smart canonicals are, but OK. Could it be that, because of the timing of this, Google has considered some of the hacked sites as the true source of the content that belongs to us while Google has to understand our new JavaScript, and if so, what should we do? Examples of sites, in case these are helpful. They are helpful, but I'm not sure if I have the time to look at them right now.

Generally speaking, a switch in technology should not mean that much of a change in pretty much anything, except I'm guessing you didn't just change the technology. You probably also changed the site structure, or maybe you changed the way that your content is presented. If you made changes beyond the level of which technology serves the site, keep in mind that we don't really care about the technology that runs it, we care about the content. So if you have made changes to the way that you present the content, that would mean that we have to take time to actually re-understand things. It could also coincide with these other hacked sites that are trying to take your content away. You can check if we consider your page as canonical for the content that you produce, so that's something that I would double-check. And ranking can always change a little bit. There are things happening on the web all the time, so ranking changes are not necessarily something that comes from the technology that you choose; other factors might be part of that as well.
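To illustrate the suggestion from that first cookie question: a minimal sketch of a client-side redirect for returning visitors, assuming a hypothetical cookie named returning_user and a hypothetical /welcome-back landing page for the latest promotions (neither name comes from the question), with both pages exposed as separate, linked URLs so Googlebot can discover them:

    (function () {
      // Minimal sketch: send returning visitors, identified by a hypothetical
      // "returning_user" cookie, to a separate, linked landing page at a
      // hypothetical /welcome-back URL. Googlebot crawls without cookies,
      // so it keeps seeing the default new-user page, just like new users do.
      var isReturning = document.cookie.split(';').some(function (part) {
        return part.trim().indexOf('returning_user=') === 0;
      });
      if (isReturning && location.pathname !== '/welcome-back') {
        location.replace('/welcome-back');
      }
    })();

A server-side redirect based on the same cookie works just as well; the important part is that both landing pages exist as separate, linked URLs rather than one URL swapping its content.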
To stay with that ranking question: it could also be that it was an update to the algorithms. Ranking isn't really my area of expertise, but generally speaking, assuming that all the content is visible and present in the rendered HTML, and the website apparently got faster, I wouldn't expect that to be the source of ranking changes. It could be these hacked websites. It could be that the way that you present your content has changed fundamentally and that we need time to reprocess it. That's the same thing as when people are redoing their sites, revamping their sites, changing the way that the content looks and is presented to the user, and then they see changes like this, because fundamentally you have created a new website. If you really just changed the underlying technology and everything else, the way that you show the content and the URLs and all that, stayed the same, then we wouldn't have seen that much of a difference.

Then there's a question: Hi, Martin. I am helping a non-profit, and I'm afraid that the content in a very important part of the website is not indexable, because it's made up of widgets loaded by scripts. Would you be able to confirm, and maybe have some tips on how they could add the content differently to make sure it's indexed by the bots? What you're saying is you're afraid that this is the case, which means you're probably not sure that it's actually a problem. Content loaded by scripts and widgets is not exactly an issue per se. I would plug this URL into any of the testing tools, be it the URL Inspection tool, the Mobile-Friendly Test, or the Rich Results Test, and look at the rendered HTML. If the content that you care about in this section of the page is present in the rendered HTML, then there's nothing to worry about. If it isn't, then you would have to investigate why it isn't in the rendered HTML.

There's a follow-up question, a critical question, on the first question, the first question being the one regarding showing different content based on cookies: how about websites where new users, including Googlebot, are shown an HTML site, but logged-in users are shown a JavaScript homepage without any textual content at all? What happens once you're logged in is not something that search engines really care about, per se. I mean, do what you feel is right. We can't see things behind the login to begin with. So I wouldn't worry about that, to be honest. For SEO purposes, anything behind the login is invisible to us. You're welcome.

I think it's time to take a few questions from the audience. Does anyone have a question today? Yes, hi, Martin. Hi, Christian. I would like to start, because I have two questions, but maybe the first one. It's from a client of mine, and I only have it in German, so I'll try to translate it. You could actually ask the question in German, hypothetically, and I'd happily answer it in English. But I think, yeah, if you can translate it, that would be fantastic for the audience. Yes, yes, I will try. So it's about a single-page application, and it's under a domain which is redirected to the WWW domain. And this causes problems for end users who have a bookmark to the non-WWW domain. It's causing a CORP error, C-O-R-P, because the service worker that delivers pages from the cache for the database content points to the other domain, the one with WWW. And yeah, this is causing these CORP problems. And the question would be: is it OK to redirect to the WWW only if the user agent is Googlebot or a crawler, and show the non-WWW version to the normal page visitors?
So, just making sure I got all of that, because that was a lot. You have a non-WWW domain and you have a WWW domain, and the problem is with people who bookmarked the non-WWW one. Why don't you redirect everyone to the WWW domain, including those who have bookmarked the non-WWW domain? Yeah, because then the client says there's a problem with the database fetching and the service worker, because it shows these CORP errors. Right, but that means that this service worker, or whatever it is that is making these requests, is, while being on the WWW domain, requesting something on the non-WWW domain. Yes. That's the root cause. You should fix the root cause. In that case, your service worker should use the WWW domain to make its fetches, and then you redirect everything to the WWW domain.

So, if that's difficult from a technical point of view, and I'm not into it so deep, so I can't say, would it be OK to do the redirect to the WWW only for Googlebot and not for the normal users? I think that is OK. But to be honest, I think it's as much effort to do that as it is to fix the service worker to make its requests on the WWW domain. I think the effort is pretty much the same. It's different people having to do this, I agree; it's very likely that someone else sets up the redirects versus the person who writes the service worker. But I would argue you can do it. I think it's fine. I don't see an inherent problem with doing the redirect only for Googlebot. It might turn out to be tricky to test things later on, and you might run into situations in the future where it's harder to debug problems with this kind of setup, because you are treating Googlebot differently than normal users. So I would recommend fixing the root cause, which is: redirect everyone to the WWW domain, and then make sure that the service worker does what it needs to do properly on the WWW domain rather than trying to make a request on the other domain. This might be as simple as changing one constant or variable somewhere in your scripts to actually use the correct domain. But if you can't do that, then sure, that's a technical workaround. I would consider it not a solution but a workaround. OK, OK. Thank you. You're welcome. Happy to help. OK. OK.

And maybe, if I may, my second question. Sure, we have time. OK, cool. Another client has a home page, and before loading the main content, he shows content to collect user consent for legal reasons, something like that. And only once the user consent is collected is the main content loaded. This all takes place on the same URL. And the question would be: would it be OK to not show Googlebot this legal user consent page and immediately load the main content? So you treat Googlebot differently than the other users. Would that be OK from an SEO point of view? I think that is fine. Depending a little bit on our heuristics, we might falsely classify this as cloaking, which then might cause issues. But normally, and you would have to test this, it should not be a problem. I do advise a little bit against it, unless there are literally legal reasons not to load the content in the background and just ask for the user's consent on top of it; there might actually be legal reasons not to load content before someone has given consent to something. In that case, yeah, that's fine. That's a workaround that I would say is OK. OK, and you would consider this a low risk that something goes wrong? I would not consider it low risk. OK, so we have to try it. I would try that very carefully and be ready to roll it back if need be. OK, thank you very much. You're welcome.
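Coming back to the service-worker question for a moment: what "changing one constant or variable somewhere in your scripts" could look like is sketched below, with made-up paths and names, since the client's actual service worker wasn't shown. Building URLs from the worker's own origin keeps every fetch same-origin, whichever hostname the site is canonicalized to:

    // sw.js, a minimal sketch with hypothetical paths and names.
    // Instead of hard-coding the non-WWW host (e.g. 'https://example.com/data/'),
    // derive the base URL from the origin the worker is actually running on.
    const DATA_BASE = self.location.origin + '/data/';

    self.addEventListener('fetch', function (event) {
      const url = new URL(event.request.url);
      if (url.pathname.indexOf('/data/') === 0) {
        event.respondWith(
          caches.match(event.request).then(function (cached) {
            // Serve from the cache when possible, otherwise fetch same-origin.
            return cached || fetch(DATA_BASE + url.pathname.slice('/data/'.length));
          })
        );
      }
    });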
Suzuki has a question. Suzuki-san, excuse me. This may not be related to JavaScript. Well, it kind of is, probably. How can I identify the largest content element when my LCP is slow? And question two: how can I find which elements cause a poor cumulative layout shift, CLS?

The LCP, that's a tricky one. I would probably use WebPageTest and look at the filmstrip. Look for the largest jump from the page being mostly white, or whatever the background color of your website is, to suddenly having content; whatever loads in that large step is probably what's holding back the largest contentful paint time. It's very likely going to be images, but it could hypothetically also be text blocks. It's more likely that it's images making your LCP time high, though.

For which elements cause poor CLS, you can look at the requests and then block individual requests. You can also use a script for that; let me see if I can find it real quick. Tobias Willmann wrote a script using Puppeteer for this, which I think is pretty cool. I haven't tried it out, so I don't know how well it actually works, but Tobias usually produces good stuff, so I wouldn't be surprised if this is actually pretty cool. I posted it in the chat, and I'll make sure that I put it in the YouTube description as well. I might forget that, let's face it, but I'll try to remember and put it in the YouTube description when this video goes up.

But basically, I can probably show this. Maybe I can show this on an example. Do I have a good example, though, where I load different pieces of content? Yeah, I think I do. Well, let's find out. Let's see, 50 lines of code, a blog, and then if I go... I'll share my screen with you in a second. Let's see, a Chrome tab, and I want to share this Chrome tab. So if I'm not sure what causes things to go wrong here, I can go into the Network tab. I can load things. And then if I'm wondering, actually, let me get this out of the way here, maybe I start with images. There should be this one, for instance. I can say I want this URL to not actually be loaded, and then I can load the page again and see if things are shifting or not. And you can see that this image is no longer being loaded. You can basically go through the different elements and then run your metrics to see if... no, hold on, this is where I want to go. You can run your metrics to see if that makes a difference or not. I think it respects request blocking, I hope. But anyway, there is a Puppeteer script that does this for you, which is even nicer and gives you a better feeling for what's happening. But I'm pretty sure... wow, why is Lighthouse taking so long to warm up? Well, to be fair, my computer is acting up a little bit this week, so I'm not super surprised. Not super sure what we are getting. Unfortunately, we don't get to see if the image was loaded or not. But I'm pretty sure that if I now go and block pretty much every image, we will end up getting a better score. So you can have a look at which requests make the biggest impact on your scores. Oh, Lighthouse 6 gives you this info. That's awesome, I didn't know that we have that built in. Dave, thank you very much for following up on this one. So yeah, PageSpeed Insights does that. Lighthouse 6 will roll out, I think, with the next Chrome release, probably. Or you can install it from GitHub if you want to run it. That's pretty cool; then it tells you which elements are affected.
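Alongside the request-blocking demo above, the browser can also report the elements directly. Here is a minimal sketch using the standard performance APIs, which you can paste into the DevTools console; this is not the Puppeteer script mentioned above, just plain web APIs:

    // Log the current largest-contentful-paint candidate and its element.
    new PerformanceObserver(function (list) {
      var entries = list.getEntries();
      var latest = entries[entries.length - 1];
      console.log('LCP candidate at', latest.startTime, 'ms:', latest.element);
    }).observe({ type: 'largest-contentful-paint', buffered: true });

    // Log every layout shift and the elements that moved, ignoring shifts
    // that happen right after user input (those don't count towards CLS).
    new PerformanceObserver(function (list) {
      list.getEntries().forEach(function (entry) {
        if (!entry.hadRecentInput) {
          console.log('Layout shift of', entry.value, 'caused by:',
            entry.sources.map(function (source) { return source.node; }));
        }
      });
    }).observe({ type: 'layout-shift', buffered: true });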
Anyone else with a question from the audience? We have a few questions on YouTube if no one from the audience has a question. All right, so I'm not sure I understand this question, but maybe some of you can help me with it. Why would first contentful paint and largest contentful paint be far apart in Lighthouse and PageSpeed Insights? The Performance tab says they happen at the same time, and as a real user I see the whole page load all these elements at the same time. Well, Lighthouse runs on your computer unless you run it from web.dev/measure, while PageSpeed Insights runs from the cloud, so they might give you different data, if that's the question. Also, first contentful paint and largest contentful paint do not necessarily have to happen at the same time; especially on a device with a slower CPU, these can diverge quite substantially. They might happen at the same time on your device, as in your computer, but things can take longer on a slower network connection or on a slower device with a slower CPU, like an older mobile phone. That happens.

How is largest contentful paint by page type determined? What does that mean, what page type? I don't fully understand the question, because I'm not sure what you mean by page type. The element seems to shift. Well, that has nothing to do with largest contentful paint; that would be CLS, cumulative layout shift. Jennifer, if you want to ask this question with a little more detail in the next Hangout, let me know, or basically just go to YouTube and put it in the comments of any of the upcoming Hangout threads, because I'm not sure what you mean by page type. How largest contentful paint is determined is explained quite nicely on web.dev, but I'm not sure what you mean by page type.

Then: we have a sports betting website that streams sports games and data to end users. We use a JavaScript framework, Ember.js in this case, to render the sportsbook views. The page URL structure is mysite.com/livesports#football/England/competitionID/matchID. Our customers want us to get rid of the hash sign and provide them with slash routes in order for Googlebot to crawl and index them. However, when the match time comes and the match ends, the URL returns a 404. Question: is there a real reason to remove the hash? Will Google index those temporary JavaScript routes, and if yes, what will happen if, a few days later, the page returns a 404 status?

So generally, if you want these things indexed, you need to remove that hash. But if these URLs are short-lived, and by short-lived I mean the 90 minutes of a match, then I don't see why you would do that. If you do have them up front, though, and they are there for, like, a week, or at least longer than a couple of days, and you want people to find them before the match or while the match is running, then it makes sense to have these URLs present and rendering properly a couple of days before the match. It's fine for the URLs to go 404 afterwards. What would happen is: if you give us enough time to discover these URLs, and enough time can be a few days or even a week or so, because we might not crawl your site that often, then if we do discover them and crawl them, we would index them. We can't do that if there's a hash in there, so people would not find this while the match is running or before the match starts. So if your customer relies on, or wants, people to find this content before the match or while the match is running, then you should definitely get rid of that hash in the URL. And once you are returning a 404, even via JavaScript, because the match is over, that's fine; we'll eventually see that as a 404 page and then remove it from the index again. So it depends a little bit on what your customer wants and how your customer thinks about these URLs. It's fine to have these temporary URLs indexed, nothing wrong with that. But if you want them indexed, you can't use a hash route.
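To sketch the difference being discussed: this is what switching from a hash route to a History API route can look like. The path structure and the renderMatchView function are made up for illustration, not taken from the question:

    // With a hash route, everything after "#" never reaches the server and is
    // ignored for indexing, so /livesports#football/England/123/456 is all one
    // URL from Google's point of view.

    // With the History API, each match gets a real, crawlable path instead.
    function openMatch(competitionId, matchId) {
      var path = '/livesports/football/england/' + competitionId + '/' + matchId;
      history.pushState({ competitionId: competitionId, matchId: matchId }, '', path);
      renderMatchView(competitionId, matchId); // app-specific rendering, assumed to exist
    }

    // Once the match is over, the server should answer these paths with a real
    // 404 status so the temporary URLs drop out of the index again.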
Can we somehow only use JavaScript to make a working comment form? Yeah; with a comment form, I assume that's a form that you use to enter comments, and you can totally use JavaScript to do that. I'm not sure I understand the question, so if you have follow-up information, please post a follow-up question in the next Hangouts. Right, that's it for the YouTube comments, I think. Let me reload the page, because sometimes people are commenting while the Hangout is happening. No. Any more questions from the audience, then? Now is your time.

I've got a quick one, Martin, if that's all right. Sure. I came across a couple of questions and stuff in the forums and so on where people are pre-rendering, but they're kind of serving everything with the JavaScript after the pre-render. So they're kind of pre-rendering, but leaving the JavaScript in place. It's my understanding that's kind of missing some of the point of it. Are you better off removing that JavaScript? Does it particularly cause trouble if you do leave it in? Will Google try and then render the page anyway?

That's a good question. If I think about this: 90% of the time, people probably don't have to pre-render for Googlebot at all. They might want to pre-render for other bots that don't run JavaScript, but unless they have a technical reason that they should really fix elsewhere, using pre-rendering is a workaround. And if you are then not removing the JavaScript from the pre-rendered page, you're kind of missing the point. Because then, sure, we do have the content in the initial HTML, but if your JavaScript kind of overrides it, and overrides it incorrectly, then we might end up still seeing the incorrect content or the missing content or whatever it is. So if your JavaScript just overrides everything and doesn't work properly, and that's the reason you pre-render in the first place, meaning you pre-render for Googlebot because your JavaScript causes the content to be incorrect or missing or whatever, then do test very, very carefully whether your pre-render solution actually fixes the problem when the JavaScript remains on the page. If it doesn't, I would just fix the JavaScript and get rid of pre-rendering. If it works, don't touch it; if it's not broken, don't fix it. It's fine, you can do it. I kind of feel like if your JavaScript renders fine in the first place, then pre-rendering is just a way to burn money, because servers cost money and pre-rendering usually adds server load. So I would say if you pre-render, the page should come out without JavaScript at the end. But if it works for you otherwise, then fine by me. You do miss out on the benefits, though. Thank you. Thank you for the question. Anyone else with questions? I see there's a bit of Twitter chat.
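For the comment-form question a bit further up, here is a minimal sketch of a JavaScript-only form, assuming a hypothetical /api/comments endpoint and a hypothetical #comment-form element, neither of which is mentioned in the question:

    // Markup assumed on the page:
    // <form id="comment-form"><textarea name="text"></textarea><button>Post</button></form>
    var form = document.querySelector('#comment-form');

    form.addEventListener('submit', function (event) {
      event.preventDefault(); // stop the full-page form submission
      fetch('/api/comments', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ text: new FormData(form).get('text') })
      }).then(function (response) {
        if (response.ok) {
          form.reset();
        }
      });
    });

If the posted comments themselves should be findable in search, make sure they also end up in the rendered HTML of the page, in line with the rendered-HTML advice earlier in this Hangout.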
Oh, people are posting random stuff on Twitter. Sorry, do we have a question? Yes, Martin. How are you? Great. I have a quick one. It's probably not related to JavaScript, but we're thinking about implementing FAQ schema markup on some of our pages. And I'm wondering if adding this JSON-LD data inside the page is going to decrease the page speed of the website a little bit. And also, whether this is a good practice if you are trying to acquire customers for, let's say, paying keywords, the keywords that are bringing direct sales, rather than informational or transactional queries. I don't know if you got my point. I got your point. That's pretty much it. Thanks. You're welcome. Great question.

So FAQ markup is most helpful for informational queries, not really for customer acquisition, especially because being too blatant about advertising your services or products as an answer to a non-product-related question doesn't really help that much. FAQ is more a way to give people information around something that you offer more quickly than having them go to your website. Also, adding JSON-LD blocks does not really impact page speed that much. It does add a few bytes to the website, but that's insignificant; if you look at the amount of JavaScript that you usually ship, the amount of images that you usually ship, it's a small, small, small percentage. And it doesn't really make the browser slower while parsing, because it basically parses the script, sees it's not JavaScript, and kind of skips it. So I wouldn't worry too much about page speed implications. I think having FAQ data can help, especially for customer-service-related informational queries, but I don't think it has much impact in terms of customer acquisition. I might be wrong about that last part, though. I agree with you, actually. Thank you. You're welcome. Awesome.

Do we have other questions from the audience? No further questions? So: five, four, three, two, one. All right. In that case, thank you so much for watching, thank you so much for posting questions on YouTube, and for joining this Hangout and asking them live as well. I hope you have a fantastic day. Stay safe, take care, thanks for joining, and see you soon in the next JavaScript SEO Office Hours. Bye-bye. Bye-bye.
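As a footnote to the FAQ markup question above: a minimal sketch of what such a JSON-LD block can look like when it is added with JavaScript. The question and answer text here is made up purely for illustration:

    // Build a schema.org FAQPage object and add it to the page as JSON-LD.
    var faq = {
      '@context': 'https://schema.org',
      '@type': 'FAQPage',
      mainEntity: [{
        '@type': 'Question',
        name: 'How long does shipping take?', // made-up example question
        acceptedAnswer: { '@type': 'Answer', text: 'Usually two to three business days.' }
      }]
    };

    var script = document.createElement('script');
    script.type = 'application/ld+json';
    script.textContent = JSON.stringify(faq);
    document.head.appendChild(script);

The serialized object is only a few hundred bytes, and because the script type is not JavaScript, the browser skips it while parsing, which is the point made above about the negligible page speed impact.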