Hello, and welcome to another JavaScript SEO Office Hours. It's me, Martin Splitt, from the Google Search Relations team, and I'm here to answer your questions on JavaScript SEO. A few questions were already submitted, so I'll go through the submitted ones first, and then we'll take questions from the audience. If you want to join these live hangouts, you can go to google.com/webmasters/connect — we have a calendar where they are scheduled. Or you go to youtube.com, to the Google Webmasters channel's Community tab, where I post the link to these recordings a few minutes before the stream starts, so you can join live. Or you can just use the YouTube post to post your questions and then see the answers later when the recording goes up.

Cool. So let's have a look. One thing first: if you ask questions that are not related to JavaScript, please ask those in the regular SEO Office Hours, the Webmaster Office Hours, because this one is specifically aimed at JavaScript questions and problems.

Right. So: is it fine to serve the content of pages without any JavaScript for bots? Is that considered cloaking? No. If it is the same content, or roughly the same content, it doesn't matter if there is JavaScript or not. And if it's slightly different, that is what we would call dynamic rendering — we have a guide on dynamic rendering as well — and that's perfectly fine. That's not a problem.

Does it decrease the time to download and increase the number of pages Googlebot can crawl within the same budget? Maybe, maybe not — that depends on many different factors. With dynamic rendering, when you render the page without JavaScript as the request comes in, it usually takes longer, because you have this extra rendering step in the middle. So that sometimes makes the site a little slower, but that's normally negligible and not a big problem. Again, it depends on your setup. You should measure that time.
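Dynamic rendering, as described above, normally keys off the user agent of the incoming request. A minimal sketch of that check — the bot list and the server wiring here are illustrative assumptions, not an official or exhaustive list:

```javascript
// Illustrative (not exhaustive) list of crawler user-agent substrings.
const BOT_PATTERN = /googlebot|bingbot|yandexbot|duckduckbot|baiduspider/i;

function isBotUserAgent(userAgent) {
  return BOT_PATTERN.test(userAgent || "");
}

// In an Express-style server you would branch on this check:
//   if (isBotUserAgent(req.headers["user-agent"])) -> serve pre-rendered HTML
//   else -> serve the normal client-side-rendered app
console.log(isBotUserAgent("Mozilla/5.0 (compatible; Googlebot/2.1)")); // true
console.log(isBotUserAgent("Mozilla/5.0 (Windows NT 10.0) Chrome/90")); // false
```

As the answer notes, serving roughly the same content this way is not cloaking — the rendering path differs, not the content.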
It might save you a few requests in terms of crawl budget. If your JavaScript is client-side rendered and makes a bunch of API requests to fetch resources and then create the content from them, you would save those requests, and that means you get a few more requests in your crawl budget. But then again, crawl budget is only relevant if you have a really large site — I would say a million pages and up — and there are other factors that go into crawl budget, so you might not see an increase in pages being indexed. But you might; it's not impossible.

OK. Hi, Martin. Is a client-side JavaScript redirect treated like a 301 — immediately considered permanent — like a 302 — temporary — or something else? It's considered a redirect. We don't really make that much of a difference between 301 and 302. It does have a different outcome for users: with 301s, browsers cache that and immediately jump to the target destination, whereas with a 302, the browser basically requests the original URL first and then jumps to the target. Client-side redirects, for browsers, are pretty much like 302s. So if you can avoid them, I would. If you cannot, that's OK too — it doesn't really hurt anything much, so you don't have to worry about it big time.

Someone wants to join the session but doesn't have the link, so I'll just quickly post the link here as well.

We rebuilt our website in React and implemented a mobile navigation from one of the component frameworks, which is not in the DOM until a user clicks on the hamburger icon. Could that have a negative effect on how Googlebot sees our page? None of the links are visible in the HTML. You answered your question yourself: if it's not in the HTML, we're not going to see it. That's what's going to happen. And if this is your navigation, and your navigation is the only thing that tells us about your page structure, that's not great.
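The distinction being made here — links present in the DOM but hidden, versus links injected only on click — can be sketched like this. The markup and option names are made up for illustration:

```javascript
// Two ways to handle a collapsed menu. Only the first is crawlable, because
// Googlebot reads the DOM but does not click the hamburger icon.
function renderNav(links, { keepInDom }) {
  if (!keepInDom) {
    // Menu markup is created later, on click — Googlebot never sees it.
    return "";
  }
  const items = links
    .map((link) => `<a href="${link.href}">${link.text}</a>`)
    .join("");
  // In the DOM from the start, just visually hidden until the user opens it.
  return `<nav hidden>${items}</nav>`;
}

const links = [{ href: "/movies", text: "Movies" }];
console.log(renderNav(links, { keepInDom: true }));  // <nav hidden>…</nav>
console.log(renderNav(links, { keepInDom: false })); // "" — nothing to crawl
```

In a React component, the equivalent choice is rendering the menu with a CSS class that hides it, rather than conditionally rendering it only after the click.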
I would argue that it's always better to have the links in the HTML, in the DOM — maybe hide them, maybe move them off-screen, whatever — but don't just remove them entirely. You can totally have something that is invisible in the DOM and only shows up on user interaction. We wouldn't do the user interaction, but as long as it's in the DOM, we would see the links. Whereas if you really need a user interaction for the links to be injected into the DOM, then that's what we are not seeing. So that's not great.

Is there a difference in how Google handles history.pushState versus history.replaceState? Good question. I don't know. I think there isn't, but I don't know for sure, and I would have to test. You can probably test that as well. I'm not 100% sure. I would assume that there isn't a big difference, but I am really, really not sure. And I'm not sure if we would treat replaceState as a redirect — I would have to check. That's a really good question. I like hard questions. I'll come back with an answer, maybe on Twitter. We'll see.

Thanks, Martin. Sorry for that. Since the bot should not really care about custom web fonts, is it OK to serve the bot a page that loads faster and does not rely on the custom fonts? Could that have a positive effect? I wouldn't do it. Yes, it could potentially have a positive effect, but it also means you have a more complicated setup, because you then inherently have to figure out whether this is a bot or not, and then have additional code that deals with that. Additional code usually also means additional bugs, no matter what you do — there will be weird side effects that you might not have foreseen. I just wouldn't do something like that for such a minuscule improvement.
A better way of dealing with this is to prevent us from loading the web fonts — put the web fonts somewhere where you can disallow them in robots.txt so that we can't fetch them, if you really care about this sliver of performance — and set font-display in your CSS to swap, so that browsers swap in default fonts first and you don't have the flash of invisible text upfront. But honestly, this sounds like something that isn't broken, and you just want to squeeze these last few seconds out of it — don't. The complexity that you incur is higher than the benefit you potentially gain, so I would not do that.

As mentioned in the JavaScript SEO basics article — yay, someone's reading my documentation, that's great — using meaningful HTTP status codes can be impossible for SPAs with client-side routing. Yeah. The article goes on to suggest using a redirect to a server-rendered URL for this purpose. That solution seems like overkill for a 404 page. No, it's not, because you should be able to configure your server to set a 404 status code for some URL, and then you just redirect to that URL. I don't see why that's hard. I think in Express.js that's like five lines of code, including the client-side part.

Does Google really not recognize a 404 in this day and age? Maybe you can shed some light. We do. We do. If we think that it's a soft 404, we do usually flag that, but it's such a bad idea, and it's so easy to prevent. If you don't want to do the "overkill", then just noindex the page that is a 404. That's also cool — that's one line. That's why I have both examples in there. We do recognize soft 404s normally, but there is a risk, and there will always be a risk, because what we are doing there is that our machinery has to guess, whereas there are three mechanisms where we wouldn't have to guess. The first is the 404 as a server-side response, which is not available in a client-side rendered and client-side routed application — I understand that.
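The no-guessing mechanisms described here really are only a few lines. This is an illustrative, framework-free sketch, not the exact Express code from the article — the /not-found path and the markup are made-up assumptions:

```javascript
// Server side: answer one dedicated URL with a real 404 status code, and
// serve the SPA shell for everything else. Plug this into http/Express.
function routeResponse(url) {
  if (url === "/not-found") {
    return { status: 404, body: "<h1>Page not found</h1>" };
  }
  return { status: 200, body: '<div id="app"></div>' };
}

// Client side, in the SPA router's catch-all for unknown routes, either:
//   window.location.replace("/not-found");   // redirect to the real 404 URL
// or the one-line alternative — keep the soft-404 page out of the index:
function noindexTag() {
  return '<meta name="robots" content="noindex">'; // inject into <head>
}

console.log(routeResponse("/not-found").status); // 404
console.log(noindexTag());
```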
That's one way of not having us guess, and actually having us know that this is a 404. The second is the noindex, where we don't know it's a 404, but we wouldn't put it in the index for sure — so again, no guessing. Or the redirect to a 404 page — also no guessing. So if you can make something more solid, why wouldn't you? Especially if it's a really, really small amount of code that you have to write to make that happen. If you're OK with the risk of potentially ending up with a 404 page in the search results, go for it, but then don't complain to us later. There are solutions for this that make it clear that the page shouldn't be in the index. You can use them or not use them — that's fine. Most soft 404s are being caught, but I know at least one example in the recent past where that didn't work. And then people were like, why is this showing up in search? It's an error page. And I'm like, you're not telling us it's an error page. You're not even saying the word "error" anywhere on the page — you're just showing unrelated things. So yeah, you can choose not to implement this, but then you'll live with a little bit of risk.

Cool. So those were the YouTube questions. Thank you so much for submitting. I hope the answers were helpful. If not, you can totally ask follow-up questions in next week's edition. And I see that we have a bunch of people here in the audience, and I wonder if the audience has questions as well.

Hi, this is Gayatri, and I joined a bit late. My question was about dynamic rendering, and I think I missed that part. Can you just go through that again, please? My name is Gayatri Sampath. Did you ask a question on YouTube? Yes. Why am I not seeing it then? Let me see. Ah, yeah — okay, so I see you asked how to join the session, and then I'm not seeing your question. Could you ask your question again? Okay, just give me one minute. Why am I not seeing the question? Let me search at the top.
Actually, the issue was that we are using React.js for our website, and the navigation bar links are not showing up as links in Google Search Console. When I do a live test and view the crawled page, I don't see the links — it's just dumped as a chunk of text. So we went back to the developer team, and they told us that dynamic rendering is in place and we don't have to worry about this. But I'm concerned that the link juice of the main domain won't be passed on to the navigation bar categories, or the content silos, and then on to the individual pages. So I wanted to confirm with you: if dynamic rendering is in place for our website, how do I confirm that it has been implemented properly and doesn't hurt our SEO?

That's a good question. It depends a little bit on the dynamic rendering solution that you have in place. If it's Rendertron, Rendertron adds an HTTP response header to a page that it rendered. I can actually probably show this — give me a second, I need a new Chrome tab for this, and I need a Rendertron instance. Okay. So if you are using Rendertron — which I don't know if you do, and you don't have to; there are other solutions, and I'm not saying this is the right one, just one where it's relatively easy to spot what's happening. Let's see, I think this needs to be HTTP. So I render and serialize this page, and I get — this is now a dynamically rendered page, and I can tell, because if I go into the network tab — and I'll make this larger real quick so that people can actually read what's happening here — if I load this again, I get the headers, and the response headers contain an X-Renderer: rendertron header. So Rendertron makes it quite easy to see whether it has been used or not. If you see that in the response headers in Search Console, then you know that Rendertron was used. If not, it's relatively easy as well.
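The header check demonstrated here can also be scripted. A small sketch — the exact header name and value may differ between Rendertron versions and setups, so verify against your own instance:

```javascript
// Returns true if the response headers carry Rendertron's marker header.
// HTTP header names are case-insensitive, so normalize before comparing.
function wasRenderedByRendertron(headers) {
  return Object.entries(headers).some(
    ([name, value]) =>
      name.toLowerCase() === "x-renderer" &&
      String(value).toLowerCase().includes("rendertron")
  );
}

console.log(wasRenderedByRendertron({ "X-Renderer": "rendertron" }));  // true
console.log(wasRenderedByRendertron({ "Content-Type": "text/html" })); // false
```

You would feed this the response headers from a fetch of your URL, or just eyeball the same headers in the DevTools network tab, as shown in the recording.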
You can go to, let's say, the mobile-friendly test — or actually the rich results test; I'll just go for the mobile-friendly test. If you go to the mobile-friendly test — and, oh, my screen is still too large for the windows that I put on top of it — you would look into the rendered HTML. What makes me nervous is that you say you don't see the links in the rendered HTML. That is not great. Okay, into the layout again. Ah, yeah.

Anyone having a question? Silence — now is your time. Actually, I do have a follow-up question on that. Sure. Don't header bar links have more priority than footer links? Footer links carry lower weightage, right, when it comes to Google? It usually doesn't matter that much. The answer is, as usual, it depends, but normally it doesn't have that much of an effect — it's not something that I would invest a lot of time in. Okay, good. Awesome.

Are there any issues with using web workers to process something quite large in the background and then spit that data back? Oh boy, I feared this question would come. Yes — I did a test on web workers, and I noticed that we are not perfect when it comes to them. That's an understatement. We are not talking much about web workers yet because — I think it's a fantastic technique to offload heavy work to the background, to a separate thread basically — but I noticed that Googlebot isn't dealing perfectly with them yet. We are working on that, but currently there are certain limitations. I tested what happens if you do an asynchronous communication flow with web workers: basically, I ask the web worker, load something for me, then it asynchronously loads something and comes back later. That doesn't seem to work reliably in Googlebot at the time of this meetup — of this recording.
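The asynchronous flow being tested looks roughly like this. A hedged sketch with a callback wrapper — the message shapes and the stand-in worker are illustrative, and per the answer above this round-trip is exactly the pattern to test carefully in Googlebot:

```javascript
// Ask a worker to load/compute something and hand its reply to a callback.
// In a browser, `worker` would be `new Worker("worker.js")`, and the reply
// arrives asynchronously via a message event.
function askWorker(worker, message, onReply) {
  worker.onmessage = (event) => onReply(event.data);
  worker.postMessage(message);
}

// Stand-in "worker" that echoes the request uppercased, so the flow can be
// demonstrated outside a browser.
const fakeWorker = {
  postMessage(msg) {
    this.onmessage({ data: msg.toUpperCase() });
  },
};

askWorker(fakeWorker, "load the movie list", (reply) => {
  console.log(reply); // "LOAD THE MOVIE LIST"
});
```

If content that reaches the page only through such a reply is missing from the rendered HTML in the mobile-friendly test, that matches the limitation described here.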
This will hopefully change soon, but at this point we don't have guidance on web workers yet, because we hope to fix that rather soon — but we'll see; there are other things with higher priority at this point, because web workers are still a relatively exotic feature on the web platform. We would love them to be less exotic, because I think they're a great way of offloading work from the main thread. But be careful and test that very carefully, because there clearly are limitations, especially when the worker does something asynchronously. Right, okay, thank you. That's a really good question. Wow, holy moly, some interesting questions.

I also saw a question from Rina that I apparently missed earlier: a gyroscope-based site using Angular, 10K URLs, which are the most important content on the site and have good interlinking, but still not a single page is indexed. It looks like these pages don't exist for Googlebot — even in the log file I don't find these URLs. However, the URLs work fine in live tests and the other tools. Well, the 41 requests — that doesn't say much, but I'm wondering what happens if I throw this into the tools. So: is there a status in Search Console like "discovered but not indexed", or are we waiting for the crawl queue to do things, or what's the background on these? Let's see. All right, so this is mobile-first indexed. We did crawl this. What we're saying is that this is non-canonical — that's interesting. Okay, let's run this through the mobile-friendly test. So if I run this through here — because it is possible that we are not seeing something, it is possible that there is a caching issue, and none of these things would necessarily surprise me. I can't just get a link from any of these tools — everyone has some sort of redirector in here. Okay, when I look at it — hmm, okay, let me share my screen, because I think Rina is, or was, in this hangout. She no longer is; that's unfortunate.
I'll show it anyway, hoping that she sees the recording. So if I share this tab, you should be seeing my screen in a moment. I think you see my screen now; I'm not so sure — the layout doesn't help. Let me change the layout real quick to spotlight, and then hopefully my screen is showing up. Can you see? I can see that. Awesome.

So here we see that /watch/movies is just the header and the loading spinner. And I'm not sure what I would see if I went to that website directly, movies/watch — pretty sure I wouldn't see just the loading spinner. I'm actually seeing just the loading spinner. So that isn't great. But okay, right. So we see a loading spinner here. If I go to the main page, I see the same. No — now I actually see content; that's great. Okay, so this page does load a bunch of content. And was it movies/watch, or did I get that wrong? Watch slash movies — /watch/movies, okay, let's see. Interesting, right. So /watch/movies, uh-huh. Well, okay, so we're not seeing that, but if we look at what we are seeing here: we see the header and the loading spinner. That's what we see in the mobile-friendly test for the URL /watch/movies. And if I test the main domain, I see a header and a spinner. So as far as Googlebot is concerned, we are seeing two similar pages, which means we end up eliminating one for the other. And if I were to guess: you're trying to use a service worker, and maybe that's failing somehow. So I would definitely debug this with your developers, because that's not something that you would want, I would say. Good.

And then there was a question in the chat, I believe. Yes, one easy question: JavaScript used for accordions — is it a problem to access the content behind them, for instance the info tab? Generally no. As long as it is in the DOM, we should not have a problem, or inherently have a problem, accessing it. Ah, you're back, okay.
Hi, I just had a look at your website. Do you hear us? Hello? Rina? Because we hear you. All right, going back to the question with the Sky URL. Hi. Hello. Do you hear us? Rina? Hello — yes, we do hear you. Hi. Hi, yes. Hello. Do you hear us? Am I already here? We do hear you. Sorry, I missed that. Not a problem — you're in luck, I still have it open, so I can show you what the problem seems to be. Give me a second; I just need to reorganize my windows and also the layout. I'll present the Chrome tab. I think it's this one. Yeah.

Okay, so basically — okay, so our problem is... Hello, yes. We still hear you. Hello. Hi. Rina, we could hear you initially; now we don't hear you anymore. Okay, okay, so I'll start now. So basically our problem is: we do have mainly movies content, right? But still not even a single URL is indexed in Google. However, things look fine with the mobile-friendly test and all. So this is a directory page only — it's a subfolder, /watch/movies. There are multiple movies, with a dedicated page for each movie; this is not a particular movie page. Ah, right. Okay, but the thing is, this URL, /watch/movies, is showing us a spinner. No content here — there's only the main navigation, and then there's the spinner. No, no, no. It's not loading... Yeah. But that's what I'm seeing here. That's what Googlebot would use. Yeah, because... Right. Wait, let me see. Can you open the live site? Ah, yes. Um... no, no, no. Wait. Like this: /watch/movies. Actually, I'm just going to click on your /movies link. So this is your live site, right? Yeah. So click on any of the movies — like "One Day". This is the actual movie page; that was a subfolder only. Okay, then let me have a look. It loads forever — sorry for that. Yeah. That looks good. That looks like we have a bunch of content. That's pretty nice. Okay. So, have we seen this URL? Let me check — 17 days ago. Indeed, we don't know about this URL yet, for some reason. Interesting.
But these URLs are not indexed. Even in the log details, I don't find any movie URL. That's an interesting question. It looks like we have discovered it from the sitemap, but it doesn't look like we have done the rest. So let's see what is here. Do we see this properly? So /movies is indexed — that one we know; that one we have in the index. But then this one, which is clearly linked, we haven't. And this happens for the entire site. Only one particular section, the episodes, we do have. But the sitemap is already submitted and in place, and even then the URLs are not crawled — only discovery is there. Discovered but not crawled. Is there any other error? Right. I don't know why that is, but I can try to find out why we are not crawling these. I'm pretty sure I can have a look at why this is happening; I don't have an answer off the top of my head. Awesome. Okay: /movies, /tv, /music — the content under each of those folders is not indexed. So all of these main pages are indexed, but none of the subpages are. And I don't know why either. Okay, I'll have to have a look at this, because I don't know off the top of my head. Yeah. So there is definitely a link here on the page. Do we know about this URL? Have we discovered the URL — that would be the real question. And if we have discovered the URL, why have we not crawled it? Okay, interesting: we haven't even discovered this URL. All right, I'll have a look at what's going on here. If it's a problem on our side, then we have to fix it. If not, then I would probably respond on Twitter or something. If you ping me on Twitter publicly, then I can potentially say something like, well, there's a problem with this and that. But it looks like we should be able to find this. Okay. Interesting.

Coming back to the original question about the Sky accordion: I'll try out the URL that you have given me here in the chat, and I'll see what we see when we go to that page.
Because theoretically, it's not a problem to have an accordion. Wait, where is the accordion to begin with? So you said the info tab. Ah, that's like a tab thing — not really an accordion, is it? No, no, it's an accordion. You're right. So I'll take this into one of our testing tools, like the mobile-friendly test. I really need to get rid of this overscan issue; I say that every week, and then I still don't do it because I'm so busy with other things. I'm too busy to actually fix the overscan problem.

So for accordions, or any UI element, it's not a problem if the content is invisible. It should just be in the DOM. If it's in the DOM, we can discover it and work with it. If it's only injected upon a user gesture — there was an earlier question like that: we have a menu that is only inserted into the DOM when you click the button. Googlebot doesn't click, so Googlebot wouldn't click the button and wouldn't see the content that gets injected. Now I need to find the content that was hidden in the accordion. So if I click on info — I'm getting the German version, sorry for that — it's in the DOM, so we can see it, and we can potentially index it as well. So that's not an issue.

Any other questions while we're here? Martin, I do have a question — it's about structured data, but it's off topic. Can I ask it here? Sure — no one else is asking questions, so go for it. Okay. So we have a fairly big website, which has around thousands of pages, and we're looking to implement structured data markup. I understand that the organization-level structured data can be implemented in the headers across all pages of the website. But if I have to do it for individual pages or categories, such as articles, or blog content, or news, or contests, do I have to go to each page manually and add the structured data for each of them in the first leg of implementation, or is there a better way to do it?
So optimally, your CMS helps you with that in some way, because your CMS should know what kind of page each page is. If your CMS does not have any support for this, then maybe talk to your CMS provider or to your developers, because developers should be able to make decisions based on the type of page and then programmatically add the structured data markup. That should be the easiest way to do it. There is a possibility that you can do it with Google Tag Manager, if you're really, really careful, but that is more brittle than implementing it in your site through your own developers or through the CMS. So if you really cannot make changes to your own code, then maybe try to figure out a programmatic way of detecting which kind of page it is, and where to pull the data for the structured data types from. But I would advise against that unless you really, really have to. Okay, so even with this implementation, can it be done for the existing pages as well? Sure, sure. Okay, got it.

And I have one more question. It's about review ratings in Google organic search. Recently, in September 2019, Google said that it is no longer going to show review stars for organizations and local businesses. So with that, can you still add reviews and give Google the possibility of displaying them when they show up organically, instead of us prompting for it? And if we do that, do we have to enter the review manually every time it changes, or does it pull automatically from the Google My Business account or something like that? I don't know. I don't know about Google My Business, and I don't know the policies behind that structured data, so I don't know. Okay, got it. Thank you so much. You're welcome.

All right. Sometimes people are still submitting questions via YouTube — let's check if we have additional comments. No, not today. Any further questions? Audience questions invited. If there are no further questions, I would say... Martin, hi. It's me, Ferhat. Hi.
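Coming back to the structured-data question: the programmatic, per-page-type approach described above could be sketched like this. The page object shape and the type mapping are made-up assumptions for illustration; the `@type` values are real schema.org types:

```javascript
// Generate a JSON-LD snippet from the CMS's knowledge of the page type,
// instead of hand-editing every page.
const TYPE_MAP = {
  article: "Article",
  news: "NewsArticle",
  blog: "BlogPosting",
};

function structuredDataFor(page) {
  const data = {
    "@context": "https://schema.org",
    "@type": TYPE_MAP[page.kind] || "WebPage",
    headline: page.title,
    datePublished: page.publishedAt,
  };
  // The template layer drops this into the page's <head>.
  return `<script type="application/ld+json">${JSON.stringify(data)}</script>`;
}

console.log(
  structuredDataFor({ kind: "news", title: "Launch", publishedAt: "2020-05-05" })
);
```

Because the markup is derived from the page type at render time, it covers existing pages as well as new ones, which matches the follow-up answer above.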
Just one simple question. You told us that when everything is in the DOM, it should not be a problem — but can it have an effect on SEO when content is behind tabs or accordions, in that case? So: you shouldn't hide your primary content behind a user interaction. If you consider content to be central, critical, and important, then I suggest you don't hide it behind a user interaction. But if you have other content behind a user interaction, it doesn't hurt — it doesn't make a big difference. I don't think we pull such content in for the description snippet, so if we decide to rewrite your description, we wouldn't use content that is hidden on the page, but I don't think it hurts big time. Again: make your primary content visible immediately. That's my biggest suggestion. Okay — in that case, pull all the content out from behind the info tab, right? If that's the content you really care about, then yes. Yeah, okay. Thank you. You're welcome.

More questions? Oh, there is a... only these kinds of URLs are indexed... Oh, you're facing an internet issue. Okay, that's unfortunate. Yeah, so as I said: for your page, I would have to check what's happening there and why we are not seeing these things. I can't answer that on the spot, as the URL seems to be fine, and I'm not sure why we wouldn't see the linked URLs. That requires me to make sure it's not something on our end. It might be something on your end, but I can't tell you what without looking at it and seeing the reason why we are not seeing something specifically.

Further questions? There are so many windows. There are no further questions. Then I would like to say thank you very, very much for joining this week's JavaScript SEO Office Hours. Check out our YouTube channel to see when the next ones are announced — I'll probably announce them pretty much right now, so that people can post their questions there. They will be recorded as well.
I'll put up the recording hopefully soonish — maybe Thursday, maybe Friday. And yeah, keep being amazing. Stay healthy, stay safe. Thank you very much for joining. Have a fantastic time. Bye-bye. Thank you so much. You're welcome.