instant. Interesting. The more you learn. All right. Hi everyone, and welcome to the first official JavaScript SEO office hours. I work in the search relations team in Zurich, currently working from home as you can probably tell, which is probably what most of you are also doing. I'm guessing COVID has pretty much reached the global working-from-home peak at this point. This is our opportunity to discuss JavaScript SEO specific questions. If you have questions that are not related to JavaScript (I've seen a few people already asking questions on YouTube that don't really fit this format), there are also the regular office hours that you can use for non-JavaScript questions. So don't worry about it: even if you don't have anything specific to JavaScript, you might still learn something today. Cool. In that case, does any one of you want to ask a question already? Let me see, 20 people showing up, that's really cool.

I have a question. Sure. So let me see if I can also type it here. Oh, here. So my question is: if you use robots.txt to block JS or CSS, an external JS file or CSS file on another domain, or if it's the other domain that blocks it, the user will see different things than Googlebot, right? So would Google distrust this kind of page and downrank it?

That's a very good question, thank you very much. No, we won't downrank anything. It's not cloaking. Cloaking very specifically means misleading the user, and just because we can't see content doesn't necessarily mean that you're misleading the user. It is still potentially problematic: if your content only shows up when we can fetch these resources, and we don't see the content in the rendered HTML because it's blocked by robots.txt, then we can't index it. If there's content missing, we can't index that content. So it's definitely worthwhile trying our testing tools to see if the content that you want us to see on the page is actually visible on the page, even though some JavaScript or CSS resources might be roboted. But generally speaking, roboting JavaScript or CSS resources isn't per se a problem. It can be a problem if we can't see the content, but it is fine from the standpoint of cloaking. It's not cloaking.

I see. So from what I heard, what you mean is that if the JavaScript only does enhancement, like making it beautiful, changing how it looks, there won't be a problem. But if JavaScript later inserts some text or something, there will be a problem. Yes, correct. If the content is loaded by JavaScript and we can't load that JavaScript because it's roboted, we're not going to see it. That is potentially problematic. But if it's an enhancement, let's say a chat box or a comment widget that allows users to add something to the page that isn't visible immediately anyway, then that isn't an issue. So in this case, if we load something in, is there a penalty for doing that? Or is it just that the thing that we load won't be read? It's just that we don't see the thing that you load if it's roboted away. I see, thank you. Thank you. You're welcome. Very good question. I'm also trying to keep notes on the questions. I do have the recording, but if you hear me type, that's because I'm trying to keep tabs on the questions being asked.

Awesome. Anyone else having a question? Hello, Martin. Hi. Hey, I'm Vahan, Lead Developer at Search Engine Journal. So today we have implemented the infinite scroll on mobile, and in the past we had it on the desktop.
And my concern is: would Google index the infinite-scroll articles as part of the main article, the one that is opened first? The Ajax URL that the page queries has a noindex applied, but is there any guarantee that the appended content will not be indexed as part of the main webpage?

Very good question. The answer is: it depends on how it's implemented and how we see it in the rendered HTML. I would highly recommend checking out the testing tools to see the rendered HTML, because it depends a lot on how you build your infinite scroll and how we can discover the additional content. But if it's, for instance, using some sort of link that tells us to go to another URL, and that URL is noindex, then we would not see that content. The feature is implemented the following way: when you scroll down, it loads the next article through Ajax, at the point when you are about to finish reading the article. But the Ajax URL that sends the content of the next article has a noindex header applied. So that makes me somewhat confident that the appended content will not be indexed. But I would like to know how we can make sure that infinite-scrolled articles will not be indexed as part of the main article. Test. If there is a certain trick. Test? It really depends on the exact... So the way that you describe it, it can go either way. I don't know; I'm not fully sure how we see the rendered HTML. Use the testing tools. Specifically, the URL inspection tool can help you figure out what the rendered HTML looks like. If the rendered HTML somehow still contains the additional content, because the viewport has changed or something like that, then we would index it as part of the main page, as in the page that you've seen, and then noindexing the Ajax URL doesn't really help that much. It can also be that you accidentally noindex the content that was previously on the page, so you might end up noindexing too much. I would always test these things and look at the rendered HTML. The rendered HTML tells you what we are seeing. You can use the live... sorry, the URL inspection tool to see what we have crawled, so you see the crawled rendered HTML. But you can also use the live test to see what we would see if we did it again. So "it depends", really, is the answer in that case. You're welcome. Thank you. Today is March 25th, is that correct? Yes. I got it right. Very good question as well.

And I see that we have one question in the chat that I would like to take at this point: as a JavaScript developer, what are the top five things that we should consider to have or not to have for web security? Web security is a vast and huge field, not really related to JavaScript SEO, though. But I would say: definitely make sure that you test for cross-site scripting and cross-site request forgery; those are two very important things. Make sure that you implement Content Security Policy headers. That's what they are there for; they prevent a lot of problems. You can set the Content Security Policy to report-only, so that you basically get feedback on potentially problematic settings before you switch them on. Yeah, what else? Update your dependencies. Reduce dependencies to begin with; this is a big one. Don't include modules for one piece of functionality that you can build in five lines yourself. What else? Security is not exactly my strong suit, so I'm not quite sure what else I would do for security, but definitely look at your dependencies, reduce your dependencies, keep them up to date, use Content Security Policy headers, and make sure that you are not accidentally allowing some sort of cross-site scripting on your site.
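For anyone who wants to try the report-only mode mentioned here, this is a minimal sketch of what it could look like, assuming a Node/Express server; the policy directives and the /csp-reports endpoint are placeholder values, not a recommended policy.

```js
const express = require('express');
const app = express();

// Report-only mode: the browser reports violations but does not block anything yet,
// so you can review the reports before enforcing the policy.
app.use((req, res, next) => {
  res.setHeader(
    'Content-Security-Policy-Report-Only',
    "default-src 'self'; script-src 'self' https://example-cdn.com; report-uri /csp-reports"
  );
  next();
});

// Hypothetical endpoint that just logs incoming violation reports.
app.post('/csp-reports', express.json({ type: 'application/csp-report' }), (req, res) => {
  console.log('CSP violation report:', JSON.stringify(req.body));
  res.sendStatus(204);
});

app.listen(3000);
```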
Anyone else having a question in the Hangout, or should I go to the questions that were submitted?

Hey, I have another question. I recently noticed that Google Fonts and third-party fonts can affect web performance a lot. I wonder what the best way is to prevent third-party fonts from affecting your site performance. Is it worse to request a third-party font through CSS, because the CSS itself is render blocking and then it has to send yet another request? So you're saying you're worried about the performance of third-party fonts? Yeah. Yes. I wouldn't worry too much about that. You can hypothetically cache it on your end, so make sure that you're not downloading it over and over again. But I think one way (I'm not sure what it was called, I think it's font-display or something) is the CSS property that you can use to specifically say that it should already start rendering with the system font or with the fallback font and then swap the new font in once it has arrived; that might reduce the performance impact there. But I would consider other things as well. You can subset a font to make sure that you only download the subset of the font that you actually need, and reduce the weights in the font file. So it is definitely worthwhile looking into inlining, or not necessarily inlining, but hosting your own version of the font. And if you can't, for licensing reasons or whatever, definitely at least use font-display: swap. Okay, thank you. Awesome.

Another question: what is the best way to QA lazy loading? Because what I do is use the developer tools and just record it without scrolling, to see if any image is loaded before it should be. Is that the right way, or is there another, more efficient way to do quality assurance on lazy loading? That is a tricky one that I never really thought about. I think I also just use the developer tools and then resize the window or scroll in the window and see if it actually loads the actual sources. I wouldn't know for sure; that's a good question. Yeah, because if that is the best way, then if you have one million pages, you have to do it one million times. Yeah, I'm not sure if that's a great one. I think you might be able to script something with Puppeteer, maybe. But even then it would probably be really, really hard to do that at scale with all the pages, if you have so many pages. That's probably something you should do on a page level as the pages get created. If you use the native lazy-loading implementation in Chrome, then you don't really have that problem, because then you make it the browser's problem. But not every browser supports it. So I can see that at this point, where you have a bunch of custom implementations, this might be pretty hard to test at scale. I see, thank you. You're welcome.
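As a rough sketch of the Puppeteer idea mentioned above, here is a script that counts image requests before and after scrolling a page; the URL, the wait time, and what you do with the counts are all assumptions you would tune for your own site.

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Collect every image request the page makes.
  const imageRequests = [];
  page.on('request', (request) => {
    if (request.resourceType() === 'image') imageRequests.push(request.url());
  });

  await page.goto('https://example.com/article', { waitUntil: 'networkidle0' });
  const loadedBeforeScroll = imageRequests.length;

  // Scroll to the bottom so scroll-based lazy loading kicks in, then give it a moment.
  await page.evaluate(() => window.scrollTo(0, document.body.scrollHeight));
  await new Promise((resolve) => setTimeout(resolve, 3000));

  console.log(`Image requests before scroll: ${loadedBeforeScroll}`);
  console.log(`Image requests after scroll:  ${imageRequests.length}`);

  await browser.close();
})();
```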
All right, now I think I'm going to take a question from the list: what are some of the best practices to follow while optimizing speed on JavaScript websites, and what's the best CMS for a JavaScript website? I don't know about CMSes, because I'm pretty sure that you can probably make it work with pretty much all of them. But for performance and JavaScript, I would suggest, A, deferring as much JavaScript as you can, and also, if you reasonably can, not relying entirely on client-side rendering for your most critical content, because the quicker you deliver HTML to the browser, the better it is, and the more you rely on JavaScript to do that, the slower it will be. Especially for landing pages and static pages, you'd want to consider server-side rendering, or server-side rendering and hydration. If you are starting a new project, I would look into a higher-level framework such as Nuxt.js or Angular Universal to do that. But if you really have to stick with client-side rendering, then at least try to make your JavaScript as fast as possible, by tree shaking your dependencies so that you only keep the absolutely crucial and necessary pieces of code from your dependencies and don't just drag dead code around. Consider splitting: actually, bundle up as much code as possible, but also split your code along the pages where it makes sense. For a specific page, I might not need all the bundles for the whole application. So you could split those and then preload the, or actually not preload, but prefetch the other bundles, so that it's faster because the browser does that in the background. But yeah, the less JavaScript you ship, the better it is. That's a general rule of thumb.

Okay. And then we have a few more questions in the chat. Would caching static assets such as fonts using a service worker be recommended? Yes and no. Yes, generally for your users that is fantastic: use a service worker to reduce the network activity, especially in the sense of reducing latency, because it's always going to be faster to fetch from a local cache than from the network. But search engines also usually cache heavily and aggressively, which is why we would suggest generally using long-lived caching and fingerprinting, or some sort of content hash or versioning of your assets. There's an article on web.dev called "HTTP cache: your first line of defense", if I remember correctly. I can post that in the chat real quick. If I find it, yes, I do find it. Here, fantastic. Where's the chat? Here's the chat. So definitely try to make it easier for the browser by having long-lived caches.

For e-commerce collection and category pages that display product ratings, is that a good bet or unnecessary? That's a structured data question; I would take that to the regular office hours.

Last call I asked about JavaScript files that were still being requested by Google four years after they were embedded on our site. That is really weird. I would be very surprised if that isn't some sort of glitch in the way that the indexing runs. I wouldn't worry too much about that normally, especially if your content gets indexed anyway. But please share that problem with either this fantastic mailing list (I can't type anymore), there's a mailing list called the JavaScript Sites working group, where you can ask this with a sample URL, because this needs to be investigated, and I can't do private support, so you can't send that to me privately. But the Webmaster forum or that mailing list is a fantastic place to get some feedback from others who might be able to help you with this problem, Trevor.
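To make the long-lived caching plus fingerprinting advice from a moment ago concrete, here is a minimal sketch assuming an Express static server; the directory names, the one-year max-age, and the fingerprinted file name are placeholder choices, not a prescription.

```js
const express = require('express');
const path = require('path');
const app = express();

// Fingerprinted assets (e.g. /assets/vendor.3f2a1c.js) can be cached "forever",
// because a new build produces a new file name instead of a changed query parameter.
app.use('/assets', express.static(path.join(__dirname, 'dist/assets'), {
  immutable: true,
  maxAge: '1y',
}));

// The HTML itself is revalidated on every request, so it always references
// the latest fingerprinted bundle names.
app.get('*', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(path.join(__dirname, 'dist/index.html'));
});

app.listen(3000);
```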
For a client-side rendered site, does having a sitemap speed up the second wave of crawling, or does it stay the same with or without the sitemap? This is for a site with two million pages. "It depends" is the answer to that question. There's no such thing as a second wave of crawling; the second wave is an oversimplification that is coming back to us with interesting implications every now and then. Basically, a sitemap just helps us discover things that we can't discover as quickly otherwise, which means whether a sitemap helps you with discovery generally depends on your site structure and on your crawl frequency, which has nothing to do with whether you have a sitemap or not. If you have a very well-linked structure between your different pages anyway, then the sitemap is probably not going to be a big speed-up, but a sitemap definitely helps us discover new content quicker, generally speaking. With two million pages it might give you some advantage, it might not; try it out. It definitely doesn't hurt, but it doesn't replace linking, a clean menu structure, and a clean linking structure between your pages. It's just a different way for us to consume content and discover links on your site.

Awesome. Now, there was a question submitted on YouTube, which is from Dave: when fingerprinting resource files, is it better to change the name, like vendor.someversionstring.js, or can parameters work, for example vendor.js?version=something? Parameters work for us; that's perfectly fine. It might be tricky depending on your setup. If you have a cache in between, or a reverse proxy that somehow strips query parameters (I know that some CDNs do), then it might not work, and it is sometimes harder to debug, because you might not have the level of logging that you need to actually troubleshoot issues across multiple hosts and pieces of infrastructure. I would say, generally, it doesn't make a huge difference, but having it in the filename is probably a little more robust across different environments. I would weigh that, but generally speaking, it's fine to use parameters. My reason for asking is that if you do it with a parameter and it changes, it would potentially still load, wouldn't it, if Googlebot or something is caching it. So you would get the fresher version, which I suppose could be both a blessing and a curse, but that was the reason behind the question. Then you make your life with debugging even harder, because then you're like, I think we loaded version 5, but then it turns out that it was version 6. It really depends on your use case and on your environment. I would say from experience that putting it in the filename is slightly more robust, but either way works. It's not that one way wouldn't generally work; it's just that if it doesn't, debugging parameters is usually a little harder, depending on how the systems are configured. It might be a lot harder, in fact. Thank you very much for the question.

Anyone else here having a question before I go back to the written questions? Silence, okay. Sorry, mate, I just typed one out, because it's a bit of a long-winded one, so it's probably better to write it than to try to verbalize it. Right: a client is currently building a new frontend in Gatsby (nice choice) and is setting up their own A/B testing. Yeah, fun times. Development is against URL changes, and guidance on how to separate URLs and canonicals is desired if we move to parameter injection depending on the version, and canonical injection via JavaScript. I'm not sure I fully understand the question. So, a little bit of further context: obviously we're rebuilding a custom frontend, and I may want to do my own custom A/B solution. Obviously, if you're going to be doing A/B testing, I want to append URLs with versions or some sort of control, and then cross-canonical back.
They've said that will require a lot more engineering, so I'm trying to find a compromise where I can get a parameter potentially injected, obviously, because it's all done in JavaScript and there's no CMS, because it's all being done headlessly as well. So do you perceive that if we inject a parameter onto version B, and I inject it into the canonical that way, that would still meet the requirements for basically canonicalizing B to A, C to A, etc., from the Google perspective? I think that would. If you have parameters in your canonicals, even if they are JavaScript injected, that should be fine. Okay, that could be a good compromise. Thank you, Martin. You're welcome. All right, I'm going to copy this over then. Oh, more people are still joining, that's amazing. Anyone else having a question for me at this point? Now is your chance.

I have another question. Sure. I'm not sure if it's a JavaScript question or not, because I'm not sure whether closed captions are created by JavaScript on most pages. So my question is: if you have multiple languages of closed captions on a video, and I know they are not in the source code, so they might be inserted later using JavaScript or something like that, does Google crawl the closed captions, and does it know that there are different languages if I have multiple languages? And is there a way to make sure of that? Video indexing is so far away from what I work on that I have no idea how we're indexing videos and how we're dealing with closed captions. So I can't really answer the question, but I would assume that whatever mechanism there is to provide closed captions, if you provide those, even if it's with JavaScript, it should be fine. I don't know exactly how this entire mechanism works; I've never worked with video that much. But I guess maybe the Webmaster Forum people might know about this, or it's definitely something that I can hopefully send on to other teams to discuss. But that's not something that I can answer that easily. Okay, thank you. You're welcome.

All right, in that case, do we have more questions in the chat? I'm using JavaScript to set my title and description. It was stated that Googlebot respects it, but I noticed that sometimes it uses those titles and descriptions, but other times treats the website as if it had... No, sorry? Okay... as if it had no title and description. How do I prevent that and make sure that Googlebot actually waits for JavaScript to kick in and notices that the page is not empty? Because it looks as though that's how it's treated. It's not a problem of us thinking that the page is empty, because if it were an empty page, we wouldn't index it; there would be nothing to index. So we would definitely be seeing the content if we index the page. This effect seems to be random: my website sometimes has a description and title in the SERPs and sometimes does not. I'm not sure what's going on there. I would definitely post this in the Webmaster forums or in the JavaScript Sites WG. Generally speaking, whatever we see when we are done rendering, we put into the index and then move forward. We might rewrite the title anyway. I'm not sure if you're aware, but basically, when you give us a title and description and we figure out that it's not relevant to the query or not relevant to the content on your page, we might consider alternatives anyway.
There's an entire system that deals with that, and sometimes the system picks a title that you don't think is right, or sometimes it might not find a title that is worth picking, but I don't think that we usually end up with empty titles. That is a really, really weird situation. I would definitely have a look at the URL inspection tool and look at what kind of rendered HTML we had for the crawled page, and probably also run a live test. Especially if it's an on-off thing, that suggests that there is some sort of issue on your end with the way that you're generating your HTML in JavaScript. It can be that your server sometimes times out, sometimes gives us a 404 or a 500 or something like that. I would check the server logs to find out what's going on there, especially because it sounds like your website is client-side rendered, or at least uses JavaScript to change the content, because it does change the description and title. I would check that your server definitely does not return any weird errors to us, and I would consider posting a URL in the JavaScript Sites WG or in the Webmaster forum so that we can take a look at this. Awesome. Anyone else here having a question?

Martin, I just tested a few URLs and it seems it doesn't index infinite-scrolled articles, so that is very good, but I'm going to research a little bit more and find out where the edge case is, when it does index and when it doesn't. I believe that John Mueller said that when you have an Ajax URL with noindex, Googlebot will not crawl that content. That's why I asked again, to get some sort of confirmation. Generally, that is true. It does depend on your implementation, though. Good to hear that your implementation apparently seems to work the way that you expected it to work, so that should generally be fine. You're welcome.

How big of a problem is it when a part of the website uses client-side rendering and Google can't render any image of the website? Hold on, sorry, I need to admit someone to the Hangout. Googlebot cannot render any image of the website. I know it's not an optimal situation, but I have to convince the development team that this might be a problem. All textual content can be rendered, for example, and then we have a sample URL. I would have to look at the sample URL to see what is happening there. Actually, let's do that real quick. Generally speaking, if there is an image URL and we can't render it, that shouldn't necessarily be a problem unless you want that image indexed. If you care about your images, which you probably should, then you might want to know why we are not rendering them, and then you can use various tools to see why that is happening. I'm not sure, I shouldn't use the wrong tool for this. Let me quickly run this through the mobile-friendly test. This is really annoying, because there's a weird redirect thing going on, and I don't want the YouTube redirect, I want the URL itself, real quick. I can probably share this with you all on my screen, if I figure out where that is now, my entire screen. I want this screen to be funky. I threw this into the mobile-friendly test. We can see that there are no images being rendered, even though there's nothing specifically preventing us from seeing them except some CSS and some JavaScript not running, but that should not be a problem. Let's have a look at the HTML. If I scroll down here, can I search for the image tag, maybe? That's a bunch of images for all sorts of things in the menu. Now someone wants to join, so I'll add them in real quick.
All of these are present basically as menu images or something, and none of this is actually a content image. I'm not sure what kind of image he's missing here, but I can't find them in the HTML, and if they are not in the HTML, we're not even going to discover them. I'm not sure. Let's actually try this URL in a regular browser tab and have a look again. There are a bunch of images that we are supposed to see, I would say. They are definitely present here, but they are not present, for whatever reason, in the rendered HTML, because I couldn't find any of this previously. Also really interesting: there's an alt tag here, an alt attribute, and we can maybe search for this in the... Come on, don't do this to me. There we go. Oops. So we do find this in the text, but I'm not seeing this in an alt attribute. That tells me that whatever they're doing, we're not seeing it in the rendered HTML. And if we're not seeing it in the rendered HTML, you know what that means: it means that we are not able to index this. If you care about these images being indexed, you definitely need to figure this out, because that means that we are not seeing them right now. Good. Any more questions from you all before I go back to the list of questions?

I have a quick question. Oh, multiple people. Oh, okay, one after the other. Yeah, go ahead. Okay, great. So we have a client whose website is awfully outdated, and they are part of a big corporate system, so an upgrade is a bit out in the future, and currently they can't add any sort of editorial content. How would Google look at it if we made a sort of workaround where we added content using JavaScript? At the front page we might add a URL with a parameter through JavaScript, pointing to a sort of fake page, by overriding or adding another fictitious parameter with custom content. So you can totally do that. Especially if the content is basically an update of the previous content at the same page, then that's not a problem; Googlebot would see that rendered content unless your implementation has some flaw or problem. You can test that with the testing tools. Creating fake pages through the History API is a little bit of a tricky one, because then you can't necessarily assume that it will always return the right content if you land on the page and the JavaScript breaks. But fundamentally, it works. It is a brittle solution, but it is probably better than dealing with "we can't do anything on our website at all." I would very, very carefully test the implementation, whatever you're building, but if you use JavaScript to inject content, we see it as normal content. Okay, great. Thanks. You're welcome.

I just wanted to ask: now, with Googlebot being JavaScript aware, Googlebot still doesn't scroll the pages or click the buttons, correct? Yeah. Okay, okay. So any JavaScript buttons that open, for example, a dropdown and stuff like that with content inside, you won't see it? It depends on the implementation. If you only load the content into the DOM, or only make it part of the active HTML document after user interaction, then we won't see it. But if it is rendered into the HTML beforehand and only becomes visible through user interaction, then we might still index it. Okay. We just won't click on anything. So if you have a button that takes you to a different page, we won't click on it; we won't see that. Okay. The only thing you'll click on, quote unquote, is anchor tags, right? Yes. Yeah. Okay. We don't click on those either; we use the href URLs. Okay. Okay. Thanks. You're welcome.
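Putting the History API caution and the href point together, here is a minimal sketch of a client-side router that still exposes real anchor tags for URL discovery. renderRoute is a hypothetical stand-in for your own rendering logic, and data-route is an assumed attribute name, not anything Googlebot requires.

```js
// Render navigation as real <a href="..."> elements so crawlers can discover the URLs
// without clicking anything; JavaScript then intercepts the click for client-side routing.
document.querySelectorAll('a[data-route]').forEach((link) => {
  link.addEventListener('click', (event) => {
    event.preventDefault();
    history.pushState({}, '', link.getAttribute('href'));
    renderRoute(new URL(link.href).pathname); // hypothetical: swap in the content for this route
  });
});

// Handle back/forward navigation so the same render logic runs for whatever URL the
// browser lands on, which reduces the "brittle page" problem mentioned above.
window.addEventListener('popstate', () => {
  renderRoute(window.location.pathname);
});
```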
There's a follow-up question: we can't index an image if it's not in the HTML, so since this is the case, how do we make sure lazy-loaded images are indexable? Lazy-loaded images: there are two options here. One is, if there is an image tag that has loading="lazy", we will see it, because it has a source attribute. If you do not use the source attribute and lazy load by basically upgrading the image, then it depends on your implementation. For instance, if you are using an IntersectionObserver, that would not be a problem, because we are doing something to make the viewport large enough for your website, and then the lazy loading would kick in and create the image source attribute that we need, and then in the rendered HTML you would see the actual image source. And I can show you an example of that. Let me really quickly show you something. Oh, okay, someone wanted to be in this.

So based on what you just said, does that indicate Googlebot does scroll down? No, it does not scroll down. There are other ways of changing the viewport so that things become visible. I don't want to discuss this too much, because it's an implementation detail that I don't want people to cling on to. Actually, this is the wrong website. Where did I have this? I think it was here somewhere, lazy loading experiments, I guess. Yes. No. Oh, God. Okay, let me see if I can find it again. And I think someone wanted to join the Hangout. Yes. Okay. So, I can actually make this larger.

So, I built a bunch of test cases for lazy loading. Let's use lazysizes, for instance. The way that these pages work is, oops, I haven't put in the protocol yet, the way that this works is: we have a bunch of HTML, some very, very boring text, and then we have an image tag that does not have a source. Well, it has a source, but it's only a one-by-one-pixel source. And then the data-src is what's used to actually lazy load this content. And I can show you that this works by actually seeing 100 by 100, 200 by 200, 300 by 300, 400 by 400, and 500 by 500; these images lazy loaded as they became visible. And if I'm lucky and this thing doesn't break on me while I'm doing this, because every now and then we run into interesting resource situations here (the problem is the testing tools are very impatient, so sometimes if things take too long, the testing tools bail out where they shouldn't have), we can see all resources. And here you see it loaded 100 by 100, it loaded 200 by 200, 300 by 300, 400 by 400, 500 by 500. So all these images have been successfully loaded and will be indexed, and that's actually what really matters when it comes to lazy loading. So this implementation actually works. You can also see that in the rendered HTML: if I scroll down to, let's say, here, then we see that the source has been replaced by the 200 by 200 image source, as lazy loading would do. So this is how you can test your implementations, and that way you know whether your lazy loading works or doesn't. It is not very easy to say "yeah, that totally works and this other thing doesn't"; you have to test these things, unfortunately. So not just lazy loading: anything that is JavaScript inserted, if it shows up in the rendered result, can potentially be indexed too? Okay, thank you so much. You're welcome.
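For reference, here is a hand-rolled sketch of the data-src pattern shown in that demo. This is not the lazysizes code itself; the data-src attribute name and the 200px rootMargin are just common conventions, assumed for illustration.

```js
// Upgrade images from a tiny placeholder src to the real data-src once they
// approach the viewport. Because the swap happens when the viewport reaches
// the image, the real URL ends up in the rendered HTML that gets indexed.
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;       // real image URL moves into the src attribute
    img.removeAttribute('data-src');
    obs.unobserve(img);
  });
}, { rootMargin: '200px' });         // start loading a little before the image is visible

lazyImages.forEach((img) => observer.observe(img));
```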
Do we have more questions? No, there are no more questions on YouTube. So if you have any further questions, now is a good time for those.

Can I ask another one? I just remembered: for example, we use the Google Maps API, and since Googlebot now executes JavaScript, does that mean it will consume the quota? That's a really good question. There is a potential that this might actually happen. But I think, if I remember correctly, because Google Maps does not normally generate... I'm not sure; you would have to check whether Google Maps gets roboted away or not. But generally speaking, we are very aggressively caching things, so I wouldn't worry too much about this, but definitely test that, because I don't know if we are skipping Google Maps or not. That's a very good question. Okay, thank you. You're welcome. Thank you very much for asking all these good questions; I'm really happy with the questions being asked today.

Martin, just one thought that came to my mind: I have implemented the infinite scroll also on AMP, using the next-page feature. So I'm guessing the way it's implemented, the next-page feature of AMP does guarantee somehow that the next articles will not be indexed as part of the content, right? Because the way the team planned and implemented the next-page feature of AMP should also avoid that issue, shouldn't it? I'm not sure about AMP. Cool. Any further questions? Let me check the chat. No questions in the chat. All right, any further questions?

So, is Googlebot still doing the two-pass thing like before, where it reads the HTML and then after a while it renders the JavaScript and gets the rendered source, or is it faster now? Because I remember there were experiments of people creating different versions of their pages and saying, hey, after a week this appeared on Google, and after a couple of days this appeared on Google. So, generally speaking, we are still running some analysis on the initial HTML, specifically to discover links, so link discovery is fast, as if you were server-side rendering, still. But fundamentally, you can assume that every page gets rendered, and you can assume that between initial crawling and rendering there's, at median, a delay of five seconds in the queue, plus the time that it takes to actually render your page. If your page renders in 10 seconds, then we have 15 seconds at median; if your page renders in an hour, actually no page renders in an hour, that's bullshit, in a minute, then that extra minute comes on top of that. But yeah, generally speaking, JavaScript rendering got a lot faster these days. And do you cut off the rendering at some point, like if you have a very long page? We do. It is very hard to say something specific, because it depends on a bunch of heuristics and signals, so I would not worry about it too much. If it's fast for your user, then you should not have a problem. If your user has to wait two minutes to actually get any content, then you might run into problems to begin with, and that's not an SEO-specific problem. So you shouldn't worry too much about that specifically for SEO. Okay, okay. Thank you very much.

We've got another quick question, if that's okay. On an SPA where you're routing locally, I know there are kind of two options for a 404: you can either noindex, or you can redirect off to an actual 404 page outside. Is there any benefit to one over the other, or are they both just as good a solution as one another, and it just depends on what's easiest? Yeah, both of them are as good a solution; it really depends on what's easier for you, and there's not that much of a difference between the two.
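As a minimal sketch of those two options for an SPA route that turns out not to exist, here is one way it might look. The fetchProduct-style API call, the render helpers, and the /not-found URL are all hypothetical names; the point is only to show adding a robots noindex via JavaScript versus navigating to a URL that the server actually answers with a 404.

```js
async function renderProductRoute(productId) {
  const response = await fetch(`/api/products/${productId}`); // hypothetical API endpoint

  if (response.status === 404) {
    // Option 1: mark this client-side "page" as noindex.
    const robots = document.createElement('meta');
    robots.name = 'robots';
    robots.content = 'noindex';
    document.head.appendChild(robots);
    renderNotFoundMessage(); // hypothetical: show the error UI

    // Option 2 (alternative): send the browser to a URL that the server answers with a real 404.
    // window.location.href = '/not-found';
    return;
  }

  renderProduct(await response.json()); // hypothetical render function
}
```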
Okay, thank you. You're welcome.

There is, however, one note that I would like to give; I've seen it happen, and I made that mistake myself many years ago. If you think that it's a smart solution to have a robots noindex in the rendered HTML that you send initially and then switch it back with JavaScript: don't. Because what happens is, we see the initial HTML, we see the noindex, and we stop; we don't execute the JavaScript. So the robots approach has a risk attached to it if someone somehow decides to put a noindex in the HTML and then switch it with JavaScript, which redirects don't have. But I could see people screwing up either solution, so both are risky to a certain degree. All right. Just to be clear, you just said that if the HTML contains noindex, that would prevent you from executing the rest of the page, basically, right? Pretty much. And when I say noindex, I specifically mean a meta robots noindex. If you have a noindex in the robots meta tag, we basically do not render the page; that's what happens. We also mention that in the documentation, if I remember correctly. Let me see if I can find the documentation link real quick. And, oh, okay, that has changed. Let's see. I believe this might be here, or is it in the other one? I'm not sure. Noindex... okay, we don't explain it here; I think it's explained here. Yeah, this one. So we explain this in the documentation: you can basically dynamically add a noindex, but you cannot dynamically remove a noindex from the page with JavaScript. Okay. One last question, or maybe two more? I don't know yet.

So, if there are no further questions, I would like to take this opportunity to say thank you so much for coming to the first official JavaScript SEO office hours. I hope this was useful and that you got the answers you were looking for. If you have any further questions, remember, we have the Webmaster forum and we have the JavaScript Sites working group that you can post questions to. It's usually a good idea to also send us URLs, as in, send either the Webmaster forum or the Sites working group a URL where we can see the problem in action, or even try to build a tiny little prototype that exhibits the behavior you're wondering about, so that it's easier to debug these things. Please keep watching my Twitter feed as well as the YouTube channel's community page for the next office hours; I expect them to happen approximately every two weeks. And yeah, I'll upload the video to YouTube later on. Thank you so much for being fantastic people and asking great questions, and have a great day. Thank you, Martin. Thank you. Thank you for coming. Bye-bye. Thank you. Bye-bye. Bye-bye. I need to stop the recording somehow. Stop recording.