All right, welcome everyone to today's Webmaster Central office-hours hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these office-hours hangouts for webmasters, SEOs, and publishers of all kinds. As always, if any of you want to get started with the first question, feel free to jump on in now. Yeah, sure, if you've got the time. Sure.

Hi, I'm Dominik. My question is about rendering JavaScript apps on the server. John, you mentioned at Google I/O that it's an option for hybrid apps to render the full app on the server, only for Googlebot. Because it takes a lot longer to render the full app on the server than to render just the above-the-fold content and leave the rest to the client, my question is: will this negatively affect PageRank in some way? Google also uses metrics like time to first byte for user experience.

Yeah, so PageRank, definitely not. PageRank, from our side, is the metric we use for the links to a site, so definitely not that. Ranking in general, though, is a different question. We do use speed for ranking to some extent. For the most part, we differentiate between sites that are reasonably fast and sites that are really, really slow. When we're talking about something really, really slow, it's more a matter of it taking multiple minutes to actually render the page in a browser. Whereas if you're just talking about "oh, it takes ten seconds longer," then, whatever, that's no big issue from our side.

The one place you might see a difference, though, is in the number of pages that we crawl per site per day, in the sense that when we see that a server takes really long to respond to our requests, we'll try not to overload it with additional requests, so we won't crawl as fast as we otherwise might. That's something that might play a role there as well. So if you can do things like caching on your side to make sure that these responses go out fairly quickly, that's what I'd recommend.

With regards to ranking, also with regards to the mobile speed ranking update that's happening, I think, in July or June, that's something where we take a number of different speed factors into account to try to get a full picture of how the site loads, so I wouldn't expect to see any issues there either. If you're doing something reasonable with regards to pre-rendering a page, then you make up for the rendering speed it would otherwise take to render the page, because we don't have to process the JavaScript, which makes it faster for us, too.

I see. So if it takes, say, five or six seconds instead of three, that would be reasonable in your opinion?

I think that's perfectly fine.

OK, cool. That was my question. Thank you.

Cool. Fantastic.
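To make that pre-rendering setup concrete, here is a minimal sketch of dynamic rendering. It assumes an Express server; the renderToHtml() helper, standing in for a cached headless-browser pre-renderer, is hypothetical and not something from the hangout itself:

```typescript
import express from "express";

const app = express();

// Hypothetical pre-renderer, e.g. backed by a headless browser plus a cache.
declare function renderToHtml(url: string): Promise<string>;

// Rough bot detection by user agent; real-world lists are longer and change.
const BOT_PATTERN = /googlebot|bingbot|baiduspider/i;

app.get("*", async (req, res, next) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (!BOT_PATTERN.test(userAgent)) {
    return next(); // Regular users get the normal client-rendered app.
  }
  // Crawlers get fully rendered HTML. Caching keeps responses fast, so
  // crawl rate is less likely to be throttled by a slow server.
  const html = await renderToHtml(req.originalUrl);
  res.set("Cache-Control", "public, max-age=3600");
  res.send(html);
});
```

The cache is doing the real work here: per the crawl-rate point above, it matters more that cached responses go out quickly than exactly which rendering pipeline produces them.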
All right. Any other questions before we start on the submitted questions? No? OK, fine, we'll go through the stuff that was submitted. Let's see what we have here.

The first question is with regards to A/B testing and SEO. It's a pretty long question; let me see if I can distill it down. We're doing A/B testing with JavaScript so far, but this approach is kind of slow and creates flickering, so they're looking into two different ways of handling that. One way is to do this on the server instead of within JavaScript, so the page stays fast and there are no variations applied with JavaScript. Would that work as well?

That would definitely work for us as well. With regards to how you would handle Googlebot: in general, we'd recommend handling Googlebot like any other average user on your site, so that would usually mean that Googlebot is part of the experiment as well. With regards to using cookies to keep a user within one variant, one thing to keep in mind is that Googlebot doesn't replay cookies. If you give Googlebot a cookie, it doesn't return it the next time it crawls, so it would probably fall into a different bucket every time, depending on how you set up the test. You could theoretically also set things up so that Googlebot always falls into the same group; that might be an option for a while, too.

In general, for all of these A/B testing variations, we recommend making sure that the A and B variants, and the version that Googlebot sees, are reasonably similar, in the sense that we want to be able to index a page that's representative of what a user would usually see. If your A variant is a site about cookies and your B variant is a site selling car parts, then it's really hard for Google to index a version that matches both of those experiences. But if they're reasonably the same, if you're providing reasonably similar functionality to users, then that works perfectly fine.
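One way to approximate "Googlebot always falls into the same group" without cookies is to derive the bucket deterministically from something that's stable across requests. A minimal sketch; the function and variant names are made up for illustration:

```typescript
import { createHash } from "crypto";

// Map an identifier (a cookie value for users, or the user-agent string for
// crawlers that don't replay cookies) to a stable experiment bucket.
function pickVariant(id: string, variants: string[] = ["A", "B"]): string {
  const firstByte = createHash("sha256").update(id).digest()[0];
  return variants[firstByte % variants.length];
}

// Googlebot never returns cookies, so hashing its user agent keeps it in the
// same bucket on every crawl instead of flipping variants at random.
console.log(pickVariant("Mozilla/5.0 (compatible; Googlebot/2.1)"));
```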
We're about to implement a wrong-country message on our websites; this seems to be very popular at the moment. It's based on the user's IP: if the user is on an incorrect domain, they would see a kind of interstitial or warning page. Since Googlebot primarily crawls from the US with a US IP address, it would see this on all non-US sites. What's your best advice on this, and what's the best practice for implementing a wrong-country feature on a website?

So, in general, geo-targeting is not a perfect art. Regardless of how you set it up, you have to expect that some users from the wrong area will end up on the wrong site, and you also have to assume that you'll categorize some users incorrectly. With that in mind, setting up a banner or a warning on a page is perfectly fine to guide users to the right version, but also make sure that those users can stay within that version if they choose to.

This is also really important for Googlebot because, like you said, Googlebot crawls from the US. When we crawl from the US, we might still want to see the French version or the German version or the Italian version of the site. So we would want to be able to crawl and render these pages, see the warning if you want to display it to us, that's perfectly fine, but still access the rest of the content on the site. It's important that we can still use all of the internal navigation, and that we can still see the content as you have it localized for those individual countries, so that we can actually index the Italian content and the French content and the German content and all of that.

So the ideal situation you should be aiming for is: on the one hand, create more of a banner rather than an interstitial, so don't block access to everything; on the other hand, let users from the "wrong" location still use the site regardless of their location. For Google in general, one thing you could do is show this banner with JavaScript. And if you wanted to make sure that it doesn't cause any problems at all, you could block that JavaScript from being crawled with the robots.txt file. In general, you don't need to do that, because we're fine with banners and boilerplate text that's repeated across a site; we can work with that. It's not that we will demote a website because it has a banner on top saying, "hey, you should look at the US version instead of the Italian version." So those are the options that I would recommend there.
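As a sketch of the banner-not-interstitial approach, here's what the client side might look like. The /api/geo endpoint and the URL scheme are assumptions for illustration, not anything from the hangout:

```typescript
// Show a dismissible "wrong country" banner on top of the page. The page
// content itself is always present in the HTML and is never blocked, so
// Googlebot crawling from the US can still index the localized content.
async function maybeShowCountryBanner(siteCountry: string): Promise<void> {
  const res = await fetch("/api/geo"); // hypothetical IP-geolocation endpoint
  const { country } = (await res.json()) as { country: string };
  if (country === siteCountry) return;

  const banner = document.createElement("div");
  banner.innerHTML =
    `Looks like you're in ${country}. ` +
    `<a href="/${country.toLowerCase()}/">Go to your local site</a> ` +
    `<button type="button">Stay here</button>`;
  // "Stay here" lets users remain on this version, as recommended above.
  banner.querySelector("button")!.addEventListener("click", () => banner.remove());
  document.body.prepend(banner);
}
```

If you wanted the extra-cautious option mentioned above, the script file serving this banner could additionally be disallowed in robots.txt, though that's normally unnecessary.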
Hey, John. Hi. So I have a question related to what you said in a previous office hours, that subdirectory versus subdomain doesn't matter from Google's point of view. Suppose someone points a link at a subdirectory; does that link also benefit a subdomain?

So someone links to a subdirectory on your site, and does that also count for a subdomain? Yeah. From our point of view, links to a site go from one URL to another URL. And within the site, of course, you have your normal internal linking, which could include links to a subdomain or to a different part of the website. So if someone is linking to one specific product page on your site, and from that product page you have links to other parts of your website, then we can forward some of those signals to those other parts. If those other parts are on a different subdomain, we can forward some of those signals to that subdomain as well. So it's not that a link automatically counts for everything on the same domain, subdomain, or subdirectory, but rather that a website usually has a lot of internal links that spread the signals we get across the rest of the website, so that we understand the rest of the site a little better.

Is there any shortcut way to remove all 404 pages? Sorry, I didn't quite understand that. Is there any shortcut way to remove all 404 pages indexed in Google? Which resources do you want to remove? The indexed pages, the 404 pages.

So usually you can just let them be crawled and fall out of the index automatically. To have them removed faster, you can use the URL removal tool. What happens with that tool is basically that we just don't show those URLs in the search results. The removal tool is in Search Console; you can submit individual URLs, folders, or whole domains if you want to do that. The important part is that you have your website verified in Search Console, so that we know you're the owner. And if you decide to remove something, that's totally up to you; we'll remove those URLs usually in a little bit less than a day.

But I already blocked the bot from accessing the website... I really have trouble understanding you; sorry, can you repeat that? Hello? Yes, somehow the audio quality makes it really hard to understand you. Maybe you can type it into the chat box and I'll pick it up a little bit later.

All right, let's see, some more questions here. I've seen a big spike in "time spent downloading a page" and would appreciate some pointers on where to investigate. So, for the most part, this isn't something that would cause problems for a website, in the sense that crawling is sometimes a little bit, I don't know how you would say it, random: we go off and crawl a bunch of URLs and we don't really know what to expect, and some of those URLs might be really big or take a lot of time to actually be downloaded. So some amount of fluctuation is usually normal.

What I would recommend doing, if you're really seeing a strong spike, where maybe it goes up from, I don't know, 200 or 300 milliseconds to 20 seconds, is to get the server logs for that time frame, filter out all of the Googlebot requests, and double-check what kinds of URLs are being requested. You might see that we were crawling many more URLs than before, which could be a sign that maybe we're overloading your server a little bit. You might see that we're crawling different kinds of URLs, maybe URLs that require more time on your server to actually be processed. That might also be something you see here.

In general, you can't just blindly take this graph and say "this means exactly this," because so many different things flow into the "time spent downloading a page" metric. You really need to take a look at the server logs to see what exactly Google is measuring. What kinds of URLs are being crawled now? How was that before? Is Google crawling different URLs now than before? Is this even a problem that I need to worry about, or did Googlebot just go off and find a bunch of URLs that happened to be big and take a long time to download? That might be perfectly fine as well.
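A rough sketch of that log check, assuming a combined-format access log; the file path and format details will vary by server:

```typescript
import { createReadStream } from "fs";
import { createInterface } from "readline";

// Scan an access log and count which URLs Googlebot requested, to see what
// is actually behind a "time spent downloading a page" spike.
async function googlebotUrlCounts(logPath: string): Promise<Map<string, number>> {
  const counts = new Map<string, number>();
  const lines = createInterface({ input: createReadStream(logPath) });
  for await (const line of lines) {
    if (!line.includes("Googlebot")) continue;
    // Combined log format request field: "GET /some/path HTTP/1.1"
    const match = line.match(/"[A-Z]+ (\S+) HTTP/);
    if (match) counts.set(match[1], (counts.get(match[1]) ?? 0) + 1);
  }
  return counts;
}
```

Sorting the counts, or grouping URLs by path prefix, usually makes it obvious whether Googlebot wandered into a new, slower URL space.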
All right, so, thinking about the upcoming page performance update, I've been considering asynchronously loading content, similar to what is described in the PRPL pattern documented on Google Web Fundamentals. Should I be worried about lazy-loading content if the crawler is using this content to determine the relevance of my page to a query? Basically, what's the trade-off between speed and page relevance, if there is one at all?

So what I would recommend watching out for here is to make sure that Googlebot is able to render all of your content properly. We talked about this in the Google I/O session we did two or three weeks back, where we went through how Googlebot actually renders pages. In particular, keep in mind how quickly some of this content becomes visible in search. If you're using lazy-loading techniques that require JavaScript processing to pull in textual content, keep in mind that rendering happens a little bit offset from crawling, so it might be several days or a week later that Googlebot actually renders the page and finds all of this information. If it's just textual information, it can take a couple of days longer to actually be indexed. Depending on your website, that might or might not be something to worry about: if you're a news website, obviously you want your news content indexed as quickly as possible; if you're more of a static website, then you probably don't mind the initial indexing time going from one day to a couple of days. So that's one thing to think about.

The other thing is, if within this chunk of lazy-loaded content you have internal links that you don't otherwise have within the static part of your website, then those, again, take a couple of days longer to be processed. For example, in an extreme case, if all of your internal linking is lazy loaded and requires JavaScript to actually be visible, that means we first have to render a page, which can take a couple of days; then we find new URLs; and then we index the next page, which takes a couple of days more. So you're looking at, I don't know, a week or maybe two weeks for the first layer or two of your website to actually be crawled. Whereas if this is a static site, we can crawl any new links we find pretty much the same day. So again, if you're a site that's producing a lot of new content and you need to have that indexed quickly, then you probably don't want to put your internal linking into the lazy-loaded part. I would make sure that at least the important internal links are within the static portion of your website as well.

So that's my thinking there: you really have to think this through for your website specifically, how you want it indexed and how quickly it needs to be indexed. If it's more of a static site that's not changing so quickly, then lazy loading all of these things probably makes a lot of sense; maybe you can gain a lot of speed advantages from it, which would be great. On the other hand, if this content really needs to be indexed quickly and you're producing a lot of new content all the time, then you probably need to make sure that this content is within the static part of your website, and lazy load something less critical instead.

But regardless of how you set that up, I definitely recommend double-checking the numbers on these things. Instead of blindly assuming that lazy loading will improve performance, really take the time to set up some test pages, measure the real-world effects, and run the various testing tools against those pages to see what changes you're really producing, and whether it's worth adding an extra layer of difficulty to the maintenance and long-term upkeep of your website. So, unfortunately, no easy answer.
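As a sketch of that split, critical links static and non-critical content lazy loaded, here's one common pattern using IntersectionObserver; the element ID and fragment URL are invented for illustration:

```typescript
// The primary navigation stays in the static HTML so Googlebot can find new
// URLs without rendering. Only a below-the-fold block is lazy loaded.
const placeholder = document.querySelector<HTMLElement>("#related-content")!;

const observer = new IntersectionObserver(async (entries) => {
  if (!entries[0].isIntersecting) return;
  observer.disconnect();
  // Hypothetical server-rendered HTML fragment for the non-critical section.
  const res = await fetch("/fragments/related.html");
  placeholder.innerHTML = await res.text();
});

observer.observe(placeholder);
```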
Do URL parameters work like a robots.txt file? So if we block something in the URL parameters tool, does that prevent Googlebot from crawling those pages and seeing any meta tags on them?

So the URL parameter handling tool lets you do a variety of different things, not just block everything. Depending on how you have it set up, we can take it as a sign that you don't want these URLs crawled. In general, we will crawl those URLs less frequently; we might still crawl them every now and then. And when we don't crawl them, we generally wouldn't index them. So it's not quite like robots.txt, in that it doesn't block us from crawling those pages; rather, if we think we still need to sample some of them, we can sample them. And with regards to indexing itself, it's not something where we would index a URL by itself without actually seeing the content there. So it's kind of similar to robots.txt, but not quite the same.

Consider these URLs: one with a directory /game/PS4, and one with /console/PS4. Is the second one considered better because it's more directly related to the product, or does this make any difference?

This wouldn't make any difference ranking-wise from Google's point of view. URLs themselves are a really, really tiny factor when it comes to understanding what a page is about. Usually we look much more at the page's content and the context of that page within your website, and for that it doesn't matter whether the page is listed under /games or /products; from our point of view, that's pretty much all the same thing. The one reason you might choose to do something fancy with subdirectories like that is if you want to track them separately. For tracking, it often helps to have different subdirectories or something in the URL so that you understand which product group a page belongs to, and then you can see the performance of your /games directory or your /consoles directory. But purely from an SEO point of view, do whatever works well for you.

How does Google decide for which queries YouTube or news results should come into web search? Because for the same query, sometimes we see a news section and sometimes we see a video section. What's up with that?

We have a number of algorithms that try to figure out which formats of search results are relevant for the user at that time and for that query. So it's not something where we blindly say: for this query, we will always show a video one-box, or always show top stories or the news section. It really depends a lot on what we see happening on the web and what we think makes sense for the user at that time. So, no simple answer there.

Would canonical and noindex tags in the HTTP header make a significant difference, or is it just that these tags have to be in the head? Does it save any crawl budget?

From our point of view, we can process them at the same time. Whether you have them in the HTTP header or in the head section of a static HTML page, they're exactly the same to us; they're not stronger or better either way. You can use whichever format works best for you. It's slightly different when it comes to these tags being added during rendering: if JavaScript is creating the link rel=canonical, we wouldn't take that into account; we only use it from the static HTML. With regards to the noindex tag, if JavaScript is adding it, keep in mind that rendering takes place a little bit later, so there's a window between us crawling the static HTML version and rendering adding the noindex tag during which we would still index the page. That's generally not such a good idea. I'd make sure those tags are in the static HTML if at all possible: the rel=canonical needs to be there for us to process it, and the noindex is recommended to be in the static HTML version, because otherwise you might see a kind of flickering in the search results, where we index a page for a while and then drop it out of the index again.
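For reference, the HTTP-header variants might look like this; a minimal Express sketch, with the route and URL invented for illustration. The header form is mainly useful for non-HTML files such as PDFs, which have no head section to put tags in:

```typescript
import express from "express";

const app = express();

app.get("/reports/:id.pdf", (req, res) => {
  // Same meaning as <meta name="robots" content="noindex"> in an HTML head.
  res.set("X-Robots-Tag", "noindex");
  // Same meaning as <link rel="canonical" href="..."> in an HTML head.
  res.set(
    "Link",
    `<https://example.com/reports/${req.params.id}.pdf>; rel="canonical"`
  );
  res.send(); // Streaming the actual PDF is elided in this sketch.
});
```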
If making a privacy policy via a form, does it mean you have to have a possessive name in the document? I don't know what you would need to do for a privacy policy, so you probably want to check with someone local to where your website is based to see what is legally required in a privacy policy.

Regarding bounce rate and overall site quality: our invoice pages and our order tracking pages show a 100% bounce rate. Is it enough to just disallow them?

Totally up to you. Whatever you want to do with those kinds of pages is essentially fine from our point of view, because probably nobody is going to search for them anyway. Users will probably click a link from their email, go to the page, look at it, and then go away again. It's not something that we would pick up for indexing. One thing to keep in mind here, though, is that if these pages are blocked by robots.txt, it could theoretically happen that someone randomly links to one of them, and if they do, we could index that URL without any content, because it's blocked by robots.txt and we wouldn't know that you don't want these pages indexed. Whereas if they're not blocked by robots.txt, you can put a noindex meta tag on those pages, and then if anyone happens to link to them, and we happen to crawl that link and think there might be something useful there, we'd know that these pages don't need to be indexed and we can skip them completely. So in that regard, if there's anything on these pages that you don't want indexed, don't disallow them; use noindex instead.

If the same page is accessible via different URLs, but the canonical points at the main one, is that all right, or is there any SEO problem with such an implementation? For example, the following URLs all have the same content and a canonical to the main version: /blog/123 with long or short directory names. I noticed in the logs that Google is frequently crawling the different versions, and I was wondering whether this is a problem or not so important.

Usually this is perfectly fine. If we see links to these individual pages, we'll try to crawl them; if we see a rel=canonical there, we'll try to fold them together, because they're the same content and you're essentially telling us which one you want indexed. What I would keep in mind is that if we're crawling these separate URLs, then we're probably picking up links to them somewhere, and for the most part those are probably links within your own website. So I would try to dig in and see where these links are coming from: is this something you can fix within your site, so that Googlebot doesn't need to go off and crawl those pages at all? That's more of a practical thing you might want to think about from an SEO point of view. If we're looking at these pages, seeing the same content, and you have the canonical set up, that's perfectly fine.
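A tiny sketch of what that canonical setup might look like as a template helper; the names and URLs are illustrative:

```typescript
// Every URL variant of a post renders the same canonical tag pointing at the
// preferred version, so Google can fold the duplicates together.
function canonicalTag(postId: number): string {
  return `<link rel="canonical" href="https://example.com/blog/${postId}/">`;
}

// /blog/123/, /blog/123/some-long-slug/, and /blog/123/short/ would all
// include canonicalTag(123) in their <head>.
```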
What do you suggest regarding an international website that has only a very small number of translated pages? The US site has 750 pages and the EU site has 20 pages. At present we don't have the capacity to create localized content for everything; what can we do?

In general, websites can work like this, and it's a situation where hreflang markup could be useful. hreflang works on a per-page basis, so if you have some pages translated into some languages and some pages not, that's perfectly fine; totally up to you. If you put the English content on the international site as well, that's something you can also do. I suspect it'll confuse users a little bit if some pages are translated and some are not, but from an SEO point of view that certainly works as well.
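Since hreflang is per page, the markup only needs to list the language versions that actually exist for that page. A small illustrative helper, with made-up URLs:

```typescript
// Emit alternate links for just the translations this page really has.
function hreflangTags(translations: Record<string, string>): string {
  return Object.entries(translations)
    .map(([lang, url]) => `<link rel="alternate" hreflang="${lang}" href="${url}">`)
    .join("\n");
}

// A page translated into US English and German only; the other 730 pages can
// simply omit the tags for languages they don't have.
hreflangTags({
  "en-us": "https://example.com/widgets/",
  "de": "https://example.de/widgets/",
});
```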
In the AMP carousel... Yeah, I have a question. All right, go for it. Hello, I blocked Google from crawling my site. How much time does it take for my site to be removed completely?

To remove a site completely, if you use the URL removal tool, it'll usually be gone within less than a day. Not the removal tool; I blocked access for crawling my site in the robots.txt. If you block it with robots.txt, then usually we won't remove the site completely, because we might still index just the URLs, without their content. And there's also no fixed time: we have to re-crawl the pages, whether for the robots.txt or because you're returning 404s, and how long that takes really depends on the website. It can be several days for some pages, and several weeks or months for the rest. OK.

All right, I see there are some questions here in the chat as well. For PWAs, how high would you put them as a priority, or what should we do there, I guess?

So a PWA is, I guess, a name for a number of features that fall together. It's a neat way of creating a website that's more like a web app, in the sense that it often has offline functionality, and it has technical implementation details that let you add it to your home screen on your phone and let you do notifications, kind of like a native app. These are things that I think are pretty neat, and you can make these really fast, so you can build really speedy websites in the form of a PWA.

With regards to SEO, one thing to keep in mind is that these are usually all made with JavaScript frameworks, so everything around rendering comes into play. What I would recommend here is: if you're keen on living on the bleeding edge from a technical point of view, definitely set up a PWA for yourself to try things out with. What I wouldn't recommend is waiting for some random client to come to you and say, "hey, I have a PWA, can you do SEO for me," without you having tried it out yourself. There are some things that are a little bit trickier, and some things that require extra knowledge about how JavaScript is processed, and how, on the server, you could handle things like dynamic rendering. All of these things are a bit tricky, so you need to try this out first. You can't assume that everything you've been doing with SEO for static sites automatically just works with a PWA; you really have to think a little bit further. And some of these things are quite technical, so it makes sense to dig in and understand JavaScript a little bit, or to work with a developer who can really guide you, and whatever sites you're working with, on that journey of making a PWA work well in search. So it's not impossible; I'd say it's just harder, and not as easy as copying and pasting a bunch of markup onto pages.

But it shouldn't have to replace the mobile approach, right? You can do that; you can replace the whole site with a PWA. Some sites have done that, and it works really well. It's a normal website; it's not like a native app that you have to install on your phone. It's essentially a normal website that works in a normal, modern browser. But you can also have it all separate, right? You can keep your mobile approach as it is now and have a second option going with a PWA? Sure, sure. And Google doesn't get confused about that, as long as you have the... Well, if you have rel=canonical set up, then usually we'd be able to pick that up. It's something where you can try different things out. You could say, "for my shop, I'll try it as a PWA." Or maybe at some point there'll be a WordPress plugin that does everything as a PWA, so you can transfer your blog into a PWA and keep the rest of your site as a normal site. A PWA is essentially just a fancy name for a bunch of features that are available for websites; it's not something completely different. It's not that you have to write your code in Java instead of JavaScript. It's just a different feature set that you have available for websites.

Your colleague Ilya Grigorik had a really nice presentation about this from the web team, and he kind of pitched it as: really go all in, because it's really nice, and we've seen some really nice stuff.

Yeah, I think there are some really cool things you can do with a PWA setup, and some aspects that are really neat from a functionality point of view, where you might even be able to avoid building a native app. I think Twitter has a PWA that essentially runs in the browser on your phone, so you don't have to install anything; I use that all the time. Instagram has something similar. So this might be an option there. I still think you really need to have a technical mindset for this, or you need to work together with developers who really know what they're doing, and bring your SEO mindset to the things that you've seen. A lot of times we'll see this split between the developers saying "we'll do the PWA" and the SEOs saying "we'll work on our website," and at some point both of these are live, and one works for SEO and the other doesn't work at all, because the developers don't know these things: what a URL is, why you should watch out to make sure URLs are unique, that you don't have infinite URL spaces, that the site is crawlable. All of these things you have lots of experience with, and they have no experience with, so it's really important that you work together.

So basically, from a search perspective, it can be done, but it should be treated with care, with extra care. Sure. Yeah, makes sense. Thanks.
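For the offline part of the feature set described above, a minimal service worker sketch; the cache name and file list are placeholders, and a real PWA would add more careful cache versioning:

```typescript
// sw.ts: cache a small app shell at install time, and fall back to the cache
// when the network is unavailable.
declare const self: ServiceWorkerGlobalScope;

const SHELL_CACHE = "shell-v1";

self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) => cache.addAll(["/", "/app.js", "/app.css"]))
  );
});

self.addEventListener("fetch", (event) => {
  event.respondWith(
    fetch(event.request).catch(async () => {
      const cached = await caches.match(event.request);
      return cached ?? new Response("Offline", { status: 503 });
    })
  );
});
```

The page registers it with navigator.serviceWorker.register("/sw.js"), and everything else stays a normal website, which is the point made above: a PWA is a feature set, not a different platform.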
So, another question, about "near me" queries. I can ask it a little faster if you want. All right, please elaborate. Would you go with one dynamic "near me" page that targets the near-me queries, showing results for whatever city the user is in based on location? Or would you go with one page per city, each optimized for "near me" plus that city, which might make it, I don't know, safer from a Google crawling, indexing, and assessment perspective?

I think both of those are valid choices. The important part to keep in mind is that if you have just one dynamic page that completely changes its content based on location, then Google, crawling from the US, will only ever see the US content. If, for example, you have a page that works across Germany and shows different events depending on which German city you're in, then Googlebot will always see, I don't know, events in California, and it will never know that there are actually events in Germany that could be indexed as well. So it makes sense to have some mix here: Googlebot can still find a page like "events in Zurich" or "events in some city in Germany" and index that specific local content, and you also have some kind of generic landing page where Googlebot finds general information about your website, but which also has this dynamic aspect that provides local content for local users.

All right, then a question about GDPR pop-ups: how will they affect SEO and usability? Yeah, I don't know. They're sometimes quite annoying, these pop-ups, but they are how they are. In general, if the pop-up sits on top of the content itself, so the content loads within the HTML and you're using JavaScript to show a pop-up on top of that, then we still have the normal content behind the pop-up to index normally. That part usually works fairly well. What doesn't work on our side is if you replace all of the content with just an interstitial, or if you redirect to an interstitial and Googlebot has to click a button to actually get to the content. That's not going to happen. What will happen is that we'll index the interstitial content, because that's the only thing we have on the page, and we won't know that you can actually click a button and get more information. Googlebot also doesn't keep cookies, so it isn't able to say, "I'll click accept now, and the next time I crawl your website, you just show me the content normally." Googlebot won't return that cookie to you saying it agrees with your terms of service.

So those are the two extremes we've seen. For the most part, sites get this fairly right, and you can test it, of course. You'll see it fairly quickly in search: if your site doesn't show up at all in the search results for its normal content, then probably we can't pick that content up anymore. You can also test this on a technical level with things like the mobile-friendly test, where you can render the page as the mobile Googlebot and then look at the HTML that is generated. Within the mobile-friendly test you can now look at the HTML after rendering, and you can double-check that we can actually find your normal content in there, and not just the interstitial.

The third part of the question goes on: the regulation says not to record data in Google Analytics if users disagree, or if they do nothing and close the pop-up. We see a huge drop in Google Analytics; bounce rate jumps up, returning visitors are gone. I don't know how Google Analytics handles this, sorry. I'd check in the Analytics forum; maybe they have some more information there. Most likely they would. All right, sure.
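A sketch of the overlay-on-top pattern that indexes well, with the consent hook also gating analytics, since that was the third part of the question; the element structure and callback are invented for illustration:

```typescript
// The real content is already in the HTML; the consent pop-up is only layered
// on top of it, never a redirect or a content replacement, so a crawler that
// ignores the overlay still sees the actual page.
function showConsentOverlay(onAccept: () => void): void {
  const overlay = document.createElement("div");
  overlay.innerHTML = `<p>We use cookies. <button type="button">OK</button></p>`;
  overlay.querySelector("button")!.addEventListener("click", () => {
    overlay.remove();
    onAccept(); // e.g. only start analytics tracking after the user agrees
  });
  document.body.append(overlay);
}
```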
Let's see, what else do we have here? Is having 300 links on one page, where about 80% of them are in the header, footer, navigation, and sidebars, wrong? What would you recommend?

That's perfectly fine. And if someone is linking to you in their footer or sidebar, you can't really change that; they're linking to your website, and that's their choice of how they link to it.

In the AMP carousel for media publishers and top stories, let's see... how does Google compare the Google edition with the user's location? For example, a Russian article might appear on Google.ru but not on Google.it. Does Google use the same country and language signals as Google Search, or does top stories use something different?

Yes, as far as I know, the top stories carousel is an organic search feature, so the same kind of language information and country targeting applies there as well. Geo-targeting is based either on a country-code top-level domain or, for a generic top-level domain, on the Search Console setting. As for language information, either you don't need to specify anything at all, because we can recognize most languages on a page perfectly fine, or, if you have different language versions of the same content, you can use things like hreflang to help us there.

Let's see. We have a bunch of old, buggy redirect chains and subdomains that are not indexed. So I looked at a few of these pages. Essentially, what you're seeing is that when you do an info: query for an old URL, you see the new URL show up in the search results, and it looks like you're trying to stop that from happening by returning a 410 or 404 instead of a redirect. From our point of view, this is perfectly fine, and you don't need to do anything special to break that connection. An info: query tries to show the best matching thing it has, and sometimes that is just the new URL, because things have moved on to a new URL, and maybe we have that information in the background and can show it to people who are explicitly searching for the old URL. That doesn't mean we're indexing the new URL in any kind of bad or wrong way; it's really just that when people search for the old URL, we can also show the new one. Nothing is broken there, so there's no need to force a 410 for these URLs, or to block them with robots.txt or the URL removal tool. We've already processed the move from the old URLs to the new ones, and the info: query is not indicative of any problem on your website; that's really just our algorithms trying to be extra helpful, saying, "oh, you're probably looking for this, let me just show it to you."

Let's see. I have a question regarding pre-rendering our JavaScript app for Googlebot; I think we talked about that briefly already. We have two brands and a selection of websites in the UK which essentially sell the same products. The only difference is that one is marketed to photographers and videographers, and the other to designers. The websites have the same structure and templates. How would Google deal with this, and should we be worried about Google potentially demoting them?

So usually what I'd recommend there is to try to find a way to use rel=canonical to separate the products you have into one or the other of these websites. With two websites, it's generally not that much of an issue. But if you have a lot of different websites targeting different audiences, then it's really important to us that it doesn't look like doorway sites, where we see essentially the same content on a lot of different domains, just slightly tweaked. That's something I would try to avoid, and you can solve it on your side with rel=canonical, where you can say: this camera is clearly a high-end professional camera, therefore it's targeted towards this user, and this other device is targeted more towards the other user. You can still list a product on both websites, but you have a clear canonical setup saying this product belongs here and that product belongs there. Then, when it comes to indexing, we know which products to index for which website, and that works out fairly well. When you go to the websites themselves, you can still see all of the products overall; but from an indexing point of view, it's a lot easier for us to say, this belongs here, and this belongs there.

Theoretically, with an unnatural links partial match manual action, is it possible to say that the previous SEO built those links and we just don't want them counted?

So this is the type of thing you can just use a disavow file for. That's what I would aim for here: if you think these links are potentially problematic and you really don't want them counted, just use a disavow file. For the most part, sites usually don't need to do anything special beyond that.
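For reference, a disavow file is just a plain text list uploaded in Search Console, one URL or domain per line, with # for comments; a small sample with made-up hosts:

```
# Links built by the previous SEO that we don't want counted.
domain:spammy-directory.example
https://link-network.example/some/page.html
```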
All right, wow, we made it through just in time; three minutes left. Is anything else on your mind before we close out?

Hi, John. I have a question related to canonical and noindex. We have lots of comments on our web pages, and our system accidentally turned those comments into individual pages, so if we have ten comments, we get ten extra pages from those pages. I came up with a strategy to canonicalize them back to the original URLs about two weeks ago, but looking into the Index Coverage report in the new beta Search Console, it says they're still indexed. So I'm not sure whether Google is treating them as canonicalized back to the originals, or whether I should use another strategy and make those comment pages noindex.

So the three things I would look at there are: redirecting, if you can do that, which is probably the best approach; the rel=canonical, which is definitely something you can do; or the noindex, if you want to say, "I really don't care about these pages ranking individually, I just don't want them shown in the search results." All three of these sometimes take quite a bit of time to take effect. Especially if you're looking at a bigger website with a lot of different comments, we're probably not crawling those comment pages very frequently, so I'd guess that regardless of which approach you take, it'll probably take on the order of, I don't know, three to six months, maybe even longer, for them to be completely reprocessed. That's probably what we're seeing there: it just takes a long time for them to actually be reprocessed.

OK, I'll probably just wait. Yeah, I don't think these would cause any problems. If they were more visible in search, we would probably crawl them more frequently, so I think you're all set. Yeah, some of the comment pages are already being crawled; when you search for some terms, you can find them in the search results. If you think those are problematic, you can also use the URL removal tool to remove individual ones, or, if they're all in one subdirectory, say /comments, you can have those removed as well. But for the most part, I would just let them be re-crawled and let them take their time. Sure, I'll try the URL removal tool. Thank you. Cool, all right.

Great, so with that, we've come to the end. We have the next office-hours hangout set up for Friday, I believe, and a German one set up for Thursday, so if there's anything else on your mind, feel free to drop those questions in there as well. In the meantime, we're of course also on the Webmaster Help Forum, if there's anything on your mind that you'd like to discuss with others. All right, thanks for joining; I hope this was useful. Looking forward to seeing you again in the future. Bye, everyone. Bye. Bye. Bye, bye.