All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland. And part of what we do are these office-hour hangouts, where webmasters and publishers like you all can join in, ask your webmaster and web-search-related questions, and we can try to get you some answers. We have Martin joining us today. So any fancy JavaScript questions, I am sure he'll be good to go. Yeah. But I'm sure we have tons of other questions as well. So if any of you want to get started with a question, you're welcome to jump in now, everybody. I have a question related to JavaScript rendering. And Martin, since you're here. So in the new version of Google Search Console, when you do the fetch and render element there, if your content is not appearing in there, is it safe to assume that it's something Google is not going to find? Generally speaking, yes. So the testing tools sometimes are not able to fetch certain resources due to various constraints and certain things that we are working on. But generally speaking, if we can fetch all the resources and the content does not show up, then it's likely that it's not going to be indexed yet. OK, perfect. Thank you. You're welcome. That's what I'm here for. Yeah. Hi. Papa. Hi. Yeah, I have questions related to Google Search Console. I have around 40 sitemap XML files in my sitemap index file. And I have around 147,000 web pages. But in the Google Search Console status, it shows only 65,569 URLs have been discovered. May I know, is there any mistake in my sitemap index, or any issue that can be fixed, or is it from the Google end? In general, the thing to keep in mind is we generally don't index all pages that we know of. So even though we might know a lot of URLs from a sitemap file, it doesn't mean that we automatically index them. OK. That's kind of the first step.
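As a side note on the numbers being compared here, the total URL count across a sitemap index can be tallied with a short script and checked against what Search Console reports. This is a minimal sketch using the standard sitemaps.org XML format; the hostnames and URLs are placeholders, not from the actual site discussed.

```python
import xml.etree.ElementTree as ET

# The sitemaps.org namespace used by both sitemap indexes and sitemap files.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def child_sitemaps(index_xml: str) -> list:
    """List the <loc> values of the child sitemaps in a sitemap index."""
    root = ET.fromstring(index_xml)
    return [loc.text for loc in root.findall("sm:sitemap/sm:loc", NS)]

def count_urls(sitemap_xml: str) -> int:
    """Count the <url> entries in a single sitemap file."""
    root = ET.fromstring(sitemap_xml)
    return len(root.findall("sm:url", NS))

# Minimal example data (placeholder URLs).
index_xml = """<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://example.com/sitemap-1.xml</loc></sitemap>
  <sitemap><loc>https://example.com/sitemap-2.xml</loc></sitemap>
</sitemapindex>"""

sitemap_xml = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/page-1</loc></url>
  <url><loc>https://example.com/page-2</loc></url>
</urlset>"""

print(len(child_sitemaps(index_xml)))  # number of child sitemaps in the index
print(count_urls(sitemap_xml))         # URLs in one child sitemap
```

In practice you would fetch each child sitemap listed in the index and sum the per-file counts; a gap between that total and the figure Search Console shows is, as discussed above, usually just Google choosing not to index everything it knows about.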
So it's pretty normal, especially for larger sites, to have a lot of pages that are known but not indexed. OK, OK. So I wouldn't kind of panic and try to think that you're doing anything wrong. What you can do to make it a little bit easier is to internally link those pages better, so that when we find one page on your website, we can discover all of the pages on the website, even without the sitemap file. OK. But how many URLs have been discovered? Where can I check that status in Google Search Console? That's in the coverage report. That should be in there. OK. The coverage report also shows 65,569. But when I crawl the site internally myself, I find 147,000 URLs, and they're all linked. I'm doing the same process, and I come up with 147,000. So why is Google not able to find those? I think the internal linking within your website is fine, and it's probably just a matter of, well, Google knows a lot of URLs, but we don't index all of them. And that's not really a matter of a technical thing. It's more that we've seen a lot of pages from this website, but we're not sure if it's worth indexing every one of them. So we index as much as we think makes sense. And if we see that this website does really well, that it's an important website overall, then we try to index more. OK. Thank you. I have one more question. In the coverage section, there's the number of sitemaps read, right? In the sitemap index XML, I have 40 sitemap links. But in the sitemaps-read section, there are no results displayed. There are zero results. I don't know. That's hard to say. What I would do there is maybe post that in the Webmaster Help Forum with a link to your sitemap file, so that someone can double-check to make sure that everything is OK there. I'll go for it. Thank you. Sure. All right. Let's look at some of the questions that were submitted. New people keep joining, which is great. I think there's a little bit of noise somewhere.
So let me see if I can mute some of you. Feel free to just unmute if there's anything on your mind. Let's see. Got hit by the June core update. We're now working on quality content. How many months do I have to wait for recovery? Or do I only have a chance of recovery with the next core update? Or can Googlebot decide to remove a penalty without any core update? So I think, first of all, a core update is not a penalty. It's not a matter of the Google algorithm saying this is a bad website. It's essentially just saying, from the way that we determine which pages are relevant, we've kind of changed our calculations and found other pages that we think are more relevant. So it's not a matter of doing something wrong, and then you fix it, and then Google recognizes it and shows it properly. But it's a matter of, well, we didn't think these pages were as relevant as they originally were. And these kinds of changes can happen over time. We have a whole blog post that goes into fair detail on what we look at with regards to these core updates. So I would double-check that. With regards to seeing changes in one core update, and when you would see the next batch of changes if you make a significant effort to improve your website, for example: in general, this is something that happens on an ongoing basis. So on the one hand, we have the core updates, which are kind of bigger changes in our algorithms. And on the other hand, we have lots of small things that keep changing over time. The whole internet changes over time. And with that, our search results are essentially changing from day to day. And they can improve from day to day as well. So if you've been making significant improvements on your website, then you should see these kinds of subtle changes over time as well. So it's not a matter of waiting for a specific change to see those changes in effect.
But again, these core updates are not a sign that there's anything bad on your website, that there's web spam on your website. It's essentially just a way of us recognizing, or trying to recognize, the relevance of individual pages for individual queries. John, can I also ask something regarding content overall, not necessarily regarding the update? Sure. So we work with a fairly authoritative website. And they have a large blog section. And it's fairly popular. And we notice every time a new blog post is published, it tends to start ranking and attracting traffic fairly fast. Like within a month, they're already top 10 for fairly competitive keywords. They start attracting traffic. So we created a new section on the website. It's kind of more for evergreen content, like guides and things that don't typically expire. But essentially, we're still talking about articles in the same niche. And we notice that with this new section, the articles published there tend to start ranking much more slowly. So they attract traffic much later than posts that are published within the existing blog section. So is it something that matters to Google, whether Google trusts certain sections, like subfolders or anything like that, on the website? In the sense that it sees something new and, I don't know, is kind of testing it a bit more, or anything like that? I don't think we have anything in that direction. So it's not that we would say, like, this is a blog, we must rank its content quickly, and this is a wiki or an FAQ page, we should rank it slowly. I don't think we would have anything like that. What you might just be seeing is that within a blog, sites that are built up, for example, on WordPress, they have fairly well-set-up internal linking. And they have an RSS feed. And they have kind of the whole setup for making content easily available. And you might not have it set up similarly optimally for the other parts of the website.
I don't know. I'm just kind of guessing there. But it's not that we would say a blog should be treated differently than a normal web page. Well, so let's assume both are blogs; it's just that one of them is in a brand-new section of the website. So instead of slash blog, it's, I don't know, slash guides or something like that. But Google hasn't seen that section before. Would it kind of split it away from the rest of the website and treat it differently, in terms of, let's test it for some time to see whether you're going to publish actual quality content there? I don't know of anything where we would split it off like that. So trying to think what the options might be where something like this can happen. I think with adult content, that might be possible. If we think a part is adult content and the rest is not, that might be something that plays a role. But it sounds like that wouldn't play in there. Otherwise, the only thing that I have seen is that sometimes, from a pure crawling point of view, we might crawl different parts of the website at different speeds, just because we think some parts of the website are updated more quickly than other parts. But if you've had these regular updates on both parts of the website for a while now, then that's something where I wouldn't think anything like that would be playing a role. So that seems like something where probably it's just, I don't know, our normal algorithms at play, and not something special that comes from it being a different part of the site. I'm asking only because we kind of tested it. So we published something on the guides section. Then a week later, we published something on the blog. Well, it's different topics, of course, but kind of the same process. And not necessarily one is more competitive than the other. And the one published in the blog section just ranked right away.
The other one is kind of struggling and keeps building up in impressions just much, much more slowly. So we didn't know whether there was anything special about that new section. I don't know. I mean, in your case, I would just keep on testing. Like, try to figure out if you can narrow things down. Maybe it's something kind of unique. Maybe it's just that you recognize, OK, well, if Google sees it like this, I will just put my content here. That might be kind of the other approach. But I would keep an eye on it. It sounds like maybe it's something specific to your website. Again, I don't think we would do anything like that by design. So it's not that someone has hard-coded something, like saying, well, blogs are updated like this and other pages like this. Yeah. But you mentioned RSS feeds and everything, really. It is a WordPress-based website. And indeed, the blog section does have everything that you mentioned, and that other section does not. But wouldn't that just help with indexing, not necessarily ranking? So if the content is indexed, would it perform differently if you don't have those elements? No, no. RSS would help with indexing, yeah. OK. But sometimes there are things within a blog where you have the categories set up, or you have the monthly overview pages that link to the individual blog posts. You kind of have this internal linking fairly well optimized. And depending on how you set up the website, other parts of the website might not have that kind of well-optimized internal linking. And that kind of internal linking could play a role with regards to ranking. OK. We'll check it out. Sure. Thanks. All right. Let's see. For search queries made in Google from a desktop computer, our m.domain ranks for some of our very important keywords. It seems to us like this is a bug on Google's side. And then there's a link to a forum thread. I didn't check the forum thread, so I'm not quite sure exactly what you're seeing.
But one of the things that is worth mentioning is that, especially with the shift to mobile-first indexing, if you have a separate m.site, it's more likely that something like this can happen, in that we might show your m.version in the normal desktop search results. And often that's due to inconsistent linking between the mobile and the desktop version, where we can't map that exactly. Sometimes it's just a matter of the way that we crawl and index pages, where maybe we'll pick up one version first and we'll see the canonical, and we'll stick with the m.version because that's the one that we would choose as canonical there, and we would show that. So from our point of view, with an m.site, with mobile-first indexing, it's more likely that you would see the m.version in the desktop search results. Because of that, what I would do is make sure that you redirect desktop users from the m.site to the appropriate desktop version. So kind of catch it on your end, rather than rely on Google showing the appropriate URL. This is a bit similar to the situation before mobile-first indexing, where sometimes we would show desktop URLs for an m.site on mobile. And essentially it has just kind of shifted around here. And this is something from the mobile-first indexing team; I talked with them about this earlier today. And from their point of view, we do try to catch this as much as we can, but we can't catch it completely. And at the moment, we're thinking that this is probably something that over time will become less of an issue anyway, because people will redirect, and more and more people are using mobile anyway to search. So it's something where we would expect that potentially, if you have an m.site, this will kind of remain in an unstable situation like this.
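The redirect John describes, sending desktop users from the m. host to the matching desktop URL, could be sketched roughly like this. It's a minimal sketch under assumptions: the hostnames are placeholders, and the user-agent check is deliberately crude (real mobile detection is more involved, and in practice this usually lives in web server or CDN config rather than application code).

```python
MOBILE_HOST = "m.example.com"    # placeholder hostnames
DESKTOP_HOST = "www.example.com"

def desktop_redirect(host, path, user_agent):
    """Return a 301 target if a desktop user hits the m. host, else None.

    Crude desktop check: treat any user agent without 'Mobile' in it
    as a desktop browser. Real detection needs a proper UA library.
    """
    is_desktop = "Mobile" not in user_agent
    if host == MOBILE_HOST and is_desktop:
        # Same path, desktop host: a 1:1 mapping between the versions.
        return "https://" + DESKTOP_HOST + path
    return None

# Desktop browser on the m. host gets redirected; mobile stays put.
print(desktop_redirect("m.example.com", "/products", "Mozilla/5.0 (Windows NT 10.0)"))
print(desktop_redirect("m.example.com", "/products", "Mozilla/5.0 (Linux; Android) Mobile"))
```

The point of the sketch is the mapping: because the redirect is path-preserving, a user (or crawler) landing on the m. URL from a desktop result always ends up on the equivalent desktop page, which also helps Google connect the two versions.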
Again, what I would just do there is make sure that you redirect from the m.version to the desktop version for desktop users. That also helps us to understand that connection better and to show the right URL at the right time. So that's kind of the best way to handle it, or rather, the easiest way to handle it. Of course, the best way to handle it would be to use a responsive site, or to use dynamic serving, where you just have a single URL and you don't have all of these differences between the m. and the desktop version. Because every time you have a separate mobile version, or a separate version of the site in general, it just makes everything a little bit more complicated. So it's not that we will stop our support for m.versions. It's just that if you have this setup, and we've shifted to mobile-first indexing, or we're shifting to mobile-first indexing for your site, then it's possible that you will see this. Hi, John. Hi. So yeah, the question was from us. So thank you very much for stepping into it. Yeah, in our case, we actually didn't receive any notification from Google in Search Console that we would be on mobile-first indexing yet. And the redirect from mobile to desktop is there. So the setup should be stable. But it was last week that we noticed it, and not only for the German market. Since we have products in many markets, we realized it has been happening in the UK and in other markets, too. And we have the right canonical and the right alternate. Exactly, yeah. So we have done everything Google recommends. So we're pretty sure we've done the setup well for a separate-URL situation. Yeah, actually, it's really painful to see it. But I mean, if you have the redirect set up for desktop users, then they will see the m.version in the results, but they'll be redirected to the desktop version anyway. So it's something where users wouldn't really see it as much. But I agree, it does look weird. We have also lost rankings.
That wouldn't change anything with the rankings, though, because it's not that these pages would rank differently. It's essentially just the URL that's being shown. OK. So if you're seeing changes in the rankings there, then that would be due to something else. Not a problem. OK, all right. Thank you. Thank you. Yeah, someone with a question in between? Hello. Hi. Hi. It's also regarding these mobile versions. So we do our best to follow your guidelines, and we ran these mobile usability tests. And it went quite well. So it checked our site, like several thousand pages, and then stopped at the last 20 or 20-something pages. And now, for weeks, nothing has happened. So we think maybe it has frozen or something, but we have no influence to change this. So what could this be? I don't know if you could follow my question. It's this mobile usability report you have in Search Console. OK, I think you asked this question in the post as well. Is that correct? Yeah, yeah, I even gave a link. So I took a quick look at the site there. And from our point of view, it looks pretty OK, in the sense that the mobile usability report, I think, shows around 20 pages that are still flagged as having issues with mobile usability. And the majority of the pages that were flagged in the past have passed the test in the meantime. So that's something where I would say you're essentially OK now. It's not that there's anything remaining that you really need to do there, because the numbers have gone down significantly from the numbers that were shown earlier on. So I think that's pretty much OK. It would be good to see that the number is zero, but somehow it doesn't happen. Yeah, that's sometimes tricky, because there can be technical reasons why individual pages don't pass the test. And then the next time we try the test, it works again. So for a larger website, a lot of times these errors will fluctuate a little bit, close to zero, but not completely zero. OK, so we can't do much. Just wait.
Maybe one day it will be zero. I don't know if it will be zero. I think it will remain fairly low. Maybe, I don't know, like a really small number, but it doesn't always go to zero, and that's fine as well. OK, and it doesn't influence the ranking, correct? If these are random pages on your website where the test sometimes doesn't pass, that's not a problem, because the next time the test runs, it'll be OK, and then things will be OK again. OK, thank you very much. That was all. Thank you. Sure. OK, then a structured data question. We are a new site. We recently updated our schema setup. We used to have an organization script as well as an article script, and I think the change that was made was to move the organization script into the article script, so that it's not separate anymore. I took a quick look at the website. I think it's linked. And from what I can tell, we pick up both parts of the markup fine. So we pick up the organization markup. We pick up the article markup with the logo as well. So I think that's something where you should be fine. You also mentioned that the structured data testing tool says it's valid, but the rich results test doesn't necessarily show it. The tricky part with the rich results test is that it only shows a very limited number of structured data types. So what you might just be seeing there is that the rich results test doesn't specifically show the setup that you use. So from my point of view, I think you're all set. Nothing much to do there. Parts of my website produced hundreds of 500 errors for four days. When the error was resolved on the fifth day, I lost many rankings. Even now, after one month, the rankings aren't back, and many pages are not indexed anymore. I tried to index these pages again with Search Console, but they're still not indexed. Is there anything I can do to help get the rankings back, or do I have to wait? So there are two things that generally happen with regards to 500 errors.
So 500 errors are normal server errors, where basically the server is saying, I don't know what to do, something is broken. From our point of view, we do two things there. On the one hand, when we initially see them, we slow down the crawling. And we do that across the whole website. So if we see a lot of server errors coming in, we will slow down the crawling, because we want to make sure that we're not the ones causing these server errors. So that's one thing. If we see server errors persisting for a longer period of time, and that could be, I would guess, around three, four, five days, maybe a week or so, then what will happen is we will think that these server errors are actually permanent errors. And we will drop those pages from our index and treat them more like a 404 page, where we say, well, every time we try to access this page, the server tells us there's an error. So maybe we should stop accessing this page so much. Maybe we should stop showing it in the search results. I suspect that's what happened in these cases: we saw them as server errors a number of times, and then we removed them from the index because we thought this was a permanent error. And essentially, the next step here is, once we can re-crawl them properly again, which we do automatically over time, and we see normal content again, we will show them in the index again. So usually, this is something that happens on an ongoing basis, regularly. We just retry old pages that we know of, or that we think might not exist anymore. And we double-check to see if they are still missing, or if they currently have content again. If they do have content again, we'll try to index them normally again. So if you're looking at a time, say, a month after a number of server errors, I would assume that might be right around the time when we start re-crawling a little bit more again.
Because if we see a page return an error for a while, then we think maybe we don't need to crawl it as much anymore. But a month seems like a reasonable time to double-check again. So my guess is, around about now, or like one to two months after this kind of situation happens, we would be able to go back and re-index a lot of those URLs and get them back into the search results. And when we can index them again, they can appear in the normal search results again. So it's not that they will start at zero. They will essentially be in the same state as they were at the time when they fell out. You can speed this up a little bit by using a sitemap file to tell us that these pages have changed and that we should go and re-crawl them fairly quickly. You can also use the Inspect URL tool and the submit-to-indexing tool in Search Console to let us know that these pages exist again. The tricky part, of course, is that you need to do this on a per-page basis. You can't say, re-crawl all of my website now, Google. But rather, you should say, these are the 10 or 20 most important pages that are missing now; double-check these pages, Google. And you have to do that individually. So that might be something that you're seeing there. But in general, this wouldn't be something that would have permanent effects. It could have effects for a temporary time, maybe, I don't know, a couple of weeks or a month or so, I would guess, which could be normal if you don't do anything special. But afterwards, things should settle down into the normal state again. If you see afterwards, after a couple of months, that it's still indexed, but it's not ranking as well as it used to, then that seems like it would be a normal ranking change, and not tied to those errors that you had in the past. Because once we can re-index those pages, we see them as normal pages again. There is nothing special holding them back. It's just a normal page.
Could the incorrect use of a Vary header response using user agent inhibit a site from being moved to mobile-first indexing? For example, providing Vary: User-Agent when the site doesn't actually use dynamic serving; it uses responsive design. So the Vary header in the HTTP response is a header where you can specify that this page changes depending on the user agent that accesses it. Usually, you would do that for a special mobile version. So if you have a mobile version that is shown to mobile users and a desktop version that's shown to desktop users, and you switch those automatically, then you would ideally use this HTTP header to let users and search engines know about that. If you use it incorrectly, essentially, that's fine, too. Because if you're telling us that a page is different depending on the user agent, and we crawl it with different user agents and see the same content, then we still have something to crawl and index. It's not that suddenly we can't index that page anymore, or that we can't recognize that it works well on mobile. It's essentially just telling us, well, you should crawl this page twice; and then we crawl it twice, and we see the same content twice. And from our point of view, that's not optimal. It's kind of like we're crawling more than we would need to. But it's not something that would be seen as problematic in any way. The other way around would be a bit trickier. If you're serving different content to mobile and you don't tell us about it, then we might not recognize that as quickly. But in general, it's something we have a lot of experience with on the web, so we'll try things out. And if we see that it works, then it works. If it doesn't work, then we'll be kind of more on the safe side. The other thing to keep in mind is that mobile-first indexing is not a ranking factor. It's not something that you need to have. It's not something that you need to force in any way.
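Coming back to the Vary header for a moment: the case where the header is used correctly, a dynamic-serving site whose response body really does change per user agent, might be sketched like this. This is a minimal WSGI-style sketch with placeholder HTML bodies and the same crude user-agent check as before; it only illustrates where the header belongs, not a production setup.

```python
def serve(path, user_agent):
    """Return (body, headers) for a site that dynamically serves by user agent.

    Because the body genuinely differs per user agent, the response
    declares 'Vary: User-Agent' so caches and crawlers know to fetch
    both variants. On a responsive site, where the body is identical
    for everyone, this header would just cause redundant crawling.
    """
    is_mobile = "Mobile" in user_agent
    body = "<html>mobile layout</html>" if is_mobile else "<html>desktop layout</html>"
    headers = {
        "Content-Type": "text/html; charset=utf-8",
        "Vary": "User-Agent",  # only meaningful because the content varies
    }
    return body, headers

body, headers = serve("/", "Mozilla/5.0 (Linux; Android) Mobile")
print(headers["Vary"], "->", body)
```

As John notes above, sending the header on a responsive site isn't harmful, just wasteful; the trickier failure is the reverse, varying the content without declaring it.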
If your site is not moved to mobile-first indexing, that's fine. It will continue to show in the search results. If it has been moved to mobile-first indexing, then it will still be shown in the search results; we'll just index the mobile version of that site. So it's not that you would need to push it to mobile-first indexing, or that being on mobile-first indexing is a sign of quality. From our point of view, it's essentially just a technical change in how we crawl and index those pages. It's not a quality signal. After checking... oh, I think that was the mobile usability test question. Next: I would like to migrate a site from the current subdomain, so example.wordpress.com, to a new domain, like example.com. I already set up the 301 redirect, but I cannot use the change of address tool in Search Console, since I can't verify an already-redirected WordPress subdomain. Do you have any suggestions on how to manage this type of migration? That's a good question. We get that every now and then. People moving from a subdomain or moving to a subdomain, people moving a subdirectory of a website to a different domain; all of these things are essentially normal site moves from our point of view, but they don't work with the change of address tool in Search Console. The Search Console change of address tool is really essentially for moving from one full host to another full host. So if you're not copying URLs one-to-one from one host to the other, you don't have to use the change of address tool. If you've set up 301 redirects properly, if you're doing everything else that we have in our documentation set up properly, then essentially we'd be able to process that move anyway and treat it as a normal move. So from that point of view, the change of address tool helps us to speed things up a little bit, but even without that tool, we're sometimes able to process a site move within a couple of days.
So I would double-check the Help Center, which has a fairly comprehensive guide on everything that you need to watch out for. Follow the steps there, and you should be good to go, even without the change of address tool in Search Console. I recently discovered that I have tens of thousands of backlinks from my own IP address, which is not shared, and twice as many from my IP address but without the last number. Should I be worried? Is it safe to disavow backlinks from your own IP address? No major changes to the website for quite a long time that could have caused this problem. So essentially, what probably happened here is we somehow found the IP address. Maybe there was a link somewhere within your website or on a third-party website. Sometimes there are these kinds of whois or website-information sites that show the IP address. And from that, we were able to access the IP address directly. And for whatever reason, we didn't see a redirect, or maybe we didn't see a rel canonical, and we started crawling that IP address. And if you serve the same website on the IP address directly as you have on the rest of the website, then we'll think, well, here's a nice website, and we start to crawl that website within your IP address range, essentially. And when that happens, if you have links to your normal website within that, then we will think, well, there's a link to this other website here, and we'll show that in Search Console as well. So from that point of view, that's kind of suboptimal, but it's not bad. So it's not a critical issue in the sense that you have spammy links or anything like that that you need to worry about. Essentially, these are links from your own website to your own website, just using a different host name: the IP address instead of the host name. And that's what we show in Search Console. What I would do in a case like this is try to find ways to reduce the crawling that can happen on the IP address directly.
You can do this by setting up a redirect, for example. You can do this by making sure that all of your pages have a rel canonical set to the proper full domain version. And with that, if we discover one page on this IP address, we can try to crawl it, and we see the redirect to your normal website, and then, essentially, we crawl your normal website from there on. So from that point of view, it's something that you can clean up. It's definitely worth cleaning up, because if anything from your IP address is indexed in the search results, users might be going there, which is probably a bit confusing to them. And it's not something that you need to disavow or process within the backlinks report in Search Console. So I would see it as a sign that, either currently or in the past, your IP address was indexable. You can fix that with the redirect and with the rel canonical. But past that, it's not something critical that you need to change. One of my clients has a health-food-focused website and uses an ad network as one of the ways to generate revenue. The ad network uses JavaScript to generate text-based links. Some of the links are sketchy, such as natural cancer cures, when the website has nothing to do with that. I was wondering if Googlebot would render those advertisements and potentially attribute the generated content to the website. Should we be worried about advertisements potentially triggering an algorithmic penalty, depending on what the ads say? We're looking to move away from this particular network anyway, because of the negative user experience; it's currently just in our waterfall stack as the last resort. So I definitely would look into this purely from a user's point of view, because users don't know where the content comes from on your website.
They look at your website, and if you suddenly have a link to natural cancer cures and you promote some third-party website with that link, then users might think that that's something you're providing on an editorial basis. So that's something where I would be cautious anyway. With regards to whether or not Google would see that as part of the page's content, that's more a technical question, in the sense of: when Google renders these pages, does it see these links? And you can test the rendering of the pages by using the Inspect URL tool. You can also do a rough check using the mobile-friendliness test for pages that you don't have verified in Search Console. And with that, you can roughly see: is this script triggering? Is it showing content within the pages or not? Oftentimes, ad networks are blocked by robots.txt; they prevent crawlers from fetching their scripts, so we probably wouldn't see that content. But that's something that you can definitely check from a technical point of view. But again, I would definitely also look at this from a user's point of view. And maybe there are also ways where you can continue working with this ad network and have some of these sketchier links, the ones that you don't want shown on your website, blocked. Google's guide to mobile-first indexing doesn't mention differences in internal linking between desktop and mobile. If you have fewer internal links with different anchor texts, could this impact rankings when a site is moved? Yes, that's definitely the case. So if we index a site using purely the mobile version, and your mobile version has less content, or fewer internal links, or missing anchor texts, or missing images, or just missing content in general, then we will index it with less content. So it's not the case that we would say, well, on the desktop side it's like this, on the mobile side it's like this, and we'll do a mix of both.
We'll essentially shift completely to the mobile version. In general, when we check for mobile-first indexing, we have a kind of readiness classifier internally, and it does look into things like this. So if we can tell that pages tend not to have any links on them anymore, or there's a significant amount of content missing on the mobile pages, then we'll tend not to shift that site to mobile-first indexing. However, if you have a mix of good and bad pages, it might be that overall, our algorithms look at your site and say, well, in general, this is ready for mobile-first indexing, and we'll shift it over, even though small parts of the website are not completely ready. Or similarly, it can be that we say, well, the site is ready for mobile-first indexing and we shift it over, and there are these subtle differences where overall it looks OK, but it's slightly different with regards to the internal linking. And it's not as optimized, for example, with regards to the anchor text, or you haven't done it as cleanly as you could with regards to how you embed the images. All of these are small, subtle things that we can't completely test for, where you could theoretically see differences. So from my point of view, what I would do is really make sure that any time you're testing your website, using any external tool or a tool from Google or whatever, you're testing the mobile version of your pages. Make sure that you're testing with a mobile user agent. If you're using a website crawler to double-check how your website is indexable, which I think is a great idea, then make sure you're using the mobile version, so that you're actually testing the version that Google would be indexing with mobile-first indexing.

John, I've got another question if you've got a moment.
Related to internal 301 redirects: we have a fairly large set of sites, and we have a large volume of deep pages that have links which essentially 301 redirect to a final destination. I'm wondering if those 301 redirects appear to be impacting crawl budget. And if so, is that something you'd recommend remedying, rather than having all these internal redirects, by updating the links in your code to point to the correct destination without the 301 redirect?

Good question. So I think it's always tricky with crawl budget, because we don't really show how much crawl budget a site has, and it's really hard to determine everything that's included there. In general, when it comes to redirects within a website, if you're doing fewer than, I think it's five hops in one set, then we wouldn't count that against a site with regards to crawl budget. We would essentially just follow that set of redirects. If it takes more than five hops to reach the destination, then we would crawl that in a second round. But in general, when we're looking at a website, it's really rare that a single URL needs more than five hops to access it. Sometimes you can trigger it artificially: maybe you know the site is on HTTPS and www, and if you access the non-www HTTP version, then you have first the redirect to HTTPS and then the redirect to www. That's something you could artificially trigger, where maybe you would have four or five hops to reach the destination. But if we're not actively crawling those URLs through all of those redirects, then that wouldn't have any effect on the normal crawl budget. So especially if we've already found the destination page and we're focusing our crawling on the destination page, then I think you're all set, even if there are a couple of hops in between there.
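As a rough model of the hop-counting behavior described here (an illustration only, not Googlebot's actual logic), you can represent a redirect chain as a mapping and count the hops to the final destination:

```python
# Rough model of the redirect-hop behavior described above; this is an
# illustration, not Googlebot's implementation. Five hops is the
# threshold John mentions before crawling continues in a later round.

MAX_HOPS_PER_ROUND = 5

def count_hops(redirects, url):
    """redirects maps source URL -> redirect target; returns how many
    hops it takes to reach a URL that no longer redirects."""
    hops = 0
    seen = set()
    while url in redirects:
        if url in seen:
            raise ValueError("redirect loop at " + url)
        seen.add(url)
        url = redirects[url]
        hops += 1
    return hops

# The artificially triggered chain from the answer: HTTP -> HTTPS -> www.
chain = {
    "http://example.com/": "https://example.com/",
    "https://example.com/": "https://www.example.com/",
}
hops = count_hops(chain, "http://example.com/")  # 2 hops, well under five
```

Fixing the internal links so they point directly at `https://www.example.com/` would make the hop count zero, which is what the question is really asking about.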
And realistically, I think having more than five hops for normal internal navigation, like an internal link leading to the final destination, would be really rare to see. OK, thank you.

Let's see, we have a few more here. We have an international site that spans multiple domains: .com, .us, .uk, .de, et cetera. Recently, the .com has been dinged as a duplicate of the .us, and in the search results, the .com page titles pull in the US page titles instead. What's the best practice in resolving this? We set up appropriate geolocations for all sites in Search Console except for .com, which we left blank. We have a GeoIP redirect set up on .com that redirects users from the US to the US site. But we don't have hreflang tags on .com or .us.

So I'm not 100% sure of what exactly you're seeing here. In general, if the content is the same on both of these sites, then we would potentially see that as a duplicate, and we would potentially fold those together and show them as one version in the search results. If you're redirecting from the .com to the individual country versions, then we would see that as kind of a default homepage for an international site, provided you use the hreflang markup for that. So with the hreflang markup, you would specify the .com version as an x-default version. And based on that markup, we would know that for this homepage on the .com version, we have the .de, the .uk, the .us versions, for example. And all of those versions have their own hreflang markup and link to the x-default version as well. So that's something where we would understand the relationship between those pages and be able to show the appropriate URL at the right time in the search results. If you don't have the hreflang annotations and you just redirect the .com users to the appropriate country versions, what will happen is that since Googlebot primarily crawls from the US, we will see the US redirect.
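The hreflang relationship described here, every country version plus an x-default pointing at the .com, can be sketched as markup generation. The domains below are illustrative placeholders, not the asker's real sites:

```python
# Sketch of the hreflang setup described above: every version of the
# page, including the x-default .com version, carries the same set of
# <link rel="alternate"> tags. Domains are made-up placeholders.

def hreflang_links(alternates, x_default):
    """alternates maps hreflang codes (e.g. 'en-us') to URLs; returns
    the <link> elements to include on every version of the page."""
    tags = [
        f'<link rel="alternate" hreflang="{code}" href="{url}">'
        for code, url in sorted(alternates.items())
    ]
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{x_default}">')
    return "\n".join(tags)

markup = hreflang_links(
    {"en-us": "https://example.us/",
     "en-gb": "https://example.co.uk/",
     "de-de": "https://example.de/"},
    x_default="https://example.com/",
)
```

The key point from the answer is that this same block goes on all of the versions, so that each one links to every alternate and to the x-default.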
So for the most part, when we try to crawl the .com version, we'll see we get redirected to the US version. Therefore, we think, well, maybe the .com version is actually just the US version, and we'll just index the US version instead of anything else. So probably that's what you're seeing there. The simple approach would be to use hreflang. You can use hreflang on a per-page basis, so if you're only seeing this for your homepage, you can set up hreflang just for your homepage; you don't need to set it up for the whole website. The other approach that I guess you could do, well, I don't know. If you're always redirecting from the .com version, then probably hreflang would be the best approach here. I think another approach might be to set up a separate version on the .com site that's not the same as your US site, but that seems like it would probably just confuse people more. So for this situation, I think the default hreflang setup would probably be the best approach here.

How are pages that often go 404 crawled and indexed by Google efficiently, like real estate information pages or product details on auction sites? Should this content perhaps not be crawled at all?

So essentially, I think there are two aspects here. On the one hand, we can pick up these pages fairly quickly with something like a sitemap file, so initially we can index them fairly quickly. That's, I think, perfectly fine. If you know when these pages expire, for example, an auction that has a fixed date, you can use the unavailable_after meta tag to tell us that after this date, this page will be unavailable. That makes it easier for us to drop it at the right date. Changing them to 404 is another approach. But there's almost a whole separate strategy that you could follow with regards to pages that go 404.
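For the unavailable_after meta tag mentioned in this answer, a small sketch might look like the following. The page expiry date is a made-up example; Google's documentation accepts widely used date formats, such as RFC 822-style dates, for this directive:

```python
# Sketch of the unavailable_after robots meta tag mentioned above, for
# pages with a known expiry date such as auctions. The date below is a
# made-up example.
from datetime import datetime, timezone

def unavailable_after_tag(expires):
    """Robots meta tag asking Google to drop the page after `expires`
    (an RFC 822-style timestamp is one of the accepted formats)."""
    stamp = expires.strftime("%d %b %Y %H:%M:%S %z")
    return f'<meta name="robots" content="unavailable_after: {stamp}">'

tag = unavailable_after_tag(datetime(2019, 6, 30, 15, 0, tzinfo=timezone.utc))
```

An auction page would emit this tag in its head when the listing is created, so the drop date is known to Google from the first crawl onward.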
When you're talking about products that are no longer available, auction items that are no longer available, real estate, or whatever, there are lots of different approaches that you can take here, and I would recommend checking out some of the blog posts that are out there on how to deal with expired content to get an idea of the different options. Things you could do, for example: keep the old page for a while and just say, well, this is no longer available. You could potentially redirect to a category page, though that can be kind of confusing to users. You could have a clean 404 page where you say, this item that you're looking for is not available, but here's the general category, or here's a replacement item that's available. Or you could just say, well, I can't be bothered with understanding the details of these individual items; there are so many across my site, and I don't know how they belong together. In that case, just serving a 404 is perfectly fine. So what will generally happen in cases like this is, if you look at your server logs or at the pages that are crawled, you'll see that Google finds a lot of 404 pages. And from our point of view, that's perfectly fine. It's not a sign that there's anything wrong with the website if it serves 404 pages. It's essentially just telling us these pages no longer exist, and from our point of view, it's like, fine, OK, we'll focus on other pages on your website. That's perfectly OK.

OK, looks like we just have a few minutes left, so I'd like to give any of you a chance to jump in with a last question if there's anything critical on your mind. No last questions? Oh my gosh. Yeah, I'll go ahead and try one. It's actually something I've seen on the Webmaster Help forums.
Somebody was using, I'm not sure which JavaScript framework it was, but I noticed immediately that the source code of the page would just have the head element with a meta title and some other tags, no content, and the meta tags were the same on all pages. A few seconds after rendering, you'd get changed meta tags, including the meta title, and added content, of course. And the user was complaining that for some of the pages, the Google-selected canonical was the home page, even though the content of these pages was fairly different. So I'm assuming this is due to the fact that maybe Google hasn't indexed the rendered version yet and has just indexed the non-rendered version, the pure HTML, and sees, well, basically these two pages are the same, so I'll just fold them into one before actually rendering and seeing, oh, it's separate, different content. Do you want to take this one, Martin?

Sorry, I had muted myself to not interrupt, and I was distracted for a second. What was the question again? Sorry.

No worries. I can send you the forum link afterwards. It was somebody who is using a JavaScript framework to render the content. The source content before any rendering is just the head element with the same meta title across all pages, and basically no content, just some JavaScript code there. After rendering, the meta title changes, the content gets added, and the pages start to look different. They were complaining that in Search Console, some of their pages had the Google-selected canonical set to the home page. So I'm guessing this is because Google hasn't rendered, or hasn't indexed, the rendered version of those pages yet, and is just looking at the non-rendered versions, which basically look the same across all pages, and folds them into one before actually rendering and indexing the rendered version, which is different.

That is definitely possible.
It can also be that there's some specific JavaScript error that prevents us from rendering it successfully, and then that would also happen. Because in that case, if there's a render problem, we basically see all pages as the same, as I said, if it's not rendered, if it's not loading the dynamic content. And then we have a case where we think that it's a duplication, and we would collapse it into the home page, probably, for any page, really. It would be useful to have the URL to be able to look at whether this is a problem on their site, but definitely send me the forum link, and then I'll take a look.

What I noticed is, I tested some of the URLs using the mobile rendering test tool, and they seemed to look fine. It seemed to render those pages correctly. So I'm not sure that it is a JavaScript error; I think it's just a matter of time before Google renders it.

Could be that it's a matter of time, yeah.

Yeah, I'll just post the forum link right afterwards.

Awesome, thank you very much. Very good question. Cool. I think it's sometimes tricky with these JavaScript pages in that it's also really hard to tell what happened when Google saw it, where maybe it works now. Maybe they changed something subtle with the framework, with the setup, and now it all works, and it didn't work in the past, which is when that set of URLs got folded together. And maybe it's just a matter of time until things settle down again. But yeah, always exciting.

Cool, OK. With that, we're pretty much out of time. I want to thank you all for all of the questions that you submitted. Thanks for joining in for the interesting questions that you had here live. Thanks for answering a bunch of the JavaScript stuff, Martin. And hopefully, I'll see you all again in one of the future Hangouts. I'll put this on YouTube probably later on today, so if you want to watch yourself on the big screen, you'll be there. All right, and with that, I wish you all a great afternoon or a great day, depending on your time of day.
And see you next time. Bye-bye, everyone. Thank you.