All right, welcome, everyone, to today's Google Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do is these Office Hours Hangouts. Today, I had to move it to Monday instead of Tuesday, as we usually have it, because I'm traveling the rest of the week. But it looks like we've got a bunch of questions already. So I guess we can get started. As always, if any of you want to get started with a question live before we go through the submitted questions, feel free to jump on in now.

Hi, John. Hi, John. [A participant tries to ask a question, but the audio is mostly unintelligible.] I hear a bunch of noise. I really can't understand what you're asking. Maybe you can type it in the chat, and I can get to that afterwards.

All right. Let me get started with some of the questions that were submitted. Looks like there's some interesting ones here, too. So Glenn asks, is machine learning being used by Google's quality algorithms to determine what is disruptive, annoying, or aggressive from a usability standpoint? So someone has some background noise. For example, identifying clunky user interfaces, aggressive advertising, deceptive ads, et cetera. I don't know. So I don't think we're currently looking into any of these items specifically. It might be that our quality algorithms are looking at this as a bigger-picture view to try to figure out what is a sign of a good website and what's a sign of a bad website. And maybe some of this kind of correlates with what happens on good or bad websites. But there are also lots of really good, high-quality websites that use kind of annoying monetization methods. So it's something where I don't know if you could map the monetization method to the quality of the content and really draw a line like that. So from that point of view, I don't know if that would be something that our algorithms would explicitly use like that. I suspect maybe some of this is flowing into our algorithms in general, but not in a very targeted way at the moment.

How does the hreflang tag help when my website has two totally different languages? For example, Chinese and English. People search in Chinese. They shouldn't see the English results first. So yes, that's definitely a fair point. If people are searching for something in one clear language, then obviously it's pretty easy, or should be pretty easy, for us to recognize which page they want to see from your website. On the other hand, sometimes you have situations where people are searching for a brand name that isn't really clearly in one language or another language. And in cases like that, the hreflang tag helps us to swap out the version against the one that works best. So in general, when the hreflang tag came out, initially our advice was more that you should use this tag if you have multiple equivalent versions and you're seeing problems with your search results at the moment. So if you're currently not seeing any issues with the language and country targeting for your site, and we're always showing the right version to the right user, then you probably don't need to set up hreflang.
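As a rough illustration of what that markup looks like for a Chinese/English pair, here is a minimal sketch with hypothetical URLs; the x-default entry is optional and just marks a fallback version.

```html
<!-- Hypothetical Chinese/English alternates; the same set of annotations goes on both pages -->
<link rel="alternate" hreflang="zh" href="https://www.example.com/zh/page" />
<link rel="alternate" hreflang="en" href="https://www.example.com/en/page" />
<!-- Optional fallback for users whose language doesn't match any listed version -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/en/page" />
```

The annotations need to be reciprocal, and each href should point at the URL Google actually indexes (the canonical), which is where setups often go wrong, as the next question shows.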
Hi, John. Hi. Yeah, John, actually regarding hreflang, the problem for me is that my 99 pages are properly being swapped out by Google. But for one or two pages, I still see that the wrong version is coming up. I have checked it: the hreflang is in the header tag, it's a proper canonical, everything like this. In such a case, what would you recommend we do when, as I'm saying, the markup is correct, Google is crawling both versions, and it has been more than six months?

I would still suspect that maybe the canonical that you're specifying is not the canonical that we're using for indexing. So you can kind of test that by doing an info query on Google, just info: and then the URL, to see which URL we're actually indexing for that content. And oftentimes, you'll see a subtle difference with HTTP versus HTTPS, www versus non-www, or something with parameters at the end. And those kinds of differences make it hard for us to actually use the hreflang markup. Additionally, there are sometimes kind of edge cases where our algorithms see very strong signals that this is actually the page that matches this specific user's intent. And in those cases, we may override the hreflang markup on the page. So that's something that might also play a role there.

In general, what I recommend doing when you can recognize that a user is probably viewing the wrong version of your page is to show a subtle banner on top to give the user that information. Say, hey, it looks like you're looking at the Italian version of the page, but you probably want to look at the German version. And here's a link to the German version, so that the user can easily make it to the correct version of the page, even if they accidentally end up on the wrong one. Ending up on the wrong one is something that can happen outside of the search results as well. So if they follow a link, maybe from a forum or somewhere else, they could end up on the wrong version of your page. So this kind of banner, when you can recognize that the user is probably on the wrong version of the page, is something that would make sense regardless.
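A minimal sketch of that kind of banner is below; the URLs are hypothetical, and how you detect the mismatch (for example, from the browser's language settings) is up to the site. The important part is that it's a visible hint with a plain link rather than an automatic redirect.

```html
<!-- Hypothetical hint shown when the visitor's language preference doesn't match the page language -->
<div class="language-hint">
  It looks like you're viewing the Italian version of this page.
  <a href="https://www.example.com/de/seite">Switch to the German version</a>
</div>
```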
OK. All right. So it looks like we're coming up to your question now. If a page from the website loses all its ranking, what should we analyze then? Is there anywhere we should start analyzing from?

That's kind of a tricky situation, I guess. So if a page loses all of its rankings, usually that means it's not indexed. So I would double-check from a technical point of view if there's anything in the way of this page being indexed, or if it's being indexed under a different URL. So you can also do that with the info query in the search results. On the other hand, if this page is indexed and it's just not ranking, then I would suspect that our algorithms just have trouble kind of recognizing the quality of this page or the quality of that site, that maybe there's something really standing in the way and our algorithms are saying, oh, this is not really the page that we want to show in a search result for this specific query. And sometimes this changes over time. So it's something that might have ranked in the top 10 for a really long time. Suddenly, it might drop out of the top 10 or even, I don't know, go down a couple of pages in the search results, on the one hand because the competition changes; on the other hand, users change over time as well, and our algorithms also evolve over time. So all of these things can change. And often when ranking disappears, it's not so much a clear technical issue, though sometimes it can be. But usually it's a matter of really taking a step back and thinking about what users really want and how you can provide that in a way that is clearly a big step above everything else that's out there. So that's something where sometimes it takes a bit of self-reflection and being critical with what you've put out over time.

John, actually, this question was regarding one very large travel website, which I don't think is using black hat techniques. They were ranking in the US for flights with their main flights page. And suddenly we saw that this page is out of the index. So many times, when we see that the ranking has dropped, technically there is nothing wrong. The only thing is that some algorithm has decided. So in such a case, I just posted this question: what should be the main points where we should be focusing on the quality of the page?

I don't know. Which website was that? Was that one of yours? No, it was not my website. It was a travel site, and many blogs just posted that this website lost its rankings for its flights page. OK. It was Expedia, in fact. OK. Maybe they just had technical issues. Sometimes it's as easy as that. And sometimes these issues are not that trivial to find. So that's something where, I don't know, maybe you'll hear more from them. Maybe someone else will spot something that is involved there. I wouldn't always assume that just because a page disappears from our index, our web spam algorithms are deleting those pages and we think they're low quality. Sometimes even big websites have technical issues. That applies to Google's websites as well, where sometimes we don't show up in rankings where we expect we would. And we look at the details and we think, oh, of course. We have this wrong canonical, or there's a noindex, or something crazy. OK.

We're an open marketplace and our product catalog is managed by sellers. We don't have many details about the products on the product detail pages. And many sellers upload the same product under different prices. How can we manage this to avoid duplicate content and quality issues?

Yeah, from my point of view, that's less of an issue from an SEO point of view and really a matter of how you set your policies, how you maintain your website. Because ultimately, what you put online is what your website stands for. So if you allow users to upload any crazy stuff, and that's what you put online and that's what you publish on your website, then that's what Google and what other users will see from your website. So this is something where sometimes you have to be creative. Sometimes you have to find incentives to encourage people to create high-quality landing pages. Sometimes these are things that you can subtly manage on your side as well, where maybe you can work with a rel canonical to fold similar things together. Maybe you can work with a noindex and say, oh, I don't really trust this author in the beginning. But over time, as I see that they provide high-quality content, I'll start having those pages be indexed. But these are all things that are essentially the same across a lot of different user-generated content platforms. And sometimes there are easier solutions. Sometimes it's a bit trickier. But pretty much in all cases, it's worthwhile to try to find a solution to these kinds of problems. Maybe one thing to mention here as well is there is no penalty per se for duplicate content. So just because someone uploads the same listing twice doesn't necessarily mean your whole website will be demoted in search. It's just that if someone is searching for this product, we'll probably just show one of those pages in the search results.
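As a rough sketch of those two levers, with hypothetical URLs: a rel=canonical can fold near-duplicate seller listings into one preferred page, and a robots noindex meta tag can keep a listing you don't trust yet out of the index until it proves itself.

```html
<!-- On a duplicate seller listing: point search engines at the preferred version of this product -->
<link rel="canonical" href="https://www.example.com/product/blue-widget" />

<!-- On a listing you don't want indexed yet: it can still be crawled, but stays out of the index -->
<meta name="robots" content="noindex" />
```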
How does Google determine crawl error priority? That's a good question. That comes up every now and then. There's actually a blog post from when we announced the crawl error priority. Let me see if I can find that. That's a couple of years back, where we talk about the factors that play into that. So I think the blog post is called "Crawl Errors: The Next Generation", which kind of lists the different types of priority, or the different factors that fall into what we use to try to determine what the priority is of a URL in the crawl errors section. So I'd double-check that blog post, and feel free to ask if you have any further questions around that.

Why in Search Console should all variations be added to improve search presence? So www, non-www, HTTP, HTTPS. We're not using subdomains anymore. Can I delete all the subdomain properties? Yes, you can delete all of the properties if you want. The idea behind submitting the different variations in Search Console is so that, should there be anything that goes wrong with any of these variations, we'll be able to inform you. So for example, if you move your website to HTTPS, and suddenly the HTTP version of your website is completely broken, it doesn't redirect or it doesn't work at all, then what might happen is we get confused about which of these versions we should be indexing, and we'll show you some errors for the HTTP version. Maybe we'll send you notifications through Search Console as well. And if you don't have those versions verified, then we can't inform you about those issues. So that's why we recommend adding all of those variations. Oftentimes, we also see that sites don't realize which version of their site is actually indexed. So by adding all four variations, you can be sure that at least one of those is going to be the one that has the data for you in Search Console.

How to determine the quality of listing pages, where users usually get a list of products? How to ensure that users get the best experience while they're on the category pages or any listing pages? From my point of view, this is kind of up to you. You know your users best. You know your website best. You know your products best. You know how you can provide something of high quality for users. And that's probably what you should be aiming for. And that's not something that I would primarily look at from an SEO point of view, but more from a usability point of view. And this is something you can probably do A/B testing on to try to figure out which variation actually works best for your users, which one confuses users the least. Maybe even ask users specifically, set up a user panel, to try to figure out what actually works best for your users.

We're getting massive hits from IP addresses which resolve to googleusercontent.com. What's that? Is that safe to block? I don't know offhand, but if you can send me some log entries with a timestamp, the IP address, and the URLs and user agents that were used there, then I can double-check. In general, this would probably be worthwhile also to post in the help forums so that we can follow up with you there.

Querying the Search Analytics API, we seem to only get pages that have at least 11 clicks and not all pages. Is this the case? Are there any other criteria?
So it's not specifically limited to 11 clicks, but depending on how you query Search Analytics in general, in the UI as well as in the API, you might see some filtering happening there for queries that are kind of unique, which we think makes sense to filter for privacy reasons. So that's probably what you're seeing.

Where can I find all snippets supported in the search results? I just found out about the events. Good question. I don't think we have a single page that lists all variations of all kinds of rich snippets that you can use. But on the developer site, where we list information about the events markup, we have a big section for structured data that has various types of structured data and how they're being used in the search results. So I'd check that out there. One thing to keep in mind is that not all types of structured data result in something visible directly in the search results all the time. So I'd recommend maybe doing some testing and figuring out which type really makes sense for your website, which type is actually shown for your website, before you kind of dedicate your time and implement everything on all of your pages.

John, related to this, can I ask one question? Sure. OK, so recently I just heard one hangout where you said that question and answer schema is a new schema that people could use on pages. Is Google recognizing this schema now? I don't know. If it's in our help center, we should be able to recognize that. Yeah. So I don't know if we actually have that in our help center, though. Not for now. Yeah. So if we don't have it in our help center, then probably what would happen there is we'd be able to recognize the markup and process that markup if it's well-formed markup. But if we don't show it in the search results, then you wouldn't see any effect from that. But this is also something where maybe you could try to be forward-looking and think, I wonder if Google will actually use this markup in the future. And if you think that's the case, then maybe it's worth kind of implementing that. Personally, I don't know for sure if this is something that we would implement on our side. I could imagine that going either way. But it might be useful for a future purpose. I mean, if it's easy for you to implement and you're able to implement it well on your website automatically, then that's something where maybe the effort is so low that it's worthwhile saying, OK, I will take this risk and put a couple of hours of work into my website. And maybe Google will use it at some point in the future. OK, thanks.
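For reference, structured data of this kind is usually added as JSON-LD in the page head. The event below is only a hypothetical sketch to show the shape of the markup; as mentioned above, whether a given type actually produces something visible in the search results is a separate question worth testing first.

```html
<!-- Hypothetical Event markup as JSON-LD; all values are placeholders -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Webmaster Meetup Zurich",
  "startDate": "2016-09-01T19:00",
  "location": {
    "@type": "Place",
    "name": "Example Hall",
    "address": "Example Street 1, Zurich"
  }
}
</script>
```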
All right, let me just double-check. There are some questions in the chat here as well. What's the best way to have the Search Console thumbnail refresh and update with an updated version of the site? It's been relaunched and shows outdated. Is that possible? It would be helpful for us. I don't think there's a way to force an update of the thumbnail in Search Console. I know they get cached for a while and they get updated over time. But I know this kind of thumbnail in Search Console can be a bit outdated, especially if you make bigger changes. But that doesn't reflect anything that's visible in Search. It's really only in the Search Console UI.

Do you have any issues with verification of a site using TXT or CNAME records on Cloudflare DNS? I can't seem to get either the TXT or CNAME to work and have to fall back to HTML file upload. I'm not aware of any issues around DNS verification on Cloudflare. I know lots of sites use Cloudflare, and lots of sites that use Cloudflare also use Search Console. So I suspect it's possible, but I don't know the details there. So what I might do in a case like this is post in the webmaster help forum and double-check with others to see if this is kind of a common issue, or if this is something that's maybe configured incorrectly on your end.

Is it possible to be shown next to YouTube, Spotify, or Deezer when searching to listen to Rihanna, for example? I don't know how that actually works. So I think that's a kind of markup within the knowledge panel on the side. And I don't know how that actually works in practice. So one thing you can definitely do is use the feedback link on the bottom to let us know about something that you have on your site that might be worth showing there.

The Search Analytics reports often don't add up, depending on the dimensions we're trying to include. Are there any reasons why that might be? Yes, that's probably because of filtering. So especially if you're looking at things on a query level, then we're probably filtering some of those impressions out. And then you'll often see a total impressions number shown on top as, I don't know, 1,000. And if you add up the individual items, you'll only come up to 800. And those differences are essentially from the query filtering that we do. Yeah. Oh, Mihai, I got that too. Great. OK.

Let me go back to some of the submissions on the page itself. Having pages on the same topic with partial match intent can confuse bots with the main topic, or is Google able to differentiate these pages? I'm not really sure what you mean there, Ramesh. So I'd probably recommend posting in the Webmaster Help Forum about something like that. Maybe ideally even supplying examples from your site where you're unsure if this is a good way of setting up pages or not. My suspicion is that you're thinking too much about this. And these are actually either normal pages, or they're clearly pages that are kind of going towards the doorway page style, where you're just swapping out the specific parts of your page to try to target all the different variations. So I really recommend just asking with specific examples from your site and checking out what the other webmasters say.

We're delivering our internal image teasers with DoubleClick. So the images are hosted on DoubleClick. The problem is the site can't be rendered correctly by Googlebot because the images are blocked in the robots.txt of doubleclick.net. What can we do to solve this problem? So I guess the simple solution here is either updating the robots.txt file on doubleclick.net or hosting your images somewhere else. So I don't see any kind of workaround there. Either the images are crawlable or they're not crawlable. So that's something where you kind of have to make that decision on your side. I suspect editing the robots.txt file on doubleclick.net will be a bit tricky, unless you have really, really good connections with DoubleClick. But hosting images on your site somewhere, or on a CDN for your site, is something that's often fairly easily doable.

There are major fluctuations in ranking for many keywords in the last three to four weeks. Are these related to quality algorithm updates and Penguin updates? It's really hard to say based on something as generic as this question. We make changes all the time across all of our algorithms. So that's something where sometimes you do see fluctuations. And fluctuations in search are essentially normal. We make changes all the time to our algorithms. The whole web changes regularly.
Users change as well. So this is something where fluctuations are kind of common and not something specific to just the last three or four weeks.

In India, some known brands are creating pages that vary with only very small changes. But after so many updates, I still see them ranking. So it sounds like you're calling them out as doorway-type pages, which from my point of view is possible. I don't know these pages. I don't know specifically what's ranking there. But if you're wondering if these are good pages or bad pages, you can always still submit a web spam report. And you can submit feedback on the bottom of the search results pages, which is something that the search team also takes a look at. So definitely let us know if you think the search results are low quality, or not as good as they could be, because of something that some other sites are doing. In many cases, what I've seen is that sites rank more despite the issues that they have on their site rather than because of those issues. So it's not so much that they're ranking because they're creating these kind of doorway-ish pages; maybe they're just ranking despite having these pages. So maybe they would be ranking even better if they had actually cleaned up these issues.

Are mega menu links OK if it's an e-commerce site where many links are not relevant from one section to another section? Yes, you can set up a big menu on your site. If you think that's worthwhile, if that works for your users, that's something you can definitely do. I don't know if that's the best approach for all sites. I suspect not. But for some sites, it can definitely make sense to have this big, in-depth menu across your pages.

We're planning to move our internal search to Google CSE. Do you have any advice or pros and cons about this? Totally up to you. So we have a lot of sites that are using a CSE, a custom search engine. A lot of sites do their own internal search. Essentially, that's up to you. What I would just watch out for is to make sure that the content on your site can still be crawled regardless of the type of search that's used. So make sure that a search engine can find all of the pieces of your content by following links on your pages, rather than by going to a search form, entering a specific keyword, and then clicking on the search results pages. So that's something that you should, in general, be doing. And I suspect with a Google CSE, it's probably slightly more of an issue, because those pages tend not to be crawled at all. So that's just kind of a normal best practice I'd watch out for. But for most websites, if you have a normal menu, if you have related items that are linked across each other, that all works perfectly fine.
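To make that concrete, here is a minimal sketch with hypothetical URLs: crawlers can follow plain links like the navigation below, but they won't type queries into a search box, so content that is only reachable through the search form effectively has no crawl path.

```html
<!-- Crawlable: plain links that lead to every category and product page -->
<nav>
  <a href="/category/shoes">Shoes</a>
  <a href="/category/shirts">Shirts</a>
  <a href="/category/shoes?page=2">Shoes, page 2</a>
</nav>

<!-- Not a crawl path on its own: Googlebot won't submit queries here -->
<form action="/search" method="get">
  <input type="text" name="q">
  <button type="submit">Search</button>
</form>
```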
OK, a really long question about our website. We changed names from one domain to another, and then suddenly the site stopped ranking. And I think towards the end, it goes on and says, it looks like there used to be adult content hosted on the old domain. Could that be affecting things on our side? That could theoretically be affecting your site's ranking, especially if that's something that was in place for a longer period of time. Then sometimes it just takes a bit of time for our algorithms to kind of process that and understand, oh, this new site that's hosted on this older domain has nothing to do with the old content. So sometimes we see this with adult content. Sometimes we see that with spammy content in general, where someone buys a domain name, maybe doesn't even realize that it has this crazy history behind it, and suddenly they're wondering why, after a site move, things are a bit shaky. And these things sometimes just take a bit of time, where maybe it can even take a month or two or three for things to settle down normally. The form that you mentioned is probably a good place to go. It looks like you already submitted that. So that's specifically a form where you can submit your site if you think that SafeSearch is incorrectly filtering your site. So that's a good place to go. And sometimes it takes a bit of time for that to be processed, and sometimes it just takes a bit of time for everything to kind of settle down on our side. I did double-check with the team about that specific URL, and it looks like that should be being processed fairly soon. So you're kind of in the right place already.

This is my website. On my website, there are 30-plus languages. I'd like to use the hreflang tag here. The only problem is we have the same domain for every language. So let me just go to the website very briefly to kind of double-check what's happening. So I don't actually understand this website that much. So I guess that's a tricky part. OK, but in general, the main problem with having one URL for different language content is that we don't know which one to actually index. We try to index one piece of content per URL. So if you have different language content that's hosted on the same set of URLs, we'll probably just index one of those versions. And in many cases, if you have an English version that you're automatically showing to users from the US, perhaps, then probably we'll only index that version. So this is not something that you'd be able to solve with the hreflang markup. It's really something where, as long as you have one single URL for different pieces of content, for different languages, different countries, we will probably have a really hard time actually indexing that content. So this is kind of the most general case of internationalization, where you really need to make sure that each different language and country version has a unique URL. And once you have that, and if these pages are linked across with each other, then we can index all of those different versions. And then on top of that, you can add the hreflang markup to tell us this English page is equivalent to this Finnish page and this Norwegian page. And then, depending on where the user is searching or what language they're searching in, we can show the appropriate URL. But as long as there's just one URL for all of these different language versions, we will probably just index the English version and send people there, which is probably not what you're looking for.

Last March, when we changed from HTTP to HTTPS, we lost more than 50% of our visitors in organic search. We were usually in the first or second position, and now we're not listed at all. We've read that after 90 days everything goes back to normal, but that's not the case with our website. Thanks in advance. So I recommend maybe posting in the help forum so that I can take a look there and follow up with you in the forum. So that's something where I suspect it's less of an issue with regards to moving to HTTPS, and more of an issue of just, generally, our algorithms kind of re-evaluating your site. And this is something that would have happened regardless of any move or non-move.
But especially when you do this type of move at about the same time, it can get really confusing, and it's really hard to tell what is actually happening there. But I'd be happy to take a look if you can maybe send me a link to your forum thread. I can take a look there. Same thing happened to us, John, we lost 50%. I mean, I know it can be an algorithmic thing, but exactly the same thing happened to us, like I sent to you. And the forums were not helpful. I posted there, thank you. Sure.

All right, my question. An Austrian news publisher on a .at domain is well listed in Google News Austria. They recently started a .de domain for German users, with 50% the same content as on the .at domain and 50% unique content for Germany. With the same CMS and setup, the .de domain gets rejected for Google News without details. Can't they open local news websites for different markets? How can we get more details about the problem? It's hard for businesses to understand reasons without details. So I don't know what the background is specifically around there, but in general, for Google News, these websites do have to be approved manually. And it might make sense to submit that again with some additional information. Maybe they're confused about what is actually happening on this website, and they can take a look there. But there's also a Google News publisher help forum where you can kind of submit your URL and have some of the top contributors take a look at the site to give you some information there as well. Maybe there's something obvious that they can point to, where they can say, well, your .at site got into Google News at a time when it was easier to get into Google News, and now the standards are a little bit higher. So maybe some of that could be applied there as well.

When creating URLs for range pages, for collections of products, should I look to add a keyword for the most popular complete key phrase together, or try and use the most popular one at the start? For example, between dashes. So in general, I wouldn't focus too much on stuffing keywords into URLs. I think that usually just makes it a little bit more confusing and harder to maintain. So for us, URLs for the most part are just identifiers for pieces of content. So I wouldn't worry too much about stuffing keywords into URLs, especially for category pages or any other kind of page like that. I think it makes sense to have human-readable URLs, if possible, because that makes it easier for people to share those URLs. Search engines generally don't care if it's human-readable or not, but users do care from time to time. So that's something to look into, but I wouldn't artificially try to stuff every keyword into all variations of URLs, but rather try to pick reasonable URLs, and use parameters where it makes sense, especially if you have maybe filtering or sorting or sections of a page that you're returning. For all of those things, it makes sense to work with parameters instead of stuffing everything into something static-looking like a file name. So search engines don't really care about that. Users care about something that's easy to use. So I'd really focus on making it easy, rather than trying to stuff keywords into URLs where they are more confusing and probably more of a hassle to maintain properly.
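As a small illustration with made-up URLs, the first link below keeps the category URL readable and uses parameters for filtering and sorting; the second tries to bake every keyword variation into a static-looking path, which is harder to maintain and doesn't buy any ranking benefit.

```html
<!-- Readable category URL, with parameters for filtering and sorting (hypothetical paths) -->
<a href="/shoes/running?color=blue&amp;sort=price">Blue running shoes, cheapest first</a>

<!-- Keyword-stuffed, static-looking variant: harder to maintain, no ranking benefit -->
<a href="/cheap-blue-running-shoes-best-price-buy-online.html">Blue running shoes</a>
```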
In Analytics reports, under acquisition channels, the keyword "not provided" is appearing. How do I find out the sources of the traffic and ensure that I get more feedback than just getting "not provided"? So I don't specifically know about Analytics, but this sounds like the search keywords. And in general, from Google's side, for the most part we remove the keywords when we send a referrer to a site. So that's why you would probably be seeing "not provided" there. You can get more information about the queries that lead to your site in Search Console. Or if you tie your Analytics account to your Search Console account, you can get that information in Analytics as well. So that's kind of where I would look there. Or if you have more questions specifically around Analytics, I'd recommend checking out their help forum. They have a pretty lively forum as well and can probably give you some information on these types of questions.

We're adding pages to our categories, but in a different URL structure. Will Google be able to understand that these are connected despite a different URL structure? Yes, for the most part. Like I said before, we use URLs primarily as an identifier. So if we can crawl that piece of content through that URL, then that's fine. If you have a different URL structure for different parts of your site, that's perfectly normal. That's absolutely not a problem.

Is it fine from a user experience point of view to show only subcategories and not products on main category pages, and to navigate to deep sections of your site? I don't quite understand that question. But maybe this is something that would make sense to post in the help forum.

All right. Can embedding YouTube videos on a website help in the search results of a page? It can help in the sense that your page can show up in the video search results. So that's something that could play a role there. But it's not the case that, for normal web ranking, we would rank a site higher because it has embedded YouTube videos, or embedded videos in general. So just because you have videos embedded doesn't mean this page is automatically better.

All right. Let me double-check if there is anything more in the chat here. Index status, how often is it updated? The crawl rate is showing great history data, but I'm seeing no index data in the index status window. It's been a while, and it's flatlined at zero. This is happening on HTTPS and HTTP. So the index status feature, I think, is updated two or three times a week, something around that kind of timeframe. So if you've waited longer than a week or so, then in general, that should be updated to the current status. It is, however, limited to the specific version of the site that you have verified there. So that's something where I suspect, especially if you say it's flatlined at zero, that you're probably looking at the wrong version of the site. So that could be HTTPS versus HTTP, www versus non-www. Or maybe, if you have different domain names where you host the same content, one of those is indexed instead of the one that you actually have verified. And a simple way to check that is to do an info query, info: and then the URL that you think should be indexed. And it'll tell you if it's indexed or not, as well as which URL exactly it's indexed at. So that's a simple way to double-check that you're looking at the right data in Search Console. Another thing you can do is, if you're using sitemap files, or if you're using RSS feeds on your website, you can submit those as sitemaps in Search Console. And you'll get pretty much daily information on the URLs that you submitted in a sitemap, how many of those are currently indexed.
So that's kind of the most accurate way of double-checking how many of the URLs that you care about are actually indexed for your website. So that's kind of what I would aim for there. You mentioned that you looked at those already and they're all flatlined at zero. If the site itself is actually indexed, so if you do a site query for the site and you see some results there, and it's been indexed for a while, then that sounds like something is perhaps still wrong. One thing to also keep in mind is, if you're using Search Console for the first time for a site, it takes a bit of time to actually start updating all of this data and to start compiling that. So that might be something where, if you've just added these sites to your account, maybe you need to give it a week and see whether the data starts coming in and updating there.

All right. Yes? Yes. So I'm just wondering, how frequently do you update this data in Search Console, especially when it comes to the internal links, et cetera? How frequently does Google update this data? How frequently is the links data updated in Search Console? As far as I know, that's also updated maybe once or twice a week, so fairly regularly. The thing to keep in mind there is we don't crawl the whole web every week. We crawl at kind of a more granular level, in that some URLs are crawled fairly frequently and some URLs go fairly long between being crawled. So that could be up to maybe six months or even a year before we re-crawl a specific URL. So if there's a link that was added to a page that was just crawled, and the next time we plan to crawl it is in six months, then that's kind of the minimum time it will take for Search Console to start showing that link. So that's kind of one thing to keep in mind there. And similarly, if that link is removed and it takes us six months to re-crawl that page, then it's going to take that amount of time for us to actually drop that link from Search Console. And is this also applicable to the internal links? Yes, yes. Internal or external links are essentially the same for us. It's just a different source of the link. So sometimes a link comes from within your website, sometimes it comes from the rest of the web. All right, thank you.

All right, more questions from any of you? John, I have a question. When relevant websites that are not news websites but are forums link to your website, let's say they discuss it. You have a news story, and they actively discuss it. Or they use your website's RSS feed to show some news, and that links back to you. Are these helping your authority level? If those are links that are not nofollowed, then those are normal links. Just because it's from a help forum, that doesn't really matter. It can be a normal link from a help forum. It can be an unnatural link from a help forum, too. But if it's a normal link, we'll try to treat it as a normal link. I think RSS feeds are sometimes a bit tricky, because these are commonly scraped and reused across the web, and it's hard to say how much of that is actually a natural link or not. But normal forum threads where someone is posting about your website, I think that's a fine thing. I mean, that shows that people care about your content. They're discussing it. So that's a good thing.
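For reference, whether a forum link passes signals mostly comes down to the rel attribute on the link itself; a small sketch with a hypothetical URL:

```html
<!-- A normal, followed link in a forum post -->
<a href="https://www.example.com/news-story">Interesting write-up on this story</a>

<!-- The same link marked nofollow, which tells Google not to pass signals through it -->
<a href="https://www.example.com/news-story" rel="nofollow">Interesting write-up on this story</a>
```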
There is another thing, John, that you might not be aware of. There is a website called People Per Hour. I don't know if you've heard of it. So people go there and say, for example, I can publish a story about you in, let's say, Forbes, or Lifehacker, or another website. And these websites, like Forbes, don't know anything about this, for example. So these People Per Hour writers take money to write the story. And they go to Forbes and they say, would you like a guest post from us? It's a free guest post. And they look at the nice story and they publish it. So that means someone paid for a link. Is Forbes, or websites like this, responsible for that? They have no idea that the person actually got paid. And that may be fair, too, because she or he has to be paid for the work, whoever that person is.

Yeah, so People Per Hour. Yeah, so this kind of guest posting isn't really new. It's something that has been around for quite a while, and we've been dealing with that for a while as well. At the end of May, we did a blog post about some of these issues. Let me just copy that into the chat. That's with regards to especially these kinds of link schemes, where some company goes to different bloggers and says, hey, I would like to buy a guest post here, here, and here, and I want a link to my website from your guest posts. So that's something that our web spam team is well aware of, and our algorithms are also kind of working to handle that algorithmically as well. So in general, what happens in cases like this is, when we can recognize that this kind of activity is happening across a site, we tend to lose trust in those links. So for example, if, I don't know, on my blog there are constant guest posters and they're always linking to these random sites that they get paid for, then the web spam team might say, OK, we see you're doing this. It's your site. You can do whatever you want with it. But we're not going to trust any of those links. None of those links are going to provide any value for any of those sites. So essentially, you're publishing this, but those other sites that are paying for this type of link are not getting any value out of that. So that's kind of the approach we try to take there.

This is similar to other things, where when we can recognize that something is happening in a bad way, and we can just ignore the bad part of that and focus on the good part, then we'll try to do that. So for example, with keyword stuffing, if we can recognize that a page is doing keyword stuffing, and we can just ignore the keyword-stuffed part and focus on the rest, then that, for us, is also kind of a reasonable approach, because we can't fix all pages. We can inform the webmasters and say, hey, you're doing this wrong. We can send them notifications from the web spam team. But sometimes there is still useful information on those pages, and we want to rank those pages for the useful part, not for the kind of keyword stuffing, bad part that they're doing. So that's kind of a two-pronged approach that we take there. On the one hand, manually, we try to take action where we think it's necessary. On the other hand, algorithmically, we try to ignore things that we can kind of isolate there.

One more question, John. When a website upgrades its server, if it's the same company, but you move from one machine to another machine and the IP changes while the URLs remain the same, is that an issue for Google? Does it need time to readjust to the new IP? It's not a ranking issue for Google. So you wouldn't see any ranking changes, but you might see changes with crawling.
So specifically, what might happen is our algorithms recognize, oh, there's a change in the infrastructure here, and we don't know if we can crawl it with the same speed as before. We'll be kind of cautious and crawl a little bit slower. And over time, we'll crawl more and more again, as we see that the server is actually much faster, or better, or just as good, those kinds of things. So this is something that commonly happens when you move to a CDN, where you take all of your content and you move it from your server to a server in the cloud, and now it's running a lot faster. And Google first says, oh, there's a big change here. I will be cautious with crawling so I don't cause any problems. And from a ranking point of view, it doesn't change anything. It's really just about crawling. Thank you. Sure.

Mihai asks, how quickly do changes in geolocation settings get taken into account? I don't know offhand. So let's see. I don't know. I suspect this is one of those things that we tie to the URLs themselves. So it's not that we say the whole site has moved from this country targeting to another country targeting, but rather that, as we re-crawl that site, we see, OK, this site moved from this country location to this country location for geotargeting. So those URLs, as we re-index them, will have the new country location attached to them. So that's something that kind of takes a bit of time. It doesn't just flip over. Obviously, the more important pages on a site tend to be crawled more frequently, and they'll see this change a lot faster. But for the long tail, it can take a bit of time to kind of see that shift.

All right. If there are no further questions, let's take a break here. I'll be back on Friday morning. If there are more questions from your side, feel free to drop those questions into the hangout there. And maybe I'll see some of you again. Maybe the time zone works better for other people as well. All right. Have a good day. [A participant says something, but the audio is unintelligible.] All right. Ramesh, you probably need to post in the comments, because it's really, really hard to understand you from an audio point of view. All right. Then I will wish you all a good day, and maybe see you again in one of the future ones. Bye, everyone. Bye, bye.