All right. Welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland. And part of what we do are these webmaster office hours, where people can join in and ask any web search or website related question that might be on their mind. A bunch of questions were submitted already on YouTube. Not as many as sometimes. I guess the holidays are here. So if any of you want to get started with the first question, you're welcome to jump in. Otherwise, I'll just go down the list and see what we have that was submitted. OK. Let's see. I guess a complicated one here. You explained there were several core algorithm changes released in early November. After those rolled out, there were many different types of sites impacted. So several updates make sense to me. But as a result of one or several updates, many smaller bloggers in certain niche categories were impacted heavily, like recipe sites. And when checking those specific sites, you could clearly see many unnatural links via recommendation widgets and other link building tactics. For those sites, is it even worth disavowing those links? It really looks like Google just simply devalued those links. And if so, it seems disavowing would be useless. Thanks for any information. So I didn't take a look at any specific site there, so it's kind of hard to say what exactly is happening. In general, if you look at your site and the way that it's embedded in the web, and it seems like there's really a clear pattern of unnatural links associated with your site, maybe because you've been doing link building in some kind of weird way, or you've been using widgets to build links, all of the usual kinds of things, then that's generally something I'd recommend trying to clean up, regardless of any updates that happened.
And cleaning up link related issues usually involves either cleaning up those links so that they're no longer out there. In general, that's the best approach. If these are widget links, for example, then sometimes it's as simple as improving the widget so that it doesn't have these links in there. That's really the ideal way of removing those links if you think they're problematic. If you can't remove those links, then using the disavow file is an option. That's one way for us to kind of drop those links from being used. And essentially, a third approach that you could also take, depending on the type of link, is to remove or block the page on your side that is being linked to. So what generally happens is when we have links to a website, we associate them with individual pages. So we have kind of the source of the link and the destination of the link. And with those two endpoints, we know which way these links go. So if either of those pages is removed from our index, if they no longer exist, then essentially that link loses its effect. So that could be another approach that you could take there. In general, usually, though, people try to focus on either removing the links from the source site or using the disavow tool. With regards to kind of this general situation where you assume that an update has been affecting your site based on the links to your site, that's something where I would tend to be a little bit cautious before jumping to conclusions, and really take a look at the links for your site to make sure that there's really kind of a pattern of unnatural links there that is really problematic. It's very easy to look at any website that's been on the web for a longer period of time and to find a handful of kind of weird and unusual links. So just because you find something weird doesn't necessarily mean that those links are negatively affecting your site.
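For reference, the disavow file mentioned here is a plain UTF-8 text file uploaded through Search Console's disavow tool; the hostnames below are made-up examples:

```text
# Widget links we asked to have removed but could not (hypothetical examples)
domain:spammy-widgets.example.com
https://some-blog.example.org/old-widget-page.html
```

`domain:` lines disavow every link from that host, while plain URL lines disavow only the links from that specific page.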
It might just be that these are the kind of crufty links that get collected over the years. But if you do find a pattern of really significant unnatural links to your site, then that's something I'd recommend cleaning up before maybe someone from the webspam team takes a look at it manually and then applies a manual action there. So a significant amount of links, if it's only like maybe 10, 20, it doesn't matter, right? It's hard to give an exact number. It's really something where if you look at the website overall and you see that there is a really significant and clear pattern that's repeated across most of the links, then that's something I would take action on. OK. All right. Then a question about the crawl stats in the old Search Console. When I compare the URL counts from crawl stats in the old Search Console to my access logs, there are lots of differences. So lots of details there. And why is that? How are these stats counted? What's typically included? What about rendering? What about other kinds of bots that access pages? Yeah. Good question. I find it really hard to reproduce those counts because of the way that they're compiled. And I suspect that's what you're seeing there, too. So that might be something where it's really tricky to try to understand the exact connection that you're seeing. In general, what happens there is we include all of the accesses that go through the infrastructure that Googlebot uses, which does include Googlebot. It does include rendering, it includes robots.txt accesses, it includes sitemap accesses. It also includes some of the other bots that are out there, like the AdsBot, for example. So it includes a lot of different things. It's not just HTML pages that we request.
In general, I find looking at aggregated logs like this kind of useful in the sense that it gives you a bigger picture view of your website overall, how many requests are being made, how fast those requests are being responded to, which is something that you often don't see as clearly when you look at just the HTML files, for example. So that's something where I find it useful overall to look at these, though it makes it trickier if you're also looking at your log files and trying to compare them. I imagine, in general, if you're looking at the log files and you see which kinds of requests fall into this, then you'll see that the trends are generally the same. Depending on what you're tracking, you might also see the speeds or the file sizes that are involved there, which help you a little bit to understand what else is really happening. So it's hard to track that back one to one to specific requests that are made, and for the most part, that's something which probably doesn't make too much sense. Let me see. All of the things that you mentioned there, JavaScript, API calls, rendering, CSS, images, all of that is in there. What is not included in there, as you mentioned in one of the questions, is anything on different domains. So that would not be included there. So for example, if you have one HTML page and it has maybe hundreds of images, but the images are hosted on a different domain or a different host name, then those would not be counted there. Those would be counted on the other domain. Similarly, if you turn it around, if you're the CDN for a number of websites and they include lots of images that are hosted on your CDN and you don't have any HTML pages at all, then we'll still show those counts as requests made to your website and show that in the crawl stats data.
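As a rough way to compare your own access logs against these aggregated crawl stats, you can bucket requests by the crawler user agent. This is a minimal sketch, assuming Apache/nginx combined log format; the user-agent tokens are the publicly documented Google crawler names, but the exact set hitting your site will vary:

```python
import re
from collections import Counter

# Token -> bucket, checked in order; Googlebot-Image must come before
# the plain Googlebot token, which it also contains as a substring.
BUCKETS = [
    ("AdsBot-Google", "adsbot"),
    ("Googlebot-Image", "image"),
    ("Googlebot", "googlebot"),
]

def classify(user_agent):
    """Return a coarse bucket name for a user-agent string."""
    for token, bucket in BUCKETS:
        if token in user_agent:
            return bucket
    return "other"

def count_crawlers(log_lines):
    """Count requests per crawler bucket in combined-log-format lines."""
    # The user agent is the last quoted field in combined log format.
    ua_re = re.compile(r'"([^"]*)"\s*$')
    counts = Counter()
    for line in log_lines:
        match = ua_re.search(line)
        if match:
            counts[classify(match.group(1))] += 1
    return counts
```

Note that user-agent strings can be spoofed, so for anything beyond a rough trend comparison you would also want to verify the requesting IPs via reverse DNS.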
One thing I thought I would mention, just because it's the crawl stats, someone put together a really nice bookmarklet that lets you download these stats as CSV files, which makes it a little bit easier to track them over time. It still doesn't make it easy to map one to one to individual requests, but at least you can download the numbers and put them into spreadsheets. Hello, John. Hi. Thank you. Thank you for answering all my questions. A lot of information, so thank you very much. I'm not much smarter about this, but thank you for your time. Can you tell me if you, or your team, plan to bring the crawl stats to the new Google Search Console, or will it be there in some different, I don't know, report or something? I don't know. Personally, I'd like to keep everything that's still available in the old Search Console, but I imagine the Search Console team has to make some hard decisions along the way. What might happen is that we could take these reports and integrate them into something a little bit easier to understand or a little bit more actionable. So one thing that people have been asking about a lot, for example, is crawl budget. Maybe it makes sense to create some kind of a report around crawl budget. And if we did that, then maybe it makes sense to take these crawl stats and include them in a way that makes them a little bit easier to understand. I don't know if that's something that is lined up or will be happening anytime soon. But these are always discussions that we have. And part of the direction that we end up going for these kinds of features really depends on what kind of feedback we get back from you. So if that's something where you see this is really useful, and you can give us information on how you're improving the web overall by using this feature or this kind of data, then that makes it a lot easier for us to go back to the engineering team and say, hey, this is really critical.
We need to make sure that we build this out rather than take it away. So any kind of feedback or tips or ideas that you have around that is always useful. OK. I will definitely email you that information. Thank you. One note about the bookmarklet: I really love it, and I use it for my data, so it's really useful for me too. And can I have one additional question about the topic? Sure. I would love to know how I can tell that a website has problems with crawl budget. Typically, I can see that, or I can find it out, when the mean download time is very high, or when a lot of URLs which aren't useful are being crawled. And maybe in some report, a discovery report, I think it's called, there is some column with URLs which are discovered but not crawled. I'm not sure about the right name. So are there any other possibilities to find out if a website has some crawl budget issues? No. In general, most websites don't have issues with the crawl budget. So that's always the tricky part, in that everyone is kind of worried about this, or at least the people who know about it. And most websites don't need to worry about this. Usually, let's see. So one way to kind of see what is limited there is essentially if you see that the amount of pages that are being crawled doesn't match the amount of content that you think you've been changing on your website. So for example, with a news website, if you look at your server logs and you see Googlebot is crawling one third of all of the news articles that I put out in any given day, then that's a pretty clear sign that we're not able to keep up. But that's sometimes really hard to determine. So the two approaches that I usually recommend are, on the one hand, looking at the speed that it takes to download individual pages, like you mentioned.
If that seems fairly high, then that's often a sign that we'd like to be able to crawl more, but your server is kind of slow, so we're not able to do as much as we could. The threshold there for fairly high is hard to say. So it's something where, for smaller websites, if they're fairly slow, then that doesn't matter so much, because we get as much content as we need anyway. But for larger websites, if they're fairly slow, then it does matter quite a bit. The numbers I see when I look across various websites: usually for the websites where we tend not to have that much trouble with crawling, I see speeds of, I don't know, between 100 and 500 milliseconds per request. So that's not per page that is rendered in Chrome, but rather per request made to the website, which includes things like the CSS files, JavaScript files, et cetera. And for sites where I see issues with regards to the crawl speed, usually the time per request is, I don't know, towards two seconds or higher. So those are kind of, I mean, it's not that we have fixed thresholds. This is more just anecdotal from what I've seen. So that's kind of the one thing. And the other aspect is server errors; that also plays quite a bit into when we start to slow down. So if we see a lot of 500 errors or 503 errors, then usually what happens is that over time we tend to slow down. So that's something you can pretty clearly see in your server logs. In Search Console, it's a bit tricky to see as clearly. But definitely in your server logs, you can track the number of 500 type errors and see if that grows, or if you have spikes there, or if that stays at a fairly high level. Because ideally your server shouldn't be returning 500 errors regularly. That should really be something that's more of an exceptional thing. So those are kind of the two aspects that play into how much we can crawl.
And the other aspect, with regards to how much we want to crawl, that's something you can also control, which kind of goes into, I think in the coverage report, what you mentioned, the discovered but not crawled URLs. And usually, looking at those, one way to tell if there's an issue is to kind of look at the pattern of those URLs. If those are all URLs that seem reasonable, that are kind of like clean URLs in the way that you have your canonicals set up, then that seems like something where probably we could be crawling a little bit more. If those are all URLs where it looks from the URL structure like they're kind of non-canonical URLs, so maybe you have parameters attached to them, maybe you have filtering and sorting parameters, or just generally long and complicated URLs that you know are not the ones that you would have specified as canonicals, then if we don't get to crawling those, that's usually less of a problem. But that could be a sign that you're linking to those somewhere within your website. And yeah, all of that kind of flows into this complicated topic of crawl budget. So on the one hand, if we can't crawl everything that you're producing, that's definitely a bad sign. If you're linking to a lot of URLs that don't necessarily need to be crawled, that's something that you can improve, to kind of reduce the number of URLs that you push to Google. If you're seeing that the speed is fairly slow, that the latency to process a single request is fairly high, that's something you can improve. And if you see a lot of server errors, that's also a sign that we're probably crawling less than we could. So yeah, lots of complicated things. And like I said, for the most part, for kind of small to mid-sized websites, this is not something that you really need to worry about. It's really hard to know. We're measuring speeds.
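The two signals described here, per-request latency and the rate of 500-type errors, are easy to summarize once your logs are parsed. A minimal sketch, assuming you have already extracted (status, milliseconds) pairs for Googlebot requests; the roughly 100-500 ms and two-second bands are the anecdotal numbers from the discussion above, not fixed thresholds, and the 1% error cutoff is an arbitrary illustration:

```python
def crawl_health(records):
    """Summarize crawl health from (status, millis) tuples.

    `records` is assumed to be pre-parsed from your logs: the HTTP
    status code and the time taken to serve the request in milliseconds
    (e.g. Apache's %D divided by 1000, or nginx's $request_time times
    1000, whichever your log format captures).
    """
    total = len(records)
    errors_5xx = sum(1 for status, _ in records if 500 <= status < 600)
    avg_ms = sum(ms for _, ms in records) / total if total else 0.0
    return {
        "requests": total,
        "5xx": errors_5xx,
        "avg_ms": avg_ms,
        # Anecdotal bands from the discussion: ~100-500 ms per request
        # is comfortable; around two seconds or higher is a red flag.
        "latency_flag": avg_ms >= 2000,
        # 1% is an arbitrary illustrative cutoff, not a Google number.
        "error_flag": total > 0 and errors_5xx / total > 0.01,
    }
```

Running this per day over your Googlebot requests gives you the same day-by-day view the crawl scheduler effectively works with.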
The speed beta that's in Search Console right now, is that right now just kind of understanding what's happening with websites, where they are in terms of seconds, milliseconds, and then eventually you're going to take that as a ranking factor? That's kind of a different kind of speed. That's the speed to actually view a page. And I believe that's just based on the Chrome User Experience Report data, which is essentially what users have seen in the wild. So that wouldn't be related to the crawl budget type questions. That would be something different. And especially on mobile, we do take speed into account for ranking. So if your pages are really slow and you see that in the speed report, then that might be worth trying to find ways to improve. But the speed in the speed report wouldn't be related to the crawl budget type question. I have another question on the throttling topic you mentioned with crawl budget. When I see that you're crawling our page and we don't get any 500s, does that mean that you crawl as fast and as much as you want, and at some point you just stop? Is that like an indicator we can look at in our server logs? Not necessarily. So usually what happens is we make a plan to crawl a collection of URLs from your website per day. And I don't know, these might be, let's say, 10,000 URLs that we'd like to crawl. And we don't try to crawl them all at once, but rather we spread them out over the whole day. So it's not that you would see a pattern of us crawling in the morning and then stopping; rather, we would crawl kind of spread out. And the main reason for that is because we don't want to overload your server. So if we were to crawl all of these URLs at once, then that would be a pretty high load on your server. Whereas if we can spread them out more, that makes it a lot easier.
And especially if the server is a little bit slow, so that we end up having multiple concurrent requests, requests that happen at the same time, then that's something we want to avoid. So we tend to spread things out if we see that things are just slower than we would have expected. Yeah, I'm thinking about it, because we are also scaling based on our load. So when you try, or would try, to crawl us, I guess we would just scale. And so the question would be if you would notice that it stops then, or throttles down, or would it still go up? So we try to do this on a per day basis and look at the whole day, kind of looking back to see how the day went, and then based on that, plan for the next day. So that's something where, if you're kind of scaling up during the day and some periods are faster, some periods are a bit slower, that's generally OK, as long as the aggregate over the whole day is kind of within that range where we'd say, well, this is still reasonable, we can still crawl more. And you'll see kind of day by day the speed go up when we think we can crawl more and when we want to crawl more. In a 2017 blog post, you stated that basically increasing the speed will increase the crawl rate. If we have enough demand for more URLs, then we will try to crawl more. So that's kind of the one thing. The one thing I do want to caution here, though, is that it's very easy to focus on this and to say the crawl rate is really important. For ranking, it's not necessarily the case that being crawled more often means that you will be ranked better. So I wouldn't see this as a ranking factor, but really mostly as a technical kind of thing, where if you're changing your content and you want that change to be reflected in the search results, then being crawled fast enough for that is kind of what you need. Whereas if you're not changing your content, then it really doesn't matter how quickly you're being crawled.
So you don't need to kind of push Google to crawl more just because more crawling is better type of thing. It's really not the case that more crawling means higher ranking. OK, thank you for answering us. I saw a few websites which started to use AMP, and this is about the crawl budget still. In the crawl stats, it looks like the crawl count typically grew a lot. It looks like the crawl budget for AMP is different from the crawl budget of the website. That's how it looks to me. Can you say something on the topic, like AMP and the crawl budget? That wouldn't change anything. So those would be normal requests that we would make. I could imagine that maybe AMP pages are smaller files, so they can be transferred faster, which means we could crawl a little bit better. It could also be the case that, if you have a separate AMP subdomain and you're looking at just the crawl stats for that subdomain, then those would look better, because it's easier to serve maybe static AMP content versus kind of the dynamic full HTML content. But essentially, when it comes to crawling, we don't special case anything around AMP or normal HTML pages. OK. That's clear to me. Thank you. Thank you. Sure. All right. Let's see. Question about Search Console. On the one hand, we have for our property about 3,500 pages, with valid pages, 1,900 indexed, not submitted, and 1,600 submitted and indexed. Under mobile usability, we only have 1,400 valid pages and 55 errors. Until the beginning of October, we had a separate m-dot mobile site, and then we switched to dynamic serving. I would expect that this should clear up the difference in counting the valid pages, but it's still there. Can you explain to me where this difference is coming from? OK.
So I think one of the big challenges with the aggregate reports in Search Console, the mobile usability one, and other reports as well, like the rich results or the AMP reports, is that they focus on a relevant sample of the URLs from your website. Whereas if you look at the coverage report, that includes all of the URLs that we have indexed for your website, so not just kind of a relevant sample of them, but really everything across the whole index that we have. And that means that it's hard to compare the total numbers. So the coverage report might say, like in your case, maybe 4,000 valid and indexed pages. And if you look at the mobile usability report, the total might be 2,000 or 1,000 valid pages. And it's not the case that the difference is pages that are not mobile friendly or not matching the mobile usability criteria; rather, we took a total of 1,000 pages from your website and we reviewed those specifically for these criteria. So that's something where you might see kind of like: the 4,000 that we have indexed, 1,000 we checked for mobile usability issues. And of those 1,000, some of those will be kind of valid with regards to mobile usability, and some of those might have individual errors with regards to mobile usability. So that's something where adding up those pages from an aggregate report generally won't give you the totals of the actual crawl or the actual indexing report. In general, since we try to take a relevant sample of those pages, if you're seeing minimal amounts of errors in the sample that we show in that aggregate report, then that's generally saying you're OK. So that's not something where I would worry and say, well, what about those other URLs that are not checked? Does that mean that maybe they're all bad? Usually what happens is that that relationship kind of stays the same, and most of them, like in your case, are OK, and maybe a small part are not OK.
The other thing that is specific to the mobile usability report is that we determine the mobile usability based on rendering those pages. We have to render those pages in a way that matches what a user would see on their device. And sometimes that doesn't work out. So sometimes things like the CSS file we can't fetch properly, or the JavaScript file we temporarily can't fetch. And even if that's just a sample of all of your indexed pages, sometimes that can happen. And with that, you'll always see kind of a small amount of issues with regards to mobile usability in that report, which are probably mostly based on just kind of temporary fluctuations in our ability to fetch individual pieces of content. And you can double check that by going through the mobile usability report, taking some of the errors that you see there, and doing a live test. And if you do a live test and it says, oh, it's OK, then that's generally a sign that this was probably just some kind of fluke with regards to testing, some kind of flaky testing that was happening, that isn't really something that is actionable for your site. So in particular, specifically with regards to the mobile usability report, I would not assume that this is something where you can trivially get the number of errors that we show down to exactly zero. Some amount of flakiness with regards to requests made to a server is kind of natural; that happens anyway. That doesn't affect indexing. If it were to affect the HTML pages, we would just refresh those and try to get them again. So it's really only with regards to the mobile usability report there. So that's one thing kind of to keep in mind there, in that some of these errors might not be actual errors as much as they could be kind of just fluctuations in the network connections that can sometimes happen. Let's see. OK, an easy linking question. In your opinion, which ultimately is best to use when doing internal linking, absolute or relative URLs?
Assume your site has correctly implemented canonicals, has a single uniform domain being used, and no duplicate domain issues. So in that theoretical case where you have a theoretically perfect website, then it doesn't matter at all if you use absolute or relative URLs. So from that point of view, use whatever is easier for you. Oftentimes, relative URLs make it easier to test things locally. So maybe that's better. That's not something I would really worry about there. So I really would leave it up to you. And if I were working on this website, I would just see which of these was easier in this specific case with whatever CMS I'm working on, and just kind of use whatever makes sense there. In the case where your website is not this theoretically perfect structure, which probably most websites are not, then working with absolute URLs, if you can make sure that they really point at the canonical versions of all of the URLs that you have, probably makes a little bit more sense, because then you don't have to worry about things like: what if Google or some user ended up accessing the non-www version of your website? If it was loaded with absolute URLs, we always find our way back to your preferred version. In practice, you can also work around this by using the rel canonical, and we can generally figure that out anyway. So in the theoretical perfect situation, use whatever makes sense; in the kind of realistic situation, I'd still say use whatever makes more sense for you. One thing that sometimes comes up is that people try to use absolute URLs as a way to kind of fight against scrapers. And from my point of view, that doesn't really work that well, in that most scrapers know how to deal with URLs. So absolute or relative, they'll get around that anyway. And there are probably smarter things that you can do to work against scrapers if you're seeing that that's happening with your website.
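As a concrete illustration of the rel canonical safety net mentioned here (hostnames are hypothetical): even if a page gets reached on a non-www or duplicate host variant, an absolute canonical points back to the preferred version, and internal links can stay relative:

```html
<!-- Served on https://www.example.com/shoes/ and on any duplicate
     host or protocol variant that happens to resolve: -->
<link rel="canonical" href="https://www.example.com/shoes/">
<!-- Internal links can then stay relative: -->
<a href="/shoes/blue/">Blue shoes</a>
```

The canonical is the one URL worth keeping absolute even when everything else is relative, since a relative canonical on a duplicate host would just point back at the duplicate.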
Let's see, how to remove the content of an expired domain from Google. If a user publishes an expired domain's content on his own site, another one, how will Google treat it? So I'm not quite sure which direction those questions are going. In general, if you take over an expired domain and you want to kind of start fresh with your new content, then essentially you just make sure that the old content returns 404. And over time, as we re-crawl those old URLs and see that there's none of the old content left anymore, that will be reflected in search. I would not recommend using the URL removal tool for this. In particular, don't use the URL removal tool for your whole domain, because it will not reset your domain. It will just hide the whole domain from search, which means as you publish new content, that will be hidden as well. So don't use the URL removal tool if you're just taking over an existing domain. So that's kind of the one side there. On the other hand, if your domain expires and you forgot to renew it and you'd like to keep publishing the content, then in the ideal situation, you would set up redirects to the newly hosted version of that content. If the domain has expired in the meantime, then you can't set up redirects. And in a case like that, where you can't set up redirects, essentially your copies of the old content are new URLs on the web. And our systems have to discover them first. We have to find links to those pages. We have to crawl them. We have to index them like any other piece of new content. So there is really no way for you to say, without setting up redirects or without using things like rel canonical, that this new version of the content replaces the old one. Instead, what we will see is, well, there is a version of content here, and the old version is gone. And we see something new. Something has gone away.
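Where you still control the old domain, the redirect described here is an ordinary host-wide 301. A minimal sketch as an nginx server block; the hostnames are made-up examples, and the equivalent in Apache would be a `Redirect permanent` or mod_rewrite rule:

```nginx
# Permanently redirect every URL on the old host to the same path
# on the new host (hostnames are hypothetical):
server {
    server_name old-domain.example;
    return 301 https://new-domain.example$request_uri;
}
```

Using `$request_uri` preserves the full path and query string, so each old URL maps one to one onto its new location rather than everything collapsing onto the new homepage.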
It's not really clear to us that the something new should replace the thing that went away. So that's kind of an unfortunate situation if you let your domain expire and then decide that actually you do want to keep the content indexed. I have maybe a question related to that. That was about the whole website; what about when a lot of URLs got indexed but they shouldn't be, and afterwards the whole folder is blocked by robots.txt? How can you get those URLs deindexed without changing the robots.txt? So you want to get those URLs indexed, or you don't want to get those URLs indexed? No, they are indexed, but I don't want them to be indexed. And that's quite a lot of URLs. OK. So with the robots.txt, we would not crawl those pages. So if you have a noindex there, or if you have server-side authentication, then we would not see that. So that's kind of one problematic thing there. What generally happens is, if these URLs are blocked by robots.txt, then over time we will only index the URL, not the content anymore. So if someone searches for something that used to be kind of indexable under that URL, then at most what might happen is we show the robotted URL in the search results. Usually, this is less of an issue, because if these are topics that you actually care about, then you would have a version of the content on your website that is indexable, with more information. So for example, if you have a page about blue cars, and you have one version blocked by robots.txt and one version that is still live, if someone searches for blue cars, we might know, based on maybe links to the robotted page, that this is also a blue car page. But we definitely know that the version that you allow us to crawl and index is about blue cars. So that would probably be the one that we would prefer anyway. So if there is a version of content that is crawlable and indexable, and one that is blocked by robots.txt, then probably we would just show the one that is crawlable and kind of indexed already.
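To make the interplay concrete: a robots.txt Disallow stops crawling, which also stops Google from ever seeing a noindex on those pages, so the bare URLs can stay indexed. A hypothetical example:

```text
# robots.txt -- blocks crawling of /private/, but any noindex inside
# those pages can no longer be seen, so bare URLs may stay indexed:
User-agent: *
Disallow: /private/
```

To actually get the pages deindexed via noindex, they have to be crawlable, so the Disallow would need to be removed and the pages would need to return a `<meta name="robots" content="noindex">` tag or an `X-Robots-Tag: noindex` HTTP header; the URL removal tool is the faster route for urgent cases.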
If you need to get those old URLs out completely, so that if someone does a site query and explicitly looks for those URLs, then you would need to use the URL removal tool. The URL removal tool can be used on a folder level, or kind of a URL prefix level, or on an individual URL level. For a lot of URLs, if you have several thousand URLs, then you would try to find a folder or prefix that would work to remove all of those URLs. If this is a really urgent situation and you have some URLs that are actually indexable within that folder as well, then that's kind of a call that you might have to make, where you would say, well, maybe I remove the whole folder, even though I know there's some content I actually still want, just to make sure that the critical content that I definitely don't want to have indexed is also removed from search. So that's kind of a, I don't know, rare situation that we don't see that often, but that can happen as well. OK, thanks. Another question related to that: would that also count against the crawl budget? If it's blocked by robots.txt, no. If the URL is blocked by robots.txt, then we would not count that as something that we try to crawl. Oh, OK, cool. And that means over time, let's say half a year or something, those URLs would also automatically be noindexed, because you can't crawl them? Not necessarily automatically noindexed, especially if we find links to those pages. So if we find a lot of links to those pages, what will happen is we will index just the URL. And we might show, kind of as a title in the search results, something from an anchor to that page. So that's something that can happen. But like I mentioned, if you have normal content for that topic, then we would probably prefer that over the robotted page. All right, thank you. Sure. I noticed that the Chrome console is listing scripts that are not using the SameSite attribute, saying that those will be blocked, and recommending to read more about it here.
So there's a link to a Chrome Status blog post, I think. Can you explain what the harms are of not doing anything in this case? So I had to look at this post as well to kind of see what exactly is changing here. But it looks like it applies mostly to cookies. And that's something that generally, from a search point of view, doesn't play a big role. We don't keep cookies for individual pages, so we wouldn't forward those cookies or reuse them. So from a crawling and indexing point of view, essentially it doesn't play a role for us. It can affect how users see your pages, of course. Any change in a common browser can affect how users see your pages. So it's not something I would completely ignore, but I wouldn't see it as something that is specific to search. One thing also to keep in mind here is that for Google rendering, we're using kind of the evergreen Googlebot now, which means we update the Googlebot Chrome version regularly as well. So similar to you, when you get that warning that there's a new version of Chrome available, Googlebot gets updates as well. And if there are changes in Chrome that affect how a page is viewable, then that would affect Googlebot rendering as well. So for example, one of the recent changes about HTTPS mixed content, that's something that generally, I think, has been like that in Chrome for a longer period of time. But if that were to come completely unexpectedly from one day to the next, that users in Chrome would not be able to see your images, for example, then that would also mean that Googlebot would not be able to see those images and would not be able to render those pages that use kind of that functionality which is now blocked in the browser. That would also be blocked for Googlebot as well. So on the one hand, these things primarily play a role with regards to your users.
On the other hand, if this functionality is critical for finding content that is indexable for your pages, then that could play a role for rendering as well. I feel that's kind of a confusing answer. Maybe I need to rephrase that at some point. I am looking to improve the visibility of my product pages. Many of our PDPs include a large number of images, each of which includes a large amount of explanatory text. So my concern is that the associated text, while great, might be diluting the identity of these pages. In a lot of cases, the H1s are generally too generic. And the text below, in terms of source delivery, provides better clarity about what the product actually is. Do you think moving the images down and the relevant text up would help Google to better interpret the focus of each of these pages? So just shifting the location of content within an HTML page, I don't think that plays a big role at all. Headings are useful in that we can take a heading and see which images and which text kind of apply to that heading. But just shifting things around with HTML or with CSS, I don't see that playing a big role there at all. So in that regard, I wouldn't really worry about this. One thing I did notice when looking at that example page that you linked to is that when I load it up, after a certain period of time it switches to kind of a country picker interstitial. And I don't know how you're kind of triggering this, and if you trigger this in all locations. But for example, if you were to trigger this when Googlebot crawls and renders your pages, that might also result in Googlebot not being able to index your pages properly. So that's one thing you might want to double check. In general, I recommend using kind of like a banner instead of an interstitial when it comes to things like country or language pickers. Because if you're using a banner, then even if that does end up being rendered in Google's systems, it wouldn't block the indexing of the rest of your content.
Whereas if you have an interstitial that, in the worst case, kind of takes out all of the old content and replaces it with this kind of country or language picker, then we might not have much content left on that page for indexing. Let's see, one more question, because it's a Swiss-related one, I guess. We're fixing a website with a CH domain. We have a well-running clone under a .com domain. If we take the CH domain offline for a few weeks and redirect all its traffic to the .com domain, what effect does this have on search and ranking? So I assume taking it offline and redirecting happen at the same time. Whereas if you take a domain offline and then only after a certain period of time end up redirecting, that's a little bit trickier. But generally speaking, if you just redirect, then we would see that as a domain move and try to move all of the signals that we have for the old domain, the CH domain, to the .com domain. And that's something where, if you don't change the structure of your website, then generally that's easy for us to process. You can use the site move tool in Search Console to tell us a little bit more about that. But generally, we can do that fairly quickly. The one thing I would watch out for here is you're going from a country-level domain to a generic domain. So the CH is specific for Switzerland, which means there is geotargeting happening specifically for users in Switzerland when they search for local content. And with the .com, by default, that's kind of generic. So that could be geotargeted to any specific country. So what I would recommend doing here, if you're moving from a country domain to a generic domain, make sure you definitely have the geotargeting setting set in Search Console. And like with all kinds of moves, double-check all of the other things that we have listed in the Help Center as well.
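A one-to-one move like the .ch to .com case above is typically implemented as a host-wide 301 redirect that preserves the path, so each old URL maps to exactly one new URL. A hypothetical nginx sketch (the domain names are placeholders):

```nginx
# Hypothetical sketch: permanently redirect every URL on the old
# .ch domain to the same path on the .com domain, which lets Google
# process the change as a one-to-one site move.
server {
    server_name example.ch www.example.ch;
    return 301 https://www.example.com$request_uri;
}
```

Keeping the path identical is what makes the move easy to process, since the signals for each old URL have an unambiguous destination.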
So there are some things which are kind of obvious that get forwarded there as well, but for things like the ownership, make sure that you have all of those transferred, all of the settings transferred to the new domain. If you have the parameter handling setting set up, if you have any specific removals that are set up, make sure you have all of that moved over to your new domain as well. So that's kind of what I would watch out for there with regards to the overall effect on search and ranking. In general, a kind of one-to-one domain migration is pretty easy for us to process and shouldn't have a visible effect. You might see a few days where things are shuffling around, but generally speaking, that should kind of settle down fairly quickly, though it's not guaranteed. So in particular, if the new domain has a really different history associated with it, or really different settings associated with it, that take a longer period of time to kind of settle down again, then you could see effects where we would shift over to your new domain, but your site overall on that new domain is not as visible in search as it used to be. On the other hand, it can also happen the other way, that maybe you have a really good new domain, or your old domain was really bad, and you're moving things over, then maybe things are a little bit better. Okay, so maybe it's time to switch over to questions from you all as well. Looks like there's a bit of back and forth happening in the chat. What can I help with? Go for it. So can I still ask the same question regarding that site? Like, if it ranks for its own name, is that still okay? I mean, it ranks for its own name, but doesn't rank for keywords. So it's obviously very suspicious. And at the same time, I've also noticed that Cloudflare has been blocking certain things from the CDN. And I also recall a chat that you had on Twitter with Eric Wu, I think, mentioning that Cloudflare does block Googlebots.
And so I've disabled all that stuff, but I'm noticing now that the CDN is blocking particular things, like CSS, within the site. But I'm seeing, like, now the site ranking just for its own name. So I don't know. I mean, what is happening here? Could it be also Cloudflare that's blocking? I'd say probably not, because these kinds of technical issues with regards to the access are more specific with regards to indexing the actual content. But it's very frustrating, like, because I'm seeing other local, major local brands that I look at ranking fine, like, within mobile and desktop. It can't be that this is the only one, John. Of course not. There's lots of... I mean, that's something... So I think maybe just generally, in a case like this where you're seeing the content being indexed, and it's the same content that's used for the mobile and the desktop version, so you don't have, like, a separate version of the page, then I wouldn't worry about those kinds of technical indexing-blocking type issues, because that essentially means we're able to index the content. And we index the content once for desktop and mobile search. So it's not that we have, like, a desktop version index and a mobile version index, where if we can't get one, then suddenly it doesn't rank in that one anymore. If it's indexed, then it's essentially indexed. That doesn't mean that we couldn't maybe crawl better if the hosting environment were set up differently, but that's kind of a different question. For smaller sites, like I said, usually that's not a problem anyway. And what I recall from Cloudflare, I don't know which thread you're referring to, but what I recall from... There was a WAF, there was a WAF, yeah. Yeah, what I recall there is that they do block bots that act like Googlebot but are not actually Googlebot. And from our point of view, that's fine.
So in particular, if you're using some kind of tools to crawl a website and you're using, like, a fake Googlebot user agent, then if the hosting environment blocks that, that doesn't really affect us, as long as the real Googlebot is able to get through. So that's kind of that side there. But in general, when it comes to desktop and mobile, especially for things like local businesses, there are sometimes significant differences in the way that they're visible in the search results. And sometimes that plays a role with kind of the elements on the search results page, things like how many map entries we can show for these queries. That's something that you would also see in Search Console, where the ranking would essentially take those elements into account as well. How is it okay, though, for their competitors to rank both on mobile and desktop with very similar positions, which are one-to-one? There are no differences between their competitors' mobile and desktop results. I've been doing this 17 years. I've seen... I live this life, right? I do believe it's still a technical error. I literally... Yeah, I mean, I'll definitely take a look at that again, but it's really something where there are just differences with regards to how people search on desktop and mobile, and that is something that is reflected in the search results. So kind of the elements that are shown, sometimes the ranking, the understanding of the query, it can differ quite a bit. And especially when you're looking at queries that don't have a lot of impressions, then looking at the data in Search Console can be a bit tricky. It's like, when there is an impression because someone was searching for it, then suddenly there's data there. When there is no impression, or people are searching for it kind of like in a non-local way, then there might be a different kind of data there, which makes it really hard to double-check.
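The point about fake Googlebot user agents can be made concrete: Google's documented way to verify the real crawler is a reverse-then-forward DNS check, which a spoofed user agent cannot pass. A minimal Python sketch (the example hostnames in the comments are illustrative):

```python
import socket

def hostname_is_google(host: str) -> bool:
    """True if a reverse-DNS hostname is in Google's crawler domains,
    e.g. crawl-66-249-66-1.googlebot.com."""
    return host.endswith((".googlebot.com", ".google.com"))

def is_real_googlebot(client_ip: str) -> bool:
    """Verify a claimed Googlebot IP: reverse-DNS it, check the
    hostname, then forward-resolve the hostname and confirm it maps
    back to the same IP. Spoofed user agents fail this check."""
    try:
        host = socket.gethostbyaddr(client_ip)[0]
    except OSError:
        return False
    if not hostname_is_google(host):
        return False
    try:
        return client_ip in socket.gethostbyname_ex(host)[2]
    except OSError:
        return False
```

So a WAF that blocks requests which claim to be Googlebot but fail this check does not affect real crawling, which matches the point above.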
So especially with local businesses, if you're looking at things like local-related queries, then that can be really tricky to check when you're not in that specific location. So take a query like, I don't know, some kind of business near you. That's something that people might do on mobile, where they expect things that are really close by. But if you're in a different town or in a different country and you try that query, then you would see drastically different results, and that's kind of normal. And in Search Console, if we see that site mentioned somewhere in the search results, we would still track that as an impression, even though it's not necessarily an impression from a user that you would care about. So even if you take a query, out of 55,300,000 results, like flowers to Toronto, I see the first result on mobile at number one and on desktop also at number one, and that's a very competitive query. That query has less than a million searches. It's a very small town. But anyhow, I see it as an issue, and I don't know, it's frustrating. Everything is done correctly, the speed is correct. Even though, I'm not sure why, the speed is showing low, the desktop pages are showing low. But they're not slow. Clearly in PageSpeed Insights, you see they're in milliseconds, and also in Lighthouse as well. I don't know, I'm just frustrated. This is, like, the first time I've been frustrated in, I think, literally 10 years. No, no, it's always frustrating to try to chase these differences down. Well, you try to be a good webmaster, right? I mean, it's just, it's upsetting, that's all. Cool, okay. So we're a bit over time. Let's take a break here. It's been great having you all here. Those of you in the U.S., I wish you all a happy Thanksgiving and, I guess, a good Black Friday. I don't know if you wish people that. It's just buying stuff. But anyway, have a great week.
We'll have the next Hangout in English on Friday and one in German on Thursday. And next week, we have the Webmaster Conference in Tel Aviv, where I will be. So if any of you are there, come by and say hi. We also have Webmaster Conferences lined up in Japan and Korea in the next couple of weeks. So if you're there, that's also cool. Yeah. See you, you know, in Switzerland on the 11th. Cool. All right. Yeah, that's the one lined up after that. So many things happening in the next couple of weeks. All right. Cool. Then I wish you all a great week and see you next time. Bye, everyone. Thanks. Bye.