Maybe. Maybe it'll start recording. It shows recording to me. Oh, there we go. OK, woo-hoo. All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst at Google in Switzerland. And part of what we do are these office hours hangouts where people can join in and ask their questions around web search. A bunch of questions were submitted already, but we probably have time. So if any of you want to go ahead and ask a first question, feel free to jump on it. OK. Can I put an easy one to you, John? OK. How does a site get into the Chrome User Experience Report if it's not there? That dataset has over 4 million sites, and this is the only one I've ever seen not being in it. I don't know for sure. I think it's based on a sample of the Chrome traffic, but I don't know how Chrome does that. So it's not something you have to sign up for or do manually. It's something that Chrome figures out automatically, that this is one of those sites that needs to be included there. OK. Yeah. Make it more popular. I guess so. Well, I mean, it is a local business site, but it gets thousands of searches, and clicks from search as well. It's not tiny, but it's not massive. I mean, the data that you see there, you can also pick up either through doing a lab test where you test it yourself live, or you can use the JavaScript library for Core Web Vitals and pull that data directly into analytics, for example. So that might be a way to kind of get that data as well. You don't see it in Search Console in those cases, but you at least have a rough understanding of the speed of that site. Yeah, it's hard for me to gauge the progress as I'm trying to make changes, and sure, I can use the lab data in PageSpeed tools or whatever it may be. But it's nice to see that green graph spike up, and I don't get that reward with this site, so I have no idea.
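The library John refers to here is the open-source web-vitals JavaScript package. A minimal sketch of feeding those field metrics into an analytics backend might look like this; the `/analytics` endpoint and the payload shape are assumptions for illustration, not anything Google prescribes:

```javascript
// Sketch only: assumes the open-source `web-vitals` package and a
// hypothetical /analytics collection endpoint.
// import { onCLS, onLCP } from 'web-vitals'; // in a real page

// Turn a web-vitals metric object into a flat payload we can beacon.
function toAnalyticsPayload(metric) {
  return {
    name: metric.name,                            // e.g. 'LCP', 'CLS'
    value: Math.round(metric.value * 1000) / 1000, // trim float noise
    id: metric.id,                                // unique per page load
    page: metric.page || '/',                     // assumed custom field
  };
}

// In the browser you would wire it up roughly like:
// onLCP((metric) => navigator.sendBeacon('/analytics',
//   JSON.stringify(toAnalyticsPayload(metric))));
```

The key design point is that this records field data from real visitors, which is the same kind of data CrUX samples, so it can stand in when a site is missing from the report.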
So you said I can use a JavaScript library to bring it into Google Analytics? Yeah, I'm pretty sure, at least that's what I remember. So I can double-check and find the link for you afterwards. Okay, sure, thank you. John, I have a follow-up question about something you said a couple of weeks ago. So, about negative reviews not hurting. If you have a bad reputation online, and when you search Google a lot of negative reviews and a lot of bad stuff about your company comes up, could that potentially hurt your Google rankings for keywords? Could Google look at that and say, oh, this is a bad company, we're not going to rank it as well because it has a lot of negative reviews? You said, I don't think that would hurt the overall rankings for a website if there's a bad reputation around the site. But if you go back to 2010, there was some type of algorithm out there that looked at people manipulating their customers so that they would write bad stuff about them, so that they would get links and so forth. And Google said, yeah, we will go ahead and take that into account. Has that changed, or are you just saying that in general it probably won't impact you and you don't have to worry about it, but if you do a crazy amount of damage to your customers, where everything written about you is bad, it might actually hurt you? Yeah, I don't remember offhand exactly what the context was a couple of weeks ago. But that was, I think, that eyeglasses company that you mentioned, right? Yeah, DecorMyEyes. I mean, that's something where, if all of the signals point in that direction, I could imagine that we might pick that up. But if you're talking about a handful of people that are upset and are writing these random things online, while there are lots of people that are happy with your site and everything is normal, then that's not something I would really worry about.
I think those situations where there are a lot of people that are really upset about your site are probably pretty rare. Not something that most normal sites would run into. Okay, but as far as you know, whatever that was from 2010 is still around, but it's probably hard for it to kick in, or you don't even know? I don't know if that specific thing from 2010 is still around, because things change quite a bit over time, but that is something that we would try to pick up on. If we see that everything written about a site is really bad, that might be something that our algorithms try to pick up on. But in general, because these kinds of signals are sometimes so vague, they really need to be strong. It really needs to be a strong signal for us to say, okay, we really can't trust this site, and to apply that appropriately to the site's ranking. But I don't remember in detail what we talked about a couple of weeks ago there. My feeling was that was more a case of kind of a normal website where there are a bunch of bad reviews out there, but they're embedded in the web in a normal way. And that feels like something where it's easy to get obsessed about a handful of bad reviews, and it's probably not something that would drastically affect the outcome in search. I think it's always tricky with these kinds of things to infer from one small situation where something applies, and then to take that and say, well, that applies to everything. But yeah. No, I'm just curious whether that specific thing that the Google blog wrote about in 2010 is still active in general. Obviously, it would have been cool if you said, yeah, if 80% of what's written about your company on the web is negative, then that's the threshold.
But obviously you guys are not gonna talk about thresholds. I just wanna know in general if that specific thing is still around, but it seems like you don't know, so that's okay. Yeah. I mean, things have changed quite a bit in 10 years. So that specific thing is almost certainly not there in the same way as it was back then. But we probably take something similar into account. And a threshold, I think, is something that is almost impossible to name. Okay, let's take a look at some of the submitted questions. I was wondering if I need to submit separate XML sitemaps for images and URLs, or can I have one sitemap that includes both URLs and images? Will that have any impact on crawling and indexing? You can use one sitemap file for everything, essentially. There are limits with regards to the number of URLs you can put into a sitemap file and with regards to the size of the sitemap file, but you can split that up however you want. I don't think, in most normal cases, that you would see any effect on crawling and indexing depending on how you split things up in a sitemap file. It might be that if you take a one-million-URL website and turn it into one million sitemap files, you might see an effect. But if you're talking about reasonably sized sitemap files, I wouldn't expect to see any effect with regards to how you split that up. When we have no content in our news sitemap, we answer requests with a 304 Not Modified status code. We're also using other solutions on other domains where we always answer with a 200 status code, no matter if the sitemap was updated or not. What do you recommend regarding Googlebot? Is 304 a problem? So 304 is an HTTP status code that you can return when a request comes in that asks, has this file been modified since I last saw it? If not, I'll use the copy I have. And in those cases, returning 304 basically says, no, it hasn't been modified.
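That conditional handshake can be sketched roughly like this, assuming a generic Node-style handler for sitemap requests; the function and its inputs are illustrative, not any particular server's API:

```javascript
// Sketch: decide how to answer a sitemap request, given the file's
// Last-Modified time and the request's If-Modified-Since header (if any).
function sitemapStatus(ifModifiedSince, lastModified) {
  // Unconditional request: always return the content with a 200.
  if (!ifModifiedSince) return 200;
  // Conditional request: 304 only if the file hasn't changed since then.
  return new Date(lastModified) <= new Date(ifModifiedSince) ? 304 : 200;
}
```

The point of the sketch is that a 304 only makes sense on the conditional branch; on an unconditional fetch the client has no cached copy to fall back to.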
The thing to watch out for with a 304 is that it really only makes sense when the request is one of these conditional requests, not for all requests that are made to a site. So if Googlebot is just saying, give me your sitemap file, then returning 304 essentially doesn't help there, because from our point of view it's just a sign that, well, this file is currently kind of not available, and there's no content being returned for it. If you don't want that sitemap file to be processed at the moment, returning a 304 is fine; returning a 404 is fine. If you do want the file to be processed, then you should return the sitemap file. I don't know, for sitemap requests, whether we always do conditional requests or never do conditional requests. I know for normal web crawling we do a mix. We do some requests with this conditional If-Modified-Since header, with the last date that we know, and some requests are essentially just, give us the current version. And for web search in general, I would not return a 304 status code blindly unless it's really a conditional request, because if you return a 304 when the request is a normal, unconditional request, then essentially you're not returning any content, and we don't have anything to index. So for indexing, that would be bad. For a sitemap file, if you just don't return anything, then we can basically say, well, we don't have a sitemap file at the moment, but that's okay, we can still work with the rest of your site. So for normal crawling, only return a 304 if the request is really conditional; and for a sitemap file, whether you want to return an empty file, or a 404, or a 304 for cases where the sitemap file hasn't been changed, that's kind of up to you. Why do some of my pages show in the Google web cache while others don't? What could be the reason? There's generally no specific reason for some pages being shown in the cache and some pages not being shown in the cache.
That's essentially just a side effect of our internal systems. So it's not a bad sign if a page is not shown in the cache. It's not a sign of quality if it is shown or if it isn't shown. It's essentially just our internal system saying, well, we currently don't have a cache page that we can show you. And that's completely normal. That cache page is not reflective of anything when it comes to ranking. So that's something where I wouldn't necessarily worry too much about the cache page being shown or not shown. If you want to test what Googlebot would see, then I would use the inspect URL tool and not the cache page. I changed the niche of my blog. Earlier it was on how to blog and now we're only writing content about gadgets. We removed all of the old niche content and now it's been more than a year and we're still not getting any traffic. We have around 100 quality blog posts. Why are we not getting any traffic? We run a Google search engine ad campaign and we found that users spend more than eight minutes on a page. So it's really hard to say why you're not getting traffic. There are lots of reasons why you might or you might not get traffic from Google search. So that's something where there's no absolute answer where you can just say, well, you need to do this and then you will suddenly get traffic from search because obviously everyone wants that and wants more traffic from search. So what I would recommend doing here is going to maybe the Google Webmaster Help forums or some other Webmaster SEO forum mentioning your site, mentioning some of the details about your site, specific URLs and queries on your site that you kind of would like to rank for and getting some honest advice from other people about your website. And sometimes there are small technical things that you can change. Sometimes it's more a matter of, well, you're writing about this topic where there are millions of other people writing about the same topic and it's a lot of competition. 
It'll be hard to kind of get a foothold there. But all of this advice can be really useful for you to figure out what you can do next. Like, where can you go from the situation where you are now? Is it something where you can tweak the content? Is it something where you need to reconsider your targeting? Is it something where maybe, from a technical point of view, something is not working as well as it should for your website? So that's kind of the direction I would go there. I have a website with user-generated content, a listing site. Users submit their website URL, but after some period, some of the website domains expire or stop working. What should we do with those kinds of listings? Really hard to say here. I think, first of all, from a very high-level point of view, if this is a website where people can submit their URL and you just show a link to their site on your website, then it feels like one of those website models that's a little bit, I don't know, out of date, in that it's really hard for us to understand what we should be showing your pages for. So in particular, if you're just linking to other people's websites, what kind of queries would make sense for us to show your website for, instead of their website directly? So from a very high-level point of view, looking at this kind of directory website model, that's something where I might reconsider; maybe it would make sense to move to some more modern website model, I guess. With regards to the links on your site no longer being valid at some point: 404s that you link to are unfortunately part of the web. It's not something that we would take a look at and say, this is a sign of low quality, or it's a sign of a bad website. Users might be kind of unhappy if that's the primary purpose of your website and it doesn't actually work. But that's more of a thing between your users and you.
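Detecting expired or dead listings like these is easy to automate. A sketch, assuming you have already fetched each submitted URL on a schedule and recorded its HTTP status (the fetching itself is omitted, and the data shape is an assumption):

```javascript
// Sketch: partition checked listings into live and dead ones.
// Each entry is { url, status }, where status is the HTTP status code
// we got back, or 0 if the domain didn't resolve at all.
function partitionListings(results) {
  const live = [], dead = [];
  for (const r of results) {
    // Treat 2xx and 3xx as live; 4xx/5xx and unresolvable domains as dead.
    (r.status >= 200 && r.status < 400 ? live : dead).push(r.url);
  }
  return { live, dead };
}
```

In practice you would recheck URLs periodically, with timeouts and retries, and unlist or flag the `dead` group for moderation rather than deleting it outright.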
That said, if you're providing user-generated content and you know this user-generated content is kind of low quality, if you will, then that might be something where you might want to step in and say, well, I need to make sure that everything that my website provides is of the highest quality possible. And then things like maybe moderating the content that people submit to your website, double checking the links, double checking that it's functional, that it's useful for other people, that's something that you could do. And especially checking links is something that's usually pretty easy to automate as well. So that's something I would kind of go into that direction. That said, like I mentioned in the beginning, this kind of model of having a directory where people can drop links and you just post those links on the web, that feels like something where at some point you might want to move on to something more modern. We have two pages with almost the same content, but in different languages and different subdomains, like an article in Hindi and the same translated, slightly modified version in Tamil or Punjabi. Is there any effect in search ranking and AMP traffic on the original Hindi page in this case? Does Google do some internal translation and compare the article content for ranking in any way? So it's cool to see translated content on the web. In general, we see these pages as being completely separate. Essentially, they stand on their own. If you translate the content, it might be the same thing, but they're very different pages. So just purely from a technical point of view, we look at the words on the page and they're very different words. Like one is in English, the other is in Hindi, or in Tamil or Punjabi. So from that point of view, these are different pages that we would rank independently. We would index them independently. We don't see them as duplicate content. So that's perfectly fine. 
It's not something that you need to resolve or fix or change in any way. What you could do is use hreflang to link between those pages so that we know which version is in which language or for which country. If the traffic that you're getting to these pages is already pretty clear in that people in Hindi are landing on the Hindi page, people searching in Tamil are landing on the Tamil page, then you don't really need to use hreflang. So that's something where there is a lot of work involved with making hreflang work well. And if you're already seeing that things are working out well, you don't really need to implement that. With regards to ranking, if there's kind of a positive or negative effect there, there's no real effect there. If these are separate pages in separate languages, we will rank them individually. If someone is searching in Hindi, we'll try to show the Hindi page. We don't try to compare those pages to see if they're equivalent, if they're exactly the same. Sometimes translations will be significantly different, which might be that a certain subset of functionality is available in one language and a smaller set is available in another language. It's still a translated version of the same page, but the content itself might be very different. And that's perfectly fine. So overall, I kind of continue going that direction. That sounds good. Why does Google keep selecting the wrong canonical? For example, in Search Console, it says user declared canonical is this one, and Google selected canonical is a different one. So in general, that's really hard to say without looking at the actual URLs. So looking at the sample URLs you have there, the domain name is not mentioned, so I can't actually check to see what is being shown there. But within the URL, you have slash en-se, which I imagine is for English for Sweden or something like that, and then slash en-za, which sounds like English for South Africa perhaps. 
And the cases where I have seen something like this happen where we kind of mix up the canonicals are usually situations, well, not usually, but it sounds like this would be one of those situations where you have different versions for different countries, but the content itself is actually the same. So in this case, probably the English version for Sweden is the same version as the English version for South Africa. It's a different URL for different countries, but the content, if the content itself is essentially the same, it doesn't have to be exactly the same. But if it's primarily the same, then what will happen is we will say, well, we can save you some trouble. We'll just crawl one version, and we'll index that one version, and we'll use that as a canonical, even though you've specified that these are separate pages. So that's, from my point of view, that's probably what's happening there. I don't know without looking at the exact URLs. If that's the case, on the one hand, you can just kind of take that and kind of work with that. In many cases, that doesn't change anything. If you do need to make sure that you have separate pages shown in those two countries, then you need to make sure that the content itself is really significantly different across those pages so that when our algorithms look at those two URLs, we will say, well, this is really different content. We need to make sure that we index it individually. And in cases like that, we will index it individually, usually switching from us picking one as a canonical to us indexing both of these versions separately is something that takes a bit of time. So I wouldn't expect that to switch from one week to the next, but maybe over a couple of weeks or a couple of months that would settle down and we'd index both of those versions. 
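For reference, the hreflang annotations discussed here are declared as alternate link elements in each page's head, with every version, including the page itself, listing the full set. The URLs below are hypothetical, matching the en-se / en-za pattern from the question:

```html
<!-- The same block appears on https://example.com/en-se/ and /en-za/ alike -->
<link rel="alternate" hreflang="en-se" href="https://example.com/en-se/" />
<link rel="alternate" hreflang="en-za" href="https://example.com/en-za/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/en/" />
```

Note that hreflang links the versions together but does not, on its own, stop Google from choosing a single canonical when the content is essentially identical, which is the behavior described above.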
For one site I'm helping, the crawl stats in Search Console show about 600,000 URLs crawled per day, but the site only has 100,000 crawlable URLs in total; about 30,000 of those are indexed. I know crawl stats take all URLs into account, not just HTML pages, but that's a massive difference. Most larger-scale sites I'm helping are not showing a six-times difference between crawled URLs per day and total URLs on the site. Let's see, do you know why they would be showing such a large number? It is really hard to say without looking at the site itself. The thing that's sometimes a bit confusing is that the count that we show in Search Console includes, on the one hand, all URLs that we pick up for crawling, which means it also includes everything that we use for rendering. So things like images, CSS files, JavaScript, server responses: all of those are URLs as well from our point of view, so we would show those there. And the other thing that's sometimes a bit confusing is that this includes all requests that go through the Googlebot infrastructure. And that also includes things like the ads landing page checks. I think it also includes, for Shopping and e-commerce, the product landing page checks, all of those things. And depending on the type of website and how it's active on Google, those numbers can sometimes be quite significant, in that we request a large number of URLs from a site to do these ads landing page checks, depending on how you have Google Ads set up. And those would also fall into this graph of crawled URLs per day. Technically, it's not crawling, and technically, it's not pages, but essentially everything that goes through that infrastructure is shown there. So that might be something that you might see there. But again, it's really hard to say without knowing more. And that's something where, if you look at the server log files, then just looking at the user agent alone would not show that.
But usually, if you take a rough look at the IP addresses and check them against the Googlebot IP addresses, then you should be able to see some overlap there. The other thing to keep in mind is that a lot of these requests can be at essentially different priorities, in that they might have different importance. So if you're seeing that we're crawling too much, if this is causing a load on your server overall, you can limit the total number of requests. And that will limit the total number of requests that we make to the site. But that doesn't mean that the crawling for web search goes proportionally down. It could be that we continue to crawl everything that we need for web search, and that instead it's the URLs that we use for double-checking 404 pages, all of those random things, where the requests go down. So that might be something to take a look at, if we're crawling too much for your website. Why would my website.com/en-se site get a majority of impressions and clicks from India and the US? How can I make sure that the site only shows up in the Google index for Sweden? This is the only site that's giving me problems; all 12 other countries have sites that seem to get the correct local traffic, except for this one. So I think this is the same kind of question, or similar, as before, in that you have English for Sweden, and probably, I mean, I don't know your website to be sure, but probably what our systems are seeing is the same English content from multiple countries. And we're picking maybe the Sweden version as the canonical for all of that English content, and that's the one that we would be showing in search, because that's what we picked as the canonical. So in a case like that, it can happen that we show your Sweden page in India and in the US, because we think these are all the same pages.
In general, the hreflang annotations help us a little bit, but what's really important is to make sure that the content is different, so that we don't fold these together, if you care about this difference. Another thing to keep in mind is that all of this geotargeting is not guaranteed to happen. So it can still happen that we recognize that the English-for-Sweden page is different from the English-for-India page, for example, but if we think the English-for-Sweden page is really the right English version to show to other users, then maybe we would show that English-for-Sweden page even in India. So geotargeting with hreflang is not something that is guaranteed. It's not a restriction, where we would only show this page in those countries. Rather, it's a signal that goes into our systems and says, well, this is really my preference, I'd appreciate it if you did this. But it's something where our systems will still look at that and say, well, that's good to know, but we still really think it's different. So what you can do in cases like that is try to catch that on your side. So assume that users from all around the world will be accessing all parts of your website, and try to catch those. Try to show a banner on top, for example, saying, hey, it looks like you're coming from India, and this is the page for Sweden; we have a page specifically for India, you can click here to go there. By showing a banner on top, you're kind of sure that it won't affect search, whereas if you automatically redirect, then any time Googlebot crawls the page from a location that you're automatically redirecting away from, we would not be able to see that content at all. So showing a banner would be the right way to catch these users that are going to the wrong version of your website. Yeah, that's kind of the direction I would go.
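A sketch of that banner approach, assuming you can geolocate the visitor on your own side; the country codes, alternate-URL map, and message wording are placeholders:

```javascript
// Sketch: decide whether to suggest a better country version, without
// redirecting, so Googlebot can still crawl the page it requested.
function bannerSuggestion(visitorCountry, pageCountry, alternates) {
  if (visitorCountry === pageCountry) return null; // already the right page
  const target = alternates[visitorCountry];       // e.g. { in: '/en-in/' }
  if (!target) return null;                        // no better version exists
  return {
    message: `It looks like you're in ${visitorCountry.toUpperCase()}.` +
             ` We have a page for your country.`,
    url: target,
  };
}
// In the page, render the returned message as a dismissible banner;
// never auto-redirect based on it.
```

The design choice matches John's advice: the decision only produces a suggestion for the user, while the page Googlebot requested is always served in full.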
I mean, the other approach for all of these international sites is to try to simplify things a little bit, and to say, well, actually, I just care about the individual language versions, and maybe just have one English version for multiple countries. By doing that, you can often reduce the number of URLs that you need on your website, which makes things a lot easier overall. So that might be another approach to take there. But again, it's really hard to say without knowing your website and understanding why you have different English versions for different countries. Sure. Sure. I wonder if Google is thinking about changing the way it treats the hreflang tag in this respect. Because when a webmaster, like the friend who posted this question, specifically wants to tell Google, in effect, which users should be shown which website, then I think it would be a good idea for Google to actually follow this guideline. Wouldn't it be? I think it would be good, but it's always tricky in practice. So I don't know, for example, in a case like this, but I see this a lot with German websites. I imagine that's also where you've seen it, especially the set of Germany, Austria, and Switzerland, where often the content is exactly the same and they're targeting multiple countries. And that's something where we do try to recognize when there is something country-specific on a page, so that we don't accidentally deduplicate these URLs into one canonical. But it is something where, if you really want to be sure that this doesn't happen, you need to make sure that nothing on our side would assume that these pages are actually the same. The tricky part with hreflang that I've seen is that some sites essentially implement all variations. So it's the number of countries times the number of languages; you can generate all of those pairs.
And that's something where we just end up getting lost when we try to crawl and index sites like that. So that's where our systems kind of have these protections built in, where, if we can recognize that you're serving the exact same content for all of these versions, we can simplify that for you, so that we don't have to crawl all of these versions and we don't have to index all of these versions. But obviously, there's middle ground in between there, too, where if you just have two or three different versions, we should be able to say, you have hreflang set up properly here; we should treat those as separate URLs. I was just wondering, because you can easily say, OK, let's stick to the Germany-Austria-Switzerland example. So in the hreflang tag, I could just say it's de, which means all German-speaking countries in the world. Or I can say de-DE, for specifically Germany. And thinking about the URL structure and what kind of an effect that has, or a psychological effect, if you like, on the users, then it might be worth it, especially thinking of all the English-speaking countries in the world. Or say I'm aiming at Russian-speaking countries in the world; then you have a conflict, probably, which is a bit racist, I admit, but some Ukrainian people probably won't like to see .ru in their URL. And I'm not able to address them in the way it works currently. Yeah. So we do also have some protections in there, which are, I think, almost a bit more confusing, in that in the situations where we recognize the hreflang, and we recognize that these pages are the same, we will fold them together. We will pick one canonical. But because we know of the hreflang, we will show the appropriate version in the search results. So for Germany, Austria, and Switzerland, for example, we might pick one as a canonical, maybe the one for Germany, and in the search results, we would show the Swiss URL or the Austrian URL.
However, because we have chosen one as a canonical, when you look in Search Console, you will see all of the reporting on that canonical URL, so on the Germany URL. So if you just look at Search Console, it might seem like, well, Google has everything on the German-Germany URL. But actually, the ones that we show are the local versions. So that's something where, if you just look at Search Console, you could think, well, it's totally wrong. But if you look at the search results, you'll often see that actually the correct URL is being shown. So it's sometimes pretty complicated. And that's kind of why I tend to point in the direction of, if you want them to be treated separately, then make sure that they're really separate content. Because then you don't have the situation where we'll try to fix it for you, and then we show you confusing data in Search Console, and then you're confused, and then you look at it yourself, and it's like, that's not what Search Console is showing, Search Console is broken. It's this endless cycle. So if you clearly make them separate, then you don't have to worry about it. I got that point. It's just that, like so many times on the web, you're offering one service, well, exactly the same service, in different countries. Therefore, thank you very much. Yeah. John, I have a follow-up on that. Sure. So your friends at Microsoft, Bing Search, or whatever they're calling it today. I actually spoke with them on Friday, and I was asking them about hreflang. And they weren't so into it. And they said potentially they're thinking about new ways of going about doing the same thing as you guys. Have they been in touch with you guys? Probably not. I don't know. I don't know. You can't disclose that? I mean, on the one hand, Gary is the one who is generally in touch with them with regards to a lot of these search things. And I have no idea. I think in general, hreflang can be really, really tricky.
And it's something where, at first, when you look at it, it's like, oh, you have two different versions of the URL, and you just link them together. It's so easy. But with a larger website, with more than just two or three languages, it gets really hard. And we've looked into different ways of making that easier. I came up with a really neat scheme that I thought was really easy until it turned into using regular expressions to map out how your website works. And at that point, it's like, well, I don't know if this is really an improvement. But if anyone, if that's from Microsoft or anyone out there has an idea for a good and robust and easy to understand way to make international websites so that they kind of work well, then by all means come send it to us and we'll try to see what we can do. OK, good to know. Thanks. John, another follow-up on that, just curious. So in your example with the three German pages with very similar content for three separate audiences, basically, would it help? It makes total sense how you treat that in Search Console and how you treat that overall from a search point of view. And I think that's helpful, because there are situations where you might want to have those three different German versions, even if you can monitor them properly. Does it make any difference in how the webmaster sets canonicals for those three pages? So you mentioned you're selecting one canonical version for those three pages. Should the webmaster also do that? Does that make any difference? I mean, we kind of expect the pages to be self-canonical. So the German for Germany page is canonical for German for Germany, German for Austria page is canonical for German for Austria. If you canonicalize all of those to one page, then we will probably follow that, and we will only index the German for Germany page or whatever you pick, and then we won't be able to verify the hreflang links between those pages. You would miss the hreflang. 
We would kind of miss that, yeah. Good point, yeah, OK. And it's something where it's sometimes tricky with regards to the content on the page, but we do try to watch out for things like currencies, like addresses, phone numbers, to make sure that we're not missing kind of a country-specific version of a page. So if you have all of the same currency, all of the same phone numbers and addresses on all of these pages, then that's easier for us to say, well, we should just pick one. Whereas if you have an address in Germany, an address in Austria, an address in Switzerland, then it would at least be something where I would take that to the indexing team and say, hey, we should be able to figure this out. Even if the rest of the content on the page is the same, we should be able to recognize, this is an address in Germany, and this is an address in Switzerland, and we should not mix these pages. What if it's a question of just the currency or something very minimal, something like that? Just the price. We should still pick that up. I think that's something we should pick up. It's trickier if you have something like a currency drop-down and one currency is selected on one page and the other currency is selected on the other page, because then kind of the same currencies are mentioned on both of these pages. It's just one of them is default here, one of them is default there. But if that currency is really on that product page, then we should be able to pick that up and say, that's Swiss francs and that's euros. Those are different things. We should not fold those together. OK, so in Search Console, you would see them separately then? In that case, yes. From my point of view, I would see it as a bug if we don't pick that up. I know sometimes we do run into that situation where people will come to me and say, why are you folding these pages together when they should be kept separate? But I would see it as a bug.
That's something that I would definitely bring up to the team so that we can figure out a way to catch that better. Got it. Just on this one around the euro, obviously, you've got Belgium, France, Netherlands, which are kind of part of this thing, plus Switzerland, Germany, and Austria. So you would advise trying to build more local signals around these websites as well, to enable this to be localized and not get picked up and merged together, because some pricing in cross-border might be different as well. And I know with cross-border e-commerce, this can then have an effect on pricing and things like that. I mean, the easier you can make it for us to understand that these pages are specific to one country, the more we can kind of follow that. So things like local phone numbers, local addresses, all of that can play a role there. So I think that the price itself might be tricky if there is like $19.95 on one and then $21.95 on the other. It might be that we would look at that and say, well, we just crawled it again. Maybe the price will change next week. It's not a matter of which URL we crawl, but rather when we crawl it. But if there are other local signals on that page, then that makes it something where, at least from my point of view, I would pass that on to the team. That's something definitely worth bringing up to us as well, to kind of make sure that we treat them as separate pages. Is it legitimate with hreflang as well, just to try and simplify it, to keep the thing going there a little longer? We had a question in the forums. Someone had a multilingual site, once again, Germany and Switzerland. If you just use the hreflang language groups and leave the different countries for Google to decide. So all the English ones are there. All the German ones are there. It just makes life a lot easier in terms of an awful lot fewer URLs to cross-link. Yeah, that's something you can do.
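That simplified setup, grouping only by language and leaving the country choice open, can be sketched as a set of language-only hreflang annotations plus an `x-default`. This is a hedged illustration: the URLs are hypothetical, and the `hreflang_tags` helper is just one way to emit the full set of link elements that would be repeated on each listed version.

```python
# Minimal sketch: emit hreflang link elements using language-only codes
# (e.g. "en", "de") plus x-default, leaving country selection to the
# search engine. URLs are hypothetical; in practice the full set of tags
# goes into the <head> of every listed version.

def hreflang_tags(alternates, default):
    """alternates maps a language code to the URL of that language version."""
    tags = [f'<link rel="alternate" hreflang="{code}" href="{url}" />'
            for code, url in sorted(alternates.items())]
    # x-default is the fallback for searchers who match no listed language.
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{default}" />')
    return "\n".join(tags)

print(hreflang_tags(
    {"en": "https://example.com/en/", "de": "https://example.com/de/"},
    default="https://example.com/",
))
```

With only a handful of language versions, the number of cross-links stays small, which is the simplification being discussed here.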
I mean, if we don't have an hreflang pair that maps specifically to something that you have on your pages, then we kind of have to guess at that anyway. So if you have German and English and someone in France searches for your site, then you didn't really specify which one you want. But we can tell they want to go to your site because they searched for your site name or something. So we'll just kind of try to pick one. And John, can we have some hreflang tools within Search Console, please? hreflang tools, I don't know. I wish we would just have something a lot easier. But yeah, I think hreflang is one of those, I don't know, awkward things where, since a lot of people at Google work from the US, it's kind of like, in the US, you tend not to worry about hreflang as much because you just make a global site. And global means like all of the US and Canada, and it's kind of the same thing. Whereas in Europe, you drive like an hour and you're in a different country, and suddenly hreflang is a really big thing. So that's something where, because so much is happening around Search in the US, it's kind of one of those neglected topics where we always have to keep pushing and kind of making sure that, OK, don't forget about this and keep this supported. People need a way to make international sites work well. Even in the US as well, you have the Spanish language. So you've got South America plus North America with large volumes of Spanish speakers. So I know more and more larger sites are doing both English and Spanish versions. So I think it's something that's becoming more important. Yeah, I think different language versions is a lot easier than the same language in different countries, because with different language versions, it's usually the case that someone will search in Spanish and those Spanish words will be on the Spanish version of your site. It's not so much that we would accidentally show the English site if someone is searching in Spanish.
So different languages are almost one of those situations where I'd say, well, maybe you don't even need to use hreflang, because people are automatically searching in your language and that matches that specific language version. But in South America, where you have different countries that all speak Spanish, if someone's searching in Spanish in, I don't know, Argentina, it would be good to make sure that we show them whatever kind of local version would fit for them. It's interesting. I was going to ask, and I don't want to sound like that dumb New Yorker or something, where all you think about is that you live in New York, you live in the US, and you don't think about the rest of the world. But it seems like a pretty significant portion of the questions that you get are around hreflang. But how much of the web is using hreflang? Is there some data point out there that you could share, even from a non-Google source? I don't know. Is it more than 10%? Probably not. I don't know, probably less than 10%. But I think at some point we did a search on archive.org, or what is it, HTTP Archive, where you can do BigQuery searches across the web to see what kind of content is there. And at some point I did one for the different hreflang versions. But I need to dig that up. Maybe I can tweet about that. Cool, thanks. I think usually what happens with hreflang is these questions show up in the German Hangout, because that's where all of the German-for-different-countries-in-Europe kind of thing happens. And in the English Hangout, it's, I don't know, every now and then, but not as regular. Cool. Cool. OK, let's see. A handful of questions left, I think. Let me refresh, see if anything more shows up. Let's see. Is a Facebook comment section on the blog or news portion of a website good or bad for a medical site's E-A-T? We recently removed ours and saw a small dip in traffic. I'm wondering if there is a correlation.
I suspect this is totally unrelated to any change in traffic that you're seeing. I don't actually know if Facebook comments are indexable. That might be something to double-check first off, to see if Google can even render those comments. So that's probably what I'd double-check first. And if we can't render those Facebook comments, then probably they have no effect at all. We have a news website accepted into Google News and a social discussion forum located in different directories, but on the same base URL. We believe the content in the forum is valuable, but hurting the overall search ranking of our news. How do we tell Google to treat these sections separately, since the news section is quality in-depth content and the forum is mostly user-generated? It's hard to say how you need to separate those. I don't know, actually, from the Google News side whether there's something specific where you need to say everything for Google News is in a specific sub-directory. That's something where you might want to double-check in the News Publisher Help Forum or in the Help Center for news publishers. Otherwise, if you really want to keep these separate, then I would make it as clear as possible that these are separate parts of your website, which might be something like using sub-domains to separate that out, or having a clear slash-news section and a slash-forum section on your website. Just something where it's clear that one is not a subset of the other part of your website. So that's kind of the direction I would head there. In general, having user-generated content, a forum, discussions on a site is not something that I would see as being negative, per se. But it does depend on how you kind of manage that part of your website. Because ultimately, if you're publishing content on your website, then that's what we try to index for your website.
And if you're publishing random user-generated content from random people from across the web, and it's totally unmoderated, and sometimes it's great, sometimes it's crazy, sometimes it's something mixed in between, then it will be really hard for us to understand what we should focus on with your website and how we should treat it. So that's something where I would try to make sure that the content that you're providing from a user-generated content point of view matches the kind of content that you want to provide in the news section of your website, too. Is it actually cloaking, or forbidden, if I deliver a faster website to Googlebot? For example, if I leave out all 20 web trackers and retargeting pixels so that Googlebot gets through the page faster and can render the pages faster. I don't think we would necessarily see this as cloaking, but I suspect you're not going to get a lot of value out of it. On the one hand, when it comes to speed, we use the Chrome User Experience Report data. And that's something that is based on what users actually see. So if you're making a page faster for Googlebot, that's kind of nice, but it would be nicer to make the page faster for users, because that's what we would kind of pick up on. With regards to rendering, in general, doing something like this is similar to pre-rendering a page on the server side. And from our point of view, that would be OK. From a practical point of view, if you're doing this for a website that doesn't need to be pre-rendered, then you're just introducing a lot of extra functionality that could break. So that's something where, purely from a maintenance point of view, I'd recommend trying to find a way to avoid having to do this kind of special version for search engines versus the version that everyone else gets. Sure. Back last year, Barry covered a tweet of yours about doorway pages.
Basically, the person had asked, oh, I'm going to automatically create 1,300 city-based landing pages to rank for phrase plus city name. Now, that obviously sounds like mass spam and potential doorway pages. Can that strategy actually be useful and valuable, and not considered a doorway page, when done in a much smaller context, like maybe a handful of cities where you're actually adding content that is valuable to somebody in that city? For example, car dealers love to do this. They want to rank for cities all around their area. And for somebody in a city that's maybe 100 kilometers away or 50 kilometers away, maybe you offer them something special in order to come to the dealership and the business. So if you actually provide value, do you see that as something that's beneficial? Yeah, that can make sense. I mean, obviously finding that balance there is tricky. That's what we're doing. You would do it for ranking, of course. But if you're serving the user at the same time, I think we can all be friends. Yeah, I mean, it's also something where it's more than just, well, I have something unique for that city kind of thing, where one thing that we often see is just general information about the city. So you're a car dealer, and you're like, people from the city can come and visit us, and the schools in the city are like this, and the population is like this, and all of these things. It's essentially auto-generated content. It's unique, but it's essentially still a doorway page. On the other hand, something like you mentioned, where you say, well, I really like this city, and I'm doing kind of this special deal. So if you come all the way from this city or this region to visit me, then I'll have something special for you or whatnot. Here are reviews from people from your city. Here are the top models that may have been purchased in your city. Something that maybe they actually care about, perhaps.
So that you might consider OK. It's when you're kind of just spamming it out and just trying to do things from a ranking perspective, then that would be considered a doorway page. Does Google have a penalty for doorway pages, or does it just, like, flag an individual page? I think we, at least, used to have something for doorway pages, a manual action. But I don't know if that's something that we still do from a manual action point of view, or if it's just our systems trying to pick that up automatically. A lot of these manual actions have evolved over time, where we would say, well, we need to do this manually. And at some point, we figure out how to do it algorithmically. And it kind of evolved in that direction. But I don't know, in particular, with regards to doorway pages. At least it's something that I haven't seen people talk about that much recently. Can we get a list of those that have switched from manual actions to algorithmic? Thank you. Appreciate it. I mean, if you're working with sites that get hit by manual actions, maybe you have a list already, or maybe, I don't know, on Black Hat World, people will have one. There was a time where we thought, like, you stopped sending out manual actions for links because of Penguin and stuff like that. And it was quiet for a while. And we're like, oh, no more manual actions, because you just ignore these links. And then a whole boatload of manual actions was released over link spam and stuff like that. So I don't know, in particular, around the link spam stuff. But it is something where, from our point of view, it makes sense to find ways to do it algorithmically rather than just purely manually, because the web is just gigantic. And we can't manually review the whole web. We have to find ways to do as much as possible algorithmically. And in many cases, there are still weird things out there that we don't catch algorithmically that maybe we do have to take manual action on. Cool.
OK, we're kind of out of time, so I'll pause here. Thank you all for joining in. I hope you found this useful. And if you have a scheme that can replace hreflang that is easier to understand and implement, let me know. And we'll see what we can do. Otherwise, I wish you all a great week, and maybe see some of you again on Friday. Bye, everyone. Bye-bye. Thank you. Bye.