All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I am a Webmaster Trends Analyst here at Google in Switzerland, and part of what we do are these office hours Hangouts. Looks like a bunch of questions were submitted already, but if any of you want to get started with the first question, you're welcome to jump on in.

OK, I'll start, then. My name is Koji Kawano. I'm an in-house SEO, and we were hit by a manual action back in September. We've been trying to resolve this. It's a user-generated spam manual action. Basically, people took advantage of our internal search and generated a bunch of URLs, and Google crawled them. There are thousands out there. So we noindexed them. We have not been blocking that path in the robots.txt file, but the noindexing didn't really work. So we are now returning 410 instead, and that didn't work either — meaning I submitted a reconsideration request, but that didn't work. So now we've created an XML sitemap with about 3,000 of those URLs and are trying to get Google to crawl them and recognize the 410. So I don't know whether we're on the correct path, or how long this is going to take.

OK. Do you want to drop the URL of your site in the chat, maybe? I can take a quick look to see what might be happening there. Without knowing which URL it is, my immediate guess would be that these URLs are still indexed at the moment, and the manual action is just in place to prevent those URLs from showing up. But this shouldn't affect the rest of the site. So let me just copy and paste that over here. Let me just double-check really quickly. OK, thank you very much. And we can kind of see — I don't see any manual action there at the moment. Let me see if I have the right URL. Yeah, I don't see it. Oh, wait, it's .org. OK. Yeah, that makes a difference.

What I mostly see there is a bunch of URLs that are essentially removed for spam reasons, but these are very specific URLs. So those are probably what you're seeing in Search Console, or a sample of what you're seeing. And essentially what that means is those specific URLs are removed, and everything else is ranking as normal. So from our point of view, it's not that you need to take care of this — we're kind of taking care of it for you in the search results already. It wouldn't negatively affect your site to do something to have those removed a little bit faster, but it's not something you would urgently need to do. So the manual action that you see there, you could essentially just leave in place.

OK. All right. Then why is the message staying there, though? Because from a spam point of view, we're still removing those URLs, and we're removing them for web spam reasons, essentially, just because they could otherwise be indexed as well. So it's something where, if you were to remove them with, say, the URL removal tool, it would be the same thing as the web spam team removing them. We're flagging them as spam because that's essentially what they are. People taking advantage of the search results pages on your site — that's kind of obnoxious, and we're kind of used to it. We try to surgically remove those specific pages from the search results, and if we can do that, then we don't need to affect the rest of the site.

OK. Do you recommend that the XML sitemap be left up? It's been out there for about a week now. I don't think you need that.
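For reference, a minimal sketch of the two approaches mentioned here for keeping internal search result URLs out of the index — the exact markup isn't from the site being discussed, just an illustration:

```html
<!-- Option 1: the spammy URL still resolves (HTTP 200), but carries a robots
     noindex in its <head>, so it drops out of the index once recrawled: -->
<meta name="robots" content="noindex">

<!-- Option 2: the server answers "HTTP/1.1 410 Gone" for those URLs instead
     of 200. Either way, as noted above, the path must NOT be disallowed in
     robots.txt, or Googlebot can never recrawl the URLs and see the noindex
     or the 410. -->
```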
I would just work to prevent new pages like that from being indexed. And if you're currently returning 410, or if you have a noindex on those pages, then those will drop out of the index over time. The manual action that you have there is something that will also expire over time, but usually that's something that expires over a longer period. So it's not something where the web spam team would say it'll drop out next week. But again, it's not something that you'd need to take care of, because the web spam team is already cutting those specific URLs out, and the rest of your site isn't affected by that. OK, good to know. Thank you.

All right. Then there's a JavaScript question: a site has a JavaScript dropdown that leads to product paging, but it seems that Google can't crawl the dropdown — should I submit the URLs? Yeah, regarding that question: the website is using JavaScript, and I'm not sure which tool I should verify it with — a third-party tool, or should I use the URL Inspection tool within the new Search Console — to see whether the crawler will see those paging links within the dropdown. And the dropdown is on desktop, but on mobile there's a different dropdown. So I'm not sure whether Google can identify those desktop and mobile versions.

Yeah. OK. So at the moment, the site isn't being mobile-first indexed, so we'll use the desktop version for crawling and indexing. I could imagine, depending on what all is in the mobile version of the site, that we might switch it to mobile-first indexing, and then that might make it harder to crawl the rest of the website. But for evaluating when a website is ready for mobile-first indexing, we also take into account the links on the site, especially the internal links. So if, for example, those internal links are missing completely on the mobile version — where you just have a text field and you type the query, rather than using a dropdown — then that's something where we probably wouldn't switch the site over to mobile-first indexing. But still, it seems like something that's worth double-checking on your side, to make sure that the dropdown also works on mobile. And you should be able to see that in the mobile-friendly test: if you look at the rendered HTML, it should have the dropdown and the links there, too. So my guess is that at the moment it's not critical, but it's definitely something I wouldn't put off for too long.

OK, yeah. The mobile version of the dropdown is functional; it's just that the layout and the interaction are a bit different. Yeah, that's fine. If it's functional, and if we can still find the internal links on the mobile version, then that's what we're looking for. All right, great. We're looking into that, thanks.

All right, bunch of questions in the chat today, so let me just run through those quickly. A question about structured data: in order for rich snippets to start displaying reviews instead of votes, we switched over to reviewCount instead of ratingCount. But this is causing a warning to display for aggregateRating when the field is empty, when there are no reviews. How can we solve this? So that's kind of normal. If you don't have any reviews on there, then I wouldn't mark it up as such. So my recommendation would be to just leave out that structured data if you don't have content that you can fill in there.
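As a sketch of what that advice looks like in JSON-LD — all names and values here are hypothetical:

```html
<!-- With reviews: include aggregateRating with real values. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.2",
    "reviewCount": "17"
  }
}
</script>

<!-- Without reviews: emit the Product markup with no aggregateRating block
     at all, rather than an empty or zeroed-out one. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product"
}
</script>
```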
In general, if there's a warning, then that just means that for those pages, we wouldn't show any rich results. So that's also not particularly critical — we're already filtering those out, but we're letting you know that your markup is kind of weird here, in that you're telling us there are reviews, but at the same time you're also saying there are actually no reviews here that we can show, which is why we're flagging that as a warning. So in general, I think it's fine to switch over to that markup, but make sure that you're filling it out properly, or just leave it out when you don't have anything to fill in.

A question about Google's search engine misinterpreting what companies are selling. Could you elaborate on that? (He answered it — he said to leave it out. It doesn't make any sense.) Hello. OK. Hello. Go for it.

Yeah, I wrote a longer explanation and submitted it via the site. But basically, we make the ingredients for biological disease research. And we've noticed that after the August core update, we lost a lot of traffic in organic search. Couple that with what we see in AdWords, where we get a lot of ad disapprovals because it thinks that we're selling pharmaceuticals. We don't sell any pharmaceuticals — like, zero. What we do is sell things that go into a laboratory, used by researchers to understand the interactions of particular substances with cells and genes and all of that. And what we've seen in the AdWords space is that they're only doing a simple string search: if they find a word that they think is a medicine, they disapprove the ad. But we found the word in titles of publications, in captions about what testing was done, all of these kinds of places. And so that makes us wonder if there is a better way to at least start providing signals to the search engine that we're not in the medicine business.

That's an interesting question. I don't know offhand what we need to do there or what we need to pick up on. What might be useful there — I think you posted in the Hangout as well — did you include a link to your site somewhere? Well, I can put a link to our site in really quickly. OK. Because I probably need to pass that on to the team to see if there's something specific that we need to do there, to see if we're picking things up properly. One thing to keep in mind is that the whole ads side is generally completely separate from the search side. So the ranking within the ads side, the approvals within the ads side — they wouldn't reflect anything about how we would pick up a site for search. So if from the ads side you're seeing disapprovals or getting flagged for other things, that's kind of the ads side maybe getting confused, but it doesn't mean that the search side would see it similarly.

Well, the problem is that I don't have any confidence in the search engine itself, which is used by both systems. What I've done is research to find where the drug term appears, and it's pretty obvious that they're picking up the drug term without understanding the context. The drug term might be used in an explanation of a test result: we used this particular drug substance in the test, and this is what it looks like in the image. It's not what we're selling — it's just the word. So what I'm worried about is that the context is being missed completely by search. I could imagine that's tricky, but I'd really need to pass that on to the team here and see what they think —
whether we're picking that up properly, or whether there is some additional information that you could be providing on your side — that I could pass back to you — that would make it easier for us to pick this up properly.

Well, one of the things we've been thinking about doing is using schema information to pass on and say: here's who we are. We sell to researchers. We don't sell to consumers. We sell to organizations that do disease research. Those kinds of things in the schema, in the definition. Would that help the search engine figure this out? Those are the kinds of tips we're looking for. Yeah, I think that would probably help — to have some more structured data around what specifically you're providing there. It might also be a matter of double-checking the queries where you're seeing these issues and thinking about: is this really something where a researcher would be looking for this term, or is this something where a mainstream user would be looking for this term and expecting, perhaps, something different? But I realize there is crossover. Yeah. I think it's a really tricky topic, so I really need to figure out with the team what we could be doing differently, if our algorithms are picking this up in some kind of a weird way that we could be handling better.

Well, I'll give you our standing offer to Google: we have a PhD in biochemistry from Cambridge, a PhD in microbiology from Wisconsin, and a PhD in molecular biology from the University of Washington and USC. We'd be more than happy to help teach the engine how to recognize biological reagents and targets. OK, that's cool. I'll pass all that on. Thank you.

All right, let me grab some questions that were submitted as well, so that we don't exclude those completely. The first one I have on my list: sitemap URLs say "discovered, but not indexed," and there's no explanation of why these pages were excluded. I think we covered this in one of the previous Hangouts, and Barry wrote a long article about this. Essentially, for a lot of URLs, we just don't index everything, and that's kind of normal. In the past, in the sitemaps information in Search Console, we would show that you submitted so many URLs and we indexed some smaller number of them. And that's completely normal — we generally don't index everything that we find. So for the most part, this could be completely normal and not something that you really need to worry about. Essentially, we're trying to recognize the relevant URLs on your website — the ones that we would show in the search results — and trying to crawl and index those.

How do you know whether a site has been penalized by Google? We haven't had any formal notification that the site has been penalized, but I've been working on the site for a while with no return. Is there something that we're missing? So in general, if there is a manual action taken on Google's side, then that would be visible in Search Console, in the manual actions section. That section, by the way, is moving to the new Search Console as well — I think tomorrow. So it'll look slightly different in Search Console, but it should definitely be there. If there is no manual action, then essentially what you're seeing is the normal algorithmic ranking that we have for all websites. And it can be the case that we don't rank all sites highly. That's kind of normal, so not necessarily a sign that something is completely wrong.
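An aside on the schema idea from the research-reagents discussion above. John doesn't promise that markup would fix the classification, but as an illustration of "schema information in the definition," something like the following could state who a product is for — the audience property is a real schema.org property on Product, and every value here is hypothetical:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Research Reagent",
  "description": "For laboratory research use only. Not a pharmaceutical; not for human or veterinary use.",
  "audience": {
    "@type": "Audience",
    "audienceType": "researchers"
  }
}
</script>
```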
Coming back to that penalized-site question: it's formulated in a way that makes it hard to understand the details, though. When you say "with no return," that could be anything, from it's indexed but just not ranking, to maybe it's not indexed at all. Maybe there's a technical issue behind this rather than a ranking issue or a manual action. What I would recommend doing there is going to the Webmaster Help Forum, including your URL and the queries that you're trying to target, and seeing what the experts say about this case. If there's something technical involved that they can pinpoint — like maybe you have a setting on your server set up wrong, or maybe Google is confused by a configuration that you're using — that's something they can flag fairly quickly and help you resolve. If it's something more on the quality side, then oftentimes they can give you some tips as well with regard to the direction you could be heading.

The URL Inspection tool shows me a page that can't be indexed because it has a meta robots tag set to noindex. However, there's no such tag in place, so I don't understand why. I took a quick look at this particular URL, and from what I can tell in our systems, we last crawled or processed this URL a couple of months ago. So it might be that there's no meta robots tag on there now, but maybe there was one in the past. And in general, this is something that does take a bit of time to be reprocessed. So a couple of months is kind of normal for a URL that might not be your home page or a primary page on your site. What will probably happen there is we'll recrawl and reprocess this at some point, we'll see that there is no meta robots tag anymore, and we'll index it normally. You can also encourage us to do this a little bit faster by using the URL Inspection tool, running the live test, and from there, I believe, submitting it to indexing. That helps us to understand: oh, this URL changed, and we should make sure that we have the most recent version so that we can reflect it in the search results. So that's the direction I would head there.

Can you include a link to the article from Barry about URLs not being indexed? I am sure someone will drop that link into the comments of the post.

I noticed my site was moved to mobile-first indexing in August. In September, there were major drops in organic traffic. The content is the same on both devices. What could be the possible reason? So in general, moving to mobile-first indexing is something that happens very quickly. And if there were any issues associated with the mobile version of the site, then on the one hand, we would try to avoid moving it to mobile-first indexing, and on the other hand, you would see those changes pretty much immediately, as soon as we have the mobile-first indexed version. So if you're seeing changes in September or sometime later, then those would be normal organic ranking changes, as there always have been. And I have seen on Twitter that, I think in September or at the end of August, there were some ranking changes that people were noticing. So I could imagine that these are just the normal core ranking algorithm changes that we always have, and not something related to mobile-first indexing.

Regarding rel=prev and rel=next indicated in the meta tags: in this case, if paginated pages are not linked from their parent page, does it impact the crawling and indexing of the paginated pages?
Yes — rel=next and rel=prev help us understand which pages belong together, but if there are no links on the page at all, then it's really hard for us to crawl from page to page. So using the rel=next and rel=prev link elements in the head of a page is a great way to tell us how these pages are connected, but you really need to have normal on-page HTML links that go from one page to the next, and maybe to a view-all page or something like that. So that's still really recommended.

I noticed the structured data question — I think we talked about that briefly, but I think you had a question about some of the details, the reviews and ratings. Do you want to ask more about that? Sure, can you hear me? Yes. Hi, so I run an SEO company, and we create structured data for clients who have e-commerce websites. When we create the structured data, we are moving over from votes to reviews. So instead of using — I think it's ratingCount — we're now doing reviewCount. And this has resulted in a warning with the structured data where it says, I believe, "the aggregateRating field is recommended; please provide a value if available." So basically, if the client doesn't have any reviews, then there is a warning. And we're trying to figure out how to get rid of the warnings, because if we don't put in any aggregate rating, that will result in an error as well — it'll say that the aggregate rating is necessary.

I would say if there are no reviews there, then I just wouldn't use the review markup for those specific cases. So if there are no reviews, don't use the review schema markup for that product page? Yeah. One moment — OK, so I was just talking to my husband. OK, so just leave it out for that product page. OK, I'll see if that is possible. Yeah, I mean, if it's a warning, then it wouldn't be critical. It's not that we would ignore the rest of the structured data on the site. It's just saying: hey, you're supplying the review markup here, but you don't have all of the details we'd need to actually show it. So at that point, you might as well just remove the markup if you don't have any of the content that you'd provide in it. OK, all right, thank you. Sure.

All right. How do you implement custom headers in Azure Blob Storage? I have no idea. Maybe there's someone watching this who knows more and can leave a comment, but I have no idea how this is set up in Azure and how you would provide that for HTML pages. It specifically asks about adding a canonical URL for a PDF. That would be a link element in the header of the HTTP response, but I have no idea how you would set that up in that specific configuration.

A new page on an old website with a completely irrelevant theme, versus creating a subdomain, versus creating a new domain — which one is easier to rank? Does Google, in this case, use site-level ranking factors? So I think this is asking: if you create a new page, should you put it on an existing website, on a new domain, or on a subdomain? I don't know. For the most part, I would not worry too much about the SEO side of things in that regard. If you really need to host it separately and put it on a separate domain, then sometimes that's more of a technical question or a policy question than an SEO question.
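On that PDF question: the link element in the HTTP response header that John mentions looks like the snippet below (URLs hypothetical) — how to get Azure Blob Storage to emit it is the part he couldn't answer:

```
HTTP/1.1 200 OK
Content-Type: application/pdf
Link: <https://www.example.com/downloads/whitepaper.html>; rel="canonical"
```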
Back on the new-domain question: in general, if you set up a completely new website for content that you're providing, then that does mean we have to first understand that this is a relevant website and figure out how to show it in the context of the rest of the web. But my recommendation is usually: if this is additional content, try to include it within your existing website. Instead of separating things out into lots of small websites, really build one strong website that has concentrated value for your business.

How does the normal page get a benefit when the AMP version is ranking in the search results? Does Google treat them separately? So if you have a normal HTML page and you connect an AMP page to it — you have the link rel=amphtml pointing to the AMP page and the link rel=canonical pointing back to the normal web page — then essentially we treat that as one set. There is no specific ranking benefit to having this configuration. It just means that if we were to show an AMP page, we would know that for this specific web page we have this AMP URL that we can show, and we can use the AMP cache and serve it really quickly — all of the normal AMP things that play into that. So it's not that there is any magical ranking advantage to going to AMP or setting up a specific AMP configuration; rather, there are multiple ways that you can use AMP, and one of them is to have a separate AMP URL from your traditional HTML page. There's no specific ranking advantage to doing that — it's just a technical setup.

Does Google support nested sitemap index files, where one sitemap index file references another one? No, we don't support that. I don't believe anyone supports that; I believe it's also specifically called out in the sitemap spec as something that's not supported. So you generally need to set up separate sitemap files and separate sitemap index files, and submit them separately if you need to go further than that.

Clarification on pagination for an e-commerce site: what should the canonical of a paginated page in the series be — from whatever research, it should be self-canonical? Yes, that's correct. Essentially, the canonical should point to the version of the page that you want to have indexed. What should the meta robots tag of the paginated pages have — noindex or index? That's a bit harder. That's essentially a question of which pages you want to have indexed in the search results, and how the individual products on your site are linked among each other. Some sites decide they want to have the paginated pages indexed, and that can be perfectly fine: if you have good content on those pages, and we can pick them up and show them in the search results separately, that's really useful to have. On the other hand, on a lot of sites, if you start scrolling down through the category pages to page 5, 6, 7, 8, you get fairly repetitive content compared across the different categories — you don't get a lot of really useful pages out of those paginated pages. In those cases, it might make sense to say: I'll just noindex everything after, say, page 2 or page 5, or wherever you decide. That way, you can focus more on the actual pages that you do want to have indexed. So there's no hard rule there.
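Pulling the pagination pieces together — the self-referencing canonical just described, the rel=prev/next elements from the earlier question, and ordinary links in the body — a hypothetical page 3 of a category might look like this:

```html
<!-- In the <head> of page 3 of a paginated series (URLs hypothetical): -->
<link rel="canonical" href="https://www.example.com/widgets?page=3">
<link rel="prev" href="https://www.example.com/widgets?page=2">
<link rel="next" href="https://www.example.com/widgets?page=4">

<!-- If you decide deep pages add little value, pages past your chosen
     cut-off could instead carry: -->
<meta name="robots" content="noindex">

<!-- And in the body, normal crawlable links between the pages —
     rel=prev/next in the head alone is not enough: -->
<a href="/widgets?page=2">Previous</a>
<a href="/widgets?page=4">Next</a>
<a href="/widgets?view=all">View all</a>
```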
It's really more a matter of making sure that the pages you provide for indexing are pages that you want to have indexed and that you want your site to be found for.

In my industry, we have annual enrollment periods. We have a top page that discusses the annual enrollment period, and I also publish new content every year targeting the year and the query — we've done that for 2018 and 2019. Should I canonicalize these pages at some point? If so, to what: the top-level page or the most recent page? I imagine that's always kind of tricky, because on the one hand, you might want to have the older content indexed as well. So if people are explicitly looking for what the requirements were back in 2004, and you have content from 2004, then that might be useful to have. On the other hand, if nobody is actively looking for this older content, then at some point you might as well cut it off and say: OK, everything goes to my general page on this theme.

Another idea here is to have one page that is always the current version, and to move all of the older ones to an archive. So you have, in this case, an enrollment page for the current year, which sits on a generic URL — whatever your website is, /enrollment — and for the previous years, you move the content off to archive URLs like /enrollment/2017 and /enrollment/2016. That's generally a pretty useful strategy, because that way you build up that generic enrollment page. Over the years, you'll collect more links to that page, and people will see it as quite relevant, because it's regularly updated and always has the current version on it. We'll still be able to find the older versions when people are specifically looking for them, but we'll always be able to pick up the current version fairly quickly. And you can use the same strategy anytime you have something that repeats regularly. That could be an event, like a conference. It could be products that are regularly updated: you might have one page for, say, I don't know, iPhone, and separate archive pages for iPhone 3, iPhone 4, iPhone 5, all of those older versions. That way, the generic product page grows in value over time, and the individual pages for the older versions are still around — if people are explicitly looking for them, we can show them in search — but they don't get in the way of the most recent version showing up well. So that's a general strategy you can use across different kinds of products and websites where you have this periodic update of the content and always want to make sure that the current version is as visible as possible.

I wanted to ask your opinion: what if we use Ajax for the pagination of an e-commerce site, so there are no separate pagination URLs, and the website relies on a View All page and an HTML sitemap page to help Google crawl and find all of the product pages? It's hard to say exactly what you mean by using Ajax for pagination. In general, what's important for us, especially for e-commerce sites, is that we can find the individual product pages and that we have some idea of their context within the website. So if all of these products are only linked from one shared HTML sitemap page, then that doesn't really give us a lot of context about these individual products.
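The crux of the Ajax point is whether the rendered HTML ends up with real anchor links to distinct URLs. A hypothetical contrast:

```html
<!-- Crawlable: a real URL in an <a href>, even if JavaScript enhances the
     click behavior. Googlebot can discover and fetch /category/widgets?page=2. -->
<a href="/category/widgets?page=2">Next page</a>

<!-- Not crawlable: no URL for Googlebot to follow. The content only changes
     in place via script, so every "page" shares one URL. -->
<span onclick="loadNextPage()">Next page</span>
```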
And it can make it a little bit harder for us to actually crawl all of your content, especially if you go from a couple hundred products to a million products, and suddenly your HTML sitemap page has a million links to individual products. That really makes it hard for us to figure out where to stop looking for links on the page, because all of these links are on one single page, rather than set up with a clear category structure — or even different categories and then paginated pages, which act almost like subcategories. So if we can still understand the context of these pages and the links to the individual products when you use JavaScript for this, that could be perfectly fine. On the other hand, if you use JavaScript in a way that we can't actually crawl through, and you require us to go through the HTML sitemap page, then that sounds suboptimal. It might work for a small set of products, but it'll definitely be worse for a really large set. So depending on how far you want to go, how you're planning to expand, and where you are now, it might make sense to get a clean setup rather than a complicated JavaScript setup that doesn't work in a scalable way for a lot of URLs. JavaScript itself is not something that would block indexing, but depending on how you have this set up — if you don't have separate URLs, for example — we can't really crawl it, even if we can process the JavaScript on those pages.

Let's see. We have a website that will be targeting many different countries, but at the moment, all of the content is in English. Is it fine to use hreflang to specify a specific URL for each country? Yes, you can do that. In practice, that seems like a bit of a waste, because what would probably happen is we'd index all of these different versions for individual countries, and they'd all be competing with each other. So instead of having one really strong English page that ranks well globally, you'd have all of these small English versions for individual countries, all competing with each other for those English queries. So my recommendation would be to figure out what you really want to target and where you need separate content, and to explicitly set up hreflang versions and pages just for those — not for everything you can find. It's generally not a good strategy to say: I have this list of 100 countries, and my content is valid in all of them, therefore I'll make 100 versions of the same content. You just end up with content that's much more diluted and that has a much harder time being shown in the search results. So you probably want to find a more strategic approach there.

Is it bad for Google if there's microdata, RDFa, and JSON-LD on a product page that all carry the same information? So in general, it's not bad for us to find different forms of structured data that have the same information. But usually, it means you have to maintain separate versions of the same markup on your pages, which sometimes means things break: one version suddenly shows one set of values, and the other shows slightly different values. So my recommendation would be to stick to one format if at all possible.
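Going back to the hreflang question for a moment: one way to read John's advice is to annotate only the versions that genuinely differ and let a single global page cover everyone else, rather than stamping out 100 copies. A minimal sketch with hypothetical URLs:

```html
<link rel="alternate" hreflang="en-us" href="https://www.example.com/us/">
<link rel="alternate" hreflang="en-au" href="https://www.example.com/au/">
<!-- One global English page as the fallback for all other locales: -->
<link rel="alternate" hreflang="x-default" href="https://www.example.com/">
```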
Back on those structured data formats: if you're transitioning from microdata to JSON-LD, then maybe you have a period in between where you have both versions on your pages. As long as you can make sure that the two versions have the same content, that's generally OK. But in the long run, I'd really recommend having one version of the markup on your pages, especially for structured data, and making sure that version is the correct one.

If you have a directory site of local US businesses, should you select "target users in the US" in Search Console? Would that be a mistake on the off chance that you have a blog post that does well in Europe? I think you could do that. That setting is for geo-targeting, which means that when we can recognize that a user is explicitly looking for content in that country, and we see that your site has selected geo-targeting for that country, then we can show your site a little bit higher in the search results for those specific users. It wouldn't mean that your site ranks lower in other countries. It's just more visible in the US, because we see that you're saying your site is focused on the US, and we see that the user is explicitly looking for content in the US. If, on the other hand, your users are just searching generically, and they're not searching in a way that suggests they want a local version, then you wouldn't really see any change from selecting geo-targeting. So if your content is generic and global, and your users just happen to be mostly in the US, then that's not something where you need to use geo-targeting. On the other hand, if your users are all in one country and you can tell they're explicitly looking for local content, then geo-targeting is a good way to let us know that your content is really well suited for those users.

Would it be fine not to reply to or approve comments on a news post? Or: how beneficial is it to allow comments on news posts? That's totally up to you. From a search point of view, we see this content as part of your page. If this is content that you want to be found for, then have it visible. On the other hand, if it's content that you think is not useful for your site or for users, and not something you want to be associated with — for example, if it's just dummy link drops that people place in your site's comments automatically with scripts — then it probably makes sense to block or remove those kinds of comments. So in general, if you want to be found for something, have that content on your site. If you don't want to be found for the content that people are placing on your site, make sure it's not on your site. Ultimately, it's your website: you're responsible for what is shown on it, and we will rank your pages based on what you provide. If you provide comments that are low quality or problematic, then that's what we will use for indexing. So there's always a trade-off there with regard to how much work you spend making sure those comments are actually good — approving or blocking comments and maintaining all of that. And some sites just say: I don't have time to deal with all of this; I'll just block comments completely.
Ultimately, that's a strategic decision on your side, not something that plays into the search side directly.

What would be the best way to solve the problem of search results from an Australian site coming into the American search results? Currently, we have international targeting set for the appropriate country, as well as working hreflang between both sites. Is there anything we can do? Each one of our dealers maintains their own website and targets their own approved territories. I'm not really sure how you mean that. In general, what can be tricky is if you have separate dealer websites that are run completely independently for individual countries. Ideally, you'd want to have these linked together with hreflang, to say: if someone is searching for this product in this country, this is the right version — and that version points to the other country versions. But that's sometimes hard to get individual websites to actually do. And sometimes these individual websites are set up in a way that you can't really map one-to-one on a URL basis between the different sites. In cases like that, it's really hard for us to say that this content on this website, which isn't marked up with hreflang explicitly, is equivalent to the content on another website, and to pick which one to show in the search results. So that can be tricky sometimes.

A few other things worth mentioning: we generally crawl from the US. So if you're doing anything fancy with geo-targeting on the website itself, like redirecting users to their local country version or showing a banner for local country versions, then Googlebot, crawling from the US, might trigger that. For example, if you have an Australian site and it automatically redirects US users to the US version, then Googlebot would be redirected as well, which means we would have trouble indexing the Australian version. The other thing I generally recommend there is, instead of redirecting users coming from the wrong country, set it up so that there's a subtle banner on top pointing users to the right version. So if you can recognize that a US user is going to the Australian site, have a banner on top saying: hey, there's a US version of this content, click here to go there directly. That way, we can pick that up and follow that link as well. And users, when they accidentally get to the wrong version — which is always a possibility — can still find their way to the right one.

John, can I cut in with a quick question? Sure. I posted a screenshot earlier in the chat, related to Search Console data, and just wanted to get your opinion on it. Basically, average position is moving down weekly, over the weekend. It's similar with traffic, which is, I mean, not strange in itself, but you don't usually see average position moving in a cycle like that. And it overlaps a lot with CTR, which is also strange: CTR is also dropping a lot, but only over the weekend, and so is the average position. Is it possible that CTR alone is driving the average position to drop that much over the weekend, or what would be an explanation for that? My guess is that the queries change slightly. So I wouldn't expect to see a ranking drop over the weekend. I mean, I don't know for sure, but as far as I know, we don't have anything in our algorithms that says: oh, this website is of this type;
therefore, on the weekend, we'll rank it completely differently. I don't think we have anything like that. But what I do see a lot is that there are different user patterns: people search in different ways, and some sites get a lot of traffic during the week, while others get a lot of traffic during off hours and on weekends. Those patterns can be quite visible, and they affect which queries people are using. It might be that on weekends, people are searching for something where your site isn't ranking that well, so the average position is down, whereas during the week, they're searching for something where your site is ranking well, so the average position is higher again.

Is this true even though it's the same type of URLs and queries? So just the user intent is changing over the weekend — they have a different behavior in terms of browsing, the user experience within the site. And again, CTR is really impacted over the weekend, but the queries and the URLs are more or less the same. I would still suspect that it's more about what users are actually doing. I really don't think we have any algorithms that would try to figure out whether this is a weekday or a weekend site and treat it differently.

But is it possible that CTR is driving that — that because CTR is lower, it's driving the drop? I suspect it's more a matter of people just searching differently, because if they were really searching in the same way, then the click-through rate would stay about the same. So if you drill down and look at the types of queries that are happening, I suspect you'd also see the number of impressions going down; you might see some queries going up more on the weekend and others going down more on the weekend. At least from the sites that I've seen over time, there is a very visible effect there with regard to weekday versus weekend: some sites are just more visible on the weekend, and some are a lot less visible on the weekend. So CTR, although it's a ranking factor, is not that powerful, to really drive the query? I don't know if we would call CTR a ranking factor. I'd really just assume this is a completely normal change — I mean, not a ranking change, but essentially a change in what people are searching for. Yeah, usually the timing is different. Thanks a lot.

All right. Wow, time flies. We just have a couple of minutes left, so I'll open it up for more questions from your side, if there is anything specific. This is Sebastian from Argentina. Can you hear me? Yes. Ah, thanks. Well, talking about performance: what is the most important metric that you're taking into account, from Google's perspective? I mean, is it first paint, is it page load — what is it? Or is it a mix of everything? The last one — yeah, kind of a mix of everything. So we try to take into account various lab measurements, which we can determine algorithmically, and we take into account various field data points, as we call them, which are also visible in PageSpeed Insights, to see what is happening in practice. So it's a mix there. We don't point at any specific metric and say, this is exactly what we use for ranking — partially because we need to be able to adjust that over time. OK, thank you very much. All right. Let's see, maybe one more question, if anyone has something. Well, if no one goes, I'll go again. OK.
Is there any faster way to push content to Google than the Google News sitemap? I mean, we work specifically with digital newspapers, so our most critical aspect is to push everything to Google. But is there any way to push it faster than the Google News sitemap? Usually the News sitemap is the fastest way. That's something we can crawl fairly quickly, because there's a limited size, and we should be able to pick it up fairly quickly after you ping the sitemap URL. OK, that's good. Thanks. Cool.

All right. So I need to jump out — I can't stay a little bit longer — but I think on Friday we should have a little bit more time, if there's anything on your mind. Next Friday, I'll also be doing one in person here in Zurich, so there will probably be some people from Zurich and from the region coming to join in. I'm sure that will be fun as well. Apart from that, thank you all for joining, and I hope to see you all again in one of the future Hangouts. Bye, everyone. Bye-bye. Bye-bye.
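For reference on that last question, a minimal sketch of the Google News sitemap format — all URLs, names, and dates here are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>https://www.example.com/articles/breaking-story.html</loc>
    <news:news>
      <news:publication>
        <news:name>Example Daily</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>2018-11-13T09:00:00+01:00</news:publication_date>
      <news:title>Breaking Story</news:title>
    </news:news>
  </url>
</urlset>
```

The "ping" John refers to was, at the time, a plain GET request, for example: https://www.google.com/ping?sitemap=https://www.example.com/news-sitemap.xml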