All right. Welcome, everyone, to today's Webmaster Central Office Hours. My name is John Mueller. I am a webmaster trends analyst here at Google in Switzerland. And part of what we do are these office-hours hangouts, where people can jump in and ask any question around their website and web search, and we can try to find answers for them. Thank you for joining the first one of this year. I hope you have a fantastic year, that the year works out well for you, that there are lots of exciting things coming up, and that you're successful at whatever you're working on. But I guess we can get started with questions from you all to keep things going. So if anything on your side is burning and you really need to find an answer, feel free to jump in now. Afterwards, I'll go through some of the questions that were submitted.

I'll take a crack at it. OK, well, it's not burning really. But since Barry mentioned the URL parameters tool, I'm still curious what exactly the representative URL option means, since it's not really documented in the official docs.

I looked this up last year when you asked, and I forgot. So I don't know. I need to double-check. My understanding is we just pick one URL as a representative for that set and index it like that. But I don't remember what exactly came out of looking things up.

So do you mean like a session ID? Should the representative URL option be chosen for parameters like that?

I would assume for a session ID, we would drop that parameter completely. Because usually accessing the URL without a session ID as part of the query string shows us the same content, and it's a cleaner URL. So we would probably just drop that completely.

Right. But if you'd like to set it up in the parameter tool, is that the option you would choose, the representative URL option? Or just let Google decide, if you had to use the tool?

I don't know. I don't know. I haven't looked at it this year. So your guess is as good as mine this year.

That's funny. John, I have a question. There has been a lot of chatter about reconsideration requests after manual actions taking longer than usual. Is that true? I mean, obviously, people went away for the holidays, but...

I don't know about longer than usual, but sometimes they do take quite a bit of time to be reprocessed.

So there's no backlog that your team is experiencing, where you have to hire more people to get through these support requests?

I mean, if it takes longer than usual, that's a sign of a backlog. But it's not the case that we artificially delay things with regards to reconsideration requests. Sometimes what happens is that the team works on this in batches, and they will go through one set of reconsiderations and then go through the next set. And depending on how they batch things, it might be by country, or it might be by type of issue, those kinds of things. So that's something that sometimes happens. And sometimes you see that with regards to the backlog, or kind of the time that it takes to process things, where suddenly a whole bunch of things will get reprocessed fairly quickly, and then it takes a while again for things to catch up.

OK, thank you.

All right. So let me go through some of the questions that were submitted. And as always, you're welcome to jump in in between if there's anything that you'd like to add or more information that you need. Can you target English speakers in the US and UK using the same content, but with relevant hreflang tags?
For example, example.com/us/en/page1 or /gb/en/page1. The content is the same, not geotargeted or anything.

Yes, you can do that. I don't know if that's kind of the best use of your time or your website in general, but you can do that. If you want to create separate pages for individual countries, and they're all the same language, and you have hreflang set up, then you can set it up like this. What will probably happen in practice is we will recognize that these pages are the same. They are essentially the same. And we will just index one of those pages as a canonical, and then we'll try to use the hreflang annotations that you have to show the appropriate URL in the search results. So we'll pick one canonical for that set, and we'll focus our crawling and indexing on that canonical URL. And when we show it to users, we'll try to swap things out with the appropriate hreflang tags.

So it's something where you can do that. From an indexing point of view, we just pick one of those pages anyway, so you could theoretically save yourself the trouble and just have that one English page. But essentially, that's up to you. On larger sites, sometimes it's really hard to separate out the content and to create separate content versions for individual countries, but for maybe other parts of that site, you do have clearly different country versions. So sometimes that's just kind of a side effect there.

I wouldn't recommend doing this as a thing on its own. So don't just take an English website and split it up into different country versions and hope that you get any advantage from that. Because essentially, you're just giving us a lot more work. We have to crawl all of these different country versions, we have to recognize that they're all the same, and we'll swap out the URLs in the search results and try to get it right. But it's not guaranteed that we'll always get it right. So you're creating a lot of extra overhead, and you're probably not going to get a lot of value out of that. On the other hand, if the rest of your site does have appropriate country versions and you have some pages that are the same like this, then that's not something I would lose any sleep over.
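To make that setup concrete, here is a minimal sketch of the hreflang annotations for two same-language country versions. It assumes the URL layout from the question; the x-default choice is just an illustration, not a recommendation from the session.

```python
# Sketch: hreflang link elements for two same-language country
# versions of one page. The domain and paths follow the example in
# the question; x-default is a hypothetical choice.

versions = {
    "en-us": "https://example.com/us/en/page1",
    "en-gb": "https://example.com/gb/en/page1",
}

def hreflang_links(versions, x_default):
    # Every country version carries the full set of alternates,
    # including a link to itself.
    links = [
        f'<link rel="alternate" hreflang="{lang}" href="{url}" />'
        for lang, url in versions.items()
    ]
    links.append(
        f'<link rel="alternate" hreflang="x-default" href="{x_default}" />'
    )
    return "\n".join(links)

# The same block goes into the <head> of both country versions.
print(hreflang_links(versions, versions["en-us"]))
```

As described above, Google may still fold the duplicates into one canonical URL and only swap the displayed URL per country.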
John, can I ask something related to that, about hreflang? So we're working with a website that has a Romanian version and an English version. And that's because most of their business is in Romania, but they do have one offline location somewhere in Europe, which is why they have the English version. So the site has hreflang tags set up between the English and Romanian versions. The problem is that a big portion of Romanian users, not half, but a big portion, when they download Chrome, usually download the English version. So it's automatically set up to go to Google in English rather than in Romanian. So when they're searching for the brand name, they keep getting the English version of the website, which is not that ideal, even though they're in Romania. Now, one solution I can think of is to simply add an additional hreflang tag, like en-RO, to make sure that they do get to the Romanian version, not the English version. But other than actually forcefully redirecting them, I'm not sure what we could do about that, because we don't really want Romanian users to get to the English version of the site from Google.

Usually what I recommend for things like that is to show a banner on the page, so that users, when they go there, have a banner in their language, where the banner clearly says, like, if you're coming from Romania, here's the Romanian version. That's usually the recommended approach. I think flagging a page as English for Romania when it's actually in Romanian would probably be a bit tricky, because we do try to recognize the language of the page as well. And if you're telling us it's English, but it's actually recognized as being Romanian, then there's a chance that we'll just ignore that hreflang value. So that's something that might make it tricky.

But in general also, especially with Chrome and other browsers, when you download it and it defaults to the English US settings, that's something that we do try to catch when it comes to localization. So especially on Google, when you go to Google and you have kind of the default browser settings, that's something where we say, well, this is kind of the absolute default version, and there's a chance that maybe this user didn't explicitly choose the English version, but rather it was kind of installed like that by default. So that's something where we do try to figure out which of these versions is actually the right one to show. But especially for brand queries, that's something where I could imagine it's really tricky for us to recognize which language users are actually looking for. So for things like that, I'd try to do more with just a banner or some other way to let people know about the appropriate local version. Theoretically, you could also redirect to the Romanian version, but that might make people unhappy who explicitly want to go to the English version of a local website.

Right. Well, we've checked analytics for this issue specifically. So one problem is that it's a big portion of users. It's not just like 5% or 10%, it's like 30%, something like that, of Romanian users who get to the English version. And the vast majority of them then choose to go to the Romanian version. So it's basically not an ideal situation. It's an extra step for them to take in order to get to where they actually want to get. So we would rather go with the redirect, and then maybe post a banner in case they did actually want to see the English version, something like that. Now, would the redirect mess up any of your signals? Like a 302 redirect, something like that?

I mean, it wouldn't mess things up for us, because we would probably crawl from the US anyway, so we wouldn't see that redirect. But it could confuse people. I mean, another approach that you could take is to put the language into the title tag, so that when people search for the brand, they see explicitly, like, "English" or whatever the Romanian word is for Romanian, so that they can make that choice themselves. But I would tend to focus more on a banner rather than forcing a redirect. But that's something you can also test: like, set up a banner and see if people actually click on it or not, try it out with a redirect, and maybe have a banner then that lets them go back to the English version if they want. And then you can kind of test both of those sides.

I'm not sure about what you mentioned with the title tag, because it is set up like that. So you have the brand name and the Romanian description of the shop, then the brand name and the English description of the shop.
But obviously, it's just one of them showing up in the search results when you're searching for the brand name. So it's not like users can choose one or the other.

Well, in that case, you probably don't need to do anything about it. Yeah, thanks. Sure.

All right. If you have a real estate site that has listings in various countries, do you need an hreflang tag for the listings as well? The listings will technically not be translated, because they're totally different. So in such a case, the hreflang tag will only be used on a subdirectory, like the language version.

Yeah, if these are different things, then the hreflang tag would not be appropriate here. So the hreflang meta tag, or link element, I guess, is only relevant if it's the same thing, and you have a version for one country in one language and the same version of that thing for a different country or different language. But if these are completely different things, like one is a radio and the other is a car, then you wouldn't have the hreflang links between those pages. For category pages, it definitely makes sense; for the home page, and maybe for some marketing pages, it definitely makes sense. But in your case specifically, I'd say for the individual items, it wouldn't make sense. It's also something where you have to consider the amount of work that's required to set up this hreflang annotation between pages and to maintain that annotation. And if you're not seeing the wrong traffic going to individual page types, then that's probably a sign you don't need to set up the hreflang annotations. So that's something where I'd really try to save yourself the trouble of setting it up just for the sake of having it set up, because you're not going to see any ranking advantage from the hreflang annotations. It's just that when we do show one of those pages, we'll try to show the appropriate one.

I'm a product expert over on the Google News Publisher forum. I'm hearing more and more publishers reporting that they're either being outranked by sites that are scraping and republishing their content, or being scraped, with the scraper sites appearing on Google with their articles hours before Google indexes and surfaces the original content. What advice can you give for publishers experiencing these issues? What can Google do to try to help these publishers? P.S.: DMCA complaints and removals are too after-the-fact to be effective for news sites.

So I would still do DMCA complaints if you can do that, if that's something that makes sense in your legal environment. Obviously, that's something that depends a little bit on the legal situation. You can't just randomly send them out to people. But that's something I would definitely still do, even if that's after the fact, because that's something that we do try to understand on a bigger-picture scale as well. So that's definitely one thing.

The other thing, I guess the second part of your question, where scraper sites are appearing hours before the original content appears: that seems to me more of an issue that we're actually not crawling and indexing the normal content as quickly, rather than that these scraper sites are doing something different. And most of the time, that's due to technical issues, in the sense that if we can't crawl and index a site that quickly, if there's something making it hard for us to crawl a website, then it will take longer.
So that could be things like we're not able to understand where the main hubs of a news site are, where the category pages are, where the main home page is. And we get lost crawling all of these different URL parameters or different parts of the website, and through our normal crawling, we don't run into the new content as quickly. And for things like that, you can do kind of the normal SEO things: making sure that you have fewer of those pages out there that you don't want to have indexed, so that we don't get sidetracked with all of this; making sure that you have a clear structure set up with canonical URLs, and that you don't have too many URLs leading into different parts of the site; and then making sure that you have proper sitemaps set up, in a way that flags new URLs to us as quickly as possible when you publish them, ideally with a correct last-modification date in the sitemap file, because that does help us to understand when new content is published. So those are kind of the main things. Then obviously, so that we can crawl as quickly as possible, you need to have a server that responds fairly quickly and doesn't return server errors. So those are, I'd say, kind of the technical baseline things.

One of the difficulties that sometimes plays a role is that sometimes things get stuck on our side as well, where we try to crawl sites fairly quickly, and then sometimes things kind of get backlogged a bit in some of our pipelines, and then suddenly we're like a few hours behind in crawling sites. That's generally pretty rare. I think that happened maybe a few times last year, but I know the team is working to try to avoid that, and that's not something that you can really work around. And when I ask the team about things like that, like, are you going to improve this? Of course, we see this as a bug. It's not something that we're doing on purpose, that we're crawling sites slower. In any case, it's more that, well, sometimes things break on our side as well, and then it gets a bit slower. So that might be some of what you're seeing there.

The first part of the question is something that's really kind of hard for me to answer in a general way. Like, when a scraper ranks above your original content, that's something where I really need to see more examples. So if you want to send me some example URLs, some queries that you see, especially the timely things when you see them popping up in the forum, I'd love to pass that on to the team, because that is something that they do care about as well. One tricky aspect there is also that sometimes a site might technically be a news website, but that doesn't mean that it's a really good website. And trying to understand that difference is sometimes a bit tricky. But I'm pretty sure you've seen that in the forums quite a bit, that people will say, I have this great news website, and you look at it and you can hardly read the English. It looks like it was copied from somewhere else. It's like, eh.

Right, right. I'm talking more about sites that actually have original content. They'll have something good, and then some scraper ranks above them. And it's just so disheartening for the publishers.

Yeah, yeah. I mean, if you can send me some examples, I'd love to pass that on to the team.

OK, what's the best way to send those to you?

You can drop me a note on Twitter. That's probably easiest. Let me just drop my email address here in the chat. And essentially, when you see these things coming up in the forums and it feels like it's kind of timely, then send those over my way, and I can chat with the team to see what we can do to improve that.

OK, great. Thank you very much. Sure. All right.
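On the sitemap point above, here is a minimal sketch of what a sitemap with accurate last-modification dates might look like, since that is the signal that helps crawling catch new articles quickly. The URLs and timestamps are invented for illustration.

```python
# Sketch: generating a small XML sitemap with accurate <lastmod>
# dates. URLs and timestamps are hypothetical.
from datetime import datetime, timezone
from xml.sax.saxutils import escape

articles = [
    ("https://news.example.com/articles/new-story",
     datetime(2020, 1, 7, 9, 30, tzinfo=timezone.utc)),
    ("https://news.example.com/articles/older-story",
     datetime(2020, 1, 3, 14, 0, tzinfo=timezone.utc)),
]

def build_sitemap(entries):
    parts = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">']
    for loc, modified in entries:
        parts.append("  <url>")
        parts.append(f"    <loc>{escape(loc)}</loc>")
        # lastmod should only change when the content really changes.
        parts.append(f"    <lastmod>{modified.isoformat()}</lastmod>")
        parts.append("  </url>")
    parts.append("</urlset>")
    return "\n".join(parts)

print(build_sitemap(articles))
```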
When going through a change of address, what's the preferred amount of time to keep 301 redirects in place from the original domain? What kind of ranking disruption should be expected as a result of the change?

So technically, a 301 redirect is a permanent redirect, so ideally you would keep that forever. I think as a minimal baseline, what I'd recommend is at least a year, because what can happen on our side is, depending on the URLs, it can take up to maybe half a year or so for us to crawl all URLs on a website. So if you give it a year, then at least we've seen that redirect twice across all of those URLs on the website. Practically speaking, I would try to keep it longer still. So especially if this is a website that you've built up over the years and you're moving to a new domain, then I would try to keep that redirect in place as long as you can, so that anyone going to that old domain will still get a redirect, even if that's a couple of years later. The other reason why I would try to keep that redirect up as long as possible is to make sure that you don't accidentally drop that domain and some spammer takes over your old domain, and suddenly they're putting spam out that looks like it might be associated with your brand, or at least with your previous brand. And that's always kind of awkward, because you don't have control over that domain anymore, so you can't really fix that problem, but at the same time, it looks like you're putting all of that spam out there, which is not really what you want. So ideally, keep that redirect as long as possible, and definitely try to keep the old domain as long as possible.

If you... Yeah, sorry. Go for it. I'm guessing this kind of depends also on whether there are any internal or external links pointing to that domain. I mean, I assume if there are none, then that baseline you're talking about can be adjusted, since Google will likely crawl those old URLs less rather than keep finding links.

Yeah. I mean, anytime you're moving domains, in our documentation we also have that step of going out and trying to get all of those old links updated so that they point at the new location as well. But it's always awkward when you see spam ranking with your old domain name, so I'd try to keep that as long as possible. And if you decide to reuse the domain name yourself for something else, then that's something a little bit different. I'd still give it that one year at least with the redirects, but then at least it's still under your control, and what you put out there with your previous brand name attached to it is still something that you can control. So that's a little bit less of an issue, I'd say.
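As a rough illustration of keeping those redirects in place, here is a minimal sketch of an app that 301s every path on an old domain to the same path on a new one. The domains are placeholders, and in practice this would usually be a single rule in the web server or host configuration rather than an application.

```python
# Sketch: permanent (301) redirects from an old domain to a new one,
# preserving path and query string. Domains are placeholders.
from wsgiref.simple_server import make_server

NEW_ORIGIN = "https://new-domain.example"  # hypothetical new domain

def redirect_app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    query = environ.get("QUERY_STRING", "")
    target = NEW_ORIGIN + path + (f"?{query}" if query else "")
    # 301 = permanent; per the advice above, this ideally stays up
    # for years, not months.
    start_response("301 Moved Permanently", [("Location", target)])
    return [b""]

if __name__ == "__main__":
    # Serve the old domain's traffic on port 8000 for the demo.
    make_server("", 8000, redirect_app).serve_forever()
```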
I'd like to know how representative the sample is of the clicks data in Search Console for a big site with lots of users. The site has many more URLs and queries than the 50,000 a day. And there's no way of knowing how representative that data sample is when you exclude brand queries. It goes on a little bit, and YouTube cuts it off.

Essentially, we do try to show as much as possible. And for very large sites, it is possible that you have more queries than the 50,000 a day, or the 50,000 data points that we have there a day; I'm not quite sure what exactly that limit is. But that's something where, for most sites, we try to get as much as possible. We filter things out like queries that are only done on a very rare basis, which could potentially be kind of private information in the queries, so we try to filter that out. For very large websites, we try to show essentially the bulk of the queries there. And at some point, we'll cut it off and say, well, these are your queries that just have fewer URLs, so we kind of cut it off there. So that's generally how that set is put together. With regards to how representative that set is, I don't know how you would quantify how representative it is. It's essentially as much information as we can give there. What you can sometimes do, where I think you can probably get a little bit more information, is if you have a subdirectory setup or a subdomain setup, verifying those sections of the site separately. That sometimes gives you kind of the full set of information for each of those sections. So that might be one thing to try out there as well.

Can you explain how Google calculates the average position in Search Console? Search Console uses a weighted average with impressions. The documentation doesn't explain it in depth.

We actually have pretty good documentation on how the average top position is calculated, so I would double-check that Help Center page. I think it's called "What Are Impressions, Clicks, and Position," something like that, in the Search Console Help Center. So I'd try to dig that up. Essentially, what happens is we have two ways of looking at this information. On the one hand, we look at it per URL; on the other hand, we look at it per site for each query. So the per-query thing is essentially: if someone is searching for something and multiple of your pages would rank for that particular query, then we would take the average of the topmost position and use that. Whereas if you're looking at it on a per-URL basis, then obviously we use the average position of that particular URL. So what can happen, for example, is if you have things that are shown as sitelinks, you might have maybe your home page at position one, and individual pages from your site listed at position five or six in the sitelinks section. So on a per-query basis, we would count that as position one. On a per-URL basis, for the individual things shown in the sitelinks, we would count that as position five, because that's where that one particular URL is shown. So if you compare across a per-query and a per-URL basis, there will be a difference, and that's essentially based on that. So that makes it a little bit tricky. From my point of view, I think it's fine to separate it out like that. But I think it's something that, especially with a larger site, you really need to be aware of how this data is put together, so that you can compare these results a little bit easier. So that when you look at Search Console and you see one number shown on top and then one number from the table, and you look at it in a different way, like you filter things out and you see different numbers, that's something where you need to understand that this is not Search Console trying to mess with you, but rather essentially a side effect of how things are counted in Search Console.
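A small sketch of the counting difference described above, using invented numbers: one query where the home page shows at position 1 and another URL shows as a sitelink at position 5 or 6.

```python
# Sketch: per-query vs per-URL average position, with toy data.
# Each entry is one impression of the query, mapping the site's
# shown URLs to their positions. All numbers are invented.
from statistics import mean

query_impressions = [
    {"https://example.com/": 1, "https://example.com/about": 5},
    {"https://example.com/": 1, "https://example.com/about": 6},
]

# Per query: each impression counts once, at the site's topmost position.
per_query = mean(min(imp.values()) for imp in query_impressions)

# Per URL: each URL is averaged over the positions it itself was shown at.
urls = {url for imp in query_impressions for url in imp}
per_url = {
    url: mean(imp[url] for imp in query_impressions if url in imp)
    for url in urls
}

print(per_query)  # 1.0: the query counts at the topmost position
print(per_url)    # the /about URL averages 5.5 on a per-URL basis
```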
All right, subdomain/subfolder leasing. Wow, OK, this is a gigantic question. I am going to have to cut this a little bit short, I guess. So I think we've gone through this a number of times, so I don't really know how much more I can say here. Essentially, I think where the question comes together is that sometimes large websites take a part of their website and essentially rent it out to other sites, or they let others place ads essentially on that part of their website. And sometimes that shows up in Search, because it's content that's on websites. And there's lots of back and forth on whether or not that should be OK, or how that should be treated. And I think there are just different points of view out there with regards to how that can be seen.

So this is a gigantic question with lots of statistics and examples in it, and I don't really have anything specific to add there. But I think, in general, this is something where, when it comes to Search, there are always different ways of looking at things, and different ways of looking at it and saying, well, this is a good result, or this is a better result. And people can have different opinions on this. So that's something where I wouldn't see this as a clear black-and-white thing, where this is always bad or this is always good, but rather that we, with our algorithms and the engineers working on the quality side, need to find the appropriate way of showing this kind of content in Search. And sometimes it makes sense to show it very visibly; sometimes maybe it doesn't make that much sense. Maybe we need to find other ways of recognizing high-quality search results and showing them appropriately. So from that point of view, I don't think we'll have a clear black-and-white answer, like you're always allowed to do this, or you're never allowed to do this, or you'll never see this in Search. But rather, over time, as we get feedback on our search results, we'll try to improve the way that we show them in Search.

Hello, John. Yeah. Yeah, hello, everyone. Can I ask two questions? Sure. Sure.

The first one is about the sponsored link attribute. I've seen, four months or so ago, you guys announced all the different link attributes for user-generated content and sponsored links and so on. So I have analyzed a few sites, and I've seen that just a few of them, pretty much the biggest ones, use this attribute. But most of the common sites still don't use it. So I don't know. My question is, do you recommend using it, for media reach and things like that? Or what would be the most common use of these attributes? And the second, well, I don't know if I'll ask the second one yet. Yeah, just, yeah.

So I would try to use them as appropriate. It's something where I wouldn't say you need to revamp your website immediately and switch to whatever kind of link attribute is most appropriate for your specific use case. If you're using nofollow and that's working for you, then that's fine. If you're setting up a new website and you have user-generated content, then maybe using rel=ugc is the right approach there. From our point of view, we don't see the more specific types as something that is required from a website. So it's not something that we wanted to push out there and say, hey, everyone needs to change how they're working with this. But rather, it can give us more information if you are more specific. So going forward, if you can give us the more correct information, that would be great. If you don't want to update your website to do that, then that's also fine. So it's not like a critical thing that everyone must do.

Right, got it. Thanks.
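For reference, a minimal sketch of what the three attribute values look like in markup. The URLs are placeholders, and the mapping of link types to values follows the general announcement being discussed, not anything specific from this session.

```python
# Sketch: choosing a rel value for a link. nofollow still works as a
# generic hint; sponsored and ugc are the more specific variants.
# URLs are placeholders.
def link(url, text, kind):
    rel = {
        "ad": "sponsored",        # paid or affiliate placements
        "comment": "ugc",         # user-generated content
        "untrusted": "nofollow",  # generic "don't vouch for this"
    }[kind]
    return f'<a href="{url}" rel="{rel}">{text}</a>'

print(link("https://advertiser.example", "Our sponsor", "ad"))
print(link("https://commenter.example", "A commenter", "comment"))
print(link("https://unknown.example", "Some link", "untrusted"))
# Values can also be combined, e.g. rel="ugc nofollow", for tools
# and search engines that only understand nofollow.
```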
And the second one, please: imagine you have a really large website, and you generate thousands of URLs with, let's say, team A and team B. Imagine it's a betting website, sports betting. And obviously, you have the generic sports betting categories, let's say football, basketball, tennis, or whatever, and then you have a team A versus team B page. Usually, people are always searching to bet just on the specific category, not on team A versus team B. I mean, what is the most correct way to proceed in this case? Just having the main sports betting category page indexable, with the right canonical tag, and the rest of the team A versus team B pages with a noindex? Or the whole bunch of URLs on the website, which could be half a million URLs, all of them indexed, where a team A versus team B page has the right canonical to the right category? Which is the best practice?

Essentially, that's up to you. So that doesn't really help you in your specific use case, I know. But it's something where you're balancing. You can have concentrated content on your site, where you're saying, like, I only have 10 pages, because I focus on the category pages, and the rest of the pages are noindex, or they have the rel=canonical set to those 10 pages. That's fine, if you want to concentrate things together. But it might also be fine to say, well, these specific individual items are things that people are looking for, and I want to provide that information in Search, where you say, I will expand and show those pages as well. And that's sometimes more of a strategic decision rather than a pure SEO decision, because you can have fewer pages and they can be stronger, so they could rank better for more competitive queries. On the other hand, you could also have more different pages, and they rank more for the long-tail queries. And finding that balance, between content for the competitive area, where you have fewer pages that are focused on that, and also covering the long-tail area, is sometimes tricky. Sometimes that's also something where you can test it out. You can say, well, for this one specific event, I will only create category pages, like fewer pages, and see if that works well. And for the other one, maybe I'll create a certain number of long-tail pages and see if that works well.

Right, yeah. The thing with that is, for the long-tail queries, we already generate content covering them. So they do kind of cannibalize sometimes.

Sure, I don't know. Yeah, I mean, in that case, I think you have the answer there yourself. If you already cover those long-tail queries well, then make sure that your important pages are in that competitive area, where you're really concentrating the value on those core pages.

Absolutely. Thank you very much. Sure.
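The two setups weighed in that exchange might look like this in an event page's head. This is only a sketch with placeholder URLs; which option to pick is the strategic call described above, and you would use one approach per page, not both.

```python
# Sketch: the two options for a long-tail event page. URLs are
# placeholders; choose one option per page.
CATEGORY = "https://example.com/betting/football"
EVENT = "https://example.com/betting/football/team-a-vs-team-b"

def head_tags(concentrate):
    if concentrate:
        # Option 1: fold the event page into its category, either by
        # canonicalizing to it (shown here) or by a noindex:
        # <meta name="robots" content="noindex" />
        return f'<link rel="canonical" href="{CATEGORY}" />'
    # Option 2: let the event page target long-tail queries itself,
    # with a self-referencing canonical.
    return f'<link rel="canonical" href="{EVENT}" />'

print(head_tags(concentrate=True))
print(head_tags(concentrate=False))
```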
Do unnatural queries, like site: queries, also get counted in Search Console performance reports? As far as I know, they're not counted. At least, I haven't seen them in any of the reports that I've looked at. So from my point of view, I think we try to filter those kinds of things out. We also try to filter a lot of other kinds of more unnatural queries out. Oftentimes, these fall out automatically anyway, because very few people use site: queries to search.

If you publish news content that's sometimes unpublished after discovering that your editorial guidelines have been breached, would a 410 help speed up the removal from Search, so that you don't have thousands of readers landing on a 404 page? Submitting to Google via the removal option can take a long time.

So I think for news content, 404 versus 410 probably doesn't make any difference. So I would say that you can save yourself the trouble of trying to special-case that kind of situation. The URL removal tool would probably be a good approach here. The URL removal tool, if you have the site verified in Search Console, usually takes effect in less than a day, so that's fairly quick. With regards to general news content like this, where you really want to make it clear that you did not mean to publish something like this, I would still return a 404, and maybe show some kind of information on that page that says, well, this article doesn't exist, or however you want to guide the user to the existing content on your website. But I don't think you would see any advantage from using 410 versus 404. So from a theoretical point of view, a 410 generally falls out a little bit faster for normal web content; like, if we're crawling things every couple of days and we see a 410, then that generally falls out a little bit faster. From a practical point of view, that difference is very minimal. So compared to the amount of effort that you'd need to special-case this particular type of situation, where a page no longer exists, and return a 410, I would just save myself the trouble and return a 404 normally across all of those pages. So that difference between 410 and 404 is sometimes overrated.
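A minimal sketch of that advice: serve a plain 404 for unpublished articles, with a short message pointing readers at content that still exists. The paths and page copy are invented.

```python
# Sketch: a helpful 404 for unpublished news articles, rather than a
# special-cased 410. Paths and page copy are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

UNPUBLISHED = {"/articles/retracted-story"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in UNPUBLISHED:
            # Plain 404; per the discussion above, a 410 would make
            # little practical difference for Google.
            self.send_response(404)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            self.end_headers()
            self.wfile.write(
                b"<h1>This article is no longer available.</h1>"
                b'<p>See our <a href="/news">latest news</a> instead.</p>'
            )
        else:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")

if __name__ == "__main__":
    HTTPServer(("", 8000), Handler).serve_forever()
```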
Let's see. If we have a backlink from a site that has declining traffic, is the value of that backlink devalued in the eyes of Google as a result of the decline in traffic on the site we have the backlink from? If so, should we disavow?

So first off, you do not need to disavow any links just because you think the site is maybe not getting as much traffic as it used to. So definitely no need to disavow anything like that. If these are natural links to your site, if these are normal sites that are linking to your website, there is nothing wrong with that. There is no need to disavow anything like that. I remember way back in the early days, when we had PageRank in the toolbar, people would be like, oh, do I need to ask people to remove links to my site if their PageRank is 3 or their PageRank is 2? It was like, low-PageRank links will cause my site harm. And that's also not the case. So just because a site is not popular, or not as popular as it used to be, that does not mean that a link from that site is in any way bad, or that you need to react to that in any way and say, oh, this is bad, I need to block it from Google.

So I would really look at it more like: this is a link that I placed there myself a couple of years ago, when I didn't know better, and I can't contact the webmaster anymore, or they don't want to remove it, and I don't want Google to take it into account. Then that's something you might want to disavow. Or if you had someone do that on a large scale, like go out and buy a ton of links, and now you have a manual action on your website for link spam, then maybe that's something you'd want to disavow to help clean things up. But just because a site is not as good as it used to be, or doesn't get as much traffic as it used to, that is definitely not a situation where you need to disavow anything. So I would really use the disavow tool only in those cases where you do have a manual action when it comes to link spam, or when you look at your links and you're like, Google is going to catch me and give me a manual action next week, kind of thing. That's another reason to use the disavow tool, to preemptively catch that. Those are really the primary use cases of the disavow tool. Definitely not for things that just aren't as popular as they used to be. Another really common variation of this question is: what if I got a link from a page in a language that I don't understand? Do I need to disavow that? And that's also not something that you need to disavow. It can happen that you get a link from a website in a completely different language, and it's a normal link. You don't need to block that.

Can Google detect that my content is the same, even if I change my WordPress theme, or even if I modify my HTML5 structure?

Sometimes. So a lot of times, we can recognize that the content is the same. And if you're moving things around on your website, you move things from one URL to another, and for whatever reason you can't set up redirects properly, then we will try to recognize that and treat it more as a move rather than as a completely separate URL. So that's something we can kind of do. But obviously, if you're moving things around on your website, using a redirect is much cleaner and makes it much easier for us to recognize what's actually happening.

The question goes on, I guess, in a different direction. Some people copy my content. Is Google prepared to know that my content was there first? And if so, what happens if I then modify my article to add some more paragraphs? Will Google still consider me the author? What would happen if I change the authority of an article?

So I'm not quite sure how you change the authority of an article. But in general, we do try to recognize original content and show it appropriately in Search. That doesn't mean it will always rank first. That's something I see quite a bit with our blog, for example, in that we will publish a blog post, and it's all fantastic. And then Barry goes out and writes a blog post about our blog post. And then suddenly Barry ranks for our content. Which, from one point of view, it's like, why is this guy ranking for our content? But on the other hand, he adds a lot of value on those pages, those are completely valid pages, and it makes sense sometimes to rank them above our content. So just because someone else who is writing about your content, or using parts of your content, ranks above your content, doesn't necessarily mean that something is broken or that there's something wrong there. Sometimes it's just, well, other people write about the same topic, and they quote some of your content, and that's perfectly fine and valid.

My question is about backlinks. I was reading some articles, and I commented on an article. As we know, mostly when we comment on an article, they take our name and our URL. So I put my name, not the company name, and the company URL. And now the problem is they're showing my comment in the latest-comments section, and the latest comments are showing on all pages of their site. So now I have 33,000 backlinks coming from that site. Is this a sign of backlink spam, something that I should disavow, or should I do something else?
So if this is a legitimate comment on a site, then that's not really backlink spam. That's essentially a comment that you placed on a website. I think it's a slippery slope, in the sense that people sometimes take this and say, oh, if this is a natural link, then maybe I should go out and comment on 100,000 sites and get 100,000 links. And at that point, our algorithms will probably say, well, all of these links are links that the person placed themselves, so maybe we should ignore those links. But if these are individual comments that are left across the web, and you're interacting with the rest of the web in comment sections, that's completely fine. In a lot of cases, these kinds of comment links will have a nofollow or the rel=ugc attribute nowadays anyway, so it's not that we would take those links into account when it comes to ranking. But we would still show them in Search Console when you look at the links report, because we also show nofollow links there. So, I guess, circling back: just because you suddenly see a lot of links from one website doesn't necessarily mean that anything bad is happening or that you need to disavow that website. If those are natural links, those are natural links. There is nothing to disavow there.

We have a directory site that's been online for nine years. We've collected 5 million reviews and have had rich snippet local business review star ratings display for more than eight years. Around December 6, these went away, yet we have no manual action against us. And it cut the rest of the question off. Let me see if I can reload. Let's see. Where are we? Here we go. We're stumped, while competitors remain intact. Can you provide us some insight?

So it would be kind of useful to know which site specifically you're looking at there. Maybe if you can post in the forum, we can take a look there, or other people can take a look. We did change some things with regards to the review star ratings, when we show them in Search, with regards to what type of reviews we would show. So that might be something where you're seeing those changes. And those kinds of policy changes, from our point of view, can happen over time, where we say, well, this is something that didn't work out so well, so we're going to change the way that we show them in Search. And if you're implementing it in a way that we decided is not something we want to show in Search, then that's essentially something we wouldn't show in Search. With regards to competitors remaining intact, sometimes people implement rich results in the wrong way, and that's not necessarily something I'd recommend copying. So if your competitors are implementing them incorrectly and they're still being shown, then my recommendation would be not to just blindly copy the wrong markup that they're doing, but rather to leave it at that.

OK, wow, we're running really low on time. So maybe I'll just open it up for any questions from any of you for the last couple of minutes.

Yeah, I have a quick question. So we've got a pretty big website, about 200,000 URLs, and the sitemaps are dynamically created every day, I think closer to the end of the day. In the last month or so, I've really seen the last-read numbers go down in Search Console, in terms of how often all of our different sitemaps are read. It used to be like every day, every other day, and now it's just really flagging.
And since we're creating so much new content day in and day out, it's pretty crucial that it's being read all the time. So one, is there any reason why that's happening? And two, is there any way we can help improve that? Thank you.

Yeah, I'd almost tend to post that in the help forum to try to get some more eyes on it, because it might be that there are ways that you can improve the structure of the sitemaps. It could be that maybe we're just not catching up with the amount of crawling on the website itself. It could also be that, just from the kind of content that you're creating, we're looking at it and saying, well, it's not worthwhile to index all of the new content all the time. Maybe there are ways to concentrate things a little bit. But a lot of that is feedback you could get from the help forum, so I'd try that there. You can also ping me on Twitter, maybe send me a URL that I can take a look at, to see if there is anything generally wrong there or not.

I have a question. All right. Rick from Bing basically made a short video saying keyword research practices are kind of going away and should be replaced by intent research practices. So it's not thinking about keywords, it's thinking about intent. Do you have any thoughts about that?

I didn't see that, but I think, in general, there is probably always going to be a little bit of room for keyword research, because you're providing those words to users. And even if search engines are trying to understand more than just those words, showing specific words to users can make it a little bit easier for them to understand what your pages are about, and can sometimes drive a little bit of that conversion process. So I don't see these things going away completely, but I'm sure search engines will get better over time at understanding more than just the words on a page.

Thank you.

All right, I need to jump out. It's been great having you all here. Thanks for joining in for the new year. I have the next one lined up on Friday, so feel free to join there, or the German one on Thursday, of course. All right, bye, everyone. See you next time.