All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland, well, at home, at least. And part of what we do are these Office Hours Hangouts where folks can join in and ask questions around their website, around search, Google, SEO, whatever. And we'll try to find answers. As always, a bunch of stuff was submitted already on YouTube. But if any of you want to get started with the first question, you're welcome to jump in now. Hey, John. Hi. Hi. I have a quick question about the hreflang links on Shopify web shops. So we have visitors from the UK, the US, Australia, New Zealand, and other English-speaking countries. So how do we deal with duplicate content, since our product descriptions and our collection pages do have the same content? And how do we deal with duplicate content with the hreflang links on these pages? Good question. So in general, you don't really need to worry about the duplicate content part. That's something that we can usually figure out on our own. I think the bigger question is really whether or not you need to have country-specific versions of those pages indexed or not. And for the most part, I would tend towards having fewer pages indexed rather than more pages. So if there's any way that you can limit the number of country-specific pages that you actually make available for indexing, that tends to make everything a lot easier. Because we can crawl faster. We can index things better. We can rank them a little bit better. Just everything is a lot easier if there are fewer versions. If you do need to make country-specific pages, then using the hreflang annotations is a good idea. With hreflang, you can do that on a per-page basis. So you don't necessarily need to do it across all types of pages or across all pages within the website in general.
I see some websites do it for their home pages, for example, because that's something that's kind of unique per country. But maybe the products are actually the same across different countries. So that's something where you can try to find that balance yourself. And really, the main thing that I would try to focus on is just using fewer versions rather than just going out and saying, I can automate this, I'll put it on 50 country versions and then five language versions per country. It's easy to do programmatically, but then you create all of these URLs and it just makes things so much harder. All right, that makes sense. So would you say that the hreflang tags kind of eliminate duplicate content, or do we still have to localize the English, because US and UK are a little different? The hreflang helps us to recognize which URL to show. But we would still recognize that it's duplicate content in cases like this. And what would happen is, for indexing, we would try to fold it together, pick one version for indexing. And then when showing it to the user, we'll try to swap out the URL against the appropriate local version. So it's not something that's critical, that you'll get a manual action or something will go wrong with your website. It's just that it makes it a little bit trickier. Cool, thanks a lot, John. That's everything. Sure. All right, someone else had a question as well, I think. Hi, John. Hi. Hi. I wrote it in the comments, but I can ask it. Our URLs use plusses in our product description and landing pages. Is this best practice, or should we be using dashes, or does it make much difference to the crawler and for crawlability? Yeah, it doesn't make any difference for crawling. Usually, people tend to use dashes more, because it makes everything a little bit easier with regards to kind of when you're crawling it yourself or when you're looking at it yourself, because a lot of tools, they swap out the plus against the space.
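As a sketch of the per-page hreflang annotations being discussed, the link elements on each country version might look like this; the domain and URL paths here are hypothetical placeholders, not from the question:

```html
<!-- On the hypothetical US page https://example.com/us/product -->
<link rel="alternate" hreflang="en-us" href="https://example.com/us/product" />
<link rel="alternate" hreflang="en-gb" href="https://example.com/uk/product" />
<link rel="alternate" hreflang="en-au" href="https://example.com/au/product" />
<link rel="alternate" hreflang="en-nz" href="https://example.com/nz/product" />
<link rel="alternate" hreflang="x-default" href="https://example.com/product" />
```

Each version lists all of the alternates, including itself, and the annotations need to be reciprocal across the whole set; since this works per page, it can be added only on the page types where the country versions actually differ.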
And then if you have spaces in URLs, then sometimes things are a bit trickier to track. Like if you have them in an Excel sheet or some other text file with spaces in the URL, sometimes it makes things a little bit harder. That's why people tend to go towards dashes or underscores, but plusses work just as well. Purely from a technical point of view, that should work. OK, that's good to know. I've got another question about local search. Sure. So pre-lockdown, we were using the Google posts section, in regards to that, because we've got over 400 stores. Could we post the same blog post on all 400 stores, or should it be unique to each store, each location? Does it matter? I assume you mean the Google My Business posts. Yeah, I don't know offhand. My understanding is you can do whatever you want there, because it's really only visible for those local locations. But I would probably double check with maybe some folks in the Google My Business help forum just to make sure. Because as far as I understand, there's an API now for posts as well. So that sounds like they're OK with posting things a little bit more frequently. That's fine. Yeah, but thank you. Sure. John? Hi. I have a question. I notice sometimes that companies do their categories, let's say companyname.com, slash product, and then they'll do the same thing, slash pricing. And then for some odd reason, on that same site, they'll do blog.companyname.com. Is there a preferred structure? Should it have been slash blog for consistency? That's essentially up to the site. Sometimes people use it on a subdomain just purely for technical reasons. If they have a different infrastructure for the blog, then maybe they'll put it on a different subdomain so they can host it separately. But just purely from an SEO point of view, you can do that however you want. I tend to favor having everything on the same host name just to make it easier for tracking.
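The plus-versus-space behavior mentioned here, where tools swap a plus out for a space, can be seen with a standard URL library; the product slugs below are made-up examples:

```python
from urllib.parse import unquote_plus

# Hypothetical product URL slugs, one using plusses and one using dashes.
plus_slug = "blue+running+shoes"
dash_slug = "blue-running-shoes"

# Many tools decode "+" as an encoded space, so the plus version turns
# into a string with spaces, which is harder to track in spreadsheets
# or plain text files; the dash version passes through unchanged.
print(unquote_plus(plus_slug))  # blue running shoes
print(unquote_plus(dash_slug))  # blue-running-shoes
```

This is purely a tooling convenience; as noted, Googlebot crawls both forms just fine.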
So if you have a blog, maybe put it in a subdirectory instead of a subdomain. But essentially, you can do it any way you want. OK, one other quick question. I notice on Google documents that the internal linking sort of takes you, doesn't take you to a separate tab. Usually in the midst of like a white paper or something, if you click on the link, it will take you to what it's referencing. Is there a reason for that? I mean, I've always thought the separate tab just makes it easier to come back. But I have noticed that on Google documents and didn't know if that was a best practice or just that's your style. I really don't know. I never noticed that. At least from an SEO point of view, it doesn't change anything if you open it in a new tab or within the same tab. So that probably is OK. My guess is it just depends on whoever was writing that content. I know for some areas, the tech writers have very clear guidelines on how to structure things, just to make things as consistent as possible. But it might be that maybe in white papers or in blog posts, people are a little bit more flexible and just try things out in different ways. OK, thank you. Sure. Hey, John, quick other question. I know this is not a support forum, but I'm using the change of address tool in Search Console. And I've had a website being moved since October 19. I see this on multiple of my sites. And I saw that last week, you mentioned something about having redirects up for at least six months. So what should we do about the address tool that's taking longer than usual or expected? So you set up the change of address, and you also have the redirect in place? Everything is still in place. It's been in place since October, yeah. Yeah. And what are you seeing? Or what is? It just says, still says, this website is being moved. Oh, OK. Yeah, I think that setting just remains in place for a certain period of time.
So it's essentially just the UI in Search Console that says, by the way, if you want to do something with the website, keep in mind that you actually wanted to have it move somewhere else. All right, thanks. OK, let me run through some of the submitted questions. We probably have more room for your questions as well along the way. And if there's something that comes up in between that kind of fits with one of the questions, feel free to jump in. Let's see, the first one. Does the link value depreciate with age? Like the older a link is, the less value it forwards? I don't know, maybe it should be the opposite, right? It's like the older the link is, maybe the stronger it should be. But just purely from an SEO point of view, on the one hand, it feels like you're probably focusing too much on links. So that's kind of the one thing. On the other hand, it's not so much that we keep track of the age of the links, but rather that sites evolve over time. So for example, if you get a link from a newspaper website and that's in an article that's currently linked on the front page because it's a really important article, then obviously that's going to be a really important link for us, because we notice that that link is there. It's linked fairly closely to the home page. It's something that's really relevant at the moment. However, that news website is going to evolve. And over time, that article that might have been on the front page is suddenly on page 2 or is in an archive somewhere or is in a section for articles from the year 2020, which might be like 50 years ago at some point. So it's not as relevant anymore there. So it's not so much that the link itself is aging, but rather that the website where that link was has evolved. And over time, that place where that link was is no longer as relevant as it used to be. So that's something that, especially when it comes to news websites where things are changing fairly quickly, is definitely always evolving.
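On the change-of-address topic from a moment ago, the redirects that need to stay in place for at least six months are typically site-wide permanent redirects. As one hedged example, using Apache's mod_alias and made-up domain names:

```text
# In the old domain's Apache config or .htaccess (hypothetical domains):
# "Redirect permanent" maps every path on the old host to the same
# path on the new host, returning a 301 status code.
Redirect permanent / https://www.new-example.com/
```

The equivalent can be done with nginx, a CDN rule, or the hosting panel; what matters is that each old URL returns a 301 to its counterpart, alongside the change-of-address setting in Search Console.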
If we have 10 subpages with a list of product categories and all of them have a canonical instruction directed to the first of them, how does this affect, from a ranking point of view, the products from subpages 2 to 10 of the list? This is something that comes up from time to time. It's like you have a paginated set and you just want the first page indexed, so you set the canonical to the first page of that set. And generally, what would happen here is that we would see that canonical. And if we process that canonical and think, oh, this is a good selection, then we will only index the first page there. So essentially, everything that is on these pages where you're saying this page itself is not canonical, we would drop that from the index. We would not index that. Assuming, again, we process that canonical and say, oh, you're right, this is a good selection. Canonicalization is a bit tricky in that it's not just the rel canonical on a page that we use. We do want to make sure that the content of the page is equivalent. We take into account other factors as well. So it's not like a clear one-to-one relationship there. But essentially, if you have things on pages and they're only mentioned on those specific pages, and at the same time, you're telling us that we should be indexing a different page, then it's very likely that those things on the non-canonical page will essentially be dropped from our search results because you're telling us that we shouldn't be indexing them. Let's see. Is there any difference between link value from static and main content? I mean, links from navigations or product pages? Do they have the same value? Again, it feels a little bit like you're focusing too much on links rather than trying to make a website that works well for users. The tricky part with a lot of these things is obviously that we try to make our algorithms so that they respond to things similar to how users would respond.
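For the paginated-set question above, the rel canonical on each subpage would look something like this; the category URL is an invented example:

```html
<!-- On a hypothetical subpage like https://example.com/widgets?page=3 -->
<link rel="canonical" href="https://example.com/widgets" />
```

Since the canonical is treated as a strong hint rather than a directive, whether subpages 2 to 10 actually get folded into page 1 also depends on how equivalent their content looks; and as described, products mentioned only on those subpages would then drop out of the index along with them.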
So if you're trying to essentially game the algorithms the way that they're set up now, that's something that if users change their behavior and users decide, oh, this kind of content is no longer as interesting to me or no longer as useful for me, then that's something where you're essentially stuck with that old version. So with this specific situation, usually what happens here is we do focus on the primary content on a page, and that's something that makes sense from a user point of view. If you have one page, then usually you focus on what is actually unique about this page, and you kind of ignore the rest. I mean, you still use it, but you primarily focus on the primary content. So my general recommendation here, if you're doing internal linking, is essentially just to focus on making your website work well for users. We have a number of stores, and our store addresses are also displayed on certain category pages, which offer those services specific to those stores. Next to the text of those addresses, we also have an internal link called branch details, going to specific branch information page. How well does Google understand the association between the two with such a generic text anchor? From an SEO point of view, should we make the anchor more specific and have the city name of the branch, something which distinguishes it from other branches on other pages, like product A from RB branch? Or does that sound like too much? So this is something where we try to pull in essentially the information that we have about your website, the different pages that we know, the kinds of content that we have on your site. So if you have one page that's about one specific topic, and from there you link to other pages within your website, which are about kind of a subtopic of that, then it's not so much the case that you need to provide all of the context through the anchor text, but rather we understand a little bit of that through the linking as well. 
So in a case like this, or in a general case where you have category pages and then you have subcategories or maybe products that you're linking to from those higher level pages, there is not really a need to provide full context of that category page on every link that you have there. So if you have maybe a shoe store, obviously, like everyone, I guess, if you have a shoe store and you have a category for running shoes and as a subcategory, maybe you have, I don't know, different colored running shoes. So maybe you'd have like blue running shoes or yellow running shoes. It's not so much the case that you need to provide the full context of all of those details with every link that you have on the page to the individual product. So you wouldn't need to, when you're linking to a specific product, say, well, this is a running shoe and it's in yellow and it's this brand or this type of running shoe. But instead, it's usually enough just to provide kind of the more detailed context there. And that makes sense for users as well, because when users are looking at the page, they focus on the parts that are kind of unique, that stand out with regards to the link to the individual product. It wouldn't really be so useful for them if you have a list of, let's say, individual running shoes and for each of the shoes, you mentioned all of the attributes of the categories that are higher there as well. So usually instead, it just makes sense that, as you drill down, the breadcrumb trail lets you know what the categories are, and you just want the more specific information there. So from that point of view, I'd try to just focus the anchor text on what makes sense, what's kind of unique within the existing context of those pages. I want to move some of my best articles from one domain to a new one, and I don't want to redirect or move the whole domain to avoid sending bad signals to the good ones as well.
I want to start a completely fresh site from an algorithm point of view. What's the best way to do it? Should I delete the content of my old domain first and then add it as a new domain? Or what's the best way to avoid cross-domain duplicate content issues? Would noindexing the old articles help if the content stayed live on the old domain as well? What about links that go to those articles that I want to move? Any way to safely redirect them without harming the new site? Yeah, I think this is a tricky situation where you kind of need to make up your mind what you want. Because on the one hand, you're saying you want to redirect to a different domain and you want to keep all of the good things. On the other hand, you're saying you don't actually want to be associated with that old domain. You want to start fresh, right? So it's something where you almost need to make up your mind there. And depending on how that works out, that's something where you can kind of go one direction or go the other direction. I think it's generally fine to take individual pieces of content and to redirect them to other websites. It's important to keep in mind that what you're doing is kind of splitting the website up into individual pieces, which means from an algorithm point of view, it tends to be a little bit harder for us to understand exactly how we should process that. So essentially, we need to figure out which of these signals we can kind of forward, which of these signals we have to recalculate, looking at the new website, looking at the old website. And from a practical point of view, that tends to mean that things just take a lot longer to be processed. So if you're moving one to one from one domain to another, that's something that's easy for us to process. If you're splitting things up or if you're merging things together, that's always going to be a lot harder.
So that's just purely from splitting things up with regards to kind of not being associated with the old website and still having something on the new website. Usually, my recommendation here is if you want to start completely fresh, make sure you're starting completely fresh. So avoid reusing the existing content. Avoid reusing kind of the existing URL structure. Avoid redirecting or linking to the new website. Because if you really want to start fresh, if you realize, for example, you did a lot of things wrong and it's going to take so much time to actually clean up all of these things that happened over the years, then starting completely fresh makes it such that it's like we can really start fresh with a new website. Whereas as soon as you do redirects or set up links or set up canonicals or reuse the same content on another website, then suddenly it's a situation where our algorithms look at that new website and say, well, this is actually part of the old website. We will help the webmaster because clearly they forgot to set up a site move here. We will help the webmaster by linking these two together and by treating them as one website. And that sounds like that's the kind of thing that you want to avoid. So if you really want to separate things out, do that completely and do that as cleanly as possible. John, quick follow-up question to that. Sure. If I were to trash my old website and start a new one, would Google, if someone picked up my old website and restored my content, would Google know that it's a new owner? Do they check by any chance? Or am I just stuck with a competitor that took my old website and my content? Should I keep the domain? Is that the delay? Yeah, I would definitely keep the domain. That's something where, especially if you've built up something over time, I would try to keep those domains as long as possible. You don't need to put any content on them. 
But by keeping the domain, you're kind of preventing that situation that someone takes over the domain and they reuse your old content or they post something completely different. And then suddenly, your brand or your old brand name is associated with this content that you have no control over. Is there any period of time where you have to put down a website and let it go blank before Google knows it's a new owner? Not necessarily. The tricky part is that sometimes people mess up and the website goes down or they forget to renew. And then after a while, they realize and they renew their domain name and they put the content back up. So it's something where sometimes our algorithms are trying to be helpful. And that might be too helpful in a case like this. OK, so you could be lucky with an expired domain or unlucky. Yeah, I definitely wouldn't use expired domains as a clear strategy because you just don't know what you're getting into with regards to the positive or the negative sides. All right, thanks. Sure. OK, I noticed that there are a few people who had some questions around Google Discover. I don't really have any clear answers on that at the moment, but I'll definitely pass that on to the team. So I think some of these were about an Android website in Italy that's not being shown in Google Discover. And another one was also, I think, a tech site from Sweden that was not being shown in Google Discover. I think the tricky part is Discover is a very organic search feature in that there are no clear things that you need to do to be shown in Discover. And it's very possible that our algorithms decide, like, we should highlight this website more in Discover, or maybe we should highlight this website a little bit less in Discover. And that's very hard to kind of take as something where Google is doing it correctly or Google is doing it incorrectly, because there are obviously no queries involved.
It's not the case that someone is searching for a website and then not seeing it. But rather, we try to bubble up information that we think is useful to users through Discover. But some of these questions here have a lot of details, so I'll definitely copy and paste that over to the Discover team to see if there's something that on our side we need to improve or if there's something that maybe we can improve in our documentation to make it a little bit easier to understand what you might be seeing in cases like this. We have a small budget to freshen things up in our e-commerce website, so we're having to choose between desktop or mobile. Am I correct in believing that both desktop and mobile rankings are derived from Google indexing the mobile version of the website? So would you say if we had to choose which one from an SEO point of view to improve, then the mobile version would be more important, as this would impact rankings on both? Although we receive more traffic on mobile, the bounce rate is also higher. So in general, if we've switched the website over to mobile-first indexing, which we've done for most websites now, then we would only be using the mobile version for indexing, as a basis for understanding what is on the website, as a basis for understanding the context between individual URLs of a website. So we would really only be using the mobile version for that. Especially if you have a different version on desktop and mobile, then we would essentially just be using the mobile version there. So if you're really in this, I'd say, tough situation that you need to decide which of your versions is your favorite one to work on more, then if we've switched to mobile-first indexing, then I think the mobile version is probably the one that you should be focusing on.
However, I wouldn't say that you can completely delete the desktop version, and everything will continue to be OK, because probably you still have a lot of users on desktop as well. Usually my recommendation for people who are working on the website and trying to figure out desktop or mobile is to find a way to move more of the content to a more responsive design so that you just have one version. Because if you just have one version to maintain, then you don't have to make that choice, and by fixing the content once or the pages once, then you're essentially fixing it for all of your users. So moving to a responsive design would make this a lot easier. If you absolutely can't move to a responsive design, you need to pick one or the other, and you really only care about SEO, which is kind of tricky in itself, then the mobile version would be the one to focus on. Does the product listing page lose PageRank or some other signals if the product listing page has links pointing to filtered listing pages that have a meta robots noindex tag? So I think this kind of goes back to theoretical understanding of how PageRank flows within a website. And this is actually something that our systems are pretty good at, so it's not something where I would say that a normal website needs to worry about how these kinds of pages are linked together with regards to how PageRank flows within a website. Usually the bigger impact here, especially with an e-commerce website, if it's a larger e-commerce website, is the crawling side, in that if you have links to a lot of filtered pages that have unique URLs, maybe there are extra filters or facets in those URLs themselves, then that's something where we would have to go off and crawl all of these pages before we recognize there's actually a noindex there. So from a crawling point of view, I would try to keep it as clean as possible.
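The filtered listing pages in that question would carry a robots meta tag along these lines; the facet URL is a made-up example:

```html
<!-- On a hypothetical filtered page like /shoes?color=blue&size=42 -->
<meta name="robots" content="noindex" />
```

Googlebot still has to fetch the URL before it can see this tag, which is the crawling cost just described, so on a large site it also helps to limit how many distinct faceted URLs are linked in the first place.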
But purely from an indexing point of view or from a PageRank point of view, this is not something that I would really worry about. Does a link in the content of the page, which already exists in the main navigation, make any sense from a ranking point of view? So I think this kind of goes back to links in the main content versus in things like the sidebar or the footer, those kinds of things. From my point of view, this definitely makes sense if it makes sense for your users. Purely from an SEO point of view, if a link from one page to another already exists on a page, then having that same link there a second time doesn't change anything. But users are probably more important, and you probably want to make sure that they actually convert on your website. So making a website that works well for users would be my priority here. It's not that I would stop linking internally within a website just because you already have one link there, which might be enough from an SEO point of view. If an internal link is underneath some text but related to that topic, does Google still understand and recognize that, or should the internal link always be within the text? We do try to understand the context of internal links. So things like anchor text and the text around that link, they do help us to understand a little bit more. But if you're asking about the link within a block of text or just right after a block of text, I don't think that plays any role at all. So that's not something I'd worry about. We have a number of categories on our e-commerce site with one or two items in them. While it might be easy for users to visually find them, would it be better if these categories were removed or 301'd and the products moved into bigger categories, as currently those categories are weak and have created a lot more internal links going around the site? Yeah, so this is something where you almost need to figure this out for your website individually.
Usually it's more a question of if you have kind of a flat website architecture where everything is linked together, maybe where you have very few categories, or if you have a very deep website structure where, essentially, any user or search engine, if they want to go to a specific product page, they have to jump through a bunch of different categories or subcategory sections to actually find those pages. And essentially what you want is something that's kind of a mix there. You don't want it too flat, because then it's hard to understand which pages belong together. And you don't want it too deep, because then users have to click around a lot to find those pages, and search engines have a lot more trouble finding and recognizing the importance of individual pages on your website. So finding that kind of balance, where you have enough items in each category to kind of make sure that it's more of a, I don't know, a balanced view of a website where it's not too flat, not too deep. That's something that I tend to recommend, but there is no hard, I don't know, guideline where I'd say two products is not enough, or you should make subcategories if you have more than 20 products. That's something that you almost need to look at on a website-to-website basis. And it's sometimes hard because obviously your business will grow over time, and you'll have more of some kind of product, and you'll have fewer of other kinds of products. And at some point, you almost need to restructure the kind of category, subcategory setup that you have there. And purely from a search point of view, these kinds of restructurings to make things a little bit more balanced, I'd say they definitely make sense, but it's always something that involves a lot of effort to make sure that you're setting up redirects and everything properly. So I wouldn't just blindly jump in and say, two products is not enough. I will redirect them and fix things.
The long question about Discover again, I'll take that to the team. The blog section of our website just has a lot of articles categorized by the month they were uploaded. There is no grouping as such as to putting all related articles within a particular section together, but there is some internal linking between related articles. Could that be the main reason that our articles are not indexed anymore? They used to all be indexed, but then after a particular algorithm update, they were de-indexed. It's the only part of the website that is de-indexed, even though the information and the articles are good quality. I don't think this would be a reason for our systems to de-index a lot of articles on your website. Essentially, we need to be able to discover those pages so that we have the opportunity to go out and index them. But if we can discover those pages and they're just linked in a way that might not be optimal, then that generally wouldn't be a reason for us to say those pages themselves are not useful enough to actually be indexed. Usually, when I see questions like this where it's like, we have a bunch of articles and suddenly they're not being indexed as much as they used to be, it's generally less of a technical issue and more of a quality issue. So it's not so much that we can't find those pages, because probably they're still linked within your website, or at least they were findable in the past. It's probably not the case that we can't index them, because that's probably easy for you to check. And you can double check that there's not a noindex and that they can be crawled properly without crawl errors. That's usually pretty easy to double check. But it's more a matter of just our algorithms looking at that part of the website or the website overall and saying, we don't know if it actually makes that much sense for us to index so many pages here. Maybe it's OK if we just index a smaller part of the website instead.
So that's something where sometimes it's worth taking a hard look at your site or getting someone more neutral to look at your website and to give you some advice and say, well, it looks like all of these articles are about irrelevant things that happened at some point. And maybe they were relevant at some point in the past, but they're not so important anymore. It might be that someone looks at these articles and says, well, it looks like you've been rewriting existing articles from maybe other people's websites. Maybe that's not such a great thing to do. Maybe that's something that Google doesn't really appreciate. Which is definitely true. But all of these things are elements where sometimes when you're creating this website, it's like your baby and you're doing the best that you can to make it grow, but having someone neutral look at it can sometimes give you a little bit of a kick to actually be a little bit more critical with your website, with your own content, and to find ways to significantly improve it. And I realize that's always hard to hear. And there's no simple path to significantly improving a website. But sometimes that's really the direction that you need to head if you care about this website in the way that it's shown in search. When you search for basically anything Meme Generator, three pages of the same website show up. So there's one website that shows up, and other sites, which arguably have better Meme Generators, don't show up. So I can't judge the quality of these Meme Generators, so it's really hard to say there. But in general, it can happen that a bunch of pages from the same website show up for individual queries. And that's something where with our algorithms, we try to limit the number of pages that we show for normal generic queries from the same website. Usually that limit is, I don't know, maybe two or three results from the same website.
But if our algorithms have kind of a sense for the user actually searching for one specific website, then we will definitely show more results from that website. And sometimes our algorithms are kind of on the edge there, or don't understand the query properly, and they show more from one website than maybe they would show from others as well. But in general, it's not something where we would say it's clearly broken if there are three pages shown from the same website. So that's something where if you're seeing individual queries where you feel, well, this could be improved if there were more diversity in those search results, then I would definitely use the feedback link on the bottom of the search results pages, because that does go to the team that works on these pages. But at the same time, I would also see it as something where reasonable people can argue in both directions on that topic. And it's not something where, for the most part, we would see this as a clear bug if we showed more pages from the same website for a particular query. Internal WordPress search question: Ahrefs shows me duplicate content issues on my internal search pages. Is this a problem for Google, even when they're noindex? Can you tell us a best practice for internal search on content pages? So if these pages are blocked by robots.txt or by a noindex robots meta tag, then essentially they would not be indexed, and we would not see that duplicate content. So just purely from that point of view, it's probably good to take this kind of feedback from tools and to double-check that things are working the way that you want. But in this particular case, I think that's OK. Even if there's duplicate content on these pages, if they're not being indexed, then you're kind of doing the right thing. With regards to internal search pages in general, I would tend to differentiate between different kinds of internal searches. 
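The two blocking mechanisms mentioned here look like this in practice (the paths are hypothetical, for a site whose internal search lives under /search):

```text
# robots.txt: keep crawlers away from internal search entirely
User-agent: *
Disallow: /search
```

Alternatively, per page, allow crawling but block indexing:

```html
<!-- on each internal search results page -->
<meta name="robots" content="noindex">
```

The trade-offs between the two options are discussed next.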
In particular, if you make your internal search pages so that they're more like category pages, then that feels like something that makes sense to be indexed, at least maybe the first page of that. On the other hand, if the internal search pages are essentially any random keywords that people can provide, and you will create a search page for that, then that feels like something that probably is not worth indexing individually. And then it's more a question of, should you be blocking these by robots.txt so that they don't hit your server at all, or should you just use noindex? From our point of view, both of these are valid options. The robots.txt option makes a lot of sense if those searches on your website create a significant load, in that you don't want random crawlers to get stuck in those search pages and start to try to crawl through a whole bunch of pages like that. On the other hand, using robots.txt also means that we don't know that these pages should not be indexed. And it's possible that if someone links to one of these pages, we would index that page without knowing the content, of course. So noindex might be better. I tend to focus more on the noindex side rather than using robots.txt for internal search pages, just because that gives you a little bit more control. And again, if these search pages are more like category pages in the end, or if you use the same setup for your category pages as you would use for your internal searches, then having those category-like pages indexed is perfectly fine, because they're also useful for users. Client has a medical-based information page. Now they want to also create a community or forum on their subdomain. During the time of E-A-T, should they totally noindex the forum? Because normal users will discuss medical topics there. Or does Google recognize that these topics are community-driven? We want to avoid conflicts and problems with the main domain. So E-A-T is not really a ranking factor. 
It's not the case that you need to optimize those attributes for your website in order to rank well. So that's kind of the one thing there. But I would try to see this more from a user point of view rather than purely an SEO point of view. And from a user point of view, it can certainly make sense to have some kind of a forum or a community associated with an existing, maybe well-known medical brand. But you probably want to make sure that it's positioned in a way that it's really clear that this is actually user-generated content. Maybe it's not vetted by medical professionals. Maybe it's people who are just anecdotally giving advice based on what they observed when, say, they had one of these diseases themselves. And anecdotally, they noticed that things got better when they did something specific. And purely from a medical point of view, that might be more correlation than causation. So it might not be really clear medical advice that people are posting there. So from that point of view, you probably just want to make sure that it's positioned appropriately for users. So instead of focusing on how that would work from an SEO point of view, make sure it works well for users first. And then usually, the SEO side falls into place on its own. So that's kind of the direction that I would head there. If, in some cases, I can't automate the sitemap on the back end, will a crawler-generated sitemap be a second-best option? Or in this case, is it just better not to include an XML sitemap at all? I would strongly recommend finding ways to automate the sitemap file, because you really want to make sure that every small change that you make on a website is reflected in the sitemap file. And usually that means, if you have a larger website, it's a lot easier for us to actually find those changes. 
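As a sketch of the automated approach recommended here: a sitemap generator can be a few lines that read page URLs and modification dates straight from the CMS database, so every content change is reflected immediately. This is a hypothetical example; in a real setup the page list would come from your own data store rather than a literal list:

```python
from datetime import date
from xml.etree import ElementTree as ET

def build_sitemap(pages):
    """Build a sitemap XML string from (url, last_modified_date) pairs.

    In a real setup `pages` would be read straight from the CMS
    database, so every content change shows up in the sitemap at once.
    """
    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url, last_mod in pages:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
        # lastmod lets crawlers spot changed pages without re-fetching everything
        ET.SubElement(entry, "lastmod").text = last_mod.isoformat()
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    ("https://example.com/", date(2024, 5, 1)),
    ("https://example.com/products/widget", date(2024, 5, 3)),
])
```

The key design point is that the sitemap is rendered from the same source of truth as the pages themselves, rather than from a separate crawl of the site.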
I think if you're crawling your own website to generate a sitemap file, then you also have to keep in mind that maybe Google will just crawl your website and figure out which URLs are there as well. So it's hard to say that you're actually staying ahead of the curve by providing a sitemap file versus just letting Google crawl on its own right away. However, sometimes this does help us to get started with a website. And even with smaller websites, sometimes if you change something that's kind of lower level within the website, then bubbling that up through an updated sitemap file can help. So my recommendation here would be to strongly look towards finding automated solutions for the sitemap file. But if you still want to provide your own sitemap file and you crawl your own website for that, then that's still an option. It's not something that you should avoid, let's say. What would be the best solution if we'd like to run a test for two weeks where we change bits of the content, such as, for example, an H1? The current setup limits us to changing the content directly on the page for a portion of the traffic. This means that users and crawlers see the changed H1. Could this impact rankings if the H1 is completely different from what we had? What if we ran a test like this every few weeks? So I think the question kind of boils down to: if I change my content temporarily, will Google reflect that in search? And yes, if we see that the content has changed, then we will reflect that in search. So these kinds of tests where you're changing the content to see what the reaction is, they would tend to have a reaction. It might be that the reaction is very subtle and very small. So if you're just kind of moving text blocks around on a page and Google sees the same text, just in different positions on a page, then probably for the most part, that wouldn't change anything. But it would still be indexed as a new page, because you changed things on that page. 
If you ran such a test every couple of weeks, in general, you would have this change in indexing every couple of weeks, which, from my point of view, probably just makes it harder to track the results properly, because when it comes to search, it's a little bit unclear when exactly a page would be reprocessed. So let's say you're changing a page once a week, and we re-crawl and update our index for that page every, let's say, six days. Then depending on when you make those changes and the time frame of when we re-crawl and reprocess that particular page, those effects might be visible in search in a way that makes it really hard to track back what actually happened there. So if you're making these kinds of changes frequently on the same URLs, then it's really hard to determine which particular change resulted in which particular effect when it came to search. If we could change the current setup, what would be the best option? For example, using a parameter to change the content and canonicalizing to the product page without a parameter? What would be a good solution that would allow a significant amount of testing that doesn't hurt our organic visibility? So one approach that I've seen in general for A/B testing is to set up different versions of the page and then to canonicalize, like you mentioned here, so that for indexing, we would tend to focus on that canonical version. That seems like a good approach. The other thing to also keep in mind is that if you're testing different things, sometimes maybe it's also worth seeing how that reacts with regards to SEO as well. Because maybe there are changes that you could make on a page which don't harm your visibility in search, but rather improve it. So that's something where sometimes maybe it doesn't make sense to completely decouple those tests from search. OK, wow, we just have a few minutes left. A whole bunch of questions still remaining. But maybe we can switch over to any questions from you all in the meantime. 
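The parameter-plus-canonical setup described above boils down to one tag on the variant URL. A hypothetical example, where /product?variant=b serves the test content:

```html
<!-- served on https://example.com/product?variant=b (the test variant) -->
<!-- points indexing at the parameter-free product page -->
<link rel="canonical" href="https://example.com/product">
```

With this in place, the variant URL can be shown to a slice of traffic while indexing stays focused on the stable canonical page.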
I have a question, John. OK. OK, so I've got this website showing up in the search with the wrong opening hours. Actually, I thought the reason was due to some pages that were crawled back at Christmas time. They showed the Christmas opening hours. We got Google to re-crawl all of those pages, checked it in Search Console. But still, the wrong opening hours are displayed in the SERP as a sort of a special feature, not as a snippet from the website itself. What data source would these be based on now that everything seems to have been crawled correctly? I don't know. Hard to say. In general, if it's from the page itself, then it might just be a matter of us re-crawling that page and being able to reprocess everything appropriately. Sometimes it's not updated from the first time that we re-crawl a page, especially the derived metadata of a page that sometimes just takes a little bit longer to be reprocessed. That might be one thing. The other thing is that it might also be that we're associating other data with that particular page. So for example, if you have a Google My Business listing and we know that it's associated with this website and you update the hours in the Google My Business listing, then it's sometimes a matter of, well, do we kind of match those hours? Do we also show that on the website? Are these things that should be treated separately? That's sometimes tricky. So that's the other thing I might look at there. If it's really been in place for a longer period of time and you're sure that things have been re-crawled and reprocessed a couple of times, then I'd love an example like that if you could send it to me and I can take a look at that with the team then. OK, I'll do that then, because we've checked everything now. OK, sure. You can just drop it here in the chat and I can pick it up afterwards. It's already there. Yeah, great. Cool. Hello, John. Hi. Yeah. 
OK, so I have a question regarding how long Google's algorithms can remember signals and things from a certain website from the past. So let's say the question is whether these algorithms detect patterns that occur in the span of several years, or whether they in general focus only on the current state of the website or page, minus the time needed for re-crawling or something like that. So let's say we screw something up very badly, but then we fix it afterwards. Is there a possibility that it will still be doomed for the next, I don't know, however many years? So let's say one mistake in the past can trigger a penalty several years later. Is that possible? Usually, we try to focus on the current situation of a website. So if the current setup is correct, then that's something that we would focus on there. There are very few situations where things linger a little bit longer. One might be with regards to links. So if there are external links to this website and you fix some of these, and it's just a matter of us reprocessing all of these things, then sometimes that can take a significant amount of time to be updated. Another one that I've seen from time to time is with regards to geo-targeting. If you significantly change the targeting of your website, then sometimes that just takes a while to be updated. I think for the most part, those are kind of the ones. There are some really weird edge cases sometimes that just stick around for a really long time. But that's not something where I'd say that's by design, and if you do something like this wrong, then you're stuck with it forever. It's usually more a matter of weird systems on our side that are not updating as quickly as they could be updating. So I think especially when it comes to things like technical issues, that's something where if you fix those issues, and if we can re-crawl and reprocess those pages, then that's fixed. That's not something that would linger for longer. 
So would you say it mostly concerns third parties, like links, for example? Things that are not so easily crawlable because they're not on our website? No, no. I think there might be some, I mean, just because we look at the cases that go wrong, I feel like sometimes there are lots of things that are going wrong. But it's probably just a very small subset of pages where I really see things kind of stick around a little bit longer than I'd like to see them stick around. I think another case where I've seen something similar to this happen is when we recognize that a website is really spammy, and they go out of their way and they clean things up completely. Then it's possible that our algorithms might think, well, this is just an affiliate website, we need to be a little bit more careful here, and that this kind of status sticks around for a little bit longer. Or for example, if we recognize that a website is an adult website, and at some point we pick up that information, and if the website itself changes completely or if the same domain name is reused for other purposes, then that's something where it can sometimes take a little bit longer for our algorithms to kind of get used to the current state. And by a little bit longer, that can sometimes be a couple of months. That can also be several years. OK, thank you. John, sorry, I know I've asked a question already. Sure, go for it. Thanks. I've always wondered, if high-value keywords, like a big section of the population, is searching for certain keywords, is there any human part of the algorithm that qualifies these high-value key terms, or is it all just algorithms and bots? It is pretty much all algorithms, all the way. Yeah, I think there are some aspects that might look like there's a little bit more of a human touch to them. 
In particular, when we see that certain types of queries tend to be a lot more spammy, then what can happen is that the web spam team will go through those search results and try to figure out which of these are actually legitimate content and which of this content is more spam. And then it's not so much that they're kind of determining the search results, determining the order, but rather they're looking at those queries and saying, well, this is a really important query, lots of people are searching for it, and we're providing terrible search results because spammers are getting through our systems somehow. Therefore, we need to take manual action on some of those sites. So it's not that we're filtering or sorting things out for those queries, but rather that the web spam team takes the time to look for the normal types of spam within some of these queries. Well, OK. Yeah, got it. OK, I think we're pretty much at time. I'll pause here, but I'll remain online a little bit. So if any of you want to stick around, you're welcome to stick around kind of off the record, I guess. So thank you all for joining in. Thanks for submitting so many questions and for asking so many things. I hope you all have a great weekend. In the meantime, I'll set up the next batch of office hours, probably early next week. So if anything is still missing, you're welcome to drop those in there. Or, of course, drop by the Webmaster Help forums or catch us on Twitter. All right. Thanks a lot, everyone, and I wish you a great time. Until next time. Thanks, John.