All right, welcome, everyone, to today's Webmaster Central Office Hours Hangout. My name is John Mueller. I'm a Webmaster Trends Analyst here at Google in Switzerland. Part of what we do are these Office Hours Hangouts, where webmasters and SEOs and Barry can join in to chat about their websites, search, and interesting topics they've seen along the way. A bunch of questions were submitted on YouTube already, but if any of you want to get started, jump on in.

May I? Go for it.

Hello. I'm working for one of the world's best surgeons in his field. He has many recognized academic publications, countless awards from respected authorities, and countless interviews published by respected newspapers, magazines, and TV channels in the United States. He's actively taking part in many organizations and authoritative institutions, like respected universities in the United States, too. His website was performing on the first page for many related search terms until around the March 2019 update, and then it lost 90% of its traffic from Google the next day, a dramatic decline. I've been working and researching for seven months nonstop now, and we have made no progress in search at all; our performance has kept declining. Meanwhile, we manually and very carefully reviewed all the backlinks we could reach and carefully cleaned up the really spammy ones. Search Console shows no errors, only excluded pages that we are totally fine with, and we are actively maintaining the site according to Search Console. We improved our mobile and desktop scores in Google PageSpeed Insights, and it didn't help us recover. The one problem we were able to identify was that his website was serving pages with exact text copies of his interviews, copied from those news websites; actually, 30% of the site content was like this. Would that be the problem? We removed all of them and requested removal in Search Console, yet it didn't solve the problem in the month since we took that action. The other thing we tried was moving to WordPress, but that accelerated the decline, so after three weeks of experiments we decided to roll back. What would be your opinion about this case?

It's hard to say without knowing the site, so that's the first part that's a bit tricky. In general, with these kinds of things, one thing I would do is make sure to have a forum thread where you have the details, so that people can look at that and, when you need to get more eyes on it, you can point them at one place where you have all of the information. So that's what I would try to do there. In general, when you see changes from one day to the next, usually that's not a technical issue, because technical issues would follow the normal path of indexing, where you'd have a large amount of changes within a couple of days and the rest slowly moving forward. Whereas if you're saying it's really from one day to the next, then that sounds more like our algorithms are, for whatever reason, maybe not as happy with the site as we were in the past, or we think maybe it wasn't as relevant as we initially thought. So assuming that's what you've been able to figure out, that it really happened from one day to the next and that it's across the board, all of the pages, then I would focus less on technical issues and more on overall quality and things like that.

And would there be a problem with the medical content quality in terms of the citations or references?
Should we be doing academic-style citations on our medical advice content?

I don't see an issue either way. I think that's essentially up to you. But to me, I would try to take a step back and think about how the website is perceived overall, and try to find maybe some bigger-picture issues that are worth focusing on. I don't think it's something like you have the wrong HTML for a citation; that's not something I would consider to be a problem.

OK, one last question. He really has this recognition in real life, but not on Google. And we really have been researching as a team, reading all kinds of articles from respected sources and also Google's own pages. Would it be possible, if I sent you the case, for you to have a look at it for us?

I'm happy to take a look. You can post a link in the chat, and I can pick that up afterwards as well. But I can't guarantee that there'll be anything specific that I'd be able to say. Yeah, sure. Thank you very much. Sure.

All right. Any other questions before we get started with the submitted ones?

John, could I jump in and ask some infinite-scroll-based questions? All right. Basically, it's an e-commerce site I'm looking at at the moment that's trying to implement infinite scroll. The question that I mainly have is around the idea that Google-selected canonical URLs might be different from the webmaster-declared canonical URLs. The behavior being displayed at the moment is that the first page of the paginated series on category pages is being referenced as the canonical URL. And I guess the question I have is, does Google drop the outbound signals on the subsequent pages when that canonical URL is specified?

I don't see a big issue with us dropping the second pages like that. But it's hard to say exactly what your setup is. It sounds like when you scroll down, it changes the URL? How is the infinite scroll set up?

It's terribly implemented at the moment. There's no reference to the other URLs. What the developers seem to have done is they've stuck a rel prev/next in the head of the document that references subsequent pages, and you guys pay no attention to that. It looks like they're also potentially soft-404ing subsequent pages, because it looks like Google's guessing the next page in the paginated series to index all the content, from what I can see. Of course, I don't have access to Webmaster Tools on that particular site; it's a demo site. So I guess the question that I'm really asking is, if there are five pages within a category, and we're trying to get the maximum SEO value working with some big players, so we want to squeeze every last drop of SEO out of the site that we can, internal links, all that kind of stuff: do we really want Google to index all of these category pages, page 2, page 3, with all the products on there, rather than just dumping in a 20,000-URL XML sitemap? There are no signals there; I don't see any SEO value from that. Is it worth the extra effort to implement, I guess, your solution from 2014? Does it make sense to do the extra effort?

Yeah. So do you have categories set up at the moment, or is it just one giant list that leads to the products?

They will have, and this is another favorite topic of yours, I think, a mega menu, which is not ideal, is it really? But I think it's the best they have. OK.
So I think if they have a category-based setup, where they have one page that lists all products, but they also have those products in individual categories, and the list per category is reasonable, then that should be fine. The important part from our side is that we're able to find all of the product URLs. That's something you could check with a local crawler. If you have something like Screaming Frog or DeepCrawl, one of those tools where you can crawl the website yourself, you can double-check that it's actually possible to reach every one of the product pages. And if we can reach all of those product pages through a category page, then that kind of infinite list of general products that you might also have is less of an issue. But it depends on how you have that set up, how many items you have per page, how the infinite scrolling is set up, and how many categories you split that across. What I would try to do is balance a reasonable hierarchy for your products, so that you have a reasonable number of categories and, within each category, a reasonable number of individual pages that lead to the product pages, so that it's not too flat, but also not just one long listing that's very deep.

So ideally, would you want Google to be able to land on a category page, scroll through five pages, and then pick up the URLs that way, because there's a hierarchy implemented, there are on-page signals, there are internal links going there from the page, there are H1s and title tags? You would want Google to be able to see all of that, and those products listed within that hierarchy, to pass those signals and get maximum SEO benefit?

I wouldn't look at it for maximum SEO benefit, but purely from a technical point of view: is Google even able to find those products? And then, when that baseline of Google being able to find everything is passed, which is usually pretty hard and not always that trivial, you can think about which of these products are important and how you can present them in a way that Googlebot, when it crawls, recognizes that these are important.

Sure. That's great. I'm just thinking about the PageRank algorithm, really, and the flow of internal PageRank.

Yeah. Usually it's not worth thinking about the PageRank algorithm at that level, because especially with these really large websites, the big issue is really: can it even be found? And if it's findable and it's a reasonable structure, not too deep, not too flat, then it should work out.

Do you have a preference on categorizing in terms of the menu structure? I know you're not a fan of mega menus; neither am I. Do you have anything that you could maybe point me to, to have a look at a more optimal setup, perhaps?

I don't think we have anything that I could point at there. What I would do is mostly focus on usability. And if, from a crawling point of view, we can find everything, and usability is OK so that users don't bounce, then that's a good place to be. Great. OK, thanks. Sure.

Yeah, so John. Sure. Hi. Hello. Hi. Yeah, so John, I just wanted to know, being in SEO, would you recommend that SEOs focus on PageRank while optimizing internal linking, or should they not do that now? Because I'm sure that the majority of SEOs think about internal links only from a PageRank perspective.
So what is your suggestion on this?

I don't think that makes sense. I mean, I think it's interesting to look at the different algorithms that Google uses, but I don't think focusing on things like internal PageRank makes a lot of sense. I would look at it more from a holistic point of view: if there's something important on your website, make sure that it comes across as really important to everyone. But I don't think it makes sense to calculate the internal PageRank and try to optimize things around that. One example that we often see is that people try to take their terms-of-service page and hide it from Google's algorithms so that it doesn't collect any PageRank. And usually that doesn't make any sense, because we understand that websites have a structure like this, and there's no need to hide that, to block it from indexing, to use internal nofollow links, anything like that. So it's very easy to get lost in the weeds and do a lot of analysis to try to find the optimal PageRank flow within a website, but most of the time it's wasted effort.

And in this case, one example is that I have seen a lot of people asking whether a link in the XML sitemap is enough, or whether we should add an internal link to that particular page. And most of the time in the forums, people are asking this thinking only that, OK, PageRank may flow, and that this is why everyone recommends adding internal links.

I think you should always have an internal link to a page. If something is important, then make sure it's findable. Not from a PageRank point of view, but just from a general point of view: if you care about the page, then it should be findable for users when they go to your website. OK, thanks. Sure. Hello, John.

Hi. Let me run through some of the submitted questions first, and I'll get back to all of you that are trying to get some live questions in as well, because lots of these things were submitted over time and it's useful to go through them too.

So the first one is: I work for the World Health Organization, and our website is not findable. There's a long rant there too, but it would be useful to know which website you're talking about. So if you can send me the site or post a link, I'm happy to take a look at that.

I've been fooling around with some rank trackers out there. They tell me that, for all of the keywords in their database, my site ranks in position 11 way more often than in any other position. I can reproduce this in multiple services, and anecdotally as well. Looking at this data on a graph, the distribution is not even close to normal, nothing like my competitors' distributions. It looks like some kind of penalty or artificial limit that's happening to my site. Can you think of any penalty or something else that would result in a disproportionate number of the site's results landing in position 11?

No. I'm not aware of anything specific that would cause something like this. My guess is that it's probably something tricky across the different rank trackers. So instead of using external rank-tracking tools, I would look at the search results yourself and try to think about what is actually going on there, because we often show a lot of different elements in the search results, and there are lots of ways to be seen in the search results. Position 11, just having that number alone, I don't think that really tells you a lot.
So from that point of view, I wouldn't focus on the artificial ranking number that you're seeing there, but really try to look at what is actually happening in search.

I know there's been a lot of talk this week, but do you know if the nofollow change is live for crawling and indexing? I know the original date was March 1. Just wondering if that's live now.

So this is something where we made some changes in our search algorithms with regard to how we treat nofollow links. I believe the change was split into two parts: one being able to pass signals through nofollow links, and the other being able to use nofollow links for discovering new URLs. Essentially, on our side, the change is that internal systems are now able to do this. That doesn't mean that internal systems are currently doing it, but at least from a policy point of view they're able to pick this up, and if there are teams within Google that say they would want to use this, then they're open to doing that. So it's possible for them to do it. It's not the case that we're now pumping everything through those links; it really depends on what teams internally are testing and evaluating, and what makes sense for them.

John, do you know if teams are actually doing anything with that, now that they're allowed to?

I don't know. They can, but I don't know. I also think it's not something where you would see a big invisible change in the search results just because they're suddenly doing this. I imagine the changes would be more subtle. A big invisible change that you can see, that would be amazing. I don't know. I mean, it's something where we also try to use it as a signal, so it's not like we will just ignore it completely, but we'll try to figure out where it makes sense and where it doesn't make sense.

So to summarize, the policy change allows anybody in Google to use it for indexing and ranking purposes, but it doesn't mean that you are doing it. And as far as you know, nobody is really doing anything at Google since the change?

I haven't looked into whether or not people are actually doing it, but I don't know of anything specific. OK, thank you.

We implemented Organization markup from schema.org. There's no postal code on the page. Can we add it only for robots? Oops, can we implement it only for robots, like a meta postal-code tag with a content attribute? If not, please list the situations when webmasters can add markup data only for robots.

So structured data should be visible on the page itself; it should be equivalent to what you're showing to users. That's something where I would show it to users. I don't know why, for example, a postal code would not be visible, but essentially it should be visible for users if you want us to pick it up.

If a website, for legal reasons, is not allowed to market its products, or a subset of them, in the US, the most common approach is to block US IPs. This will, however, lead to not being indexed on Google for any market, since blocking US IPs means blocking Googlebot. Also, only allowing Googlebot but not human visitors from the US seems risky because of Google's policy of handling bots and users the same way. So how should we handle a situation like this?

So you're right, it's tricky, because our policy does say you should handle Googlebot the same way as you would handle other users from that same region.
So that's something where, if you're not able to show users in the US that content, you should not show Googlebot that content when it's crawling from the US. That's the baseline policy there. There are obviously subtle differences between US-based content and other countries' content: if you want to block all users in Switzerland and Google crawls from the US, then that's perfectly fine. But if you want to block users in the US and Google crawls from the US, then you will see a change with regard to indexing. We don't have any provision for saying this content is only accessible in individual countries, so that's a tricky situation. Our general recommendation here is to make pages that are accessible to users in the US, which might be simplified pages, more like marketing pages rather than the actual content that you're providing, so that you can allow those pages to be indexed in Google because they're accessible in the US, while the actual content that might not be accessible in the US is hosted somewhere else and appropriately blocked from crawling. I know some people say, well, Googlebot isn't a normal user and I'll just cloak to Googlebot, but like you mentioned, that's generally a risky approach, because we would probably see that as cloaking, and that would be a violation of our Webmaster Guidelines. So I'd recommend not doing it like that.

We're migrating our radio brand websites into one aggregator brand website. What helps Google to understand the individual radio brands under the aggregator brand without losing news article snippets in search and Google Discover traffic after the change, when everything is under the same domain? Does Google have any best practices for these situations? We might publish the same news article for multiple brands if it fits the radio brand's audience.

So in general, this kind of merging of multiple websites into one website is always tricky. That's something to keep in mind: when you're merging and splitting websites, it takes a lot longer for us to process, and the outcome is not something where we can trivially determine what will happen in the end. When you're moving from one domain to another, we can essentially take everything from here and just pass it on there. But if you're merging things, the final version is essentially a different website than the individual versions. So that's a new state, and not something that's easily determinable ahead of time. I would expect to see fluctuations, at least in the midterm, until things settle down. It might be that the final state is very different from the initial state; it might be that the final state is much better than all of these individual ones; it might also be that the final state is a little bit less than the individual ones. So there's no real trick to handling this; you really have to take it page by page and migrate everything to that new, bigger website.

If I can ask, if they have duplicate content because they have different sites under that one aggregator, would you just pick out one article to be the canonical? Or how would you do that?

I think that's generally the best approach. So if you can pick one article and set that as the canonical, and make sure that that's the best one that you want to have indexed, then I think that's a good approach.
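As a rough illustration of setting one article as the canonical across duplicates, a site could emit a rel=canonical tag on each duplicate pointing at the preferred version. Here is a minimal sketch in Python; all of the URLs and the helper are hypothetical, not taken from this discussion:

    # Hypothetical sketch: map duplicate article URLs to the one version that
    # should be treated as canonical, and emit the matching link tag per page.
    CANONICAL_MAP = {
        "/brand-a/news/big-story": "/brand-a/news/big-story",  # preferred version
        "/brand-b/news/big-story": "/brand-a/news/big-story",  # duplicate
        "/brand-c/news/big-story": "/brand-a/news/big-story",  # duplicate
    }

    def canonical_tag(path: str, host: str = "https://www.example.com") -> str:
        """Return the <link rel="canonical"> tag for a given article path."""
        target = CANONICAL_MAP.get(path, path)  # default: self-referential canonical
        return f'<link rel="canonical" href="{host}{target}">'

    if __name__ == "__main__":
        for page in CANONICAL_MAP:
            print(page, "->", canonical_tag(page))

Pages that aren't in the map fall back to a self-referential canonical, which is a common, harmless default.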
In general, we're able to recognize this kind of duplicate content, and we'll try to figure out the canonicalization ourselves. But if the webmaster or the owner has preferences with regard to canonicalization, then I would definitely let us know about that, so that we can do it more in the way that you want it done. OK, thanks.

May I ask another question? Sure. Considering the March 2019 update, if a medical website hadn't been updating its most competitive content for the past year, would that cause a dramatic day-to-day fall right after the March 2019 update?

So I don't know offhand what happened in March 2019; I don't remember everything that happened back then. But usually, if you haven't been updating content for a while, then over time we will see this content as being less relevant. It's not that we'll say, oh, there are 17 pages that haven't been updated, that's too much, we will demote the whole website. That's not the case. OK, thank you very much.

I've seen that, apparently, Google has some problems showing the right spelling of meta titles and descriptions in Persian. It shows the Persian text from the end to the start, and it wouldn't be possible for readers to read that, as it's meaningless.

I'd love to have some examples. So if you're here, or if you can send me some examples separately or send me some screenshots on Twitter, that would be fantastic. It's hard for me to judge what exactly you're seeing there.

The second question is wondering why Google no longer supports pagination with rel next and rel previous. Previously, I've encountered some problems in terms of defining the first page for Google, as it picks my second page instead of the first one. What's the ideal implementation that Google wants from webmasters in this regard? Are internal links enough?

So internal links are definitely what we're looking for; we don't use rel next and rel previous. One of the things that really helps us with regard to pagination is to have a clear hierarchy in the paginated pages. So instead of linking from page one to all of the pages in your set, link from page one to page two, and from page two to page three, and so on forward through the paginated set. Then it's a lot easier for us to understand how each page is embedded within your website, because it's linked from here and it links back to the previous page and on to the next page, and we can determine this next-and-previous flow ourselves without anyone needing to define it specifically. So that's the recommendation I'd have there. I've sometimes seen cases where we show maybe page three or page four in the search results when someone is searching for something generically, and usually that comes back to, well, you're linking to page four from the beginning already, so it's hard for us to determine why page one would be much more relevant than page four. So I'd try to just give us clear pagination through appropriate links on the pages, and usually we can pick that up.

My question is about thin content. The niche I'm working in is gaming. Recently I built a tool which tells users about any game's specifications. Last month I launched the service, and almost 1,500 games were added to the database, meaning 1,500 new pages were created, one for each game. The Google algorithm found these as thin content, which is true.
But from a user's point of view, these are perfectly fine. They are sort of question-and-answer pages, for example: what are the specs of the Fortnite game? How do I tell Google about such content? Other sections of my website use appropriate-length content where needed.

So in general, I'm not quite sure how you were able to see that Google sees these as thin content; we don't have a thin-content tool in Search Console. My guess is we're just trying to figure out how to rank these pages. And if these are multiple pages that you're creating for individual questions about individual games, then I could see how our algorithms might look at these pages and say, well, we don't know how important this page is overall. So my recommendation would be to combine some of this information that you have and create really strong pages, rather than diluting your content across tons and tons of different pages. It's not so much that you can tell Google this question is really important and you should index this page that only has maybe two sentences on it; rather, you need to show Google that this content is actually really important and relevant because you provide a lot of detail there. So I would go in that direction: combine the self-written content that you have about these individual games together with all of the database-generated content, and create really high-quality and valuable landing pages for the kind of content that you want to provide. I didn't take a look at your website itself, so maybe you already have something in that direction. But that's generally the direction I would go: create fewer pages with better-quality content, rather than lots of pages with small pieces of content.

I have a few questions about breadcrumb markup. I've noticed that Google will drop the final page from the breadcrumb trail displayed in the search results. For example, the breadcrumb trail might be domain.com > folder instead of domain.com > folder > page. My questions are: does Google drop the final page from a breadcrumb trail if the page title is not included in the breadcrumb list that's visible to the users of the page?

I don't think the page title would have anything to do with that in a case like that. Usually it doesn't make sense, at least from my point of view, to show a breadcrumb for the individual page together with the page itself, because that's kind of overlapping.

And the second question is, will an excessive character length cause Google to drop the final page title in the breadcrumb trail from displaying in the search results?

I don't know of any specific character-length limit there, but there's only so much room in the search results, especially on mobile. There is not a ton of room for really long, combined breadcrumb trails, so we do have to cut things off at some point and show either the dot-dot-dot in the middle or a shortened breadcrumb in general. That's something to keep in mind when you're creating these breadcrumbs.

Exactly what user agent do I need to filter our server logs for to check Googlebot crawling our site?

We have a Help Center page on the user agents, so I would check that out, especially for the mobile user agent.
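As a rough sketch of that kind of log filtering (the log format, file name, and regex here are assumptions, not anything Google publishes), you could match the Googlebot user-agent string and then confirm that the requests really come from Google with a reverse DNS lookup, which is the verification step Google documents:

    import re
    import socket

    # Assumes an Apache/Nginx "combined" log format: client IP is the first
    # token on the line and the user agent is the last quoted field.
    LINE_RE = re.compile(r'^(\S+) .*"([^"]*)"$')

    def is_verified_googlebot(ip: str) -> bool:
        """Reverse-DNS the IP, check it resolves to a Google hostname, then forward-confirm it."""
        try:
            host = socket.gethostbyaddr(ip)[0]
            if not host.endswith((".googlebot.com", ".google.com")):
                return False
            return ip in socket.gethostbyname_ex(host)[2]
        except OSError:
            return False

    with open("access.log") as log:
        for line in log:
            match = LINE_RE.match(line.strip())
            if not match:
                continue
            ip, user_agent = match.groups()
            if "Googlebot" in user_agent and is_verified_googlebot(ip):
                print(line.strip())

Matching the user-agent string alone isn't enough, since anyone can fake it; the DNS check is what actually ties a hit to Googlebot.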
The mobile Googlebot user agent looks a lot like a normal mobile phone's, but there are a bunch of different user agents that are worth looking at.

Can a self-referential canonical with noindex confuse Googlebot?

I don't think that would confuse us, but if it has a noindex on it, then we wouldn't index it, and then the canonical link there doesn't really play a role anyway. So I don't see it confusing us; we just wouldn't index that page, because it has a noindex on it.

If page A is noindexed and canonicalized to another page, say page B, will Google consider page A as noindexed or as canonicalized?

If it has a noindex, it's a noindex page. We wouldn't index that particular page. We would potentially index page B, but that's separate from page A in a case like that.

And it will also not render the page, right? So does it mean that if Google finds a noindex, it will not even render the canonical?

That's correct. I mean, the canonical is kind of separate, but if a page has a noindex meta tag on it, then we will not render that page. So if your JavaScript changes the noindex into an indexable page, we would not notice that. Correct. Yeah, got it, got it. Thanks.

Is there still anything like the noodp robots meta tag?

There is no Open Directory Project anymore, at least as far as I know, so there's nothing really to do in either of those cases. In the past, the noodp tag would tell us not to use the Open Directory Project description, but since that's gone, we couldn't use it even if we wanted to. So not much we can do there.

I would like to know if it's bad to repeat the anchor text in my internal links. For example, say I have an article called Solar System. Can I link to it every time with the anchor text Solar System? Or, on the contrary, will Google's algorithms think that it's spam?

We generally won't think that's spam. I think it's normal to have multiple internal links to individual pages within your website; sometimes you have multiple internal links on the same page, and that's completely normal. The thing where it would get spammy is when essentially all of the text on the page looks a lot like keyword stuffing. So if, for example, you're repeating this link hundreds of times on a page, and it's always Solar System, Solar System, Solar System, then at some point our algorithms would look at this page and say, this page overall looks pretty low quality, because there's just so much repetition in here that it looks like keyword stuffing. That might be the case if you're linking to something maybe several hundred times on a page; it's definitely not the case if you're talking about maybe five or six times the same link on a page. So those are the two extremes there. I think, for the most part, websites don't have a problem with this kind of excessive internal linking, because users, when they look at a page like that, also have trouble navigating it. If everything is Solar System, Solar System, then users have trouble finding the useful content there.

So in this case, John, how does Google treat an HTML sitemap then? An HTML sitemap is linked to a lot of pages. Should we not have this page then?

It depends. I would make it for users. An HTML sitemap, from my point of view, is something that helps users find the content that they're looking for; it's not something I would rely on for search engines. So for search engines, I would really make sure the internal navigation works well and that you have a clean sitemap file.
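For the sitemap-file part, here is a minimal sketch of generating a clean XML sitemap in Python; the URLs are placeholders, and a real site would pull them from its own database or CMS:

    # Minimal sketch: write a basic XML sitemap from a list of URLs.
    import xml.etree.ElementTree as ET

    urls = [
        "https://www.example.com/",
        "https://www.example.com/category/widgets/",
        "https://www.example.com/products/blue-widget",
    ]

    urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for page in urls:
        ET.SubElement(ET.SubElement(urlset, "url"), "loc").text = page

    ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)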
Usually you don't need an HTML sitemap file in addition to that. From my point of view, in the cases I've looked at at least, if it's a very large website and you feel you need an HTML sitemap, then that's probably a sign that your normal internal navigation is already bad and you need to fix that. Whereas if it's a smaller website, then we wouldn't need an HTML sitemap anyway. So oftentimes, the decision about whether or not to make an HTML sitemap is tied to the question: am I actually doing the right thing elsewhere on my website? And if you're doing the right thing elsewhere, then you generally don't need an HTML sitemap. OK.

For a website with evergreen science content, should I add image credits, even if the images are public domain, and a bibliography? Will that help with E-A-T?

I think in general, if you use images that have credits associated with them, I would specify that on your pages. It seems like the right thing to do regardless of the kind of website that you have, so I think that makes sense in any case. With regard to a bibliography, I think that also makes sense in any case. If you have links to more detailed information where users can dig in and find out more, then provide that to users. I think that's a good service for users, it shows them that you're serious about your content, and it helps our algorithms figure out that this website might actually know what it's talking about.

If I have two websites for different countries, like a .com and a .no for Norway, and I add the same content on both websites, but I add a canonical tag on the second website pointing to the first website, will the content on my second website still get crawled by Google, or does it count as duplicate content?

So if it's duplicate content, it's duplicate content; I think that last part is kind of obvious. The important thing here is that duplicate content on its own is not a bad thing. It's not that we will demote your website and say it's terrible because there's some duplicate content on it. For us, it just means that we understand this is a duplicate, and when someone searches for this content, we'll try to show the most appropriate version to that user. So it's not a case of saying this is a bad website because there's duplicate content here, but rather, which one of these pages is the right one to show to users? And we'll try to pick the appropriate one. With that in mind, it might be that for a user in Norway who's searching, we'll say, well, this is the local version of the website, and it looks like they're searching for local content, so maybe we'll show that. It might also be that we look at this and say, well, there's a rel canonical telling us the .com is maybe the appropriate canonical, so maybe we'll show that. These kinds of things are sometimes tricky decisions, in the sense that for canonicalization we use a lot of different factors: that includes the rel canonical, it includes redirects, it includes internal and external links; all kinds of things come together. And the clearer we can recognize which one of your pages should be the canonical, the more we'll tend towards that one and use it. From a ranking point of view, it doesn't matter: if you have multiple pages with the same content, we will pick one of them as the canonical and rank it like that.
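One way to make the country relationship explicit for a pair of sites like that (John mentions hreflang for a similar multi-regional setup a bit later) is hreflang annotations, with each version listing every alternate, including itself. A rough sketch with purely hypothetical domains and paths:

    # Hypothetical sketch: hreflang link tags for a .com / .no pair with the
    # same content, so the right country version can be shown to the right users.
    ALTERNATES = {
        "en": "https://www.example.com/page",
        "no": "https://www.example.no/page",
        "x-default": "https://www.example.com/page",
    }

    def hreflang_tags() -> str:
        """Build the <link rel="alternate" hreflang> tags to place in the head of each version."""
        return "\n".join(
            f'<link rel="alternate" hreflang="{lang}" href="{url}">'
            for lang, url in ALTERNATES.items()
        )

    print(hreflang_tags())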
If we shift the canonical to another one, perhaps because you have a redirect or a rel canonical set up, then the ranking will be pretty much exactly the same, just with a different URL shown. So it's usually not something that is critical for you to fix in search; it's more that, if you have a preference, let us know about that preference and we'll try to respect it. It's not that there's a magic trick that you need to do to rank higher with this kind of configuration.

The PageSpeed Insights tool flags an AMP URL as discovered, with the option to reanalyze. In terms of mobile page speed as a ranking factor, which score is used: that of the standard URL or that of the AMP URL?

So for speed, in general, we use both lab tests as well as field data. Field data is what users would actually see in search when they go to these pages, and if users would see the AMP page, then that's something we would use there. So those are the two sides that are involved, which can come together. When you're testing these pages for speed, my recommendation would be to test both of them. Usually AMP pages are very fast, but you can also make slow AMP pages. So I would really double-check both of these versions and make sure that what users would see is actually the speed you'd want them to see.

Is updating old content treated as creating new content by Google? Let's see, this one is hard to understand. I think it goes in the direction of: do I need to keep updating all of my old content? I have hundreds of articles, and I don't have time to keep updating everything every day.

So we don't have any magic algorithm that looks at your content and says you need to have this updated every week. But we do look at your website overall, and if, overall, we think your website is maybe not as relevant as it used to be, then we'll treat it as being less relevant overall. So it's not the case that you need to go through and keep updating everything; but if you're actively creating new content, then maybe your website will continue to be relevant over time. It's not that you have to go through all of your news articles and keep updating them with the newest facts, but really make sure that the overall picture of your website is what you want to have reflected in search.

I have a quick question. My question is, why has Google Webmaster Tools become so rigorous nowadays, considering most of the structured data warnings as errors? Are these important to have on a website, for example breadcrumbs and missing fields? Please advise.

So I wouldn't necessarily say that Search Console or Webmaster Tools is more rigorous than in the past, but structured data does evolve over time. And it can happen that things which were fine at some point with regard to structured data become warnings at some point, maybe even errors at some point. So just because you've implemented structured data once doesn't mean it will always remain valid in just that way. In particular, our search features evolve, and some of these search features rely on structured data. And if the new version of a search feature needs extra information from a website that you don't have in your current structured data setup, then it might be that we say, well, in order for us to display your website in that search feature, you need to provide this information.
It's not the case that we would say your website is bad because it doesn't provide the structured data in the way that matches our requirements at the moment; rather, if you want to take advantage of that search feature, that fancy way of presenting your website in search, then you need to make sure that you're fulfilling the requirements for it. So from a ranking point of view, it's less of an issue; it's really more that there are fancy new ways to show your website in search, and if you want to take part in that, make sure that we can work with your website for it. And we try to minimize the changes in structured data requirements, warnings, and recommendations as much as possible. We actively work together with the product teams to really minimize these kinds of changes, because they are disruptive. If you've implemented something once and then suddenly Google changes what it needs for a search feature, that's frustrating as a webmaster, because maybe you need to do some extra work to get that done, or at the very least update a plugin to reflect those new changes; it's always a hassle. So we try to minimize that as much as possible, and we try to provide as much of a path forward as possible, where maybe something becomes a recommendation and is flagged as a warning for a while, and then after some time we can say, well, we've been telling you for a while now that it would be good to have this set up; maybe now is the time for you to actually make that change. But I think it's not so much Search Console as just generally the whole ecosystem around structured data that keeps evolving. That makes it exciting, because there's always new stuff happening, but it also means there's sometimes a lot of work involved in keeping up.

OK, we just have a couple of minutes left, so maybe we should switch over to last questions from you all.

Hi, John. Hi. I have a question, and I'm not exactly a technical SEO expert, so I hope I'll be able to express the situation as accurately as possible. We have a multi-regional website, and this means that basically, for every page that we create, we have eight different regional versions. Now, on top of these regional versions, we also have a global version of each page. From what I understand, this is to make sure that we are distributing authority evenly throughout the website and not just to one particular region. Is this best practice, or are we shooting ourselves in the foot with the global pages?

I think that can work well. What I would try to do in a setup like that is to make sure that the regional versions have a strong reason for existing. So instead of just taking the same content and saying, well, this is a version for Australia, New Zealand, and the UK, really make it clear that this is a localized version of the content, which in some cases might just mean that you have the local addresses and phone numbers on there, but at least make it easy for us to recognize when this page is relevant and when we should show it in search. You can also use the hreflang tags between the pages to tell us what the connection is between those pages. But in general, that's a very common setup, so I wouldn't necessarily see it as something that you'd want to avoid. OK, awesome. Thank you.

Hi, John. Quick question for you. I'm using Dialogflow and working on FAQ pages with the schema markup done properly.
But when I plug it into Dialogflow, they have a beta for knowledge, and it doesn't read the schema markup properly; it wants it to be in plain text. Am I doing it wrong?

I don't know. I haven't tried anything with Dialogflow. I would check with, I think that's handled by Firebase, right? Mm-hmm, correct. I would check with the Firebase support folks on that. OK. Yeah, it seems like contradictory requirements: you want it in schema, but then you don't. Yeah, I think, for the most part, these are probably different use cases, where we want it in schema for search, but maybe Dialogflow needs to process it differently when it tries to understand the pages. OK, thank you. Sure.

Hi, John. Last question from my side, if you allow. OK, so yeah. So John, recently I had two separate pages for one entity. One was a blog page and one was the main course page. The blog page was ranking for the main course page's terms, and the featured snippet was also appearing from the blog page. So I took that particular paragraph from the blog page, pasted it into my course page, and redirected the blog page to the course page. But what I saw is that now the featured snippet is gone. The same content is appearing on the other page, but the featured snippet is gone. In this case, how should I analyze this? Was it really the content, or were other factors playing a role? Because we are using the same content.

Yeah, I don't know; it's hard to say. The featured snippet is an algorithmic, organic search element from our point of view, and it uses a lot of different signals to understand when to show something as a featured snippet like that. It might be that if you just take this piece of text and put it on another page, and that page essentially has a lot of other content on it, we might think, well, maybe this piece of text is not a relevant representation of that page itself. So just taking a piece of text, copying it somewhere else, and hoping that the new page will rank in the same way, I don't think that generally works, and for featured snippets, in most cases, it probably doesn't work that well either.

And all the page signals are passed with the redirect?

Essentially, with a redirect you're telling us that the new page replaces the old one, so the signals that we have, we can pass along. That doesn't mean it'll rank the same way, because if the content is different and you're just redirecting to that different content, if the layout is different, if the navigation is different, all of those things can make it differently relevant for individual queries.

Yeah, but it was a very similar page, not a big change.

I don't know, offhand, what happened. I would try things out and see what works well. Especially if you're looking at individual pages, that's something where you can experiment a little bit and see what works well. Yeah, cool. Thanks. Cool.

All right, maybe one last question from anyone.

Hi, John. Can I ask a quick question, please? Sure. You talked before about noindex, and obviously a noindex in a meta tag at the top of the page should stop Google from indexing a page. But we're seeing some kind of strange behavior where we have certain pages that are marked noindex, but we're still seeing them in Search Console showing up as indexed; it actually says indexed, but excluded by robots. I can send you some links if you want.
But it seems like the old scenario where a page might have been indexed, then it was blocked by robots.txt, then a noindex was added, and of course Google would never have crawled it to see that the noindex was there. But it's not that. These are pages that have always been noindexed, and when you go into the inspection tool, it says that they are marked to be indexed, and there are no typos or code issues or anything on the page; they're all fine, by the way. Just some very strange behavior. I'm not sure if you're aware of it, or if there are any situations where a noindex page would be indexed by Google. It's to do with faceted navigation, so we don't really want it to be seen as kind of duplicate content, so I'd like it to stop appearing there.

Yeah. I don't know. If it has a noindex and we recognize the noindex, we should treat it as a noindex. We don't use it as a signal where we'd say, well, maybe we'll index it anyway; it's a pretty clear sign. But I'm happy to take a look if you can send me some examples.

Yeah, that's how I understood noindex, but I just wasn't sure. I was thinking I was going a bit crazy, but I'll pop your name in and I'll stick it in there. Thanks a lot, Matt. Sounds good. Cool.

All right. So let's take a break here. If you have more questions, feel free to drop them in for the Friday office hours hangout, and maybe we'll get to them there. Or, of course, you're always welcome to jump into the Webmaster Help Forum, where there are lots of experts that are able to help out with a lot of these more common situations. All right. So with that, let's take a break here, and I hope to see you all in one of the future hangouts. Bye, everyone. Thank you. Bye. Thanks, John.